ID,url,label,task_annotation,method_annotation,org_annotation,goal1_raw,goal2_raw,goal3_raw,title_abstract_clean,title,abstract,title_clean,abstract_clean,acknowledgments_clean,text,year
zhu-etal-2021-neural,https://aclanthology.org/2021.acl-long.339,0,,,,,,,"Neural Stylistic Response Generation with Disentangled Latent Variables. Generating open-domain conversational responses in the desired style usually suffers from the lack of parallel data in the style. Meanwhile, using monolingual stylistic data to increase style intensity often leads to the expense of decreasing content relevance. In this paper, we propose to disentangle the content and style in latent space by diluting sentence-level information in style representations. Combining the desired style representation and a response content representation will then obtain a stylistic response. Our approach achieves a higher BERT-based style intensity score and comparable BLEU scores, compared with baselines. Human evaluation results show that our approach significantly improves style intensity and maintains content relevance.",Neural Stylistic Response Generation with Disentangled Latent Variables,"Generating open-domain conversational responses in the desired style usually suffers from the lack of parallel data in the style. Meanwhile, using monolingual stylistic data to increase style intensity often leads to the expense of decreasing content relevance. In this paper, we propose to disentangle the content and style in latent space by diluting sentence-level information in style representations. Combining the desired style representation and a response content representation will then obtain a stylistic response. Our approach achieves a higher BERT-based style intensity score and comparable BLEU scores, compared with baselines. Human evaluation results show that our approach significantly improves style intensity and maintains content relevance.",Neural Stylistic Response Generation with Disentangled Latent Variables,"Generating open-domain conversational responses in the desired style usually suffers from the lack of parallel data in the style. Meanwhile, using monolingual stylistic data to increase style intensity often leads to the expense of decreasing content relevance. In this paper, we propose to disentangle the content and style in latent space by diluting sentence-level information in style representations. Combining the desired style representation and a response content representation will then obtain a stylistic response. Our approach achieves a higher BERT-based style intensity score and comparable BLEU scores, compared with baselines. Human evaluation results show that our approach significantly improves style intensity and maintains content relevance.","The authors would like to thank all the anonymous reviewers for their insightful comments. The authors from HIT are supported by the National Natural Science Foundation of China (No. 62076081, No. 61772153, and No. 61936010) and Science and Technology Innovation 2030 Major Project of China (No. 2020AAA0108605). The author from UCSB is not supported by any of the projects above.","Neural Stylistic Response Generation with Disentangled Latent Variables. Generating open-domain conversational responses in the desired style usually suffers from the lack of parallel data in the style. Meanwhile, using monolingual stylistic data to increase style intensity often leads to the expense of decreasing content relevance. In this paper, we propose to disentangle the content and style in latent space by diluting sentence-level information in style representations. 
Combining the desired style representation and a response content representation will then obtain a stylistic response. Our approach achieves a higher BERT-based style intensity score and comparable BLEU scores, compared with baselines. Human evaluation results show that our approach significantly improves style intensity and maintains content relevance.",2021
cybulska-vossen-2013-semantic,https://aclanthology.org/R13-1021,0,,,,,,,"Semantic Relations between Events and their Time, Locations and Participants for Event Coreference Resolution. In this study, we measure the contribution of different event components and particular semantic relations to the task of event coreference resolution. First we calculate what event times, locations and participants add to event coreference resolution. Secondly, we analyze the contribution by hyponymy and granularity within the participant component. Coreference of events is then calculated from the coreference match scores of each event component. Coreferent action candidates are accordingly filtered based on compatibility of their time, locations, or participants. We report the success rates of our experiments on a corpus annotated with coreferent events.","Semantic Relations between Events and their Time, Locations and Participants for Event Coreference Resolution","In this study, we measure the contribution of different event components and particular semantic relations to the task of event coreference resolution. First we calculate what event times, locations and participants add to event coreference resolution. Secondly, we analyze the contribution by hyponymy and granularity within the participant component. Coreference of events is then calculated from the coreference match scores of each event component. Coreferent action candidates are accordingly filtered based on compatibility of their time, locations, or participants. We report the success rates of our experiments on a corpus annotated with coreferent events.","Semantic Relations between Events and their Time, Locations and Participants for Event Coreference Resolution","In this study, we measure the contribution of different event components and particular semantic relations to the task of event coreference resolution. First we calculate what event times, locations and participants add to event coreference resolution. Secondly, we analyze the contribution by hyponymy and granularity within the participant component. Coreference of events is then calculated from the coreference match scores of each event component. Coreferent action candidates are accordingly filtered based on compatibility of their time, locations, or participants. We report the success rates of our experiments on a corpus annotated with coreferent events.",This study is part of the Semantics of History research project at the VU University Amsterdam and the European FP7 project NewsReader (316404). The authors are grateful to the anonymous reviewers as well as the generous support of the Network Institute of the VU University Amsterdam. All errors are our own.,"Semantic Relations between Events and their Time, Locations and Participants for Event Coreference Resolution. In this study, we measure the contribution of different event components and particular semantic relations to the task of event coreference resolution. First we calculate what event times, locations and participants add to event coreference resolution. Secondly, we analyze the contribution by hyponymy and granularity within the participant component. Coreference of events is then calculated from the coreference match scores of each event component. Coreferent action candidates are accordingly filtered based on compatibility of their time, locations, or participants. We report the success rates of our experiments on a corpus annotated with coreferent events.",2013
peldszus-2014-towards,https://aclanthology.org/W14-2112,0,,,,,,,"Towards segment-based recognition of argumentation structure in short texts. Despite recent advances in discourse parsing and causality detection, the automatic recognition of argumentation structure of authentic texts is still a very challenging task. To approach this problem, we collected a small corpus of German microtexts in a text generation experiment, resulting in texts that are authentic but of controlled linguistic and rhetoric complexity. We show that trained annotators can determine the argumentation structure on these microtexts reliably. We experiment with different machine learning approaches for automatic argumentation structure recognition on various levels of granularity of the scheme. Given the complex nature of such a discourse understanding tasks, the first results presented here are promising, but invite for further investigation.",Towards segment-based recognition of argumentation structure in short texts,"Despite recent advances in discourse parsing and causality detection, the automatic recognition of argumentation structure of authentic texts is still a very challenging task. To approach this problem, we collected a small corpus of German microtexts in a text generation experiment, resulting in texts that are authentic but of controlled linguistic and rhetoric complexity. We show that trained annotators can determine the argumentation structure on these microtexts reliably. We experiment with different machine learning approaches for automatic argumentation structure recognition on various levels of granularity of the scheme. Given the complex nature of such a discourse understanding tasks, the first results presented here are promising, but invite for further investigation.",Towards segment-based recognition of argumentation structure in short texts,"Despite recent advances in discourse parsing and causality detection, the automatic recognition of argumentation structure of authentic texts is still a very challenging task. To approach this problem, we collected a small corpus of German microtexts in a text generation experiment, resulting in texts that are authentic but of controlled linguistic and rhetoric complexity. We show that trained annotators can determine the argumentation structure on these microtexts reliably. We experiment with different machine learning approaches for automatic argumentation structure recognition on various levels of granularity of the scheme. Given the complex nature of such a discourse understanding tasks, the first results presented here are promising, but invite for further investigation.",Thanks to Manfred Stede and to the anonymous reviewers for their helpful comments. The author was supported by a grant from Cusanuswerk.,"Towards segment-based recognition of argumentation structure in short texts. Despite recent advances in discourse parsing and causality detection, the automatic recognition of argumentation structure of authentic texts is still a very challenging task. To approach this problem, we collected a small corpus of German microtexts in a text generation experiment, resulting in texts that are authentic but of controlled linguistic and rhetoric complexity. We show that trained annotators can determine the argumentation structure on these microtexts reliably. We experiment with different machine learning approaches for automatic argumentation structure recognition on various levels of granularity of the scheme. 
Given the complex nature of such a discourse understanding tasks, the first results presented here are promising, but invite for further investigation.",2014
girju-etal-2007-semeval,https://aclanthology.org/S07-1003,0,,,,,,,"SemEval-2007 Task 04: Classification of Semantic Relations between Nominals. The NLP community has shown a renewed interest in deeper semantic analyses, among them automatic recognition of relations between pairs of words in a text. We present an evaluation task designed to provide a framework for comparing different approaches to classifying semantic relations between nominals in a sentence. This is part of SemEval, the 4 th edition of the semantic evaluation event previously known as SensEval. We define the task, describe the training/test data and their creation, list the participating systems and discuss their results. There were 14 teams who submitted 15 systems.",{S}em{E}val-2007 Task 04: Classification of Semantic Relations between Nominals,"The NLP community has shown a renewed interest in deeper semantic analyses, among them automatic recognition of relations between pairs of words in a text. We present an evaluation task designed to provide a framework for comparing different approaches to classifying semantic relations between nominals in a sentence. This is part of SemEval, the 4 th edition of the semantic evaluation event previously known as SensEval. We define the task, describe the training/test data and their creation, list the participating systems and discuss their results. There were 14 teams who submitted 15 systems.",SemEval-2007 Task 04: Classification of Semantic Relations between Nominals,"The NLP community has shown a renewed interest in deeper semantic analyses, among them automatic recognition of relations between pairs of words in a text. We present an evaluation task designed to provide a framework for comparing different approaches to classifying semantic relations between nominals in a sentence. This is part of SemEval, the 4 th edition of the semantic evaluation event previously known as SensEval. We define the task, describe the training/test data and their creation, list the participating systems and discuss their results. There were 14 teams who submitted 15 systems.","We thank Eneko Agirre, Lluís Màrquez and Richard Wicentowski, the organizers of SemEval 2007, for their guidance and prompt support in all organizational matters. We thank Marti Hearst for valuable advice throughout the task description and debates on semantic relation definitions. We thank the anonymous reviewers for their helpful comments.","SemEval-2007 Task 04: Classification of Semantic Relations between Nominals. The NLP community has shown a renewed interest in deeper semantic analyses, among them automatic recognition of relations between pairs of words in a text. We present an evaluation task designed to provide a framework for comparing different approaches to classifying semantic relations between nominals in a sentence. This is part of SemEval, the 4 th edition of the semantic evaluation event previously known as SensEval. We define the task, describe the training/test data and their creation, list the participating systems and discuss their results. There were 14 teams who submitted 15 systems.",2007
van-der-goot-etal-2021-multilexnorm,https://aclanthology.org/2021.wnut-1.55,0,,,,,,,"MultiLexNorm: A Shared Task on Multilingual Lexical Normalization. Lexical normalization is the task of transforming an utterance into its standardized form. This task is beneficial for downstream analysis, as it provides a way to harmonize (often spontaneous) linguistic variation. Such variation is typical for social media on which information is shared in a multitude of ways, including diverse languages and code-switching. Since the seminal work of Han and Baldwin (2011) a decade ago, lexical normalization has attracted attention in English and multiple other languages. However, there exists a lack of a common benchmark for comparison of systems across languages with a homogeneous data and evaluation setup. The MUL-TILEXNORM shared task sets out to fill this gap. We provide the largest publicly available multilingual lexical normalization benchmark including 12 language variants. We propose a homogenized evaluation setup with both intrinsic and extrinsic evaluation. As extrinsic evaluation, we use dependency parsing and part-ofspeech tagging with adapted evaluation metrics (a-LAS, a-UAS, and a-POS) to account for alignment discrepancies. The shared task hosted at W-NUT 2021 attracted 9 participants and 18 submissions. The results show that neural normalization systems outperform the previous state-of-the-art system by a large margin. Downstream parsing and part-of-speech tagging performance is positively affected but to varying degrees, with improvements of up to 1.72 a-LAS, 0.85 a-UAS, and 1.54 a-POS for the winning system. 1",{M}ulti{L}ex{N}orm: A Shared Task on Multilingual Lexical Normalization,"Lexical normalization is the task of transforming an utterance into its standardized form. This task is beneficial for downstream analysis, as it provides a way to harmonize (often spontaneous) linguistic variation. Such variation is typical for social media on which information is shared in a multitude of ways, including diverse languages and code-switching. Since the seminal work of Han and Baldwin (2011) a decade ago, lexical normalization has attracted attention in English and multiple other languages. However, there exists a lack of a common benchmark for comparison of systems across languages with a homogeneous data and evaluation setup. The MUL-TILEXNORM shared task sets out to fill this gap. We provide the largest publicly available multilingual lexical normalization benchmark including 12 language variants. We propose a homogenized evaluation setup with both intrinsic and extrinsic evaluation. As extrinsic evaluation, we use dependency parsing and part-ofspeech tagging with adapted evaluation metrics (a-LAS, a-UAS, and a-POS) to account for alignment discrepancies. The shared task hosted at W-NUT 2021 attracted 9 participants and 18 submissions. The results show that neural normalization systems outperform the previous state-of-the-art system by a large margin. Downstream parsing and part-of-speech tagging performance is positively affected but to varying degrees, with improvements of up to 1.72 a-LAS, 0.85 a-UAS, and 1.54 a-POS for the winning system. 1",MultiLexNorm: A Shared Task on Multilingual Lexical Normalization,"Lexical normalization is the task of transforming an utterance into its standardized form. This task is beneficial for downstream analysis, as it provides a way to harmonize (often spontaneous) linguistic variation. 
Such variation is typical for social media on which information is shared in a multitude of ways, including diverse languages and code-switching. Since the seminal work of Han and Baldwin (2011) a decade ago, lexical normalization has attracted attention in English and multiple other languages. However, there exists a lack of a common benchmark for comparison of systems across languages with a homogeneous data and evaluation setup. The MUL-TILEXNORM shared task sets out to fill this gap. We provide the largest publicly available multilingual lexical normalization benchmark including 12 language variants. We propose a homogenized evaluation setup with both intrinsic and extrinsic evaluation. As extrinsic evaluation, we use dependency parsing and part-ofspeech tagging with adapted evaluation metrics (a-LAS, a-UAS, and a-POS) to account for alignment discrepancies. The shared task hosted at W-NUT 2021 attracted 9 participants and 18 submissions. The results show that neural normalization systems outperform the previous state-of-the-art system by a large margin. Downstream parsing and part-of-speech tagging performance is positively affected but to varying degrees, with improvements of up to 1.72 a-LAS, 0.85 a-UAS, and 1.54 a-POS for the winning system. 1",B.M. was funded by the French Research Agency via the ANR ParSiTi project (ANR-16-CE33-0021).,"MultiLexNorm: A Shared Task on Multilingual Lexical Normalization. Lexical normalization is the task of transforming an utterance into its standardized form. This task is beneficial for downstream analysis, as it provides a way to harmonize (often spontaneous) linguistic variation. Such variation is typical for social media on which information is shared in a multitude of ways, including diverse languages and code-switching. Since the seminal work of Han and Baldwin (2011) a decade ago, lexical normalization has attracted attention in English and multiple other languages. However, there exists a lack of a common benchmark for comparison of systems across languages with a homogeneous data and evaluation setup. The MUL-TILEXNORM shared task sets out to fill this gap. We provide the largest publicly available multilingual lexical normalization benchmark including 12 language variants. We propose a homogenized evaluation setup with both intrinsic and extrinsic evaluation. As extrinsic evaluation, we use dependency parsing and part-ofspeech tagging with adapted evaluation metrics (a-LAS, a-UAS, and a-POS) to account for alignment discrepancies. The shared task hosted at W-NUT 2021 attracted 9 participants and 18 submissions. The results show that neural normalization systems outperform the previous state-of-the-art system by a large margin. Downstream parsing and part-of-speech tagging performance is positively affected but to varying degrees, with improvements of up to 1.72 a-LAS, 0.85 a-UAS, and 1.54 a-POS for the winning system. 1",2021
einolghozati-etal-2021-el,https://aclanthology.org/2021.eacl-main.87,0,,,,,,,"El Volumen Louder Por Favor: Code-switching in Task-oriented Semantic Parsing. Being able to parse code-switched (CS) utterances, such as Spanish+English or Hindi+English, is essential to democratize task-oriented semantic parsing systems for certain locales. In this work, we focus on Spanglish (Spanish+English) and release a dataset, CSTOP, containing 5800 CS utterances alongside their semantic parses. We examine the CS generalizability of various Cross-lingual (XL) models and exhibit the advantage of pre-trained XL language models when data for only one language is present. As such, we focus on improving the pre-trained models for the case when only English corpus alongside either zero or a few CS training instances are available. We propose two data augmentation methods for the zero-shot and the few-shot settings: fine-tune using translate-and-align and augment using a generation model followed by match-and-filter. Combining the few-shot setting with the above improvements decreases the initial 30-point accuracy gap between the zero-shot and the full-data settings by two thirds.",El Volumen Louder Por Favor: Code-switching in Task-oriented Semantic Parsing,"Being able to parse code-switched (CS) utterances, such as Spanish+English or Hindi+English, is essential to democratize task-oriented semantic parsing systems for certain locales. In this work, we focus on Spanglish (Spanish+English) and release a dataset, CSTOP, containing 5800 CS utterances alongside their semantic parses. We examine the CS generalizability of various Cross-lingual (XL) models and exhibit the advantage of pre-trained XL language models when data for only one language is present. As such, we focus on improving the pre-trained models for the case when only English corpus alongside either zero or a few CS training instances are available. We propose two data augmentation methods for the zero-shot and the few-shot settings: fine-tune using translate-and-align and augment using a generation model followed by match-and-filter. Combining the few-shot setting with the above improvements decreases the initial 30-point accuracy gap between the zero-shot and the full-data settings by two thirds.",El Volumen Louder Por Favor: Code-switching in Task-oriented Semantic Parsing,"Being able to parse code-switched (CS) utterances, such as Spanish+English or Hindi+English, is essential to democratize task-oriented semantic parsing systems for certain locales. In this work, we focus on Spanglish (Spanish+English) and release a dataset, CSTOP, containing 5800 CS utterances alongside their semantic parses. We examine the CS generalizability of various Cross-lingual (XL) models and exhibit the advantage of pre-trained XL language models when data for only one language is present. As such, we focus on improving the pre-trained models for the case when only English corpus alongside either zero or a few CS training instances are available. We propose two data augmentation methods for the zero-shot and the few-shot settings: fine-tune using translate-and-align and augment using a generation model followed by match-and-filter. Combining the few-shot setting with the above improvements decreases the initial 30-point accuracy gap between the zero-shot and the full-data settings by two thirds.",,"El Volumen Louder Por Favor: Code-switching in Task-oriented Semantic Parsing. 
Being able to parse code-switched (CS) utterances, such as Spanish+English or Hindi+English, is essential to democratize task-oriented semantic parsing systems for certain locales. In this work, we focus on Spanglish (Spanish+English) and release a dataset, CSTOP, containing 5800 CS utterances alongside their semantic parses. We examine the CS generalizability of various Cross-lingual (XL) models and exhibit the advantage of pre-trained XL language models when data for only one language is present. As such, we focus on improving the pre-trained models for the case when only English corpus alongside either zero or a few CS training instances are available. We propose two data augmentation methods for the zero-shot and the few-shot settings: fine-tune using translate-and-align and augment using a generation model followed by match-and-filter. Combining the few-shot setting with the above improvements decreases the initial 30-point accuracy gap between the zero-shot and the full-data settings by two thirds.",2021
taji-etal-2017-universal,https://aclanthology.org/W17-1320,0,,,,,,,"Universal Dependencies for Arabic. We describe the process of creating NUDAR, a Universal Dependency treebank for Arabic. We present the conversion from the Penn Arabic Treebank to the Universal Dependency syntactic representation through an intermediate dependency representation. We discuss the challenges faced in the conversion of the trees, the decisions we made to solve them, and the validation of our conversion. We also present initial parsing results on NUDAR.",{U}niversal {D}ependencies for {A}rabic,"We describe the process of creating NUDAR, a Universal Dependency treebank for Arabic. We present the conversion from the Penn Arabic Treebank to the Universal Dependency syntactic representation through an intermediate dependency representation. We discuss the challenges faced in the conversion of the trees, the decisions we made to solve them, and the validation of our conversion. We also present initial parsing results on NUDAR.",Universal Dependencies for Arabic,"We describe the process of creating NUDAR, a Universal Dependency treebank for Arabic. We present the conversion from the Penn Arabic Treebank to the Universal Dependency syntactic representation through an intermediate dependency representation. We discuss the challenges faced in the conversion of the trees, the decisions we made to solve them, and the validation of our conversion. We also present initial parsing results on NUDAR.",The work done by the third author was supported by the grant 15-10472S of the Czech Science Foundation.,"Universal Dependencies for Arabic. We describe the process of creating NUDAR, a Universal Dependency treebank for Arabic. We present the conversion from the Penn Arabic Treebank to the Universal Dependency syntactic representation through an intermediate dependency representation. We discuss the challenges faced in the conversion of the trees, the decisions we made to solve them, and the validation of our conversion. We also present initial parsing results on NUDAR.",2017
habash-2012-mt,https://aclanthology.org/2012.amta-tutorials.3,0,,,,,,,"MT and Arabic Language Issues. The artistic pieces carry the personal characteristics of the artists. • … and they produce a ranked list of translations in the target language • Popular decoders: Moses (Koehn et al., 2007 ), cdec (Dyer et al., 2010 ), Joshua (Li et al., 2009 , Portage (Sadat et al, 2005) and others.
• BLEU (Papineni et al, 2001) -BiLingual Evaluation Understudy -Modified n-gram precision with length penalty -Quick, inexpensive and language independent -Bias against synonyms and inflectional variations -Most commonly used MT metric -Official metric of the NIST Open MT Evaluation ",{MT} and {A}rabic Language Issues,"The artistic pieces carry the personal characteristics of the artists. • … and they produce a ranked list of translations in the target language • Popular decoders: Moses (Koehn et al., 2007 ), cdec (Dyer et al., 2010 ), Joshua (Li et al., 2009 , Portage (Sadat et al, 2005) and others.
• BLEU (Papineni et al, 2001) -BiLingual Evaluation Understudy -Modified n-gram precision with length penalty -Quick, inexpensive and language independent -Bias against synonyms and inflectional variations -Most commonly used MT metric -Official metric of the NIST Open MT Evaluation ",MT and Arabic Language Issues,"The artistic pieces carry the personal characteristics of the artists. • … and they produce a ranked list of translations in the target language • Popular decoders: Moses (Koehn et al., 2007 ), cdec (Dyer et al., 2010 ), Joshua (Li et al., 2009 , Portage (Sadat et al, 2005) and others.
• BLEU (Papineni et al, 2001) -BiLingual Evaluation Understudy -Modified n-gram precision with length penalty -Quick, inexpensive and language independent -Bias against synonyms and inflectional variations -Most commonly used MT metric -Official metric of the NIST Open MT Evaluation ",,"MT and Arabic Language Issues. The artistic pieces carry the personal characteristics of the artists. • … and they produce a ranked list of translations in the target language • Popular decoders: Moses (Koehn et al., 2007 ), cdec (Dyer et al., 2010 ), Joshua (Li et al., 2009 , Portage (Sadat et al, 2005) and others.
• BLEU (Papineni et al, 2001) -BiLingual Evaluation Understudy -Modified n-gram precision with length penalty -Quick, inexpensive and language independent -Bias against synonyms and inflectional variations -Most commonly used MT metric -Official metric of the NIST Open MT Evaluation ",2012
arthern-1978-machine,https://aclanthology.org/1978.tc-1.5,0,,,,,,,"Machine translation and computerised terminology systems - a translator's viewpoint. Whether these criticisms were valid or not, machine translation development in the States was cut back immediately, translators heaved a sigh of relief, and machine translation researchers went underground. As we have already heard this morning however, they are now coming out into the open again and translators are asking the same question once more. The short answer is that no translator working now is going to lose his or her job in the next five years because of machine translation, and probably never will. Machine translation systems which are now operating are either limited in their scope, such as the Canadian ""METEO"" system which translates weather forecasts from English into French, or the CULT system which we are to hear about this afternoon, or cannot provide translations of generally acceptable quality without extensive revision, or ""post-editing"". In addition on, machine translation systems are expensive to develop and can only pay their way by translating large amounts of material. Another bar to using machine translation in smallscale operations is the variety of work, and therefore the variety of terminology involved. If a word is not in the machine's dictionary it just won't be translated, and if a translator has to spend time looking up terms and inserting them in a translation full of gaps, any economic benefit of machine translation will be lost. Consequently, as things stand at present most freelance translators and staff translators in small firms are unlikely to come into direct contact with machine translation, or to suffer from competition from machine translation. Competition would only come from the possible use of machine translation by large commercial agencies. It would be felt first either in very general areas, or in very specialized areas, with a clearly delimited vocabulary and standardized phraseology-in both cases, perhaps, in order to have a quick cheap translation to get the gist of a text, or to decide whether to have it translated by a translator. A final thought in this connection is that both freelances and small firms might conceivably buy raw machine translation from a large agency and post-edit it themselves. This would constitute a particular form of ""interactive"" machine translation, and would only be worth attempting if the time taken in post-editing to an acceptable standard was less than the time required to translate the text from scratch. While the size and complexity of machine translation operations mean that freelances and translators in small firms are unlikely to become directly involved with it, some MACHINE TRANSLATION AND COMPUTERIZED TERM1NOLOGY SYSTEMS 79 translators and revisers in the Commission of the European Communities have already done so. Some comments on ""Systran""",Machine translation and computerised terminology systems - a translator{'}s viewpoint,"Whether these criticisms were valid or not, machine translation development in the States was cut back immediately, translators heaved a sigh of relief, and machine translation researchers went underground. As we have already heard this morning however, they are now coming out into the open again and translators are asking the same question once more. The short answer is that no translator working now is going to lose his or her job in the next five years because of machine translation, and probably never will. 
Machine translation systems which are now operating are either limited in their scope, such as the Canadian ""METEO"" system which translates weather forecasts from English into French, or the CULT system which we are to hear about this afternoon, or cannot provide translations of generally acceptable quality without extensive revision, or ""post-editing"". In addition on, machine translation systems are expensive to develop and can only pay their way by translating large amounts of material. Another bar to using machine translation in smallscale operations is the variety of work, and therefore the variety of terminology involved. If a word is not in the machine's dictionary it just won't be translated, and if a translator has to spend time looking up terms and inserting them in a translation full of gaps, any economic benefit of machine translation will be lost. Consequently, as things stand at present most freelance translators and staff translators in small firms are unlikely to come into direct contact with machine translation, or to suffer from competition from machine translation. Competition would only come from the possible use of machine translation by large commercial agencies. It would be felt first either in very general areas, or in very specialized areas, with a clearly delimited vocabulary and standardized phraseology-in both cases, perhaps, in order to have a quick cheap translation to get the gist of a text, or to decide whether to have it translated by a translator. A final thought in this connection is that both freelances and small firms might conceivably buy raw machine translation from a large agency and post-edit it themselves. This would constitute a particular form of ""interactive"" machine translation, and would only be worth attempting if the time taken in post-editing to an acceptable standard was less than the time required to translate the text from scratch. While the size and complexity of machine translation operations mean that freelances and translators in small firms are unlikely to become directly involved with it, some MACHINE TRANSLATION AND COMPUTERIZED TERM1NOLOGY SYSTEMS 79 translators and revisers in the Commission of the European Communities have already done so. Some comments on ""Systran""",Machine translation and computerised terminology systems - a translator's viewpoint,"Whether these criticisms were valid or not, machine translation development in the States was cut back immediately, translators heaved a sigh of relief, and machine translation researchers went underground. As we have already heard this morning however, they are now coming out into the open again and translators are asking the same question once more. The short answer is that no translator working now is going to lose his or her job in the next five years because of machine translation, and probably never will. Machine translation systems which are now operating are either limited in their scope, such as the Canadian ""METEO"" system which translates weather forecasts from English into French, or the CULT system which we are to hear about this afternoon, or cannot provide translations of generally acceptable quality without extensive revision, or ""post-editing"". In addition on, machine translation systems are expensive to develop and can only pay their way by translating large amounts of material. Another bar to using machine translation in smallscale operations is the variety of work, and therefore the variety of terminology involved. 
If a word is not in the machine's dictionary it just won't be translated, and if a translator has to spend time looking up terms and inserting them in a translation full of gaps, any economic benefit of machine translation will be lost. Consequently, as things stand at present most freelance translators and staff translators in small firms are unlikely to come into direct contact with machine translation, or to suffer from competition from machine translation. Competition would only come from the possible use of machine translation by large commercial agencies. It would be felt first either in very general areas, or in very specialized areas, with a clearly delimited vocabulary and standardized phraseology-in both cases, perhaps, in order to have a quick cheap translation to get the gist of a text, or to decide whether to have it translated by a translator. A final thought in this connection is that both freelances and small firms might conceivably buy raw machine translation from a large agency and post-edit it themselves. This would constitute a particular form of ""interactive"" machine translation, and would only be worth attempting if the time taken in post-editing to an acceptable standard was less than the time required to translate the text from scratch. While the size and complexity of machine translation operations mean that freelances and translators in small firms are unlikely to become directly involved with it, some MACHINE TRANSLATION AND COMPUTERIZED TERM1NOLOGY SYSTEMS 79 translators and revisers in the Commission of the European Communities have already done so. Some comments on ""Systran""",,"Machine translation and computerised terminology systems - a translator's viewpoint. Whether these criticisms were valid or not, machine translation development in the States was cut back immediately, translators heaved a sigh of relief, and machine translation researchers went underground. As we have already heard this morning however, they are now coming out into the open again and translators are asking the same question once more. The short answer is that no translator working now is going to lose his or her job in the next five years because of machine translation, and probably never will. Machine translation systems which are now operating are either limited in their scope, such as the Canadian ""METEO"" system which translates weather forecasts from English into French, or the CULT system which we are to hear about this afternoon, or cannot provide translations of generally acceptable quality without extensive revision, or ""post-editing"". In addition on, machine translation systems are expensive to develop and can only pay their way by translating large amounts of material. Another bar to using machine translation in smallscale operations is the variety of work, and therefore the variety of terminology involved. If a word is not in the machine's dictionary it just won't be translated, and if a translator has to spend time looking up terms and inserting them in a translation full of gaps, any economic benefit of machine translation will be lost. Consequently, as things stand at present most freelance translators and staff translators in small firms are unlikely to come into direct contact with machine translation, or to suffer from competition from machine translation. Competition would only come from the possible use of machine translation by large commercial agencies. 
It would be felt first either in very general areas, or in very specialized areas, with a clearly delimited vocabulary and standardized phraseology-in both cases, perhaps, in order to have a quick cheap translation to get the gist of a text, or to decide whether to have it translated by a translator. A final thought in this connection is that both freelances and small firms might conceivably buy raw machine translation from a large agency and post-edit it themselves. This would constitute a particular form of ""interactive"" machine translation, and would only be worth attempting if the time taken in post-editing to an acceptable standard was less than the time required to translate the text from scratch. While the size and complexity of machine translation operations mean that freelances and translators in small firms are unlikely to become directly involved with it, some MACHINE TRANSLATION AND COMPUTERIZED TERM1NOLOGY SYSTEMS 79 translators and revisers in the Commission of the European Communities have already done so. Some comments on ""Systran""",1978
tiedemann-thottingal-2020-opus,https://aclanthology.org/2020.eamt-1.61,0,,,,,,,"OPUS-MT -- Building open translation services for the World. Equality among people requires, among other things, the ability to access information in the same way as others independent of the linguistic background of the individual user. Achieving this goal becomes an even more important challenge in a globalized world with digital channels and information flows being the most decisive factor in our integration in modern societies. Language barriers can lead to severe disadvantages and discrimination not to mention conflicts caused by simple misunderstandings based on broken communication. Linguistic discrimination leads to frustration, isolation and racism and the lack of technological language support may also cause what is known as the digital language death (Kornai, 2013) .",{OPUS}-{MT} {--} Building open translation services for the World,"Equality among people requires, among other things, the ability to access information in the same way as others independent of the linguistic background of the individual user. Achieving this goal becomes an even more important challenge in a globalized world with digital channels and information flows being the most decisive factor in our integration in modern societies. Language barriers can lead to severe disadvantages and discrimination not to mention conflicts caused by simple misunderstandings based on broken communication. Linguistic discrimination leads to frustration, isolation and racism and the lack of technological language support may also cause what is known as the digital language death (Kornai, 2013) .",OPUS-MT -- Building open translation services for the World,"Equality among people requires, among other things, the ability to access information in the same way as others independent of the linguistic background of the individual user. Achieving this goal becomes an even more important challenge in a globalized world with digital channels and information flows being the most decisive factor in our integration in modern societies. Language barriers can lead to severe disadvantages and discrimination not to mention conflicts caused by simple misunderstandings based on broken communication. Linguistic discrimination leads to frustration, isolation and racism and the lack of technological language support may also cause what is known as the digital language death (Kornai, 2013) .",,"OPUS-MT -- Building open translation services for the World. Equality among people requires, among other things, the ability to access information in the same way as others independent of the linguistic background of the individual user. Achieving this goal becomes an even more important challenge in a globalized world with digital channels and information flows being the most decisive factor in our integration in modern societies. Language barriers can lead to severe disadvantages and discrimination not to mention conflicts caused by simple misunderstandings based on broken communication. Linguistic discrimination leads to frustration, isolation and racism and the lack of technological language support may also cause what is known as the digital language death (Kornai, 2013) .",2020
grabski-etal-2012-controle,https://aclanthology.org/F12-1037,0,,,,,,,"Contr\^ole pr\'edictif et codage du but des actions oro-faciales (Predictice control and coding of orofacial actions) [in French]. Predictice control and coding of orofacial actions Predictice control and coding of orofacial actions Predictice control and coding of orofacial actions Predictice control and coding of orofacial actions Recent studies provide evidence for action goal coding of manual actions in premotor and posterior parietal cortices. To further extend these results, we used a repetition suppression paradigm while measuring neural activity with functional magnetic resonance imaging during repeated orofocial movements (lip protrusion, jaw lowering and tongue retraction movements). In the motor domain, this adaptation paradigm refers to decreased activity in specific neural populations due to repeated motor acts and has been proposed to reflect sensorimotor learning and reduced prediction errors by means of forward motor-to-sensory predictive processes. In the present study, orofacial movements activated a set of largely overlapping, common brain areas forming a core neural network classically involved in orofacial motor control. Crucially, suppressed neural responses during repeated orofacial actions were specifically observed in the left hemisphere, within the intraparietal sulcus and adjacent inferior parietal lobule, the superior parietal lobule and the ventral premotor cortex. These results provide evidence for action goal coding and forward motor-tosomatosensory predictive control of intransitive and silent orofacial actions in this frontoparietal circuit. (Rizzolatti et al., 1988; Fogassi et al., 2005; Bonnini et al., 2011) . Chez l'homme, la méthode d'imagerie par résonance magnétique fonctionnelle (IRMf) a été récemment utilisée conjointement à un paradigme d'adaptation afin de dissocier les substrats neuronaux liés aux différents niveaux de représentation des actions manuelles. Ce paradigme IRMf d'adaptation s'appuie sur un effet de répétition suppression (RS) consistant en une réduction du signal BOLD (pour blood oxygen level-dependent) de régions cérébrales spécifiquement reliées à différents niveaux de traitements d'une action perçue ou produite, lors de la présentation de stimuli ou de l'exécution d'un acte moteur répété (Grill-Spector & Malach, 2001; Grill-Spector et al., 2006) . En accord avec les études sur les primates nonhumains, cette approche a révélé que les actions manuelles répétées avec un but similaire induisent un effet RS dans le sulcus intrapariétal et la partie adjacente dorsale du lobule pariétal inférieur ainsi que dans le gyrus frontal inférieur et le cortex prémoteur ventral adjacent (Dinstein et al., 2007; Hamilton & Grafton, 2009; Kilner et al., 2009) .
Bien que discuté en termes de codage du but des actions, une interprétation convergente de l'effet RS dans ces aires pariétales et prémotrices est basée sur l'existence de processus prédictifs sensorimoteurs. Ces processus permettraient en effet de comparer les conséquences sensorielles d'une action réalisée avec les informations exogènes effectivement perçues et, de là, d'estimer de possibles erreurs en vue de corriger en ligne l'acte moteur (Wolpert, Ghahramani & Jordan, 1995; Kawato, 1999 ; Friston, 2011) . Dans ce cadre et relativement aux études IRMf précédemment citées, il est possible que la répétition d'actes moteurs manuels impliquant un même but ait entrainé un apprentissage sensorimoteur graduel et des mises à jour des représentations motrices liées au codage du but de l'action dans les aires pariétales et frontales inférieures, avec des erreurs de prédiction réduites reflétées par une diminution du signal BOLD.",Contr{\^o}le pr{\'e}dictif et codage du but des actions oro-faciales (Predictice control and coding of orofacial actions) [in {F}rench],"Predictice control and coding of orofacial actions Predictice control and coding of orofacial actions Predictice control and coding of orofacial actions Predictice control and coding of orofacial actions Recent studies provide evidence for action goal coding of manual actions in premotor and posterior parietal cortices. To further extend these results, we used a repetition suppression paradigm while measuring neural activity with functional magnetic resonance imaging during repeated orofocial movements (lip protrusion, jaw lowering and tongue retraction movements). In the motor domain, this adaptation paradigm refers to decreased activity in specific neural populations due to repeated motor acts and has been proposed to reflect sensorimotor learning and reduced prediction errors by means of forward motor-to-sensory predictive processes. In the present study, orofacial movements activated a set of largely overlapping, common brain areas forming a core neural network classically involved in orofacial motor control. Crucially, suppressed neural responses during repeated orofacial actions were specifically observed in the left hemisphere, within the intraparietal sulcus and adjacent inferior parietal lobule, the superior parietal lobule and the ventral premotor cortex. These results provide evidence for action goal coding and forward motor-tosomatosensory predictive control of intransitive and silent orofacial actions in this frontoparietal circuit. (Rizzolatti et al., 1988; Fogassi et al., 2005; Bonnini et al., 2011) . Chez l'homme, la méthode d'imagerie par résonance magnétique fonctionnelle (IRMf) a été récemment utilisée conjointement à un paradigme d'adaptation afin de dissocier les substrats neuronaux liés aux différents niveaux de représentation des actions manuelles. Ce paradigme IRMf d'adaptation s'appuie sur un effet de répétition suppression (RS) consistant en une réduction du signal BOLD (pour blood oxygen level-dependent) de régions cérébrales spécifiquement reliées à différents niveaux de traitements d'une action perçue ou produite, lors de la présentation de stimuli ou de l'exécution d'un acte moteur répété (Grill-Spector & Malach, 2001; Grill-Spector et al., 2006) . 
En accord avec les études sur les primates nonhumains, cette approche a révélé que les actions manuelles répétées avec un but similaire induisent un effet RS dans le sulcus intrapariétal et la partie adjacente dorsale du lobule pariétal inférieur ainsi que dans le gyrus frontal inférieur et le cortex prémoteur ventral adjacent (Dinstein et al., 2007; Hamilton & Grafton, 2009; Kilner et al., 2009) .
Bien que discuté en termes de codage du but des actions, une interprétation convergente de l'effet RS dans ces aires pariétales et prémotrices est basée sur l'existence de processus prédictifs sensorimoteurs. Ces processus permettraient en effet de comparer les conséquences sensorielles d'une action réalisée avec les informations exogènes effectivement perçues et, de là, d'estimer de possibles erreurs en vue de corriger en ligne l'acte moteur (Wolpert, Ghahramani & Jordan, 1995; Kawato, 1999 ; Friston, 2011) . Dans ce cadre et relativement aux études IRMf précédemment citées, il est possible que la répétition d'actes moteurs manuels impliquant un même but ait entrainé un apprentissage sensorimoteur graduel et des mises à jour des représentations motrices liées au codage du but de l'action dans les aires pariétales et frontales inférieures, avec des erreurs de prédiction réduites reflétées par une diminution du signal BOLD.",Contr\^ole pr\'edictif et codage du but des actions oro-faciales (Predictice control and coding of orofacial actions) [in French],"Predictice control and coding of orofacial actions Predictice control and coding of orofacial actions Predictice control and coding of orofacial actions Predictice control and coding of orofacial actions Recent studies provide evidence for action goal coding of manual actions in premotor and posterior parietal cortices. To further extend these results, we used a repetition suppression paradigm while measuring neural activity with functional magnetic resonance imaging during repeated orofocial movements (lip protrusion, jaw lowering and tongue retraction movements). In the motor domain, this adaptation paradigm refers to decreased activity in specific neural populations due to repeated motor acts and has been proposed to reflect sensorimotor learning and reduced prediction errors by means of forward motor-to-sensory predictive processes. In the present study, orofacial movements activated a set of largely overlapping, common brain areas forming a core neural network classically involved in orofacial motor control. Crucially, suppressed neural responses during repeated orofacial actions were specifically observed in the left hemisphere, within the intraparietal sulcus and adjacent inferior parietal lobule, the superior parietal lobule and the ventral premotor cortex. These results provide evidence for action goal coding and forward motor-tosomatosensory predictive control of intransitive and silent orofacial actions in this frontoparietal circuit. (Rizzolatti et al., 1988; Fogassi et al., 2005; Bonnini et al., 2011) . Chez l'homme, la méthode d'imagerie par résonance magnétique fonctionnelle (IRMf) a été récemment utilisée conjointement à un paradigme d'adaptation afin de dissocier les substrats neuronaux liés aux différents niveaux de représentation des actions manuelles. Ce paradigme IRMf d'adaptation s'appuie sur un effet de répétition suppression (RS) consistant en une réduction du signal BOLD (pour blood oxygen level-dependent) de régions cérébrales spécifiquement reliées à différents niveaux de traitements d'une action perçue ou produite, lors de la présentation de stimuli ou de l'exécution d'un acte moteur répété (Grill-Spector & Malach, 2001; Grill-Spector et al., 2006) . 
En accord avec les études sur les primates nonhumains, cette approche a révélé que les actions manuelles répétées avec un but similaire induisent un effet RS dans le sulcus intrapariétal et la partie adjacente dorsale du lobule pariétal inférieur ainsi que dans le gyrus frontal inférieur et le cortex prémoteur ventral adjacent (Dinstein et al., 2007; Hamilton & Grafton, 2009; Kilner et al., 2009) .
Bien que discuté en termes de codage du but des actions, une interprétation convergente de l'effet RS dans ces aires pariétales et prémotrices est basée sur l'existence de processus prédictifs sensorimoteurs. Ces processus permettraient en effet de comparer les conséquences sensorielles d'une action réalisée avec les informations exogènes effectivement perçues et, de là, d'estimer de possibles erreurs en vue de corriger en ligne l'acte moteur (Wolpert, Ghahramani & Jordan, 1995; Kawato, 1999 ; Friston, 2011) . Dans ce cadre et relativement aux études IRMf précédemment citées, il est possible que la répétition d'actes moteurs manuels impliquant un même but ait entrainé un apprentissage sensorimoteur graduel et des mises à jour des représentations motrices liées au codage du but de l'action dans les aires pariétales et frontales inférieures, avec des erreurs de prédiction réduites reflétées par une diminution du signal BOLD.",,"Contr\^ole pr\'edictif et codage du but des actions oro-faciales (Predictice control and coding of orofacial actions) [in French]. Predictice control and coding of orofacial actions Predictice control and coding of orofacial actions Predictice control and coding of orofacial actions Predictice control and coding of orofacial actions Recent studies provide evidence for action goal coding of manual actions in premotor and posterior parietal cortices. To further extend these results, we used a repetition suppression paradigm while measuring neural activity with functional magnetic resonance imaging during repeated orofocial movements (lip protrusion, jaw lowering and tongue retraction movements). In the motor domain, this adaptation paradigm refers to decreased activity in specific neural populations due to repeated motor acts and has been proposed to reflect sensorimotor learning and reduced prediction errors by means of forward motor-to-sensory predictive processes. In the present study, orofacial movements activated a set of largely overlapping, common brain areas forming a core neural network classically involved in orofacial motor control. Crucially, suppressed neural responses during repeated orofacial actions were specifically observed in the left hemisphere, within the intraparietal sulcus and adjacent inferior parietal lobule, the superior parietal lobule and the ventral premotor cortex. These results provide evidence for action goal coding and forward motor-tosomatosensory predictive control of intransitive and silent orofacial actions in this frontoparietal circuit. (Rizzolatti et al., 1988; Fogassi et al., 2005; Bonnini et al., 2011) . Chez l'homme, la méthode d'imagerie par résonance magnétique fonctionnelle (IRMf) a été récemment utilisée conjointement à un paradigme d'adaptation afin de dissocier les substrats neuronaux liés aux différents niveaux de représentation des actions manuelles. Ce paradigme IRMf d'adaptation s'appuie sur un effet de répétition suppression (RS) consistant en une réduction du signal BOLD (pour blood oxygen level-dependent) de régions cérébrales spécifiquement reliées à différents niveaux de traitements d'une action perçue ou produite, lors de la présentation de stimuli ou de l'exécution d'un acte moteur répété (Grill-Spector & Malach, 2001; Grill-Spector et al., 2006) . 
Consistent with studies in non-human primates, this approach has revealed that repeated manual actions with a similar goal induce an RS effect in the intraparietal sulcus and the adjacent dorsal part of the inferior parietal lobule, as well as in the inferior frontal gyrus and the adjacent ventral premotor cortex (Dinstein et al., 2007; Hamilton & Grafton, 2009; Kilner et al., 2009).
Although discussed in terms of action goal coding, a convergent interpretation of the RS effect in these parietal and premotor areas rests on the existence of sensorimotor predictive processes. These processes would indeed make it possible to compare the sensory consequences of a performed action with the exogenous information actually perceived and, from there, to estimate possible errors in order to correct the motor act online (Wolpert, Ghahramani & Jordan, 1995; Kawato, 1999; Friston, 2011). Within this framework, and with respect to the fMRI studies cited above, it is possible that the repetition of manual motor acts involving the same goal led to gradual sensorimotor learning and to updates of the motor representations related to action goal coding in parietal and inferior frontal areas, with reduced prediction errors reflected in a decrease of the BOLD signal.",2012
ostendorff-etal-2020-aspect,https://aclanthology.org/2020.coling-main.545,1,,,,industry_innovation_infrastructure,,,"Aspect-based Document Similarity for Research Papers. Traditional document similarity measures provide a coarse-grained distinction between similar and dissimilar documents. Typically, they do not consider in what aspects two documents are similar. This limits the granularity of applications like recommender systems that rely on document similarity. In this paper, we extend similarity with aspect information by performing a pairwise document classification task. We evaluate our aspect-based document similarity approach for research papers. Paper citations indicate the aspect-based similarity, i. e., the title of a section in which a citation occurs acts as a label for the pair of citing and cited paper. We apply a series of Transformer models such as RoBERTa, ELECTRA, XLNet, and BERT variations and compare them to an LSTM baseline. We perform our experiments on two newly constructed datasets of 172,073 research paper pairs from the ACL Anthology and CORD-19 corpus. According to our results, SciBERT is the best performing system with F1-scores of up to 0.83. A qualitative analysis validates our quantitative results and indicates that aspect-based document similarity indeed leads to more fine-grained recommendations.",Aspect-based Document Similarity for Research Papers,"Traditional document similarity measures provide a coarse-grained distinction between similar and dissimilar documents. Typically, they do not consider in what aspects two documents are similar. This limits the granularity of applications like recommender systems that rely on document similarity. In this paper, we extend similarity with aspect information by performing a pairwise document classification task. We evaluate our aspect-based document similarity approach for research papers. Paper citations indicate the aspect-based similarity, i. e., the title of a section in which a citation occurs acts as a label for the pair of citing and cited paper. We apply a series of Transformer models such as RoBERTa, ELECTRA, XLNet, and BERT variations and compare them to an LSTM baseline. We perform our experiments on two newly constructed datasets of 172,073 research paper pairs from the ACL Anthology and CORD-19 corpus. According to our results, SciBERT is the best performing system with F1-scores of up to 0.83. A qualitative analysis validates our quantitative results and indicates that aspect-based document similarity indeed leads to more fine-grained recommendations.",Aspect-based Document Similarity for Research Papers,"Traditional document similarity measures provide a coarse-grained distinction between similar and dissimilar documents. Typically, they do not consider in what aspects two documents are similar. This limits the granularity of applications like recommender systems that rely on document similarity. In this paper, we extend similarity with aspect information by performing a pairwise document classification task. We evaluate our aspect-based document similarity approach for research papers. Paper citations indicate the aspect-based similarity, i. e., the title of a section in which a citation occurs acts as a label for the pair of citing and cited paper. We apply a series of Transformer models such as RoBERTa, ELECTRA, XLNet, and BERT variations and compare them to an LSTM baseline. 
We perform our experiments on two newly constructed datasets of 172,073 research paper pairs from the ACL Anthology and CORD-19 corpus. According to our results, SciBERT is the best performing system with F1-scores of up to 0.83. A qualitative analysis validates our quantitative results and indicates that aspect-based document similarity indeed leads to more fine-grained recommendations.","We would like to thank all reviewers and Christoph Alt for their comments and valuable feedback. The research presented in this article is funded by the German Federal Ministry of Education and Research (BMBF) through the project QURATOR (Unternehmen Region, Wachstumskern, no. 03WKDA1A).","Aspect-based Document Similarity for Research Papers. Traditional document similarity measures provide a coarse-grained distinction between similar and dissimilar documents. Typically, they do not consider in what aspects two documents are similar. This limits the granularity of applications like recommender systems that rely on document similarity. In this paper, we extend similarity with aspect information by performing a pairwise document classification task. We evaluate our aspect-based document similarity approach for research papers. Paper citations indicate the aspect-based similarity, i. e., the title of a section in which a citation occurs acts as a label for the pair of citing and cited paper. We apply a series of Transformer models such as RoBERTa, ELECTRA, XLNet, and BERT variations and compare them to an LSTM baseline. We perform our experiments on two newly constructed datasets of 172,073 research paper pairs from the ACL Anthology and CORD-19 corpus. According to our results, SciBERT is the best performing system with F1-scores of up to 0.83. A qualitative analysis validates our quantitative results and indicates that aspect-based document similarity indeed leads to more fine-grained recommendations.",2020
lu-etal-2016-joint,https://aclanthology.org/C16-1308,0,,,,,,,"Joint Inference for Event Coreference Resolution. Event coreference resolution is a challenging problem since it relies on several components of the information extraction pipeline that typically yield noisy outputs. We hypothesize that exploiting the inter-dependencies between these components can significantly improve the performance of an event coreference resolver, and subsequently propose a novel joint inference based event coreference resolver using Markov Logic Networks (MLNs). However, the rich features that are important for this task are typically very hard to explicitly encode as MLN formulas since they significantly increase the size of the MLN, thereby making joint inference and learning infeasible. To address this problem, we propose a novel solution where we implicitly encode rich features into our model by augmenting the MLN distribution with low dimensional unit clauses. Our approach achieves state-of-the-art results on two standard evaluation corpora.",Joint Inference for Event Coreference Resolution,"Event coreference resolution is a challenging problem since it relies on several components of the information extraction pipeline that typically yield noisy outputs. We hypothesize that exploiting the inter-dependencies between these components can significantly improve the performance of an event coreference resolver, and subsequently propose a novel joint inference based event coreference resolver using Markov Logic Networks (MLNs). However, the rich features that are important for this task are typically very hard to explicitly encode as MLN formulas since they significantly increase the size of the MLN, thereby making joint inference and learning infeasible. To address this problem, we propose a novel solution where we implicitly encode rich features into our model by augmenting the MLN distribution with low dimensional unit clauses. Our approach achieves state-of-the-art results on two standard evaluation corpora.",Joint Inference for Event Coreference Resolution,"Event coreference resolution is a challenging problem since it relies on several components of the information extraction pipeline that typically yield noisy outputs. We hypothesize that exploiting the inter-dependencies between these components can significantly improve the performance of an event coreference resolver, and subsequently propose a novel joint inference based event coreference resolver using Markov Logic Networks (MLNs). However, the rich features that are important for this task are typically very hard to explicitly encode as MLN formulas since they significantly increase the size of the MLN, thereby making joint inference and learning infeasible. To address this problem, we propose a novel solution where we implicitly encode rich features into our model by augmenting the MLN distribution with low dimensional unit clauses. Our approach achieves state-of-the-art results on two standard evaluation corpora.","We thank the three anonymous reviewers for their detailed comments. This work was supported in part by NSF Grants IIS-1219142 and IIS-1528037, and by the DARPA PPAML Program under AFRL prime contract number FA8750-14-C-0005. Any opinions, findings, conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views or official policies, either expressed or implied, of NSF, DARPA and AFRL.","Joint Inference for Event Coreference Resolution. 
Event coreference resolution is a challenging problem since it relies on several components of the information extraction pipeline that typically yield noisy outputs. We hypothesize that exploiting the inter-dependencies between these components can significantly improve the performance of an event coreference resolver, and subsequently propose a novel joint inference based event coreference resolver using Markov Logic Networks (MLNs). However, the rich features that are important for this task are typically very hard to explicitly encode as MLN formulas since they significantly increase the size of the MLN, thereby making joint inference and learning infeasible. To address this problem, we propose a novel solution where we implicitly encode rich features into our model by augmenting the MLN distribution with low dimensional unit clauses. Our approach achieves state-of-the-art results on two standard evaluation corpora.",2016
zhou-etal-2010-exploiting,https://aclanthology.org/W10-3015,1,,,,health,,,"Exploiting Multi-Features to Detect Hedges and their Scope in Biomedical Texts. In this paper, we present a machine learning approach that detects hedge cues and their scope in biomedical texts. Identifying hedged information in texts is a kind of semantic filtering of texts and it is important since it could extract speculative information from factual information. In order to deal with the semantic analysis problem, various evidential features are proposed and integrated through a Conditional Random Fields (CRFs) model. Hedge cues that appear in the training dataset are regarded as keywords and employed as an important feature in hedge cue identification system. For the scope finding, we construct a CRF-based system and a syntactic pattern-based system, and compare their performances. Experiments using test data from CoNLL-2010 shared task show that our proposed method is robust. F-score of the biological hedge detection task and scope finding task achieves 86.32% and 54.18% in in-domain evaluations respectively.",Exploiting Multi-Features to Detect Hedges and their Scope in Biomedical Texts,"In this paper, we present a machine learning approach that detects hedge cues and their scope in biomedical texts. Identifying hedged information in texts is a kind of semantic filtering of texts and it is important since it could extract speculative information from factual information. In order to deal with the semantic analysis problem, various evidential features are proposed and integrated through a Conditional Random Fields (CRFs) model. Hedge cues that appear in the training dataset are regarded as keywords and employed as an important feature in hedge cue identification system. For the scope finding, we construct a CRF-based system and a syntactic pattern-based system, and compare their performances. Experiments using test data from CoNLL-2010 shared task show that our proposed method is robust. F-score of the biological hedge detection task and scope finding task achieves 86.32% and 54.18% in in-domain evaluations respectively.",Exploiting Multi-Features to Detect Hedges and their Scope in Biomedical Texts,"In this paper, we present a machine learning approach that detects hedge cues and their scope in biomedical texts. Identifying hedged information in texts is a kind of semantic filtering of texts and it is important since it could extract speculative information from factual information. In order to deal with the semantic analysis problem, various evidential features are proposed and integrated through a Conditional Random Fields (CRFs) model. Hedge cues that appear in the training dataset are regarded as keywords and employed as an important feature in hedge cue identification system. For the scope finding, we construct a CRF-based system and a syntactic pattern-based system, and compare their performances. Experiments using test data from CoNLL-2010 shared task show that our proposed method is robust. F-score of the biological hedge detection task and scope finding task achieves 86.32% and 54.18% in in-domain evaluations respectively.",,"Exploiting Multi-Features to Detect Hedges and their Scope in Biomedical Texts. In this paper, we present a machine learning approach that detects hedge cues and their scope in biomedical texts. Identifying hedged information in texts is a kind of semantic filtering of texts and it is important since it could extract speculative information from factual information. 
In order to deal with the semantic analysis problem, various evidential features are proposed and integrated through a Conditional Random Fields (CRFs) model. Hedge cues that appear in the training dataset are regarded as keywords and employed as an important feature in hedge cue identification system. For the scope finding, we construct a CRF-based system and a syntactic pattern-based system, and compare their performances. Experiments using test data from CoNLL-2010 shared task show that our proposed method is robust. F-score of the biological hedge detection task and scope finding task achieves 86.32% and 54.18% in in-domain evaluations respectively.",2010
knauth-alfter-2014-dictionary,https://aclanthology.org/W14-5509,0,,,,,,,"A Dictionary Data Processing Environment and Its Application in Algorithmic Processing of Pali Dictionary Data for Future NLP Tasks. This paper presents a highly flexible infrastructure for processing digitized dictionaries and that can be used to build NLP tools in the future. This infrastructure is especially suitable for low resource languages where some digitized information is available but not (yet) suitable for algorithmic use. It allows researchers to do at least some processing in an algorithmic way using the full power of the C# programming language, reducing the effort of manual editing of the data. To test this in practice, the paper describes the processing steps taken by making use of this infrastructure in order to identify word classes and cross references in the dictionary of Pali in the context of the SeNeReKo project. We also conduct an experiment to make use of this data and show the importance of the dictionary. This paper presents the experiences and results of the selected approach.",A Dictionary Data Processing Environment and Its Application in Algorithmic Processing of {P}ali Dictionary Data for Future {NLP} Tasks,"This paper presents a highly flexible infrastructure for processing digitized dictionaries and that can be used to build NLP tools in the future. This infrastructure is especially suitable for low resource languages where some digitized information is available but not (yet) suitable for algorithmic use. It allows researchers to do at least some processing in an algorithmic way using the full power of the C# programming language, reducing the effort of manual editing of the data. To test this in practice, the paper describes the processing steps taken by making use of this infrastructure in order to identify word classes and cross references in the dictionary of Pali in the context of the SeNeReKo project. We also conduct an experiment to make use of this data and show the importance of the dictionary. This paper presents the experiences and results of the selected approach.",A Dictionary Data Processing Environment and Its Application in Algorithmic Processing of Pali Dictionary Data for Future NLP Tasks,"This paper presents a highly flexible infrastructure for processing digitized dictionaries and that can be used to build NLP tools in the future. This infrastructure is especially suitable for low resource languages where some digitized information is available but not (yet) suitable for algorithmic use. It allows researchers to do at least some processing in an algorithmic way using the full power of the C# programming language, reducing the effort of manual editing of the data. To test this in practice, the paper describes the processing steps taken by making use of this infrastructure in order to identify word classes and cross references in the dictionary of Pali in the context of the SeNeReKo project. We also conduct an experiment to make use of this data and show the importance of the dictionary. This paper presents the experiences and results of the selected approach.",,"A Dictionary Data Processing Environment and Its Application in Algorithmic Processing of Pali Dictionary Data for Future NLP Tasks. This paper presents a highly flexible infrastructure for processing digitized dictionaries and that can be used to build NLP tools in the future. 
This infrastructure is especially suitable for low resource languages where some digitized information is available but not (yet) suitable for algorithmic use. It allows researchers to do at least some processing in an algorithmic way using the full power of the C# programming language, reducing the effort of manual editing of the data. To test this in practice, the paper describes the processing steps taken by making use of this infrastructure in order to identify word classes and cross references in the dictionary of Pali in the context of the SeNeReKo project. We also conduct an experiment to make use of this data and show the importance of the dictionary. This paper presents the experiences and results of the selected approach.",2014
dilsizian-etal-2014-new,http://www.lrec-conf.org/proceedings/lrec2014/pdf/1138_Paper.pdf,1,,,,social_equality,,,"A New Framework for Sign Language Recognition based on 3D Handshape Identification and Linguistic Modeling. Current approaches to sign recognition by computer generally have at least some of the following limitations: they rely on laboratory conditions for sign production, are limited to a small vocabulary, rely on 2D modeling (and therefore cannot deal with occlusions and off-plane rotations), and/or achieve limited success. Here we propose a new framework that (1) provides a new tracking method less dependent than others on laboratory conditions and able to deal with variations in background and skin regions (such as the face, forearms, or other hands); (2) allows for identification of 3D hand configurations that are linguistically important in American Sign Language (ASL); and (3) incorporates statistical information reflecting linguistic constraints in sign production. For purposes of large-scale computer-based sign language recognition from video, the ability to distinguish hand configurations accurately is critical. Our current method estimates the 3D hand configuration to distinguish among 77 hand configurations linguistically relevant for ASL. Constraining the problem in this way makes recognition of 3D hand configuration more tractable and provides the information specifically needed for sign recognition. Further improvements are obtained by incorporation of statistical information about linguistic dependencies among handshapes within a sign derived from an annotated corpus of almost 10,000 sign tokens.",A New Framework for Sign Language Recognition based on 3{D} Handshape Identification and Linguistic Modeling,"Current approaches to sign recognition by computer generally have at least some of the following limitations: they rely on laboratory conditions for sign production, are limited to a small vocabulary, rely on 2D modeling (and therefore cannot deal with occlusions and off-plane rotations), and/or achieve limited success. Here we propose a new framework that (1) provides a new tracking method less dependent than others on laboratory conditions and able to deal with variations in background and skin regions (such as the face, forearms, or other hands); (2) allows for identification of 3D hand configurations that are linguistically important in American Sign Language (ASL); and (3) incorporates statistical information reflecting linguistic constraints in sign production. For purposes of large-scale computer-based sign language recognition from video, the ability to distinguish hand configurations accurately is critical. Our current method estimates the 3D hand configuration to distinguish among 77 hand configurations linguistically relevant for ASL. Constraining the problem in this way makes recognition of 3D hand configuration more tractable and provides the information specifically needed for sign recognition. 
Further improvements are obtained by incorporation of statistical information about linguistic dependencies among handshapes within a sign derived from an annotated corpus of almost 10,000 sign tokens.",A New Framework for Sign Language Recognition based on 3D Handshape Identification and Linguistic Modeling,"Current approaches to sign recognition by computer generally have at least some of the following limitations: they rely on laboratory conditions for sign production, are limited to a small vocabulary, rely on 2D modeling (and therefore cannot deal with occlusions and off-plane rotations), and/or achieve limited success. Here we propose a new framework that (1) provides a new tracking method less dependent than others on laboratory conditions and able to deal with variations in background and skin regions (such as the face, forearms, or other hands); (2) allows for identification of 3D hand configurations that are linguistically important in American Sign Language (ASL); and (3) incorporates statistical information reflecting linguistic constraints in sign production. For purposes of large-scale computer-based sign language recognition from video, the ability to distinguish hand configurations accurately is critical. Our current method estimates the 3D hand configuration to distinguish among 77 hand configurations linguistically relevant for ASL. Constraining the problem in this way makes recognition of 3D hand configuration more tractable and provides the information specifically needed for sign recognition. Further improvements are obtained by incorporation of statistical information about linguistic dependencies among handshapes within a sign derived from an annotated corpus of almost 10,000 sign tokens.",,"A New Framework for Sign Language Recognition based on 3D Handshape Identification and Linguistic Modeling. Current approaches to sign recognition by computer generally have at least some of the following limitations: they rely on laboratory conditions for sign production, are limited to a small vocabulary, rely on 2D modeling (and therefore cannot deal with occlusions and off-plane rotations), and/or achieve limited success. Here we propose a new framework that (1) provides a new tracking method less dependent than others on laboratory conditions and able to deal with variations in background and skin regions (such as the face, forearms, or other hands); (2) allows for identification of 3D hand configurations that are linguistically important in American Sign Language (ASL); and (3) incorporates statistical information reflecting linguistic constraints in sign production. For purposes of large-scale computer-based sign language recognition from video, the ability to distinguish hand configurations accurately is critical. Our current method estimates the 3D hand configuration to distinguish among 77 hand configurations linguistically relevant for ASL. Constraining the problem in this way makes recognition of 3D hand configuration more tractable and provides the information specifically needed for sign recognition. Further improvements are obtained by incorporation of statistical information about linguistic dependencies among handshapes within a sign derived from an annotated corpus of almost 10,000 sign tokens.",2014
clark-fijalkow-2021-consistent,https://aclanthology.org/2021.scil-1.60,0,,,,,,,"Consistent unsupervised estimators for anchored PCFGs. Learning probabilistic context-free grammars just from a sample of strings from the grammars is a classic problem going back to Horning (1969) . This abstract, based on the full paper in Clark and Fijalkow (2020) , presents an approach for strongly learning a linguistically interesting subclass of probabilistic context free grammars from strings in the realizable case. Unpacking this, we assume that we have some PCFG that we are interested in learning and that we have access only to a sample of strings generated by the PCFGi.e. sampled from the distribution defined by the context free grammar. Crucially we do not observe the derivation trees -the hierarchical latent structure. Strong learning means that we want the learned grammar to define the same distribution over derivation trees -i.e. the labeled trees -as the original grammar and not just the same distribution over strings.",Consistent unsupervised estimators for anchored {PCFG}s,"Learning probabilistic context-free grammars just from a sample of strings from the grammars is a classic problem going back to Horning (1969) . This abstract, based on the full paper in Clark and Fijalkow (2020) , presents an approach for strongly learning a linguistically interesting subclass of probabilistic context free grammars from strings in the realizable case. Unpacking this, we assume that we have some PCFG that we are interested in learning and that we have access only to a sample of strings generated by the PCFGi.e. sampled from the distribution defined by the context free grammar. Crucially we do not observe the derivation trees -the hierarchical latent structure. Strong learning means that we want the learned grammar to define the same distribution over derivation trees -i.e. the labeled trees -as the original grammar and not just the same distribution over strings.",Consistent unsupervised estimators for anchored PCFGs,"Learning probabilistic context-free grammars just from a sample of strings from the grammars is a classic problem going back to Horning (1969) . This abstract, based on the full paper in Clark and Fijalkow (2020) , presents an approach for strongly learning a linguistically interesting subclass of probabilistic context free grammars from strings in the realizable case. Unpacking this, we assume that we have some PCFG that we are interested in learning and that we have access only to a sample of strings generated by the PCFGi.e. sampled from the distribution defined by the context free grammar. Crucially we do not observe the derivation trees -the hierarchical latent structure. Strong learning means that we want the learned grammar to define the same distribution over derivation trees -i.e. the labeled trees -as the original grammar and not just the same distribution over strings.",,"Consistent unsupervised estimators for anchored PCFGs. Learning probabilistic context-free grammars just from a sample of strings from the grammars is a classic problem going back to Horning (1969) . This abstract, based on the full paper in Clark and Fijalkow (2020) , presents an approach for strongly learning a linguistically interesting subclass of probabilistic context free grammars from strings in the realizable case. Unpacking this, we assume that we have some PCFG that we are interested in learning and that we have access only to a sample of strings generated by the PCFGi.e. 
sampled from the distribution defined by the context free grammar. Crucially we do not observe the derivation trees -the hierarchical latent structure. Strong learning means that we want the learned grammar to define the same distribution over derivation trees -i.e. the labeled trees -as the original grammar and not just the same distribution over strings.",2021
tanenhaus-1996-using,https://aclanthology.org/P96-1007,0,,,,,,,"Using Eye Movements to Study Spoken Language Comprehension: Evidence for Incremental Interpretation (Invited Talk). We present an overview of recent work in which eye movements are monitored as people follow spoken instructions to move objects or pictures in a visual workspace. Subjects naturally make saccadic eye-movements to objects that are closely time-locked to relevant information in the instruction. Thus the eye-movements provide a window into the rapid mental processes that underlie spoken language comprehension. We review studies of reference resolution, word recognition, and pragmatic effects on syntactic ambiguity resolution. Our studies show that people seek to establish reference with respect to their behavioral goals during the earliest moments of linguistic processing. Moreover, referentially relevant non-linguistic information immediately affects how the linguistic input is initially structured.",Using Eye Movements to Study Spoken Language Comprehension: Evidence for Incremental Interpretation (Invited Talk),"We present an overview of recent work in which eye movements are monitored as people follow spoken instructions to move objects or pictures in a visual workspace. Subjects naturally make saccadic eye-movements to objects that are closely time-locked to relevant information in the instruction. Thus the eye-movements provide a window into the rapid mental processes that underlie spoken language comprehension. We review studies of reference resolution, word recognition, and pragmatic effects on syntactic ambiguity resolution. Our studies show that people seek to establish reference with respect to their behavioral goals during the earliest moments of linguistic processing. Moreover, referentially relevant non-linguistic information immediately affects how the linguistic input is initially structured.",Using Eye Movements to Study Spoken Language Comprehension: Evidence for Incremental Interpretation (Invited Talk),"We present an overview of recent work in which eye movements are monitored as people follow spoken instructions to move objects or pictures in a visual workspace. Subjects naturally make saccadic eye-movements to objects that are closely time-locked to relevant information in the instruction. Thus the eye-movements provide a window into the rapid mental processes that underlie spoken language comprehension. We review studies of reference resolution, word recognition, and pragmatic effects on syntactic ambiguity resolution. Our studies show that people seek to establish reference with respect to their behavioral goals during the earliest moments of linguistic processing. Moreover, referentially relevant non-linguistic information immediately affects how the linguistic input is initially structured.","* This paper summarizes work that the invited talk by the first author (MKT) was based upon. Supported by NIH resource grant 1-P41-RR09283; NIH HD27206 to MKT; NIH F32DC00210 to PDA, NSF Graduate Research Fellowships to MJS-K and JSM and a Canadian Social Science Research Fellowship to JCS.","Using Eye Movements to Study Spoken Language Comprehension: Evidence for Incremental Interpretation (Invited Talk). We present an overview of recent work in which eye movements are monitored as people follow spoken instructions to move objects or pictures in a visual workspace. Subjects naturally make saccadic eye-movements to objects that are closely time-locked to relevant information in the instruction. 
Thus the eye-movements provide a window into the rapid mental processes that underlie spoken language comprehension. We review studies of reference resolution, word recognition, and pragmatic effects on syntactic ambiguity resolution. Our studies show that people seek to establish reference with respect to their behavioral goals during the earliest moments of linguistic processing. Moreover, referentially relevant non-linguistic information immediately affects how the linguistic input is initially structured.",1996
patil-etal-2013-named,https://aclanthology.org/I13-1180,0,,,,,,,"Named Entity Extraction using Information Distance. Named entities (NE) are important information carrying units within documents. Named Entity extraction (NEX) task consists of automatic construction of a list of phrases belonging to each NE of interest. NEX is important for domains which lack a corpus with tagged NEs. We present an enhanced version and improved results of our unsupervised (bootstrapping) NEX technique (Patil et al., 2013) and establish its domain independence using experimental results on corpora from two different domains: agriculture and mechanical engineering (IC engine 1 parts). We use a new variant of Multiword Expression Distance (MED) (Bu et al., 2010) to quantify proximity of a candidate phrase with a given NE type. MED itself is an approximation of the information distance (Bennett et al., 1998). Efficacy of our method is shown using experimental comparison with pointwise mutual information (PMI), BASILISK and KNOWITALL. Our method discovered 8 new plant diseases which are not found in Wikipedia. To the best of our knowledge, this is the first use of NEX techniques for agriculture and mechanical engineering (engine parts) domains.",Named Entity Extraction using Information Distance,"Named entities (NE) are important information carrying units within documents. Named Entity extraction (NEX) task consists of automatic construction of a list of phrases belonging to each NE of interest. NEX is important for domains which lack a corpus with tagged NEs. We present an enhanced version and improved results of our unsupervised (bootstrapping) NEX technique (Patil et al., 2013) and establish its domain independence using experimental results on corpora from two different domains: agriculture and mechanical engineering (IC engine 1 parts). We use a new variant of Multiword Expression Distance (MED) (Bu et al., 2010) to quantify proximity of a candidate phrase with a given NE type. MED itself is an approximation of the information distance (Bennett et al., 1998). Efficacy of our method is shown using experimental comparison with pointwise mutual information (PMI), BASILISK and KNOWITALL. Our method discovered 8 new plant diseases which are not found in Wikipedia. To the best of our knowledge, this is the first use of NEX techniques for agriculture and mechanical engineering (engine parts) domains.",Named Entity Extraction using Information Distance,"Named entities (NE) are important information carrying units within documents. Named Entity extraction (NEX) task consists of automatic construction of a list of phrases belonging to each NE of interest. NEX is important for domains which lack a corpus with tagged NEs. We present an enhanced version and improved results of our unsupervised (bootstrapping) NEX technique (Patil et al., 2013) and establish its domain independence using experimental results on corpora from two different domains: agriculture and mechanical engineering (IC engine 1 parts). We use a new variant of Multiword Expression Distance (MED) (Bu et al., 2010) to quantify proximity of a candidate phrase with a given NE type. MED itself is an approximation of the information distance (Bennett et al., 1998). Efficacy of our method is shown using experimental comparison with pointwise mutual information (PMI), BASILISK and KNOWITALL. Our method discovered 8 new plant diseases which are not found in Wikipedia. 
To the best of our knowledge, this is the first use of NEX techniques for agriculture and mechanical engineering (engine parts) domains.",,"Named Entity Extraction using Information Distance. Named entities (NE) are important information carrying units within documents. Named Entity extraction (NEX) task consists of automatic construction of a list of phrases belonging to each NE of interest. NEX is important for domains which lack a corpus with tagged NEs. We present an enhanced version and improved results of our unsupervised (bootstrapping) NEX technique (Patil et al., 2013) and establish its domain independence using experimental results on corpora from two different domains: agriculture and mechanical engineering (IC engine 1 parts). We use a new variant of Multiword Expression Distance (MED) (Bu et al., 2010) to quantify proximity of a candidate phrase with a given NE type. MED itself is an approximation of the information distance (Bennett et al., 1998). Efficacy of our method is shown using experimental comparison with pointwise mutual information (PMI), BASILISK and KNOWITALL. Our method discovered 8 new plant diseases which are not found in Wikipedia. To the best of our knowledge, this is the first use of NEX techniques for agriculture and mechanical engineering (engine parts) domains.",2013
zhou-2000-local,https://aclanthology.org/C00-2141,0,,,,,,,"Local context templates for Chinese constituent boundary prediction. In this paper, we proposed a shallow syntactic knowledge description: constituent boundary representation and its simple and efficient prediction algorithm, based on different local context templates learned from the annotated corpus. An open test on 2780 Chinese real text sentences showed the satisfying results: 94%(92%) precision for the words with multiple (single) boundary tag output.",Local context templates for {C}hinese constituent boundary prediction,"In this paper, we proposed a shallow syntactic knowledge description: constituent boundary representation and its simple and efficient prediction algorithm, based on different local context templates learned from the annotated corpus. An open test on 2780 Chinese real text sentences showed the satisfying results: 94%(92%) precision for the words with multiple (single) boundary tag output.",Local context templates for Chinese constituent boundary prediction,"In this paper, we proposed a shallow syntactic knowledge description: constituent boundary representation and its simple and efficient prediction algorithm, based on different local context templates learned from the annotated corpus. An open test on 2780 Chinese real text sentences showed the satisfying results: 94%(92%) precision for the words with multiple (single) boundary tag output.",The research was supported by National Natural Science Foundation of China (NSFC) (Grant No. 69903007).,"Local context templates for Chinese constituent boundary prediction. In this paper, we proposed a shallow syntactic knowledge description: constituent boundary representation and its simple and efficient prediction algorithm, based on different local context templates learned from the annotated corpus. An open test on 2780 Chinese real text sentences showed the satisfying results: 94%(92%) precision for the words with multiple (single) boundary tag output.",2000
yirmibesoglu-gungor-2020-ermi,https://aclanthology.org/2020.mwe-1.17,0,,,,,,,"ERMI at PARSEME Shared Task 2020: Embedding-Rich Multiword Expression Identification. This paper describes the ERMI system submitted to the closed track of the PARSEME shared task 2020 on automatic identification of verbal multiword expressions (VMWEs). ERMI is an embedding-rich bidirectional LSTM-CRF model, which takes into account the embeddings of the word, its POS tag, dependency relation, and its head word. The results are reported for 14 languages, where the system is ranked 1 st in the general cross-lingual ranking of the closed track systems, according to the Unseen MWE-based F 1 .",{ERMI} at {PARSEME} Shared Task 2020: Embedding-Rich Multiword Expression Identification,"This paper describes the ERMI system submitted to the closed track of the PARSEME shared task 2020 on automatic identification of verbal multiword expressions (VMWEs). ERMI is an embedding-rich bidirectional LSTM-CRF model, which takes into account the embeddings of the word, its POS tag, dependency relation, and its head word. The results are reported for 14 languages, where the system is ranked 1 st in the general cross-lingual ranking of the closed track systems, according to the Unseen MWE-based F 1 .",ERMI at PARSEME Shared Task 2020: Embedding-Rich Multiword Expression Identification,"This paper describes the ERMI system submitted to the closed track of the PARSEME shared task 2020 on automatic identification of verbal multiword expressions (VMWEs). ERMI is an embedding-rich bidirectional LSTM-CRF model, which takes into account the embeddings of the word, its POS tag, dependency relation, and its head word. The results are reported for 14 languages, where the system is ranked 1 st in the general cross-lingual ranking of the closed track systems, according to the Unseen MWE-based F 1 .","The numerical calculations reported in this paper were partially performed at TUBITAK ULAKBIM, High Performance and Grid Computing Center (TRUBA resources).","ERMI at PARSEME Shared Task 2020: Embedding-Rich Multiword Expression Identification. This paper describes the ERMI system submitted to the closed track of the PARSEME shared task 2020 on automatic identification of verbal multiword expressions (VMWEs). ERMI is an embedding-rich bidirectional LSTM-CRF model, which takes into account the embeddings of the word, its POS tag, dependency relation, and its head word. The results are reported for 14 languages, where the system is ranked 1 st in the general cross-lingual ranking of the closed track systems, according to the Unseen MWE-based F 1 .",2020
nn-1978-finite-string-volume,https://aclanthology.org/J78-2005,0,,,,,,,"The FINITE STRING, Volume 15, Number 2 (continued). Information Industry Association. Following Mr. Zurkowski's presentation, Sen. Hollings solicited ""help from your organization and others, on the convergence of computer and communications.","The {F}INITE {S}TRING, Volume 15, Number 2 (continued)","Information Industry Association. Following Mr. Zurkowski's presentation, Sen. Hollings solicited ""help from your organization and others, on the convergence of computer and communications.","The FINITE STRING, Volume 15, Number 2 (continued)","Information Industry Association. Following Mr. Zurkowski's presentation, Sen. Hollings solicited ""help from your organization and others, on the convergence of computer and communications.",,"The FINITE STRING, Volume 15, Number 2 (continued). Information Industry Association. Following Mr. Zurkowski's presentation, Sen. Hollings solicited ""help from your organization and others, on the convergence of computer and communications.",1978
christensen-etal-2009-rose,https://aclanthology.org/P09-2049,0,,,,,,,"A Rose is a Roos is a Ruusu: Querying Translations for Web Image Search. We query Web Image search engines with words (e.g., spring) but need images that correspond to particular senses of the word (e.g., flexible coil). Querying with polysemous words often yields unsatisfactory results from engines such as Google Images. We build an image search engine, IDIOM, which improves the quality of returned images by focusing search on the desired sense. Our algorithm, instead of searching for the original query, searches for multiple, automatically chosen translations of the sense in several languages. Experimental results show that IDIOM outperforms Google Images and other competing algorithms returning 22% more relevant images.",A Rose is a Roos is a Ruusu: Querying Translations for Web Image Search,"We query Web Image search engines with words (e.g., spring) but need images that correspond to particular senses of the word (e.g., flexible coil). Querying with polysemous words often yields unsatisfactory results from engines such as Google Images. We build an image search engine, IDIOM, which improves the quality of returned images by focusing search on the desired sense. Our algorithm, instead of searching for the original query, searches for multiple, automatically chosen translations of the sense in several languages. Experimental results show that IDIOM outperforms Google Images and other competing algorithms returning 22% more relevant images.",A Rose is a Roos is a Ruusu: Querying Translations for Web Image Search,"We query Web Image search engines with words (e.g., spring) but need images that correspond to particular senses of the word (e.g., flexible coil). Querying with polysemous words often yields unsatisfactory results from engines such as Google Images. We build an image search engine, IDIOM, which improves the quality of returned images by focusing search on the desired sense. Our algorithm, instead of searching for the original query, searches for multiple, automatically chosen translations of the sense in several languages. Experimental results show that IDIOM outperforms Google Images and other competing algorithms returning 22% more relevant images.",,"A Rose is a Roos is a Ruusu: Querying Translations for Web Image Search. We query Web Image search engines with words (e.g., spring) but need images that correspond to particular senses of the word (e.g., flexible coil). Querying with polysemous words often yields unsatisfactory results from engines such as Google Images. We build an image search engine, IDIOM, which improves the quality of returned images by focusing search on the desired sense. Our algorithm, instead of searching for the original query, searches for multiple, automatically chosen translations of the sense in several languages. Experimental results show that IDIOM outperforms Google Images and other competing algorithms returning 22% more relevant images.",2009
marinelli-2010-lexical,http://www.lrec-conf.org/proceedings/lrec2010/pdf/830_Paper.pdf,0,,,,,,,"Lexical Resources and Ontological Classifications for the Recognition of Proper Names Sense Extension. Particular uses of PNs with sense extension are focussed on and inspected taking into account the presence of PNs in lexical semantic databases and electronic corpora. Methodology to select ad include PNs in semantic databases is described; the use of PNs in corpora of Italian Language is examined and evaluated, analyzing the behaviour of a set of PNs in different periods of time. Computational resources can facilitate our study in this field in an effective way by helping codify, translate and handle particular cases of polysemy, but also guiding in metaphorical and metonymic sense recognition, supported by the ontological classification of the lexical semantic entities. The relationship between the ""abstract"" and the ""concrete"", which is at the basis of the Conceptual Metaphor perspective, can be considered strictly related to the variation of the ontological values found in our analysis of the PNs and their belonging classes which are codified in the ItalWordNet database.",Lexical Resources and Ontological Classifications for the Recognition of Proper Names Sense Extension,"Particular uses of PNs with sense extension are focussed on and inspected taking into account the presence of PNs in lexical semantic databases and electronic corpora. Methodology to select ad include PNs in semantic databases is described; the use of PNs in corpora of Italian Language is examined and evaluated, analyzing the behaviour of a set of PNs in different periods of time. Computational resources can facilitate our study in this field in an effective way by helping codify, translate and handle particular cases of polysemy, but also guiding in metaphorical and metonymic sense recognition, supported by the ontological classification of the lexical semantic entities. The relationship between the ""abstract"" and the ""concrete"", which is at the basis of the Conceptual Metaphor perspective, can be considered strictly related to the variation of the ontological values found in our analysis of the PNs and their belonging classes which are codified in the ItalWordNet database.",Lexical Resources and Ontological Classifications for the Recognition of Proper Names Sense Extension,"Particular uses of PNs with sense extension are focussed on and inspected taking into account the presence of PNs in lexical semantic databases and electronic corpora. Methodology to select ad include PNs in semantic databases is described; the use of PNs in corpora of Italian Language is examined and evaluated, analyzing the behaviour of a set of PNs in different periods of time. Computational resources can facilitate our study in this field in an effective way by helping codify, translate and handle particular cases of polysemy, but also guiding in metaphorical and metonymic sense recognition, supported by the ontological classification of the lexical semantic entities. The relationship between the ""abstract"" and the ""concrete"", which is at the basis of the Conceptual Metaphor perspective, can be considered strictly related to the variation of the ontological values found in our analysis of the PNs and their belonging classes which are codified in the ItalWordNet database.",,"Lexical Resources and Ontological Classifications for the Recognition of Proper Names Sense Extension. 
Particular uses of PNs with sense extension are focussed on and inspected taking into account the presence of PNs in lexical semantic databases and electronic corpora. Methodology to select ad include PNs in semantic databases is described; the use of PNs in corpora of Italian Language is examined and evaluated, analyzing the behaviour of a set of PNs in different periods of time. Computational resources can facilitate our study in this field in an effective way by helping codify, translate and handle particular cases of polysemy, but also guiding in metaphorical and metonymic sense recognition, supported by the ontological classification of the lexical semantic entities. The relationship between the ""abstract"" and the ""concrete"", which is at the basis of the Conceptual Metaphor perspective, can be considered strictly related to the variation of the ontological values found in our analysis of the PNs and their belonging classes which are codified in the ItalWordNet database.",2010
polajnar-etal-2015-exploration,https://aclanthology.org/W15-2701,0,,,,,,,"An Exploration of Discourse-Based Sentence Spaces for Compositional Distributional Semantics. This paper investigates whether the wider context in which a sentence is located can contribute to a distributional representation of sentence meaning. We compare a vector space for sentences in which the features are words occurring within the sentence, with two new vector spaces that only make use of surrounding context. Experiments on simple subject-verbobject similarity tasks show that all sentence spaces produce results that are comparable with previous work. However, qualitative analysis and user experiments indicate that extra-sentential contexts capture more diverse, yet topically coherent information.",An Exploration of Discourse-Based Sentence Spaces for Compositional Distributional Semantics,"This paper investigates whether the wider context in which a sentence is located can contribute to a distributional representation of sentence meaning. We compare a vector space for sentences in which the features are words occurring within the sentence, with two new vector spaces that only make use of surrounding context. Experiments on simple subject-verbobject similarity tasks show that all sentence spaces produce results that are comparable with previous work. However, qualitative analysis and user experiments indicate that extra-sentential contexts capture more diverse, yet topically coherent information.",An Exploration of Discourse-Based Sentence Spaces for Compositional Distributional Semantics,"This paper investigates whether the wider context in which a sentence is located can contribute to a distributional representation of sentence meaning. We compare a vector space for sentences in which the features are words occurring within the sentence, with two new vector spaces that only make use of surrounding context. Experiments on simple subject-verbobject similarity tasks show that all sentence spaces produce results that are comparable with previous work. However, qualitative analysis and user experiments indicate that extra-sentential contexts capture more diverse, yet topically coherent information.",,"An Exploration of Discourse-Based Sentence Spaces for Compositional Distributional Semantics. This paper investigates whether the wider context in which a sentence is located can contribute to a distributional representation of sentence meaning. We compare a vector space for sentences in which the features are words occurring within the sentence, with two new vector spaces that only make use of surrounding context. Experiments on simple subject-verbobject similarity tasks show that all sentence spaces produce results that are comparable with previous work. However, qualitative analysis and user experiments indicate that extra-sentential contexts capture more diverse, yet topically coherent information.",2015
dehouck-denis-2019-phylogenic,https://aclanthology.org/N19-1017,0,,,,,,,"Phylogenic Multi-Lingual Dependency Parsing. Languages evolve and diverge over time. Their evolutionary history is often depicted in the shape of a phylogenetic tree. Assuming parsing models are representations of their languages grammars, their evolution should follow a structure similar to that of the phylogenetic tree. In this paper, drawing inspiration from multi-task learning, we make use of the phylogenetic tree to guide the learning of multilingual dependency parsers leveraging languages structural similarities. Experiments on data from the Universal Dependency project show that phylogenetic training is beneficial to low resourced languages and to well furnished languages families. As a side product of phylogenetic training, our model is able to perform zero-shot parsing of previously unseen languages.",Phylogenic Multi-Lingual Dependency Parsing,"Languages evolve and diverge over time. Their evolutionary history is often depicted in the shape of a phylogenetic tree. Assuming parsing models are representations of their languages grammars, their evolution should follow a structure similar to that of the phylogenetic tree. In this paper, drawing inspiration from multi-task learning, we make use of the phylogenetic tree to guide the learning of multilingual dependency parsers leveraging languages structural similarities. Experiments on data from the Universal Dependency project show that phylogenetic training is beneficial to low resourced languages and to well furnished languages families. As a side product of phylogenetic training, our model is able to perform zero-shot parsing of previously unseen languages.",Phylogenic Multi-Lingual Dependency Parsing,"Languages evolve and diverge over time. Their evolutionary history is often depicted in the shape of a phylogenetic tree. Assuming parsing models are representations of their languages grammars, their evolution should follow a structure similar to that of the phylogenetic tree. In this paper, drawing inspiration from multi-task learning, we make use of the phylogenetic tree to guide the learning of multilingual dependency parsers leveraging languages structural similarities. Experiments on data from the Universal Dependency project show that phylogenetic training is beneficial to low resourced languages and to well furnished languages families. As a side product of phylogenetic training, our model is able to perform zero-shot parsing of previously unseen languages.",This work was supported by ANR Grant GRASP No. ANR-16-CE33-0011-01 and Grant from CPER Nord-Pas de Calais/FEDER DATA Advanced data science and technologies 2015-2020. We also thank the reviewers for their valuable feedback.,"Phylogenic Multi-Lingual Dependency Parsing. Languages evolve and diverge over time. Their evolutionary history is often depicted in the shape of a phylogenetic tree. Assuming parsing models are representations of their languages grammars, their evolution should follow a structure similar to that of the phylogenetic tree. In this paper, drawing inspiration from multi-task learning, we make use of the phylogenetic tree to guide the learning of multilingual dependency parsers leveraging languages structural similarities. Experiments on data from the Universal Dependency project show that phylogenetic training is beneficial to low resourced languages and to well furnished languages families. 
As a side product of phylogenetic training, our model is able to perform zero-shot parsing of previously unseen languages.",2019
terragni-etal-2021-octis,https://aclanthology.org/2021.eacl-demos.31,0,,,,,,,"OCTIS: Comparing and Optimizing Topic models is Simple!. In this paper, we present OCTIS, a framework for training, analyzing, and comparing Topic Models, whose optimal hyper-parameters are estimated using a Bayesian Optimization approach. The proposed solution integrates several state-of-the-art topic models and evaluation metrics. These metrics can be targeted as objective by the underlying optimization procedure to determine the best hyper-parameter configuration. OCTIS allows researchers and practitioners to have a fair comparison between topic models of interest, using several benchmark datasets and well-known evaluation metrics, to integrate novel algorithms, and to have an interactive visualization of the results for understanding the behavior of each model. The code is available at the following link: https://github.com/ MIND-Lab/OCTIS.",{OCTIS}: Comparing and Optimizing Topic models is Simple!,"In this paper, we present OCTIS, a framework for training, analyzing, and comparing Topic Models, whose optimal hyper-parameters are estimated using a Bayesian Optimization approach. The proposed solution integrates several state-of-the-art topic models and evaluation metrics. These metrics can be targeted as objective by the underlying optimization procedure to determine the best hyper-parameter configuration. OCTIS allows researchers and practitioners to have a fair comparison between topic models of interest, using several benchmark datasets and well-known evaluation metrics, to integrate novel algorithms, and to have an interactive visualization of the results for understanding the behavior of each model. The code is available at the following link: https://github.com/ MIND-Lab/OCTIS.",OCTIS: Comparing and Optimizing Topic models is Simple!,"In this paper, we present OCTIS, a framework for training, analyzing, and comparing Topic Models, whose optimal hyper-parameters are estimated using a Bayesian Optimization approach. The proposed solution integrates several state-of-the-art topic models and evaluation metrics. These metrics can be targeted as objective by the underlying optimization procedure to determine the best hyper-parameter configuration. OCTIS allows researchers and practitioners to have a fair comparison between topic models of interest, using several benchmark datasets and well-known evaluation metrics, to integrate novel algorithms, and to have an interactive visualization of the results for understanding the behavior of each model. The code is available at the following link: https://github.com/ MIND-Lab/OCTIS.",,"OCTIS: Comparing and Optimizing Topic models is Simple!. In this paper, we present OCTIS, a framework for training, analyzing, and comparing Topic Models, whose optimal hyper-parameters are estimated using a Bayesian Optimization approach. The proposed solution integrates several state-of-the-art topic models and evaluation metrics. These metrics can be targeted as objective by the underlying optimization procedure to determine the best hyper-parameter configuration. OCTIS allows researchers and practitioners to have a fair comparison between topic models of interest, using several benchmark datasets and well-known evaluation metrics, to integrate novel algorithms, and to have an interactive visualization of the results for understanding the behavior of each model. The code is available at the following link: https://github.com/ MIND-Lab/OCTIS.",2021
maillard-clark-2015-learning,https://aclanthology.org/K15-1035,0,,,,,,,"Learning Adjective Meanings with a Tensor-Based Skip-Gram Model. We present a compositional distributional semantic model which is an implementation of the tensor-based framework of Coecke et al. (2011). It is an extended skipgram model (Mikolov et al., 2013) which we apply to adjective-noun combinations, learning nouns as vectors and adjectives as matrices. We also propose a novel measure of adjective similarity, and show that adjective matrix representations lead to improved performance in adjective and adjective-noun similarity tasks, as well as in the detection of semantically anomalous adjective-noun pairs.",Learning Adjective Meanings with a Tensor-Based Skip-Gram Model,"We present a compositional distributional semantic model which is an implementation of the tensor-based framework of Coecke et al. (2011). It is an extended skipgram model (Mikolov et al., 2013) which we apply to adjective-noun combinations, learning nouns as vectors and adjectives as matrices. We also propose a novel measure of adjective similarity, and show that adjective matrix representations lead to improved performance in adjective and adjective-noun similarity tasks, as well as in the detection of semantically anomalous adjective-noun pairs.",Learning Adjective Meanings with a Tensor-Based Skip-Gram Model,"We present a compositional distributional semantic model which is an implementation of the tensor-based framework of Coecke et al. (2011). It is an extended skipgram model (Mikolov et al., 2013) which we apply to adjective-noun combinations, learning nouns as vectors and adjectives as matrices. We also propose a novel measure of adjective similarity, and show that adjective matrix representations lead to improved performance in adjective and adjective-noun similarity tasks, as well as in the detection of semantically anomalous adjective-noun pairs.","Jean Maillard is supported by an EPSRC Doctoral Training Grant and a St John's Scholarship. Stephen Clark is supported by ERC Starting Grant DisCoTex (306920) and EPSRC grant EP/I037512/1. We would like to thank Tamara Polajnar, Laura Rimell, and Eva Vecchi for useful discussion.","Learning Adjective Meanings with a Tensor-Based Skip-Gram Model. We present a compositional distributional semantic model which is an implementation of the tensor-based framework of Coecke et al. (2011). It is an extended skipgram model (Mikolov et al., 2013) which we apply to adjective-noun combinations, learning nouns as vectors and adjectives as matrices. We also propose a novel measure of adjective similarity, and show that adjective matrix representations lead to improved performance in adjective and adjective-noun similarity tasks, as well as in the detection of semantically anomalous adjective-noun pairs.",2015
bilac-tanaka-2004-hybrid,https://aclanthology.org/C04-1086,0,,,,,,,A hybrid back-transliteration system for Japanese. ,A hybrid back-transliteration system for {J}apanese,,A hybrid back-transliteration system for Japanese,,,A hybrid back-transliteration system for Japanese. ,2004
al-sabbagh-etal-2013-using,https://aclanthology.org/I13-1047,0,,,,,,,"Using the Semantic-Syntactic Interface for Reliable Arabic Modality Annotation. We introduce a novel modality scheme where triggers are words and phrases that convey modality meanings and subcategorize for clauses and verbal phrases. This semanticsyntactic working definition of modality enables us to design practical and replicable annotation guidelines and procedures that alleviate some shortcomings of current purely semantic modality annotation schemes and yield high inter-annotator agreement rates. We use this scheme to annotate a tweet-based Arabic corpus for modality information. This novel language resource, being the first, initiates NLP research on Arabic modality.",Using the Semantic-Syntactic Interface for Reliable {A}rabic Modality Annotation,"We introduce a novel modality scheme where triggers are words and phrases that convey modality meanings and subcategorize for clauses and verbal phrases. This semanticsyntactic working definition of modality enables us to design practical and replicable annotation guidelines and procedures that alleviate some shortcomings of current purely semantic modality annotation schemes and yield high inter-annotator agreement rates. We use this scheme to annotate a tweet-based Arabic corpus for modality information. This novel language resource, being the first, initiates NLP research on Arabic modality.",Using the Semantic-Syntactic Interface for Reliable Arabic Modality Annotation,"We introduce a novel modality scheme where triggers are words and phrases that convey modality meanings and subcategorize for clauses and verbal phrases. This semanticsyntactic working definition of modality enables us to design practical and replicable annotation guidelines and procedures that alleviate some shortcomings of current purely semantic modality annotation schemes and yield high inter-annotator agreement rates. We use this scheme to annotate a tweet-based Arabic corpus for modality information. This novel language resource, being the first, initiates NLP research on Arabic modality.",This work has been partially supported by a grant on social media and mobile computing from the Beckman Institute for Advanced Science and Technology.,"Using the Semantic-Syntactic Interface for Reliable Arabic Modality Annotation. We introduce a novel modality scheme where triggers are words and phrases that convey modality meanings and subcategorize for clauses and verbal phrases. This semanticsyntactic working definition of modality enables us to design practical and replicable annotation guidelines and procedures that alleviate some shortcomings of current purely semantic modality annotation schemes and yield high inter-annotator agreement rates. We use this scheme to annotate a tweet-based Arabic corpus for modality information. This novel language resource, being the first, initiates NLP research on Arabic modality.",2013
popovic-ney-2004-towards,http://www.lrec-conf.org/proceedings/lrec2004/pdf/372.pdf,0,,,,,,,"Towards the Use of Word Stems and Suffixes for Statistical Machine Translation. In this paper we present methods for improving the quality of translation from an inflected language into English by making use of part-of-speech tags and word stems and suffixes in the source language. Results for translations from Spanish and Catalan into English are presented on the LC-STAR trilingual corpus which consists of spontaneously spoken dialogues in the domain of travelling and appointment scheduling. Results for translation from Serbian into English are presented on the Assimil language course, the bilingual corpus from unrestricted domain. We achieve up to 5% relative reduction of error rates for Spanish and Catalan and about 8% for Serbian.",Towards the Use of Word Stems and Suffixes for Statistical Machine Translation,"In this paper we present methods for improving the quality of translation from an inflected language into English by making use of part-of-speech tags and word stems and suffixes in the source language. Results for translations from Spanish and Catalan into English are presented on the LC-STAR trilingual corpus which consists of spontaneously spoken dialogues in the domain of travelling and appointment scheduling. Results for translation from Serbian into English are presented on the Assimil language course, the bilingual corpus from unrestricted domain. We achieve up to 5% relative reduction of error rates for Spanish and Catalan and about 8% for Serbian.",Towards the Use of Word Stems and Suffixes for Statistical Machine Translation,"In this paper we present methods for improving the quality of translation from an inflected language into English by making use of part-of-speech tags and word stems and suffixes in the source language. Results for translations from Spanish and Catalan into English are presented on the LC-STAR trilingual corpus which consists of spontaneously spoken dialogues in the domain of travelling and appointment scheduling. Results for translation from Serbian into English are presented on the Assimil language course, the bilingual corpus from unrestricted domain. We achieve up to 5% relative reduction of error rates for Spanish and Catalan and about 8% for Serbian.",This work was partly supported by the LC-STAR project by the European Community (IST project ref. no. 2001-32216).,"Towards the Use of Word Stems and Suffixes for Statistical Machine Translation. In this paper we present methods for improving the quality of translation from an inflected language into English by making use of part-of-speech tags and word stems and suffixes in the source language. Results for translations from Spanish and Catalan into English are presented on the LC-STAR trilingual corpus which consists of spontaneously spoken dialogues in the domain of travelling and appointment scheduling. Results for translation from Serbian into English are presented on the Assimil language course, the bilingual corpus from unrestricted domain. We achieve up to 5% relative reduction of error rates for Spanish and Catalan and about 8% for Serbian.",2004
zhang-etal-2021-beyond,https://aclanthology.org/2021.acl-long.200,0,,,,,,,"Beyond Sentence-Level End-to-End Speech Translation: Context Helps. Document-level contextual information has shown benefits to text-based machine translation, but whether and how context helps end-to-end (E2E) speech translation (ST) is still under-studied. We fill this gap through extensive experiments using a simple concatenation-based context-aware ST model, paired with adaptive feature selection on speech encodings for computational efficiency. We investigate several decoding approaches, and introduce in-model ensemble decoding which jointly performs document- and sentence-level translation using the same model. Our results on the MuST-C benchmark with Transformer demonstrate the effectiveness of context to E2E ST. Compared to sentence-level ST, context-aware ST obtains better translation quality (+0.18-2.61 BLEU), improves pronoun and homophone translation, shows better robustness to (artificial) audio segmentation errors, and reduces latency and flicker to deliver higher quality for simultaneous translation.",Beyond Sentence-Level End-to-End Speech Translation: Context Helps,"Document-level contextual information has shown benefits to text-based machine translation, but whether and how context helps end-to-end (E2E) speech translation (ST) is still under-studied. We fill this gap through extensive experiments using a simple concatenation-based context-aware ST model, paired with adaptive feature selection on speech encodings for computational efficiency. We investigate several decoding approaches, and introduce in-model ensemble decoding which jointly performs document- and sentence-level translation using the same model. Our results on the MuST-C benchmark with Transformer demonstrate the effectiveness of context to E2E ST. Compared to sentence-level ST, context-aware ST obtains better translation quality (+0.18-2.61 BLEU), improves pronoun and homophone translation, shows better robustness to (artificial) audio segmentation errors, and reduces latency and flicker to deliver higher quality for simultaneous translation.",Beyond Sentence-Level End-to-End Speech Translation: Context Helps,"Document-level contextual information has shown benefits to text-based machine translation, but whether and how context helps end-to-end (E2E) speech translation (ST) is still under-studied. We fill this gap through extensive experiments using a simple concatenation-based context-aware ST model, paired with adaptive feature selection on speech encodings for computational efficiency. We investigate several decoding approaches, and introduce in-model ensemble decoding which jointly performs document- and sentence-level translation using the same model. Our results on the MuST-C benchmark with Transformer demonstrate the effectiveness of context to E2E ST. Compared to sentence-level ST, context-aware ST obtains better translation quality (+0.18-2.61 BLEU), improves pronoun and homophone translation, shows better robustness to (artificial) audio segmentation errors, and reduces latency and flicker to deliver higher quality for simultaneous translation.",We thank the reviewers for their insightful comments. This project has received funding from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreements 825460 (ELITR). Rico Sennrich acknowledges support of the Swiss National Science Foundation (MUTAMUR; no. 176727).,"Beyond Sentence-Level End-to-End Speech Translation: Context Helps. 
Document-level contextual information has shown benefits to text-based machine translation, but whether and how context helps end-to-end (E2E) speech translation (ST) is still under-studied. We fill this gap through extensive experiments using a simple concatenation-based context-aware ST model, paired with adaptive feature selection on speech encodings for computational efficiency. We investigate several decoding approaches, and introduce in-model ensemble decoding which jointly performs document- and sentence-level translation using the same model. Our results on the MuST-C benchmark with Transformer demonstrate the effectiveness of context to E2E ST. Compared to sentence-level ST, context-aware ST obtains better translation quality (+0.18-2.61 BLEU), improves pronoun and homophone translation, shows better robustness to (artificial) audio segmentation errors, and reduces latency and flicker to deliver higher quality for simultaneous translation.",2021
yuret-2007-ku,https://aclanthology.org/S07-1044,0,,,,,,,"KU: Word Sense Disambiguation by Substitution. Data sparsity is one of the main factors that make word sense disambiguation (WSD) difficult. To overcome this problem we need to find effective ways to use resources other than sense labeled data. In this paper I describe a WSD system that uses a statistical language model based on a large unannotated corpus. The model is used to evaluate the likelihood of various substitutes for a word in a given context. These likelihoods are then used to determine the best sense for the word in novel contexts. The resulting system participated in three tasks in the Se-mEval 2007 workshop. The WSD of prepositions task proved to be challenging for the system, possibly illustrating some of its limitations: e.g. not all words have good substitutes. The system achieved promising results for the English lexical sample and English lexical substitution tasks.",{KU}: Word Sense Disambiguation by Substitution,"Data sparsity is one of the main factors that make word sense disambiguation (WSD) difficult. To overcome this problem we need to find effective ways to use resources other than sense labeled data. In this paper I describe a WSD system that uses a statistical language model based on a large unannotated corpus. The model is used to evaluate the likelihood of various substitutes for a word in a given context. These likelihoods are then used to determine the best sense for the word in novel contexts. The resulting system participated in three tasks in the Se-mEval 2007 workshop. The WSD of prepositions task proved to be challenging for the system, possibly illustrating some of its limitations: e.g. not all words have good substitutes. The system achieved promising results for the English lexical sample and English lexical substitution tasks.",KU: Word Sense Disambiguation by Substitution,"Data sparsity is one of the main factors that make word sense disambiguation (WSD) difficult. To overcome this problem we need to find effective ways to use resources other than sense labeled data. In this paper I describe a WSD system that uses a statistical language model based on a large unannotated corpus. The model is used to evaluate the likelihood of various substitutes for a word in a given context. These likelihoods are then used to determine the best sense for the word in novel contexts. The resulting system participated in three tasks in the Se-mEval 2007 workshop. The WSD of prepositions task proved to be challenging for the system, possibly illustrating some of its limitations: e.g. not all words have good substitutes. The system achieved promising results for the English lexical sample and English lexical substitution tasks.",,"KU: Word Sense Disambiguation by Substitution. Data sparsity is one of the main factors that make word sense disambiguation (WSD) difficult. To overcome this problem we need to find effective ways to use resources other than sense labeled data. In this paper I describe a WSD system that uses a statistical language model based on a large unannotated corpus. The model is used to evaluate the likelihood of various substitutes for a word in a given context. These likelihoods are then used to determine the best sense for the word in novel contexts. The resulting system participated in three tasks in the Se-mEval 2007 workshop. The WSD of prepositions task proved to be challenging for the system, possibly illustrating some of its limitations: e.g. not all words have good substitutes. 
The system achieved promising results for the English lexical sample and English lexical substitution tasks.",2007
srinet-etal-2020-craftassist,https://aclanthology.org/2020.acl-main.427,0,,,,,,,"CraftAssist Instruction Parsing: Semantic Parsing for a Voxel-World Assistant. We propose a semantic parsing dataset focused on instruction-driven communication with an agent in the game Minecraft 1. The dataset consists of 7K human utterances and their corresponding parses. Given proper world state, the parses can be interpreted and executed in game. We report the performance of baseline models, and analyze their successes and failures. * Equal contribution † Work done while at Facebook AI Research 1 Minecraft features: c Mojang Synergies AB included courtesy of Mojang AB",{C}raft{A}ssist Instruction Parsing: Semantic Parsing for a Voxel-World Assistant,"We propose a semantic parsing dataset focused on instruction-driven communication with an agent in the game Minecraft 1. The dataset consists of 7K human utterances and their corresponding parses. Given proper world state, the parses can be interpreted and executed in game. We report the performance of baseline models, and analyze their successes and failures. * Equal contribution † Work done while at Facebook AI Research 1 Minecraft features: c Mojang Synergies AB included courtesy of Mojang AB",CraftAssist Instruction Parsing: Semantic Parsing for a Voxel-World Assistant,"We propose a semantic parsing dataset focused on instruction-driven communication with an agent in the game Minecraft 1. The dataset consists of 7K human utterances and their corresponding parses. Given proper world state, the parses can be interpreted and executed in game. We report the performance of baseline models, and analyze their successes and failures. * Equal contribution † Work done while at Facebook AI Research 1 Minecraft features: c Mojang Synergies AB included courtesy of Mojang AB",,"CraftAssist Instruction Parsing: Semantic Parsing for a Voxel-World Assistant. We propose a semantic parsing dataset focused on instruction-driven communication with an agent in the game Minecraft 1. The dataset consists of 7K human utterances and their corresponding parses. Given proper world state, the parses can be interpreted and executed in game. We report the performance of baseline models, and analyze their successes and failures. * Equal contribution † Work done while at Facebook AI Research 1 Minecraft features: c Mojang Synergies AB included courtesy of Mojang AB",2020
xu-etal-2016-unimelb,https://aclanthology.org/S16-1027,0,,,,,,,"UNIMELB at SemEval-2016 Tasks 4A and 4B: An Ensemble of Neural Networks and a Word2Vec Based Model for Sentiment Classification. This paper describes our sentiment classification system for microblog-sized documents, and documents where a topic is present. The system consists of a soft-voting ensemble of a word2vec language model adapted to classification, a convolutional neural network (CNN), and a long short-term memory network (LSTM). Our main contribution consists of a way to introduce topic information into this model, by concatenating a topic embedding, consisting of the averaged word embedding for that topic, to each word embedding vector in our neural networks. When we apply our models to SemEval 2016 Task 4 subtasks A and B, we demonstrate that the ensemble performed better than any single classifier, and our method of including topic information achieves a substantial performance gain. According to results on the official test sets, our model ranked 3rd for PN in the message-only subtask A (among 34 teams) and 1st for accuracy on the topic-dependent subtask B (among 19 teams). There were some issues surrounding the evaluation metrics. We only got 7th for PN and 2nd for PN officially, but when we retrained our model using PN as the subtask intended, we place first across all metrics.",{UNIMELB} at {S}em{E}val-2016 Tasks 4{A} and 4{B}: An Ensemble of Neural Networks and a {W}ord2{V}ec Based Model for Sentiment Classification,"This paper describes our sentiment classification system for microblog-sized documents, and documents where a topic is present. The system consists of a soft-voting ensemble of a word2vec language model adapted to classification, a convolutional neural network (CNN), and a long short-term memory network (LSTM). Our main contribution consists of a way to introduce topic information into this model, by concatenating a topic embedding, consisting of the averaged word embedding for that topic, to each word embedding vector in our neural networks. When we apply our models to SemEval 2016 Task 4 subtasks A and B, we demonstrate that the ensemble performed better than any single classifier, and our method of including topic information achieves a substantial performance gain. According to results on the official test sets, our model ranked 3rd for PN in the message-only subtask A (among 34 teams) and 1st for accuracy on the topic-dependent subtask B (among 19 teams). There were some issues surrounding the evaluation metrics. We only got 7th for PN and 2nd for PN officially, but when we retrained our model using PN as the subtask intended, we place first across all metrics.",UNIMELB at SemEval-2016 Tasks 4A and 4B: An Ensemble of Neural Networks and a Word2Vec Based Model for Sentiment Classification,"This paper describes our sentiment classification system for microblog-sized documents, and documents where a topic is present. The system consists of a soft-voting ensemble of a word2vec language model adapted to classification, a convolutional neural network (CNN), and a long short-term memory network (LSTM). Our main contribution consists of a way to introduce topic information into this model, by concatenating a topic embedding, consisting of the averaged word embedding for that topic, to each word embedding vector in our neural networks. 
When we apply our models to SemEval 2016 Task 4 subtasks A and B, we demonstrate that the ensemble performed better than any single classifier, and our method of including topic information achieves a substantial performance gain. According to results on the official test sets, our model ranked 3rd for PN in the message-only subtask A (among 34 teams) and 1st for accuracy on the topic-dependent subtask B (among 19 teams). There were some issues surrounding the evaluation metrics. We only got 7th for PN and 2nd for PN officially, but when we retrained our model using PN as the subtask intended, we place first across all metrics.",,"UNIMELB at SemEval-2016 Tasks 4A and 4B: An Ensemble of Neural Networks and a Word2Vec Based Model for Sentiment Classification. This paper describes our sentiment classification system for microblog-sized documents, and documents where a topic is present. The system consists of a soft-voting ensemble of a word2vec language model adapted to classification, a convolutional neural network (CNN), and a long short-term memory network (LSTM). Our main contribution consists of a way to introduce topic information into this model, by concatenating a topic embedding, consisting of the averaged word embedding for that topic, to each word embedding vector in our neural networks. When we apply our models to SemEval 2016 Task 4 subtasks A and B, we demonstrate that the ensemble performed better than any single classifier, and our method of including topic information achieves a substantial performance gain. According to results on the official test sets, our model ranked 3rd for PN in the message-only subtask A (among 34 teams) and 1st for accuracy on the topic-dependent subtask B (among 19 teams). There were some issues surrounding the evaluation metrics. We only got 7th for PN and 2nd for PN officially, but when we retrained our model using PN as the subtask intended, we place first across all metrics.",2016
koyama-etal-1998-japanese,https://aclanthology.org/Y98-1029,0,,,,,,,"Japanese Kana-to-Kanji Conversion Using Large Scale Collocation Data. Japanese word processors and computers used in Japan employ an input method through the keyboard combined with Kana (phonetic) character to Kanji (ideographic, Chinese) character conversion technology. The key feature of Kana-to-Kanji conversion technology is how to raise the accuracy of the conversion through homophone processing, since we have so many homophones. In this paper, we report the results of our Kana-to-Kanji conversion experiments, which embody the homophone processing using large-scale collocation data. It is shown that approximately 135,000 collocation data yield a 9.1% rise in conversion accuracy compared with the prototype system which has no collocation data.",{J}apanese Kana-to-Kanji Conversion Using Large Scale Collocation Data,"Japanese word processors and computers used in Japan employ an input method through the keyboard combined with Kana (phonetic) character to Kanji (ideographic, Chinese) character conversion technology. The key feature of Kana-to-Kanji conversion technology is how to raise the accuracy of the conversion through homophone processing, since we have so many homophones. In this paper, we report the results of our Kana-to-Kanji conversion experiments, which embody the homophone processing using large-scale collocation data. It is shown that approximately 135,000 collocation data yield a 9.1% rise in conversion accuracy compared with the prototype system which has no collocation data.",Japanese Kana-to-Kanji Conversion Using Large Scale Collocation Data,"Japanese word processors and computers used in Japan employ an input method through the keyboard combined with Kana (phonetic) character to Kanji (ideographic, Chinese) character conversion technology. The key feature of Kana-to-Kanji conversion technology is how to raise the accuracy of the conversion through homophone processing, since we have so many homophones. In this paper, we report the results of our Kana-to-Kanji conversion experiments, which embody the homophone processing using large-scale collocation data. It is shown that approximately 135,000 collocation data yield a 9.1% rise in conversion accuracy compared with the prototype system which has no collocation data.",,"Japanese Kana-to-Kanji Conversion Using Large Scale Collocation Data. Japanese word processors and computers used in Japan employ an input method through the keyboard combined with Kana (phonetic) character to Kanji (ideographic, Chinese) character conversion technology. The key feature of Kana-to-Kanji conversion technology is how to raise the accuracy of the conversion through homophone processing, since we have so many homophones. In this paper, we report the results of our Kana-to-Kanji conversion experiments, which embody the homophone processing using large-scale collocation data. It is shown that approximately 135,000 collocation data yield a 9.1% rise in conversion accuracy compared with the prototype system which has no collocation data.",1998
zeyrek-basibuyuk-2019-tcl,https://aclanthology.org/W19-3308,0,,,,,,,"TCL - a Lexicon of Turkish Discourse Connectives. It is known that discourse connectives are the most salient indicators of discourse relations. State-of-the-art parsers being developed to predict explicit discourse connectives exploit annotated discourse corpora but a lexicon of discourse connectives is also needed to enable further research in discourse structure and support the development of language technologies that use these structures for text understanding. This paper presents a lexicon of Turkish discourse connectives built by automatic means. The lexicon has the format of the German connective lexicon, DiMLex, where for each discourse connective, information about the connective's orthographic variants, syntactic category and senses are provided along with sample relations. In this paper, we describe the data sources we used and the development steps of the lexicon.",{TCL} - a Lexicon of {T}urkish Discourse Connectives,"It is known that discourse connectives are the most salient indicators of discourse relations. State-of-the-art parsers being developed to predict explicit discourse connectives exploit annotated discourse corpora but a lexicon of discourse connectives is also needed to enable further research in discourse structure and support the development of language technologies that use these structures for text understanding. This paper presents a lexicon of Turkish discourse connectives built by automatic means. The lexicon has the format of the German connective lexicon, DiMLex, where for each discourse connective, information about the connective's orthographic variants, syntactic category and senses are provided along with sample relations. In this paper, we describe the data sources we used and the development steps of the lexicon.",TCL - a Lexicon of Turkish Discourse Connectives,"It is known that discourse connectives are the most salient indicators of discourse relations. State-of-the-art parsers being developed to predict explicit discourse connectives exploit annotated discourse corpora but a lexicon of discourse connectives is also needed to enable further research in discourse structure and support the development of language technologies that use these structures for text understanding. This paper presents a lexicon of Turkish discourse connectives built by automatic means. The lexicon has the format of the German connective lexicon, DiMLex, where for each discourse connective, information about the connective's orthographic variants, syntactic category and senses are provided along with sample relations. In this paper, we describe the data sources we used and the development steps of the lexicon.",,"TCL - a Lexicon of Turkish Discourse Connectives. It is known that discourse connectives are the most salient indicators of discourse relations. State-of-the-art parsers being developed to predict explicit discourse connectives exploit annotated discourse corpora but a lexicon of discourse connectives is also needed to enable further research in discourse structure and support the development of language technologies that use these structures for text understanding. This paper presents a lexicon of Turkish discourse connectives built by automatic means. The lexicon has the format of the German connective lexicon, DiMLex, where for each discourse connective, information about the connective's orthographic variants, syntactic category and senses are provided along with sample relations. 
In this paper, we describe the data sources we used and the development steps of the lexicon.",2019
mota-etal-2004-multiword,https://aclanthology.org/W04-2115,0,,,,,,,"Multiword Lexical Acquisition and Dictionary Formalization. In this paper, we present the current state of development of a large-scale lexicon built at LabEL 1 for Portuguese. We will concentrate on multiword expressions (MWE), particularly on multiword nouns, (i) illustrating their most relevant morphological features, and (ii) pointing out the methods and techniques adopted to generate the inflected forms from lemmas. Moreover, we describe a corpus-based aproach for the acquisition of new multiword nouns, which led to a significant enlargement of the existing lexicon. Evaluation results concerning lexical coverage in the corpus are also discussed.",Multiword Lexical Acquisition and Dictionary Formalization,"In this paper, we present the current state of development of a large-scale lexicon built at LabEL 1 for Portuguese. We will concentrate on multiword expressions (MWE), particularly on multiword nouns, (i) illustrating their most relevant morphological features, and (ii) pointing out the methods and techniques adopted to generate the inflected forms from lemmas. Moreover, we describe a corpus-based aproach for the acquisition of new multiword nouns, which led to a significant enlargement of the existing lexicon. Evaluation results concerning lexical coverage in the corpus are also discussed.",Multiword Lexical Acquisition and Dictionary Formalization,"In this paper, we present the current state of development of a large-scale lexicon built at LabEL 1 for Portuguese. We will concentrate on multiword expressions (MWE), particularly on multiword nouns, (i) illustrating their most relevant morphological features, and (ii) pointing out the methods and techniques adopted to generate the inflected forms from lemmas. Moreover, we describe a corpus-based aproach for the acquisition of new multiword nouns, which led to a significant enlargement of the existing lexicon. Evaluation results concerning lexical coverage in the corpus are also discussed.",,"Multiword Lexical Acquisition and Dictionary Formalization. In this paper, we present the current state of development of a large-scale lexicon built at LabEL 1 for Portuguese. We will concentrate on multiword expressions (MWE), particularly on multiword nouns, (i) illustrating their most relevant morphological features, and (ii) pointing out the methods and techniques adopted to generate the inflected forms from lemmas. Moreover, we describe a corpus-based aproach for the acquisition of new multiword nouns, which led to a significant enlargement of the existing lexicon. Evaluation results concerning lexical coverage in the corpus are also discussed.",2004
su-etal-2020-towards,https://aclanthology.org/2020.acl-main.63,0,,,,,,,"Towards Unsupervised Language Understanding and Generation by Joint Dual Learning. In modular dialogue systems, natural language understanding (NLU) and natural language generation (NLG) are two critical components, where NLU extracts the semantics from the given texts and NLG is to construct corresponding natural language sentences based on the input semantic representations. However, the dual property between understanding and generation has been rarely explored. The prior work (Su et al., 2019) is the first attempt that utilized the duality between NLU and NLG to improve the performance via a dual supervised learning framework. However, the prior work still learned both components in a supervised manner; instead, this paper introduces a general learning framework to effectively exploit such duality, providing flexibility of incorporating both supervised and unsupervised learning algorithms to train language understanding and generation models in a joint fashion. The benchmark experiments demonstrate that the proposed approach is capable of boosting the performance of both NLU and NLG. 1",Towards Unsupervised Language Understanding and Generation by Joint Dual Learning,"In modular dialogue systems, natural language understanding (NLU) and natural language generation (NLG) are two critical components, where NLU extracts the semantics from the given texts and NLG is to construct corresponding natural language sentences based on the input semantic representations. However, the dual property between understanding and generation has been rarely explored. The prior work (Su et al., 2019) is the first attempt that utilized the duality between NLU and NLG to improve the performance via a dual supervised learning framework. However, the prior work still learned both components in a supervised manner; instead, this paper introduces a general learning framework to effectively exploit such duality, providing flexibility of incorporating both supervised and unsupervised learning algorithms to train language understanding and generation models in a joint fashion. The benchmark experiments demonstrate that the proposed approach is capable of boosting the performance of both NLU and NLG. 1",Towards Unsupervised Language Understanding and Generation by Joint Dual Learning,"In modular dialogue systems, natural language understanding (NLU) and natural language generation (NLG) are two critical components, where NLU extracts the semantics from the given texts and NLG is to construct corresponding natural language sentences based on the input semantic representations. However, the dual property between understanding and generation has been rarely explored. The prior work (Su et al., 2019) is the first attempt that utilized the duality between NLU and NLG to improve the performance via a dual supervised learning framework. However, the prior work still learned both components in a supervised manner; instead, this paper introduces a general learning framework to effectively exploit such duality, providing flexibility of incorporating both supervised and unsupervised learning algorithms to train language understanding and generation models in a joint fashion. The benchmark experiments demonstrate that the proposed approach is capable of boosting the performance of both NLU and NLG. 1","We thank reviewers for their insightful comments. 
This work was financially supported from the Young Scholar Fellowship Program by Ministry of Science and Technology (MOST) in Taiwan, under Grant 109-2636-E-002-026.","Towards Unsupervised Language Understanding and Generation by Joint Dual Learning. In modular dialogue systems, natural language understanding (NLU) and natural language generation (NLG) are two critical components, where NLU extracts the semantics from the given texts and NLG is to construct corresponding natural language sentences based on the input semantic representations. However, the dual property between understanding and generation has been rarely explored. The prior work (Su et al., 2019) is the first attempt that utilized the duality between NLU and NLG to improve the performance via a dual supervised learning framework. However, the prior work still learned both components in a supervised manner; instead, this paper introduces a general learning framework to effectively exploit such duality, providing flexibility of incorporating both supervised and unsupervised learning algorithms to train language understanding and generation models in a joint fashion. The benchmark experiments demonstrate that the proposed approach is capable of boosting the performance of both NLU and NLG. 1",2020
losch-etal-2018-european,https://aclanthology.org/L18-1213,1,,,,industry_innovation_infrastructure,,,"European Language Resource Coordination: Collecting Language Resources for Public Sector Multilingual Information Management. In order to help improve the quality, coverage and performance of automated translation solutions for current and future Connecting Europe Facility (CEF) digital services, the European Language Resource Coordination (ELRC) consortium was set up through a service contract operating under the European Commission's CEF SMART 2014/1074 programme to initiate a number of actions to support the collection of Language Resources (LRs) within the public sector in EU member and CEF-affiliated countries. The first action focused on raising awareness in the public sector through the organisation of dedicated events: 2 international conferences and 29 country-specific workshops to engage national as well as regional/municipal governmental organisations, language competence centres, relevant European institutions and other potential holders of LRs from public service administrations and NGOs. In order to gather resources shared by the contributors, the ELRC-SHARE Repository was set up together with services supporting the sharing of LRs, such as the ELRC Helpdesk and Intellectual Property Rights (IPR) clearance support. All collected LRs pass a validation process developed by ELRC. The collected LRs cover all official EU languages, plus Icelandic and Norwegian.",{E}uropean Language Resource Coordination: Collecting Language Resources for Public Sector Multilingual Information Management,"In order to help improve the quality, coverage and performance of automated translation solutions for current and future Connecting Europe Facility (CEF) digital services, the European Language Resource Coordination (ELRC) consortium was set up through a service contract operating under the European Commission's CEF SMART 2014/1074 programme to initiate a number of actions to support the collection of Language Resources (LRs) within the public sector in EU member and CEF-affiliated countries. The first action focused on raising awareness in the public sector through the organisation of dedicated events: 2 international conferences and 29 country-specific workshops to engage national as well as regional/municipal governmental organisations, language competence centres, relevant European institutions and other potential holders of LRs from public service administrations and NGOs. In order to gather resources shared by the contributors, the ELRC-SHARE Repository was set up together with services supporting the sharing of LRs, such as the ELRC Helpdesk and Intellectual Property Rights (IPR) clearance support. All collected LRs pass a validation process developed by ELRC. The collected LRs cover all official EU languages, plus Icelandic and Norwegian.",European Language Resource Coordination: Collecting Language Resources for Public Sector Multilingual Information Management,"In order to help improve the quality, coverage and performance of automated translation solutions for current and future Connecting Europe Facility (CEF) digital services, the European Language Resource Coordination (ELRC) consortium was set up through a service contract operating under the European Commission's CEF SMART 2014/1074 programme to initiate a number of actions to support the collection of Language Resources (LRs) within the public sector in EU member and CEF-affiliated countries. 
The first action focused on raising awareness in the public sector through the organisation of dedicated events: 2 international conferences and 29 country-specific workshops to engage national as well as regional/municipal governmental organisations, language competence centres, relevant European institutions and other potential holders of LRs from public service administrations and NGOs. In order to gather resources shared by the contributors, the ELRC-SHARE Repository was set up together with services supporting the sharing of LRs, such as the ELRC Helpdesk and Intellectual Property Rights (IPR) clearance support. All collected LRs pass a validation process developed by ELRC. The collected LRs cover all official EU languages, plus Icelandic and Norwegian.",,"European Language Resource Coordination: Collecting Language Resources for Public Sector Multilingual Information Management. In order to help improve the quality, coverage and performance of automated translation solutions for current and future Connecting Europe Facility (CEF) digital services, the European Language Resource Coordination (ELRC) consortium was set up through a service contract operating under the European Commission's CEF SMART 2014/1074 programme to initiate a number of actions to support the collection of Language Resources (LRs) within the public sector in EU member and CEF-affiliated countries. The first action focused on raising awareness in the public sector through the organisation of dedicated events: 2 international conferences and 29 country-specific workshops to engage national as well as regional/municipal governmental organisations, language competence centres, relevant European institutions and other potential holders of LRs from public service administrations and NGOs. In order to gather resources shared by the contributors, the ELRC-SHARE Repository was set up together with services supporting the sharing of LRs, such as the ELRC Helpdesk and Intellectual Property Rights (IPR) clearance support. All collected LRs pass a validation process developed by ELRC. The collected LRs cover all official EU languages, plus Icelandic and Norwegian.",2018
lurcock-etal-2004-framework,https://aclanthology.org/U04-1014,0,,,,,,,"A framework for utterance disambiguation in dialogue. We discuss the data sources available for utterance disambiguation in a bilingual dialogue system, distinguishing global, contextual, and user-specific domains, and syntactic and semantic levels. We propose a framework for combining the available information, and techniques for increasing a stochastic grammar's sensitivity to local context and a speaker's idiolect.",A framework for utterance disambiguation in dialogue,"We discuss the data sources available for utterance disambiguation in a bilingual dialogue system, distinguishing global, contextual, and user-specific domains, and syntactic and semantic levels. We propose a framework for combining the available information, and techniques for increasing a stochastic grammar's sensitivity to local context and a speaker's idiolect.",A framework for utterance disambiguation in dialogue,"We discuss the data sources available for utterance disambiguation in a bilingual dialogue system, distinguishing global, contextual, and user-specific domains, and syntactic and semantic levels. We propose a framework for combining the available information, and techniques for increasing a stochastic grammar's sensitivity to local context and a speaker's idiolect.",,"A framework for utterance disambiguation in dialogue. We discuss the data sources available for utterance disambiguation in a bilingual dialogue system, distinguishing global, contextual, and user-specific domains, and syntactic and semantic levels. We propose a framework for combining the available information, and techniques for increasing a stochastic grammar's sensitivity to local context and a speaker's idiolect.",2004
gerlach-etal-2013-combining,https://aclanthology.org/2013.mtsummit-wptp.6,0,,,,,,,"Combining pre-editing and post-editing to improve SMT of user-generated content. The poor quality of user-generated content (UGC) found in forums hinders both readability and machine-translatability. To improve these two aspects, we have developed human-and machine-oriented pre-editing rules, which correct or reformulate this content. In this paper we present the results of a study which investigates whether pre-editing rules that improve the quality of statistical machine translation (SMT) output also have a positive impact on post-editing productivity. For this study, pre-editing rules were applied to a set of French sentences extracted from a technical forum. After SMT, the post-editing temporal effort and final quality are compared for translations of the raw source and its pre-edited version. Results obtained suggest that pre-editing speeds up post-editing and that the combination of the two processes is worthy of further investigation. J'ai redémarrer l'ordi (apparition de la croix rouge) mais pas besoin de restaurer le système:Toute ces mises à jour on été faite le 2013-03-13",Combining pre-editing and post-editing to improve {SMT} of user-generated content,"The poor quality of user-generated content (UGC) found in forums hinders both readability and machine-translatability. To improve these two aspects, we have developed human-and machine-oriented pre-editing rules, which correct or reformulate this content. In this paper we present the results of a study which investigates whether pre-editing rules that improve the quality of statistical machine translation (SMT) output also have a positive impact on post-editing productivity. For this study, pre-editing rules were applied to a set of French sentences extracted from a technical forum. After SMT, the post-editing temporal effort and final quality are compared for translations of the raw source and its pre-edited version. Results obtained suggest that pre-editing speeds up post-editing and that the combination of the two processes is worthy of further investigation. J'ai redémarrer l'ordi (apparition de la croix rouge) mais pas besoin de restaurer le système:Toute ces mises à jour on été faite le 2013-03-13",Combining pre-editing and post-editing to improve SMT of user-generated content,"The poor quality of user-generated content (UGC) found in forums hinders both readability and machine-translatability. To improve these two aspects, we have developed human-and machine-oriented pre-editing rules, which correct or reformulate this content. In this paper we present the results of a study which investigates whether pre-editing rules that improve the quality of statistical machine translation (SMT) output also have a positive impact on post-editing productivity. For this study, pre-editing rules were applied to a set of French sentences extracted from a technical forum. After SMT, the post-editing temporal effort and final quality are compared for translations of the raw source and its pre-edited version. Results obtained suggest that pre-editing speeds up post-editing and that the combination of the two processes is worthy of further investigation. 
J'ai redémarrer l'ordi (apparition de la croix rouge) mais pas besoin de restaurer le système:Toute ces mises à jour on été faite le 2013-03-13",The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007 under grant agreement n° 288769.,"Combining pre-editing and post-editing to improve SMT of user-generated content. The poor quality of user-generated content (UGC) found in forums hinders both readability and machine-translatability. To improve these two aspects, we have developed human-and machine-oriented pre-editing rules, which correct or reformulate this content. In this paper we present the results of a study which investigates whether pre-editing rules that improve the quality of statistical machine translation (SMT) output also have a positive impact on post-editing productivity. For this study, pre-editing rules were applied to a set of French sentences extracted from a technical forum. After SMT, the post-editing temporal effort and final quality are compared for translations of the raw source and its pre-edited version. Results obtained suggest that pre-editing speeds up post-editing and that the combination of the two processes is worthy of further investigation. J'ai redémarrer l'ordi (apparition de la croix rouge) mais pas besoin de restaurer le système:Toute ces mises à jour on été faite le 2013-03-13",2013
dinu-lapata-2010-measuring,https://aclanthology.org/D10-1113,0,,,,,,,Measuring Distributional Similarity in Context. The computation of meaning similarity as operationalized by vector-based models has found widespread use in many tasks ranging from the acquisition of synonyms and paraphrases to word sense disambiguation and textual entailment. Vector-based models are typically directed at representing words in isolation and thus best suited for measuring similarity out of context. In this paper we propose a probabilistic framework for measuring similarity in context. Central to our approach is the intuition that word meaning is represented as a probability distribution over a set of latent senses and is modulated by context. Experimental results on lexical substitution and word similarity show that our algorithm outperforms previously proposed models.,Measuring Distributional Similarity in Context,The computation of meaning similarity as operationalized by vector-based models has found widespread use in many tasks ranging from the acquisition of synonyms and paraphrases to word sense disambiguation and textual entailment. Vector-based models are typically directed at representing words in isolation and thus best suited for measuring similarity out of context. In this paper we propose a probabilistic framework for measuring similarity in context. Central to our approach is the intuition that word meaning is represented as a probability distribution over a set of latent senses and is modulated by context. Experimental results on lexical substitution and word similarity show that our algorithm outperforms previously proposed models.,Measuring Distributional Similarity in Context,The computation of meaning similarity as operationalized by vector-based models has found widespread use in many tasks ranging from the acquisition of synonyms and paraphrases to word sense disambiguation and textual entailment. Vector-based models are typically directed at representing words in isolation and thus best suited for measuring similarity out of context. In this paper we propose a probabilistic framework for measuring similarity in context. Central to our approach is the intuition that word meaning is represented as a probability distribution over a set of latent senses and is modulated by context. Experimental results on lexical substitution and word similarity show that our algorithm outperforms previously proposed models.,"Acknowledgments The authors acknowledge the support of the DFG (Dinu; International Research Training Group ""Language Technology and Cognitive Systems"") and EPSRC (Lapata; grant GR/T04540/01).",Measuring Distributional Similarity in Context. The computation of meaning similarity as operationalized by vector-based models has found widespread use in many tasks ranging from the acquisition of synonyms and paraphrases to word sense disambiguation and textual entailment. Vector-based models are typically directed at representing words in isolation and thus best suited for measuring similarity out of context. In this paper we propose a probabilistic framework for measuring similarity in context. Central to our approach is the intuition that word meaning is represented as a probability distribution over a set of latent senses and is modulated by context. Experimental results on lexical substitution and word similarity show that our algorithm outperforms previously proposed models.,2010
butnaru-2019-bam,https://aclanthology.org/W19-1413,0,,,,,,,"BAM: A combination of deep and shallow models for German Dialect Identification.. In this paper, we present a machine learning approach for the German Dialect Identification (GDI) Closed Shared Task of the DSL 2019 Challenge. The proposed approach combines deep and shallow models, by applying a voting scheme on the outputs resulting from a Character-level Convolutional Neural Network (Char-CNN), a Long Short-Term Memory (LSTM) network, and a model based on String Kernels. The first model used is the Char-CNN model that merges multiple convolutions computed with kernels of different sizes. The second model is the LSTM network which applies a global max pooling over the returned sequences over time. Both models pass the activation maps to two fully-connected layers. The final model is based on String Kernels, computed on character p-grams extracted from speech transcripts. The model combines two blended kernel functions, one is the presence bits kernel, and the other is the intersection kernel. The empirical results obtained in the shared task prove that the approach can achieve good results. The system proposed in this paper obtained the fourth place with a macro-F1 score of 62.55%.",{BAM}: A combination of deep and shallow models for {G}erman Dialect Identification.,"In this paper, we present a machine learning approach for the German Dialect Identification (GDI) Closed Shared Task of the DSL 2019 Challenge. The proposed approach combines deep and shallow models, by applying a voting scheme on the outputs resulting from a Character-level Convolutional Neural Network (Char-CNN), a Long Short-Term Memory (LSTM) network, and a model based on String Kernels. The first model used is the Char-CNN model that merges multiple convolutions computed with kernels of different sizes. The second model is the LSTM network which applies a global max pooling over the returned sequences over time. Both models pass the activation maps to two fully-connected layers. The final model is based on String Kernels, computed on character p-grams extracted from speech transcripts. The model combines two blended kernel functions, one is the presence bits kernel, and the other is the intersection kernel. The empirical results obtained in the shared task prove that the approach can achieve good results. The system proposed in this paper obtained the fourth place with a macro-F1 score of 62.55%.",BAM: A combination of deep and shallow models for German Dialect Identification.,"In this paper, we present a machine learning approach for the German Dialect Identification (GDI) Closed Shared Task of the DSL 2019 Challenge. The proposed approach combines deep and shallow models, by applying a voting scheme on the outputs resulting from a Character-level Convolutional Neural Network (Char-CNN), a Long Short-Term Memory (LSTM) network, and a model based on String Kernels. The first model used is the Char-CNN model that merges multiple convolutions computed with kernels of different sizes. The second model is the LSTM network which applies a global max pooling over the returned sequences over time. Both models pass the activation maps to two fully-connected layers. The final model is based on String Kernels, computed on character p-grams extracted from speech transcripts. The model combines two blended kernel functions, one is the presence bits kernel, and the other is the intersection kernel. 
The empirical results obtained in the shared task prove that the approach can achieve good results. The system proposed in this paper obtained the fourth place with a macro-F1 score of 62.55%.",,"BAM: A combination of deep and shallow models for German Dialect Identification.. In this paper, we present a machine learning approach for the German Dialect Identification (GDI) Closed Shared Task of the DSL 2019 Challenge. The proposed approach combines deep and shallow models, by applying a voting scheme on the outputs resulting from a Character-level Convolutional Neural Network (Char-CNN), a Long Short-Term Memory (LSTM) network, and a model based on String Kernels. The first model used is the Char-CNN model that merges multiple convolutions computed with kernels of different sizes. The second model is the LSTM network which applies a global max pooling over the returned sequences over time. Both models pass the activation maps to two fully-connected layers. The final model is based on String Kernels, computed on character p-grams extracted from speech transcripts. The model combines two blended kernel functions, one is the presence bits kernel, and the other is the intersection kernel. The empirical results obtained in the shared task prove that the approach can achieve good results. The system proposed in this paper obtained the fourth place with a macro-F1 score of 62.55%.",2019
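As a tiny sketch of the kind of voting scheme the abstract above mentions, the snippet below performs hard majority voting over the per-sentence predictions of three classifiers, breaking ties by a fixed priority order. The model names, dialect labels, and predictions are placeholders, not the system's actual outputs.

from collections import Counter

def majority_vote(predictions, priority):
    """Majority vote over one instance's labels from several models;
    ties go to the model listed first in `priority`."""
    counts = Counter(predictions.values())
    top = max(counts.values())
    tied = {label for label, c in counts.items() if c == top}
    if len(tied) == 1:
        return tied.pop()
    for model in priority:               # tie-break by model priority
        if predictions[model] in tied:
            return predictions[model]

# Placeholder per-sentence dialect predictions from three systems.
preds = {"char_cnn": "ZH", "lstm": "BE", "string_kernel": "ZH"}
print(majority_vote(preds, priority=["string_kernel", "char_cnn", "lstm"]))  # -> "ZH"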
bunescu-mooney-2005-shortest,https://aclanthology.org/H05-1091,0,,,,,,,"A Shortest Path Dependency Kernel for Relation Extraction. We present a novel approach to relation extraction, based on the observation that the information required to assert a relationship between two named entities in the same sentence is typically captured by the shortest path between the two entities in the dependency graph. Experiments on extracting top-level relations from the ACE (Automated Content Extraction) newspaper corpus show that the new shortest path dependency kernel outperforms a recent approach based on dependency tree kernels.",A Shortest Path Dependency Kernel for Relation Extraction,"We present a novel approach to relation extraction, based on the observation that the information required to assert a relationship between two named entities in the same sentence is typically captured by the shortest path between the two entities in the dependency graph. Experiments on extracting top-level relations from the ACE (Automated Content Extraction) newspaper corpus show that the new shortest path dependency kernel outperforms a recent approach based on dependency tree kernels.",A Shortest Path Dependency Kernel for Relation Extraction,"We present a novel approach to relation extraction, based on the observation that the information required to assert a relationship between two named entities in the same sentence is typically captured by the shortest path between the two entities in the dependency graph. Experiments on extracting top-level relations from the ACE (Automated Content Extraction) newspaper corpus show that the new shortest path dependency kernel outperforms a recent approach based on dependency tree kernels.",This work was supported by grants IIS-0117308 and IIS-0325116 from the NSF.,"A Shortest Path Dependency Kernel for Relation Extraction. We present a novel approach to relation extraction, based on the observation that the information required to assert a relationship between two named entities in the same sentence is typically captured by the shortest path between the two entities in the dependency graph. Experiments on extracting top-level relations from the ACE (Automated Content Extraction) newspaper corpus show that the new shortest path dependency kernel outperforms a recent approach based on dependency tree kernels.",2005
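The record above rests on the idea that the relation between two entities is captured by the shortest path between them in the dependency graph. The sketch below illustrates that idea only: the toy sentence, dependency edges, and feature sets are invented, and the kernel is a simplified product-of-common-features variant rather than the paper's exact formulation. It assumes the networkx package is available.

import networkx as nx

# Toy dependency graph for "protesters seized several stations" (edges are invented).
edges = [("seized", "protesters"), ("seized", "stations"), ("stations", "several")]
g = nx.Graph(edges)

def shortest_dep_path(graph, e1, e2):
    """Words on the shortest path between two entity mentions."""
    return nx.shortest_path(graph, source=e1, target=e2)

def path_kernel(path_x, path_y, features):
    """Simplified shortest-path kernel: 0 if the paths differ in length,
    otherwise the product over positions of the number of shared features."""
    if len(path_x) != len(path_y):
        return 0
    k = 1
    for wx, wy in zip(path_x, path_y):
        k *= len(features.get(wx, set()) & features.get(wy, set()))
    return k

# Toy per-word feature sets: the word itself plus a coarse part-of-speech tag.
features = {
    "protesters": {"protesters", "NOUN"},
    "seized": {"seized", "VERB"},
    "stations": {"stations", "NOUN"},
    "policemen": {"policemen", "NOUN"},
}

p1 = shortest_dep_path(g, "protesters", "stations")   # ['protesters', 'seized', 'stations']
print(path_kernel(p1, ["policemen", "seized", "stations"], features))  # -> 4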
koizumi-etal-2002-annotated,http://www.lrec-conf.org/proceedings/lrec2002/pdf/318.pdf,1,,,,social_equality,,,"An Annotated Japanese Sign Language Corpus. Sign language is characterized by its interactivity and multimodality, which cause difficulties in data collection and annotation. To address these difficulties, we have developed a video-based Japanese sign language (JSL) corpus and a corpus tool for annotation and linguistic analysis. As the first step of linguistic annotation, we transcribed manual signs expressing lexical information as well as non-manual signs (NMSs)-including head movements, facial actions, and posture-that are used to express grammatical information. Our purpose is to extract grammatical rules from this corpus for the sign-language translation system under development. From this viewpoint, we will discuss methods for collecting elicited data, annotation required for grammatical analysis, as well as the corpus tool required for annotation and grammatical analysis. As the result of annotating 2800 utterances, we confirmed that there are at least 50 kinds of NMSs in JSL, using head (seven kinds), jaw (six kinds), mouth (18 kinds), cheeks (one kind), eyebrows (four kinds), eyes (seven kinds), eye gaze (two kinds), body posture (five kinds). We use this corpus for designing and testing an algorithm and grammatical rules for the sign-language translation system under development.",An Annotated {J}apanese {S}ign {L}anguage Corpus,"Sign language is characterized by its interactivity and multimodality, which cause difficulties in data collection and annotation. To address these difficulties, we have developed a video-based Japanese sign language (JSL) corpus and a corpus tool for annotation and linguistic analysis. As the first step of linguistic annotation, we transcribed manual signs expressing lexical information as well as non-manual signs (NMSs)-including head movements, facial actions, and posture-that are used to express grammatical information. Our purpose is to extract grammatical rules from this corpus for the sign-language translation system under development. From this viewpoint, we will discuss methods for collecting elicited data, annotation required for grammatical analysis, as well as the corpus tool required for annotation and grammatical analysis. As the result of annotating 2800 utterances, we confirmed that there are at least 50 kinds of NMSs in JSL, using head (seven kinds), jaw (six kinds), mouth (18 kinds), cheeks (one kind), eyebrows (four kinds), eyes (seven kinds), eye gaze (two kinds), body posture (five kinds). We use this corpus for designing and testing an algorithm and grammatical rules for the sign-language translation system under development.",An Annotated Japanese Sign Language Corpus,"Sign language is characterized by its interactivity and multimodality, which cause difficulties in data collection and annotation. To address these difficulties, we have developed a video-based Japanese sign language (JSL) corpus and a corpus tool for annotation and linguistic analysis. As the first step of linguistic annotation, we transcribed manual signs expressing lexical information as well as non-manual signs (NMSs)-including head movements, facial actions, and posture-that are used to express grammatical information. Our purpose is to extract grammatical rules from this corpus for the sign-language translation system under development. 
From this viewpoint, we will discuss methods for collecting elicited data, annotation required for grammatical analysis, as well as the corpus tool required for annotation and grammatical analysis. As the result of annotating 2800 utterances, we confirmed that there are at least 50 kinds of NMSs in JSL, using head (seven kinds), jaw (six kinds), mouth (18 kinds), cheeks (one kind), eyebrows (four kinds), eyes (seven kinds), eye gaze (two kinds), body posture (five kinds). We use this corpus for designing and testing an algorithm and grammatical rules for the sign-language translation system under development.","The research reported here was carried out within the Real World Computing Project, supported by Ministry of Economy, Trade and Industry.","An Annotated Japanese Sign Language Corpus. Sign language is characterized by its interactivity and multimodality, which cause difficulties in data collection and annotation. To address these difficulties, we have developed a video-based Japanese sign language (JSL) corpus and a corpus tool for annotation and linguistic analysis. As the first step of linguistic annotation, we transcribed manual signs expressing lexical information as well as non-manual signs (NMSs)-including head movements, facial actions, and posture-that are used to express grammatical information. Our purpose is to extract grammatical rules from this corpus for the sign-language translation system under development. From this viewpoint, we will discuss methods for collecting elicited data, annotation required for grammatical analysis, as well as the corpus tool required for annotation and grammatical analysis. As the result of annotating 2800 utterances, we confirmed that there are at least 50 kinds of NMSs in JSL, using head (seven kinds), jaw (six kinds), mouth (18 kinds), cheeks (one kind), eyebrows (four kinds), eyes (seven kinds), eye gaze (two kinds), body posture (five kinds). We use this corpus for designing and testing an algorithm and grammatical rules for the sign-language translation system under development.",2002
couto-vale-etal-2016-automatic,https://aclanthology.org/L16-1574,0,,,,,,,"Automatic Recognition of Linguistic Replacements in Text Series Generated from Keystroke Logs. This paper introduces a toolkit used for the purpose of detecting replacements of different grammatical and semantic structures in ongoing text production logged as a chronological series of computer interaction events (so-called keystroke logs). The specific case we use involves human translations where replacements can be indicative of translator behaviour that leads to specific features of translations that distinguish them from non-translated texts. The toolkit uses a novel CCG chart parser customised so as to recognise grammatical words independently of space and punctuation boundaries. On the basis of the linguistic analysis, structures in different versions of the target text are compared and classified as potential equivalents of the same source text segment by 'equivalence judges'. In that way, replacements of grammatical and semantic structures can be detected. Beyond the specific task at hand the approach will also be useful for the analysis of other types of spaceless text such as Twitter hashtags and texts in agglutinative or spaceless languages like Finnish or Chinese.",Automatic Recognition of Linguistic Replacements in Text Series Generated from Keystroke Logs,"This paper introduces a toolkit used for the purpose of detecting replacements of different grammatical and semantic structures in ongoing text production logged as a chronological series of computer interaction events (so-called keystroke logs). The specific case we use involves human translations where replacements can be indicative of translator behaviour that leads to specific features of translations that distinguish them from non-translated texts. The toolkit uses a novel CCG chart parser customised so as to recognise grammatical words independently of space and punctuation boundaries. On the basis of the linguistic analysis, structures in different versions of the target text are compared and classified as potential equivalents of the same source text segment by 'equivalence judges'. In that way, replacements of grammatical and semantic structures can be detected. Beyond the specific task at hand the approach will also be useful for the analysis of other types of spaceless text such as Twitter hashtags and texts in agglutinative or spaceless languages like Finnish or Chinese.",Automatic Recognition of Linguistic Replacements in Text Series Generated from Keystroke Logs,"This paper introduces a toolkit used for the purpose of detecting replacements of different grammatical and semantic structures in ongoing text production logged as a chronological series of computer interaction events (so-called keystroke logs). The specific case we use involves human translations where replacements can be indicative of translator behaviour that leads to specific features of translations that distinguish them from non-translated texts. The toolkit uses a novel CCG chart parser customised so as to recognise grammatical words independently of space and punctuation boundaries. On the basis of the linguistic analysis, structures in different versions of the target text are compared and classified as potential equivalents of the same source text segment by 'equivalence judges'. In that way, replacements of grammatical and semantic structures can be detected. 
Beyond the specific task at hand the approach will also be useful for the analysis of other types of spaceless text such as Twitter hashtags and texts in agglutinative or spaceless languages like Finnish or Chinese.","The research reported here was funded by the German Research Council, grant no. NE 1822/2-1.","Automatic Recognition of Linguistic Replacements in Text Series Generated from Keystroke Logs. This paper introduces a toolkit used for the purpose of detecting replacements of different grammatical and semantic structures in ongoing text production logged as a chronological series of computer interaction events (so-called keystroke logs). The specific case we use involves human translations where replacements can be indicative of translator behaviour that leads to specific features of translations that distinguish them from non-translated texts. The toolkit uses a novel CCG chart parser customised so as to recognise grammatical words independently of space and punctuation boundaries. On the basis of the linguistic analysis, structures in different versions of the target text are compared and classified as potential equivalents of the same source text segment by 'equivalence judges'. In that way, replacements of grammatical and semantic structures can be detected. Beyond the specific task at hand the approach will also be useful for the analysis of other types of spaceless text such as Twitter hashtags and texts in agglutinative or spaceless languages like Finnish or Chinese.",2016
gu-etal-2018-incorporating,https://aclanthology.org/W18-5212,0,,,,,,,"Incorporating Topic Aspects for Online Comment Convincingness Evaluation. In this paper, we propose to incorporate topic aspects information for online comments convincingness evaluation. Our model makes use of graph convolutional network to utilize implicit topic information within a discussion thread to assist the evaluation of convincingness of each single comment. In order to test the effectiveness of our proposed model, we annotate topic information on top of a public dataset for argument convincingness evaluation. Experimental results show that topic information is able to improve the performance for convincingness evaluation. We also make a move to detect topic aspects automatically.",Incorporating Topic Aspects for Online Comment Convincingness Evaluation,"In this paper, we propose to incorporate topic aspects information for online comments convincingness evaluation. Our model makes use of graph convolutional network to utilize implicit topic information within a discussion thread to assist the evaluation of convincingness of each single comment. In order to test the effectiveness of our proposed model, we annotate topic information on top of a public dataset for argument convincingness evaluation. Experimental results show that topic information is able to improve the performance for convincingness evaluation. We also make a move to detect topic aspects automatically.",Incorporating Topic Aspects for Online Comment Convincingness Evaluation,"In this paper, we propose to incorporate topic aspects information for online comments convincingness evaluation. Our model makes use of graph convolutional network to utilize implicit topic information within a discussion thread to assist the evaluation of convincingness of each single comment. In order to test the effectiveness of our proposed model, we annotate topic information on top of a public dataset for argument convincingness evaluation. Experimental results show that topic information is able to improve the performance for convincingness evaluation. We also make a move to detect topic aspects automatically.","The work is partially supported by National Natural Science Foundation of China (Grant No. 61702106), Shanghai Science and Technology Commission (Grant No. 17JC1420200, Grant No. 17YF1427600 and Grant No.16JC1420401).","Incorporating Topic Aspects for Online Comment Convincingness Evaluation. In this paper, we propose to incorporate topic aspects information for online comments convincingness evaluation. Our model makes use of graph convolutional network to utilize implicit topic information within a discussion thread to assist the evaluation of convincingness of each single comment. In order to test the effectiveness of our proposed model, we annotate topic information on top of a public dataset for argument convincingness evaluation. Experimental results show that topic information is able to improve the performance for convincingness evaluation. We also make a move to detect topic aspects automatically.",2018
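The record above uses a graph convolutional network over a discussion thread to score comment convincingness. The snippet below is a generic single GCN layer (symmetrically normalized adjacency with self-loops, then H' = relu(A_hat H W)), meant only to illustrate that building block; it is not the authors' architecture, and the adjacency, feature, and weight tensors are toy values. It assumes PyTorch.

import torch

def gcn_layer(adj, h, weight):
    """One generic GCN layer: add self-loops, symmetrically normalize the
    adjacency, then propagate node features: H' = relu(A_hat @ H @ W)."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    a_hat = d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)
    return torch.relu(a_hat @ h @ weight)

# Toy thread: 4 comments, edges linking comments in the same discussion.
adj = torch.tensor([[0., 1., 0., 1.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [1., 0., 1., 0.]])
h = torch.randn(4, 8)                 # toy comment representations
w = torch.randn(8, 4)
print(gcn_layer(adj, h, w).shape)     # torch.Size([4, 4])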
wang-etal-2009-classifying,https://aclanthology.org/D09-1157,1,,,,health,,,"Classifying Relations for Biomedical Named Entity Disambiguation. Named entity disambiguation concerns linking a potentially ambiguous mention of named entity in text to an unambiguous identifier in a standard database. One approach to this task is supervised classification. However, the availability of training data is often limited, and the available data sets tend to be imbalanced and, in some cases, heterogeneous. We propose a new method that distinguishes a named entity by finding the informative keywords in its surrounding context, and then trains a model to predict whether each keyword indicates the semantic class of the entity. While maintaining a comparable performance to supervised classification, this method avoids using expensive manually annotated data for each new domain, and thus achieves better portability.",Classifying Relations for Biomedical Named Entity Disambiguation,"Named entity disambiguation concerns linking a potentially ambiguous mention of named entity in text to an unambiguous identifier in a standard database. One approach to this task is supervised classification. However, the availability of training data is often limited, and the available data sets tend to be imbalanced and, in some cases, heterogeneous. We propose a new method that distinguishes a named entity by finding the informative keywords in its surrounding context, and then trains a model to predict whether each keyword indicates the semantic class of the entity. While maintaining a comparable performance to supervised classification, this method avoids using expensive manually annotated data for each new domain, and thus achieves better portability.",Classifying Relations for Biomedical Named Entity Disambiguation,"Named entity disambiguation concerns linking a potentially ambiguous mention of named entity in text to an unambiguous identifier in a standard database. One approach to this task is supervised classification. However, the availability of training data is often limited, and the available data sets tend to be imbalanced and, in some cases, heterogeneous. We propose a new method that distinguishes a named entity by finding the informative keywords in its surrounding context, and then trains a model to predict whether each keyword indicates the semantic class of the entity. While maintaining a comparable performance to supervised classification, this method avoids using expensive manually annotated data for each new domain, and thus achieves better portability.","The work reported in this paper is funded by Pfizer Ltd.. The UK National Centre for Text Mining is funded by JISC. The ITI-TXM corpus used in the experiments was developed at School of Informatics, University of Edinburgh, in the TXM project, which was funded by ITI Life Sciences, Scotland.","Classifying Relations for Biomedical Named Entity Disambiguation. Named entity disambiguation concerns linking a potentially ambiguous mention of named entity in text to an unambiguous identifier in a standard database. One approach to this task is supervised classification. However, the availability of training data is often limited, and the available data sets tend to be imbalanced and, in some cases, heterogeneous. We propose a new method that distinguishes a named entity by finding the informative keywords in its surrounding context, and then trains a model to predict whether each keyword indicates the semantic class of the entity. 
While maintaining a comparable performance to supervised classification, this method avoids using expensive manually annotated data for each new domain, and thus achieves better portability.",2009
rosti-etal-2007-combining,https://aclanthology.org/N07-1029,0,,,,,,,"Combining Outputs from Multiple Machine Translation Systems. Currently there are several approaches to machine translation (MT) based on different paradigms; e.g., phrasal, hierarchical and syntax-based. These three approaches yield similar translation accuracy despite using fairly different levels of linguistic knowledge. The availability of such a variety of systems has led to a growing interest toward finding better translations by combining outputs from multiple systems. This paper describes three different approaches to MT system combination. These combination methods operate on sentence, phrase and word level exploiting information from N-best lists, system scores and target-to-source phrase alignments. The word-level combination provides the most robust gains but the best results on the development test sets (NIST MT05 and the newsgroup portion of GALE 2006 dry-run) were achieved by combining all three methods.",Combining Outputs from Multiple Machine Translation Systems,"Currently there are several approaches to machine translation (MT) based on different paradigms; e.g., phrasal, hierarchical and syntax-based. These three approaches yield similar translation accuracy despite using fairly different levels of linguistic knowledge. The availability of such a variety of systems has led to a growing interest toward finding better translations by combining outputs from multiple systems. This paper describes three different approaches to MT system combination. These combination methods operate on sentence, phrase and word level exploiting information from N-best lists, system scores and target-to-source phrase alignments. The word-level combination provides the most robust gains but the best results on the development test sets (NIST MT05 and the newsgroup portion of GALE 2006 dry-run) were achieved by combining all three methods.",Combining Outputs from Multiple Machine Translation Systems,"Currently there are several approaches to machine translation (MT) based on different paradigms; e.g., phrasal, hierarchical and syntax-based. These three approaches yield similar translation accuracy despite using fairly different levels of linguistic knowledge. The availability of such a variety of systems has led to a growing interest toward finding better translations by combining outputs from multiple systems. This paper describes three different approaches to MT system combination. These combination methods operate on sentence, phrase and word level exploiting information from N-best lists, system scores and target-to-source phrase alignments. The word-level combination provides the most robust gains but the best results on the development test sets (NIST MT05 and the newsgroup portion of GALE 2006 dry-run) were achieved by combining all three methods.","This work was supported by DARPA/IPTO Contract No. HR0011-06-C-0022 under the GALE program (approved for public release, distribution unlimited). The authors would like to thank ISI and University of Edinburgh for sharing their MT system outputs.","Combining Outputs from Multiple Machine Translation Systems. Currently there are several approaches to machine translation (MT) based on different paradigms; e.g., phrasal, hierarchical and syntax-based. 
The availability of such a variety of systems has led to a growing interest toward finding better translations by combining outputs from multiple systems. This paper describes three different approaches to MT system combination. These combination methods operate on sentence, phrase and word level exploiting information from N-best lists, system scores and target-to-source phrase alignments. The word-level combination provides the most robust gains but the best results on the development test sets (NIST MT05 and the newsgroup portion of GALE 2006 dry-run) were achieved by combining all three methods.",2007
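The abstract above describes sentence-, phrase- and word-level system combination. As a rough, purely illustrative sketch of the simplest of these, the snippet below selects, for each source sentence, the system output that agrees most with the other systems' outputs. The consensus score here is a crude token-overlap F1, not the paper's actual features or confusion-network decoding, and the example outputs are invented.

from collections import Counter

def overlap_f1(hyp, ref):
    """Token-overlap F1 between two tokenized sentences (a crude stand-in
    for a proper sentence-level metric such as TER or BLEU)."""
    common = sum((Counter(hyp) & Counter(ref)).values())
    if common == 0:
        return 0.0
    p, r = common / len(hyp), common / len(ref)
    return 2 * p * r / (p + r)

def consensus_select(system_outputs):
    """Pick the hypothesis most similar, on average, to the other systems' outputs."""
    best, best_score = None, -1.0
    for i, hyp in enumerate(system_outputs):
        others = [o for j, o in enumerate(system_outputs) if j != i]
        score = sum(overlap_f1(hyp, o) for o in others) / len(others)
        if score > best_score:
            best, best_score = hyp, score
    return best

outputs = [
    "the parliament approved the new budget".split(),
    "parliament approved the new budget today".split(),
    "the parliament has approved a budget".split(),
]
print(" ".join(consensus_select(outputs)))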
luo-etal-2019-improving,https://aclanthology.org/P19-1144,0,,,,,,,"Improving Neural Language Models by Segmenting, Attending, and Predicting the Future. Common language models typically predict the next word given the context. In this work, we propose a method that improves language modeling by learning to align the given context and the following phrase. The model does not require any linguistic annotation of phrase segmentation. Instead, we define syntactic heights and phrase segmentation rules, enabling the model to automatically induce phrases, recognize their task-specific heads, and generate phrase embeddings in an unsupervised learning manner. Our method can easily be applied to language models with different network architectures since an independent module is used for phrase induction and context-phrase alignment, and no change is required in the underlying language modeling network. Experiments have shown that our model outperformed several strong baseline models on different data sets. We achieved a new state-of-the-art performance of 17.4 perplexity on the Wikitext-103 dataset. Additionally, visualizing the outputs of the phrase induction module showed that our model is able to learn approximate phrase-level structural knowledge without any annotation.","Improving Neural Language Models by Segmenting, Attending, and Predicting the Future","Common language models typically predict the next word given the context. In this work, we propose a method that improves language modeling by learning to align the given context and the following phrase. The model does not require any linguistic annotation of phrase segmentation. Instead, we define syntactic heights and phrase segmentation rules, enabling the model to automatically induce phrases, recognize their task-specific heads, and generate phrase embeddings in an unsupervised learning manner. Our method can easily be applied to language models with different network architectures since an independent module is used for phrase induction and context-phrase alignment, and no change is required in the underlying language modeling network. Experiments have shown that our model outperformed several strong baseline models on different data sets. We achieved a new state-of-the-art performance of 17.4 perplexity on the Wikitext-103 dataset. Additionally, visualizing the outputs of the phrase induction module showed that our model is able to learn approximate phrase-level structural knowledge without any annotation.","Improving Neural Language Models by Segmenting, Attending, and Predicting the Future","Common language models typically predict the next word given the context. In this work, we propose a method that improves language modeling by learning to align the given context and the following phrase. The model does not require any linguistic annotation of phrase segmentation. Instead, we define syntactic heights and phrase segmentation rules, enabling the model to automatically induce phrases, recognize their task-specific heads, and generate phrase embeddings in an unsupervised learning manner. Our method can easily be applied to language models with different network architectures since an independent module is used for phrase induction and context-phrase alignment, and no change is required in the underlying language modeling network. Experiments have shown that our model outperformed several strong baseline models on different data sets. We achieved a new state-of-the-art performance of 17.4 perplexity on the Wikitext-103 dataset. 
Additionally, visualizing the outputs of the phrase induction module showed that our model is able to learn approximate phrase-level structural knowledge without any annotation.",,"Improving Neural Language Models by Segmenting, Attending, and Predicting the Future. Common language models typically predict the next word given the context. In this work, we propose a method that improves language modeling by learning to align the given context and the following phrase. The model does not require any linguistic annotation of phrase segmentation. Instead, we define syntactic heights and phrase segmentation rules, enabling the model to automatically induce phrases, recognize their task-specific heads, and generate phrase embeddings in an unsupervised learning manner. Our method can easily be applied to language models with different network architectures since an independent module is used for phrase induction and context-phrase alignment, and no change is required in the underlying language modeling network. Experiments have shown that our model outperformed several strong baseline models on different data sets. We achieved a new state-of-the-art performance of 17.4 perplexity on the Wikitext-103 dataset. Additionally, visualizing the outputs of the phrase induction module showed that our model is able to learn approximate phrase-level structural knowledge without any annotation.",2019
opitz-etal-2018-induction,https://aclanthology.org/W18-4518,0,,,,,,,"Induction of a Large-Scale Knowledge Graph from the Regesta Imperii. We induce and visualize a Knowledge Graph over the Regesta Imperii (RI), an important large-scale resource for medieval history research. The RI comprise more than 150,000 digitized abstracts of medieval charters issued by the Roman-German kings and popes distributed over many European locations and a time span of more than 700 years. Our goal is to provide a resource for historians to visualize and query the RI, possibly aiding medieval history research. The resulting medieval graph and visualization tools are shared publicly.",Induction of a Large-Scale Knowledge Graph from the {R}egesta {I}mperii,"We induce and visualize a Knowledge Graph over the Regesta Imperii (RI), an important large-scale resource for medieval history research. The RI comprise more than 150,000 digitized abstracts of medieval charters issued by the Roman-German kings and popes distributed over many European locations and a time span of more than 700 years. Our goal is to provide a resource for historians to visualize and query the RI, possibly aiding medieval history research. The resulting medieval graph and visualization tools are shared publicly.",Induction of a Large-Scale Knowledge Graph from the Regesta Imperii,"We induce and visualize a Knowledge Graph over the Regesta Imperii (RI), an important large-scale resource for medieval history research. The RI comprise more than 150,000 digitized abstracts of medieval charters issued by the Roman-German kings and popes distributed over many European locations and a time span of more than 700 years. Our goal is to provide a resource for historians to visualize and query the RI, possibly aiding medieval history research. The resulting medieval graph and visualization tools are shared publicly.",,"Induction of a Large-Scale Knowledge Graph from the Regesta Imperii. We induce and visualize a Knowledge Graph over the Regesta Imperii (RI), an important large-scale resource for medieval history research. The RI comprise more than 150,000 digitized abstracts of medieval charters issued by the Roman-German kings and popes distributed over many European locations and a time span of more than 700 years. Our goal is to provide a resource for historians to visualize and query the RI, possibly aiding medieval history research. The resulting medieval graph and visualization tools are shared publicly.",2018
waszczuk-etal-2019-neural,https://aclanthology.org/W19-5113,0,,,,,,,"A Neural Graph-based Approach to Verbal MWE Identification. We propose to tackle the problem of verbal multiword expression (VMWE) identification using a neural graph parsing-based approach. Our solution involves encoding VMWE annotations as labellings of dependency trees and, subsequently, applying a neural network to model the probabilities of different labellings. This strategy can be particularly effective when applied to discontinuous VMWEs and, thanks to dense, pre-trained word vector representations, VMWEs unseen during training. Evaluation of our approach on three PARSEME datasets (German, French, and Polish) shows that it allows to achieve performance on par with the previous state-of-the-art (Al Saied et al., 2018).",A Neural Graph-based Approach to Verbal {MWE} Identification,"We propose to tackle the problem of verbal multiword expression (VMWE) identification using a neural graph parsing-based approach. Our solution involves encoding VMWE annotations as labellings of dependency trees and, subsequently, applying a neural network to model the probabilities of different labellings. This strategy can be particularly effective when applied to discontinuous VMWEs and, thanks to dense, pre-trained word vector representations, VMWEs unseen during training. Evaluation of our approach on three PARSEME datasets (German, French, and Polish) shows that it allows to achieve performance on par with the previous state-of-the-art (Al Saied et al., 2018).",A Neural Graph-based Approach to Verbal MWE Identification,"We propose to tackle the problem of verbal multiword expression (VMWE) identification using a neural graph parsing-based approach. Our solution involves encoding VMWE annotations as labellings of dependency trees and, subsequently, applying a neural network to model the probabilities of different labellings. This strategy can be particularly effective when applied to discontinuous VMWEs and, thanks to dense, pre-trained word vector representations, VMWEs unseen during training. Evaluation of our approach on three PARSEME datasets (German, French, and Polish) shows that it allows to achieve performance on par with the previous state-of-the-art (Al Saied et al., 2018).","We thank the anonymous reviewers for their valuable comments. The work presented in this paper was funded by the German Research Foundation (DFG) within the CRC 991 and the Beyond CFG project, as well as by the Land North Rhine-Westphalia within the NRW-Forschungskolleg Online-Partizipation.","A Neural Graph-based Approach to Verbal MWE Identification. We propose to tackle the problem of verbal multiword expression (VMWE) identification using a neural graph parsing-based approach. Our solution involves encoding VMWE annotations as labellings of dependency trees and, subsequently, applying a neural network to model the probabilities of different labellings. This strategy can be particularly effective when applied to discontinuous VMWEs and, thanks to dense, pre-trained word vector representations, VMWEs unseen during training. Evaluation of our approach on three PARSEME datasets (German, French, and Polish) shows that it allows to achieve performance on par with the previous state-of-the-art (Al Saied et al., 2018).",2019
li-etal-2019-findings,https://aclanthology.org/W19-5303,0,,,,,,,"Findings of the First Shared Task on Machine Translation Robustness. We share the findings of the first shared task on improving robustness of Machine Translation (MT). The task provides a testbed representing challenges facing MT models deployed in the real world, and facilitates new approaches to improve models' robustness to noisy input and domain mismatch. We focus on two language pairs (English-French and English-Japanese), and the submitted systems are evaluated on a blind test set consisting of noisy comments on Reddit 1 and professionally sourced translations. As a new task, we received 23 submissions by 11 participating teams from universities, companies, national labs, etc. All submitted systems achieved large improvements over baselines, with the best improvement having +22.33 BLEU. We evaluated submissions by both human judgment and automatic evaluation (BLEU), which shows high correlations (Pearson's r = 0.94 and 0.95). Furthermore, we conducted a qualitative analysis of the submitted systems using compare-mt 2 , which revealed their salient differences in handling challenges in this task. Such analysis provides additional insights when there is occasional disagreement between human judgment and BLEU, e.g. systems better at producing colloquial expressions received higher score from human judgment.",Findings of the First Shared Task on Machine Translation Robustness,"We share the findings of the first shared task on improving robustness of Machine Translation (MT). The task provides a testbed representing challenges facing MT models deployed in the real world, and facilitates new approaches to improve models' robustness to noisy input and domain mismatch. We focus on two language pairs (English-French and English-Japanese), and the submitted systems are evaluated on a blind test set consisting of noisy comments on Reddit 1 and professionally sourced translations. As a new task, we received 23 submissions by 11 participating teams from universities, companies, national labs, etc. All submitted systems achieved large improvements over baselines, with the best improvement having +22.33 BLEU. We evaluated submissions by both human judgment and automatic evaluation (BLEU), which shows high correlations (Pearson's r = 0.94 and 0.95). Furthermore, we conducted a qualitative analysis of the submitted systems using compare-mt 2 , which revealed their salient differences in handling challenges in this task. Such analysis provides additional insights when there is occasional disagreement between human judgment and BLEU, e.g. systems better at producing colloquial expressions received higher score from human judgment.",Findings of the First Shared Task on Machine Translation Robustness,"We share the findings of the first shared task on improving robustness of Machine Translation (MT). The task provides a testbed representing challenges facing MT models deployed in the real world, and facilitates new approaches to improve models' robustness to noisy input and domain mismatch. We focus on two language pairs (English-French and English-Japanese), and the submitted systems are evaluated on a blind test set consisting of noisy comments on Reddit 1 and professionally sourced translations. As a new task, we received 23 submissions by 11 participating teams from universities, companies, national labs, etc. All submitted systems achieved large improvements over baselines, with the best improvement having +22.33 BLEU. 
We evaluated submissions by both human judgment and automatic evaluation (BLEU), which shows high correlations (Pearson's r = 0.94 and 0.95). Furthermore, we conducted a qualitative analysis of the submitted systems using compare-mt 2 , which revealed their salient differences in handling challenges in this task. Such analysis provides additional insights when there is occasional disagreement between human judgment and BLEU, e.g. systems better at producing colloquial expressions received higher score from human judgment.",We thank Facebook for funding the human evaluation and blind test set creation.,"Findings of the First Shared Task on Machine Translation Robustness. We share the findings of the first shared task on improving robustness of Machine Translation (MT). The task provides a testbed representing challenges facing MT models deployed in the real world, and facilitates new approaches to improve models' robustness to noisy input and domain mismatch. We focus on two language pairs (English-French and English-Japanese), and the submitted systems are evaluated on a blind test set consisting of noisy comments on Reddit 1 and professionally sourced translations. As a new task, we received 23 submissions by 11 participating teams from universities, companies, national labs, etc. All submitted systems achieved large improvements over baselines, with the best improvement having +22.33 BLEU. We evaluated submissions by both human judgment and automatic evaluation (BLEU), which shows high correlations (Pearson's r = 0.94 and 0.95). Furthermore, we conducted a qualitative analysis of the submitted systems using compare-mt 2 , which revealed their salient differences in handling challenges in this task. Such analysis provides additional insights when there is occasional disagreement between human judgment and BLEU, e.g. systems better at producing colloquial expressions received higher score from human judgment.",2019
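The record above reports Pearson correlations of around 0.94-0.95 between BLEU and human judgment. A correlation of this kind can be computed from paired system-level scores as in the brief sketch below; the score values are invented placeholders, not the shared task's data, and scipy is assumed to be available.

from scipy.stats import pearsonr

# Invented system-level scores for illustration only.
bleu_scores  = [36.2, 40.5, 28.9, 44.1, 33.0]
human_scores = [3.1,  3.6,  2.7,  3.9,  3.0]

r, p_value = pearsonr(bleu_scores, human_scores)
print(f"Pearson's r = {r:.2f} (p = {p_value:.3f})")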
huck-etal-2017-lmu,https://aclanthology.org/W17-4730,1,,,,health,,,"LMU Munich's Neural Machine Translation Systems for News Articles and Health Information Texts. This paper describes the LMU Munich English→German machine translation systems. We participated with neural translation engines in the WMT17 shared task on machine translation of news, as well as in the biomedical translation task. LMU Munich's systems deliver competitive machine translation quality on both news articles and health information texts.",{LMU} {M}unich{'}s Neural Machine Translation Systems for News Articles and Health Information Texts,"This paper describes the LMU Munich English→German machine translation systems. We participated with neural translation engines in the WMT17 shared task on machine translation of news, as well as in the biomedical translation task. LMU Munich's systems deliver competitive machine translation quality on both news articles and health information texts.",LMU Munich's Neural Machine Translation Systems for News Articles and Health Information Texts,"This paper describes the LMU Munich English→German machine translation systems. We participated with neural translation engines in the WMT17 shared task on machine translation of news, as well as in the biomedical translation task. LMU Munich's systems deliver competitive machine translation quality on both news articles and health information texts.",This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement № 644402 (HimL). This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement № 640550).,"LMU Munich's Neural Machine Translation Systems for News Articles and Health Information Texts. This paper describes the LMU Munich English→German machine translation systems. We participated with neural translation engines in the WMT17 shared task on machine translation of news, as well as in the biomedical translation task. LMU Munich's systems deliver competitive machine translation quality on both news articles and health information texts.",2017
zou-li-2021-lz1904,https://aclanthology.org/2021.semeval-1.138,1,,,,peace_justice_and_strong_institutions,,,"LZ1904 at SemEval-2021 Task 5: Bi-LSTM-CRF for Toxic Span Detection using Pretrained Word Embedding. Recurrent Neural Networks (RNN) have been widely used in various Natural Language Processing (NLP) tasks such as text classification, sequence tagging, and machine translation. Long Short Term Memory (LSTM), a special unit of RNN, has the advantage of memorizing past and even future information in a sentence (especially for bidirectional LSTM). In the shared task of detecting toxic spans in texts, we first apply pretrained word embedding (GloVe) to generate the word vectors after tokenization. Then we construct Bidirectional Long Short Term Memory-Conditional Random Field (Bi-LSTM-CRF) model by Baidu research to predict whether each word in the sentence is toxic or not. We tune hyperparameters of dropout rate, number of LSTM units, embedding size with 10 epochs and choose the epoch with best validation recall. Our model achieves an F1 score of 66.99% on test dataset.",{LZ}1904 at {S}em{E}val-2021 Task 5: {B}i-{LSTM}-{CRF} for Toxic Span Detection using Pretrained Word Embedding,"Recurrent Neural Networks (RNN) have been widely used in various Natural Language Processing (NLP) tasks such as text classification, sequence tagging, and machine translation. Long Short Term Memory (LSTM), a special unit of RNN, has the advantage of memorizing past and even future information in a sentence (especially for bidirectional LSTM). In the shared task of detecting toxic spans in texts, we first apply pretrained word embedding (GloVe) to generate the word vectors after tokenization. Then we construct Bidirectional Long Short Term Memory-Conditional Random Field (Bi-LSTM-CRF) model by Baidu research to predict whether each word in the sentence is toxic or not. We tune hyperparameters of dropout rate, number of LSTM units, embedding size with 10 epochs and choose the epoch with best validation recall. Our model achieves an F1 score of 66.99% on test dataset.",LZ1904 at SemEval-2021 Task 5: Bi-LSTM-CRF for Toxic Span Detection using Pretrained Word Embedding,"Recurrent Neural Networks (RNN) have been widely used in various Natural Language Processing (NLP) tasks such as text classification, sequence tagging, and machine translation. Long Short Term Memory (LSTM), a special unit of RNN, has the advantage of memorizing past and even future information in a sentence (especially for bidirectional LSTM). In the shared task of detecting toxic spans in texts, we first apply pretrained word embedding (GloVe) to generate the word vectors after tokenization. Then we construct Bidirectional Long Short Term Memory-Conditional Random Field (Bi-LSTM-CRF) model by Baidu research to predict whether each word in the sentence is toxic or not. We tune hyperparameters of dropout rate, number of LSTM units, embedding size with 10 epochs and choose the epoch with best validation recall. Our model achieves an F1 score of 66.99% on test dataset.",,"LZ1904 at SemEval-2021 Task 5: Bi-LSTM-CRF for Toxic Span Detection using Pretrained Word Embedding. Recurrent Neural Networks (RNN) have been widely used in various Natural Language Processing (NLP) tasks such as text classification, sequence tagging, and machine translation. Long Short Term Memory (LSTM), a special unit of RNN, has the advantage of memorizing past and even future information in a sentence (especially for bidirectional LSTM). 
In the shared task of detecting toxic spans in texts, we first apply pretrained word embedding (GloVe) to generate the word vectors after tokenization. Then we construct Bidirectional Long Short Term Memory-Conditional Random Field (Bi-LSTM-CRF) model by Baidu research to predict whether each word in the sentence is toxic or not. We tune hyperparameters of dropout rate, number of LSTM units, embedding size with 10 epochs and choose the epoch with best validation recall. Our model achieves an F1 score of 66.99% on test dataset.",2021
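The record above describes a Bi-LSTM-CRF token tagger over GloVe embeddings. Below is a minimal sketch of that kind of model, assuming PyTorch plus the third-party pytorch-crf package (torchcrf); the vocabulary size, dimensions, and tensors are placeholders, and loading pretrained GloVe vectors into the embedding layer is left out.

import torch
import torch.nn as nn
from torchcrf import CRF   # third-party "pytorch-crf" package (assumed available)

class BiLSTMCRFTagger(nn.Module):
    """Embeddings -> BiLSTM -> per-token emission scores -> CRF layer."""
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)   # could be initialized from GloVe
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.emit = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def _emissions(self, tokens):
        out, _ = self.lstm(self.emb(tokens))
        return self.emit(out)

    def loss(self, tokens, tags, mask):
        # Negative log-likelihood of the gold tag sequences under the CRF.
        return -self.crf(self._emissions(tokens), tags, mask=mask)

    def decode(self, tokens, mask):
        return self.crf.decode(self._emissions(tokens), mask=mask)

# Toy batch: 2 sentences of length 5, binary toxic/non-toxic tags.
model = BiLSTMCRFTagger(vocab_size=1000, num_tags=2)
tokens = torch.randint(0, 1000, (2, 5))
tags = torch.randint(0, 2, (2, 5))
mask = torch.ones(2, 5, dtype=torch.bool)
print(model.loss(tokens, tags, mask))   # training objective
print(model.decode(tokens, mask))       # predicted tag sequences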
kotani-yoshimi-2015-design,https://aclanthology.org/Y15-1040,1,,,,education,,,"Design of a Learner Corpus for Listening and Speaking Performance. A learner corpus is a useful resource for developing automatic assessment techniques for implementation in a computer-assisted language learning system. However, presently, learner corpora are only helpful in terms of evaluating the accuracy of learner output (speaking and writing). Therefore, the present study proposes a learner corpus annotated with evaluation results regarding the accuracy and fluency of performance in speaking (output) and listening (input).",Design of a Learner Corpus for Listening and Speaking Performance,"A learner corpus is a useful resource for developing automatic assessment techniques for implementation in a computer-assisted language learning system. However, presently, learner corpora are only helpful in terms of evaluating the accuracy of learner output (speaking and writing). Therefore, the present study proposes a learner corpus annotated with evaluation results regarding the accuracy and fluency of performance in speaking (output) and listening (input).",Design of a Learner Corpus for Listening and Speaking Performance,"A learner corpus is a useful resource for developing automatic assessment techniques for implementation in a computer-assisted language learning system. However, presently, learner corpora are only helpful in terms of evaluating the accuracy of learner output (speaking and writing). Therefore, the present study proposes a learner corpus annotated with evaluation results regarding the accuracy and fluency of performance in speaking (output) and listening (input).","This work was supported by JSPS KAKENHI Grant Numbers, 22300299, 15H02940","Design of a Learner Corpus for Listening and Speaking Performance. A learner corpus is a useful resource for developing automatic assessment techniques for implementation in a computer-assisted language learning system. However, presently, learner corpora are only helpful in terms of evaluating the accuracy of learner output (speaking and writing). Therefore, the present study proposes a learner corpus annotated with evaluation results regarding the accuracy and fluency of performance in speaking (output) and listening (input).",2015
fraser-etal-2012-modeling,https://aclanthology.org/E12-1068,0,,,,,,,Modeling Inflection and Word-Formation in SMT. The current state-of-the-art in statistical machine translation (SMT) suffers from issues of sparsity and inadequate modeling power when translating into morphologically rich languages. We model both inflection and word-formation for the task of translating into German. We translate from English words to an underspecified German representation and then use linear-chain CRFs to predict the fully specified German representation. We show that improved modeling of inflection and word-formation leads to improved SMT.,Modeling Inflection and Word-Formation in {SMT},The current state-of-the-art in statistical machine translation (SMT) suffers from issues of sparsity and inadequate modeling power when translating into morphologically rich languages. We model both inflection and word-formation for the task of translating into German. We translate from English words to an underspecified German representation and then use linear-chain CRFs to predict the fully specified German representation. We show that improved modeling of inflection and word-formation leads to improved SMT.,Modeling Inflection and Word-Formation in SMT,The current state-of-the-art in statistical machine translation (SMT) suffers from issues of sparsity and inadequate modeling power when translating into morphologically rich languages. We model both inflection and word-formation for the task of translating into German. We translate from English words to an underspecified German representation and then use linear-chain CRFs to predict the fully specified German representation. We show that improved modeling of inflection and word-formation leads to improved SMT.,"The authors wish to thank the anonymous reviewers for their comments. Aoife Cahill was partly supported by Deutsche Forschungsgemeinschaft grant SFB 732. Alexander Fraser, Marion Weller and Fabienne Cap were funded by Deutsche Forschungsgemeinschaft grant Models of Morphosyntax for Statistical Machine Translation. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement Nr. 248005. This work was supported in part by the IST Programme of the European Community, under the PASCAL2 Network of Excellence, IST-2007-216886. This publication only reflects the authors' views. We thank Thomas Lavergne and Helmut Schmid.",Modeling Inflection and Word-Formation in SMT. The current state-of-the-art in statistical machine translation (SMT) suffers from issues of sparsity and inadequate modeling power when translating into morphologically rich languages. We model both inflection and word-formation for the task of translating into German. We translate from English words to an underspecified German representation and then use linear-chain CRFs to predict the fully specified German representation. We show that improved modeling of inflection and word-formation leads to improved SMT.,2012
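The record above predicts fully specified German word forms from an underspecified representation with linear-chain CRFs. As a loose illustration only, not the authors' feature set or pipeline, the sketch below tags each stem in a sequence with a grammatical-case label using the sklearn-crfsuite package (assumed to be installed); the stems, features, and labels are toy values.

import sklearn_crfsuite  # third-party package (assumed available)

def token_features(sent, i):
    """Very small feature set for one underspecified token: the stem itself,
    the previous stem, and a begin-of-sentence flag."""
    return {"stem": sent[i], "prev_stem": sent[i - 1] if i else "<BOS>", "bos": i == 0}

# Toy training data: sequences of German stems with case/verb labels (invented).
sents = [["der", "hund", "beißen", "der", "mann"],
         ["der", "mann", "sehen", "der", "hund"]]
X_train = [[token_features(s, i) for i in range(len(s))] for s in sents]
y_train = [["Nom", "Nom", "V", "Acc", "Acc"],
           ["Nom", "Nom", "V", "Acc", "Acc"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X_train, y_train)

test = ["der", "hund", "sehen", "der", "mann"]
print(crf.predict([[token_features(test, i) for i in range(len(test))]])[0])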
lang-etal-2022-visually,https://aclanthology.org/2022.cmcl-1.3,0,,,,,,,"Visually Grounded Interpretation of Noun-Noun Compounds in English. Noun-noun compounds (NNCs) occur frequently in the English language. Accurate NNC interpretation, i.e. determining the implicit relationship between the constituents of a NNC, is crucial for the advancement of many natural language processing tasks. Until now, computational NNC interpretation has been limited to approaches involving linguistic representations only. However, research suggests that grounding linguistic representations in vision or other modalities can increase performance on this and other tasks. Our work is a novel comparison of linguistic and visuolinguistic representations for the task of NNC interpretation. We frame NNC interpretation as a relation classification task, evaluating on a large, relationally-annotated NNC dataset. We combine distributional word vectors with image vectors to investigate how visual information can help improve NNC interpretation systems. We find that adding visual vectors yields modest increases in performance on several configurations of our dataset. We view this as a promising first exploration of the benefits of using visually grounded representations for NNC interpretation.",Visually Grounded Interpretation of Noun-Noun Compounds in {E}nglish,"Noun-noun compounds (NNCs) occur frequently in the English language. Accurate NNC interpretation, i.e. determining the implicit relationship between the constituents of a NNC, is crucial for the advancement of many natural language processing tasks. Until now, computational NNC interpretation has been limited to approaches involving linguistic representations only. However, research suggests that grounding linguistic representations in vision or other modalities can increase performance on this and other tasks. Our work is a novel comparison of linguistic and visuolinguistic representations for the task of NNC interpretation. We frame NNC interpretation as a relation classification task, evaluating on a large, relationally-annotated NNC dataset. We combine distributional word vectors with image vectors to investigate how visual information can help improve NNC interpretation systems. We find that adding visual vectors yields modest increases in performance on several configurations of our dataset. We view this as a promising first exploration of the benefits of using visually grounded representations for NNC interpretation.",Visually Grounded Interpretation of Noun-Noun Compounds in English,"Noun-noun compounds (NNCs) occur frequently in the English language. Accurate NNC interpretation, i.e. determining the implicit relationship between the constituents of a NNC, is crucial for the advancement of many natural language processing tasks. Until now, computational NNC interpretation has been limited to approaches involving linguistic representations only. However, research suggests that grounding linguistic representations in vision or other modalities can increase performance on this and other tasks. Our work is a novel comparison of linguistic and visuolinguistic representations for the task of NNC interpretation. We frame NNC interpretation as a relation classification task, evaluating on a large, relationally-annotated NNC dataset. We combine distributional word vectors with image vectors to investigate how visual information can help improve NNC interpretation systems. 
We find that adding visual vectors yields modest increases in performance on several configurations of our dataset. We view this as a promising first exploration of the benefits of using visually grounded representations for NNC interpretation.",,"Visually Grounded Interpretation of Noun-Noun Compounds in English. Noun-noun compounds (NNCs) occur frequently in the English language. Accurate NNC interpretation, i.e. determining the implicit relationship between the constituents of a NNC, is crucial for the advancement of many natural language processing tasks. Until now, computational NNC interpretation has been limited to approaches involving linguistic representations only. However, research suggests that grounding linguistic representations in vision or other modalities can increase performance on this and other tasks. Our work is a novel comparison of linguistic and visuolinguistic representations for the task of NNC interpretation. We frame NNC interpretation as a relation classification task, evaluating on a large, relationally-annotated NNC dataset. We combine distributional word vectors with image vectors to investigate how visual information can help improve NNC interpretation systems. We find that adding visual vectors yields modest increases in performance on several configurations of our dataset. We view this as a promising first exploration of the benefits of using visually grounded representations for NNC interpretation.",2022
kozhevnikov-titov-2013-cross,https://aclanthology.org/P13-1117,0,,,,,,,"Cross-lingual Transfer of Semantic Role Labeling Models. Semantic Role Labeling (SRL) has become one of the standard tasks of natural language processing and proven useful as a source of information for a number of other applications. We address the problem of transferring an SRL model from one language to another using a shared feature representation. This approach is then evaluated on three language pairs, demonstrating competitive performance as compared to a state-of-the-art unsupervised SRL system and a cross-lingual annotation projection baseline. We also consider the contribution of different aspects of the feature representation to the performance of the model and discuss practical applicability of this method.",Cross-lingual Transfer of Semantic Role Labeling Models,"Semantic Role Labeling (SRL) has become one of the standard tasks of natural language processing and proven useful as a source of information for a number of other applications. We address the problem of transferring an SRL model from one language to another using a shared feature representation. This approach is then evaluated on three language pairs, demonstrating competitive performance as compared to a state-of-the-art unsupervised SRL system and a cross-lingual annotation projection baseline. We also consider the contribution of different aspects of the feature representation to the performance of the model and discuss practical applicability of this method.",Cross-lingual Transfer of Semantic Role Labeling Models,"Semantic Role Labeling (SRL) has become one of the standard tasks of natural language processing and proven useful as a source of information for a number of other applications. We address the problem of transferring an SRL model from one language to another using a shared feature representation. This approach is then evaluated on three language pairs, demonstrating competitive performance as compared to a state-of-the-art unsupervised SRL system and a cross-lingual annotation projection baseline. We also consider the contribution of different aspects of the feature representation to the performance of the model and discuss practical applicability of this method.",The authors would like to thank Alexandre Klementiev and Ryan McDonald for useful suggestions and Täckström et al. (2012) for sharing the cross-lingual word representations. This research is supported by the MMCI Cluster of Excellence.,"Cross-lingual Transfer of Semantic Role Labeling Models. Semantic Role Labeling (SRL) has become one of the standard tasks of natural language processing and proven useful as a source of information for a number of other applications. We address the problem of transferring an SRL model from one language to another using a shared feature representation. This approach is then evaluated on three language pairs, demonstrating competitive performance as compared to a state-of-the-art unsupervised SRL system and a cross-lingual annotation projection baseline. We also consider the contribution of different aspects of the feature representation to the performance of the model and discuss practical applicability of this method.",2013
kuhlmann-2013-mildly,https://aclanthology.org/J13-2004,0,,,,,,,"Mildly Non-Projective Dependency Grammar. Syntactic representations based on word-to-word dependencies have a long-standing tradition in descriptive linguistics, and receive considerable interest in many applications. Nevertheless, dependency syntax has remained something of an island from a formal point of view. Moreover, most formalisms available for dependency grammar are restricted to projective analyses, and thus not able to support natural accounts of phenomena such as wh-movement and cross-serial dependencies. In this article we present a formalism for non-projective dependency grammar in the framework of linear context-free rewriting systems. A characteristic property of our formalism is a close correspondence between the non-projectivity of the dependency trees admitted by a grammar on the one hand, and the parsing complexity of the grammar on the other. We show that parsing with unrestricted grammars is intractable. We therefore study two constraints on non-projectivity, block-degree and well-nestedness. Jointly, these two constraints define a class of ""mildly"" non-projective dependency grammars that can be parsed in polynomial time. An evaluation on five dependency treebanks shows that these grammars have a good coverage of empirical data.",Mildly Non-Projective Dependency Grammar,"Syntactic representations based on word-to-word dependencies have a long-standing tradition in descriptive linguistics, and receive considerable interest in many applications. Nevertheless, dependency syntax has remained something of an island from a formal point of view. Moreover, most formalisms available for dependency grammar are restricted to projective analyses, and thus not able to support natural accounts of phenomena such as wh-movement and cross-serial dependencies. In this article we present a formalism for non-projective dependency grammar in the framework of linear context-free rewriting systems. A characteristic property of our formalism is a close correspondence between the non-projectivity of the dependency trees admitted by a grammar on the one hand, and the parsing complexity of the grammar on the other. We show that parsing with unrestricted grammars is intractable. We therefore study two constraints on non-projectivity, block-degree and well-nestedness. Jointly, these two constraints define a class of ""mildly"" non-projective dependency grammars that can be parsed in polynomial time. An evaluation on five dependency treebanks shows that these grammars have a good coverage of empirical data.",Mildly Non-Projective Dependency Grammar,"Syntactic representations based on word-to-word dependencies have a long-standing tradition in descriptive linguistics, and receive considerable interest in many applications. Nevertheless, dependency syntax has remained something of an island from a formal point of view. Moreover, most formalisms available for dependency grammar are restricted to projective analyses, and thus not able to support natural accounts of phenomena such as wh-movement and cross-serial dependencies. In this article we present a formalism for non-projective dependency grammar in the framework of linear context-free rewriting systems. A characteristic property of our formalism is a close correspondence between the non-projectivity of the dependency trees admitted by a grammar on the one hand, and the parsing complexity of the grammar on the other. We show that parsing with unrestricted grammars is intractable. 
We therefore study two constraints on non-projectivity, block-degree and well-nestedness. Jointly, these two constraints define a class of ""mildly"" non-projective dependency grammars that can be parsed in polynomial time. An evaluation on five dependency treebanks shows that these grammars have a good coverage of empirical data.","The author gratefully acknowledges financial support from The German Research Foundation (Sonderforschungsbereich 378, project MI 2) and The Swedish Research Council (diary no. 2008-296).","Mildly Non-Projective Dependency Grammar. Syntactic representations based on word-to-word dependencies have a long-standing tradition in descriptive linguistics, and receive considerable interest in many applications. Nevertheless, dependency syntax has remained something of an island from a formal point of view. Moreover, most formalisms available for dependency grammar are restricted to projective analyses, and thus not able to support natural accounts of phenomena such as wh-movement and cross-serial dependencies. In this article we present a formalism for non-projective dependency grammar in the framework of linear context-free rewriting systems. A characteristic property of our formalism is a close correspondence between the non-projectivity of the dependency trees admitted by a grammar on the one hand, and the parsing complexity of the grammar on the other. We show that parsing with unrestricted grammars is intractable. We therefore study two constraints on non-projectivity, block-degree and well-nestedness. Jointly, these two constraints define a class of ""mildly"" non-projective dependency grammars that can be parsed in polynomial time. An evaluation on five dependency treebanks shows that these grammars have a good coverage of empirical data.",2013
guo-diab-2009-improvements,https://aclanthology.org/W09-2410,0,,,,,,,"Improvements To Monolingual English Word Sense Disambiguation. Word Sense Disambiguation remains one of the most complex problems facing computational linguists to date. In this paper we present modifications to the graph-based state-of-the-art algorithm In-Degree. Our modifications entail augmenting the basic Lesk similarity measure with more relations based on the structure of WordNet, adding SemCor examples to the basic WordNet lexical resource and finally, instead of using the LCH similarity measure for computing verb-verb similarity in the In-Degree algorithm, we use JCN. We report results on three standard data sets using three different versions of WordNet. We report the highest performing monolingual unsupervised results to date on the Senseval 2 all words data set. Our system yields a performance of 62.7% using WordNet 1.7.1. * The second author has been partially funded by the DARPA GALE project. We would also like to thank the three anonymous reviewers for their useful comments.",Improvements To Monolingual {E}nglish Word Sense Disambiguation,"Word Sense Disambiguation remains one of the most complex problems facing computational linguists to date. In this paper we present modifications to the graph-based state-of-the-art algorithm In-Degree. Our modifications entail augmenting the basic Lesk similarity measure with more relations based on the structure of WordNet, adding SemCor examples to the basic WordNet lexical resource and finally, instead of using the LCH similarity measure for computing verb-verb similarity in the In-Degree algorithm, we use JCN. We report results on three standard data sets using three different versions of WordNet. We report the highest performing monolingual unsupervised results to date on the Senseval 2 all words data set. Our system yields a performance of 62.7% using WordNet 1.7.1. * The second author has been partially funded by the DARPA GALE project. We would also like to thank the three anonymous reviewers for their useful comments.",Improvements To Monolingual English Word Sense Disambiguation,"Word Sense Disambiguation remains one of the most complex problems facing computational linguists to date. In this paper we present modifications to the graph-based state-of-the-art algorithm In-Degree. Our modifications entail augmenting the basic Lesk similarity measure with more relations based on the structure of WordNet, adding SemCor examples to the basic WordNet lexical resource and finally, instead of using the LCH similarity measure for computing verb-verb similarity in the In-Degree algorithm, we use JCN. We report results on three standard data sets using three different versions of WordNet. We report the highest performing monolingual unsupervised results to date on the Senseval 2 all words data set. Our system yields a performance of 62.7% using WordNet 1.7.1. * The second author has been partially funded by the DARPA GALE project. We would also like to thank the three anonymous reviewers for their useful comments.",,"Improvements To Monolingual English Word Sense Disambiguation. Word Sense Disambiguation remains one of the most complex problems facing computational linguists to date. In this paper we present modifications to the graph-based state-of-the-art algorithm In-Degree. 
Our modifications entail augmenting the basic Lesk similarity measure with more relations based on the structure of WordNet, adding SemCor examples to the basic WordNet lexical resource and finally, instead of using the LCH similarity measure for computing verb-verb similarity in the In-Degree algorithm, we use JCN. We report results on three standard data sets using three different versions of WordNet. We report the highest performing monolingual unsupervised results to date on the Senseval 2 all words data set. Our system yields a performance of 62.7% using WordNet 1.7.1. * The second author has been partially funded by the DARPA GALE project. We would also like to thank the three anonymous reviewers for their useful comments.",2009
nuhn-etal-2012-deciphering,https://aclanthology.org/P12-1017,0,,,,,,,"Deciphering Foreign Language by Combining Language Models and Context Vectors. In this paper we show how to train statistical machine translation systems on real-life tasks using only non-parallel monolingual data from two languages. We present a modification of the method shown in (Ravi and Knight, 2011) that is scalable to vocabulary sizes of several thousand words. On the task shown in (Ravi and Knight, 2011) we obtain better results with only 5% of the computational effort when running our method with an n-gram language model. The efficiency improvement of our method allows us to run experiments with vocabulary sizes of around 5,000 words, such as a non-parallel version of the VERBMOBIL corpus. We also report results using data from the monolingual French and English GIGAWORD corpora.",Deciphering Foreign Language by Combining Language Models and Context Vectors,"In this paper we show how to train statistical machine translation systems on real-life tasks using only non-parallel monolingual data from two languages. We present a modification of the method shown in (Ravi and Knight, 2011) that is scalable to vocabulary sizes of several thousand words. On the task shown in (Ravi and Knight, 2011) we obtain better results with only 5% of the computational effort when running our method with an n-gram language model. The efficiency improvement of our method allows us to run experiments with vocabulary sizes of around 5,000 words, such as a non-parallel version of the VERBMOBIL corpus. We also report results using data from the monolingual French and English GIGAWORD corpora.",Deciphering Foreign Language by Combining Language Models and Context Vectors,"In this paper we show how to train statistical machine translation systems on real-life tasks using only non-parallel monolingual data from two languages. We present a modification of the method shown in (Ravi and Knight, 2011) that is scalable to vocabulary sizes of several thousand words. On the task shown in (Ravi and Knight, 2011) we obtain better results with only 5% of the computational effort when running our method with an n-gram language model. The efficiency improvement of our method allows us to run experiments with vocabulary sizes of around 5,000 words, such as a non-parallel version of the VERBMOBIL corpus. We also report results using data from the monolingual French and English GIGAWORD corpora.","This work was realized as part of the Quaero Programme, funded by OSEO, French State agency for innovation. The authors would like to thank Sujith Ravi and Kevin Knight for providing us with the OPUS subtitle corpus and David Rybach for kindly sharing his knowledge about the OpenFST library.","Deciphering Foreign Language by Combining Language Models and Context Vectors. In this paper we show how to train statistical machine translation systems on real-life tasks using only non-parallel monolingual data from two languages. We present a modification of the method shown in (Ravi and Knight, 2011) that is scalable to vocabulary sizes of several thousand words. On the task shown in (Ravi and Knight, 2011) we obtain better results with only 5% of the computational effort when running our method with an n-gram language model. The efficiency improvement of our method allows us to run experiments with vocabulary sizes of around 5,000 words, such as a non-parallel version of the VERBMOBIL corpus. 
We also report results using data from the monolingual French and English GIGAWORD corpora.",2012
xu-etal-2014-joint,https://aclanthology.org/C14-1064,0,,,,,,,"Joint Opinion Relation Detection Using One-Class Deep Neural Network. Detecting opinion relations is a crucial step for fine-grained opinion summarization. A valid opinion relation has three requirements: a correct opinion word, a correct opinion target and the linking relation between them. Previous works tend to verify only two of these requirements for opinion extraction, while leaving the other requirement unverified. This could inevitably introduce noise terms. To tackle this problem, this paper proposes a joint approach, where all three requirements are simultaneously verified by a deep neural network in a classification scenario. Some seeds are provided as positive labeled data for the classifier. However, negative labeled data are hard to acquire for this task. We consequently introduce a one-class classification problem and develop a One-Class Deep Neural Network. Experimental results show that the proposed joint approach significantly outperforms state-of-the-art weakly supervised methods.",Joint Opinion Relation Detection Using One-Class Deep Neural Network,"Detecting opinion relations is a crucial step for fine-grained opinion summarization. A valid opinion relation has three requirements: a correct opinion word, a correct opinion target and the linking relation between them. Previous works tend to verify only two of these requirements for opinion extraction, while leaving the other requirement unverified. This could inevitably introduce noise terms. To tackle this problem, this paper proposes a joint approach, where all three requirements are simultaneously verified by a deep neural network in a classification scenario. Some seeds are provided as positive labeled data for the classifier. However, negative labeled data are hard to acquire for this task. We consequently introduce a one-class classification problem and develop a One-Class Deep Neural Network. Experimental results show that the proposed joint approach significantly outperforms state-of-the-art weakly supervised methods.",Joint Opinion Relation Detection Using One-Class Deep Neural Network,"Detecting opinion relations is a crucial step for fine-grained opinion summarization. A valid opinion relation has three requirements: a correct opinion word, a correct opinion target and the linking relation between them. Previous works tend to verify only two of these requirements for opinion extraction, while leaving the other requirement unverified. This could inevitably introduce noise terms. To tackle this problem, this paper proposes a joint approach, where all three requirements are simultaneously verified by a deep neural network in a classification scenario. Some seeds are provided as positive labeled data for the classifier. However, negative labeled data are hard to acquire for this task. We consequently introduce a one-class classification problem and develop a One-Class Deep Neural Network. Experimental results show that the proposed joint approach significantly outperforms state-of-the-art weakly supervised methods.",This work was sponsored by the National Natural Science Foundation of China (No. 61202329 and No. 61333018) and CCF-Tencent Open Research Fund.,"Joint Opinion Relation Detection Using One-Class Deep Neural Network. Detecting opinion relations is a crucial step for fine-grained opinion summarization. A valid opinion relation has three requirements: a correct opinion word, a correct opinion target and the linking relation between them. 
Previous works tend to verify only two of these requirements for opinion extraction, while leaving the other requirement unverified. This could inevitably introduce noise terms. To tackle this problem, this paper proposes a joint approach, where all three requirements are simultaneously verified by a deep neural network in a classification scenario. Some seeds are provided as positive labeled data for the classifier. However, negative labeled data are hard to acquire for this task. We consequently introduce a one-class classification problem and develop a One-Class Deep Neural Network. Experimental results show that the proposed joint approach significantly outperforms state-of-the-art weakly supervised methods.",2014
muraki-etal-1985-augmented,https://aclanthology.org/E85-1029,0,,,,,,,"Augmented Dependency Grammar: A Simple Interface between the Grammar Rule and the Knowledge. The VENUS analysis model consists of two components, Legato and Crescendo, as shown in Fig. 1.
Legato, based on the ADG framework, constructs the semantic dependency structure of Japanese input sentences using feature-oriented dependency grammar rules as the main control information for syntactic analysis, and a semantic inference mechanism over the object field's fact knowledge base.",Augmented Dependency Grammar: A Simple Interface between the Grammar Rule and the Knowledge,"The VENUS analysis model consists of two components, Legato and Crescendo, as shown in Fig. 1.
Legato, based on the ADG framework, constructs the semantic dependency structure of Japanese input sentences using feature-oriented dependency grammar rules as the main control information for syntactic analysis, and a semantic inference mechanism over the object field's fact knowledge base.",Augmented Dependency Grammar: A Simple Interface between the Grammar Rule and the Knowledge,"The VENUS analysis model consists of two components, Legato and Crescendo, as shown in Fig. 1.
Legato, based on the ADG framework, constructs the semantic dependency structure of Japanese input sentences using feature-oriented dependency grammar rules as the main control information for syntactic analysis, and a semantic inference mechanism over the object field's fact knowledge base.",,"Augmented Dependency Grammar: A Simple Interface between the Grammar Rule and the Knowledge. The VENUS analysis model consists of two components, Legato and Crescendo, as shown in Fig. 1.
Legato, based on the ADG framework, constructs the semantic dependency structure of Japanese input sentences using feature-oriented dependency grammar rules as the main control information for syntactic analysis, and a semantic inference mechanism over the object field's fact knowledge base.",1985
choi-etal-1998-hybrid-approaches,https://aclanthology.org/P98-1039,0,,,,,,,"Hybrid Approaches to Improvement of Translation Quality in Web-based English-Korean Machine Translation. The previous English-Korean MT system, which was a transfer-based MT system applied only to written text, enumerated the following brief list of problems that did not seem easy to solve in the near future: 1) processing of non-continuous idiomatic expressions 2) reduction of too many ambiguities in English syntactic analysis 3) robust processing for failed or ill-formed sentences 4) selecting correct word correspondence between several alternatives 5) generation of Korean sentence style. These problems can be considered factors that influence the translation quality of a machine translation system. This paper describes the symbolic and statistical hybrid approaches used to solve the problems of the previous English-to-Korean machine translation system and thereby improve translation quality. The solutions are now successfully applied to the web-based English-Korean machine translation system ""FromTo/EK"", which has been under development since 1997.",Hybrid Approaches to Improvement of Translation Quality in Web-based {E}nglish-{K}orean Machine Translation,"The previous English-Korean MT system, which was a transfer-based MT system applied only to written text, enumerated the following brief list of problems that did not seem easy to solve in the near future: 1) processing of non-continuous idiomatic expressions 2) reduction of too many ambiguities in English syntactic analysis 3) robust processing for failed or ill-formed sentences 4) selecting correct word correspondence between several alternatives 5) generation of Korean sentence style. These problems can be considered factors that influence the translation quality of a machine translation system. This paper describes the symbolic and statistical hybrid approaches used to solve the problems of the previous English-to-Korean machine translation system and thereby improve translation quality. The solutions are now successfully applied to the web-based English-Korean machine translation system ""FromTo/EK"", which has been under development since 1997.",Hybrid Approaches to Improvement of Translation Quality in Web-based English-Korean Machine Translation,"The previous English-Korean MT system, which was a transfer-based MT system applied only to written text, enumerated the following brief list of problems that did not seem easy to solve in the near future: 1) processing of non-continuous idiomatic expressions 2) reduction of too many ambiguities in English syntactic analysis 3) robust processing for failed or ill-formed sentences 4) selecting correct word correspondence between several alternatives 5) generation of Korean sentence style. These problems can be considered factors that influence the translation quality of a machine translation system. This paper describes the symbolic and statistical hybrid approaches used to solve the problems of the previous English-to-Korean machine translation system and thereby improve translation quality. The solutions are now successfully applied to the web-based English-Korean machine translation system ""FromTo/EK"", which has been under development since 1997.",,"Hybrid Approaches to Improvement of Translation Quality in Web-based English-Korean Machine Translation. 
The previous English-Korean MT system, which was a transfer-based MT system applied only to written text, enumerated the following brief list of problems that did not seem easy to solve in the near future: 1) processing of non-continuous idiomatic expressions 2) reduction of too many ambiguities in English syntactic analysis 3) robust processing for failed or ill-formed sentences 4) selecting correct word correspondence between several alternatives 5) generation of Korean sentence style. These problems can be considered factors that influence the translation quality of a machine translation system. This paper describes the symbolic and statistical hybrid approaches used to solve the problems of the previous English-to-Korean machine translation system and thereby improve translation quality. The solutions are now successfully applied to the web-based English-Korean machine translation system ""FromTo/EK"", which has been under development since 1997.",1998
hoffman-1992-ccg,https://aclanthology.org/P92-1044,0,,,,,,,"A CCG Approach to Free Word Order Languages. In this paper, I present work in progress on an extension of Combinatory Categorial Grammars, CCGs, (Steedman 1985) to handle languages with freer word order than English, specifically Turkish. The approach I develop takes advantage of CCGs' ability to combine the syntactic as well as the semantic representations of adjacent elements in a sentence in an incremental manner. The linguistic claim behind my approach is that free word order in Turkish is a direct result of its grammar and lexical categories; this approach is not compatible with a linguistic theory involving movement operations and traces.
A rich system of case markings identifies the predicate-argument structure of a Turkish sentence, while the word order serves a pragmatic function. The pragmatic functions of certain positions in the sentence roughly consist of a sentence-initial position for the topic, an immediately pre-verbal position for the focus, and post-verbal positions for backgrounded information (Erguvanli 1984). The most common word order in simple transitive sentences is SOV (Subject-Object-Verb). However, all of the permutations of the sentence seen below are grammatical in the proper discourse situations. (1) a. Ayşe gazeteyi okuyor. Ayşe newspaper-acc read-present. Ayşe is reading the newspaper. b. Gazeteyi Ayşe okuyor. c. Ayşe okuyor gazeteyi. d. Gazeteyi okuyor Ayşe. e. Okuyor gazeteyi Ayşe. f. Okuyor Ayşe gazeteyi. Elements with overt case marking generally can scramble freely, even out of embedded clauses. This suggests a CCG approach where case-marked elements are functions which can combine with one another and with verbs in any order. *I thank Young-Suk Lee, Michael Niv, Jong Park, Mark Steedman, and Michael White for their valuable advice. This work was partially supported by ARO DAAL03-89-C-0031, DARPA N00014-90-J-1863, NSF IRI 90-16592, Ben Franklin 91S.3078C-1. Karttunen (1986) has proposed a Categorial Grammar formalism to handle free word order in Finnish, in which noun phrases are functors that apply to the verbal basic elements. Our approach treats case-marked noun phrases as functors as well; however, we allow verbs to maintain their status as functors in order to handle object-incorporation and the combining of nested verbs. In addition, CCGs, unlike Karttunen's grammar, allow the operations of composition and type raising which have been useful in handling a variety of linguistic phenomena including long distance dependencies and non-constituent coordination (Steedman 1985) and will play an essential role in this analysis.",A {CCG} Approach to Free Word Order Languages,"In this paper, I present work in progress on an extension of Combinatory Categorial Grammars, CCGs, (Steedman 1985) to handle languages with freer word order than English, specifically Turkish. The approach I develop takes advantage of CCGs' ability to combine the syntactic as well as the semantic representations of adjacent elements in a sentence in an incremental manner. The linguistic claim behind my approach is that free word order in Turkish is a direct result of its grammar and lexical categories; this approach is not compatible with a linguistic theory involving movement operations and traces.
A rich system of case markings identifies the predicate-argument structure of a Turkish sentence, while the word order serves a pragmatic function. The pragmatic functions of certain positions in the sentence roughly consist of a sentence-initial position for the topic, an immediately pre-verbal position for the focus, and post-verbal positions for backgrounded information (Erguvanli 1984). The most common word order in simple transitive sentences is SOV (Subject-Object-Verb). However, all of the permutations of the sentence seen below are grammatical in the proper discourse situations. (1) a. Ayşe gazeteyi okuyor. Ayşe newspaper-acc read-present. Ayşe is reading the newspaper. b. Gazeteyi Ayşe okuyor. c. Ayşe okuyor gazeteyi. d. Gazeteyi okuyor Ayşe. e. Okuyor gazeteyi Ayşe. f. Okuyor Ayşe gazeteyi. Elements with overt case marking generally can scramble freely, even out of embedded clauses. This suggests a CCG approach where case-marked elements are functions which can combine with one another and with verbs in any order. *I thank Young-Suk Lee, Michael Niv, Jong Park, Mark Steedman, and Michael White for their valuable advice. This work was partially supported by ARO DAAL03-89-C-0031, DARPA N00014-90-J-1863, NSF IRI 90-16592, Ben Franklin 91S.3078C-1. Karttunen (1986) has proposed a Categorial Grammar formalism to handle free word order in Finnish, in which noun phrases are functors that apply to the verbal basic elements. Our approach treats case-marked noun phrases as functors as well; however, we allow verbs to maintain their status as functors in order to handle object-incorporation and the combining of nested verbs. In addition, CCGs, unlike Karttunen's grammar, allow the operations of composition and type raising which have been useful in handling a variety of linguistic phenomena including long distance dependencies and non-constituent coordination (Steedman 1985) and will play an essential role in this analysis.",A CCG Approach to Free Word Order Languages,"In this paper, I present work in progress on an extension of Combinatory Categorial Grammars, CCGs, (Steedman 1985) to handle languages with freer word order than English, specifically Turkish. The approach I develop takes advantage of CCGs' ability to combine the syntactic as well as the semantic representations of adjacent elements in a sentence in an incremental manner. The linguistic claim behind my approach is that free word order in Turkish is a direct result of its grammar and lexical categories; this approach is not compatible with a linguistic theory involving movement operations and traces.
A rich system of case markings identifies the predicate-argument structure of a Turkish sentence, while the word order serves a pragmatic function. The pragmatic functions of certain positions in the sentence roughly consist of a sentence-initial position for the topic, an immediately pre-verbal position for the focus, and post-verbal positions for backgrounded information (Erguvanli 1984). The most common word order in simple transitive sentences is SOV (Subject-Object-Verb). However, all of the permutations of the sentence seen below are grammatical in the proper discourse situations. (1) a. Ayşe gazeteyi okuyor. Ayşe newspaper-acc read-present. Ayşe is reading the newspaper. b. Gazeteyi Ayşe okuyor. c. Ayşe okuyor gazeteyi. d. Gazeteyi okuyor Ayşe. e. Okuyor gazeteyi Ayşe. f. Okuyor Ayşe gazeteyi. Elements with overt case marking generally can scramble freely, even out of embedded clauses. This suggests a CCG approach where case-marked elements are functions which can combine with one another and with verbs in any order. *I thank Young-Suk Lee, Michael Niv, Jong Park, Mark Steedman, and Michael White for their valuable advice. This work was partially supported by ARO DAAL03-89-C-0031, DARPA N00014-90-J-1863, NSF IRI 90-16592, Ben Franklin 91S.3078C-1. Karttunen (1986) has proposed a Categorial Grammar formalism to handle free word order in Finnish, in which noun phrases are functors that apply to the verbal basic elements. Our approach treats case-marked noun phrases as functors as well; however, we allow verbs to maintain their status as functors in order to handle object-incorporation and the combining of nested verbs. In addition, CCGs, unlike Karttunen's grammar, allow the operations of composition and type raising which have been useful in handling a variety of linguistic phenomena including long distance dependencies and non-constituent coordination (Steedman 1985) and will play an essential role in this analysis.",,"A CCG Approach to Free Word Order Languages. In this paper, I present work in progress on an extension of Combinatory Categorial Grammars, CCGs, (Steedman 1985) to handle languages with freer word order than English, specifically Turkish. The approach I develop takes advantage of CCGs' ability to combine the syntactic as well as the semantic representations of adjacent elements in a sentence in an incremental manner. The linguistic claim behind my approach is that free word order in Turkish is a direct result of its grammar and lexical categories; this approach is not compatible with a linguistic theory involving movement operations and traces.
A rich system of case markings identifies the predicate-argument structure of a Turkish sentence, while the word order serves a pragmatic function. The pragmatic functions of certain positions in the sentence roughly consist of a sentence-initial position for the topic, an immediately pre-verbal position for the focus, and post-verbal positions for backgrounded information (Erguvanli 1984). The most common word order in simple transitive sentences is SOV (Subject-Object-Verb). However, all of the permutations of the sentence seen below are grammatical in the proper discourse situations. (1) a. Ayşe gazeteyi okuyor. Ayşe newspaper-acc read-present. Ayşe is reading the newspaper. b. Gazeteyi Ayşe okuyor. c. Ayşe okuyor gazeteyi. d. Gazeteyi okuyor Ayşe. e. Okuyor gazeteyi Ayşe. f. Okuyor Ayşe gazeteyi. Elements with overt case marking generally can scramble freely, even out of embedded clauses. This suggests a CCG approach where case-marked elements are functions which can combine with one another and with verbs in any order. *I thank Young-Suk Lee, Michael Niv, Jong Park, Mark Steedman, and Michael White for their valuable advice. This work was partially supported by ARO DAAL03-89-C-0031, DARPA N00014-90-J-1863, NSF IRI 90-16592, Ben Franklin 91S.3078C-1. Karttunen (1986) has proposed a Categorial Grammar formalism to handle free word order in Finnish, in which noun phrases are functors that apply to the verbal basic elements. Our approach treats case-marked noun phrases as functors as well; however, we allow verbs to maintain their status as functors in order to handle object-incorporation and the combining of nested verbs. In addition, CCGs, unlike Karttunen's grammar, allow the operations of composition and type raising which have been useful in handling a variety of linguistic phenomena including long distance dependencies and non-constituent coordination (Steedman 1985) and will play an essential role in this analysis.",1992
li-etal-2008-optimal,https://aclanthology.org/W08-0118,0,,,,,,,"Optimal Dialog in Consumer-Rating Systems using POMDP Framework. Voice-Rate is an experimental dialog system through which a user can call to get product information. In this paper, we describe an optimal dialog management algorithm for Voice-Rate. Our algorithm uses a POMDP framework, which is probabilistic and captures uncertainty in speech recognition and user knowledge. We propose a novel method to learn a user knowledge model from a review database. Simulation results show that the POMDP system performs significantly better than a deterministic baseline system in terms of both dialog failure rate and dialog interaction time. To the best of our knowledge, our work is the first to show that a POMDP can be successfully used for disambiguation in a complex voice search domain like Voice-Rate.",Optimal Dialog in Consumer-Rating Systems using {POMDP} Framework,"Voice-Rate is an experimental dialog system through which a user can call to get product information. In this paper, we describe an optimal dialog management algorithm for Voice-Rate. Our algorithm uses a POMDP framework, which is probabilistic and captures uncertainty in speech recognition and user knowledge. We propose a novel method to learn a user knowledge model from a review database. Simulation results show that the POMDP system performs significantly better than a deterministic baseline system in terms of both dialog failure rate and dialog interaction time. To the best of our knowledge, our work is the first to show that a POMDP can be successfully used for disambiguation in a complex voice search domain like Voice-Rate.",Optimal Dialog in Consumer-Rating Systems using POMDP Framework,"Voice-Rate is an experimental dialog system through which a user can call to get product information. In this paper, we describe an optimal dialog management algorithm for Voice-Rate. Our algorithm uses a POMDP framework, which is probabilistic and captures uncertainty in speech recognition and user knowledge. We propose a novel method to learn a user knowledge model from a review database. Simulation results show that the POMDP system performs significantly better than a deterministic baseline system in terms of both dialog failure rate and dialog interaction time. To the best of our knowledge, our work is the first to show that a POMDP can be successfully used for disambiguation in a complex voice search domain like Voice-Rate.","This work was conducted during the first author's internship at Microsoft Research; thanks to Dan Bohus, Ghinwa Choueiter, Yun-Cheng Ju, Xiao Li, Milind Mahajan, Tim Paek, Yeyi Wang, and Dong Yu for helpful discussions.","Optimal Dialog in Consumer-Rating Systems using POMDP Framework. Voice-Rate is an experimental dialog system through which a user can call to get product information. In this paper, we describe an optimal dialog management algorithm for Voice-Rate. Our algorithm uses a POMDP framework, which is probabilistic and captures uncertainty in speech recognition and user knowledge. We propose a novel method to learn a user knowledge model from a review database. Simulation results show that the POMDP system performs significantly better than a deterministic baseline system in terms of both dialog failure rate and dialog interaction time. To the best of our knowledge, our work is the first to show that a POMDP can be successfully used for disambiguation in a complex voice search domain like Voice-Rate.",2008
riloff-etal-2002-inducing,https://aclanthology.org/C02-1070,0,,,,,,,"Inducing Information Extraction Systems for New Languages via Cross-language Projection. Information extraction (IE) systems are costly to build because they require development texts, parsing tools, and specialized dictionaries for each application domain and each natural language that needs to be processed. We present a novel method for rapidly creating IE systems for new languages by exploiting existing IE systems via cross-language projection. Given an IE system for a source language (e.g., English), we can transfer its annotations to corresponding texts in a target language (e.g., French) and learn information extraction rules for the new language automatically. In this paper, we explore several ways of realizing both the transfer and learning processes using off-the-shelf machine translation systems, induced word alignment, attribute projection, and transformation-based learning. We present a variety of experiments that show how an English IE system for a plane crash domain can be leveraged to automatically create a French IE system for the same domain.",Inducing Information Extraction Systems for New Languages via Cross-language Projection,"Information extraction (IE) systems are costly to build because they require development texts, parsing tools, and specialized dictionaries for each application domain and each natural language that needs to be processed. We present a novel method for rapidly creating IE systems for new languages by exploiting existing IE systems via cross-language projection. Given an IE system for a source language (e.g., English), we can transfer its annotations to corresponding texts in a target language (e.g., French) and learn information extraction rules for the new language automatically. In this paper, we explore several ways of realizing both the transfer and learning processes using off-the-shelf machine translation systems, induced word alignment, attribute projection, and transformation-based learning. We present a variety of experiments that show how an English IE system for a plane crash domain can be leveraged to automatically create a French IE system for the same domain.",Inducing Information Extraction Systems for New Languages via Cross-language Projection,"Information extraction (IE) systems are costly to build because they require development texts, parsing tools, and specialized dictionaries for each application domain and each natural language that needs to be processed. We present a novel method for rapidly creating IE systems for new languages by exploiting existing IE systems via cross-language projection. Given an IE system for a source language (e.g., English), we can transfer its annotations to corresponding texts in a target language (e.g., French) and learn information extraction rules for the new language automatically. In this paper, we explore several ways of realizing both the transfer and learning processes using off-the-shelf machine translation systems, induced word alignment, attribute projection, and transformation-based learning. We present a variety of experiments that show how an English IE system for a plane crash domain can be leveraged to automatically create a French IE system for the same domain.",,"Inducing Information Extraction Systems for New Languages via Cross-language Projection. 
Information extraction (IE) systems are costly to build because they require development texts, parsing tools, and specialized dictionaries for each application domain and each natural language that needs to be processed. We present a novel method for rapidly creating IE systems for new languages by exploiting existing IE systems via cross-language projection. Given an IE system for a source language (e.g., English), we can transfer its annotations to corresponding texts in a target language (e.g., French) and learn information extraction rules for the new language automatically. In this paper, we explore several ways of realizing both the transfer and learning processes using off-the-shelf machine translation systems, induced word alignment, attribute projection, and transformation-based learning. We present a variety of experiments that show how an English IE system for a plane crash domain can be leveraged to automatically create a French IE system for the same domain.",2002
bakhshandeh-etal-2016-learning,https://aclanthology.org/K16-1007,0,,,,,,,"Learning to Jointly Predict Ellipsis and Comparison Structures. Domain-independent meaning representation of text has received a renewed interest in the NLP community. Comparison plays a crucial role in shaping objective and subjective opinion and measurement in natural language, and is often expressed in complex constructions including ellipsis. In this paper, we introduce a novel framework for jointly capturing the semantic structure of comparison and ellipsis constructions. Our framework models ellipsis and comparison as interconnected predicate-argument structures, which enables automatic ellipsis resolution. We show that a structured prediction model trained on our dataset of 2,800 gold annotated review sentences yields promising results. Together with this paper we release the dataset and an annotation tool which enables two-stage expert annotation on top of tree structures.",Learning to Jointly Predict Ellipsis and Comparison Structures,"Domain-independent meaning representation of text has received a renewed interest in the NLP community. Comparison plays a crucial role in shaping objective and subjective opinion and measurement in natural language, and is often expressed in complex constructions including ellipsis. In this paper, we introduce a novel framework for jointly capturing the semantic structure of comparison and ellipsis constructions. Our framework models ellipsis and comparison as interconnected predicate-argument structures, which enables automatic ellipsis resolution. We show that a structured prediction model trained on our dataset of 2,800 gold annotated review sentences yields promising results. Together with this paper we release the dataset and an annotation tool which enables two-stage expert annotation on top of tree structures.",Learning to Jointly Predict Ellipsis and Comparison Structures,"Domain-independent meaning representation of text has received a renewed interest in the NLP community. Comparison plays a crucial role in shaping objective and subjective opinion and measurement in natural language, and is often expressed in complex constructions including ellipsis. In this paper, we introduce a novel framework for jointly capturing the semantic structure of comparison and ellipsis constructions. Our framework models ellipsis and comparison as interconnected predicate-argument structures, which enables automatic ellipsis resolution. We show that a structured prediction model trained on our dataset of 2,800 gold annotated review sentences yields promising results. Together with this paper we release the dataset and an annotation tool which enables two-stage expert annotation on top of tree structures.",We thank the anonymous reviewers for their invaluable comments and Brian Rinehart and other annotators for their great work on the annotations. This work was supported in part by Grant W911NF-15-1-0542 with the US Defense Advanced Research Projects Agency (DARPA) and the Army Research Office (ARO).,"Learning to Jointly Predict Ellipsis and Comparison Structures. Domain-independent meaning representation of text has received a renewed interest in the NLP community. Comparison plays a crucial role in shaping objective and subjective opinion and measurement in natural language, and is often expressed in complex constructions including ellipsis. In this paper, we introduce a novel framework for jointly capturing the semantic structure of comparison and ellipsis constructions. 
Our framework models ellipsis and comparison as interconnected predicate-argument structures, which enables automatic ellipsis resolution. We show that a structured prediction model trained on our dataset of 2,800 gold annotated review sentences yields promising results. Together with this paper we release the dataset and an annotation tool which enables two-stage expert annotation on top of tree structures.",2016
sanchan-etal-2017-automatic,https://doi.org/10.26615/978-954-452-038-0_003,0,,,,,,,Automatic Summarization of Online Debates. ,Automatic Summarization of Online Debates,,Automatic Summarization of Online Debates,,,Automatic Summarization of Online Debates. ,2017
naskar-bandyopadhyay-2005-phrasal,https://aclanthology.org/2005.mtsummit-posters.8,0,,,,,,,"A Phrasal EBMT System for Translating English to Bengali. The present work describes a Phrasal Example Based Machine Translation system from English to Bengali that identifies the phrases in the input through a shallow analysis, retrieves the target phrases using a Phrasal Example base and finally combines the target language phrases employing some heuristics based on the phrase ordering rules for Bengali. The paper focuses on the structure of the noun, verb and prepositional phrases in English and how these phrases are realized in Bengali. This study has an effect on the design of the phrasal Example Base and recombination rules for the target language phrases.",A Phrasal {EBMT} System for Translating {E}nglish to {B}engali,"The present work describes a Phrasal Example Based Machine Translation system from English to Bengali that identifies the phrases in the input through a shallow analysis, retrieves the target phrases using a Phrasal Example base and finally combines the target language phrases employing some heuristics based on the phrase ordering rules for Bengali. The paper focuses on the structure of the noun, verb and prepositional phrases in English and how these phrases are realized in Bengali. This study has an effect on the design of the phrasal Example Base and recombination rules for the target language phrases.",A Phrasal EBMT System for Translating English to Bengali,"The present work describes a Phrasal Example Based Machine Translation system from English to Bengali that identifies the phrases in the input through a shallow analysis, retrieves the target phrases using a Phrasal Example base and finally combines the target language phrases employing some heuristics based on the phrase ordering rules for Bengali. The paper focuses on the structure of the noun, verb and prepositional phrases in English and how these phrases are realized in Bengali. This study has an effect on the design of the phrasal Example Base and recombination rules for the target language phrases.",,"A Phrasal EBMT System for Translating English to Bengali. The present work describes a Phrasal Example Based Machine Translation system from English to Bengali that identifies the phrases in the input through a shallow analysis, retrieves the target phrases using a Phrasal Example base and finally combines the target language phrases employing some heuristics based on the phrase ordering rules for Bengali. The paper focuses on the structure of the noun, verb and prepositional phrases in English and how these phrases are realized in Bengali. This study has an effect on the design of the phrasal Example Base and recombination rules for the target language phrases.",2005
wojatzki-etal-2018-quantifying,https://aclanthology.org/L18-1224,1,,,,peace_justice_and_strong_institutions,,,"Quantifying Qualitative Data for Understanding Controversial Issues. Understanding public opinion on complex controversial issues such as 'Legalization of Marijuana' and 'Gun Rights' is of considerable importance for a number of objectives such as identifying the most divisive facets of the issue, developing a consensus, and making informed policy decisions. However, an individual's position on a controversial issue is often not just a binary support-or-oppose stance on the issue, but rather a conglomerate of nuanced opinions and beliefs on various aspects of the issue. These opinions and beliefs are often expressed qualitatively in free text in issue-focused surveys or on social media. However, quantifying vast amounts of qualitative information remains a significant challenge. The goal of this work is to provide a new approach for quantifying qualitative data for the understanding of controversial issues. First, we show how we can engage people directly through crowdsourcing to create a comprehensive dataset of assertions (claims, opinions, arguments, etc.) relevant to an issue. Next, the assertions are judged for agreement and strength of support or opposition, again by crowdsourcing. The collected Dataset of Nuanced Assertions on Controversial Issues (NAoCI dataset) consists of over 2,000 assertions on sixteen different controversial issues. It has over 100,000 judgments of whether people agree or disagree with the assertions, and of about 70,000 judgments indicating how strongly people support or oppose the assertions. This dataset allows for several useful analyses that help summarize public opinion. Across the sixteen issues, we find that when people judge a large set of assertions they often do not disagree with the individual assertions that the opposite side makes, but that they differently judge the relative importance of these assertions. We show how assertions that cause dissent or consensus can be identified by ranking the whole set of assertions based on the collected judgments. We also show how free-text assertions in social media can be analyzed in conjunction with the crowdsourced information to quantify and summarize public opinion on controversial issues.",Quantifying Qualitative Data for Understanding Controversial Issues,"Understanding public opinion on complex controversial issues such as 'Legalization of Marijuana' and 'Gun Rights' is of considerable importance for a number of objectives such as identifying the most divisive facets of the issue, developing a consensus, and making informed policy decisions. However, an individual's position on a controversial issue is often not just a binary support-or-oppose stance on the issue, but rather a conglomerate of nuanced opinions and beliefs on various aspects of the issue. These opinions and beliefs are often expressed qualitatively in free text in issue-focused surveys or on social media. However, quantifying vast amounts of qualitative information remains a significant challenge. The goal of this work is to provide a new approach for quantifying qualitative data for the understanding of controversial issues. First, we show how we can engage people directly through crowdsourcing to create a comprehensive dataset of assertions (claims, opinions, arguments, etc.) relevant to an issue. Next, the assertions are judged for agreement and strength of support or opposition, again by crowdsourcing. 
The collected Dataset of Nuanced Assertions on Controversial Issues (NAoCI dataset) consists of over 2,000 assertions on sixteen different controversial issues. It has over 100,000 judgments of whether people agree or disagree with the assertions, and of about 70,000 judgments indicating how strongly people support or oppose the assertions. This dataset allows for several useful analyses that help summarize public opinion. Across the sixteen issues, we find that when people judge a large set of assertions they often do not disagree with the individual assertions that the opposite side makes, but that they differently judge the relative importance of these assertions. We show how assertions that cause dissent or consensus can be identified by ranking the whole set of assertions based on the collected judgments. We also show how free-text assertions in social media can be analyzed in conjunction with the crowdsourced information to quantify and summarize public opinion on controversial issues.",Quantifying Qualitative Data for Understanding Controversial Issues,"Understanding public opinion on complex controversial issues such as 'Legalization of Marijuana' and 'Gun Rights' is of considerable importance for a number of objectives such as identifying the most divisive facets of the issue, developing a consensus, and making informed policy decisions. However, an individual's position on a controversial issue is often not just a binary support-or-oppose stance on the issue, but rather a conglomerate of nuanced opinions and beliefs on various aspects of the issue. These opinions and beliefs are often expressed qualitatively in free text in issue-focused surveys or on social media. However, quantifying vast amounts of qualitative information remains a significant challenge. The goal of this work is to provide a new approach for quantifying qualitative data for the understanding of controversial issues. First, we show how we can engage people directly through crowdsourcing to create a comprehensive dataset of assertions (claims, opinions, arguments, etc.) relevant to an issue. Next, the assertions are judged for agreement and strength of support or opposition, again by crowdsourcing. The collected Dataset of Nuanced Assertions on Controversial Issues (NAoCI dataset) consists of over 2,000 assertions on sixteen different controversial issues. It has over 100,000 judgments of whether people agree or disagree with the assertions, and of about 70,000 judgments indicating how strongly people support or oppose the assertions. This dataset allows for several useful analyses that help summarize public opinion. Across the sixteen issues, we find that when people judge a large set of assertions they often do not disagree with the individual assertions that the opposite side makes, but that they differently judge the relative importance of these assertions. We show how assertions that cause dissent or consensus can be identified by ranking the whole set of assertions based on the collected judgments. We also show how free-text assertions in social media can be analyzed in conjunction with the crowdsourced information to quantify and summarize public opinion on controversial issues.",,"Quantifying Qualitative Data for Understanding Controversial Issues. 
Understanding public opinion on complex controversial issues such as 'Legalization of Marijuana' and 'Gun Rights' is of considerable importance for a number of objectives such as identifying the most divisive facets of the issue, developing a consensus, and making informed policy decisions. However, an individual's position on a controversial issue is often not just a binary support-or-oppose stance on the issue, but rather a conglomerate of nuanced opinions and beliefs on various aspects of the issue. These opinions and beliefs are often expressed qualitatively in free text in issue-focused surveys or on social media. However, quantifying vast amounts of qualitative information remains a significant challenge. The goal of this work is to provide a new approach for quantifying qualitative data for the understanding of controversial issues. First, we show how we can engage people directly through crowdsourcing to create a comprehensive dataset of assertions (claims, opinions, arguments, etc.) relevant to an issue. Next, the assertions are judged for agreement and strength of support or opposition, again by crowdsourcing. The collected Dataset of Nuanced Assertions on Controversial Issues (NAoCI dataset) consists of over 2,000 assertions on sixteen different controversial issues. It has over 100,000 judgments of whether people agree or disagree with the assertions, and of about 70,000 judgments indicating how strongly people support or oppose the assertions. This dataset allows for several useful analyses that help summarize public opinion. Across the sixteen issues, we find that when people judge a large set of assertions they often do not disagree with the individual assertions that the opposite side makes, but that they differently judge the relative importance of these assertions. We show how assertions that cause dissent or consensus can be identified by ranking the whole set of assertions based on the collected judgments. We also show how free-text assertions in social media can be analyzed in conjunction with the crowdsourced information to quantify and summarize public opinion on controversial issues.",2018
jang-etal-1999-using,https://aclanthology.org/P99-1029,0,,,,,,,"Using Mutual Information to Resolve Query Translation Ambiguities and Query Term Weighting. An easy way of translating queries in one language to the other for cross-language information retrieval (IR) is to use a simple bilingual dictionary. Because of the general-purpose nature of such dictionaries, however, this simple method yields a severe translation ambiguity problem. This paper describes the degree to which this problem arises in Korean-English cross-language IR and suggests a relatively simple yet effective method for disambiguation using mutual information statistics obtained only from the target document collection. In this method, mutual information is used not only to select the best candidate but also to assign a weight to query terms in the target language. Our experimental results based on the TREC-6 collection show that this method can achieve up to 85% of the monolingual retrieval case and 96% of the manual disambiguation case.",Using Mutual Information to Resolve Query Translation Ambiguities and Query Term Weighting,"An easy way of translating queries in one language to the other for cross-language information retrieval (IR) is to use a simple bilingual dictionary. Because of the general-purpose nature of such dictionaries, however, this simple method yields a severe translation ambiguity problem. This paper describes the degree to which this problem arises in Korean-English cross-language IR and suggests a relatively simple yet effective method for disambiguation using mutual information statistics obtained only from the target document collection. In this method, mutual information is used not only to select the best candidate but also to assign a weight to query terms in the target language. Our experimental results based on the TREC-6 collection show that this method can achieve up to 85% of the monolingual retrieval case and 96% of the manual disambiguation case.",Using Mutual Information to Resolve Query Translation Ambiguities and Query Term Weighting,"An easy way of translating queries in one language to the other for cross-language information retrieval (IR) is to use a simple bilingual dictionary. Because of the general-purpose nature of such dictionaries, however, this simple method yields a severe translation ambiguity problem. This paper describes the degree to which this problem arises in Korean-English cross-language IR and suggests a relatively simple yet effective method for disambiguation using mutual information statistics obtained only from the target document collection. In this method, mutual information is used not only to select the best candidate but also to assign a weight to query terms in the target language. Our experimental results based on the TREC-6 collection show that this method can achieve up to 85% of the monolingual retrieval case and 96% of the manual disambiguation case.",,"Using Mutual Information to Resolve Query Translation Ambiguities and Query Term Weighting. An easy way of translating queries in one language to the other for cross-language information retrieval (IR) is to use a simple bilingual dictionary. Because of the general-purpose nature of such dictionaries, however, this simple method yields a severe translation ambiguity problem. 
This paper describes the degree to which this problem arises in Korean-English cross-language IR and suggests a relatively simple yet effective method for disambiguation using mutual information statistics obtained only from the target document collection. In this method, mutual information is used not only to select the best candidate but also to assign a weight to query terms in the target language. Our experimental results based on the TREC-6 collection show that this method can achieve up to 85% of the monolingual retrieval case and 96% of the manual disambiguation case.",1999
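Note: the jang-etal-1999 record above describes selecting among dictionary translation candidates by mutual information computed only from the target document collection, and reusing the same statistic as a query-term weight. The Python sketch below is an illustration of that idea, not the paper's implementation; the toy corpus, the document-level PMI formulation, and all function names are assumptions.

```python
# Illustrative sketch (not the paper's code): pick the translation candidate that
# maximizes co-occurrence-based mutual information with the other query terms'
# candidates, computed from target-collection statistics, and reuse the MI mass
# as a crude term weight.
import math
from collections import Counter
from itertools import combinations

def build_stats(documents):
    """Count term and term-pair document frequencies in the target collection."""
    term_df, pair_df = Counter(), Counter()
    for doc in documents:
        terms = set(doc.lower().split())
        term_df.update(terms)
        pair_df.update(frozenset(p) for p in combinations(sorted(terms), 2))
    return term_df, pair_df, len(documents)

def pmi(a, b, term_df, pair_df, n_docs):
    joint = pair_df[frozenset((a, b))]
    if joint == 0 or term_df[a] == 0 or term_df[b] == 0:
        return 0.0
    # Ignore negative association; only positive co-occurrence supports a candidate.
    return max(0.0, math.log((joint * n_docs) / (term_df[a] * term_df[b])))

def disambiguate(candidate_sets, stats):
    """candidate_sets: one list of translation candidates per source query term."""
    term_df, pair_df, n = stats
    chosen, weights = [], []
    for i, cands in enumerate(candidate_sets):
        others = [w for j, cs in enumerate(candidate_sets) if j != i for w in cs]
        scored = [(sum(pmi(c, o, term_df, pair_df, n) for o in others), c) for c in cands]
        score, best = max(scored)
        chosen.append(best)
        weights.append(max(score, 1e-3))   # small fallback weight for isolated terms
    total = sum(weights)
    return list(zip(chosen, [w / total for w in weights]))

docs = ["the bank raised interest rates", "bank interest rates rose again", "river erosion field study"]
print(disambiguate([["bank", "shore"], ["interest", "hobby"]], build_stats(docs)))
```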
yamamoto-etal-2021-dependency,https://aclanthology.org/2021.starsem-1.20,0,,,,,,,"Dependency Patterns of Complex Sentences and Semantic Disambiguation for Abstract Meaning Representation Parsing. Abstract Meaning Representation (AMR) is a sentence-level meaning representation based on predicate argument structure. One of the challenges we find in AMR parsing is to capture the structure of complex sentences which expresses the relation between predicates. Knowing the core part of the sentence structure in advance may be beneficial in such a task. In this paper, we present a list of dependency patterns for English complex sentence constructions designed for AMR parsing. With a dedicated pattern matcher, all occurrences of complex sentence constructions are retrieved from an input sentence. While some of the subordinators have semantic ambiguities, we deal with this problem through training classification models on data derived from AMR and Wikipedia corpus, establishing a new baseline for future works. The developed complex sentence patterns and the corresponding AMR descriptions will be made public.",Dependency Patterns of Complex Sentences and Semantic Disambiguation for {A}bstract {M}eaning {R}epresentation Parsing,"Abstract Meaning Representation (AMR) is a sentence-level meaning representation based on predicate argument structure. One of the challenges we find in AMR parsing is to capture the structure of complex sentences which expresses the relation between predicates. Knowing the core part of the sentence structure in advance may be beneficial in such a task. In this paper, we present a list of dependency patterns for English complex sentence constructions designed for AMR parsing. With a dedicated pattern matcher, all occurrences of complex sentence constructions are retrieved from an input sentence. While some of the subordinators have semantic ambiguities, we deal with this problem through training classification models on data derived from AMR and Wikipedia corpus, establishing a new baseline for future works. The developed complex sentence patterns and the corresponding AMR descriptions will be made public.",Dependency Patterns of Complex Sentences and Semantic Disambiguation for Abstract Meaning Representation Parsing,"Abstract Meaning Representation (AMR) is a sentence-level meaning representation based on predicate argument structure. One of the challenges we find in AMR parsing is to capture the structure of complex sentences which expresses the relation between predicates. Knowing the core part of the sentence structure in advance may be beneficial in such a task. In this paper, we present a list of dependency patterns for English complex sentence constructions designed for AMR parsing. With a dedicated pattern matcher, all occurrences of complex sentence constructions are retrieved from an input sentence. While some of the subordinators have semantic ambiguities, we deal with this problem through training classification models on data derived from AMR and Wikipedia corpus, establishing a new baseline for future works. The developed complex sentence patterns and the corresponding AMR descriptions will be made public.",,"Dependency Patterns of Complex Sentences and Semantic Disambiguation for Abstract Meaning Representation Parsing. Abstract Meaning Representation (AMR) is a sentence-level meaning representation based on predicate argument structure. One of the challenges we find in AMR parsing is to capture the structure of complex sentences which expresses the relation between predicates. 
Knowing the core part of the sentence structure in advance may be beneficial in such a task. In this paper, we present a list of dependency patterns for English complex sentence constructions designed for AMR parsing. With a dedicated pattern matcher, all occurrences of complex sentence constructions are retrieved from an input sentence. While some of the subordinators have semantic ambiguities, we deal with this problem through training classification models on data derived from AMR and Wikipedia corpus, establishing a new baseline for future works. The developed complex sentence patterns and the corresponding AMR descriptions will be made public.",2021
finch-etal-2011-nict,https://aclanthology.org/2011.iwslt-evaluation.5,0,,,,,,,"The NICT translation system for IWSLT 2011. This paper describes NICT's participation in the IWSLT 2011 evaluation campaign for the TED speech translation Chinese-English shared-task. Our approach was based on a phrase-based statistical machine translation system that was augmented in two ways. Firstly we introduced rule-based reordering constraints on the decoding. This consisted of a set of rules that were used to segment the input utterances into segments that could be decoded almost independently. This idea here being that constraining the decoding process in this manner would greatly reduce the search space of the decoder, and cut out many possibilities for error while at the same time allowing for a correct output to be generated. The rules we used exploit punctuation and spacing in the input utterances, and we use these positions to delimit our segments. Not all punctuation/spacing positions were used as segment boundaries, and the set of used positions were determined by a set of linguistically-based heuristics. Secondly we used two heterogeneous methods to build the translation model, and lexical reordering model for our systems. The first method employed the popular method of using GIZA++ for alignment in combination with phrase-extraction heuristics. The second method used a recently-developed Bayesian alignment technique that is able to perform both phrase-to-phrase alignment and phrase pair extraction within a single unsupervised process. The models produced by this type of alignment technique are typically very compact whilst at the same time maintaining a high level of translation quality. We evaluated both of these methods of translation model construction in isolation, and our results show their performance is comparable. We also integrated both models by linear interpolation to obtain a model that outperforms either component. Finally, we added an indicator feature into the log-linear model to indicate those phrases that were in the intersection of the two translation models. The addition of this feature was also able to provide a small improvement in performance.",The {NICT} translation system for {IWSLT} 2011,"This paper describes NICT's participation in the IWSLT 2011 evaluation campaign for the TED speech translation Chinese-English shared-task. Our approach was based on a phrase-based statistical machine translation system that was augmented in two ways. Firstly we introduced rule-based reordering constraints on the decoding. This consisted of a set of rules that were used to segment the input utterances into segments that could be decoded almost independently. This idea here being that constraining the decoding process in this manner would greatly reduce the search space of the decoder, and cut out many possibilities for error while at the same time allowing for a correct output to be generated. The rules we used exploit punctuation and spacing in the input utterances, and we use these positions to delimit our segments. Not all punctuation/spacing positions were used as segment boundaries, and the set of used positions were determined by a set of linguistically-based heuristics. Secondly we used two heterogeneous methods to build the translation model, and lexical reordering model for our systems. The first method employed the popular method of using GIZA++ for alignment in combination with phrase-extraction heuristics. 
The second method used a recently-developed Bayesian alignment technique that is able to perform both phrase-to-phrase alignment and phrase pair extraction within a single unsupervised process. The models produced by this type of alignment technique are typically very compact whilst at the same time maintaining a high level of translation quality. We evaluated both of these methods of translation model construction in isolation, and our results show their performance is comparable. We also integrated both models by linear interpolation to obtain a model that outperforms either component. Finally, we added an indicator feature into the log-linear model to indicate those phrases that were in the intersection of the two translation models. The addition of this feature was also able to provide a small improvement in performance.",The NICT translation system for IWSLT 2011,"This paper describes NICT's participation in the IWSLT 2011 evaluation campaign for the TED speech translation Chinese-English shared-task. Our approach was based on a phrase-based statistical machine translation system that was augmented in two ways. Firstly we introduced rule-based reordering constraints on the decoding. This consisted of a set of rules that were used to segment the input utterances into segments that could be decoded almost independently. This idea here being that constraining the decoding process in this manner would greatly reduce the search space of the decoder, and cut out many possibilities for error while at the same time allowing for a correct output to be generated. The rules we used exploit punctuation and spacing in the input utterances, and we use these positions to delimit our segments. Not all punctuation/spacing positions were used as segment boundaries, and the set of used positions were determined by a set of linguistically-based heuristics. Secondly we used two heterogeneous methods to build the translation model, and lexical reordering model for our systems. The first method employed the popular method of using GIZA++ for alignment in combination with phrase-extraction heuristics. The second method used a recently-developed Bayesian alignment technique that is able to perform both phrase-to-phrase alignment and phrase pair extraction within a single unsupervised process. The models produced by this type of alignment technique are typically very compact whilst at the same time maintaining a high level of translation quality. We evaluated both of these methods of translation model construction in isolation, and our results show their performance is comparable. We also integrated both models by linear interpolation to obtain a model that outperforms either component. Finally, we added an indicator feature into the log-linear model to indicate those phrases that were in the intersection of the two translation models. The addition of this feature was also able to provide a small improvement in performance.",This work was performed while the first author was supported by the JSPS Research Fellowship for Young Scientists.,"The NICT translation system for IWSLT 2011. This paper describes NICT's participation in the IWSLT 2011 evaluation campaign for the TED speech translation Chinese-English shared-task. Our approach was based on a phrase-based statistical machine translation system that was augmented in two ways. Firstly we introduced rule-based reordering constraints on the decoding. 
This consisted of a set of rules that were used to segment the input utterances into segments that could be decoded almost independently. This idea here being that constraining the decoding process in this manner would greatly reduce the search space of the decoder, and cut out many possibilities for error while at the same time allowing for a correct output to be generated. The rules we used exploit punctuation and spacing in the input utterances, and we use these positions to delimit our segments. Not all punctuation/spacing positions were used as segment boundaries, and the set of used positions were determined by a set of linguistically-based heuristics. Secondly we used two heterogeneous methods to build the translation model, and lexical reordering model for our systems. The first method employed the popular method of using GIZA++ for alignment in combination with phrase-extraction heuristics. The second method used a recently-developed Bayesian alignment technique that is able to perform both phrase-to-phrase alignment and phrase pair extraction within a single unsupervised process. The models produced by this type of alignment technique are typically very compact whilst at the same time maintaining a high level of translation quality. We evaluated both of these methods of translation model construction in isolation, and our results show their performance is comparable. We also integrated both models by linear interpolation to obtain a model that outperforms either component. Finally, we added an indicator feature into the log-linear model to indicate those phrases that were in the intersection of the two translation models. The addition of this feature was also able to provide a small improvement in performance.",2011
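Note: the finch-etal-2011 record above mentions combining two independently built translation models by linear interpolation, plus an indicator feature for phrase pairs found in both. The sketch below illustrates only that combination step under stated assumptions; the table format, interpolation weight, and feature names are hypothetical and not NICT's actual system.

```python
# Illustrative sketch (not NICT's system): linearly interpolate phrase translation
# probabilities from two independently built phrase tables and attach an indicator
# feature marking phrase pairs found in both tables.
def interpolate_phrase_tables(giza_table, bayes_table, lam=0.5):
    """Each table maps (src_phrase, tgt_phrase) -> P(tgt|src). Returns merged entries."""
    merged = {}
    for pair in set(giza_table) | set(bayes_table):
        p1 = giza_table.get(pair, 0.0)
        p2 = bayes_table.get(pair, 0.0)
        merged[pair] = {
            "p_interp": lam * p1 + (1.0 - lam) * p2,   # interpolated probability
            "in_both": 1.0 if pair in giza_table and pair in bayes_table else 0.0,
        }
    return merged

giza = {("ni hao", "hello"): 0.7, ("xie xie", "thanks"): 0.6}
bayes = {("ni hao", "hello"): 0.8, ("xie xie", "thank you"): 0.5}
for pair, feats in interpolate_phrase_tables(giza, bayes).items():
    print(pair, feats)
```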
banik-etal-2016-smt,https://aclanthology.org/W16-6303,0,,,,,,,"Can SMT and RBMT Improve each other's Performance?- An Experiment with English-Hindi Translation. Rule-based machine translation (RBMT) and Statistical machine translation (SMT) are two well-known approaches for translation which have their own benefits. System architecture of SMT often complements RBMT, and the vice-versa. In this paper, we propose an effective method of serial coupling where we attempt to build a hybrid model that exploits the benefits of both the architectures. The first part of coupling is used to obtain good lexical selection and robustness, second part is used to improve syntax and the final one is designed to combine other modules along with the best phrase reordering. Our experiments on a English-Hindi product domain dataset show the effectiveness of the proposed approach with improvement in BLEU score.",Can {SMT} and {RBMT} Improve each other{'}s Performance?- An Experiment with {E}nglish-{H}indi Translation,"Rule-based machine translation (RBMT) and Statistical machine translation (SMT) are two well-known approaches for translation which have their own benefits. System architecture of SMT often complements RBMT, and the vice-versa. In this paper, we propose an effective method of serial coupling where we attempt to build a hybrid model that exploits the benefits of both the architectures. The first part of coupling is used to obtain good lexical selection and robustness, second part is used to improve syntax and the final one is designed to combine other modules along with the best phrase reordering. Our experiments on a English-Hindi product domain dataset show the effectiveness of the proposed approach with improvement in BLEU score.",Can SMT and RBMT Improve each other's Performance?- An Experiment with English-Hindi Translation,"Rule-based machine translation (RBMT) and Statistical machine translation (SMT) are two well-known approaches for translation which have their own benefits. System architecture of SMT often complements RBMT, and the vice-versa. In this paper, we propose an effective method of serial coupling where we attempt to build a hybrid model that exploits the benefits of both the architectures. The first part of coupling is used to obtain good lexical selection and robustness, second part is used to improve syntax and the final one is designed to combine other modules along with the best phrase reordering. Our experiments on a English-Hindi product domain dataset show the effectiveness of the proposed approach with improvement in BLEU score.",,"Can SMT and RBMT Improve each other's Performance?- An Experiment with English-Hindi Translation. Rule-based machine translation (RBMT) and Statistical machine translation (SMT) are two well-known approaches for translation which have their own benefits. System architecture of SMT often complements RBMT, and the vice-versa. In this paper, we propose an effective method of serial coupling where we attempt to build a hybrid model that exploits the benefits of both the architectures. The first part of coupling is used to obtain good lexical selection and robustness, second part is used to improve syntax and the final one is designed to combine other modules along with the best phrase reordering. Our experiments on a English-Hindi product domain dataset show the effectiveness of the proposed approach with improvement in BLEU score.",2016
dhuliawala-etal-2015-judge,https://aclanthology.org/W15-5925,0,,,,,,,"Judge a Book by its Cover: Conservative Focused Crawling under Resource Constraints. In this paper, we propose a domain specific crawler that decides the domain relevance of a URL without downloading the page. In contrast, a focused crawler relies on the content of the page to make the same decision. To achieve this, we use a classifier model which harnesses features such as the page's URL and its parents' information to score a page. The classifier model is incrementally trained at each depth in order to learn the facets of the domain. Our approach modifies the focused crawler by circumventing the need for extra resource usage in terms of bandwidth. We test the performance of our approach on Wikipedia data. Our Conservative Focused Crawler (CFC) shows a performance equivalent to that of a focused crawler (skyline system) with an average resource usage reduction of ≈30% across two domains viz., tourism and sports.",Judge a Book by its Cover: Conservative Focused Crawling under Resource Constraints,"In this paper, we propose a domain specific crawler that decides the domain relevance of a URL without downloading the page. In contrast, a focused crawler relies on the content of the page to make the same decision. To achieve this, we use a classifier model which harnesses features such as the page's URL and its parents' information to score a page. The classifier model is incrementally trained at each depth in order to learn the facets of the domain. Our approach modifies the focused crawler by circumventing the need for extra resource usage in terms of bandwidth. We test the performance of our approach on Wikipedia data. Our Conservative Focused Crawler (CFC) shows a performance equivalent to that of a focused crawler (skyline system) with an average resource usage reduction of ≈30% across two domains viz., tourism and sports.",Judge a Book by its Cover: Conservative Focused Crawling under Resource Constraints,"In this paper, we propose a domain specific crawler that decides the domain relevance of a URL without downloading the page. In contrast, a focused crawler relies on the content of the page to make the same decision. To achieve this, we use a classifier model which harnesses features such as the page's URL and its parents' information to score a page. The classifier model is incrementally trained at each depth in order to learn the facets of the domain. Our approach modifies the focused crawler by circumventing the need for extra resource usage in terms of bandwidth. We test the performance of our approach on Wikipedia data. Our Conservative Focused Crawler (CFC) shows a performance equivalent to that of a focused crawler (skyline system) with an average resource usage reduction of ≈30% across two domains viz., tourism and sports.",,"Judge a Book by its Cover: Conservative Focused Crawling under Resource Constraints. In this paper, we propose a domain specific crawler that decides the domain relevance of a URL without downloading the page. In contrast, a focused crawler relies on the content of the page to make the same decision. To achieve this, we use a classifier model which harnesses features such as the page's URL and its parents' information to score a page. The classifier model is incrementally trained at each depth in order to learn the facets of the domain. Our approach modifies the focused crawler by circumventing the need for extra resource usage in terms of bandwidth. 
We test the performance of our approach on Wikipedia data. Our Conservative Focused Crawler (CFC) shows a performance equivalent to that of a focused crawler (skyline system) with an average resource usage reduction of ≈30% across two domains viz., tourism and sports.",2015
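Note: the dhuliawala-etal-2015 record above describes scoring a URL's domain relevance from the URL itself and its parents' information, without downloading the page. The sketch below is one plausible reading of that setup using a bag-of-words logistic regression; the feature scheme, training examples, and scoring are assumptions rather than the paper's CFC implementation.

```python
# Illustrative sketch (not the paper's CFC): score a URL's domain relevance from its
# own tokens plus tokens describing its parent page, without fetching the page.
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def url_features(url, parent_title):
    tokens = re.split(r"[/\-_.?=&:]+", url.lower())
    return " ".join(t for t in tokens if t) + " PARENT " + parent_title.lower()

# Tiny hypothetical training set: (url, parent page title, relevant-to-tourism label).
train = [
    ("https://en.wikipedia.org/wiki/Tourism_in_Italy", "Tourism", 1),
    ("https://en.wikipedia.org/wiki/Beach_resort", "Tourism in Spain", 1),
    ("https://en.wikipedia.org/wiki/Quantum_mechanics", "Physics", 0),
    ("https://en.wikipedia.org/wiki/Stock_market", "Economics", 0),
]
X = [url_features(u, p) for u, p, _ in train]
y = [label for _, _, label in train]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(X, y)

# Score an unseen frontier URL: higher probability -> crawl it sooner.
candidate = url_features("https://en.wikipedia.org/wiki/Hotel_rating", "Tourism")
print("crawl priority:", model.predict_proba([candidate])[0][1])
```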
lakew-etal-2017-fbks,https://aclanthology.org/2017.iwslt-1.5,0,,,,,,,"FBK's Multilingual Neural Machine Translation System for IWSLT 2017. Neural Machine Translation has been shown to enable inference and cross-lingual knowledge transfer across multiple language directions using a single multilingual model. Focusing on this multilingual translation scenario, this work summarizes FBK's participation in the IWSLT 2017 shared task. Our submissions rely on two multilingual systems trained on five languages (English, Dutch, German, Italian, and Romanian). The first one is a 20 language direction model, which handles all possible combinations of the five languages. The second multilingual system is trained only on 16 directions, leaving the others as zero-shot translation directions (i.e. representing a more complex inference task on language pairs not seen at training time). More specifically, our zero-shot directions are Dutch↔German and Italian↔Romanian (resulting in four language combinations). Despite the small amount of parallel data used for training these systems, the resulting multilingual models are effective, even in comparison with models trained separately for every language pair (i.e. in more favorable conditions). We compare and show the results of the two multilingual models against a baseline single language pair systems. Particularly, we focus on the four zero-shot directions and show how a multilingual model trained with small data can provide reasonable results. Furthermore, we investigate how pivoting (i.e. using a bridge/pivot language for inference in a source→pivot→target translations) using a multilingual model can be an alternative to enable zero-shot translation in a low resource setting.",{FBK}{'}s Multilingual Neural Machine Translation System for {IWSLT} 2017,"Neural Machine Translation has been shown to enable inference and cross-lingual knowledge transfer across multiple language directions using a single multilingual model. Focusing on this multilingual translation scenario, this work summarizes FBK's participation in the IWSLT 2017 shared task. Our submissions rely on two multilingual systems trained on five languages (English, Dutch, German, Italian, and Romanian). The first one is a 20 language direction model, which handles all possible combinations of the five languages. The second multilingual system is trained only on 16 directions, leaving the others as zero-shot translation directions (i.e. representing a more complex inference task on language pairs not seen at training time). More specifically, our zero-shot directions are Dutch↔German and Italian↔Romanian (resulting in four language combinations). Despite the small amount of parallel data used for training these systems, the resulting multilingual models are effective, even in comparison with models trained separately for every language pair (i.e. in more favorable conditions). We compare and show the results of the two multilingual models against a baseline single language pair systems. Particularly, we focus on the four zero-shot directions and show how a multilingual model trained with small data can provide reasonable results. 
Furthermore, we investigate how pivoting (i.e. using a bridge/pivot language for inference in a source→pivot→target translations) using a multilingual model can be an alternative to enable zero-shot translation in a low resource setting.",FBK's Multilingual Neural Machine Translation System for IWSLT 2017,"Neural Machine Translation has been shown to enable inference and cross-lingual knowledge transfer across multiple language directions using a single multilingual model. Focusing on this multilingual translation scenario, this work summarizes FBK's participation in the IWSLT 2017 shared task. Our submissions rely on two multilingual systems trained on five languages (English, Dutch, German, Italian, and Romanian). The first one is a 20 language direction model, which handles all possible combinations of the five languages. The second multilingual system is trained only on 16 directions, leaving the others as zero-shot translation directions (i.e. representing a more complex inference task on language pairs not seen at training time). More specifically, our zero-shot directions are Dutch↔German and Italian↔Romanian (resulting in four language combinations). Despite the small amount of parallel data used for training these systems, the resulting multilingual models are effective, even in comparison with models trained separately for every language pair (i.e. in more favorable conditions). We compare and show the results of the two multilingual models against a baseline single language pair systems. Particularly, we focus on the four zero-shot directions and show how a multilingual model trained with small data can provide reasonable results. Furthermore, we investigate how pivoting (i.e. using a bridge/pivot language for inference in a source→pivot→target translations) using a multilingual model can be an alternative to enable zero-shot translation in a low resource setting.",This work has been partially supported by the EC-funded projects ModernMT (H2020 grant agreement no. 645487) and QT21 (H2020 grant agreement no. 645452). The Titan Xp used for this research was donated by the NVIDIA Corporation. This work was also supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1 and by a donation of Azure credits by Microsoft.,"FBK's Multilingual Neural Machine Translation System for IWSLT 2017. Neural Machine Translation has been shown to enable inference and cross-lingual knowledge transfer across multiple language directions using a single multilingual model. Focusing on this multilingual translation scenario, this work summarizes FBK's participation in the IWSLT 2017 shared task. Our submissions rely on two multilingual systems trained on five languages (English, Dutch, German, Italian, and Romanian). The first one is a 20 language direction model, which handles all possible combinations of the five languages. The second multilingual system is trained only on 16 directions, leaving the others as zero-shot translation directions (i.e. representing a more complex inference task on language pairs not seen at training time). More specifically, our zero-shot directions are Dutch↔German and Italian↔Romanian (resulting in four language combinations). Despite the small amount of parallel data used for training these systems, the resulting multilingual models are effective, even in comparison with models trained separately for every language pair (i.e. in more favorable conditions). 
We compare and show the results of the two multilingual models against a baseline single language pair systems. Particularly, we focus on the four zero-shot directions and show how a multilingual model trained with small data can provide reasonable results. Furthermore, we investigate how pivoting (i.e. using a bridge/pivot language for inference in a source→pivot→target translations) using a multilingual model can be an alternative to enable zero-shot translation in a low resource setting.",2017
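Note: the lakew-etal-2017 record above relies on a single multilingual NMT model serving many directions, including zero-shot ones. The sketch below shows the widely used target-language-token preprocessing that such systems typically build on; the tag format and examples are assumptions, since the abstract does not spell out the exact mechanism.

```python
# Illustrative sketch: the common preprocessing step behind single-model multilingual
# NMT, in which a target-language token is prepended to each source sentence so one
# model can serve many directions (and, by extension, zero-shot ones). The tag format
# is an assumption, not taken from the FBK system description.
def tag_for_direction(src_sentence, tgt_lang):
    return f"<2{tgt_lang}> {src_sentence}"

training_pairs = [
    ("en", "nl", "the cat sleeps", "de kat slaapt"),
    ("it", "ro", "il gatto dorme", "pisica doarme"),
]
corpus = [(tag_for_direction(src, tgt), ref) for _, tgt, src, ref in training_pairs]
print(corpus)

# At inference time the same tag can request a direction never seen in training,
# e.g. Dutch -> German, which is what makes zero-shot translation possible.
print(tag_for_direction("de kat slaapt", "de"))
```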
hasan-ng-2014-taking,https://aclanthology.org/D14-1083,1,,,,peace_justice_and_strong_institutions,,,"Why are You Taking this Stance? Identifying and Classifying Reasons in Ideological Debates. Recent years have seen a surge of interest in stance classification in online debates. Oftentimes, however, it is important to determine not only the stance expressed by an author in her debate posts, but also the reasons behind her supporting or opposing the issue under debate. We therefore examine the new task of reason classification in this paper. Given the close interplay between stance classification and reason classification, we design computational models for examining how automatically computed stance information can be profitably exploited for reason classification. Experiments on our reason-annotated corpus of ideological debate posts from four domains demonstrate that sophisticated models of stances and reasons can indeed yield more accurate reason and stance classification results than their simpler counterparts.",Why are You Taking this Stance? Identifying and Classifying Reasons in Ideological Debates,"Recent years have seen a surge of interest in stance classification in online debates. Oftentimes, however, it is important to determine not only the stance expressed by an author in her debate posts, but also the reasons behind her supporting or opposing the issue under debate. We therefore examine the new task of reason classification in this paper. Given the close interplay between stance classification and reason classification, we design computational models for examining how automatically computed stance information can be profitably exploited for reason classification. Experiments on our reason-annotated corpus of ideological debate posts from four domains demonstrate that sophisticated models of stances and reasons can indeed yield more accurate reason and stance classification results than their simpler counterparts.",Why are You Taking this Stance? Identifying and Classifying Reasons in Ideological Debates,"Recent years have seen a surge of interest in stance classification in online debates. Oftentimes, however, it is important to determine not only the stance expressed by an author in her debate posts, but also the reasons behind her supporting or opposing the issue under debate. We therefore examine the new task of reason classification in this paper. Given the close interplay between stance classification and reason classification, we design computational models for examining how automatically computed stance information can be profitably exploited for reason classification. Experiments on our reason-annotated corpus of ideological debate posts from four domains demonstrate that sophisticated models of stances and reasons can indeed yield more accurate reason and stance classification results than their simpler counterparts.","We thank the three anonymous reviewers for their detailed and insightful comments on an earlier draft of this paper. This work was supported in part by NSF Grants IIS-1147644 and IIS-1219142. Any opinions, findings, conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views or official policies, either expressed or implied, of NSF.","Why are You Taking this Stance? Identifying and Classifying Reasons in Ideological Debates. Recent years have seen a surge of interest in stance classification in online debates. 
Oftentimes, however, it is important to determine not only the stance expressed by an author in her debate posts, but also the reasons behind her supporting or opposing the issue under debate. We therefore examine the new task of reason classification in this paper. Given the close interplay between stance classification and reason classification, we design computational models for examining how automatically computed stance information can be profitably exploited for reason classification. Experiments on our reason-annotated corpus of ideological debate posts from four domains demonstrate that sophisticated models of stances and reasons can indeed yield more accurate reason and stance classification results than their simpler counterparts.",2014
glavas-etal-2012-experiments,https://aclanthology.org/W12-0501,0,,,,,,,"Experiments on Hybrid Corpus-Based Sentiment Lexicon Acquisition. Numerous sentiment analysis applications make usage of a sentiment lexicon. In this paper we present experiments on hybrid sentiment lexicon acquisition. The approach is corpus-based and thus suitable for languages lacking general dictionary-based resources. The approach is a hybrid two-step process that combines semi-supervised graph-based algorithms and supervised models. We evaluate the performance on three tasks that capture different aspects of a sentiment lexicon: polarity ranking task, polarity regression task, and sentiment classification task. Extensive evaluation shows that the results are comparable to those of a well-known sentiment lexicon SentiWordNet on the polarity ranking task. On the sentiment classification task, the results are also comparable to SentiWordNet when restricted to monosentimous (all senses carry the same sentiment) words. This is satisfactory, given the absence of explicit semantic relations between words in the corpus.",Experiments on Hybrid Corpus-Based Sentiment Lexicon Acquisition,"Numerous sentiment analysis applications make usage of a sentiment lexicon. In this paper we present experiments on hybrid sentiment lexicon acquisition. The approach is corpus-based and thus suitable for languages lacking general dictionary-based resources. The approach is a hybrid two-step process that combines semi-supervised graph-based algorithms and supervised models. We evaluate the performance on three tasks that capture different aspects of a sentiment lexicon: polarity ranking task, polarity regression task, and sentiment classification task. Extensive evaluation shows that the results are comparable to those of a well-known sentiment lexicon SentiWordNet on the polarity ranking task. On the sentiment classification task, the results are also comparable to SentiWordNet when restricted to monosentimous (all senses carry the same sentiment) words. This is satisfactory, given the absence of explicit semantic relations between words in the corpus.",Experiments on Hybrid Corpus-Based Sentiment Lexicon Acquisition,"Numerous sentiment analysis applications make usage of a sentiment lexicon. In this paper we present experiments on hybrid sentiment lexicon acquisition. The approach is corpus-based and thus suitable for languages lacking general dictionary-based resources. The approach is a hybrid two-step process that combines semi-supervised graph-based algorithms and supervised models. We evaluate the performance on three tasks that capture different aspects of a sentiment lexicon: polarity ranking task, polarity regression task, and sentiment classification task. Extensive evaluation shows that the results are comparable to those of a well-known sentiment lexicon SentiWordNet on the polarity ranking task. On the sentiment classification task, the results are also comparable to SentiWordNet when restricted to monosentimous (all senses carry the same sentiment) words. This is satisfactory, given the absence of explicit semantic relations between words in the corpus.","We thank the anonymous reviewers for their useful comments. This work has been supported by the Ministry of Science, Education and Sports, Republic of Croatia under the Grant 036-1300646-1986. ","Experiments on Hybrid Corpus-Based Sentiment Lexicon Acquisition. Numerous sentiment analysis applications make usage of a sentiment lexicon. 
In this paper we present experiments on hybrid sentiment lexicon acquisition. The approach is corpus-based and thus suitable for languages lacking general dictionary-based resources. The approach is a hybrid two-step process that combines semi-supervised graph-based algorithms and supervised models. We evaluate the performance on three tasks that capture different aspects of a sentiment lexicon: polarity ranking task, polarity regression task, and sentiment classification task. Extensive evaluation shows that the results are comparable to those of a well-known sentiment lexicon SentiWordNet on the polarity ranking task. On the sentiment classification task, the results are also comparable to SentiWordNet when restricted to monosentimous (all senses carry the same sentiment) words. This is satisfactory, given the absence of explicit semantic relations between words in the corpus.",2012
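Note: the glavas-etal-2012 record above combines semi-supervised graph-based algorithms with supervised models for sentiment lexicon acquisition. The sketch below is a generic polarity-propagation step over a word graph, offered only as an illustration of what a graph-based component can look like; the seed words, edge weights, damping factor, and update rule are assumptions, not the paper's algorithm.

```python
# Illustrative sketch (not the paper's method): a minimal semi-supervised
# polarity-propagation step over a word graph built from corpus co-occurrence.
def propagate_polarity(edges, seeds, iterations=20, alpha=0.85):
    """edges: word -> list of (neighbour, similarity); seeds: word -> +1/-1 polarity."""
    scores = {w: seeds.get(w, 0.0) for w in edges}
    for _ in range(iterations):
        new_scores = {}
        for word, nbrs in edges.items():
            if word in seeds:                       # keep seed polarities fixed
                new_scores[word] = seeds[word]
                continue
            total = sum(sim for _, sim in nbrs) or 1.0
            new_scores[word] = alpha * sum(sim * scores.get(n, 0.0) for n, sim in nbrs) / total
        scores = new_scores
    return scores

# Toy graph: edge weights stand in for corpus-derived word similarities.
edges = {
    "good": [("great", 0.9), ("nice", 0.8)],
    "great": [("good", 0.9)],
    "nice": [("good", 0.8), ("awful", 0.1)],
    "awful": [("bad", 0.9), ("nice", 0.1)],
    "bad": [("awful", 0.9)],
}
print(propagate_polarity(edges, {"good": 1.0, "bad": -1.0}))
```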
trieu-etal-2016-dealing,https://aclanthology.org/Y16-2024,0,,,,,,,"Dealing with Out-Of-Vocabulary Problem in Sentence Alignment Using Word Similarity. Sentence alignment plays an essential role in building bilingual corpora which are valuable resources for many applications like statistical machine translation. In various approaches of sentence alignment, length-and-word-based methods which are based on sentence length and word correspondences have been shown to be the most effective. Nevertheless a drawback of using bilingual dictionaries trained by IBM Models in length-and-word-based methods is the problem of out-of-vocabulary (OOV). We propose using word similarity learned from monolingual corpora to overcome the problem. Experimental results showed that our method can reduce the OOV ratio and achieve a better performance than some other length-and-word-based methods. This implies that using word similarity learned from monolingual data may help to deal with OOV problem in sentence alignment.",Dealing with Out-Of-Vocabulary Problem in Sentence Alignment Using Word Similarity,"Sentence alignment plays an essential role in building bilingual corpora which are valuable resources for many applications like statistical machine translation. In various approaches of sentence alignment, length-and-word-based methods which are based on sentence length and word correspondences have been shown to be the most effective. Nevertheless a drawback of using bilingual dictionaries trained by IBM Models in length-and-word-based methods is the problem of out-of-vocabulary (OOV). We propose using word similarity learned from monolingual corpora to overcome the problem. Experimental results showed that our method can reduce the OOV ratio and achieve a better performance than some other length-and-word-based methods. This implies that using word similarity learned from monolingual data may help to deal with OOV problem in sentence alignment.",Dealing with Out-Of-Vocabulary Problem in Sentence Alignment Using Word Similarity,"Sentence alignment plays an essential role in building bilingual corpora which are valuable resources for many applications like statistical machine translation. In various approaches of sentence alignment, length-and-word-based methods which are based on sentence length and word correspondences have been shown to be the most effective. Nevertheless a drawback of using bilingual dictionaries trained by IBM Models in length-and-word-based methods is the problem of out-of-vocabulary (OOV). We propose using word similarity learned from monolingual corpora to overcome the problem. Experimental results showed that our method can reduce the OOV ratio and achieve a better performance than some other length-and-word-based methods. This implies that using word similarity learned from monolingual data may help to deal with OOV problem in sentence alignment.",,"Dealing with Out-Of-Vocabulary Problem in Sentence Alignment Using Word Similarity. Sentence alignment plays an essential role in building bilingual corpora which are valuable resources for many applications like statistical machine translation. In various approaches of sentence alignment, length-and-word-based methods which are based on sentence length and word correspondences have been shown to be the most effective. Nevertheless a drawback of using bilingual dictionaries trained by IBM Models in length-and-word-based methods is the problem of out-of-vocabulary (OOV). 
We propose using word similarity learned from monolingual corpora to overcome the problem. Experimental results showed that our method can reduce the OOV ratio and achieve a better performance than some other length-and-word-based methods. This implies that using word similarity learned from monolingual data may help to deal with OOV problem in sentence alignment.",2016
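Note: the trieu-etal-2016 record above proposes using word similarity learned from monolingual data to cover out-of-vocabulary words in length-and-word-based sentence alignment. The sketch below illustrates one way such a back-off could enter a lexical alignment score; the lexicon, similarity table, threshold, and scoring function are assumptions rather than the paper's exact formulation.

```python
# Illustrative sketch (not the paper's formulation): when a source word is missing
# from the bilingual lexicon learned by IBM models, back off to its most similar
# in-vocabulary word (similarity from monolingual embeddings) and reuse that word's
# translation probabilities when scoring a candidate sentence pair.
def translation_prob(src, tgt, lexicon, similar, sim_threshold=0.6):
    if src in lexicon:                              # in-vocabulary: use the IBM-model estimate
        return lexicon[src].get(tgt, 0.0)
    substitute, sim = similar.get(src, (None, 0.0))
    if substitute in lexicon and sim >= sim_threshold:
        return sim * lexicon[substitute].get(tgt, 0.0)   # discounted by similarity
    return 0.0

def word_score(src_sentence, tgt_sentence, lexicon, similar):
    """Average best lexical translation probability, one piece of a length-and-word score."""
    scores = []
    for s in src_sentence.split():
        scores.append(max(translation_prob(s, t, lexicon, similar) for t in tgt_sentence.split()))
    return sum(scores) / len(scores)

lexicon = {"house": {"nha": 0.8}, "big": {"lon": 0.7}}
similar = {"home": ("house", 0.9)}                  # "home" is OOV; its nearest neighbour is "house"
print(word_score("big home", "nha lon", lexicon, similar))
```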
ernestus-etal-2014-nijmegen,http://www.lrec-conf.org/proceedings/lrec2014/pdf/134_Paper.pdf,0,,,,,,,"The Nijmegen Corpus of Casual Czech. This article introduces a new speech corpus, the Nijmegen Corpus of Casual Czech (NCCCz), which contains more than 30 hours of high-quality recordings of casual conversations in Common Czech, among ten groups of three male and ten groups of three female friends. All speakers were native speakers of Czech, raised in Prague or in the region of Central Bohemia, and were between 19 and 26 years old. Every group of speakers consisted of one confederate, who was instructed to keep the conversations lively, and two speakers naive to the purposes of the recordings. The naive speakers were engaged in conversations for approximately 90 minutes, while the confederate joined them for approximately the last 72 minutes. The corpus was orthographically annotated by experienced transcribers and this orthographic transcription was aligned with the speech signal. In addition, the conversations were videotaped. This corpus can form the basis for all types of research on casual conversations in Czech, including phonetic research and research on how to improve automatic speech recognition. The corpus will be freely available.",The Nijmegen Corpus of Casual {C}zech,"This article introduces a new speech corpus, the Nijmegen Corpus of Casual Czech (NCCCz), which contains more than 30 hours of high-quality recordings of casual conversations in Common Czech, among ten groups of three male and ten groups of three female friends. All speakers were native speakers of Czech, raised in Prague or in the region of Central Bohemia, and were between 19 and 26 years old. Every group of speakers consisted of one confederate, who was instructed to keep the conversations lively, and two speakers naive to the purposes of the recordings. The naive speakers were engaged in conversations for approximately 90 minutes, while the confederate joined them for approximately the last 72 minutes. The corpus was orthographically annotated by experienced transcribers and this orthographic transcription was aligned with the speech signal. In addition, the conversations were videotaped. This corpus can form the basis for all types of research on casual conversations in Czech, including phonetic research and research on how to improve automatic speech recognition. The corpus will be freely available.",The Nijmegen Corpus of Casual Czech,"This article introduces a new speech corpus, the Nijmegen Corpus of Casual Czech (NCCCz), which contains more than 30 hours of high-quality recordings of casual conversations in Common Czech, among ten groups of three male and ten groups of three female friends. All speakers were native speakers of Czech, raised in Prague or in the region of Central Bohemia, and were between 19 and 26 years old. Every group of speakers consisted of one confederate, who was instructed to keep the conversations lively, and two speakers naive to the purposes of the recordings. The naive speakers were engaged in conversations for approximately 90 minutes, while the confederate joined them for approximately the last 72 minutes. The corpus was orthographically annotated by experienced transcribers and this orthographic transcription was aligned with the speech signal. In addition, the conversations were videotaped. This corpus can form the basis for all types of research on casual conversations in Czech, including phonetic research and research on how to improve automatic speech recognition. 
The corpus will be freely available.","Our thanks to the staff at the Phonetic Institute at Charles University in Prague for their help during the recordings of the corpus in Prague. Our special thanks to Lou Boves for valuable discussions. This work was funded by a European Young Investigator Award given to the first author. In addition, it was supported by two Czech grants","The Nijmegen Corpus of Casual Czech. This article introduces a new speech corpus, the Nijmegen Corpus of Casual Czech (NCCCz), which contains more than 30 hours of high-quality recordings of casual conversations in Common Czech, among ten groups of three male and ten groups of three female friends. All speakers were native speakers of Czech, raised in Prague or in the region of Central Bohemia, and were between 19 and 26 years old. Every group of speakers consisted of one confederate, who was instructed to keep the conversations lively, and two speakers naive to the purposes of the recordings. The naive speakers were engaged in conversations for approximately 90 minutes, while the confederate joined them for approximately the last 72 minutes. The corpus was orthographically annotated by experienced transcribers and this orthographic transcription was aligned with the speech signal. In addition, the conversations were videotaped. This corpus can form the basis for all types of research on casual conversations in Czech, including phonetic research and research on how to improve automatic speech recognition. The corpus will be freely available.",2014
nakano-etal-2022-pseudo,https://aclanthology.org/2022.dialdoc-1.4,0,,,,,,,"Pseudo Ambiguous and Clarifying Questions Based on Sentence Structures Toward Clarifying Question Answering System. Question answering (QA) with disambiguation questions is essential for practical QA systems because user questions often do not contain information enough to find their answers. We call this task clarifying question answering, a task to find answers to ambiguous user questions by disambiguating their intents through interactions. There are two major problems in building a clarifying question answering system: data preparation of possible ambiguous questions and the generation of clarifying questions. In this paper, we tackle these problems by sentence generation methods using sentence structures. Ambiguous questions are generated by eliminating a part of a sentence considering the sentence structure. Clarifying the question generation method based on case frame dictionary and sentence structure is also proposed. Our experimental results verify that our pseudo ambiguous question generation successfully adds ambiguity to questions. Moreover, the proposed clarifying question generation recovers the performance drop by asking the user for missing information.",Pseudo Ambiguous and Clarifying Questions Based on Sentence Structures Toward Clarifying Question Answering System,"Question answering (QA) with disambiguation questions is essential for practical QA systems because user questions often do not contain information enough to find their answers. We call this task clarifying question answering, a task to find answers to ambiguous user questions by disambiguating their intents through interactions. There are two major problems in building a clarifying question answering system: data preparation of possible ambiguous questions and the generation of clarifying questions. In this paper, we tackle these problems by sentence generation methods using sentence structures. Ambiguous questions are generated by eliminating a part of a sentence considering the sentence structure. Clarifying the question generation method based on case frame dictionary and sentence structure is also proposed. Our experimental results verify that our pseudo ambiguous question generation successfully adds ambiguity to questions. Moreover, the proposed clarifying question generation recovers the performance drop by asking the user for missing information.",Pseudo Ambiguous and Clarifying Questions Based on Sentence Structures Toward Clarifying Question Answering System,"Question answering (QA) with disambiguation questions is essential for practical QA systems because user questions often do not contain information enough to find their answers. We call this task clarifying question answering, a task to find answers to ambiguous user questions by disambiguating their intents through interactions. There are two major problems in building a clarifying question answering system: data preparation of possible ambiguous questions and the generation of clarifying questions. In this paper, we tackle these problems by sentence generation methods using sentence structures. Ambiguous questions are generated by eliminating a part of a sentence considering the sentence structure. Clarifying the question generation method based on case frame dictionary and sentence structure is also proposed. Our experimental results verify that our pseudo ambiguous question generation successfully adds ambiguity to questions. 
Moreover, the proposed clarifying question generation recovers the performance drop by asking the user for missing information.",,"Pseudo Ambiguous and Clarifying Questions Based on Sentence Structures Toward Clarifying Question Answering System. Question answering (QA) with disambiguation questions is essential for practical QA systems because user questions often do not contain information enough to find their answers. We call this task clarifying question answering, a task to find answers to ambiguous user questions by disambiguating their intents through interactions. There are two major problems in building a clarifying question answering system: data preparation of possible ambiguous questions and the generation of clarifying questions. In this paper, we tackle these problems by sentence generation methods using sentence structures. Ambiguous questions are generated by eliminating a part of a sentence considering the sentence structure. Clarifying the question generation method based on case frame dictionary and sentence structure is also proposed. Our experimental results verify that our pseudo ambiguous question generation successfully adds ambiguity to questions. Moreover, the proposed clarifying question generation recovers the performance drop by asking the user for missing information.",2022
balusu-2012-complex,https://aclanthology.org/C12-3001,0,,,,,,,"Complex Predicates in Telugu: A Computational Perspective. Complex predicates raise the question of how to encode them in computational lexicons. Their computational implementation in South Asian languages is in its infancy. This paper examines in detail the variety of complex predicates in Telugu revealing the syntactic process of their composition and the constraints on their formation. The framework used is First Phase Syntax (Ramchand 2008). In this lexical semantic approach that ties together the constraints on the meaning and the argument structure of complex predicates, each verb breaks down into 3 sub-event heads which determine the nature of the verb. Complex predicates are formed by one verb subsuming the sub-event heads of another verb, and this is constrained in principled ways. The data analysed and the constraints developed in the paper are of use to linguists working on computational solutions for Telugu and other languages, for design and development of predicate structure functions in linguistic processors.",Complex Predicates in {T}elugu: A Computational Perspective,"Complex predicates raise the question of how to encode them in computational lexicons. Their computational implementation in South Asian languages is in its infancy. This paper examines in detail the variety of complex predicates in Telugu revealing the syntactic process of their composition and the constraints on their formation. The framework used is First Phase Syntax (Ramchand 2008). In this lexical semantic approach that ties together the constraints on the meaning and the argument structure of complex predicates, each verb breaks down into 3 sub-event heads which determine the nature of the verb. Complex predicates are formed by one verb subsuming the sub-event heads of another verb, and this is constrained in principled ways. The data analysed and the constraints developed in the paper are of use to linguists working on computational solutions for Telugu and other languages, for design and development of predicate structure functions in linguistic processors.",Complex Predicates in Telugu: A Computational Perspective,"Complex predicates raise the question of how to encode them in computational lexicons. Their computational implementation in South Asian languages is in its infancy. This paper examines in detail the variety of complex predicates in Telugu revealing the syntactic process of their composition and the constraints on their formation. The framework used is First Phase Syntax (Ramchand 2008). In this lexical semantic approach that ties together the constraints on the meaning and the argument structure of complex predicates, each verb breaks down into 3 sub-event heads which determine the nature of the verb. Complex predicates are formed by one verb subsuming the sub-event heads of another verb, and this is constrained in principled ways. The data analysed and the constraints developed in the paper are of use to linguists working on computational solutions for Telugu and other languages, for design and development of predicate structure functions in linguistic processors.",,"Complex Predicates in Telugu: A Computational Perspective. Complex predicates raise the question of how to encode them in computational lexicons. Their computational implementation in South Asian languages is in its infancy. 
This paper examines in detail the variety of complex predicates in Telugu revealing the syntactic process of their composition and the constraints on their formation. The framework used is First Phase Syntax (Ramchand 2008). In this lexical semantic approach that ties together the constraints on the meaning and the argument structure of complex predicates, each verb breaks down into 3 sub-event heads which determine the nature of the verb. Complex predicates are formed by one verb subsuming the sub-event heads of another verb, and this is constrained in principled ways. The data analysed and the constraints developed in the paper are of use to linguists working on computational solutions for Telugu and other languages, for design and development of predicate structure functions in linguistic processors.",2012
laban-etal-2020-summary,https://aclanthology.org/2020.acl-main.460,0,,,,,,,"The Summary Loop: Learning to Write Abstractive Summaries Without Examples. This work presents a new approach to unsupervised abstractive summarization based on maximizing a combination of coverage and fluency for a given length constraint. It introduces a novel method that encourages the inclusion of key terms from the original document into the summary: key terms are masked out of the original document and must be filled in by a coverage model using the current generated summary. A novel unsupervised training procedure leverages this coverage model along with a fluency model to generate and score summaries. When tested on popular news summarization datasets, the method outperforms previous unsupervised methods by more than 2 R-1 points, and approaches results of competitive supervised methods. Our model attains higher levels of abstraction with copied passages roughly two times shorter than prior work, and learns to compress and merge sentences without supervision.",The Summary Loop: Learning to Write Abstractive Summaries Without Examples,"This work presents a new approach to unsupervised abstractive summarization based on maximizing a combination of coverage and fluency for a given length constraint. It introduces a novel method that encourages the inclusion of key terms from the original document into the summary: key terms are masked out of the original document and must be filled in by a coverage model using the current generated summary. A novel unsupervised training procedure leverages this coverage model along with a fluency model to generate and score summaries. When tested on popular news summarization datasets, the method outperforms previous unsupervised methods by more than 2 R-1 points, and approaches results of competitive supervised methods. Our model attains higher levels of abstraction with copied passages roughly two times shorter than prior work, and learns to compress and merge sentences without supervision.",The Summary Loop: Learning to Write Abstractive Summaries Without Examples,"This work presents a new approach to unsupervised abstractive summarization based on maximizing a combination of coverage and fluency for a given length constraint. It introduces a novel method that encourages the inclusion of key terms from the original document into the summary: key terms are masked out of the original document and must be filled in by a coverage model using the current generated summary. A novel unsupervised training procedure leverages this coverage model along with a fluency model to generate and score summaries. When tested on popular news summarization datasets, the method outperforms previous unsupervised methods by more than 2 R-1 points, and approaches results of competitive supervised methods. Our model attains higher levels of abstraction with copied passages roughly two times shorter than prior work, and learns to compress and merge sentences without supervision.","We would like to thank Forrest Huang, David Chan, Roshan Rao, Katie Stasaski and the ACL reviewers for their helpful comments. This work was supported by the first author's internship at Bloomberg, and a Bloomberg Data Science grant. We also gratefully acknowledge support received from an Amazon Web Services Machine Learning Research Award and an NVIDIA Corporation GPU grant.","The Summary Loop: Learning to Write Abstractive Summaries Without Examples. 
This work presents a new approach to unsupervised abstractive summarization based on maximizing a combination of coverage and fluency for a given length constraint. It introduces a novel method that encourages the inclusion of key terms from the original document into the summary: key terms are masked out of the original document and must be filled in by a coverage model using the current generated summary. A novel unsupervised training procedure leverages this coverage model along with a fluency model to generate and score summaries. When tested on popular news summarization datasets, the method outperforms previous unsupervised methods by more than 2 R-1 points, and approaches results of competitive supervised methods. Our model attains higher levels of abstraction with copied passages roughly two times shorter than prior work, and learns to compress and merge sentences without supervision.",2020
arnold-etal-1985-mul,https://aclanthology.org/1985.tmi-1.1,0,,,,,,,"A MUl View of the <C,A>, T Framework in EUROTRA. ","A {MU}l View of the {\textless}{C},A{\textgreater}, {T} Framework in {EUROTRA}",,"A MUl View of the <C,A>, T Framework in EUROTRA",,,"A MUl View of the <C,A>, T Framework in EUROTRA. ",1985
feng-etal-2012-hierarchical,https://aclanthology.org/P12-1100,0,,,,,,,"Hierarchical Chunk-to-String Translation. We present a hierarchical chunk-to-string translation model, which can be seen as a compromise between the hierarchical phrase-based model and the tree-to-string model, to combine the merits of the two models. With the help of shallow parsing, our model learns rules consisting of words and chunks and meanwhile introduces syntax cohesion. Under the weighted synchronous context-free grammar defined by these rules, our model searches for the best translation derivation and yields target translation simultaneously. Our experiments show that our model significantly outperforms the hierarchical phrase-based model and the tree-to-string model on English-Chinese translation tasks.",Hierarchical Chunk-to-String Translation,"We present a hierarchical chunk-to-string translation model, which can be seen as a compromise between the hierarchical phrase-based model and the tree-to-string model, to combine the merits of the two models. With the help of shallow parsing, our model learns rules consisting of words and chunks and meanwhile introduces syntax cohesion. Under the weighted synchronous context-free grammar defined by these rules, our model searches for the best translation derivation and yields target translation simultaneously. Our experiments show that our model significantly outperforms the hierarchical phrase-based model and the tree-to-string model on English-Chinese translation tasks.",Hierarchical Chunk-to-String Translation,"We present a hierarchical chunk-to-string translation model, which can be seen as a compromise between the hierarchical phrase-based model and the tree-to-string model, to combine the merits of the two models. With the help of shallow parsing, our model learns rules consisting of words and chunks and meanwhile introduces syntax cohesion. Under the weighted synchronous context-free grammar defined by these rules, our model searches for the best translation derivation and yields target translation simultaneously. Our experiments show that our model significantly outperforms the hierarchical phrase-based model and the tree-to-string model on English-Chinese translation tasks.","We would like to thank Trevor Cohn, Shujie Liu, Nan Duan, Lei Cui and Mo Yu for their help, and anonymous reviewers for their valuable comments and suggestions. This work was supported in part by EPSRC grant EP/I034750/1 and in part by High Technology R&D Program Project No. 2011AA01A207.","Hierarchical Chunk-to-String Translation. We present a hierarchical chunk-to-string translation model, which can be seen as a compromise between the hierarchical phrase-based model and the tree-to-string model, to combine the merits of the two models. With the help of shallow parsing, our model learns rules consisting of words and chunks and meanwhile introduces syntax cohesion. Under the weighted synchronous context-free grammar defined by these rules, our model searches for the best translation derivation and yields target translation simultaneously. Our experiments show that our model significantly outperforms the hierarchical phrase-based model and the tree-to-string model on English-Chinese translation tasks.",2012
lindsey-etal-2012-phrase,https://aclanthology.org/D12-1020,0,,,,,,,"A Phrase-Discovering Topic Model Using Hierarchical Pitman-Yor Processes. Topic models traditionally rely on the bag-of-words assumption. In data mining applications, this often results in end-users being presented with inscrutable lists of topical unigrams, single words inferred as representative of their topics. In this article, we present a hierarchical generative probabilistic model of topical phrases. The model simultaneously infers the location, length, and topic of phrases within a corpus and relaxes the bag-of-words assumption within phrases by using a hierarchy of Pitman-Yor processes. We use Markov chain Monte Carlo techniques for approximate inference in the model and perform slice sampling to learn its hyperparameters. We show via an experiment on human subjects that our model finds substantially better, more interpretable topical phrases than do competing models.",A Phrase-Discovering Topic Model Using Hierarchical {P}itman-{Y}or Processes,"Topic models traditionally rely on the bag-of-words assumption. In data mining applications, this often results in end-users being presented with inscrutable lists of topical unigrams, single words inferred as representative of their topics. In this article, we present a hierarchical generative probabilistic model of topical phrases. The model simultaneously infers the location, length, and topic of phrases within a corpus and relaxes the bag-of-words assumption within phrases by using a hierarchy of Pitman-Yor processes. We use Markov chain Monte Carlo techniques for approximate inference in the model and perform slice sampling to learn its hyperparameters. We show via an experiment on human subjects that our model finds substantially better, more interpretable topical phrases than do competing models.",A Phrase-Discovering Topic Model Using Hierarchical Pitman-Yor Processes,"Topic models traditionally rely on the bag-of-words assumption. In data mining applications, this often results in end-users being presented with inscrutable lists of topical unigrams, single words inferred as representative of their topics. In this article, we present a hierarchical generative probabilistic model of topical phrases. The model simultaneously infers the location, length, and topic of phrases within a corpus and relaxes the bag-of-words assumption within phrases by using a hierarchy of Pitman-Yor processes. We use Markov chain Monte Carlo techniques for approximate inference in the model and perform slice sampling to learn its hyperparameters. We show via an experiment on human subjects that our model finds substantially better, more interpretable topical phrases than do competing models.","The first author is supported by an NSF Graduate Research Fellowship. The first and second authors began this project while working at J.D. Power & Associates. We are indebted to Michael Mozer, Matt Wilder, and Nicolas Nicolov for their advice.","A Phrase-Discovering Topic Model Using Hierarchical Pitman-Yor Processes. Topic models traditionally rely on the bag-of-words assumption. In data mining applications, this often results in end-users being presented with inscrutable lists of topical unigrams, single words inferred as representative of their topics. In this article, we present a hierarchical generative probabilistic model of topical phrases. 
The model simultaneously infers the location, length, and topic of phrases within a corpus and relaxes the bag-of-words assumption within phrases by using a hierarchy of Pitman-Yor processes. We use Markov chain Monte Carlo techniques for approximate inference in the model and perform slice sampling to learn its hyperparameters. We show via an experiment on human subjects that our model finds substantially better, more interpretable topical phrases than do competing models.",2012
li-etal-2012-simple,https://aclanthology.org/W12-4508,0,,,,,,,"Simple Maximum Entropy Models for Multilingual Coreference Resolution. This paper describes our system participating in the CoNLL-2012 shared task: Modeling Multilingual Unrestricted Coreference in Ontonotes. Maximum entropy models are used for our system as classifiers to determine the coreference relationship between every two mentions (usually noun phrases and pronouns) in each document. We exploit rich lexical, syntactic and semantic features for the system, and the final features are selected using a greedy forward and backward strategy from an initial feature set. Our system participated in the closed track for both English and Chinese languages.",Simple Maximum Entropy Models for Multilingual Coreference Resolution,"This paper describes our system participating in the CoNLL-2012 shared task: Modeling Multilingual Unrestricted Coreference in Ontonotes. Maximum entropy models are used for our system as classifiers to determine the coreference relationship between every two mentions (usually noun phrases and pronouns) in each document. We exploit rich lexical, syntactic and semantic features for the system, and the final features are selected using a greedy forward and backward strategy from an initial feature set. Our system participated in the closed track for both English and Chinese languages.",Simple Maximum Entropy Models for Multilingual Coreference Resolution,"This paper describes our system participating in the CoNLL-2012 shared task: Modeling Multilingual Unrestricted Coreference in Ontonotes. Maximum entropy models are used for our system as classifiers to determine the coreference relationship between every two mentions (usually noun phrases and pronouns) in each document. We exploit rich lexical, syntactic and semantic features for the system, and the final features are selected using a greedy forward and backward strategy from an initial feature set. Our system participated in the closed track for both English and Chinese languages.",,"Simple Maximum Entropy Models for Multilingual Coreference Resolution. This paper describes our system participating in the CoNLL-2012 shared task: Modeling Multilingual Unrestricted Coreference in Ontonotes. Maximum entropy models are used for our system as classifiers to determine the coreference relationship between every two mentions (usually noun phrases and pronouns) in each document. We exploit rich lexical, syntactic and semantic features for the system, and the final features are selected using a greedy forward and backward strategy from an initial feature set. Our system participated in the closed track for both English and Chinese languages.",2012
ben-ari-etal-1988-translational,https://aclanthology.org/1988.tmi-1.15,0,,,,,,,"Translational ambiguity rephrased. Presented are the special aspects of translation-oriented disambiguation, which differentiate it from conventional text-understanding-oriented disambiguation. Also presented are the necessity of interaction to cover the failure of automatic disambiguation, and the idea of disambiguation by rephrasing. The types of ambiguities to which rephrasing is applicable are defined, and the four stages of the rephrasing procedure are described for each type of ambiguity. The concept of an interactive disambiguation module, which is logically located between the parser and the transfer phase, is described. The function of this module is to bridge the gap between several possible trees and/or other ambiguities, and one well-defined tree that may be satisfactorily translated.",Translational ambiguity rephrased,"Presented are the special aspects of translation-oriented disambiguation, which differentiate it from conventional text-understanding-oriented disambiguation. Also presented are the necessity of interaction to cover the failure of automatic disambiguation, and the idea of disambiguation by rephrasing. The types of ambiguities to which rephrasing is applicable are defined, and the four stages of the rephrasing procedure are described for each type of ambiguity. The concept of an interactive disambiguation module, which is logically located between the parser and the transfer phase, is described. The function of this module is to bridge the gap between several possible trees and/or other ambiguities, and one well-defined tree that may be satisfactorily translated.",Translational ambiguity rephrased,"Presented are the special aspects of translation-oriented disambiguation, which differentiate it from conventional text-understanding-oriented disambiguation. Also presented are the necessity of interaction to cover the failure of automatic disambiguation, and the idea of disambiguation by rephrasing. The types of ambiguities to which rephrasing is applicable are defined, and the four stages of the rephrasing procedure are described for each type of ambiguity. The concept of an interactive disambiguation module, which is logically located between the parser and the transfer phase, is described. The function of this module is to bridge the gap between several possible trees and/or other ambiguities, and one well-defined tree that may be satisfactorily translated.",,"Translational ambiguity rephrased. Presented are the special aspects of translation-oriented disambiguation, which differentiate it from conventional text-understanding-oriented disambiguation. Also presented are the necessity of interaction to cover the failure of automatic disambiguation, and the idea of disambiguation by rephrasing. The types of ambiguities to which rephrasing is applicable are defined, and the four stages of the rephrasing procedure are described for each type of ambiguity. The concept of an interactive disambiguation module, which is logically located between the parser and the transfer phase, is described. The function of this module is to bridge the gap between several possible trees and/or other ambiguities, and one well-defined tree that may be satisfactorily translated.",1988
gladkova-drozd-2016-intrinsic,https://aclanthology.org/W16-2507,0,,,,,,,"Intrinsic Evaluations of Word Embeddings: What Can We Do Better?. This paper presents an analysis of existing methods for the intrinsic evaluation of word embeddings. We show that the main methodological premise of such evaluations is ""interpretability"" of word embeddings: a ""good"" embedding produces results that make sense in terms of traditional linguistic categories. This approach is not only of limited practical use, but also fails to do justice to the strengths of distributional meaning representations. We argue for a shift from abstract ratings of word embedding ""quality"" to exploration of their strengths and weaknesses.",Intrinsic Evaluations of Word Embeddings: What Can We Do Better?,"This paper presents an analysis of existing methods for the intrinsic evaluation of word embeddings. We show that the main methodological premise of such evaluations is ""interpretability"" of word embeddings: a ""good"" embedding produces results that make sense in terms of traditional linguistic categories. This approach is not only of limited practical use, but also fails to do justice to the strengths of distributional meaning representations. We argue for a shift from abstract ratings of word embedding ""quality"" to exploration of their strengths and weaknesses.",Intrinsic Evaluations of Word Embeddings: What Can We Do Better?,"This paper presents an analysis of existing methods for the intrinsic evaluation of word embeddings. We show that the main methodological premise of such evaluations is ""interpretability"" of word embeddings: a ""good"" embedding produces results that make sense in terms of traditional linguistic categories. This approach is not only of limited practical use, but also fails to do justice to the strengths of distributional meaning representations. We argue for a shift from abstract ratings of word embedding ""quality"" to exploration of their strengths and weaknesses.",,"Intrinsic Evaluations of Word Embeddings: What Can We Do Better?. This paper presents an analysis of existing methods for the intrinsic evaluation of word embeddings. We show that the main methodological premise of such evaluations is ""interpretability"" of word embeddings: a ""good"" embedding produces results that make sense in terms of traditional linguistic categories. This approach is not only of limited practical use, but also fails to do justice to the strengths of distributional meaning representations. We argue for a shift from abstract ratings of word embedding ""quality"" to exploration of their strengths and weaknesses.",2016
elita-birladeanu-2005-first,https://aclanthology.org/2005.mtsummit-swtmt.5,0,,,,,,,A First Step in Integrating an EBMT into the Semantic Web. In this paper we present the actions we made to prepare an EBMT system to be integrated into the Semantic Web. We also described briefly the developed EBMT tool for translators.,A First Step in Integrating an {EBMT} into the Semantic Web,In this paper we present the actions we made to prepare an EBMT system to be integrated into the Semantic Web. We also described briefly the developed EBMT tool for translators.,A First Step in Integrating an EBMT into the Semantic Web,In this paper we present the actions we made to prepare an EBMT system to be integrated into the Semantic Web. We also described briefly the developed EBMT tool for translators.,,A First Step in Integrating an EBMT into the Semantic Web. In this paper we present the actions we made to prepare an EBMT system to be integrated into the Semantic Web. We also described briefly the developed EBMT tool for translators.,2005
krishnakumaran-zhu-2007-hunting,https://aclanthology.org/W07-0103,0,,,,,,,"Hunting Elusive Metaphors Using Lexical Resources.. In this paper we propose algorithms to automatically classify sentences into metaphoric or normal usages. Our algorithms only need the WordNet and bigram counts, and does not require training. We present empirical results on a test set derived from the Master Metaphor List. We also discuss issues that make classification of metaphors a tough problem in general.",Hunting Elusive Metaphors Using Lexical Resources.,"In this paper we propose algorithms to automatically classify sentences into metaphoric or normal usages. Our algorithms only need the WordNet and bigram counts, and does not require training. We present empirical results on a test set derived from the Master Metaphor List. We also discuss issues that make classification of metaphors a tough problem in general.",Hunting Elusive Metaphors Using Lexical Resources.,"In this paper we propose algorithms to automatically classify sentences into metaphoric or normal usages. Our algorithms only need the WordNet and bigram counts, and does not require training. We present empirical results on a test set derived from the Master Metaphor List. We also discuss issues that make classification of metaphors a tough problem in general.",,"Hunting Elusive Metaphors Using Lexical Resources.. In this paper we propose algorithms to automatically classify sentences into metaphoric or normal usages. Our algorithms only need the WordNet and bigram counts, and does not require training. We present empirical results on a test set derived from the Master Metaphor List. We also discuss issues that make classification of metaphors a tough problem in general.",2007
cho-2017-wh,https://aclanthology.org/Y17-1044,0,,,,,,,"Wh-island Effects in Korean Scrambling Constructions. This study examines the wh-island effects in Korean. Since wh-in-situ languages like Korean allow wh-scrambling, the absence of wh-island constraints is accepted. However, it is controversial whether wh-clauses can take a matrix scope or not. In order to clarify the issue of wh-islands in Korean, the current paper designed an offline experiment with three factors: island or non-island, scrambling or non-scrambling, and embedded scope or matrix scope. The following acceptability judgment task revealed that wh-PF-island does not exist but wh-LF-island plays a role in Korean. Among results of wh-LF-island, it was observed that a majority of speakers prefer the matrix scope reading.",Wh-island Effects in {K}orean Scrambling Constructions,"This study examines the wh-island effects in Korean. Since wh-in-situ languages like Korean allow wh-scrambling, the absence of wh-island constraints is accepted. However, it is controversial whether wh-clauses can take a matrix scope or not. In order to clarify the issue of wh-islands in Korean, the current paper designed an offline experiment with three factors: island or non-island, scrambling or non-scrambling, and embedded scope or matrix scope. The following acceptability judgment task revealed that wh-PF-island does not exist but wh-LF-island plays a role in Korean. Among results of wh-LF-island, it was observed that a majority of speakers prefer the matrix scope reading.",Wh-island Effects in Korean Scrambling Constructions,"This study examines the wh-island effects in Korean. Since wh-in-situ languages like Korean allow wh-scrambling, the absence of wh-island constraints is accepted. However, it is controversial whether wh-clauses can take a matrix scope or not. In order to clarify the issue of wh-islands in Korean, the current paper designed an offline experiment with three factors: island or non-island, scrambling or non-scrambling, and embedded scope or matrix scope. The following acceptability judgment task revealed that wh-PF-island does not exist but wh-LF-island plays a role in Korean. Among results of wh-LF-island, it was observed that a majority of speakers prefer the matrix scope reading.",,"Wh-island Effects in Korean Scrambling Constructions. This study examines the wh-island effects in Korean. Since wh-in-situ languages like Korean allow wh-scrambling, the absence of wh-island constraints is accepted. However, it is controversial whether wh-clauses can take a matrix scope or not. In order to clarify the issue of wh-islands in Korean, the current paper designed an offline experiment with three factors: island or non-island, scrambling or non-scrambling, and embedded scope or matrix scope. The following acceptability judgment task revealed that wh-PF-island does not exist but wh-LF-island plays a role in Korean. Among results of wh-LF-island, it was observed that a majority of speakers prefer the matrix scope reading.",2017
kawamori-etal-1996-phonological,https://aclanthology.org/Y96-1031,0,,,,,,,"A Phonological Study on Japanese Discourse Markers. A spontaneously spoken, natural Japanese discourse contains many instances of the so-called redundant interjections and of backchannel utterances. These expressions have not hitherto received much attention and few systematic analyses have been made. We show that these utterances are characterizable as discourse markers, and that they comprise a well-defined category, characterizable in a regular manner by their phonologico-prosodic properties. Our report is based on an experiment involving spontaneously spoken conversations, recorded in a laboratory environment and analyzed using digital devices. Prosodic patterns of discourse markers occurring in the recorded conversations have been analyzed. Several pitch patterns have been found that characterize the most frequently used Japanese discourse markers.",A Phonological Study on {J}apanese Discourse Markers,"A spontaneously spoken, natural Japanese discourse contains many instances of the so-called redundant interjections and of backchannel utterances. These expressions have not hitherto received much attention and few systematic analyses have been made. We show that these utterances are characterizable as discourse markers, and that they comprise a well-defined category, characterizable in a regular manner by their phonologico-prosodic properties. Our report is based on an experiment involving spontaneously spoken conversations, recorded in a laboratory environment and analyzed using digital devices. Prosodic patterns of discourse markers occurring in the recorded conversations have been analyzed. Several pitch patterns have been found that characterize the most frequently used Japanese discourse markers.",A Phonological Study on Japanese Discourse Markers,"A spontaneously spoken, natural Japanese discourse contains many instances of the so-called redundant interjections and of backchannel utterances. These expressions have not hitherto received much attention and few systematic analyses have been made. We show that these utterances are characterizable as discourse markers, and that they comprise a well-defined category, characterizable in a regular manner by their phonologico-prosodic properties. Our report is based on an experiment involving spontaneously spoken conversations, recorded in a laboratory environment and analyzed using digital devices. Prosodic patterns of discourse markers occurring in the recorded conversations have been analyzed. Several pitch patterns have been found that characterize the most frequently used Japanese discourse markers.",,"A Phonological Study on Japanese Discourse Markers. A spontaneously spoken, natural Japanese discourse contains many instances of the so-called redundant interjections and of backchannel utterances. These expressions have not hitherto received much attention and few systematic analyses have been made. We show that these utterances are characterizable as discourse markers, and that they comprise a well-defined category, characterizable in a regular manner by their phonologico-prosodic properties. Our report is based on an experiment involving spontaneously spoken conversations, recorded in a laboratory environment and analyzed using digital devices. Prosodic patterns of discourse markers occurring in the recorded conversations have been analyzed. Several pitch patterns have been found that characterize the most frequently used Japanese discourse markers.",1996
pyysalo-etal-2009-static,https://aclanthology.org/W09-1301,1,,,,health,,,"Static Relations: a Piece in the Biomedical Information Extraction Puzzle. We propose a static relation extraction task to complement biomedical information extraction approaches. We argue that static relations such as part-whole are implicitly involved in many common extraction settings, define a task setting making them explicit, and discuss their integration into previously proposed tasks and extraction methods. We further identify a specific static relation extraction task motivated by the BioNLP'09 shared task on event extraction, introduce an annotated corpus for the task, and demonstrate the feasibility of the task by experiments showing that the defined relations can be reliably extracted. The task setting and corpus can serve to support several forms of domain information extraction.",Static Relations: a Piece in the Biomedical Information Extraction Puzzle,"We propose a static relation extraction task to complement biomedical information extraction approaches. We argue that static relations such as part-whole are implicitly involved in many common extraction settings, define a task setting making them explicit, and discuss their integration into previously proposed tasks and extraction methods. We further identify a specific static relation extraction task motivated by the BioNLP'09 shared task on event extraction, introduce an annotated corpus for the task, and demonstrate the feasibility of the task by experiments showing that the defined relations can be reliably extracted. The task setting and corpus can serve to support several forms of domain information extraction.",Static Relations: a Piece in the Biomedical Information Extraction Puzzle,"We propose a static relation extraction task to complement biomedical information extraction approaches. We argue that static relations such as part-whole are implicitly involved in many common extraction settings, define a task setting making them explicit, and discuss their integration into previously proposed tasks and extraction methods. We further identify a specific static relation extraction task motivated by the BioNLP'09 shared task on event extraction, introduce an annotated corpus for the task, and demonstrate the feasibility of the task by experiments showing that the defined relations can be reliably extracted. The task setting and corpus can serve to support several forms of domain information extraction.","Discussions with members of the BioInfer group were central for developing many of the ideas presented here. We are grateful for the efforts of Maki Niihori in producing supporting annotation applied in this work. This work was partially supported by Grant-in-Aid for Specially Promoted Research (Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan), and Genome Network Project (MEXT, Japan).","Static Relations: a Piece in the Biomedical Information Extraction Puzzle. We propose a static relation extraction task to complement biomedical information extraction approaches. We argue that static relations such as part-whole are implicitly involved in many common extraction settings, define a task setting making them explicit, and discuss their integration into previously proposed tasks and extraction methods. 
We further identify a specific static relation extraction task motivated by the BioNLP'09 shared task on event extraction, introduce an annotated corpus for the task, and demonstrate the feasibility of the task by experiments showing that the defined relations can be reliably extracted. The task setting and corpus can serve to support several forms of domain information extraction.",2009
popovic-etal-2020-neural,https://aclanthology.org/2020.vardial-1.10,0,,,,,,,"Neural Machine Translation for translating into Croatian and Serbian. In this work, we systematically investigate different setups for training of neural machine translation (NMT) systems for translation into Croatian and Serbian, two closely related South Slavic languages. We explore English and German as source languages, different sizes and types of training corpora, as well as bilingual and multilingual systems. We also explore translation of English IMDb user movie reviews, a domain/genre where only monolingual data are available. First, our results confirm that multilingual systems with joint target languages perform better. Furthermore, translation performance from English is much better than from German, partly because German is morphologically more complex and partly because the corpus consists mostly of parallel human translations instead of original text and its human translation. The translation from German should be further investigated systematically. For translating user reviews, creating synthetic in-domain parallel data through back-and forward-translation and adding them to a small out-of-domain parallel corpus can yield performance comparable with a system trained on a full out-of-domain corpus. However, it is still not clear what is the optimal size of synthetic in-domain data, especially for forward-translated data where the target language is machine translated. More detailed research including manual evaluation and analysis is needed in this direction.",Neural Machine Translation for translating into {C}roatian and {S}erbian,"In this work, we systematically investigate different setups for training of neural machine translation (NMT) systems for translation into Croatian and Serbian, two closely related South Slavic languages. We explore English and German as source languages, different sizes and types of training corpora, as well as bilingual and multilingual systems. We also explore translation of English IMDb user movie reviews, a domain/genre where only monolingual data are available. First, our results confirm that multilingual systems with joint target languages perform better. Furthermore, translation performance from English is much better than from German, partly because German is morphologically more complex and partly because the corpus consists mostly of parallel human translations instead of original text and its human translation. The translation from German should be further investigated systematically. For translating user reviews, creating synthetic in-domain parallel data through back-and forward-translation and adding them to a small out-of-domain parallel corpus can yield performance comparable with a system trained on a full out-of-domain corpus. However, it is still not clear what is the optimal size of synthetic in-domain data, especially for forward-translated data where the target language is machine translated. More detailed research including manual evaluation and analysis is needed in this direction.",Neural Machine Translation for translating into Croatian and Serbian,"In this work, we systematically investigate different setups for training of neural machine translation (NMT) systems for translation into Croatian and Serbian, two closely related South Slavic languages. We explore English and German as source languages, different sizes and types of training corpora, as well as bilingual and multilingual systems. 
We also explore translation of English IMDb user movie reviews, a domain/genre where only monolingual data are available. First, our results confirm that multilingual systems with joint target languages perform better. Furthermore, translation performance from English is much better than from German, partly because German is morphologically more complex and partly because the corpus consists mostly of parallel human translations instead of original text and its human translation. The translation from German should be further investigated systematically. For translating user reviews, creating synthetic in-domain parallel data through back-and forward-translation and adding them to a small out-of-domain parallel corpus can yield performance comparable with a system trained on a full out-of-domain corpus. However, it is still not clear what is the optimal size of synthetic in-domain data, especially for forward-translated data where the target language is machine translated. More detailed research including manual evaluation and analysis is needed in this direction.","The ADAPT SFI Centre for Digital Media Technology is funded by Science Foundation Ireland through the SFI Research Centres Programme and is co-funded under the European Regional Development Fund (ERDF) through Grant 13/RC/2106. This research was partly funded by financial support of the European Association for Machine Translation (EAMT) under its programme ""2019 Sponsorship of Activities"".","Neural Machine Translation for translating into Croatian and Serbian. In this work, we systematically investigate different setups for training of neural machine translation (NMT) systems for translation into Croatian and Serbian, two closely related South Slavic languages. We explore English and German as source languages, different sizes and types of training corpora, as well as bilingual and multilingual systems. We also explore translation of English IMDb user movie reviews, a domain/genre where only monolingual data are available. First, our results confirm that multilingual systems with joint target languages perform better. Furthermore, translation performance from English is much better than from German, partly because German is morphologically more complex and partly because the corpus consists mostly of parallel human translations instead of original text and its human translation. The translation from German should be further investigated systematically. For translating user reviews, creating synthetic in-domain parallel data through back-and forward-translation and adding them to a small out-of-domain parallel corpus can yield performance comparable with a system trained on a full out-of-domain corpus. However, it is still not clear what is the optimal size of synthetic in-domain data, especially for forward-translated data where the target language is machine translated. More detailed research including manual evaluation and analysis is needed in this direction.",2020
kate-mooney-2007-semi,https://aclanthology.org/N07-2021,0,,,,,,,"Semi-Supervised Learning for Semantic Parsing using Support Vector Machines. We present a method for utilizing unannotated sentences to improve a semantic parser which maps natural language (NL) sentences into their formal meaning representations (MRs). Given NL sentences annotated with their MRs, the initial supervised semantic parser learns the mapping by training Support Vector Machine (SVM) classifiers for every production in the MR grammar. Our new method applies the learned semantic parser to the unannotated sentences and collects unlabeled examples which are then used to retrain the classifiers using a variant of transductive SVMs. Experimental results show the improvements obtained over the purely supervised parser, particularly when the annotated training set is small.",Semi-Supervised Learning for Semantic Parsing using Support Vector Machines,"We present a method for utilizing unannotated sentences to improve a semantic parser which maps natural language (NL) sentences into their formal meaning representations (MRs). Given NL sentences annotated with their MRs, the initial supervised semantic parser learns the mapping by training Support Vector Machine (SVM) classifiers for every production in the MR grammar. Our new method applies the learned semantic parser to the unannotated sentences and collects unlabeled examples which are then used to retrain the classifiers using a variant of transductive SVMs. Experimental results show the improvements obtained over the purely supervised parser, particularly when the annotated training set is small.",Semi-Supervised Learning for Semantic Parsing using Support Vector Machines,"We present a method for utilizing unannotated sentences to improve a semantic parser which maps natural language (NL) sentences into their formal meaning representations (MRs). Given NL sentences annotated with their MRs, the initial supervised semantic parser learns the mapping by training Support Vector Machine (SVM) classifiers for every production in the MR grammar. Our new method applies the learned semantic parser to the unannotated sentences and collects unlabeled examples which are then used to retrain the classifiers using a variant of transductive SVMs. Experimental results show the improvements obtained over the purely supervised parser, particularly when the annotated training set is small.",This research was supported by a Google research grant. The experiments were run on the Mastodon cluster provided by NSF grant EIA-0303609.,"Semi-Supervised Learning for Semantic Parsing using Support Vector Machines. We present a method for utilizing unannotated sentences to improve a semantic parser which maps natural language (NL) sentences into their formal meaning representations (MRs). Given NL sentences annotated with their MRs, the initial supervised semantic parser learns the mapping by training Support Vector Machine (SVM) classifiers for every production in the MR grammar. Our new method applies the learned semantic parser to the unannotated sentences and collects unlabeled examples which are then used to retrain the classifiers using a variant of transductive SVMs. Experimental results show the improvements obtained over the purely supervised parser, particularly when the annotated training set is small.",2007
mitchell-etal-2013-community,https://aclanthology.org/2013.mtsummit-wptp.5,0,,,,,,,"Community-based post-editing of machine-translated content: monolingual vs. bilingual. We carried out a machine-translation postediting pilot study with users of an IT support forum community. For both language pairs (English to German, English to French), 4 native speakers for each language were recruited. They performed monolingual and bilingual postediting tasks on machine-translated forum content. The post-edited content was evaluated using human evaluation (fluency, comprehensibility, fidelity). We found that monolingual post-editing can lead to improved fluency and comprehensibility scores similar to those achieved through bilingual post-editing, while we found that fidelity improved considerably more for the bilingual setup. Furthermore, the performance across post-editors varied greatly and it was found that some post-editors are able to produce better quality in a monolingual setup than others.",Community-based post-editing of machine-translated content: monolingual vs. bilingual,"We carried out a machine-translation postediting pilot study with users of an IT support forum community. For both language pairs (English to German, English to French), 4 native speakers for each language were recruited. They performed monolingual and bilingual postediting tasks on machine-translated forum content. The post-edited content was evaluated using human evaluation (fluency, comprehensibility, fidelity). We found that monolingual post-editing can lead to improved fluency and comprehensibility scores similar to those achieved through bilingual post-editing, while we found that fidelity improved considerably more for the bilingual setup. Furthermore, the performance across post-editors varied greatly and it was found that some post-editors are able to produce better quality in a monolingual setup than others.",Community-based post-editing of machine-translated content: monolingual vs. bilingual,"We carried out a machine-translation postediting pilot study with users of an IT support forum community. For both language pairs (English to German, English to French), 4 native speakers for each language were recruited. They performed monolingual and bilingual postediting tasks on machine-translated forum content. The post-edited content was evaluated using human evaluation (fluency, comprehensibility, fidelity). We found that monolingual post-editing can lead to improved fluency and comprehensibility scores similar to those achieved through bilingual post-editing, while we found that fidelity improved considerably more for the bilingual setup. Furthermore, the performance across post-editors varied greatly and it was found that some post-editors are able to produce better quality in a monolingual setup than others.",This work is supported by the European Commission's Seventh Framework Programme (Grant 288769). The authors would like to thank Dr. Pratyush Banerjee for contributing the building of the clusters to group similar posts together for this post-editing study.,"Community-based post-editing of machine-translated content: monolingual vs. bilingual. We carried out a machine-translation postediting pilot study with users of an IT support forum community. For both language pairs (English to German, English to French), 4 native speakers for each language were recruited. They performed monolingual and bilingual postediting tasks on machine-translated forum content. 
The post-edited content was evaluated using human evaluation (fluency, comprehensibility, fidelity). We found that monolingual post-editing can lead to improved fluency and comprehensibility scores similar to those achieved through bilingual post-editing, while we found that fidelity improved considerably more for the bilingual setup. Furthermore, the performance across post-editors varied greatly and it was found that some post-editors are able to produce better quality in a monolingual setup than others.",2013
liu-soo-1994-corpus,https://aclanthology.org/C94-1073,0,,,,,,,"A Corpus-Based Learning Technique for Building A Self-Extensible Parser. Human intervention and/or training corpora tagged with various kinds of information were often assumed in many natural language acquisition models. This assumption is a major source of inconsistencies, errors, and inefficiency in learning. In this paper, we explore the extent to which a parser may extend itself without relying on extra input from the outside world. A learning technique called SEP is proposed and attached to the parser. The input to SEP is raw sentences, while the output is the knowledge that is missing in the parser. Since parsers and raw sentences are commonly available and no human intervention is needed in learning, SEP could make fully automatic large-scale acquisition more feasible.",A Corpus-Based Learning Technique for Building A Self-Extensible Parser,"Human intervention and/or training corpora tagged with various kinds of information were often assumed in many natural language acquisition models. This assumption is a major source of inconsistencies, errors, and inefficiency in learning. In this paper, we explore the extent to which a parser may extend itself without relying on extra input from the outside world. A learning technique called SEP is proposed and attached to the parser. The input to SEP is raw sentences, while the output is the knowledge that is missing in the parser. Since parsers and raw sentences are commonly available and no human intervention is needed in learning, SEP could make fully automatic large-scale acquisition more feasible.",A Corpus-Based Learning Technique for Building A Self-Extensible Parser,"Human intervention and/or training corpora tagged with various kinds of information were often assumed in many natural language acquisition models. This assumption is a major source of inconsistencies, errors, and inefficiency in learning. In this paper, we explore the extent to which a parser may extend itself without relying on extra input from the outside world. A learning technique called SEP is proposed and attached to the parser. The input to SEP is raw sentences, while the output is the knowledge that is missing in the parser. Since parsers and raw sentences are commonly available and no human intervention is needed in learning, SEP could make fully automatic large-scale acquisition more feasible.",Acknowledgement This research is supported in part by NSC (National Science Council of R.O.C.) under the grant NSC83-0408-E-007-008.,"A Corpus-Based Learning Technique for Building A Self-Extensible Parser. Human intervention and/or training corpora tagged with various kinds of information were often assumed in many natural language acquisition models. This assumption is a major source of inconsistencies, errors, and inefficiency in learning. In this paper, we explore the extent to which a parser may extend itself without relying on extra input from the outside world. A learning technique called SEP is proposed and attached to the parser. The input to SEP is raw sentences, while the output is the knowledge that is missing in the parser. Since parsers and raw sentences are commonly available and no human intervention is needed in learning, SEP could make fully automatic large-scale acquisition more feasible.",1994
ikehara-etal-1996-statistical,https://aclanthology.org/C96-1097,0,,,,,,,"A Statistical Method for Extracting Uninterrupted and Interrupted Collocations from Very Large Corpora. In order to extract rigid expressions with a high frequency of use, new algorithm that can efficiently extract both uninterrupted and interrupted collocations from very large corpora has been proposed. The statistical method recently proposed for calculating N-gram of arbitrary N can be applied to the extraction of uninterrupted collocations. But this method posed problems that so large volumes of fractional and unnecessary expressions are extracted that it was impossible to extract interrupted collocations combining the results. To solve this problem, this paper proposed a new algorithm that restrains extraction of unnecessary substrings. This is followed by the proposal of a method that enable to extract interrupted collocations. The new methods are applied to Japanese newspaper articles involving 8.92 million characters. In the case of uninterrupted collocations with string length of 2 or more characters and frequency of appearance 2 or more times, there were 4.4 millions types of expressions (total frequency of 31.2 millions times) extracted by the N-gram method. In contrast, the new method has reduced this to 0.97 million types (total frequency of 2.6 million times) revealing a substantial reduction in fractional and unnecessary expressions. In the case of interrupted collocational substring extraction, combining the substring with frequency of 10 times or more extracted by the first method, 6.5 thousand types of pairs of substrings with the total frequency of 21.8 thousands were extracted.",A Statistical Method for Extracting Uninterrupted and Interrupted Collocations from Very Large Corpora,"In order to extract rigid expressions with a high frequency of use, new algorithm that can efficiently extract both uninterrupted and interrupted collocations from very large corpora has been proposed. The statistical method recently proposed for calculating N-gram of arbitrary N can be applied to the extraction of uninterrupted collocations. But this method posed problems that so large volumes of fractional and unnecessary expressions are extracted that it was impossible to extract interrupted collocations combining the results. To solve this problem, this paper proposed a new algorithm that restrains extraction of unnecessary substrings. This is followed by the proposal of a method that enable to extract interrupted collocations. The new methods are applied to Japanese newspaper articles involving 8.92 million characters. In the case of uninterrupted collocations with string length of 2 or more characters and frequency of appearance 2 or more times, there were 4.4 millions types of expressions (total frequency of 31.2 millions times) extracted by the N-gram method. In contrast, the new method has reduced this to 0.97 million types (total frequency of 2.6 million times) revealing a substantial reduction in fractional and unnecessary expressions. 
In the case of interrupted collocational substring extraction, combining the substring with frequency of 10 times or more extracted by the first method, 6.5 thousand types of pairs of substrings with the total frequency of 21.8 thousands were extracted.",A Statistical Method for Extracting Uninterrupted and Interrupted Collocations from Very Large Corpora,"In order to extract rigid expressions with a high frequency of use, new algorithm that can efficiently extract both uninterrupted and interrupted collocations from very large corpora has been proposed. The statistical method recently proposed for calculating N-gram of m'bitrary N can be applied to the extraction of uninterrupted collocations. But this method posed problems that so large volumes of fractional and unnecessary expressions are extracted that it was impossible to extract interrupted collocations combining the results. To solve this problem, this paper proposed a new algorithm that restrains extraction of unnecessary substrings. This is followed by the proposal of a method that enable to extract interrupted collocations. The new methods are applied to Japanese newspaper articles involving 8.92 million characters. In the case of uninterrupted collocations with string length of 2 or mere characters and frequency of appearance 2 or more times, there were 4.4 millions types of expressions (total frequency of 31.2 millions times) extracted by the N-gram method. In contrast, the new method has reduced this to 0.97 million types (total frequency of 2.6 million times) revealing a substantial reduction in fractional and unnecessary expressions. In the case of interrupted collocational substring extraction, combining the substring with frequency of 10 times or more extracted by the first method, 6.5 thousand types of pairs of substrings with the total frequency of 21.8 thousands were extracted.",,"A Statistical Method for Extracting Uninterrupted and Interrupted Collocations from Very Large Corpora. In order to extract rigid expressions with a high frequency of use, new algorithm that can efficiently extract both uninterrupted and interrupted collocations from very large corpora has been proposed. The statistical method recently proposed for calculating N-gram of m'bitrary N can be applied to the extraction of uninterrupted collocations. But this method posed problems that so large volumes of fractional and unnecessary expressions are extracted that it was impossible to extract interrupted collocations combining the results. To solve this problem, this paper proposed a new algorithm that restrains extraction of unnecessary substrings. This is followed by the proposal of a method that enable to extract interrupted collocations. The new methods are applied to Japanese newspaper articles involving 8.92 million characters. In the case of uninterrupted collocations with string length of 2 or mere characters and frequency of appearance 2 or more times, there were 4.4 millions types of expressions (total frequency of 31.2 millions times) extracted by the N-gram method. In contrast, the new method has reduced this to 0.97 million types (total frequency of 2.6 million times) revealing a substantial reduction in fractional and unnecessary expressions. In the case of interrupted collocational substring extraction, combining the substring with frequency of 10 times or more extracted by the first method, 6.5 thousand types of pairs of substrings with the total frequency of 21.8 thousands were extracted.",1996
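Editor's note: the row above describes counting uninterrupted n-grams above a frequency threshold and then suppressing substrings that only ever occur inside longer frequent strings. The Python sketch below illustrates that general idea on toy text; the function names, thresholds, and the exact pruning rule are illustrative assumptions, not the paper's algorithm.

from collections import Counter

def frequent_ngrams(text, min_len=2, max_len=6, min_freq=2):
    # Count every character n-gram of length min_len..max_len.
    counts = Counter()
    for n in range(min_len, max_len + 1):
        for i in range(len(text) - n + 1):
            counts[text[i:i + n]] += 1
    return {s: c for s, c in counts.items() if c >= min_freq}

def prune_substrings(ngrams):
    # Drop an n-gram when a longer frequent n-gram contains it with the
    # same frequency, i.e. it never occurs on its own (a rough analogue
    # of restraining "unnecessary substrings").
    kept = {}
    for s, c in ngrams.items():
        covered = any(s != t and s in t and ngrams[t] == c for t in ngrams)
        if not covered:
            kept[s] = c
    return kept

sample = "the new method the new method a new algorithm"
print(prune_substrings(frequent_ngrams(sample)))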
ren-etal-2020-simulspeech,https://aclanthology.org/2020.acl-main.350,0,,,,,,,"SimulSpeech: End-to-End Simultaneous Speech to Text Translation. In this work, we develop SimulSpeech, an endto-end simultaneous speech to text translation system which translates speech in source language to text in target language concurrently. SimulSpeech consists of a speech encoder, a speech segmenter and a text decoder, where 1) the segmenter builds upon the encoder and leverages a connectionist temporal classification (CTC) loss to split the input streaming speech in real time, 2) the encoder-decoder attention adopts a wait-k strategy for simultaneous translation. SimulSpeech is more challenging than previous cascaded systems (with simultaneous automatic speech recognition (ASR) and simultaneous neural machine translation (NMT)). We introduce two novel knowledge distillation methods to ensure the performance: 1) Attention-level knowledge distillation transfers the knowledge from the multiplication of the attention matrices of simultaneous NMT and ASR models to help the training of the attention mechanism in SimulSpeech; 2) Data-level knowledge distillation transfers the knowledge from the full-sentence NMT model and also reduces the complexity of data distribution to help on the optimization of Simul-Speech. Experiments on MuST-C English-Spanish and English-German spoken language translation datasets show that SimulSpeech achieves reasonable BLEU scores and lower delay compared to full-sentence end-to-end speech to text translation (without simultaneous translation), and better performance than the two-stage cascaded simultaneous translation model in terms of BLEU scores and translation delay.
Simultaneous speech to text translation (Fügen et al., 2007; Oda et al., 2014; Dalvi et al., 2018) , which translates source-language speech into targetlanguage text concurrently, is of great importance to the real-time understanding of spoken lectures or conversations and now widely used in many scenarios including live video streaming and international conferences. However, it is widely considered as one of the challenging tasks in machine translation domain because simultaneous speech to text translation has to understand the speech and trade off translation accuracy and delay. Conventional approaches to simultaneous speech to text translation (Fügen et al., 2007; Oda et al., 2014; Dalvi et al., 2018) divide the translation process into two stages: simultaneous automatic speech recognition (ASR) (Rao et al., 2017) and simultaneous neural machine translation (NMT) (Gu et al., 2016) , which cannot be optimized jointly and result in inferior accuracy, and also incurs more translation delay due to two stages.",{S}imul{S}peech: End-to-End Simultaneous Speech to Text Translation,"In this work, we develop SimulSpeech, an endto-end simultaneous speech to text translation system which translates speech in source language to text in target language concurrently. SimulSpeech consists of a speech encoder, a speech segmenter and a text decoder, where 1) the segmenter builds upon the encoder and leverages a connectionist temporal classification (CTC) loss to split the input streaming speech in real time, 2) the encoder-decoder attention adopts a wait-k strategy for simultaneous translation. SimulSpeech is more challenging than previous cascaded systems (with simultaneous automatic speech recognition (ASR) and simultaneous neural machine translation (NMT)). We introduce two novel knowledge distillation methods to ensure the performance: 1) Attention-level knowledge distillation transfers the knowledge from the multiplication of the attention matrices of simultaneous NMT and ASR models to help the training of the attention mechanism in SimulSpeech; 2) Data-level knowledge distillation transfers the knowledge from the full-sentence NMT model and also reduces the complexity of data distribution to help on the optimization of Simul-Speech. Experiments on MuST-C English-Spanish and English-German spoken language translation datasets show that SimulSpeech achieves reasonable BLEU scores and lower delay compared to full-sentence end-to-end speech to text translation (without simultaneous translation), and better performance than the two-stage cascaded simultaneous translation model in terms of BLEU scores and translation delay.
Simultaneous speech to text translation (Fügen et al., 2007; Oda et al., 2014; Dalvi et al., 2018) , which translates source-language speech into targetlanguage text concurrently, is of great importance to the real-time understanding of spoken lectures or conversations and now widely used in many scenarios including live video streaming and international conferences. However, it is widely considered as one of the challenging tasks in machine translation domain because simultaneous speech to text translation has to understand the speech and trade off translation accuracy and delay. Conventional approaches to simultaneous speech to text translation (Fügen et al., 2007; Oda et al., 2014; Dalvi et al., 2018) divide the translation process into two stages: simultaneous automatic speech recognition (ASR) (Rao et al., 2017) and simultaneous neural machine translation (NMT) (Gu et al., 2016) , which cannot be optimized jointly and result in inferior accuracy, and also incurs more translation delay due to two stages.",SimulSpeech: End-to-End Simultaneous Speech to Text Translation,"In this work, we develop SimulSpeech, an endto-end simultaneous speech to text translation system which translates speech in source language to text in target language concurrently. SimulSpeech consists of a speech encoder, a speech segmenter and a text decoder, where 1) the segmenter builds upon the encoder and leverages a connectionist temporal classification (CTC) loss to split the input streaming speech in real time, 2) the encoder-decoder attention adopts a wait-k strategy for simultaneous translation. SimulSpeech is more challenging than previous cascaded systems (with simultaneous automatic speech recognition (ASR) and simultaneous neural machine translation (NMT)). We introduce two novel knowledge distillation methods to ensure the performance: 1) Attention-level knowledge distillation transfers the knowledge from the multiplication of the attention matrices of simultaneous NMT and ASR models to help the training of the attention mechanism in SimulSpeech; 2) Data-level knowledge distillation transfers the knowledge from the full-sentence NMT model and also reduces the complexity of data distribution to help on the optimization of Simul-Speech. Experiments on MuST-C English-Spanish and English-German spoken language translation datasets show that SimulSpeech achieves reasonable BLEU scores and lower delay compared to full-sentence end-to-end speech to text translation (without simultaneous translation), and better performance than the two-stage cascaded simultaneous translation model in terms of BLEU scores and translation delay.
Simultaneous speech to text translation (Fügen et al., 2007; Oda et al., 2014; Dalvi et al., 2018) , which translates source-language speech into targetlanguage text concurrently, is of great importance to the real-time understanding of spoken lectures or conversations and now widely used in many scenarios including live video streaming and international conferences. However, it is widely considered as one of the challenging tasks in machine translation domain because simultaneous speech to text translation has to understand the speech and trade off translation accuracy and delay. Conventional approaches to simultaneous speech to text translation (Fügen et al., 2007; Oda et al., 2014; Dalvi et al., 2018) divide the translation process into two stages: simultaneous automatic speech recognition (ASR) (Rao et al., 2017) and simultaneous neural machine translation (NMT) (Gu et al., 2016) , which cannot be optimized jointly and result in inferior accuracy, and also incurs more translation delay due to two stages.","This work was supported in part by the National Key R&D Program of China (Grant No.2018AAA0100603), Zhejiang Natural Science Foundation (LR19F020006), National Natural Science Foundation of China (Grant No.61836002), National Natural Science Foundation of China (Grant No.U1611461), and National Natural Science Foundation of China (Grant No.61751209). This work was also partially funded by Microsoft Research Asia.","SimulSpeech: End-to-End Simultaneous Speech to Text Translation. In this work, we develop SimulSpeech, an endto-end simultaneous speech to text translation system which translates speech in source language to text in target language concurrently. SimulSpeech consists of a speech encoder, a speech segmenter and a text decoder, where 1) the segmenter builds upon the encoder and leverages a connectionist temporal classification (CTC) loss to split the input streaming speech in real time, 2) the encoder-decoder attention adopts a wait-k strategy for simultaneous translation. SimulSpeech is more challenging than previous cascaded systems (with simultaneous automatic speech recognition (ASR) and simultaneous neural machine translation (NMT)). We introduce two novel knowledge distillation methods to ensure the performance: 1) Attention-level knowledge distillation transfers the knowledge from the multiplication of the attention matrices of simultaneous NMT and ASR models to help the training of the attention mechanism in SimulSpeech; 2) Data-level knowledge distillation transfers the knowledge from the full-sentence NMT model and also reduces the complexity of data distribution to help on the optimization of Simul-Speech. Experiments on MuST-C English-Spanish and English-German spoken language translation datasets show that SimulSpeech achieves reasonable BLEU scores and lower delay compared to full-sentence end-to-end speech to text translation (without simultaneous translation), and better performance than the two-stage cascaded simultaneous translation model in terms of BLEU scores and translation delay.
Simultaneous speech to text translation (Fügen et al., 2007; Oda et al., 2014; Dalvi et al., 2018) , which translates source-language speech into targetlanguage text concurrently, is of great importance to the real-time understanding of spoken lectures or conversations and now widely used in many scenarios including live video streaming and international conferences. However, it is widely considered as one of the challenging tasks in machine translation domain because simultaneous speech to text translation has to understand the speech and trade off translation accuracy and delay. Conventional approaches to simultaneous speech to text translation (Fügen et al., 2007; Oda et al., 2014; Dalvi et al., 2018) divide the translation process into two stages: simultaneous automatic speech recognition (ASR) (Rao et al., 2017) and simultaneous neural machine translation (NMT) (Gu et al., 2016) , which cannot be optimized jointly and result in inferior accuracy, and also incurs more translation delay due to two stages.",2020
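Editor's note: the SimulSpeech row above mentions a wait-k strategy for simultaneous translation. The toy Python sketch below shows only the generic wait-k read/write schedule (read k source segments ahead, then emit one target token per newly read segment); translate_prefix, the segment granularity, and the echo "model" are stand-ins, not the paper's system.

def wait_k_decode(source_stream, k, translate_prefix, max_len=50):
    # Toy wait-k schedule: read k source segments first, then alternate
    # emitting one target token per newly read segment.
    src, tgt = [], []
    stream = iter(source_stream)
    exhausted = False
    while len(tgt) < max_len:
        # READ until we are k segments ahead of the output (or the stream ends).
        while not exhausted and len(src) < len(tgt) + k:
            try:
                src.append(next(stream))
            except StopIteration:
                exhausted = True
        # WRITE one target token from the current source/target prefixes.
        token = translate_prefix(src, tgt)
        if token == "</s>":
            break
        tgt.append(token)
    return tgt

def echo_next(src_prefix, tgt_prefix):
    # Stand-in "model": copy the next unread source segment.
    if len(tgt_prefix) < len(src_prefix):
        return src_prefix[len(tgt_prefix)]
    return "</s>"

segments = ["chunk1", "chunk2", "chunk3", "chunk4"]
print(wait_k_decode(segments, k=2, translate_prefix=echo_next))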
tam-etal-2019-optimal,https://aclanthology.org/P19-1592,0,,,,,,,"Optimal Transport-based Alignment of Learned Character Representations for String Similarity. String similarity models are vital for record linkage, entity resolution, and search. In this work, we present STANCE-a learned model for computing the similarity of two strings. Our approach encodes the characters of each string, aligns the encodings using Sinkhorn Iteration (alignment is posed as an instance of optimal transport) and scores the alignment with a convolutional neural network. We evaluate STANCE's ability to detect whether two strings can refer to the same entity-a task we term alias detection. We construct five new alias detection datasets (and make them publicly available). We show that STANCE (or one of its variants) outperforms both state-ofthe-art and classic, parameter-free similarity models on four of the five datasets. We also demonstrate STANCE's ability to improve downstream tasks by applying it to an instance of cross-document coreference and show that it leads to a 2.8 point improvement in B 3 F1 over the previous state-of-the-art approach.",Optimal Transport-based Alignment of Learned Character Representations for String Similarity,"String similarity models are vital for record linkage, entity resolution, and search. In this work, we present STANCE-a learned model for computing the similarity of two strings. Our approach encodes the characters of each string, aligns the encodings using Sinkhorn Iteration (alignment is posed as an instance of optimal transport) and scores the alignment with a convolutional neural network. We evaluate STANCE's ability to detect whether two strings can refer to the same entity-a task we term alias detection. We construct five new alias detection datasets (and make them publicly available). We show that STANCE (or one of its variants) outperforms both state-ofthe-art and classic, parameter-free similarity models on four of the five datasets. We also demonstrate STANCE's ability to improve downstream tasks by applying it to an instance of cross-document coreference and show that it leads to a 2.8 point improvement in B 3 F1 over the previous state-of-the-art approach.",Optimal Transport-based Alignment of Learned Character Representations for String Similarity,"String similarity models are vital for record linkage, entity resolution, and search. In this work, we present STANCE-a learned model for computing the similarity of two strings. Our approach encodes the characters of each string, aligns the encodings using Sinkhorn Iteration (alignment is posed as an instance of optimal transport) and scores the alignment with a convolutional neural network. We evaluate STANCE's ability to detect whether two strings can refer to the same entity-a task we term alias detection. We construct five new alias detection datasets (and make them publicly available). We show that STANCE (or one of its variants) outperforms both state-ofthe-art and classic, parameter-free similarity models on four of the five datasets. We also demonstrate STANCE's ability to improve downstream tasks by applying it to an instance of cross-document coreference and show that it leads to a 2.8 point improvement in B 3 F1 over the previous state-of-the-art approach."," 1 We used a xml dump of Wikipedia from 2016-03-05. We restrict the entities and hyperlinked spans to come from non-talk, non-list Wikipedia pages.","Optimal Transport-based Alignment of Learned Character Representations for String Similarity. 
String similarity models are vital for record linkage, entity resolution, and search. In this work, we present STANCE-a learned model for computing the similarity of two strings. Our approach encodes the characters of each string, aligns the encodings using Sinkhorn Iteration (alignment is posed as an instance of optimal transport) and scores the alignment with a convolutional neural network. We evaluate STANCE's ability to detect whether two strings can refer to the same entity-a task we term alias detection. We construct five new alias detection datasets (and make them publicly available). We show that STANCE (or one of its variants) outperforms both state-ofthe-art and classic, parameter-free similarity models on four of the five datasets. We also demonstrate STANCE's ability to improve downstream tasks by applying it to an instance of cross-document coreference and show that it leads to a 2.8 point improvement in B 3 F1 over the previous state-of-the-art approach.",2019
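Editor's note: the STANCE row above poses character alignment as optimal transport solved with Sinkhorn iteration. Below is a minimal NumPy sketch of entropy-regularised Sinkhorn iteration between two sets of toy character encodings; the cosine cost, the regularisation value, and the uniform marginals are assumptions for illustration, not the paper's model.

import numpy as np

def sinkhorn(cost, reg=0.1, n_iters=100):
    # Entropy-regularised optimal transport between two uniform marginals,
    # solved by Sinkhorn iteration; returns a soft alignment matrix.
    n, m = cost.shape
    a = np.full(n, 1.0 / n)      # source marginal
    b = np.full(m, 1.0 / m)      # target marginal
    K = np.exp(-cost / reg)      # Gibbs kernel
    u = np.ones(n)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return np.diag(u) @ K @ np.diag(v)

# Toy usage: cost = cosine distance between the character encodings of two
# strings (random vectors standing in for learned encodings).
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 16))
y = rng.normal(size=(7, 16))
x /= np.linalg.norm(x, axis=1, keepdims=True)
y /= np.linalg.norm(y, axis=1, keepdims=True)
plan = sinkhorn(1.0 - x @ y.T)
print(plan.shape, round(plan.sum(), 4))   # (5, 7), total mass ~1.0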
mundra-etal-2021-wassa,https://aclanthology.org/2021.wassa-1.12,1,,,,health,,,"WASSA@IITK at WASSA 2021: Multi-task Learning and Transformer Finetuning for Emotion Classification and Empathy Prediction. This paper describes our contribution to the WASSA 2021 shared task on Empathy Prediction and Emotion Classification. The broad goal of this task was to model an empathy score, a distress score and the overall level of emotion of an essay written in response to a newspaper article associated with harm to someone. We have used the ELECTRA model abundantly and also advanced deep learning approaches like multi-task learning. Additionally, we also leveraged standard machine learning techniques like ensembling. Our system achieves a Pearson Correlation Coefficient of 0.533 on sub-task I and a macro F1 score of 0.5528 on sub-task II. We ranked 1 st in Emotion Classification sub-task and 3 rd in Empathy Prediction sub-task.",{WASSA}@{IITK} at {WASSA} 2021: Multi-task Learning and Transformer Finetuning for Emotion Classification and Empathy Prediction,"This paper describes our contribution to the WASSA 2021 shared task on Empathy Prediction and Emotion Classification. The broad goal of this task was to model an empathy score, a distress score and the overall level of emotion of an essay written in response to a newspaper article associated with harm to someone. We have used the ELECTRA model abundantly and also advanced deep learning approaches like multi-task learning. Additionally, we also leveraged standard machine learning techniques like ensembling. Our system achieves a Pearson Correlation Coefficient of 0.533 on sub-task I and a macro F1 score of 0.5528 on sub-task II. We ranked 1 st in Emotion Classification sub-task and 3 rd in Empathy Prediction sub-task.",WASSA@IITK at WASSA 2021: Multi-task Learning and Transformer Finetuning for Emotion Classification and Empathy Prediction,"This paper describes our contribution to the WASSA 2021 shared task on Empathy Prediction and Emotion Classification. The broad goal of this task was to model an empathy score, a distress score and the overall level of emotion of an essay written in response to a newspaper article associated with harm to someone. We have used the ELECTRA model abundantly and also advanced deep learning approaches like multi-task learning. Additionally, we also leveraged standard machine learning techniques like ensembling. Our system achieves a Pearson Correlation Coefficient of 0.533 on sub-task I and a macro F1 score of 0.5528 on sub-task II. We ranked 1 st in Emotion Classification sub-task and 3 rd in Empathy Prediction sub-task.",,"WASSA@IITK at WASSA 2021: Multi-task Learning and Transformer Finetuning for Emotion Classification and Empathy Prediction. This paper describes our contribution to the WASSA 2021 shared task on Empathy Prediction and Emotion Classification. The broad goal of this task was to model an empathy score, a distress score and the overall level of emotion of an essay written in response to a newspaper article associated with harm to someone. We have used the ELECTRA model abundantly and also advanced deep learning approaches like multi-task learning. Additionally, we also leveraged standard machine learning techniques like ensembling. Our system achieves a Pearson Correlation Coefficient of 0.533 on sub-task I and a macro F1 score of 0.5528 on sub-task II. We ranked 1 st in Emotion Classification sub-task and 3 rd in Empathy Prediction sub-task.",2021
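Editor's note: the WASSA system above combines a shared pretrained encoder with multi-task heads for regression (empathy/distress) and classification (emotion). The PyTorch sketch below shows only the generic shared-encoder, two-head pattern with a joint loss; the GRU encoder, the dimensions, and the unweighted loss sum are placeholder assumptions, not the submitted ELECTRA-based system.

import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    # Shared encoder with two task heads: regression (e.g. empathy/distress
    # scores) and classification (emotion). The GRU is a small placeholder
    # for a pretrained encoder such as ELECTRA.
    def __init__(self, vocab_size=10000, emb=128, hidden=256,
                 n_emotions=7, n_scores=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.reg_head = nn.Linear(hidden, n_scores)
        self.cls_head = nn.Linear(hidden, n_emotions)

    def forward(self, token_ids):
        _, h = self.encoder(self.embed(token_ids))
        h = h[-1]                      # final hidden state, shape (batch, hidden)
        return self.reg_head(h), self.cls_head(h)

model = MultiTaskModel()
tokens = torch.randint(0, 10000, (4, 32))       # a batch of 4 toy essays
scores_true = torch.rand(4, 2)                  # empathy / distress targets
emotions_true = torch.randint(0, 7, (4,))       # emotion labels
scores_pred, emotion_logits = model(tokens)
# Joint objective: unweighted sum of the two task losses (a design choice here).
loss = (nn.MSELoss()(scores_pred, scores_true)
        + nn.CrossEntropyLoss()(emotion_logits, emotions_true))
loss.backward()
print(float(loss))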
light-1996-morphological,https://aclanthology.org/P96-1004,0,,,,,,,"Morphological Cues for Lexical Semantics. Most natural language processing tasks require lexical semantic information. Automated acquisition of this information would thus increase the robustness and portability of NLP systems. This paper describes an acquisition method which makes use of fixed correspondences between derivational affixes and lexical semantic information. One advantage of this method, and of other methods that rely only on surface characteristics of language, is that the necessary input is currently available.",Morphological Cues for Lexical Semantics,"Most natural language processing tasks require lexical semantic information. Automated acquisition of this information would thus increase the robustness and portability of NLP systems. This paper describes an acquisition method which makes use of fixed correspondences between derivational affixes and lexical semantic information. One advantage of this method, and of other methods that rely only on surface characteristics of language, is that the necessary input is currently available.",Morphological Cues for Lexical Semantics,"Most natural language processing tasks require lexical semantic information. Automated acquisition of this information would thus increase the robustness and portability of NLP systems. This paper describes an acquisition method which makes use of fixed correspondences between derivational affixes and lexical semantic information. One advantage of this method, and of other methods that rely only on surface characteristics of language, is that the necessary input is currently available.",A portion of this work was performed at the University of Rochester Computer Science Department and supported by ONR/ARPA research grant number N00014-92-J-1512.,"Morphological Cues for Lexical Semantics. Most natural language processing tasks require lexical semantic information. Automated acquisition of this information would thus increase the robustness and portability of NLP systems. This paper describes an acquisition method which makes use of fixed correspondences between derivational affixes and lexical semantic information. One advantage of this method, and of other methods that rely only on surface characteristics of language, is that the necessary input is currently available.",1996
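Editor's note: the row above relies on fixed correspondences between derivational affixes and lexical-semantic information. A minimal Python sketch of that style of rule table follows; the suffixes and the cues attached to them are illustrative assumptions, not the paper's inventory.

# Illustrative suffix -> lexical-semantic cue table (not the paper's rules).
SUFFIX_CUES = {
    "ize": "causative / change-of-state verb",
    "able": "adjective: 'can be VERB-ed'",
    "er": "noun: agent or instrument of the base verb",
    "ness": "noun: state or quality of the base adjective",
    "ify": "causative verb",
}

def semantic_cues(word):
    # Return the cues whose suffix matches the word, longest suffixes first.
    return [
        cue
        for suffix, cue in sorted(SUFFIX_CUES.items(), key=lambda kv: -len(kv[0]))
        if word.endswith(suffix)
    ]

for w in ["modernize", "washable", "teacher", "happiness"]:
    print(w, "->", semantic_cues(w))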
janssen-2021-udwiki,https://aclanthology.org/2021.udw-1.7,0,,,,,,,"UDWiki: guided creation and exploitation of UD treebanks. UDWiki is an online environment designed to make creating new UD treebanks easier. It helps in setting up all the necessary data needed for a new treebank up in a GUI, where the interface takes care of guiding you through all the descriptive files needed, adding new texts to your corpus, and helping in annotating the texts. The system is built on top of the TEITOK corpus environment, using an XML based version of UD annotation, where dependencies can be combined with various other types of annotations. UDWiki can run all the necessary or helpful scripts (taggers, parsers, validators) via the interface. It also makes treebanks under development directly searchable, and can be used to maintain or search existing UD treebanks.",{UDW}iki: guided creation and exploitation of {UD} treebanks,"UDWiki is an online environment designed to make creating new UD treebanks easier. It helps in setting up all the necessary data needed for a new treebank up in a GUI, where the interface takes care of guiding you through all the descriptive files needed, adding new texts to your corpus, and helping in annotating the texts. The system is built on top of the TEITOK corpus environment, using an XML based version of UD annotation, where dependencies can be combined with various other types of annotations. UDWiki can run all the necessary or helpful scripts (taggers, parsers, validators) via the interface. It also makes treebanks under development directly searchable, and can be used to maintain or search existing UD treebanks.",UDWiki: guided creation and exploitation of UD treebanks,"UDWiki is an online environment designed to make creating new UD treebanks easier. It helps in setting up all the necessary data needed for a new treebank up in a GUI, where the interface takes care of guiding you through all the descriptive files needed, adding new texts to your corpus, and helping in annotating the texts. The system is built on top of the TEITOK corpus environment, using an XML based version of UD annotation, where dependencies can be combined with various other types of annotations. UDWiki can run all the necessary or helpful scripts (taggers, parsers, validators) via the interface. It also makes treebanks under development directly searchable, and can be used to maintain or search existing UD treebanks.",,"UDWiki: guided creation and exploitation of UD treebanks. UDWiki is an online environment designed to make creating new UD treebanks easier. It helps in setting up all the necessary data needed for a new treebank up in a GUI, where the interface takes care of guiding you through all the descriptive files needed, adding new texts to your corpus, and helping in annotating the texts. The system is built on top of the TEITOK corpus environment, using an XML based version of UD annotation, where dependencies can be combined with various other types of annotations. UDWiki can run all the necessary or helpful scripts (taggers, parsers, validators) via the interface. It also makes treebanks under development directly searchable, and can be used to maintain or search existing UD treebanks.",2021
bod-2007-linguistic,https://aclanthology.org/W07-0601,0,,,,,,,"A Linguistic Investigation into Unsupervised DOP. Unsupervised Data-Oriented Parsing models (U-DOP) represent a class of structure bootstrapping models that have achieved some of the best unsupervised parsing results in the literature. While U-DOP was originally proposed as an engineering approach to language learning (Bod 2005, 2006a), it turns out that the model has a number of properties that may also be of linguistic and cognitive interest. In this paper we will focus on the original U-DOP model proposed in Bod (2005) which computes the most probable tree from among the shortest derivations of sentences. We will show that this U-DOP model can learn both rule-based and exemplar-based aspects of language, ranging from agreement and movement phenomena to discontiguous contructions, provided that productive units of arbitrary size are allowed. We argue that our results suggest a rapprochement between nativism and empiricism.",A Linguistic Investigation into Unsupervised {DOP},"Unsupervised Data-Oriented Parsing models (U-DOP) represent a class of structure bootstrapping models that have achieved some of the best unsupervised parsing results in the literature. While U-DOP was originally proposed as an engineering approach to language learning (Bod 2005, 2006a), it turns out that the model has a number of properties that may also be of linguistic and cognitive interest. In this paper we will focus on the original U-DOP model proposed in Bod (2005) which computes the most probable tree from among the shortest derivations of sentences. We will show that this U-DOP model can learn both rule-based and exemplar-based aspects of language, ranging from agreement and movement phenomena to discontiguous contructions, provided that productive units of arbitrary size are allowed. We argue that our results suggest a rapprochement between nativism and empiricism.",A Linguistic Investigation into Unsupervised DOP,"Unsupervised Data-Oriented Parsing models (U-DOP) represent a class of structure bootstrapping models that have achieved some of the best unsupervised parsing results in the literature. While U-DOP was originally proposed as an engineering approach to language learning (Bod 2005, 2006a), it turns out that the model has a number of properties that may also be of linguistic and cognitive interest. In this paper we will focus on the original U-DOP model proposed in Bod (2005) which computes the most probable tree from among the shortest derivations of sentences. We will show that this U-DOP model can learn both rule-based and exemplar-based aspects of language, ranging from agreement and movement phenomena to discontiguous contructions, provided that productive units of arbitrary size are allowed. We argue that our results suggest a rapprochement between nativism and empiricism.",,"A Linguistic Investigation into Unsupervised DOP. Unsupervised Data-Oriented Parsing models (U-DOP) represent a class of structure bootstrapping models that have achieved some of the best unsupervised parsing results in the literature. While U-DOP was originally proposed as an engineering approach to language learning (Bod 2005, 2006a), it turns out that the model has a number of properties that may also be of linguistic and cognitive interest. In this paper we will focus on the original U-DOP model proposed in Bod (2005) which computes the most probable tree from among the shortest derivations of sentences. 
We will show that this U-DOP model can learn both rule-based and exemplar-based aspects of language, ranging from agreement and movement phenomena to discontiguous contructions, provided that productive units of arbitrary size are allowed. We argue that our results suggest a rapprochement between nativism and empiricism.",2007
indurkhya-2021-using,https://aclanthology.org/2021.ranlp-1.71,0,,,,,,,"Using Collaborative Filtering to Model Argument Selection. This study evaluates whether model-based Collaborative Filtering (CF) algorithms, which have been extensively studied and widely used to build recommender systems, can be used to predict which common nouns a predicate can take as its complement. We find that, when trained on verb-noun co-occurrence data drawn from the Corpus of Contemporary American-English (COCA), two popular model-based CF algorithms, Singular Value Decomposition and Non-negative Matrix Factorization, perform well on this task, each achieving an AUROC of at least 0.89 and surpassing several different baselines. We then show that the embedding-vectors for verbs and nouns learned by the two CF models can be quantized (via application of k-means clustering) with minimal loss of performance on the prediction task while only using a small number of verb and noun clusters (relative to the number of distinct verbs and nouns). Finally we evaluate the alignment between the quantized embedding vectors for verbs and the Levin verb classes, finding that the alignment surpassed several randomized baselines. We conclude by discussing how model-based CF algorithms might be applied to learning restrictions on constituent selection between various lexical categories and how these (learned) models could then be used to augment a (rulebased) constituency grammar.",Using Collaborative Filtering to Model Argument Selection,"This study evaluates whether model-based Collaborative Filtering (CF) algorithms, which have been extensively studied and widely used to build recommender systems, can be used to predict which common nouns a predicate can take as its complement. We find that, when trained on verb-noun co-occurrence data drawn from the Corpus of Contemporary American-English (COCA), two popular model-based CF algorithms, Singular Value Decomposition and Non-negative Matrix Factorization, perform well on this task, each achieving an AUROC of at least 0.89 and surpassing several different baselines. We then show that the embedding-vectors for verbs and nouns learned by the two CF models can be quantized (via application of k-means clustering) with minimal loss of performance on the prediction task while only using a small number of verb and noun clusters (relative to the number of distinct verbs and nouns). Finally we evaluate the alignment between the quantized embedding vectors for verbs and the Levin verb classes, finding that the alignment surpassed several randomized baselines. We conclude by discussing how model-based CF algorithms might be applied to learning restrictions on constituent selection between various lexical categories and how these (learned) models could then be used to augment a (rulebased) constituency grammar.",Using Collaborative Filtering to Model Argument Selection,"This study evaluates whether model-based Collaborative Filtering (CF) algorithms, which have been extensively studied and widely used to build recommender systems, can be used to predict which common nouns a predicate can take as its complement. We find that, when trained on verb-noun co-occurrence data drawn from the Corpus of Contemporary American-English (COCA), two popular model-based CF algorithms, Singular Value Decomposition and Non-negative Matrix Factorization, perform well on this task, each achieving an AUROC of at least 0.89 and surpassing several different baselines. 
We then show that the embedding-vectors for verbs and nouns learned by the two CF models can be quantized (via application of k-means clustering) with minimal loss of performance on the prediction task while only using a small number of verb and noun clusters (relative to the number of distinct verbs and nouns). Finally we evaluate the alignment between the quantized embedding vectors for verbs and the Levin verb classes, finding that the alignment surpassed several randomized baselines. We conclude by discussing how model-based CF algorithms might be applied to learning restrictions on constituent selection between various lexical categories and how these (learned) models could then be used to augment a (rulebased) constituency grammar.",Three anonymous reviewers are thanked for critically reading the manuscript and providing helpful comments.,"Using Collaborative Filtering to Model Argument Selection. This study evaluates whether model-based Collaborative Filtering (CF) algorithms, which have been extensively studied and widely used to build recommender systems, can be used to predict which common nouns a predicate can take as its complement. We find that, when trained on verb-noun co-occurrence data drawn from the Corpus of Contemporary American-English (COCA), two popular model-based CF algorithms, Singular Value Decomposition and Non-negative Matrix Factorization, perform well on this task, each achieving an AUROC of at least 0.89 and surpassing several different baselines. We then show that the embedding-vectors for verbs and nouns learned by the two CF models can be quantized (via application of k-means clustering) with minimal loss of performance on the prediction task while only using a small number of verb and noun clusters (relative to the number of distinct verbs and nouns). Finally we evaluate the alignment between the quantized embedding vectors for verbs and the Levin verb classes, finding that the alignment surpassed several randomized baselines. We conclude by discussing how model-based CF algorithms might be applied to learning restrictions on constituent selection between various lexical categories and how these (learned) models could then be used to augment a (rulebased) constituency grammar.",2021
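Editor's note: the row above frames argument selection as model-based collaborative filtering over a verb-noun co-occurrence matrix, using SVD and NMF, followed by k-means quantization of the learned vectors. The scikit-learn sketch below reproduces that pipeline shape on random toy counts; the matrix, the rank, and the cluster count are assumptions, not the COCA-based experiments.

import numpy as np
from sklearn.decomposition import TruncatedSVD, NMF
from sklearn.cluster import KMeans

# Toy verb x noun co-occurrence counts (rows: verbs, columns: nouns).
rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(50, 80)).astype(float)

# Model-based CF: a low-rank reconstruction scores every verb-noun pair.
svd = TruncatedSVD(n_components=10, random_state=0)
verb_vecs = svd.fit_transform(X)                  # verb embeddings
scores_svd = verb_vecs @ svd.components_          # reconstructed co-occurrence

nmf = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(X)
scores_nmf = W @ nmf.components_

# Predict whether verb i can take noun j by thresholding the reconstructed score.
i, j = 3, 7
print("SVD score:", scores_svd[i, j], "NMF score:", scores_nmf[i, j])

# Quantize the learned verb vectors into a small number of clusters.
verb_clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(verb_vecs)
print("verb 3 belongs to cluster", verb_clusters[3])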
claeser-etal-2018-multilingual,https://aclanthology.org/W18-3218,0,,,,,,,Multilingual Named Entity Recognition on Spanish-English Code-switched Tweets using Support Vector Machines. This paper describes our system submission for the ACL 2018 shared task on named entity recognition (NER) in codeswitched Twitter data. Our best result (F1 = 53.65) was obtained using a Support Vector Machine (SVM) with 14 features combined with rule-based postprocessing.,Multilingual Named Entity Recognition on {S}panish-{E}nglish Code-switched Tweets using Support Vector Machines,This paper describes our system submission for the ACL 2018 shared task on named entity recognition (NER) in codeswitched Twitter data. Our best result (F1 = 53.65) was obtained using a Support Vector Machine (SVM) with 14 features combined with rule-based postprocessing.,Multilingual Named Entity Recognition on Spanish-English Code-switched Tweets using Support Vector Machines,This paper describes our system submission for the ACL 2018 shared task on named entity recognition (NER) in codeswitched Twitter data. Our best result (F1 = 53.65) was obtained using a Support Vector Machine (SVM) with 14 features combined with rule-based postprocessing.,,Multilingual Named Entity Recognition on Spanish-English Code-switched Tweets using Support Vector Machines. This paper describes our system submission for the ACL 2018 shared task on named entity recognition (NER) in codeswitched Twitter data. Our best result (F1 = 53.65) was obtained using a Support Vector Machine (SVM) with 14 features combined with rule-based postprocessing.,2018
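Editor's note: the shared-task system above is an SVM over hand-crafted token features. The scikit-learn sketch below shows the generic pattern (surface features per token, DictVectorizer, LinearSVC); the toy sentences, tags, and feature set are invented for illustration and are not the system's 14 features or its rule-based postprocessing.

from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def token_features(tokens, i):
    # A few illustrative surface features for token i in its sentence.
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_title": tok.istitle(),
        "is_upper": tok.isupper(),
        "has_digit": any(c.isdigit() for c in tok),
        "prefix3": tok[:3].lower(),
        "suffix3": tok[-3:].lower(),
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

# Tiny toy training set with BIO tags; a real system would use the shared-task data.
sents = [["Voy", "a", "Madrid", "con", "John"],
         ["I", "love", "tacos", "en", "Barcelona"]]
tags = [["O", "O", "B-LOC", "O", "B-PER"],
        ["O", "O", "O", "O", "B-LOC"]]

X = [token_features(s, i) for s in sents for i in range(len(s))]
y = [t for ts in tags for t in ts]

clf = make_pipeline(DictVectorizer(), LinearSVC())
clf.fit(X, y)
test = ["Ella", "vive", "in", "Sevilla"]
print(clf.predict([token_features(test, i) for i in range(len(test))]))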
blackwood-etal-2010-fluency,https://aclanthology.org/C10-1009,0,,,,,,,Fluency Constraints for Minimum Bayes-Risk Decoding of Statistical Machine Translation Lattices. A novel and robust approach to improving statistical machine translation fluency is developed within a minimum Bayes-risk decoding framework. By segmenting translation lattices according to confidence measures over the maximum likelihood translation hypothesis we are able to focus on regions with potential translation errors. Hypothesis space constraints based on monolingual coverage are applied to the low confidence regions to improve overall translation fluency.,Fluency Constraints for Minimum {B}ayes-Risk Decoding of Statistical Machine Translation Lattices,A novel and robust approach to improving statistical machine translation fluency is developed within a minimum Bayes-risk decoding framework. By segmenting translation lattices according to confidence measures over the maximum likelihood translation hypothesis we are able to focus on regions with potential translation errors. Hypothesis space constraints based on monolingual coverage are applied to the low confidence regions to improve overall translation fluency.,Fluency Constraints for Minimum Bayes-Risk Decoding of Statistical Machine Translation Lattices,A novel and robust approach to improving statistical machine translation fluency is developed within a minimum Bayes-risk decoding framework. By segmenting translation lattices according to confidence measures over the maximum likelihood translation hypothesis we are able to focus on regions with potential translation errors. Hypothesis space constraints based on monolingual coverage are applied to the low confidence regions to improve overall translation fluency.,"We would like to thank Matt Gibson and the human judges who participated in the evaluation. This work was supported in part under the GALE program of the Defense Advanced Research Projects Agency, Contract No. HR0011-06-C-0022 and the European Union Seventh Framework Programme (FP7-ICT-2009-4) under Grant Agreement No. 247762.",Fluency Constraints for Minimum Bayes-Risk Decoding of Statistical Machine Translation Lattices. A novel and robust approach to improving statistical machine translation fluency is developed within a minimum Bayes-risk decoding framework. By segmenting translation lattices according to confidence measures over the maximum likelihood translation hypothesis we are able to focus on regions with potential translation errors. Hypothesis space constraints based on monolingual coverage are applied to the low confidence regions to improve overall translation fluency.,2010
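Editor's note: the row above works within minimum Bayes-risk (MBR) decoding. As a reminder of the basic MBR rule (pick the hypothesis with the highest expected gain under the model posterior), here is a toy Python sketch over an n-best list; the unigram-overlap gain and the made-up posteriors stand in for the lattice-based, BLEU-style setup used in such systems.

from collections import Counter

def unigram_gain(hyp, ref):
    # Toy gain: unigram-overlap F1 between two token lists (a stand-in for a
    # sentence-level BLEU-like gain function).
    h, r = Counter(hyp), Counter(ref)
    overlap = sum((h & r).values())
    if overlap == 0:
        return 0.0
    p, q = overlap / len(hyp), overlap / len(ref)
    return 2 * p * q / (p + q)

def mbr_decode(nbest):
    # argmax over hypotheses of the expected gain: sum_h' P(h') * gain(h, h').
    best, best_score = None, float("-inf")
    for hyp, _ in nbest:
        score = sum(p * unigram_gain(hyp, other) for other, p in nbest)
        if score > best_score:
            best, best_score = hyp, score
    return best

nbest = [("the cat sat on the mat".split(), 0.5),
         ("a cat sat on a mat".split(), 0.3),
         ("the cat is on the mat".split(), 0.2)]
print(" ".join(mbr_decode(nbest)))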
ikehara-etal-1991-toward,https://aclanthology.org/1991.mtsummit-papers.16,0,,,,,,,"Toward an MT System without Pre-Editing: Effects of a New Method in ALT-J/E. Recently, several types of Japanese to English MT (machine translation) systems have been developed, but prior to using such systems, they have required a pre-editing process of rewriting the original text into Japanese that could be easily translated. For communication of translated information requiring speed in dissemination, application of these systems would necessarily pose problems. To overcome such problems, a Multi-Level Translation Method based on Constructive Process Theory had been proposed. In this paper, the benefits of this method in ALT-J/E will be described. In comparison with the conventional elementary composition method, the Multi-Level Translation Method, emphasizing the importance of the meaning contained in expression structures, has been ascertained to be capable of conducting translation according to meaning and context processing with comparative ease. We are now hopeful of realizing machine translation omitting the process of pre-editing.",Toward an {MT} System without Pre-Editing: Effects of a New Method in {ALT}-{J}/{E},"Recently, several types of Japanese to English MT (machine translation) systems have been developed, but prior to using such systems, they have required a pre-editing process of rewriting the original text into Japanese that could be easily translated. For communication of translated information requiring speed in dissemination, application of these systems would necessarily pose problems. To overcome such problems, a Multi-Level Translation Method based on Constructive Process Theory had been proposed. In this paper, the benefits of this method in ALT-J/E will be described. In comparison with the conventional elementary composition method, the Multi-Level Translation Method, emphasizing the importance of the meaning contained in expression structures, has been ascertained to be capable of conducting translation according to meaning and context processing with comparative ease. We are now hopeful of realizing machine translation omitting the process of pre-editing.",Toward an MT System without Pre-Editing: Effects of a New Method in ALT-J/E,"Recently, several types of Japanese to English MT (machine translation) systems have been developed, but prior to using such systems, they have required a pre-editing process of rewriting the original text into Japanese that could be easily translated. For communication of translated information requiring speed in dissemination, application of these systems would necessarily pose problems. To overcome such problems, a Multi-Level Translation Method based on Constructive Process Theory had been proposed. In this paper, the benefits of this method in ALT-J/E will be described. In comparison with the conventional elementary composition method, the Multi-Level Translation Method, emphasizing the importance of the meaning contained in expression structures, has been ascertained to be capable of conducting translation according to meaning and context processing with comparative ease. We are now hopeful of realizing machine translation omitting the process of pre-editing.","The authors wish to thank Dr. Masahiro Miyazaki, Mr. Kentarou Ogura and other members of the research group on MT for their valuable contribution to discussions.","Toward an MT System without Pre-Editing: Effects of a New Method in ALT-J/E. 
Recently, several types of Japanese to English MT (machine translation) systems have been developed, but prior to using such systems, they have required a pre-editing process of rewriting the original text into Japanese that could be easily translated. For communication of translated information requiring speed in dissemination, application of these systems would necessarily pose problems. To overcome such problems, a Multi-Level Translation Method based on Constructive Process Theory had been proposed. In this paper, the benefits of this method in ALT-J/E will be described. In comparison with the conventional elementary composition method, the Multi-Level Translation Method, emphasizing the importance of the meaning contained in expression structures, has been ascertained to be capable of conducting translation according to meaning and context processing with comparative ease. We are now hopeful of realizing machine translation omitting the process of pre-editing.",1991
chalaguine-schulz-2017-assessing,https://aclanthology.org/E17-4008,0,,,,,,,"Assessing Convincingness of Arguments in Online Debates with Limited Number of Features. We propose a new method in the field of argument analysis in social media to determining convincingness of arguments in online debates, following previous research by Habernal and Gurevych (2016). Rather than using argument specific feature values, we measure feature values relative to the average value in the debate, allowing us to determine argument convincingness with fewer features (between 5 and 35) than normally used for natural language processing tasks. We use a simple forward-feeding neural network for this task and achieve an accuracy of 0.77 which is comparable to the accuracy obtained using 64k features and a support vector machine by Habernal and Gurevych.",Assessing Convincingness of Arguments in Online Debates with Limited Number of Features,"We propose a new method in the field of argument analysis in social media to determining convincingness of arguments in online debates, following previous research by Habernal and Gurevych (2016). Rather than using argument specific feature values, we measure feature values relative to the average value in the debate, allowing us to determine argument convincingness with fewer features (between 5 and 35) than normally used for natural language processing tasks. We use a simple forward-feeding neural network for this task and achieve an accuracy of 0.77 which is comparable to the accuracy obtained using 64k features and a support vector machine by Habernal and Gurevych.",Assessing Convincingness of Arguments in Online Debates with Limited Number of Features,"We propose a new method in the field of argument analysis in social media to determining convincingness of arguments in online debates, following previous research by Habernal and Gurevych (2016). Rather than using argument specific feature values, we measure feature values relative to the average value in the debate, allowing us to determine argument convincingness with fewer features (between 5 and 35) than normally used for natural language processing tasks. We use a simple forward-feeding neural network for this task and achieve an accuracy of 0.77 which is comparable to the accuracy obtained using 64k features and a support vector machine by Habernal and Gurevych.","We thank our colleague Oana Cocarascu from Imperial College London who provided insight and expertise that greatly assisted the research, as well as Luka Milic for assistance with the implementation of the neural network.","Assessing Convincingness of Arguments in Online Debates with Limited Number of Features. We propose a new method in the field of argument analysis in social media to determining convincingness of arguments in online debates, following previous research by Habernal and Gurevych (2016). Rather than using argument specific feature values, we measure feature values relative to the average value in the debate, allowing us to determine argument convincingness with fewer features (between 5 and 35) than normally used for natural language processing tasks. We use a simple forward-feeding neural network for this task and achieve an accuracy of 0.77 which is comparable to the accuracy obtained using 64k features and a support vector machine by Habernal and Gurevych.",2017
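Editor's note: the approach above measures each argument's features relative to the debate average and feeds the small resulting vector to a feed-forward network. The sketch below illustrates that normalisation step plus a small MLP classifier; the three surface features and the toy labels are assumptions, not the paper's 5 to 35 features.

import numpy as np
from sklearn.neural_network import MLPClassifier

def raw_features(argument):
    # A few illustrative surface features of an argument.
    tokens = argument.split()
    return np.array([
        len(tokens),                                         # length
        len(set(tokens)) / max(len(tokens), 1),              # type/token ratio
        sum(len(t) for t in tokens) / max(len(tokens), 1),   # mean word length
    ])

def relative_features(arguments):
    # Express each argument's features relative to the debate-wide average.
    feats = np.stack([raw_features(a) for a in arguments])
    return feats / feats.mean(axis=0)

debate = [
    "Taxes should rise because public services are underfunded and failing",
    "no",
    "Raising taxes reduces investment which in the long run hurts everyone",
    "I just disagree with this",
]
labels = [1, 0, 1, 0]          # 1 = convincing (toy labels)

X = relative_features(debate)
clf = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    max_iter=2000, random_state=0)
clf.fit(X, labels)
print(clf.predict(relative_features(debate)))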
pezzelle-etal-2018-comparatives,https://aclanthology.org/N18-1039,0,,,,,,,"Comparatives, Quantifiers, Proportions: a Multi-Task Model for the Learning of Quantities from Vision. The present work investigates whether different quantification mechanisms (set comparison, vague quantification, and proportional estimation) can be jointly learned from visual scenes by a multi-task computational model. The motivation is that, in humans, these processes underlie the same cognitive, nonsymbolic ability, which allows an automatic estimation and comparison of set magnitudes. We show that when information about lowercomplexity tasks is available, the higher-level proportional task becomes more accurate than when performed in isolation. Moreover, the multi-task model is able to generalize to unseen combinations of target/non-target objects. Consistently with behavioral evidence showing the interference of absolute number in the proportional task, the multi-task model no longer works when asked to provide the number of target objects in the scene.","Comparatives, Quantifiers, Proportions: a Multi-Task Model for the Learning of Quantities from Vision","The present work investigates whether different quantification mechanisms (set comparison, vague quantification, and proportional estimation) can be jointly learned from visual scenes by a multi-task computational model. The motivation is that, in humans, these processes underlie the same cognitive, nonsymbolic ability, which allows an automatic estimation and comparison of set magnitudes. We show that when information about lowercomplexity tasks is available, the higher-level proportional task becomes more accurate than when performed in isolation. Moreover, the multi-task model is able to generalize to unseen combinations of target/non-target objects. Consistently with behavioral evidence showing the interference of absolute number in the proportional task, the multi-task model no longer works when asked to provide the number of target objects in the scene.","Comparatives, Quantifiers, Proportions: a Multi-Task Model for the Learning of Quantities from Vision","The present work investigates whether different quantification mechanisms (set comparison, vague quantification, and proportional estimation) can be jointly learned from visual scenes by a multi-task computational model. The motivation is that, in humans, these processes underlie the same cognitive, nonsymbolic ability, which allows an automatic estimation and comparison of set magnitudes. We show that when information about lowercomplexity tasks is available, the higher-level proportional task becomes more accurate than when performed in isolation. Moreover, the multi-task model is able to generalize to unseen combinations of target/non-target objects. Consistently with behavioral evidence showing the interference of absolute number in the proportional task, the multi-task model no longer works when asked to provide the number of target objects in the scene.","We kindly acknowledge Gemma Boleda and the AMORE team (UPF), Raquel Fernández and the Dialogue Modelling Group (UvA) for the feedback, advice and support. We are also grateful to Aurélie Herbelot, Stephan Lee, Manuela Piazza, Sebastian Ruder, and the anonymous reviewers for their valuable comments. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 715154). 
We gratefully acknowledge the support of NVIDIA Corporation with the donation of GPUs used for this research. This paper reflects the authors' view only, and the EU is not responsible for any use that may be made of the information it contains.","Comparatives, Quantifiers, Proportions: a Multi-Task Model for the Learning of Quantities from Vision. The present work investigates whether different quantification mechanisms (set comparison, vague quantification, and proportional estimation) can be jointly learned from visual scenes by a multi-task computational model. The motivation is that, in humans, these processes underlie the same cognitive, nonsymbolic ability, which allows an automatic estimation and comparison of set magnitudes. We show that when information about lowercomplexity tasks is available, the higher-level proportional task becomes more accurate than when performed in isolation. Moreover, the multi-task model is able to generalize to unseen combinations of target/non-target objects. Consistently with behavioral evidence showing the interference of absolute number in the proportional task, the multi-task model no longer works when asked to provide the number of target objects in the scene.",2018
hsieh-etal-2019-robustness,https://aclanthology.org/P19-1147,0,,,,,,,"On the Robustness of Self-Attentive Models. This work examines the robustness of selfattentive neural networks against adversarial input perturbations. Specifically, we investigate the attention and feature extraction mechanisms of state-of-the-art recurrent neural networks and self-attentive architectures for sentiment analysis, entailment and machine translation under adversarial attacks. We also propose a novel attack algorithm for generating more natural adversarial examples that could mislead neural models but not humans. Experimental results show that, compared to recurrent neural models, self-attentive models are more robust against adversarial perturbation. In addition, we provide theoretical explanations for their superior robustness to support our claims.",On the Robustness of Self-Attentive Models,"This work examines the robustness of selfattentive neural networks against adversarial input perturbations. Specifically, we investigate the attention and feature extraction mechanisms of state-of-the-art recurrent neural networks and self-attentive architectures for sentiment analysis, entailment and machine translation under adversarial attacks. We also propose a novel attack algorithm for generating more natural adversarial examples that could mislead neural models but not humans. Experimental results show that, compared to recurrent neural models, self-attentive models are more robust against adversarial perturbation. In addition, we provide theoretical explanations for their superior robustness to support our claims.",On the Robustness of Self-Attentive Models,"This work examines the robustness of selfattentive neural networks against adversarial input perturbations. Specifically, we investigate the attention and feature extraction mechanisms of state-of-the-art recurrent neural networks and self-attentive architectures for sentiment analysis, entailment and machine translation under adversarial attacks. We also propose a novel attack algorithm for generating more natural adversarial examples that could mislead neural models but not humans. Experimental results show that, compared to recurrent neural models, self-attentive models are more robust against adversarial perturbation. In addition, we provide theoretical explanations for their superior robustness to support our claims.","We are grateful for the insightful comments from anonymous reviewers. This work is supported by the Ministry of Science and Technology of Taiwan under grant numbers 107-2917-I-004-001, 108-2634-F-001-005. The author Yu-Lun Hsieh wishes to acknowledge, with thanks, the Taiwan International Graduate Program (TIGP) of Academia Sinica for financial support towards attending this conference. We also acknowledge the support from NSF via IIS1719097, Intel and Google Cloud.","On the Robustness of Self-Attentive Models. This work examines the robustness of selfattentive neural networks against adversarial input perturbations. Specifically, we investigate the attention and feature extraction mechanisms of state-of-the-art recurrent neural networks and self-attentive architectures for sentiment analysis, entailment and machine translation under adversarial attacks. We also propose a novel attack algorithm for generating more natural adversarial examples that could mislead neural models but not humans. Experimental results show that, compared to recurrent neural models, self-attentive models are more robust against adversarial perturbation. 
In addition, we provide theoretical explanations for their superior robustness to support our claims.",2019
hahn-choi-2019-self,https://aclanthology.org/R19-1050,0,,,,,,,"Self-Knowledge Distillation in Natural Language Processing. Since deep learning became a key player in natural language processing (NLP), many deep learning models have been showing remarkable performances in a variety of NLP tasks, and in some cases, they are even outperforming humans. Such high performance can be explained by efficient knowledge representation of deep learning models. While many methods have been proposed to learn more efficient representation, knowledge distillation from pretrained deep networks suggest that we can use more information from the soft target probability to train other neural networks. In this paper, we propose a new knowledge distillation method self-knowledge distillation, based on the soft target probabilities of the training model itself, where multimode information is distilled from the word embedding space right below the softmax layer. Due to the time complexity, our method approximates the soft target probabilities. In experiments, we applied the proposed method to two different and fundamental NLP tasks: language model and neural machine translation. The experiment results show that our proposed method improves performance on the tasks.",Self-Knowledge Distillation in Natural Language Processing,"Since deep learning became a key player in natural language processing (NLP), many deep learning models have been showing remarkable performances in a variety of NLP tasks, and in some cases, they are even outperforming humans. Such high performance can be explained by efficient knowledge representation of deep learning models. While many methods have been proposed to learn more efficient representation, knowledge distillation from pretrained deep networks suggest that we can use more information from the soft target probability to train other neural networks. In this paper, we propose a new knowledge distillation method self-knowledge distillation, based on the soft target probabilities of the training model itself, where multimode information is distilled from the word embedding space right below the softmax layer. Due to the time complexity, our method approximates the soft target probabilities. In experiments, we applied the proposed method to two different and fundamental NLP tasks: language model and neural machine translation. The experiment results show that our proposed method improves performance on the tasks.",Self-Knowledge Distillation in Natural Language Processing,"Since deep learning became a key player in natural language processing (NLP), many deep learning models have been showing remarkable performances in a variety of NLP tasks, and in some cases, they are even outperforming humans. Such high performance can be explained by efficient knowledge representation of deep learning models. While many methods have been proposed to learn more efficient representation, knowledge distillation from pretrained deep networks suggest that we can use more information from the soft target probability to train other neural networks. In this paper, we propose a new knowledge distillation method self-knowledge distillation, based on the soft target probabilities of the training model itself, where multimode information is distilled from the word embedding space right below the softmax layer. Due to the time complexity, our method approximates the soft target probabilities. 
In experiments, we applied the proposed method to two different and fundamental NLP tasks: language model and neural machine translation. The experiment results show that our proposed method improves performance on the tasks.",,"Self-Knowledge Distillation in Natural Language Processing. Since deep learning became a key player in natural language processing (NLP), many deep learning models have been showing remarkable performances in a variety of NLP tasks, and in some cases, they are even outperforming humans. Such high performance can be explained by efficient knowledge representation of deep learning models. While many methods have been proposed to learn more efficient representation, knowledge distillation from pretrained deep networks suggest that we can use more information from the soft target probability to train other neural networks. In this paper, we propose a new knowledge distillation method self-knowledge distillation, based on the soft target probabilities of the training model itself, where multimode information is distilled from the word embedding space right below the softmax layer. Due to the time complexity, our method approximates the soft target probabilities. In experiments, we applied the proposed method to two different and fundamental NLP tasks: language model and neural machine translation. The experiment results show that our proposed method improves performance on the tasks.",2019
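Note on the hahn-choi-2019-self entry above: the core of self-knowledge distillation, mixing the usual cross-entropy on hard labels with a KL term against the model's own softened predictions, can be sketched in a few lines. This is a minimal generic sketch under assumed hyperparameters (alpha, temperature), not the paper's exact method, which approximates the soft targets from the word embedding space below the softmax layer.

    # Minimal self-distillation loss sketch (PyTorch). The model's own softened
    # predictions, detached, serve as the teacher signal; alpha and temperature
    # are illustrative assumptions, not values from the paper.
    import torch
    import torch.nn.functional as F

    def self_distillation_loss(logits, targets, alpha=0.5, temperature=2.0):
        """logits: (batch, num_classes); targets: (batch,) integer class ids."""
        hard_loss = F.cross_entropy(logits, targets)
        soft_targets = F.softmax(logits.detach() / temperature, dim=-1)
        log_probs = F.log_softmax(logits / temperature, dim=-1)
        soft_loss = F.kl_div(log_probs, soft_targets, reduction="batchmean")
        return (1.0 - alpha) * hard_loss + alpha * (temperature ** 2) * soft_loss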
vanni-zajac-1996-temple,https://aclanthology.org/X96-1024,0,,,,,,,"The Temple Translator's Workstation Project. Tipster document management architecture and it allows both translator/analysts and monolingual analysts to use the machinetranslation function for assessing the relevance of a translated document or otherwise using its information in the performance of other types of information processing. Translators can also use its output as a rough draft from which to begin the process of producing a translation, following up with specific post-editing functions.
Glossary-Based Machine-Translation (GBMT) was first developed at CMU as part of the Pangloss project [Nirenburg 95; Cohen et al., 93; Nirenburg et al., 93; Frederking et al., 93] , and a sizeable Spanish-English GBMT system was implemented.",The {T}emple {T}ranslator{'}s {W}orkstation Project,"Tipster document management architecture and it allows both translator/analysts and monolingual analysts to use the machinetranslation function for assessing the relevance of a translated document or otherwise using its information in the performance of other types of information processing. Translators can also use its output as a rough draft from which to begin the process of producing a translation, following up with specific post-editing functions.
Glossary-Based Machine-Translation (GBMT) was first developed at CMU as part of the Pangloss project [Nirenburg 95; Cohen et al., 93; Nirenburg et al., 93; Frederking et al., 93] , and a sizeable Spanish-English GBMT system was implemented.",The Temple Translator's Workstation Project,"Tipster document management architecture and it allows both translator/analysts and monolingual analysts to use the machinetranslation function for assessing the relevance of a translated document or otherwise using its information in the performance of other types of information processing. Translators can also use its output as a rough draft from which to begin the process of producing a translation, following up with specific post-editing functions.
Glossary-Based Machine-Translation (GBMT) was first developed at CMU as part of the Pangloss project [Nirenburg 95; Cohen et al., 93; Nirenburg et al., 93; Frederking et al., 93] , and a sizeable Spanish-English GBMT system was implemented.",,"The Temple Translator's Workstation Project. Tipster document management architecture and it allows both translator/analysts and monolingual analysts to use the machinetranslation function for assessing the relevance of a translated document or otherwise using its information in the performance of other types of information processing. Translators can also use its output as a rough draft from which to begin the process of producing a translation, following up with specific post-editing functions.
Glossary-Based Machine-Translation (GBMT) was first developed at CMU as part of the Pangloss project [Nirenburg 95; Cohen et al., 93; Nirenburg et al., 93; Frederking et al., 93] , and a sizeable Spanish-English GBMT system was implemented.",1996
tihelka-matousek-2004-design,http://www.lrec-conf.org/proceedings/lrec2004/pdf/119.pdf,0,,,,,,,"The Design of Czech Language Formal Listening Tests for the Evaluation of TTS Systems. This paper presents an attempt to design listening tests for the Czech synthesis speech evaluation. The design is based on standardized and widely used listening tests for English; therefore, we can benefit from the advantages provided by standards. Bearing the Czech language phenomena in mind, we filled the standard frameworks of several listening tests, especially the MRT (Modified Rhyme Test) and the SUS (Semantically Unpredictable Sentences) test; the Czech National Corpus was used for this purpose. Designed tests were instantly used for real tests in which 88 people took part, a procedure which proved correct. This was the first attempt to design Czech listening tests according to given standard frameworks and it was successful.",The Design of {C}zech Language Formal Listening Tests for the Evaluation of {TTS} Systems,"This paper presents an attempt to design listening tests for the Czech synthesis speech evaluation. The design is based on standardized and widely used listening tests for English; therefore, we can benefit from the advantages provided by standards. Bearing the Czech language phenomena in mind, we filled the standard frameworks of several listening tests, especially the MRT (Modified Rhyme Test) and the SUS (Semantically Unpredictable Sentences) test; the Czech National Corpus was used for this purpose. Designed tests were instantly used for real tests in which 88 people took part, a procedure which proved correct. This was the first attempt to design Czech listening tests according to given standard frameworks and it was successful.",The Design of Czech Language Formal Listening Tests for the Evaluation of TTS Systems,"This paper presents an attempt to design listening tests for the Czech synthesis speech evaluation. The design is based on standardized and widely used listening tests for English; therefore, we can benefit from the advantages provided by standards. Bearing the Czech language phenomena in mind, we filled the standard frameworks of several listening tests, especially the MRT (Modified Rhyme Test) and the SUS (Semantically Unpredictable Sentences) test; the Czech National Corpus was used for this purpose. Designed tests were instantly used for real tests in which 88 people took part, a procedure which proved correct. This was the first attempt to design Czech listening tests according to given standard frameworks and it was successful.",,"The Design of Czech Language Formal Listening Tests for the Evaluation of TTS Systems. This paper presents an attempt to design listening tests for the Czech synthesis speech evaluation. The design is based on standardized and widely used listening tests for English; therefore, we can benefit from the advantages provided by standards. Bearing the Czech language phenomena in mind, we filled the standard frameworks of several listening tests, especially the MRT (Modified Rhyme Test) and the SUS (Semantically Unpredictable Sentences) test; the Czech National Corpus was used for this purpose. Designed tests were instantly used for real tests in which 88 people took part, a procedure which proved correct. This was the first attempt to design Czech listening tests according to given standard frameworks and it was successful.",2004
lin-dyer-2009-data,https://aclanthology.org/N09-4001,0,,,,,,,"Data Intensive Text Processing with MapReduce. This half-day tutorial introduces participants to data-intensive text processing with the MapReduce programming model [1], using the open-source Hadoop implementation. The focus will be on scalability and the tradeoffs associated with distributed processing of large datasets. Content will include general discussions about algorithm design, presentation of illustrative algorithms, case studies in HLT applications, as well as practical advice in writing Hadoop programs and running Hadoop clusters. Amazon has generously agreed to provide each participant with $100 in Amazon Web Services (AWS) credits that can used toward its Elastic Compute Cloud (EC2) ""utility computing"" service (sufficient for 1000 instance-hours). EC2 allows anyone to rapidly provision Hadoop clusters ""on the fly"" without upfront hardware investments, and provides a low-cost vehicle for exploring Hadoop.",Data Intensive Text Processing with {M}ap{R}educe,"This half-day tutorial introduces participants to data-intensive text processing with the MapReduce programming model [1], using the open-source Hadoop implementation. The focus will be on scalability and the tradeoffs associated with distributed processing of large datasets. Content will include general discussions about algorithm design, presentation of illustrative algorithms, case studies in HLT applications, as well as practical advice in writing Hadoop programs and running Hadoop clusters. Amazon has generously agreed to provide each participant with $100 in Amazon Web Services (AWS) credits that can used toward its Elastic Compute Cloud (EC2) ""utility computing"" service (sufficient for 1000 instance-hours). EC2 allows anyone to rapidly provision Hadoop clusters ""on the fly"" without upfront hardware investments, and provides a low-cost vehicle for exploring Hadoop.",Data Intensive Text Processing with MapReduce,"This half-day tutorial introduces participants to data-intensive text processing with the MapReduce programming model [1], using the open-source Hadoop implementation. The focus will be on scalability and the tradeoffs associated with distributed processing of large datasets. Content will include general discussions about algorithm design, presentation of illustrative algorithms, case studies in HLT applications, as well as practical advice in writing Hadoop programs and running Hadoop clusters. Amazon has generously agreed to provide each participant with $100 in Amazon Web Services (AWS) credits that can used toward its Elastic Compute Cloud (EC2) ""utility computing"" service (sufficient for 1000 instance-hours). EC2 allows anyone to rapidly provision Hadoop clusters ""on the fly"" without upfront hardware investments, and provides a low-cost vehicle for exploring Hadoop.","This work is supported by NSF under awards IIS-0705832 and IIS-0836560; the Intramural Research Program of the NIH, National Library of Medicine; DARPA/IPTO Contract No. HR0011-06-2-0001 under the GALE program. Any opinions, findings, conclusions, or recommendations expressed here are the instructors' and do not necessarily reflect those of the sponsors. We are grateful to Amazon for its support of tutorial participants.","Data Intensive Text Processing with MapReduce. This half-day tutorial introduces participants to data-intensive text processing with the MapReduce programming model [1], using the open-source Hadoop implementation. 
The focus will be on scalability and the tradeoffs associated with distributed processing of large datasets. Content will include general discussions about algorithm design, presentation of illustrative algorithms, case studies in HLT applications, as well as practical advice in writing Hadoop programs and running Hadoop clusters. Amazon has generously agreed to provide each participant with $100 in Amazon Web Services (AWS) credits that can be used toward its Elastic Compute Cloud (EC2) ""utility computing"" service (sufficient for 1000 instance-hours). EC2 allows anyone to rapidly provision Hadoop clusters ""on the fly"" without upfront hardware investments, and provides a low-cost vehicle for exploring Hadoop.",2009
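The MapReduce programming model taught in the lin-dyer-2009-data tutorial above is easiest to see on the standard word-count example. The sketch below simulates the map, shuffle and reduce phases in-process in plain Python; the function names are invented for illustration, and a real Hadoop job would express the same mapper and reducer through the Hadoop API instead.

    # In-process simulation of MapReduce word count: map emits (word, 1) pairs,
    # the shuffle groups values by key, reduce sums the counts per word.
    from collections import defaultdict

    def map_phase(document):
        for word in document.split():
            yield word.lower(), 1

    def reduce_phase(word, counts):
        return word, sum(counts)

    def mapreduce_wordcount(documents):
        groups = defaultdict(list)  # shuffle/sort: group emitted values by key
        for doc in documents:
            for word, count in map_phase(doc):
                groups[word].append(count)
        return dict(reduce_phase(w, c) for w, c in groups.items())

    print(mapreduce_wordcount(["to be or not to be", "to err is human"]))
    # {'to': 3, 'be': 2, 'or': 1, 'not': 1, 'err': 1, 'is': 1, 'human': 1}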
vulic-korhonen-2016-role,https://aclanthology.org/P16-1024,0,,,,,,,"On the Role of Seed Lexicons in Learning Bilingual Word Embeddings. A shared bilingual word embedding space (SBWES) is an indispensable resource in a variety of cross-language NLP and IR tasks. A common approach to the SB-WES induction is to learn a mapping function between monolingual semantic spaces, where the mapping critically relies on a seed word lexicon used in the learning process. In this work, we analyze the importance and properties of seed lexicons for the SBWES induction across different dimensions (i.e., lexicon source, lexicon size, translation method, translation pair reliability). On the basis of our analysis, we propose a simple but effective hybrid bilingual word embedding (BWE) model. This model (HYBWE) learns the mapping between two monolingual embedding spaces using only highly reliable symmetric translation pairs from a seed document-level embedding space. We perform bilingual lexicon learning (BLL) with 3 language pairs and show that by carefully selecting reliable translation pairs our new HYBWE model outperforms benchmarking BWE learning models, all of which use more expensive bilingual signals. Effectively, we demonstrate that a SBWES may be induced by leveraging only a very weak bilingual signal (document alignments) along with monolingual data.",On the Role of Seed Lexicons in Learning Bilingual Word Embeddings,"A shared bilingual word embedding space (SBWES) is an indispensable resource in a variety of cross-language NLP and IR tasks. A common approach to the SB-WES induction is to learn a mapping function between monolingual semantic spaces, where the mapping critically relies on a seed word lexicon used in the learning process. In this work, we analyze the importance and properties of seed lexicons for the SBWES induction across different dimensions (i.e., lexicon source, lexicon size, translation method, translation pair reliability). On the basis of our analysis, we propose a simple but effective hybrid bilingual word embedding (BWE) model. This model (HYBWE) learns the mapping between two monolingual embedding spaces using only highly reliable symmetric translation pairs from a seed document-level embedding space. We perform bilingual lexicon learning (BLL) with 3 language pairs and show that by carefully selecting reliable translation pairs our new HYBWE model outperforms benchmarking BWE learning models, all of which use more expensive bilingual signals. Effectively, we demonstrate that a SBWES may be induced by leveraging only a very weak bilingual signal (document alignments) along with monolingual data.",On the Role of Seed Lexicons in Learning Bilingual Word Embeddings,"A shared bilingual word embedding space (SBWES) is an indispensable resource in a variety of cross-language NLP and IR tasks. A common approach to the SB-WES induction is to learn a mapping function between monolingual semantic spaces, where the mapping critically relies on a seed word lexicon used in the learning process. In this work, we analyze the importance and properties of seed lexicons for the SBWES induction across different dimensions (i.e., lexicon source, lexicon size, translation method, translation pair reliability). On the basis of our analysis, we propose a simple but effective hybrid bilingual word embedding (BWE) model. This model (HYBWE) learns the mapping between two monolingual embedding spaces using only highly reliable symmetric translation pairs from a seed document-level embedding space. 
We perform bilingual lexicon learning (BLL) with 3 language pairs and show that by carefully selecting reliable translation pairs our new HYBWE model outperforms benchmarking BWE learning models, all of which use more expensive bilingual signals. Effectively, we demonstrate that a SBWES may be induced by leveraging only a very weak bilingual signal (document alignments) along with monolingual data.",This work is supported by ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909). The authors are grateful to Roi Reichart and the anonymous reviewers for their helpful comments and suggestions.,"On the Role of Seed Lexicons in Learning Bilingual Word Embeddings. A shared bilingual word embedding space (SBWES) is an indispensable resource in a variety of cross-language NLP and IR tasks. A common approach to the SB-WES induction is to learn a mapping function between monolingual semantic spaces, where the mapping critically relies on a seed word lexicon used in the learning process. In this work, we analyze the importance and properties of seed lexicons for the SBWES induction across different dimensions (i.e., lexicon source, lexicon size, translation method, translation pair reliability). On the basis of our analysis, we propose a simple but effective hybrid bilingual word embedding (BWE) model. This model (HYBWE) learns the mapping between two monolingual embedding spaces using only highly reliable symmetric translation pairs from a seed document-level embedding space. We perform bilingual lexicon learning (BLL) with 3 language pairs and show that by carefully selecting reliable translation pairs our new HYBWE model outperforms benchmarking BWE learning models, all of which use more expensive bilingual signals. Effectively, we demonstrate that a SBWES may be induced by leveraging only a very weak bilingual signal (document alignments) along with monolingual data.",2016
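The mapping-based family of methods analysed in the vulic-korhonen-2016-role entry above learns a function between two monolingual embedding spaces from a seed lexicon of translation pairs. A common baseline form of that mapping is a least-squares linear map, sketched below with toy random data; the dimensions, data and plain least-squares objective are illustrative assumptions, not the HYBWE training setup.

    # Learn a linear map W from source to target embeddings using seed pairs,
    # then translate by nearest neighbour in the shared space. Toy data only.
    import numpy as np

    def learn_mapping(src_vecs, tgt_vecs):
        """Least-squares W minimising ||src_vecs @ W - tgt_vecs||_F."""
        W, *_ = np.linalg.lstsq(src_vecs, tgt_vecs, rcond=None)
        return W

    def translate(word_vec, W, tgt_matrix, tgt_words):
        mapped = word_vec @ W
        sims = tgt_matrix @ mapped / (
            np.linalg.norm(tgt_matrix, axis=1) * np.linalg.norm(mapped) + 1e-9)
        return tgt_words[int(np.argmax(sims))]

    rng = np.random.default_rng(0)
    src = rng.normal(size=(100, 50))   # 100 seed source-word vectors, dim 50
    tgt = rng.normal(size=(100, 50))   # their target-language translations
    W = learn_mapping(src, tgt)
    print(translate(src[0], W, tgt, [f"tgt_word_{i}" for i in range(100)]))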
li-etal-2021-tdeer,https://aclanthology.org/2021.emnlp-main.635,0,,,,,,,"TDEER: An Efficient Translating Decoding Schema for Joint Extraction of Entities and Relations. Joint extraction of entities and relations from unstructured texts to form factual triples is a fundamental task of constructing a Knowledge Base (KB). A common method is to decode triples by predicting entity pairs to obtain the corresponding relation. However, it is still challenging to handle this task efficiently, especially for the overlapping triple problem. To address such a problem, this paper proposes a novel efficient entities and relations extraction model called TDEER, which stands for Translating Decoding Schema for Joint Extraction of Entities and Relations. Unlike the common approaches, the proposed translating decoding schema regards the relation as a translating operation from subject to objects, i.e., TDEER decodes triples as subject + relation → objects. TDEER can naturally handle the overlapping triple problem, because the translating decoding schema can recognize all possible triples, including overlapping and non-overlapping triples. To enhance model robustness, we introduce negative samples to alleviate error accumulation at different stages. Extensive experiments on public datasets demonstrate that TDEER produces competitive results compared with the state-of-the-art (SOTA) baselines. Furthermore, the computation complexity analysis indicates that TDEER is more efficient than powerful baselines. Especially, the proposed TDEER is 2 times faster than the recent SOTA models. The code is available at https://github.com/4AI/TDEER.",{TDEER}: An Efficient Translating Decoding Schema for Joint Extraction of Entities and Relations,"Joint extraction of entities and relations from unstructured texts to form factual triples is a fundamental task of constructing a Knowledge Base (KB). A common method is to decode triples by predicting entity pairs to obtain the corresponding relation. However, it is still challenging to handle this task efficiently, especially for the overlapping triple problem. To address such a problem, this paper proposes a novel efficient entities and relations extraction model called TDEER, which stands for Translating Decoding Schema for Joint Extraction of Entities and Relations. Unlike the common approaches, the proposed translating decoding schema regards the relation as a translating operation from subject to objects, i.e., TDEER decodes triples as subject + relation → objects. TDEER can naturally handle the overlapping triple problem, because the translating decoding schema can recognize all possible triples, including overlapping and non-overlapping triples. To enhance model robustness, we introduce negative samples to alleviate error accumulation at different stages. Extensive experiments on public datasets demonstrate that TDEER produces competitive results compared with the state-of-the-art (SOTA) baselines. Furthermore, the computation complexity analysis indicates that TDEER is more efficient than powerful baselines. Especially, the proposed TDEER is 2 times faster than the recent SOTA models. The code is available at https://github.com/4AI/TDEER.",TDEER: An Efficient Translating Decoding Schema for Joint Extraction of Entities and Relations,"Joint extraction of entities and relations from unstructured texts to form factual triples is a fundamental task of constructing a Knowledge Base (KB). 
A common method is to decode triples by predicting entity pairs to obtain the corresponding relation. However, it is still challenging to handle this task efficiently, especially for the overlapping triple problem. To address such a problem, this paper proposes a novel efficient entities and relations extraction model called TDEER, which stands for Translating Decoding Schema for Joint Extraction of Entities and Relations. Unlike the common approaches, the proposed translating decoding schema regards the relation as a translating operation from subject to objects, i.e., TDEER decodes triples as subject + relation → objects. TDEER can naturally handle the overlapping triple problem, because the translating decoding schema can recognize all possible triples, including overlapping and non-overlapping triples. To enhance model robustness, we introduce negative samples to alleviate error accumulation at different stages. Extensive experiments on public datasets demonstrate that TDEER produces competitive results compared with the state-of-the-art (SOTA) baselines. Furthermore, the computation complexity analysis indicates that TDEER is more efficient than powerful baselines. Especially, the proposed TDEER is 2 times faster than the recent SOTA models. The code is available at https://github.com/4AI/TDEER.",,"TDEER: An Efficient Translating Decoding Schema for Joint Extraction of Entities and Relations. Joint extraction of entities and relations from unstructured texts to form factual triples is a fundamental task of constructing a Knowledge Base (KB). A common method is to decode triples by predicting entity pairs to obtain the corresponding relation. However, it is still challenging to handle this task efficiently, especially for the overlapping triple problem. To address such a problem, this paper proposes a novel efficient entities and relations extraction model called TDEER, which stands for Translating Decoding Schema for Joint Extraction of Entities and Relations. Unlike the common approaches, the proposed translating decoding schema regards the relation as a translating operation from subject to objects, i.e., TDEER decodes triples as subject + relation → objects. TDEER can naturally handle the overlapping triple problem, because the translating decoding schema can recognize all possible triples, including overlapping and non-overlapping triples. To enhance model robustness, we introduce negative samples to alleviate error accumulation at different stages. Extensive experiments on public datasets demonstrate that TDEER produces competitive results compared with the state-of-the-art (SOTA) baselines. Furthermore, the computation complexity analysis indicates that TDEER is more efficient than powerful baselines. Especially, the proposed TDEER is 2 times faster than the recent SOTA models. The code is available at https://github.com/4AI/TDEER.",2021
schilder-1999-reference,https://aclanthology.org/W99-0112,0,,,,,,,"Reference Hashed. This paper argues for a novel data structure for the representation of discourse referents. A so-called hashing list is employed to store discourse referents according to their grammatical features. The account proposed combines insights from several theories of discourse comprehension. Segmented Discourse Representation Theory (Asher, 1993) is enriched by the ranking system developed in centering theory (Grosz et al., 1995). In addition, a tree logic is used to represent underspecification within the discourse structure (Schilder, 1998).",Reference Hashed,"This paper argues for a novel data structure for the representation of discourse referents. A so-called hashing list is employed to store discourse referents according to their grammatical features. The account proposed combines insights from several theories of discourse comprehension. Segmented Discourse Representation Theory (Asher, 1993) is enriched by the ranking system developed in centering theory (Grosz et al., 1995). In addition, a tree logic is used to represent underspecification within the discourse structure (Schilder, 1998).",Reference Hashed,"This paper argues for a novel data structure for the representation of discourse referents. A so-called hashing list is employed to store discourse referents according to their grammatical features. The account proposed combines insights from several theories of discourse comprehension. Segmented Discourse Representation Theory (Asher, 1993) is enriched by the ranking system developed in centering theory (Grosz et al., 1995). In addition, a tree logic is used to represent underspecification within the discourse structure (Schilder, 1998).",I would like to thank the two anonymous reviewers for their comments and feedback. Special thanks to Christie Manning for providing me with all her help.,"Reference Hashed. This paper argues for a novel data structure for the representation of discourse referents. A so-called hashing list is employed to store discourse referents according to their grammatical features. The account proposed combines insights from several theories of discourse comprehension. Segmented Discourse Representation Theory (Asher, 1993) is enriched by the ranking system developed in centering theory (Grosz et al., 1995). In addition, a tree logic is used to represent underspecification within the discourse structure (Schilder, 1998).",1999
mosbach-etal-2020-interplay,https://aclanthology.org/2020.blackboxnlp-1.7,0,,,,,,,"On the Interplay Between Fine-tuning and Sentence-Level Probing for Linguistic Knowledge in Pre-Trained Transformers. Fine-tuning pre-trained contextualized embedding models has become an integral part of the NLP pipeline. At the same time, probing has emerged as a way to investigate the linguistic knowledge captured by pre-trained models. Very little is, however, understood about how fine-tuning affects the representations of pre-trained models and thereby the linguistic knowledge they encode. This paper contributes towards closing this gap. We study three different pre-trained models: BERT, RoBERTa, and ALBERT, and investigate through sentence-level probing how finetuning affects their representations. We find that for some probing tasks fine-tuning leads to substantial changes in accuracy, possibly suggesting that fine-tuning introduces or even removes linguistic knowledge from a pre-trained model. These changes, however, vary greatly across different models, fine-tuning and probing tasks. Our analysis reveals that while finetuning indeed changes the representations of a pre-trained model and these changes are typically larger for higher layers, only in very few cases, fine-tuning has a positive effect on probing accuracy that is larger than just using the pre-trained model with a strong pooling method. Based on our findings, we argue that both positive and negative effects of finetuning on probing require a careful interpretation.",On the Interplay Between Fine-tuning and Sentence-Level Probing for Linguistic Knowledge in Pre-Trained Transformers,"Fine-tuning pre-trained contextualized embedding models has become an integral part of the NLP pipeline. At the same time, probing has emerged as a way to investigate the linguistic knowledge captured by pre-trained models. Very little is, however, understood about how fine-tuning affects the representations of pre-trained models and thereby the linguistic knowledge they encode. This paper contributes towards closing this gap. We study three different pre-trained models: BERT, RoBERTa, and ALBERT, and investigate through sentence-level probing how finetuning affects their representations. We find that for some probing tasks fine-tuning leads to substantial changes in accuracy, possibly suggesting that fine-tuning introduces or even removes linguistic knowledge from a pre-trained model. These changes, however, vary greatly across different models, fine-tuning and probing tasks. Our analysis reveals that while finetuning indeed changes the representations of a pre-trained model and these changes are typically larger for higher layers, only in very few cases, fine-tuning has a positive effect on probing accuracy that is larger than just using the pre-trained model with a strong pooling method. Based on our findings, we argue that both positive and negative effects of finetuning on probing require a careful interpretation.",On the Interplay Between Fine-tuning and Sentence-Level Probing for Linguistic Knowledge in Pre-Trained Transformers,"Fine-tuning pre-trained contextualized embedding models has become an integral part of the NLP pipeline. At the same time, probing has emerged as a way to investigate the linguistic knowledge captured by pre-trained models. Very little is, however, understood about how fine-tuning affects the representations of pre-trained models and thereby the linguistic knowledge they encode. This paper contributes towards closing this gap. 
We study three different pre-trained models: BERT, RoBERTa, and ALBERT, and investigate through sentence-level probing how finetuning affects their representations. We find that for some probing tasks fine-tuning leads to substantial changes in accuracy, possibly suggesting that fine-tuning introduces or even removes linguistic knowledge from a pre-trained model. These changes, however, vary greatly across different models, fine-tuning and probing tasks. Our analysis reveals that while finetuning indeed changes the representations of a pre-trained model and these changes are typically larger for higher layers, only in very few cases, fine-tuning has a positive effect on probing accuracy that is larger than just using the pre-trained model with a strong pooling method. Based on our findings, we argue that both positive and negative effects of finetuning on probing require a careful interpretation.","We thank Badr Abdullah for his comments and suggestions. We would also like to thank the reviewers for their useful comments and feedback, in particular R1. This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -project-id 232722074 -SFB 1102.","On the Interplay Between Fine-tuning and Sentence-Level Probing for Linguistic Knowledge in Pre-Trained Transformers. Fine-tuning pre-trained contextualized embedding models has become an integral part of the NLP pipeline. At the same time, probing has emerged as a way to investigate the linguistic knowledge captured by pre-trained models. Very little is, however, understood about how fine-tuning affects the representations of pre-trained models and thereby the linguistic knowledge they encode. This paper contributes towards closing this gap. We study three different pre-trained models: BERT, RoBERTa, and ALBERT, and investigate through sentence-level probing how finetuning affects their representations. We find that for some probing tasks fine-tuning leads to substantial changes in accuracy, possibly suggesting that fine-tuning introduces or even removes linguistic knowledge from a pre-trained model. These changes, however, vary greatly across different models, fine-tuning and probing tasks. Our analysis reveals that while finetuning indeed changes the representations of a pre-trained model and these changes are typically larger for higher layers, only in very few cases, fine-tuning has a positive effect on probing accuracy that is larger than just using the pre-trained model with a strong pooling method. Based on our findings, we argue that both positive and negative effects of finetuning on probing require a careful interpretation.",2020
kundu-choudhury-2014-know,https://aclanthology.org/W14-5127,0,,,,,,,"How to Know the Best Machine Translation System in Advance before Translating a Sentence?. The aim of the paper is to identify a machine translation (MT) system from a set of multiple MT systems in advance, capable of producing most appropriate translation for a source sentence. The prediction is done based on the analysis of a source sentence before translating it using these MT systems. This selection procedure has been framed as a classification task. A machine learning based approach leveraging features extracting from analysis of a source sentence has been proposed here. The main contribution of the paper is selection of sourceside features. These features help machine learning approaches to discriminate MT systems according to their translation quality though these approaches have no idea about working principle of these MT systems. The proposed approach is language independent and has shown promising result when applied on English-Bangla MT task.",How to Know the Best Machine Translation System in Advance before Translating a Sentence?,"The aim of the paper is to identify a machine translation (MT) system from a set of multiple MT systems in advance, capable of producing most appropriate translation for a source sentence. The prediction is done based on the analysis of a source sentence before translating it using these MT systems. This selection procedure has been framed as a classification task. A machine learning based approach leveraging features extracting from analysis of a source sentence has been proposed here. The main contribution of the paper is selection of sourceside features. These features help machine learning approaches to discriminate MT systems according to their translation quality though these approaches have no idea about working principle of these MT systems. The proposed approach is language independent and has shown promising result when applied on English-Bangla MT task.",How to Know the Best Machine Translation System in Advance before Translating a Sentence?,"The aim of the paper is to identify a machine translation (MT) system from a set of multiple MT systems in advance, capable of producing most appropriate translation for a source sentence. The prediction is done based on the analysis of a source sentence before translating it using these MT systems. This selection procedure has been framed as a classification task. A machine learning based approach leveraging features extracting from analysis of a source sentence has been proposed here. The main contribution of the paper is selection of sourceside features. These features help machine learning approaches to discriminate MT systems according to their translation quality though these approaches have no idea about working principle of these MT systems. The proposed approach is language independent and has shown promising result when applied on English-Bangla MT task.",,"How to Know the Best Machine Translation System in Advance before Translating a Sentence?. The aim of the paper is to identify a machine translation (MT) system from a set of multiple MT systems in advance, capable of producing most appropriate translation for a source sentence. The prediction is done based on the analysis of a source sentence before translating it using these MT systems. This selection procedure has been framed as a classification task. 
A machine learning based approach leveraging features extracted from the analysis of a source sentence has been proposed here. The main contribution of the paper is the selection of source-side features. These features help machine learning approaches to discriminate MT systems according to their translation quality, even though these approaches have no knowledge of the working principles of these MT systems. The proposed approach is language independent and has shown promising results when applied to the English-Bangla MT task.",2014
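The kundu-choudhury-2014-know entry above frames picking the best MT system for a sentence as a classification task over source-side features computed before translation. The sketch below shows that framing with a generic classifier; the feature set, system labels and training data are invented for illustration and are not the paper's configuration.

    # Predicting which MT system to use from source-side features only.
    # Features, labels and data here are toy assumptions for illustration.
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def source_side_features(sentence):
        tokens = sentence.split()
        return {
            "length": len(tokens),
            "avg_token_len": sum(map(len, tokens)) / max(len(tokens), 1),
            "num_commas": sentence.count(","),
        }

    # Each training example: a source sentence labelled with the MT system
    # whose output was judged best for it.
    train_sentences = ["this is a short sentence",
                       "a much longer , clause-heavy sentence , with commas"]
    train_best_system = ["system_A", "system_B"]

    clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit([source_side_features(s) for s in train_sentences], train_best_system)
    print(clf.predict([source_side_features("another unseen source sentence")]))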
ravi-kozareva-2019-device,https://aclanthology.org/P19-1368,0,,,,,,,"On-device Structured and Context Partitioned Projection Networks. A challenging problem in on-device text classification is to build highly accurate neural models that can fit in small memory footprint and have low latency. To address this challenge, we propose an on-device neural network SGNN++ which dynamically learns compact projection vectors from raw text using structured and context-dependent partition projections. We show that this results in accelerated inference and performance improvements. We conduct extensive evaluation on multiple conversational tasks and languages such as English, Japanese, Spanish and French. Our SGNN++ model significantly outperforms all baselines, improves upon existing on-device neural models and even surpasses RNN, CNN and BiLSTM models on dialog act and intent prediction. Through a series of ablation studies we show the impact of the partitioned projections and structured information leading to 10% improvement. We study the impact of the model size on accuracy and introduce quantization-aware training for SGNN++ to further reduce the model size while preserving the same quality. Finally, we show fast inference on mobile phones.",On-device Structured and Context Partitioned Projection Networks,"A challenging problem in on-device text classification is to build highly accurate neural models that can fit in small memory footprint and have low latency. To address this challenge, we propose an on-device neural network SGNN++ which dynamically learns compact projection vectors from raw text using structured and context-dependent partition projections. We show that this results in accelerated inference and performance improvements. We conduct extensive evaluation on multiple conversational tasks and languages such as English, Japanese, Spanish and French. Our SGNN++ model significantly outperforms all baselines, improves upon existing on-device neural models and even surpasses RNN, CNN and BiLSTM models on dialog act and intent prediction. Through a series of ablation studies we show the impact of the partitioned projections and structured information leading to 10% improvement. We study the impact of the model size on accuracy and introduce quantization-aware training for SGNN++ to further reduce the model size while preserving the same quality. Finally, we show fast inference on mobile phones.",On-device Structured and Context Partitioned Projection Networks,"A challenging problem in on-device text classification is to build highly accurate neural models that can fit in small memory footprint and have low latency. To address this challenge, we propose an on-device neural network SGNN++ which dynamically learns compact projection vectors from raw text using structured and context-dependent partition projections. We show that this results in accelerated inference and performance improvements. We conduct extensive evaluation on multiple conversational tasks and languages such as English, Japanese, Spanish and French. Our SGNN++ model significantly outperforms all baselines, improves upon existing on-device neural models and even surpasses RNN, CNN and BiLSTM models on dialog act and intent prediction. Through a series of ablation studies we show the impact of the partitioned projections and structured information leading to 10% improvement. 
We study the impact of the model size on accuracy and introduce quantization-aware training for SGNN++ to further reduce the model size while preserving the same quality. Finally, we show fast inference on mobile phones.",We would like to thank the organizers of the customer feedback challenge for sharing the data and the anonymous reviewers for their valuable feedback and suggestions.,"On-device Structured and Context Partitioned Projection Networks. A challenging problem in on-device text classification is to build highly accurate neural models that can fit in small memory footprint and have low latency. To address this challenge, we propose an on-device neural network SGNN++ which dynamically learns compact projection vectors from raw text using structured and context-dependent partition projections. We show that this results in accelerated inference and performance improvements. We conduct extensive evaluation on multiple conversational tasks and languages such as English, Japanese, Spanish and French. Our SGNN++ model significantly outperforms all baselines, improves upon existing on-device neural models and even surpasses RNN, CNN and BiLSTM models on dialog act and intent prediction. Through a series of ablation studies we show the impact of the partitioned projections and structured information leading to 10% improvement. We study the impact of the model size on accuracy and introduce quantization-aware training for SGNN++ to further reduce the model size while preserving the same quality. Finally, we show fast inference on mobile phones.",2019
etcheverry-wonsever-2019-unraveling,https://aclanthology.org/P19-1319,0,,,,,,,"Unraveling Antonym's Word Vectors through a Siamese-like Network. Discriminating antonyms and synonyms is an important NLP task that has the difficulty that both, antonyms and synonyms, contains similar distributional information. Consequently, pairs of antonyms and synonyms may have similar word vectors. We present an approach to unravel antonymy and synonymy from word vectors based on a siamese network inspired approach. The model consists of a two-phase training of the same base network: a pre-training phase according to a siamese model supervised by synonyms and a training phase on antonyms through a siamese-like model that supports the antitransitivity present in antonymy. The approach makes use of the claim that the antonyms in common of a word tend to be synonyms. We show that our approach outperforms distributional and patternbased approaches, relaying on a simple feed forward network as base network of the training phases.",Unraveling Antonym{'}s Word Vectors through a {S}iamese-like Network,"Discriminating antonyms and synonyms is an important NLP task that has the difficulty that both, antonyms and synonyms, contains similar distributional information. Consequently, pairs of antonyms and synonyms may have similar word vectors. We present an approach to unravel antonymy and synonymy from word vectors based on a siamese network inspired approach. The model consists of a two-phase training of the same base network: a pre-training phase according to a siamese model supervised by synonyms and a training phase on antonyms through a siamese-like model that supports the antitransitivity present in antonymy. The approach makes use of the claim that the antonyms in common of a word tend to be synonyms. We show that our approach outperforms distributional and patternbased approaches, relaying on a simple feed forward network as base network of the training phases.",Unraveling Antonym's Word Vectors through a Siamese-like Network,"Discriminating antonyms and synonyms is an important NLP task that has the difficulty that both, antonyms and synonyms, contains similar distributional information. Consequently, pairs of antonyms and synonyms may have similar word vectors. We present an approach to unravel antonymy and synonymy from word vectors based on a siamese network inspired approach. The model consists of a two-phase training of the same base network: a pre-training phase according to a siamese model supervised by synonyms and a training phase on antonyms through a siamese-like model that supports the antitransitivity present in antonymy. The approach makes use of the claim that the antonyms in common of a word tend to be synonyms. We show that our approach outperforms distributional and patternbased approaches, relaying on a simple feed forward network as base network of the training phases.",,"Unraveling Antonym's Word Vectors through a Siamese-like Network. Discriminating antonyms and synonyms is an important NLP task that has the difficulty that both, antonyms and synonyms, contains similar distributional information. Consequently, pairs of antonyms and synonyms may have similar word vectors. We present an approach to unravel antonymy and synonymy from word vectors based on a siamese network inspired approach. 
The model consists of a two-phase training of the same base network: a pre-training phase according to a siamese model supervised by synonyms and a training phase on antonyms through a siamese-like model that supports the antitransitivity present in antonymy. The approach makes use of the claim that the antonyms in common of a word tend to be synonyms. We show that our approach outperforms distributional and pattern-based approaches, relying on a simple feed forward network as base network of the training phases.",2019
slawik-etal-2015-stripping,https://aclanthology.org/2015.eamt-1.18,0,,,,,,,"Stripping Adjectives: Integration Techniques for Selective Stemming in SMT Systems. In this paper we present an approach to reduce data sparsity problems when translating from morphologically rich languages into less inflected languages by selectively stemming certain word types. We develop and compare three different integration strategies: replacing words with their stemmed form, combined input using alternative lattice paths for the stemmed and surface forms and a novel hidden combination strategy, where we replace the stems in the stemmed phrase table by the observed surface forms in the test data. This allows us to apply advanced models trained on the surface forms of the words. We evaluate our approach by stemming German adjectives in two German→English translation scenarios: a low-resource condition as well as a large-scale state-of-the-art translation system. We are able to improve between 0.2 and 0.4 BLEU points over our baseline and reduce the number of out-of-vocabulary words by up to 16.5%.",Stripping Adjectives: Integration Techniques for Selective Stemming in {SMT} Systems,"In this paper we present an approach to reduce data sparsity problems when translating from morphologically rich languages into less inflected languages by selectively stemming certain word types. We develop and compare three different integration strategies: replacing words with their stemmed form, combined input using alternative lattice paths for the stemmed and surface forms and a novel hidden combination strategy, where we replace the stems in the stemmed phrase table by the observed surface forms in the test data. This allows us to apply advanced models trained on the surface forms of the words. We evaluate our approach by stemming German adjectives in two German→English translation scenarios: a low-resource condition as well as a large-scale state-of-the-art translation system. We are able to improve between 0.2 and 0.4 BLEU points over our baseline and reduce the number of out-of-vocabulary words by up to 16.5%.",Stripping Adjectives: Integration Techniques for Selective Stemming in SMT Systems,"In this paper we present an approach to reduce data sparsity problems when translating from morphologically rich languages into less inflected languages by selectively stemming certain word types. We develop and compare three different integration strategies: replacing words with their stemmed form, combined input using alternative lattice paths for the stemmed and surface forms and a novel hidden combination strategy, where we replace the stems in the stemmed phrase table by the observed surface forms in the test data. This allows us to apply advanced models trained on the surface forms of the words. We evaluate our approach by stemming German adjectives in two German→English translation scenarios: a low-resource condition as well as a large-scale state-of-the-art translation system. We are able to improve between 0.2 and 0.4 BLEU points over our baseline and reduce the number of out-of-vocabulary words by up to 16.5%.",The project leading to this application has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement n • 645452.,"Stripping Adjectives: Integration Techniques for Selective Stemming in SMT Systems. 
In this paper we present an approach to reduce data sparsity problems when translating from morphologically rich languages into less inflected languages by selectively stemming certain word types. We develop and compare three different integration strategies: replacing words with their stemmed form, combined input using alternative lattice paths for the stemmed and surface forms and a novel hidden combination strategy, where we replace the stems in the stemmed phrase table by the observed surface forms in the test data. This allows us to apply advanced models trained on the surface forms of the words. We evaluate our approach by stemming German adjectives in two German→English translation scenarios: a low-resource condition as well as a large-scale state-of-the-art translation system. We are able to improve between 0.2 and 0.4 BLEU points over our baseline and reduce the number of out-of-vocabulary words by up to 16.5%.",2015
di-eugenio-1992-understanding,https://aclanthology.org/P92-1016,0,,,,,,,"Understanding Natural Language Instructions: The Case of Purpose Clauses. This paper presents an analysis of purpose clauses in the context of instruction understanding. Such analysis shows that goals affect the interpretation and / or execution of actions, lends support to the proposal of using generation and enablement to model relations between actions, and sheds light on some inference processes necessary to interpret purpose clauses.",Understanding Natural Language Instructions: The Case of Purpose Clauses,"This paper presents an analysis of purpose clauses in the context of instruction understanding. Such analysis shows that goals affect the interpretation and / or execution of actions, lends support to the proposal of using generation and enablement to model relations between actions, and sheds light on some inference processes necessary to interpret purpose clauses.",Understanding Natural Language Instructions: The Case of Purpose Clauses,"This paper presents an analysis of purpose clauses in the context of instruction understanding. Such analysis shows that goals affect the interpretation and / or execution of actions, lends support to the proposal of using generation and enablement to model relations between actions, and sheds light on some inference processes necessary to interpret purpose clauses.","For financial support I acknowledge DARPA grant no. N0014-90-J-1863 and ARt grant no. DAALO3-89-C0031PR1. Thanks to Bonnie Webber for support, insights and countless discussions, and to all the members of the AnimNL group, in particular to Mike White. Finally, thanks to the Dipartimento di Informatica -Universita' di Torino -Italy for making their computing environment available to me, and in particular thanks to Felice Cardone, Luca Console, Leonardo Lesmo, and Vincenzo Lombardo, who helped me through a last minute computer crash.","Understanding Natural Language Instructions: The Case of Purpose Clauses. This paper presents an analysis of purpose clauses in the context of instruction understanding. Such analysis shows that goals affect the interpretation and / or execution of actions, lends support to the proposal of using generation and enablement to model relations between actions, and sheds light on some inference processes necessary to interpret purpose clauses.",1992
piits-etal-2007-designing,https://aclanthology.org/W07-2459,0,,,,,,,"Designing a Speech Corpus for Estonian Unit Selection Synthesis. The article reports the development of a speech corpus for Estonian text-to-speech synthesis based on unit selection. Introduced are the principles of the corpus as well as the procedure of its creation, from text compilation to corpus analysis and text recording. Also described are the choices made in the process of producing a text of 400 sentences, the relevant lexical and morphological preferences, and the way to the most natural sentence context for the words used.",Designing a Speech Corpus for {E}stonian Unit Selection Synthesis,"The article reports the development of a speech corpus for Estonian text-to-speech synthesis based on unit selection. Introduced are the principles of the corpus as well as the procedure of its creation, from text compilation to corpus analysis and text recording. Also described are the choices made in the process of producing a text of 400 sentences, the relevant lexical and morphological preferences, and the way to the most natural sentence context for the words used.",Designing a Speech Corpus for Estonian Unit Selection Synthesis,"The article reports the development of a speech corpus for Estonian text-to-speech synthesis based on unit selection. Introduced are the principles of the corpus as well as the procedure of its creation, from text compilation to corpus analysis and text recording. Also described are the choices made in the process of producing a text of 400 sentences, the relevant lexical and morphological preferences, and the way to the most natural sentence context for the words used.",The support from the program Language Technology Support of the Estonian has made the present work possible.,"Designing a Speech Corpus for Estonian Unit Selection Synthesis. The article reports the development of a speech corpus for Estonian text-to-speech synthesis based on unit selection. Introduced are the principles of the corpus as well as the procedure of its creation, from text compilation to corpus analysis and text recording. Also described are the choices made in the process of producing a text of 400 sentences, the relevant lexical and morphological preferences, and the way to the most natural sentence context for the words used.",2007
choi-etal-2010-propbank,http://www.lrec-conf.org/proceedings/lrec2010/pdf/73_Paper.pdf,0,,,,,,,"Propbank Frameset Annotation Guidelines Using a Dedicated Editor, Cornerstone. This paper gives guidelines of how to create and update Propbank frameset files using a dedicated editor, Cornerstone. Propbank is a corpus in which the arguments of each verb predicate are annotated with their semantic roles in relation to the predicate. Propbank annotation also requires the choice of a sense ID for each predicate. Thus, for each predicate in Propbank, there exists a corresponding frameset file showing the expected predicate argument structure of each sense related to the predicate. Since most Propbank annotations are based on the predicate argument structure defined in the frameset files, it is important to keep the files consistent, simple to read as well as easy to update. The frameset files are written in XML, which can be difficult to edit when using a simple text editor. Therefore, it is helpful to develop a user-friendly editor such as Cornerstone, specifically customized to create and edit frameset files. Cornerstone runs platform independently, is light enough to run as an X11 application and supports multiple languages such as Arabic, Chinese, English, Hindi and Korean.","{P}ropbank Frameset Annotation Guidelines Using a Dedicated Editor, Cornerstone","This paper gives guidelines of how to create and update Propbank frameset files using a dedicated editor, Cornerstone. Propbank is a corpus in which the arguments of each verb predicate are annotated with their semantic roles in relation to the predicate. Propbank annotation also requires the choice of a sense ID for each predicate. Thus, for each predicate in Propbank, there exists a corresponding frameset file showing the expected predicate argument structure of each sense related to the predicate. Since most Propbank annotations are based on the predicate argument structure defined in the frameset files, it is important to keep the files consistent, simple to read as well as easy to update. The frameset files are written in XML, which can be difficult to edit when using a simple text editor. Therefore, it is helpful to develop a user-friendly editor such as Cornerstone, specifically customized to create and edit frameset files. Cornerstone runs platform independently, is light enough to run as an X11 application and supports multiple languages such as Arabic, Chinese, English, Hindi and Korean.","Propbank Frameset Annotation Guidelines Using a Dedicated Editor, Cornerstone","This paper gives guidelines of how to create and update Propbank frameset files using a dedicated editor, Cornerstone. Propbank is a corpus in which the arguments of each verb predicate are annotated with their semantic roles in relation to the predicate. Propbank annotation also requires the choice of a sense ID for each predicate. Thus, for each predicate in Propbank, there exists a corresponding frameset file showing the expected predicate argument structure of each sense related to the predicate. Since most Propbank annotations are based on the predicate argument structure defined in the frameset files, it is important to keep the files consistent, simple to read as well as easy to update. The frameset files are written in XML, which can be difficult to edit when using a simple text editor. Therefore, it is helpful to develop a user-friendly editor such as Cornerstone, specifically customized to create and edit frameset files. 
Cornerstone runs platform independently, is light enough to run as an X11 application and supports multiple languages such as Arabic, Chinese, English, Hindi and Korean.","We gratefully acknowledge the support of the National Science Foundation Grants CISE-CRI-0551615, Towards a Comprehensive Linguistic Annotation and CISE-CRI 0709167, Collaborative: A Multi-Representational and Multi-Layered Treebank for Hindi/Urdu, and a grant from the Defense Advanced Research Projects Agency (DARPA/IPTO) under the GALE program, DARPA/CMO Contract No. HR0011-06-C-0022, subcontract from BBN, Inc. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.","Propbank Frameset Annotation Guidelines Using a Dedicated Editor, Cornerstone. This paper gives guidelines of how to create and update Propbank frameset files using a dedicated editor, Cornerstone. Propbank is a corpus in which the arguments of each verb predicate are annotated with their semantic roles in relation to the predicate. Propbank annotation also requires the choice of a sense ID for each predicate. Thus, for each predicate in Propbank, there exists a corresponding frameset file showing the expected predicate argument structure of each sense related to the predicate. Since most Propbank annotations are based on the predicate argument structure defined in the frameset files, it is important to keep the files consistent, simple to read as well as easy to update. The frameset files are written in XML, which can be difficult to edit when using a simple text editor. Therefore, it is helpful to develop a user-friendly editor such as Cornerstone, specifically customized to create and edit frameset files. Cornerstone runs platform independently, is light enough to run as an X11 application and supports multiple languages such as Arabic, Chinese, English, Hindi and Korean.",2010
ueda-washio-2021-relationship,https://aclanthology.org/2021.acl-srw.6,0,,,,,,,"On the Relationship between Zipf's Law of Abbreviation and Interfering Noise in Emergent Languages. This paper studies whether emergent languages in a signaling game follow Zipf's law of abbreviation (ZLA), especially when the communication ability of agents is limited because of interfering noises. ZLA is a well-known tendency in human languages where the more frequently a word is used, the shorter it will be. Surprisingly, previous work demonstrated that emergent languages do not obey ZLA at all when neural agents play a signaling game. It also reported that a ZLA-like tendency appeared by adding an explicit penalty on word lengths, which can be considered some external factors in reality such as articulatory effort. We hypothesize, on the other hand, that there might be not only such external factors but also some internal factors related to cognitive abilities. We assume that it could be simulated by modeling the effect of noises on the agents' environment. In our experimental setup, the hidden states of the LSTM-based speaker and listener were added with Gaussian noise, while the channel was subject to discrete random replacement. Our results suggest that noise on a speaker is one of the factors for ZLA or at least causes emergent languages to approach ZLA, while noise on a listener and a channel is not.",On the Relationship between {Z}ipf{'}s Law of Abbreviation and Interfering Noise in Emergent Languages,"This paper studies whether emergent languages in a signaling game follow Zipf's law of abbreviation (ZLA), especially when the communication ability of agents is limited because of interfering noises. ZLA is a well-known tendency in human languages where the more frequently a word is used, the shorter it will be. Surprisingly, previous work demonstrated that emergent languages do not obey ZLA at all when neural agents play a signaling game. It also reported that a ZLA-like tendency appeared by adding an explicit penalty on word lengths, which can be considered some external factors in reality such as articulatory effort. We hypothesize, on the other hand, that there might be not only such external factors but also some internal factors related to cognitive abilities. We assume that it could be simulated by modeling the effect of noises on the agents' environment. In our experimental setup, the hidden states of the LSTM-based speaker and listener were added with Gaussian noise, while the channel was subject to discrete random replacement. Our results suggest that noise on a speaker is one of the factors for ZLA or at least causes emergent languages to approach ZLA, while noise on a listener and a channel is not.",On the Relationship between Zipf's Law of Abbreviation and Interfering Noise in Emergent Languages,"This paper studies whether emergent languages in a signaling game follow Zipf's law of abbreviation (ZLA), especially when the communication ability of agents is limited because of interfering noises. ZLA is a well-known tendency in human languages where the more frequently a word is used, the shorter it will be. Surprisingly, previous work demonstrated that emergent languages do not obey ZLA at all when neural agents play a signaling game. It also reported that a ZLA-like tendency appeared by adding an explicit penalty on word lengths, which can be considered some external factors in reality such as articulatory effort. 
We hypothesize, on the other hand, that there might be not only such external factors but also some internal factors related to cognitive abilities. We assume that it could be simulated by modeling the effect of noises on the agents' environment. In our experimental setup, the hidden states of the LSTM-based speaker and listener were added with Gaussian noise, while the channel was subject to discrete random replacement. Our results suggest that noise on a speaker is one of the factors for ZLA or at least causes emergent languages to approach ZLA, while noise on a listener and a channel is not.","We would like to thank Professor Yusuke Miyao for supervising our research, Jason Naradowsky for fruitful discussions and proofreading, and the anonymous reviewers for helpful suggestions. The first author would also like to thank his colleagues Taiga Ishii and Hiroaki Mizuno as they have encouraged each other in their senior theses.","On the Relationship between Zipf's Law of Abbreviation and Interfering Noise in Emergent Languages. This paper studies whether emergent languages in a signaling game follow Zipf's law of abbreviation (ZLA), especially when the communication ability of agents is limited because of interfering noises. ZLA is a well-known tendency in human languages where the more frequently a word is used, the shorter it will be. Surprisingly, previous work demonstrated that emergent languages do not obey ZLA at all when neural agents play a signaling game. It also reported that a ZLA-like tendency appeared by adding an explicit penalty on word lengths, which can be considered some external factors in reality such as articulatory effort. We hypothesize, on the other hand, that there might be not only such external factors but also some internal factors related to cognitive abilities. We assume that it could be simulated by modeling the effect of noises on the agents' environment. In our experimental setup, the hidden states of the LSTM-based speaker and listener were added with Gaussian noise, while the channel was subject to discrete random replacement. Our results suggest that noise on a speaker is one of the factors for ZLA or at least causes emergent languages to approach ZLA, while noise on a listener and a channel is not.",2021
indig-etal-2018-whats,https://aclanthology.org/L18-1091,0,,,,,,,"What's Wrong, Python? -- A Visual Differ and Graph Library for NLP in Python. The correct analysis of the output of a program based on supervised learning is inevitable in order to be able to identify the errors it produced and characterise its error types. This task is fairly difficult without a proper tool, especially if one works with complex data structures such as parse trees or sentence alignments. In this paper, we present a library that allows the user to interactively visualise and compare the output of any program that yields a well-known data format. Our goal is to create a tool granting the total control of the visualisation to the user, including extensions, but also have the common primitives and data-formats implemented for typical cases. We describe the common features of the common NLP tasks from the viewpoint of visualisation in order to specify the essential primitive functions. We enumerate many popular off-the-shelf NLP visualisation programs to compare with our implementation, which unifies all of the profitable features of the existing programs adding extendibility as a crucial feature to them.","What{'}s Wrong, Python? {--} A Visual Differ and Graph Library for {NLP} in Python","The correct analysis of the output of a program based on supervised learning is inevitable in order to be able to identify the errors it produced and characterise its error types. This task is fairly difficult without a proper tool, especially if one works with complex data structures such as parse trees or sentence alignments. In this paper, we present a library that allows the user to interactively visualise and compare the output of any program that yields a well-known data format. Our goal is to create a tool granting the total control of the visualisation to the user, including extensions, but also have the common primitives and data-formats implemented for typical cases. We describe the common features of the common NLP tasks from the viewpoint of visualisation in order to specify the essential primitive functions. We enumerate many popular off-the-shelf NLP visualisation programs to compare with our implementation, which unifies all of the profitable features of the existing programs adding extendibility as a crucial feature to them.","What's Wrong, Python? -- A Visual Differ and Graph Library for NLP in Python","The correct analysis of the output of a program based on supervised learning is inevitable in order to be able to identify the errors it produced and characterise its error types. This task is fairly difficult without a proper tool, especially if one works with complex data structures such as parse trees or sentence alignments. In this paper, we present a library that allows the user to interactively visualise and compare the output of any program that yields a well-known data format. Our goal is to create a tool granting the total control of the visualisation to the user, including extensions, but also have the common primitives and data-formats implemented for typical cases. We describe the common features of the common NLP tasks from the viewpoint of visualisation in order to specify the essential primitive functions. We enumerate many popular off-the-shelf NLP visualisation programs to compare with our implementation, which unifies all of the profitable features of the existing programs adding extendibility as a crucial feature to them.",,"What's Wrong, Python? 
-- A Visual Differ and Graph Library for NLP in Python. The correct analysis of the output of a program based on supervised learning is inevitable in order to be able to identify the errors it produced and characterise its error types. This task is fairly difficult without a proper tool, especially if one works with complex data structures such as parse trees or sentence alignments. In this paper, we present a library that allows the user to interactively visualise and compare the output of any program that yields a well-known data format. Our goal is to create a tool granting the total control of the visualisation to the user, including extensions, but also have the common primitives and data-formats implemented for typical cases. We describe the common features of the common NLP tasks from the viewpoint of visualisation in order to specify the essential primitive functions. We enumerate many popular off-the-shelf NLP visualisation programs to compare with our implementation, which unifies all of the profitable features of the existing programs adding extendibility as a crucial feature to them.",2018
brandt-skelbye-dannells-2021-ocr,https://aclanthology.org/2021.ranlp-1.23,0,,,,,,,"OCR Processing of Swedish Historical Newspapers Using Deep Hybrid CNN--LSTM Networks. Deep CNN-LSTM hybrid neural networks have proven to improve the accuracy of Optical Character Recognition (OCR) models for different languages. In this paper we examine to what extent these networks improve the OCR accuracy rates on Swedish historical newspapers. By experimenting with the open source OCR engine Calamari, we are able to show that mixed deep CNN-LSTM hybrid models outperform previous models on the task of character recognition of Swedish historical newspapers spanning 1818-1848. We achieved an average character accuracy rate (CAR) of 97.43% which is a new state-of-the-art result on 19th century Swedish newspaper text. Our data, code and models are released under CC BY licence.",{OCR} Processing of {S}wedish Historical Newspapers Using Deep Hybrid {CNN}{--}{LSTM} Networks,"Deep CNN-LSTM hybrid neural networks have proven to improve the accuracy of Optical Character Recognition (OCR) models for different languages. In this paper we examine to what extent these networks improve the OCR accuracy rates on Swedish historical newspapers. By experimenting with the open source OCR engine Calamari, we are able to show that mixed deep CNN-LSTM hybrid models outperform previous models on the task of character recognition of Swedish historical newspapers spanning 1818-1848. We achieved an average character accuracy rate (CAR) of 97.43% which is a new state-of-the-art result on 19th century Swedish newspaper text. Our data, code and models are released under CC BY licence.",OCR Processing of Swedish Historical Newspapers Using Deep Hybrid CNN--LSTM Networks,"Deep CNN-LSTM hybrid neural networks have proven to improve the accuracy of Optical Character Recognition (OCR) models for different languages. In this paper we examine to what extent these networks improve the OCR accuracy rates on Swedish historical newspapers. By experimenting with the open source OCR engine Calamari, we are able to show that mixed deep CNN-LSTM hybrid models outperform previous models on the task of character recognition of Swedish historical newspapers spanning 1818-1848. We achieved an average character accuracy rate (CAR) of 97.43% which is a new state-of-the-art result on 19th century Swedish newspaper text. Our data, code and models are released under CC BY licence.","This work has been funded by the Swedish Research Council as part of the project Evaluation and refinement of an enhanced OCR-process for mass digitisation (2019-2020; dnr IN18-0940:1). It is also supported by Språkbanken Text and Swe-Clarin, a Swedish consortium in Common Language Resources and Technology Infrastructure (CLARIN), Swedish CLARIN (dnr 821-2013-2003). The authors would like to thank the RANLP anonymous reviewers for their valuable comments.","OCR Processing of Swedish Historical Newspapers Using Deep Hybrid CNN--LSTM Networks. Deep CNN-LSTM hybrid neural networks have proven to improve the accuracy of Optical Character Recognition (OCR) models for different languages. In this paper we examine to what extent these networks improve the OCR accuracy rates on Swedish historical newspapers. By experimenting with the open source OCR engine Calamari, we are able to show that mixed deep CNN-LSTM hybrid models outperform previous models on the task of character recognition of Swedish historical newspapers spanning 1818-1848. 
We achieved an average character accuracy rate (CAR) of 97.43% which is a new state-of-the-art result on 19th century Swedish newspaper text. Our data, code and models are released under CC BY licence.",2021
amin-etal-2022-using,https://aclanthology.org/2022.ltedi-1.5,1,,,,social_equality,,,"Using BERT Embeddings to Model Word Importance in Conversational Transcripts for Deaf and Hard of Hearing Users. Deaf and hard of hearing individuals regularly rely on captioning while watching live TV. Live TV captioning is evaluated by regulatory agencies using various caption evaluation metrics. However, caption evaluation metrics are often not informed by preferences of DHH users or how meaningful the captions are. There is a need to construct caption evaluation metrics that take the relative importance of words in a transcript into account. We conducted correlation analysis between two types of word embeddings and human-annotated labeled word-importance scores in existing corpus. We found that normalized contextualized word embeddings generated using BERT correlated better with manually annotated importance scores than word2vec-based word embeddings. We make available a pairing of word embeddings and their human-annotated importance scores. We also provide proof-of-concept utility by training word importance models, achieving an F1-score of 0.57 in the 6-class word importance classification task.",Using {BERT} Embeddings to Model Word Importance in Conversational Transcripts for Deaf and Hard of Hearing Users,"Deaf and hard of hearing individuals regularly rely on captioning while watching live TV. Live TV captioning is evaluated by regulatory agencies using various caption evaluation metrics. However, caption evaluation metrics are often not informed by preferences of DHH users or how meaningful the captions are. There is a need to construct caption evaluation metrics that take the relative importance of words in a transcript into account. We conducted correlation analysis between two types of word embeddings and human-annotated labeled word-importance scores in existing corpus. We found that normalized contextualized word embeddings generated using BERT correlated better with manually annotated importance scores than word2vec-based word embeddings. We make available a pairing of word embeddings and their human-annotated importance scores. We also provide proof-of-concept utility by training word importance models, achieving an F1-score of 0.57 in the 6-class word importance classification task.",Using BERT Embeddings to Model Word Importance in Conversational Transcripts for Deaf and Hard of Hearing Users,"Deaf and hard of hearing individuals regularly rely on captioning while watching live TV. Live TV captioning is evaluated by regulatory agencies using various caption evaluation metrics. However, caption evaluation metrics are often not informed by preferences of DHH users or how meaningful the captions are. There is a need to construct caption evaluation metrics that take the relative importance of words in a transcript into account. We conducted correlation analysis between two types of word embeddings and human-annotated labeled word-importance scores in existing corpus. We found that normalized contextualized word embeddings generated using BERT correlated better with manually annotated importance scores than word2vec-based word embeddings. We make available a pairing of word embeddings and their human-annotated importance scores. We also provide proof-of-concept utility by training word importance models, achieving an F1-score of 0.57 in the 6-class word importance classification task.","This material is based on work supported by the Department of Health and Human Services under Award No. 
90DPCP0002-0100, and by the National Science Foundation under Award No. DGE-2125362. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Department of Health and Human Services or National Science Foundation.","Using BERT Embeddings to Model Word Importance in Conversational Transcripts for Deaf and Hard of Hearing Users. Deaf and hard of hearing individuals regularly rely on captioning while watching live TV. Live TV captioning is evaluated by regulatory agencies using various caption evaluation metrics. However, caption evaluation metrics are often not informed by preferences of DHH users or how meaningful the captions are. There is a need to construct caption evaluation metrics that take the relative importance of words in a transcript into account. We conducted correlation analysis between two types of word embeddings and human-annotated labeled word-importance scores in existing corpus. We found that normalized contextualized word embeddings generated using BERT correlated better with manually annotated importance scores than word2vec-based word embeddings. We make available a pairing of word embeddings and their human-annotated importance scores. We also provide proof-of-concept utility by training word importance models, achieving an F1-score of 0.57 in the 6-class word importance classification task.",2022
arora-etal-2020-supervised,https://aclanthology.org/2020.acl-main.696,0,,,,,,,"Supervised Grapheme-to-Phoneme Conversion of Orthographic Schwas in Hindi and Punjabi. Hindi grapheme-to-phoneme (G2P) conversion is mostly trivial, with one exception: whether a schwa represented in the orthography is pronounced or unpronounced (deleted). Previous work has attempted to predict schwa deletion in a rule-based fashion using prosodic or phonetic analysis. We present the first statistical schwa deletion classifier for Hindi, which relies solely on the orthography as the input and outperforms previous approaches. We trained our model on a newly-compiled pronunciation lexicon extracted from various online dictionaries. Our best Hindi model achieves state of the art performance, and also achieves good performance on a closely related language, Punjabi, without modification.",Supervised Grapheme-to-Phoneme Conversion of Orthographic Schwas in {H}indi and {P}unjabi,"Hindi grapheme-to-phoneme (G2P) conversion is mostly trivial, with one exception: whether a schwa represented in the orthography is pronounced or unpronounced (deleted). Previous work has attempted to predict schwa deletion in a rule-based fashion using prosodic or phonetic analysis. We present the first statistical schwa deletion classifier for Hindi, which relies solely on the orthography as the input and outperforms previous approaches. We trained our model on a newly-compiled pronunciation lexicon extracted from various online dictionaries. Our best Hindi model achieves state of the art performance, and also achieves good performance on a closely related language, Punjabi, without modification.",Supervised Grapheme-to-Phoneme Conversion of Orthographic Schwas in Hindi and Punjabi,"Hindi grapheme-to-phoneme (G2P) conversion is mostly trivial, with one exception: whether a schwa represented in the orthography is pronounced or unpronounced (deleted). Previous work has attempted to predict schwa deletion in a rule-based fashion using prosodic or phonetic analysis. We present the first statistical schwa deletion classifier for Hindi, which relies solely on the orthography as the input and outperforms previous approaches. We trained our model on a newly-compiled pronunciation lexicon extracted from various online dictionaries. Our best Hindi model achieves state of the art performance, and also achieves good performance on a closely related language, Punjabi, without modification.",,"Supervised Grapheme-to-Phoneme Conversion of Orthographic Schwas in Hindi and Punjabi. Hindi grapheme-to-phoneme (G2P) conversion is mostly trivial, with one exception: whether a schwa represented in the orthography is pronounced or unpronounced (deleted). Previous work has attempted to predict schwa deletion in a rule-based fashion using prosodic or phonetic analysis. We present the first statistical schwa deletion classifier for Hindi, which relies solely on the orthography as the input and outperforms previous approaches. We trained our model on a newly-compiled pronunciation lexicon extracted from various online dictionaries. Our best Hindi model achieves state of the art performance, and also achieves good performance on a closely related language, Punjabi, without modification.",2020
zhao-chen-2009-simplex,https://aclanthology.org/N09-2006,0,,,,,,,"A Simplex Armijo Downhill Algorithm for Optimizing Statistical Machine Translation Decoding Parameters. We propose a variation of simplex-downhill algorithm specifically customized for optimizing parameters in statistical machine translation (SMT) decoder for better end-user automatic evaluation metric scores for translations, such as versions of BLEU, TER and mixtures of them. Traditional simplex-downhill has the advantage of derivative-free computations of objective functions, yet still gives satisfactory searching directions in most scenarios. This is suitable for optimizing translation metrics as they are not differentiable in nature. On the other hand, Armijo algorithm usually performs line search efficiently given a searching direction. It is a deep hidden fact that an efficient line search method will change the iterations of simplex, and hence the searching trajectories. We propose to embed the Armijo inexact line search within the simplex-downhill algorithm. We show, in our experiments, the proposed algorithm improves over the widely-applied Minimum Error Rate training algorithm for optimizing machine translation parameters.",A Simplex Armijo Downhill Algorithm for Optimizing Statistical Machine Translation Decoding Parameters,"We propose a variation of simplex-downhill algorithm specifically customized for optimizing parameters in statistical machine translation (SMT) decoder for better end-user automatic evaluation metric scores for translations, such as versions of BLEU, TER and mixtures of them. Traditional simplex-downhill has the advantage of derivative-free computations of objective functions, yet still gives satisfactory searching directions in most scenarios. This is suitable for optimizing translation metrics as they are not differentiable in nature. On the other hand, Armijo algorithm usually performs line search efficiently given a searching direction. It is a deep hidden fact that an efficient line search method will change the iterations of simplex, and hence the searching trajectories. We propose to embed the Armijo inexact line search within the simplex-downhill algorithm. We show, in our experiments, the proposed algorithm improves over the widely-applied Minimum Error Rate training algorithm for optimizing machine translation parameters.",A Simplex Armijo Downhill Algorithm for Optimizing Statistical Machine Translation Decoding Parameters,"We propose a variation of simplex-downhill algorithm specifically customized for optimizing parameters in statistical machine translation (SMT) decoder for better end-user automatic evaluation metric scores for translations, such as versions of BLEU, TER and mixtures of them. Traditional simplex-downhill has the advantage of derivative-free computations of objective functions, yet still gives satisfactory searching directions in most scenarios. This is suitable for optimizing translation metrics as they are not differentiable in nature. On the other hand, Armijo algorithm usually performs line search efficiently given a searching direction. It is a deep hidden fact that an efficient line search method will change the iterations of simplex, and hence the searching trajectories. We propose to embed the Armijo inexact line search within the simplex-downhill algorithm. 
We show, in our experiments, the proposed algorithm improves over the widely-applied Minimum Error Rate training algorithm for optimizing machine translation parameters.",,"A Simplex Armijo Downhill Algorithm for Optimizing Statistical Machine Translation Decoding Parameters. We propose a variation of simplex-downhill algorithm specifically customized for optimizing parameters in statistical machine translation (SMT) decoder for better end-user automatic evaluation metric scores for translations, such as versions of BLEU, TER and mixtures of them. Traditional simplex-downhill has the advantage of derivative-free computations of objective functions, yet still gives satisfactory searching directions in most scenarios. This is suitable for optimizing translation metrics as they are not differentiable in nature. On the other hand, Armijo algorithm usually performs line search efficiently given a searching direction. It is a deep hidden fact that an efficient line search method will change the iterations of simplex, and hence the searching trajectories. We propose to embed the Armijo inexact line search within the simplex-downhill algorithm. We show, in our experiments, the proposed algorithm improves over the widely-applied Minimum Error Rate training algorithm for optimizing machine translation parameters.",2009
clarke-lapata-2006-constraint,https://aclanthology.org/P06-2019,0,,,,,,,Constraint-Based Sentence Compression: An Integer Programming Approach. The ability to compress sentences while preserving their grammaticality and most of their meaning has recently received much attention. Our work views sentence compression as an optimisation problem. We develop an integer programming formulation and infer globally optimal compressions in the face of linguistically motivated constraints. We show that such a formulation allows for relatively simple and knowledge-lean compression models that do not require parallel corpora or largescale resources. The proposed approach yields results comparable and in some cases superior to state-of-the-art.,Constraint-Based Sentence Compression: An Integer Programming Approach,The ability to compress sentences while preserving their grammaticality and most of their meaning has recently received much attention. Our work views sentence compression as an optimisation problem. We develop an integer programming formulation and infer globally optimal compressions in the face of linguistically motivated constraints. We show that such a formulation allows for relatively simple and knowledge-lean compression models that do not require parallel corpora or largescale resources. The proposed approach yields results comparable and in some cases superior to state-of-the-art.,Constraint-Based Sentence Compression: An Integer Programming Approach,The ability to compress sentences while preserving their grammaticality and most of their meaning has recently received much attention. Our work views sentence compression as an optimisation problem. We develop an integer programming formulation and infer globally optimal compressions in the face of linguistically motivated constraints. We show that such a formulation allows for relatively simple and knowledge-lean compression models that do not require parallel corpora or largescale resources. The proposed approach yields results comparable and in some cases superior to state-of-the-art.,"Thanks to Jean Carletta, Amit Dubey, Frank Keller, Steve Renals, and Sebastian Riedel for helpful comments and suggestions. Lapata acknowledges the support of EPSRC (grant GR/T04540/01).",Constraint-Based Sentence Compression: An Integer Programming Approach. The ability to compress sentences while preserving their grammaticality and most of their meaning has recently received much attention. Our work views sentence compression as an optimisation problem. We develop an integer programming formulation and infer globally optimal compressions in the face of linguistically motivated constraints. We show that such a formulation allows for relatively simple and knowledge-lean compression models that do not require parallel corpora or largescale resources. The proposed approach yields results comparable and in some cases superior to state-of-the-art.,2006
saif-etal-2014-stopwords,http://www.lrec-conf.org/proceedings/lrec2014/pdf/292_Paper.pdf,0,,,,,,,"On Stopwords, Filtering and Data Sparsity for Sentiment Analysis of Twitter. Sentiment classification over Twitter is usually affected by the noisy nature (abbreviations, irregular forms) of tweets data. A popular procedure to reduce the noise of textual data is to remove stopwords by using pre-compiled stopword lists or more sophisticated methods for dynamic stopword identification. However, the effectiveness of removing stopwords in the context of Twitter sentiment classification has been debated in the last few years. In this paper we investigate whether removing stopwords helps or hampers the effectiveness of Twitter sentiment classification methods. To this end, we apply six different stopword identification methods to Twitter data from six different datasets and observe how removing stopwords affects two well-known supervised sentiment classification methods. We assess the impact of removing stopwords by observing fluctuations on the level of data sparsity, the size of the classifier's feature space and its classification performance. Our results show that using pre-compiled lists of stopwords negatively impacts the performance of Twitter sentiment classification approaches. On the other hand, the dynamic generation of stopword lists, by removing those infrequent terms appearing only once in the corpus, appears to be the optimal method to maintaining a high classification performance while reducing the data sparsity and substantially shrinking the feature space.","On Stopwords, Filtering and Data Sparsity for Sentiment Analysis of {T}witter","Sentiment classification over Twitter is usually affected by the noisy nature (abbreviations, irregular forms) of tweets data. A popular procedure to reduce the noise of textual data is to remove stopwords by using pre-compiled stopword lists or more sophisticated methods for dynamic stopword identification. However, the effectiveness of removing stopwords in the context of Twitter sentiment classification has been debated in the last few years. In this paper we investigate whether removing stopwords helps or hampers the effectiveness of Twitter sentiment classification methods. To this end, we apply six different stopword identification methods to Twitter data from six different datasets and observe how removing stopwords affects two well-known supervised sentiment classification methods. We assess the impact of removing stopwords by observing fluctuations on the level of data sparsity, the size of the classifier's feature space and its classification performance. Our results show that using pre-compiled lists of stopwords negatively impacts the performance of Twitter sentiment classification approaches. On the other hand, the dynamic generation of stopword lists, by removing those infrequent terms appearing only once in the corpus, appears to be the optimal method to maintaining a high classification performance while reducing the data sparsity and substantially shrinking the feature space.","On Stopwords, Filtering and Data Sparsity for Sentiment Analysis of Twitter","Sentiment classification over Twitter is usually affected by the noisy nature (abbreviations, irregular forms) of tweets data. A popular procedure to reduce the noise of textual data is to remove stopwords by using pre-compiled stopword lists or more sophisticated methods for dynamic stopword identification. 
However, the effectiveness of removing stopwords in the context of Twitter sentiment classification has been debated in the last few years. In this paper we investigate whether removing stopwords helps or hampers the effectiveness of Twitter sentiment classification methods. To this end, we apply six different stopword identification methods to Twitter data from six different datasets and observe how removing stopwords affects two well-known supervised sentiment classification methods. We assess the impact of removing stopwords by observing fluctuations on the level of data sparsity, the size of the classifier's feature space and its classification performance. Our results show that using pre-compiled lists of stopwords negatively impacts the performance of Twitter sentiment classification approaches. On the other hand, the dynamic generation of stopword lists, by removing those infrequent terms appearing only once in the corpus, appears to be the optimal method to maintaining a high classification performance while reducing the data sparsity and substantially shrinking the feature space.",This work was supported by the EU-FP7 project SENSE4US (grant no. 611242).,"On Stopwords, Filtering and Data Sparsity for Sentiment Analysis of Twitter. Sentiment classification over Twitter is usually affected by the noisy nature (abbreviations, irregular forms) of tweets data. A popular procedure to reduce the noise of textual data is to remove stopwords by using pre-compiled stopword lists or more sophisticated methods for dynamic stopword identification. However, the effectiveness of removing stopwords in the context of Twitter sentiment classification has been debated in the last few years. In this paper we investigate whether removing stopwords helps or hampers the effectiveness of Twitter sentiment classification methods. To this end, we apply six different stopword identification methods to Twitter data from six different datasets and observe how removing stopwords affects two well-known supervised sentiment classification methods. We assess the impact of removing stopwords by observing fluctuations on the level of data sparsity, the size of the classifier's feature space and its classification performance. Our results show that using pre-compiled lists of stopwords negatively impacts the performance of Twitter sentiment classification approaches. On the other hand, the dynamic generation of stopword lists, by removing those infrequent terms appearing only once in the corpus, appears to be the optimal method to maintaining a high classification performance while reducing the data sparsity and substantially shrinking the feature space.",2014
sennrich-haddow-2016-linguistic,https://aclanthology.org/W16-2209,0,,,,,,,"Linguistic Input Features Improve Neural Machine Translation. Neural machine translation has recently achieved impressive results, while using little in the way of external linguistic information. In this paper we show that the strong learning capability of neural MT models does not make linguistic features redundant; they can be easily incorporated to provide further improvements in performance. We generalize the embedding layer of the encoder in the attentional encoder-decoder architecture to support the inclusion of arbitrary features, in addition to the baseline word feature. We add morphological features, part-of-speech tags, and syntactic dependency labels as input features to English↔German and English→Romanian neural machine translation systems. In experiments on WMT16 training and test sets, we find that linguistic input features improve model quality according to three metrics: perplexity, BLEU and CHRF3. An open-source implementation of our neural MT system is available, as are sample files and configurations.",Linguistic Input Features Improve Neural Machine Translation,"Neural machine translation has recently achieved impressive results, while using little in the way of external linguistic information. In this paper we show that the strong learning capability of neural MT models does not make linguistic features redundant; they can be easily incorporated to provide further improvements in performance. We generalize the embedding layer of the encoder in the attentional encoder-decoder architecture to support the inclusion of arbitrary features, in addition to the baseline word feature. We add morphological features, part-of-speech tags, and syntactic dependency labels as input features to English↔German and English→Romanian neural machine translation systems. In experiments on WMT16 training and test sets, we find that linguistic input features improve model quality according to three metrics: perplexity, BLEU and CHRF3. An open-source implementation of our neural MT system is available, as are sample files and configurations.",Linguistic Input Features Improve Neural Machine Translation,"Neural machine translation has recently achieved impressive results, while using little in the way of external linguistic information. In this paper we show that the strong learning capability of neural MT models does not make linguistic features redundant; they can be easily incorporated to provide further improvements in performance. We generalize the embedding layer of the encoder in the attentional encoder-decoder architecture to support the inclusion of arbitrary features, in addition to the baseline word feature. We add morphological features, part-of-speech tags, and syntactic dependency labels as input features to English↔German and English→Romanian neural machine translation systems. In experiments on WMT16 training and test sets, we find that linguistic input features improve model quality according to three metrics: perplexity, BLEU and CHRF3. An open-source implementation of our neural MT system is available, as are sample files and configurations.","This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreements 645452 (QT21), and 644402 (HimL).","Linguistic Input Features Improve Neural Machine Translation. 
Neural machine translation has recently achieved impressive results, while using little in the way of external linguistic information. In this paper we show that the strong learning capability of neural MT models does not make linguistic features redundant; they can be easily incorporated to provide further improvements in performance. We generalize the embedding layer of the encoder in the attentional encoder-decoder architecture to support the inclusion of arbitrary features, in addition to the baseline word feature. We add morphological features, part-of-speech tags, and syntactic dependency labels as input features to English↔German and English→Romanian neural machine translation systems. In experiments on WMT16 training and test sets, we find that linguistic input features improve model quality according to three metrics: perplexity, BLEU and CHRF3. An open-source implementation of our neural MT system is available, as are sample files and configurations.",2016
heylen-etal-2014-termwise,http://www.lrec-conf.org/proceedings/lrec2014/pdf/706_Paper.pdf,0,,,,,,,"TermWise: A CAT-tool with Context-Sensitive Terminological Support.. Increasingly, large bilingual document collections are being made available online, especially in the legal domain. This type of Big Data is a valuable resource that specialized translators exploit to search for informative examples of how domain-specific expressions should be translated. However, general purpose search engines are not optimized to retrieve previous translations that are maximally relevant to a translator. In this paper, we report on the TermWise project, a cooperation of terminologists, corpus linguists and computer scientists, that aims to leverage big online translation data for terminological support to legal translators at the Belgian Federal Ministry of Justice. The project developed dedicated knowledge extraction algorithms and a server-based tool to provide translators with the most relevant previous translations of domain-specific expressions relative to the current translation assignment. The functionality is implemented as an extra database, a Term&Phrase Memory, that is meant to be integrated with existing Computer Assisted Translation tools. In the paper, we give an overview of the system, give a demo of the user interface, we present a user-based evaluation by translators and discuss how the tool is part of the general evolution towards exploiting Big Data in translation.",{T}erm{W}ise: A {CAT}-tool with Context-Sensitive Terminological Support.,"Increasingly, large bilingual document collections are being made available online, especially in the legal domain. This type of Big Data is a valuable resource that specialized translators exploit to search for informative examples of how domain-specific expressions should be translated. However, general purpose search engines are not optimized to retrieve previous translations that are maximally relevant to a translator. In this paper, we report on the TermWise project, a cooperation of terminologists, corpus linguists and computer scientists, that aims to leverage big online translation data for terminological support to legal translators at the Belgian Federal Ministry of Justice. The project developed dedicated knowledge extraction algorithms and a server-based tool to provide translators with the most relevant previous translations of domain-specific expressions relative to the current translation assignment. The functionality is implemented as an extra database, a Term&Phrase Memory, that is meant to be integrated with existing Computer Assisted Translation tools. In the paper, we give an overview of the system, give a demo of the user interface, we present a user-based evaluation by translators and discuss how the tool is part of the general evolution towards exploiting Big Data in translation.",TermWise: A CAT-tool with Context-Sensitive Terminological Support.,"Increasingly, large bilingual document collections are being made available online, especially in the legal domain. This type of Big Data is a valuable resource that specialized translators exploit to search for informative examples of how domain-specific expressions should be translated. However, general purpose search engines are not optimized to retrieve previous translations that are maximally relevant to a translator. 
In this paper, we report on the TermWise project, a cooperation of terminologists, corpus linguists and computer scientists, that aims to leverage big online translation data for terminological support to legal translators at the Belgian Federal Ministry of Justice. The project developed dedicated knowledge extraction algorithms and a server-based tool to provide translators with the most relevant previous translations of domain-specific expressions relative to the current translation assignment. The functionality is implemented as an extra database, a Term&Phrase Memory, that is meant to be integrated with existing Computer Assisted Translation tools. In the paper, we give an overview of the system, give a demo of the user interface, we present a user-based evaluation by translators and discuss how the tool is part of the general evolution towards exploiting Big Data in translation.",,"TermWise: A CAT-tool with Context-Sensitive Terminological Support.. Increasingly, large bilingual document collections are being made available online, especially in the legal domain. This type of Big Data is a valuable resource that specialized translators exploit to search for informative examples of how domain-specific expressions should be translated. However, general purpose search engines are not optimized to retrieve previous translations that are maximally relevant to a translator. In this paper, we report on the TermWise project, a cooperation of terminologists, corpus linguists and computer scientists, that aims to leverage big online translation data for terminological support to legal translators at the Belgian Federal Ministry of Justice. The project developed dedicated knowledge extraction algorithms and a server-based tool to provide translators with the most relevant previous translations of domain-specific expressions relative to the current translation assignment. The functionality is implemented as an extra database, a Term&Phrase Memory, that is meant to be integrated with existing Computer Assisted Translation tools. In the paper, we give an overview of the system, give a demo of the user interface, we present a user-based evaluation by translators and discuss how the tool is part of the general evolution towards exploiting Big Data in translation.",2014
sabir-etal-2021-reinforcebug,https://aclanthology.org/2021.naacl-main.477,0,,,,,,,"ReinforceBug: A Framework to Generate Adversarial Textual Examples. Adversarial Examples (AEs) generated by perturbing original training examples are useful in improving the robustness of Deep Learning (DL) based models. Most prior works generate AEs that are either unconscionable due to lexical errors or semantically and functionally deviant from original examples. In this paper, we present ReinforceBug, a reinforcement learning framework, that learns a policy that is transferable on unseen datasets and generates utility-preserving and transferable (on other models) AEs. Our experiments show that ReinforceBug is on average 10% more successful as compared to the state-of-the-art attack TextFooler. Moreover, the target models have on average 73.64% confidence in wrong prediction, the generated AEs preserve the functional equivalence and semantic similarity (83.38%) to their original counterparts, and are transferable on other models with an average success rate of 46%.",ReinforceBug: A Framework to Generate Adversarial Textual Examples,"Adversarial Examples (AEs) generated by perturbing original training examples are useful in improving the robustness of Deep Learning (DL) based models. Most prior works generate AEs that are either unconscionable due to lexical errors or semantically and functionally deviant from original examples. In this paper, we present ReinforceBug, a reinforcement learning framework, that learns a policy that is transferable on unseen datasets and generates utility-preserving and transferable (on other models) AEs. Our experiments show that ReinforceBug is on average 10% more successful as compared to the state-of-the-art attack TextFooler. Moreover, the target models have on average 73.64% confidence in wrong prediction, the generated AEs preserve the functional equivalence and semantic similarity (83.38%) to their original counterparts, and are transferable on other models with an average success rate of 46%.",ReinforceBug: A Framework to Generate Adversarial Textual Examples,"Adversarial Examples (AEs) generated by perturbing original training examples are useful in improving the robustness of Deep Learning (DL) based models. Most prior works generate AEs that are either unconscionable due to lexical errors or semantically and functionally deviant from original examples. In this paper, we present ReinforceBug, a reinforcement learning framework, that learns a policy that is transferable on unseen datasets and generates utility-preserving and transferable (on other models) AEs. Our experiments show that ReinforceBug is on average 10% more successful as compared to the state-of-the-art attack TextFooler. Moreover, the target models have on average 73.64% confidence in wrong prediction, the generated AEs preserve the functional equivalence and semantic similarity (83.38%) to their original counterparts, and are transferable on other models with an average success rate of 46%.",This work was supported with super-computing resources provided by the Phoenix HPC service at the University of Adelaide.,"ReinforceBug: A Framework to Generate Adversarial Textual Examples. Adversarial Examples (AEs) generated by perturbing original training examples are useful in improving the robustness of Deep Learning (DL) based models. Most prior works generate AEs that are either unconscionable due to lexical errors or semantically and functionally deviant from original examples. 
In this paper, we present ReinforceBug, a reinforcement learning framework, that learns a policy that is transferable on unseen datasets and generates utility-preserving and transferable (on other models) AEs. Our experiments show that ReinforceBug is on average 10% more successful as compared to the state-of-the-art attack TextFooler. Moreover, the target models have on average 73.64% confidence in wrong prediction, the generated AEs preserve the functional equivalence and semantic similarity (83.38%) to their original counterparts, and are transferable on other models with an average success rate of 46%.",2021
matero-etal-2019-suicide,https://aclanthology.org/W19-3005,1,,,,health,,,"Suicide Risk Assessment with Multi-level Dual-Context Language and BERT. Mental health predictive systems typically model language as if from a single context (e.g. Twitter posts, status updates, or forum posts) and often limited to a single level of analysis (e.g. either the message-level or user-level). Here, we bring these pieces together to explore the use of open-vocabulary (BERT embeddings, topics) and theoretical features (emotional expression lexica, personality) for the task of suicide risk assessment on support forums (the CLPsych-2019 Shared Task). We used dual context based approaches (modeling content from suicide forums separate from other content), built over both traditional ML models as well as a novel dual RNN architecture with user-factor adaptation. We find that while affect from the suicide context distinguishes with no-risk from those with ""any-risk"", personality factors from the non-suicide contexts provide distinction of the levels of risk: low, medium, and high risk. Within the shared task, our dual-context approach (listed as SBU-HLAB in the official results) achieved state-of-the-art performance predicting suicide risk using a combination of suicide-context and non-suicide posts (Task B), achieving an F1 score of 0.50 over hidden test set labels.",Suicide Risk Assessment with Multi-level Dual-Context Language and {BERT},"Mental health predictive systems typically model language as if from a single context (e.g. Twitter posts, status updates, or forum posts) and often limited to a single level of analysis (e.g. either the message-level or user-level). Here, we bring these pieces together to explore the use of open-vocabulary (BERT embeddings, topics) and theoretical features (emotional expression lexica, personality) for the task of suicide risk assessment on support forums (the CLPsych-2019 Shared Task). We used dual context based approaches (modeling content from suicide forums separate from other content), built over both traditional ML models as well as a novel dual RNN architecture with user-factor adaptation. We find that while affect from the suicide context distinguishes with no-risk from those with ""any-risk"", personality factors from the non-suicide contexts provide distinction of the levels of risk: low, medium, and high risk. Within the shared task, our dual-context approach (listed as SBU-HLAB in the official results) achieved state-of-the-art performance predicting suicide risk using a combination of suicide-context and non-suicide posts (Task B), achieving an F1 score of 0.50 over hidden test set labels.",Suicide Risk Assessment with Multi-level Dual-Context Language and BERT,"Mental health predictive systems typically model language as if from a single context (e.g. Twitter posts, status updates, or forum posts) and often limited to a single level of analysis (e.g. either the message-level or user-level). Here, we bring these pieces together to explore the use of open-vocabulary (BERT embeddings, topics) and theoretical features (emotional expression lexica, personality) for the task of suicide risk assessment on support forums (the CLPsych-2019 Shared Task). We used dual context based approaches (modeling content from suicide forums separate from other content), built over both traditional ML models as well as a novel dual RNN architecture with user-factor adaptation. 
We find that while affect from the suicide context distinguishes with no-risk from those with ""any-risk"", personality factors from the non-suicide contexts provide distinction of the levels of risk: low, medium, and high risk. Within the shared task, our dual-context approach (listed as SBU-HLAB in the official results) achieved state-of-the-art performance predicting suicide risk using a combination of suicide-context and non-suicide posts (Task B), achieving an F1 score of 0.50 over hidden test set labels.",,"Suicide Risk Assessment with Multi-level Dual-Context Language and BERT. Mental health predictive systems typically model language as if from a single context (e.g. Twitter posts, status updates, or forum posts) and often limited to a single level of analysis (e.g. either the message-level or user-level). Here, we bring these pieces together to explore the use of open-vocabulary (BERT embeddings, topics) and theoretical features (emotional expression lexica, personality) for the task of suicide risk assessment on support forums (the CLPsych-2019 Shared Task). We used dual context based approaches (modeling content from suicide forums separate from other content), built over both traditional ML models as well as a novel dual RNN architecture with user-factor adaptation. We find that while affect from the suicide context distinguishes with no-risk from those with ""any-risk"", personality factors from the non-suicide contexts provide distinction of the levels of risk: low, medium, and high risk. Within the shared task, our dual-context approach (listed as SBU-HLAB in the official results) achieved state-of-the-art performance predicting suicide risk using a combination of suicide-context and non-suicide posts (Task B), achieving an F1 score of 0.50 over hidden test set labels.",2019
leonova-zuters-2021-frustration,https://aclanthology.org/2021.ranlp-1.93,0,,,,,,,"Frustration Level Annotation in Latvian Tweets with Non-Lexical Means of Expression. We present a neural-network-driven model for annotating frustration intensity in customer support tweets, based on representing tweet texts using a bag-ofwords encoding after processing with subword segmentation together with nonlexical features. The model was evaluated on tweets in English and Latvian languages, focusing on aspects beyond the pure bag-of-words representations used in previous research. The experimental results show that the model can be successfully applied for texts in a non-English language, and that adding non-lexical features to tweet representations significantly improves performance, while subword segmentation has a moderate but positive effect on model accuracy. Our code and training data are publicly available 1 .",Frustration Level Annotation in {L}atvian Tweets with Non-Lexical Means of Expression,"We present a neural-network-driven model for annotating frustration intensity in customer support tweets, based on representing tweet texts using a bag-ofwords encoding after processing with subword segmentation together with nonlexical features. The model was evaluated on tweets in English and Latvian languages, focusing on aspects beyond the pure bag-of-words representations used in previous research. The experimental results show that the model can be successfully applied for texts in a non-English language, and that adding non-lexical features to tweet representations significantly improves performance, while subword segmentation has a moderate but positive effect on model accuracy. Our code and training data are publicly available 1 .",Frustration Level Annotation in Latvian Tweets with Non-Lexical Means of Expression,"We present a neural-network-driven model for annotating frustration intensity in customer support tweets, based on representing tweet texts using a bag-ofwords encoding after processing with subword segmentation together with nonlexical features. The model was evaluated on tweets in English and Latvian languages, focusing on aspects beyond the pure bag-of-words representations used in previous research. The experimental results show that the model can be successfully applied for texts in a non-English language, and that adding non-lexical features to tweet representations significantly improves performance, while subword segmentation has a moderate but positive effect on model accuracy. Our code and training data are publicly available 1 .","The research has been supported by the European Regional Development Fund within the joint project of SIA TILDE and University of Latvia ""Multilingual Artificial Intelligence Based Human Computer Interaction"" No.1.1.1.1/18/A/148.","Frustration Level Annotation in Latvian Tweets with Non-Lexical Means of Expression. We present a neural-network-driven model for annotating frustration intensity in customer support tweets, based on representing tweet texts using a bag-ofwords encoding after processing with subword segmentation together with nonlexical features. The model was evaluated on tweets in English and Latvian languages, focusing on aspects beyond the pure bag-of-words representations used in previous research. 
The experimental results show that the model can be successfully applied to texts in a non-English language, and that adding non-lexical features to tweet representations significantly improves performance, while subword segmentation has a moderate but positive effect on model accuracy. Our code and training data are publicly available.",2021
maillette-de-buy-wenniger-simaan-2013-formal,https://aclanthology.org/W13-0807,0,,,,,,,"A Formal Characterization of Parsing Word Alignments by Synchronous Grammars with Empirical Evidence to the ITG Hypothesis.. Deciding whether a synchronous grammar formalism generates a given word alignment (the alignment coverage problem) depends on finding an adequate instance grammar and then using it to parse the word alignment. But what does it mean to parse a word alignment by a synchronous grammar? This is formally undefined until we define an unambiguous mapping between grammatical derivations and word-level alignments. This paper proposes an initial, formal characterization of alignment coverage as intersecting two partially ordered sets (graphs) of translation equivalence units, one derived by a grammar instance and another defined by the word alignment. As a first sanity check, we report extensive coverage results for ITG on automatic and manual alignments. Even for the ITG formalism, our formal characterization makes explicit many algorithmic choices often left underspecified in earlier work.
The training data used by current statistical machine translation (SMT) models consists of source and target sentence pairs aligned together at the word level (word alignments). For the hierarchical and syntactically-enriched SMT models, e.g., (Chiang, 2007; Zollmann and Venugopal, 2006), this training data is used for extracting statistically weighted Synchronous Context-Free Grammars (SCFGs). Formally speaking, a synchronous grammar defines a set of (source-target) sentence pairs derived synchronously by the grammar. Contrary to common belief, however, a synchronous grammar (see e.g., (Chiang, 2005; Satta and Peserico, 2005)) does not accept (or parse) word alignments. This is because a synchronous derivation generates a tree pair with a bijective binary relation (links) between their nonterminal nodes. For deciding whether a given word alignment is generated/accepted by a given synchronous grammar, it is necessary to interpret the synchronous derivations down to the lexical level. However, it is not yet formally defined how to unambiguously interpret the synchronous derivations of a synchronous grammar as word alignments. One major difficulty is that synchronous productions, in their most general form, may contain unaligned terminal sequences. Consider, for instance, the relatively non-complex synchronous production X → α X^(1) β X^(2) γ X^(3), X → σ X^(2) τ X^(1) µ X^(3), where superscript (i) stands for aligned instances of nonterminal X and all Greek symbols stand for arbitrary non-empty terminal sequences. Given a word-aligned sentence pair, it is necessary to bind the terminal sequences by alignments consistent with the given word alignment, and then parse the word alignment with the thus enriched grammar rules. This is not complex if we assume that each of the source terminal sequences is contiguously aligned with a contiguous target sequence, but difficult if we assume arbitrary alignments, including many-to-one and non-contiguously aligned chunks.",A Formal Characterization of Parsing Word Alignments by Synchronous Grammars with Empirical Evidence to the {ITG} Hypothesis.,"Deciding whether a synchronous grammar formalism generates a given word alignment (the alignment coverage problem) depends on finding an adequate instance grammar and then using it to parse the word alignment. But what does it mean to parse a word alignment by a synchronous grammar? This is formally undefined until we define an unambiguous mapping between grammatical derivations and word-level alignments. This paper proposes an initial, formal characterization of alignment coverage as intersecting two partially ordered sets (graphs) of translation equivalence units, one derived by a grammar instance and another defined by the word alignment. As a first sanity check, we report extensive coverage results for ITG on automatic and manual alignments. Even for the ITG formalism, our formal characterization makes explicit many algorithmic choices often left underspecified in earlier work.
The training data used by current statistical machine translation (SMT) models consists of source and target sentence pairs aligned together at the word level (word alignments). For the hierarchical and syntactically-enriched SMT models, e.g., (Chiang, 2007; Zollmann and Venugopal, 2006), this training data is used for extracting statistically weighted Synchronous Context-Free Grammars (SCFGs). Formally speaking, a synchronous grammar defines a set of (source-target) sentence pairs derived synchronously by the grammar. Contrary to common belief, however, a synchronous grammar (see e.g., (Chiang, 2005; Satta and Peserico, 2005)) does not accept (or parse) word alignments. This is because a synchronous derivation generates a tree pair with a bijective binary relation (links) between their nonterminal nodes. For deciding whether a given word alignment is generated/accepted by a given synchronous grammar, it is necessary to interpret the synchronous derivations down to the lexical level. However, it is not yet formally defined how to unambiguously interpret the synchronous derivations of a synchronous grammar as word alignments. One major difficulty is that synchronous productions, in their most general form, may contain unaligned terminal sequences. Consider, for instance, the relatively non-complex synchronous production X → α X^(1) β X^(2) γ X^(3), X → σ X^(2) τ X^(1) µ X^(3), where superscript (i) stands for aligned instances of nonterminal X and all Greek symbols stand for arbitrary non-empty terminal sequences. Given a word-aligned sentence pair, it is necessary to bind the terminal sequences by alignments consistent with the given word alignment, and then parse the word alignment with the thus enriched grammar rules. This is not complex if we assume that each of the source terminal sequences is contiguously aligned with a contiguous target sequence, but difficult if we assume arbitrary alignments, including many-to-one and non-contiguously aligned chunks.",A Formal Characterization of Parsing Word Alignments by Synchronous Grammars with Empirical Evidence to the ITG Hypothesis.,"Deciding whether a synchronous grammar formalism generates a given word alignment (the alignment coverage problem) depends on finding an adequate instance grammar and then using it to parse the word alignment. But what does it mean to parse a word alignment by a synchronous grammar? This is formally undefined until we define an unambiguous mapping between grammatical derivations and word-level alignments. This paper proposes an initial, formal characterization of alignment coverage as intersecting two partially ordered sets (graphs) of translation equivalence units, one derived by a grammar instance and another defined by the word alignment. As a first sanity check, we report extensive coverage results for ITG on automatic and manual alignments. Even for the ITG formalism, our formal characterization makes explicit many algorithmic choices often left underspecified in earlier work.
The training data used by current statistical machine translation (SMT) models consists of source and target sentence pairs aligned together at the word level (word alignments). For the hierarchical and syntactically-enriched SMT models, e.g., (Chiang, 2007; Zollmann and Venugopal, 2006), this training data is used for extracting statistically weighted Synchronous Context-Free Grammars (SCFGs). Formally speaking, a synchronous grammar defines a set of (source-target) sentence pairs derived synchronously by the grammar. Contrary to common belief, however, a synchronous grammar (see e.g., (Chiang, 2005; Satta and Peserico, 2005)) does not accept (or parse) word alignments. This is because a synchronous derivation generates a tree pair with a bijective binary relation (links) between their nonterminal nodes. For deciding whether a given word alignment is generated/accepted by a given synchronous grammar, it is necessary to interpret the synchronous derivations down to the lexical level. However, it is not yet formally defined how to unambiguously interpret the synchronous derivations of a synchronous grammar as word alignments. One major difficulty is that synchronous productions, in their most general form, may contain unaligned terminal sequences. Consider, for instance, the relatively non-complex synchronous production X → α X^(1) β X^(2) γ X^(3), X → σ X^(2) τ X^(1) µ X^(3), where superscript (i) stands for aligned instances of nonterminal X and all Greek symbols stand for arbitrary non-empty terminal sequences. Given a word-aligned sentence pair, it is necessary to bind the terminal sequences by alignments consistent with the given word alignment, and then parse the word alignment with the thus enriched grammar rules. This is not complex if we assume that each of the source terminal sequences is contiguously aligned with a contiguous target sequence, but difficult if we assume arbitrary alignments, including many-to-one and non-contiguously aligned chunks.","We thank reviewers for their helpful comments, and thank Mark-Jan Nederhof for illuminating discussions on parsing as intersection. This work is supported by The Netherlands Organization for Scientific Research (NWO) under grant nr. 612.066.929.","A Formal Characterization of Parsing Word Alignments by Synchronous Grammars with Empirical Evidence to the ITG Hypothesis.. Deciding whether a synchronous grammar formalism generates a given word alignment (the alignment coverage problem) depends on finding an adequate instance grammar and then using it to parse the word alignment. But what does it mean to parse a word alignment by a synchronous grammar? This is formally undefined until we define an unambiguous mapping between grammatical derivations and word-level alignments. This paper proposes an initial, formal characterization of alignment coverage as intersecting two partially ordered sets (graphs) of translation equivalence units, one derived by a grammar instance and another defined by the word alignment. As a first sanity check, we report extensive coverage results for ITG on automatic and manual alignments. Even for the ITG formalism, our formal characterization makes explicit many algorithmic choices often left underspecified in earlier work.
The training data used by current statistical machine translation (SMT) models consists of source and target sentence pairs aligned together at the word level (word alignments). For the hierarchical and syntactically-enriched SMT models, e.g., (Chiang, 2007; Zollmann and Venugopal, 2006), this training data is used for extracting statistically weighted Synchronous Context-Free Grammars (SCFGs). Formally speaking, a synchronous grammar defines a set of (source-target) sentence pairs derived synchronously by the grammar. Contrary to common belief, however, a synchronous grammar (see e.g., (Chiang, 2005; Satta and Peserico, 2005)) does not accept (or parse) word alignments. This is because a synchronous derivation generates a tree pair with a bijective binary relation (links) between their nonterminal nodes. For deciding whether a given word alignment is generated/accepted by a given synchronous grammar, it is necessary to interpret the synchronous derivations down to the lexical level. However, it is not yet formally defined how to unambiguously interpret the synchronous derivations of a synchronous grammar as word alignments. One major difficulty is that synchronous productions, in their most general form, may contain unaligned terminal sequences. Consider, for instance, the relatively non-complex synchronous production X → α X^(1) β X^(2) γ X^(3), X → σ X^(2) τ X^(1) µ X^(3), where superscript (i) stands for aligned instances of nonterminal X and all Greek symbols stand for arbitrary non-empty terminal sequences. Given a word-aligned sentence pair, it is necessary to bind the terminal sequences by alignments consistent with the given word alignment, and then parse the word alignment with the thus enriched grammar rules. This is not complex if we assume that each of the source terminal sequences is contiguously aligned with a contiguous target sequence, but difficult if we assume arbitrary alignments, including many-to-one and non-contiguously aligned chunks.",2013
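The synchronous production quoted in the record above is hard to read once flattened into plain text. As a purely typographic aid, one way to typeset it, assuming the usual angle-bracket notation for paired SCFG right-hand sides rather than the paper's exact macros, is:

```latex
% One way to typeset the synchronous production discussed above:
% a source and a target right-hand side with co-indexed nonterminals,
% where the Greek letters stand for arbitrary non-empty terminal sequences.
\[
X \;\rightarrow\; \left\langle\, \alpha\, X^{(1)}\, \beta\, X^{(2)}\, \gamma\, X^{(3)},\;
                   \sigma\, X^{(2)}\, \tau\, X^{(1)}\, \mu\, X^{(3)} \,\right\rangle
\]
```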
ciobotaru-dinu-2021-red,https://aclanthology.org/2021.ranlp-1.34,0,,,,,,,"RED: A Novel Dataset for Romanian Emotion Detection from Tweets. In Romanian language there are some resources for automatic text comprehension, but for Emotion Detection, not lexicon-based, there are none. To cover this gap, we extracted data from Twitter and created the first dataset containing tweets annotated with five types of emotions: joy, fear, sadness, anger and neutral, with the intent of being used for opinion mining and analysis tasks. In this article we present some features of our novel dataset, and create a benchmark to achieve the first supervised machine learning model for automatic Emotion Detection in Romanian short texts. We investigate the performance of four classical machine learning models: Multinomial Naive Bayes, Logistic Regression, Support Vector Classification and Linear Support Vector Classification. We also investigate more modern approaches like fastText, which makes use of subword information. Lastly, we finetune the Romanian BERT for text classification and our experiments show that the BERTbased model has the best performance for the task of Emotion Detection from Romanian tweets.",{RED}: A Novel Dataset for {R}omanian Emotion Detection from Tweets,"In Romanian language there are some resources for automatic text comprehension, but for Emotion Detection, not lexicon-based, there are none. To cover this gap, we extracted data from Twitter and created the first dataset containing tweets annotated with five types of emotions: joy, fear, sadness, anger and neutral, with the intent of being used for opinion mining and analysis tasks. In this article we present some features of our novel dataset, and create a benchmark to achieve the first supervised machine learning model for automatic Emotion Detection in Romanian short texts. We investigate the performance of four classical machine learning models: Multinomial Naive Bayes, Logistic Regression, Support Vector Classification and Linear Support Vector Classification. We also investigate more modern approaches like fastText, which makes use of subword information. Lastly, we finetune the Romanian BERT for text classification and our experiments show that the BERTbased model has the best performance for the task of Emotion Detection from Romanian tweets.",RED: A Novel Dataset for Romanian Emotion Detection from Tweets,"In Romanian language there are some resources for automatic text comprehension, but for Emotion Detection, not lexicon-based, there are none. To cover this gap, we extracted data from Twitter and created the first dataset containing tweets annotated with five types of emotions: joy, fear, sadness, anger and neutral, with the intent of being used for opinion mining and analysis tasks. In this article we present some features of our novel dataset, and create a benchmark to achieve the first supervised machine learning model for automatic Emotion Detection in Romanian short texts. We investigate the performance of four classical machine learning models: Multinomial Naive Bayes, Logistic Regression, Support Vector Classification and Linear Support Vector Classification. We also investigate more modern approaches like fastText, which makes use of subword information. 
Lastly, we fine-tune the Romanian BERT for text classification, and our experiments show that the BERT-based model has the best performance for the task of Emotion Detection from Romanian tweets.","We would like to thank Nicu Ciobotaru and Ioana Alexandra Rȃducanu for their help with the annotation process, Ligia Maria Bȃtrînca for proofreading and suggestions, as well as the anonymous reviewers for their time and valuable comments. We acknowledge the support of a grant of the Romanian Ministry of Education and Research, CCCDI-UEFISCDI, project number 411PED/2020, code PN-III-P2-2.1-PED-2019-2271, within PNCDI III.","RED: A Novel Dataset for Romanian Emotion Detection from Tweets. In the Romanian language there are some resources for automatic text comprehension, but for Emotion Detection there are none that are not lexicon-based. To cover this gap, we extracted data from Twitter and created the first dataset containing tweets annotated with five types of emotions: joy, fear, sadness, anger and neutral, with the intent of being used for opinion mining and analysis tasks. In this article we present some features of our novel dataset, and create a benchmark with the first supervised machine learning model for automatic Emotion Detection in Romanian short texts. We investigate the performance of four classical machine learning models: Multinomial Naive Bayes, Logistic Regression, Support Vector Classification and Linear Support Vector Classification. We also investigate more modern approaches like fastText, which makes use of subword information. Lastly, we fine-tune the Romanian BERT for text classification, and our experiments show that the BERT-based model has the best performance for the task of Emotion Detection from Romanian tweets.",2021
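The record above lists the classical baselines investigated (Multinomial Naive Bayes, Logistic Regression, and linear SVMs). A minimal scikit-learn sketch of such a benchmark might look as follows; the file name and column names are hypothetical, since the released data format is not specified here.

```python
# Hypothetical sketch of the classical baselines named in the abstract above.
# Assumes a CSV with "text" and "emotion" columns; the real RED release may
# use different file names, splits, and preprocessing.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

df = pd.read_csv("red_tweets.csv")  # hypothetical path
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["emotion"], test_size=0.2, random_state=0, stratify=df["emotion"]
)

for name, clf in [
    ("Multinomial Naive Bayes", MultinomialNB()),
    ("Logistic Regression", LogisticRegression(max_iter=1000)),
    ("Linear SVC", LinearSVC()),
]:
    model = make_pipeline(TfidfVectorizer(min_df=2), clf)
    model.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, model.predict(X_test)))
```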
hattasch-etal-2020-summarization,https://aclanthology.org/2020.lrec-1.827,0,,,,,,,"Summarization Beyond News: The Automatically Acquired Fandom Corpora. Large state-of-the-art corpora for training neural networks to create abstractive summaries are mostly limited to the news genre, as it is expensive to acquire human-written summaries for other types of text at a large scale. In this paper, we present a novel automatic corpus construction approach to tackle this issue as well as three new large open-licensed summarization corpora based on our approach that can be used for training abstractive summarization models. Our constructed corpora contain fictional narratives, descriptive texts, and summaries about movies, television, and book series from different domains. All sources use a creative commons (CC) license, hence we can provide the corpora for download. In addition, we also provide a ready-to-use framework that implements our automatic construction approach to create custom corpora with desired parameters like the length of the target summary and the number of source documents from which to create the summary. The main idea behind our automatic construction approach is to use existing large text collections (e.g., thematic wikis) and automatically classify whether the texts can be used as (query-focused) multi-document summaries and align them with potential source texts. As a final contribution, we show the usefulness of our automatic construction approach by running state-of-the-art summarizers on the corpora and through a manual evaluation with human annotators.",Summarization Beyond News: The Automatically Acquired Fandom Corpora,"Large state-of-the-art corpora for training neural networks to create abstractive summaries are mostly limited to the news genre, as it is expensive to acquire human-written summaries for other types of text at a large scale. In this paper, we present a novel automatic corpus construction approach to tackle this issue as well as three new large open-licensed summarization corpora based on our approach that can be used for training abstractive summarization models. Our constructed corpora contain fictional narratives, descriptive texts, and summaries about movies, television, and book series from different domains. All sources use a creative commons (CC) license, hence we can provide the corpora for download. In addition, we also provide a ready-to-use framework that implements our automatic construction approach to create custom corpora with desired parameters like the length of the target summary and the number of source documents from which to create the summary. The main idea behind our automatic construction approach is to use existing large text collections (e.g., thematic wikis) and automatically classify whether the texts can be used as (query-focused) multi-document summaries and align them with potential source texts. As a final contribution, we show the usefulness of our automatic construction approach by running state-of-the-art summarizers on the corpora and through a manual evaluation with human annotators.",Summarization Beyond News: The Automatically Acquired Fandom Corpora,"Large state-of-the-art corpora for training neural networks to create abstractive summaries are mostly limited to the news genre, as it is expensive to acquire human-written summaries for other types of text at a large scale. 
In this paper, we present a novel automatic corpus construction approach to tackle this issue as well as three new large open-licensed summarization corpora based on our approach that can be used for training abstractive summarization models. Our constructed corpora contain fictional narratives, descriptive texts, and summaries about movies, television, and book series from different domains. All sources use a creative commons (CC) license, hence we can provide the corpora for download. In addition, we also provide a ready-to-use framework that implements our automatic construction approach to create custom corpora with desired parameters like the length of the target summary and the number of source documents from which to create the summary. The main idea behind our automatic construction approach is to use existing large text collections (e.g., thematic wikis) and automatically classify whether the texts can be used as (query-focused) multi-document summaries and align them with potential source texts. As a final contribution, we show the usefulness of our automatic construction approach by running state-of-the-art summarizers on the corpora and through a manual evaluation with human annotators.",This work has been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) under grant No. GRK 1994/1. Thanks to Aurel Kilian and Ben Kohr who helped with the implementation of the first prototype and to all human annotators.,"Summarization Beyond News: The Automatically Acquired Fandom Corpora. Large state-of-the-art corpora for training neural networks to create abstractive summaries are mostly limited to the news genre, as it is expensive to acquire human-written summaries for other types of text at a large scale. In this paper, we present a novel automatic corpus construction approach to tackle this issue as well as three new large open-licensed summarization corpora based on our approach that can be used for training abstractive summarization models. Our constructed corpora contain fictional narratives, descriptive texts, and summaries about movies, television, and book series from different domains. All sources use a creative commons (CC) license, hence we can provide the corpora for download. In addition, we also provide a ready-to-use framework that implements our automatic construction approach to create custom corpora with desired parameters like the length of the target summary and the number of source documents from which to create the summary. The main idea behind our automatic construction approach is to use existing large text collections (e.g., thematic wikis) and automatically classify whether the texts can be used as (query-focused) multi-document summaries and align them with potential source texts. As a final contribution, we show the usefulness of our automatic construction approach by running state-of-the-art summarizers on the corpora and through a manual evaluation with human annotators.",2020
neill-2019-lda,https://aclanthology.org/W19-7505,0,,,,,,,"LDA Topic Modeling for pram\=aṇa Texts: A Case Study in Sanskrit NLP Corpus Building. Sanskrit texts in epistemology, metaphysics, and logic (i.e., pramāṇa texts) remain underrepresented in computational work. To begin to remedy this, a 3.5 million-token digital corpus has been prepared for document-and word-level analysis, and its potential demonstrated through Latent Dirichlet Allocation (LDA) topic modeling. Attention is also given to data consistency issues, with special reference to the SARIT corpus. 1 Credits This research was supported by DFG Project 279803509 ""Digitale kritische Edition des Nyāyabhāṣya"" 1 and by the Humboldt Chair of Digital Humanities at the University of Leipzig, especially Dr. Thomas Köntges. Special thanks also to conversation partner Yuki Kyogoku.",{LDA} Topic Modeling for pram{\=a}ṇa Texts: A Case Study in {S}anskrit {NLP} Corpus Building,"Sanskrit texts in epistemology, metaphysics, and logic (i.e., pramāṇa texts) remain underrepresented in computational work. To begin to remedy this, a 3.5 million-token digital corpus has been prepared for document-and word-level analysis, and its potential demonstrated through Latent Dirichlet Allocation (LDA) topic modeling. Attention is also given to data consistency issues, with special reference to the SARIT corpus. 1 Credits This research was supported by DFG Project 279803509 ""Digitale kritische Edition des Nyāyabhāṣya"" 1 and by the Humboldt Chair of Digital Humanities at the University of Leipzig, especially Dr. Thomas Köntges. Special thanks also to conversation partner Yuki Kyogoku.",LDA Topic Modeling for pram\=aṇa Texts: A Case Study in Sanskrit NLP Corpus Building,"Sanskrit texts in epistemology, metaphysics, and logic (i.e., pramāṇa texts) remain underrepresented in computational work. To begin to remedy this, a 3.5 million-token digital corpus has been prepared for document-and word-level analysis, and its potential demonstrated through Latent Dirichlet Allocation (LDA) topic modeling. Attention is also given to data consistency issues, with special reference to the SARIT corpus. 1 Credits This research was supported by DFG Project 279803509 ""Digitale kritische Edition des Nyāyabhāṣya"" 1 and by the Humboldt Chair of Digital Humanities at the University of Leipzig, especially Dr. Thomas Köntges. Special thanks also to conversation partner Yuki Kyogoku.",,"LDA Topic Modeling for pram\=aṇa Texts: A Case Study in Sanskrit NLP Corpus Building. Sanskrit texts in epistemology, metaphysics, and logic (i.e., pramāṇa texts) remain underrepresented in computational work. To begin to remedy this, a 3.5 million-token digital corpus has been prepared for document-and word-level analysis, and its potential demonstrated through Latent Dirichlet Allocation (LDA) topic modeling. Attention is also given to data consistency issues, with special reference to the SARIT corpus. 1 Credits This research was supported by DFG Project 279803509 ""Digitale kritische Edition des Nyāyabhāṣya"" 1 and by the Humboldt Chair of Digital Humanities at the University of Leipzig, especially Dr. Thomas Köntges. Special thanks also to conversation partner Yuki Kyogoku.",2019
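The record above describes LDA topic modeling over a prepared corpus. As an illustrative sketch only (gensim is one common choice; the paper's actual tooling, tokenization, and sandhi handling are not specified here), such a model can be fit roughly like this:

```python
# Minimal LDA sketch with gensim, assuming documents are already segmented
# into token lists (Sanskrit tokenization/sandhi splitting is a separate
# problem not handled here). The toy token lists are purely illustrative.
from gensim import corpora
from gensim.models import LdaModel

docs = [
    ["pramana", "jnana", "artha"],
    ["anumana", "hetu", "vyapti"],
    ["sabda", "artha", "vakya"],
]

dictionary = corpora.Dictionary(docs)
bow_corpus = [dictionary.doc2bow(doc) for doc in docs]

lda = LdaModel(bow_corpus, id2word=dictionary, num_topics=2, passes=10, random_state=0)
for topic_id, words in lda.print_topics():
    print(topic_id, words)
```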
zhao-etal-2020-spanmlt,https://aclanthology.org/2020.acl-main.296,0,,,,,,,"SpanMlt: A Span-based Multi-Task Learning Framework for Pair-wise Aspect and Opinion Terms Extraction. Aspect terms extraction and opinion terms extraction are two key problems of fine-grained Aspect Based Sentiment Analysis (ABSA). The aspect-opinion pairs can provide a global profile about a product or service for consumers and opinion mining systems. However, traditional methods can not directly output aspect-opinion pairs without given aspect terms or opinion terms. Although some recent co-extraction methods have been proposed to extract both terms jointly, they fail to extract them as pairs. To this end, this paper proposes an end-to-end method to solve the task of Pair-wise Aspect and Opinion Terms Extraction (PAOTE). Furthermore, this paper treats the problem from a perspective of joint term and relation extraction rather than under the sequence tagging formulation performed in most prior works. We propose a multi-task learning framework based on shared spans, where the terms are extracted under the supervision of span boundaries. Meanwhile, the pair-wise relations are jointly identified using the span representations. Extensive experiments show that our model consistently outperforms stateof-the-art methods.",{S}pan{M}lt: A Span-based Multi-Task Learning Framework for Pair-wise Aspect and Opinion Terms Extraction,"Aspect terms extraction and opinion terms extraction are two key problems of fine-grained Aspect Based Sentiment Analysis (ABSA). The aspect-opinion pairs can provide a global profile about a product or service for consumers and opinion mining systems. However, traditional methods can not directly output aspect-opinion pairs without given aspect terms or opinion terms. Although some recent co-extraction methods have been proposed to extract both terms jointly, they fail to extract them as pairs. To this end, this paper proposes an end-to-end method to solve the task of Pair-wise Aspect and Opinion Terms Extraction (PAOTE). Furthermore, this paper treats the problem from a perspective of joint term and relation extraction rather than under the sequence tagging formulation performed in most prior works. We propose a multi-task learning framework based on shared spans, where the terms are extracted under the supervision of span boundaries. Meanwhile, the pair-wise relations are jointly identified using the span representations. Extensive experiments show that our model consistently outperforms stateof-the-art methods.",SpanMlt: A Span-based Multi-Task Learning Framework for Pair-wise Aspect and Opinion Terms Extraction,"Aspect terms extraction and opinion terms extraction are two key problems of fine-grained Aspect Based Sentiment Analysis (ABSA). The aspect-opinion pairs can provide a global profile about a product or service for consumers and opinion mining systems. However, traditional methods can not directly output aspect-opinion pairs without given aspect terms or opinion terms. Although some recent co-extraction methods have been proposed to extract both terms jointly, they fail to extract them as pairs. To this end, this paper proposes an end-to-end method to solve the task of Pair-wise Aspect and Opinion Terms Extraction (PAOTE). Furthermore, this paper treats the problem from a perspective of joint term and relation extraction rather than under the sequence tagging formulation performed in most prior works. 
We propose a multi-task learning framework based on shared spans, where the terms are extracted under the supervision of span boundaries. Meanwhile, the pair-wise relations are jointly identified using the span representations. Extensive experiments show that our model consistently outperforms stateof-the-art methods.",This research is supported in part by the National Natural Science Foundation of China under Grant 61702500.,"SpanMlt: A Span-based Multi-Task Learning Framework for Pair-wise Aspect and Opinion Terms Extraction. Aspect terms extraction and opinion terms extraction are two key problems of fine-grained Aspect Based Sentiment Analysis (ABSA). The aspect-opinion pairs can provide a global profile about a product or service for consumers and opinion mining systems. However, traditional methods can not directly output aspect-opinion pairs without given aspect terms or opinion terms. Although some recent co-extraction methods have been proposed to extract both terms jointly, they fail to extract them as pairs. To this end, this paper proposes an end-to-end method to solve the task of Pair-wise Aspect and Opinion Terms Extraction (PAOTE). Furthermore, this paper treats the problem from a perspective of joint term and relation extraction rather than under the sequence tagging formulation performed in most prior works. We propose a multi-task learning framework based on shared spans, where the terms are extracted under the supervision of span boundaries. Meanwhile, the pair-wise relations are jointly identified using the span representations. Extensive experiments show that our model consistently outperforms stateof-the-art methods.",2020
ye-etal-2020-safer,https://aclanthology.org/2020.acl-main.317,0,,,,,,,"SAFER: A Structure-free Approach for Certified Robustness to Adversarial Word Substitutions. State-of-the-art NLP models can often be fooled by human-unaware transformations such as synonymous word substitution. For security reasons, it is of critical importance to develop models with certified robustness that can provably guarantee that the prediction cannot be altered by any possible synonymous word substitution. In this work, we propose a certified robust method based on a new randomized smoothing technique, which constructs a stochastic ensemble by applying random word substitutions on the input sentences, and leverages the statistical properties of the ensemble to provably certify the robustness. Our method is simple and structure-free in that it only requires black-box queries of the model outputs, and hence can be applied to any pre-trained model (such as BERT) and any type of model (word-level or subword-level). Our method significantly outperforms recent state-of-the-art methods for certified robustness on both IMDB and Amazon text classification tasks. To the best of our knowledge, we are the first work to achieve certified robustness on large systems such as BERT with practically meaningful certified accuracy.",{SAFER}: A Structure-free Approach for Certified Robustness to Adversarial Word Substitutions,"State-of-the-art NLP models can often be fooled by human-unaware transformations such as synonymous word substitution. For security reasons, it is of critical importance to develop models with certified robustness that can provably guarantee that the prediction cannot be altered by any possible synonymous word substitution. In this work, we propose a certified robust method based on a new randomized smoothing technique, which constructs a stochastic ensemble by applying random word substitutions on the input sentences, and leverages the statistical properties of the ensemble to provably certify the robustness. Our method is simple and structure-free in that it only requires black-box queries of the model outputs, and hence can be applied to any pre-trained model (such as BERT) and any type of model (word-level or subword-level). Our method significantly outperforms recent state-of-the-art methods for certified robustness on both IMDB and Amazon text classification tasks. To the best of our knowledge, we are the first work to achieve certified robustness on large systems such as BERT with practically meaningful certified accuracy.",SAFER: A Structure-free Approach for Certified Robustness to Adversarial Word Substitutions,"State-of-the-art NLP models can often be fooled by human-unaware transformations such as synonymous word substitution. For security reasons, it is of critical importance to develop models with certified robustness that can provably guarantee that the prediction cannot be altered by any possible synonymous word substitution. In this work, we propose a certified robust method based on a new randomized smoothing technique, which constructs a stochastic ensemble by applying random word substitutions on the input sentences, and leverages the statistical properties of the ensemble to provably certify the robustness. Our method is simple and structure-free in that it only requires black-box queries of the model outputs, and hence can be applied to any pre-trained model (such as BERT) and any type of model (word-level or subword-level). 
Our method significantly outperforms recent state-of-the-art methods for certified robustness on both IMDB and Amazon text classification tasks. To the best of our knowledge, we are the first work to achieve certified robustness on large systems such as BERT with practically meaningful certified accuracy.",This work is supported in part by NSF CRII 1830161 and NSF CAREER 1846421.,"SAFER: A Structure-free Approach for Certified Robustness to Adversarial Word Substitutions. State-of-the-art NLP models can often be fooled by human-unaware transformations such as synonymous word substitution. For security reasons, it is of critical importance to develop models with certified robustness that can provably guarantee that the prediction cannot be altered by any possible synonymous word substitution. In this work, we propose a certified robust method based on a new randomized smoothing technique, which constructs a stochastic ensemble by applying random word substitutions on the input sentences, and leverages the statistical properties of the ensemble to provably certify the robustness. Our method is simple and structure-free in that it only requires black-box queries of the model outputs, and hence can be applied to any pre-trained model (such as BERT) and any type of model (word-level or subword-level). Our method significantly outperforms recent state-of-the-art methods for certified robustness on both IMDB and Amazon text classification tasks. To the best of our knowledge, we are the first work to achieve certified robustness on large systems such as BERT with practically meaningful certified accuracy.",2020
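The record above describes certification via randomized smoothing: a stochastic ensemble built by random synonym substitution and queried in a black-box fashion. The sketch below illustrates only the ensemble-and-vote idea with placeholder components; it does not reproduce the paper's certification bound, and the synonym table and classifier are toy stand-ins.

```python
# Generic sketch of the randomized-smoothing idea described above: build a
# stochastic ensemble by random synonym substitution and take a majority
# vote over black-box predictions. SYNONYMS and classify() are placeholders.
import random
from collections import Counter

SYNONYMS = {  # toy synonym table, purely illustrative
    "good": ["fine", "great", "nice"],
    "movie": ["film", "picture"],
}

def perturb(tokens, rate=0.5, rng=random):
    out = []
    for tok in tokens:
        if tok in SYNONYMS and rng.random() < rate:
            out.append(rng.choice(SYNONYMS[tok]))
        else:
            out.append(tok)
    return out

def smoothed_predict(tokens, classify, num_samples=100):
    votes = Counter(classify(perturb(tokens)) for _ in range(num_samples))
    label, count = votes.most_common(1)[0]
    return label, count / num_samples  # label and its empirical vote share

# Example with a trivial stand-in classifier:
demo_classify = lambda toks: "pos" if "good" in toks or "great" in toks else "neg"
print(smoothed_predict("a good movie".split(), demo_classify))
```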
siegel-1997-learning,https://aclanthology.org/W97-0318,0,,,,,,,"Learning Methods for Combining Linguistic Indicators to Classify Verbs. Fourteen linguistically-motivated numerical indicators are evaluated for their ability to categorize verbs as either states or events. The values for each indicator are computed automatically across a corpus of text. To improve classification performance, machine learning techniques are employed to combine multiple indicators. Three machine learning methods are compared for this task: decision tree induction, a genetic algorithm, and log-linear regression.",Learning Methods for Combining Linguistic Indicators to Classify Verbs,"Fourteen linguistically-motivated numerical indicators are evaluated for their ability to categorize verbs as either states or events. The values for each indicator are computed automatically across a corpus of text. To improve classification performance, machine learning techniques are employed to combine multiple indicators. Three machine learning methods are compared for this task: decision tree induction, a genetic algorithm, and log-linear regression.",Learning Methods for Combining Linguistic Indicators to Classify Verbs,"Fourteen linguistically-motivated numerical indicators are evaluated for their ability to categorize verbs as either states or events. The values for each indicator are computed automatically across a corpus of text. To improve classification performance, machine learning techniques are employed to combine multiple indicators. Three machine learning methods are compared for this task: decision tree induction, a genetic algorithm, and log-linear regression.","Kathleen R. McKeown was extremely helpful regarding the formulation of our work and Judith Klavans regarding linguistic techniques. Alexander D. Charfee, Vasileios Hatzivassiloglou, Dragomir Radev and Dekai Wu provided many helpful insights regarding the evaluation and presentation of our results.This research is supported in part by the Columbia University Center for Advanced Technology in High Performance Computing and Communications in Healthcare (funded by the New York State Science and Technology Foundation), the Office of Naval Research under contract N00014-95-1-0745 and by the National Science Foundation under contract GER-90-24069.Finally, we would like to thank Andy Singleton for the use of his GPQuick software.","Learning Methods for Combining Linguistic Indicators to Classify Verbs. Fourteen linguistically-motivated numerical indicators are evaluated for their ability to categorize verbs as either states or events. The values for each indicator are computed automatically across a corpus of text. To improve classification performance, machine learning techniques are employed to combine multiple indicators. Three machine learning methods are compared for this task: decision tree induction, a genetic algorithm, and log-linear regression.",1997
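The record above combines fourteen per-verb numerical indicators with three learners. A minimal sketch of one of them, decision tree induction over an indicator matrix, might look like this; the feature values and labels below are synthetic placeholders, not the paper's data.

```python
# Sketch of combining per-verb numerical indicators with decision tree
# induction, one of the three learners named in the abstract above.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 14))          # 14 indicator values per verb (synthetic)
y = rng.integers(0, 2, size=200)   # 0 = state, 1 = event (synthetic labels)

clf = DecisionTreeClassifier(max_depth=4, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```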
tiedemann-2013-experiences,https://aclanthology.org/W13-5606,0,,,,,,,"Experiences in Building the Let's MT! Portal on Amazon EC2. In this presentation I will discuss the design and implementation of Let's MT!, a collaborative platform for building statistical machine translation systems. The goal of this platform is to make MT technology, that has been developed in academia, accessible for professional translators, freelancers and everyday users without requiring technical skills and deep background knowledge of the approaches used in the backend of the translation engine. The main challenge in this project was the development of a robust environment that can serve a growing community and large numbers of user requests. The key for success is a distributed environment that allows a maximum of scalability and robustness. With this in mind, we developed a modular platform that can be scaled by adding new nodes to the different components of the system. We opted for a cloud-based solution based on Amazon EC2 to create a cost-efficient environment that can dynamically be adjusted to user needs and system load. In the presentation I will explain our design of the distributed resource repository, the SMT training facilities and the actual translation service. I will mention issues of data security and optimization of the training procedures in order to fit our setup and the expected usage of the system.",Experiences in Building the Let{'}s {MT}! Portal on {A}mazon {EC}2,"In this presentation I will discuss the design and implementation of Let's MT!, a collaborative platform for building statistical machine translation systems. The goal of this platform is to make MT technology, that has been developed in academia, accessible for professional translators, freelancers and everyday users without requiring technical skills and deep background knowledge of the approaches used in the backend of the translation engine. The main challenge in this project was the development of a robust environment that can serve a growing community and large numbers of user requests. The key for success is a distributed environment that allows a maximum of scalability and robustness. With this in mind, we developed a modular platform that can be scaled by adding new nodes to the different components of the system. We opted for a cloud-based solution based on Amazon EC2 to create a cost-efficient environment that can dynamically be adjusted to user needs and system load. In the presentation I will explain our design of the distributed resource repository, the SMT training facilities and the actual translation service. I will mention issues of data security and optimization of the training procedures in order to fit our setup and the expected usage of the system.",Experiences in Building the Let's MT! Portal on Amazon EC2,"In this presentation I will discuss the design and implementation of Let's MT!, a collaborative platform for building statistical machine translation systems. The goal of this platform is to make MT technology, that has been developed in academia, accessible for professional translators, freelancers and everyday users without requiring technical skills and deep background knowledge of the approaches used in the backend of the translation engine. The main challenge in this project was the development of a robust environment that can serve a growing community and large numbers of user requests. The key for success is a distributed environment that allows a maximum of scalability and robustness. 
With this in mind, we developed a modular platform that can be scaled by adding new nodes to the different components of the system. We opted for a cloud-based solution based on Amazon EC2 to create a cost-efficient environment that can dynamically be adjusted to user needs and system load. In the presentation I will explain our design of the distributed resource repository, the SMT training facilities and the actual translation service. I will mention issues of data security and optimization of the training procedures in order to fit our setup and the expected usage of the system.",,"Experiences in Building the Let's MT! Portal on Amazon EC2. In this presentation I will discuss the design and implementation of Let's MT!, a collaborative platform for building statistical machine translation systems. The goal of this platform is to make MT technology, that has been developed in academia, accessible for professional translators, freelancers and everyday users without requiring technical skills and deep background knowledge of the approaches used in the backend of the translation engine. The main challenge in this project was the development of a robust environment that can serve a growing community and large numbers of user requests. The key for success is a distributed environment that allows a maximum of scalability and robustness. With this in mind, we developed a modular platform that can be scaled by adding new nodes to the different components of the system. We opted for a cloud-based solution based on Amazon EC2 to create a cost-efficient environment that can dynamically be adjusted to user needs and system load. In the presentation I will explain our design of the distributed resource repository, the SMT training facilities and the actual translation service. I will mention issues of data security and optimization of the training procedures in order to fit our setup and the expected usage of the system.",2013
liu-etal-2021-dexperts,https://aclanthology.org/2021.acl-long.522,0,,,,,,,"DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts. Despite recent advances in natural language generation, it remains challenging to control attributes of generated text. We propose DEX-PERTS: Decoding-time Experts, a decodingtime method for controlled text generation that combines a pretrained language model with ""expert"" LMs and/or ""anti-expert"" LMs in a product of experts. Intuitively, under the ensemble, tokens only get high probability if they are considered likely by the experts and unlikely by the anti-experts. We apply DEXPERTS to language detoxification and sentiment-controlled generation, where we outperform existing controllable generation methods on both automatic and human evaluations. Moreover, because DEXPERTS operates only on the output of the pretrained LM, it is effective with (anti-)experts of smaller size, including when operating on GPT-3. Our work highlights the promise of tuning small LMs on text with (un)desirable attributes for efficient decoding-time steering.",{DE}xperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts,"Despite recent advances in natural language generation, it remains challenging to control attributes of generated text. We propose DEX-PERTS: Decoding-time Experts, a decodingtime method for controlled text generation that combines a pretrained language model with ""expert"" LMs and/or ""anti-expert"" LMs in a product of experts. Intuitively, under the ensemble, tokens only get high probability if they are considered likely by the experts and unlikely by the anti-experts. We apply DEXPERTS to language detoxification and sentiment-controlled generation, where we outperform existing controllable generation methods on both automatic and human evaluations. Moreover, because DEXPERTS operates only on the output of the pretrained LM, it is effective with (anti-)experts of smaller size, including when operating on GPT-3. Our work highlights the promise of tuning small LMs on text with (un)desirable attributes for efficient decoding-time steering.",DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts,"Despite recent advances in natural language generation, it remains challenging to control attributes of generated text. We propose DEX-PERTS: Decoding-time Experts, a decodingtime method for controlled text generation that combines a pretrained language model with ""expert"" LMs and/or ""anti-expert"" LMs in a product of experts. Intuitively, under the ensemble, tokens only get high probability if they are considered likely by the experts and unlikely by the anti-experts. We apply DEXPERTS to language detoxification and sentiment-controlled generation, where we outperform existing controllable generation methods on both automatic and human evaluations. Moreover, because DEXPERTS operates only on the output of the pretrained LM, it is effective with (anti-)experts of smaller size, including when operating on GPT-3. Our work highlights the promise of tuning small LMs on text with (un)desirable attributes for efficient decoding-time steering.","This research is supported in part by NSF (IIS-1714566), DARPA MCS program through NIWC Pacific (N66001-19-2-4031), and Allen Institute for AI. We thank OpenAI, specifically Bianca Martin and Miles Brundage, for providing access to GPT-3 through the OpenAI API Academic Access Program. 
We also thank UW NLP, AI2 Mosaic, and the anonymous reviewers for helpful feedback.","DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts. Despite recent advances in natural language generation, it remains challenging to control attributes of generated text. We propose DEXPERTS: Decoding-time Experts, a decoding-time method for controlled text generation that combines a pretrained language model with ""expert"" LMs and/or ""anti-expert"" LMs in a product of experts. Intuitively, under the ensemble, tokens only get high probability if they are considered likely by the experts and unlikely by the anti-experts. We apply DEXPERTS to language detoxification and sentiment-controlled generation, where we outperform existing controllable generation methods on both automatic and human evaluations. Moreover, because DEXPERTS operates only on the output of the pretrained LM, it is effective with (anti-)experts of smaller size, including when operating on GPT-3. Our work highlights the promise of tuning small LMs on text with (un)desirable attributes for efficient decoding-time steering.",2021
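The record above describes a product-of-experts ensemble applied to next-token distributions at decoding time. The toy sketch below shows one way such a combination can be computed over logits; the weight, vocabulary, and logit values are made up, and this is not the authors' released code.

```python
# Minimal sketch of a decoding-time expert/anti-expert ensemble over
# next-token logits, in the spirit of the product of experts described
# above: shift the base logits by alpha times the difference between the
# expert and anti-expert logits, then renormalize. Toy numbers only.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def combine_logits(z_base, z_expert, z_anti, alpha=2.0):
    return softmax(z_base + alpha * (z_expert - z_anti))

vocab = ["great", "terrible", "okay"]
z_base   = np.array([1.0, 1.0, 0.5])   # pretrained LM next-token logits (toy)
z_expert = np.array([2.0, -1.0, 0.0])  # "expert" tuned on desirable text (toy)
z_anti   = np.array([-1.0, 2.0, 0.0])  # "anti-expert" tuned on undesirable text (toy)

probs = combine_logits(z_base, z_expert, z_anti)
print(dict(zip(vocab, probs.round(3))))  # probability mass shifts toward "great"
```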
liu-seneff-2009-review,https://aclanthology.org/D09-1017,0,,,,,,,"Review Sentiment Scoring via a Parse-and-Paraphrase Paradigm. This paper presents a parse-and-paraphrase paradigm to assess the degrees of sentiment for product reviews. Sentiment identification has been well studied; however, most previous work provides binary polarities only (positive and negative), and the polarity of sentiment is simply reversed when a negation is detected. The extraction of lexical features such as unigram/bigram also complicates the sentiment classification task, as linguistic structure such as implicit long-distance dependency is often disregarded. In this paper, we propose an approach to extracting adverb-adjective-noun phrases based on clause structure obtained by parsing sentences into a hierarchical representation. We also propose a robust general solution for modeling the contribution of adverbials and negation to the score for degree of sentiment. In an application involving extracting aspect-based pros and cons from restaurant reviews, we obtained a 45% relative improvement in recall through the use of parsing methods, while also improving precision.",Review Sentiment Scoring via a Parse-and-Paraphrase Paradigm,"This paper presents a parse-and-paraphrase paradigm to assess the degrees of sentiment for product reviews. Sentiment identification has been well studied; however, most previous work provides binary polarities only (positive and negative), and the polarity of sentiment is simply reversed when a negation is detected. The extraction of lexical features such as unigram/bigram also complicates the sentiment classification task, as linguistic structure such as implicit long-distance dependency is often disregarded. In this paper, we propose an approach to extracting adverb-adjective-noun phrases based on clause structure obtained by parsing sentences into a hierarchical representation. We also propose a robust general solution for modeling the contribution of adverbials and negation to the score for degree of sentiment. In an application involving extracting aspect-based pros and cons from restaurant reviews, we obtained a 45% relative improvement in recall through the use of parsing methods, while also improving precision.",Review Sentiment Scoring via a Parse-and-Paraphrase Paradigm,"This paper presents a parse-and-paraphrase paradigm to assess the degrees of sentiment for product reviews. Sentiment identification has been well studied; however, most previous work provides binary polarities only (positive and negative), and the polarity of sentiment is simply reversed when a negation is detected. The extraction of lexical features such as unigram/bigram also complicates the sentiment classification task, as linguistic structure such as implicit long-distance dependency is often disregarded. In this paper, we propose an approach to extracting adverb-adjective-noun phrases based on clause structure obtained by parsing sentences into a hierarchical representation. We also propose a robust general solution for modeling the contribution of adverbials and negation to the score for degree of sentiment. In an application involving extracting aspect-based pros and cons from restaurant reviews, we obtained a 45% relative improvement in recall through the use of parsing methods, while also improving precision.",,"Review Sentiment Scoring via a Parse-and-Paraphrase Paradigm. This paper presents a parse-and-paraphrase paradigm to assess the degrees of sentiment for product reviews. 
Sentiment identification has been well studied; however, most previous work provides binary polarities only (positive and negative), and the polarity of sentiment is simply reversed when a negation is detected. The extraction of lexical features such as unigram/bigram also complicates the sentiment classification task, as linguistic structure such as implicit long-distance dependency is often disregarded. In this paper, we propose an approach to extracting adverb-adjective-noun phrases based on clause structure obtained by parsing sentences into a hierarchical representation. We also propose a robust general solution for modeling the contribution of adverbials and negation to the score for degree of sentiment. In an application involving extracting aspect-based pros and cons from restaurant reviews, we obtained a 45% relative improvement in recall through the use of parsing methods, while also improving precision.",2009
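The entry above describes scoring adverb-adjective-noun phrases with explicit contributions from adverbials and negation. As a toy illustration of that idea (not the paper's learned parameters), the sketch below scales an adjective's prior score by an adverb weight and treats negation as damping plus flipping; the lexicon values are invented.

```python
# Toy degree-of-sentiment scorer: adverbs scale adjective strength,
# negation flips and dampens it. Values are illustrative placeholders.
ADJ_SCORE = {"good": 3.0, "bad": -3.0, "tasty": 3.5, "slow": -2.0}
ADV_WEIGHT = {"very": 1.5, "extremely": 1.8, "slightly": 0.6}

def phrase_sentiment(adverb, adjective, negated=False):
    score = ADV_WEIGHT.get(adverb, 1.0) * ADJ_SCORE.get(adjective, 0.0)
    if negated:
        # "not very tasty" is weaker than "very untasty": damp and flip.
        score = -0.5 * score
    return score

print(phrase_sentiment("very", "tasty"))           # 5.25
print(phrase_sentiment("slightly", "slow", True))  # 0.6
```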
pratt-pacak-1969-automated,https://aclanthology.org/C69-1101,1,,,,health,,,"Automated Processing of Medical English. Introduction The present interest of the scientific community in automated language processing has been awakened by the enormous capabilities of the high speed digital computer. It was recognized that the computer which has the capacity to handle symbols effectively can also treat words as symbols and language as a string of symbols. Automated language processing as exemplified by current research, had its origin in machine translation. The first attempt to use the computer for automatic language processing took place in 1954. It is known as the ""IBM-Georgetown Experiment"" in machine translation from Russian into English. (I ,2) The experiment revealed the following facts: a. the digital computer can be used for automated language processing 2 but b. much deeper knowledge about the structure and semantics of language will be required for the determination and semantic interpretation of sentence structure. The field of automated language processing is quite broad; it includes machine translation, automatic information retrieval (if based on language data), production of computer generated abstracts, indexes and catalogs, development of artificial languages, question answering systems, automatic speech analysis and synthesis, and others.",Automated Processing of Medical {E}nglish,"Introduction The present interest of the scientific community in automated language processing has been awakened by the enormous capabilities of the high speed digital computer. It was recognized that the computer which has the capacity to handle symbols effectively can also treat words as symbols and language as a string of symbols. Automated language processing as exemplified by current research, had its origin in machine translation. The first attempt to use the computer for automatic language processing took place in 1954. It is known as the ""IBM-Georgetown Experiment"" in machine translation from Russian into English. (I ,2) The experiment revealed the following facts: a. the digital computer can be used for automated language processing 2 but b. much deeper knowledge about the structure and semantics of language will be required for the determination and semantic interpretation of sentence structure. The field of automated language processing is quite broad; it includes machine translation, automatic information retrieval (if based on language data), production of computer generated abstracts, indexes and catalogs, development of artificial languages, question answering systems, automatic speech analysis and synthesis, and others.",Automated Processing of Medical English,"Introduction The present interest of the scientific community in automated language processing has been awakened by the enormous capabilities of the high speed digital computer. It was recognized that the computer which has the capacity to handle symbols effectively can also treat words as symbols and language as a string of symbols. Automated language processing as exemplified by current research, had its origin in machine translation. The first attempt to use the computer for automatic language processing took place in 1954. It is known as the ""IBM-Georgetown Experiment"" in machine translation from Russian into English. (I ,2) The experiment revealed the following facts: a. the digital computer can be used for automated language processing 2 but b. 
much deeper knowledge about the structure and semantics of language will be required for the determination and semantic interpretation of sentence structure. The field of automated language processing is quite broad; it includes machine translation, automatic information retrieval (if based on language data), production of computer generated abstracts, indexes and catalogs, development of artificial languages, question answering systems, automatic speech analysis and synthesis, and others.",,"Automated Processing of Medical English. Introduction The present interest of the scientific community in automated language processing has been awakened by the enormous capabilities of the high speed digital computer. It was recognized that the computer which has the capacity to handle symbols effectively can also treat words as symbols and language as a string of symbols. Automated language processing as exemplified by current research, had its origin in machine translation. The first attempt to use the computer for automatic language processing took place in 1954. It is known as the ""IBM-Georgetown Experiment"" in machine translation from Russian into English. (I ,2) The experiment revealed the following facts: a. the digital computer can be used for automated language processing 2 but b. much deeper knowledge about the structure and semantics of language will be required for the determination and semantic interpretation of sentence structure. The field of automated language processing is quite broad; it includes machine translation, automatic information retrieval (if based on language data), production of computer generated abstracts, indexes and catalogs, development of artificial languages, question answering systems, automatic speech analysis and synthesis, and others.",1969
christensen-etal-2014-hierarchical,https://aclanthology.org/P14-1085,0,,,,,,,"Hierarchical Summarization: Scaling Up Multi-Document Summarization. Multi-document summarization (MDS) systems have been designed for short, unstructured summaries of 10-15 documents, and are inadequate for larger document collections. We propose a new approach to scaling up summarization called hierarchical summarization, and present the first implemented system, SUMMA. SUMMA produces a hierarchy of relatively short summaries, in which the top level provides a general overview and users can navigate the hierarchy to drill down for more details on topics of interest. SUMMA optimizes for coherence as well as coverage of salient information. In an Amazon Mechanical Turk evaluation, users prefered SUMMA ten times as often as flat MDS and three times as often as timelines.",Hierarchical Summarization: Scaling Up Multi-Document Summarization,"Multi-document summarization (MDS) systems have been designed for short, unstructured summaries of 10-15 documents, and are inadequate for larger document collections. We propose a new approach to scaling up summarization called hierarchical summarization, and present the first implemented system, SUMMA. SUMMA produces a hierarchy of relatively short summaries, in which the top level provides a general overview and users can navigate the hierarchy to drill down for more details on topics of interest. SUMMA optimizes for coherence as well as coverage of salient information. In an Amazon Mechanical Turk evaluation, users prefered SUMMA ten times as often as flat MDS and three times as often as timelines.",Hierarchical Summarization: Scaling Up Multi-Document Summarization,"Multi-document summarization (MDS) systems have been designed for short, unstructured summaries of 10-15 documents, and are inadequate for larger document collections. We propose a new approach to scaling up summarization called hierarchical summarization, and present the first implemented system, SUMMA. SUMMA produces a hierarchy of relatively short summaries, in which the top level provides a general overview and users can navigate the hierarchy to drill down for more details on topics of interest. SUMMA optimizes for coherence as well as coverage of salient information. In an Amazon Mechanical Turk evaluation, users prefered SUMMA ten times as often as flat MDS and three times as often as timelines.","We thank Amitabha Bagchi, Niranjan Balasubramanian, Danish Contractor, Oren Etzioni, Tony Fader, Carlos Guestrin, Prachi Jain, Lucy Vanderwende, Luke Zettlemoyer, and the anonymous reviewers for their helpful suggestions and feedback. We thank Hui Lin and Jeff Bilmes for providing us with their code. This research was supported in part by ARO contract W911NF-13-1-0246, DARPA Air Force Research Laboratory (AFRL) contract FA8750-13-2-0019, UW-IITD subcontract RP02815, and the Yahoo! Faculty Research and Engagement Award. This paper is also supported in part by the Intelligence Advanced Research Projects Activity (IARPA) via AFRL contract number FA8650-10-C-7058. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, AFRL, or the U.S. Government.","Hierarchical Summarization: Scaling Up Multi-Document Summarization. 
Multi-document summarization (MDS) systems have been designed for short, unstructured summaries of 10-15 documents, and are inadequate for larger document collections. We propose a new approach to scaling up summarization called hierarchical summarization, and present the first implemented system, SUMMA. SUMMA produces a hierarchy of relatively short summaries, in which the top level provides a general overview and users can navigate the hierarchy to drill down for more details on topics of interest. SUMMA optimizes for coherence as well as coverage of salient information. In an Amazon Mechanical Turk evaluation, users prefered SUMMA ten times as often as flat MDS and three times as often as timelines.",2014
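The hierarchy-of-summaries interface described above can be pictured as a simple tree: each node carries a short summary and children that expand its subtopics. The sketch below is only a data-structure illustration (the node texts are invented); it does not reproduce SUMMA's coherence and coverage optimisation.

```python
# A hierarchy of short summaries: start at the overview, drill down by index.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SummaryNode:
    summary: str
    children: List["SummaryNode"] = field(default_factory=list)

    def drill_down(self, path):
        """Follow a sequence of child indices from the overview to a subtopic."""
        node = self
        for i in path:
            node = node.children[i]
        return node.summary

overview = SummaryNode("Storm hits the coast.", [
    SummaryNode("Evacuations ordered in two counties."),
    SummaryNode("Power outages affect 100,000 homes.",
                [SummaryNode("Utility crews arrive from neighbouring states.")]),
])
print(overview.drill_down([1, 0]))
```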
johnson-1984-discovery,https://aclanthology.org/P84-1070,0,,,,,,,"A Discovery Procedure for Certain Phonological Rules. Acquisition of phonological systems can be insightfully studied in terms of discovery procedures. This paper describes a discovery procedure, implemented in Lisp, capable of determining a set of ordered phonological rules, which may be in opaque contexts, from a set of surface forms arranged in paradigms.",A Discovery Procedure for Certain Phonological Rules,"Acquisition of phonological systems can be insightfully studied in terms of discovery procedures. This paper describes a discovery procedure, implemented in Lisp, capable of determining a set of ordered phonological rules, which may be in opaque contexts, from a set of surface forms arranged in paradigms.",A Discovery Procedure for Certain Phonological Rules,"Acquisition of phonological systems can be insightfully studied in terms of discovery procedures. This paper describes a discovery procedure, implemented in Lisp, capable of determining a set of ordered phonological rules, which may be in opaque contexts, from a set of surface forms arranged in paradigms.",,"A Discovery Procedure for Certain Phonological Rules. Acquisition of phonological systems can be insightfully studied in terms of discovery procedures. This paper describes a discovery procedure, implemented in Lisp, capable of determining a set of ordered phonological rules, which may be in opaque contexts, from a set of surface forms arranged in paradigms.",1984
adams-etal-2020-induced,https://aclanthology.org/2020.sigmorphon-1.25,0,,,,,,,"Induced Inflection-Set Keyword Search in Speech. We investigate the problem of searching for a lexeme-set in speech by searching for its inflectional variants. Experimental results indicate how lexeme-set search performance changes with the number of hypothesized inflections, while ablation experiments highlight the relative importance of different components in the lexeme-set search pipeline and the value of using curated inflectional paradigms. We provide a recipe and evaluation set for the community to use as an extrinsic measure of the performance of inflection generation approaches.",Induced Inflection-Set Keyword Search in Speech,"We investigate the problem of searching for a lexeme-set in speech by searching for its inflectional variants. Experimental results indicate how lexeme-set search performance changes with the number of hypothesized inflections, while ablation experiments highlight the relative importance of different components in the lexeme-set search pipeline and the value of using curated inflectional paradigms. We provide a recipe and evaluation set for the community to use as an extrinsic measure of the performance of inflection generation approaches.",Induced Inflection-Set Keyword Search in Speech,"We investigate the problem of searching for a lexeme-set in speech by searching for its inflectional variants. Experimental results indicate how lexeme-set search performance changes with the number of hypothesized inflections, while ablation experiments highlight the relative importance of different components in the lexeme-set search pipeline and the value of using curated inflectional paradigms. We provide a recipe and evaluation set for the community to use as an extrinsic measure of the performance of inflection generation approaches.",We would like to thank all reviewers for their constructive feedback.,"Induced Inflection-Set Keyword Search in Speech. We investigate the problem of searching for a lexeme-set in speech by searching for its inflectional variants. Experimental results indicate how lexeme-set search performance changes with the number of hypothesized inflections, while ablation experiments highlight the relative importance of different components in the lexeme-set search pipeline and the value of using curated inflectional paradigms. We provide a recipe and evaluation set for the community to use as an extrinsic measure of the performance of inflection generation approaches.",2020
aggarwal-etal-2020-sukhan,https://aclanthology.org/2020.icon-main.29,0,,,,,,,"SUKHAN: Corpus of Hindi Shayaris annotated with Sentiment Polarity Information. Shayari is a form of poetry mainly popular in the Indian subcontinent, in which the poet expresses his emotions and feelings in a very poetic manner. It is one of the best ways to express our thoughts and opinions. Therefore, it is of prime importance to have an annotated corpus of Hindi shayaris for the task of sentiment analysis. In this paper, we introduce SUKHAN, a dataset consisting of Hindi shayaris along with sentiment polarity labels. To the best of our knowledge, this is the first corpus of Hindi shayaris annotated with sentiment polarity information. This corpus contains a total of 733 Hindi shayaris of various genres. Also, this dataset is of utmost value as all the annotation is done manually by five annotators and this makes it a very rich dataset for training purposes. This annotated corpus is also used to build baseline sentiment classification models using machine learning techniques.",{SUKHAN}: Corpus of {H}indi Shayaris annotated with Sentiment Polarity Information,"Shayari is a form of poetry mainly popular in the Indian subcontinent, in which the poet expresses his emotions and feelings in a very poetic manner. It is one of the best ways to express our thoughts and opinions. Therefore, it is of prime importance to have an annotated corpus of Hindi shayaris for the task of sentiment analysis. In this paper, we introduce SUKHAN, a dataset consisting of Hindi shayaris along with sentiment polarity labels. To the best of our knowledge, this is the first corpus of Hindi shayaris annotated with sentiment polarity information. This corpus contains a total of 733 Hindi shayaris of various genres. Also, this dataset is of utmost value as all the annotation is done manually by five annotators and this makes it a very rich dataset for training purposes. This annotated corpus is also used to build baseline sentiment classification models using machine learning techniques.",SUKHAN: Corpus of Hindi Shayaris annotated with Sentiment Polarity Information,"Shayari is a form of poetry mainly popular in the Indian subcontinent, in which the poet expresses his emotions and feelings in a very poetic manner. It is one of the best ways to express our thoughts and opinions. Therefore, it is of prime importance to have an annotated corpus of Hindi shayaris for the task of sentiment analysis. In this paper, we introduce SUKHAN, a dataset consisting of Hindi shayaris along with sentiment polarity labels. To the best of our knowledge, this is the first corpus of Hindi shayaris annotated with sentiment polarity information. This corpus contains a total of 733 Hindi shayaris of various genres. Also, this dataset is of utmost value as all the annotation is done manually by five annotators and this makes it a very rich dataset for training purposes. This annotated corpus is also used to build baseline sentiment classification models using machine learning techniques.",,"SUKHAN: Corpus of Hindi Shayaris annotated with Sentiment Polarity Information. Shayari is a form of poetry mainly popular in the Indian subcontinent, in which the poet expresses his emotions and feelings in a very poetic manner. It is one of the best ways to express our thoughts and opinions. Therefore, it is of prime importance to have an annotated corpus of Hindi shayaris for the task of sentiment analysis. 
In this paper, we introduce SUKHAN, a dataset consisting of Hindi shayaris along with sentiment polarity labels. To the best of our knowledge, this is the first corpus of Hindi shayaris annotated with sentiment polarity information. This corpus contains a total of 733 Hindi shayaris of various genres. Also, this dataset is of utmost value as all the annotation is done manually by five annotators and this makes it a very rich dataset for training purposes. This annotated corpus is also used to build baseline sentiment classification models using machine learning techniques.",2020
ramanand-etal-2010-wishful,https://aclanthology.org/W10-0207,0,,,,business_use,,,"Wishful Thinking - Finding suggestions and 'buy' wishes from product reviews. This paper describes methods aimed at solving the novel problem of automatically discovering 'wishes' from (English) documents such as reviews or customer surveys. These wishes are sentences in which authors make suggestions (especially for improvements) about a product or service or show intentions to purchase a product or service. Such 'wishes' are of great use to product managers and sales personnel, and supplement the area of sentiment analysis by providing insights into the minds of consumers. We describe rules that can help detect these 'wishes' from text. We evaluate these methods on texts from the electronic and banking industries.",Wishful Thinking - Finding suggestions and {'}buy{'} wishes from product reviews,"This paper describes methods aimed at solving the novel problem of automatically discovering 'wishes' from (English) documents such as reviews or customer surveys. These wishes are sentences in which authors make suggestions (especially for improvements) about a product or service or show intentions to purchase a product or service. Such 'wishes' are of great use to product managers and sales personnel, and supplement the area of sentiment analysis by providing insights into the minds of consumers. We describe rules that can help detect these 'wishes' from text. We evaluate these methods on texts from the electronic and banking industries.",Wishful Thinking - Finding suggestions and 'buy' wishes from product reviews,"This paper describes methods aimed at solving the novel problem of automatically discovering 'wishes' from (English) documents such as reviews or customer surveys. These wishes are sentences in which authors make suggestions (especially for improvements) about a product or service or show intentions to purchase a product or service. Such 'wishes' are of great use to product managers and sales personnel, and supplement the area of sentiment analysis by providing insights into the minds of consumers. We describe rules that can help detect these 'wishes' from text. We evaluate these methods on texts from the electronic and banking industries.",,"Wishful Thinking - Finding suggestions and 'buy' wishes from product reviews. This paper describes methods aimed at solving the novel problem of automatically discovering 'wishes' from (English) documents such as reviews or customer surveys. These wishes are sentences in which authors make suggestions (especially for improvements) about a product or service or show intentions to purchase a product or service. Such 'wishes' are of great use to product managers and sales personnel, and supplement the area of sentiment analysis by providing insights into the minds of consumers. We describe rules that can help detect these 'wishes' from text. We evaluate these methods on texts from the electronic and banking industries.",2010
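The wish-detection entry above is rule-based, so a small pattern-matching sketch conveys the flavour. The patterns below are illustrative guesses at suggestion and purchase-intention templates, not the paper's actual rule set.

```python
# Hedged sketch of surface rules for spotting suggestion and "buy" wishes.
import re

SUGGESTION_PATTERNS = [
    r"\bI wish\b", r"\bit would be (nice|great|better) if\b",
    r"\bshould (add|include|support|fix)\b", r"\bneeds? to (be|have)\b",
]
BUY_WISH_PATTERNS = [
    r"\bI (want|plan|intend) to (buy|get|purchase)\b",
    r"\blooking to (buy|upgrade to)\b", r"\bcan'?t wait to (buy|order)\b",
]

def classify_wish(sentence):
    if any(re.search(p, sentence, re.I) for p in SUGGESTION_PATTERNS):
        return "suggestion"
    if any(re.search(p, sentence, re.I) for p in BUY_WISH_PATTERNS):
        return "buy_wish"
    return "none"

print(classify_wish("It would be great if the battery lasted longer."))
print(classify_wish("I plan to buy this phone next month."))
```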
channarukul-etal-2000-enriching,https://aclanthology.org/W00-1422,0,,,,,,,"Enriching partially-specified representations for text realization using an attribute grammar. We present a new approach to enriching underspecified representations of content to be realized as text. Our approach uses an attribute grammar to propagate missing information where needed in a tree that represents the text to be realized. This declaratively-specified grammar mediates between application-produced output and the input to a generation system and, as a consequence, can easily augment an existing generation system. End-applications that use this approach can produce high quality text without a fine-grained specification of the text to be realized, thereby reducing the burden to the application. Additionally, representations used by the generator are compact, because values that can be constructed from the constraints encoded by the grammar will be propagated where necessary. This approach is more flexible than defaulting or making a statistically good choice because it can deal with long-distance dependencies (such as gaps and reflexive pronouns). Our approach differs from other approaches that use attribute grammars in that we use the grammar to enrich the representations of the content to be realized, rather than to generate the text itself. We illustrate the approach with examples from our template-based text realizer, YAG.",Enriching partially-specified representations for text realization using an attribute grammar,"We present a new approach to enriching underspecified representations of content to be realized as text. Our approach uses an attribute grammar to propagate missing information where needed in a tree that represents the text to be realized. This declaratively-specified grammar mediates between application-produced output and the input to a generation system and, as a consequence, can easily augment an existing generation system. End-applications that use this approach can produce high quality text without a fine-grained specification of the text to be realized, thereby reducing the burden to the application. Additionally, representations used by the generator are compact, because values that can be constructed from the constraints encoded by the grammar will be propagated where necessary. This approach is more flexible than defaulting or making a statistically good choice because it can deal with long-distance dependencies (such as gaps and reflexive pronouns). Our approach differs from other approaches that use attribute grammars in that we use the grammar to enrich the representations of the content to be realized, rather than to generate the text itself. We illustrate the approach with examples from our template-based text realizer, YAG.",Enriching partially-specified representations for text realization using an attribute grammar,"We present a new approach to enriching underspecified representations of content to be realized as text. Our approach uses an attribute grammar to propagate missing information where needed in a tree that represents the text to be realized. This declaratively-specified grammar mediates between application-produced output and the input to a generation system and, as a consequence, can easily augment an existing generation system. End-applications that use this approach can produce high quality text without a fine-grained specification of the text to be realized, thereby reducing the burden to the application. 
Additionally, representations used by the generator are compact, because values that can be constructed from the constraints encoded by the grammar will be propagated where necessary. This approach is more flexible than defaulting or making a statistically good choice because it can deal with long-distance dependencies (such as gaps and reflexive pronouns). Our approach differs from other approaches that use attribute grammars in that we use the grammar to enrich the representations of the content to be realized, rather than to generate the text itself. We illustrate the approach with examples from our template-based text realizer, YAG.",The authors are indebted to John T. Boyland for his helpful comments and suggestions.,"Enriching partially-specified representations for text realization using an attribute grammar. We present a new approach to enriching underspecified representations of content to be realized as text. Our approach uses an attribute grammar to propagate missing information where needed in a tree that represents the text to be realized. This declaratively-specified grammar mediates between application-produced output and the input to a generation system and, as a consequence, can easily augment an existing generation system. End-applications that use this approach can produce high quality text without a fine-grained specification of the text to be realized, thereby reducing the burden to the application. Additionally, representations used by the generator are compact, because values that can be constructed from the constraints encoded by the grammar will be propagated where necessary. This approach is more flexible than defaulting or making a statistically good choice because it can deal with long-distance dependencies (such as gaps and reflexive pronouns). Our approach differs from other approaches that use attribute grammars in that we use the grammar to enrich the representations of the content to be realized, rather than to generate the text itself. We illustrate the approach with examples from our template-based text realizer, YAG.",2000
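The attribute-grammar idea in the entry above, propagating missing information through the tree handed to the realizer, can be sketched with inherited features that flow from parent to child when the child leaves them unspecified. The feature names and rules below are invented for illustration and are not YAG's actual formalism.

```python
# Fill unspecified features on tree nodes from their parents
# (a tiny stand-in for attribute-grammar equations).
class Node:
    def __init__(self, label, features=None, children=None):
        self.label = label
        self.features = dict(features or {})
        self.children = list(children or [])

INHERITED = {"np": ["number", "person"], "verb": ["number", "person", "tense"]}

def propagate(node):
    for child in node.children:
        for feat in INHERITED.get(child.label, []):
            # Only fill features the application left out.
            child.features.setdefault(feat, node.features.get(feat))
        propagate(child)

clause = Node("clause", {"number": "plural", "person": "3", "tense": "past"},
              [Node("np", {"head": "dogs"}), Node("verb", {"head": "bark"})])
propagate(clause)
print(clause.children[1].features)  # the verb inherits number/person/tense
```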
xu-etal-2021-adaptive,https://aclanthology.org/2021.emnlp-main.198,0,,,,,,,"Adaptive Bridge between Training and Inference for Dialogue Generation. Although exposure bias has been widely studied in some NLP tasks, it faces its unique challenges in dialogue response generation, the representative one-to-various generation scenario. In real human dialogue, there are many appropriate responses for the same context, not only with different expressions, but also with different topics. Therefore, due to the much bigger gap between various ground-truth responses and the generated synthetic response, exposure bias is more challenging in dialogue generation task. What's more, as MLE encourages the model to only learn the common words among different ground-truth responses, but ignores the interesting and specific parts, exposure bias may further lead to the common response generation problem, such as ""I don't know"" and ""HaHa?"" In this paper, we propose a novel adaptive switching mechanism, which learns to automatically transit between ground-truth learning and generated learning regarding the word-level matching score, such as the cosine similarity. Experimental results on both Chinese STC dataset and English Reddit dataset, show that our adaptive method achieves a significant improvement in terms of metric-based evaluation and human evaluation, as compared with the state-of-the-art exposure bias approaches. Further analysis on NMT task also shows that our model can achieve a significant improvement.",Adaptive Bridge between Training and Inference for Dialogue Generation,"Although exposure bias has been widely studied in some NLP tasks, it faces its unique challenges in dialogue response generation, the representative one-to-various generation scenario. In real human dialogue, there are many appropriate responses for the same context, not only with different expressions, but also with different topics. Therefore, due to the much bigger gap between various ground-truth responses and the generated synthetic response, exposure bias is more challenging in dialogue generation task. What's more, as MLE encourages the model to only learn the common words among different ground-truth responses, but ignores the interesting and specific parts, exposure bias may further lead to the common response generation problem, such as ""I don't know"" and ""HaHa?"" In this paper, we propose a novel adaptive switching mechanism, which learns to automatically transit between ground-truth learning and generated learning regarding the word-level matching score, such as the cosine similarity. Experimental results on both Chinese STC dataset and English Reddit dataset, show that our adaptive method achieves a significant improvement in terms of metric-based evaluation and human evaluation, as compared with the state-of-the-art exposure bias approaches. Further analysis on NMT task also shows that our model can achieve a significant improvement.",Adaptive Bridge between Training and Inference for Dialogue Generation,"Although exposure bias has been widely studied in some NLP tasks, it faces its unique challenges in dialogue response generation, the representative one-to-various generation scenario. In real human dialogue, there are many appropriate responses for the same context, not only with different expressions, but also with different topics. Therefore, due to the much bigger gap between various ground-truth responses and the generated synthetic response, exposure bias is more challenging in dialogue generation task. 
What's more, as MLE encourages the model to only learn the common words among different ground-truth responses, but ignores the interesting and specific parts, exposure bias may further lead to the common response generation problem, such as ""I don't know"" and ""HaHa?"" In this paper, we propose a novel adaptive switching mechanism, which learns to automatically transit between ground-truth learning and generated learning regarding the word-level matching score, such as the cosine similarity. Experimental results on both Chinese STC dataset and English Reddit dataset, show that our adaptive method achieves a significant improvement in terms of metric-based evaluation and human evaluation, as compared with the state-of-the-art exposure bias approaches. Further analysis on NMT task also shows that our model can achieve a significant improvement.","This work is supported by the Beijing Academy of Artificial Intelligence (BAAI), and the National Natural Science Foundation of China (NSFC) (No.61773362).","Adaptive Bridge between Training and Inference for Dialogue Generation. Although exposure bias has been widely studied in some NLP tasks, it faces its unique challenges in dialogue response generation, the representative one-to-various generation scenario. In real human dialogue, there are many appropriate responses for the same context, not only with different expressions, but also with different topics. Therefore, due to the much bigger gap between various ground-truth responses and the generated synthetic response, exposure bias is more challenging in dialogue generation task. What's more, as MLE encourages the model to only learn the common words among different ground-truth responses, but ignores the interesting and specific parts, exposure bias may further lead to the common response generation problem, such as ""I don't know"" and ""HaHa?"" In this paper, we propose a novel adaptive switching mechanism, which learns to automatically transit between ground-truth learning and generated learning regarding the word-level matching score, such as the cosine similarity. Experimental results on both Chinese STC dataset and English Reddit dataset, show that our adaptive method achieves a significant improvement in terms of metric-based evaluation and human evaluation, as compared with the state-of-the-art exposure bias approaches. Further analysis on NMT task also shows that our model can achieve a significant improvement.",2021
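The switching mechanism described above decides, token by token during training, whether to feed the ground-truth word or the model's own prediction, guided by a word-level matching score such as cosine similarity. The sketch below is a hedged approximation of that gating; the exact schedule and gating function in the paper may differ.

```python
# Choose the next decoder input: ground truth vs. model prediction,
# with similar predictions treated as "safe" to feed back in.
import torch
import torch.nn.functional as F

def choose_next_input(gt_id, pred_id, embedding, sample=True):
    sim = F.cosine_similarity(embedding(gt_id), embedding(pred_id), dim=-1)
    p_use_pred = sim.clamp(min=0.0)
    if sample:
        use_pred = torch.rand_like(p_use_pred) < p_use_pred
    else:
        use_pred = p_use_pred > 0.5
    return torch.where(use_pred, pred_id, gt_id)

emb = torch.nn.Embedding(100, 16)            # toy embedding table
gt = torch.tensor([3, 7])                    # ground-truth token ids
pred = torch.tensor([3, 42])                 # model-predicted token ids
print(choose_next_input(gt, pred, emb))
```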
davoodi-kosseim-2016-contribution,https://aclanthology.org/W16-3620,0,,,,,,,"On the Contribution of Discourse Structure on Text Complexity Assessment. This paper investigates the influence of discourse features on text complexity assessment. To do so, we created two data sets based on the Penn Discourse Treebank and the Simple English Wikipedia corpora and compared the influence of coherence, cohesion, surface, lexical and syntactic features to assess text complexity. Results show that with both data sets coherence features are more correlated to text complexity than the other types of features. In addition, feature selection revealed that with both data sets the top most discriminating feature is a coherence feature.",On the Contribution of Discourse Structure on Text Complexity Assessment,"This paper investigates the influence of discourse features on text complexity assessment. To do so, we created two data sets based on the Penn Discourse Treebank and the Simple English Wikipedia corpora and compared the influence of coherence, cohesion, surface, lexical and syntactic features to assess text complexity. Results show that with both data sets coherence features are more correlated to text complexity than the other types of features. In addition, feature selection revealed that with both data sets the top most discriminating feature is a coherence feature.",On the Contribution of Discourse Structure on Text Complexity Assessment,"This paper investigates the influence of discourse features on text complexity assessment. To do so, we created two data sets based on the Penn Discourse Treebank and the Simple English Wikipedia corpora and compared the influence of coherence, cohesion, surface, lexical and syntactic features to assess text complexity. Results show that with both data sets coherence features are more correlated to text complexity than the other types of features. In addition, feature selection revealed that with both data sets the top most discriminating feature is a coherence feature.",The authors would like to thank the anonymous reviewers for their feedback on the paper. This work was financially supported by NSERC.,"On the Contribution of Discourse Structure on Text Complexity Assessment. This paper investigates the influence of discourse features on text complexity assessment. To do so, we created two data sets based on the Penn Discourse Treebank and the Simple English Wikipedia corpora and compared the influence of coherence, cohesion, surface, lexical and syntactic features to assess text complexity. Results show that with both data sets coherence features are more correlated to text complexity than the other types of features. In addition, feature selection revealed that with both data sets the top most discriminating feature is a coherence feature.",2016
buck-vlachos-2021-trajectory,https://aclanthology.org/2021.adaptnlp-1.15,0,,,,,,,"Trajectory-Based Meta-Learning for Out-Of-Vocabulary Word Embedding Learning. Word embedding learning methods require a large number of occurrences of a word to accurately learn its embedding. However, out-of-vocabulary (OOV) words which do not appear in the training corpus emerge frequently in the smaller downstream data. Recent work formulated OOV embedding learning as a few-shot regression problem and demonstrated that meta-learning can improve results obtained. However, the algorithm used, model-agnostic meta-learning (MAML) is known to be unstable and perform worse when a large number of gradient steps are used for parameter updates. In this work, we propose the use of Leap, a meta-learning algorithm which leverages the entire trajectory of the learning process instead of just the beginning and the end points, and thus ameliorates these two issues. In our experiments on a benchmark OOV embedding learning dataset and in an extrinsic evaluation, Leap performs comparably or better than MAML. We go on to examine which contexts are most beneficial to learn an OOV embedding from, and propose that the choice of contexts may matter more than the meta-learning employed.",Trajectory-Based Meta-Learning for Out-Of-Vocabulary Word Embedding Learning,"Word embedding learning methods require a large number of occurrences of a word to accurately learn its embedding. However, out-of-vocabulary (OOV) words which do not appear in the training corpus emerge frequently in the smaller downstream data. Recent work formulated OOV embedding learning as a few-shot regression problem and demonstrated that meta-learning can improve results obtained. However, the algorithm used, model-agnostic meta-learning (MAML) is known to be unstable and perform worse when a large number of gradient steps are used for parameter updates. In this work, we propose the use of Leap, a meta-learning algorithm which leverages the entire trajectory of the learning process instead of just the beginning and the end points, and thus ameliorates these two issues. In our experiments on a benchmark OOV embedding learning dataset and in an extrinsic evaluation, Leap performs comparably or better than MAML. We go on to examine which contexts are most beneficial to learn an OOV embedding from, and propose that the choice of contexts may matter more than the meta-learning employed.",Trajectory-Based Meta-Learning for Out-Of-Vocabulary Word Embedding Learning,"Word embedding learning methods require a large number of occurrences of a word to accurately learn its embedding. However, out-of-vocabulary (OOV) words which do not appear in the training corpus emerge frequently in the smaller downstream data. Recent work formulated OOV embedding learning as a few-shot regression problem and demonstrated that meta-learning can improve results obtained. However, the algorithm used, model-agnostic meta-learning (MAML) is known to be unstable and perform worse when a large number of gradient steps are used for parameter updates. In this work, we propose the use of Leap, a meta-learning algorithm which leverages the entire trajectory of the learning process instead of just the beginning and the end points, and thus ameliorates these two issues. In our experiments on a benchmark OOV embedding learning dataset and in an extrinsic evaluation, Leap performs comparably or better than MAML. 
We go on to examine which contexts are most beneficial to learn an OOV embedding from, and propose that the choice of contexts may matter more than the meta-learning employed.",,"Trajectory-Based Meta-Learning for Out-Of-Vocabulary Word Embedding Learning. Word embedding learning methods require a large number of occurrences of a word to accurately learn its embedding. However, out-of-vocabulary (OOV) words which do not appear in the training corpus emerge frequently in the smaller downstream data. Recent work formulated OOV embedding learning as a few-shot regression problem and demonstrated that meta-learning can improve results obtained. However, the algorithm used, model-agnostic meta-learning (MAML) is known to be unstable and perform worse when a large number of gradient steps are used for parameter updates. In this work, we propose the use of Leap, a meta-learning algorithm which leverages the entire trajectory of the learning process instead of just the beginning and the end points, and thus ameliorates these two issues. In our experiments on a benchmark OOV embedding learning dataset and in an extrinsic evaluation, Leap performs comparably or better than MAML. We go on to examine which contexts are most beneficial to learn an OOV embedding from, and propose that the choice of contexts may matter more than the meta-learning employed.",2021
kondratyuk-2019-cross,https://aclanthology.org/W19-4203,0,,,,,,,"Cross-Lingual Lemmatization and Morphology Tagging with Two-Stage Multilingual BERT Fine-Tuning. We present our CHARLES-SAARLAND system for the SIGMORPHON 2019 Shared Task on Crosslinguality and Context in Morphology, in task 2, Morphological Analysis and Lemmatization in Context. We leverage the multilingual BERT model and apply several fine-tuning strategies introduced by UDify demonstrating exceptional evaluation performance on morpho-syntactic tasks. Our results show that fine-tuning multilingual BERT on the concatenation of all available treebanks allows the model to learn cross-lingual information that is able to boost lemmatization and morphology tagging accuracy over fine-tuning it purely monolingually. Unlike UDify, however, we show that when paired with additional character-level and word-level LSTM layers, a second stage of fine-tuning on each treebank individually can improve evaluation even further. Out of all submissions for this shared task, our system achieves the highest average accuracy and f1 score in morphology tagging and places second in average lemmatization accuracy.",Cross-Lingual Lemmatization and Morphology Tagging with Two-Stage Multilingual {BERT} Fine-Tuning,"We present our CHARLES-SAARLAND system for the SIGMORPHON 2019 Shared Task on Crosslinguality and Context in Morphology, in task 2, Morphological Analysis and Lemmatization in Context. We leverage the multilingual BERT model and apply several fine-tuning strategies introduced by UDify demonstrating exceptional evaluation performance on morpho-syntactic tasks. Our results show that fine-tuning multilingual BERT on the concatenation of all available treebanks allows the model to learn cross-lingual information that is able to boost lemmatization and morphology tagging accuracy over fine-tuning it purely monolingually. Unlike UDify, however, we show that when paired with additional character-level and word-level LSTM layers, a second stage of fine-tuning on each treebank individually can improve evaluation even further. Out of all submissions for this shared task, our system achieves the highest average accuracy and f1 score in morphology tagging and places second in average lemmatization accuracy.",Cross-Lingual Lemmatization and Morphology Tagging with Two-Stage Multilingual BERT Fine-Tuning,"We present our CHARLES-SAARLAND system for the SIGMORPHON 2019 Shared Task on Crosslinguality and Context in Morphology, in task 2, Morphological Analysis and Lemmatization in Context. We leverage the multilingual BERT model and apply several fine-tuning strategies introduced by UDify demonstrating exceptional evaluation performance on morpho-syntactic tasks. Our results show that fine-tuning multilingual BERT on the concatenation of all available treebanks allows the model to learn cross-lingual information that is able to boost lemmatization and morphology tagging accuracy over fine-tuning it purely monolingually. Unlike UDify, however, we show that when paired with additional character-level and word-level LSTM layers, a second stage of fine-tuning on each treebank individually can improve evaluation even further. 
Out of all submissions for this shared task, our system achieves the highest average accuracy and f1 score in morphology tagging and places second in average lemmatization accuracy.",Daniel Kondratyuk has been supported by the Erasmus Mundus program in Language & Communication Technologies (LCT).,"Cross-Lingual Lemmatization and Morphology Tagging with Two-Stage Multilingual BERT Fine-Tuning. We present our CHARLES-SAARLAND system for the SIGMORPHON 2019 Shared Task on Crosslinguality and Context in Morphology, in task 2, Morphological Analysis and Lemmatization in Context. We leverage the multilingual BERT model and apply several fine-tuning strategies introduced by UDify demonstrating exceptional evaluation performance on morpho-syntactic tasks. Our results show that fine-tuning multilingual BERT on the concatenation of all available treebanks allows the model to learn cross-lingual information that is able to boost lemmatization and morphology tagging accuracy over fine-tuning it purely monolingually. Unlike UDify, however, we show that when paired with additional character-level and word-level LSTM layers, a second stage of fine-tuning on each treebank individually can improve evaluation even further. Out of all submissions for this shared task, our system achieves the highest average accuracy and f1 score in morphology tagging and places second in average lemmatization accuracy.",2019
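The two-stage recipe in the entry above is easy to lay out schematically: fine-tune one multilingual model on all treebanks pooled together, then continue fine-tuning a copy per treebank. `fine_tune` below is a placeholder for a full tagger/lemmatizer training loop, not the system's actual code.

```python
# Schematic two-stage fine-tuning: shared multilingual stage, then
# per-treebank specialisation of a copy of the shared model.
import copy

def fine_tune(model, dataset, epochs):
    # Placeholder: run the usual optimisation loop over `dataset`.
    return model

def two_stage_training(base_model, treebanks):
    # Stage 1: fine-tune on the concatenation of all treebanks.
    pooled = [example for tb in treebanks.values() for example in tb]
    shared = fine_tune(base_model, pooled, epochs=3)
    # Stage 2: specialise a separate copy for each treebank.
    per_treebank = {name: fine_tune(copy.deepcopy(shared), tb, epochs=2)
                    for name, tb in treebanks.items()}
    return shared, per_treebank
```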
gardner-etal-2020-determining,https://aclanthology.org/2020.wnut-1.4,0,,,,,,,"Determining Question-Answer Plausibility in Crowdsourced Datasets Using Multi-Task Learning. Datasets extracted from social networks and online forums are often prone to the pitfalls of natural language, namely the presence of unstructured and noisy data. In this work, we seek to enable the collection of high-quality question-answer datasets from social media by proposing a novel task for automated quality analysis and data cleaning: question-answer (QA) plausibility. Given a machine or user-generated question and a crowd-sourced response from a social media user, we determine if the question and response are valid; if so, we identify the answer within the free-form response. We design BERT-based models to perform the QA plausibility task, and we evaluate the ability of our models to generate a clean, usable question-answer dataset. Our highest-performing approach consists of a single-task model which determines the plausibility of the question, followed by a multi-task model which evaluates the plausibility of the response as well as extracts answers (Question Plausibility AUROC=0.75, Response Plausibility AUROC=0.78, Answer Extraction F1=0.665).",Determining Question-Answer Plausibility in Crowdsourced Datasets Using Multi-Task Learning,"Datasets extracted from social networks and online forums are often prone to the pitfalls of natural language, namely the presence of unstructured and noisy data. In this work, we seek to enable the collection of high-quality question-answer datasets from social media by proposing a novel task for automated quality analysis and data cleaning: question-answer (QA) plausibility. Given a machine or user-generated question and a crowd-sourced response from a social media user, we determine if the question and response are valid; if so, we identify the answer within the free-form response. We design BERT-based models to perform the QA plausibility task, and we evaluate the ability of our models to generate a clean, usable question-answer dataset. Our highest-performing approach consists of a single-task model which determines the plausibility of the question, followed by a multi-task model which evaluates the plausibility of the response as well as extracts answers (Question Plausibility AUROC=0.75, Response Plausibility AUROC=0.78, Answer Extraction F1=0.665).",Determining Question-Answer Plausibility in Crowdsourced Datasets Using Multi-Task Learning,"Datasets extracted from social networks and online forums are often prone to the pitfalls of natural language, namely the presence of unstructured and noisy data. In this work, we seek to enable the collection of high-quality question-answer datasets from social media by proposing a novel task for automated quality analysis and data cleaning: question-answer (QA) plausibility. Given a machine or user-generated question and a crowd-sourced response from a social media user, we determine if the question and response are valid; if so, we identify the answer within the free-form response. We design BERT-based models to perform the QA plausibility task, and we evaluate the ability of our models to generate a clean, usable question-answer dataset. 
Our highest-performing approach consists of a single-task model which determines the plausibility of the question, followed by a multi-task model which evaluates the plausibility of the response as well as extracts answers (Question Plausibility AUROC=0.75, Response Plausibility AUROC=0.78, Answer Extraction F1=0.665).",,"Determining Question-Answer Plausibility in Crowdsourced Datasets Using Multi-Task Learning. Datasets extracted from social networks and online forums are often prone to the pitfalls of natural language, namely the presence of unstructured and noisy data. In this work, we seek to enable the collection of high-quality question-answer datasets from social media by proposing a novel task for automated quality analysis and data cleaning: question-answer (QA) plausibility. Given a machine or user-generated question and a crowd-sourced response from a social media user, we determine if the question and response are valid; if so, we identify the answer within the free-form response. We design BERT-based models to perform the QA plausibility task, and we evaluate the ability of our models to generate a clean, usable question-answer dataset. Our highest-performing approach consists of a single-task model which determines the plausibility of the question, followed by a multi-task model which evaluates the plausibility of the response as well as extracts answers (Question Plausibility AUROC=0.75, Response Plausibility AUROC=0.78, Answer Extraction F1=0.665).",2020
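The pipeline in the entry above filters with a single-task question classifier first and only then runs the multi-task response model, so a small wrapper captures the control flow. The three predictor arguments below are placeholders standing in for the BERT-based models, not the authors' released components.

```python
# Two-step QA-plausibility filtering: drop implausible questions early,
# then score the response and extract an answer span from plausible pairs.
def filter_and_extract(question, response,
                       question_clf, response_clf, answer_extractor,
                       threshold=0.5):
    if question_clf(question) < threshold:
        return {"keep": False, "reason": "implausible question"}
    resp_score = response_clf(question, response)
    if resp_score < threshold:
        return {"keep": False, "reason": "implausible response"}
    return {"keep": True,
            "answer": answer_extractor(question, response),
            "response_plausibility": resp_score}

# Toy stand-ins to show the call pattern.
print(filter_and_extract(
    "What breed is the dog?", "It's a corgi, I think.",
    question_clf=lambda q: 0.9,
    response_clf=lambda q, r: 0.8,
    answer_extractor=lambda q, r: "a corgi"))
```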
zhang-lapata-2014-chinese,https://aclanthology.org/D14-1074,0,,,,,,,"Chinese Poetry Generation with Recurrent Neural Networks. We propose a model for Chinese poem generation based on recurrent neural networks which we argue is ideally suited to capturing poetic content and form. Our generator jointly performs content selection (""what to say"") and surface realization (""how to say"") by learning representations of individual characters, and their combinations into one or more lines as well as how these mutually reinforce and constrain each other. Poem lines are generated incrementally by taking into account the entire history of what has been generated so far rather than the limited horizon imposed by the previous line or lexical n-grams. Experimental results show that our model outperforms competitive Chinese poetry generation systems using both automatic and manual evaluation methods.",{C}hinese Poetry Generation with Recurrent Neural Networks,"We propose a model for Chinese poem generation based on recurrent neural networks which we argue is ideally suited to capturing poetic content and form. Our generator jointly performs content selection (""what to say"") and surface realization (""how to say"") by learning representations of individual characters, and their combinations into one or more lines as well as how these mutually reinforce and constrain each other. Poem lines are generated incrementally by taking into account the entire history of what has been generated so far rather than the limited horizon imposed by the previous line or lexical n-grams. Experimental results show that our model outperforms competitive Chinese poetry generation systems using both automatic and manual evaluation methods.",Chinese Poetry Generation with Recurrent Neural Networks,"We propose a model for Chinese poem generation based on recurrent neural networks which we argue is ideally suited to capturing poetic content and form. Our generator jointly performs content selection (""what to say"") and surface realization (""how to say"") by learning representations of individual characters, and their combinations into one or more lines as well as how these mutually reinforce and constrain each other. Poem lines are generated incrementally by taking into account the entire history of what has been generated so far rather than the limited horizon imposed by the previous line or lexical n-grams. Experimental results show that our model outperforms competitive Chinese poetry generation systems using both automatic and manual evaluation methods.","We would like to thank Eva Halser for valuable discussions on the machine translation baseline. We are grateful to the 30 Chinese poetry experts for participating in our rating study. Thanks to Gujing Lu, Chu Liu, and Yibo Wang for their help with translating the poems in Table 6 and Table 1. ","Chinese Poetry Generation with Recurrent Neural Networks. We propose a model for Chinese poem generation based on recurrent neural networks which we argue is ideally suited to capturing poetic content and form. Our generator jointly performs content selection (""what to say"") and surface realization (""how to say"") by learning representations of individual characters, and their combinations into one or more lines as well as how these mutually reinforce and constrain each other. Poem lines are generated incrementally by taking into account the entire history of what has been generated so far rather than the limited horizon imposed by the previous line or lexical n-grams. 
Experimental results show that our model outperforms competitive Chinese poetry generation systems using both automatic and manual evaluation methods.",2014
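The key generation idea in the entry above, conditioning each new line on everything generated so far rather than only the previous line, can be sketched with a history encoder whose final state conditions a line decoder. Dimensions, the character vocabulary, and the greedy decoding loop below are simplified placeholders, not the paper's actual architecture.

```python
# Encode all characters generated so far into one history vector and use it
# to condition generation of the next poem line (greedy, for illustration).
import torch
import torch.nn as nn

class PoemLineGenerator(nn.Module):
    def __init__(self, vocab_size=6000, emb=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.history_enc = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRUCell(emb + hidden, hidden)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, history_chars, line_len=7):
        _, h = self.history_enc(self.embed(history_chars))
        ctx = state = h[-1]                       # summary of the whole history
        prev = torch.zeros(history_chars.size(0), dtype=torch.long)  # id 0 as start
        chars = []
        for _ in range(line_len):
            inp = torch.cat([self.embed(prev), ctx], dim=-1)
            state = self.decoder(inp, state)
            prev = self.out(state).argmax(dim=-1)  # greedy next character
            chars.append(prev)
        return torch.stack(chars, dim=1)

gen = PoemLineGenerator()
print(gen(torch.randint(0, 6000, (1, 14))).shape)  # a 7-character next line
```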
takehisa-2017-remarks,https://aclanthology.org/Y17-1028,0,,,,,,,"Remarks on Denominal -Ed Adjectives. This paper discusses denominal adjectives derived by affixation of -ed in English in light of recent advances in linguistic theory and makes the following three claims. First, unlike recent proposals arguing against their denominal status, the paper defends the widely held view that these adjectives are derived from nominals and goes on to argue that the nominal bases involved are structurally reduced: nP. Second, the paper argues that the suffix -ed in denominal adjectives shows no contextual allomorphy, which is a natural consequence that follows from the workings of the mechanism of exponent insertion in Distributed Morphology (Halle and Marantz, 1993). Third, the meaning associated with denominal -ed adjectives stems from the suffix's denotation requiring a relation, which effectively restricts base nominals to relational nouns, derived or underived. It is also argued that the suffix is crucially different from possessive determiners in English (e.g., 's) in that, while the former imposes type shifting on non-relational nouns, the latter undergo type shifting to accommodate them.",Remarks on Denominal -Ed Adjectives,"This paper discusses denominal adjectives derived by affixation of -ed in English in light of recent advances in linguistic theory and makes the following three claims. First, unlike recent proposals arguing against their denominal status, the paper defends the widely held view that these adjectives are derived from nominals and goes on to argue that the nominal bases involved are structurally reduced: nP. Second, the paper argues that the suffix -ed in denominal adjectives shows no contextual allomorphy, which is a natural consequence that follows from the workings of the mechanism of exponent insertion in Distributed Morphology (Halle and Marantz, 1993). Third, the meaning associated with denominal -ed adjectives stems from the suffix's denotation requiring a relation, which effectively restricts base nominals to relational nouns, derived or underived. It is also argued that the suffix is crucially different from possessive determiners in English (e.g., 's) in that, while the former imposes type shifting on non-relational nouns, the latter undergo type shifting to accommodate them.",Remarks on Denominal -Ed Adjectives,"This paper discusses denominal adjectives derived by affixation of -ed in English in light of recent advances in linguistic theory and makes the following three claims. First, unlike recent proposals arguing against their denominal status, the paper defends the widely held view that these adjectives are derived from nominals and goes on to argue that the nominal bases involved are structurally reduced: nP. Second, the paper argues that the suffix -ed in denominal adjectives shows no contextual allomorphy, which is a natural consequence that follows from the workings of the mechanism of exponent insertion in Distributed Morphology (Halle and Marantz, 1993). Third, the meaning associated with denominal -ed adjectives stems from the suffix's denotation requiring a relation, which effectively restricts base nominals to relational nouns, derived or underived. 
It is also argued that the suffix is crucially different from possessive determiners in English (e.g., 's) in that, while the former imposes type shifting on non-relational nouns, the latter undergo type shifting to accommodate them.",I am grateful to an anonymous reviewer for providing invaluable comments on an earlier version of this paper. The usual disclaimers apply.,"Remarks on Denominal -Ed Adjectives. This paper discusses denominal adjectives derived by affixation of-ed in English in light of recent advances in linguistic theory and makes the following three claims. First, unlike recent proposals arguing against their denominal status, the paper defends the widely held view that these adjectives are derived from nominals and goes on to argue that the nominal bases involved are structurally reduced: nP. Second, the paper argues that the suffixed in denominal adjectives shows no contextual allomorphy, which is a natural consequence that follows from the workings of the mechanism of exponent insertion in Distributed Morphology (Halle and Marantz, 1993). Third, the meaning associated with denominal-ed adjectives stems from the suffix's denotation requiring a relation, which effectively restricts base nominals to relational nouns, derived or underived. It is also argued that the suffix is crucially different from possessive determiners in English (e.g., 's) in that, while the former imposes type shifting on non-relational nouns, the latter undergo type shifting to accommodate them.",2017
yan-etal-2021-adatag,https://aclanthology.org/2021.acl-long.362,0,,,,business_use,,,"AdaTag: Multi-Attribute Value Extraction from Product Profiles with Adaptive Decoding. Automatic extraction of product attribute values is an important enabling technology in e-Commerce platforms. This task is usually modeled using sequence labeling architectures, with several extensions to handle multi-attribute extraction. One line of previous work constructs attribute-specific models, through separate decoders or entirely separate models. However, this approach constrains knowledge sharing across different attributes. Other contributions use a single multi-attribute model, with different techniques to embed attribute information. But sharing the entire network parameters across all attributes can limit the model's capacity to capture attribute-specific characteristics. In this paper we present AdaTag, which uses adaptive decoding to handle extraction. We parameterize the decoder with pretrained attribute embeddings, through a hypernetwork and a Mixture-of-Experts (MoE) module. This allows for separate, but semantically correlated, decoders to be generated on the fly for different attributes. This approach facilitates knowledge sharing, while maintaining the specificity of each attribute. Our experiments on a real-world e-Commerce dataset show marked improvements over previous methods. * Most of the work was done during an internship at Amazon.",{A}da{T}ag: Multi-Attribute Value Extraction from Product Profiles with Adaptive Decoding,"Automatic extraction of product attribute values is an important enabling technology in e-Commerce platforms. This task is usually modeled using sequence labeling architectures, with several extensions to handle multi-attribute extraction. One line of previous work constructs attribute-specific models, through separate decoders or entirely separate models. However, this approach constrains knowledge sharing across different attributes. Other contributions use a single multi-attribute model, with different techniques to embed attribute information. But sharing the entire network parameters across all attributes can limit the model's capacity to capture attribute-specific characteristics. In this paper we present AdaTag, which uses adaptive decoding to handle extraction. We parameterize the decoder with pretrained attribute embeddings, through a hypernetwork and a Mixture-of-Experts (MoE) module. This allows for separate, but semantically correlated, decoders to be generated on the fly for different attributes. This approach facilitates knowledge sharing, while maintaining the specificity of each attribute. Our experiments on a real-world e-Commerce dataset show marked improvements over previous methods. * Most of the work was done during an internship at Amazon.",AdaTag: Multi-Attribute Value Extraction from Product Profiles with Adaptive Decoding,"Automatic extraction of product attribute values is an important enabling technology in e-Commerce platforms. This task is usually modeled using sequence labeling architectures, with several extensions to handle multi-attribute extraction. One line of previous work constructs attribute-specific models, through separate decoders or entirely separate models. However, this approach constrains knowledge sharing across different attributes. Other contributions use a single multi-attribute model, with different techniques to embed attribute information. 
But sharing the entire network parameters across all attributes can limit the model's capacity to capture attribute-specific characteristics. In this paper we present AdaTag, which uses adaptive decoding to handle extraction. We parameterize the decoder with pretrained attribute embeddings, through a hypernetwork and a Mixture-of-Experts (MoE) module. This allows for separate, but semantically correlated, decoders to be generated on the fly for different attributes. This approach facilitates knowledge sharing, while maintaining the specificity of each attribute. Our experiments on a real-world e-Commerce dataset show marked improvements over previous methods. * Most of the work was done during an internship at Amazon.","This work has been supported in part by NSF SMA 18-29268. We would like to thank Jun Ma, Chenwei Zhang, Colin Lockard, Pascual Martínez-Gómez, Binxuan Huang from Amazon, and all the collaborators in USC INK research lab, for their constructive feedback on the work. We would also like to thank the anonymous reviewers for their valuable comments.","AdaTag: Multi-Attribute Value Extraction from Product Profiles with Adaptive Decoding. Automatic extraction of product attribute values is an important enabling technology in e-Commerce platforms. This task is usually modeled using sequence labeling architectures, with several extensions to handle multi-attribute extraction. One line of previous work constructs attribute-specific models, through separate decoders or entirely separate models. However, this approach constrains knowledge sharing across different attributes. Other contributions use a single multi-attribute model, with different techniques to embed attribute information. But sharing the entire network parameters across all attributes can limit the model's capacity to capture attribute-specific characteristics. In this paper we present AdaTag, which uses adaptive decoding to handle extraction. We parameterize the decoder with pretrained attribute embeddings, through a hypernetwork and a Mixture-of-Experts (MoE) module. This allows for separate, but semantically correlated, decoders to be generated on the fly for different attributes. This approach facilitates knowledge sharing, while maintaining the specificity of each attribute. Our experiments on a real-world e-Commerce dataset show marked improvements over previous methods. * Most of the work was done during an internship at Amazon.",2021
chen-kageura-2020-multilingualization,https://aclanthology.org/2020.lrec-1.512,1,,,,health,,,"Multilingualization of Medical Terminology: Semantic and Structural Embedding Approaches. The multilingualization of terminology is an essential step in the translation pipeline, to ensure the correct transfer of domain-specific concepts. Many institutions and language service providers construct and maintain multilingual terminologies, which constitute important assets. However, the curation of such multilingual resources requires significant human effort; though automatic multilingual term extraction methods have been proposed so far, they are of limited success as term translation cannot be satisfied by simply conveying meaning, but requires the terminologists and domain experts' knowledge to fit the term within the existing terminology. Here we propose a method to encode the structural properties of terms by aligning their embeddings using graph convolutional networks trained from separate languages. The results show that the structural information can augment the standard bilingual lexicon induction methods, and that taking into account the structural nature of terminologies allows our method to produce better results.",Multilingualization of Medical Terminology: Semantic and Structural Embedding Approaches,"The multilingualization of terminology is an essential step in the translation pipeline, to ensure the correct transfer of domain-specific concepts. Many institutions and language service providers construct and maintain multilingual terminologies, which constitute important assets. However, the curation of such multilingual resources requires significant human effort; though automatic multilingual term extraction methods have been proposed so far, they are of limited success as term translation cannot be satisfied by simply conveying meaning, but requires the terminologists and domain experts' knowledge to fit the term within the existing terminology. Here we propose a method to encode the structural properties of terms by aligning their embeddings using graph convolutional networks trained from separate languages. The results show that the structural information can augment the standard bilingual lexicon induction methods, and that taking into account the structural nature of terminologies allows our method to produce better results.",Multilingualization of Medical Terminology: Semantic and Structural Embedding Approaches,"The multilingualization of terminology is an essential step in the translation pipeline, to ensure the correct transfer of domain-specific concepts. Many institutions and language service providers construct and maintain multilingual terminologies, which constitute important assets. However, the curation of such multilingual resources requires significant human effort; though automatic multilingual term extraction methods have been proposed so far, they are of limited success as term translation cannot be satisfied by simply conveying meaning, but requires the terminologists and domain experts' knowledge to fit the term within the existing terminology. Here we propose a method to encode the structural properties of terms by aligning their embeddings using graph convolutional networks trained from separate languages. 
The results show that the structural information can augment the standard bilingual lexicon induction methods, and that taking into account the structural nature of terminologies allows our method to produce better results.",,"Multilingualization of Medical Terminology: Semantic and Structural Embedding Approaches. The multilingualization of terminology is an essential step in the translation pipeline, to ensure the correct transfer of domain-specific concepts. Many institutions and language service providers construct and maintain multilingual terminologies, which constitute important assets. However, the curation of such multilingual resources requires significant human effort; though automatic multilingual term extraction methods have been proposed so far, they are of limited success as term translation cannot be satisfied by simply conveying meaning, but requires the terminologists and domain experts' knowledge to fit the term within the existing terminology. Here we propose a method to encode the structural properties of terms by aligning their embeddings using graph convolutional networks trained from separate languages. The results show that the structural information can augment the standard bilingual lexicon induction methods, and that taking into account the structural nature of terminologies allows our method to produce better results.",2020
ge-etal-2013-event,https://aclanthology.org/D13-1001,0,,,,,,,"Event-Based Time Label Propagation for Automatic Dating of News Articles. Since many applications such as timeline summaries and temporal IR involving temporal analysis rely on document timestamps, the task of automatic dating of documents has been increasingly important. Instead of using feature-based methods as conventional models, our method attempts to date documents in a year level by exploiting relative temporal relations between documents and events, which are very effective for dating documents. Based on this intuition, we proposed an event-based time label propagation model called confidence boosting in which time label information can be propagated between documents and events on a bipartite graph. The experiments show that our event-based propagation model can predict document timestamps in high accuracy and the model combined with a MaxEnt classifier outperforms the state-of-the-art method for this task especially when the size of the training set is small.",Event-Based Time Label Propagation for Automatic Dating of News Articles,"Since many applications such as timeline summaries and temporal IR involving temporal analysis rely on document timestamps, the task of automatic dating of documents has been increasingly important. Instead of using feature-based methods as conventional models, our method attempts to date documents in a year level by exploiting relative temporal relations between documents and events, which are very effective for dating documents. Based on this intuition, we proposed an event-based time label propagation model called confidence boosting in which time label information can be propagated between documents and events on a bipartite graph. The experiments show that our event-based propagation model can predict document timestamps in high accuracy and the model combined with a MaxEnt classifier outperforms the state-of-the-art method for this task especially when the size of the training set is small.",Event-Based Time Label Propagation for Automatic Dating of News Articles,"Since many applications such as timeline summaries and temporal IR involving temporal analysis rely on document timestamps, the task of automatic dating of documents has been increasingly important. Instead of using feature-based methods as conventional models, our method attempts to date documents in a year level by exploiting relative temporal relations between documents and events, which are very effective for dating documents. Based on this intuition, we proposed an event-based time label propagation model called confidence boosting in which time label information can be propagated between documents and events on a bipartite graph. The experiments show that our event-based propagation model can predict document timestamps in high accuracy and the model combined with a MaxEnt classifier outperforms the state-of-the-art method for this task especially when the size of the training set is small.","We thank the anonymous reviewers for their valuable suggestions. This paper is supported by NSFC Project 61075067, NSFC Project 61273318 and National Key Technology R&D Program (No: 2011BAH10B04-03).","Event-Based Time Label Propagation for Automatic Dating of News Articles. Since many applications such as timeline summaries and temporal IR involving temporal analysis rely on document timestamps, the task of automatic dating of documents has been increasingly important. 
Instead of using feature-based methods as conventional models, our method attempts to date documents in a year level by exploiting relative temporal relations between documents and events, which are very effective for dating documents. Based on this intuition, we proposed an event-based time label propagation model called confidence boosting in which time label information can be propagated between documents and events on a bipartite graph. The experiments show that our event-based propagation model can predict document timestamps in high accuracy and the model combined with a MaxEnt classifier outperforms the state-of-the-art method for this task especially when the size of the training set is small.",2013
murveit-etal-1991-speech,https://aclanthology.org/H91-1015,0,,,,,,,"Speech Recognition in SRI's Resource Management and ATIS Systems. This paper describes improvements to DECIPHER, the speech recognition component in SRI's Air Travel Information Systems (ATIS) and Resource Management systems. DECIPHER is a speaker-independent continuous speech recognition system based on hidden Markov model (HMM) technology. We show significant performance improvements in DECIPHER due to (1) the addition of tied-mixture HMM modeling (2) rejection of out-of-vocabulary speech and background noise while continuing to recognize speech (3) adapting to the current speaker (4) the implementation of N-gram statistical grammars with DECIPHER. Finally we describe our performance in the February 1991 DARPA Resource Management evaluation (4.8 percent word error) and in the February 1991 DARPA-ATIS speech and SLS evaluations (95 sentences correct, 15 wrong of 140). We show that, for the ATIS evaluation, a well-conceived system integration can be relatively robust to speech recognition errors and to linguistic variability and errors.",Speech Recognition in {SRI}{'}s Resource Management and {ATIS} Systems,"This paper describes improvements to DECIPHER, the speech recognition component in SRI's Air Travel Information Systems (ATIS) and Resource Management systems. DECIPHER is a speaker-independent continuous speech recognition system based on hidden Markov model (HMM) technology. We show significant performance improvements in DECIPHER due to (1) the addition of tied-mixture HMM modeling (2) rejection of out-of-vocabulary speech and background noise while continuing to recognize speech (3) adapting to the current speaker (4) the implementation of N-gram statistical grammars with DECIPHER. Finally we describe our performance in the February 1991 DARPA Resource Management evaluation (4.8 percent word error) and in the February 1991 DARPA-ATIS speech and SLS evaluations (95 sentences correct, 15 wrong of 140). We show that, for the ATIS evaluation, a well-conceived system integration can be relatively robust to speech recognition errors and to linguistic variability and errors.",Speech Recognition in SRI's Resource Management and ATIS Systems,"This paper describes improvements to DECIPHER, the speech recognition component in SRI's Air Travel Information Systems (ATIS) and Resource Management systems. DECIPHER is a speaker-independent continuous speech recognition system based on hidden Markov model (HMM) technology. We show significant performance improvements in DECIPHER due to (1) the addition of tied-mixture HMM modeling (2) rejection of out-of-vocabulary speech and background noise while continuing to recognize speech (3) adapting to the current speaker (4) the implementation of N-gram statistical grammars with DECIPHER. Finally we describe our performance in the February 1991 DARPA Resource Management evaluation (4.8 percent word error) and in the February 1991 DARPA-ATIS speech and SLS evaluations (95 sentences correct, 15 wrong of 140). We show that, for the ATIS evaluation, a well-conceived system integration can be relatively robust to speech recognition errors and to linguistic variability and errors.",,"Speech Recognition in SRI's Resource Management and ATIS Systems. This paper describes improvements to DECIPHER, the speech recognition component in SRI's Air Travel Information Systems (ATIS) and Resource Management systems. 
DECIPHER is a speaker-independent continuous speech recognition system based on hidden Markov model (HMM) technology. We show significant performance improvements in DECIPHER due to (1) the addition of tied-mixture HMM modeling (2) rejection of out-of-vocabulary speech and background noise while continuing to recognize speech (3) adapting to the current speaker (4) the implementation of N-gram statistical grammars with DECIPHER. Finally we describe our performance in the February 1991 DARPA Resource Management evaluation (4.8 percent word error) and in the February 1991 DARPA-ATIS speech and SLS evaluations (95 sentences correct, 15 wrong of 140). We show that, for the ATIS evaluation, a well-conceived system integration can be relatively robust to speech recognition errors and to linguistic variability and errors.",1991
meile-1961-problems,https://aclanthology.org/1961.earlymt-1.21,0,,,,,,,"On problems of address in an automatic dictionary of French. In most printed dictionaries, the address of each article, that is of each set of information pertaining to that particular entry, is simply the word itself. It has to be so in a book for common use: for the general reader's sake, the word must be entered in its complete form.
In the case of long words, part only of the letters contained in the word would be enough to provide an adequate address, that is to achieve an alphabetical classification. As a matter of fact, the last letters of a long word (say a word of more than ten letters) do not play any part whatsoever as classificators. The first four or five letters are very often sufficient; subsequent letters provide an over-definition which, from the point of view of address only, remains useless.",On problems of address in an automatic dictionary of {F}rench,"In most printed dictionaries, the address of each article, that is of each set of information pertaining to that particular entry, is simply the word itself. It has to be so in a book for common use: for the general reader's sake, the word must be entered in its complete form.
In the case of long words, part only of the letters contained in the word would be enough to provide an adequate address, that is to achieve an alphabetical classification. As a matter of fact, the last letters of a long word (say a word of more than ten letters) do not play any part whatsoever as classificators. The first four or five letters are very often sufficient; subsequent letters provide an over-definition which, from the point of view of address only, remains useless.",On problems of address in an automatic dictionary of French,"In most printed dictionaries, the address of each article, that is of each set of information pertaining to that particular entry, is simply the word itself. It has to be so in a book for common use: for the general reader's sake, the word must be entered in its complete form.
In the case of long words, part only of the letters contained in the word would be enough to provide an adequate address, that is to achieve an alphabetical classification. As a matter of fact, the last letters of a long word (say a word of more than ten letters) do not play any part whatsoever as classificators. The first four or five letters are very often sufficient; subsequent letters provide an over-definition which, from the point of view of address only, remains useless.",,"On problems of address in an automatic dictionary of French. In most printed dictionaries, the address of each article, that is of each set of information pertaining to that particular entry, is simply the word itself. It has to be so in a book for common use: for the general reader's sake, the word must be entered in its complete form.
In the case of long words, part only of the letters contained in the word would be enough to provide an adequate address, that is to achieve an alphabetical classification. As a matter of fact, the last letters of a long word (say a word of more than ten letters) do not play any part whatsoever as classificators. The first four or five letters are very often sufficient; subsequent letters provide an over-definition which, from the point of view of address only, remains useless.",1961
bosch-etal-2006-towards,http://www.lrec-conf.org/proceedings/lrec2006/pdf/597_pdf.pdf,0,,,,,,,"Towards machine-readable lexicons for South African Bantu languages. Lexical information for South African Bantu languages is not readily available in the form of machine-readable lexicons. At present the availability of lexical information is restricted to a variety of paper dictionaries. These dictionaries display considerable diversity in the organisation and representation of data. In order to proceed towards the development of reusable and suitably standardised machine-readable lexicons for these languages, a data model for lexical entries becomes a prerequisite. In this study the general purpose model as developed by Bell and Bird (2000) is used as a point of departure. Firstly, the extent to which the Bell and Bird (2000) data model may be applied to and modified for the above-mentioned languages is investigated. Initial investigations indicate that modification of this data model is necessary to make provision for the specific requirements of lexical entries in these languages. Secondly, a data model in the form of an XML DTD for the languages in question, based on our findings regarding (Bell & Bird, 2000) and (Weber, 2002) is presented. Included in this model are additional particular requirements for complete and appropriate representation of linguistic information as identified in the study of available paper dictionaries.",Towards machine-readable lexicons for {S}outh {A}frican {B}antu languages,"Lexical information for South African Bantu languages is not readily available in the form of machine-readable lexicons. At present the availability of lexical information is restricted to a variety of paper dictionaries. These dictionaries display considerable diversity in the organisation and representation of data. In order to proceed towards the development of reusable and suitably standardised machine-readable lexicons for these languages, a data model for lexical entries becomes a prerequisite. In this study the general purpose model as developed by Bell and Bird (2000) is used as a point of departure. Firstly, the extent to which the Bell and Bird (2000) data model may be applied to and modified for the above-mentioned languages is investigated. Initial investigations indicate that modification of this data model is necessary to make provision for the specific requirements of lexical entries in these languages. Secondly, a data model in the form of an XML DTD for the languages in question, based on our findings regarding (Bell & Bird, 2000) and (Weber, 2002) is presented. Included in this model are additional particular requirements for complete and appropriate representation of linguistic information as identified in the study of available paper dictionaries.",Towards machine-readable lexicons for South African Bantu languages,"Lexical information for South African Bantu languages is not readily available in the form of machine-readable lexicons. At present the availability of lexical information is restricted to a variety of paper dictionaries. These dictionaries display considerable diversity in the organisation and representation of data. In order to proceed towards the development of reusable and suitably standardised machine-readable lexicons for these languages, a data model for lexical entries becomes a prerequisite. In this study the general purpose model as developed by Bell and Bird (2000) is used as a point of departure. 
Firstly, the extent to which the Bell and Bird (2000) data model may be applied to and modified for the above-mentioned languages is investigated. Initial investigations indicate that modification of this data model is necessary to make provision for the specific requirements of lexical entries in these languages. Secondly, a data model in the form of an XML DTD for the languages in question, based on our findings regarding (Bell & Bird, 2000) and (Weber, 2002) is presented. Included in this model are additional particular requirements for complete and appropriate representation of linguistic information as identified in the study of available paper dictionaries.","This material is based upon work supported by the National Research Foundation under grant number 2053403. Any opinion, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Research Foundation.","Towards machine-readable lexicons for South African Bantu languages. Lexical information for South African Bantu languages is not readily available in the form of machine-readable lexicons. At present the availability of lexical information is restricted to a variety of paper dictionaries. These dictionaries display considerable diversity in the organisation and representation of data. In order to proceed towards the development of reusable and suitably standardised machine-readable lexicons for these languages, a data model for lexical entries becomes a prerequisite. In this study the general purpose model as developed by Bell and Bird (2000) is used as a point of departure. Firstly, the extent to which the Bell and Bird (2000) data model may be applied to and modified for the above-mentioned languages is investigated. Initial investigations indicate that modification of this data model is necessary to make provision for the specific requirements of lexical entries in these languages. Secondly, a data model in the form of an XML DTD for the languages in question, based on our findings regarding (Bell & Bird, 2000) and (Weber, 2002) is presented. Included in this model are additional particular requirements for complete and appropriate representation of linguistic information as identified in the study of available paper dictionaries.",2006
liu-haghighi-2011-ordering,https://aclanthology.org/P11-1111,0,,,,,,,"Ordering Prenominal Modifiers with a Reranking Approach. In this work, we present a novel approach to the generation task of ordering prenominal modifiers. We take a maximum entropy reranking approach to the problem which admits arbitrary features on a permutation of modifiers, exploiting hundreds of thousands of features in total. We compare our error rates to the state-of-the-art and to a strong Google n-gram count baseline. We attain a maximum error reduction of 69.8% and average error reduction across all test sets of 59.1% compared to the state-of-the-art and a maximum error reduction of 68.4% and average error reduction across all test sets of 41.8% compared to our Google n-gram count baseline.",Ordering Prenominal Modifiers with a Reranking Approach,"In this work, we present a novel approach to the generation task of ordering prenominal modifiers. We take a maximum entropy reranking approach to the problem which admits arbitrary features on a permutation of modifiers, exploiting hundreds of thousands of features in total. We compare our error rates to the state-of-the-art and to a strong Google n-gram count baseline. We attain a maximum error reduction of 69.8% and average error reduction across all test sets of 59.1% compared to the state-of-the-art and a maximum error reduction of 68.4% and average error reduction across all test sets of 41.8% compared to our Google n-gram count baseline.",Ordering Prenominal Modifiers with a Reranking Approach,"In this work, we present a novel approach to the generation task of ordering prenominal modifiers. We take a maximum entropy reranking approach to the problem which admits arbitrary features on a permutation of modifiers, exploiting hundreds of thousands of features in total. We compare our error rates to the state-of-the-art and to a strong Google n-gram count baseline. We attain a maximum error reduction of 69.8% and average error reduction across all test sets of 59.1% compared to the state-of-the-art and a maximum error reduction of 68.4% and average error reduction across all test sets of 41.8% compared to our Google n-gram count baseline.","Many thanks to Margaret Mitchell, Regina Barzilay, Xiao Chen, and members of the CSAIL NLP group for their help and suggestions.","Ordering Prenominal Modifiers with a Reranking Approach. In this work, we present a novel approach to the generation task of ordering prenominal modifiers. We take a maximum entropy reranking approach to the problem which admits arbitrary features on a permutation of modifiers, exploiting hundreds of thousands of features in total. We compare our error rates to the state-of-the-art and to a strong Google n-gram count baseline. We attain a maximum error reduction of 69.8% and average error reduction across all test sets of 59.1% compared to the state-of-the-art and a maximum error reduction of 68.4% and average error reduction across all test sets of 41.8% compared to our Google n-gram count baseline.",2011
song-etal-2012-joint,https://aclanthology.org/D12-1114,0,,,,,,,"Joint Learning for Coreference Resolution with Markov Logic. Pairwise coreference resolution models must merge pairwise coreference decisions to generate final outputs. Traditional merging methods adopt different strategies such as the best-first method and enforcing the transitivity constraint, but most of these methods are used independently of the pairwise learning methods as an isolated inference procedure at the end. We propose a joint learning model which combines pairwise classification and mention clustering with Markov logic. Experimental results show that our joint learning system outperforms independent learning systems. Our system gives a better performance than all the learning-based systems from the CoNLL-2011 shared task on the same dataset. Compared with the best system from CoNLL-2011, which employs a rule-based method, our system shows competitive performance.",Joint Learning for Coreference Resolution with {M}arkov {L}ogic,"Pairwise coreference resolution models must merge pairwise coreference decisions to generate final outputs. Traditional merging methods adopt different strategies such as the best-first method and enforcing the transitivity constraint, but most of these methods are used independently of the pairwise learning methods as an isolated inference procedure at the end. We propose a joint learning model which combines pairwise classification and mention clustering with Markov logic. Experimental results show that our joint learning system outperforms independent learning systems. Our system gives a better performance than all the learning-based systems from the CoNLL-2011 shared task on the same dataset. Compared with the best system from CoNLL-2011, which employs a rule-based method, our system shows competitive performance.",Joint Learning for Coreference Resolution with Markov Logic,"Pairwise coreference resolution models must merge pairwise coreference decisions to generate final outputs. Traditional merging methods adopt different strategies such as the best-first method and enforcing the transitivity constraint, but most of these methods are used independently of the pairwise learning methods as an isolated inference procedure at the end. We propose a joint learning model which combines pairwise classification and mention clustering with Markov logic. Experimental results show that our joint learning system outperforms independent learning systems. Our system gives a better performance than all the learning-based systems from the CoNLL-2011 shared task on the same dataset. Compared with the best system from CoNLL-2011, which employs a rule-based method, our system shows competitive performance.","Part of the work was done when the first author was a visiting student in the Singapore Management University. And this work was partially supported by the National High Technology Research and Development Program of China (863 Program) (No.2012AA011101), the National Natural Science Foundation of China (No.91024009, No.60973053, No.90920011), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20090001110047). ","Joint Learning for Coreference Resolution with Markov Logic. Pairwise coreference resolution models must merge pairwise coreference decisions to generate final outputs. 
Traditional merging methods adopt different strategies such as the best-first method and enforcing the transitivity constraint, but most of these methods are used independently of the pairwise learning methods as an isolated inference procedure at the end. We propose a joint learning model which combines pairwise classification and mention clustering with Markov logic. Experimental results show that our joint learning system outperforms independent learning systems. Our system gives a better performance than all the learning-based systems from the CoNLL-2011 shared task on the same dataset. Compared with the best system from CoNLL-2011, which employs a rule-based method, our system shows competitive performance.",2012
velardi-etal-2012-new,http://www.lrec-conf.org/proceedings/lrec2012/pdf/295_Paper.pdf,0,,,,,,,"A New Method for Evaluating Automatically Learned Terminological Taxonomies. Evaluating a taxonomy learned automatically against an existing gold standard is a very complex problem, because differences stem from the number, label, depth and ordering of the taxonomy nodes. In this paper we propose casting the problem as one of comparing two hierarchical clusters. To this end we defined a variation of the Fowlkes and Mallows measure (Fowlkes and Mallows, 1983). Our method assigns a similarity value B_i(l,r) to the learned (l) and reference (r) taxonomy for each cut i of the corresponding anonymised hierarchies, starting from the topmost nodes down to the leaf concepts. For each cut i, the two hierarchies can be seen as two clusterings C_i^l, C_i^r of the leaf concepts. We assign a prize to early similarity values, i.e. when concepts are clustered in a similar way down to the lowest taxonomy levels (close to the leaf nodes). We apply our method to the evaluation of the taxonomy learning methods put forward by Navigli et al. (2011) and Kozareva and Hovy (2010).",A New Method for Evaluating Automatically Learned Terminological Taxonomies,"Evaluating a taxonomy learned automatically against an existing gold standard is a very complex problem, because differences stem from the number, label, depth and ordering of the taxonomy nodes. In this paper we propose casting the problem as one of comparing two hierarchical clusters. To this end we defined a variation of the Fowlkes and Mallows measure (Fowlkes and Mallows, 1983). Our method assigns a similarity value B_i(l,r) to the learned (l) and reference (r) taxonomy for each cut i of the corresponding anonymised hierarchies, starting from the topmost nodes down to the leaf concepts. For each cut i, the two hierarchies can be seen as two clusterings C_i^l, C_i^r of the leaf concepts. We assign a prize to early similarity values, i.e. when concepts are clustered in a similar way down to the lowest taxonomy levels (close to the leaf nodes). We apply our method to the evaluation of the taxonomy learning methods put forward by Navigli et al. (2011) and Kozareva and Hovy (2010).",A New Method for Evaluating Automatically Learned Terminological Taxonomies,"Evaluating a taxonomy learned automatically against an existing gold standard is a very complex problem, because differences stem from the number, label, depth and ordering of the taxonomy nodes. In this paper we propose casting the problem as one of comparing two hierarchical clusters. To this end we defined a variation of the Fowlkes and Mallows measure (Fowlkes and Mallows, 1983). Our method assigns a similarity value B_i(l,r) to the learned (l) and reference (r) taxonomy for each cut i of the corresponding anonymised hierarchies, starting from the topmost nodes down to the leaf concepts. For each cut i, the two hierarchies can be seen as two clusterings C_i^l, C_i^r of the leaf concepts. We assign a prize to early similarity values, i.e. when concepts are clustered in a similar way down to the lowest taxonomy levels (close to the leaf nodes). We apply our method to the evaluation of the taxonomy learning methods put forward by Navigli et al. (2011) and Kozareva and Hovy (2010).",Roberto Navigli and Stefano Faralli gratefully acknowledge the support of the ERC Starting Grant MultiJEDI No. 259234.,"A New Method for Evaluating Automatically Learned Terminological Taxonomies. 
Evaluating a taxonomy learned automatically against an existing gold standard is a very complex problem, because differences stem from the number, label, depth and ordering of the taxonomy nodes. In this paper we propose casting the problem as one of comparing two hierarchical clusters. To this end we defined a variation of the Fowlkes and Mallows measure (Fowlkes and Mallows, 1983). Our method assigns a similarity value B_i(l,r) to the learned (l) and reference (r) taxonomy for each cut i of the corresponding anonymised hierarchies, starting from the topmost nodes down to the leaf concepts. For each cut i, the two hierarchies can be seen as two clusterings C_i^l, C_i^r of the leaf concepts. We assign a prize to early similarity values, i.e. when concepts are clustered in a similar way down to the lowest taxonomy levels (close to the leaf nodes). We apply our method to the evaluation of the taxonomy learning methods put forward by Navigli et al. (2011) and Kozareva and Hovy (2010).",2012
cettolo-etal-2015-iwslt,https://aclanthology.org/2015.iwslt-evaluation.1,0,,,,,,,The IWSLT 2015 Evaluation Campaign. ,The {IWSLT} 2015 Evaluation Campaign,,The IWSLT 2015 Evaluation Campaign,,,The IWSLT 2015 Evaluation Campaign. ,2015
saers-wu-2013-unsupervised-learning,https://aclanthology.org/2013.iwslt-papers.15,0,,,,,,,"Unsupervised learning of bilingual categories in inversion transduction grammar induction. We present the first known experiments incorporating unsupervised bilingual nonterminal category learning within end-to-end fully unsupervised transduction grammar induction using matched training and testing models. Despite steady recent progress, such induction experiments until now have not allowed for learning differentiated nonterminal categories. We divide the learning into two stages: (1) a bootstrap stage that generates a large set of categorized short transduction rule hypotheses, and (2) a minimum conditional description length stage that simultaneously prunes away less useful short rule hypotheses, while also iteratively segmenting full sentence pairs into useful longer categorized transduction rules. We show that the second stage works better when the rule hypotheses have categories than when they do not, and that the proposed conditional description length approach combines the rules hypothesized by the two stages better than a mixture model does. We also show that the compact model learned during the second stage can be further improved by combining the result of different iterations in a mixture model. In total, we see a jump in BLEU score, from 17.53 for a standalone minimum description length baseline with no category learning, to 20.93 when incorporating category induction on a Chinese-English translation task.",Unsupervised learning of bilingual categories in inversion transduction grammar induction,"We present the first known experiments incorporating unsupervised bilingual nonterminal category learning within end-to-end fully unsupervised transduction grammar induction using matched training and testing models. Despite steady recent progress, such induction experiments until now have not allowed for learning differentiated nonterminal categories. We divide the learning into two stages: (1) a bootstrap stage that generates a large set of categorized short transduction rule hypotheses, and (2) a minimum conditional description length stage that simultaneously prunes away less useful short rule hypotheses, while also iteratively segmenting full sentence pairs into useful longer categorized transduction rules. We show that the second stage works better when the rule hypotheses have categories than when they do not, and that the proposed conditional description length approach combines the rules hypothesized by the two stages better than a mixture model does. We also show that the compact model learned during the second stage can be further improved by combining the result of different iterations in a mixture model. In total, we see a jump in BLEU score, from 17.53 for a standalone minimum description length baseline with no category learning, to 20.93 when incorporating category induction on a Chinese-English translation task.",Unsupervised learning of bilingual categories in inversion transduction grammar induction,"We present the first known experiments incorporating unsupervised bilingual nonterminal category learning within end-to-end fully unsupervised transduction grammar induction using matched training and testing models. Despite steady recent progress, such induction experiments until now have not allowed for learning differentiated nonterminal categories. 
We divide the learning into two stages: (1) a bootstrap stage that generates a large set of categorized short transduction rule hypotheses, and (2) a minimum conditional description length stage that simultaneously prunes away less useful short rule hypotheses, while also iteratively segmenting full sentence pairs into useful longer categorized transduction rules. We show that the second stage works better when the rule hypotheses have categories than when they do not, and that the proposed conditional description length approach combines the rules hypothesized by the two stages better than a mixture model does. We also show that the compact model learned during the second stage can be further improved by combining the result of different iterations in a mixture model. In total, we see a jump in BLEU score, from 17.53 for a standalone minimum description length baseline with no category learning, to 20.93 when incorporating category induction on a Chinese-English translation task.","This material is based upon work supported in part by the Defense Advanced Research Projects Agency (DARPA) under BOLT contract no. HR0011-12-C-0016, and GALE contract nos. HR0011-06-C-0022 and HR0011-06-C-0023; by the European Union under the FP7 grant agreement no. 287658; and by the Hong Kong Research Grants Council (RGC) research grants GRF620811, GRF621008, and GRF612806. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA, the EU, or RGC.","Unsupervised learning of bilingual categories in inversion transduction grammar induction. We present the first known experiments incorporating unsupervised bilingual nonterminal category learning within end-to-end fully unsupervised transduction grammar induction using matched training and testing models. Despite steady recent progress, such induction experiments until now have not allowed for learning differentiated nonterminal categories. We divide the learning into two stages: (1) a bootstrap stage that generates a large set of categorized short transduction rule hypotheses, and (2) a minimum conditional description length stage that simultaneously prunes away less useful short rule hypotheses, while also iteratively segmenting full sentence pairs into useful longer categorized transduction rules. We show that the second stage works better when the rule hypotheses have categories than when they do not, and that the proposed conditional description length approach combines the rules hypothesized by the two stages better than a mixture model does. We also show that the compact model learned during the second stage can be further improved by combining the result of different iterations in a mixture model. In total, we see a jump in BLEU score, from 17.53 for a standalone minimum description length baseline with no category learning, to 20.93 when incorporating category induction on a Chinese-English translation task.",2013
arumae-liu-2019-guiding,https://aclanthology.org/N19-1264,0,,,,,,,"Guiding Extractive Summarization with Question-Answering Rewards. Highlighting while reading is a natural behavior for people to track salient content of a document. It would be desirable to teach an extractive summarizer to do the same. However, a major obstacle to the development of a supervised summarizer is the lack of ground-truth. Manual annotation of extraction units is cost-prohibitive, whereas acquiring labels by automatically aligning human abstracts and source documents can yield inferior results. In this paper we describe a novel framework to guide a supervised, extractive summarization system with question-answering rewards. We argue that quality summaries should serve as a document surrogate to answer important questions, and such question-answer pairs can be conveniently obtained from human abstracts. The system learns to promote summaries that are informative, fluent, and perform competitively on question-answering. Our results compare favorably with those reported by strong summarization baselines as evaluated by automatic metrics and human assessors. During the investigation, detectives with the Metro Nashville Police Department in Tennessee also found that the agent, 33-year-old Daniel Boykin, entered the woman's home multiple times, where he took videos, photos and other data. Police found more than 90 videos and 1,500 photos of the victim on Boykin's phone and computer. The victim filed a complaint after seeing images of herself on his phone last year. [...]",Guiding Extractive Summarization with Question-Answering Rewards,"Highlighting while reading is a natural behavior for people to track salient content of a document. It would be desirable to teach an extractive summarizer to do the same. However, a major obstacle to the development of a supervised summarizer is the lack of ground-truth. Manual annotation of extraction units is cost-prohibitive, whereas acquiring labels by automatically aligning human abstracts and source documents can yield inferior results. In this paper we describe a novel framework to guide a supervised, extractive summarization system with question-answering rewards. We argue that quality summaries should serve as a document surrogate to answer important questions, and such question-answer pairs can be conveniently obtained from human abstracts. The system learns to promote summaries that are informative, fluent, and perform competitively on question-answering. Our results compare favorably with those reported by strong summarization baselines as evaluated by automatic metrics and human assessors. During the investigation, detectives with the Metro Nashville Police Department in Tennessee also found that the agent, 33-year-old Daniel Boykin, entered the woman's home multiple times, where he took videos, photos and other data. Police found more than 90 videos and 1,500 photos of the victim on Boykin's phone and computer. The victim filed a complaint after seeing images of herself on his phone last year. [...]",Guiding Extractive Summarization with Question-Answering Rewards,"Highlighting while reading is a natural behavior for people to track salient content of a document. It would be desirable to teach an extractive summarizer to do the same. However, a major obstacle to the development of a supervised summarizer is the lack of ground-truth. 
Manual annotation of extraction units is cost-prohibitive, whereas acquiring labels by automatically aligning human abstracts and source documents can yield inferior results. In this paper we describe a novel framework to guide a supervised, extractive summarization system with question-answering rewards. We argue that quality summaries should serve as a document surrogate to answer important questions, and such question-answer pairs can be conveniently obtained from human abstracts. The system learns to promote summaries that are informative, fluent, and perform competitively on question-answering. Our results compare favorably with those reported by strong summarization baselines as evaluated by automatic metrics and human assessors. During the investigation, detectives with the Metro Nashville Police Department in Tennessee also found that the agent, 33-year-old Daniel Boykin, entered the woman's home multiple times, where he took videos, photos and other data. Police found more than 90 videos and 1,500 photos of the victim on Boykin's phone and computer. The victim filed a complaint after seeing images of herself on his phone last year. [...]",,"Guiding Extractive Summarization with Question-Answering Rewards. Highlighting while reading is a natural behavior for people to track salient content of a document. It would be desirable to teach an extractive summarizer to do the same. However, a major obstacle to the development of a supervised summarizer is the lack of ground-truth. Manual annotation of extraction units is cost-prohibitive, whereas acquiring labels by automatically aligning human abstracts and source documents can yield inferior results. In this paper we describe a novel framework to guide a supervised, extractive summarization system with question-answering rewards. We argue that quality summaries should serve as a document surrogate to answer important questions, and such question-answer pairs can be conveniently obtained from human abstracts. The system learns to promote summaries that are informative, fluent, and perform competitively on question-answering. Our results compare favorably with those reported by strong summarization baselines as evaluated by automatic metrics and human assessors. During the investigation, detectives with the Metro Nashville Police Department in Tennessee also found that the agent, 33-year-old Daniel Boykin, entered the woman's home multiple times, where he took videos, photos and other data. Police found more than 90 videos and 1,500 photos of the victim on Boykin's phone and computer. The victim filed a complaint after seeing images of herself on his phone last year. [...]",2019
mann-1981-two,https://aclanthology.org/P81-1012,0,,,,,,,"Two Discourse Generators. The task of discourse generation is to produce multisentential text in natural language which (when heard or read) produces effects (informing, motivating, etc.) and impressions (conciseness, correctness, ease of reading, etc.) which are appropriate to a need or goal held by the creator of the text.
Because even little children can produce multisentential text, the task of discourse generation appears deceptively easy. It is actually extremely complex, in part because it usually involves many different kinds of knowledge. The skilled writer must know the subject matter, the beliefs of the reader and his own reasons for writing. He must also know the syntax, semantics, inferential patterns, text structures and words of the language. It would be complex enough if these were all independent bodies of knowledge, independently employed. Unfortunately, they are all interdependent in intricate ways. The use of each must be coordinated with all of the others.",Two Discourse Generators,"The task of discourse generation is to produce multisentential text in natural language which (when heard or read) produces effects (informing, motivating, etc.) and impressions (conciseness, correctness, ease of reading, etc.) which are appropriate to a need or goal held by the creator of the text.
Because even little children can produce multisentential text, the task of discourse generation appears deceptively easy. It is actually extremely complex, in part because it usually involves many different kinds of knowledge. The skilled writer must know the subject matter, the beliefs of the reader and his own reasons for writing. He must also know the syntax, semantics, inferential patterns, text structures and words of the language. It would be complex enough if these were all independent bodies of knowledge, independently employed. Unfortunately, they are all interdependent in intricate ways. The use of each must be coordinated with all of the others.",Two Discourse Generators,"The task of discourse generation is to produce multisentential text in natural language which (when heard or read) produces effects (informing, motivating, etc.) and impressions (conciseness, correctness, ease of reading, etc.) which are appropriate to a need or goal held by the creator of the text.
Because even little children can produce multisentential text, the task of discourse generation appears deceptively easy. It is actually extremely complex, in part because it usually involves many different kinds of knowledge. The skilled writer must know the subject matter, the beliefs of the reader and his own reasons for writing. He must also know the syntax, semantics, inferential patterns, text structures and words of the language. It would be complex enough if these were all independent bodies of knowledge, independently employed. Unfortunately, they are all interdependent in intricate ways. The use of each must be coordinated with all of the others.",,"Two Discourse Generators. The task of discourse generation is to produce multisentential text in natural language which (when heard or read) produces effects (informing, motivating, etc.) and impressions (conciseness, correctness, ease of reading, etc.) which are appropriate to a need or goal held by the creator of the text.
Because even little children can produce multisentential text, the task of discourse generation appears deceptively easy. It is actually extremely complex, in part because it usually involves many different kinds of knowledge. The skilled writer must know the subject matter, the beliefs of the reader and his own reasons for writing. He must also know the syntax, semantics, inferential patterns, text structures and words of the language. It would be complex enough if these were all independent bodies of knowledge, independently employed. Unfortunately, they are all interdependent in intricate ways. The use of each must be coordinated with all of the others.",1981
khosla-rose-2020-using,https://aclanthology.org/2020.codi-1.3,0,,,,,,,"Using Type Information to Improve Entity Coreference Resolution. Coreference resolution (CR) is an essential part of discourse analysis. Most recently, neural approaches have been proposed to improve over SOTA models from earlier paradigms. So far none of the published neural models leverage external semantic knowledge such as type information. This paper offers the first such model and evaluation, demonstrating modest gains in accuracy by introducing either gold standard or predicted types. In the proposed approach, type information serves both to (1) improve mention representation and (2) create a soft type consistency check between coreference candidate mentions. Our evaluation covers two different grain sizes of types over four different benchmark corpora.",Using Type Information to Improve Entity Coreference Resolution,"Coreference resolution (CR) is an essential part of discourse analysis. Most recently, neural approaches have been proposed to improve over SOTA models from earlier paradigms. So far none of the published neural models leverage external semantic knowledge such as type information. This paper offers the first such model and evaluation, demonstrating modest gains in accuracy by introducing either gold standard or predicted types. In the proposed approach, type information serves both to (1) improve mention representation and (2) create a soft type consistency check between coreference candidate mentions. Our evaluation covers two different grain sizes of types over four different benchmark corpora.",Using Type Information to Improve Entity Coreference Resolution,"Coreference resolution (CR) is an essential part of discourse analysis. Most recently, neural approaches have been proposed to improve over SOTA models from earlier paradigms. So far none of the published neural models leverage external semantic knowledge such as type information. This paper offers the first such model and evaluation, demonstrating modest gains in accuracy by introducing either gold standard or predicted types. In the proposed approach, type information serves both to (1) improve mention representation and (2) create a soft type consistency check between coreference candidate mentions. Our evaluation covers two different grain sizes of types over four different benchmark corpora.","We thank the anonymous reviewers for their insightful comments. We are also grateful to the members of the TELEDIA group at LTI, CMU for the invaluable feedback. This work was funded in part by Dow Chemical, and Microsoft.","Using Type Information to Improve Entity Coreference Resolution. Coreference resolution (CR) is an essential part of discourse analysis. Most recently, neural approaches have been proposed to improve over SOTA models from earlier paradigms. So far none of the published neural models leverage external semantic knowledge such as type information. This paper offers the first such model and evaluation, demonstrating modest gains in accuracy by introducing either gold standard or predicted types. In the proposed approach, type information serves both to (1) improve mention representation and (2) create a soft type consistency check between coreference candidate mentions. Our evaluation covers two different grain sizes of types over four different benchmark corpora.",2020
ye-etal-2016-interactive,https://aclanthology.org/C16-1169,0,,,,,,,"Interactive-Predictive Machine Translation based on Syntactic Constraints of Prefix. Interactive-predictive machine translation (IPMT) is a translation mode which combines machine translation technology and human behaviours. In the IPMT system, the utilization of the prefix greatly affects the interaction efficiency. However, state-of-the-art methods filter translation hypotheses mainly according to their matching results with the prefix on character level, and the advantage of the prefix is not fully developed. Focusing on this problem, this paper mines the deep constraints of prefix on syntactic level to improve the performance of IPMT systems. Two syntactic subtree matching rules based on phrase structure grammar are proposed to filter the translation hypotheses more strictly. Experimental results on LDC Chinese-English corpora show that the proposed method outperforms state-of-the-art phrase-based IPMT system while keeping comparable decoding speed.",Interactive-Predictive Machine Translation based on Syntactic Constraints of Prefix,"Interactive-predictive machine translation (IPMT) is a translation mode which combines machine translation technology and human behaviours. In the IPMT system, the utilization of the prefix greatly affects the interaction efficiency. However, state-of-the-art methods filter translation hypotheses mainly according to their matching results with the prefix on character level, and the advantage of the prefix is not fully developed. Focusing on this problem, this paper mines the deep constraints of prefix on syntactic level to improve the performance of IPMT systems. Two syntactic subtree matching rules based on phrase structure grammar are proposed to filter the translation hypotheses more strictly. Experimental results on LDC Chinese-English corpora show that the proposed method outperforms state-of-the-art phrase-based IPMT system while keeping comparable decoding speed.",Interactive-Predictive Machine Translation based on Syntactic Constraints of Prefix,"Interactive-predictive machine translation (IPMT) is a translation mode which combines machine translation technology and human behaviours. In the IPMT system, the utilization of the prefix greatly affects the interaction efficiency. However, state-of-the-art methods filter translation hypotheses mainly according to their matching results with the prefix on character level, and the advantage of the prefix is not fully developed. Focusing on this problem, this paper mines the deep constraints of prefix on syntactic level to improve the performance of IPMT systems. Two syntactic subtree matching rules based on phrase structure grammar are proposed to filter the translation hypotheses more strictly. Experimental results on LDC Chinese-English corpora show that the proposed method outperforms state-of-the-art phrase-based IPMT system while keeping comparable decoding speed.",This work is supported by the National Natural Science Foundation of China (No. 61402299). We would like to thank the anonymous reviewers for their insightful and constructive comments. We also want to thank Yapeng Zhang for help in the preparation of experimental systems in this paper.,"Interactive-Predictive Machine Translation based on Syntactic Constraints of Prefix. Interactive-predictive machine translation (IPMT) is a translation mode which combines machine translation technology and human behaviours. 
In the IPMT system, the utilization of the prefix greatly affects the interaction efficiency. However, state-of-the-art methods filter translation hypotheses mainly according to their matching results with the prefix on character level, and the advantage of the prefix is not fully developed. Focusing on this problem, this paper mines the deep constraints of prefix on syntactic level to improve the performance of IPMT systems. Two syntactic subtree matching rules based on phrase structure grammar are proposed to filter the translation hypotheses more strictly. Experimental results on LDC Chinese-English corpora show that the proposed method outperforms state-of-the-art phrase-based IPMT system while keeping comparable decoding speed.",2016
li-etal-2016-extending,https://aclanthology.org/W16-0602,0,,,,,,,"Extending Phrase-Based Translation with Dependencies by Using Graphs. In this paper, we propose a graph-based translation model which takes advantage of discontinuous phrases. The model segments a graph which combines bigram and dependency relations into subgraphs and produces translations by combining translations of these subgraphs. Experiments on Chinese-English and German-English tasks show that our system is significantly better than the phrase-based model. By explicitly modeling the graph segmentation, our system gains further improvement.",Extending Phrase-Based Translation with Dependencies by Using Graphs,"In this paper, we propose a graph-based translation model which takes advantage of discontinuous phrases. The model segments a graph which combines bigram and dependency relations into subgraphs and produces translations by combining translations of these subgraphs. Experiments on Chinese-English and German-English tasks show that our system is significantly better than the phrase-based model. By explicitly modeling the graph segmentation, our system gains further improvement.",Extending Phrase-Based Translation with Dependencies by Using Graphs,"In this paper, we propose a graph-based translation model which takes advantage of discontinuous phrases. The model segments a graph which combines bigram and dependency relations into subgraphs and produces translations by combining translations of these subgraphs. Experiments on Chinese-English and German-English tasks show that our system is significantly better than the phrase-based model. By explicitly modeling the graph segmentation, our system gains further improvement.",This research has received funding from the People Programme ( ,"Extending Phrase-Based Translation with Dependencies by Using Graphs. In this paper, we propose a graph-based translation model which takes advantage of discontinuous phrases. The model segments a graph which combines bigram and dependency relations into subgraphs and produces translations by combining translations of these subgraphs. Experiments on Chinese-English and German-English tasks show that our system is significantly better than the phrase-based model. By explicitly modeling the graph segmentation, our system gains further improvement.",2016
duma-menzel-2017-sef,https://aclanthology.org/S17-2024,0,,,,,,,"SEF@UHH at SemEval-2017 Task 1: Unsupervised Knowledge-Free Semantic Textual Similarity via Paragraph Vector. This paper describes our unsupervised knowledge-free approach to the SemEval-2017 Task 1 Competition. The proposed method makes use of Paragraph Vector for assessing the semantic similarity between pairs of sentences. We experimented with various dimensions of the vector and three state-of-the-art similarity metrics. Given a cross-lingual task, we trained models corresponding to its two languages and combined the models by averaging the similarity scores. The results of our submitted runs are above the median scores for five out of seven test sets by means of Pearson Correlation. Moreover, one of our system runs performed best on the Spanish-English-WMT test set ranking first out of 53 runs submitted in total by all participants.",{SEF}@{UHH} at {S}em{E}val-2017 Task 1: Unsupervised Knowledge-Free Semantic Textual Similarity via Paragraph Vector,"This paper describes our unsupervised knowledge-free approach to the SemEval-2017 Task 1 Competition. The proposed method makes use of Paragraph Vector for assessing the semantic similarity between pairs of sentences. We experimented with various dimensions of the vector and three state-of-the-art similarity metrics. Given a cross-lingual task, we trained models corresponding to its two languages and combined the models by averaging the similarity scores. The results of our submitted runs are above the median scores for five out of seven test sets by means of Pearson Correlation. Moreover, one of our system runs performed best on the Spanish-English-WMT test set ranking first out of 53 runs submitted in total by all participants.",SEF@UHH at SemEval-2017 Task 1: Unsupervised Knowledge-Free Semantic Textual Similarity via Paragraph Vector,"This paper describes our unsupervised knowledge-free approach to the SemEval-2017 Task 1 Competition. The proposed method makes use of Paragraph Vector for assessing the semantic similarity between pairs of sentences. We experimented with various dimensions of the vector and three state-of-the-art similarity metrics. Given a cross-lingual task, we trained models corresponding to its two languages and combined the models by averaging the similarity scores. The results of our submitted runs are above the median scores for five out of seven test sets by means of Pearson Correlation. Moreover, one of our system runs performed best on the Spanish-English-WMT test set ranking first out of 53 runs submitted in total by all participants.",,"SEF@UHH at SemEval-2017 Task 1: Unsupervised Knowledge-Free Semantic Textual Similarity via Paragraph Vector. This paper describes our unsupervised knowledge-free approach to the SemEval-2017 Task 1 Competition. The proposed method makes use of Paragraph Vector for assessing the semantic similarity between pairs of sentences. We experimented with various dimensions of the vector and three state-of-the-art similarity metrics. Given a cross-lingual task, we trained models corresponding to its two languages and combined the models by averaging the similarity scores. The results of our submitted runs are above the median scores for five out of seven test sets by means of Pearson Correlation. Moreover, one of our system runs performed best on the Spanish-English-WMT test set ranking first out of 53 runs submitted in total by all participants.",2017
denero-etal-2006-generative,https://aclanthology.org/W06-3105,0,,,,,,,"Why Generative Phrase Models Underperform Surface Heuristics. We investigate why weights from generative models underperform heuristic estimates in phrase-based machine translation. We first propose a simple generative, phrase-based model and verify that its estimates are inferior to those given by surface statistics. The performance gap stems primarily from the addition of a hidden segmentation variable, which increases the capacity for overfitting during maximum likelihood training with EM. In particular, while word level models benefit greatly from re-estimation, phrase-level models do not: the crucial difference is that distinct word alignments cannot all be correct, while distinct segmentations can. Alternate segmentations rather than alternate alignments compete, resulting in increased determinization of the phrase table, decreased generalization, and decreased final BLEU score. We also show that interpolation of the two methods can result in a modest increase in BLEU score.",Why Generative Phrase Models Underperform Surface Heuristics,"We investigate why weights from generative models underperform heuristic estimates in phrase-based machine translation. We first propose a simple generative, phrase-based model and verify that its estimates are inferior to those given by surface statistics. The performance gap stems primarily from the addition of a hidden segmentation variable, which increases the capacity for overfitting during maximum likelihood training with EM. In particular, while word level models benefit greatly from re-estimation, phrase-level models do not: the crucial difference is that distinct word alignments cannot all be correct, while distinct segmentations can. Alternate segmentations rather than alternate alignments compete, resulting in increased determinization of the phrase table, decreased generalization, and decreased final BLEU score. We also show that interpolation of the two methods can result in a modest increase in BLEU score.",Why Generative Phrase Models Underperform Surface Heuristics,"We investigate why weights from generative models underperform heuristic estimates in phrase-based machine translation. We first propose a simple generative, phrase-based model and verify that its estimates are inferior to those given by surface statistics. The performance gap stems primarily from the addition of a hidden segmentation variable, which increases the capacity for overfitting during maximum likelihood training with EM. In particular, while word level models benefit greatly from re-estimation, phrase-level models do not: the crucial difference is that distinct word alignments cannot all be correct, while distinct segmentations can. Alternate segmentations rather than alternate alignments compete, resulting in increased determinization of the phrase table, decreased generalization, and decreased final BLEU score. We also show that interpolation of the two methods can result in a modest increase in BLEU score.",,"Why Generative Phrase Models Underperform Surface Heuristics. We investigate why weights from generative models underperform heuristic estimates in phrase-based machine translation. We first propose a simple generative, phrase-based model and verify that its estimates are inferior to those given by surface statistics. 
The performance gap stems primarily from the addition of a hidden segmentation variable, which increases the capacity for overfitting during maximum likelihood training with EM. In particular, while word level models benefit greatly from re-estimation, phrase-level models do not: the crucial difference is that distinct word alignments cannot all be correct, while distinct segmentations can. Alternate segmentations rather than alternate alignments compete, resulting in increased determinization of the phrase table, decreased generalization, and decreased final BLEU score. We also show that interpolation of the two methods can result in a modest increase in BLEU score.",2006
atwell-drakos-1987-pattern,https://aclanthology.org/E87-1010,0,,,,,,,"Pattern Recognition Applied to the Acquisition of a Grammatical Classification System From Unrestricted English Text. Within computational linguistics, the use of statistical pattern matching is generally restricted to speech processing. We have attempted to apply statistical techniques to discover a grammatical classification system from a Corpus of 'raw' English text. A discovery procedure is simpler for a simpler",Pattern Recognition Applied to the Acquisition of a Grammatical Classification System From Unrestricted {E}nglish Text,"Within computational linguistics, the use of statistical pattern matching is generally restricted to speech processing. We have attempted to apply statistical techniques to discover a grammatical classification system from a Corpus of 'raw' English text. A discovery procedure is simpler for a simpler",Pattern Recognition Applied to the Acquisition of a Grammatical Classification System From Unrestricted English Text,"Within computational linguistics, the use of statistical pattern matching is generally restricted to speech processing. We have attempted to apply statistical techniques to discover a grammatical classification system from a Corpus of 'raw' English text. A discovery procedure is simpler for a simpler",,"Pattern Recognition Applied to the Acquisition of a Grammatical Classification System From Unrestricted English Text. Within computational linguistics, the use of statistical pattern matching is generally restricted to speech processing. We have attempted to apply statistical techniques to discover a grammatical classification system from a Corpus of 'raw' English text. A discovery procedure is simpler for a simpler",1987
varanasi-etal-2020-copybert,https://aclanthology.org/2020.nlp4convai-1.3,0,,,,,,,"CopyBERT: A Unified Approach to Question Generation with Self-Attention. Contextualized word embeddings provide better initialization for neural networks that deal with various natural language understanding (NLU) tasks including question answering (QA) and more recently, question generation (QG). Apart from providing meaningful word representations, pre-trained transformer models, such as BERT also provide self-attentions which encode syntactic information that can be probed for dependency parsing and POS-tagging. In this paper, we show that the information from self-attentions of BERT are useful for language modeling of questions conditioned on paragraph and answer phrases. To control the attention span, we use semi-diagonal mask and utilize a shared model for encoding and decoding, unlike sequence-to-sequence. We further employ copy mechanism over self-attentions to achieve state-of-the-art results for question generation on SQuAD dataset.",{C}opy{BERT}: A Unified Approach to Question Generation with Self-Attention,"Contextualized word embeddings provide better initialization for neural networks that deal with various natural language understanding (NLU) tasks including question answering (QA) and more recently, question generation (QG). Apart from providing meaningful word representations, pre-trained transformer models, such as BERT also provide self-attentions which encode syntactic information that can be probed for dependency parsing and POS-tagging. In this paper, we show that the information from self-attentions of BERT are useful for language modeling of questions conditioned on paragraph and answer phrases. To control the attention span, we use semi-diagonal mask and utilize a shared model for encoding and decoding, unlike sequence-to-sequence. We further employ copy mechanism over self-attentions to achieve state-of-the-art results for question generation on SQuAD dataset.",CopyBERT: A Unified Approach to Question Generation with Self-Attention,"Contextualized word embeddings provide better initialization for neural networks that deal with various natural language understanding (NLU) tasks including question answering (QA) and more recently, question generation (QG). Apart from providing meaningful word representations, pre-trained transformer models, such as BERT also provide self-attentions which encode syntactic information that can be probed for dependency parsing and POS-tagging. In this paper, we show that the information from self-attentions of BERT are useful for language modeling of questions conditioned on paragraph and answer phrases. To control the attention span, we use semi-diagonal mask and utilize a shared model for encoding and decoding, unlike sequence-to-sequence. We further employ copy mechanism over self-attentions to achieve state-of-the-art results for question generation on SQuAD dataset.",The authors would like to thank the anonymous reviewers for helpful feedback. The work was partially funded by the German Federal Ministry of Education and Research (BMBF) through the project DEEPLEE (01IW17001).,"CopyBERT: A Unified Approach to Question Generation with Self-Attention. Contextualized word embeddings provide better initialization for neural networks that deal with various natural language understanding (NLU) tasks including question answering (QA) and more recently, question generation (QG). 
Apart from providing meaningful word representations, pre-trained transformer models, such as BERT also provide self-attentions which encode syntactic information that can be probed for dependency parsing and POS-tagging. In this paper, we show that the information from self-attentions of BERT are useful for language modeling of questions conditioned on paragraph and answer phrases. To control the attention span, we use semi-diagonal mask and utilize a shared model for encoding and decoding, unlike sequence-to-sequence. We further employ copy mechanism over self-attentions to achieve state-of-the-art results for question generation on SQuAD dataset.",2020
ager-etal-2018-modelling,https://aclanthology.org/K18-1051,0,,,,,,,"Modelling Salient Features as Directions in Fine-Tuned Semantic Spaces. In this paper we consider semantic spaces consisting of objects from some particular domain (e.g. IMDB movie reviews). Various authors have observed that such semantic spaces often model salient features (e.g. how scary a movie is) as directions. These feature directions allow us to rank objects according to how much they have the corresponding feature, and can thus play an important role in interpretable classifiers, recommendation systems, or entity-oriented search engines, among others. Methods for learning semantic spaces, however, are mostly aimed at modelling similarity. In this paper, we argue that there is an inherent trade-off between capturing similarity and faithfully modelling features as directions. Following this observation, we propose a simple method to fine-tune existing semantic spaces, with the aim of improving the quality of their feature directions. Crucially, our method is fully unsupervised, requiring only a bag-of-words representation of the objects as input.",Modelling Salient Features as Directions in Fine-Tuned Semantic Spaces,"In this paper we consider semantic spaces consisting of objects from some particular domain (e.g. IMDB movie reviews). Various authors have observed that such semantic spaces often model salient features (e.g. how scary a movie is) as directions. These feature directions allow us to rank objects according to how much they have the corresponding feature, and can thus play an important role in interpretable classifiers, recommendation systems, or entity-oriented search engines, among others. Methods for learning semantic spaces, however, are mostly aimed at modelling similarity. In this paper, we argue that there is an inherent trade-off between capturing similarity and faithfully modelling features as directions. Following this observation, we propose a simple method to fine-tune existing semantic spaces, with the aim of improving the quality of their feature directions. Crucially, our method is fully unsupervised, requiring only a bag-of-words representation of the objects as input.",Modelling Salient Features as Directions in Fine-Tuned Semantic Spaces,"In this paper we consider semantic spaces consisting of objects from some particular domain (e.g. IMDB movie reviews). Various authors have observed that such semantic spaces often model salient features (e.g. how scary a movie is) as directions. These feature directions allow us to rank objects according to how much they have the corresponding feature, and can thus play an important role in interpretable classifiers, recommendation systems, or entity-oriented search engines, among others. Methods for learning semantic spaces, however, are mostly aimed at modelling similarity. In this paper, we argue that there is an inherent trade-off between capturing similarity and faithfully modelling features as directions. Following this observation, we propose a simple method to fine-tune existing semantic spaces, with the aim of improving the quality of their feature directions. Crucially, our method is fully unsupervised, requiring only a bag-of-words representation of the objects as input.",This work has been supported by ERC Starting Grant 637277.,"Modelling Salient Features as Directions in Fine-Tuned Semantic Spaces. In this paper we consider semantic spaces consisting of objects from some particular domain (e.g. IMDB movie reviews). 
Various authors have observed that such semantic spaces often model salient features (e.g. how scary a movie is) as directions. These feature directions allow us to rank objects according to how much they have the corresponding feature, and can thus play an important role in interpretable classifiers, recommendation systems, or entity-oriented search engines, among others. Methods for learning semantic spaces, however, are mostly aimed at modelling similarity. In this paper, we argue that there is an inherent trade-off between capturing similarity and faithfully modelling features as directions. Following this observation, we propose a simple method to fine-tune existing semantic spaces, with the aim of improving the quality of their feature directions. Crucially, our method is fully unsupervised, requiring only a bag-of-words representation of the objects as input.",2018
dirkson-2019-knowledge,https://aclanthology.org/P19-2009,1,,,,health,,,"Knowledge Discovery and Hypothesis Generation from Online Patient Forums: A Research Proposal. The unprompted patient experiences shared on patient forums contain a wealth of unexploited knowledge. Mining this knowledge and cross-linking it with biomedical literature, could expose novel insights, which could subsequently provide hypotheses for further clinical research. As of yet, automated methods for open knowledge discovery on patient forum text are lacking. Thus, in this research proposal, we outline future research into methods for mining, aggregating and cross-linking patient knowledge from online forums. Additionally, we aim to address how one could measure the credibility of this extracted knowledge.",Knowledge Discovery and Hypothesis Generation from Online Patient Forums: A Research Proposal,"The unprompted patient experiences shared on patient forums contain a wealth of unexploited knowledge. Mining this knowledge and cross-linking it with biomedical literature, could expose novel insights, which could subsequently provide hypotheses for further clinical research. As of yet, automated methods for open knowledge discovery on patient forum text are lacking. Thus, in this research proposal, we outline future research into methods for mining, aggregating and cross-linking patient knowledge from online forums. Additionally, we aim to address how one could measure the credibility of this extracted knowledge.",Knowledge Discovery and Hypothesis Generation from Online Patient Forums: A Research Proposal,"The unprompted patient experiences shared on patient forums contain a wealth of unexploited knowledge. Mining this knowledge and cross-linking it with biomedical literature, could expose novel insights, which could subsequently provide hypotheses for further clinical research. As of yet, automated methods for open knowledge discovery on patient forum text are lacking. Thus, in this research proposal, we outline future research into methods for mining, aggregating and cross-linking patient knowledge from online forums. Additionally, we aim to address how one could measure the credibility of this extracted knowledge.",,"Knowledge Discovery and Hypothesis Generation from Online Patient Forums: A Research Proposal. The unprompted patient experiences shared on patient forums contain a wealth of unexploited knowledge. Mining this knowledge and cross-linking it with biomedical literature, could expose novel insights, which could subsequently provide hypotheses for further clinical research. As of yet, automated methods for open knowledge discovery on patient forum text are lacking. Thus, in this research proposal, we outline future research into methods for mining, aggregating and cross-linking patient knowledge from online forums. Additionally, we aim to address how one could measure the credibility of this extracted knowledge.",2019
symonds-etal-2011-modelling,https://aclanthology.org/Y11-1033,0,,,,,,,"Modelling Word Meaning using Efficient Tensor Representations. Models of word meaning, built from a corpus of text, have demonstrated success in emulating human performance on a number of cognitive tasks. Many of these models use geometric representations of words to store semantic associations between words. Often word order information is not captured in these models. The lack of structural information used by these models has been raised as a weakness when performing cognitive tasks. This paper presents an efficient tensor based approach to modelling word meaning that builds on recent attempts to encode word order information, while providing flexible methods for extracting task specific semantic information.",Modelling Word Meaning using Efficient Tensor Representations,"Models of word meaning, built from a corpus of text, have demonstrated success in emulating human performance on a number of cognitive tasks. Many of these models use geometric representations of words to store semantic associations between words. Often word order information is not captured in these models. The lack of structural information used by these models has been raised as a weakness when performing cognitive tasks. This paper presents an efficient tensor based approach to modelling word meaning that builds on recent attempts to encode word order information, while providing flexible methods for extracting task specific semantic information.",Modelling Word Meaning using Efficient Tensor Representations,"Models of word meaning, built from a corpus of text, have demonstrated success in emulating human performance on a number of cognitive tasks. Many of these models use geometric representations of words to store semantic associations between words. Often word order information is not captured in these models. The lack of structural information used by these models has been raised as a weakness when performing cognitive tasks. This paper presents an efficient tensor based approach to modelling word meaning that builds on recent attempts to encode word order information, while providing flexible methods for extracting task specific semantic information.",,"Modelling Word Meaning using Efficient Tensor Representations. Models of word meaning, built from a corpus of text, have demonstrated success in emulating human performance on a number of cognitive tasks. Many of these models use geometric representations of words to store semantic associations between words. Often word order information is not captured in these models. The lack of structural information used by these models has been raised as a weakness when performing cognitive tasks. This paper presents an efficient tensor based approach to modelling word meaning that builds on recent attempts to encode word order information, while providing flexible methods for extracting task specific semantic information.",2011
lin-2004-computational,https://aclanthology.org/N04-2004,0,,,,,,,"A Computational Framework for Non-Lexicalist Semantics. Under a lexicalist approach to semantics, a verb completely encodes its syntactic and semantic structures, along with the relevant syntax-to-semantics mapping; polysemy is typically attributed to the existence of different lexical entries. A lexicon organized in this fashion contains much redundant information and is unable to capture cross-categorial morphological derivations. The solution is to spread the ""semantic load"" of lexical entries to other morphemes not typically taken to bear semantic content. This approach follows current trends in linguistic theory, and more perspicuously accounts for alternations in argument structure. I demonstrate how such a framework can be computationally realized with a feature-based, agenda-driven chart parser for the Minimalist Program.",A Computational Framework for Non-Lexicalist Semantics,"Under a lexicalist approach to semantics, a verb completely encodes its syntactic and semantic structures, along with the relevant syntax-to-semantics mapping; polysemy is typically attributed to the existence of different lexical entries. A lexicon organized in this fashion contains much redundant information and is unable to capture cross-categorial morphological derivations. The solution is to spread the ""semantic load"" of lexical entries to other morphemes not typically taken to bear semantic content. This approach follows current trends in linguistic theory, and more perspicuously accounts for alternations in argument structure. I demonstrate how such a framework can be computationally realized with a feature-based, agenda-driven chart parser for the Minimalist Program.",A Computational Framework for Non-Lexicalist Semantics,"Under a lexicalist approach to semantics, a verb completely encodes its syntactic and semantic structures, along with the relevant syntax-to-semantics mapping; polysemy is typically attributed to the existence of different lexical entries. A lexicon organized in this fashion contains much redundant information and is unable to capture cross-categorial morphological derivations. The solution is to spread the ""semantic load"" of lexical entries to other morphemes not typically taken to bear semantic content. This approach follows current trends in linguistic theory, and more perspicuously accounts for alternations in argument structure. I demonstrate how such a framework can be computationally realized with a feature-based, agenda-driven chart parser for the Minimalist Program.",,"A Computational Framework for Non-Lexicalist Semantics. Under a lexicalist approach to semantics, a verb completely encodes its syntactic and semantic structures, along with the relevant syntax-to-semantics mapping; polysemy is typically attributed to the existence of different lexical entries. A lexicon organized in this fashion contains much redundant information and is unable to capture cross-categorial morphological derivations. The solution is to spread the ""semantic load"" of lexical entries to other morphemes not typically taken to bear semantic content. This approach follows current trends in linguistic theory, and more perspicuously accounts for alternations in argument structure. I demonstrate how such a framework can be computationally realized with a feature-based, agenda-driven chart parser for the Minimalist Program.",2004
makrai-etal-2013-applicative,https://aclanthology.org/W13-3207,0,,,,,,,"Applicative structure in vector space models. We introduce a new 50-dimensional embedding obtained by spectral clustering of a graph describing the conceptual structure of the lexicon. We use the embedding directly to investigate sets of antonymic pairs, and indirectly to argue that function application in CVSMs requires not just vectors but two transformations (corresponding to subject and object) as well.",Applicative structure in vector space models,"We introduce a new 50-dimensional embedding obtained by spectral clustering of a graph describing the conceptual structure of the lexicon. We use the embedding directly to investigate sets of antonymic pairs, and indirectly to argue that function application in CVSMs requires not just vectors but two transformations (corresponding to subject and object) as well.",Applicative structure in vector space models,"We introduce a new 50-dimensional embedding obtained by spectral clustering of a graph describing the conceptual structure of the lexicon. We use the embedding directly to investigate sets of antonymic pairs, and indirectly to argue that function application in CVSMs requires not just vectors but two transformations (corresponding to subject and object) as well.","Makrai did the work on antonym set testing, Nemeskey built the embedding, Kornai advised. We would like to thank Zsófia Tardos (BUTE) and the anonymous reviewers for useful comments. Work supported by OTKA grant #82333.","Applicative structure in vector space models. We introduce a new 50-dimensional embedding obtained by spectral clustering of a graph describing the conceptual structure of the lexicon. We use the embedding directly to investigate sets of antonymic pairs, and indirectly to argue that function application in CVSMs requires not just vectors but two transformations (corresponding to subject and object) as well.",2013
libovicky-etal-2020-expand,https://aclanthology.org/2020.ngt-1.18,1,,,,education,,,"Expand and Filter: CUNI and LMU Systems for the WNGT 2020 Duolingo Shared Task. We present our submission to the Simultaneous Translation And Paraphrase for Language Education (STAPLE) challenge. We used a standard Transformer model for translation, with a crosslingual classifier predicting correct translations on the output n-best list. To increase the diversity of the outputs, we used additional data to train the translation model, and we trained a paraphrasing model based on the Levenshtein Transformer architecture to generate further synonymous translations. The paraphrasing results were again filtered using our classifier. While the use of additional data and our classifier filter were able to improve results, the paraphrasing model produced too many invalid outputs to further improve the output quality. Our model without the paraphrasing component finished in the middle of the field for the shared task, improving over the best baseline by a margin of 10-22% weighted F1 absolute.",Expand and Filter: {CUNI} and {LMU} Systems for the {WNGT} 2020 {D}uolingo Shared Task,"We present our submission to the Simultaneous Translation And Paraphrase for Language Education (STAPLE) challenge. We used a standard Transformer model for translation, with a crosslingual classifier predicting correct translations on the output n-best list. To increase the diversity of the outputs, we used additional data to train the translation model, and we trained a paraphrasing model based on the Levenshtein Transformer architecture to generate further synonymous translations. The paraphrasing results were again filtered using our classifier. While the use of additional data and our classifier filter were able to improve results, the paraphrasing model produced too many invalid outputs to further improve the output quality. Our model without the paraphrasing component finished in the middle of the field for the shared task, improving over the best baseline by a margin of 10-22% weighted F1 absolute.",Expand and Filter: CUNI and LMU Systems for the WNGT 2020 Duolingo Shared Task,"We present our submission to the Simultaneous Translation And Paraphrase for Language Education (STAPLE) challenge. We used a standard Transformer model for translation, with a crosslingual classifier predicting correct translations on the output n-best list. To increase the diversity of the outputs, we used additional data to train the translation model, and we trained a paraphrasing model based on the Levenshtein Transformer architecture to generate further synonymous translations. The paraphrasing results were again filtered using our classifier. While the use of additional data and our classifier filter were able to improve results, the paraphrasing model produced too many invalid outputs to further improve the output quality. Our model without the paraphrasing component finished in the middle of the field for the shared task, improving over the best baseline by a margin of 10-22% weighted F1 absolute.",,"Expand and Filter: CUNI and LMU Systems for the WNGT 2020 Duolingo Shared Task. We present our submission to the Simultaneous Translation And Paraphrase for Language Education (STAPLE) challenge. We used a standard Transformer model for translation, with a crosslingual classifier predicting correct translations on the output n-best list. 
To increase the diversity of the outputs, we used additional data to train the translation model, and we trained a paraphrasing model based on the Levenshtein Transformer architecture to generate further synonymous translations. The paraphrasing results were again filtered using our classifier. While the use of additional data and our classifier filter were able to improve results, the paraphrasing model produced too many invalid outputs to further improve the output quality. Our model without the paraphrasing component finished in the middle of the field for the shared task, improving over the best baseline by a margin of 10-22% weighted F1 absolute.",2020
varadi-2000-lexical,http://www.lrec-conf.org/proceedings/lrec2000/pdf/122.pdf,0,,,,,,,"Lexical and Translation Equivalence in Parallel Corpora. In the present paper we intend to investigate to what extent use of parallel corpora can help to eliminate some of the difficulties noted with bilingual dictionaries. The particular issues addressed are the bidirectionality of translation equivalence, the coverage of multiword units, and the amount of implicit knowledge presupposed on the part of the user in interpreting the data. Three lexical items belonging to different word classes were chosen for analysis: the noun head, the verb give and the preposition with. George Orwell's novel 1984 was used as source material, which is available in English-Hungarian sentence aligned form. It is argued that the analysis of translation equivalents displayed in sets of concordances with aligned sentences in the target language holds important implications for bilingual lexicography and automatic word alignment methodology.",Lexical and Translation Equivalence in Parallel Corpora,"In the present paper we intend to investigate to what extent use of parallel corpora can help to eliminate some of the difficulties noted with bilingual dictionaries. The particular issues addressed are the bidirectionality of translation equivalence, the coverage of multiword units, and the amount of implicit knowledge presupposed on the part of the user in interpreting the data. Three lexical items belonging to different word classes were chosen for analysis: the noun head, the verb give and the preposition with. George Orwell's novel 1984 was used as source material, which is available in English-Hungarian sentence aligned form. It is argued that the analysis of translation equivalents displayed in sets of concordances with aligned sentences in the target language holds important implications for bilingual lexicography and automatic word alignment methodology.",Lexical and Translation Equivalence in Parallel Corpora,"In the present paper we intend to investigate to what extent use of parallel corpora can help to eliminate some of the difficulties noted with bilingual dictionaries. The particular issues addressed are the bidirectionality of translation equivalence, the coverage of multiword units, and the amount of implicit knowledge presupposed on the part of the user in interpreting the data. Three lexical items belonging to different word classes were chosen for analysis: the noun head, the verb give and the preposition with. George Orwell's novel 1984 was used as source material, which is available in English-Hungarian sentence aligned form. It is argued that the analysis of translation equivalents displayed in sets of concordances with aligned sentences in the target language holds important implications for bilingual lexicography and automatic word alignment methodology.",The research reported in the paper was supported by Országos Tudományos Kutatási Alapprogramok (grant number T026091).,"Lexical and Translation Equivalence in Parallel Corpora. In the present paper we intend to investigate to what extent use of parallel corpora can help to eliminate some of the difficulties noted with bilingual dictionaries. The particular issues addressed are the bidirectionality of translation equivalence, the coverage of multiword units, and the amount of implicit knowledge presupposed on the part of the user in interpreting the data. 
Three lexical items belonging to different word classes were chosen for analysis: the noun head, the verb give and the preposition with. George Orwell's novel 1984 was used as source material, which is available in English-Hungarian sentence aligned form. It is argued that the analysis of translation equivalents displayed in sets of concordances with aligned sentences in the target language holds important implications for bilingual lexicography and automatic word alignment methodology.",2000
choi-etal-1999-english,https://aclanthology.org/1999.mtsummit-1.64,0,,,,,,,"English-to-Korean Web translator : ``FromTo/Web-EK''. The previous English-Korean MT system that have been developed in Korea have dealt with only written text as translation object. Most of them enumerated a following list of the problems that had not seemed to be easy to solve in the near future : 1) processing of non-continuous idiomatic expressions 2) reduction of too many POS or structural ambiguities 3) robust processing for long sentence and parsing failure 4) selecting correct word correspondence between several alternatives. The problems can be considered as important factors that have influence on the translation quality of machine translation system. This paper describes not only the solutions of problems of the previous English-to-Korean machine translation systems but also the HTML tags management between two structurally different languages, English and Korean. Through the solutions we translate successfully English web documents into Korean one in the English-to-Korean web translator ""FromTo/Web-EK"" which has been developed from 1997.",{E}nglish-to-{K}orean Web translator : {``}{F}rom{T}o/Web-{EK}{''},"The previous English-Korean MT system that have been developed in Korea have dealt with only written text as translation object. Most of them enumerated a following list of the problems that had not seemed to be easy to solve in the near future : 1) processing of non-continuous idiomatic expressions 2) reduction of too many POS or structural ambiguities 3) robust processing for long sentence and parsing failure 4) selecting correct word correspondence between several alternatives. The problems can be considered as important factors that have influence on the translation quality of machine translation system. This paper describes not only the solutions of problems of the previous English-to-Korean machine translation systems but also the HTML tags management between two structurally different languages, English and Korean. Through the solutions we translate successfully English web documents into Korean one in the English-to-Korean web translator ""FromTo/Web-EK"" which has been developed from 1997.",English-to-Korean Web translator : ``FromTo/Web-EK'',"The previous English-Korean MT system that have been developed in Korea have dealt with only written text as translation object. Most of them enumerated a following list of the problems that had not seemed to be easy to solve in the near future : 1) processing of non-continuous idiomatic expressions 2) reduction of too many POS or structural ambiguities 3) robust processing for long sentence and parsing failure 4) selecting correct word correspondence between several alternatives. The problems can be considered as important factors that have influence on the translation quality of machine translation system. This paper describes not only the solutions of problems of the previous English-to-Korean machine translation systems but also the HTML tags management between two structurally different languages, English and Korean. Through the solutions we translate successfully English web documents into Korean one in the English-to-Korean web translator ""FromTo/Web-EK"" which has been developed from 1997.",,"English-to-Korean Web translator : ``FromTo/Web-EK''. The previous English-Korean MT system that have been developed in Korea have dealt with only written text as translation object. 
Most of them enumerated a following list of the problems that had not seemed to be easy to solve in the near future : 1) processing of non-continuous idiomatic expressions 2) reduction of too many POS or structural ambiguities 3) robust processing for long sentence and parsing failure 4) selecting correct word correspondence between several alternatives. The problems can be considered as important factors that have influence on the translation quality of machine translation system. This paper describes not only the solutions of problems of the previous English-to-Korean machine translation systems but also the HTML tags management between two structurally different languages, English and Korean. Through the solutions we translate successfully English web documents into Korean one in the English-to-Korean web translator ""FromTo/Web-EK"" which has been developed from 1997.",1999
tanase-etal-2020-upb,https://aclanthology.org/2020.semeval-1.296,1,,,,hate_speech,,,"UPB at SemEval-2020 Task 12: Multilingual Offensive Language Detection on Social Media by Fine-tuning a Variety of BERT-based Models. Offensive language detection is one of the most challenging problem in the natural language processing field, being imposed by the rising presence of this phenomenon in online social media. This paper describes our Transformer-based solutions for identifying offensive language on Twitter in five languages (i.e., English, Arabic, Danish, Greek, and Turkish), which was employed in Subtask A of the Offenseval 2020 shared task. Several neural architectures (i.e., BERT, mBERT, Roberta, XLM-Roberta, and ALBERT), pre-trained using both single-language and multilingual corpora, were fine-tuned and compared using multiple combinations of datasets. Finally, the highest-scoring models were used for our submissions in the competition, which ranked our",{UPB} at {S}em{E}val-2020 Task 12: Multilingual Offensive Language Detection on Social Media by Fine-tuning a Variety of {BERT}-based Models,"Offensive language detection is one of the most challenging problem in the natural language processing field, being imposed by the rising presence of this phenomenon in online social media. This paper describes our Transformer-based solutions for identifying offensive language on Twitter in five languages (i.e., English, Arabic, Danish, Greek, and Turkish), which was employed in Subtask A of the Offenseval 2020 shared task. Several neural architectures (i.e., BERT, mBERT, Roberta, XLM-Roberta, and ALBERT), pre-trained using both single-language and multilingual corpora, were fine-tuned and compared using multiple combinations of datasets. Finally, the highest-scoring models were used for our submissions in the competition, which ranked our",UPB at SemEval-2020 Task 12: Multilingual Offensive Language Detection on Social Media by Fine-tuning a Variety of BERT-based Models,"Offensive language detection is one of the most challenging problem in the natural language processing field, being imposed by the rising presence of this phenomenon in online social media. This paper describes our Transformer-based solutions for identifying offensive language on Twitter in five languages (i.e., English, Arabic, Danish, Greek, and Turkish), which was employed in Subtask A of the Offenseval 2020 shared task. Several neural architectures (i.e., BERT, mBERT, Roberta, XLM-Roberta, and ALBERT), pre-trained using both single-language and multilingual corpora, were fine-tuned and compared using multiple combinations of datasets. Finally, the highest-scoring models were used for our submissions in the competition, which ranked our",,"UPB at SemEval-2020 Task 12: Multilingual Offensive Language Detection on Social Media by Fine-tuning a Variety of BERT-based Models. Offensive language detection is one of the most challenging problem in the natural language processing field, being imposed by the rising presence of this phenomenon in online social media. This paper describes our Transformer-based solutions for identifying offensive language on Twitter in five languages (i.e., English, Arabic, Danish, Greek, and Turkish), which was employed in Subtask A of the Offenseval 2020 shared task. Several neural architectures (i.e., BERT, mBERT, Roberta, XLM-Roberta, and ALBERT), pre-trained using both single-language and multilingual corpora, were fine-tuned and compared using multiple combinations of datasets. 
Finally, the highest-scoring models were used for our submissions in the competition, which ranked our",2020
ahlberg-enache-2012-combining,http://www.lrec-conf.org/proceedings/lrec2012/pdf/360_Paper.pdf,0,,,,,,,"Combining Language Resources Into A Grammar-Driven Swedish Parser. This paper describes work on a rule-based, open-source parser for Swedish. The central component is a wide-coverage grammar implemented in the GF formalism (Grammatical Framework), a dependently typed grammar formalism based on Martin-Löf type theory. GF has strong support for multilinguality and has so far been used successfully for controlled languages (Angelov and Ranta, 2009) and recent experiments have showed that it is also possible to use the framework for parsing unrestricted language. In addition to GF, we use two other main resources: the Swedish treebank Talbanken and the electronic lexicon SALDO. By combining the grammar with a lexicon extracted from SALDO we obtain a parser accepting all sentences described by the given rules. We develop and test this on examples from Talbanken. The resulting parser gives a full syntactic analysis of the input sentences. It will be highly reusable, freely available, and as GF provides libraries for compiling grammars to a number of programming languages, chosen parts of the the grammar may be used in various NLP applications.",Combining Language Resources Into A Grammar-Driven {S}wedish Parser,"This paper describes work on a rule-based, open-source parser for Swedish. The central component is a wide-coverage grammar implemented in the GF formalism (Grammatical Framework), a dependently typed grammar formalism based on Martin-Löf type theory. GF has strong support for multilinguality and has so far been used successfully for controlled languages (Angelov and Ranta, 2009) and recent experiments have showed that it is also possible to use the framework for parsing unrestricted language. In addition to GF, we use two other main resources: the Swedish treebank Talbanken and the electronic lexicon SALDO. By combining the grammar with a lexicon extracted from SALDO we obtain a parser accepting all sentences described by the given rules. We develop and test this on examples from Talbanken. The resulting parser gives a full syntactic analysis of the input sentences. It will be highly reusable, freely available, and as GF provides libraries for compiling grammars to a number of programming languages, chosen parts of the the grammar may be used in various NLP applications.",Combining Language Resources Into A Grammar-Driven Swedish Parser,"This paper describes work on a rule-based, open-source parser for Swedish. The central component is a wide-coverage grammar implemented in the GF formalism (Grammatical Framework), a dependently typed grammar formalism based on Martin-Löf type theory. GF has strong support for multilinguality and has so far been used successfully for controlled languages (Angelov and Ranta, 2009) and recent experiments have showed that it is also possible to use the framework for parsing unrestricted language. In addition to GF, we use two other main resources: the Swedish treebank Talbanken and the electronic lexicon SALDO. By combining the grammar with a lexicon extracted from SALDO we obtain a parser accepting all sentences described by the given rules. We develop and test this on examples from Talbanken. The resulting parser gives a full syntactic analysis of the input sentences. 
It will be highly reusable, freely available, and as GF provides libraries for compiling grammars to a number of programming languages, chosen parts of the grammar may be used in various NLP applications.","The work has been funded by Center of Language Technology. We would also like to give special thanks to Aarne Ranta, Elisabet Engdahl, Krasimir Angelov, Olga Caprotti, Lars Borin and John Camilleri for their help and support.","Combining Language Resources Into A Grammar-Driven Swedish Parser. This paper describes work on a rule-based, open-source parser for Swedish. The central component is a wide-coverage grammar implemented in the GF formalism (Grammatical Framework), a dependently typed grammar formalism based on Martin-Löf type theory. GF has strong support for multilinguality and has so far been used successfully for controlled languages (Angelov and Ranta, 2009) and recent experiments have shown that it is also possible to use the framework for parsing unrestricted language. In addition to GF, we use two other main resources: the Swedish treebank Talbanken and the electronic lexicon SALDO. By combining the grammar with a lexicon extracted from SALDO we obtain a parser accepting all sentences described by the given rules. We develop and test this on examples from Talbanken. The resulting parser gives a full syntactic analysis of the input sentences. It will be highly reusable, freely available, and as GF provides libraries for compiling grammars to a number of programming languages, chosen parts of the grammar may be used in various NLP applications.",2012
sinopalnikova-smrz-2006-intelligent,http://www.lrec-conf.org/proceedings/lrec2006/pdf/275_pdf.pdf,0,,,,,,,"Intelligent Dictionary Interfaces: Usability Evaluation of Access-Supporting Enhancements. The present paper describes psycholinguistic experiments aimed at exploring the way people behave while accessing electronic dictionaries. In our work we focused on the access by meaning that, in comparison with the access by form, is currently less studied and very seldom implemented in modern dictionary interfaces. Thus, the goal of our experiments was to explore dictionary users' requirements and to study what services an intelligent dictionary interface should be able to supply to help solving access by meaning problems. We tested several access-supporting enhancements of electronic dictionaries based on various language resources (corpora, wordnets, word association norms and explanatory dictionaries). Experiments were carried out with native speakers of three European languages-English, Czech and Russian. Results for monolingual and bilingual cases are presented.",Intelligent Dictionary Interfaces: Usability Evaluation of Access-Supporting Enhancements,"The present paper describes psycholinguistic experiments aimed at exploring the way people behave while accessing electronic dictionaries. In our work we focused on the access by meaning that, in comparison with the access by form, is currently less studied and very seldom implemented in modern dictionary interfaces. Thus, the goal of our experiments was to explore dictionary users' requirements and to study what services an intelligent dictionary interface should be able to supply to help solving access by meaning problems. We tested several access-supporting enhancements of electronic dictionaries based on various language resources (corpora, wordnets, word association norms and explanatory dictionaries). Experiments were carried out with native speakers of three European languages-English, Czech and Russian. Results for monolingual and bilingual cases are presented.",Intelligent Dictionary Interfaces: Usability Evaluation of Access-Supporting Enhancements,"The present paper describes psycholinguistic experiments aimed at exploring the way people behave while accessing electronic dictionaries. In our work we focused on the access by meaning that, in comparison with the access by form, is currently less studied and very seldom implemented in modern dictionary interfaces. Thus, the goal of our experiments was to explore dictionary users' requirements and to study what services an intelligent dictionary interface should be able to supply to help solving access by meaning problems. We tested several access-supporting enhancements of electronic dictionaries based on various language resources (corpora, wordnets, word association norms and explanatory dictionaries). Experiments were carried out with native speakers of three European languages-English, Czech and Russian. Results for monolingual and bilingual cases are presented.",,"Intelligent Dictionary Interfaces: Usability Evaluation of Access-Supporting Enhancements. The present paper describes psycholinguistic experiments aimed at exploring the way people behave while accessing electronic dictionaries. In our work we focused on the access by meaning that, in comparison with the access by form, is currently less studied and very seldom implemented in modern dictionary interfaces. 
Thus, the goal of our experiments was to explore dictionary users' requirements and to study what services an intelligent dictionary interface should be able to supply to help solve access-by-meaning problems. We tested several access-supporting enhancements of electronic dictionaries based on various language resources (corpora, wordnets, word association norms and explanatory dictionaries). Experiments were carried out with native speakers of three European languages: English, Czech and Russian. Results for monolingual and bilingual cases are presented.",2006
boldrini-etal-2010-emotiblog,https://aclanthology.org/W10-1801,0,,,,,,,"EmotiBlog: A Finer-Grained and More Precise Learning of Subjectivity Expression Models. The exponential growth of the subjective information in the framework of the Web 2.0 has led to the need to create Natural Language Processing tools able to analyse and process such data for multiple practical applications. They require training on specifically annotated corpora, whose level of detail must be fine enough to capture the phenomena involved. This paper presents EmotiBlog-a finegrained annotation scheme for subjectivity. We show the manner in which it is built and demonstrate the benefits it brings to the systems using it for training, through the experiments we carried out on opinion mining and emotion detection. We employ corpora of different textual genres-a set of annotated reported speech extracted from news articles, the set of news titles annotated with polarity and emotion from the SemEval 2007 (Task 14) and ISEAR, a corpus of real-life selfexpressed emotion. We also show how the model built from the EmotiBlog annotations can be enhanced with external resources. The results demonstrate that EmotiBlog, through its structure and annotation paradigm, offers high quality training data for systems dealing both with opinion mining, as well as emotion detection.",{E}moti{B}log: A Finer-Grained and More Precise Learning of Subjectivity Expression Models,"The exponential growth of the subjective information in the framework of the Web 2.0 has led to the need to create Natural Language Processing tools able to analyse and process such data for multiple practical applications. They require training on specifically annotated corpora, whose level of detail must be fine enough to capture the phenomena involved. This paper presents EmotiBlog-a finegrained annotation scheme for subjectivity. We show the manner in which it is built and demonstrate the benefits it brings to the systems using it for training, through the experiments we carried out on opinion mining and emotion detection. We employ corpora of different textual genres-a set of annotated reported speech extracted from news articles, the set of news titles annotated with polarity and emotion from the SemEval 2007 (Task 14) and ISEAR, a corpus of real-life selfexpressed emotion. We also show how the model built from the EmotiBlog annotations can be enhanced with external resources. The results demonstrate that EmotiBlog, through its structure and annotation paradigm, offers high quality training data for systems dealing both with opinion mining, as well as emotion detection.",EmotiBlog: A Finer-Grained and More Precise Learning of Subjectivity Expression Models,"The exponential growth of the subjective information in the framework of the Web 2.0 has led to the need to create Natural Language Processing tools able to analyse and process such data for multiple practical applications. They require training on specifically annotated corpora, whose level of detail must be fine enough to capture the phenomena involved. This paper presents EmotiBlog-a finegrained annotation scheme for subjectivity. We show the manner in which it is built and demonstrate the benefits it brings to the systems using it for training, through the experiments we carried out on opinion mining and emotion detection. 
We employ corpora of different textual genres: a set of annotated reported speech extracted from news articles, the set of news titles annotated with polarity and emotion from the SemEval 2007 (Task 14) and ISEAR, a corpus of real-life self-expressed emotion. We also show how the model built from the EmotiBlog annotations can be enhanced with external resources. The results demonstrate that EmotiBlog, through its structure and annotation paradigm, offers high-quality training data for systems dealing with both opinion mining and emotion detection.",,"EmotiBlog: A Finer-Grained and More Precise Learning of Subjectivity Expression Models. The exponential growth of the subjective information in the framework of the Web 2.0 has led to the need to create Natural Language Processing tools able to analyse and process such data for multiple practical applications. They require training on specifically annotated corpora, whose level of detail must be fine enough to capture the phenomena involved. This paper presents EmotiBlog, a fine-grained annotation scheme for subjectivity. We show the manner in which it is built and demonstrate the benefits it brings to the systems using it for training, through the experiments we carried out on opinion mining and emotion detection. We employ corpora of different textual genres: a set of annotated reported speech extracted from news articles, the set of news titles annotated with polarity and emotion from the SemEval 2007 (Task 14) and ISEAR, a corpus of real-life self-expressed emotion. We also show how the model built from the EmotiBlog annotations can be enhanced with external resources. The results demonstrate that EmotiBlog, through its structure and annotation paradigm, offers high-quality training data for systems dealing with both opinion mining and emotion detection.",2010
tufis-etal-2020-collection,https://aclanthology.org/2020.lrec-1.337,1,,,,peace_justice_and_strong_institutions,,,"Collection and Annotation of the Romanian Legal Corpus. We present the Romanian legislative corpus which is a valuable linguistic asset for the development of machine translation systems, especially for under-resourced languages. The knowledge that can be extracted from this resource is necessary for a deeper understanding of how law terminology is used and how it can be made more consistent. At this moment, the corpus contains more than 144k documents representing the legislative body of Romania. This corpus is processed and annotated at different levels: linguistically (tokenized, lemmatized and POS-tagged), dependency parsed, chunked, named entities identified and labeled with IATE terms and EUROVOC descriptors. Each annotated document has a CONLL-U Plus format consisting of 14 columns; in addition to the standard 10-column format, four other types of annotations were added. Moreover the repository will be periodically updated as new legislative texts are published. These will be automatically collected and transmitted to the processing and annotation pipeline. The access to the corpus is provided through ELRC infrastructure.",Collection and Annotation of the {R}omanian Legal Corpus,"We present the Romanian legislative corpus which is a valuable linguistic asset for the development of machine translation systems, especially for under-resourced languages. The knowledge that can be extracted from this resource is necessary for a deeper understanding of how law terminology is used and how it can be made more consistent. At this moment, the corpus contains more than 144k documents representing the legislative body of Romania. This corpus is processed and annotated at different levels: linguistically (tokenized, lemmatized and POS-tagged), dependency parsed, chunked, named entities identified and labeled with IATE terms and EUROVOC descriptors. Each annotated document has a CONLL-U Plus format consisting of 14 columns; in addition to the standard 10-column format, four other types of annotations were added. Moreover the repository will be periodically updated as new legislative texts are published. These will be automatically collected and transmitted to the processing and annotation pipeline. The access to the corpus is provided through ELRC infrastructure.",Collection and Annotation of the Romanian Legal Corpus,"We present the Romanian legislative corpus which is a valuable linguistic asset for the development of machine translation systems, especially for under-resourced languages. The knowledge that can be extracted from this resource is necessary for a deeper understanding of how law terminology is used and how it can be made more consistent. At this moment, the corpus contains more than 144k documents representing the legislative body of Romania. This corpus is processed and annotated at different levels: linguistically (tokenized, lemmatized and POS-tagged), dependency parsed, chunked, named entities identified and labeled with IATE terms and EUROVOC descriptors. Each annotated document has a CONLL-U Plus format consisting of 14 columns; in addition to the standard 10-column format, four other types of annotations were added. Moreover the repository will be periodically updated as new legislative texts are published. These will be automatically collected and transmitted to the processing and annotation pipeline. 
The access to the corpus is provided through ELRC infrastructure.","This research was supported by the EC grant no. INEA/CEF/ICT/A2017/1565710 for the Action no. 2017-EU-IA-0136 entitled ""Multilingual Resources for CEF.AT in the legal domain"" (MARCELL).","Collection and Annotation of the Romanian Legal Corpus. We present the Romanian legislative corpus which is a valuable linguistic asset for the development of machine translation systems, especially for under-resourced languages. The knowledge that can be extracted from this resource is necessary for a deeper understanding of how law terminology is used and how it can be made more consistent. At this moment, the corpus contains more than 144k documents representing the legislative body of Romania. This corpus is processed and annotated at different levels: linguistically (tokenized, lemmatized and POS-tagged), dependency parsed, chunked, named entities identified and labeled with IATE terms and EUROVOC descriptors. Each annotated document has a CONLL-U Plus format consisting of 14 columns; in addition to the standard 10-column format, four other types of annotations were added. Moreover the repository will be periodically updated as new legislative texts are published. These will be automatically collected and transmitted to the processing and annotation pipeline. The access to the corpus is provided through ELRC infrastructure.",2020
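Editor's note: since the record above distributes documents in a 14-column CONLL-U Plus format (the 10 standard columns plus four extra annotation layers), here is a small, hedged parsing sketch. The extra column names are illustrative assumptions, as the abstract does not list the corpus' exact header.

# Hedged sketch: read a CoNLL-U Plus file with 10 standard columns plus
# 4 extra layers (the extra names below are assumptions, not the corpus' header).
CONLLU_COLS = ["ID", "FORM", "LEMMA", "UPOS", "XPOS", "FEATS",
               "HEAD", "DEPREL", "DEPS", "MISC"]
EXTRA_COLS = ["CHUNK", "NER", "IATE", "EUROVOC"]   # assumed extra annotation columns
ALL_COLS = CONLLU_COLS + EXTRA_COLS

def read_conllup(path):
    """Yield one sentence at a time as a list of {column: value} token dicts."""
    sentence = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.rstrip("\n")
            if line.startswith("#"):          # comment / metadata lines
                continue
            if not line:                      # blank line closes a sentence
                if sentence:
                    yield sentence
                    sentence = []
                continue
            sentence.append(dict(zip(ALL_COLS, line.split("\t"))))
    if sentence:
        yield sentence

# Usage: for sent in read_conllup("ro_legal.conllup"): print(sent[0]["FORM"])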
milajevs-purver-2014-investigating,https://aclanthology.org/W14-1505,0,,,,,,,"Investigating the Contribution of Distributional Semantic Information for Dialogue Act Classification. This paper presents a series of experiments in applying compositional distributional semantic models to dialogue act classification. In contrast to the widely used bag-ofwords approach, we build the meaning of an utterance from its parts by composing the distributional word vectors using vector addition and multiplication. We investigate the contribution of word sequence, dialogue act sequence, and distributional information to the performance, and compare with the current state of the art approaches. Our experiment suggests that that distributional information is useful for dialogue act tagging but that simple models of compositionality fail to capture crucial information from word and utterance sequence; more advanced approaches (e.g. sequence-or grammar-driven, such as categorical, word vector composition) are required.",Investigating the Contribution of Distributional Semantic Information for Dialogue Act Classification,"This paper presents a series of experiments in applying compositional distributional semantic models to dialogue act classification. In contrast to the widely used bag-ofwords approach, we build the meaning of an utterance from its parts by composing the distributional word vectors using vector addition and multiplication. We investigate the contribution of word sequence, dialogue act sequence, and distributional information to the performance, and compare with the current state of the art approaches. Our experiment suggests that that distributional information is useful for dialogue act tagging but that simple models of compositionality fail to capture crucial information from word and utterance sequence; more advanced approaches (e.g. sequence-or grammar-driven, such as categorical, word vector composition) are required.",Investigating the Contribution of Distributional Semantic Information for Dialogue Act Classification,"This paper presents a series of experiments in applying compositional distributional semantic models to dialogue act classification. In contrast to the widely used bag-ofwords approach, we build the meaning of an utterance from its parts by composing the distributional word vectors using vector addition and multiplication. We investigate the contribution of word sequence, dialogue act sequence, and distributional information to the performance, and compare with the current state of the art approaches. Our experiment suggests that that distributional information is useful for dialogue act tagging but that simple models of compositionality fail to capture crucial information from word and utterance sequence; more advanced approaches (e.g. sequence-or grammar-driven, such as categorical, word vector composition) are required.",We thank Mehrnoosh Sadrzadeh for her helpful advice and valuable discussion. We would like to thank anonymous reviewers for their effective comments. Milajevs is supported by the EP-SRC project EP/J002607/1. Purver is supported in part by the European Community's Seventh Framework Programme under grant agreement no 611733 (ConCreTe).,"Investigating the Contribution of Distributional Semantic Information for Dialogue Act Classification. This paper presents a series of experiments in applying compositional distributional semantic models to dialogue act classification. 
In contrast to the widely used bag-of-words approach, we build the meaning of an utterance from its parts by composing the distributional word vectors using vector addition and multiplication. We investigate the contribution of word sequence, dialogue act sequence, and distributional information to the performance, and compare with current state-of-the-art approaches. Our experiment suggests that distributional information is useful for dialogue act tagging but that simple models of compositionality fail to capture crucial information from word and utterance sequence; more advanced approaches (e.g. sequence- or grammar-driven, such as categorical, word vector composition) are required.",2014
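Editor's note: to make the composition step in the record above concrete, here is a hedged sketch of building utterance vectors by element-wise addition or multiplication of word vectors and feeding them to a linear classifier. The toy vectors, utterances, and dialogue-act labels are invented for illustration and do not come from the paper.

# Hedged sketch: compose word vectors into utterance vectors by addition or
# multiplication, then classify dialogue acts. All data here are toy examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
vocab = ["yeah", "right", "what", "time", "is", "it"]
word_vecs = {w: rng.normal(size=50) for w in vocab}      # stand-in distributional vectors

def compose(tokens, op="add"):
    vecs = [word_vecs[t] for t in tokens if t in word_vecs]
    if not vecs:
        return np.zeros(50)
    return np.sum(vecs, axis=0) if op == "add" else np.prod(vecs, axis=0)

utterances = [["yeah", "right"], ["what", "time", "is", "it"]]
labels = ["backchannel", "question"]                     # toy dialogue-act tags

X = np.stack([compose(u, op="add") for u in utterances])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict([compose(["what", "is", "it"], op="add")]))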
zhao-caragea-2021-knowledge,https://aclanthology.org/2021.ranlp-1.181,1,,,,privacy_protection,,,"Knowledge Distillation with BERT for Image Tag-Based Privacy Prediction. Text in the form of tags associated with online images is often informative for predicting private or sensitive content from images. When using privacy prediction systems running on social networking sites that decide whether each uploaded image should get posted or be protected, users may be reluctant to share real images that may reveal their identity, but may share image tags. In such cases, privacy-aware tags become good indicators of image privacy and can be utilized to generate privacy decisions. In this paper, our aim is to learn tag representations for images to improve tagbased image privacy prediction. To achieve this, we explore self-distillation with BERT, in which we utilize knowledge in the form of soft probability distributions (soft labels) from the teacher model to help with the training of the student model. Our approach effectively learns better tag representations with improved performance on private image identification and outperforms state-of-the-art models for this task. Moreover, we utilize the idea of knowledge distillation to improve tag representations in a semi-supervised learning task. Our semi-supervised approach with only 20% of annotated data achieves similar performance compared with its supervised learning counterpart. Last, we provide a comprehensive analysis to get a better understanding of our approach.",Knowledge Distillation with {BERT} for Image Tag-Based Privacy Prediction,"Text in the form of tags associated with online images is often informative for predicting private or sensitive content from images. When using privacy prediction systems running on social networking sites that decide whether each uploaded image should get posted or be protected, users may be reluctant to share real images that may reveal their identity, but may share image tags. In such cases, privacy-aware tags become good indicators of image privacy and can be utilized to generate privacy decisions. In this paper, our aim is to learn tag representations for images to improve tagbased image privacy prediction. To achieve this, we explore self-distillation with BERT, in which we utilize knowledge in the form of soft probability distributions (soft labels) from the teacher model to help with the training of the student model. Our approach effectively learns better tag representations with improved performance on private image identification and outperforms state-of-the-art models for this task. Moreover, we utilize the idea of knowledge distillation to improve tag representations in a semi-supervised learning task. Our semi-supervised approach with only 20% of annotated data achieves similar performance compared with its supervised learning counterpart. Last, we provide a comprehensive analysis to get a better understanding of our approach.",Knowledge Distillation with BERT for Image Tag-Based Privacy Prediction,"Text in the form of tags associated with online images is often informative for predicting private or sensitive content from images. When using privacy prediction systems running on social networking sites that decide whether each uploaded image should get posted or be protected, users may be reluctant to share real images that may reveal their identity, but may share image tags. In such cases, privacy-aware tags become good indicators of image privacy and can be utilized to generate privacy decisions. 
In this paper, our aim is to learn tag representations for images to improve tag-based image privacy prediction. To achieve this, we explore self-distillation with BERT, in which we utilize knowledge in the form of soft probability distributions (soft labels) from the teacher model to help with the training of the student model. Our approach effectively learns better tag representations with improved performance on private image identification and outperforms state-of-the-art models for this task. Moreover, we utilize the idea of knowledge distillation to improve tag representations in a semi-supervised learning task. Our semi-supervised approach with only 20% of annotated data achieves similar performance compared with its supervised learning counterpart. Last, we provide a comprehensive analysis to get a better understanding of our approach.","This research is supported in part by NSF. Any opinions, findings, and conclusions expressed here are those of the authors and do not necessarily reflect the views of NSF. The computing for this project was performed on AWS. We also thank our reviewers for their feedback.","Knowledge Distillation with BERT for Image Tag-Based Privacy Prediction. Text in the form of tags associated with online images is often informative for predicting private or sensitive content from images. When using privacy prediction systems running on social networking sites that decide whether each uploaded image should get posted or be protected, users may be reluctant to share real images that may reveal their identity, but may share image tags. In such cases, privacy-aware tags become good indicators of image privacy and can be utilized to generate privacy decisions. In this paper, our aim is to learn tag representations for images to improve tag-based image privacy prediction. To achieve this, we explore self-distillation with BERT, in which we utilize knowledge in the form of soft probability distributions (soft labels) from the teacher model to help with the training of the student model. Our approach effectively learns better tag representations with improved performance on private image identification and outperforms state-of-the-art models for this task. Moreover, we utilize the idea of knowledge distillation to improve tag representations in a semi-supervised learning task. Our semi-supervised approach with only 20% of annotated data achieves similar performance compared with its supervised learning counterpart. Last, we provide a comprehensive analysis to get a better understanding of our approach.",2021
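Editor's note: the distillation objective sketched in the record above (training a student on the teacher's soft probability distributions) can be written, under standard assumptions about temperature scaling and a mixing weight, roughly as follows. The temperature and alpha values are illustrative, not the paper's settings.

# Hedged sketch of a conventional knowledge-distillation loss with temperature T:
# soft cross-entropy against teacher probabilities mixed with a hard-label loss.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, hard_labels, T=2.0, alpha=0.5):
    # KL divergence between temperature-softened distributions (scaled by T^2,
    # as is conventional), plus ordinary cross-entropy on the gold labels.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, hard_labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random logits for a batch of 4 examples and 2 classes.
s = torch.randn(4, 2, requires_grad=True)
t = torch.randn(4, 2)
y = torch.tensor([0, 1, 1, 0])
loss = distillation_loss(s, t, y)
loss.backward()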
guijarrubia-etal-2004-evaluation,http://www.lrec-conf.org/proceedings/lrec2004/pdf/309.pdf,0,,,,,,,"Evaluation of a Spoken Phonetic Database in Basque Language. In this paper we present the evaluation of a spoken phonetic corpus designed to train acoustic models for Speech Recognition applications in Basque Language. A complete set of acoustic-phonetic decoding experiments was carried out over the proposed database. Context dependent and independent phoneme units were used in these experiments with two different approaches to acoustic modeling, namely discrete and continuous Hidden Markov Models (HMMs). A complete set of HMMs were trained and tested with the database. Experimental results reveal that the database is large and phonetically rich enough to get great acoustic models to be integrated in Continuous Speech Recognition Systems.",Evaluation of a Spoken Phonetic Database in {B}asque Language,"In this paper we present the evaluation of a spoken phonetic corpus designed to train acoustic models for Speech Recognition applications in Basque Language. A complete set of acoustic-phonetic decoding experiments was carried out over the proposed database. Context dependent and independent phoneme units were used in these experiments with two different approaches to acoustic modeling, namely discrete and continuous Hidden Markov Models (HMMs). A complete set of HMMs were trained and tested with the database. Experimental results reveal that the database is large and phonetically rich enough to get great acoustic models to be integrated in Continuous Speech Recognition Systems.",Evaluation of a Spoken Phonetic Database in Basque Language,"In this paper we present the evaluation of a spoken phonetic corpus designed to train acoustic models for Speech Recognition applications in Basque Language. A complete set of acoustic-phonetic decoding experiments was carried out over the proposed database. Context dependent and independent phoneme units were used in these experiments with two different approaches to acoustic modeling, namely discrete and continuous Hidden Markov Models (HMMs). A complete set of HMMs were trained and tested with the database. Experimental results reveal that the database is large and phonetically rich enough to get great acoustic models to be integrated in Continuous Speech Recognition Systems.",,"Evaluation of a Spoken Phonetic Database in Basque Language. In this paper we present the evaluation of a spoken phonetic corpus designed to train acoustic models for Speech Recognition applications in Basque Language. A complete set of acoustic-phonetic decoding experiments was carried out over the proposed database. Context dependent and independent phoneme units were used in these experiments with two different approaches to acoustic modeling, namely discrete and continuous Hidden Markov Models (HMMs). A complete set of HMMs were trained and tested with the database. Experimental results reveal that the database is large and phonetically rich enough to get great acoustic models to be integrated in Continuous Speech Recognition Systems.",2004
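Editor's note: because the record above evaluates discrete HMM acoustic models, a compact forward-algorithm sketch may help readers unfamiliar with how HMM likelihoods are computed. The toy transition and emission matrices are invented and unrelated to the Basque corpus.

# Hedged sketch: forward algorithm for a discrete HMM, i.e. the observation
# likelihood used when scoring acoustic units. All numbers are toy values.
import numpy as np

pi = np.array([0.6, 0.4])                 # initial state distribution
A = np.array([[0.7, 0.3],                 # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],            # emission probs over 3 discrete symbols
              [0.1, 0.3, 0.6]])

def forward_likelihood(obs):
    alpha = pi * B[:, obs[0]]             # initialise with the first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]     # propagate and weight by emission prob
    return alpha.sum()

print(forward_likelihood([0, 2, 1, 1]))   # likelihood of a toy symbol sequence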
atanasov-etal-2019-predicting,https://aclanthology.org/K19-1096,1,,,,peace_justice_and_strong_institutions,,,"Predicting the Role of Political Trolls in Social Media. We investigate the political roles of ""Internet trolls"" in social media. Political trolls, such as the ones linked to the Russian Internet Research Agency (IRA), have recently gained enormous attention for their ability to sway public opinion and even influence elections. Analysis of the online traces of trolls has shown different behavioral patterns, which target different slices of the population. However, this analysis is manual and laborintensive, thus making it impractical as a firstresponse tool for newly-discovered troll farms. In this paper, we show how to automate this analysis by using machine learning in a realistic setting. In particular, we show how to classify trolls according to their political role-left, news feed, right-by using features extracted from social media, i.e., Twitter, in two scenarios: (i) in a traditional supervised learning scenario, where labels for trolls are available, and (ii) in a distant supervision scenario, where labels for trolls are not available, and we rely on more-commonly-available labels for news outlets mentioned by the trolls. Technically, we leverage the community structure and the text of the messages in the online social network of trolls represented as a graph, from which we extract several types of learned representations, i.e., embeddings, for the trolls. Experiments on the ""IRA Russian Troll"" dataset show that our methodology improves over the state-of-the-art in the first scenario, while providing a compelling case for the second scenario, which has not been explored in the literature thus far.",Predicting the Role of Political Trolls in Social Media,"We investigate the political roles of ""Internet trolls"" in social media. Political trolls, such as the ones linked to the Russian Internet Research Agency (IRA), have recently gained enormous attention for their ability to sway public opinion and even influence elections. Analysis of the online traces of trolls has shown different behavioral patterns, which target different slices of the population. However, this analysis is manual and laborintensive, thus making it impractical as a firstresponse tool for newly-discovered troll farms. In this paper, we show how to automate this analysis by using machine learning in a realistic setting. In particular, we show how to classify trolls according to their political role-left, news feed, right-by using features extracted from social media, i.e., Twitter, in two scenarios: (i) in a traditional supervised learning scenario, where labels for trolls are available, and (ii) in a distant supervision scenario, where labels for trolls are not available, and we rely on more-commonly-available labels for news outlets mentioned by the trolls. Technically, we leverage the community structure and the text of the messages in the online social network of trolls represented as a graph, from which we extract several types of learned representations, i.e., embeddings, for the trolls. Experiments on the ""IRA Russian Troll"" dataset show that our methodology improves over the state-of-the-art in the first scenario, while providing a compelling case for the second scenario, which has not been explored in the literature thus far.",Predicting the Role of Political Trolls in Social Media,"We investigate the political roles of ""Internet trolls"" in social media. 
Political trolls, such as the ones linked to the Russian Internet Research Agency (IRA), have recently gained enormous attention for their ability to sway public opinion and even influence elections. Analysis of the online traces of trolls has shown different behavioral patterns, which target different slices of the population. However, this analysis is manual and laborintensive, thus making it impractical as a firstresponse tool for newly-discovered troll farms. In this paper, we show how to automate this analysis by using machine learning in a realistic setting. In particular, we show how to classify trolls according to their political role-left, news feed, right-by using features extracted from social media, i.e., Twitter, in two scenarios: (i) in a traditional supervised learning scenario, where labels for trolls are available, and (ii) in a distant supervision scenario, where labels for trolls are not available, and we rely on more-commonly-available labels for news outlets mentioned by the trolls. Technically, we leverage the community structure and the text of the messages in the online social network of trolls represented as a graph, from which we extract several types of learned representations, i.e., embeddings, for the trolls. Experiments on the ""IRA Russian Troll"" dataset show that our methodology improves over the state-of-the-art in the first scenario, while providing a compelling case for the second scenario, which has not been explored in the literature thus far.","This research is part of the Tanbih project, 4 which aims to limit the effect of ""fake news"", propaganda and media bias by making users aware of what they are reading. The project is developed in collaboration between the Qatar Computing Research Institute, HBKU and the MIT Computer Science and Artificial Intelligence Laboratory.Gianmarco De Francisci Morales acknowledges support from Intesa Sanpaolo Innovation Center. The funder had no role in the study design, in the data collection and analysis, in the decision to publish, or in the preparation of the manuscript.","Predicting the Role of Political Trolls in Social Media. We investigate the political roles of ""Internet trolls"" in social media. Political trolls, such as the ones linked to the Russian Internet Research Agency (IRA), have recently gained enormous attention for their ability to sway public opinion and even influence elections. Analysis of the online traces of trolls has shown different behavioral patterns, which target different slices of the population. However, this analysis is manual and laborintensive, thus making it impractical as a firstresponse tool for newly-discovered troll farms. In this paper, we show how to automate this analysis by using machine learning in a realistic setting. In particular, we show how to classify trolls according to their political role-left, news feed, right-by using features extracted from social media, i.e., Twitter, in two scenarios: (i) in a traditional supervised learning scenario, where labels for trolls are available, and (ii) in a distant supervision scenario, where labels for trolls are not available, and we rely on more-commonly-available labels for news outlets mentioned by the trolls. Technically, we leverage the community structure and the text of the messages in the online social network of trolls represented as a graph, from which we extract several types of learned representations, i.e., embeddings, for the trolls. 
Experiments on the ""IRA Russian Troll"" dataset show that our methodology improves over the state-of-the-art in the first scenario, while providing a compelling case for the second scenario, which has not been explored in the literature thus far.",2019
si-etal-2020-new,https://aclanthology.org/2020.icon-main.20,1,,,,disinformation_and_fake_news,,,"A New Approach to Claim Check-Worthiness Prediction and Claim Verification. The more we are advancing towards a modern world, the more it opens the path to falsification in every aspect of life. Even in case of knowing the surrounding, common people can not judge the actual scenario as the promises, comments and opinions of the influential people at power keep changing every day. Therefore computationally determining the truthfulness of such claims and comments has a very important societal impact. This paper describes a unique method to extract check-worthy claims from the 2016 US presidential debates and verify the truthfulness of the check-worthy claims. We classify the claims for check-worthiness with our modified Tf-Idf model which is used in background training on fact-checking news articles (NBC News and Washington Post). We check the truthfulness of the claims by using POS, sentiment score and cosine similarity features.",A New Approach to Claim Check-Worthiness Prediction and Claim Verification,"The more we are advancing towards a modern world, the more it opens the path to falsification in every aspect of life. Even in case of knowing the surrounding, common people can not judge the actual scenario as the promises, comments and opinions of the influential people at power keep changing every day. Therefore computationally determining the truthfulness of such claims and comments has a very important societal impact. This paper describes a unique method to extract check-worthy claims from the 2016 US presidential debates and verify the truthfulness of the check-worthy claims. We classify the claims for check-worthiness with our modified Tf-Idf model which is used in background training on fact-checking news articles (NBC News and Washington Post). We check the truthfulness of the claims by using POS, sentiment score and cosine similarity features.",A New Approach to Claim Check-Worthiness Prediction and Claim Verification,"The more we are advancing towards a modern world, the more it opens the path to falsification in every aspect of life. Even in case of knowing the surrounding, common people can not judge the actual scenario as the promises, comments and opinions of the influential people at power keep changing every day. Therefore computationally determining the truthfulness of such claims and comments has a very important societal impact. This paper describes a unique method to extract check-worthy claims from the 2016 US presidential debates and verify the truthfulness of the check-worthy claims. We classify the claims for check-worthiness with our modified Tf-Idf model which is used in background training on fact-checking news articles (NBC News and Washington Post). We check the truthfulness of the claims by using POS, sentiment score and cosine similarity features.",,"A New Approach to Claim Check-Worthiness Prediction and Claim Verification. The more we are advancing towards a modern world, the more it opens the path to falsification in every aspect of life. Even in case of knowing the surrounding, common people can not judge the actual scenario as the promises, comments and opinions of the influential people at power keep changing every day. Therefore computationally determining the truthfulness of such claims and comments has a very important societal impact. 
This paper describes a unique method to extract check-worthy claims from the 2016 US presidential debates and to verify the truthfulness of the check-worthy claims. We classify the claims for check-worthiness with our modified Tf-Idf model, which is used in background training on fact-checking news articles (NBC News and Washington Post). We check the truthfulness of the claims by using POS, sentiment score and cosine similarity features.",2020
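Editor's note: to ground the feature pipeline mentioned above, here is a hedged sketch of scoring debate sentences against a TF-IDF space built from fact-checking articles, using cosine similarity as the check-worthiness signal. The example texts are invented and the scoring rule is an illustrative stand-in for the authors' modified Tf-Idf model, not a reproduction of it.

# Hedged sketch: rank debate sentences by cosine similarity to a TF-IDF space
# built from fact-checking articles. Texts are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

fact_check_articles = [
    "The unemployment rate fell to 4.9 percent last year.",
    "The senator voted against the trade agreement in 2015.",
]
debate_sentences = [
    "Our unemployment rate is the lowest it has ever been.",
    "Thank you all for being here tonight.",
]

vectorizer = TfidfVectorizer(stop_words="english")
article_vecs = vectorizer.fit_transform(fact_check_articles)
sentence_vecs = vectorizer.transform(debate_sentences)

# A sentence's score = its maximum similarity to any fact-checked article.
scores = cosine_similarity(sentence_vecs, article_vecs).max(axis=1)
for sent, score in sorted(zip(debate_sentences, scores), key=lambda p: -p[1]):
    print(f"{score:.3f}  {sent}")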
kelleher-2007-dit,https://aclanthology.org/2007.mtsummit-ucnlg.17,1,,,,education,,,"DIT: frequency based incremental attribute selection for GRE. The DIT system uses an incremental greedy search to generate descriptions, (similar to the incremental algorithm described in (Dale and Reiter, 1995) ) incremental algorithm). The selection of the next attribute to be tested for inclusion in the description is ordered by the absolute frequency of each attribute in the training corpus. Attributes are selected in descending order of frequency (i.e. the attribute that occurred most frequently in the training corpus is selected first). The type attribute is always included in the description. Other attributes are included in the description if they excludes at least 1 distractor from the set of distractors that fulfil the description generated prior the the attribute's selection.The algorithm terminates when a distinguishing description has been generated or all the targets attributes have been tested for inclusion in the description. To generate a description the system does the following:",{DIT}: frequency based incremental attribute selection for {GRE},"The DIT system uses an incremental greedy search to generate descriptions, (similar to the incremental algorithm described in (Dale and Reiter, 1995) ) incremental algorithm). The selection of the next attribute to be tested for inclusion in the description is ordered by the absolute frequency of each attribute in the training corpus. Attributes are selected in descending order of frequency (i.e. the attribute that occurred most frequently in the training corpus is selected first). The type attribute is always included in the description. Other attributes are included in the description if they excludes at least 1 distractor from the set of distractors that fulfil the description generated prior the the attribute's selection.The algorithm terminates when a distinguishing description has been generated or all the targets attributes have been tested for inclusion in the description. To generate a description the system does the following:",DIT: frequency based incremental attribute selection for GRE,"The DIT system uses an incremental greedy search to generate descriptions, (similar to the incremental algorithm described in (Dale and Reiter, 1995) ) incremental algorithm). The selection of the next attribute to be tested for inclusion in the description is ordered by the absolute frequency of each attribute in the training corpus. Attributes are selected in descending order of frequency (i.e. the attribute that occurred most frequently in the training corpus is selected first). The type attribute is always included in the description. Other attributes are included in the description if they excludes at least 1 distractor from the set of distractors that fulfil the description generated prior the the attribute's selection.The algorithm terminates when a distinguishing description has been generated or all the targets attributes have been tested for inclusion in the description. To generate a description the system does the following:",,"DIT: frequency based incremental attribute selection for GRE. The DIT system uses an incremental greedy search to generate descriptions, (similar to the incremental algorithm described in (Dale and Reiter, 1995) ) incremental algorithm). The selection of the next attribute to be tested for inclusion in the description is ordered by the absolute frequency of each attribute in the training corpus. 
Attributes are selected in descending order of frequency (i.e. the attribute that occurred most frequently in the training corpus is selected first). The type attribute is always included in the description. Other attributes are included in the description if they exclude at least one distractor from the set of distractors that fulfil the description generated prior to the attribute's selection. The algorithm terminates when a distinguishing description has been generated or all the target's attributes have been tested for inclusion in the description. To generate a description, the system does the following:",2007
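Editor's note: the algorithm described in this record translates almost directly into code. The following hedged sketch implements frequency-ordered incremental selection under the stated rules (always include type, add an attribute only if it excludes at least one remaining distractor, stop when the description is distinguishing or the attributes are exhausted). The toy domain and frequency counts are invented.

# Hedged sketch of frequency-ordered incremental attribute selection for GRE,
# following the description in the record above. Domain data are toy values.

# Corpus-derived attribute frequencies (toy numbers), most frequent first.
attribute_order = sorted({"type": 120, "colour": 80, "size": 45, "orientation": 10}.items(),
                         key=lambda kv: -kv[1])

def generate_description(target, distractors):
    description = {"type": target["type"]}               # type is always included
    remaining = [d for d in distractors if d.get("type") == target["type"]]
    for attr, _freq in attribute_order:
        if not remaining:                                 # distinguishing description found
            break
        if attr == "type" or attr not in target:
            continue
        ruled_out = [d for d in remaining if d.get(attr) != target[attr]]
        if ruled_out:                                     # keep attr only if it excludes >= 1 distractor
            description[attr] = target[attr]
            remaining = [d for d in remaining if d.get(attr) == target[attr]]
    return description

target = {"type": "chair", "colour": "red", "size": "large"}
distractors = [{"type": "chair", "colour": "blue", "size": "large"},
               {"type": "table", "colour": "red", "size": "large"}]
print(generate_description(target, distractors))          # {'type': 'chair', 'colour': 'red'}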
webb-etal-2010-evaluating,http://www.lrec-conf.org/proceedings/lrec2010/pdf/115_Paper.pdf,0,,,,,,,"Evaluating Human-Machine Conversation for Appropriateness. Evaluation of complex, collaborative dialogue systems is a difficult task. Traditionally, developers have relied upon subjective feedback from the user, and parametrisation over observable metrics. However, both models place some reliance on the notion of a task; that is, the system is helping to user achieve some clearly defined goal, such as book a flight or complete a banking transaction. It is not clear that such metrics are as useful when dealing with a system that has a more complex task, or even no definable task at all, beyond maintain and performing a collaborative dialogue. Working within the EU funded COMPANIONS program, we investigate the use of appropriateness as a measure of conversation quality, the hypothesis being that good companions need to be good conversational partners. We report initial work in the direction of annotating dialogue for indicators of good conversation, including the annotation and comparison of the output of two generations of the same dialogue system.",Evaluating Human-Machine Conversation for Appropriateness,"Evaluation of complex, collaborative dialogue systems is a difficult task. Traditionally, developers have relied upon subjective feedback from the user, and parametrisation over observable metrics. However, both models place some reliance on the notion of a task; that is, the system is helping to user achieve some clearly defined goal, such as book a flight or complete a banking transaction. It is not clear that such metrics are as useful when dealing with a system that has a more complex task, or even no definable task at all, beyond maintain and performing a collaborative dialogue. Working within the EU funded COMPANIONS program, we investigate the use of appropriateness as a measure of conversation quality, the hypothesis being that good companions need to be good conversational partners. We report initial work in the direction of annotating dialogue for indicators of good conversation, including the annotation and comparison of the output of two generations of the same dialogue system.",Evaluating Human-Machine Conversation for Appropriateness,"Evaluation of complex, collaborative dialogue systems is a difficult task. Traditionally, developers have relied upon subjective feedback from the user, and parametrisation over observable metrics. However, both models place some reliance on the notion of a task; that is, the system is helping to user achieve some clearly defined goal, such as book a flight or complete a banking transaction. It is not clear that such metrics are as useful when dealing with a system that has a more complex task, or even no definable task at all, beyond maintain and performing a collaborative dialogue. Working within the EU funded COMPANIONS program, we investigate the use of appropriateness as a measure of conversation quality, the hypothesis being that good companions need to be good conversational partners. 
We report initial work in the direction of annotating dialogue for indicators of good conversation, including the annotation and comparison of the output of two generations of the same dialogue system.",This work was funded by the Companions project (www.companions-project.org) sponsored by the European Commission as part of the Information Society Technologies (IST) programme under EC grant number IST-FP6-034434.,"Evaluating Human-Machine Conversation for Appropriateness. Evaluation of complex, collaborative dialogue systems is a difficult task. Traditionally, developers have relied upon subjective feedback from the user, and parametrisation over observable metrics. However, both models place some reliance on the notion of a task; that is, the system is helping the user achieve some clearly defined goal, such as booking a flight or completing a banking transaction. It is not clear that such metrics are as useful when dealing with a system that has a more complex task, or even no definable task at all, beyond maintaining and performing a collaborative dialogue. Working within the EU-funded COMPANIONS program, we investigate the use of appropriateness as a measure of conversation quality, the hypothesis being that good companions need to be good conversational partners. We report initial work in the direction of annotating dialogue for indicators of good conversation, including the annotation and comparison of the output of two generations of the same dialogue system.",2010
nagao-etal-2002-annotation,https://aclanthology.org/C02-1098,0,,,,,,,"Annotation-Based Multimedia Summarization and Translation. This paper presents techniques for multimedia annotation and their application to video summarization and translation. Our tool for annotation allows users to easily create annotation including voice transcripts, video scene descriptions, and visual/auditory object descriptions. The module for voice transcription is capable of multilingual spoken language identification and recognition. A video scene description consists of semi-automatically detected keyframes of each scene in a video clip and time codes of scenes. A visual object description is created by tracking and interactive naming of people and objects in video scenes. The text data in the multimedia annotation are syntactically and semantically structured using linguistic annotation. The proposed multimedia summarization works upon a multimodal document that consists of a video, keyframes of scenes, and transcripts of the scenes. The multimedia translation automatically generates several versions of multimedia content in different languages.",Annotation-Based Multimedia Summarization and Translation,"This paper presents techniques for multimedia annotation and their application to video summarization and translation. Our tool for annotation allows users to easily create annotation including voice transcripts, video scene descriptions, and visual/auditory object descriptions. The module for voice transcription is capable of multilingual spoken language identification and recognition. A video scene description consists of semi-automatically detected keyframes of each scene in a video clip and time codes of scenes. A visual object description is created by tracking and interactive naming of people and objects in video scenes. The text data in the multimedia annotation are syntactically and semantically structured using linguistic annotation. The proposed multimedia summarization works upon a multimodal document that consists of a video, keyframes of scenes, and transcripts of the scenes. The multimedia translation automatically generates several versions of multimedia content in different languages.",Annotation-Based Multimedia Summarization and Translation,"This paper presents techniques for multimedia annotation and their application to video summarization and translation. Our tool for annotation allows users to easily create annotation including voice transcripts, video scene descriptions, and visual/auditory object descriptions. The module for voice transcription is capable of multilingual spoken language identification and recognition. A video scene description consists of semi-automatically detected keyframes of each scene in a video clip and time codes of scenes. A visual object description is created by tracking and interactive naming of people and objects in video scenes. The text data in the multimedia annotation are syntactically and semantically structured using linguistic annotation. The proposed multimedia summarization works upon a multimodal document that consists of a video, keyframes of scenes, and transcripts of the scenes. The multimedia translation automatically generates several versions of multimedia content in different languages.",,"Annotation-Based Multimedia Summarization and Translation. This paper presents techniques for multimedia annotation and their application to video summarization and translation. 
Our tool for annotation allows users to easily create annotation including voice transcripts, video scene descriptions, and visual/auditory object descriptions. The module for voice transcription is capable of multilingual spoken language identification and recognition. A video scene description consists of semi-automatically detected keyframes of each scene in a video clip and time codes of scenes. A visual object description is created by tracking and interactive naming of people and objects in video scenes. The text data in the multimedia annotation are syntactically and semantically structured using linguistic annotation. The proposed multimedia summarization works upon a multimodal document that consists of a video, keyframes of scenes, and transcripts of the scenes. The multimedia translation automatically generates several versions of multimedia content in different languages.",2002
mathur-etal-2018-detecting,https://aclanthology.org/W18-3504,1,,,,hate_speech,,,"Detecting Offensive Tweets in Hindi-English Code-Switched Language. The exponential rise of social media websites like Twitter, Facebook and Reddit in linguistically diverse geographical regions has led to hybridization of popular native languages with English in an effort to ease communication. The paper focuses on the classification of offensive tweets written in Hinglish language, which is a portmanteau of the Indic language Hindi with the Roman script. The paper introduces a novel tweet dataset, titled Hindi-English Offensive Tweet (HEOT) dataset, consisting of tweets in Hindi-English code switched language split into three classes: nonoffensive, abusive and hate-speech. Further, we approach the problem of classification of the tweets in HEOT dataset using transfer learning wherein the proposed model employing Convolutional Neural Networks is pre-trained on tweets in English followed by retraining on Hinglish tweets.",Detecting Offensive Tweets in {H}indi-{E}nglish Code-Switched Language,"The exponential rise of social media websites like Twitter, Facebook and Reddit in linguistically diverse geographical regions has led to hybridization of popular native languages with English in an effort to ease communication. The paper focuses on the classification of offensive tweets written in Hinglish language, which is a portmanteau of the Indic language Hindi with the Roman script. The paper introduces a novel tweet dataset, titled Hindi-English Offensive Tweet (HEOT) dataset, consisting of tweets in Hindi-English code switched language split into three classes: nonoffensive, abusive and hate-speech. Further, we approach the problem of classification of the tweets in HEOT dataset using transfer learning wherein the proposed model employing Convolutional Neural Networks is pre-trained on tweets in English followed by retraining on Hinglish tweets.",Detecting Offensive Tweets in Hindi-English Code-Switched Language,"The exponential rise of social media websites like Twitter, Facebook and Reddit in linguistically diverse geographical regions has led to hybridization of popular native languages with English in an effort to ease communication. The paper focuses on the classification of offensive tweets written in Hinglish language, which is a portmanteau of the Indic language Hindi with the Roman script. The paper introduces a novel tweet dataset, titled Hindi-English Offensive Tweet (HEOT) dataset, consisting of tweets in Hindi-English code switched language split into three classes: nonoffensive, abusive and hate-speech. Further, we approach the problem of classification of the tweets in HEOT dataset using transfer learning wherein the proposed model employing Convolutional Neural Networks is pre-trained on tweets in English followed by retraining on Hinglish tweets.",,"Detecting Offensive Tweets in Hindi-English Code-Switched Language. The exponential rise of social media websites like Twitter, Facebook and Reddit in linguistically diverse geographical regions has led to hybridization of popular native languages with English in an effort to ease communication. The paper focuses on the classification of offensive tweets written in Hinglish language, which is a portmanteau of the Indic language Hindi with the Roman script. 
The paper introduces a novel tweet dataset, titled the Hindi-English Offensive Tweet (HEOT) dataset, consisting of tweets in Hindi-English code-switched language split into three classes: non-offensive, abusive and hate-speech. Further, we approach the problem of classification of the tweets in the HEOT dataset using transfer learning wherein the proposed model employing Convolutional Neural Networks is pre-trained on tweets in English followed by retraining on Hinglish tweets.",2018
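Editor's note: as a hedged sketch of the transfer-learning recipe above (pre-train a convolutional text classifier on English tweets, then continue training on Hinglish tweets), the following minimal PyTorch model illustrates the idea. The hashing "tokenizer", dimensions, and data are placeholder assumptions, not the paper's architecture or corpus.

# Hedged sketch: a small CNN text classifier pre-trained on English tweets and
# then fine-tuned on Hinglish tweets. Tokenization, sizes, and data are
# placeholder assumptions for illustration only.
import torch
import torch.nn as nn

class TweetCNN(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=64, num_classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, 100, kernel_size=3, padding=1)
        self.fc = nn.Linear(100, num_classes)

    def forward(self, token_ids):                       # (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)         # (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x)).max(dim=2).values  # max-over-time pooling
        return self.fc(x)

def encode(text, vocab_size=5000, seq_len=20):          # toy hashing tokenizer
    ids = [hash(tok) % vocab_size for tok in text.lower().split()][:seq_len]
    return torch.tensor(ids + [0] * (seq_len - len(ids)))

def train_step(model, texts, labels, opt, loss_fn):
    opt.zero_grad()
    logits = model(torch.stack([encode(t) for t in texts]))
    loss = loss_fn(logits, torch.tensor(labels))
    loss.backward()
    opt.step()

model = TweetCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
train_step(model, ["an english tweet", "another english tweet"], [0, 1], opt, loss_fn)   # pre-train on English
train_step(model, ["ek hinglish tweet yahan", "dusra tweet"], [2, 0], opt, loss_fn)      # retrain on Hinglish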
kallgren-1996-linguistic,https://aclanthology.org/C96-2114,0,,,,,,,"Linguistic Indeterminacy as a Source of Errors in Tagging. Most evaluations of part-of-speech tagging compare the output of an automatic tagger to some established standard, define the differences as tagging errors and try to remedy them by, e.g., more training of the tagger. The present article is based on a manual analysis of a large number of tagging errors. Some clear patterns among the errors can be discerned, and the sources of the errors as well as possible alternative methods of remedy are presented and discussed. In particular, the problems with undecidable cases are treated.",Linguistic Indeterminacy as a Source of Errors in Tagging,"Most evaluations of part-of-speech tagging compare the output of an automatic tagger to some established standard, define the differences as tagging errors and try to remedy them by, e.g., more training of the tagger. The present article is based on a manual analysis of a large number of tagging errors. Some clear patterns among the errors can be discerned, and the sources of the errors as well as possible alternative methods of remedy are presented and discussed. In particular, the problems with undecidable cases are treated.",Linguistic Indeterminacy as a Source of Errors in Tagging,"Most evaluations of part-of-speech tagging compare the output of an automatic tagger to some established standard, define the differences as tagging errors and try to remedy them by, e.g., more training of the tagger. The present article is based on a manual analysis of a large number of tagging errors. Some clear patterns among the errors can be discerned, and the sources of the errors as well as possible alternative methods of remedy are presented and discussed. In particular, the problems with undecidable cases are treated.",,"Linguistic Indeterminacy as a Source of Errors in Tagging. Most evaluations of part-of-speech tagging compare the output of an automatic tagger to some established standard, define the differences as tagging errors and try to remedy them by, e.g., more training of the tagger. The present article is based on a manual analysis of a large number of tagging errors. Some clear patterns among the errors can be discerned, and the sources of the errors as well as possible alternative methods of remedy are presented and discussed. In particular, the problems with undecidable cases are treated.",1996
singha-roy-mercer-2022-biocite,https://aclanthology.org/2022.bionlp-1.23,1,,,,health,,,"BioCite: A Deep Learning-based Citation Linkage Framework for Biomedical Research Articles. Research papers reflect scientific advances. Citations are widely used in research publications to support the new findings and show their benefits, while also regulating the information flow to make the contents clearer for the audience. A citation in a research article refers to the information's source, but not the specific text span from that source article. In biomedical research articles, this task is challenging as the same chemical or biological component can be represented in multiple ways in different papers from various domains. This paper suggests a mechanism for linking citing sentences in a publication with cited sentences in referenced sources. The framework presented here pairs the citing sentence with all of the sentences in the reference text, and then tries to retrieve the semantically equivalent pairs. These semantically related sentences from the reference paper are chosen as the cited statements. This effort involves designing a citation linkage framework utilizing sequential and tree-structured siamese deep learning models. This paper also provides a method to create an automatically generated corpus for such a task.",{B}io{C}ite: A Deep Learning-based Citation Linkage Framework for Biomedical Research Articles,"Research papers reflect scientific advances. Citations are widely used in research publications to support the new findings and show their benefits, while also regulating the information flow to make the contents clearer for the audience. A citation in a research article refers to the information's source, but not the specific text span from that source article. In biomedical research articles, this task is challenging as the same chemical or biological component can be represented in multiple ways in different papers from various domains. This paper suggests a mechanism for linking citing sentences in a publication with cited sentences in referenced sources. The framework presented here pairs the citing sentence with all of the sentences in the reference text, and then tries to retrieve the semantically equivalent pairs. These semantically related sentences from the reference paper are chosen as the cited statements. This effort involves designing a citation linkage framework utilizing sequential and tree-structured siamese deep learning models. This paper also provides a method to create an automatically generated corpus for such a task.",BioCite: A Deep Learning-based Citation Linkage Framework for Biomedical Research Articles,"Research papers reflect scientific advances. Citations are widely used in research publications to support the new findings and show their benefits, while also regulating the information flow to make the contents clearer for the audience. A citation in a research article refers to the information's source, but not the specific text span from that source article. In biomedical research articles, this task is challenging as the same chemical or biological component can be represented in multiple ways in different papers from various domains. This paper suggests a mechanism for linking citing sentences in a publication with cited sentences in referenced sources. The framework presented here pairs the citing sentence with all of the sentences in the reference text, and then tries to retrieve the semantically equivalent pairs. 
These semantically related sentences from the reference paper are chosen as the cited statements. This effort involves designing a citation linkage framework utilizing sequential and tree-structured siamese deep learning models. This paper also provides a method to create an automatically generated corpus for such a task.",,"BioCite: A Deep Learning-based Citation Linkage Framework for Biomedical Research Articles. Research papers reflect scientific advances. Citations are widely used in research publications to support the new findings and show their benefits, while also regulating the information flow to make the contents clearer for the audience. A citation in a research article refers to the information's source, but not the specific text span from that source article. In biomedical research articles, this task is challenging as the same chemical or biological component can be represented in multiple ways in different papers from various domains. This paper suggests a mechanism for linking citing sentences in a publication with cited sentences in referenced sources. The framework presented here pairs the citing sentence with all of the sentences in the reference text, and then tries to retrieve the semantically equivalent pairs. These semantically related sentences from the reference paper are chosen as the cited statements. This effort involves designing a citation linkage framework utilizing sequential and tree-structured siamese deep learning models. This paper also provides a method to create an automatically generated corpus for such a task.",2022
noble-etal-2021-semantic,https://aclanthology.org/2021.starsem-1.3,0,,,,,,,"Semantic shift in social networks. Just as the meaning of words is tied to the communities in which they are used, so too is semantic change. But how does lexical semantic change manifest differently across different communities? In this work, we investigate the relationship between community structure and semantic change in 45 communities from the social media website Reddit. We use distributional methods to quantify lexical semantic change and induce a social network on communities, based on interactions between members. We explore the relationship between semantic change and the clustering coefficient of a community's social network graph, as well as community size and stability. While none of these factors are found to be significant on their own, we report a significant effect of their three-way interaction. We also report on significant wordlevel effects of frequency and change in frequency, which replicate previous findings.",Semantic shift in social networks,"Just as the meaning of words is tied to the communities in which they are used, so too is semantic change. But how does lexical semantic change manifest differently across different communities? In this work, we investigate the relationship between community structure and semantic change in 45 communities from the social media website Reddit. We use distributional methods to quantify lexical semantic change and induce a social network on communities, based on interactions between members. We explore the relationship between semantic change and the clustering coefficient of a community's social network graph, as well as community size and stability. While none of these factors are found to be significant on their own, we report a significant effect of their three-way interaction. We also report on significant wordlevel effects of frequency and change in frequency, which replicate previous findings.",Semantic shift in social networks,"Just as the meaning of words is tied to the communities in which they are used, so too is semantic change. But how does lexical semantic change manifest differently across different communities? In this work, we investigate the relationship between community structure and semantic change in 45 communities from the social media website Reddit. We use distributional methods to quantify lexical semantic change and induce a social network on communities, based on interactions between members. We explore the relationship between semantic change and the clustering coefficient of a community's social network graph, as well as community size and stability. While none of these factors are found to be significant on their own, we report a significant effect of their three-way interaction. We also report on significant wordlevel effects of frequency and change in frequency, which replicate previous findings.",This work was supported by grant 2014-39 from the Swedish Research Council (VR) for the establishment of the Centre for Linguistic Theory and Studies in Probability (CLASP) at the University of Gothenburg. This work was also supported by the Marianne and Marcus Wallenberg Foundation grant 2019.0214 for the Gothenburg Research Initiative for Politically Emergent Systems (GRIPES). This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 819455 Awarded to RF).,"Semantic shift in social networks. 
Just as the meaning of words is tied to the communities in which they are used, so too is semantic change. But how does lexical semantic change manifest differently across different communities? In this work, we investigate the relationship between community structure and semantic change in 45 communities from the social media website Reddit. We use distributional methods to quantify lexical semantic change and induce a social network on communities, based on interactions between members. We explore the relationship between semantic change and the clustering coefficient of a community's social network graph, as well as community size and stability. While none of these factors are found to be significant on their own, we report a significant effect of their three-way interaction. We also report on significant wordlevel effects of frequency and change in frequency, which replicate previous findings.",2021
himoro-pareja-lora-2020-towards,https://aclanthology.org/2020.lrec-1.327,0,,,,,,,"Towards a Spell Checker for Zamboanga Chavacano Orthography. Zamboanga Chabacano (ZC) is the most vibrant variety of Philippine Creole Spanish, with over 400,000 native speakers in the Philippines (as of 2010). Following its introduction as a subject and a medium of instruction in the public schools of Zamboanga City from Grade 1 to 3 in 2012, an official orthography for this variety-the so-called ""Zamboanga Chavacano Orthography""-has been approved in 2014. Its complexity, however, is a barrier to most speakers, since it does not necessarily reflect the particular phonetic evolution in ZC, but favours etymology instead. The distance between the correct spelling and the different spelling variations is often so great that delivering acceptable performance with the current de facto spell checking technologies may be challenging. The goals of this research is to propose i) a spelling error taxonomy for ZC, formalised as an ontology and ii) an adaptive spell checking approach using Character-Based Statistical Machine Translation to correct spelling errors in ZC. Our results show that this approach is suitable for the goals mentioned and that it could be combined with other current spell checking technologies to achieve even higher performance.",Towards a Spell Checker for {Z}amboanga {C}havacano Orthography,"Zamboanga Chabacano (ZC) is the most vibrant variety of Philippine Creole Spanish, with over 400,000 native speakers in the Philippines (as of 2010). Following its introduction as a subject and a medium of instruction in the public schools of Zamboanga City from Grade 1 to 3 in 2012, an official orthography for this variety-the so-called ""Zamboanga Chavacano Orthography""-has been approved in 2014. Its complexity, however, is a barrier to most speakers, since it does not necessarily reflect the particular phonetic evolution in ZC, but favours etymology instead. The distance between the correct spelling and the different spelling variations is often so great that delivering acceptable performance with the current de facto spell checking technologies may be challenging. The goals of this research is to propose i) a spelling error taxonomy for ZC, formalised as an ontology and ii) an adaptive spell checking approach using Character-Based Statistical Machine Translation to correct spelling errors in ZC. Our results show that this approach is suitable for the goals mentioned and that it could be combined with other current spell checking technologies to achieve even higher performance.",Towards a Spell Checker for Zamboanga Chavacano Orthography,"Zamboanga Chabacano (ZC) is the most vibrant variety of Philippine Creole Spanish, with over 400,000 native speakers in the Philippines (as of 2010). Following its introduction as a subject and a medium of instruction in the public schools of Zamboanga City from Grade 1 to 3 in 2012, an official orthography for this variety-the so-called ""Zamboanga Chavacano Orthography""-has been approved in 2014. Its complexity, however, is a barrier to most speakers, since it does not necessarily reflect the particular phonetic evolution in ZC, but favours etymology instead. The distance between the correct spelling and the different spelling variations is often so great that delivering acceptable performance with the current de facto spell checking technologies may be challenging. 
The goals of this research is to propose i) a spelling error taxonomy for ZC, formalised as an ontology and ii) an adaptive spell checking approach using Character-Based Statistical Machine Translation to correct spelling errors in ZC. Our results show that this approach is suitable for the goals mentioned and that it could be combined with other current spell checking technologies to achieve even higher performance.",,"Towards a Spell Checker for Zamboanga Chavacano Orthography. Zamboanga Chabacano (ZC) is the most vibrant variety of Philippine Creole Spanish, with over 400,000 native speakers in the Philippines (as of 2010). Following its introduction as a subject and a medium of instruction in the public schools of Zamboanga City from Grade 1 to 3 in 2012, an official orthography for this variety-the so-called ""Zamboanga Chavacano Orthography""-has been approved in 2014. Its complexity, however, is a barrier to most speakers, since it does not necessarily reflect the particular phonetic evolution in ZC, but favours etymology instead. The distance between the correct spelling and the different spelling variations is often so great that delivering acceptable performance with the current de facto spell checking technologies may be challenging. The goals of this research is to propose i) a spelling error taxonomy for ZC, formalised as an ontology and ii) an adaptive spell checking approach using Character-Based Statistical Machine Translation to correct spelling errors in ZC. Our results show that this approach is suitable for the goals mentioned and that it could be combined with other current spell checking technologies to achieve even higher performance.",2020
edmundson-1963-behavior,https://aclanthology.org/1963.earlymt-1.9,0,,,,,,,The behavior of English articles. ,The behavior of {E}nglish articles,,The behavior of English articles,,,The behavior of English articles. ,1963
skjaerholt-2014-chance,https://aclanthology.org/P14-1088,0,,,,,,,"A chance-corrected measure of inter-annotator agreement for syntax. Following the works of Carletta (1996) and Artstein and Poesio (2008), there is an increasing consensus within the field that in order to properly gauge the reliability of an annotation effort, chance-corrected measures of inter-annotator agreement should be used. With this in mind, it is striking that virtually all evaluations of syntactic annotation efforts use uncorrected parser evaluation metrics such as bracket F1 (for phrase structure) and accuracy scores (for dependencies). In this work we present a chance-corrected metric based on Krippendorff's α, adapted to the structure of syntactic annotations and applicable both to phrase structure and dependency annotation without any modifications. To evaluate our metric we first present a number of synthetic experiments to better control the sources of noise and gauge the metric's responses, before finally contrasting the behaviour of our chance-corrected metric with that of uncorrected parser evaluation metrics on real corpora.",A chance-corrected measure of inter-annotator agreement for syntax,"Following the works of Carletta (1996) and Artstein and Poesio (2008), there is an increasing consensus within the field that in order to properly gauge the reliability of an annotation effort, chance-corrected measures of inter-annotator agreement should be used. With this in mind, it is striking that virtually all evaluations of syntactic annotation efforts use uncorrected parser evaluation metrics such as bracket F1 (for phrase structure) and accuracy scores (for dependencies). In this work we present a chance-corrected metric based on Krippendorff's α, adapted to the structure of syntactic annotations and applicable both to phrase structure and dependency annotation without any modifications. To evaluate our metric we first present a number of synthetic experiments to better control the sources of noise and gauge the metric's responses, before finally contrasting the behaviour of our chance-corrected metric with that of uncorrected parser evaluation metrics on real corpora.",A chance-corrected measure of inter-annotator agreement for syntax,"Following the works of Carletta (1996) and Artstein and Poesio (2008), there is an increasing consensus within the field that in order to properly gauge the reliability of an annotation effort, chance-corrected measures of inter-annotator agreement should be used. With this in mind, it is striking that virtually all evaluations of syntactic annotation efforts use uncorrected parser evaluation metrics such as bracket F1 (for phrase structure) and accuracy scores (for dependencies). In this work we present a chance-corrected metric based on Krippendorff's α, adapted to the structure of syntactic annotations and applicable both to phrase structure and dependency annotation without any modifications. To evaluate our metric we first present a number of synthetic experiments to better control the sources of noise and gauge the metric's responses, before finally contrasting the behaviour of our chance-corrected metric with that of uncorrected parser evaluation metrics on real corpora.","I would like to thank Jan Štěpánek at Charles University for data from the PCEDT and help with the conversion process, the CDT project for publishing their agreement data, Per Erik Solberg at the Norwegian National Library for data from the NDT, and Emily Bender at the University of Washington for the SSD data. The Python implementation used in this work, using NumPy and the PyPy compiler, took seven and a half hours to compute a single α for the PCEDT data set on an Intel Core i7 2.9 GHz computer. The program is single-threaded.","A chance-corrected measure of inter-annotator agreement for syntax. Following the works of Carletta (1996) and Artstein and Poesio (2008), there is an increasing consensus within the field that in order to properly gauge the reliability of an annotation effort, chance-corrected measures of inter-annotator agreement should be used. With this in mind, it is striking that virtually all evaluations of syntactic annotation efforts use uncorrected parser evaluation metrics such as bracket F1 (for phrase structure) and accuracy scores (for dependencies). In this work we present a chance-corrected metric based on Krippendorff's α, adapted to the structure of syntactic annotations and applicable both to phrase structure and dependency annotation without any modifications. To evaluate our metric we first present a number of synthetic experiments to better control the sources of noise and gauge the metric's responses, before finally contrasting the behaviour of our chance-corrected metric with that of uncorrected parser evaluation metrics on real corpora.",2014
zarriess-etal-2016-pentoref,https://aclanthology.org/L16-1019,0,,,,,,,"PentoRef: A Corpus of Spoken References in Task-oriented Dialogues. PentoRef is a corpus of task-oriented dialogues collected in systematically manipulated settings. The corpus is multilingual, with English and German sections, and overall comprises more than 20000 utterances. The dialogues are fully transcribed and annotated with referring expressions mapped to objects in corresponding visual scenes, which makes the corpus a rich resource for research on spoken referring expressions in generation and resolution. The corpus includes several sub-corpora that correspond to different dialogue situations where parameters related to interactivity, visual access, and verbal channel have been manipulated in systematic ways. The corpus thus lends itself to very targeted studies of reference in spontaneous dialogue.",{P}ento{R}ef: A Corpus of Spoken References in Task-oriented Dialogues,"PentoRef is a corpus of task-oriented dialogues collected in systematically manipulated settings. The corpus is multilingual, with English and German sections, and overall comprises more than 20000 utterances. The dialogues are fully transcribed and annotated with referring expressions mapped to objects in corresponding visual scenes, which makes the corpus a rich resource for research on spoken referring expressions in generation and resolution. The corpus includes several sub-corpora that correspond to different dialogue situations where parameters related to interactivity, visual access, and verbal channel have been manipulated in systematic ways. The corpus thus lends itself to very targeted studies of reference in spontaneous dialogue.",PentoRef: A Corpus of Spoken References in Task-oriented Dialogues,"PentoRef is a corpus of task-oriented dialogues collected in systematically manipulated settings. The corpus is multilingual, with English and German sections, and overall comprises more than 20000 utterances. The dialogues are fully transcribed and annotated with referring expressions mapped to objects in corresponding visual scenes, which makes the corpus a rich resource for research on spoken referring expressions in generation and resolution. The corpus includes several sub-corpora that correspond to different dialogue situations where parameters related to interactivity, visual access, and verbal channel have been manipulated in systematic ways. The corpus thus lends itself to very targeted studies of reference in spontaneous dialogue.",This work was supported by the German Research Foundation (DFG) through the Cluster of Excellence Cognitive Interaction Technology 'CITEC' (EXC 277) at Bielefeld University and the DUEL project (grant SCHL 845/5-1).,"PentoRef: A Corpus of Spoken References in Task-oriented Dialogues. PentoRef is a corpus of task-oriented dialogues collected in systematically manipulated settings. The corpus is multilingual, with English and German sections, and overall comprises more than 20000 utterances. The dialogues are fully transcribed and annotated with referring expressions mapped to objects in corresponding visual scenes, which makes the corpus a rich resource for research on spoken referring expressions in generation and resolution. The corpus includes several sub-corpora that correspond to different dialogue situations where parameters related to interactivity, visual access, and verbal channel have been manipulated in systematic ways. 
The corpus thus lends itself to very targeted studies of reference in spontaneous dialogue.",2016
xu-etal-2020-volctrans,https://aclanthology.org/2020.wmt-1.112,0,,,,,,,"Volctrans Parallel Corpus Filtering System for WMT 2020. In this paper, we describe our submissions to the WMT20 shared task on parallel corpus filtering and alignment for low-resource conditions. The task requires the participants to align potential parallel sentence pairs out of the given document pairs, and score them so that low-quality pairs can be filtered. Our system, Volctrans, is made of two modules, i.e., a mining module and a scoring module. Based on the word alignment model, the mining module adopts an iterative mining strategy to extract latent parallel sentences. In the scoring module, an XLM-based scorer provides scores, followed by reranking mechanisms and ensemble. Our submissions outperform the baseline by 3.x/2.x and 2.x/2.x for km-en and ps-en on From Scratch/Fine-Tune conditions.",Volctrans Parallel Corpus Filtering System for {WMT} 2020,"In this paper, we describe our submissions to the WMT20 shared task on parallel corpus filtering and alignment for low-resource conditions. The task requires the participants to align potential parallel sentence pairs out of the given document pairs, and score them so that low-quality pairs can be filtered. Our system, Volctrans, is made of two modules, i.e., a mining module and a scoring module. Based on the word alignment model, the mining module adopts an iterative mining strategy to extract latent parallel sentences. In the scoring module, an XLM-based scorer provides scores, followed by reranking mechanisms and ensemble. Our submissions outperform the baseline by 3.x/2.x and 2.x/2.x for km-en and ps-en on From Scratch/Fine-Tune conditions.",Volctrans Parallel Corpus Filtering System for WMT 2020,"In this paper, we describe our submissions to the WMT20 shared task on parallel corpus filtering and alignment for low-resource conditions. The task requires the participants to align potential parallel sentence pairs out of the given document pairs, and score them so that low-quality pairs can be filtered. Our system, Volctrans, is made of two modules, i.e., a mining module and a scoring module. Based on the word alignment model, the mining module adopts an iterative mining strategy to extract latent parallel sentences. In the scoring module, an XLM-based scorer provides scores, followed by reranking mechanisms and ensemble. Our submissions outperform the baseline by 3.x/2.x and 2.x/2.x for km-en and ps-en on From Scratch/Fine-Tune conditions.",,"Volctrans Parallel Corpus Filtering System for WMT 2020. In this paper, we describe our submissions to the WMT20 shared task on parallel corpus filtering and alignment for low-resource conditions. The task requires the participants to align potential parallel sentence pairs out of the given document pairs, and score them so that low-quality pairs can be filtered. Our system, Volctrans, is made of two modules, i.e., a mining module and a scoring module. Based on the word alignment model, the mining module adopts an iterative mining strategy to extract latent parallel sentences. In the scoring module, an XLM-based scorer provides scores, followed by reranking mechanisms and ensemble. Our submissions outperform the baseline by 3.x/2.x and 2.x/2.x for km-en and ps-en on From Scratch/Fine-Tune conditions.",2020
trotta-etal-2020-adding,https://aclanthology.org/2020.lrec-1.532,1,,,,peace_justice_and_strong_institutions,,,"Adding Gesture, Posture and Facial Displays to the PoliModal Corpus of Political Interviews. This paper introduces a multimodal corpus in the political domain, which on top of transcribed face-to-face interviews presents the annotation of facial displays, hand gestures and body posture. While the fully annotated corpus consists of 3 interviews for a total of 120 minutes, it is extracted from a larger available corpus of 56 face-to-face interviews (14 hours) that has been manually annotated with information about metadata (i.e. tools used for the transcription, link to the interview etc.), pauses (used to mark a pause either between or within utterances), vocal expressions (marking non-lexical expressions such as burp and semi-lexical expressions such as primary interjections), deletions (false starts, repetitions and truncated words) and overlaps. In this work, we describe the additional level of annotation relating to non-verbal elements used by three Italian politicians belonging to three different political parties and who at the time of the talk-show were all candidates for the presidency of the Council of Ministers. We also present the results of some analyses aimed at identifying existing relations between the proxemic phenomena and the linguistic structures in which they occur, in order to capture recurring patterns and differences in the communication strategy.","Adding Gesture, Posture and Facial Displays to the {P}oli{M}odal Corpus of Political Interviews","This paper introduces a multimodal corpus in the political domain, which on top of transcribed face-to-face interviews presents the annotation of facial displays, hand gestures and body posture. While the fully annotated corpus consists of 3 interviews for a total of 120 minutes, it is extracted from a larger available corpus of 56 face-to-face interviews (14 hours) that has been manually annotated with information about metadata (i.e. tools used for the transcription, link to the interview etc.), pauses (used to mark a pause either between or within utterances), vocal expressions (marking non-lexical expressions such as burp and semi-lexical expressions such as primary interjections), deletions (false starts, repetitions and truncated words) and overlaps. In this work, we describe the additional level of annotation relating to non-verbal elements used by three Italian politicians belonging to three different political parties and who at the time of the talk-show were all candidates for the presidency of the Council of Ministers. We also present the results of some analyses aimed at identifying existing relations between the proxemic phenomena and the linguistic structures in which they occur, in order to capture recurring patterns and differences in the communication strategy.","Adding Gesture, Posture and Facial Displays to the PoliModal Corpus of Political Interviews","This paper introduces a multimodal corpus in the political domain, which on top of transcribed face-to-face interviews presents the annotation of facial displays, hand gestures and body posture. While the fully annotated corpus consists of 3 interviews for a total of 120 minutes, it is extracted from a larger available corpus of 56 face-to-face interviews (14 hours) that has been manually annotated with information about metadata (i.e. 
tools used for the transcription, link to the interview etc.), pauses (used to mark a pause either between or within utterances), vocal expressions (marking non-lexical expressions such as burp and semi-lexical expressions such as primary interjections), deletions (false starts, repetitions and truncated words) and overlaps. In this work, we describe the additional level of annotation relating to non-verbal elements used by three Italian politicians belonging to three different political parties and who at the time of the talk-show were all candidates for the presidency of the Council of Ministers. We also present the results of some analyses aimed at identifying existing relations between the proxemic phenomena and the linguistic structures in which they occur, in order to capture recurring patterns and differences in the communication strategy.",,"Adding Gesture, Posture and Facial Displays to the PoliModal Corpus of Political Interviews. This paper introduces a multimodal corpus in the political domain, which on top of transcribed face-to-face interviews presents the annotation of facial displays, hand gestures and body posture. While the fully annotated corpus consists of 3 interviews for a total of 120 minutes, it is extracted from a larger available corpus of 56 face-to-face interviews (14 hours) that has been manually annotated with information about metadata (i.e. tools used for the transcription, link to the interview etc.), pauses (used to mark a pause either between or within utterances), vocal expressions (marking non-lexical expressions such as burp and semi-lexical expressions such as primary interjections), deletions (false starts, repetitions and truncated words) and overlaps. In this work, we describe the additional level of annotation relating to non-verbal elements used by three Italian politicians belonging to three different political parties and who at the time of the talk-show were all candidates for the presidency of the Council of Ministers. We also present the results of some analyses aimed at identifying existing relations between the proxemic phenomena and the linguistic structures in which they occur, in order to capture recurring patterns and differences in the communication strategy.",2020
voutilainen-1994-noun,https://aclanthology.org/W93-0426,0,,,,,,,A Noun Phrase Parser of English. An accurate rule-based noun phrase parser of English is described. Special attention is given to the linguistic description. A report on a performance test concludes the paper.,A Noun Phrase Parser of {E}nglish,An accurate rule-based noun phrase parser of English is described. Special attention is given to the linguistic description. A report on a performance test concludes the paper.,A Noun Phrase Parser of English,An accurate rule-based noun phrase parser of English is described. Special attention is given to the linguistic description. A report on a performance test concludes the paper.,,A Noun Phrase Parser of English. An accurate rule-based noun phrase parser of English is described. Special attention is given to the linguistic description. A report on a performance test concludes the paper.,1994
pei-feng-2006-representation,https://aclanthology.org/Y06-1063,0,,,,,,,"Representation of Original Sense of Chinese Characters by FOPC. In Natural Language Processing (NLP), the automatic analysis of meaning occupies a very important position. The representation of original sense of Chinese character plays an irreplaceable role in the processing of advanced units of Chinese language such as the processing of syntax and semantics, etc. This paper, by introducing a few important concepts: FOPC, Ontology and Case Grammar, discusses the representation of original sense of Chinese character.",Representation of Original Sense of {C}hinese Characters by {FOPC},"In Natural Language Processing (NLP), the automatic analysis of meaning occupies a very important position. The representation of original sense of Chinese character plays an irreplaceable role in the processing of advanced units of Chinese language such as the processing of syntax and semantics, etc. This paper, by introducing a few important concepts: FOPC, Ontology and Case Grammar, discusses the representation of original sense of Chinese character.",Representation of Original Sense of Chinese Characters by FOPC,"In Natural Language Processing (NLP), the automatic analysis of meaning occupies a very important position. The representation of original sense of Chinese character plays an irreplaceable role in the processing of advanced units of Chinese language such as the processing of syntax and semantics, etc. This paper, by introducing a few important concepts: FOPC, Ontology and Case Grammar, discusses the representation of original sense of Chinese character.",,"Representation of Original Sense of Chinese Characters by FOPC. In Natural Language Processing (NLP), the automatic analysis of meaning occupies a very important position. The representation of original sense of Chinese character plays an irreplaceable role in the processing of advanced units of Chinese language such as the processing of syntax and semantics, etc. This paper, by introducing a few important concepts: FOPC, Ontology and Case Grammar, discusses the representation of original sense of Chinese character.",2006
desai-etal-2015-logistic,https://aclanthology.org/W15-5931,0,,,,,,,Logistic Regression for Automatic Lexical Level Morphological Paradigm Selection for Konkani Nouns. Automatic selection of morphological paradigm for a noun lemma is necessary to automate the task of building morphological analyzer for nouns with minimal human interventions. Morphological paradigms can be of two types namely surface level morphological paradigms and lexical level morphological paradigms. In this paper we present a method to automatically select lexical level morphological paradigms for Konkani nouns. Using the proposed concept of paradigm differentiating measure to generate a training data set we found that logistic regression can be used to automatically select lexical level morphological paradigms with an F-Score of 0.957.,Logistic Regression for Automatic Lexical Level Morphological Paradigm Selection for {K}onkani Nouns,Automatic selection of morphological paradigm for a noun lemma is necessary to automate the task of building morphological analyzer for nouns with minimal human interventions. Morphological paradigms can be of two types namely surface level morphological paradigms and lexical level morphological paradigms. In this paper we present a method to automatically select lexical level morphological paradigms for Konkani nouns. Using the proposed concept of paradigm differentiating measure to generate a training data set we found that logistic regression can be used to automatically select lexical level morphological paradigms with an F-Score of 0.957.,Logistic Regression for Automatic Lexical Level Morphological Paradigm Selection for Konkani Nouns,Automatic selection of morphological paradigm for a noun lemma is necessary to automate the task of building morphological analyzer for nouns with minimal human interventions. Morphological paradigms can be of two types namely surface level morphological paradigms and lexical level morphological paradigms. In this paper we present a method to automatically select lexical level morphological paradigms for Konkani nouns. Using the proposed concept of paradigm differentiating measure to generate a training data set we found that logistic regression can be used to automatically select lexical level morphological paradigms with an F-Score of 0.957.,,Logistic Regression for Automatic Lexical Level Morphological Paradigm Selection for Konkani Nouns. Automatic selection of morphological paradigm for a noun lemma is necessary to automate the task of building morphological analyzer for nouns with minimal human interventions. Morphological paradigms can be of two types namely surface level morphological paradigms and lexical level morphological paradigms. In this paper we present a method to automatically select lexical level morphological paradigms for Konkani nouns. Using the proposed concept of paradigm differentiating measure to generate a training data set we found that logistic regression can be used to automatically select lexical level morphological paradigms with an F-Score of 0.957.,2015
yousef-etal-2021-press,https://aclanthology.org/2021.emnlp-demo.18,1,,,,peace_justice_and_strong_institutions,,,"Press Freedom Monitor: Detection of Reported Press and Media Freedom Violations in Twitter and News Articles. Freedom of the press and media is of vital importance for democratically organised states and open societies. We introduce the Press Freedom Monitor, a tool that aims to detect reported press and media freedom violations in news articles and tweets. It is used by press and media freedom organisations to support their daily monitoring and to trigger rapid response actions. The Press Freedom Monitor enables the monitoring experts to get a swift overview of recently reported incidents and it has performed impressively in this regard. This paper presents our work on the tool, starting with the training phase, which comprises defining the topic-related keywords to be used for querying APIs for news and Twitter content and evaluating different machine learning models based on a training dataset specifically created for our use case. Then, we describe the components of the production pipeline, including data gathering, duplicates removal, country mapping, case mapping and the user interface. We also conducted a usability study to evaluate the effectiveness of the user interface, and describe improvement plans for future work.",Press Freedom Monitor: Detection of Reported Press and Media Freedom Violations in {T}witter and News Articles,"Freedom of the press and media is of vital importance for democratically organised states and open societies. We introduce the Press Freedom Monitor, a tool that aims to detect reported press and media freedom violations in news articles and tweets. It is used by press and media freedom organisations to support their daily monitoring and to trigger rapid response actions. The Press Freedom Monitor enables the monitoring experts to get a swift overview of recently reported incidents and it has performed impressively in this regard. This paper presents our work on the tool, starting with the training phase, which comprises defining the topic-related keywords to be used for querying APIs for news and Twitter content and evaluating different machine learning models based on a training dataset specifically created for our use case. Then, we describe the components of the production pipeline, including data gathering, duplicates removal, country mapping, case mapping and the user interface. We also conducted a usability study to evaluate the effectiveness of the user interface, and describe improvement plans for future work.",Press Freedom Monitor: Detection of Reported Press and Media Freedom Violations in Twitter and News Articles,"Freedom of the press and media is of vital importance for democratically organised states and open societies. We introduce the Press Freedom Monitor, a tool that aims to detect reported press and media freedom violations in news articles and tweets. It is used by press and media freedom organisations to support their daily monitoring and to trigger rapid response actions. The Press Freedom Monitor enables the monitoring experts to get a swift overview of recently reported incidents and it has performed impressively in this regard. 
This paper presents our work on the tool, starting with the training phase, which comprises defining the topic-related keywords to be used for querying APIs for news and Twitter content and evaluating different machine learning models based on a training dataset specifically created for our use case. Then, we describe the components of the production pipeline, including data gathering, duplicates removal, country mapping, case mapping and the user interface. We also conducted a usability study to evaluate the effectiveness of the user interface, and describe improvement plans for future work.","This work is funded by the European Commission within the Media Freedom Rapid Response project and co-financed through public funding by the regional parliament of Saxony, Germany.","Press Freedom Monitor: Detection of Reported Press and Media Freedom Violations in Twitter and News Articles. Freedom of the press and media is of vital importance for democratically organised states and open societies. We introduce the Press Freedom Monitor, a tool that aims to detect reported press and media freedom violations in news articles and tweets. It is used by press and media freedom organisations to support their daily monitoring and to trigger rapid response actions. The Press Freedom Monitor enables the monitoring experts to get a swift overview of recently reported incidents and it has performed impressively in this regard. This paper presents our work on the tool, starting with the training phase, which comprises defining the topic-related keywords to be used for querying APIs for news and Twitter content and evaluating different machine learning models based on a training dataset specifically created for our use case. Then, we describe the components of the production pipeline, including data gathering, duplicates removal, country mapping, case mapping and the user interface. We also conducted a usability study to evaluate the effectiveness of the user interface, and describe improvement plans for future work.",2021
walker-etal-2018-evidence,https://aclanthology.org/W18-5209,1,,,,peace_justice_and_strong_institutions,,,"Evidence Types, Credibility Factors, and Patterns or Soft Rules for Weighing Conflicting Evidence: Argument Mining in the Context of Legal Rules Governing Evidence Assessment. This paper reports on the results of an empirical study of adjudicatory decisions about veterans' claims for disability benefits in the United States. It develops a typology of kinds of relevant evidence (argument premises) employed in cases, and it identifies factors that the tribunal considers when assessing the credibility or trustworthiness of individual items of evidence. It also reports on patterns or ""soft rules"" that the tribunal uses to comparatively weigh the probative value of conflicting evidence. These evidence types, credibility factors, and comparison patterns are developed to be inter-operable with legal rules governing the evidence assessment process in the U.S. This approach should be transferable to other legal and non-legal domains.","Evidence Types, Credibility Factors, and Patterns or Soft Rules for Weighing Conflicting Evidence: Argument Mining in the Context of Legal Rules Governing Evidence Assessment","This paper reports on the results of an empirical study of adjudicatory decisions about veterans' claims for disability benefits in the United States. It develops a typology of kinds of relevant evidence (argument premises) employed in cases, and it identifies factors that the tribunal considers when assessing the credibility or trustworthiness of individual items of evidence. It also reports on patterns or ""soft rules"" that the tribunal uses to comparatively weigh the probative value of conflicting evidence. These evidence types, credibility factors, and comparison patterns are developed to be inter-operable with legal rules governing the evidence assessment process in the U.S. This approach should be transferable to other legal and non-legal domains.","Evidence Types, Credibility Factors, and Patterns or Soft Rules for Weighing Conflicting Evidence: Argument Mining in the Context of Legal Rules Governing Evidence Assessment","This paper reports on the results of an empirical study of adjudicatory decisions about veterans' claims for disability benefits in the United States. It develops a typology of kinds of relevant evidence (argument premises) employed in cases, and it identifies factors that the tribunal considers when assessing the credibility or trustworthiness of individual items of evidence. It also reports on patterns or ""soft rules"" that the tribunal uses to comparatively weigh the probative value of conflicting evidence. These evidence types, credibility factors, and comparison patterns are developed to be inter-operable with legal rules governing the evidence assessment process in the U.S. This approach should be transferable to other legal and non-legal domains.","We are grateful to the peer reviewers for this paper, whose comments led to significant improvements. This research was generously supported by the Maurice A. Deane School of Law at Hofstra University, New York, USA.","Evidence Types, Credibility Factors, and Patterns or Soft Rules for Weighing Conflicting Evidence: Argument Mining in the Context of Legal Rules Governing Evidence Assessment. This paper reports on the results of an empirical study of adjudicatory decisions about veterans' claims for disability benefits in the United States. 
It develops a typology of kinds of relevant evidence (argument premises) employed in cases, and it identifies factors that the tribunal considers when assessing the credibility or trustworthiness of individual items of evidence. It also reports on patterns or ""soft rules"" that the tribunal uses to comparatively weigh the probative value of conflicting evidence. These evidence types, credibility factors, and comparison patterns are developed to be inter-operable with legal rules governing the evidence assessment process in the U.S. This approach should be transferable to other legal and non-legal domains.",2018
klein-1999-standardisation,https://aclanthology.org/W99-0305,0,,,,,,,"Standardisation Efforts on the Level of Dialogue Act in the MATE Project. This paper describes the state of the art of coding schemes for dialogue acts and the efforts to establish a standard in this field. We present a review and comparison of currently available schemes and outline the comparison problems we had due to domain, task, and language dependencies of schemes. We discuss solution strategies which have in mind the reusability of corpora. Reusability is a crucial point because production and annotation of corpora is very time and cost consuming but the current broad variety of schemes makes reusability of annotated corpora very hard. The work of this paper takes place in the framework of the European Union funded MATE project. MATE aims to develop general methodological guidelines for the creation, annotation, retrieval and analysis of annotated corpora.",Standardisation Efforts on the Level of Dialogue Act in the {MATE} Project,"This paper describes the state of the art of coding schemes for dialogue acts and the efforts to establish a standard in this field. We present a review and comparison of currently available schemes and outline the comparison problems we had due to domain, task, and language dependencies of schemes. We discuss solution strategies which have in mind the reusability of corpora. Reusability is a crucial point because production and annotation of corpora is very time and cost consuming but the current broad variety of schemes makes reusability of annotated corpora very hard. The work of this paper takes place in the framework of the European Union funded MATE project. MATE aims to develop general methodological guidelines for the creation, annotation, retrieval and analysis of annotated corpora.",Standardisation Efforts on the Level of Dialogue Act in the MATE Project,"This paper describes the state of the art of coding schemes for dialogue acts and the efforts to establish a standard in this field. We present a review and comparison of currently available schemes and outline the comparison problems we had due to domain, task, and language dependencies of schemes. We discuss solution strategies which have in mind the reusability of corpora. Reusability is a crucial point because production and annotation of corpora is very time and cost consuming but the current broad variety of schemes makes reusability of annotated corpora very hard. The work of this paper takes place in the framework of the European Union funded MATE project. MATE aims to develop general methodological guidelines for the creation, annotation, retrieval and analysis of annotated corpora.",The work described here is part of the European Union funded MATE LE Telematics Project LE4-8370.,"Standardisation Efforts on the Level of Dialogue Act in the MATE Project. This paper describes the state of the art of coding schemes for dialogue acts and the efforts to establish a standard in this field. We present a review and comparison of currently available schemes and outline the comparison problems we had due to domain, task, and language dependencies of schemes. We discuss solution strategies which have in mind the reusability of corpora. Reusability is a crucial point because production and annotation of corpora is very time and cost consuming but the current broad variety of schemes makes reusability of annotated corpora very hard. The work of this paper takes place in the framework of the European Union funded MATE project. 
MATE aims to develop general methodological guidelines for the creation, annotation, retrieval and analysis of annotated corpora.",1999
busemann-1997-automating,https://aclanthology.org/A97-2003,1,,,,industry_innovation_infrastructure,,,"Automating NL Appointment Scheduling with COSMA. Appointment scheduling is a problem faced daily by many individuals and organizations. Cooperating agent systems have been developed to partially automate this task. In order to extend the circle of participants as far as possible we advocate the use of natural language transmitted by email. We demonstrate COSMA, a fully implemented German language server for existing appointment scheduling agent systems. COSMA can cope with multiple dialogues in parallel, and accounts for differences in dialogue behaviour between human and machine agents.",Automating {NL} Appointment Scheduling with {COSMA},"Appointment scheduling is a problem faced daily by many individuals and organizations. Cooperating agent systems have been developed to partially automate this task. In order to extend the circle of participants as far as possible we advocate the use of natural language transmitted by email. We demonstrate COSMA, a fully implemented German language server for existing appointment scheduling agent systems. COSMA can cope with multiple dialogues in parallel, and accounts for differences in dialogue behaviour between human and machine agents.",Automating NL Appointment Scheduling with COSMA,"Appointment scheduling is a problem faced daily by many individuals and organizations. Cooperating agent systems have been developed to partially automate this task. In order to extend the circle of participants as far as possible we advocate the use of natural language transmitted by email. We demonstrate COSMA, a fully implemented German language server for existing appointment scheduling agent systems. COSMA can cope with multiple dialogues in parallel, and accounts for differences in dialogue behaviour between human and machine agents.","The following persons have contributed significantly to the development and the implementation of the NL server system and its components: Thierry Declerck, Abdel Kader Diagne, Luca Dini, Judith Klein, and Günter Neumann. The PASHA agent system has been developed and extended by Sven Schmeier.","Automating NL Appointment Scheduling with COSMA. Appointment scheduling is a problem faced daily by many individuals and organizations. Cooperating agent systems have been developed to partially automate this task. In order to extend the circle of participants as far as possible we advocate the use of natural language transmitted by email. We demonstrate COSMA, a fully implemented German language server for existing appointment scheduling agent systems. COSMA can cope with multiple dialogues in parallel, and accounts for differences in dialogue behaviour between human and machine agents.",1997
zhang-etal-2021-namer,https://aclanthology.org/2021.naacl-demos.3,0,,,,,,,"NAMER: A Node-Based Multitasking Framework for Multi-Hop Knowledge Base Question Answering. We present NAMER, an open-domain Chinese knowledge base question answering system based on a novel node-based framework that better grasps the structural mapping between questions and KB queries by aligning the nodes in a query with their corresponding mentions in question. Equipped with techniques including data augmentation and multitasking, we show that the proposed framework outperforms the previous SoTA on CCKS CKBQA dataset. Moreover, we develop a novel data annotation strategy that facilitates the node-tomention alignment, a dataset 1 with such strategy is also published to promote further research. An online demo of NAMER 2 is provided to visualize our framework and supply extra information for users, a video illustration 3 of NAMER is also available.",{NAMER}: A Node-Based Multitasking Framework for Multi-Hop Knowledge Base Question Answering,"We present NAMER, an open-domain Chinese knowledge base question answering system based on a novel node-based framework that better grasps the structural mapping between questions and KB queries by aligning the nodes in a query with their corresponding mentions in question. Equipped with techniques including data augmentation and multitasking, we show that the proposed framework outperforms the previous SoTA on CCKS CKBQA dataset. Moreover, we develop a novel data annotation strategy that facilitates the node-tomention alignment, a dataset 1 with such strategy is also published to promote further research. An online demo of NAMER 2 is provided to visualize our framework and supply extra information for users, a video illustration 3 of NAMER is also available.",NAMER: A Node-Based Multitasking Framework for Multi-Hop Knowledge Base Question Answering,"We present NAMER, an open-domain Chinese knowledge base question answering system based on a novel node-based framework that better grasps the structural mapping between questions and KB queries by aligning the nodes in a query with their corresponding mentions in question. Equipped with techniques including data augmentation and multitasking, we show that the proposed framework outperforms the previous SoTA on CCKS CKBQA dataset. Moreover, we develop a novel data annotation strategy that facilitates the node-tomention alignment, a dataset 1 with such strategy is also published to promote further research. An online demo of NAMER 2 is provided to visualize our framework and supply extra information for users, a video illustration 3 of NAMER is also available.","We would like to thank Yanzeng Li and Wenjie Li for the valuable assistance on system design and implementation. We also appreciate anonymous reviewers for their insightful and constructive comments. This work was supported by NSFC under grants 61932001, 61961130390, U20A20174. This work was also partially supported by Beijing Academy of Artificial Intelligence (BAAI). The corresponding author of this work is Lei Zou (zoulei@pku.edu.cn).","NAMER: A Node-Based Multitasking Framework for Multi-Hop Knowledge Base Question Answering. We present NAMER, an open-domain Chinese knowledge base question answering system based on a novel node-based framework that better grasps the structural mapping between questions and KB queries by aligning the nodes in a query with their corresponding mentions in question. 
Equipped with techniques including data augmentation and multitasking, we show that the proposed framework outperforms the previous SoTA on the CCKS CKBQA dataset. Moreover, we develop a novel data annotation strategy that facilitates the node-to-mention alignment; a dataset built with this strategy is also published to promote further research. An online demo of NAMER is provided to visualize our framework and supply extra information for users, and a video illustration of NAMER is also available.",2021
kay-2014-computational,https://aclanthology.org/C14-1191,0,,,,,,,"Does a Computational Linguist have to be a Linguist?. Early computational linguists supplied much of theoretical basis that the ALPAC report said was needed for research on the practical problem of machine translation. The result of their efforts turned out to be more fundamental in that it provided a general theoretical basis for the study of language use as a process, giving rise eventually to constraint-based grammatical formalisms for syntax, finite-state approaches to morphology and phonology, and a host of models how speakers might assemble sentences, and hearers take them apart. Recently, an entirely new enterprise, based on machine learning and big data, has sprung on the scene and challenged the ALPAC committee's finding that linguistic processing must have a firm basis in linguistic theory. In this talk, I will show that the long-term development of linguistic processing requires linguistic theory, sophisticated statistical manipulation of big data, and a third component which is not linguistic at all.",Does a Computational Linguist have to be a Linguist?,"Early computational linguists supplied much of theoretical basis that the ALPAC report said was needed for research on the practical problem of machine translation. The result of their efforts turned out to be more fundamental in that it provided a general theoretical basis for the study of language use as a process, giving rise eventually to constraint-based grammatical formalisms for syntax, finite-state approaches to morphology and phonology, and a host of models how speakers might assemble sentences, and hearers take them apart. Recently, an entirely new enterprise, based on machine learning and big data, has sprung on the scene and challenged the ALPAC committee's finding that linguistic processing must have a firm basis in linguistic theory. In this talk, I will show that the long-term development of linguistic processing requires linguistic theory, sophisticated statistical manipulation of big data, and a third component which is not linguistic at all.",Does a Computational Linguist have to be a Linguist?,"Early computational linguists supplied much of theoretical basis that the ALPAC report said was needed for research on the practical problem of machine translation. The result of their efforts turned out to be more fundamental in that it provided a general theoretical basis for the study of language use as a process, giving rise eventually to constraint-based grammatical formalisms for syntax, finite-state approaches to morphology and phonology, and a host of models how speakers might assemble sentences, and hearers take them apart. Recently, an entirely new enterprise, based on machine learning and big data, has sprung on the scene and challenged the ALPAC committee's finding that linguistic processing must have a firm basis in linguistic theory. In this talk, I will show that the long-term development of linguistic processing requires linguistic theory, sophisticated statistical manipulation of big data, and a third component which is not linguistic at all.",,"Does a Computational Linguist have to be a Linguist?. Early computational linguists supplied much of theoretical basis that the ALPAC report said was needed for research on the practical problem of machine translation. 
The result of their efforts turned out to be more fundamental in that it provided a general theoretical basis for the study of language use as a process, giving rise eventually to constraint-based grammatical formalisms for syntax, finite-state approaches to morphology and phonology, and a host of models how speakers might assemble sentences, and hearers take them apart. Recently, an entirely new enterprise, based on machine learning and big data, has sprung on the scene and challenged the ALPAC committee's finding that linguistic processing must have a firm basis in linguistic theory. In this talk, I will show that the long-term development of linguistic processing requires linguistic theory, sophisticated statistical manipulation of big data, and a third component which is not linguistic at all.",2014
gonzalez-etal-2021-interaction,https://aclanthology.org/2021.findings-acl.259,1,,,,peace_justice_and_strong_institutions,,,"On the Interaction of Belief Bias and Explanations. A myriad of explainability methods have been proposed in recent years, but there is little consensus on how to evaluate them. While automatic metrics allow for quick benchmarking, it isn't clear how such metrics reflect human interaction with explanations. Human evaluation is of paramount importance, but previous protocols fail to account for belief biases affecting human performance, which may lead to misleading conclusions. We provide an overview of belief bias, its role in human evaluation, and ideas for NLP practitioners on how to account for it. For two experimental paradigms, we present a case study of gradientbased explainability introducing simple ways to account for humans' prior beliefs: models of varying quality and adversarial examples. We show that conclusions about the highest performing methods change when introducing such controls, pointing to the importance of accounting for belief bias in evaluation.",On the Interaction of Belief Bias and Explanations,"A myriad of explainability methods have been proposed in recent years, but there is little consensus on how to evaluate them. While automatic metrics allow for quick benchmarking, it isn't clear how such metrics reflect human interaction with explanations. Human evaluation is of paramount importance, but previous protocols fail to account for belief biases affecting human performance, which may lead to misleading conclusions. We provide an overview of belief bias, its role in human evaluation, and ideas for NLP practitioners on how to account for it. For two experimental paradigms, we present a case study of gradientbased explainability introducing simple ways to account for humans' prior beliefs: models of varying quality and adversarial examples. We show that conclusions about the highest performing methods change when introducing such controls, pointing to the importance of accounting for belief bias in evaluation.",On the Interaction of Belief Bias and Explanations,"A myriad of explainability methods have been proposed in recent years, but there is little consensus on how to evaluate them. While automatic metrics allow for quick benchmarking, it isn't clear how such metrics reflect human interaction with explanations. Human evaluation is of paramount importance, but previous protocols fail to account for belief biases affecting human performance, which may lead to misleading conclusions. We provide an overview of belief bias, its role in human evaluation, and ideas for NLP practitioners on how to account for it. For two experimental paradigms, we present a case study of gradientbased explainability introducing simple ways to account for humans' prior beliefs: models of varying quality and adversarial examples. We show that conclusions about the highest performing methods change when introducing such controls, pointing to the importance of accounting for belief bias in evaluation.",We thank the reviewers for their insightful feedback for this and previous versions of this paper. This work is partly funded by the Innovation Fund Denmark.,"On the Interaction of Belief Bias and Explanations. A myriad of explainability methods have been proposed in recent years, but there is little consensus on how to evaluate them. While automatic metrics allow for quick benchmarking, it isn't clear how such metrics reflect human interaction with explanations. 
Human evaluation is of paramount importance, but previous protocols fail to account for belief biases affecting human performance, which may lead to misleading conclusions. We provide an overview of belief bias, its role in human evaluation, and ideas for NLP practitioners on how to account for it. For two experimental paradigms, we present a case study of gradient-based explainability, introducing simple ways to account for humans' prior beliefs: models of varying quality and adversarial examples. We show that conclusions about the highest-performing methods change when introducing such controls, pointing to the importance of accounting for belief bias in evaluation.",2021
tanimura-nakagawa-2000-alignment,https://aclanthology.org/W00-1708,0,,,,,,,"Alignment of Sound Track with Text in a TV Drama.",Alignment of Sound Track with Text in a {TV} Drama,,Alignment of Sound Track with Text in a TV Drama,,,"Alignment of Sound Track with Text in a TV Drama.",2000
joshi-2013-invited,https://aclanthology.org/W13-3702,0,,,,,,,"Invited talk: Dependency Representations, Grammars, Folded Structures, among Other Things!. In a dependency grammar (DG) dependency representations (trees) directly express the dependency relations between words. The hierarchical structure emerges out of the representation. There are no labels other than the words themselves. In a phrase structure type of representation words are associated with some category labels and then the dependencies between the words emerge indirectly in terms of the phrase structure, the nonterminal labels, and possibly some indices associated with the labels. Behind the scene there is a phrase structure grammar (PSG) that builds the hierarchical structure. In a categorical type of grammar (CG), words are associated with labels that encode the combinatory potential of each word. 
Then the hierarchical structure (tree structure) emerges out of a set of operations such as application, function composition, type raising, among others. In a tree-adjoining grammar (TAG), each word is associated with an elementary tree that encodes both the hierarchical and the dependency structure associated with the lexical anchor and the tree(s) associated with a word. The elementary trees are then composed with the operations of substitution and adjoining. In a way, the dependency potential of a word is localized within the elementary tree (trees) associated with a word. Already TAG and TAG-like grammars are able to represent dependencies that go beyond those that can be represented by context-free grammars, but in a controlled way. With this perspective and with the availability of larger dependency annotated corpora (e.g. the Prague Dependency Treebank) one is able to assess how far one can cover the dependencies that actually appear in the corpora. This approach has the potential of carrying out an 'empirical' investigation of the power of representations and the associated grammars. Here by 'empirical' we do not mean 'statistical or distributional' but rather in the sense of covering as much as possible the actual data in annotated corpora! If time permits, I will talk about how dependencies are represented in nature. For example, grammars have been used to describe the folded structure of RNA biomolecules. The folded structure here describes the dependencies between the amino acids as they appear in an RNA biomolecule. One can then ask the question: Can we represent a sentence structure as a folded structure, where the fold captures both the dependencies and the structure, without any additional labels?","Invited talk: Dependency Representations, Grammars, Folded Structures, among Other Things!","In a dependency grammar (DG) dependency representations (trees) directly express the dependency relations between words. The hierarchical structure emerges out of the representation. There are no labels other than the words themselves. In a phrase structure type of representation words are associated with some category labels and then the dependencies between the words emerge indirectly in terms of the phrase structure, the nonterminal labels, and possibly some indices associated with the labels. Behind the scene there is a phrase structure grammar (PSG) that builds the hierarchical structure. In a categorical type of grammar (CG), words are associated with labels that encode the combinatory potential of each word. Then the hierarchical structure (tree structure) emerges out of a set of operations such as application, function composition, type raising, among others. In a tree-adjoining grammar (TAG), each word is associated with an elementary tree that encodes both the hierarchical and the dependency structure associated with the lexical anchor and the tree(s) associated with a word. The elementary trees are then composed with the operations of substitution and adjoining. In a way, the dependency potential of a word is localized within the elementary tree (trees) associated with a word. Already TAG and TAG-like grammars are able to represent dependencies that go beyond those that can be represented by context-free grammars, but in a controlled way. With this perspective and with the availability of larger dependency annotated corpora (e.g. the Prague Dependency Treebank) one is able to assess how far one can cover the dependencies that actually appear in the corpora. 
This approach has the potential of carrying out an 'empirical' investigation of the power of representations and the associated grammars. Here by 'empirical' we do not mean 'statistical or distributional' but rather in the sense of covering as much as possible the actual data in annotated corpora! If time permits, I will talk about how dependencies are represented in nature. For example, grammars have been used to describe the folded structure of RNA biomolecules. The folded structure here describes the dependencies between the amino acids as they appear in an RNA biomolecule. One can then ask the question: Can we represent a sentence structure as a folded structure, where the fold captures both the dependencies and the structure, without any additional labels?","Invited talk: Dependency Representations, Grammars, Folded Structures, among Other Things!","In a dependency grammar (DG) dependency representations (trees) directly express the dependency relations between words. The hierarchical structure emerges out of the representation. There are no labels other than the words themselves. In a phrase structure type of representation words are associated with some category labels and then the dependencies between the words emerge indirectly in terms of the phrase structure, the nonterminal labels, and possibly some indices associated with the labels. Behind the scene there is a phrase structure grammar (PSG) that builds the hierarchical structure. In a categorical type of grammar (CG), words are associated with labels that encode the combinatory potential of each word. Then the hierarchical structure (tree structure) emerges out of a set of operations such as application, function composition, type raising, among others. In a tree-adjoining grammar (TAG), each word is associated with an elementary tree that encodes both the hierarchical and the dependency structure associated with the lexical anchor and the tree(s) associated with a word. The elementary trees are then composed with the operations of substitution and adjoining. In a way, the dependency potential of a word is localized within the elementary tree (trees) associated with a word. Already TAG and TAG-like grammars are able to represent dependencies that go beyond those that can be represented by context-free grammars, but in a controlled way. With this perspective and with the availability of larger dependency annotated corpora (e.g. the Prague Dependency Treebank) one is able to assess how far one can cover the dependencies that actually appear in the corpora. This approach has the potential of carrying out an 'empirical' investigation of the power of representations and the associated grammars. Here by 'empirical' we do not mean 'statistical or distributional' but rather in the sense of covering as much as possible the actual data in annotated corpora! If time permits, I will talk about how dependencies are represented in nature. For example, grammars have been used to describe the folded structure of RNA biomolecules. The folded structure here describes the dependencies between the amino acids as they appear in an RNA biomolecule. One can then ask the question: Can we represent a sentence structure as a folded structure, where the fold captures both the dependencies and the structure, without any additional labels?",,"Invited talk: Dependency Representations, Grammars, Folded Structures, among Other Things!. In a dependency grammar (DG) dependency representations (trees) directly express the dependency relations between words. The hierarchical structure emerges out of the representation. There are no labels other than the words themselves. In a phrase structure type of representation words are associated with some category labels and then the dependencies between the words emerge indirectly in terms of the phrase structure, the nonterminal labels, and possibly some indices associated with the labels. Behind the scene there is a phrase structure grammar (PSG) that builds the hierarchical structure. In a categorical type of grammar (CG), words are associated with labels that encode the combinatory potential of each word. Then the hierarchical structure (tree structure) emerges out of a set of operations such as application, function composition, type raising, among others. In a tree-adjoining grammar (TAG), each word is associated with an elementary tree that encodes both the hierarchical and the dependency structure associated with the lexical anchor and the tree(s) associated with a word. The elementary trees are then composed with the operations of substitution and adjoining. In a way, the dependency potential of a word is localized within the elementary tree (trees) associated with a word. Already TAG and TAG-like grammars are able to represent dependencies that go beyond those that can be represented by context-free grammars, but in a controlled way. With this perspective and with the availability of larger dependency annotated corpora (e.g. the Prague Dependency Treebank) one is able to assess how far one can cover the dependencies that actually appear in the corpora. This approach has the potential of carrying out an 'empirical' investigation of the power of representations and the associated grammars. Here by 'empirical' we do not mean 'statistical or distributional' but rather in the sense of covering as much as possible the actual data in annotated corpora! If time permits, I will talk about how dependencies are represented in nature. For example, grammars have been used to describe the folded structure of RNA biomolecules. The folded structure here describes the dependencies between the amino acids as they appear in an RNA biomolecule. One can then ask the question: Can we represent a sentence structure as a folded structure, where the fold captures both the dependencies and the structure, without any additional labels?",2013
cyrus-feddes-2004-model,https://aclanthology.org/W04-2202,0,,,,,,,"A Model for Fine-Grained Alignment of Multilingual Texts. While alignment of texts on the sentential level is often seen as being too coarse, and word alignment as being too fine-grained, bi-or multilingual texts which are aligned on a level inbetween are a useful resource for many purposes. Starting from a number of examples of non-literal translations, which tend to make alignment difficult, we describe an alignment model which copes with these cases by explicitly coding them. The model is based on predicateargument structures and thus covers the middle ground between sentence and word alignment. The model is currently used in a recently initiated project of a parallel English-German treebank (FuSe), which can in principle be extended with additional languages. * We would like to thank our colleague Frank Schumacher for many valuable comments on this paper. 1 Cf. the approach described in (Melamed, 1998).",A Model for Fine-Grained Alignment of Multilingual Texts,"While alignment of texts on the sentential level is often seen as being too coarse, and word alignment as being too fine-grained, bi-or multilingual texts which are aligned on a level inbetween are a useful resource for many purposes. Starting from a number of examples of non-literal translations, which tend to make alignment difficult, we describe an alignment model which copes with these cases by explicitly coding them. The model is based on predicateargument structures and thus covers the middle ground between sentence and word alignment. The model is currently used in a recently initiated project of a parallel English-German treebank (FuSe), which can in principle be extended with additional languages. * We would like to thank our colleague Frank Schumacher for many valuable comments on this paper. 1 Cf. the approach described in (Melamed, 1998).",A Model for Fine-Grained Alignment of Multilingual Texts,"While alignment of texts on the sentential level is often seen as being too coarse, and word alignment as being too fine-grained, bi-or multilingual texts which are aligned on a level inbetween are a useful resource for many purposes. Starting from a number of examples of non-literal translations, which tend to make alignment difficult, we describe an alignment model which copes with these cases by explicitly coding them. The model is based on predicateargument structures and thus covers the middle ground between sentence and word alignment. The model is currently used in a recently initiated project of a parallel English-German treebank (FuSe), which can in principle be extended with additional languages. * We would like to thank our colleague Frank Schumacher for many valuable comments on this paper. 1 Cf. the approach described in (Melamed, 1998).",,"A Model for Fine-Grained Alignment of Multilingual Texts. While alignment of texts on the sentential level is often seen as being too coarse, and word alignment as being too fine-grained, bi-or multilingual texts which are aligned on a level inbetween are a useful resource for many purposes. Starting from a number of examples of non-literal translations, which tend to make alignment difficult, we describe an alignment model which copes with these cases by explicitly coding them. The model is based on predicateargument structures and thus covers the middle ground between sentence and word alignment. 
The model is currently used in a recently initiated project of a parallel English-German treebank (FuSe), which can in principle be extended with additional languages. * We would like to thank our colleague Frank Schumacher for many valuable comments on this paper. 1 Cf. the approach described in (Melamed, 1998).",2004
koller-kruijff-2004-talking,https://aclanthology.org/C04-1049,0,,,,,,,"Talking robots with Lego MindStorms. This paper shows how talking robots can be built from off-the-shelf components, based on the Lego MindStorms robotics platform. We present four robots that students created as final projects in a seminar we supervised. Because Lego robots are so affordable, we argue that it is now feasible for any dialogue researcher to tackle the interesting challenges at the robot-dialogue interface. 1 LEGO and LEGO MindStorms are trademarks of the LEGO Company.",Talking robots with {L}ego {M}ind{S}torms,"This paper shows how talking robots can be built from off-the-shelf components, based on the Lego MindStorms robotics platform. We present four robots that students created as final projects in a seminar we supervised. Because Lego robots are so affordable, we argue that it is now feasible for any dialogue researcher to tackle the interesting challenges at the robot-dialogue interface. 1 LEGO and LEGO MindStorms are trademarks of the LEGO Company.",Talking robots with Lego MindStorms,"This paper shows how talking robots can be built from off-the-shelf components, based on the Lego MindStorms robotics platform. We present four robots that students created as final projects in a seminar we supervised. Because Lego robots are so affordable, we argue that it is now feasible for any dialogue researcher to tackle the interesting challenges at the robot-dialogue interface. 1 LEGO and LEGO MindStorms are trademarks of the LEGO Company.","Acknowledgments. The authors would like to thank LEGO and CLT Sprachtechnologie for providing free components from which to build our robot systems. We are deeply indebted to our students, who put tremendous effort into designing and building the presented robots. Further information about the student projects (including a movie) is available at the course website, http://www.coli.unisb.de/cl/courses/lego-02.","Talking robots with Lego MindStorms. This paper shows how talking robots can be built from off-the-shelf components, based on the Lego MindStorms robotics platform. We present four robots that students created as final projects in a seminar we supervised. Because Lego robots are so affordable, we argue that it is now feasible for any dialogue researcher to tackle the interesting challenges at the robot-dialogue interface. 1 LEGO and LEGO MindStorms are trademarks of the LEGO Company.",2004
chatterjee-etal-2017-multi,https://aclanthology.org/W17-4773,0,,,,,,,"Multi-source Neural Automatic Post-Editing: FBK's participation in the WMT 2017 APE shared task. Previous phrase-based approaches to Automatic Post-editing (APE) have shown that the dependency of MT errors from the source sentence can be exploited by jointly learning from source and target information. By integrating this notion in a neural approach to the problem, we present the multi-source neural machine translation (NMT) system submitted by FBK to the WMT 2017 APE shared task. Our system implements multi-source NMT in a weighted ensemble of 8 models. The n-best hypotheses produced by this ensemble are further re-ranked using features based on the edit distance between the original MT output and each APE hypothesis, as well as other statistical models (n-gram language model and operation sequence model). This solution resulted in the best system submission for this round of the APE shared task for both en-de and de-en language directions. For the former language direction, our primary submission improves over the MT baseline up to-4.9 TER and +7.6 BLEU points. For the latter, where the higher quality of the original MT output reduces the room for improvement, the gains are lower but still significant (-0.25 TER and +0.3 BLEU).",Multi-source Neural Automatic Post-Editing: {FBK}{'}s participation in the {WMT} 2017 {APE} shared task,"Previous phrase-based approaches to Automatic Post-editing (APE) have shown that the dependency of MT errors from the source sentence can be exploited by jointly learning from source and target information. By integrating this notion in a neural approach to the problem, we present the multi-source neural machine translation (NMT) system submitted by FBK to the WMT 2017 APE shared task. Our system implements multi-source NMT in a weighted ensemble of 8 models. The n-best hypotheses produced by this ensemble are further re-ranked using features based on the edit distance between the original MT output and each APE hypothesis, as well as other statistical models (n-gram language model and operation sequence model). This solution resulted in the best system submission for this round of the APE shared task for both en-de and de-en language directions. For the former language direction, our primary submission improves over the MT baseline up to-4.9 TER and +7.6 BLEU points. For the latter, where the higher quality of the original MT output reduces the room for improvement, the gains are lower but still significant (-0.25 TER and +0.3 BLEU).",Multi-source Neural Automatic Post-Editing: FBK's participation in the WMT 2017 APE shared task,"Previous phrase-based approaches to Automatic Post-editing (APE) have shown that the dependency of MT errors from the source sentence can be exploited by jointly learning from source and target information. By integrating this notion in a neural approach to the problem, we present the multi-source neural machine translation (NMT) system submitted by FBK to the WMT 2017 APE shared task. Our system implements multi-source NMT in a weighted ensemble of 8 models. The n-best hypotheses produced by this ensemble are further re-ranked using features based on the edit distance between the original MT output and each APE hypothesis, as well as other statistical models (n-gram language model and operation sequence model). This solution resulted in the best system submission for this round of the APE shared task for both en-de and de-en language directions. 
For the former language direction, our primary submission improves over the MT baseline up to -4.9 TER and +7.6 BLEU points. For the latter, where the higher quality of the original MT output reduces the room for improvement, the gains are lower but still significant (-0.25 TER and +0.3 BLEU).",This work has been partially supported by the EC-funded H2020 project QT21 (grant agreement no. 645452).,"Multi-source Neural Automatic Post-Editing: FBK's participation in the WMT 2017 APE shared task. Previous phrase-based approaches to Automatic Post-editing (APE) have shown that the dependency of MT errors from the source sentence can be exploited by jointly learning from source and target information. By integrating this notion in a neural approach to the problem, we present the multi-source neural machine translation (NMT) system submitted by FBK to the WMT 2017 APE shared task. Our system implements multi-source NMT in a weighted ensemble of 8 models. The n-best hypotheses produced by this ensemble are further re-ranked using features based on the edit distance between the original MT output and each APE hypothesis, as well as other statistical models (n-gram language model and operation sequence model). This solution resulted in the best system submission for this round of the APE shared task for both en-de and de-en language directions. For the former language direction, our primary submission improves over the MT baseline up to -4.9 TER and +7.6 BLEU points. For the latter, where the higher quality of the original MT output reduces the room for improvement, the gains are lower but still significant (-0.25 TER and +0.3 BLEU).",2017
diab-bhutada-2009-verb,https://aclanthology.org/W09-2903,0,,,,,,,"Verb Noun Construction MWE Token Classification. We address the problem of classifying multiword expression tokens in running text. We focus our study on Verb-Noun Constructions (VNC) that vary in their idiomaticity depending on context. VNC tokens are classified as either idiomatic or literal. We present a supervised learning approach to the problem. We experiment with different features. Our approach yields the best results to date on MWE classification combining different linguistically motivated features, the overall performance yields an F-measure of 84.58% corresponding to an Fmeasure of 89.96% for idiomaticity identification and classification and 62.03% for literal identification and classification.",Verb Noun Construction {MWE} Token Classification,"We address the problem of classifying multiword expression tokens in running text. We focus our study on Verb-Noun Constructions (VNC) that vary in their idiomaticity depending on context. VNC tokens are classified as either idiomatic or literal. We present a supervised learning approach to the problem. We experiment with different features. Our approach yields the best results to date on MWE classification combining different linguistically motivated features, the overall performance yields an F-measure of 84.58% corresponding to an Fmeasure of 89.96% for idiomaticity identification and classification and 62.03% for literal identification and classification.",Verb Noun Construction MWE Token Classification,"We address the problem of classifying multiword expression tokens in running text. We focus our study on Verb-Noun Constructions (VNC) that vary in their idiomaticity depending on context. VNC tokens are classified as either idiomatic or literal. We present a supervised learning approach to the problem. We experiment with different features. Our approach yields the best results to date on MWE classification combining different linguistically motivated features, the overall performance yields an F-measure of 84.58% corresponding to an Fmeasure of 89.96% for idiomaticity identification and classification and 62.03% for literal identification and classification.",,"Verb Noun Construction MWE Token Classification. We address the problem of classifying multiword expression tokens in running text. We focus our study on Verb-Noun Constructions (VNC) that vary in their idiomaticity depending on context. VNC tokens are classified as either idiomatic or literal. We present a supervised learning approach to the problem. We experiment with different features. Our approach yields the best results to date on MWE classification combining different linguistically motivated features, the overall performance yields an F-measure of 84.58% corresponding to an Fmeasure of 89.96% for idiomaticity identification and classification and 62.03% for literal identification and classification.",2009
laurent-etal-2010-ad,http://www.lrec-conf.org/proceedings/lrec2010/pdf/133_Paper.pdf,0,,,,,,,"Ad-hoc Evaluations Along the Lifecycle of Industrial Spoken Dialogue Systems: Heading to Harmonisation?. With a view to rationalise the evaluation process within the Orange Labs spoken dialogue system projects, a field audit has been realised among the various related professionals. The article presents the main conclusions of the study and draws work perspectives to enhance the evaluation process in such a complex organisation. We first present the typical spoken dialogue system project lifecycle and the involved communities of stakeholders. We then sketch a map of indicators used across the teams. It shows that each professional category designs its evaluation metrics according to a case-by-case strategy, each one targeting different goals and methodologies. And last, we identify weaknesses in the evaluation process is handled by the various teams. Among others, we mention: the dependency on the design and exploitation tools that may not be suitable for an adequate collection of relevant indicators, the need to refine some indicators' definition and analysis to obtain valuable information for system enhancement, the sharing issue that advocates for a common definition of indicators across the teams and, as a consequence, the need for shared applications that support and encourage such a rationalisation.",Ad-hoc Evaluations Along the Lifecycle of Industrial Spoken Dialogue Systems: Heading to Harmonisation?,"With a view to rationalise the evaluation process within the Orange Labs spoken dialogue system projects, a field audit has been realised among the various related professionals. The article presents the main conclusions of the study and draws work perspectives to enhance the evaluation process in such a complex organisation. We first present the typical spoken dialogue system project lifecycle and the involved communities of stakeholders. We then sketch a map of indicators used across the teams. It shows that each professional category designs its evaluation metrics according to a case-by-case strategy, each one targeting different goals and methodologies. And last, we identify weaknesses in the evaluation process is handled by the various teams. Among others, we mention: the dependency on the design and exploitation tools that may not be suitable for an adequate collection of relevant indicators, the need to refine some indicators' definition and analysis to obtain valuable information for system enhancement, the sharing issue that advocates for a common definition of indicators across the teams and, as a consequence, the need for shared applications that support and encourage such a rationalisation.",Ad-hoc Evaluations Along the Lifecycle of Industrial Spoken Dialogue Systems: Heading to Harmonisation?,"With a view to rationalise the evaluation process within the Orange Labs spoken dialogue system projects, a field audit has been realised among the various related professionals. The article presents the main conclusions of the study and draws work perspectives to enhance the evaluation process in such a complex organisation. We first present the typical spoken dialogue system project lifecycle and the involved communities of stakeholders. We then sketch a map of indicators used across the teams. It shows that each professional category designs its evaluation metrics according to a case-by-case strategy, each one targeting different goals and methodologies. 
And last, we identify weaknesses in the evaluation process is handled by the various teams. Among others, we mention: the dependency on the design and exploitation tools that may not be suitable for an adequate collection of relevant indicators, the need to refine some indicators' definition and analysis to obtain valuable information for system enhancement, the sharing issue that advocates for a common definition of indicators across the teams and, as a consequence, the need for shared applications that support and encourage such a rationalisation.",,"Ad-hoc Evaluations Along the Lifecycle of Industrial Spoken Dialogue Systems: Heading to Harmonisation?. With a view to rationalise the evaluation process within the Orange Labs spoken dialogue system projects, a field audit has been realised among the various related professionals. The article presents the main conclusions of the study and draws work perspectives to enhance the evaluation process in such a complex organisation. We first present the typical spoken dialogue system project lifecycle and the involved communities of stakeholders. We then sketch a map of indicators used across the teams. It shows that each professional category designs its evaluation metrics according to a case-by-case strategy, each one targeting different goals and methodologies. And last, we identify weaknesses in the evaluation process is handled by the various teams. Among others, we mention: the dependency on the design and exploitation tools that may not be suitable for an adequate collection of relevant indicators, the need to refine some indicators' definition and analysis to obtain valuable information for system enhancement, the sharing issue that advocates for a common definition of indicators across the teams and, as a consequence, the need for shared applications that support and encourage such a rationalisation.",2010
rosner-2002-future,http://www.lrec-conf.org/proceedings/lrec2002/pdf/256.pdf,0,,,,,,,"The Future of Maltilex. The Maltilex project, supported by the University of Malta, has now been running for approximately 3 years. Its aim is to create a computational lexicon of Maltese to serve as the basic infrastructure for the development of a wide variety of language-enabled applications. The project is further described in Rosner et. al. (Rosner et-al 1999, Rosner et al., 1998). This paper discusses the background, achievements, and immediate future aims of the project. It concludes with a discussion of some themes to be pursued in the medium term.",The Future of Maltilex,"The Maltilex project, supported by the University of Malta, has now been running for approximately 3 years. Its aim is to create a computational lexicon of Maltese to serve as the basic infrastructure for the development of a wide variety of language-enabled applications. The project is further described in Rosner et. al. (Rosner et-al 1999, Rosner et al., 1998). This paper discusses the background, achievements, and immediate future aims of the project. It concludes with a discussion of some themes to be pursued in the medium term.",The Future of Maltilex,"The Maltilex project, supported by the University of Malta, has now been running for approximately 3 years. Its aim is to create a computational lexicon of Maltese to serve as the basic infrastructure for the development of a wide variety of language-enabled applications. The project is further described in Rosner et. al. (Rosner et-al 1999, Rosner et al., 1998). This paper discusses the background, achievements, and immediate future aims of the project. It concludes with a discussion of some themes to be pursued in the medium term.","This work is being supported by the University of Malta. Thanks also go to colleagues Ray Fabri, Joe Caruana, Albert Gatt, and Angelo Dalli all of whom are working actively for the project.","The Future of Maltilex. The Maltilex project, supported by the University of Malta, has now been running for approximately 3 years. Its aim is to create a computational lexicon of Maltese to serve as the basic infrastructure for the development of a wide variety of language-enabled applications. The project is further described in Rosner et. al. (Rosner et-al 1999, Rosner et al., 1998). This paper discusses the background, achievements, and immediate future aims of the project. It concludes with a discussion of some themes to be pursued in the medium term.",2002
cramer-etal-2006-building,http://www.lrec-conf.org/proceedings/lrec2006/pdf/206_pdf.pdf,0,,,,,,,"Building an Evaluation Corpus for German Question Answering by Harvesting Wikipedia. The growing interest in open-domain question answering is limited by the lack of evaluation and training resources. To overcome this resource bottleneck for German, we propose a novel methodology to acquire new question-answer pairs for system evaluation that relies on volunteer collaboration over the Internet. Utilizing Wikipedia, a popular free online encyclopedia available in several languages, we show that the data acquisition problem can be cast as a Web experiment. We present a Web-based annotation tool and carry out a distributed data collection experiment. The data gathered from the mostly anonymous contributors is compared to a similar dataset produced in-house by domain experts on the one hand, and the German questions from the from the CLEF QA 2004 effort on the other hand. Our analysis of the datasets suggests that using our novel method a medium-scale evaluation resource can be built at very small cost in a short period of time. The technique and software developed here is readily applicable to other languages where free online encyclopedias are available, and our resulting corpus is likewise freely available.",Building an Evaluation Corpus for {G}erman Question Answering by Harvesting {W}ikipedia,"The growing interest in open-domain question answering is limited by the lack of evaluation and training resources. To overcome this resource bottleneck for German, we propose a novel methodology to acquire new question-answer pairs for system evaluation that relies on volunteer collaboration over the Internet. Utilizing Wikipedia, a popular free online encyclopedia available in several languages, we show that the data acquisition problem can be cast as a Web experiment. We present a Web-based annotation tool and carry out a distributed data collection experiment. The data gathered from the mostly anonymous contributors is compared to a similar dataset produced in-house by domain experts on the one hand, and the German questions from the from the CLEF QA 2004 effort on the other hand. Our analysis of the datasets suggests that using our novel method a medium-scale evaluation resource can be built at very small cost in a short period of time. The technique and software developed here is readily applicable to other languages where free online encyclopedias are available, and our resulting corpus is likewise freely available.",Building an Evaluation Corpus for German Question Answering by Harvesting Wikipedia,"The growing interest in open-domain question answering is limited by the lack of evaluation and training resources. To overcome this resource bottleneck for German, we propose a novel methodology to acquire new question-answer pairs for system evaluation that relies on volunteer collaboration over the Internet. Utilizing Wikipedia, a popular free online encyclopedia available in several languages, we show that the data acquisition problem can be cast as a Web experiment. We present a Web-based annotation tool and carry out a distributed data collection experiment. The data gathered from the mostly anonymous contributors is compared to a similar dataset produced in-house by domain experts on the one hand, and the German questions from the from the CLEF QA 2004 effort on the other hand. 
Our analysis of the datasets suggests that using our novel method a medium-scale evaluation resource can be built at very small cost in a short period of time. The technique and software developed here is readily applicable to other languages where free online encyclopedias are available, and our resulting corpus is likewise freely available.","Acknowledgments. This research was partly funded by the BMBF Project SmartWeb under Federal Ministry of Education and Research grant 01IM D01M. We thank Tim Bartel from the Wikimedia Foundation for feedback. A big ""thank you"" goes to all volunteer subjects, without whom this would not have been possible. We also thank the two Web experiment portals for linking to our study and Holly Branigan for discussions about alignment.","Building an Evaluation Corpus for German Question Answering by Harvesting Wikipedia. The growing interest in open-domain question answering is limited by the lack of evaluation and training resources. To overcome this resource bottleneck for German, we propose a novel methodology to acquire new question-answer pairs for system evaluation that relies on volunteer collaboration over the Internet. Utilizing Wikipedia, a popular free online encyclopedia available in several languages, we show that the data acquisition problem can be cast as a Web experiment. We present a Web-based annotation tool and carry out a distributed data collection experiment. The data gathered from the mostly anonymous contributors is compared to a similar dataset produced in-house by domain experts on the one hand, and the German questions from the from the CLEF QA 2004 effort on the other hand. Our analysis of the datasets suggests that using our novel method a medium-scale evaluation resource can be built at very small cost in a short period of time. The technique and software developed here is readily applicable to other languages where free online encyclopedias are available, and our resulting corpus is likewise freely available.",2006
kim-park-2004-bioar,https://aclanthology.org/W04-0711,1,,,,health,,,"BioAR: Anaphora Resolution for Relating Protein Names to Proteome Database Entries. The need for associating, or grounding, protein names in the literature with the entries of proteome databases such as Swiss-Prot is well-recognized. The protein names in the biomedical literature show a high degree of morphological and syntactic variations, and various anaphoric expressions including null anaphors. We present a biomedical anaphora resolution system, BioAR, in order to address the variations of protein names and to further associate them with Swiss-Prot entries as the actual entities in the world. The system shows the performance of 59.5%¢ 75.0% precision and 40.7%¢ 56.3% recall, depending on the specific types of anaphoric expressions. We apply BioAR to the protein names in the biological interactions as extracted by our biomedical information extraction system, or BioIE, in order to construct protein pathways automatically.",{B}io{AR}: Anaphora Resolution for Relating Protein Names to Proteome Database Entries,"The need for associating, or grounding, protein names in the literature with the entries of proteome databases such as Swiss-Prot is well-recognized. The protein names in the biomedical literature show a high degree of morphological and syntactic variations, and various anaphoric expressions including null anaphors. We present a biomedical anaphora resolution system, BioAR, in order to address the variations of protein names and to further associate them with Swiss-Prot entries as the actual entities in the world. The system shows the performance of 59.5%¢ 75.0% precision and 40.7%¢ 56.3% recall, depending on the specific types of anaphoric expressions. We apply BioAR to the protein names in the biological interactions as extracted by our biomedical information extraction system, or BioIE, in order to construct protein pathways automatically.",BioAR: Anaphora Resolution for Relating Protein Names to Proteome Database Entries,"The need for associating, or grounding, protein names in the literature with the entries of proteome databases such as Swiss-Prot is well-recognized. The protein names in the biomedical literature show a high degree of morphological and syntactic variations, and various anaphoric expressions including null anaphors. We present a biomedical anaphora resolution system, BioAR, in order to address the variations of protein names and to further associate them with Swiss-Prot entries as the actual entities in the world. The system shows the performance of 59.5%¢ 75.0% precision and 40.7%¢ 56.3% recall, depending on the specific types of anaphoric expressions. We apply BioAR to the protein names in the biological interactions as extracted by our biomedical information extraction system, or BioIE, in order to construct protein pathways automatically.",We are grateful to the anonymous reviewers and to Bonnie Webber for helpful comments. This work has been supported by the Korea Science and Engineering Foundation through AITrc.,"BioAR: Anaphora Resolution for Relating Protein Names to Proteome Database Entries. The need for associating, or grounding, protein names in the literature with the entries of proteome databases such as Swiss-Prot is well-recognized. The protein names in the biomedical literature show a high degree of morphological and syntactic variations, and various anaphoric expressions including null anaphors. 
We present a biomedical anaphora resolution system, BioAR, in order to address the variations of protein names and to further associate them with Swiss-Prot entries as the actual entities in the world. The system shows the performance of 59.5%–75.0% precision and 40.7%–56.3% recall, depending on the specific types of anaphoric expressions. We apply BioAR to the protein names in the biological interactions as extracted by our biomedical information extraction system, or BioIE, in order to construct protein pathways automatically.",2004
xie-pu-2021-empathetic,https://aclanthology.org/2021.conll-1.10,1,,,,health,,,"Empathetic Dialog Generation with Fine-Grained Intents. Empathetic dialog generation aims at generating coherent responses following previous dialog turns and, more importantly, showing a sense of caring and a desire to help. Existing models either rely on pre-defined emotion labels to guide the response generation, or use deterministic rules to decide the emotion of the response. With the advent of advanced language models, it is possible to learn subtle interactions directly from the dataset, providing that the emotion categories offer sufficient nuances and other non-emotional but emotional regulating intents are included. In this paper, we describe how to incorporate a taxonomy of 32 emotion categories and 8 additional emotion regulating intents to succeed the task of empathetic response generation. To facilitate the training, we also curated a largescale emotional dialog dataset from movie subtitles. Through a carefully designed crowdsourcing experiment, we evaluated and demonstrated how our model produces more empathetic dialogs compared with its baselines.",Empathetic Dialog Generation with Fine-Grained Intents,"Empathetic dialog generation aims at generating coherent responses following previous dialog turns and, more importantly, showing a sense of caring and a desire to help. Existing models either rely on pre-defined emotion labels to guide the response generation, or use deterministic rules to decide the emotion of the response. With the advent of advanced language models, it is possible to learn subtle interactions directly from the dataset, providing that the emotion categories offer sufficient nuances and other non-emotional but emotional regulating intents are included. In this paper, we describe how to incorporate a taxonomy of 32 emotion categories and 8 additional emotion regulating intents to succeed the task of empathetic response generation. To facilitate the training, we also curated a largescale emotional dialog dataset from movie subtitles. Through a carefully designed crowdsourcing experiment, we evaluated and demonstrated how our model produces more empathetic dialogs compared with its baselines.",Empathetic Dialog Generation with Fine-Grained Intents,"Empathetic dialog generation aims at generating coherent responses following previous dialog turns and, more importantly, showing a sense of caring and a desire to help. Existing models either rely on pre-defined emotion labels to guide the response generation, or use deterministic rules to decide the emotion of the response. With the advent of advanced language models, it is possible to learn subtle interactions directly from the dataset, providing that the emotion categories offer sufficient nuances and other non-emotional but emotional regulating intents are included. In this paper, we describe how to incorporate a taxonomy of 32 emotion categories and 8 additional emotion regulating intents to succeed the task of empathetic response generation. To facilitate the training, we also curated a largescale emotional dialog dataset from movie subtitles. Through a carefully designed crowdsourcing experiment, we evaluated and demonstrated how our model produces more empathetic dialogs compared with its baselines.",,"Empathetic Dialog Generation with Fine-Grained Intents. Empathetic dialog generation aims at generating coherent responses following previous dialog turns and, more importantly, showing a sense of caring and a desire to help. 
Existing models either rely on pre-defined emotion labels to guide the response generation, or use deterministic rules to decide the emotion of the response. With the advent of advanced language models, it is possible to learn subtle interactions directly from the dataset, providing that the emotion categories offer sufficient nuances and other non-emotional but emotional regulating intents are included. In this paper, we describe how to incorporate a taxonomy of 32 emotion categories and 8 additional emotion regulating intents to succeed the task of empathetic response generation. To facilitate the training, we also curated a largescale emotional dialog dataset from movie subtitles. Through a carefully designed crowdsourcing experiment, we evaluated and demonstrated how our model produces more empathetic dialogs compared with its baselines.",2021
cesa-bianchi-reverberi-2009-online,https://aclanthology.org/2009.eamt-smart.3,0,,,,,,,"Online learning for CAT applications. CAT meets online learning The vector w contains the decoder online weights
EQUATION",Online learning for {CAT} applications,"CAT meets online learning The vector w contains the decoder online weights
EQUATION",Online learning for CAT applications,"CAT meets online learning The vector w contains the decoder online weights
EQUATION",,"Online learning for CAT applications. CAT meets online learning The vector w contains the decoder online weights
EQUATION",2009
mckeown-1984-natural,https://aclanthology.org/P84-1043,0,,,,,,,"Natural Language for Expert Systems: Comparisons with Database Systems. Do natural language database systems still provide a valuable environment for further work on natural language processing?",Natural Language for Expert Systems: Comparisons with Database Systems,"Do natural language database systems still provide a valuable environment for further work on natural language processing?",Natural Language for Expert Systems: Comparisons with Database Systems,"Do natural language database systems still provide a valuable environment for further work on natural language processing?",,"Natural Language for Expert Systems: Comparisons with Database Systems. Do natural language database systems still provide a valuable environment for further work on natural language processing?",1984
pitler-etal-2010-using,https://aclanthology.org/C10-1100,0,,,,,,,"Using Web-scale N-grams to Improve Base NP Parsing Performance. We use web-scale N-grams in a base NP parser that correctly analyzes 95.4% of the base NPs in natural text. Web-scale data improves performance. That is, there is no data like more data. Performance scales log-linearly with the number of parameters in the model (the number of unique N-grams). The web-scale N-grams are particularly helpful in harder cases, such as NPs that contain conjunctions.",Using Web-scale N-grams to Improve Base {NP} Parsing Performance,"We use web-scale N-grams in a base NP parser that correctly analyzes 95.4% of the base NPs in natural text. Web-scale data improves performance. That is, there is no data like more data. Performance scales log-linearly with the number of parameters in the model (the number of unique N-grams). The web-scale N-grams are particularly helpful in harder cases, such as NPs that contain conjunctions.",Using Web-scale N-grams to Improve Base NP Parsing Performance,"We use web-scale N-grams in a base NP parser that correctly analyzes 95.4% of the base NPs in natural text. Web-scale data improves performance. That is, there is no data like more data. Performance scales log-linearly with the number of parameters in the model (the number of unique N-grams). The web-scale N-grams are particularly helpful in harder cases, such as NPs that contain conjunctions.",We gratefully acknowledge the Center for Language and Speech Processing at Johns Hopkins University for hosting the workshop at which this research was conducted.,"Using Web-scale N-grams to Improve Base NP Parsing Performance. We use web-scale N-grams in a base NP parser that correctly analyzes 95.4% of the base NPs in natural text. Web-scale data improves performance. That is, there is no data like more data. Performance scales log-linearly with the number of parameters in the model (the number of unique N-grams). The web-scale N-grams are particularly helpful in harder cases, such as NPs that contain conjunctions.",2010
du-etal-2021-learning,https://aclanthology.org/2021.acl-long.403,0,,,,,,,"Learning Event Graph Knowledge for Abductive Reasoning. Abductive reasoning aims at inferring the most plausible explanation for observed events, which would play critical roles in various NLP applications, such as reading comprehension and question answering. To facilitate this task, a narrative text based abductive reasoning task αNLI is proposed, together with explorations about building reasoning framework using pretrained language models. However, abundant event commonsense knowledge is not well exploited for this task. To fill this gap, we propose a variational autoencoder based model ege-RoBERTa, which employs a latent variable to capture the necessary commonsense knowledge from event graph for guiding the abductive reasoning task. Experimental results show that through learning the external event graph knowledge, our approach outperforms the baseline methods on the αNLI task.",Learning Event Graph Knowledge for Abductive Reasoning,"Abductive reasoning aims at inferring the most plausible explanation for observed events, which would play critical roles in various NLP applications, such as reading comprehension and question answering. To facilitate this task, a narrative text based abductive reasoning task αNLI is proposed, together with explorations about building reasoning framework using pretrained language models. However, abundant event commonsense knowledge is not well exploited for this task. To fill this gap, we propose a variational autoencoder based model ege-RoBERTa, which employs a latent variable to capture the necessary commonsense knowledge from event graph for guiding the abductive reasoning task. Experimental results show that through learning the external event graph knowledge, our approach outperforms the baseline methods on the αNLI task.",Learning Event Graph Knowledge for Abductive Reasoning,"Abductive reasoning aims at inferring the most plausible explanation for observed events, which would play critical roles in various NLP applications, such as reading comprehension and question answering. To facilitate this task, a narrative text based abductive reasoning task αNLI is proposed, together with explorations about building reasoning framework using pretrained language models. However, abundant event commonsense knowledge is not well exploited for this task. To fill this gap, we propose a variational autoencoder based model ege-RoBERTa, which employs a latent variable to capture the necessary commonsense knowledge from event graph for guiding the abductive reasoning task. Experimental results show that through learning the external event graph knowledge, our approach outperforms the baseline methods on the αNLI task.","We thank the anonymous reviewers for their constructive comments, and gratefully acknowledge the support of the National Key Research and Development Program of China (2020AAA0106501), and the National Natural Science Foundation of China (61976073).","Learning Event Graph Knowledge for Abductive Reasoning. Abductive reasoning aims at inferring the most plausible explanation for observed events, which would play critical roles in various NLP applications, such as reading comprehension and question answering. To facilitate this task, a narrative text based abductive reasoning task αNLI is proposed, together with explorations about building reasoning framework using pretrained language models. However, abundant event commonsense knowledge is not well exploited for this task. 
To fill this gap, we propose a variational autoencoder based model ege-RoBERTa, which employs a latent variable to capture the necessary commonsense knowledge from event graph for guiding the abductive reasoning task. Experimental results show that through learning the external event graph knowledge, our approach outperforms the baseline methods on the αNLI task.",2021
masmoudi-etal-2019-semantic,https://aclanthology.org/R19-1084,0,,,,,,,"Semantic Language Model for Tunisian Dialect. In this paper, we describe the process of creating a statistical Language Model (LM) for the Tunisian Dialect. Indeed, this work is part of the realization of Automatic Speech Recognition (ASR) system for the Tunisian Railway Transport Network. Since our field of work has been limited, there are several words with similar behaviors (semantic for example) but they do not have the same appearance probability; their class groupings will therefore be possible. For these reasons, we propose to build an n-class LM that is based mainly on the integration of purely semantic data. Indeed, each class represents an abstraction of similar labels. In order to improve the sequence labeling task, we proposed to use a discriminative algorithm based on the Conditional Random Field (CRF) model. To better judge our choice of creating an n-class word model, we compared the created model with the 3-gram type model on the same test corpus of evaluation. Additionally, to assess the impact of using the CRF model to perform the semantic labelling task in order to construct semantic classes, we compared the n-class created model with using the CRF in the semantic labelling task and the n-class model without using the CRF in the semantic labelling task. The drawn comparison of the predictive power of the n-class model obtained by applying the CRF model in the semantic labelling is that it is better than the other two models presenting the highest value of its perplexity.",Semantic Language Model for {T}unisian Dialect,"In this paper, we describe the process of creating a statistical Language Model (LM) for the Tunisian Dialect. Indeed, this work is part of the realization of Automatic Speech Recognition (ASR) system for the Tunisian Railway Transport Network. Since our field of work has been limited, there are several words with similar behaviors (semantic for example) but they do not have the same appearance probability; their class groupings will therefore be possible. For these reasons, we propose to build an n-class LM that is based mainly on the integration of purely semantic data. Indeed, each class represents an abstraction of similar labels. In order to improve the sequence labeling task, we proposed to use a discriminative algorithm based on the Conditional Random Field (CRF) model. To better judge our choice of creating an n-class word model, we compared the created model with the 3-gram type model on the same test corpus of evaluation. Additionally, to assess the impact of using the CRF model to perform the semantic labelling task in order to construct semantic classes, we compared the n-class created model with using the CRF in the semantic labelling task and the n-class model without using the CRF in the semantic labelling task. The drawn comparison of the predictive power of the n-class model obtained by applying the CRF model in the semantic labelling is that it is better than the other two models presenting the highest value of its perplexity.",Semantic Language Model for Tunisian Dialect,"In this paper, we describe the process of creating a statistical Language Model (LM) for the Tunisian Dialect. Indeed, this work is part of the realization of Automatic Speech Recognition (ASR) system for the Tunisian Railway Transport Network. 
Since our field of work has been limited, there are several words with similar behaviors (semantic for example) but they do not have the same appearance probability; their class groupings will therefore be possible. For these reasons, we propose to build an n-class LM that is based mainly on the integration of purely semantic data. Indeed, each class represents an abstraction of similar labels. In order to improve the sequence labeling task, we proposed to use a discriminative algorithm based on the Conditional Random Field (CRF) model. To better judge our choice of creating an n-class word model, we compared the created model with the 3-gram type model on the same test corpus of evaluation. Additionally, to assess the impact of using the CRF model to perform the semantic labelling task in order to construct semantic classes, we compared the n-class created model with using the CRF in the semantic labelling task and the n-class model without using the CRF in the semantic labelling task. The drawn comparison of the predictive power of the n-class model obtained by applying the CRF model in the semantic labelling is that it is better than the other two models presenting the highest value of its perplexity.",,"Semantic Language Model for Tunisian Dialect. In this paper, we describe the process of creating a statistical Language Model (LM) for the Tunisian Dialect. Indeed, this work is part of the realization of Automatic Speech Recognition (ASR) system for the Tunisian Railway Transport Network. Since our field of work has been limited, there are several words with similar behaviors (semantic for example) but they do not have the same appearance probability; their class groupings will therefore be possible. For these reasons, we propose to build an n-class LM that is based mainly on the integration of purely semantic data. Indeed, each class represents an abstraction of similar labels. In order to improve the sequence labeling task, we proposed to use a discriminative algorithm based on the Conditional Random Field (CRF) model. To better judge our choice of creating an n-class word model, we compared the created model with the 3-gram type model on the same test corpus of evaluation. Additionally, to assess the impact of using the CRF model to perform the semantic labelling task in order to construct semantic classes, we compared the n-class created model with using the CRF in the semantic labelling task and the n-class model without using the CRF in the semantic labelling task. The drawn comparison of the predictive power of the n-class model obtained by applying the CRF model in the semantic labelling is that it is better than the other two models presenting the highest value of its perplexity.",2019
saggion-etal-2010-multilingual,https://aclanthology.org/C10-2122,0,,,,,,,"Multilingual Summarization Evaluation without Human Models. We study correlation of rankings of text summarization systems using evaluation methods with and without human models. We apply our comparison framework to various well-established contentbased evaluation measures in text summarization such as coverage, Responsiveness, Pyramids and ROUGE studying their associations in various text summarization tasks including generic and focus-based multi-document summarization in English and generic single-document summarization in French and Spanish. The research is carried out using a new content-based evaluation framework called FRESA to compute a variety of divergences among probability distributions.",Multilingual Summarization Evaluation without Human Models,"We study correlation of rankings of text summarization systems using evaluation methods with and without human models. We apply our comparison framework to various well-established contentbased evaluation measures in text summarization such as coverage, Responsiveness, Pyramids and ROUGE studying their associations in various text summarization tasks including generic and focus-based multi-document summarization in English and generic single-document summarization in French and Spanish. The research is carried out using a new content-based evaluation framework called FRESA to compute a variety of divergences among probability distributions.",Multilingual Summarization Evaluation without Human Models,"We study correlation of rankings of text summarization systems using evaluation methods with and without human models. We apply our comparison framework to various well-established contentbased evaluation measures in text summarization such as coverage, Responsiveness, Pyramids and ROUGE studying their associations in various text summarization tasks including generic and focus-based multi-document summarization in English and generic single-document summarization in French and Spanish. The research is carried out using a new content-based evaluation framework called FRESA to compute a variety of divergences among probability distributions.","We thank three anonymous reviewers for their valuable and enthusiastic comments. Horacio Saggion is grateful to the Programa Ramón y Cajal from the Ministerio de Ciencia e Innovación, Spain and to a Comença grant from Universitat Pompeu Fabra (COMENÇ A10.004). This work is partially supported by a postdoctoral grant (National Program for Mobility of Research Human Resources; National Plan of Scientific Research, Development and Innovation 2008-2011) given to Iria da Cunha by the Ministerio de Ciencia e Innovación, Spain.","Multilingual Summarization Evaluation without Human Models. We study correlation of rankings of text summarization systems using evaluation methods with and without human models. We apply our comparison framework to various well-established contentbased evaluation measures in text summarization such as coverage, Responsiveness, Pyramids and ROUGE studying their associations in various text summarization tasks including generic and focus-based multi-document summarization in English and generic single-document summarization in French and Spanish. The research is carried out using a new content-based evaluation framework called FRESA to compute a variety of divergences among probability distributions.",2010
de-waard-pander-maat-2012-epistemic,https://aclanthology.org/W12-4306,1,,,,industry_innovation_infrastructure,,,"Epistemic Modality and Knowledge Attribution in Scientific Discourse: A Taxonomy of Types and Overview of Features. We propose a model for knowledge attribution and epistemic evaluation in scientific discourse, consisting of three dimensions with different values: source (author, other, unknown); value (unknown, possible, probable, presumed true) and basis (reasoning, data, other). Based on a literature review, we investigate four linguistic features that mark different types epistemic evaluation (modal auxiliary verbs, adverbs/adjectives, reporting verbs and references). A corpus study on two biology papers indicates the usefulness of this model, and suggest some typical trends. In particular, we find that matrix clauses with a reporting verb of the form 'These results suggest', are the predominant feature indicating knowledge attribution in scientific text.",Epistemic Modality and Knowledge Attribution in Scientific Discourse: A Taxonomy of Types and Overview of Features,"We propose a model for knowledge attribution and epistemic evaluation in scientific discourse, consisting of three dimensions with different values: source (author, other, unknown); value (unknown, possible, probable, presumed true) and basis (reasoning, data, other). Based on a literature review, we investigate four linguistic features that mark different types epistemic evaluation (modal auxiliary verbs, adverbs/adjectives, reporting verbs and references). A corpus study on two biology papers indicates the usefulness of this model, and suggest some typical trends. In particular, we find that matrix clauses with a reporting verb of the form 'These results suggest', are the predominant feature indicating knowledge attribution in scientific text.",Epistemic Modality and Knowledge Attribution in Scientific Discourse: A Taxonomy of Types and Overview of Features,"We propose a model for knowledge attribution and epistemic evaluation in scientific discourse, consisting of three dimensions with different values: source (author, other, unknown); value (unknown, possible, probable, presumed true) and basis (reasoning, data, other). Based on a literature review, we investigate four linguistic features that mark different types epistemic evaluation (modal auxiliary verbs, adverbs/adjectives, reporting verbs and references). A corpus study on two biology papers indicates the usefulness of this model, and suggest some typical trends. In particular, we find that matrix clauses with a reporting verb of the form 'These results suggest', are the predominant feature indicating knowledge attribution in scientific text.","We wish to thank Eduard Hovy for providing the insight that modality can be thought of like sentiment, and our anonymous reviewers for their constructive comments. Anita de Waard's research is supported by Elsevier Labs and a grant from the Dutch funding organization NWO, under their Casimir Programme.","Epistemic Modality and Knowledge Attribution in Scientific Discourse: A Taxonomy of Types and Overview of Features. We propose a model for knowledge attribution and epistemic evaluation in scientific discourse, consisting of three dimensions with different values: source (author, other, unknown); value (unknown, possible, probable, presumed true) and basis (reasoning, data, other). 
Based on a literature review, we investigate four linguistic features that mark different types epistemic evaluation (modal auxiliary verbs, adverbs/adjectives, reporting verbs and references). A corpus study on two biology papers indicates the usefulness of this model, and suggest some typical trends. In particular, we find that matrix clauses with a reporting verb of the form 'These results suggest', are the predominant feature indicating knowledge attribution in scientific text.",2012
laparra-etal-2015-timelines,https://aclanthology.org/W15-4508,0,,,,,,,"From TimeLines to StoryLines: A preliminary proposal for evaluating narratives. We formulate a proposal that covers a new definition of StoryLines based on the shared data provided by the NewsStory workshop. We re-use the SemEval 2015 Task 4: Timelines dataset to provide a gold-standard dataset and an evaluation measure for evaluating StoryLines extraction systems. We also present a system to explore the feasibility of capturing Story-Lines automatically. Finally, based on our initial findings, we also discuss some simple changes that will improve the existing annotations to complete our initial Story-Line task proposal.",From {T}ime{L}ines to {S}tory{L}ines: A preliminary proposal for evaluating narratives,"We formulate a proposal that covers a new definition of StoryLines based on the shared data provided by the NewsStory workshop. We re-use the SemEval 2015 Task 4: Timelines dataset to provide a gold-standard dataset and an evaluation measure for evaluating StoryLines extraction systems. We also present a system to explore the feasibility of capturing Story-Lines automatically. Finally, based on our initial findings, we also discuss some simple changes that will improve the existing annotations to complete our initial Story-Line task proposal.",From TimeLines to StoryLines: A preliminary proposal for evaluating narratives,"We formulate a proposal that covers a new definition of StoryLines based on the shared data provided by the NewsStory workshop. We re-use the SemEval 2015 Task 4: Timelines dataset to provide a gold-standard dataset and an evaluation measure for evaluating StoryLines extraction systems. We also present a system to explore the feasibility of capturing Story-Lines automatically. Finally, based on our initial findings, we also discuss some simple changes that will improve the existing annotations to complete our initial Story-Line task proposal.","We are grateful to the anonymous reviewers for their insightful comments. This work has been partially funded by SKaTer (TIN2012-38584-C06-02) and NewsReader (FP7-ICT-2011-8-316404), as well as the READERS project with the financial support of MINECO, ANR (convention ANR-12-CHRI-0004-03) and EPSRC (EP/K017845/1) in the framework of ERA-NET CHIST-ERA (UE FP7/2007.","From TimeLines to StoryLines: A preliminary proposal for evaluating narratives. We formulate a proposal that covers a new definition of StoryLines based on the shared data provided by the NewsStory workshop. We re-use the SemEval 2015 Task 4: Timelines dataset to provide a gold-standard dataset and an evaluation measure for evaluating StoryLines extraction systems. We also present a system to explore the feasibility of capturing Story-Lines automatically. Finally, based on our initial findings, we also discuss some simple changes that will improve the existing annotations to complete our initial Story-Line task proposal.",2015
deshmukh-etal-2019-sequence,https://aclanthology.org/W19-5809,0,,,,,,,"A Sequence Modeling Approach for Structured Data Extraction from Unstructured Text. Extraction of structured information from unstructured text has always been a problem of interest for NLP community. Structured data is concise to store, search and retrieve; and it facilitates easier human & machine consumption. Traditionally, structured data extraction from text has been done by using various parsing methodologies, applying domain specific rules and heuristics. In this work, we leverage the developments in the space of sequence modeling for the problem of structured data extraction. Initially, we posed the problem as a machine translation problem and used the state-of-the-art machine translation model. Based on these initial results, we changed the approach to a sequence tagging one. We propose an extension of one of the attractive models for sequence tagging tailored and effective to our problem. This gave 4.4% improvement over the vanilla sequence tagging model. We also propose another variant of the sequence tagging model which can handle multiple labels of words. Experiments have been performed on Wikipedia Infobox Dataset of biographies and results are presented for both single and multi-label models. These models indicate an effective alternate deep learning technique based methods to extract structured data from raw text.",A Sequence Modeling Approach for Structured Data Extraction from Unstructured Text,"Extraction of structured information from unstructured text has always been a problem of interest for NLP community. Structured data is concise to store, search and retrieve; and it facilitates easier human & machine consumption. Traditionally, structured data extraction from text has been done by using various parsing methodologies, applying domain specific rules and heuristics. In this work, we leverage the developments in the space of sequence modeling for the problem of structured data extraction. Initially, we posed the problem as a machine translation problem and used the state-of-the-art machine translation model. Based on these initial results, we changed the approach to a sequence tagging one. We propose an extension of one of the attractive models for sequence tagging tailored and effective to our problem. This gave 4.4% improvement over the vanilla sequence tagging model. We also propose another variant of the sequence tagging model which can handle multiple labels of words. Experiments have been performed on Wikipedia Infobox Dataset of biographies and results are presented for both single and multi-label models. These models indicate an effective alternate deep learning technique based methods to extract structured data from raw text.",A Sequence Modeling Approach for Structured Data Extraction from Unstructured Text,"Extraction of structured information from unstructured text has always been a problem of interest for NLP community. Structured data is concise to store, search and retrieve; and it facilitates easier human & machine consumption. Traditionally, structured data extraction from text has been done by using various parsing methodologies, applying domain specific rules and heuristics. In this work, we leverage the developments in the space of sequence modeling for the problem of structured data extraction. Initially, we posed the problem as a machine translation problem and used the state-of-the-art machine translation model. 
Based on these initial results, we changed the approach to a sequence tagging one. We propose an extension of one of the attractive models for sequence tagging tailored and effective to our problem. This gave 4.4% improvement over the vanilla sequence tagging model. We also propose another variant of the sequence tagging model which can handle multiple labels of words. Experiments have been performed on Wikipedia Infobox Dataset of biographies and results are presented for both single and multi-label models. These models indicate an effective alternate deep learning technique based methods to extract structured data from raw text.",,"A Sequence Modeling Approach for Structured Data Extraction from Unstructured Text. Extraction of structured information from unstructured text has always been a problem of interest for NLP community. Structured data is concise to store, search and retrieve; and it facilitates easier human & machine consumption. Traditionally, structured data extraction from text has been done by using various parsing methodologies, applying domain specific rules and heuristics. In this work, we leverage the developments in the space of sequence modeling for the problem of structured data extraction. Initially, we posed the problem as a machine translation problem and used the state-of-the-art machine translation model. Based on these initial results, we changed the approach to a sequence tagging one. We propose an extension of one of the attractive models for sequence tagging tailored and effective to our problem. This gave 4.4% improvement over the vanilla sequence tagging model. We also propose another variant of the sequence tagging model which can handle multiple labels of words. Experiments have been performed on Wikipedia Infobox Dataset of biographies and results are presented for both single and multi-label models. These models indicate an effective alternate deep learning technique based methods to extract structured data from raw text.",2019
somers-2005-faking,https://aclanthology.org/U05-1012,0,,,,,,,"Faking it: Synthetic Text-to-speech Synthesis for Under-resourced Languages -- Experimental Design. Speech synthesis or text-to-speech (TTS) systems are currently available for a number of the world's major languages, but for thousands of the world's 'minor' languages no such technology is available. While awaiting the development of such technology, we would like to try the stopgap solution of using an existing TTS system for a major language (the base language) to 'fake' TTS for a minor language (the target language). This paper describes the design for an experiment which involves finding a suitable base language for the Australian Aboriginal language Pitjantjajara as a target language, and evaluating its usability in the real-life situation of providing language technology support for speakers of the target language whose understanding of the local majority language is limited, for example in the scenario of going to the doctor.",Faking it: Synthetic Text-to-speech Synthesis for Under-resourced Languages {--} Experimental Design,"Speech synthesis or text-to-speech (TTS) systems are currently available for a number of the world's major languages, but for thousands of the world's 'minor' languages no such technology is available. While awaiting the development of such technology, we would like to try the stopgap solution of using an existing TTS system for a major language (the base language) to 'fake' TTS for a minor language (the target language). This paper describes the design for an experiment which involves finding a suitable base language for the Australian Aboriginal language Pitjantjajara as a target language, and evaluating its usability in the real-life situation of providing language technology support for speakers of the target language whose understanding of the local majority language is limited, for example in the scenario of going to the doctor.",Faking it: Synthetic Text-to-speech Synthesis for Under-resourced Languages -- Experimental Design,"Speech synthesis or text-to-speech (TTS) systems are currently available for a number of the world's major languages, but for thousands of the world's 'minor' languages no such technology is available. While awaiting the development of such technology, we would like to try the stopgap solution of using an existing TTS system for a major language (the base language) to 'fake' TTS for a minor language (the target language). This paper describes the design for an experiment which involves finding a suitable base language for the Australian Aboriginal language Pitjantjajara as a target language, and evaluating its usability in the real-life situation of providing language technology support for speakers of the target language whose understanding of the local majority language is limited, for example in the scenario of going to the doctor.","Our thanks go to Andrew Longmire at the Department of Environment and Heritage's Cultural Centre, Uluru-Kata Tjuta National Park, Yulara NT, and to Bill Edwards, of the Unaipon School, University of South Australia, Adelaide, for their interest in the experiment, and, we hope eventually, for their assistance in conducting it.","Faking it: Synthetic Text-to-speech Synthesis for Under-resourced Languages -- Experimental Design. Speech synthesis or text-to-speech (TTS) systems are currently available for a number of the world's major languages, but for thousands of the world's 'minor' languages no such technology is available. 
While awaiting the development of such technology, we would like to try the stopgap solution of using an existing TTS system for a major language (the base language) to 'fake' TTS for a minor language (the target language). This paper describes the design for an experiment which involves finding a suitable base language for the Australian Aboriginal language Pitjantjajara as a target language, and evaluating its usability in the real-life situation of providing language technology support for speakers of the target language whose understanding of the local majority language is limited, for example in the scenario of going to the doctor.",2005
edunov-etal-2018-understanding,https://aclanthology.org/D18-1045,0,,,,,,,"Understanding Back-Translation at Scale. An effective method to improve neural machine translation with monolingual data is to augment the parallel training corpus with back-translations of target language sentences. This work broadens the understanding of back-translation and investigates a number of methods to generate synthetic source sentences. We find that in all but resource poor settings back-translations obtained via sampling or noised beam outputs are most effective. Our analysis shows that sampling or noisy synthetic data gives a much stronger training signal than data generated by beam or greedy search. We also compare how synthetic data compares to genuine bitext and study various domain effects. Finally, we scale to hundreds of millions of monolingual sentences and achieve a new state of the art of 35 BLEU on the WMT'14 English-German test set.",Understanding Back-Translation at Scale,"An effective method to improve neural machine translation with monolingual data is to augment the parallel training corpus with back-translations of target language sentences. This work broadens the understanding of back-translation and investigates a number of methods to generate synthetic source sentences. We find that in all but resource poor settings back-translations obtained via sampling or noised beam outputs are most effective. Our analysis shows that sampling or noisy synthetic data gives a much stronger training signal than data generated by beam or greedy search. We also compare how synthetic data compares to genuine bitext and study various domain effects. Finally, we scale to hundreds of millions of monolingual sentences and achieve a new state of the art of 35 BLEU on the WMT'14 English-German test set.",Understanding Back-Translation at Scale,"An effective method to improve neural machine translation with monolingual data is to augment the parallel training corpus with back-translations of target language sentences. This work broadens the understanding of back-translation and investigates a number of methods to generate synthetic source sentences. We find that in all but resource poor settings back-translations obtained via sampling or noised beam outputs are most effective. Our analysis shows that sampling or noisy synthetic data gives a much stronger training signal than data generated by beam or greedy search. We also compare how synthetic data compares to genuine bitext and study various domain effects. Finally, we scale to hundreds of millions of monolingual sentences and achieve a new state of the art of 35 BLEU on the WMT'14 English-German test set.",,"Understanding Back-Translation at Scale. An effective method to improve neural machine translation with monolingual data is to augment the parallel training corpus with back-translations of target language sentences. This work broadens the understanding of back-translation and investigates a number of methods to generate synthetic source sentences. We find that in all but resource poor settings back-translations obtained via sampling or noised beam outputs are most effective. Our analysis shows that sampling or noisy synthetic data gives a much stronger training signal than data generated by beam or greedy search. We also compare how synthetic data compares to genuine bitext and study various domain effects. 
Finally, we scale to hundreds of millions of monolingual sentences and achieve a new state of the art of 35 BLEU on the WMT'14 English-German test set.",2018
rohanian-2017-multi,https://doi.org/10.26615/issn.1314-9156.2017_005,0,,,,,,,Multi-Document Summarization of Persian Text using Paragraph Vectors. ,Multi-Document Summarization of {P}ersian Text using Paragraph Vectors,,Multi-Document Summarization of Persian Text using Paragraph Vectors,,,Multi-Document Summarization of Persian Text using Paragraph Vectors. ,2017
shahid-etal-2020-detecting,https://aclanthology.org/2020.nuse-1.15,1,,,,disinformation_and_fake_news,,,"Detecting and understanding moral biases in news. We describe work in progress on detecting and understanding the moral biases of news sources by combining framing theory with natural language processing. First we draw connections between issue-specific frames and moral frames that apply to all issues. Then we analyze the connection between moral frame presence and news source political leaning. We develop and test a simple classification model for detecting the presence of a moral frame, highlighting the need for more sophisticated models. We also discuss some of the annotation and frame detection challenges that can inform future research in this area.",Detecting and understanding moral biases in news,"We describe work in progress on detecting and understanding the moral biases of news sources by combining framing theory with natural language processing. First we draw connections between issue-specific frames and moral frames that apply to all issues. Then we analyze the connection between moral frame presence and news source political leaning. We develop and test a simple classification model for detecting the presence of a moral frame, highlighting the need for more sophisticated models. We also discuss some of the annotation and frame detection challenges that can inform future research in this area.",Detecting and understanding moral biases in news,"We describe work in progress on detecting and understanding the moral biases of news sources by combining framing theory with natural language processing. First we draw connections between issue-specific frames and moral frames that apply to all issues. Then we analyze the connection between moral frame presence and news source political leaning. We develop and test a simple classification model for detecting the presence of a moral frame, highlighting the need for more sophisticated models. We also discuss some of the annotation and frame detection challenges that can inform future research in this area.","The authors would like to thank Sumayya Siddiqui, Navya Reddy and Hasan Sehwail for their help with annotating the data.","Detecting and understanding moral biases in news. We describe work in progress on detecting and understanding the moral biases of news sources by combining framing theory with natural language processing. First we draw connections between issue-specific frames and moral frames that apply to all issues. Then we analyze the connection between moral frame presence and news source political leaning. We develop and test a simple classification model for detecting the presence of a moral frame, highlighting the need for more sophisticated models. We also discuss some of the annotation and frame detection challenges that can inform future research in this area.",2020
meteer-etal-2012-medlingmap,https://aclanthology.org/W12-2417,1,,,,health,,,"MedLingMap: A growing resource mapping the Bio-Medical NLP field. The application of natural language processing (NLP) in the biology and medical domain crosses many fields from Healthcare Information to Bioinformatics to NLP itself. In order to make sense of how these fields relate and intersect, we have created ""MedLingMap"" (www.medlingmap.org) which is a compilation of references with a multi-faceted index. The initial focus has been creating the infrastructure and populating it with references annotated with facets such as topic, resources used (ontologies, tools, corpora), and organizations. Simultaneously we are applying NLP techniques to the text to find clusters, key terms and other relationships. The goal for this paper is to introduce MedLingMap to the community and show how it can be a powerful tool for research and exploration in the field.",{M}ed{L}ing{M}ap: A growing resource mapping the Bio-Medical {NLP} field,"The application of natural language processing (NLP) in the biology and medical domain crosses many fields from Healthcare Information to Bioinformatics to NLP itself. In order to make sense of how these fields relate and intersect, we have created ""MedLingMap"" (www.medlingmap.org) which is a compilation of references with a multi-faceted index. The initial focus has been creating the infrastructure and populating it with references annotated with facets such as topic, resources used (ontologies, tools, corpora), and organizations. Simultaneously we are applying NLP techniques to the text to find clusters, key terms and other relationships. The goal for this paper is to introduce MedLingMap to the community and show how it can be a powerful tool for research and exploration in the field.",MedLingMap: A growing resource mapping the Bio-Medical NLP field,"The application of natural language processing (NLP) in the biology and medical domain crosses many fields from Healthcare Information to Bioinformatics to NLP itself. In order to make sense of how these fields relate and intersect, we have created ""MedLingMap"" (www.medlingmap.org) which is a compilation of references with a multi-faceted index. The initial focus has been creating the infrastructure and populating it with references annotated with facets such as topic, resources used (ontologies, tools, corpora), and organizations. Simultaneously we are applying NLP techniques to the text to find clusters, key terms and other relationships. The goal for this paper is to introduce MedLingMap to the community and show how it can be a powerful tool for research and exploration in the field.",,"MedLingMap: A growing resource mapping the Bio-Medical NLP field. The application of natural language processing (NLP) in the biology and medical domain crosses many fields from Healthcare Information to Bioinformatics to NLP itself. In order to make sense of how these fields relate and intersect, we have created ""MedLingMap"" (www.medlingmap.org) which is a compilation of references with a multi-faceted index. The initial focus has been creating the infrastructure and populating it with references annotated with facets such as topic, resources used (ontologies, tools, corpora), and organizations. Simultaneously we are applying NLP techniques to the text to find clusters, key terms and other relationships. 
The goal for this paper is to introduce MedLingMap to the community and show how it can be a powerful tool for research and exploration in the field.",2012
habash-metsky-2008-automatic,https://aclanthology.org/2008.amta-papers.9,0,,,,,,,"Automatic Learning of Morphological Variations for Handling Out-of-Vocabulary Terms in Urdu-English MT. We present an approach for online handling of Out-of-Vocabulary (OOV) terms in Urdu-English MT. Since Urdu is morphologically richer than English, we expect a large portion of the OOV terms to be Urdu morphological variations that are irrelevant to English. We describe an approach to automatically learn English-irrelevant (targetirrelevant) Urdu (source) morphological variation rules from standard phrase tables. These rules are learned in an unsupervised (or lightly supervised) manner by exploiting redundancy in Urdu and collocation with English translations. We use these rules to hypothesize invocabulary alternatives to the OOV terms. Our results show that we reduce the OOV rate from a standard baseline average of 2.6% to an average of 0.3% (or 89% relative decrease). We also increase the BLEU score by 0.45 (absolute) and 2.8% (relative) on a standard test set. A manual error analysis shows that 28% of handled OOV cases produce acceptable translations in context. [8th AMTA conference, Hawaii, 21-25 October 2008] Ã © ÂÃ º bnwAnA 'to make through another person (indirect causative)'. Much of these inflectional variations are just ""noise"" from the point of view of English but some are not. In the work presented here we attempt to automatically learn the patterns of what English is truly blind to and what it is not.",Automatic Learning of Morphological Variations for Handling Out-of-Vocabulary Terms in {U}rdu-{E}nglish {MT},"We present an approach for online handling of Out-of-Vocabulary (OOV) terms in Urdu-English MT. Since Urdu is morphologically richer than English, we expect a large portion of the OOV terms to be Urdu morphological variations that are irrelevant to English. We describe an approach to automatically learn English-irrelevant (targetirrelevant) Urdu (source) morphological variation rules from standard phrase tables. These rules are learned in an unsupervised (or lightly supervised) manner by exploiting redundancy in Urdu and collocation with English translations. We use these rules to hypothesize invocabulary alternatives to the OOV terms. Our results show that we reduce the OOV rate from a standard baseline average of 2.6% to an average of 0.3% (or 89% relative decrease). We also increase the BLEU score by 0.45 (absolute) and 2.8% (relative) on a standard test set. A manual error analysis shows that 28% of handled OOV cases produce acceptable translations in context. [8th AMTA conference, Hawaii, 21-25 October 2008] Ã © ÂÃ º bnwAnA 'to make through another person (indirect causative)'. Much of these inflectional variations are just ""noise"" from the point of view of English but some are not. In the work presented here we attempt to automatically learn the patterns of what English is truly blind to and what it is not.",Automatic Learning of Morphological Variations for Handling Out-of-Vocabulary Terms in Urdu-English MT,"We present an approach for online handling of Out-of-Vocabulary (OOV) terms in Urdu-English MT. Since Urdu is morphologically richer than English, we expect a large portion of the OOV terms to be Urdu morphological variations that are irrelevant to English. We describe an approach to automatically learn English-irrelevant (targetirrelevant) Urdu (source) morphological variation rules from standard phrase tables. 
These rules are learned in an unsupervised (or lightly supervised) manner by exploiting redundancy in Urdu and collocation with English translations. We use these rules to hypothesize invocabulary alternatives to the OOV terms. Our results show that we reduce the OOV rate from a standard baseline average of 2.6% to an average of 0.3% (or 89% relative decrease). We also increase the BLEU score by 0.45 (absolute) and 2.8% (relative) on a standard test set. A manual error analysis shows that 28% of handled OOV cases produce acceptable translations in context. [8th AMTA conference, Hawaii, 21-25 October 2008] Ã © ÂÃ º bnwAnA 'to make through another person (indirect causative)'. Much of these inflectional variations are just ""noise"" from the point of view of English but some are not. In the work presented here we attempt to automatically learn the patterns of what English is truly blind to and what it is not.","The first author was funded under the DARPA GALE program, contract HR0011-06-C-0023.","Automatic Learning of Morphological Variations for Handling Out-of-Vocabulary Terms in Urdu-English MT. We present an approach for online handling of Out-of-Vocabulary (OOV) terms in Urdu-English MT. Since Urdu is morphologically richer than English, we expect a large portion of the OOV terms to be Urdu morphological variations that are irrelevant to English. We describe an approach to automatically learn English-irrelevant (targetirrelevant) Urdu (source) morphological variation rules from standard phrase tables. These rules are learned in an unsupervised (or lightly supervised) manner by exploiting redundancy in Urdu and collocation with English translations. We use these rules to hypothesize invocabulary alternatives to the OOV terms. Our results show that we reduce the OOV rate from a standard baseline average of 2.6% to an average of 0.3% (or 89% relative decrease). We also increase the BLEU score by 0.45 (absolute) and 2.8% (relative) on a standard test set. A manual error analysis shows that 28% of handled OOV cases produce acceptable translations in context. [8th AMTA conference, Hawaii, 21-25 October 2008] Ã © ÂÃ º bnwAnA 'to make through another person (indirect causative)'. Much of these inflectional variations are just ""noise"" from the point of view of English but some are not. In the work presented here we attempt to automatically learn the patterns of what English is truly blind to and what it is not.",2008
dary-etal-2021-talep,https://aclanthology.org/2021.cmcl-1.13,0,,,,,,,"TALEP at CMCL 2021 Shared Task: Non Linear Combination of Low and High-Level Features for Predicting Eye-Tracking Data. In this paper we describe our contribution to the CMCL 2021 Shared Task, which consists in predicting 5 different eye tracking variables from English tokenized text. Our approach is based on a neural network that combines both raw textual features we extracted from the text and parser-based features that include linguistic predictions (e.g. part of speech) and complexity metrics (e.g., entropy of parsing). We found that both the features we considered as well as the architecture of the neural model that combined these features played a role in the overall performance. Our system achieved relatively high accuracy on the test data of the challenge and was ranked 2nd out of 13 competing teams and a total of 30 submissions.",{TALEP} at {CMCL} 2021 Shared Task: Non Linear Combination of Low and High-Level Features for Predicting Eye-Tracking Data,"In this paper we describe our contribution to the CMCL 2021 Shared Task, which consists in predicting 5 different eye tracking variables from English tokenized text. Our approach is based on a neural network that combines both raw textual features we extracted from the text and parser-based features that include linguistic predictions (e.g. part of speech) and complexity metrics (e.g., entropy of parsing). We found that both the features we considered as well as the architecture of the neural model that combined these features played a role in the overall performance. Our system achieved relatively high accuracy on the test data of the challenge and was ranked 2nd out of 13 competing teams and a total of 30 submissions.",TALEP at CMCL 2021 Shared Task: Non Linear Combination of Low and High-Level Features for Predicting Eye-Tracking Data,"In this paper we describe our contribution to the CMCL 2021 Shared Task, which consists in predicting 5 different eye tracking variables from English tokenized text. Our approach is based on a neural network that combines both raw textual features we extracted from the text and parser-based features that include linguistic predictions (e.g. part of speech) and complexity metrics (e.g., entropy of parsing). We found that both the features we considered as well as the architecture of the neural model that combined these features played a role in the overall performance. Our system achieved relatively high accuracy on the test data of the challenge and was ranked 2nd out of 13 competing teams and a total of 30 submissions.",,"TALEP at CMCL 2021 Shared Task: Non Linear Combination of Low and High-Level Features for Predicting Eye-Tracking Data. In this paper we describe our contribution to the CMCL 2021 Shared Task, which consists in predicting 5 different eye tracking variables from English tokenized text. Our approach is based on a neural network that combines both raw textual features we extracted from the text and parser-based features that include linguistic predictions (e.g. part of speech) and complexity metrics (e.g., entropy of parsing). We found that both the features we considered as well as the architecture of the neural model that combined these features played a role in the overall performance. Our system achieved relatively high accuracy on the test data of the challenge and was ranked 2nd out of 13 competing teams and a total of 30 submissions.",2021
williams-1984-frequency,https://aclanthology.org/1984.bcs-1.7,0,,,,,,,A frequency-mode device to assist in the machine translation of natural languages. ,A frequency-mode device to assist in the machine translation of natural languages,,A frequency-mode device to assist in the machine translation of natural languages,,,A frequency-mode device to assist in the machine translation of natural languages. ,1984
ljubesic-etal-2017-adapting,https://aclanthology.org/W17-1410,0,,,,,,,"Adapting a State-of-the-Art Tagger for South Slavic Languages to Non-Standard Text. In this paper we present the adaptations of a state-of-the-art tagger for South Slavic languages to non-standard texts on the example of the Slovene language. We investigate the impact of introducing in-domain training data as well as additional super-87.41% on the full morphosyntactic description, which is, nevertheless, still quite far from the accuracy of 94.27% achieved on standard text.",Adapting a State-of-the-Art Tagger for {S}outh {S}lavic Languages to Non-Standard Text,"In this paper we present the adaptations of a state-of-the-art tagger for South Slavic languages to non-standard texts on the example of the Slovene language. We investigate the impact of introducing in-domain training data as well as additional super-87.41% on the full morphosyntactic description, which is, nevertheless, still quite far from the accuracy of 94.27% achieved on standard text.",Adapting a State-of-the-Art Tagger for South Slavic Languages to Non-Standard Text,"In this paper we present the adaptations of a state-of-the-art tagger for South Slavic languages to non-standard texts on the example of the Slovene language. We investigate the impact of introducing in-domain training data as well as additional super-87.41% on the full morphosyntactic description, which is, nevertheless, still quite far from the accuracy of 94.27% achieved on standard text.","The work described in this paper was funded by the Slovenian Research Agency national basic research project J6-6842 ""Resources, Tools and Methods for the Research of Nonstandard Internet Slovene"", the national research programme ""Knowledge Technologies"", by the Ministry of Education, Science and Sport within the ""CLARIN.SI"" research infrastructure and the Swiss National Science Foundation grant IZ74Z0 160501 (ReLDI).","Adapting a State-of-the-Art Tagger for South Slavic Languages to Non-Standard Text. In this paper we present the adaptations of a state-of-the-art tagger for South Slavic languages to non-standard texts on the example of the Slovene language. We investigate the impact of introducing in-domain training data as well as additional super-87.41% on the full morphosyntactic description, which is, nevertheless, still quite far from the accuracy of 94.27% achieved on standard text.",2017
amiri-etal-2017-repeat,https://aclanthology.org/D17-1255,0,,,,,,,"Repeat before Forgetting: Spaced Repetition for Efficient and Effective Training of Neural Networks. We present a novel approach for training artificial neural networks. Our approach is inspired by broad evidence in psychology that shows human learners can learn efficiently and effectively by increasing intervals of time between subsequent reviews of previously learned materials (spaced repetition). We investigate the analogy between training neural models and findings in psychology about human memory model and develop an efficient and effective algorithm to train neural models. The core part of our algorithm is a cognitively-motivated scheduler according to which training instances and their ""reviews"" are spaced over time. Our algorithm uses only 34-50% of data per epoch, is 2.9-4.8 times faster than standard training, and outperforms competing state-of-the-art baselines. 1",Repeat before Forgetting: Spaced Repetition for Efficient and Effective Training of Neural Networks,"We present a novel approach for training artificial neural networks. Our approach is inspired by broad evidence in psychology that shows human learners can learn efficiently and effectively by increasing intervals of time between subsequent reviews of previously learned materials (spaced repetition). We investigate the analogy between training neural models and findings in psychology about human memory model and develop an efficient and effective algorithm to train neural models. The core part of our algorithm is a cognitively-motivated scheduler according to which training instances and their ""reviews"" are spaced over time. Our algorithm uses only 34-50% of data per epoch, is 2.9-4.8 times faster than standard training, and outperforms competing state-of-the-art baselines. 1",Repeat before Forgetting: Spaced Repetition for Efficient and Effective Training of Neural Networks,"We present a novel approach for training artificial neural networks. Our approach is inspired by broad evidence in psychology that shows human learners can learn efficiently and effectively by increasing intervals of time between subsequent reviews of previously learned materials (spaced repetition). We investigate the analogy between training neural models and findings in psychology about human memory model and develop an efficient and effective algorithm to train neural models. The core part of our algorithm is a cognitively-motivated scheduler according to which training instances and their ""reviews"" are spaced over time. Our algorithm uses only 34-50% of data per epoch, is 2.9-4.8 times faster than standard training, and outperforms competing state-of-the-art baselines. 1",We thank Mitra Mohtarami for her constructive feedback during the development of this paper and anonymous reviewers for their thoughtful comments. This work was supported by National Institutes of Health (NIH) grant R01GM114355 from the National Institute of General Medical Sciences (NIGMS). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.,"Repeat before Forgetting: Spaced Repetition for Efficient and Effective Training of Neural Networks. We present a novel approach for training artificial neural networks. 
Our approach is inspired by broad evidence in psychology that shows human learners can learn efficiently and effectively by increasing intervals of time between subsequent reviews of previously learned materials (spaced repetition). We investigate the analogy between training neural models and findings in psychology about human memory model and develop an efficient and effective algorithm to train neural models. The core part of our algorithm is a cognitively-motivated scheduler according to which training instances and their ""reviews"" are spaced over time. Our algorithm uses only 34-50% of data per epoch, is 2.9-4.8 times faster than standard training, and outperforms competing state-of-the-art baselines.",2017
joshi-etal-2017-triviaqa,https://aclanthology.org/P17-1147,0,,,,,,,"TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension. We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K questionanswer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a featurebased classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that Trivi-aQA is a challenging testbed that is worth significant future study. 1",{T}rivia{QA}: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension,"We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K questionanswer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a featurebased classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that Trivi-aQA is a challenging testbed that is worth significant future study. 1",TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension,"We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K questionanswer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a featurebased classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that Trivi-aQA is a challenging testbed that is worth significant future study. 1","This work was supported by DARPA contract FA8750-13-2-0019, the WRF/Cable Professorship, gifts from Google and Tencent, and an Allen Distinguished Investigator Award. 
The authors would like to thank Minjoon Seo for the BiDAF code, and Noah Smith, Srinivasan Iyer, Mark Yatskar, Nicholas FitzGerald, Antoine Bosselut, Dallas Card, and anonymous reviewers for helpful comments.","TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension. We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study.",2017
riloff-lehnert-1993-dictionary,https://aclanthology.org/X93-1023,0,,,,,,,"Dictionary Construction by Domain Experts. Sites participating in the recent message understanding conferences have increasingly focused their research on developing methods for automated knowledge acquisition and tools for human-assisted knowledge engineering. However, it is important to remember that the ultimate users of these tools will be domain experts, not natural language processing researchers. Domain experts have extensive knowledge about the task and the domain, but will have little or no background in linguistics or text processing. Tools that assume familiarity with computational linguistics will be of limited use in practical development scenarios.",Dictionary Construction by Domain Experts,"Sites participating in the recent message understanding conferences have increasingly focused their research on developing methods for automated knowledge acquisition and tools for human-assisted knowledge engineering. However, it is important to remember that the ultimate users of these tools will be domain experts, not natural language processing researchers. Domain experts have extensive knowledge about the task and the domain, but will have little or no background in linguistics or text processing. Tools that assume familiarity with computational linguistics will be of limited use in practical development scenarios.",Dictionary Construction by Domain Experts,"Sites participating in the recent message understanding conferences have increasingly focused their research on developing methods for automated knowledge acquisition and tools for human-assisted knowledge engineering. However, it is important to remember that the ultimate users of these tools will be domain experts, not natural language processing researchers. Domain experts have extensive knowledge about the task and the domain, but will have little or no background in linguistics or text processing. Tools that assume familiarity with computational linguistics will be of limited use in practical development scenarios.",,"Dictionary Construction by Domain Experts. Sites participating in the recent message understanding conferences have increasingly focused their research on developing methods for automated knowledge acquisition and tools for human-assisted knowledge engineering. However, it is important to remember that the ultimate users of these tools will be domain experts, not natural language processing researchers. Domain experts have extensive knowledge about the task and the domain, but will have little or no background in linguistics or text processing. Tools that assume familiarity with computational linguistics will be of limited use in practical development scenarios.",1993
emele-dorna-1998-ambiguity-preserving,https://aclanthology.org/P98-1060,0,,,,,,,"Ambiguity Preserving Machine Translation using Packed Representations. In this paper we present an ambiguity preserving translation approach which transfers ambiguous LFG f-structure representations. It is based on packed f-structure representations which are the result of potentially ambiguous utterances. If the ambiguities between source and target language can be preserved, no unpacking during transfer is necessary and the generator may produce utterances which maximally cover the underlying ambiguities. We convert the packed f-structure descriptions into a flat set of prolog terms which consist of predicates, their predicate argument structure and additional attribute-value information. Ambiguity is expressed via local disjunctions. The flat representations facilitate the application of a Shake-and-Bake like transfer approach extended to deal with packed ambiguities. * We would like to thank our colleagues at Xerox PARC and Xerox RCE for fruitful discussions and the anonymous reviewers for valuable feedback. This work was funded by the German Federal Ministry of Education, Science, Research and Technology (BMBF) in the framework of the Verbmobil project under grant 01 IV 701 N3.",Ambiguity Preserving Machine Translation using Packed Representations,"In this paper we present an ambiguity preserving translation approach which transfers ambiguous LFG f-structure representations. It is based on packed f-structure representations which are the result of potentially ambiguous utterances. If the ambiguities between source and target language can be preserved, no unpacking during transfer is necessary and the generator may produce utterances which maximally cover the underlying ambiguities. We convert the packed f-structure descriptions into a flat set of prolog terms which consist of predicates, their predicate argument structure and additional attribute-value information. Ambiguity is expressed via local disjunctions. The flat representations facilitate the application of a Shake-and-Bake like transfer approach extended to deal with packed ambiguities. * We would like to thank our colleagues at Xerox PARC and Xerox RCE for fruitful discussions and the anonymous reviewers for valuable feedback. This work was funded by the German Federal Ministry of Education, Science, Research and Technology (BMBF) in the framework of the Verbmobil project under grant 01 IV 701 N3.",Ambiguity Preserving Machine Translation using Packed Representations,"In this paper we present an ambiguity preserving translation approach which transfers ambiguous LFG f-structure representations. It is based on packed f-structure representations which are the result of potentially ambiguous utterances. If the ambiguities between source and target language can be preserved, no unpacking during transfer is necessary and the generator may produce utterances which maximally cover the underlying ambiguities. We convert the packed f-structure descriptions into a flat set of prolog terms which consist of predicates, their predicate argument structure and additional attribute-value information. Ambiguity is expressed via local disjunctions. The flat representations facilitate the application of a Shake-and-Bake like transfer approach extended to deal with packed ambiguities. * We would like to thank our colleagues at Xerox PARC and Xerox RCE for fruitful discussions and the anonymous reviewers for valuable feedback. 
This work was funded by the German Federal Ministry of Education, Science, Research and Technology (BMBF) in the framework of the Verbmobil project under grant 01 IV 701 N3.",,"Ambiguity Preserving Machine Translation using Packed Representations. In this paper we present an ambiguity preserving translation approach which transfers ambiguous LFG f-structure representations. It is based on packed f-structure representations which are the result of potentially ambiguous utterances. If the ambiguities between source and target language can be preserved, no unpacking during transfer is necessary and the generator may produce utterances which maximally cover the underlying ambiguities. We convert the packed f-structure descriptions into a flat set of prolog terms which consist of predicates, their predicate argument structure and additional attribute-value information. Ambiguity is expressed via local disjunctions. The flat representations facilitate the application of a Shake-and-Bake like transfer approach extended to deal with packed ambiguities. * We would like to thank our colleagues at Xerox PARC and Xerox RCE for fruitful discussions and the anonymous reviewers for valuable feedback. This work was funded by the German Federal Ministry of Education, Science, Research and Technology (BMBF) in the framework of the Verbmobil project under grant 01 IV 701 N3.",1998
raiyani-etal-2018-fully,https://aclanthology.org/W18-4404,1,,,,hate_speech,,,"Fully Connected Neural Network with Advance Preprocessor to Identify Aggression over Facebook and Twitter. Aggression Identification and Hate Speech detection had become an essential part of cyberharassment and cyberbullying and an automatic aggression identification can lead to the interception of such trolling. Following the same idealization, vista.ue team participated in the workshop which included a shared task on 'Aggression Identification'. A dataset of 15,000 aggression-annotated Facebook Posts and Comments written in Hindi (in both Roman and Devanagari script) and English languages were made available and different classification models were designed. This paper presents a model that outperforms Facebook FastText (Joulin et al., 2016a) and deep learning models over this dataset. Especially, the English developed system, when used to classify Twitter text, outperforms all the shared task submitted systems.",Fully Connected Neural Network with Advance Preprocessor to Identify Aggression over {F}acebook and {T}witter,"Aggression Identification and Hate Speech detection had become an essential part of cyberharassment and cyberbullying and an automatic aggression identification can lead to the interception of such trolling. Following the same idealization, vista.ue team participated in the workshop which included a shared task on 'Aggression Identification'. A dataset of 15,000 aggression-annotated Facebook Posts and Comments written in Hindi (in both Roman and Devanagari script) and English languages were made available and different classification models were designed. This paper presents a model that outperforms Facebook FastText (Joulin et al., 2016a) and deep learning models over this dataset. Especially, the English developed system, when used to classify Twitter text, outperforms all the shared task submitted systems.",Fully Connected Neural Network with Advance Preprocessor to Identify Aggression over Facebook and Twitter,"Aggression Identification and Hate Speech detection had become an essential part of cyberharassment and cyberbullying and an automatic aggression identification can lead to the interception of such trolling. Following the same idealization, vista.ue team participated in the workshop which included a shared task on 'Aggression Identification'. A dataset of 15,000 aggression-annotated Facebook Posts and Comments written in Hindi (in both Roman and Devanagari script) and English languages were made available and different classification models were designed. This paper presents a model that outperforms Facebook FastText (Joulin et al., 2016a) and deep learning models over this dataset. Especially, the English developed system, when used to classify Twitter text, outperforms all the shared task submitted systems.","The authors would like to thank COMPETE 2020, PORTUGAL 2020 Programs, the European Union, and LISBOA 2020 for supporting this research as part of Agatha Project SI & IDT number 18022 (Intelligent analysis system of open of sources information for surveillance/crime control) made in collaboration with the University ofÉvora. The colleagues Madhu Agrawal, Silvia Bottura Scardina and Roy Bayot provided insight and expertise that greatly assisted the research.","Fully Connected Neural Network with Advance Preprocessor to Identify Aggression over Facebook and Twitter. 
Aggression Identification and Hate Speech detection had become an essential part of cyberharassment and cyberbullying and an automatic aggression identification can lead to the interception of such trolling. Following the same idealization, vista.ue team participated in the workshop which included a shared task on 'Aggression Identification'. A dataset of 15,000 aggression-annotated Facebook Posts and Comments written in Hindi (in both Roman and Devanagari script) and English languages were made available and different classification models were designed. This paper presents a model that outperforms Facebook FastText (Joulin et al., 2016a) and deep learning models over this dataset. Especially, the English developed system, when used to classify Twitter text, outperforms all the shared task submitted systems.",2018
pouliquen-etal-2011-statistical,https://aclanthology.org/2011.eamt-1.3,0,,,,,,,"Statistical Machine Translation. This paper presents a study conducted in the course of implementing a project in the World Intellectual Property Organization (WIPO) on assisted translation of patent abstracts and titles from English to French. The tool (called 'Tapta') is trained on an extensive corpus of manually translated patents. These patents are classified, each class belonging to one of the 32 predefined domains. The trained Statistical Machine Translation (SMT) tool uses this additional information to propose more accurate translations according to the context. The performance of the SMT system was shown to be above the current state of the art, but, in order to produce an acceptable translation, a human has to supervise the process. Therefore, a graphical user interface was built in which the translator drives the automatic translation process. A significant experiment with human operators was conducted within WIPO, the output was judged to be successful and a project to use Tapta in production is now under discussion.",Statistical Machine Translation,"This paper presents a study conducted in the course of implementing a project in the World Intellectual Property Organization (WIPO) on assisted translation of patent abstracts and titles from English to French. The tool (called 'Tapta') is trained on an extensive corpus of manually translated patents. These patents are classified, each class belonging to one of the 32 predefined domains. The trained Statistical Machine Translation (SMT) tool uses this additional information to propose more accurate translations according to the context. The performance of the SMT system was shown to be above the current state of the art, but, in order to produce an acceptable translation, a human has to supervise the process. Therefore, a graphical user interface was built in which the translator drives the automatic translation process. A significant experiment with human operators was conducted within WIPO, the output was judged to be successful and a project to use Tapta in production is now under discussion.",Statistical Machine Translation,"This paper presents a study conducted in the course of implementing a project in the World Intellectual Property Organization (WIPO) on assisted translation of patent abstracts and titles from English to French. The tool (called 'Tapta') is trained on an extensive corpus of manually translated patents. These patents are classified, each class belonging to one of the 32 predefined domains. The trained Statistical Machine Translation (SMT) tool uses this additional information to propose more accurate translations according to the context. The performance of the SMT system was shown to be above the current state of the art, but, in order to produce an acceptable translation, a human has to supervise the process. Therefore, a graphical user interface was built in which the translator drives the automatic translation process. A significant experiment with human operators was conducted within WIPO, the output was judged to be successful and a project to use Tapta in production is now under discussion.","This work would not have been possible without the help of WIPO translators, namely Cécile Copet, Sophie Maire, Yann Wipraechtiger, Peter Smith and Nicolas Potapov. Special thanks to the 15 persons who participated in the two tests of Tapta and to Paul Halfpenny for his valuable proof-reading.","Statistical Machine Translation. 
This paper presents a study conducted in the course of implementing a project in the World Intellectual Property Organization (WIPO) on assisted translation of patent abstracts and titles from English to French. The tool (called 'Tapta') is trained on an extensive corpus of manually translated patents. These patents are classified, each class belonging to one of the 32 predefined domains. The trained Statistical Machine Translation (SMT) tool uses this additional information to propose more accurate translations according to the context. The performance of the SMT system was shown to be above the current state of the art, but, in order to produce an acceptable translation, a human has to supervise the process. Therefore, a graphical user interface was built in which the translator drives the automatic translation process. A significant experiment with human operators was conducted within WIPO, the output was judged to be successful and a project to use Tapta in production is now under discussion.",2011
wu-palmer-1994-verb,https://aclanthology.org/P94-1019,0,,,,,,,"Verb Semantics and Lexical Selection. This paper will focus on the semantic representation of verbs in computer systems and its impact on lexical selection problems in machine translation (MT). Two groups of English and Chinese verbs are examined to show that lexical selection must be based on interpretation of the sentence as well as selection restrictions placed on the verb arguments. A novel representation scheme is suggested, and is compared to representations with selection restrictions used in transfer-based MT. We see our approach as closely aligned with knowledge-based MT approaches (KBMT), and as a separate component that could be incorporated into existing systems. Examples and experimental results will show that, using this scheme, inexact matches can achieve correct lexical selection.",Verb Semantics and Lexical Selection,"This paper will focus on the semantic representation of verbs in computer systems and its impact on lexical selection problems in machine translation (MT). Two groups of English and Chinese verbs are examined to show that lexical selection must be based on interpretation of the sentence as well as selection restrictions placed on the verb arguments. A novel representation scheme is suggested, and is compared to representations with selection restrictions used in transfer-based MT. We see our approach as closely aligned with knowledge-based MT approaches (KBMT), and as a separate component that could be incorporated into existing systems. Examples and experimental results will show that, using this scheme, inexact matches can achieve correct lexical selection.",Verb Semantics and Lexical Selection,"This paper will focus on the semantic representation of verbs in computer systems and its impact on lexical selection problems in machine translation (MT). Two groups of English and Chinese verbs are examined to show that lexical selection must be based on interpretation of the sentence as well as selection restrictions placed on the verb arguments. A novel representation scheme is suggested, and is compared to representations with selection restrictions used in transfer-based MT. We see our approach as closely aligned with knowledge-based MT approaches (KBMT), and as a separate component that could be incorporated into existing systems. Examples and experimental results will show that, using this scheme, inexact matches can achieve correct lexical selection.",,"Verb Semantics and Lexical Selection. This paper will focus on the semantic representation of verbs in computer systems and its impact on lexical selection problems in machine translation (MT). Two groups of English and Chinese verbs are examined to show that lexical selection must be based on interpretation of the sentence as well as selection restrictions placed on the verb arguments. A novel representation scheme is suggested, and is compared to representations with selection restrictions used in transfer-based MT. We see our approach as closely aligned with knowledge-based MT approaches (KBMT), and as a separate component that could be incorporated into existing systems. Examples and experimental results will show that, using this scheme, inexact matches can achieve correct lexical selection.",1994
libovicky-helcl-2017-attention,https://aclanthology.org/P17-2031,0,,,,,,,"Attention Strategies for Multi-Source Sequence-to-Sequence Learning. Modeling attention in neural multi-source sequence-to-sequence learning remains a relatively unexplored area, despite its usefulness in tasks that incorporate multiple source languages or modalities. We propose two novel approaches to combine the outputs of attention mechanisms over each source sequence, flat and hierarchical. We compare the proposed methods with existing techniques and present results of systematic evaluation of those methods on the WMT16 Multimodal Translation and Automatic Post-editing tasks. We show that the proposed methods achieve competitive results on both tasks.",Attention Strategies for Multi-Source Sequence-to-Sequence Learning,"Modeling attention in neural multi-source sequence-to-sequence learning remains a relatively unexplored area, despite its usefulness in tasks that incorporate multiple source languages or modalities. We propose two novel approaches to combine the outputs of attention mechanisms over each source sequence, flat and hierarchical. We compare the proposed methods with existing techniques and present results of systematic evaluation of those methods on the WMT16 Multimodal Translation and Automatic Post-editing tasks. We show that the proposed methods achieve competitive results on both tasks.",Attention Strategies for Multi-Source Sequence-to-Sequence Learning,"Modeling attention in neural multi-source sequence-to-sequence learning remains a relatively unexplored area, despite its usefulness in tasks that incorporate multiple source languages or modalities. We propose two novel approaches to combine the outputs of attention mechanisms over each source sequence, flat and hierarchical. We compare the proposed methods with existing techniques and present results of systematic evaluation of those methods on the WMT16 Multimodal Translation and Automatic Post-editing tasks. We show that the proposed methods achieve competitive results on both tasks.","We would like to thank Ondřej Dušek, Rudolf Rosa, Pavel Pecina, and Ondřej Bojar for a fruitful discussions and comments on the draft of the paper.This research has been funded by the Czech Science Foundation grant no. P103/12/G084, the EU grant no. H2020-ICT-2014-1-645452 (QT21), and Charles University grant no. 52315/2014 and SVV project no. 260 453. This work has been using language resources developed and/or stored and/or distributed by the LINDAT-Clarin project of the Ministry of Education of the Czech Republic (project LM2010013).","Attention Strategies for Multi-Source Sequence-to-Sequence Learning. Modeling attention in neural multi-source sequence-to-sequence learning remains a relatively unexplored area, despite its usefulness in tasks that incorporate multiple source languages or modalities. We propose two novel approaches to combine the outputs of attention mechanisms over each source sequence, flat and hierarchical. We compare the proposed methods with existing techniques and present results of systematic evaluation of those methods on the WMT16 Multimodal Translation and Automatic Post-editing tasks. We show that the proposed methods achieve competitive results on both tasks.",2017
bogantes-etal-2016-towards,https://aclanthology.org/L16-1358,0,,,,,,,"Towards Lexical Encoding of Multi-Word Expressions in Spanish Dialects. This paper describes a pilot study in lexical encoding of multi-word expressions (MWEs) in 4 Latin American dialects of Spanish: Costa Rican, Colombian, Mexican and Peruvian. We describe the variability of MWE usage across dialects. We adapt an existing data model to a dialect-aware encoding, so as to represent dialect-related specificities, while avoiding redundancy of the data common for all dialects. A dozen of linguistic properties of MWEs can be expressed in this model, both on the level of a whole MWE and of its individual components. We describe the resulting lexical resource containing several dozens of MWEs in four dialects and we propose a method for constructing a web corpus as a support for crowdsourcing examples of MWE occurrences. The resource is available under an open license and paves the way towards a large-scale dialect-aware language resource construction, which should prove useful in both traditional and novel NLP applications.",Towards Lexical Encoding of Multi-Word Expressions in {S}panish Dialects,"This paper describes a pilot study in lexical encoding of multi-word expressions (MWEs) in 4 Latin American dialects of Spanish: Costa Rican, Colombian, Mexican and Peruvian. We describe the variability of MWE usage across dialects. We adapt an existing data model to a dialect-aware encoding, so as to represent dialect-related specificities, while avoiding redundancy of the data common for all dialects. A dozen of linguistic properties of MWEs can be expressed in this model, both on the level of a whole MWE and of its individual components. We describe the resulting lexical resource containing several dozens of MWEs in four dialects and we propose a method for constructing a web corpus as a support for crowdsourcing examples of MWE occurrences. The resource is available under an open license and paves the way towards a large-scale dialect-aware language resource construction, which should prove useful in both traditional and novel NLP applications.",Towards Lexical Encoding of Multi-Word Expressions in Spanish Dialects,"This paper describes a pilot study in lexical encoding of multi-word expressions (MWEs) in 4 Latin American dialects of Spanish: Costa Rican, Colombian, Mexican and Peruvian. We describe the variability of MWE usage across dialects. We adapt an existing data model to a dialect-aware encoding, so as to represent dialect-related specificities, while avoiding redundancy of the data common for all dialects. A dozen of linguistic properties of MWEs can be expressed in this model, both on the level of a whole MWE and of its individual components. We describe the resulting lexical resource containing several dozens of MWEs in four dialects and we propose a method for constructing a web corpus as a support for crowdsourcing examples of MWE occurrences. The resource is available under an open license and paves the way towards a large-scale dialect-aware language resource construction, which should prove useful in both traditional and novel NLP applications.","This work is an outcome of a student project carried out within the Erasmus Mundus Master's program ""Information Technologies for Business Intelligence"" 7 . It was supported by the IC1207 COST action PARSEME 8 . We are grateful to prof. 
Shuly Wintner for his valuable insights into lexical encoding of MWEs.","Towards Lexical Encoding of Multi-Word Expressions in Spanish Dialects. This paper describes a pilot study in lexical encoding of multi-word expressions (MWEs) in 4 Latin American dialects of Spanish: Costa Rican, Colombian, Mexican and Peruvian. We describe the variability of MWE usage across dialects. We adapt an existing data model to a dialect-aware encoding, so as to represent dialect-related specificities, while avoiding redundancy of the data common for all dialects. A dozen of linguistic properties of MWEs can be expressed in this model, both on the level of a whole MWE and of its individual components. We describe the resulting lexical resource containing several dozens of MWEs in four dialects and we propose a method for constructing a web corpus as a support for crowdsourcing examples of MWE occurrences. The resource is available under an open license and paves the way towards a large-scale dialect-aware language resource construction, which should prove useful in both traditional and novel NLP applications.",2016
le-etal-2020-dual,https://aclanthology.org/2020.coling-main.314,0,,,,,,,"Dual-decoder Transformer for Joint Automatic Speech Recognition and Multilingual Speech Translation. We introduce dual-decoder Transformer, a new model architecture that jointly performs automatic speech recognition (ASR) and multilingual speech translation (ST). Our models are based on the original Transformer architecture (Vaswani et al., 2017) but consist of two decoders, each responsible for one task (ASR or ST). Our major contribution lies in how these decoders interact with each other: one decoder can attend to different information sources from the other via a dual-attention mechanism. We propose two variants of these architectures corresponding to two different levels of dependencies between the decoders, called the parallel and cross dual-decoder Transformers, respectively. Extensive experiments on the MuST-C dataset show that our models outperform the previously-reported highest translation performance in the multilingual settings, and outperform as well bilingual one-to-one results. Furthermore, our parallel models demonstrate no trade-off between ASR and ST compared to the vanilla multi-task architecture. Our code and pre-trained models are available at https://github.com/formiel/speech-translation.",Dual-decoder Transformer for Joint Automatic Speech Recognition and Multilingual Speech Translation,"We introduce dual-decoder Transformer, a new model architecture that jointly performs automatic speech recognition (ASR) and multilingual speech translation (ST). Our models are based on the original Transformer architecture (Vaswani et al., 2017) but consist of two decoders, each responsible for one task (ASR or ST). Our major contribution lies in how these decoders interact with each other: one decoder can attend to different information sources from the other via a dual-attention mechanism. We propose two variants of these architectures corresponding to two different levels of dependencies between the decoders, called the parallel and cross dual-decoder Transformers, respectively. Extensive experiments on the MuST-C dataset show that our models outperform the previously-reported highest translation performance in the multilingual settings, and outperform as well bilingual one-to-one results. Furthermore, our parallel models demonstrate no trade-off between ASR and ST compared to the vanilla multi-task architecture. Our code and pre-trained models are available at https://github.com/formiel/speech-translation.",Dual-decoder Transformer for Joint Automatic Speech Recognition and Multilingual Speech Translation,"We introduce dual-decoder Transformer, a new model architecture that jointly performs automatic speech recognition (ASR) and multilingual speech translation (ST). Our models are based on the original Transformer architecture (Vaswani et al., 2017) but consist of two decoders, each responsible for one task (ASR or ST). Our major contribution lies in how these decoders interact with each other: one decoder can attend to different information sources from the other via a dual-attention mechanism. We propose two variants of these architectures corresponding to two different levels of dependencies between the decoders, called the parallel and cross dual-decoder Transformers, respectively. Extensive experiments on the MuST-C dataset show that our models outperform the previously-reported highest translation performance in the multilingual settings, and outperform as well bilingual one-to-one results. 
Furthermore, our parallel models demonstrate no trade-off between ASR and ST compared to the vanilla multi-task architecture. Our code and pre-trained models are available at https://github.com/formiel/speech-translation.","This work was supported by a Facebook AI SRA grant, and was granted access to the HPC resources of IDRIS under the allocations 2020-AD011011695 and 2020-AP011011765 made by GENCI. It was also done as part of the Multidisciplinary Institute in Artificial Intelligence MIAI@Grenoble-Alpes (ANR-19-P3IA-0003). We thank the anonymous reviewers for their insightful feedback. ","Dual-decoder Transformer for Joint Automatic Speech Recognition and Multilingual Speech Translation. We introduce dual-decoder Transformer, a new model architecture that jointly performs automatic speech recognition (ASR) and multilingual speech translation (ST). Our models are based on the original Transformer architecture (Vaswani et al., 2017) but consist of two decoders, each responsible for one task (ASR or ST). Our major contribution lies in how these decoders interact with each other: one decoder can attend to different information sources from the other via a dual-attention mechanism. We propose two variants of these architectures corresponding to two different levels of dependencies between the decoders, called the parallel and cross dual-decoder Transformers, respectively. Extensive experiments on the MuST-C dataset show that our models outperform the previously-reported highest translation performance in the multilingual settings, and outperform as well bilingual one-to-one results. Furthermore, our parallel models demonstrate no trade-off between ASR and ST compared to the vanilla multi-task architecture. Our code and pre-trained models are available at https://github.com/formiel/speech-translation.",2020
kiyota-etal-2003-dialog,https://aclanthology.org/P03-2027,0,,,,,,,"Dialog Navigator : A Spoken Dialog Q-A System based on Large Text Knowledge Base. This paper describes a spoken dialog Q-A system as a substitution for call centers. The system is capable of making dialogs for both fixing speech recognition errors and for clarifying vague questions, based on only large text knowledge base. We introduce two measures to make dialogs for fixing recognition errors. An experimental evaluation shows the advantages of these measures.",Dialog Navigator : A Spoken Dialog {Q}-A System based on Large Text Knowledge Base,"This paper describes a spoken dialog Q-A system as a substitution for call centers. The system is capable of making dialogs for both fixing speech recognition errors and for clarifying vague questions, based on only large text knowledge base. We introduce two measures to make dialogs for fixing recognition errors. An experimental evaluation shows the advantages of these measures.",Dialog Navigator : A Spoken Dialog Q-A System based on Large Text Knowledge Base,"This paper describes a spoken dialog Q-A system as a substitution for call centers. The system is capable of making dialogs for both fixing speech recognition errors and for clarifying vague questions, based on only large text knowledge base. We introduce two measures to make dialogs for fixing recognition errors. An experimental evaluation shows the advantages of these measures.",,"Dialog Navigator : A Spoken Dialog Q-A System based on Large Text Knowledge Base. This paper describes a spoken dialog Q-A system as a substitution for call centers. The system is capable of making dialogs for both fixing speech recognition errors and for clarifying vague questions, based on only large text knowledge base. We introduce two measures to make dialogs for fixing recognition errors. An experimental evaluation shows the advantages of these measures.",2003
amble-2000-bustuc-natural,https://aclanthology.org/W99-1001,0,,,,,,,"BusTUC--A natural language bus route adviser in Prolog. The paper describes a natural language based expert system route adviser for the public bus transport in Trondheim, Norway. The system is available on the Internet, and has been installed at the bus company's web server since the beginning of 1999. The system is bilingual, relying on an internal language independent logic representation.",{B}us{TUC}{--}A natural language bus route adviser in {P}rolog,"The paper describes a natural language based expert system route adviser for the public bus transport in Trondheim, Norway. The system is available on the Internet, and has been installed at the bus company's web server since the beginning of 1999. The system is bilingual, relying on an internal language independent logic representation.",BusTUC--A natural language bus route adviser in Prolog,"The paper describes a natural language based expert system route adviser for the public bus transport in Trondheim, Norway. The system is available on the Internet, and has been installed at the bus company's web server since the beginning of 1999. The system is bilingual, relying on an internal language independent logic representation.",,"BusTUC--A natural language bus route adviser in Prolog. The paper describes a natural language based expert system route adviser for the public bus transport in Trondheim, Norway. The system is available on the Internet, and has been installed at the bus company's web server since the beginning of 1999. The system is bilingual, relying on an internal language independent logic representation.",2000
johansson-etal-2012-semantic,http://www.lrec-conf.org/proceedings/lrec2012/pdf/455_Paper.pdf,0,,,,,,,"Semantic Role Labeling with the Swedish FrameNet. We present the first results on semantic role labeling using the Swedish FrameNet, which is a lexical resource currently in development. Several aspects of the task are investigated, including the selection of machine learning features, the effect of choice of syntactic parser, and the ability of the system to generalize to new frames and new genres. In addition, we evaluate two methods to make the role label classifier more robust: cross-frame generalization and cluster-based features. Although the small amount of training data limits the performance achievable at the moment, we reach promising results. In particular, the classifier that extracts the boundaries of arguments works well for new frames, which suggests that it already at this stage can be useful in a semi-automatic setting.",Semantic Role Labeling with the {S}wedish {F}rame{N}et,"We present the first results on semantic role labeling using the Swedish FrameNet, which is a lexical resource currently in development. Several aspects of the task are investigated, including the selection of machine learning features, the effect of choice of syntactic parser, and the ability of the system to generalize to new frames and new genres. In addition, we evaluate two methods to make the role label classifier more robust: cross-frame generalization and cluster-based features. Although the small amount of training data limits the performance achievable at the moment, we reach promising results. In particular, the classifier that extracts the boundaries of arguments works well for new frames, which suggests that it already at this stage can be useful in a semi-automatic setting.",Semantic Role Labeling with the Swedish FrameNet,"We present the first results on semantic role labeling using the Swedish FrameNet, which is a lexical resource currently in development. Several aspects of the task are investigated, including the selection of machine learning features, the effect of choice of syntactic parser, and the ability of the system to generalize to new frames and new genres. In addition, we evaluate two methods to make the role label classifier more robust: cross-frame generalization and cluster-based features. Although the small amount of training data limits the performance achievable at the moment, we reach promising results. In particular, the classifier that extracts the boundaries of arguments works well for new frames, which suggests that it already at this stage can be useful in a semi-automatic setting.",We are grateful to Percy Liang for the implementation of the Brown clustering software. This work was partly funded by the Centre for Language Technology at Gothenburg University.,"Semantic Role Labeling with the Swedish FrameNet. We present the first results on semantic role labeling using the Swedish FrameNet, which is a lexical resource currently in development. Several aspects of the task are investigated, including the selection of machine learning features, the effect of choice of syntactic parser, and the ability of the system to generalize to new frames and new genres. In addition, we evaluate two methods to make the role label classifier more robust: cross-frame generalization and cluster-based features. Although the small amount of training data limits the performance achievable at the moment, we reach promising results. 
In particular, the classifier that extracts the boundaries of arguments works well for new frames, which suggests that it already at this stage can be useful in a semi-automatic setting.",2012
olabiyi-etal-2019-multi,https://aclanthology.org/W19-4114,0,,,,,,,"Multi-turn Dialogue Response Generation in an Adversarial Learning Framework. We propose an adversarial learning approach for generating multi-turn dialogue responses. Our proposed framework, hredGAN, is based on conditional generative adversarial networks (GANs). The GAN's generator is a modified hierarchical recurrent encoder-decoder network (HRED) and the discriminator is a word-level bidirectional RNN that shares context and word embeddings with the generator. During inference, noise samples conditioned on the dialogue history are used to perturb the generator's latent space to generate several possible responses. The final response is the one ranked best by the discriminator. The hredGAN shows improved performance over existing methods: (1) it generalizes better than networks trained using only the log-likelihood criterion, and (2) it generates longer, more informative and more diverse responses with high utterance and topic relevance even with limited training data. This improvement is demonstrated on the Movie triples and Ubuntu dialogue datasets using both automatic and human evaluations.",Multi-turn Dialogue Response Generation in an Adversarial Learning Framework,"We propose an adversarial learning approach for generating multi-turn dialogue responses. Our proposed framework, hredGAN, is based on conditional generative adversarial networks (GANs). The GAN's generator is a modified hierarchical recurrent encoder-decoder network (HRED) and the discriminator is a word-level bidirectional RNN that shares context and word embeddings with the generator. During inference, noise samples conditioned on the dialogue history are used to perturb the generator's latent space to generate several possible responses. The final response is the one ranked best by the discriminator. The hredGAN shows improved performance over existing methods: (1) it generalizes better than networks trained using only the log-likelihood criterion, and (2) it generates longer, more informative and more diverse responses with high utterance and topic relevance even with limited training data. This improvement is demonstrated on the Movie triples and Ubuntu dialogue datasets using both automatic and human evaluations.",Multi-turn Dialogue Response Generation in an Adversarial Learning Framework,"We propose an adversarial learning approach for generating multi-turn dialogue responses. Our proposed framework, hredGAN, is based on conditional generative adversarial networks (GANs). The GAN's generator is a modified hierarchical recurrent encoder-decoder network (HRED) and the discriminator is a word-level bidirectional RNN that shares context and word embeddings with the generator. During inference, noise samples conditioned on the dialogue history are used to perturb the generator's latent space to generate several possible responses. The final response is the one ranked best by the discriminator. The hredGAN shows improved performance over existing methods: (1) it generalizes better than networks trained using only the log-likelihood criterion, and (2) it generates longer, more informative and more diverse responses with high utterance and topic relevance even with limited training data. This improvement is demonstrated on the Movie triples and Ubuntu dialogue datasets using both automatic and human evaluations.",,"Multi-turn Dialogue Response Generation in an Adversarial Learning Framework. 
We propose an adversarial learning approach for generating multi-turn dialogue responses. Our proposed framework, hredGAN, is based on conditional generative adversarial networks (GANs). The GAN's generator is a modified hierarchical recurrent encoder-decoder network (HRED) and the discriminator is a word-level bidirectional RNN that shares context and word embeddings with the generator. During inference, noise samples conditioned on the dialogue history are used to perturb the generator's latent space to generate several possible responses. The final response is the one ranked best by the discriminator. The hredGAN shows improved performance over existing methods: (1) it generalizes better than networks trained using only the log-likelihood criterion, and (2) it generates longer, more informative and more diverse responses with high utterance and topic relevance even with limited training data. This improvement is demonstrated on the Movie triples and Ubuntu dialogue datasets using both automatic and human evaluations.",2019
su-etal-2022-comparison,https://aclanthology.org/2022.acl-long.572,0,,,,,,,"A Comparison of Strategies for Source-Free Domain Adaptation. Data sharing restrictions are common in NLP, especially in the clinical domain, but there is limited research on adapting models to new domains without access to the original training data, a setting known as source-free domain adaptation. We take algorithms that traditionally assume access to the source-domain training data-active learning, self-training, and data augmentation-and adapt them for source-free domain adaptation. Then we systematically compare these different strategies across multiple tasks and domains. We find that active learning yields consistent gains across all SemEval 2021 Task 10 tasks and domains, but though the shared task saw successful self-trained and data augmented models, our systematic comparison finds these strategies to be unreliable for source-free domain adaptation.",A Comparison of Strategies for Source-Free Domain Adaptation,"Data sharing restrictions are common in NLP, especially in the clinical domain, but there is limited research on adapting models to new domains without access to the original training data, a setting known as source-free domain adaptation. We take algorithms that traditionally assume access to the source-domain training data-active learning, self-training, and data augmentation-and adapt them for source-free domain adaptation. Then we systematically compare these different strategies across multiple tasks and domains. We find that active learning yields consistent gains across all SemEval 2021 Task 10 tasks and domains, but though the shared task saw successful self-trained and data augmented models, our systematic comparison finds these strategies to be unreliable for source-free domain adaptation.",A Comparison of Strategies for Source-Free Domain Adaptation,"Data sharing restrictions are common in NLP, especially in the clinical domain, but there is limited research on adapting models to new domains without access to the original training data, a setting known as source-free domain adaptation. We take algorithms that traditionally assume access to the source-domain training data-active learning, self-training, and data augmentation-and adapt them for source-free domain adaptation. Then we systematically compare these different strategies across multiple tasks and domains. We find that active learning yields consistent gains across all SemEval 2021 Task 10 tasks and domains, but though the shared task saw successful self-trained and data augmented models, our systematic comparison finds these strategies to be unreliable for source-free domain adaptation.",Research reported in this publication was supported by the National Library of Medicine of the National Institutes of Health under Award Numbers R01LM012918 and R01LM010090. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.,"A Comparison of Strategies for Source-Free Domain Adaptation. Data sharing restrictions are common in NLP, especially in the clinical domain, but there is limited research on adapting models to new domains without access to the original training data, a setting known as source-free domain adaptation. We take algorithms that traditionally assume access to the source-domain training data-active learning, self-training, and data augmentation-and adapt them for source-free domain adaptation. 
Then we systematically compare these different strategies across multiple tasks and domains. We find that active learning yields consistent gains across all SemEval 2021 Task 10 tasks and domains, but though the shared task saw successful self-trained and data augmented models, our systematic comparison finds these strategies to be unreliable for source-free domain adaptation.",2022
nissim-etal-2004-annotation,http://www.lrec-conf.org/proceedings/lrec2004/pdf/638.pdf,0,,,,,,,"An Annotation Scheme for Information Status in Dialogue. We present an annotation scheme for information status (IS) in dialogue, and validate it on three Switchboard dialogues. We show that our scheme has good reproducibility, and compare it with previous attempts to code IS and related features. We eventually apply the scheme to 147 dialogues, thus producing a corpus that contains nearly 70,000 NPs annotated for IS and over 15,000 coreference links.",An Annotation Scheme for Information Status in Dialogue,"We present an annotation scheme for information status (IS) in dialogue, and validate it on three Switchboard dialogues. We show that our scheme has good reproducibility, and compare it with previous attempts to code IS and related features. We eventually apply the scheme to 147 dialogues, thus producing a corpus that contains nearly 70,000 NPs annotated for IS and over 15,000 coreference links.",An Annotation Scheme for Information Status in Dialogue,"We present an annotation scheme for information status (IS) in dialogue, and validate it on three Switchboard dialogues. We show that our scheme has good reproducibility, and compare it with previous attempts to code IS and related features. We eventually apply the scheme to 147 dialogues, thus producing a corpus that contains nearly 70,000 NPs annotated for IS and over 15,000 coreference links.",,"An Annotation Scheme for Information Status in Dialogue. We present an annotation scheme for information status (IS) in dialogue, and validate it on three Switchboard dialogues. We show that our scheme has good reproducibility, and compare it with previous attempts to code IS and related features. We eventually apply the scheme to 147 dialogues, thus producing a corpus that contains nearly 70,000 NPs annotated for IS and over 15,000 coreference links.",2004
klementiev-etal-2012-inducing,https://aclanthology.org/C12-1089,0,,,,,,,"Inducing Crosslingual Distributed Representations of Words. Distributed representations of words have proven extremely useful in numerous natural language processing tasks. Their appeal is that they can help alleviate data sparsity problems common to supervised learning. Methods for inducing these representations require only unlabeled language data, which are plentiful for many natural languages. In this work, we induce distributed representations for a pair of languages jointly. We treat it as a multitask learning problem where each task corresponds to a single word, and task relatedness is derived from co-occurrence statistics in bilingual parallel data. These representations can be used for a number of crosslingual learning tasks, where a learner can be trained on annotations present in one language and applied to test data in another. We show that our representations are informative by using them for crosslingual document classification, where classifiers trained on these representations substantially outperform strong baselines (e.g. machine translation) when applied to a new language.",Inducing Crosslingual Distributed Representations of Words,"Distributed representations of words have proven extremely useful in numerous natural language processing tasks. Their appeal is that they can help alleviate data sparsity problems common to supervised learning. Methods for inducing these representations require only unlabeled language data, which are plentiful for many natural languages. In this work, we induce distributed representations for a pair of languages jointly. We treat it as a multitask learning problem where each task corresponds to a single word, and task relatedness is derived from co-occurrence statistics in bilingual parallel data. These representations can be used for a number of crosslingual learning tasks, where a learner can be trained on annotations present in one language and applied to test data in another. We show that our representations are informative by using them for crosslingual document classification, where classifiers trained on these representations substantially outperform strong baselines (e.g. machine translation) when applied to a new language.",Inducing Crosslingual Distributed Representations of Words,"Distributed representations of words have proven extremely useful in numerous natural language processing tasks. Their appeal is that they can help alleviate data sparsity problems common to supervised learning. Methods for inducing these representations require only unlabeled language data, which are plentiful for many natural languages. In this work, we induce distributed representations for a pair of languages jointly. We treat it as a multitask learning problem where each task corresponds to a single word, and task relatedness is derived from co-occurrence statistics in bilingual parallel data. These representations can be used for a number of crosslingual learning tasks, where a learner can be trained on annotations present in one language and applied to test data in another. We show that our representations are informative by using them for crosslingual document classification, where classifiers trained on these representations substantially outperform strong baselines (e.g. machine translation) when applied to a new language.",The work was supported by the MMCI Cluster of Excellence and a Google research award.,"Inducing Crosslingual Distributed Representations of Words. 
Distributed representations of words have proven extremely useful in numerous natural language processing tasks. Their appeal is that they can help alleviate data sparsity problems common to supervised learning. Methods for inducing these representations require only unlabeled language data, which are plentiful for many natural languages. In this work, we induce distributed representations for a pair of languages jointly. We treat it as a multitask learning problem where each task corresponds to a single word, and task relatedness is derived from co-occurrence statistics in bilingual parallel data. These representations can be used for a number of crosslingual learning tasks, where a learner can be trained on annotations present in one language and applied to test data in another. We show that our representations are informative by using them for crosslingual document classification, where classifiers trained on these representations substantially outperform strong baselines (e.g. machine translation) when applied to a new language.",2012
song-etal-2018-deep,https://aclanthology.org/D18-1107,0,,,,,,,"A Deep Neural Network Sentence Level Classification Method with Context Information. In the sentence classification task, context formed from sentences adjacent to the sentence being classified can provide important information for classification. This context is, however, often ignored. Where methods do make use of context, only small amounts are considered, making it difficult to scale. We present a new method for sentence classification, Context-LSTM-CNN, that makes use of potentially large contexts. The method also utilizes long-range dependencies within the sentence being classified, using an LSTM, and short-span features, using a stacked CNN. Our experiments demonstrate that this approach consistently improves over previous methods on two different datasets.",A Deep Neural Network Sentence Level Classification Method with Context Information,"In the sentence classification task, context formed from sentences adjacent to the sentence being classified can provide important information for classification. This context is, however, often ignored. Where methods do make use of context, only small amounts are considered, making it difficult to scale. We present a new method for sentence classification, Context-LSTM-CNN, that makes use of potentially large contexts. The method also utilizes long-range dependencies within the sentence being classified, using an LSTM, and short-span features, using a stacked CNN. Our experiments demonstrate that this approach consistently improves over previous methods on two different datasets.",A Deep Neural Network Sentence Level Classification Method with Context Information,"In the sentence classification task, context formed from sentences adjacent to the sentence being classified can provide important information for classification. This context is, however, often ignored. Where methods do make use of context, only small amounts are considered, making it difficult to scale. We present a new method for sentence classification, Context-LSTM-CNN, that makes use of potentially large contexts. The method also utilizes long-range dependencies within the sentence being classified, using an LSTM, and short-span features, using a stacked CNN. Our experiments demonstrate that this approach consistently improves over previous methods on two different datasets.",This work was partially supported by the European Union under grant agreement No. 654024 SoBigData.,"A Deep Neural Network Sentence Level Classification Method with Context Information. In the sentence classification task, context formed from sentences adjacent to the sentence being classified can provide important information for classification. This context is, however, often ignored. Where methods do make use of context, only small amounts are considered, making it difficult to scale. We present a new method for sentence classification, Context-LSTM-CNN, that makes use of potentially large contexts. The method also utilizes long-range dependencies within the sentence being classified, using an LSTM, and short-span features, using a stacked CNN. Our experiments demonstrate that this approach consistently improves over previous methods on two different datasets.",2018
hassanali-liu-2011-measuring,https://aclanthology.org/W11-1411,1,,,,education,,,"Measuring Language Development in Early Childhood Education: A Case Study of Grammar Checking in Child Language Transcripts. Language sample analysis is an important technique used in measuring language development. At present, measures of grammatical complexity such as the Index of Productive Syntax (Scarborough, 1990) are used to measure language development in early childhood. Although these measures depict the overall competence in the usage of language, they do not provide for an analysis of the grammatical mistakes made by the child. In this paper, we explore the use of existing Natural Language Processing (NLP) techniques to provide an insight into the processing of child language transcripts and challenges in automatic grammar checking. We explore the automatic detection of 6 types of verb related grammatical errors. We compare rule based systems to statistical systems and investigate the use of different features. We found the statistical systems performed better than the rule based systems for most of the error categories.",Measuring Language Development in Early Childhood Education: A Case Study of Grammar Checking in Child Language Transcripts,"Language sample analysis is an important technique used in measuring language development. At present, measures of grammatical complexity such as the Index of Productive Syntax (Scarborough, 1990) are used to measure language development in early childhood. Although these measures depict the overall competence in the usage of language, they do not provide for an analysis of the grammatical mistakes made by the child. In this paper, we explore the use of existing Natural Language Processing (NLP) techniques to provide an insight into the processing of child language transcripts and challenges in automatic grammar checking. We explore the automatic detection of 6 types of verb related grammatical errors. We compare rule based systems to statistical systems and investigate the use of different features. We found the statistical systems performed better than the rule based systems for most of the error categories.",Measuring Language Development in Early Childhood Education: A Case Study of Grammar Checking in Child Language Transcripts,"Language sample analysis is an important technique used in measuring language development. At present, measures of grammatical complexity such as the Index of Productive Syntax (Scarborough, 1990) are used to measure language development in early childhood. Although these measures depict the overall competence in the usage of language, they do not provide for an analysis of the grammatical mistakes made by the child. In this paper, we explore the use of existing Natural Language Processing (NLP) techniques to provide an insight into the processing of child language transcripts and challenges in automatic grammar checking. We explore the automatic detection of 6 types of verb related grammatical errors. We compare rule based systems to statistical systems and investigate the use of different features. We found the statistical systems performed better than the rule based systems for most of the error categories.","The authors thank Chris Dollaghan for sharing the Paradise data, and Thamar Solorio for discussions. This research is partly supported by an NSF award IIS-1017190.","Measuring Language Development in Early Childhood Education: A Case Study of Grammar Checking in Child Language Transcripts. 
Language sample analysis is an important technique used in measuring language development. At present, measures of grammatical complexity such as the Index of Productive Syntax (Scarborough, 1990) are used to measure language development in early childhood. Although these measures depict the overall competence in the usage of language, they do not provide for an analysis of the grammatical mistakes made by the child. In this paper, we explore the use of existing Natural Language Processing (NLP) techniques to provide an insight into the processing of child language transcripts and challenges in automatic grammar checking. We explore the automatic detection of 6 types of verb related grammatical errors. We compare rule based systems to statistical systems and investigate the use of different features. We found the statistical systems performed better than the rule based systems for most of the error categories.",2011
dirkson-etal-2021-fuzzybio,https://aclanthology.org/2021.louhi-1.9,1,,,,health,,,"FuzzyBIO: A Proposal for Fuzzy Representation of Discontinuous Entities. Discontinuous entities pose a challenge to named entity recognition (NER). These phenomena occur commonly in the biomedical domain. As a solution, expansions of the BIO representation scheme that can handle these entity types are commonly used (i.e. BIOHD). However, the extra tag types make the NER task more difficult to learn. In this paper we propose an alternative; a fuzzy continuous BIO scheme (FuzzyBIO). We focus on the task of Adverse Drug Response extraction and normalization to compare FuzzyBIO to BIOHD. We find that FuzzyBIO improves recall of NER for two of three data sets and results in a higher percentage of correctly identified disjoint and composite entities for all data sets. Using FuzzyBIO also improves end-to-end performance for continuous and composite entities in two of three data sets. Since FuzzyBIO improves performance for some data sets and the conversion from BIOHD to FuzzyBIO is straightforward, we recommend investigating which is more effective for any data set containing discontinuous entities.",{F}uzzy{BIO}: A Proposal for Fuzzy Representation of Discontinuous Entities,"Discontinuous entities pose a challenge to named entity recognition (NER). These phenomena occur commonly in the biomedical domain. As a solution, expansions of the BIO representation scheme that can handle these entity types are commonly used (i.e. BIOHD). However, the extra tag types make the NER task more difficult to learn. In this paper we propose an alternative; a fuzzy continuous BIO scheme (FuzzyBIO). We focus on the task of Adverse Drug Response extraction and normalization to compare FuzzyBIO to BIOHD. We find that FuzzyBIO improves recall of NER for two of three data sets and results in a higher percentage of correctly identified disjoint and composite entities for all data sets. Using FuzzyBIO also improves end-to-end performance for continuous and composite entities in two of three data sets. Since FuzzyBIO improves performance for some data sets and the conversion from BIOHD to FuzzyBIO is straightforward, we recommend investigating which is more effective for any data set containing discontinuous entities.",FuzzyBIO: A Proposal for Fuzzy Representation of Discontinuous Entities,"Discontinuous entities pose a challenge to named entity recognition (NER). These phenomena occur commonly in the biomedical domain. As a solution, expansions of the BIO representation scheme that can handle these entity types are commonly used (i.e. BIOHD). However, the extra tag types make the NER task more difficult to learn. In this paper we propose an alternative; a fuzzy continuous BIO scheme (FuzzyBIO). We focus on the task of Adverse Drug Response extraction and normalization to compare FuzzyBIO to BIOHD. We find that FuzzyBIO improves recall of NER for two of three data sets and results in a higher percentage of correctly identified disjoint and composite entities for all data sets. Using FuzzyBIO also improves end-to-end performance for continuous and composite entities in two of three data sets. 
Since FuzzyBIO improves performance for some data sets and the conversion from BIOHD to FuzzyBIO is straightforward, we recommend investigating which is more effective for any data set containing discontinuous entities.",We would like to thank the SIDN fonds for funding this research and our reviewers for their valuable feedback.,"FuzzyBIO: A Proposal for Fuzzy Representation of Discontinuous Entities. Discontinuous entities pose a challenge to named entity recognition (NER). These phenomena occur commonly in the biomedical domain. As a solution, expansions of the BIO representation scheme that can handle these entity types are commonly used (i.e. BIOHD). However, the extra tag types make the NER task more difficult to learn. In this paper we propose an alternative; a fuzzy continuous BIO scheme (FuzzyBIO). We focus on the task of Adverse Drug Response extraction and normalization to compare FuzzyBIO to BIOHD. We find that FuzzyBIO improves recall of NER for two of three data sets and results in a higher percentage of correctly identified disjoint and composite entities for all data sets. Using FuzzyBIO also improves end-to-end performance for continuous and composite entities in two of three data sets. Since FuzzyBIO improves performance for some data sets and the conversion from BIOHD to FuzzyBIO is straightforward, we recommend investigating which is more effective for any data set containing discontinuous entities.",2021
huang-etal-2021-adast,https://aclanthology.org/2021.findings-acl.224,0,,,,,,,"AdaST: Dynamically Adapting Encoder States in the Decoder for End-to-End Speech-to-Text Translation. In end-to-end speech translation, acoustic representations learned by the encoder are usually fixed and static, from the perspective of the decoder, which is not desirable for dealing with the cross-modal and cross-lingual challenge in speech translation. In this paper, we show the benefits of varying acoustic states according to decoder hidden states and propose an adaptive speech-to-text translation model that is able to dynamically adapt acoustic states in the decoder. We concatenate the acoustic state and target word embedding sequence and feed the concatenated sequence into subsequent blocks in the decoder. In order to model the deep interaction between acoustic states and target hidden states, a speech-text mixed attention sublayer is introduced to replace the conventional cross-attention network. Experiment results on two widely-used datasets show that the proposed method significantly outperforms state-of-the-art neural speech translation models.",{A}da{ST}: Dynamically Adapting Encoder States in the Decoder for End-to-End Speech-to-Text Translation,"In end-to-end speech translation, acoustic representations learned by the encoder are usually fixed and static, from the perspective of the decoder, which is not desirable for dealing with the cross-modal and cross-lingual challenge in speech translation. In this paper, we show the benefits of varying acoustic states according to decoder hidden states and propose an adaptive speech-to-text translation model that is able to dynamically adapt acoustic states in the decoder. We concatenate the acoustic state and target word embedding sequence and feed the concatenated sequence into subsequent blocks in the decoder. In order to model the deep interaction between acoustic states and target hidden states, a speech-text mixed attention sublayer is introduced to replace the conventional cross-attention network. Experiment results on two widely-used datasets show that the proposed method significantly outperforms state-of-the-art neural speech translation models.",AdaST: Dynamically Adapting Encoder States in the Decoder for End-to-End Speech-to-Text Translation,"In end-to-end speech translation, acoustic representations learned by the encoder are usually fixed and static, from the perspective of the decoder, which is not desirable for dealing with the cross-modal and cross-lingual challenge in speech translation. In this paper, we show the benefits of varying acoustic states according to decoder hidden states and propose an adaptive speech-to-text translation model that is able to dynamically adapt acoustic states in the decoder. We concatenate the acoustic state and target word embedding sequence and feed the concatenated sequence into subsequent blocks in the decoder. In order to model the deep interaction between acoustic states and target hidden states, a speech-text mixed attention sublayer is introduced to replace the conventional cross-attention network. Experiment results on two widely-used datasets show that the proposed method significantly outperforms state-of-the-art neural speech translation models.",The present research was partially supported by the National Key Research and Development Program of China (Grant No. 2019QY1802). 
We would like to thank the anonymous reviewers for their insightful comments.,"AdaST: Dynamically Adapting Encoder States in the Decoder for End-to-End Speech-to-Text Translation. In end-to-end speech translation, acoustic representations learned by the encoder are usually fixed and static, from the perspective of the decoder, which is not desirable for dealing with the cross-modal and cross-lingual challenge in speech translation. In this paper, we show the benefits of varying acoustic states according to decoder hidden states and propose an adaptive speech-to-text translation model that is able to dynamically adapt acoustic states in the decoder. We concatenate the acoustic state and target word embedding sequence and feed the concatenated sequence into subsequent blocks in the decoder. In order to model the deep interaction between acoustic states and target hidden states, a speech-text mixed attention sublayer is introduced to replace the conventional cross-attention network. Experiment results on two widely-used datasets show that the proposed method significantly outperforms state-of-the-art neural speech translation models.",2021
okada-1980-conceptual,https://aclanthology.org/C80-1019,0,,,,,,,"Conceptual Taxonomy of Japanese Verbs for Uderstanding Natural Language and Picture Patterns. This paper presents a taxonomy of ""matter concepts"" or concepts of verbs that play roles of governors in understanding natural language and picture patterns. For this taxonomy we associate natural language with real world picture patterns and analyze the meanings common to them. The analysis shows that matter concepts are divided into two large classes:""simple matter concepts"" and ""non-simple matter concepts."" Furthermore, the latter is divided into ""complex concepts"" and ""derivative concepts."" About 4,700 matter concepts used in daily Japanese were actually classified according to the analysis. As a result of the classification about 1,200 basic matter concepts which cover the concepts of real world matter at a minimum were obtained. This classification was applied to a translation of picture pattern sequences into natural language.",Conceptual Taxonomy of {J}apanese Verbs for Uderstanding Natural Language and Picture Patterns,"This paper presents a taxonomy of ""matter concepts"" or concepts of verbs that play roles of governors in understanding natural language and picture patterns. For this taxonomy we associate natural language with real world picture patterns and analyze the meanings common to them. The analysis shows that matter concepts are divided into two large classes:""simple matter concepts"" and ""non-simple matter concepts."" Furthermore, the latter is divided into ""complex concepts"" and ""derivative concepts."" About 4,700 matter concepts used in daily Japanese were actually classified according to the analysis. As a result of the classification about 1,200 basic matter concepts which cover the concepts of real world matter at a minimum were obtained. This classification was applied to a translation of picture pattern sequences into natural language.",Conceptual Taxonomy of Japanese Verbs for Uderstanding Natural Language and Picture Patterns,"This paper presents a taxonomy of ""matter concepts"" or concepts of verbs that play roles of governors in understanding natural language and picture patterns. For this taxonomy we associate natural language with real world picture patterns and analyze the meanings common to them. The analysis shows that matter concepts are divided into two large classes:""simple matter concepts"" and ""non-simple matter concepts."" Furthermore, the latter is divided into ""complex concepts"" and ""derivative concepts."" About 4,700 matter concepts used in daily Japanese were actually classified according to the analysis. As a result of the classification about 1,200 basic matter concepts which cover the concepts of real world matter at a minimum were obtained. This classification was applied to a translation of picture pattern sequences into natural language.",,"Conceptual Taxonomy of Japanese Verbs for Uderstanding Natural Language and Picture Patterns. This paper presents a taxonomy of ""matter concepts"" or concepts of verbs that play roles of governors in understanding natural language and picture patterns. For this taxonomy we associate natural language with real world picture patterns and analyze the meanings common to them. 
The analysis shows that matter concepts are divided into two large classes:""simple matter concepts"" and ""non-simple matter concepts."" Furthermore, the latter is divided into ""complex concepts"" and ""derivative concepts."" About 4,700 matter concepts used in daily Japanese were actually classified according to the analysis. As a result of the classification about 1,200 basic matter concepts which cover the concepts of real world matter at a minimum were obtained. This classification was applied to a translation of picture pattern sequences into natural language.",1980
chen-etal-2021-identify,https://aclanthology.org/2021.rocling-1.43,0,,,,,,,"Identify Bilingual Patterns and Phrases from a Bilingual Sentence Pair. This paper presents a method for automatically identifying bilingual grammar patterns and extracting bilingual phrase instances from a given English-Chinese sentence pair. In our approach, the English-Chinese sentence pair is parsed to identify English grammar patterns and Chinese counterparts. The method involves generating translations of each English grammar pattern and calculating translation probability of words from a word-aligned parallel corpora. The results allow us to extract the most probable English-Chinese phrase pairs in the sentence pair. We present a prototype system that applies the method to extract grammar patterns and phrases in parallel sentences. An evaluation on randomly selected examples from a dictionary shows that our approach has reasonably good performance. We use human judge to assess the bilingual phrases generated by our approach. The results have potential to assist language learning and machine translation research.",Identify Bilingual Patterns and Phrases from a Bilingual Sentence Pair,"This paper presents a method for automatically identifying bilingual grammar patterns and extracting bilingual phrase instances from a given English-Chinese sentence pair. In our approach, the English-Chinese sentence pair is parsed to identify English grammar patterns and Chinese counterparts. The method involves generating translations of each English grammar pattern and calculating translation probability of words from a word-aligned parallel corpora. The results allow us to extract the most probable English-Chinese phrase pairs in the sentence pair. We present a prototype system that applies the method to extract grammar patterns and phrases in parallel sentences. An evaluation on randomly selected examples from a dictionary shows that our approach has reasonably good performance. We use human judge to assess the bilingual phrases generated by our approach. The results have potential to assist language learning and machine translation research.",Identify Bilingual Patterns and Phrases from a Bilingual Sentence Pair,"This paper presents a method for automatically identifying bilingual grammar patterns and extracting bilingual phrase instances from a given English-Chinese sentence pair. In our approach, the English-Chinese sentence pair is parsed to identify English grammar patterns and Chinese counterparts. The method involves generating translations of each English grammar pattern and calculating translation probability of words from a word-aligned parallel corpora. The results allow us to extract the most probable English-Chinese phrase pairs in the sentence pair. We present a prototype system that applies the method to extract grammar patterns and phrases in parallel sentences. An evaluation on randomly selected examples from a dictionary shows that our approach has reasonably good performance. We use human judge to assess the bilingual phrases generated by our approach. The results have potential to assist language learning and machine translation research.",,"Identify Bilingual Patterns and Phrases from a Bilingual Sentence Pair. This paper presents a method for automatically identifying bilingual grammar patterns and extracting bilingual phrase instances from a given English-Chinese sentence pair. In our approach, the English-Chinese sentence pair is parsed to identify English grammar patterns and Chinese counterparts. 
The method involves generating translations of each English grammar pattern and calculating translation probability of words from a word-aligned parallel corpora. The results allow us to extract the most probable English-Chinese phrase pairs in the sentence pair. We present a prototype system that applies the method to extract grammar patterns and phrases in parallel sentences. An evaluation on randomly selected examples from a dictionary shows that our approach has reasonably good performance. We use human judge to assess the bilingual phrases generated by our approach. The results have potential to assist language learning and machine translation research.",2021
beigman-klebanov-etal-2018-corpus,https://aclanthology.org/N18-2014,0,,,,,,,"A Corpus of Non-Native Written English Annotated for Metaphor. We present a corpus of 240 argumentative essays written by non-native speakers of English annotated for metaphor. The corpus is made publicly available. We provide benchmark performance of state-of-the-art systems on this new corpus, and explore the relationship between writing proficiency and metaphor use.",A Corpus of Non-Native Written {E}nglish Annotated for Metaphor,"We present a corpus of 240 argumentative essays written by non-native speakers of English annotated for metaphor. The corpus is made publicly available. We provide benchmark performance of state-of-the-art systems on this new corpus, and explore the relationship between writing proficiency and metaphor use.",A Corpus of Non-Native Written English Annotated for Metaphor,"We present a corpus of 240 argumentative essays written by non-native speakers of English annotated for metaphor. The corpus is made publicly available. We provide benchmark performance of state-of-the-art systems on this new corpus, and explore the relationship between writing proficiency and metaphor use.",,"A Corpus of Non-Native Written English Annotated for Metaphor. We present a corpus of 240 argumentative essays written by non-native speakers of English annotated for metaphor. The corpus is made publicly available. We provide benchmark performance of state-of-the-art systems on this new corpus, and explore the relationship between writing proficiency and metaphor use.",2018
amnueypornsakul-etal-2014-predicting,https://aclanthology.org/W14-4110,0,,,,,,,"Predicting Attrition Along the Way: The UIUC Model. Discussion forum and clickstream are two primary data streams that enable mining of student behavior in a massively open online course. A student's participation in the discussion forum gives direct access to the opinions and concerns of the student. However, the low participation (5-10%) in discussion forums, prompts the modeling of user behavior based on clickstream information. Here we study a predictive model for learner attrition on a given week using information mined just from the clickstream. Features that are related to the quiz attempt/submission and those that capture interaction with various course components are found to be reasonable predictors of attrition in a given week.",Predicting Attrition Along the Way: The {UIUC} Model,"Discussion forum and clickstream are two primary data streams that enable mining of student behavior in a massively open online course. A student's participation in the discussion forum gives direct access to the opinions and concerns of the student. However, the low participation (5-10%) in discussion forums, prompts the modeling of user behavior based on clickstream information. Here we study a predictive model for learner attrition on a given week using information mined just from the clickstream. Features that are related to the quiz attempt/submission and those that capture interaction with various course components are found to be reasonable predictors of attrition in a given week.",Predicting Attrition Along the Way: The UIUC Model,"Discussion forum and clickstream are two primary data streams that enable mining of student behavior in a massively open online course. A student's participation in the discussion forum gives direct access to the opinions and concerns of the student. However, the low participation (5-10%) in discussion forums, prompts the modeling of user behavior based on clickstream information. Here we study a predictive model for learner attrition on a given week using information mined just from the clickstream. Features that are related to the quiz attempt/submission and those that capture interaction with various course components are found to be reasonable predictors of attrition in a given week.",,"Predicting Attrition Along the Way: The UIUC Model. Discussion forum and clickstream are two primary data streams that enable mining of student behavior in a massively open online course. A student's participation in the discussion forum gives direct access to the opinions and concerns of the student. However, the low participation (5-10%) in discussion forums, prompts the modeling of user behavior based on clickstream information. Here we study a predictive model for learner attrition on a given week using information mined just from the clickstream. Features that are related to the quiz attempt/submission and those that capture interaction with various course components are found to be reasonable predictors of attrition in a given week.",2014
graff-etal-2019-ingeotec,https://aclanthology.org/S19-2114,0,,,,,,,"INGEOTEC at SemEval-2019 Task 5 and Task 6: A Genetic Programming Approach for Text Classification. This paper describes our participation in HatEval and OffensEval challenges for English and Spanish languages. We used several approaches, B4MSA, FastText, and EvoMSA. Best results were achieved with EvoMSA, which is a multilingual and domain-independent architecture that combines the prediction from different knowledge sources to solve text classification problems.",{INGEOTEC} at {S}em{E}val-2019 Task 5 and Task 6: A Genetic Programming Approach for Text Classification,"This paper describes our participation in HatEval and OffensEval challenges for English and Spanish languages. We used several approaches, B4MSA, FastText, and EvoMSA. Best results were achieved with EvoMSA, which is a multilingual and domain-independent architecture that combines the prediction from different knowledge sources to solve text classification problems.",INGEOTEC at SemEval-2019 Task 5 and Task 6: A Genetic Programming Approach for Text Classification,"This paper describes our participation in HatEval and OffensEval challenges for English and Spanish languages. We used several approaches, B4MSA, FastText, and EvoMSA. Best results were achieved with EvoMSA, which is a multilingual and domain-independent architecture that combines the prediction from different knowledge sources to solve text classification problems.",,"INGEOTEC at SemEval-2019 Task 5 and Task 6: A Genetic Programming Approach for Text Classification. This paper describes our participation in HatEval and OffensEval challenges for English and Spanish languages. We used several approaches, B4MSA, FastText, and EvoMSA. Best results were achieved with EvoMSA, which is a multilingual and domain-independent architecture that combines the prediction from different knowledge sources to solve text classification problems.",2019
lonngren-1988-lexika,https://aclanthology.org/W87-0115,0,,,,,,,"Lexika, baserade p\aa semantiska relationer (Lexica, based on semantic relations) [In Swedish]. Den första fråga man måste ta ställning till om man vill bygga upp en tesaurus, alltså ett lexikon baserat på semantiska relationer, är om man skall tillämpa någon form av hierarkisering och hur i så fall denna skall se ut. I princip vill jag förkasta tanken på att begreppen som sådana kan ordnas hierarkiskt; jag tror alltså inte på några universella eller särspråkliga semantiska primitiver à la Wierzbicka (1972). Normalt är det nog att konstatera att det föreligger ett associativt samband mellan två begrepp, t.ex. tand och bita, samt att fastställa styrkan hos och arten av detta samband utan att postulera något riktningsförhållande.
Det är emellertid praktiskt att organisera ett tesaurus lexikon hierarkiskt. Det innebär en förenkling så till vida att man ersätter en mångfald av relationer med i princip en enda, dependens. Jag tänker mig här en mer djup- och genomgående hierarkisering än den vi finner i Rogets lexikon (1962, första gången utgivet 1852) och dess svenska efterbildning, Bring (1930), där man definierat ett begränsat antal ""begreppsklasser"" och hänfört varje ord till en sådan. Frågan är bara om detta är möjligt, alltså om en sådan hierarkisering står i samklang med ordskattens inre natur. För att citera Kassabov (1987, 51) gäller det här att undvika det allmänna misstaget att ""attempt to prove the systematic character of vocabulary not by establishing the inherent principles of its inner organization, but by forcing upon the lexical items the networks of pre-formulated systems"".","Lexika, baserade p{\aa} semantiska relationer (Lexica, based on semantic relations) [In {S}wedish]","Den första fråga man måste ta ställning till om man vill bygga upp en tesaurus, alltså ett lexikon baserat på semantiska relationer, är om man skall tillämpa någon form av hierarkisering och hur i så fall denna skall se ut. I princip vill jag förkasta tanken på att begreppen som sådana kan ordnas hierarkiskt; jag tror alltså inte på några universella eller särspråkliga semantiska primitiver à la Wierzbicka (1972). Normalt är det nog att konstatera att det föreligger ett associativt samband mellan två begrepp, t.ex. tand och bita, samt att fastställa styrkan hos och arten av detta samband utan att postulera något riktningsförhållande.
Det är emellertid praktiskt att organisera ett tesaurus lexikon hierarkiskt. Det innebär en förenkling så till vida att man ersätter en mångfald av relationer med i princip en enda, dependens. Jag tänker mig här en mer djup- och genomgående hierarkisering än den vi finner i Rogets lexikon (1962, första gången utgivet 1852) och dess svenska efterbildning, Bring (1930), där man definierat ett begränsat antal ""begreppsklasser"" och hänfört varje ord till en sådan. Frågan är bara om detta är möjligt, alltså om en sådan hierarkisering står i samklang med ordskattens inre natur. För att citera Kassabov (1987, 51) gäller det här att undvika det allmänna misstaget att ""attempt to prove the systematic character of vocabulary not by establishing the inherent principles of its inner organization, but by forcing upon the lexical items the networks of pre-formulated systems"".","Lexika, baserade p\aa semantiska relationer (Lexica, based on semantic relations) [In Swedish]","Den första fråga man måste ta ställning till om man vill bygga upp en tesaurus, alltså ett lexikon baserat på semantiska relationer, är om man skall tillämpa någon form av hierarkisering och hur i så fall denna skall se ut. I princip vill jag förkasta tanken på att begreppen som sådana kan ordnas hierarkiskt; jag tror alltså inte på några universella eller särspråkliga semantiska primitiver à la Wierzbicka (1972). Normalt är det nog att konstatera att det föreligger ett associativt samband mellan två begrepp, t.ex. tand och bita, samt att fastställa styrkan hos och arten av detta samband utan att postulera något riktningsförhållande.
Det är emellertid praktiskt att organisera ett tesaurus lexikon hierarkiskt. Det innebär en förenkling så till vida att man ersätter en mångfald av relationer med i princip en enda, dependens. Jag tänker mig här en mer djup- och genomgående hierarkisering än den vi finner i Rogets lexikon (1962, första gången utgivet 1852) och dess svenska efterbildning, Bring (1930), där man definierat ett begränsat antal ""begreppsklasser"" och hänfört varje ord till en sådan. Frågan är bara om detta är möjligt, alltså om en sådan hierarkisering står i samklang med ordskattens inre natur. För att citera Kassabov (1987, 51) gäller det här att undvika det allmänna misstaget att ""attempt to prove the systematic character of vocabulary not by establishing the inherent principles of its inner organization, but by forcing upon the lexical items the networks of pre-formulated systems"".",,"Lexika, baserade p\aa semantiska relationer (Lexica, based on semantic relations) [In Swedish]. Den första fråga man måste ta ställning till om man vill bygga upp en tesaurus, alltså ett lexikon baserat på semantiska relationer, är om man skall tillämpa någon form av hierarkisering och hur i så fall denna skall se ut. I princip vill jag förkasta tanken på att begreppen som sådana kan ordnas hierarkiskt; jag tror alltså inte på några universella eller särspråkliga semantiska primitiver à la Wierzbicka (1972). Normalt är det nog att konstatera att det föreligger ett associativt samband mellan två begrepp, t.ex. tand och bita, samt att fastställa styrkan hos och arten av detta samband utan att postulera något riktningsförhållande.
Det är emellertid praktiskt att organisera ett tesaurus lexikon hierarkiskt. Det innebär en förenkling så till vida att man ersätter en mångfald av relationer med i princip en enda, dependens. Jag tänker mig här en mer djup- och genomgående hierarkisering än den vi finner i Rogets lexikon (1962, första gången utgivet 1852) och dess svenska efterbildning, Bring (1930), där man definierat ett begränsat antal ""begreppsklasser"" och hänfört varje ord till en sådan. Frågan är bara om detta är möjligt, alltså om en sådan hierarkisering står i samklang med ordskattens inre natur. För att citera Kassabov (1987, 51) gäller det här att undvika det allmänna misstaget att ""attempt to prove the systematic character of vocabulary not by establishing the inherent principles of its inner organization, but by forcing upon the lexical items the networks of pre-formulated systems"".",1988
maccartney-manning-2007-natural,https://aclanthology.org/W07-1431,0,,,,,,,"Natural Logic for Textual Inference. This paper presents the first use of a computational model of natural logic-a system of logical inference which operates over natural language-for textual inference. Most current approaches to the PASCAL RTE textual inference task achieve robustness by sacrificing semantic precision; while broadly effective, they are easily confounded by ubiquitous inferences involving monotonicity. At the other extreme, systems which rely on first-order logic and theorem proving are precise, but excessively brittle. This work aims at a middle way. Our system finds a low-cost edit sequence which transforms the premise into the hypothesis; learns to classify entailment relations across atomic edits; and composes atomic entailments into a top-level entailment judgment. We provide the first reported results for any system on the FraCaS test suite. We also evaluate on RTE3 data, and show that hybridizing an existing RTE system with our natural logic system yields significant performance gains.",Natural Logic for Textual Inference,"This paper presents the first use of a computational model of natural logic-a system of logical inference which operates over natural language-for textual inference. Most current approaches to the PASCAL RTE textual inference task achieve robustness by sacrificing semantic precision; while broadly effective, they are easily confounded by ubiquitous inferences involving monotonicity. At the other extreme, systems which rely on first-order logic and theorem proving are precise, but excessively brittle. This work aims at a middle way. Our system finds a low-cost edit sequence which transforms the premise into the hypothesis; learns to classify entailment relations across atomic edits; and composes atomic entailments into a top-level entailment judgment. We provide the first reported results for any system on the FraCaS test suite. We also evaluate on RTE3 data, and show that hybridizing an existing RTE system with our natural logic system yields significant performance gains.",Natural Logic for Textual Inference,"This paper presents the first use of a computational model of natural logic-a system of logical inference which operates over natural language-for textual inference. Most current approaches to the PASCAL RTE textual inference task achieve robustness by sacrificing semantic precision; while broadly effective, they are easily confounded by ubiquitous inferences involving monotonicity. At the other extreme, systems which rely on first-order logic and theorem proving are precise, but excessively brittle. This work aims at a middle way. Our system finds a low-cost edit sequence which transforms the premise into the hypothesis; learns to classify entailment relations across atomic edits; and composes atomic entailments into a top-level entailment judgment. We provide the first reported results for any system on the FraCaS test suite. We also evaluate on RTE3 data, and show that hybridizing an existing RTE system with our natural logic system yields significant performance gains.",The authors wish to thank Marie-Catherine de Marneffe and the anonymous reviewers for their helpful comments on an earlier draft of this paper. This work was supported in part by ARDA's Advanced Question Answering for Intelligence (AQUAINT) Program.,"Natural Logic for Textual Inference. 
This paper presents the first use of a computational model of natural logic-a system of logical inference which operates over natural language-for textual inference. Most current approaches to the PASCAL RTE textual inference task achieve robustness by sacrificing semantic precision; while broadly effective, they are easily confounded by ubiquitous inferences involving monotonicity. At the other extreme, systems which rely on first-order logic and theorem proving are precise, but excessively brittle. This work aims at a middle way. Our system finds a low-cost edit sequence which transforms the premise into the hypothesis; learns to classify entailment relations across atomic edits; and composes atomic entailments into a top-level entailment judgment. We provide the first reported results for any system on the FraCaS test suite. We also evaluate on RTE3 data, and show that hybridizing an existing RTE system with our natural logic system yields significant performance gains.",2007
han-etal-2020-dyernie,https://aclanthology.org/2020.emnlp-main.593,0,,,,,,,"DyERNIE: Dynamic Evolution of Riemannian Manifold Embeddings for Temporal Knowledge Graph Completion. There has recently been increasing interest in learning representations of temporal knowledge graphs (KGs), which record the dynamic relationships between entities over time. Temporal KGs often exhibit multiple simultaneous non-Euclidean structures, such as hierarchical and cyclic structures. However, existing embedding approaches for temporal KGs typically learn entity representations and their dynamic evolution in the Euclidean space, which might not capture such intrinsic structures very well. To this end, we propose DyERNIE, a non-Euclidean embedding approach that learns evolving entity representations in a product of Riemannian manifolds, where the composed spaces are estimated from the sectional curvatures of underlying data. Product manifolds enable our approach to better reflect a wide variety of geometric structures on temporal KGs. Besides, to capture the evolutionary dynamics of temporal KGs, we let the entity representations evolve according to a velocity vector defined in the tangent space at each timestamp. We analyze in detail the contribution of geometric spaces to representation learning of temporal KGs and evaluate our model on temporal knowledge graph completion tasks. Extensive experiments on three real-world datasets demonstrate significantly improved performance, indicating that the dynamics of multi-relational graph data can be more properly modeled by the evolution of embeddings on Riemannian manifolds.",{D}y{ERNIE}: {D}ynamic {E}volution of {R}iemannian {M}anifold {E}mbeddings for {T}emporal {K}nowledge {G}raph {C}ompletion,"There has recently been increasing interest in learning representations of temporal knowledge graphs (KGs), which record the dynamic relationships between entities over time. Temporal KGs often exhibit multiple simultaneous non-Euclidean structures, such as hierarchical and cyclic structures. However, existing embedding approaches for temporal KGs typically learn entity representations and their dynamic evolution in the Euclidean space, which might not capture such intrinsic structures very well. To this end, we propose DyERNIE, a non-Euclidean embedding approach that learns evolving entity representations in a product of Riemannian manifolds, where the composed spaces are estimated from the sectional curvatures of underlying data. Product manifolds enable our approach to better reflect a wide variety of geometric structures on temporal KGs. Besides, to capture the evolutionary dynamics of temporal KGs, we let the entity representations evolve according to a velocity vector defined in the tangent space at each timestamp. We analyze in detail the contribution of geometric spaces to representation learning of temporal KGs and evaluate our model on temporal knowledge graph completion tasks. Extensive experiments on three real-world datasets demonstrate significantly improved performance, indicating that the dynamics of multi-relational graph data can be more properly modeled by the evolution of embeddings on Riemannian manifolds.",DyERNIE: Dynamic Evolution of Riemannian Manifold Embeddings for Temporal Knowledge Graph Completion,"There has recently been increasing interest in learning representations of temporal knowledge graphs (KGs), which record the dynamic relationships between entities over time. 
Temporal KGs often exhibit multiple simultaneous non-Euclidean structures, such as hierarchical and cyclic structures. However, existing embedding approaches for temporal KGs typically learn entity representations and their dynamic evolution in the Euclidean space, which might not capture such intrinsic structures very well. To this end, we propose DyERNIE, a non-Euclidean embedding approach that learns evolving entity representations in a product of Riemannian manifolds, where the composed spaces are estimated from the sectional curvatures of underlying data. Product manifolds enable our approach to better reflect a wide variety of geometric structures on temporal KGs. Besides, to capture the evolutionary dynamics of temporal KGs, we let the entity representations evolve according to a velocity vector defined in the tangent space at each timestamp. We analyze in detail the contribution of geometric spaces to representation learning of temporal KGs and evaluate our model on temporal knowledge graph completion tasks. Extensive experiments on three real-world datasets demonstrate significantly improved performance, indicating that the dynamics of multi-relational graph data can be more properly modeled by the evolution of embeddings on Riemannian manifolds.","The authors acknowledge support by the German Federal Ministry for Education and Research (BMBF), funding project MLWin (grant 01IS18050).","DyERNIE: Dynamic Evolution of Riemannian Manifold Embeddings for Temporal Knowledge Graph Completion. There has recently been increasing interest in learning representations of temporal knowledge graphs (KGs), which record the dynamic relationships between entities over time. Temporal KGs often exhibit multiple simultaneous non-Euclidean structures, such as hierarchical and cyclic structures. However, existing embedding approaches for temporal KGs typically learn entity representations and their dynamic evolution in the Euclidean space, which might not capture such intrinsic structures very well. To this end, we propose DyERNIE, a non-Euclidean embedding approach that learns evolving entity representations in a product of Riemannian manifolds, where the composed spaces are estimated from the sectional curvatures of underlying data. Product manifolds enable our approach to better reflect a wide variety of geometric structures on temporal KGs. Besides, to capture the evolutionary dynamics of temporal KGs, we let the entity representations evolve according to a velocity vector defined in the tangent space at each timestamp. We analyze in detail the contribution of geometric spaces to representation learning of temporal KGs and evaluate our model on temporal knowledge graph completion tasks. Extensive experiments on three real-world datasets demonstrate significantly improved performance, indicating that the dynamics of multi-relational graph data can be more properly modeled by the evolution of embeddings on Riemannian manifolds.",2020
mazziotta-2019-evolution,https://aclanthology.org/W19-7709,0,,,,,,,"The evolution of spatial rationales in Tesni\`ere's stemmas. This paper investigates the evolution of the spatial rationales of Tesnière's syntactic diagrams (stemma). I show that the conventions change from his first attempts to model complete sentences up to the classical stemma he uses in his Elements of structural syntax (1959). From mostly symbolic representations of hierarchy (directed arrows from the dependent to the governor), he shifts to a more configurational one (connected dependents are placed below the governor).",The evolution of spatial rationales in Tesni{\`e}re{'}s stemmas,"This paper investigates the evolution of the spatial rationales of Tesnière's syntactic diagrams (stemma). I show that the conventions change from his first attempts to model complete sentences up to the classical stemma he uses in his Elements of structural syntax (1959). From mostly symbolic representations of hierarchy (directed arrows from the dependent to the governor), he shifts to a more configurational one (connected dependents are placed below the governor).",The evolution of spatial rationales in Tesni\`ere's stemmas,"This paper investigates the evolution of the spatial rationales of Tesnière's syntactic diagrams (stemma). I show that the conventions change from his first attempts to model complete sentences up to the classical stemma he uses in his Elements of structural syntax (1959). From mostly symbolic representations of hierarchy (directed arrows from the dependent to the governor), he shifts to a more configurational one (connected dependents are placed below the governor).","I would like to thank Sylvain Kahane, Jean-Christophe Vanhalle and anonymous reviewers of the Depling comitee for their suggestions. I would also like to thank Jacques François and Lene Schøsler, who discussed preliminary versions of this paper.","The evolution of spatial rationales in Tesni\`ere's stemmas. This paper investigates the evolution of the spatial rationales of Tesnière's syntactic diagrams (stemma). I show that the conventions change from his first attempts to model complete sentences up to the classical stemma he uses in his Elements of structural syntax (1959). From mostly symbolic representations of hierarchy (directed arrows from the dependent to the governor), he shifts to a more configurational one (connected dependents are placed below the governor).",2019
mao-etal-2007-using,https://aclanthology.org/Y07-1031,0,,,,,,,"Using Non-Local Features to Improve Named Entity Recognition Recall. Named Entity Recognition (NER) is always limited by its lower recall resulting from the asymmetric data distribution where the NONE class dominates the entity classes. This paper presents an approach that exploits non-local information to improve the NER recall. Several kinds of non-local features encoding entity token occurrence, entity boundary and entity class are explored under Conditional Random Fields (CRFs) framework. Experiments on SIGHAN 2006 MSRA (CityU) corpus indicate that non-local features can effectively enhance the recall of the state-of-the-art NER systems. Incorporating the non-local features into the NER systems using local features alone, our best system achieves a 23.56% (25.26%) relative error reduction on the recall and 17.10% (11.36%) relative error reduction on the F1 score; the improved F1 score 89.38% (90.09%) is significantly superior to the best NER system with F1 of 86.51% (89.03%) participated in the closed track.",Using Non-Local Features to Improve Named Entity Recognition Recall,"Named Entity Recognition (NER) is always limited by its lower recall resulting from the asymmetric data distribution where the NONE class dominates the entity classes. This paper presents an approach that exploits non-local information to improve the NER recall. Several kinds of non-local features encoding entity token occurrence, entity boundary and entity class are explored under Conditional Random Fields (CRFs) framework. Experiments on SIGHAN 2006 MSRA (CityU) corpus indicate that non-local features can effectively enhance the recall of the state-of-the-art NER systems. Incorporating the non-local features into the NER systems using local features alone, our best system achieves a 23.56% (25.26%) relative error reduction on the recall and 17.10% (11.36%) relative error reduction on the F1 score; the improved F1 score 89.38% (90.09%) is significantly superior to the best NER system with F1 of 86.51% (89.03%) participated in the closed track.",Using Non-Local Features to Improve Named Entity Recognition Recall,"Named Entity Recognition (NER) is always limited by its lower recall resulting from the asymmetric data distribution where the NONE class dominates the entity classes. This paper presents an approach that exploits non-local information to improve the NER recall. Several kinds of non-local features encoding entity token occurrence, entity boundary and entity class are explored under Conditional Random Fields (CRFs) framework. Experiments on SIGHAN 2006 MSRA (CityU) corpus indicate that non-local features can effectively enhance the recall of the state-of-the-art NER systems. Incorporating the non-local features into the NER systems using local features alone, our best system achieves a 23.56% (25.26%) relative error reduction on the recall and 17.10% (11.36%) relative error reduction on the F1 score; the improved F1 score 89.38% (90.09%) is significantly superior to the best NER system with F1 of 86.51% (89.03%) participated in the closed track.",,"Using Non-Local Features to Improve Named Entity Recognition Recall. Named Entity Recognition (NER) is always limited by its lower recall resulting from the asymmetric data distribution where the NONE class dominates the entity classes. This paper presents an approach that exploits non-local information to improve the NER recall. 
Several kinds of non-local features encoding entity token occurrence, entity boundary and entity class are explored under Conditional Random Fields (CRFs) framework. Experiments on SIGHAN 2006 MSRA (CityU) corpus indicate that non-local features can effectively enhance the recall of the state-of-the-art NER systems. Incorporating the non-local features into the NER systems using local features alone, our best system achieves a 23.56% (25.26%) relative error reduction on the recall and 17.10% (11.36%) relative error reduction on the F1 score; the improved F1 score 89.38% (90.09%) is significantly superior to the best NER system with F1 of 86.51% (89.03%) participated in the closed track.",2007
chaudhary-etal-2021-wall,https://aclanthology.org/2021.emnlp-main.553,0,,,,,,,"When is Wall a Pared and when a Muro?: Extracting Rules Governing Lexical Selection. Learning fine-grained distinctions between vocabulary items is a key challenge in learning a new language. For example, the noun ""wall"" has different lexical manifestations in Spanish-""pared"" refers to an indoor wall while ""muro"" refers to an outside wall. However, this variety of lexical distinction may not be obvious to non-native learners unless the distinction is explained in such a way. In this work, we present a method for automatically identifying fine-grained lexical distinctions, and extracting concise descriptions explaining these distinctions in a human-and machine-readable format. We confirm the quality of these extracted descriptions in a language learning setup for two languages, Spanish and Greek, where we use them to teach non-native speakers when to translate a given ambiguous word into its different possible translations. Code and data are publicly released here. 1",When is Wall a Pared and when a Muro?: Extracting Rules Governing Lexical Selection,"Learning fine-grained distinctions between vocabulary items is a key challenge in learning a new language. For example, the noun ""wall"" has different lexical manifestations in Spanish-""pared"" refers to an indoor wall while ""muro"" refers to an outside wall. However, this variety of lexical distinction may not be obvious to non-native learners unless the distinction is explained in such a way. In this work, we present a method for automatically identifying fine-grained lexical distinctions, and extracting concise descriptions explaining these distinctions in a human-and machine-readable format. We confirm the quality of these extracted descriptions in a language learning setup for two languages, Spanish and Greek, where we use them to teach non-native speakers when to translate a given ambiguous word into its different possible translations. Code and data are publicly released here. 1",When is Wall a Pared and when a Muro?: Extracting Rules Governing Lexical Selection,"Learning fine-grained distinctions between vocabulary items is a key challenge in learning a new language. For example, the noun ""wall"" has different lexical manifestations in Spanish-""pared"" refers to an indoor wall while ""muro"" refers to an outside wall. However, this variety of lexical distinction may not be obvious to non-native learners unless the distinction is explained in such a way. In this work, we present a method for automatically identifying fine-grained lexical distinctions, and extracting concise descriptions explaining these distinctions in a human-and machine-readable format. We confirm the quality of these extracted descriptions in a language learning setup for two languages, Spanish and Greek, where we use them to teach non-native speakers when to translate a given ambiguous word into its different possible translations. Code and data are publicly released here. 1","The authors are grateful to the anonymous reviewers who took the time to provide many interesting comments that made the paper significantly better. We would also like to thank Nikolai Vogler for the original interface for data annotation, and all the learners for their participation in our study, and without whom this study would not have been possible or meaningful. 
This work is sponsored by the Waibel Presidential Fellowship and by the National Science Foundation under grant 1761548.","When is Wall a Pared and when a Muro?: Extracting Rules Governing Lexical Selection. Learning fine-grained distinctions between vocabulary items is a key challenge in learning a new language. For example, the noun ""wall"" has different lexical manifestations in Spanish-""pared"" refers to an indoor wall while ""muro"" refers to an outside wall. However, this variety of lexical distinction may not be obvious to non-native learners unless the distinction is explained in such a way. In this work, we present a method for automatically identifying fine-grained lexical distinctions, and extracting concise descriptions explaining these distinctions in a human-and machine-readable format. We confirm the quality of these extracted descriptions in a language learning setup for two languages, Spanish and Greek, where we use them to teach non-native speakers when to translate a given ambiguous word into its different possible translations. Code and data are publicly released here. 1",2021
regier-1991-learning,https://aclanthology.org/P91-1018,0,,,,,,,"Learning Perceptually-Grounded Semantics in the L₀ Project. A method is presented for acquiring perceptually-grounded semantics for spatial terms in a simple visual domain, as a part of the L0 miniature language acquisition project. Two central problems in this learning task are (a) ensuring that the terms learned generalize well, so that they can be accurately applied to new scenes, and (b) learning in the absence of explicit negative evidence. Solutions to these two problems are presented, and the results discussed.",Learning Perceptually-Grounded Semantics in the \textit{L₀} Project,"A method is presented for acquiring perceptually-grounded semantics for spatial terms in a simple visual domain, as a part of the L0 miniature language acquisition project. Two central problems in this learning task are (a) ensuring that the terms learned generalize well, so that they can be accurately applied to new scenes, and (b) learning in the absence of explicit negative evidence. Solutions to these two problems are presented, and the results discussed.",Learning Perceptually-Grounded Semantics in the L₀ Project,"A method is presented for acquiring perceptually-grounded semantics for spatial terms in a simple visual domain, as a part of the L0 miniature language acquisition project. Two central problems in this learning task are (a) ensuring that the terms learned generalize well, so that they can be accurately applied to new scenes, and (b) learning in the absence of explicit negative evidence. Solutions to these two problems are presented, and the results discussed.",,"Learning Perceptually-Grounded Semantics in the L₀ Project. A method is presented for acquiring perceptually-grounded semantics for spatial terms in a simple visual domain, as a part of the L0 miniature language acquisition project. Two central problems in this learning task are (a) ensuring that the terms learned generalize well, so that they can be accurately applied to new scenes, and (b) learning in the absence of explicit negative evidence. Solutions to these two problems are presented, and the results discussed.",1991
kountz-etal-2008-laf,http://www.lrec-conf.org/proceedings/lrec2008/pdf/569_paper.pdf,0,,,,,,,"A LAF/GrAF based Encoding Scheme for underspecified Representations of syntactic Annotations.. Data models and encoding formats for syntactically annotated text corpora need to deal with syntactic ambiguity; underspecified representations are particularly well suited for the representation of ambiguous data because they allow for high informational efficiency. We discuss the issue of being informationally efficient, and the trade-off between efficient encoding of linguistic annotations and complete documentation of linguistic analyses. The main topic of this article is a data model and an encoding scheme based on LAF/GrAF (Ide and Romary, 2006; Ide and Suderman, 2007) which provides a flexible framework for encoding underspecified representations. We show how a set of dependency structures and a set of TiGer graphs (Brants et al., 2002) representing the readings of an ambiguous sentence can be encoded, and we discuss basic issues in querying corpora which are encoded using the framework presented here.",A {LAF}/{G}r{AF} based Encoding Scheme for underspecified Representations of syntactic Annotations.,"Data models and encoding formats for syntactically annotated text corpora need to deal with syntactic ambiguity; underspecified representations are particularly well suited for the representation of ambiguous data because they allow for high informational efficiency. We discuss the issue of being informationally efficient, and the trade-off between efficient encoding of linguistic annotations and complete documentation of linguistic analyses. The main topic of this article is a data model and an encoding scheme based on LAF/GrAF (Ide and Romary, 2006; Ide and Suderman, 2007) which provides a flexible framework for encoding underspecified representations. We show how a set of dependency structures and a set of TiGer graphs (Brants et al., 2002) representing the readings of an ambiguous sentence can be encoded, and we discuss basic issues in querying corpora which are encoded using the framework presented here.",A LAF/GrAF based Encoding Scheme for underspecified Representations of syntactic Annotations.,"Data models and encoding formats for syntactically annotated text corpora need to deal with syntactic ambiguity; underspecified representations are particularly well suited for the representation of ambiguous data because they allow for high informational efficiency. We discuss the issue of being informationally efficient, and the trade-off between efficient encoding of linguistic annotations and complete documentation of linguistic analyses. The main topic of this article is a data model and an encoding scheme based on LAF/GrAF (Ide and Romary, 2006; Ide and Suderman, 2007) which provides a flexible framework for encoding underspecified representations. We show how a set of dependency structures and a set of TiGer graphs (Brants et al., 2002) representing the readings of an ambiguous sentence can be encoded, and we discuss basic issues in querying corpora which are encoded using the framework presented here.",,"A LAF/GrAF based Encoding Scheme for underspecified Representations of syntactic Annotations.. Data models and encoding formats for syntactically annotated text corpora need to deal with syntactic ambiguity; underspecified representations are particularly well suited for the representation of ambiguous data because they allow for high informational efficiency. 
We discuss the issue of being informationally efficient, and the trade-off between efficient encoding of linguistic annotations and complete documentation of linguistic analyses. The main topic of this article is a data model and an encoding scheme based on LAF/GrAF (Ide and Romary, 2006; Ide and Suderman, 2007) which provides a flexible framework for encoding underspecified representations. We show how a set of dependency structures and a set of TiGer graphs (Brants et al., 2002) representing the readings of an ambiguous sentence can be encoded, and we discuss basic issues in querying corpora which are encoded using the framework presented here.",2008
misawa-etal-2017-character,https://aclanthology.org/W17-4114,0,,,,,,,"Character-based Bidirectional LSTM-CRF with words and characters for Japanese Named Entity Recognition. Recently, neural models have shown superior performance over conventional models in NER tasks. These models use CNN to extract sub-word information along with RNN to predict a tag for each word. However, these models have been tested almost entirely on English texts. It remains unclear whether they perform similarly in other languages. We worked on Japanese NER using neural models and discovered two obstacles of the state-ofthe-art model. First, CNN is unsuitable for extracting Japanese sub-word information. Secondly, a model predicting a tag for each word cannot extract an entity when a part of a word composes an entity. The contributions of this work are (i) verifying the effectiveness of the state-of-theart NER model for Japanese, (ii) proposing a neural model for predicting a tag for each character using word and character information. Experimentally obtained results demonstrate that our model outperforms the state-of-the-art neural English NER model in Japanese.",Character-based Bidirectional {LSTM}-{CRF} with words and characters for {J}apanese Named Entity Recognition,"Recently, neural models have shown superior performance over conventional models in NER tasks. These models use CNN to extract sub-word information along with RNN to predict a tag for each word. However, these models have been tested almost entirely on English texts. It remains unclear whether they perform similarly in other languages. We worked on Japanese NER using neural models and discovered two obstacles of the state-ofthe-art model. First, CNN is unsuitable for extracting Japanese sub-word information. Secondly, a model predicting a tag for each word cannot extract an entity when a part of a word composes an entity. The contributions of this work are (i) verifying the effectiveness of the state-of-theart NER model for Japanese, (ii) proposing a neural model for predicting a tag for each character using word and character information. Experimentally obtained results demonstrate that our model outperforms the state-of-the-art neural English NER model in Japanese.",Character-based Bidirectional LSTM-CRF with words and characters for Japanese Named Entity Recognition,"Recently, neural models have shown superior performance over conventional models in NER tasks. These models use CNN to extract sub-word information along with RNN to predict a tag for each word. However, these models have been tested almost entirely on English texts. It remains unclear whether they perform similarly in other languages. We worked on Japanese NER using neural models and discovered two obstacles of the state-ofthe-art model. First, CNN is unsuitable for extracting Japanese sub-word information. Secondly, a model predicting a tag for each word cannot extract an entity when a part of a word composes an entity. The contributions of this work are (i) verifying the effectiveness of the state-of-theart NER model for Japanese, (ii) proposing a neural model for predicting a tag for each character using word and character information. Experimentally obtained results demonstrate that our model outperforms the state-of-the-art neural English NER model in Japanese.",,"Character-based Bidirectional LSTM-CRF with words and characters for Japanese Named Entity Recognition. Recently, neural models have shown superior performance over conventional models in NER tasks. 
These models use CNN to extract sub-word information along with RNN to predict a tag for each word. However, these models have been tested almost entirely on English texts. It remains unclear whether they perform similarly in other languages. We worked on Japanese NER using neural models and discovered two obstacles of the state-of-the-art model. First, CNN is unsuitable for extracting Japanese sub-word information. Secondly, a model predicting a tag for each word cannot extract an entity when a part of a word composes an entity. The contributions of this work are (i) verifying the effectiveness of the state-of-the-art NER model for Japanese, (ii) proposing a neural model for predicting a tag for each character using word and character information. Experimentally obtained results demonstrate that our model outperforms the state-of-the-art neural English NER model in Japanese.",2017
zhang-etal-2022-modeling,https://aclanthology.org/2022.acl-long.84,0,,,,,,,"Modeling Temporal-Modal Entity Graph for Procedural Multimodal Machine Comprehension. Procedural Multimodal Documents (PMDs) organize textual instructions and corresponding images step by step. Comprehending PMDs and inducing their representations for the downstream reasoning tasks is designated as Procedural MultiModal Machine Comprehension (M 3 C). In this study, we approach Procedural M 3 C at a fine-grained level (compared with existing explorations at a document or sentence level), that is, entity. With delicate consideration, we model entity both in its temporal and cross-modal relation and propose a novel Temporal-Modal Entity Graph (TMEG). Specifically, a heterogeneous graph structure is formulated to capture textual and visual entities and trace their temporal-modal evolution. In addition, a graph aggregation module is introduced to conduct graph encoding and reasoning. Comprehensive experiments across three Procedural M 3 C tasks are conducted on a traditional dataset RecipeQA and our new dataset CraftQA, which can better evaluate the generalization of TMEG.",Modeling Temporal-Modal Entity Graph for Procedural Multimodal Machine Comprehension,"Procedural Multimodal Documents (PMDs) organize textual instructions and corresponding images step by step. Comprehending PMDs and inducing their representations for the downstream reasoning tasks is designated as Procedural MultiModal Machine Comprehension (M 3 C). In this study, we approach Procedural M 3 C at a fine-grained level (compared with existing explorations at a document or sentence level), that is, entity. With delicate consideration, we model entity both in its temporal and cross-modal relation and propose a novel Temporal-Modal Entity Graph (TMEG). Specifically, a heterogeneous graph structure is formulated to capture textual and visual entities and trace their temporal-modal evolution. In addition, a graph aggregation module is introduced to conduct graph encoding and reasoning. Comprehensive experiments across three Procedural M 3 C tasks are conducted on a traditional dataset RecipeQA and our new dataset CraftQA, which can better evaluate the generalization of TMEG.",Modeling Temporal-Modal Entity Graph for Procedural Multimodal Machine Comprehension,"Procedural Multimodal Documents (PMDs) organize textual instructions and corresponding images step by step. Comprehending PMDs and inducing their representations for the downstream reasoning tasks is designated as Procedural MultiModal Machine Comprehension (M 3 C). In this study, we approach Procedural M 3 C at a fine-grained level (compared with existing explorations at a document or sentence level), that is, entity. With delicate consideration, we model entity both in its temporal and cross-modal relation and propose a novel Temporal-Modal Entity Graph (TMEG). Specifically, a heterogeneous graph structure is formulated to capture textual and visual entities and trace their temporal-modal evolution. In addition, a graph aggregation module is introduced to conduct graph encoding and reasoning. Comprehensive experiments across three Procedural M 3 C tasks are conducted on a traditional dataset RecipeQA and our new dataset CraftQA, which can better evaluate the generalization of TMEG.",,"Modeling Temporal-Modal Entity Graph for Procedural Multimodal Machine Comprehension. Procedural Multimodal Documents (PMDs) organize textual instructions and corresponding images step by step. 
Comprehending PMDs and inducing their representations for the downstream reasoning tasks is designated as Procedural MultiModal Machine Comprehension (M 3 C). In this study, we approach Procedural M 3 C at a fine-grained level (compared with existing explorations at a document or sentence level), that is, entity. With delicate consideration, we model entity both in its temporal and cross-modal relation and propose a novel Temporal-Modal Entity Graph (TMEG). Specifically, a heterogeneous graph structure is formulated to capture textual and visual entities and trace their temporal-modal evolution. In addition, a graph aggregation module is introduced to conduct graph encoding and reasoning. Comprehensive experiments across three Procedural M 3 C tasks are conducted on a traditional dataset RecipeQA and our new dataset CraftQA, which can better evaluate the generalization of TMEG.",2022
vogel-etal-2013-emergence,https://aclanthology.org/N13-1127,0,,,,,,,"Emergence of Gricean Maxims from Multi-Agent Decision Theory. Grice characterized communication in terms of the cooperative principle, which enjoins speakers to make only contributions that serve the evolving conversational goals. We show that the cooperative principle and the associated maxims of relevance, quality, and quantity emerge from multi-agent decision theory. We utilize the Decentralized Partially Observable Markov Decision Process (Dec-POMDP) model of multi-agent decision making which relies only on basic definitions of rationality and the ability of agents to reason about each other's beliefs in maximizing joint utility. Our model uses cognitively-inspired heuristics to simplify the otherwise intractable task of reasoning jointly about actions, the environment, and the nested beliefs of other actors. Our experiments on a cooperative language task show that reasoning about others' belief states, and the resulting emergent Gricean communicative behavior, leads to significantly improved task performance.",Emergence of {G}ricean Maxims from Multi-Agent Decision Theory,"Grice characterized communication in terms of the cooperative principle, which enjoins speakers to make only contributions that serve the evolving conversational goals. We show that the cooperative principle and the associated maxims of relevance, quality, and quantity emerge from multi-agent decision theory. We utilize the Decentralized Partially Observable Markov Decision Process (Dec-POMDP) model of multi-agent decision making which relies only on basic definitions of rationality and the ability of agents to reason about each other's beliefs in maximizing joint utility. Our model uses cognitively-inspired heuristics to simplify the otherwise intractable task of reasoning jointly about actions, the environment, and the nested beliefs of other actors. Our experiments on a cooperative language task show that reasoning about others' belief states, and the resulting emergent Gricean communicative behavior, leads to significantly improved task performance.",Emergence of Gricean Maxims from Multi-Agent Decision Theory,"Grice characterized communication in terms of the cooperative principle, which enjoins speakers to make only contributions that serve the evolving conversational goals. We show that the cooperative principle and the associated maxims of relevance, quality, and quantity emerge from multi-agent decision theory. We utilize the Decentralized Partially Observable Markov Decision Process (Dec-POMDP) model of multi-agent decision making which relies only on basic definitions of rationality and the ability of agents to reason about each other's beliefs in maximizing joint utility. Our model uses cognitively-inspired heuristics to simplify the otherwise intractable task of reasoning jointly about actions, the environment, and the nested beliefs of other actors. Our experiments on a cooperative language task show that reasoning about others' belief states, and the resulting emergent Gricean communicative behavior, leads to significantly improved task performance.",This research was supported in part by ONR grants N00014-10-1-0109 and N00014-13-1-0287 and ARO grant W911NF-07-1-0216.,"Emergence of Gricean Maxims from Multi-Agent Decision Theory. Grice characterized communication in terms of the cooperative principle, which enjoins speakers to make only contributions that serve the evolving conversational goals. 
We show that the cooperative principle and the associated maxims of relevance, quality, and quantity emerge from multi-agent decision theory. We utilize the Decentralized Partially Observable Markov Decision Process (Dec-POMDP) model of multi-agent decision making which relies only on basic definitions of rationality and the ability of agents to reason about each other's beliefs in maximizing joint utility. Our model uses cognitively-inspired heuristics to simplify the otherwise intractable task of reasoning jointly about actions, the environment, and the nested beliefs of other actors. Our experiments on a cooperative language task show that reasoning about others' belief states, and the resulting emergent Gricean communicative behavior, leads to significantly improved task performance.",2013
reitter-etal-2006-priming,https://aclanthology.org/W06-1637,0,,,,,,,"Priming Effects in Combinatory Categorial Grammar. This paper presents a corpus-based account of structural priming in human sentence processing, focusing on the role that syntactic representations play in such an account. We estimate the strength of structural priming effects from a corpus of spontaneous spoken dialogue, annotated syntactically with Combinatory Categorial Grammar (CCG) derivations. This methodology allows us to test a range of predictions that CCG makes about priming. In particular, we present evidence for priming between lexical and syntactic categories encoding partially satisfied subcategorization frames, and we show that priming effects exist both for incremental and normal-form CCG derivations.",Priming Effects in {C}ombinatory {C}ategorial {G}rammar,"This paper presents a corpus-based account of structural priming in human sentence processing, focusing on the role that syntactic representations play in such an account. We estimate the strength of structural priming effects from a corpus of spontaneous spoken dialogue, annotated syntactically with Combinatory Categorial Grammar (CCG) derivations. This methodology allows us to test a range of predictions that CCG makes about priming. In particular, we present evidence for priming between lexical and syntactic categories encoding partially satisfied subcategorization frames, and we show that priming effects exist both for incremental and normal-form CCG derivations.",Priming Effects in Combinatory Categorial Grammar,"This paper presents a corpus-based account of structural priming in human sentence processing, focusing on the role that syntactic representations play in such an account. We estimate the strength of structural priming effects from a corpus of spontaneous spoken dialogue, annotated syntactically with Combinatory Categorial Grammar (CCG) derivations. This methodology allows us to test a range of predictions that CCG makes about priming. In particular, we present evidence for priming between lexical and syntactic categories encoding partially satisfied subcategorization frames, and we show that priming effects exist both for incremental and normal-form CCG derivations.","We would like to thank Mark Steedman, Roger Levy, Johanna Moore and three anonymous reviewers for their comments. The authors are grateful for being supported by the ","Priming Effects in Combinatory Categorial Grammar. This paper presents a corpus-based account of structural priming in human sentence processing, focusing on the role that syntactic representations play in such an account. We estimate the strength of structural priming effects from a corpus of spontaneous spoken dialogue, annotated syntactically with Combinatory Categorial Grammar (CCG) derivations. This methodology allows us to test a range of predictions that CCG makes about priming. In particular, we present evidence for priming between lexical and syntactic categories encoding partially satisfied subcategorization frames, and we show that priming effects exist both for incremental and normal-form CCG derivations.",2006
sinha-etal-2012-new-semantic,https://aclanthology.org/W12-5114,0,,,,,,,"A New Semantic Lexicon and Similarity Measure in Bangla. The Mental Lexicon (ML) refers to the organization of lexical entries of a language in the human mind.A clear knowledge of the structure of ML will help us to understand how the human brain processes language. The knowledge of semantic association among the words in ML is essential to many applications. Although, there are works on the representation of lexical entries based on their semantic association in the form of a lexicon in English and other languages, such works of Bangla is in a nascent stage. In this paper, we have proposed a distinct lexical organization based on semantic association between Bangla words which can be accessed efficiently by different applications. We have developed a novel approach of measuring the semantic similarity between words and verified it against user study. Further, a GUI has been designed for easy and efficient access.",A New Semantic Lexicon and Similarity Measure in {B}angla,"The Mental Lexicon (ML) refers to the organization of lexical entries of a language in the human mind.A clear knowledge of the structure of ML will help us to understand how the human brain processes language. The knowledge of semantic association among the words in ML is essential to many applications. Although, there are works on the representation of lexical entries based on their semantic association in the form of a lexicon in English and other languages, such works of Bangla is in a nascent stage. In this paper, we have proposed a distinct lexical organization based on semantic association between Bangla words which can be accessed efficiently by different applications. We have developed a novel approach of measuring the semantic similarity between words and verified it against user study. Further, a GUI has been designed for easy and efficient access.",A New Semantic Lexicon and Similarity Measure in Bangla,"The Mental Lexicon (ML) refers to the organization of lexical entries of a language in the human mind.A clear knowledge of the structure of ML will help us to understand how the human brain processes language. The knowledge of semantic association among the words in ML is essential to many applications. Although, there are works on the representation of lexical entries based on their semantic association in the form of a lexicon in English and other languages, such works of Bangla is in a nascent stage. In this paper, we have proposed a distinct lexical organization based on semantic association between Bangla words which can be accessed efficiently by different applications. We have developed a novel approach of measuring the semantic similarity between words and verified it against user study. Further, a GUI has been designed for easy and efficient access.",We are thankful to Society for Natural Language Technology Research Kolkata for helping us to develop the lexical resource. We are also thankful to those subjects who spend their time to manually evaluate our semantic similarity measure.,"A New Semantic Lexicon and Similarity Measure in Bangla. The Mental Lexicon (ML) refers to the organization of lexical entries of a language in the human mind.A clear knowledge of the structure of ML will help us to understand how the human brain processes language. The knowledge of semantic association among the words in ML is essential to many applications. 
Although, there are works on the representation of lexical entries based on their semantic association in the form of a lexicon in English and other languages, such works of Bangla is in a nascent stage. In this paper, we have proposed a distinct lexical organization based on semantic association between Bangla words which can be accessed efficiently by different applications. We have developed a novel approach of measuring the semantic similarity between words and verified it against user study. Further, a GUI has been designed for easy and efficient access.",2012
shazal-etal-2020-unified,https://aclanthology.org/2020.wanlp-1.15,0,,,,,,,"A Unified Model for Arabizi Detection and Transliteration using Sequence-to-Sequence Models. While online Arabic is primarily written using the Arabic script, a Roman-script variety called Arabizi is often seen on social media. Although this representation captures the phonology of the language, it is not a one-to-one mapping with the Arabic script version. This issue is exacerbated by the fact that Arabizi on social media is Dialectal Arabic which does not have a standard orthography. Furthermore, Arabizi tends to include a lot of code mixing between Arabic and English (or French). To map Arabizi text to Arabic script in the context of complete utterances, previously published efforts have split Arabizi detection and Arabic script target in two separate tasks. In this paper, we present the first effort on a unified model for Arabizi detection and transliteration into a code-mixed output with consistent Arabic spelling conventions, using a sequence-to-sequence deep learning model. Our best system achieves 80.6% word accuracy and 58.7% BLEU on a blind test set.",A Unified Model for {A}rabizi Detection and Transliteration using Sequence-to-Sequence Models,"While online Arabic is primarily written using the Arabic script, a Roman-script variety called Arabizi is often seen on social media. Although this representation captures the phonology of the language, it is not a one-to-one mapping with the Arabic script version. This issue is exacerbated by the fact that Arabizi on social media is Dialectal Arabic which does not have a standard orthography. Furthermore, Arabizi tends to include a lot of code mixing between Arabic and English (or French). To map Arabizi text to Arabic script in the context of complete utterances, previously published efforts have split Arabizi detection and Arabic script target in two separate tasks. In this paper, we present the first effort on a unified model for Arabizi detection and transliteration into a code-mixed output with consistent Arabic spelling conventions, using a sequence-to-sequence deep learning model. Our best system achieves 80.6% word accuracy and 58.7% BLEU on a blind test set.",A Unified Model for Arabizi Detection and Transliteration using Sequence-to-Sequence Models,"While online Arabic is primarily written using the Arabic script, a Roman-script variety called Arabizi is often seen on social media. Although this representation captures the phonology of the language, it is not a one-to-one mapping with the Arabic script version. This issue is exacerbated by the fact that Arabizi on social media is Dialectal Arabic which does not have a standard orthography. Furthermore, Arabizi tends to include a lot of code mixing between Arabic and English (or French). To map Arabizi text to Arabic script in the context of complete utterances, previously published efforts have split Arabizi detection and Arabic script target in two separate tasks. In this paper, we present the first effort on a unified model for Arabizi detection and transliteration into a code-mixed output with consistent Arabic spelling conventions, using a sequence-to-sequence deep learning model. Our best system achieves 80.6% word accuracy and 58.7% BLEU on a blind test set.","This research was carried out on the High Performance Computing resources at New York University Abu Dhabi (NYUAD). 
We would like to thank Daniel Watson, Ossama Obeid, Nasser Zalmout and Salam Khalifa from the Computational Approaches to Modeling Language Lab at NYUAD for their help and suggestions throughout this project. We thank Owen Rambow, and the paper reviewers for helpful suggestions.","A Unified Model for Arabizi Detection and Transliteration using Sequence-to-Sequence Models. While online Arabic is primarily written using the Arabic script, a Roman-script variety called Arabizi is often seen on social media. Although this representation captures the phonology of the language, it is not a one-to-one mapping with the Arabic script version. This issue is exacerbated by the fact that Arabizi on social media is Dialectal Arabic which does not have a standard orthography. Furthermore, Arabizi tends to include a lot of code mixing between Arabic and English (or French). To map Arabizi text to Arabic script in the context of complete utterances, previously published efforts have split Arabizi detection and Arabic script target in two separate tasks. In this paper, we present the first effort on a unified model for Arabizi detection and transliteration into a code-mixed output with consistent Arabic spelling conventions, using a sequence-to-sequence deep learning model. Our best system achieves 80.6% word accuracy and 58.7% BLEU on a blind test set.",2020
mihalcea-2004-co,https://aclanthology.org/W04-2405,0,,,,,,,"Co-training and Self-training for Word Sense Disambiguation. This paper investigates the application of co-training and self-training to word sense disambiguation. Optimal and empirical parameter selection methods for co-training and self-training are investigated, with various degrees of error reduction. A new method that combines co-training with majority voting is introduced, with the effect of smoothing the bootstrapping learning curves, and improving the average performance.",Co-training and Self-training for Word Sense Disambiguation,"This paper investigates the application of co-training and self-training to word sense disambiguation. Optimal and empirical parameter selection methods for co-training and self-training are investigated, with various degrees of error reduction. A new method that combines co-training with majority voting is introduced, with the effect of smoothing the bootstrapping learning curves, and improving the average performance.",Co-training and Self-training for Word Sense Disambiguation,"This paper investigates the application of co-training and self-training to word sense disambiguation. Optimal and empirical parameter selection methods for co-training and self-training are investigated, with various degrees of error reduction. A new method that combines co-training with majority voting is introduced, with the effect of smoothing the bootstrapping learning curves, and improving the average performance.",Many thanks to Carlo Strapparava and the three anonymous reviewers for useful comments and suggestions. This work was partially supported by a National Science Foundation grant IIS-0336793.,"Co-training and Self-training for Word Sense Disambiguation. This paper investigates the application of co-training and self-training to word sense disambiguation. Optimal and empirical parameter selection methods for co-training and self-training are investigated, with various degrees of error reduction. A new method that combines co-training with majority voting is introduced, with the effect of smoothing the bootstrapping learning curves, and improving the average performance.",2004
schwartz-gomez-2009-acquiring,https://aclanthology.org/W09-1701,0,,,,,,,"Acquiring Applicable Common Sense Knowledge from the Web. In this paper, a framework for acquiring common sense knowledge from the Web is presented. Common sense knowledge includes information about the world that humans use in their everyday lives. To acquire this knowledge, relationships between nouns are retrieved by using search phrases with automatically filled constituents. Through empirical analysis of the acquired nouns over Word-Net, probabilities are produced for relationships between a concept and a word rather than between two words. A specific goal of our acquisition method is to acquire knowledge that can be successfully applied to NLP problems. We test the validity of the acquired knowledge by means of an application to the problem of word sense disambiguation. Results show that the knowledge can be used to improve the accuracy of a state of the art unsupervised disambiguation system.",Acquiring Applicable Common Sense Knowledge from the Web,"In this paper, a framework for acquiring common sense knowledge from the Web is presented. Common sense knowledge includes information about the world that humans use in their everyday lives. To acquire this knowledge, relationships between nouns are retrieved by using search phrases with automatically filled constituents. Through empirical analysis of the acquired nouns over Word-Net, probabilities are produced for relationships between a concept and a word rather than between two words. A specific goal of our acquisition method is to acquire knowledge that can be successfully applied to NLP problems. We test the validity of the acquired knowledge by means of an application to the problem of word sense disambiguation. Results show that the knowledge can be used to improve the accuracy of a state of the art unsupervised disambiguation system.",Acquiring Applicable Common Sense Knowledge from the Web,"In this paper, a framework for acquiring common sense knowledge from the Web is presented. Common sense knowledge includes information about the world that humans use in their everyday lives. To acquire this knowledge, relationships between nouns are retrieved by using search phrases with automatically filled constituents. Through empirical analysis of the acquired nouns over Word-Net, probabilities are produced for relationships between a concept and a word rather than between two words. A specific goal of our acquisition method is to acquire knowledge that can be successfully applied to NLP problems. We test the validity of the acquired knowledge by means of an application to the problem of word sense disambiguation. Results show that the knowledge can be used to improve the accuracy of a state of the art unsupervised disambiguation system.",This research was supported by the NASA Engineering and Safety Center under Grant/Cooperative Agreement NNX08AJ98A.,"Acquiring Applicable Common Sense Knowledge from the Web. In this paper, a framework for acquiring common sense knowledge from the Web is presented. Common sense knowledge includes information about the world that humans use in their everyday lives. To acquire this knowledge, relationships between nouns are retrieved by using search phrases with automatically filled constituents. Through empirical analysis of the acquired nouns over Word-Net, probabilities are produced for relationships between a concept and a word rather than between two words. 
A specific goal of our acquisition method is to acquire knowledge that can be successfully applied to NLP problems. We test the validity of the acquired knowledge by means of an application to the problem of word sense disambiguation. Results show that the knowledge can be used to improve the accuracy of a state of the art unsupervised disambiguation system.",2009
turian-melamed-2005-constituent,https://aclanthology.org/W05-1515,0,,,,,,,"Constituent Parsing by Classification. Ordinary classification techniques can drive a conceptually simple constituent parser that achieves near state-of-the-art accuracy on standard test sets. Here we present such a parser, which avoids some of the limitations of other discriminative parsers. In particular, it does not place any restrictions upon which types of features are allowed. We also present several innovations for faster training of discriminative parsers: we show how training can be parallelized, how examples can be generated prior to training without a working parser, and how independently trained sub-classifiers that have never done any parsing can be effectively combined into a working parser. Finally, we propose a new figure-of-merit for bestfirst parsing with confidence-rated inferences. Our implementation",Constituent Parsing by Classification,"Ordinary classification techniques can drive a conceptually simple constituent parser that achieves near state-of-the-art accuracy on standard test sets. Here we present such a parser, which avoids some of the limitations of other discriminative parsers. In particular, it does not place any restrictions upon which types of features are allowed. We also present several innovations for faster training of discriminative parsers: we show how training can be parallelized, how examples can be generated prior to training without a working parser, and how independently trained sub-classifiers that have never done any parsing can be effectively combined into a working parser. Finally, we propose a new figure-of-merit for bestfirst parsing with confidence-rated inferences. Our implementation",Constituent Parsing by Classification,"Ordinary classification techniques can drive a conceptually simple constituent parser that achieves near state-of-the-art accuracy on standard test sets. Here we present such a parser, which avoids some of the limitations of other discriminative parsers. In particular, it does not place any restrictions upon which types of features are allowed. We also present several innovations for faster training of discriminative parsers: we show how training can be parallelized, how examples can be generated prior to training without a working parser, and how independently trained sub-classifiers that have never done any parsing can be effectively combined into a working parser. Finally, we propose a new figure-of-merit for bestfirst parsing with confidence-rated inferences. Our implementation","The authors would like to thank Dan Bikel, Mike Collins, Ralph Grishman, Adam Meyers, Mehryar Mohri, Satoshi Sekine, and Wei Wang, as well as the anonymous reviewers, for their helpful comments","Constituent Parsing by Classification. Ordinary classification techniques can drive a conceptually simple constituent parser that achieves near state-of-the-art accuracy on standard test sets. Here we present such a parser, which avoids some of the limitations of other discriminative parsers. In particular, it does not place any restrictions upon which types of features are allowed. We also present several innovations for faster training of discriminative parsers: we show how training can be parallelized, how examples can be generated prior to training without a working parser, and how independently trained sub-classifiers that have never done any parsing can be effectively combined into a working parser. 
Finally, we propose a new figure-of-merit for best-first parsing with confidence-rated inferences. Our implementation",2005
kameswara-sarma-2018-learning,https://aclanthology.org/N18-4007,0,,,,,,,"Learning Word Embeddings for Data Sparse and Sentiment Rich Data Sets. This research proposal describes two algorithms that are aimed at learning word embeddings for data sparse and sentiment rich data sets. The goal is to use word embeddings adapted for domain specific data sets in downstream applications such as sentiment classification. The first approach learns word embeddings in a supervised fashion via SWESA (Supervised Word Embeddings for Sentiment Analysis), an algorithm for sentiment analysis on data sets that are of modest size. SWESA leverages document labels to jointly learn polarity-aware word embeddings and a classifier to classify unseen documents. In the second approach domain adapted (DA) word embeddings are learned by exploiting the specificity of domain specific data sets and the breadth of generic word embeddings. The new embeddings are formed by aligning corresponding word vectors using Canonical Correlation Analysis (CCA) or the related nonlinear Kernel CCA. Experimental results on binary sentiment classification tasks using both approaches for standard data sets are presented.",Learning Word Embeddings for Data Sparse and Sentiment Rich Data Sets,"This research proposal describes two algorithms that are aimed at learning word embeddings for data sparse and sentiment rich data sets. The goal is to use word embeddings adapted for domain specific data sets in downstream applications such as sentiment classification. The first approach learns word embeddings in a supervised fashion via SWESA (Supervised Word Embeddings for Sentiment Analysis), an algorithm for sentiment analysis on data sets that are of modest size. SWESA leverages document labels to jointly learn polarity-aware word embeddings and a classifier to classify unseen documents. In the second approach domain adapted (DA) word embeddings are learned by exploiting the specificity of domain specific data sets and the breadth of generic word embeddings. The new embeddings are formed by aligning corresponding word vectors using Canonical Correlation Analysis (CCA) or the related nonlinear Kernel CCA. Experimental results on binary sentiment classification tasks using both approaches for standard data sets are presented.",Learning Word Embeddings for Data Sparse and Sentiment Rich Data Sets,"This research proposal describes two algorithms that are aimed at learning word embeddings for data sparse and sentiment rich data sets. The goal is to use word embeddings adapted for domain specific data sets in downstream applications such as sentiment classification. The first approach learns word embeddings in a supervised fashion via SWESA (Supervised Word Embeddings for Sentiment Analysis), an algorithm for sentiment analysis on data sets that are of modest size. SWESA leverages document labels to jointly learn polarity-aware word embeddings and a classifier to classify unseen documents. In the second approach domain adapted (DA) word embeddings are learned by exploiting the specificity of domain specific data sets and the breadth of generic word embeddings. The new embeddings are formed by aligning corresponding word vectors using Canonical Correlation Analysis (CCA) or the related nonlinear Kernel CCA. Experimental results on binary sentiment classification tasks using both approaches for standard data sets are presented.",,"Learning Word Embeddings for Data Sparse and Sentiment Rich Data Sets. 
This research proposal describes two algorithms that are aimed at learning word embeddings for data sparse and sentiment rich data sets. The goal is to use word embeddings adapted for domain specific data sets in downstream applications such as sentiment classification. The first approach learns word embeddings in a supervised fashion via SWESA (Supervised Word Embeddings for Sentiment Analysis), an algorithm for sentiment analysis on data sets that are of modest size. SWESA leverages document labels to jointly learn polarity-aware word embeddings and a classifier to classify unseen documents. In the second approach domain adapted (DA) word embeddings are learned by exploiting the specificity of domain specific data sets and the breadth of generic word embeddings. The new embeddings are formed by aligning corresponding word vectors using Canonical Correlation Analysis (CCA) or the related nonlinear Kernel CCA. Experimental results on binary sentiment classification tasks using both approaches for standard data sets are presented.",2018
silfverberg-etal-2017-data,https://aclanthology.org/K17-2010,0,,,,,,,"Data Augmentation for Morphological Reinflection. This paper presents the submission of the Linguistics Department of the University of Colorado at Boulder for the 2017 CoNLL-SIGMORPHON Shared Task on Universal Morphological Reinflection. The system is implemented as an RNN Encoder-Decoder. It is specifically geared toward a low-resource setting. To this end, it employs data augmentation for counteracting overfitting and a copy symbol for processing characters unseen in the training data. The system is an ensemble of ten models combined using a weighted voting scheme. It delivers substantial improvement in accuracy compared to a non-neural baseline system in presence of varying amounts of training data.",Data Augmentation for Morphological Reinflection,"This paper presents the submission of the Linguistics Department of the University of Colorado at Boulder for the 2017 CoNLL-SIGMORPHON Shared Task on Universal Morphological Reinflection. The system is implemented as an RNN Encoder-Decoder. It is specifically geared toward a low-resource setting. To this end, it employs data augmentation for counteracting overfitting and a copy symbol for processing characters unseen in the training data. The system is an ensemble of ten models combined using a weighted voting scheme. It delivers substantial improvement in accuracy compared to a non-neural baseline system in presence of varying amounts of training data.",Data Augmentation for Morphological Reinflection,"This paper presents the submission of the Linguistics Department of the University of Colorado at Boulder for the 2017 CoNLL-SIGMORPHON Shared Task on Universal Morphological Reinflection. The system is implemented as an RNN Encoder-Decoder. It is specifically geared toward a low-resource setting. To this end, it employs data augmentation for counteracting overfitting and a copy symbol for processing characters unseen in the training data. The system is an ensemble of ten models combined using a weighted voting scheme. It delivers substantial improvement in accuracy compared to a non-neural baseline system in presence of varying amounts of training data.",The third author has been partly sponsored by DARPA I20 in the program Low Resource Languages for Emergent Incidents (LORELEI) issued by DARPA/I20 under Contract No. HR0011-15-C-0113.,"Data Augmentation for Morphological Reinflection. This paper presents the submission of the Linguistics Department of the University of Colorado at Boulder for the 2017 CoNLL-SIGMORPHON Shared Task on Universal Morphological Reinflection. The system is implemented as an RNN Encoder-Decoder. It is specifically geared toward a low-resource setting. To this end, it employs data augmentation for counteracting overfitting and a copy symbol for processing characters unseen in the training data. The system is an ensemble of ten models combined using a weighted voting scheme. It delivers substantial improvement in accuracy compared to a non-neural baseline system in presence of varying amounts of training data.",2017
lehmann-1981-pragmalinguistics,https://aclanthology.org/J81-3004,0,,,,,,,"Pragmalinguistics Theory and Practice. ""Pragmalinguistics"" or the occupation with pragmatic aspects of language can be important where computational linguists or artificial intelligence researchers are concerned with natural language interfaces to computers, with modelling dialogue behavior, or the like. What speakers intend with their utterances, how hearers react to what they hear, and what they take the words to mean will all play a role of increasing importance when natural language systems have matured enough to cope readily with syntax and semantics. Asking a sensible question to a user or giving him a reasonable response often enough depends not only on the ""pure"" meaning of some previous utterances but also on attitudes, expectations, and intentions that the user may have. These are partly conveyed in the user's utterances and have to be taken into account, if a system is to do more than just give factual answers to factual requests. Blakar writes on language as a means of social power. His paper is anecdotal; he draws conclusions without stating from what premises; and he is on the whole not very explicit. Gregersen postulates in his article on the relationships between social class and language usage that an economic analysis of ""objective class positions"" has to precede sociolinguistic studies proper, but fails to show how the results of such an analysis will influence sociolinguistics.
Haeberlin writes on class-specific vocabulary as a communication problem. His ideas have been published before and in more detail.",Pragmalinguistics Theory and Practice,"""Pragmalinguistics"" or the occupation with pragmatic aspects of language can be important where computational linguists or artificial intelligence researchers are concerned with natural language interfaces to computers, with modelling dialogue behavior, or the like. What speakers intend with their utterances, how hearers react to what they hear, and what they take the words to mean will all play a role of increasing importance when natural language systems have matured enough to cope readily with syntax and semantics. Asking a sensible question to a user or giving him a reasonable response often enough depends not only on the ""pure"" meaning of some previous utterances but also on attitudes, expectations, and intentions that the user may have. These are partly conveyed in the user's utterances and have to be taken into account, if a system is to do more than just give factual answers to factual requests. Blakar writes on language as a means of social power. His paper is anecdotal; he draws conclusions without stating from what premises; and he is on the whole not very explicit. Gregersen postulates in his article on the relationships between social class and language usage that an economic analysis of ""objective class positions"" has to precede sociolinguistic studies proper, but fails to show how the results of such an analysis will influence sociolinguistics.
Haeberlin writes on class-specific vocabulary as a communication problem. His ideas have been published before and in more detail.",Pragmalinguistics Theory and Practice,"""Pragmalinguistics"" or the occupation with pragmatic aspects of language can be important where computational linguists or artificial intelligence researchers are concerned with natural language interfaces to computers, with modelling dialogue behavior, or the like. What speakers intend with their utterances, how hearers react to what they hear, and what they take the words to mean will all play a role of increasing importance when natural language systems have matured enough to cope readily with syntax and semantics. Asking a sensible question to a user or giving him a reasonable response often enough depends not only on the ""pure"" meaning of some previous utterances but also on attitudes, expectations, and intentions that the user may have. These are partly conveyed in the user's utterances and have to be taken into account, if a system is to do more than just give factual answers to factual requests. Blakar writes on language as a means of social power. His paper is anecdotal; he draws conclusions without stating from what premises; and he is on the whole not very explicit. Gregersen postulates in his article on the relationships between social class and language usage that an economic analysis of ""objective class positions"" has to precede sociolinguistic studies proper, but fails to show how the results of such an analysis will influence sociolinguistics.
Haeberlin writes on class-specific vocabulary as a communication problem. His ideas have been published before and in more detail.",,"Pragmalinguistics Theory and Practice. ""Pragmalinguistics"" or the occupation with pragmatic aspects of language can be important where computational linguists or artificial intelligence researchers are concerned with natural language interfaces to computers, with modelling dialogue behavior, or the like. What speakers intend with their utterances, how hearers react to what they hear, and what they take the words to mean will all play a role of increasing importance when natural language systems have matured enough to cope readily with syntax and semantics. Asking a sensible question to a user or giving him a reasonable response often enough depends not only on the ""pure"" meaning of some previous utterances but also on attitudes, expectations, and intentions that the user may have. These are partly conveyed in the user's utterances and have to be taken into account, if a system is to do more than just give factual answers to factual requests. Blakar writes on language as a means of social power. His paper is anecdotal; he draws conclusions without stating from what premises; and he is on the whole not very explicit. Gregersen postulates in his article on the relationships between social class and language usage that an economic analysis of ""objective class positions"" has to precede sociolinguistic studies proper, but fails to show how the results of such an analysis will influence sociolinguistics.
Haeberlin writes on class-specific vocabulary as a communication problem. His ideas have been published before and in more detail.",1981
aggarwal-mamidi-2017-automatic,https://aclanthology.org/P17-3012,0,,,,,,,"Automatic Generation of Jokes in Hindi. When it comes to computational language generation systems, humour is a relatively unexplored domain, especially more so for Hindi (or rather, for most languages other than English). Most researchers agree that a joke consists of two main parts-the setup and the punchline, with humour being encoded in the incongruity between the two. In this paper, we look at Dur se Dekha jokes, a restricted domain of humorous three liner poetry in Hindi. We analyze their structure to understand how humour is encoded in them and formalize it. We then develop a system which is successfully able to generate a basic form of these jokes.",Automatic Generation of Jokes in {H}indi,"When it comes to computational language generation systems, humour is a relatively unexplored domain, especially more so for Hindi (or rather, for most languages other than English). Most researchers agree that a joke consists of two main parts-the setup and the punchline, with humour being encoded in the incongruity between the two. In this paper, we look at Dur se Dekha jokes, a restricted domain of humorous three liner poetry in Hindi. We analyze their structure to understand how humour is encoded in them and formalize it. We then develop a system which is successfully able to generate a basic form of these jokes.",Automatic Generation of Jokes in Hindi,"When it comes to computational language generation systems, humour is a relatively unexplored domain, especially more so for Hindi (or rather, for most languages other than English). Most researchers agree that a joke consists of two main parts-the setup and the punchline, with humour being encoded in the incongruity between the two. In this paper, we look at Dur se Dekha jokes, a restricted domain of humorous three liner poetry in Hindi. We analyze their structure to understand how humour is encoded in them and formalize it. We then develop a system which is successfully able to generate a basic form of these jokes.",The authors would like to thank all the evaluators for their time and help in rating the jokes. We would also like to thank Kaveri Anuranjana for all the time she spent helping us put this work on paper.,"Automatic Generation of Jokes in Hindi. When it comes to computational language generation systems, humour is a relatively unexplored domain, especially more so for Hindi (or rather, for most languages other than English). Most researchers agree that a joke consists of two main parts-the setup and the punchline, with humour being encoded in the incongruity between the two. In this paper, we look at Dur se Dekha jokes, a restricted domain of humorous three liner poetry in Hindi. We analyze their structure to understand how humour is encoded in them and formalize it. We then develop a system which is successfully able to generate a basic form of these jokes.",2017
rohanian-etal-2020-verbal,https://aclanthology.org/2020.acl-main.259,0,,,,,,,"Verbal Multiword Expressions for Identification of Metaphor. Metaphor is a linguistic device in which a concept is expressed by mentioning another. Identifying metaphorical expressions, therefore, requires a non-compositional understanding of semantics. Multiword Expressions (MWEs), on the other hand, are linguistic phenomena with varying degrees of semantic opacity and their identification poses a challenge to computational models. This work is the first attempt at analysing the interplay of metaphor and MWEs processing through the design of a neural architecture whereby classification of metaphors is enhanced by informing the model of the presence of MWEs. To the best of our knowledge, this is the first ""MWE-aware"" metaphor identification system paving the way for further experiments on the complex interactions of these phenomena. The results and analyses show that this proposed architecture reach state-of-the-art on two different established metaphor datasets.",Verbal Multiword Expressions for Identification of Metaphor,"Metaphor is a linguistic device in which a concept is expressed by mentioning another. Identifying metaphorical expressions, therefore, requires a non-compositional understanding of semantics. Multiword Expressions (MWEs), on the other hand, are linguistic phenomena with varying degrees of semantic opacity and their identification poses a challenge to computational models. This work is the first attempt at analysing the interplay of metaphor and MWEs processing through the design of a neural architecture whereby classification of metaphors is enhanced by informing the model of the presence of MWEs. To the best of our knowledge, this is the first ""MWE-aware"" metaphor identification system paving the way for further experiments on the complex interactions of these phenomena. The results and analyses show that this proposed architecture reach state-of-the-art on two different established metaphor datasets.",Verbal Multiword Expressions for Identification of Metaphor,"Metaphor is a linguistic device in which a concept is expressed by mentioning another. Identifying metaphorical expressions, therefore, requires a non-compositional understanding of semantics. Multiword Expressions (MWEs), on the other hand, are linguistic phenomena with varying degrees of semantic opacity and their identification poses a challenge to computational models. This work is the first attempt at analysing the interplay of metaphor and MWEs processing through the design of a neural architecture whereby classification of metaphors is enhanced by informing the model of the presence of MWEs. To the best of our knowledge, this is the first ""MWE-aware"" metaphor identification system paving the way for further experiments on the complex interactions of these phenomena. The results and analyses show that this proposed architecture reach state-of-the-art on two different established metaphor datasets.",,"Verbal Multiword Expressions for Identification of Metaphor. Metaphor is a linguistic device in which a concept is expressed by mentioning another. Identifying metaphorical expressions, therefore, requires a non-compositional understanding of semantics. Multiword Expressions (MWEs), on the other hand, are linguistic phenomena with varying degrees of semantic opacity and their identification poses a challenge to computational models. 
This work is the first attempt at analysing the interplay of metaphor and MWEs processing through the design of a neural architecture whereby classification of metaphors is enhanced by informing the model of the presence of MWEs. To the best of our knowledge, this is the first ""MWE-aware"" metaphor identification system paving the way for further experiments on the complex interactions of these phenomena. The results and analyses show that this proposed architecture reach state-of-the-art on two different established metaphor datasets.",2020
gupta-etal-2021-disfl,https://aclanthology.org/2021.findings-acl.293,0,,,,,,,"Disfl-QA: A Benchmark Dataset for Understanding Disfluencies in Question Answering. Disfluencies is an under-studied topic in NLP, even though it is ubiquitous in human conversation. This is largely due to the lack of datasets containing disfluencies. In this paper, we present a new challenge question answering dataset, DISFL-QA, a derivative of SQUAD, where humans introduce contextual disfluencies in previously fluent questions. DISFL-QA contains a variety of challenging disfluencies that require a more comprehensive understanding of the text than what was necessary in prior datasets. Experiments show that the performance of existing state-of-the-art question answering models degrades significantly when tested on DISFL-QA in a zero-shot setting. We show data augmentation methods partially recover the loss in performance and also demonstrate the efficacy of using gold data for fine-tuning. We argue that we need large-scale disfluency datasets in order for NLP models to be robust to them. The dataset is publicly available at: https://github.com/ google-research-datasets/disfl-qa.",Disfl-{QA}: A Benchmark Dataset for Understanding Disfluencies in Question Answering,"Disfluencies is an under-studied topic in NLP, even though it is ubiquitous in human conversation. This is largely due to the lack of datasets containing disfluencies. In this paper, we present a new challenge question answering dataset, DISFL-QA, a derivative of SQUAD, where humans introduce contextual disfluencies in previously fluent questions. DISFL-QA contains a variety of challenging disfluencies that require a more comprehensive understanding of the text than what was necessary in prior datasets. Experiments show that the performance of existing state-of-the-art question answering models degrades significantly when tested on DISFL-QA in a zero-shot setting. We show data augmentation methods partially recover the loss in performance and also demonstrate the efficacy of using gold data for fine-tuning. We argue that we need large-scale disfluency datasets in order for NLP models to be robust to them. The dataset is publicly available at: https://github.com/ google-research-datasets/disfl-qa.",Disfl-QA: A Benchmark Dataset for Understanding Disfluencies in Question Answering,"Disfluencies is an under-studied topic in NLP, even though it is ubiquitous in human conversation. This is largely due to the lack of datasets containing disfluencies. In this paper, we present a new challenge question answering dataset, DISFL-QA, a derivative of SQUAD, where humans introduce contextual disfluencies in previously fluent questions. DISFL-QA contains a variety of challenging disfluencies that require a more comprehensive understanding of the text than what was necessary in prior datasets. Experiments show that the performance of existing state-of-the-art question answering models degrades significantly when tested on DISFL-QA in a zero-shot setting. We show data augmentation methods partially recover the loss in performance and also demonstrate the efficacy of using gold data for fine-tuning. We argue that we need large-scale disfluency datasets in order for NLP models to be robust to them. The dataset is publicly available at: https://github.com/ google-research-datasets/disfl-qa.","Constructing datasets for spoken problems. 
We would also like to bring attention to the fact that being a speech phenomenon, a spoken setup would have been an ideal choice for disfluencies dataset. This would have accounted for higher degree of confusion, hesitations, corrections, etc. while recalling parts of context on the fly, which otherwise one may find hard to create synthetically when given enough time to think. However, such a spoken setup is extremely tedious for data collection mainly due to: (i) privacy concerns with acquiring speech data from real world speech transcriptions, (ii) creating scenarios for simulated environment is a challenging task, and (iii) relatively low yield for cases containing disfluencies. In such cases, we believe that a targeted and purely textual mode of data collection can be more effective both in terms of cost and specificity.","Disfl-QA: A Benchmark Dataset for Understanding Disfluencies in Question Answering. Disfluencies is an under-studied topic in NLP, even though it is ubiquitous in human conversation. This is largely due to the lack of datasets containing disfluencies. In this paper, we present a new challenge question answering dataset, DISFL-QA, a derivative of SQUAD, where humans introduce contextual disfluencies in previously fluent questions. DISFL-QA contains a variety of challenging disfluencies that require a more comprehensive understanding of the text than what was necessary in prior datasets. Experiments show that the performance of existing state-of-the-art question answering models degrades significantly when tested on DISFL-QA in a zero-shot setting. We show data augmentation methods partially recover the loss in performance and also demonstrate the efficacy of using gold data for fine-tuning. We argue that we need large-scale disfluency datasets in order for NLP models to be robust to them. The dataset is publicly available at: https://github.com/ google-research-datasets/disfl-qa.",2021
mckeown-1983-paraphrasing,https://aclanthology.org/J83-1001,0,,,,,,,"Paraphrasing Questions Using Given and new information. The design and implementation of a paraphrase component for a natural language question-answering system (CO-OP) is presented. The component is used to produce a paraphrase of a user's question to the system, which is presented to the user before the question is evaluated and answered. A major point made is the role of given and new information in formulating a paraphrase that differs in a meaningful way from the user's question. A description is also given of the transformational grammar that is used by the paraphraser. 2 For example, in the question ""Which users work on projects sponsored by NASA?"", the speaker makes the existential presupposition that there are projects sponsored by NASA.",Paraphrasing Questions Using Given and new information,"The design and implementation of a paraphrase component for a natural language question-answering system (CO-OP) is presented. The component is used to produce a paraphrase of a user's question to the system, which is presented to the user before the question is evaluated and answered. A major point made is the role of given and new information in formulating a paraphrase that differs in a meaningful way from the user's question. A description is also given of the transformational grammar that is used by the paraphraser. 2 For example, in the question ""Which users work on projects sponsored by NASA?"", the speaker makes the existential presupposition that there are projects sponsored by NASA.",Paraphrasing Questions Using Given and new information,"The design and implementation of a paraphrase component for a natural language question-answering system (CO-OP) is presented. The component is used to produce a paraphrase of a user's question to the system, which is presented to the user before the question is evaluated and answered. A major point made is the role of given and new information in formulating a paraphrase that differs in a meaningful way from the user's question. A description is also given of the transformational grammar that is used by the paraphraser. 2 For example, in the question ""Which users work on projects sponsored by NASA?"", the speaker makes the existential presupposition that there are projects sponsored by NASA.",This work was partially supported by an IBM fellowship and NSF grant MC78-08401. I would like to thank Dr. Aravind K. Joshi and Dr. Bonnie Webber for their invaluable comments on the style and content of this paper.,"Paraphrasing Questions Using Given and new information. The design and implementation of a paraphrase component for a natural language question-answering system (CO-OP) is presented. The component is used to produce a paraphrase of a user's question to the system, which is presented to the user before the question is evaluated and answered. A major point made is the role of given and new information in formulating a paraphrase that differs in a meaningful way from the user's question. A description is also given of the transformational grammar that is used by the paraphraser. 2 For example, in the question ""Which users work on projects sponsored by NASA?"", the speaker makes the existential presupposition that there are projects sponsored by NASA.",1983
chiu-etal-2022-joint,https://aclanthology.org/2022.spnlp-1.5,0,,,,,,,"A Joint Learning Approach for Semi-supervised Neural Topic Modeling. Topic models are some of the most popular ways to represent textual data in an interpretable manner. Recently, advances in deep generative models, specifically auto-encoding variational Bayes (AEVB), have led to the introduction of unsupervised neural topic models, which leverage deep generative models as opposed to traditional statistics-based topic models. We extend upon these neural topic models by introducing the Label-Indexed Neural Topic Model (LI-NTM), which is, to the extent of our knowledge, the first effective upstream semisupervised neural topic model. We find that LI-NTM outperforms existing neural topic models in document reconstruction benchmarks, with the most notable results in low labeled data regimes and for data-sets with informative labels; furthermore, our jointly learned classifier outperforms baseline classifiers in ablation studies.",A Joint Learning Approach for Semi-supervised Neural Topic Modeling,"Topic models are some of the most popular ways to represent textual data in an interpretable manner. Recently, advances in deep generative models, specifically auto-encoding variational Bayes (AEVB), have led to the introduction of unsupervised neural topic models, which leverage deep generative models as opposed to traditional statistics-based topic models. We extend upon these neural topic models by introducing the Label-Indexed Neural Topic Model (LI-NTM), which is, to the extent of our knowledge, the first effective upstream semisupervised neural topic model. We find that LI-NTM outperforms existing neural topic models in document reconstruction benchmarks, with the most notable results in low labeled data regimes and for data-sets with informative labels; furthermore, our jointly learned classifier outperforms baseline classifiers in ablation studies.",A Joint Learning Approach for Semi-supervised Neural Topic Modeling,"Topic models are some of the most popular ways to represent textual data in an interpretable manner. Recently, advances in deep generative models, specifically auto-encoding variational Bayes (AEVB), have led to the introduction of unsupervised neural topic models, which leverage deep generative models as opposed to traditional statistics-based topic models. We extend upon these neural topic models by introducing the Label-Indexed Neural Topic Model (LI-NTM), which is, to the extent of our knowledge, the first effective upstream semisupervised neural topic model. We find that LI-NTM outperforms existing neural topic models in document reconstruction benchmarks, with the most notable results in low labeled data regimes and for data-sets with informative labels; furthermore, our jointly learned classifier outperforms baseline classifiers in ablation studies.","AS is supported by R01MH123804, and FDV is supported by NSF IIS-1750358. All authors acknowledge insightful feedback from members of CS282 Fall 2021.","A Joint Learning Approach for Semi-supervised Neural Topic Modeling. Topic models are some of the most popular ways to represent textual data in an interpretable manner. Recently, advances in deep generative models, specifically auto-encoding variational Bayes (AEVB), have led to the introduction of unsupervised neural topic models, which leverage deep generative models as opposed to traditional statistics-based topic models. 
We extend upon these neural topic models by introducing the Label-Indexed Neural Topic Model (LI-NTM), which is, to the extent of our knowledge, the first effective upstream semisupervised neural topic model. We find that LI-NTM outperforms existing neural topic models in document reconstruction benchmarks, with the most notable results in low labeled data regimes and for data-sets with informative labels; furthermore, our jointly learned classifier outperforms baseline classifiers in ablation studies.",2022
ws-1993-acquisition,https://aclanthology.org/W93-0100,0,,,,,,,Acquisition of Lexical Knowledge from Text. ,Acquisition of Lexical Knowledge from Text,,Acquisition of Lexical Knowledge from Text,,,Acquisition of Lexical Knowledge from Text. ,1993
vanni-miller-2002-scaling,http://www.lrec-conf.org/proceedings/lrec2002/pdf/306.pdf,0,,,,,,,"Scaling the ISLE Framework: Use of Existing Corpus Resources for Validation of MT Evaluation Metrics across Languages. This paper describes a machine translation (MT) evaluation (MTE) research program which has benefited from the availability of two collections of source language texts and the results of processing these texts with several commercial MT engines (DARPA 1994, Doyon, Taylor, & White 1999). The methodology entails the systematic development of a predictive relationship between discrete, well-defined MTE metrics and specific information processing tasks that can be reliably performed with output of a given MT system. Unlike tests used in initial experiments on automated scoring (Jones and Rusk 2000), we employ traditional measures of MT output quality, selected from the International Standards for Language Engineering (ISLE) framework: Coherence, Clarity, Syntax, Morphology, General and Domain-specific Lexical robustness, to include Named-entity translation. Each test was originally validated on MT output produced by three Spanish-to-English systems (1994 DARPA MTE). We validate tests in the present work, however, with material taken from the MT Scale Evaluation research program produced by Japanese-to-English MT systems. Since Spanish and Japanese differ structurally on the morphological, syntactic, and discourse levels, a comparison of scores on tests measuring these output qualities should reveal how structural similarity, such as that enjoyed by Spanish and English, and structural contrast, such as that found between Japanese and English, affect the linguistic distinctions which must be accommodated by MT systems. Moreover, we show that metrics developed using Spanish-English MT output are equally effective when applied to Japanese-English MT output.",Scaling the {ISLE} Framework: Use of Existing Corpus Resources for Validation of {MT} Evaluation Metrics across Languages,"This paper describes a machine translation (MT) evaluation (MTE) research program which has benefited from the availability of two collections of source language texts and the results of processing these texts with several commercial MT engines (DARPA 1994, Doyon, Taylor, & White 1999). The methodology entails the systematic development of a predictive relationship between discrete, well-defined MTE metrics and specific information processing tasks that can be reliably performed with output of a given MT system. Unlike tests used in initial experiments on automated scoring (Jones and Rusk 2000), we employ traditional measures of MT output quality, selected from the International Standards for Language Engineering (ISLE) framework: Coherence, Clarity, Syntax, Morphology, General and Domain-specific Lexical robustness, to include Named-entity translation. Each test was originally validated on MT output produced by three Spanish-to-English systems (1994 DARPA MTE). We validate tests in the present work, however, with material taken from the MT Scale Evaluation research program produced by Japanese-to-English MT systems. Since Spanish and Japanese differ structurally on the morphological, syntactic, and discourse levels, a comparison of scores on tests measuring these output qualities should reveal how structural similarity, such as that enjoyed by Spanish and English, and structural contrast, such as that found between Japanese and English, affect the linguistic distinctions which must be accommodated by MT systems. 
Moreover, we show that metrics developed using Spanish-English MT output are equally effective when applied to Japanese-English MT output.",Scaling the ISLE Framework: Use of Existing Corpus Resources for Validation of MT Evaluation Metrics across Languages,"This paper describes a machine translation (MT) evaluation (MTE) research program which has benefited from the availability of two collections of source language texts and the results of processing these texts with several commercial MT engines (DARPA 1994, Doyon, Taylor, & White 1999). The methodology entails the systematic development of a predictive relationship between discrete, well-defined MTE metrics and specific information processing tasks that can be reliably performed with output of a given MT system. Unlike tests used in initial experiments on automated scoring (Jones and Rusk 2000), we employ traditional measures of MT output quality, selected from the International Standards for Language Engineering (ISLE) framework: Coherence, Clarity, Syntax, Morphology, General and Domain-specific Lexical robustness, to include Named-entity translation. Each test was originally validated on MT output produced by three Spanish-to-English systems (1994 DARPA MTE). We validate tests in the present work, however, with material taken from the MT Scale Evaluation research program produced by Japanese-to-English MT systems. Since Spanish and Japanese differ structurally on the morphological, syntactic, and discourse levels, a comparison of scores on tests measuring these output qualities should reveal how structural similarity, such as that enjoyed by Spanish and English, and structural contrast, such as that found between Japanese and English, affect the linguistic distinctions which must be accommodated by MT systems. Moreover, we show that metrics developed using Spanish-English MT output are equally effective when applied to Japanese-English MT output.",,"Scaling the ISLE Framework: Use of Existing Corpus Resources for Validation of MT Evaluation Metrics across Languages. This paper describes a machine translation (MT) evaluation (MTE) research program which has benefited from the availability of two collections of source language texts and the results of processing these texts with several commercial MT engines (DARPA 1994, Doyon, Taylor, & White 1999). The methodology entails the systematic development of a predictive relationship between discrete, well-defined MTE metrics and specific information processing tasks that can be reliably performed with output of a given MT system. Unlike tests used in initial experiments on automated scoring (Jones and Rusk 2000), we employ traditional measures of MT output quality, selected from the International Standards for Language Engineering (ISLE) framework: Coherence, Clarity, Syntax, Morphology, General and Domain-specific Lexical robustness, to include Named-entity translation. Each test was originally validated on MT output produced by three Spanish-to-English systems (1994 DARPA MTE). We validate tests in the present work, however, with material taken from the MT Scale Evaluation research program produced by Japanese-to-English MT systems. 
Since Spanish and Japanese differ structurally on the morphological, syntactic, and discourse levels, a comparison of scores on tests measuring these output qualities should reveal how structural similarity, such as that enjoyed by Spanish and English, and structural contrast, such as that found between Japanese and English, affect the linguistic distinctions which must be accommodated by MT systems. Moreover, we show that metrics developed using Spanish-English MT output are equally effective when applied to Japanese-English MT output.",2002
shimizu-etal-2013-constructing,https://aclanthology.org/2013.iwslt-papers.3,0,,,,,,,"Constructing a speech translation system using simultaneous interpretation data. There has been a fair amount of work on automatic speech translation systems that translate in real-time, serving as a computerized version of a simultaneous interpreter. It has been noticed in the field of translation studies that simultaneous interpreters perform a number of tricks to make the content easier to understand in real-time, including dividing their translations into small chunks, or summarizing less important content. However, the majority of previous work has not specifically considered this fact, simply using translation data (made by translators) for learning of the machine translation system. In this paper, we examine the possibilities of additionally incorporating simultaneous interpretation data (made by simultaneous interpreters) in the learning process. First we collect simultaneous interpretation data from professional simultaneous interpreters of three levels, and perform an analysis of the data. Next, we incorporate the simultaneous interpretation data in the learning of the machine translation system. As a result, the translation style of the system becomes more similar to that of a highly experienced simultaneous interpreter. We also find that according to automatic evaluation metrics, our system achieves performance similar to that of a simultaneous interpreter that has 1 year of experience.",Constructing a speech translation system using simultaneous interpretation data,"There has been a fair amount of work on automatic speech translation systems that translate in real-time, serving as a computerized version of a simultaneous interpreter. It has been noticed in the field of translation studies that simultaneous interpreters perform a number of tricks to make the content easier to understand in real-time, including dividing their translations into small chunks, or summarizing less important content. However, the majority of previous work has not specifically considered this fact, simply using translation data (made by translators) for learning of the machine translation system. In this paper, we examine the possibilities of additionally incorporating simultaneous interpretation data (made by simultaneous interpreters) in the learning process. First we collect simultaneous interpretation data from professional simultaneous interpreters of three levels, and perform an analysis of the data. Next, we incorporate the simultaneous interpretation data in the learning of the machine translation system. As a result, the translation style of the system becomes more similar to that of a highly experienced simultaneous interpreter. We also find that according to automatic evaluation metrics, our system achieves performance similar to that of a simultaneous interpreter that has 1 year of experience.",Constructing a speech translation system using simultaneous interpretation data,"There has been a fair amount of work on automatic speech translation systems that translate in real-time, serving as a computerized version of a simultaneous interpreter. It has been noticed in the field of translation studies that simultaneous interpreters perform a number of tricks to make the content easier to understand in real-time, including dividing their translations into small chunks, or summarizing less important content. 
However, the majority of previous work has not specifically considered this fact, simply using translation data (made by translators) for learning of the machine translation system. In this paper, we examine the possibilities of additionally incorporating simultaneous interpretation data (made by simultaneous interpreters) in the learning process. First we collect simultaneous interpretation data from professional simultaneous interpreters of three levels, and perform an analysis of the data. Next, we incorporate the simultaneous interpretation data in the learning of the machine translation system. As a result, the translation style of the system becomes more similar to that of a highly experienced simultaneous interpreter. We also find that according to automatic evaluation metrics, our system achieves performance similar to that of a simultaneous interpreter that has 1 year of experience.",,"Constructing a speech translation system using simultaneous interpretation data. There has been a fair amount of work on automatic speech translation systems that translate in real-time, serving as a computerized version of a simultaneous interpreter. It has been noticed in the field of translation studies that simultaneous interpreters perform a number of tricks to make the content easier to understand in real-time, including dividing their translations into small chunks, or summarizing less important content. However, the majority of previous work has not specifically considered this fact, simply using translation data (made by translators) for learning of the machine translation system. In this paper, we examine the possibilities of additionally incorporating simultaneous interpretation data (made by simultaneous interpreters) in the learning process. First we collect simultaneous interpretation data from professional simultaneous interpreters of three levels, and perform an analysis of the data. Next, we incorporate the simultaneous interpretation data in the learning of the machine translation system. As a result, the translation style of the system becomes more similar to that of a highly experienced simultaneous interpreter. We also find that according to automatic evaluation metrics, our system achieves performance similar to that of a simultaneous interpreter that has 1 year of experience.",2013
uehara-thepkanjana-2014-called,https://aclanthology.org/Y14-1016,0,,,,,,,"The So-called Person Restriction of Internal State Predicates in Japanese in Contrast with Thai. Internal state predicates or ISPs refer to internal states of sentient beings, such as emotions, sensations and thought processes. Japanese ISPs with zero pronouns exhibit the ""person restriction"" in that the zero form of their subjects must be first person at the utterance time. This paper examines the person restriction of ISPs in Japanese in contrast with those in Thai, which is a zero pronominal language like Japanese. It is found that the person restriction is applicable to Japanese ISPs but not to Thai ones. This paper argues that the person restriction is not adequate to account for Japanese and Thai ISPs. We propose a new constraint to account for this phenomenon, i.e., the Experiencer-Conceptualizer Identity (ECI) Constraint, which states that ""The experiencer of the situation/event must be identical with the conceptualizer of that situation/event."" It is argued that both languages conventionalize the ECI constraint in ISP expressions but differ in how the ECI constraint is conventionalized.",The So-called Person Restriction of Internal State Predicates in {J}apanese in Contrast with {T}hai,"Internal state predicates or ISPs refer to internal states of sentient beings, such as emotions, sensations and thought processes. Japanese ISPs with zero pronouns exhibit the ""person restriction"" in that the zero form of their subjects must be first person at the utterance time. This paper examines the person restriction of ISPs in Japanese in contrast with those in Thai, which is a zero pronominal language like Japanese. It is found that the person restriction is applicable to Japanese ISPs but not to Thai ones. This paper argues that the person restriction is not adequate to account for Japanese and Thai ISPs. We propose a new constraint to account for this phenomenon, i.e., the Experiencer-Conceptualizer Identity (ECI) Constraint, which states that ""The experiencer of the situation/event must be identical with the conceptualizer of that situation/event."" It is argued that both languages conventionalize the ECI constraint in ISP expressions but differ in how the ECI constraint is conventionalized.",The So-called Person Restriction of Internal State Predicates in Japanese in Contrast with Thai,"Internal state predicates or ISPs refer to internal states of sentient beings, such as emotions, sensations and thought processes. Japanese ISPs with zero pronouns exhibit the ""person restriction"" in that the zero form of their subjects must be first person at the utterance time. This paper examines the person restriction of ISPs in Japanese in contrast with those in Thai, which is a zero pronominal language like Japanese. It is found that the person restriction is applicable to Japanese ISPs but not to Thai ones. This paper argues that the person restriction is not adequate to account for Japanese and Thai ISPs. 
We propose a new constraint to account for this phenomenon, i.e., the Experiencer-Conceptualizer Identity (ECI) Constraint, which states that ""The experiencer of the situation/event must be identical with the conceptualizer of that situation/event."" It is argued that both languages conventionalize the ECI constraint in ISP expressions but differ in how the ECI constraint is conventionalized.",This research work is partially supported by a Grant-in-Aid for Scientific Research from the Japan Society for the Promotion of Science (No. 24520416) awarded to the first author and the Ratchadaphiseksomphot Endowment Fund of Chulalongkorn University (RES560530179-HS) awarded to the second author.,"The So-called Person Restriction of Internal State Predicates in Japanese in Contrast with Thai. Internal state predicates or ISPs refer to internal states of sentient beings, such as emotions, sensations and thought processes. Japanese ISPs with zero pronouns exhibit the ""person restriction"" in that the zero form of their subjects must be first person at the utterance time. This paper examines the person restriction of ISPs in Japanese in contrast with those in Thai, which is a zero pronominal language like Japanese. It is found that the person restriction is applicable to Japanese ISPs but not to Thai ones. This paper argues that the person restriction is not adequate to account for Japanese and Thai ISPs. We propose a new constraint to account for this phenomenon, i.e., the Experiencer-Conceptualizer Identity (ECI) Constraint, which states that ""The experiencer of the situation/event must be identical with the conceptualizer of that situation/event."" It is argued that both languages conventionalize the ECI constraint in ISP expressions but differ in how the ECI constraint is conventionalized.",2014
chen-etal-2021-system,https://aclanthology.org/2021.autosimtrans-1.4,0,,,,,,,"System Description on Automatic Simultaneous Translation Workshop. This paper shows our submission on the second automatic simultaneous translation workshop at NAACL2021. We participate in all the two directions of Chinese-to-English translation, Chinese audio→English text and Chinese text→English text. We do data filtering and model training techniques to get the best BLEU score and reduce the average lagging. We propose a two-stage simultaneous translation pipeline system which is composed of Quartznet and BPE-based transformer. We propose a competitive simultaneous translation system and achieves a BLEU score of 24.39 in the audio input track.",System Description on Automatic Simultaneous Translation Workshop,"This paper shows our submission on the second automatic simultaneous translation workshop at NAACL2021. We participate in all the two directions of Chinese-to-English translation, Chinese audio→English text and Chinese text→English text. We do data filtering and model training techniques to get the best BLEU score and reduce the average lagging. We propose a two-stage simultaneous translation pipeline system which is composed of Quartznet and BPE-based transformer. We propose a competitive simultaneous translation system and achieves a BLEU score of 24.39 in the audio input track.",System Description on Automatic Simultaneous Translation Workshop,"This paper shows our submission on the second automatic simultaneous translation workshop at NAACL2021. We participate in all the two directions of Chinese-to-English translation, Chinese audio→English text and Chinese text→English text. We do data filtering and model training techniques to get the best BLEU score and reduce the average lagging. We propose a two-stage simultaneous translation pipeline system which is composed of Quartznet and BPE-based transformer. We propose a competitive simultaneous translation system and achieves a BLEU score of 24.39 in the audio input track.",,"System Description on Automatic Simultaneous Translation Workshop. This paper shows our submission on the second automatic simultaneous translation workshop at NAACL2021. We participate in all the two directions of Chinese-to-English translation, Chinese audio→English text and Chinese text→English text. We do data filtering and model training techniques to get the best BLEU score and reduce the average lagging. We propose a two-stage simultaneous translation pipeline system which is composed of Quartznet and BPE-based transformer. We propose a competitive simultaneous translation system and achieves a BLEU score of 24.39 in the audio input track.",2021
oflazer-1995-error,https://aclanthology.org/1995.iwpt-1.24,0,,,,,,,"Error-tolerant Finite State Recognition. Error-tolerant recognition enables the recognition of strings that deviate slightly from any string in the regular set recognized by the underlying finite state recognizer. In the context of natural language processing, it has applications in error-tolerant morphological analysis, and spelling correction. After a description of the concepts and algorithms involved, we give examples from these two applications: In morphological analysis, error-tolerant recognition allows misspelled input word forms to be corrected, and morphologically analyzed concurrently. The algorithm can be applied to the morphological analysis of any language whose morphology is fully captured by a single (and possibly very large) finite state transducer, regardless of the word formation processes (such as agglutination or productive compounding) and morphographemic phenomena involved. We present an application to error-tolerant analysis of agglutinative morphology of Turkish words. In spelling correction, error-tolerant recognition can be used to enumerate correct candidate forms from a given misspelled string within a certain edit distance. It can be applied to any language whose morphology is fully described by a finite state transducer, or with a word list comprising all inflected forms with very large word lists of root and inflected forms (some containing well over 200,000 forms), generating all candidate solutions within 10 to 45 milliseconds (with edit distance 1) on a SparcStation 10/41. For spelling correction in Turkish, error-tolerant recognition operating with a (circular) recognizer of Turkish words (with about 29,000 states and 119,000 transitions) can generate all candidate words in less than 20 milliseconds (with edit distance 1). Spelling correction using a recognizer constructed from a large German word list that simulates compounding, also indicates that the approach is applicable in such cases.",Error-tolerant Finite State Recognition,"Error-tolerant recognition enables the recognition of strings that deviate slightly from any string in the regular set recognized by the underlying finite state recognizer. In the context of natural language processing, it has applications in error-tolerant morphological analysis, and spelling correction. After a description of the concepts and algorithms involved, we give examples from these two applications: In morphological analysis, error-tolerant recognition allows misspelled input word forms to be corrected, and morphologically analyzed concurrently. The algorithm can be applied to the morphological analysis of any language whose morphology is fully captured by a single (and possibly very large) finite state transducer, regardless of the word formation processes (such as agglutination or productive compounding) and morphographemic phenomena involved. We present an application to error-tolerant analysis of agglutinative morphology of Turkish words. In spelling correction, error-tolerant recognition can be used to enumerate correct candidate forms from a given misspelled string within a certain edit distance. 
It can be applied to any language whose morphology is fully described by a finite state transducer, or with a word list comprising all inflected forms with very large word lists of root and inflected forms (some containing well over 200,000 forms), generating all candidate solutions within 10 to 45 milliseconds (with edit distance 1) on a SparcStation 10/41. For spelling correction in Turkish, error-tolerant recognition operating with a (circular) recognizer of Turkish words (with about 29,000 states and 119,000 transitions) can generate all candidate words in less than 20 milliseconds (with edit distance 1). Spelling correction using a recognizer constructed from a large German word list that simulates compounding, also indicates that the approach is applicable in such cases.",Error-tolerant Finite State Recognition,"Error-tolerant recognition enables the recognition of strings that deviate slightly from any string in the regular set recognized by the underlying finite state recognizer. In the context of natural language processing, it has applications in error-tolerant morphological analysis, and spelling correction. After a description of the concepts and algorithms involved, we give examples from these two applications: In morphological analysis, error-tolerant recognition allows misspelled input word forms to be corrected, and morphologically analyzed concurrently. The algorithm can be applied to the morphological analysis of any language whose morphology is fully captured by a single (and possibly very large) finite state transducer, regardless of the word formation processes (such as agglutination or productive compounding) and morphographemic phenomena involved. We present an application to error-tolerant analysis of agglutinative morphology of Turkish words. In spelling correction, error-tolerant recognition can be used to enumerate correct candidate forms from a given misspelled string within a certain edit distance. It can be applied to any language whose morphology is fully described by a finite state transducer, or with a word list comprising all inflected forms with very large word lists of root and inflected forms (some containing well over 200,000 forms), generating all candidate solutions within 10 to 45 milliseconds (with edit distance 1) on a SparcStation 10/41. For spelling correction in Turkish, error-tolerant recognition operating with a (circular) recognizer of Turkish words (with about 29,000 states and 119,000 transitions) can generate all candidate words in less than 20 milliseconds (with edit distance 1). Spelling correction using a recognizer constructed from a large German word list that simulates compounding, also indicates that the approach is applicable in such cases.","This research was supported in part by a NATO Science for Stability Project Grant TU-LANGUAGE. I would like to thank Xerox Advanced Document Systems, and Lauri Karttunen of Xerox PARC and of Rank Xerox Research Centre (Grenoble) for providing us with the two-level transducer development software. Kemal Olku and Kurtuluş Yorulmaz of Bilkent University implemented some of the algorithms.","Error-tolerant Finite State Recognition. Error-tolerant recognition enables the recognition of strings that deviate slightly from any string in the regular set recognized by the underlying finite state recognizer. In the context of natural language processing, it has applications in error-tolerant morphological analysis, and spelling correction. 
After a description of the concepts and algorithms involved, we give examples from these two applications: In morphological analysis, error-tolerant recognition allows misspelled input word forms to be corrected, and morphologically analyzed concurrently. The algorithm can be applied to the morphological analysis of any language whose morphology is fully captured by a single (and possibly very large) finite state transducer, regardless of the word formation processes (such as agglutination or productive compounding) and morphographemic phenomena involved. We present an application to error-tolerant analysis of agglutinative morphology of Turkish words. In spelling correction, error-tolerant recognition can be used to enumerate correct candidate forms from a given misspelled string within a certain edit distance. It can be applied to any language whose morphology is fully described by a finite state transducer, or with a word list comprising all inflected forms with very large word lists of root and inflected forms (some containing well over 200,000 forms), generating all candidate solutions within 10 to 45 milliseconds (with edit distance 1) on a SparcStation 10/41. For spelling correction in Turkish, error-tolerant recognition operating with a (circular) recognizer of Turkish words (with about 29,000 states and 119,000 transitions) can generate all candidate words in less than 20 milliseconds (with edit distance 1). Spelling correction using a recognizer constructed from a large German word list that simulates compounding, also indicates that the approach is applicable in such cases.",1995
mangeot-2014-motamot,http://www.lrec-conf.org/proceedings/lrec2014/pdf/128_Paper.pdf,0,,,,,,,"Mot\`aMot project: conversion of a French-Khmer published dictionary for building a multilingual lexical system. Economic issues related to the information processing techniques are very important. The development of such technologies is a major asset for developing countries like Cambodia and Laos, and emerging ones like Vietnam, Malaysia and Thailand. The MotAMot project aims to computerize an under-resourced language: Khmer, spoken mainly in Cambodia. The main goal of the project is the development of a multilingual lexical system targeted for Khmer. The macrostructure is a pivot one with each word sense of each language linked to a pivot axi. The microstructure comes from a simplification of the explanatory and combinatory dictionary. The lexical system has been initialized with data coming mainly from the conversion of the French-Khmer bilingual dictionary of Denis Richer from Word to XML format. The French part was completed with pronunciation and parts-of-speech coming from the FeM French-english-Malay dictionary. The Khmer headwords noted in IPA in the Richer dictionary were converted to Khmer writing with OpenFST, a finite state transducer tool. The resulting resource is available online for lookup, editing, download and remote programming via a REST API on a Jibiki platform.",{M}ot{\`a}{M}ot project: conversion of a {F}rench-{K}hmer published dictionary for building a multilingual lexical system,"Economic issues related to the information processing techniques are very important. The development of such technologies is a major asset for developing countries like Cambodia and Laos, and emerging ones like Vietnam, Malaysia and Thailand. The MotAMot project aims to computerize an under-resourced language: Khmer, spoken mainly in Cambodia. The main goal of the project is the development of a multilingual lexical system targeted for Khmer. The macrostructure is a pivot one with each word sense of each language linked to a pivot axi. The microstructure comes from a simplification of the explanatory and combinatory dictionary. The lexical system has been initialized with data coming mainly from the conversion of the French-Khmer bilingual dictionary of Denis Richer from Word to XML format. The French part was completed with pronunciation and parts-of-speech coming from the FeM French-english-Malay dictionary. The Khmer headwords noted in IPA in the Richer dictionary were converted to Khmer writing with OpenFST, a finite state transducer tool. The resulting resource is available online for lookup, editing, download and remote programming via a REST API on a Jibiki platform.",Mot\`aMot project: conversion of a French-Khmer published dictionary for building a multilingual lexical system,"Economic issues related to the information processing techniques are very important. The development of such technologies is a major asset for developing countries like Cambodia and Laos, and emerging ones like Vietnam, Malaysia and Thailand. The MotAMot project aims to computerize an under-resourced language: Khmer, spoken mainly in Cambodia. The main goal of the project is the development of a multilingual lexical system targeted for Khmer. The macrostructure is a pivot one with each word sense of each language linked to a pivot axi. The microstructure comes from a simplification of the explanatory and combinatory dictionary. 
The lexical system has been initialized with data coming mainly from the conversion of the French-Khmer bilingual dictionary of Denis Richer from Word to XML format. The French part was completed with pronunciation and parts-of-speech coming from the FeM French-english-Malay dictionary. The Khmer headwords noted in IPA in the Richer dictionary were converted to Khmer writing with OpenFST, a finite state transducer tool. The resulting resource is available online for lookup, editing, download and remote programming via a REST API on a Jibiki platform.",,"Mot\`aMot project: conversion of a French-Khmer published dictionary for building a multilingual lexical system. Economic issues related to the information processing techniques are very important. The development of such technologies is a major asset for developing countries like Cambodia and Laos, and emerging ones like Vietnam, Malaysia and Thailand. The MotAMot project aims to computerize an under-resourced language: Khmer, spoken mainly in Cambodia. The main goal of the project is the development of a multilingual lexical system targeted for Khmer. The macrostructure is a pivot one with each word sense of each language linked to a pivot axi. The microstructure comes from a simplification of the explanatory and combinatory dictionary. The lexical system has been initialized with data coming mainly from the conversion of the French-Khmer bilingual dictionary of Denis Richer from Word to XML format. The French part was completed with pronunciation and parts-of-speech coming from the FeM French-english-Malay dictionary. The Khmer headwords noted in IPA in the Richer dictionary were converted to Khmer writing with OpenFST, a finite state transducer tool. The resulting resource is available online for lookup, editing, download and remote programming via a REST API on a Jibiki platform.",2014
francopoulo-etal-2016-providing,https://aclanthology.org/W16-4711,0,,,,,,,"Providing and Analyzing NLP Terms for our Community. By its own nature, the Natural Language Processing (NLP) community is a priori the best equipped to study the evolution of its own publications, but works in this direction are rare and only recently have we seen a few attempts at charting the field. In this paper, we use the algorithms, resources, standards, tools and common practices of the NLP field to build a list of terms characteristic of ongoing research, by mining a large corpus of scientific publications, aiming at the largest possible exhaustivity and covering the largest possible time span. Study of the evolution of this term list through time reveals interesting insights on the dynamics of field and the availability of the term database and of the corpus (for a large part) make possible many further comparative studies in addition to providing a test field for a new graphic interface designed to perform visual time analytics of large sized thesauri.",Providing and Analyzing {NLP} Terms for our Community,"By its own nature, the Natural Language Processing (NLP) community is a priori the best equipped to study the evolution of its own publications, but works in this direction are rare and only recently have we seen a few attempts at charting the field. In this paper, we use the algorithms, resources, standards, tools and common practices of the NLP field to build a list of terms characteristic of ongoing research, by mining a large corpus of scientific publications, aiming at the largest possible exhaustivity and covering the largest possible time span. Study of the evolution of this term list through time reveals interesting insights on the dynamics of field and the availability of the term database and of the corpus (for a large part) make possible many further comparative studies in addition to providing a test field for a new graphic interface designed to perform visual time analytics of large sized thesauri.",Providing and Analyzing NLP Terms for our Community,"By its own nature, the Natural Language Processing (NLP) community is a priori the best equipped to study the evolution of its own publications, but works in this direction are rare and only recently have we seen a few attempts at charting the field. In this paper, we use the algorithms, resources, standards, tools and common practices of the NLP field to build a list of terms characteristic of ongoing research, by mining a large corpus of scientific publications, aiming at the largest possible exhaustivity and covering the largest possible time span. Study of the evolution of this term list through time reveals interesting insights on the dynamics of field and the availability of the term database and of the corpus (for a large part) make possible many further comparative studies in addition to providing a test field for a new graphic interface designed to perform visual time analytics of large sized thesauri.",,"Providing and Analyzing NLP Terms for our Community. By its own nature, the Natural Language Processing (NLP) community is a priori the best equipped to study the evolution of its own publications, but works in this direction are rare and only recently have we seen a few attempts at charting the field. 
In this paper, we use the algorithms, resources, standards, tools and common practices of the NLP field to build a list of terms characteristic of ongoing research, by mining a large corpus of scientific publications, aiming at the largest possible exhaustivity and covering the largest possible time span. Study of the evolution of this term list through time reveals interesting insights on the dynamics of field and the availability of the term database and of the corpus (for a large part) make possible many further comparative studies in addition to providing a test field for a new graphic interface designed to perform visual time analytics of large sized thesauri.",2016
goldberg-elhadad-2007-svm,https://aclanthology.org/P07-1029,0,,,,,,,"SVM Model Tampering and Anchored Learning: A Case Study in Hebrew NP Chunking. We study the issue of porting a known NLP method to a language with little existing NLP resources, specifically Hebrew SVM-based chunking. We introduce two SVM-based methods-Model Tampering and Anchored Learning. These allow fine grained analysis of the learned SVM models, which provides guidance to identify errors in the training corpus, distinguish the role and interaction of lexical features and eventually construct a model with ∼10% error reduction. The resulting chunker is shown to be robust in the presence of noise in the training corpus, relies on less lexical features than was previously understood and achieves an F-measure performance of 92.2 on automatically PoS-tagged text. The SVM analysis methods also provide general insight on SVM-based chunking.",{SVM} Model Tampering and Anchored Learning: A Case Study in {H}ebrew {NP} Chunking,"We study the issue of porting a known NLP method to a language with little existing NLP resources, specifically Hebrew SVM-based chunking. We introduce two SVM-based methods-Model Tampering and Anchored Learning. These allow fine grained analysis of the learned SVM models, which provides guidance to identify errors in the training corpus, distinguish the role and interaction of lexical features and eventually construct a model with ∼10% error reduction. The resulting chunker is shown to be robust in the presence of noise in the training corpus, relies on less lexical features than was previously understood and achieves an F-measure performance of 92.2 on automatically PoS-tagged text. The SVM analysis methods also provide general insight on SVM-based chunking.",SVM Model Tampering and Anchored Learning: A Case Study in Hebrew NP Chunking,"We study the issue of porting a known NLP method to a language with little existing NLP resources, specifically Hebrew SVM-based chunking. We introduce two SVM-based methods-Model Tampering and Anchored Learning. These allow fine grained analysis of the learned SVM models, which provides guidance to identify errors in the training corpus, distinguish the role and interaction of lexical features and eventually construct a model with ∼10% error reduction. The resulting chunker is shown to be robust in the presence of noise in the training corpus, relies on less lexical features than was previously understood and achieves an F-measure performance of 92.2 on automatically PoS-tagged text. The SVM analysis methods also provide general insight on SVM-based chunking.",,"SVM Model Tampering and Anchored Learning: A Case Study in Hebrew NP Chunking. We study the issue of porting a known NLP method to a language with little existing NLP resources, specifically Hebrew SVM-based chunking. We introduce two SVM-based methods-Model Tampering and Anchored Learning. These allow fine grained analysis of the learned SVM models, which provides guidance to identify errors in the training corpus, distinguish the role and interaction of lexical features and eventually construct a model with ∼10% error reduction. The resulting chunker is shown to be robust in the presence of noise in the training corpus, relies on less lexical features than was previously understood and achieves an F-measure performance of 92.2 on automatically PoS-tagged text. The SVM analysis methods also provide general insight on SVM-based chunking.",2007
hendrix-1982-natural,https://aclanthology.org/J82-2002,0,,,,,,,"Natural-Language Interface. A major problem faced by would-be users of computer systems is that computers generally make use of special-purpose languages familiar only to those trained in computer science. For a large number of applications requiring interaction between humans and computer systems, it would be highly desirable for machines to converse in English or other natural languages familiar to their human users.",Natural-Language Interface,"A major problem faced by would-be users of computer systems is that computers generally make use of special-purpose languages familiar only to those trained in computer science. For a large number of applications requiring interaction between humans and computer systems, it would be highly desirable for machines to converse in English or other natural languages familiar to their human users.",Natural-Language Interface,"A major problem faced by would-be users of computer systems is that computers generally make use of special-purpose languages familiar only to those trained in computer science. For a large number of applications requiring interaction between humans and computer systems, it would be highly desirable for machines to converse in English or other natural languages familiar to their human users.",,"Natural-Language Interface. A major problem faced by would-be users of computer systems is that computers generally make use of special-purpose languages familiar only to those trained in computer science. For a large number of applications requiring interaction between humans and computer systems, it would be highly desirable for machines to converse in English or other natural languages familiar to their human users.",1982
irvine-etal-2014-american,http://www.lrec-conf.org/proceedings/lrec2014/pdf/914_Paper.pdf,0,,,,,,,"The American Local News Corpus. We present the American Local News Corpus (ALNC), containing over 4 billion words of text from 2, 652 online newspapers in the United States. Each article in the corpus is associated with a timestamp, state, and city. All 50 U.S. states and 1, 924 cities are represented. We detail our method for taking daily snapshots of thousands of local and national newspapers and present two example corpus analyses. The first explores how different sports are talked about over time and geography. The second compares per capita murder rates with news coverage of murders across the 50 states. The ALNC is about the same size as the Gigaword corpus and is growing continuously. Version 1.0 is available for research use.",The {A}merican Local News Corpus,"We present the American Local News Corpus (ALNC), containing over 4 billion words of text from 2, 652 online newspapers in the United States. Each article in the corpus is associated with a timestamp, state, and city. All 50 U.S. states and 1, 924 cities are represented. We detail our method for taking daily snapshots of thousands of local and national newspapers and present two example corpus analyses. The first explores how different sports are talked about over time and geography. The second compares per capita murder rates with news coverage of murders across the 50 states. The ALNC is about the same size as the Gigaword corpus and is growing continuously. Version 1.0 is available for research use.",The American Local News Corpus,"We present the American Local News Corpus (ALNC), containing over 4 billion words of text from 2, 652 online newspapers in the United States. Each article in the corpus is associated with a timestamp, state, and city. All 50 U.S. states and 1, 924 cities are represented. We detail our method for taking daily snapshots of thousands of local and national newspapers and present two example corpus analyses. The first explores how different sports are talked about over time and geography. The second compares per capita murder rates with news coverage of murders across the 50 states. The ALNC is about the same size as the Gigaword corpus and is growing continuously. Version 1.0 is available for research use.",We would like to thank the creators of the Newspaper Map website for providing us with their database of U.S. newspapers. This material is based on research sponsored by DARPA under contract HR0011-09-1-0044 and by the Johns Hopkins University Human Language Technology Center of Excellence. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA or the U.S. Government.,"The American Local News Corpus. We present the American Local News Corpus (ALNC), containing over 4 billion words of text from 2, 652 online newspapers in the United States. Each article in the corpus is associated with a timestamp, state, and city. All 50 U.S. states and 1, 924 cities are represented. We detail our method for taking daily snapshots of thousands of local and national newspapers and present two example corpus analyses. The first explores how different sports are talked about over time and geography. The second compares per capita murder rates with news coverage of murders across the 50 states. The ALNC is about the same size as the Gigaword corpus and is growing continuously. 
Version 1.0 is available for research use.",2014
wissing-etal-2004-spoken,http://www.lrec-conf.org/proceedings/lrec2004/pdf/71.pdf,0,,,,,,,"A Spoken Afrikaans Language Resource Designed for Research on Pronunciation Variations. In this contribution, the design, collection, annotation and planned distribution of a new spoken language resource of Afrikaans (SALAR) is discussed. The corpus contains speech of mother tongue speakers of Afrikaans, and is intended to become a primary national language resource for phonetic research and research on pronunciation variations. As such, the corpus is designed to expose pronunciation variations due to regional accents, speech rate (normal and fast speech) and speech mode (read and spontaneous speech). The corpus is collected by the Potchefstroom Campus of the NorthWest University, but in all phases of the corpus creation process there was a close collaboration with ELIS-UG (Belgium), one of the institutions that has been engaged in the creation of the Spoken Dutch Corpus (CGN).",A Spoken {A}frikaans Language Resource Designed for Research on Pronunciation Variations,"In this contribution, the design, collection, annotation and planned distribution of a new spoken language resource of Afrikaans (SALAR) is discussed. The corpus contains speech of mother tongue speakers of Afrikaans, and is intended to become a primary national language resource for phonetic research and research on pronunciation variations. As such, the corpus is designed to expose pronunciation variations due to regional accents, speech rate (normal and fast speech) and speech mode (read and spontaneous speech). The corpus is collected by the Potchefstroom Campus of the NorthWest University, but in all phases of the corpus creation process there was a close collaboration with ELIS-UG (Belgium), one of the institutions that has been engaged in the creation of the Spoken Dutch Corpus (CGN).",A Spoken Afrikaans Language Resource Designed for Research on Pronunciation Variations,"In this contribution, the design, collection, annotation and planned distribution of a new spoken language resource of Afrikaans (SALAR) is discussed. The corpus contains speech of mother tongue speakers of Afrikaans, and is intended to become a primary national language resource for phonetic research and research on pronunciation variations. As such, the corpus is designed to expose pronunciation variations due to regional accents, speech rate (normal and fast speech) and speech mode (read and spontaneous speech). The corpus is collected by the Potchefstroom Campus of the NorthWest University, but in all phases of the corpus creation process there was a close collaboration with ELIS-UG (Belgium), one of the institutions that has been engaged in the creation of the Spoken Dutch Corpus (CGN).",,"A Spoken Afrikaans Language Resource Designed for Research on Pronunciation Variations. In this contribution, the design, collection, annotation and planned distribution of a new spoken language resource of Afrikaans (SALAR) is discussed. The corpus contains speech of mother tongue speakers of Afrikaans, and is intended to become a primary national language resource for phonetic research and research on pronunciation variations. As such, the corpus is designed to expose pronunciation variations due to regional accents, speech rate (normal and fast speech) and speech mode (read and spontaneous speech). 
The corpus is collected by the Potchefstroom Campus of the NorthWest University, but in all phases of the corpus creation process there was a close collaboration with ELIS-UG (Belgium), one of the institutions that has been engaged in the creation of the Spoken Dutch Corpus (CGN).",2004
lu-paladini-adell-2012-beyond,https://aclanthology.org/2012.amta-commercial.10,0,,,,,,,"Beyond MT: Source Content Quality and Process Automation. This document introduces the strategy implemented at CA Technologies to exploit Machine Translation (MT) at the corporate-wide level. We will introduce the different approaches followed to further improve the quality of the output of the machine translation engine once the engines have reached a maximum level of customization. Senior team support, clear communication between the parties involved and improvement measurement are the key components for the success of the initiative.",Beyond {MT}: Source Content Quality and Process Automation,"This document introduces the strategy implemented at CA Technologies to exploit Machine Translation (MT) at the corporate-wide level. We will introduce the different approaches followed to further improve the quality of the output of the machine translation engine once the engines have reached a maximum level of customization. Senior team support, clear communication between the parties involved and improvement measurement are the key components for the success of the initiative.",Beyond MT: Source Content Quality and Process Automation,"This document introduces the strategy implemented at CA Technologies to exploit Machine Translation (MT) at the corporate-wide level. We will introduce the different approaches followed to further improve the quality of the output of the machine translation engine once the engines have reached a maximum level of customization. Senior team support, clear communication between the parties involved and improvement measurement are the key components for the success of the initiative.",,"Beyond MT: Source Content Quality and Process Automation. This document introduces the strategy implemented at CA Technologies to exploit Machine Translation (MT) at the corporate-wide level. We will introduce the different approaches followed to further improve the quality of the output of the machine translation engine once the engines have reached a maximum level of customization. Senior team support, clear communication between the parties involved and improvement measurement are the key components for the success of the initiative.",2012
liu-etal-2018-towards-less,https://aclanthology.org/D18-1297,0,,,,,,,"Towards Less Generic Responses in Neural Conversation Models: A Statistical Re-weighting Method. Sequence-to-sequence neural generation models have achieved promising performance on short text conversation tasks. However, they tend to generate generic/dull responses, leading to unsatisfying dialogue experience. We observe that in conversation tasks, each query could have multiple responses, which forms a 1-ton or m-ton relationship in the view of the total corpus. The objective function used in standard sequence-to-sequence models will be dominated by loss terms with generic patterns. Inspired by this observation, we introduce a statistical re-weighting method that assigns different weights for the multiple responses of the same query, and trains the standard neural generation model with the weights. Experimental results on a large Chinese dialogue corpus show that our method improves the acceptance rate of generated responses compared with several baseline models and significantly reduces the number of generated generic responses.",Towards Less Generic Responses in Neural Conversation Models: A Statistical Re-weighting Method,"Sequence-to-sequence neural generation models have achieved promising performance on short text conversation tasks. However, they tend to generate generic/dull responses, leading to unsatisfying dialogue experience. We observe that in conversation tasks, each query could have multiple responses, which forms a 1-ton or m-ton relationship in the view of the total corpus. The objective function used in standard sequence-to-sequence models will be dominated by loss terms with generic patterns. Inspired by this observation, we introduce a statistical re-weighting method that assigns different weights for the multiple responses of the same query, and trains the standard neural generation model with the weights. Experimental results on a large Chinese dialogue corpus show that our method improves the acceptance rate of generated responses compared with several baseline models and significantly reduces the number of generated generic responses.",Towards Less Generic Responses in Neural Conversation Models: A Statistical Re-weighting Method,"Sequence-to-sequence neural generation models have achieved promising performance on short text conversation tasks. However, they tend to generate generic/dull responses, leading to unsatisfying dialogue experience. We observe that in conversation tasks, each query could have multiple responses, which forms a 1-ton or m-ton relationship in the view of the total corpus. The objective function used in standard sequence-to-sequence models will be dominated by loss terms with generic patterns. Inspired by this observation, we introduce a statistical re-weighting method that assigns different weights for the multiple responses of the same query, and trains the standard neural generation model with the weights. Experimental results on a large Chinese dialogue corpus show that our method improves the acceptance rate of generated responses compared with several baseline models and significantly reduces the number of generated generic responses.",,"Towards Less Generic Responses in Neural Conversation Models: A Statistical Re-weighting Method. Sequence-to-sequence neural generation models have achieved promising performance on short text conversation tasks. However, they tend to generate generic/dull responses, leading to unsatisfying dialogue experience. 
We observe that in conversation tasks, each query could have multiple responses, which forms a 1-ton or m-ton relationship in the view of the total corpus. The objective function used in standard sequence-to-sequence models will be dominated by loss terms with generic patterns. Inspired by this observation, we introduce a statistical re-weighting method that assigns different weights for the multiple responses of the same query, and trains the standard neural generation model with the weights. Experimental results on a large Chinese dialogue corpus show that our method improves the acceptance rate of generated responses compared with several baseline models and significantly reduces the number of generated generic responses.",2018
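The re-weighting described in the row above boils down to scaling each response's loss term by a per-example weight before averaging over the batch. A minimal PyTorch-style sketch of that step follows; the placeholder weights are illustrative and do not reproduce the paper's estimation formula.

# Minimal sketch of per-response loss re-weighting for a seq2seq training batch.
# `weights` is assumed to be precomputed per (query, response) pair from corpus
# statistics; the toy values below are placeholders, not the paper's formula.
import torch
import torch.nn.functional as F

def reweighted_nll(logits, targets, weights, pad_id=0):
    """logits: (batch, seq_len, vocab); targets: (batch, seq_len); weights: (batch,)."""
    vocab = logits.size(-1)
    token_loss = F.cross_entropy(
        logits.reshape(-1, vocab), targets.reshape(-1),
        ignore_index=pad_id, reduction="none",
    ).reshape(targets.shape)                               # per-token loss
    mask = (targets != pad_id).float()
    per_response = (token_loss * mask).sum(1) / mask.sum(1).clamp(min=1.0)
    return (weights * per_response).mean()                 # weighted batch loss

logits = torch.randn(2, 5, 100)            # toy decoder outputs
targets = torch.randint(1, 100, (2, 5))    # toy gold responses
weights = torch.tensor([1.0, 0.3])         # e.g. down-weight a generic response
print(reweighted_nll(logits, targets, weights).item())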
ide-suderman-2009-bridging,https://aclanthology.org/W09-3004,0,,,,,,,"Bridging the Gaps: Interoperability for GrAF, GATE, and UIMA. This paper explores interoperability for data represented using the Graph Annotation Framework (GrAF) (Ide and Suderman, 2007) and the data formats utilized by two general-purpose annotation systems: the General Architecture for Text Engineering (GATE) (Cunningham, 2002) and the Unstructured Information Management Architecture (UIMA). GrAF is intended to serve as a ""pivot"" to enable interoperability among different formats, and both GATE and UIMA are at least implicitly designed with an eye toward interoperability with other formats and tools. We describe the steps required to perform a round-trip rendering from GrAF to GATE and GrAF to UIMA CAS and back again, and outline the commonalities as well as the differences and gaps that came to light in the process.","Bridging the Gaps: Interoperability for {G}r{AF}, {GATE}, and {UIMA}","This paper explores interoperability for data represented using the Graph Annotation Framework (GrAF) (Ide and Suderman, 2007) and the data formats utilized by two general-purpose annotation systems: the General Architecture for Text Engineering (GATE) (Cunningham, 2002) and the Unstructured Information Management Architecture (UIMA). GrAF is intended to serve as a ""pivot"" to enable interoperability among different formats, and both GATE and UIMA are at least implicitly designed with an eye toward interoperability with other formats and tools. We describe the steps required to perform a round-trip rendering from GrAF to GATE and GrAF to UIMA CAS and back again, and outline the commonalities as well as the differences and gaps that came to light in the process.","Bridging the Gaps: Interoperability for GrAF, GATE, and UIMA","This paper explores interoperability for data represented using the Graph Annotation Framework (GrAF) (Ide and Suderman, 2007) and the data formats utilized by two general-purpose annotation systems: the General Architecture for Text Engineering (GATE) (Cunningham, 2002) and the Unstructured Information Management Architecture (UIMA). GrAF is intended to serve as a ""pivot"" to enable interoperability among different formats, and both GATE and UIMA are at least implicitly designed with an eye toward interoperability with other formats and tools. We describe the steps required to perform a round-trip rendering from GrAF to GATE and GrAF to UIMA CAS and back again, and outline the commonalities as well as the differences and gaps that came to light in the process.",This work was supported by an IBM UIMA Innovation Award and National Science Foundation grant INT-0753069.,"Bridging the Gaps: Interoperability for GrAF, GATE, and UIMA. This paper explores interoperability for data represented using the Graph Annotation Framework (GrAF) (Ide and Suderman, 2007) and the data formats utilized by two general-purpose annotation systems: the General Architecture for Text Engineering (GATE) (Cunningham, 2002) and the Unstructured Information Management Architecture (UIMA). GrAF is intended to serve as a ""pivot"" to enable interoperability among different formats, and both GATE and UIMA are at least implicitly designed with an eye toward interoperability with other formats and tools. We describe the steps required to perform a round-trip rendering from GrAF to GATE and GrAF to UIMA CAS and back again, and outline the commonalities as well as the differences and gaps that came to light in the process.",2009
nangia-bowman-2019-human,https://aclanthology.org/P19-1449,0,,,,,,,"Human vs. Muppet: A Conservative Estimate of Human Performance on the GLUE Benchmark. The GLUE benchmark (Wang et al., 2019b) is a suite of language understanding tasks which has seen dramatic progress in the past year, with average performance moving from 70.0 at launch to 83.9, state of the art at the time of writing (May 24, 2019). Here, we measure human performance on the benchmark, in order to learn whether significant headroom remains for further progress. We provide a conservative estimate of human performance on the benchmark through crowdsourcing: Our annotators are non-experts who must learn each task from a brief set of instructions and 20 examples. In spite of limited training, these annotators robustly outperform the state of the art on six of the nine GLUE tasks and achieve an average score of 87.1. Given the fast pace of progress however, the headroom we observe is quite limited. To reproduce the datapoor setting that our annotators must learn in, we also train the BERT model (Devlin et al., 2019) in limited-data regimes, and conclude that low-resource sentence classification remains a challenge for modern neural network approaches to text understanding.",Human vs. Muppet: A Conservative Estimate of Human Performance on the {GLUE} Benchmark,"The GLUE benchmark (Wang et al., 2019b) is a suite of language understanding tasks which has seen dramatic progress in the past year, with average performance moving from 70.0 at launch to 83.9, state of the art at the time of writing (May 24, 2019). Here, we measure human performance on the benchmark, in order to learn whether significant headroom remains for further progress. We provide a conservative estimate of human performance on the benchmark through crowdsourcing: Our annotators are non-experts who must learn each task from a brief set of instructions and 20 examples. In spite of limited training, these annotators robustly outperform the state of the art on six of the nine GLUE tasks and achieve an average score of 87.1. Given the fast pace of progress however, the headroom we observe is quite limited. To reproduce the datapoor setting that our annotators must learn in, we also train the BERT model (Devlin et al., 2019) in limited-data regimes, and conclude that low-resource sentence classification remains a challenge for modern neural network approaches to text understanding.",Human vs. Muppet: A Conservative Estimate of Human Performance on the GLUE Benchmark,"The GLUE benchmark (Wang et al., 2019b) is a suite of language understanding tasks which has seen dramatic progress in the past year, with average performance moving from 70.0 at launch to 83.9, state of the art at the time of writing (May 24, 2019). Here, we measure human performance on the benchmark, in order to learn whether significant headroom remains for further progress. We provide a conservative estimate of human performance on the benchmark through crowdsourcing: Our annotators are non-experts who must learn each task from a brief set of instructions and 20 examples. In spite of limited training, these annotators robustly outperform the state of the art on six of the nine GLUE tasks and achieve an average score of 87.1. Given the fast pace of progress however, the headroom we observe is quite limited. 
To reproduce the datapoor setting that our annotators must learn in, we also train the BERT model (Devlin et al., 2019) in limited-data regimes, and conclude that low-resource sentence classification remains a challenge for modern neural network approaches to text understanding.","This work was made possible in part by a donation to NYU from Eric and Wendy Schmidt made by recommendation of the Schmidt Futures program and by funding from Samsung Research. We gratefully acknowledge the support of NVIDIA Corporation with the donation of a Titan V GPU used at NYU for this research. We thank Alex Wang and Amanpreet Singh for their help with conducting GLUE evaluations, and we thank Jason Phang for his help with training the BERT model.","Human vs. Muppet: A Conservative Estimate of Human Performance on the GLUE Benchmark. The GLUE benchmark (Wang et al., 2019b) is a suite of language understanding tasks which has seen dramatic progress in the past year, with average performance moving from 70.0 at launch to 83.9, state of the art at the time of writing (May 24, 2019). Here, we measure human performance on the benchmark, in order to learn whether significant headroom remains for further progress. We provide a conservative estimate of human performance on the benchmark through crowdsourcing: Our annotators are non-experts who must learn each task from a brief set of instructions and 20 examples. In spite of limited training, these annotators robustly outperform the state of the art on six of the nine GLUE tasks and achieve an average score of 87.1. Given the fast pace of progress however, the headroom we observe is quite limited. To reproduce the datapoor setting that our annotators must learn in, we also train the BERT model (Devlin et al., 2019) in limited-data regimes, and conclude that low-resource sentence classification remains a challenge for modern neural network approaches to text understanding.",2019
heyer-etal-1990-knowledge,https://aclanthology.org/C90-3073,0,,,,,,,"Knowledge Representation and Semantics in a Complex Domain: The UNIX Natural Language Help System GOETHE. Natural language help systems for complex domains require, in our view, an integration of semantic representation and knowledge base in order to adequately and efficiently deal with cognitively misconceived user input. We present such an integration by way of the notion of a frame-semantics that has been implemented for the purposes of a natural language help system for UNIX.",Knowledge Representation and Semantics in a Complex Domain: The {UNIX} Natural Language Help System {GOETHE},"Natural language help systems for complex domains require, in our view, an integration of semantic representation and knowledge base in order to adequately and efficiently deal with cognitively misconceived user input. We present such an integration by way of the notion of a frame-semantics that has been implemented for the purposes of a natural language help system for UNIX.",Knowledge Representation and Semantics in a Complex Domain: The UNIX Natural Language Help System GOETHE,"Natural language help systems for complex domains require, in our view, an integration of semantic representation and knowledge base in order to adequately and efficiently deal with cognitively misconceived user input. We present such an integration by way of the notion of a frame-semantics that has been implemented for the purposes of a natural language help system for UNIX.",,"Knowledge Representation and Semantics in a Complex Domain: The UNIX Natural Language Help System GOETHE. Natural language help systems for complex domains require, in our view, an integration of semantic representation and knowledge base in order to adequately and efficiently deal with cognitively misconceived user input. We present such an integration by way of the notion of a frame-semantics that has been implemented for the purposes of a natural language help system for UNIX.",1990
zhou-etal-2022-hierarchical,https://aclanthology.org/2022.findings-acl.170,0,,,,,,,"Hierarchical Recurrent Aggregative Generation for Few-Shot NLG. Large pretrained models enable transfer learning to low-resource domains for language generation tasks. However, previous end-to-end approaches do not account for the fact that some generation sub-tasks, specifically aggregation and lexicalisation, can benefit from transfer learning to different extents. To exploit these varying potentials for transfer learning, we propose a new hierarchical approach for few-shot and zero-shot generation. Our approach consists of a three-moduled jointly trained architecture: the first module independently lexicalises the distinct units of information in the input as sentence sub-units (e.g. phrases), the second module recurrently aggregates these sub-units to generate a unified intermediate output, while the third module subsequently post-edits it to generate a coherent and fluent final text. We perform extensive empirical analysis and ablation studies on fewshot and zero-shot settings across 4 datasets. Automatic and human evaluation shows that the proposed hierarchical approach is consistently capable of achieving state-of-the-art results when compared to previous work. 1",Hierarchical Recurrent Aggregative Generation for Few-Shot {NLG},"Large pretrained models enable transfer learning to low-resource domains for language generation tasks. However, previous end-to-end approaches do not account for the fact that some generation sub-tasks, specifically aggregation and lexicalisation, can benefit from transfer learning to different extents. To exploit these varying potentials for transfer learning, we propose a new hierarchical approach for few-shot and zero-shot generation. Our approach consists of a three-moduled jointly trained architecture: the first module independently lexicalises the distinct units of information in the input as sentence sub-units (e.g. phrases), the second module recurrently aggregates these sub-units to generate a unified intermediate output, while the third module subsequently post-edits it to generate a coherent and fluent final text. We perform extensive empirical analysis and ablation studies on fewshot and zero-shot settings across 4 datasets. Automatic and human evaluation shows that the proposed hierarchical approach is consistently capable of achieving state-of-the-art results when compared to previous work. 1",Hierarchical Recurrent Aggregative Generation for Few-Shot NLG,"Large pretrained models enable transfer learning to low-resource domains for language generation tasks. However, previous end-to-end approaches do not account for the fact that some generation sub-tasks, specifically aggregation and lexicalisation, can benefit from transfer learning to different extents. To exploit these varying potentials for transfer learning, we propose a new hierarchical approach for few-shot and zero-shot generation. Our approach consists of a three-moduled jointly trained architecture: the first module independently lexicalises the distinct units of information in the input as sentence sub-units (e.g. phrases), the second module recurrently aggregates these sub-units to generate a unified intermediate output, while the third module subsequently post-edits it to generate a coherent and fluent final text. We perform extensive empirical analysis and ablation studies on fewshot and zero-shot settings across 4 datasets. 
Automatic and human evaluation shows that the proposed hierarchical approach is consistently capable of achieving state-of-the-art results when compared to previous work. 1",,"Hierarchical Recurrent Aggregative Generation for Few-Shot NLG. Large pretrained models enable transfer learning to low-resource domains for language generation tasks. However, previous end-to-end approaches do not account for the fact that some generation sub-tasks, specifically aggregation and lexicalisation, can benefit from transfer learning to different extents. To exploit these varying potentials for transfer learning, we propose a new hierarchical approach for few-shot and zero-shot generation. Our approach consists of a three-moduled jointly trained architecture: the first module independently lexicalises the distinct units of information in the input as sentence sub-units (e.g. phrases), the second module recurrently aggregates these sub-units to generate a unified intermediate output, while the third module subsequently post-edits it to generate a coherent and fluent final text. We perform extensive empirical analysis and ablation studies on fewshot and zero-shot settings across 4 datasets. Automatic and human evaluation shows that the proposed hierarchical approach is consistently capable of achieving state-of-the-art results when compared to previous work. 1",2022
chen-etal-2021-employing,https://aclanthology.org/2021.rocling-1.30,0,,,,,,,"Employing low-pass filtered temporal speech features for the training of ideal ratio mask in speech enhancement. [Only equation fragments of the abstract survived extraction: a signal x[n] is split into M frames x_m[n], each a D × 1 vector, stacked as X = [x_0 x_1 ... x_(M−1)], a D × M matrix.]",Employing low-pass filtered temporal speech features for the training of ideal ratio mask in speech enhancement,"[Only equation fragments of the abstract survived extraction: a signal x[n] is split into M frames x_m[n], each a D × 1 vector, stacked as X = [x_0 x_1 ... x_(M−1)], a D × M matrix.]",Employing low-pass filtered temporal speech features for the training of ideal ratio mask in speech enhancement,"[Only equation fragments of the abstract survived extraction: a signal x[n] is split into M frames x_m[n], each a D × 1 vector, stacked as X = [x_0 x_1 ... x_(M−1)], a D × M matrix.]",,"Employing low-pass filtered temporal speech features for the training of ideal ratio mask in speech enhancement. [Only equation fragments of the abstract survived extraction: a signal x[n] is split into M frames x_m[n], each a D × 1 vector, stacked as X = [x_0 x_1 ... x_(M−1)], a D × M matrix.]",2021
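The fragments recoverable from the row above describe the usual framing step: a signal x[n] cut into M overlapping frames, each a D × 1 column of the matrix X. A minimal NumPy sketch of that step follows; the frame length and hop size are arbitrary illustrative choices, not values taken from the paper.

# Minimal sketch of framing a 1-D signal into a D x M matrix of column frames,
# matching the notation X = [x_0 x_1 ... x_(M-1)] recovered above.
import numpy as np

def frame_signal(x, frame_len=256, hop=128):
    """Return a (frame_len, M) matrix whose m-th column is x[m*hop : m*hop+frame_len]."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack(
        [x[m * hop : m * hop + frame_len] for m in range(n_frames)], axis=1
    )

x = np.random.randn(16000)      # e.g. one second of 16 kHz audio
X = frame_signal(x)
print(X.shape)                  # (256, 124): D = 256 samples per frame, M = 124 frames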
hempelmann-etal-2005-evaluating,https://aclanthology.org/W05-0211,1,,,,education,,,"Evaluating State-of-the-Art Treebank-style Parsers for Coh-Metrix and Other Learning Technology Environments. This paper evaluates a series of freely available, state-of-the-art parsers on a standard benchmark as well as with respect to a set of data relevant for measuring text cohesion. We outline advantages and disadvantages of existing technologies and make recommendations. Our performance report uses traditional measures based on a gold standard as well as novel dimensions for parsing evaluation. To our knowledge this is the first attempt to evaluate parsers accross genres and grade levels for the implementation in learning technology.",Evaluating State-of-the-Art {T}reebank-style Parsers for {C}oh-{M}etrix and Other Learning Technology Environments,"This paper evaluates a series of freely available, state-of-the-art parsers on a standard benchmark as well as with respect to a set of data relevant for measuring text cohesion. We outline advantages and disadvantages of existing technologies and make recommendations. Our performance report uses traditional measures based on a gold standard as well as novel dimensions for parsing evaluation. To our knowledge this is the first attempt to evaluate parsers accross genres and grade levels for the implementation in learning technology.",Evaluating State-of-the-Art Treebank-style Parsers for Coh-Metrix and Other Learning Technology Environments,"This paper evaluates a series of freely available, state-of-the-art parsers on a standard benchmark as well as with respect to a set of data relevant for measuring text cohesion. We outline advantages and disadvantages of existing technologies and make recommendations. Our performance report uses traditional measures based on a gold standard as well as novel dimensions for parsing evaluation. To our knowledge this is the first attempt to evaluate parsers accross genres and grade levels for the implementation in learning technology.","This research was funded by Institute for Educations Science Grant IES R3056020018-02. Any opinions, findings, and conclusions or recommendations expressed in this article are those of the authors and do not necessarily reflect the views of the IES. We are grateful to Philip M. McCarthy for his assistance in preparing some of our data.","Evaluating State-of-the-Art Treebank-style Parsers for Coh-Metrix and Other Learning Technology Environments. This paper evaluates a series of freely available, state-of-the-art parsers on a standard benchmark as well as with respect to a set of data relevant for measuring text cohesion. We outline advantages and disadvantages of existing technologies and make recommendations. Our performance report uses traditional measures based on a gold standard as well as novel dimensions for parsing evaluation. To our knowledge this is the first attempt to evaluate parsers accross genres and grade levels for the implementation in learning technology.",2005
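Parser evaluations of the kind summarised in the row above compare predicted constituents against a gold standard in terms of labeled-bracketing precision, recall, and F-measure. The sketch below shows that computation for a single sentence with hypothetical spans; real evaluations typically rely on evalb and its standard conventions.

# Minimal sketch of labeled-bracketing precision/recall/F1 for one sentence.
# Constituents are (label, start, end) spans; the spans below are hypothetical,
# and tools such as evalb apply further conventions (punctuation, root handling).

def bracket_scores(gold, predicted):
    gold, predicted = set(gold), set(predicted)
    hits = len(gold & predicted)
    precision = hits / len(predicted) if predicted else 0.0
    recall = hits / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = [("NP", 0, 2), ("VP", 2, 5), ("S", 0, 5)]
pred = [("NP", 0, 2), ("VP", 3, 5), ("S", 0, 5)]     # one wrong span boundary
print(bracket_scores(gold, pred))                    # roughly (0.667, 0.667, 0.667)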
roark-etal-2009-deriving,https://aclanthology.org/D09-1034,0,,,,,,,"Deriving lexical and syntactic expectation-based measures for psycholinguistic modeling via incremental top-down parsing. A number of recent publications have made use of the incremental output of stochastic parsers to derive measures of high utility for psycholinguistic modeling, following the work of Hale (2001; 2003; 2006). In this paper, we present novel methods for calculating separate lexical and syntactic surprisal measures from a single incremental parser using a lexicalized PCFG. We also present an approximation to entropy measures that would otherwise be intractable to calculate for a grammar of that size. Empirical results demonstrate the utility of our methods in predicting human reading times.",Deriving lexical and syntactic expectation-based measures for psycholinguistic modeling via incremental top-down parsing,"A number of recent publications have made use of the incremental output of stochastic parsers to derive measures of high utility for psycholinguistic modeling, following the work of Hale (2001; 2003; 2006). In this paper, we present novel methods for calculating separate lexical and syntactic surprisal measures from a single incremental parser using a lexicalized PCFG. We also present an approximation to entropy measures that would otherwise be intractable to calculate for a grammar of that size. Empirical results demonstrate the utility of our methods in predicting human reading times.",Deriving lexical and syntactic expectation-based measures for psycholinguistic modeling via incremental top-down parsing,"A number of recent publications have made use of the incremental output of stochastic parsers to derive measures of high utility for psycholinguistic modeling, following the work of Hale (2001; 2003; 2006). In this paper, we present novel methods for calculating separate lexical and syntactic surprisal measures from a single incremental parser using a lexicalized PCFG. We also present an approximation to entropy measures that would otherwise be intractable to calculate for a grammar of that size. Empirical results demonstrate the utility of our methods in predicting human reading times.","Thanks to Michael Collins, John Hale and Shravan Vasishth for valuable discussions about this work. This research was supported in part by NSF Grant #BCS-0826654. Any opinions, findings, conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the NSF.","Deriving lexical and syntactic expectation-based measures for psycholinguistic modeling via incremental top-down parsing. A number of recent publications have made use of the incremental output of stochastic parsers to derive measures of high utility for psycholinguistic modeling, following the work of Hale (2001; 2003; 2006). In this paper, we present novel methods for calculating separate lexical and syntactic surprisal measures from a single incremental parser using a lexicalized PCFG. We also present an approximation to entropy measures that would otherwise be intractable to calculate for a grammar of that size. Empirical results demonstrate the utility of our methods in predicting human reading times.",2009
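Surprisal measures of the kind used in the row above come from the prefix probabilities an incremental parser assigns as it consumes the sentence word by word: surprisal(w_i) = −log( P(w_1..w_i) / P(w_1..w_{i−1}) ). The sketch below uses made-up prefix probabilities and does not reproduce the paper's split into lexical and syntactic components.

# Minimal sketch of word-by-word surprisal from incremental prefix probabilities,
#   surprisal(w_i) = -log2( P(w_1..w_i) / P(w_1..w_{i-1}) ).
# The prefix probabilities below are invented for illustration.
import math

def surprisals(prefix_probs):
    """prefix_probs[i] = P(w_1 .. w_{i+1}); returns one surprisal per word, in bits."""
    out, prev = [], 1.0
    for p in prefix_probs:
        out.append(-math.log2(p / prev))
        prev = p
    return out

prefix_probs = [0.2, 0.05, 0.04]                        # hypothetical parser output
print([round(s, 2) for s in surprisals(prefix_probs)])  # [2.32, 2.0, 0.32]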
su-etal-1989-smoothing,https://aclanthology.org/O89-1010,0,,,,,,,Smoothing Statistic Databases in a Machine Translation System. ,Smoothing Statistic Databases in a Machine Translation System,,Smoothing Statistic Databases in a Machine Translation System,,,Smoothing Statistic Databases in a Machine Translation System. ,1989
raake-2002-content,http://www.lrec-conf.org/proceedings/lrec2002/pdf/107.pdf,0,,,,,,,"Does the Content of Speech Influence its Perceived Sound Quality?. From a user's perspective, the speech quality of modern telecommunication systems often differs from that of traditional wireline telephone systems. One aspect is a changed sound of the interlocutor's voice-introduced by an expansion of the transmissionbandwidth to wide-band, by low-bitrate coding and/or by the acoustic properties of specific user-interfaces. In order to quantify the effect of transmission on speech quality, subjective data to be correlated to transmission characteristics have to be collected in auditory tests. In this paper, a study is presented investigating in how far the content of specific speech material used in a listening-only test impacts its perceived sound quality. A set of French speech data was presented to two different groups of listeners: French native speakers and listeners without knowledge of French. The speech material consists of different text types, such as everyday speech or semantically unpredictable sentences (SUS). The listeners were asked to rate the sound quality of the transmitted voice on a onedimensional category rating scale. The French listeners' ratings were found to be lower for SUS, while those of the non-French listeners did not show any major dependency on text material. Hence, it can be stated that if a given speech sign is understood by the listeners, they are unable to separate form from function and reflect content in their ratings of sound.",Does the Content of Speech Influence its Perceived Sound Quality?,"From a user's perspective, the speech quality of modern telecommunication systems often differs from that of traditional wireline telephone systems. One aspect is a changed sound of the interlocutor's voice-introduced by an expansion of the transmissionbandwidth to wide-band, by low-bitrate coding and/or by the acoustic properties of specific user-interfaces. In order to quantify the effect of transmission on speech quality, subjective data to be correlated to transmission characteristics have to be collected in auditory tests. In this paper, a study is presented investigating in how far the content of specific speech material used in a listening-only test impacts its perceived sound quality. A set of French speech data was presented to two different groups of listeners: French native speakers and listeners without knowledge of French. The speech material consists of different text types, such as everyday speech or semantically unpredictable sentences (SUS). The listeners were asked to rate the sound quality of the transmitted voice on a onedimensional category rating scale. The French listeners' ratings were found to be lower for SUS, while those of the non-French listeners did not show any major dependency on text material. Hence, it can be stated that if a given speech sign is understood by the listeners, they are unable to separate form from function and reflect content in their ratings of sound.",Does the Content of Speech Influence its Perceived Sound Quality?,"From a user's perspective, the speech quality of modern telecommunication systems often differs from that of traditional wireline telephone systems. One aspect is a changed sound of the interlocutor's voice-introduced by an expansion of the transmissionbandwidth to wide-band, by low-bitrate coding and/or by the acoustic properties of specific user-interfaces. 
In order to quantify the effect of transmission on speech quality, subjective data to be correlated to transmission characteristics have to be collected in auditory tests. In this paper, a study is presented investigating in how far the content of specific speech material used in a listening-only test impacts its perceived sound quality. A set of French speech data was presented to two different groups of listeners: French native speakers and listeners without knowledge of French. The speech material consists of different text types, such as everyday speech or semantically unpredictable sentences (SUS). The listeners were asked to rate the sound quality of the transmitted voice on a onedimensional category rating scale. The French listeners' ratings were found to be lower for SUS, while those of the non-French listeners did not show any major dependency on text material. Hence, it can be stated that if a given speech sign is understood by the listeners, they are unable to separate form from function and reflect content in their ratings of sound.","This work has been carried out at the Institute of Communication Acoustics, Ruhr-University Bochum (Prof. J. Blauert, PD. U. Jekosch). It was performed in the framework of a PROCOPE co-operation with the LMA, CNRS, Marseille, France (Dr. G. Canévet). The author would like to thank U. Jekosch, S. Möller and S. Schaden for fruitful discussions and S. Meunier (CNRS Marseille) for her help in auditory test organization at the LMA.","Does the Content of Speech Influence its Perceived Sound Quality?. From a user's perspective, the speech quality of modern telecommunication systems often differs from that of traditional wireline telephone systems. One aspect is a changed sound of the interlocutor's voice-introduced by an expansion of the transmissionbandwidth to wide-band, by low-bitrate coding and/or by the acoustic properties of specific user-interfaces. In order to quantify the effect of transmission on speech quality, subjective data to be correlated to transmission characteristics have to be collected in auditory tests. In this paper, a study is presented investigating in how far the content of specific speech material used in a listening-only test impacts its perceived sound quality. A set of French speech data was presented to two different groups of listeners: French native speakers and listeners without knowledge of French. The speech material consists of different text types, such as everyday speech or semantically unpredictable sentences (SUS). The listeners were asked to rate the sound quality of the transmitted voice on a onedimensional category rating scale. The French listeners' ratings were found to be lower for SUS, while those of the non-French listeners did not show any major dependency on text material. Hence, it can be stated that if a given speech sign is understood by the listeners, they are unable to separate form from function and reflect content in their ratings of sound.",2002
duan-etal-2012-twitter,https://aclanthology.org/C12-1047,1,,,,peace_justice_and_strong_institutions,,,"Twitter Topic Summarization by Ranking Tweets using Social Influence and Content Quality. In this paper, we propose a time-line based framework for topic summarization in Twitter. We summarize topics by sub-topics along time line to fully capture rapid topic evolution in Twitter. Specifically, we rank and select salient and diversified tweets as a summary of each sub-topic. We have observed that ranking tweets is significantly different from ranking sentences in traditional extractive document summarization. We model and formulate the tweet ranking in a unified mutual reinforcement graph, where the social influence of users and the content quality of tweets are taken into consideration simultaneously in a mutually reinforcing manner. Extensive experiments are conducted on 3.9 million tweets. The results show that the proposed approach outperforms previous approaches by 14% improvement on average ROUGE-1. Moreover, we show how the content quality of tweets and the social influence of users effectively improve the performance of measuring the salience of tweets.",{T}witter Topic Summarization by Ranking Tweets using Social Influence and Content Quality,"In this paper, we propose a time-line based framework for topic summarization in Twitter. We summarize topics by sub-topics along time line to fully capture rapid topic evolution in Twitter. Specifically, we rank and select salient and diversified tweets as a summary of each sub-topic. We have observed that ranking tweets is significantly different from ranking sentences in traditional extractive document summarization. We model and formulate the tweet ranking in a unified mutual reinforcement graph, where the social influence of users and the content quality of tweets are taken into consideration simultaneously in a mutually reinforcing manner. Extensive experiments are conducted on 3.9 million tweets. The results show that the proposed approach outperforms previous approaches by 14% improvement on average ROUGE-1. Moreover, we show how the content quality of tweets and the social influence of users effectively improve the performance of measuring the salience of tweets.",Twitter Topic Summarization by Ranking Tweets using Social Influence and Content Quality,"In this paper, we propose a time-line based framework for topic summarization in Twitter. We summarize topics by sub-topics along time line to fully capture rapid topic evolution in Twitter. Specifically, we rank and select salient and diversified tweets as a summary of each sub-topic. We have observed that ranking tweets is significantly different from ranking sentences in traditional extractive document summarization. We model and formulate the tweet ranking in a unified mutual reinforcement graph, where the social influence of users and the content quality of tweets are taken into consideration simultaneously in a mutually reinforcing manner. Extensive experiments are conducted on 3.9 million tweets. The results show that the proposed approach outperforms previous approaches by 14% improvement on average ROUGE-1. Moreover, we show how the content quality of tweets and the social influence of users effectively improve the performance of measuring the salience of tweets.",,"Twitter Topic Summarization by Ranking Tweets using Social Influence and Content Quality. In this paper, we propose a time-line based framework for topic summarization in Twitter. 
We summarize topics by sub-topics along time line to fully capture rapid topic evolution in Twitter. Specifically, we rank and select salient and diversified tweets as a summary of each sub-topic. We have observed that ranking tweets is significantly different from ranking sentences in traditional extractive document summarization. We model and formulate the tweet ranking in a unified mutual reinforcement graph, where the social influence of users and the content quality of tweets are taken into consideration simultaneously in a mutually reinforcing manner. Extensive experiments are conducted on 3.9 million tweets. The results show that the proposed approach outperforms previous approaches by 14% improvement on average ROUGE-1. Moreover, we show how the content quality of tweets and the social influence of users effectively improve the performance of measuring the salience of tweets.",2012
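The mutual-reinforcement ranking described in the record above can be illustrated with a small HITS-style iteration over a user–tweet graph. This is only a minimal sketch under assumed inputs (a binary authorship matrix `A` and uniform initial scores), not the paper's exact graph formulation or features.

```python
import numpy as np

def mutual_reinforcement(A, iters=50, tol=1e-8):
    """Toy mutual reinforcement between user influence and tweet salience.
    A[u, t] = 1 if user u is associated with (e.g. posted) tweet t.
    Illustrative only; not the paper's exact model."""
    n_users, n_tweets = A.shape
    user_score = np.full(n_users, 1.0 / n_users)
    tweet_score = np.full(n_tweets, 1.0 / n_tweets)
    for _ in range(iters):
        new_tweet = A.T @ user_score   # tweets are salient if influential users post them
        new_tweet /= new_tweet.sum()
        new_user = A @ new_tweet       # users are influential if their tweets are salient
        new_user /= new_user.sum()
        delta = np.abs(new_tweet - tweet_score).sum() + np.abs(new_user - user_score).sum()
        user_score, tweet_score = new_user, new_tweet
        if delta < tol:
            break
    return user_score, tweet_score

# Toy usage: 3 users, 4 tweets; rank tweets by estimated salience.
A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 0, 1]], dtype=float)
_, tweet_score = mutual_reinforcement(A)
print(tweet_score.argsort()[::-1])
```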
goodman-1985-repairing,https://aclanthology.org/P85-1026,0,,,,,,,Repairing Reference Identification Failures by Relaxation. The goal of this work is the enrichment of human-machine interactions in a natural language,Repairing Reference Identification Failures by Relaxation,The goal of this work is the enrichment of human-machine interactions in a natural language,Repairing Reference Identification Failures by Relaxation,The goal of this work is the enrichment of human-machine interactions in a natural language,,Repairing Reference Identification Failures by Relaxation. The goal of this work is the enrichment of human-machine interactions in a natural language,1985
bruce-wiebe-1998-word,https://aclanthology.org/W98-1507,0,,,,,,,"Word-Sense Distinguishability and Inter-Coder Agreement. It is common in NLP that the categories into which text is classified do not have fully objective definitions. Examples of such categories are lexical distinctions such as part-of-speech tags and word-sense distinctions, sentence level distinctions such as phrase attachment, and discourse level distinctions such as topic or speech-act categorization. This paper presents an approach to analyzing the agreement among human judges for the purpose of formulating a refined and more reliable set of category designations. We use these techniques to analyze the sense tags assigned by five judges to the noun interest. The initial tag set is taken from Longman's Dictionary of Contemporary English. Through this process of analysis, we automatically identify and assign a revised set of sense tags for the data. The revised tags exhibit high reliability as measured by Cohen's κ. Such techniques are important for formulating and evaluating both human and automated classification systems.",Word-Sense Distinguishability and Inter-Coder Agreement,"It is common in NLP that the categories into which text is classified do not have fully objective definitions. Examples of such categories are lexical distinctions such as part-of-speech tags and word-sense distinctions, sentence level distinctions such as phrase attachment, and discourse level distinctions such as topic or speech-act categorization. This paper presents an approach to analyzing the agreement among human judges for the purpose of formulating a refined and more reliable set of category designations. We use these techniques to analyze the sense tags assigned by five judges to the noun interest. The initial tag set is taken from Longman's Dictionary of Contemporary English. Through this process of analysis, we automatically identify and assign a revised set of sense tags for the data. The revised tags exhibit high reliability as measured by Cohen's κ. Such techniques are important for formulating and evaluating both human and automated classification systems.",Word-Sense Distinguishability and Inter-Coder Agreement,"It is common in NLP that the categories into which text is classified do not have fully objective definitions. Examples of such categories are lexical distinctions such as part-of-speech tags and word-sense distinctions, sentence level distinctions such as phrase attachment, and discourse level distinctions such as topic or speech-act categorization. This paper presents an approach to analyzing the agreement among human judges for the purpose of formulating a refined and more reliable set of category designations. We use these techniques to analyze the sense tags assigned by five judges to the noun interest. The initial tag set is taken from Longman's Dictionary of Contemporary English. Through this process of analysis, we automatically identify and assign a revised set of sense tags for the data. The revised tags exhibit high reliability as measured by Cohen's κ. Such techniques are important for formulating and evaluating both human and automated classification systems.",,"Word-Sense Distinguishability and Inter-Coder Agreement. It is common in NLP that the categories into which text is classified do not have fully objective definitions. 
Examples of such categories are lexical distinctions such as part-of-speech tags and word-sense distinctions, sentence level distinctions such as phrase attachment, and discourse level distinctions such as topic or speech-act categorization. This paper presents an approach to analyzing the agreement among human judges for the purpose of formulating a refined and more reliable set of category designations. We use these techniques to analyze the sense tags assigned by five judges to the noun interest. The initial tag set is taken from Longman's Dictionary of Contemporary English. Through this process of analysis, we automatically identify and assign a revised set of sense tags for the data. The revised tags exhibit high reliability as measured by Cohen's κ. Such techniques are important for formulating and evaluating both human and automated classification systems.",1998
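The reliability claim in the record above is stated in terms of Cohen's κ. The snippet below computes the standard two-annotator κ (observed versus chance-expected agreement); the sense labels for the noun "interest" are made up for illustration and are not the paper's data.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labeling the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    categories = set(labels_a) | set(labels_b)
    # Observed agreement.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Agreement expected by chance from the two annotators' label marginals.
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    p_e = sum((count_a[c] / n) * (count_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Toy example: two judges tagging senses of "interest".
a = ["money", "money", "attention", "money", "attention", "share"]
b = ["money", "attention", "attention", "money", "attention", "money"]
print(round(cohens_kappa(a, b), 3))  # ~0.429 on this toy data
```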
lauscher-etal-2018-investigating,https://aclanthology.org/D18-1370,0,,,,,,,"Investigating the Role of Argumentation in the Rhetorical Analysis of Scientific Publications with Neural Multi-Task Learning Models. Exponential growth in the number of scientific publications yields the need for effective automatic analysis of rhetorical aspects of scientific writing. Acknowledging the argumentative nature of scientific text, in this work we investigate the link between the argumentative structure of scientific publications and rhetorical aspects such as discourse categories or citation contexts. To this end, we (1) augment a corpus of scientific publications annotated with four layers of rhetoric annotations with argumentation annotations and (2) investigate neural multi-task learning architectures combining argument extraction with a set of rhetorical classification tasks. By coupling rhetorical classifiers with the extraction of argumentative components in a joint multi-task learning setting, we obtain significant performance gains for different rhetorical analysis tasks.",Investigating the Role of Argumentation in the Rhetorical Analysis of Scientific Publications with Neural Multi-Task Learning Models,"Exponential growth in the number of scientific publications yields the need for effective automatic analysis of rhetorical aspects of scientific writing. Acknowledging the argumentative nature of scientific text, in this work we investigate the link between the argumentative structure of scientific publications and rhetorical aspects such as discourse categories or citation contexts. To this end, we (1) augment a corpus of scientific publications annotated with four layers of rhetoric annotations with argumentation annotations and (2) investigate neural multi-task learning architectures combining argument extraction with a set of rhetorical classification tasks. By coupling rhetorical classifiers with the extraction of argumentative components in a joint multi-task learning setting, we obtain significant performance gains for different rhetorical analysis tasks.",Investigating the Role of Argumentation in the Rhetorical Analysis of Scientific Publications with Neural Multi-Task Learning Models,"Exponential growth in the number of scientific publications yields the need for effective automatic analysis of rhetorical aspects of scientific writing. Acknowledging the argumentative nature of scientific text, in this work we investigate the link between the argumentative structure of scientific publications and rhetorical aspects such as discourse categories or citation contexts. To this end, we (1) augment a corpus of scientific publications annotated with four layers of rhetoric annotations with argumentation annotations and (2) investigate neural multi-task learning architectures combining argument extraction with a set of rhetorical classification tasks. By coupling rhetorical classifiers with the extraction of argumentative components in a joint multi-task learning setting, we obtain significant performance gains for different rhetorical analysis tasks.","This research was partly funded by the German Research Foundation (DFG), grant number EC 477/5-1 (LOC-DB). We thank our four annotators for their dedicated annotation effort and the anonymous reviewers for constructive and insightful comments.","Investigating the Role of Argumentation in the Rhetorical Analysis of Scientific Publications with Neural Multi-Task Learning Models. 
Exponential growth in the number of scientific publications yields the need for effective automatic analysis of rhetorical aspects of scientific writing. Acknowledging the argumentative nature of scientific text, in this work we investigate the link between the argumentative structure of scientific publications and rhetorical aspects such as discourse categories or citation contexts. To this end, we (1) augment a corpus of scientific publications annotated with four layers of rhetoric annotations with argumentation annotations and (2) investigate neural multi-task learning architectures combining argument extraction with a set of rhetorical classification tasks. By coupling rhetorical classifiers with the extraction of argumentative components in a joint multi-task learning setting, we obtain significant performance gains for different rhetorical analysis tasks.",2018
martschat-strube-2015-latent,https://aclanthology.org/Q15-1029,0,,,,,,,"Latent Structures for Coreference Resolution. Machine learning approaches to coreference resolution vary greatly in the modeling of the problem: while early approaches operated on the mention pair level, current research focuses on ranking architectures and antecedent trees. We propose a unified representation of different approaches to coreference resolution in terms of the structure they operate on. We represent several coreference resolution approaches proposed in the literature in our framework and evaluate their performance. Finally, we conduct a systematic analysis of the output of these approaches, highlighting differences and similarities.",Latent Structures for Coreference Resolution,"Machine learning approaches to coreference resolution vary greatly in the modeling of the problem: while early approaches operated on the mention pair level, current research focuses on ranking architectures and antecedent trees. We propose a unified representation of different approaches to coreference resolution in terms of the structure they operate on. We represent several coreference resolution approaches proposed in the literature in our framework and evaluate their performance. Finally, we conduct a systematic analysis of the output of these approaches, highlighting differences and similarities.",Latent Structures for Coreference Resolution,"Machine learning approaches to coreference resolution vary greatly in the modeling of the problem: while early approaches operated on the mention pair level, current research focuses on ranking architectures and antecedent trees. We propose a unified representation of different approaches to coreference resolution in terms of the structure they operate on. We represent several coreference resolution approaches proposed in the literature in our framework and evaluate their performance. Finally, we conduct a systematic analysis of the output of these approaches, highlighting differences and similarities.","This work has been funded by the Klaus Tschira Foundation, Heidelberg, Germany. The first author has been supported by a HITS PhD scholarship. We thank the anonymous reviewers and our colleagues Benjamin Heinzerling, Yufang Hou and Nafise Moosavi for feedback on earlier drafts of this paper. Furthermore, we are grateful to Anders Björkelund for helpful comments on cost functions.","Latent Structures for Coreference Resolution. Machine learning approaches to coreference resolution vary greatly in the modeling of the problem: while early approaches operated on the mention pair level, current research focuses on ranking architectures and antecedent trees. We propose a unified representation of different approaches to coreference resolution in terms of the structure they operate on. We represent several coreference resolution approaches proposed in the literature in our framework and evaluate their performance. Finally, we conduct a systematic analysis of the output of these approaches, highlighting differences and similarities.",2015
muller-2020-pymmax2,https://aclanthology.org/2020.law-1.16,0,,,,,,,"pyMMAX2: Deep Access to MMAX2 Projects from Python. pyMMAX2 is an API for processing MMAX2 stand-off annotation data in Python. It provides a lightweight basis for the development of code which opens up the Java-and XML-based ecosystem of MMAX2 for more recent, Python-based NLP and data science methods. While pyMMAX2 is pure Python, and most functionality is implemented from scratch, the API re-uses the complex implementation of the essential business logic for MMAX2 annotation schemes by interfacing with the original MMAX2 Java libraries. pyMMAX2 is available for download at http://github.com/nlpAThits/pyMMAX2.",py{MMAX}2: Deep Access to {MMAX}2 Projects from Python,"pyMMAX2 is an API for processing MMAX2 stand-off annotation data in Python. It provides a lightweight basis for the development of code which opens up the Java-and XML-based ecosystem of MMAX2 for more recent, Python-based NLP and data science methods. While pyMMAX2 is pure Python, and most functionality is implemented from scratch, the API re-uses the complex implementation of the essential business logic for MMAX2 annotation schemes by interfacing with the original MMAX2 Java libraries. pyMMAX2 is available for download at http://github.com/nlpAThits/pyMMAX2.",pyMMAX2: Deep Access to MMAX2 Projects from Python,"pyMMAX2 is an API for processing MMAX2 stand-off annotation data in Python. It provides a lightweight basis for the development of code which opens up the Java-and XML-based ecosystem of MMAX2 for more recent, Python-based NLP and data science methods. While pyMMAX2 is pure Python, and most functionality is implemented from scratch, the API re-uses the complex implementation of the essential business logic for MMAX2 annotation schemes by interfacing with the original MMAX2 Java libraries. pyMMAX2 is available for download at http://github.com/nlpAThits/pyMMAX2.","The work described in this paper was done as part of the project DeepCurate, which is funded by the German Federal Ministry of Education and Research (BMBF) (No. 031L0204) and the Klaus Tschira Foundation, Heidelberg, Germany. We thank the anonymous reviewers for their helpful suggestions.","pyMMAX2: Deep Access to MMAX2 Projects from Python. pyMMAX2 is an API for processing MMAX2 stand-off annotation data in Python. It provides a lightweight basis for the development of code which opens up the Java-and XML-based ecosystem of MMAX2 for more recent, Python-based NLP and data science methods. While pyMMAX2 is pure Python, and most functionality is implemented from scratch, the API re-uses the complex implementation of the essential business logic for MMAX2 annotation schemes by interfacing with the original MMAX2 Java libraries. pyMMAX2 is available for download at http://github.com/nlpAThits/pyMMAX2.",2020
bhattasali-etal-2018-processing,https://aclanthology.org/W18-4904,0,,,,,,,"Processing MWEs: Neurocognitive Bases of Verbal MWEs and Lexical Cohesiveness within MWEs. Multiword expressions have posed a challenge in the past for computational linguistics since they comprise a heterogeneous family of word clusters and are difficult to detect in natural language data. In this paper, we present a fMRI study based on language comprehension to provide neuroimaging evidence for processing MWEs. We investigate whether different MWEs have distinct neural bases, e.g. if verbal MWEs involve separate brain areas from non-verbal MWEs and if MWEs with varying levels of cohesiveness activate dissociable brain regions. Our study contributes neuroimaging evidence illustrating that different MWEs elicit spatially distinct patterns of activation. We also adapt an association measure, usually used to detect MWEs, as a cognitively plausible metric for language processing.",Processing {MWE}s: Neurocognitive Bases of Verbal {MWE}s and Lexical Cohesiveness within {MWE}s,"Multiword expressions have posed a challenge in the past for computational linguistics since they comprise a heterogeneous family of word clusters and are difficult to detect in natural language data. In this paper, we present a fMRI study based on language comprehension to provide neuroimaging evidence for processing MWEs. We investigate whether different MWEs have distinct neural bases, e.g. if verbal MWEs involve separate brain areas from non-verbal MWEs and if MWEs with varying levels of cohesiveness activate dissociable brain regions. Our study contributes neuroimaging evidence illustrating that different MWEs elicit spatially distinct patterns of activation. We also adapt an association measure, usually used to detect MWEs, as a cognitively plausible metric for language processing.",Processing MWEs: Neurocognitive Bases of Verbal MWEs and Lexical Cohesiveness within MWEs,"Multiword expressions have posed a challenge in the past for computational linguistics since they comprise a heterogeneous family of word clusters and are difficult to detect in natural language data. In this paper, we present a fMRI study based on language comprehension to provide neuroimaging evidence for processing MWEs. We investigate whether different MWEs have distinct neural bases, e.g. if verbal MWEs involve separate brain areas from non-verbal MWEs and if MWEs with varying levels of cohesiveness activate dissociable brain regions. Our study contributes neuroimaging evidence illustrating that different MWEs elicit spatially distinct patterns of activation. We also adapt an association measure, usually used to detect MWEs, as a cognitively plausible metric for language processing.",This material is based upon work supported by the National Science Foundation under Grant No. 1607441.,"Processing MWEs: Neurocognitive Bases of Verbal MWEs and Lexical Cohesiveness within MWEs. Multiword expressions have posed a challenge in the past for computational linguistics since they comprise a heterogeneous family of word clusters and are difficult to detect in natural language data. In this paper, we present a fMRI study based on language comprehension to provide neuroimaging evidence for processing MWEs. We investigate whether different MWEs have distinct neural bases, e.g. if verbal MWEs involve separate brain areas from non-verbal MWEs and if MWEs with varying levels of cohesiveness activate dissociable brain regions. 
Our study contributes neuroimaging evidence illustrating that different MWEs elicit spatially distinct patterns of activation. We also adapt an association measure, usually used to detect MWEs, as a cognitively plausible metric for language processing.",2018
farrow-dzikovska-2009-context,https://aclanthology.org/W09-1502,0,,,,,,,"Context-Dependent Regression Testing for Natural Language Processing. Regression testing of natural language systems is problematic for two main reasons: component input and output is complex, and system behaviour is context-dependent. We have developed a generic approach which solves both of these issues. We describe our regression tool, CONTEST, which supports context-dependent testing of dialogue system components, and discuss the regression test sets we developed, designed to effectively isolate components from changes and problems earlier in the pipeline. We believe that the same approach can be used in regression testing for other dialogue systems, as well as in testing any complex NLP system containing multiple components.",Context-Dependent Regression Testing for Natural Language Processing,"Regression testing of natural language systems is problematic for two main reasons: component input and output is complex, and system behaviour is context-dependent. We have developed a generic approach which solves both of these issues. We describe our regression tool, CONTEST, which supports context-dependent testing of dialogue system components, and discuss the regression test sets we developed, designed to effectively isolate components from changes and problems earlier in the pipeline. We believe that the same approach can be used in regression testing for other dialogue systems, as well as in testing any complex NLP system containing multiple components.",Context-Dependent Regression Testing for Natural Language Processing,"Regression testing of natural language systems is problematic for two main reasons: component input and output is complex, and system behaviour is context-dependent. We have developed a generic approach which solves both of these issues. We describe our regression tool, CONTEST, which supports context-dependent testing of dialogue system components, and discuss the regression test sets we developed, designed to effectively isolate components from changes and problems earlier in the pipeline. We believe that the same approach can be used in regression testing for other dialogue systems, as well as in testing any complex NLP system containing multiple components.",This work has been supported in part by Office of Naval Research grant N000140810043. We thank Charles Callaway for help with generation and tutoring tests.,"Context-Dependent Regression Testing for Natural Language Processing. Regression testing of natural language systems is problematic for two main reasons: component input and output is complex, and system behaviour is context-dependent. We have developed a generic approach which solves both of these issues. We describe our regression tool, CONTEST, which supports context-dependent testing of dialogue system components, and discuss the regression test sets we developed, designed to effectively isolate components from changes and problems earlier in the pipeline. We believe that the same approach can be used in regression testing for other dialogue systems, as well as in testing any complex NLP system containing multiple components.",2009
manuvinakurike-etal-2018-conversational,https://aclanthology.org/W18-5033,0,,,,,,,"Conversational Image Editing: Incremental Intent Identification in a New Dialogue Task. We present ""conversational image editing"", a novel real-world application domain combining dialogue, visual information, and the use of computer vision. We discuss the importance of dialogue incrementality in this task, and build various models for incremental intent identification based on deep learning and traditional classification algorithms. We show how our model based on convolutional neural networks outperforms models based on random forests, long short term memory networks, and conditional random fields. By training embeddings based on image-related dialogue corpora, we outperform pre-trained out-of-the-box embeddings, for intention identification tasks. Our experiments also provide evidence that incremental intent processing may be more efficient for the user and could save time in accomplishing tasks.",Conversational Image Editing: Incremental Intent Identification in a New Dialogue Task,"We present ""conversational image editing"", a novel real-world application domain combining dialogue, visual information, and the use of computer vision. We discuss the importance of dialogue incrementality in this task, and build various models for incremental intent identification based on deep learning and traditional classification algorithms. We show how our model based on convolutional neural networks outperforms models based on random forests, long short term memory networks, and conditional random fields. By training embeddings based on image-related dialogue corpora, we outperform pre-trained out-of-the-box embeddings, for intention identification tasks. Our experiments also provide evidence that incremental intent processing may be more efficient for the user and could save time in accomplishing tasks.",Conversational Image Editing: Incremental Intent Identification in a New Dialogue Task,"We present ""conversational image editing"", a novel real-world application domain combining dialogue, visual information, and the use of computer vision. We discuss the importance of dialogue incrementality in this task, and build various models for incremental intent identification based on deep learning and traditional classification algorithms. We show how our model based on convolutional neural networks outperforms models based on random forests, long short term memory networks, and conditional random fields. By training embeddings based on image-related dialogue corpora, we outperform pre-trained out-of-the-box embeddings, for intention identification tasks. Our experiments also provide evidence that incremental intent processing may be more efficient for the user and could save time in accomplishing tasks.","This work was supported by a generous gift of Adobe Systems Incorporated to USC/ICT, and the first author's internship at Adobe Research. The first and last authors were also funded by the U.S. Army Research Laboratory. Statements and opinions expressed do not necessarily reflect the position or policy of the U.S. Government, and no official endorsement should be inferred.","Conversational Image Editing: Incremental Intent Identification in a New Dialogue Task. We present ""conversational image editing"", a novel real-world application domain combining dialogue, visual information, and the use of computer vision. 
We discuss the importance of dialogue incrementality in this task, and build various models for incremental intent identification based on deep learning and traditional classification algorithms. We show how our model based on convolutional neural networks outperforms models based on random forests, long short term memory networks, and conditional random fields. By training embeddings based on image-related dialogue corpora, we outperform pre-trained out-of-the-box embeddings, for intention identification tasks. Our experiments also provide evidence that incremental intent processing may be more efficient for the user and could save time in accomplishing tasks.",2018
roth-2017-role,https://aclanthology.org/W17-6934,0,,,,,,,"Role Semantics for Better Models of Implicit Discourse Relations. Predicting the structure of a discourse is challenging because relations between discourse segments are often implicit and thus hard to distinguish computationally. I extend previous work to classify implicit discourse relations by introducing a novel set of features on the level of semantic roles. My results demonstrate that such features are helpful, yielding results competitive with other feature-rich approaches on the PDTB. My main contribution is an analysis of improvements that can be traced back to role-based features, providing insights into why and when role semantics is helpful.",Role Semantics for Better Models of Implicit Discourse Relations,"Predicting the structure of a discourse is challenging because relations between discourse segments are often implicit and thus hard to distinguish computationally. I extend previous work to classify implicit discourse relations by introducing a novel set of features on the level of semantic roles. My results demonstrate that such features are helpful, yielding results competitive with other feature-rich approaches on the PDTB. My main contribution is an analysis of improvements that can be traced back to role-based features, providing insights into why and when role semantics is helpful.",Role Semantics for Better Models of Implicit Discourse Relations,"Predicting the structure of a discourse is challenging because relations between discourse segments are often implicit and thus hard to distinguish computationally. I extend previous work to classify implicit discourse relations by introducing a novel set of features on the level of semantic roles. My results demonstrate that such features are helpful, yielding results competitive with other feature-rich approaches on the PDTB. My main contribution is an analysis of improvements that can be traced back to role-based features, providing insights into why and when role semantics is helpful.","This research was supported in part by the Cluster of Excellence ""Multimodal Computing and Interaction"" of the German Excellence Initiative, and a DFG Research Fellowship (RO 4848/1-1).","Role Semantics for Better Models of Implicit Discourse Relations. Predicting the structure of a discourse is challenging because relations between discourse segments are often implicit and thus hard to distinguish computationally. I extend previous work to classify implicit discourse relations by introducing a novel set of features on the level of semantic roles. My results demonstrate that such features are helpful, yielding results competitive with other feature-rich approaches on the PDTB. My main contribution is an analysis of improvements that can be traced back to role-based features, providing insights into why and when role semantics is helpful.",2017
yamaguchi-etal-2021-frustratingly,https://aclanthology.org/2021.emnlp-main.249,0,,,,,,,"Frustratingly Simple Pretraining Alternatives to Masked Language Modeling. Masked language modeling (MLM), a selfsupervised pretraining objective, is widely used in natural language processing for learning text representations. MLM trains a model to predict a random sample of input tokens that have been replaced by a [MASK] placeholder in a multi-class setting over the entire vocabulary. When pretraining, it is common to use alongside MLM other auxiliary objectives on the token or sequence level to improve downstream performance (e.g. next sentence prediction). However, no previous work so far has attempted in examining whether other simpler linguistically intuitive or not objectives can be used standalone as main pretraining objectives. In this paper, we explore five simple pretraining objectives based on token-level classification tasks as replacements of MLM. Empirical results on GLUE and SQUAD show that our proposed methods achieve comparable or better performance to MLM using a BERT-BASE architecture. We further validate our methods using smaller models, showing that pretraining a model with 41% of the BERT-BASE's parameters, BERT-MEDIUM results in only a 1% drop in GLUE scores with our best objective. 1 * Work was done while at the University of Sheffield.",Frustratingly Simple Pretraining Alternatives to Masked Language Modeling,"Masked language modeling (MLM), a selfsupervised pretraining objective, is widely used in natural language processing for learning text representations. MLM trains a model to predict a random sample of input tokens that have been replaced by a [MASK] placeholder in a multi-class setting over the entire vocabulary. When pretraining, it is common to use alongside MLM other auxiliary objectives on the token or sequence level to improve downstream performance (e.g. next sentence prediction). However, no previous work so far has attempted in examining whether other simpler linguistically intuitive or not objectives can be used standalone as main pretraining objectives. In this paper, we explore five simple pretraining objectives based on token-level classification tasks as replacements of MLM. Empirical results on GLUE and SQUAD show that our proposed methods achieve comparable or better performance to MLM using a BERT-BASE architecture. We further validate our methods using smaller models, showing that pretraining a model with 41% of the BERT-BASE's parameters, BERT-MEDIUM results in only a 1% drop in GLUE scores with our best objective. 1 * Work was done while at the University of Sheffield.",Frustratingly Simple Pretraining Alternatives to Masked Language Modeling,"Masked language modeling (MLM), a selfsupervised pretraining objective, is widely used in natural language processing for learning text representations. MLM trains a model to predict a random sample of input tokens that have been replaced by a [MASK] placeholder in a multi-class setting over the entire vocabulary. When pretraining, it is common to use alongside MLM other auxiliary objectives on the token or sequence level to improve downstream performance (e.g. next sentence prediction). However, no previous work so far has attempted in examining whether other simpler linguistically intuitive or not objectives can be used standalone as main pretraining objectives. In this paper, we explore five simple pretraining objectives based on token-level classification tasks as replacements of MLM. 
Empirical results on GLUE and SQUAD show that our proposed methods achieve comparable or better performance to MLM using a BERT-BASE architecture. We further validate our methods using smaller models, showing that pretraining a model with 41% of the BERT-BASE's parameters, BERT-MEDIUM results in only a 1% drop in GLUE scores with our best objective. 1 * Work was done while at the University of Sheffield.","NA is supported by EPSRC grant EP/V055712/1, part of the European Commission CHIST-ERA programme, call 2019 XAI: Explainable Machine Learning-based Artificial Intelligence. KM is supported by Amazon through the Alexa Fellowship scheme.","Frustratingly Simple Pretraining Alternatives to Masked Language Modeling. Masked language modeling (MLM), a selfsupervised pretraining objective, is widely used in natural language processing for learning text representations. MLM trains a model to predict a random sample of input tokens that have been replaced by a [MASK] placeholder in a multi-class setting over the entire vocabulary. When pretraining, it is common to use alongside MLM other auxiliary objectives on the token or sequence level to improve downstream performance (e.g. next sentence prediction). However, no previous work so far has attempted in examining whether other simpler linguistically intuitive or not objectives can be used standalone as main pretraining objectives. In this paper, we explore five simple pretraining objectives based on token-level classification tasks as replacements of MLM. Empirical results on GLUE and SQUAD show that our proposed methods achieve comparable or better performance to MLM using a BERT-BASE architecture. We further validate our methods using smaller models, showing that pretraining a model with 41% of the BERT-BASE's parameters, BERT-MEDIUM results in only a 1% drop in GLUE scores with our best objective. 1 * Work was done while at the University of Sheffield.",2021
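As a purely illustrative example of what a token-level classification objective replacing MLM might look like (an assumed construction, not necessarily one of the five objectives studied in the record above), the sketch below corrupts a sentence by swapping a few token positions and labels each position, so a model can be trained as a per-token binary classifier instead of using a vocabulary-sized MLM head.

```python
import random

def shuffle_detection_example(tokens, swap_frac=0.15, seed=None):
    """Build one training example for a per-token 'was this position changed?'
    objective: swap a few positions, label 1 where the corrupted token differs
    from the original. Hypothetical sketch, not the paper's exact objective."""
    rng = random.Random(seed)
    original = list(tokens)
    corrupted = list(tokens)
    n_swaps = max(1, int(len(corrupted) * swap_frac / 2))
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(corrupted)), 2)
        corrupted[i], corrupted[j] = corrupted[j], corrupted[i]
    labels = [int(c != o) for c, o in zip(corrupted, original)]
    return corrupted, labels

corrupted, labels = shuffle_detection_example(
    "the quick brown fox jumps over the lazy dog".split(), seed=0)
print(list(zip(corrupted, labels)))
```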
herzig-etal-2016-classifying,https://aclanthology.org/W16-3609,0,,,,business_use,,,"Classifying Emotions in Customer Support Dialogues in Social Media. Providing customer support through social media channels is gaining increasing popularity. In such a context, automatic detection and analysis of the emotions expressed by customers is important, as is identification of the emotional techniques (e.g., apology, empathy, etc.) in the responses of customer service agents. Result of such an analysis can help assess the quality of such a service, help and inform agents about desirable responses, and help develop automated service agents for social media interactions. In this paper, we show that, in addition to text based turn features, dialogue features can significantly improve detection of emotions in social media customer service dialogues and help predict emotional techniques used by customer service agents. Got excited to pick up the latest bundle since it was on sale today, but now I can't download it at all. Bummer. =/ Yeah, no problems there. The error is coming when I actually try to download the games. Error code: 412344 Uh oh! To check, were you able to purchase that title? Let's confirm by signing in at http://t.co/53fsdfd real quick. Appreciate that! Let's power cycle and unplug modem/router for 2 mins then try again. Seems to be working now. Weird. I tried that 3 different times earlier. Thanks. Odd, but glad to hear that's sorted! Happy gaming, and we'll be here to help if any other questions or concerns arise.",Classifying Emotions in Customer Support Dialogues in Social Media,"Providing customer support through social media channels is gaining increasing popularity. In such a context, automatic detection and analysis of the emotions expressed by customers is important, as is identification of the emotional techniques (e.g., apology, empathy, etc.) in the responses of customer service agents. Result of such an analysis can help assess the quality of such a service, help and inform agents about desirable responses, and help develop automated service agents for social media interactions. In this paper, we show that, in addition to text based turn features, dialogue features can significantly improve detection of emotions in social media customer service dialogues and help predict emotional techniques used by customer service agents. Got excited to pick up the latest bundle since it was on sale today, but now I can't download it at all. Bummer. =/ Yeah, no problems there. The error is coming when I actually try to download the games. Error code: 412344 Uh oh! To check, were you able to purchase that title? Let's confirm by signing in at http://t.co/53fsdfd real quick. Appreciate that! Let's power cycle and unplug modem/router for 2 mins then try again. Seems to be working now. Weird. I tried that 3 different times earlier. Thanks. Odd, but glad to hear that's sorted! Happy gaming, and we'll be here to help if any other questions or concerns arise.",Classifying Emotions in Customer Support Dialogues in Social Media,"Providing customer support through social media channels is gaining increasing popularity. In such a context, automatic detection and analysis of the emotions expressed by customers is important, as is identification of the emotional techniques (e.g., apology, empathy, etc.) in the responses of customer service agents. 
Result of such an analysis can help assess the quality of such a service, help and inform agents about desirable responses, and help develop automated service agents for social media interactions. In this paper, we show that, in addition to text based turn features, dialogue features can significantly improve detection of emotions in social media customer service dialogues and help predict emotional techniques used by customer service agents. Got excited to pick up the latest bundle since it was on sale today, but now I can't download it at all. Bummer. =/ Yeah, no problems there. The error is coming when I actually try to download the games. Error code: 412344 Uh oh! To check, were you able to purchase that title? Let's confirm by signing in at http://t.co/53fsdfd real quick. Appreciate that! Let's power cycle and unplug modem/router for 2 mins then try again. Seems to be working now. Weird. I tried that 3 different times earlier. Thanks. Odd, but glad to hear that's sorted! Happy gaming, and we'll be here to help if any other questions or concerns arise.",,"Classifying Emotions in Customer Support Dialogues in Social Media. Providing customer support through social media channels is gaining increasing popularity. In such a context, automatic detection and analysis of the emotions expressed by customers is important, as is identification of the emotional techniques (e.g., apology, empathy, etc.) in the responses of customer service agents. Result of such an analysis can help assess the quality of such a service, help and inform agents about desirable responses, and help develop automated service agents for social media interactions. In this paper, we show that, in addition to text based turn features, dialogue features can significantly improve detection of emotions in social media customer service dialogues and help predict emotional techniques used by customer service agents. Got excited to pick up the latest bundle since it was on sale today, but now I can't download it at all. Bummer. =/ Yeah, no problems there. The error is coming when I actually try to download the games. Error code: 412344 Uh oh! To check, were you able to purchase that title? Let's confirm by signing in at http://t.co/53fsdfd real quick. Appreciate that! Let's power cycle and unplug modem/router for 2 mins then try again. Seems to be working now. Weird. I tried that 3 different times earlier. Thanks. Odd, but glad to hear that's sorted! Happy gaming, and we'll be here to help if any other questions or concerns arise.",2016
pighin-etal-2012-analysis,http://www.lrec-conf.org/proceedings/lrec2012/pdf/337_Paper.pdf,0,,,,,,,"An Analysis (and an Annotated Corpus) of User Responses to Machine Translation Output. We present an annotated resource consisting of open-domain translation requests, automatic translations and user-provided corrections collected from casual users of the translation portal http://reverso.net. The layers of annotation provide: 1) quality assessments for 830 correction suggestions for translations into English, at the segment level, and 2) 814 usefulness assessments for English-Spanish and English-French translation suggestions, a suggestion being useful if it contains at least local clues that can be used to improve translation quality. We also discuss the results of our preliminary experiments concerning 1) the development of an automatic filter to separate useful from non-useful feedback, and 2) the incorporation in the machine translation pipeline of bilingual phrases extracted from the suggestions. The annotated data, available for download from ftp://mi.eng.cam.ac.uk/data/faust/LW-UPC-Oct11-FAUST-feedback-annotation.tgz, is released under a Creative Commons license. To our best knowledge, this is the first resource of this kind that has ever been made publicly available.",An Analysis (and an Annotated Corpus) of User Responses to Machine Translation Output,"We present an annotated resource consisting of open-domain translation requests, automatic translations and user-provided corrections collected from casual users of the translation portal http://reverso.net. The layers of annotation provide: 1) quality assessments for 830 correction suggestions for translations into English, at the segment level, and 2) 814 usefulness assessments for English-Spanish and English-French translation suggestions, a suggestion being useful if it contains at least local clues that can be used to improve translation quality. We also discuss the results of our preliminary experiments concerning 1) the development of an automatic filter to separate useful from non-useful feedback, and 2) the incorporation in the machine translation pipeline of bilingual phrases extracted from the suggestions. The annotated data, available for download from ftp://mi.eng.cam.ac.uk/data/faust/LW-UPC-Oct11-FAUST-feedback-annotation.tgz, is released under a Creative Commons license. To our best knowledge, this is the first resource of this kind that has ever been made publicly available.",An Analysis (and an Annotated Corpus) of User Responses to Machine Translation Output,"We present an annotated resource consisting of open-domain translation requests, automatic translations and user-provided corrections collected from casual users of the translation portal http://reverso.net. The layers of annotation provide: 1) quality assessments for 830 correction suggestions for translations into English, at the segment level, and 2) 814 usefulness assessments for English-Spanish and English-French translation suggestions, a suggestion being useful if it contains at least local clues that can be used to improve translation quality. We also discuss the results of our preliminary experiments concerning 1) the development of an automatic filter to separate useful from non-useful feedback, and 2) the incorporation in the machine translation pipeline of bilingual phrases extracted from the suggestions. 
The annotated data, available for download from ftp://mi.eng.cam.ac.uk/data/faust/LW-UPC-Oct11-FAUST-feedback-annotation.tgz, is released under a Creative Commons license. To our best knowledge, this is the first resource of this kind that has ever been made publicly available.","This research has been partially funded by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement numbers 247762 (FAUST project, FP7-ICT-2009-4-247762) and 247914 (MOLTO project, FP7-ICT-2009-4-247914) and by the Spanish Ministry of Education and Science (OpenMT-2, TIN2009-14675-C03).","An Analysis (and an Annotated Corpus) of User Responses to Machine Translation Output. We present an annotated resource consisting of open-domain translation requests, automatic translations and user-provided corrections collected from casual users of the translation portal http://reverso.net. The layers of annotation provide: 1) quality assessments for 830 correction suggestions for translations into English, at the segment level, and 2) 814 usefulness assessments for English-Spanish and English-French translation suggestions, a suggestion being useful if it contains at least local clues that can be used to improve translation quality. We also discuss the results of our preliminary experiments concerning 1) the development of an automatic filter to separate useful from non-useful feedback, and 2) the incorporation in the machine translation pipeline of bilingual phrases extracted from the suggestions. The annotated data, available for download from ftp://mi.eng.cam.ac.uk/data/faust/LW-UPC-Oct11-FAUST-feedback-annotation.tgz, is released under a Creative Commons license. To our best knowledge, this is the first resource of this kind that has ever been made publicly available.",2012
yonezaki-enomoto-1980-database,https://aclanthology.org/C80-1032,0,,,,,,,"Database System Based on Intensional Logic. Model-theoretic semantics of database systems is studied. As Richard Montague has done in his work, we translate statements of DDL and DML into intensional logic and the latter is interpreted with reference to a suitable model. Major advantages of this approach include (i) it lends itself to the design of database systems which can handle historical data, (ii) it provides a formal description of database semantics.",Database System Based on Intensional Logic,"Model-theoretic semantics of database systems is studied. As Richard Montague has done in his work, we translate statements of DDL and DML into intensional logic and the latter is interpreted with reference to a suitable model. Major advantages of this approach include (i) it lends itself to the design of database systems which can handle historical data, (ii) it provides a formal description of database semantics.",Database System Based on Intensional Logic,"Model-theoretic semantics of database systems is studied. As Richard Montague has done in his work, we translate statements of DDL and DML into intensional logic and the latter is interpreted with reference to a suitable model. Major advantages of this approach include (i) it lends itself to the design of database systems which can handle historical data, (ii) it provides a formal description of database semantics.",Our thanks are due to Mr. Kenichi Murata for fruitful discussions and encouragement and to Prof. Takuya Katayama and many other people whose ideas we have unwittingly absorbed over the years.,"Database System Based on Intensional Logic. Model-theoretic semantics of database systems is studied. As Richard Montague has done in his work, we translate statements of DDL and DML into intensional logic and the latter is interpreted with reference to a suitable model. Major advantages of this approach include (i) it lends itself to the design of database systems which can handle historical data, (ii) it provides a formal description of database semantics.",1980
wing-baldridge-2011-simple,https://aclanthology.org/P11-1096,0,,,,,,,"Simple supervised document geolocation with geodesic grids. We investigate automatic geolocation (i.e. identification of the location, expressed as latitude/longitude coordinates) of documents. Geolocation can be an effective means of summarizing large document collections and it is an important component of geographic information retrieval. We describe several simple supervised methods for document geolocation using only the document's raw text as evidence. All of our methods predict locations in the context of geodesic grids of varying degrees of resolution. We evaluate the methods on geotagged Wikipedia articles and Twitter feeds. For Wikipedia, our best method obtains a median prediction error of just 11.8 kilometers. Twitter geolocation is more challenging: we obtain a median error of 479 km, an improvement on previous results for the dataset.",Simple supervised document geolocation with geodesic grids,"We investigate automatic geolocation (i.e. identification of the location, expressed as latitude/longitude coordinates) of documents. Geolocation can be an effective means of summarizing large document collections and it is an important component of geographic information retrieval. We describe several simple supervised methods for document geolocation using only the document's raw text as evidence. All of our methods predict locations in the context of geodesic grids of varying degrees of resolution. We evaluate the methods on geotagged Wikipedia articles and Twitter feeds. For Wikipedia, our best method obtains a median prediction error of just 11.8 kilometers. Twitter geolocation is more challenging: we obtain a median error of 479 km, an improvement on previous results for the dataset.",Simple supervised document geolocation with geodesic grids,"We investigate automatic geolocation (i.e. identification of the location, expressed as latitude/longitude coordinates) of documents. Geolocation can be an effective means of summarizing large document collections and it is an important component of geographic information retrieval. We describe several simple supervised methods for document geolocation using only the document's raw text as evidence. All of our methods predict locations in the context of geodesic grids of varying degrees of resolution. We evaluate the methods on geotagged Wikipedia articles and Twitter feeds. For Wikipedia, our best method obtains a median prediction error of just 11.8 kilometers. Twitter geolocation is more challenging: we obtain a median error of 479 km, an improvement on previous results for the dataset.","This research was supported by a grant from the Morris Memorial Trust Fund of the New York Community Trust and from the Longhorn Innovation Fund for Technology. This paper benefited from reviewer comments and from discussion in the Natural Language Learning reading group at UT Austin, with particular thanks to Matt Lease.","Simple supervised document geolocation with geodesic grids. We investigate automatic geolocation (i.e. identification of the location, expressed as latitude/longitude coordinates) of documents. Geolocation can be an effective means of summarizing large document collections and it is an important component of geographic information retrieval. We describe several simple supervised methods for document geolocation using only the document's raw text as evidence. All of our methods predict locations in the context of geodesic grids of varying degrees of resolution. 
We evaluate the methods on geotagged Wikipedia articles and Twitter feeds. For Wikipedia, our best method obtains a median prediction error of just 11.8 kilometers. Twitter geolocation is more challenging: we obtain a median error of 479 km, an improvement on previous results for the dataset.",2011
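The grid-based geolocation in the record above reduces to two operations that can be sketched directly: discretizing latitude/longitude into cells at a chosen resolution, and scoring a prediction by great-circle distance to the true point. The 1-degree resolution, the cell indexing, and the Austin example below are illustrative assumptions, not the authors' setup.

```python
import math

def grid_cell(lat, lon, cell_deg=1.0):
    """Map a latitude/longitude pair to a grid cell id at the given
    resolution in degrees per cell (illustrative discretization only)."""
    row = int((lat + 90.0) // cell_deg)
    col = int((lon + 180.0) // cell_deg)
    return row, col

def cell_center(cell, cell_deg=1.0):
    """Return the latitude/longitude of a cell's center."""
    row, col = cell
    return (row + 0.5) * cell_deg - 90.0, (col + 0.5) * cell_deg - 180.0

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Haversine distance, used to score prediction error in kilometers."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Toy check: a point in Austin, TX mapped to a 1-degree cell, error vs. the true point.
true_lat, true_lon = 30.27, -97.74
pred_lat, pred_lon = cell_center(grid_cell(true_lat, true_lon))
print(round(great_circle_km(true_lat, true_lon, pred_lat, pred_lon), 1))
```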
schulz-etal-2020-named,https://aclanthology.org/2020.lrec-1.553,1,,,,health,,,"Named Entities in Medical Case Reports: Corpus and Experiments. We present a new corpus comprising annotations of medical entities in case reports, originating from PubMed Central's open access library. In the case reports, we annotate cases, conditions, findings, factors and negation modifiers. Moreover, where applicable, we annotate relations between these entities. As such, this is the first corpus of this kind made available to the scientific community in English. It enables the initial investigation of automatic information extraction from case reports through tasks like Named Entity Recognition, Relation Extraction and (sentence/paragraph) relevance detection. Additionally, we present four strong baseline systems for the detection of medical entities made available through the annotated dataset.",Named Entities in Medical Case Reports: Corpus and Experiments,"We present a new corpus comprising annotations of medical entities in case reports, originating from PubMed Central's open access library. In the case reports, we annotate cases, conditions, findings, factors and negation modifiers. Moreover, where applicable, we annotate relations between these entities. As such, this is the first corpus of this kind made available to the scientific community in English. It enables the initial investigation of automatic information extraction from case reports through tasks like Named Entity Recognition, Relation Extraction and (sentence/paragraph) relevance detection. Additionally, we present four strong baseline systems for the detection of medical entities made available through the annotated dataset.",Named Entities in Medical Case Reports: Corpus and Experiments,"We present a new corpus comprising annotations of medical entities in case reports, originating from PubMed Central's open access library. In the case reports, we annotate cases, conditions, findings, factors and negation modifiers. Moreover, where applicable, we annotate relations between these entities. As such, this is the first corpus of this kind made available to the scientific community in English. It enables the initial investigation of automatic information extraction from case reports through tasks like Named Entity Recognition, Relation Extraction and (sentence/paragraph) relevance detection. Additionally, we present four strong baseline systems for the detection of medical entities made available through the annotated dataset.","The research presented in this article is funded by the German Federal Ministry of Education and Research (BMBF) through the project QURATOR (Unternehmen Region, Wachstumskern, grant no. 03WKDA1A), see http://qurator. ai. We want to thank our medical experts for their help annotating the data set, especially Ashlee Finckh and Sophie Klopfenstein.","Named Entities in Medical Case Reports: Corpus and Experiments. We present a new corpus comprising annotations of medical entities in case reports, originating from PubMed Central's open access library. In the case reports, we annotate cases, conditions, findings, factors and negation modifiers. Moreover, where applicable, we annotate relations between these entities. As such, this is the first corpus of this kind made available to the scientific community in English. It enables the initial investigation of automatic information extraction from case reports through tasks like Named Entity Recognition, Relation Extraction and (sentence/paragraph) relevance detection. 
Additionally, we present four strong baseline systems for the detection of medical entities made available through the annotated dataset.",2020
nakashole-mitchell-2014-language,https://aclanthology.org/P14-1095,1,,,,disinformation_and_fake_news,,,"Language-Aware Truth Assessment of Fact Candidates. This paper introduces FactChecker, language-aware approach to truth-finding. FactChecker differs from prior approaches in that it does not rely on iterative peer voting, instead it leverages language to infer believability of fact candidates. In particular, FactChecker makes use of linguistic features to detect if a given source objectively states facts or is speculative and opinionated. To ensure that fact candidates mentioned in similar sources have similar believability, FactChecker augments objectivity with a co-mention score to compute the overall believability score of a fact candidate. Our experiments on various datasets show that FactChecker yields higher accuracy than existing approaches.",Language-Aware Truth Assessment of Fact Candidates,"This paper introduces FactChecker, language-aware approach to truth-finding. FactChecker differs from prior approaches in that it does not rely on iterative peer voting, instead it leverages language to infer believability of fact candidates. In particular, FactChecker makes use of linguistic features to detect if a given source objectively states facts or is speculative and opinionated. To ensure that fact candidates mentioned in similar sources have similar believability, FactChecker augments objectivity with a co-mention score to compute the overall believability score of a fact candidate. Our experiments on various datasets show that FactChecker yields higher accuracy than existing approaches.",Language-Aware Truth Assessment of Fact Candidates,"This paper introduces FactChecker, language-aware approach to truth-finding. FactChecker differs from prior approaches in that it does not rely on iterative peer voting, instead it leverages language to infer believability of fact candidates. In particular, FactChecker makes use of linguistic features to detect if a given source objectively states facts or is speculative and opinionated. To ensure that fact candidates mentioned in similar sources have similar believability, FactChecker augments objectivity with a co-mention score to compute the overall believability score of a fact candidate. Our experiments on various datasets show that FactChecker yields higher accuracy than existing approaches.",We thank members of the NELL team at CMU for their helpful comments. This research was supported by DARPA under contract number FA8750-13-2-0005.,"Language-Aware Truth Assessment of Fact Candidates. This paper introduces FactChecker, language-aware approach to truth-finding. FactChecker differs from prior approaches in that it does not rely on iterative peer voting, instead it leverages language to infer believability of fact candidates. In particular, FactChecker makes use of linguistic features to detect if a given source objectively states facts or is speculative and opinionated. To ensure that fact candidates mentioned in similar sources have similar believability, FactChecker augments objectivity with a co-mention score to compute the overall believability score of a fact candidate. Our experiments on various datasets show that FactChecker yields higher accuracy than existing approaches.",2014
gu-etal-2022-phrase,https://aclanthology.org/2022.acl-long.444,0,,,,,,,"Phrase-aware Unsupervised Constituency Parsing. Recent studies have achieved inspiring success in unsupervised grammar induction using masked language modeling (MLM) as the proxy task. Despite their high accuracy in identifying low-level structures, prior arts tend to struggle in capturing high-level structures like clauses, since the MLM task usually only requires information from local context. In this work, we revisit LM-based constituency parsing from a phrase-centered perspective. Inspired by the natural reading process of human readers, we propose to regularize the parser with phrases extracted by an unsupervised phrase tagger to help the LM model quickly manage low-level structures. For a better understanding of high-level structures, we propose a phrase-guided masking strategy for LM to emphasize more on reconstructing nonphrase words. We show that the initial phrase regularization serves as an effective bootstrap, and phrase-guided masking improves the identification of high-level structures. Experiments on the public benchmark with two different backbone models demonstrate the effectiveness and generality of our method.",Phrase-aware Unsupervised Constituency Parsing,"Recent studies have achieved inspiring success in unsupervised grammar induction using masked language modeling (MLM) as the proxy task. Despite their high accuracy in identifying low-level structures, prior arts tend to struggle in capturing high-level structures like clauses, since the MLM task usually only requires information from local context. In this work, we revisit LM-based constituency parsing from a phrase-centered perspective. Inspired by the natural reading process of human readers, we propose to regularize the parser with phrases extracted by an unsupervised phrase tagger to help the LM model quickly manage low-level structures. For a better understanding of high-level structures, we propose a phrase-guided masking strategy for LM to emphasize more on reconstructing nonphrase words. We show that the initial phrase regularization serves as an effective bootstrap, and phrase-guided masking improves the identification of high-level structures. Experiments on the public benchmark with two different backbone models demonstrate the effectiveness and generality of our method.",Phrase-aware Unsupervised Constituency Parsing,"Recent studies have achieved inspiring success in unsupervised grammar induction using masked language modeling (MLM) as the proxy task. Despite their high accuracy in identifying low-level structures, prior arts tend to struggle in capturing high-level structures like clauses, since the MLM task usually only requires information from local context. In this work, we revisit LM-based constituency parsing from a phrase-centered perspective. Inspired by the natural reading process of human readers, we propose to regularize the parser with phrases extracted by an unsupervised phrase tagger to help the LM model quickly manage low-level structures. For a better understanding of high-level structures, we propose a phrase-guided masking strategy for LM to emphasize more on reconstructing nonphrase words. We show that the initial phrase regularization serves as an effective bootstrap, and phrase-guided masking improves the identification of high-level structures. 
Experiments on the public benchmark with two different backbone models demonstrate the effectiveness and generality of our method.","Research was supported in part by US DARPA KAIROS Program No. FA8750-19-2-1004, SocialSim Program No. W911NF-17-C-0099, and INCAS Program No. HR001121C0165, National Science Foundation IIS-19-56151, IIS-17-41317, and IIS 17-04532, IBM-Illinois Discovery Accelerator Institute, and the Molecule Maker Lab Institute: An AI Research Institutes program supported by NSF under Award No. 2019897. Any opinions, findings, and conclusions or recommendations expressed herein are those of the authors and do not necessarily represent the views, either expressed or implied, of DARPA or the U.S. Government. The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing any funding agencies.","Phrase-aware Unsupervised Constituency Parsing. Recent studies have achieved inspiring success in unsupervised grammar induction using masked language modeling (MLM) as the proxy task. Despite their high accuracy in identifying low-level structures, prior arts tend to struggle in capturing high-level structures like clauses, since the MLM task usually only requires information from local context. In this work, we revisit LM-based constituency parsing from a phrase-centered perspective. Inspired by the natural reading process of human readers, we propose to regularize the parser with phrases extracted by an unsupervised phrase tagger to help the LM model quickly manage low-level structures. For a better understanding of high-level structures, we propose a phrase-guided masking strategy for LM to emphasize more on reconstructing nonphrase words. We show that the initial phrase regularization serves as an effective bootstrap, and phrase-guided masking improves the identification of high-level structures. Experiments on the public benchmark with two different backbone models demonstrate the effectiveness and generality of our method.",2022
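The phrase-guided masking strategy described in the row above can be illustrated with a small sketch: tokens inside extracted phrase spans are masked less often than tokens outside them, so the language model spends more of its reconstruction effort on non-phrase words. This is only a hedged approximation of the idea; the masking probabilities, the [MASK] token, and the phrase spans are placeholder assumptions, and the paper's actual training setup is not reproduced here.

```python
"""Minimal sketch of phrase-guided masking for MLM-style training (illustrative,
not the authors' implementation): tokens outside phrase spans are masked with
a higher probability than tokens inside them."""
import random

def phrase_guided_mask(tokens, phrase_spans, p_in=0.05, p_out=0.3,
                       mask_token="[MASK]", seed=0):
    """phrase_spans: list of (start, end) half-open index ranges assumed to
    come from an unsupervised phrase tagger."""
    rng = random.Random(seed)
    in_phrase = set()
    for start, end in phrase_spans:
        in_phrase.update(range(start, end))
    masked, labels = [], []
    for i, tok in enumerate(tokens):
        p = p_in if i in in_phrase else p_out
        if rng.random() < p:
            masked.append(mask_token)
            labels.append(tok)      # reconstruction target
        else:
            masked.append(tok)
            labels.append(None)     # not predicted
    return masked, labels

tokens = "the quick brown fox jumps over the lazy dog".split()
print(phrase_guided_mask(tokens, phrase_spans=[(0, 4), (5, 9)]))
```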
lagarda-casacuberta-2008-applying,https://aclanthology.org/2008.eamt-1.14,0,,,,,,,"Applying boosting to statistical machine translation. Boosting is a general method for improving the accuracy of a given learning algorithm under certain restrictions. In this work, AdaBoost, one of the most popular boosting algorithms, is adapted and applied to statistical machine translation. The appropriateness of this technique in this scenario is evaluated on a real translation task. Results from preliminary experiments confirm that statistical machine translation can take advantage of this technique, improving the translation quality.",Applying boosting to statistical machine translation,"Boosting is a general method for improving the accuracy of a given learning algorithm under certain restrictions. In this work, AdaBoost, one of the most popular boosting algorithms, is adapted and applied to statistical machine translation. The appropriateness of this technique in this scenario is evaluated on a real translation task. Results from preliminary experiments confirm that statistical machine translation can take advantage of this technique, improving the translation quality.",Applying boosting to statistical machine translation,"Boosting is a general method for improving the accuracy of a given learning algorithm under certain restrictions. In this work, AdaBoost, one of the most popular boosting algorithms, is adapted and applied to statistical machine translation. The appropriateness of this technique in this scenario is evaluated on a real translation task. Results from preliminary experiments confirm that statistical machine translation can take advantage of this technique, improving the translation quality.",,"Applying boosting to statistical machine translation. Boosting is a general method for improving the accuracy of a given learning algorithm under certain restrictions. In this work, AdaBoost, one of the most popular boosting algorithms, is adapted and applied to statistical machine translation. The appropriateness of this technique in this scenario is evaluated on a real translation task. Results from preliminary experiments confirm that statistical machine translation can take advantage of this technique, improving the translation quality.",2008
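The row above applies AdaBoost to statistical machine translation. As background, the sketch below shows the textbook AdaBoost reweighting loop that the paper adapts; the weak learners, toy data, and round count are illustrative assumptions, and the SMT-specific adaptation from the paper is not shown.

```python
"""Minimal sketch of the standard AdaBoost loop (the generic algorithm, not the
SMT-specific variant): examples the current weak system gets wrong receive
larger weights in the next round, and rounds are combined by their reliability."""
import math

def adaboost(examples, labels, weak_learners, rounds=3):
    """weak_learners: callables h(x) -> {-1, +1}; one is picked per round
    by lowest weighted training error."""
    n = len(examples)
    w = [1.0 / n] * n
    ensemble = []                                   # list of (alpha, h)
    for _ in range(rounds):
        errs = [(sum(wi for wi, x, y in zip(w, examples, labels) if h(x) != y), h)
                for h in weak_learners]
        err, h = min(errs, key=lambda t: t[0])
        err = min(max(err, 1e-10), 1 - 1e-10)       # avoid log(0) / division by 0
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        # reweight: misclassified examples get exponentially larger weights
        w = [wi * math.exp(-alpha * y * h(x)) for wi, x, y in zip(w, examples, labels)]
        z = sum(w)
        w = [wi / z for wi in w]
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

# toy usage: classify integers by sign with threshold stumps
xs = [-3, -2, -1, 1, 2, 3]
ys = [-1, -1, -1, 1, 1, 1]
stumps = [lambda x, t=t: 1 if x > t else -1 for t in (-2, 0, 2)]
clf = adaboost(xs, ys, stumps)
print([clf(x) for x in xs])
```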
darwish-etal-2014-verifiably,https://aclanthology.org/D14-1154,0,,,,,,,"Verifiably Effective Arabic Dialect Identification. Several recent papers on Arabic dialect identification have hinted that using a word unigram model is sufficient and effective for the task. However, most previous work was done on a standard fairly homogeneous dataset of dialectal user comments. In this paper, we show that training on the standard dataset does not generalize, because a unigram model may be tuned to topics in the comments and does not capture the distinguishing features of dialects. We show that effective dialect identification requires that we account for the distinguishing lexical, morphological, and phonological phenomena of dialects. We show that accounting for such can improve dialect detection accuracy by nearly 10% absolute.",Verifiably Effective {A}rabic Dialect Identification,"Several recent papers on Arabic dialect identification have hinted that using a word unigram model is sufficient and effective for the task. However, most previous work was done on a standard fairly homogeneous dataset of dialectal user comments. In this paper, we show that training on the standard dataset does not generalize, because a unigram model may be tuned to topics in the comments and does not capture the distinguishing features of dialects. We show that effective dialect identification requires that we account for the distinguishing lexical, morphological, and phonological phenomena of dialects. We show that accounting for such can improve dialect detection accuracy by nearly 10% absolute.",Verifiably Effective Arabic Dialect Identification,"Several recent papers on Arabic dialect identification have hinted that using a word unigram model is sufficient and effective for the task. However, most previous work was done on a standard fairly homogeneous dataset of dialectal user comments. In this paper, we show that training on the standard dataset does not generalize, because a unigram model may be tuned to topics in the comments and does not capture the distinguishing features of dialects. We show that effective dialect identification requires that we account for the distinguishing lexical, morphological, and phonological phenomena of dialects. We show that accounting for such can improve dialect detection accuracy by nearly 10% absolute.",,"Verifiably Effective Arabic Dialect Identification. Several recent papers on Arabic dialect identification have hinted that using a word unigram model is sufficient and effective for the task. However, most previous work was done on a standard fairly homogeneous dataset of dialectal user comments. In this paper, we show that training on the standard dataset does not generalize, because a unigram model may be tuned to topics in the comments and does not capture the distinguishing features of dialects. We show that effective dialect identification requires that we account for the distinguishing lexical, morphological, and phonological phenomena of dialects. We show that accounting for such can improve dialect detection accuracy by nearly 10% absolute.",2014
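The dialect-identification row above argues that word-unigram models latch onto topic and miss the lexical, morphological, and phonological cues that actually distinguish dialects. The sketch below illustrates that contrast by adding character n-gram features to word unigrams; the ASCII stand-in strings, the nearest-centroid classifier, and the n-gram order are assumptions for the example and are not taken from the paper.

```python
"""Illustrative sketch (toy ASCII stand-ins rather than real Arabic text):
word unigrams tie a classifier to topic words, while character n-grams can
pick up sub-word cues that distinguish dialects."""
from collections import Counter
import math

def features(text, use_char_ngrams=True, n=3):
    feats = Counter(("w", w) for w in text.split())           # word unigrams
    if use_char_ngrams:
        s = f" {text} "
        feats.update(("c", s[i:i + n]) for i in range(len(s) - n + 1))
    return feats

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def train_centroids(samples):
    """samples: list of (text, dialect_label); returns one summed feature
    vector per dialect label."""
    centroids = {}
    for text, label in samples:
        centroids.setdefault(label, Counter()).update(features(text))
    return centroids

def predict(text, centroids):
    return max(centroids, key=lambda lab: cosine(features(text), centroids[lab]))

# toy usage with made-up "dialect" strings
train = [("shlonak ya zalameh", "levantine"), ("ezayak ya basha", "egyptian")]
cents = train_centroids(train)
print(predict("ezayik", cents))   # char n-grams carry the signal here
```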
miranda-etal-2018-multilingual,https://aclanthology.org/D18-1483,0,,,,,,,"Multilingual Clustering of Streaming News. Clustering news across languages enables efficient media monitoring by aggregating articles from multilingual sources into coherent stories. Doing so in an online setting allows scalable processing of massive news streams. To this end, we describe a novel method for clustering an incoming stream of multilingual documents into monolingual and crosslingual story clusters. Unlike typical clustering approaches that consider a small and known number of labels, we tackle the problem of discovering an ever growing number of cluster labels in an online fashion, using real news datasets in multiple languages. Our method is simple to implement, computationally efficient and produces state-of-the-art results on datasets in German, English and Spanish.",Multilingual Clustering of Streaming News,"Clustering news across languages enables efficient media monitoring by aggregating articles from multilingual sources into coherent stories. Doing so in an online setting allows scalable processing of massive news streams. To this end, we describe a novel method for clustering an incoming stream of multilingual documents into monolingual and crosslingual story clusters. Unlike typical clustering approaches that consider a small and known number of labels, we tackle the problem of discovering an ever growing number of cluster labels in an online fashion, using real news datasets in multiple languages. Our method is simple to implement, computationally efficient and produces state-of-the-art results on datasets in German, English and Spanish.",Multilingual Clustering of Streaming News,"Clustering news across languages enables efficient media monitoring by aggregating articles from multilingual sources into coherent stories. Doing so in an online setting allows scalable processing of massive news streams. To this end, we describe a novel method for clustering an incoming stream of multilingual documents into monolingual and crosslingual story clusters. Unlike typical clustering approaches that consider a small and known number of labels, we tackle the problem of discovering an ever growing number of cluster labels in an online fashion, using real news datasets in multiple languages. Our method is simple to implement, computationally efficient and produces state-of-the-art results on datasets in German, English and Spanish.","We would like to thank Esma Balkır, Nikos Papasarantopoulos, Afonso Mendes, Shashi Narayan and the anonymous reviewers for their feedback. This project was supported by the European H2020 project SUMMA, grant agreement 688139 (see http://www.summa-project.eu) and by a grant from Bloomberg.","Multilingual Clustering of Streaming News. Clustering news across languages enables efficient media monitoring by aggregating articles from multilingual sources into coherent stories. Doing so in an online setting allows scalable processing of massive news streams. To this end, we describe a novel method for clustering an incoming stream of multilingual documents into monolingual and crosslingual story clusters. Unlike typical clustering approaches that consider a small and known number of labels, we tackle the problem of discovering an ever growing number of cluster labels in an online fashion, using real news datasets in multiple languages. Our method is simple to implement, computationally efficient and produces state-of-the-art results on datasets in German, English and Spanish.",2018
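The streaming-news row above describes discovering an ever-growing set of story labels online. A minimal single-pass clustering sketch of that general setup follows; the bag-of-words representation, cosine threshold, and toy headlines are assumptions, and the paper's cross-lingual linking and learned similarity are not modeled here.

```python
"""Minimal sketch of single-pass online story clustering (an illustration of the
general setup, not the paper's model): each incoming document joins the most
similar existing cluster if similarity clears a threshold, otherwise it starts
a new cluster, so the number of story labels grows with the stream."""
from collections import Counter
import math

def vec(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def cluster_stream(docs, threshold=0.2):
    clusters = []                     # each: {"centroid": Counter, "docs": [...]}
    assignments = []
    for doc in docs:
        v = vec(doc)
        best_i, best_sim = None, 0.0
        for i, c in enumerate(clusters):
            sim = cosine(v, c["centroid"])
            if sim > best_sim:
                best_i, best_sim = i, sim
        if best_i is not None and best_sim >= threshold:
            clusters[best_i]["centroid"] += v      # running bag-of-words centroid
            clusters[best_i]["docs"].append(doc)
            assignments.append(best_i)
        else:                                      # open a new story cluster
            clusters.append({"centroid": v, "docs": [doc]})
            assignments.append(len(clusters) - 1)
    return assignments, clusters

stream = ["earthquake hits coastal city",
          "coastal city earthquake damage rises",
          "football cup final tonight"]
print(cluster_stream(stream)[0])   # e.g. [0, 0, 1]
```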
popowich-etal-1997-lexicalist,https://aclanthology.org/1997.tmi-1.9,0,,,,,,,"A lexicalist approach to the translation of colloquial text. Colloquial English (CE) as found in television programs or typical conversations is different from text found in technical manuals, newspapers and books. Phrases tend to be shorter and less sophisticated. In this paper, we look at some of the theoretical and implementational issues involved in translating CE. We present a fully automatic large-scale multilingual natural language processing system for translation of CE input text, as found in the commercially transmitted closed-caption television signal, into simple target sentences. Our approach is based on Whitelock's Shake and Bake machine translation paradigm, which relies heavily on lexical resources. The system currently translates from English to Spanish with the translation modules for Brazilian Portuguese under development.",A lexicalist approach to the translation of colloquial text,"Colloquial English (CE) as found in television programs or typical conversations is different from text found in technical manuals, newspapers and books. Phrases tend to be shorter and less sophisticated. In this paper, we look at some of the theoretical and implementational issues involved in translating CE. We present a fully automatic large-scale multilingual natural language processing system for translation of CE input text, as found in the commercially transmitted closed-caption television signal, into simple target sentences. Our approach is based on Whitelock's Shake and Bake machine translation paradigm, which relies heavily on lexical resources. The system currently translates from English to Spanish with the translation modules for Brazilian Portuguese under development.",A lexicalist approach to the translation of colloquial text,"Colloquial English (CE) as found in television programs or typical conversations is different from text found in technical manuals, newspapers and books. Phrases tend to be shorter and less sophisticated. In this paper, we look at some of the theoretical and implementational issues involved in translating CE. We present a fully automatic large-scale multilingual natural language processing system for translation of CE input text, as found in the commercially transmitted closed-caption television signal, into simple target sentences. Our approach is based on Whitelock's Shake and Bake machine translation paradigm, which relies heavily on lexical resources. The system currently translates from English to Spanish with the translation modules for Brazilian Portuguese under development.",,"A lexicalist approach to the translation of colloquial text. Colloquial English (CE) as found in television programs or typical conversations is different from text found in technical manuals, newspapers and books. Phrases tend to be shorter and less sophisticated. In this paper, we look at some of the theoretical and implementational issues involved in translating CE. We present a fully automatic large-scale multilingual natural language processing system for translation of CE input text, as found in the commercially transmitted closed-caption television signal, into simple target sentences. Our approach is based on Whitelock's Shake and Bake machine translation paradigm, which relies heavily on lexical resources. The system currently translates from English to Spanish with the translation modules for Brazilian Portuguese under development.",1997
stajner-popovic-2018-improving,https://aclanthology.org/W18-7006,0,,,,,,,"Improving Machine Translation of English Relative Clauses with Automatic Text Simplification. This article explores the use of automatic sentence simplification as a preprocessing step in neural machine translation of English relative clauses into grammatically complex languages. Our experiments on English-to-Serbian and English-to-German translation show that this approach can reduce technical post-editing effort (number of post-edit operations) to obtain correct translation. We find that larger improvements can be achieved for more complex target languages, as well as for MT systems with lower overall performance. The improvements mainly originate from correctly simplified sentences with relatively complex structure, while simpler structures are already translated sufficiently well using the original source sentences.",Improving Machine Translation of {E}nglish Relative Clauses with Automatic Text Simplification,"This article explores the use of automatic sentence simplification as a preprocessing step in neural machine translation of English relative clauses into grammatically complex languages. Our experiments on English-to-Serbian and English-to-German translation show that this approach can reduce technical post-editing effort (number of post-edit operations) to obtain correct translation. We find that larger improvements can be achieved for more complex target languages, as well as for MT systems with lower overall performance. The improvements mainly originate from correctly simplified sentences with relatively complex structure, while simpler structures are already translated sufficiently well using the original source sentences.",Improving Machine Translation of English Relative Clauses with Automatic Text Simplification,"This article explores the use of automatic sentence simplification as a preprocessing step in neural machine translation of English relative clauses into grammatically complex languages. Our experiments on English-to-Serbian and English-to-German translation show that this approach can reduce technical post-editing effort (number of post-edit operations) to obtain correct translation. We find that larger improvements can be achieved for more complex target languages, as well as for MT systems with lower overall performance. The improvements mainly originate from correctly simplified sentences with relatively complex structure, while simpler structures are already translated sufficiently well using the original source sentences.","Acknowledgements: This research was supported by the ADAPT Centre for Digital Content Technology at Dublin City University, funded under the Science Foundation Ireland Research Centres Programme (Grant 13/RC/2106) and co-","Improving Machine Translation of English Relative Clauses with Automatic Text Simplification. This article explores the use of automatic sentence simplification as a preprocessing step in neural machine translation of English relative clauses into grammatically complex languages. Our experiments on English-to-Serbian and English-to-German translation show that this approach can reduce technical post-editing effort (number of post-edit operations) to obtain correct translation. We find that larger improvements can be achieved for more complex target languages, as well as for MT systems with lower overall performance. 
The improvements mainly originate from correctly simplified sentences with relatively complex structure, while simpler structures are already translated sufficiently well using the original source sentences.",2018
jansen-etal-2018-worldtree,https://aclanthology.org/L18-1433,1,,,,education,,,"WorldTree: A Corpus of Explanation Graphs for Elementary Science Questions supporting Multi-hop Inference. Developing methods of automated inference that are able to provide users with compelling human-readable justifications for why the answer to a question is correct is critical for domains such as science and medicine, where user trust and detecting costly errors are limiting factors to adoption. One of the central barriers to training question answering models on explainable inference tasks is the lack of gold explanations to serve as training data. In this paper we present a corpus of explanations for standardized science exams, a recent challenge task for question answering. We manually construct a corpus of detailed explanations for nearly all publicly available standardized elementary science questions (approximately 1,680 3rd through 5th grade questions) and represent these as ""explanation graphs""-sets of lexically overlapping sentences that describe how to arrive at the correct answer to a question through a combination of domain and world knowledge. We also provide an explanation-centered tablestore, a collection of semi-structured tables that contain the knowledge to construct these elementary science explanations. Together, these two knowledge resources map out a substantial portion of the knowledge required for answering and explaining elementary science exams, and provide both structured and free-text training data for the explainable inference task.",{W}orld{T}ree: A Corpus of Explanation Graphs for Elementary Science Questions supporting Multi-hop Inference,"Developing methods of automated inference that are able to provide users with compelling human-readable justifications for why the answer to a question is correct is critical for domains such as science and medicine, where user trust and detecting costly errors are limiting factors to adoption. One of the central barriers to training question answering models on explainable inference tasks is the lack of gold explanations to serve as training data. In this paper we present a corpus of explanations for standardized science exams, a recent challenge task for question answering. We manually construct a corpus of detailed explanations for nearly all publicly available standardized elementary science questions (approximately 1,680 3rd through 5th grade questions) and represent these as ""explanation graphs""-sets of lexically overlapping sentences that describe how to arrive at the correct answer to a question through a combination of domain and world knowledge. We also provide an explanation-centered tablestore, a collection of semi-structured tables that contain the knowledge to construct these elementary science explanations. Together, these two knowledge resources map out a substantial portion of the knowledge required for answering and explaining elementary science exams, and provide both structured and free-text training data for the explainable inference task.",WorldTree: A Corpus of Explanation Graphs for Elementary Science Questions supporting Multi-hop Inference,"Developing methods of automated inference that are able to provide users with compelling human-readable justifications for why the answer to a question is correct is critical for domains such as science and medicine, where user trust and detecting costly errors are limiting factors to adoption. 
One of the central barriers to training question answering models on explainable inference tasks is the lack of gold explanations to serve as training data. In this paper we present a corpus of explanations for standardized science exams, a recent challenge task for question answering. We manually construct a corpus of detailed explanations for nearly all publicly available standardized elementary science questions (approximately 1,680 3rd through 5th grade questions) and represent these as ""explanation graphs""-sets of lexically overlapping sentences that describe how to arrive at the correct answer to a question through a combination of domain and world knowledge. We also provide an explanation-centered tablestore, a collection of semi-structured tables that contain the knowledge to construct these elementary science explanations. Together, these two knowledge resources map out a substantial portion of the knowledge required for answering and explaining elementary science exams, and provide both structured and free-text training data for the explainable inference task.",,"WorldTree: A Corpus of Explanation Graphs for Elementary Science Questions supporting Multi-hop Inference. Developing methods of automated inference that are able to provide users with compelling human-readable justifications for why the answer to a question is correct is critical for domains such as science and medicine, where user trust and detecting costly errors are limiting factors to adoption. One of the central barriers to training question answering models on explainable inference tasks is the lack of gold explanations to serve as training data. In this paper we present a corpus of explanations for standardized science exams, a recent challenge task for question answering. We manually construct a corpus of detailed explanations for nearly all publicly available standardized elementary science questions (approximately 1,680 3rd through 5th grade questions) and represent these as ""explanation graphs""-sets of lexically overlapping sentences that describe how to arrive at the correct answer to a question through a combination of domain and world knowledge. We also provide an explanation-centered tablestore, a collection of semi-structured tables that contain the knowledge to construct these elementary science explanations. Together, these two knowledge resources map out a substantial portion of the knowledge required for answering and explaining elementary science exams, and provide both structured and free-text training data for the explainable inference task.",2018
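The WorldTree row above defines explanation graphs as sets of lexically overlapping sentences. The sketch below builds such a graph for a toy set of facts by connecting sentences that share content words; the stop-word list and example facts are assumptions, and this is not the corpus construction procedure itself.

```python
"""Minimal sketch of the 'explanation graph' idea (illustrative only): explanation
sentences are nodes, and two sentences are connected when they share at least
one content word, so a multi-hop chain of facts can link a question to its answer."""
from itertools import combinations

STOP = {"a", "an", "the", "is", "are", "of", "to", "in", "and", "kind"}

def content_words(sentence):
    return {w.strip(".,").lower() for w in sentence.split()} - STOP

def explanation_graph(sentences):
    """Return edges (i, j, shared_words) between lexically overlapping sentences."""
    words = [content_words(s) for s in sentences]
    edges = []
    for i, j in combinations(range(len(sentences)), 2):
        shared = words[i] & words[j]
        if shared:
            edges.append((i, j, shared))
    return edges

facts = ["the sun is a kind of star",
         "a star produces light and heat",
         "heat melts ice"]
for i, j, shared in explanation_graph(facts):
    print(i, "--", j, "via", sorted(shared))   # 0--1 via star, 1--2 via heat
```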
skantze-etal-2013-exploring,https://aclanthology.org/W13-4029,0,,,,,,,"Exploring the effects of gaze and pauses in situated human-robot interaction. In this paper, we present a user study where a robot instructs a human on how to draw a route on a map, similar to a Map Task. This setup has allowed us to study user reactions to the robot's conversational behaviour in order to get a better understanding of how to generate utterances in incremental dialogue systems. We have analysed the participants' subjective rating, task completion, verbal responses, gaze behaviour, drawing activity, and cognitive load. The results show that users utilise the robot's gaze in order to disambiguate referring expressions and manage the flow of the interaction. Furthermore, we show that the user's behaviour is affected by how pauses are realised in the robot's speech.",Exploring the effects of gaze and pauses in situated human-robot interaction,"In this paper, we present a user study where a robot instructs a human on how to draw a route on a map, similar to a Map Task. This setup has allowed us to study user reactions to the robot's conversational behaviour in order to get a better understanding of how to generate utterances in incremental dialogue systems. We have analysed the participants' subjective rating, task completion, verbal responses, gaze behaviour, drawing activity, and cognitive load. The results show that users utilise the robot's gaze in order to disambiguate referring expressions and manage the flow of the interaction. Furthermore, we show that the user's behaviour is affected by how pauses are realised in the robot's speech.",Exploring the effects of gaze and pauses in situated human-robot interaction,"In this paper, we present a user study where a robot instructs a human on how to draw a route on a map, similar to a Map Task. This setup has allowed us to study user reactions to the robot's conversational behaviour in order to get a better understanding of how to generate utterances in incremental dialogue systems. We have analysed the participants' subjective rating, task completion, verbal responses, gaze behaviour, drawing activity, and cognitive load. The results show that users utilise the robot's gaze in order to disambiguate referring expressions and manage the flow of the interaction. Furthermore, we show that the user's behaviour is affected by how pauses are realised in the robot's speech.",Gabriel Skantze is supported by the Swedish research council (VR) project Incremental processing in multimodal conversational systems (2011-6237). Anna Hjalmarsson is supported by the Swedish Research Council (VR) project Classifying and deploying pauses for flow control in conversational systems . Catharine Oertel is supported by GetHomeSafe (EU 7th Framework STREP 288667).,"Exploring the effects of gaze and pauses in situated human-robot interaction. In this paper, we present a user study where a robot instructs a human on how to draw a route on a map, similar to a Map Task. This setup has allowed us to study user reactions to the robot's conversational behaviour in order to get a better understanding of how to generate utterances in incremental dialogue systems. We have analysed the participants' subjective rating, task completion, verbal responses, gaze behaviour, drawing activity, and cognitive load. The results show that users utilise the robot's gaze in order to disambiguate referring expressions and manage the flow of the interaction. 
Furthermore, we show that the user's behaviour is affected by how pauses are realised in the robot's speech.",2013
gao-etal-2021-learning,https://aclanthology.org/2021.textgraphs-1.6,0,,,,,,,"Learning Clause Representation from Dependency-Anchor Graph for Connective Prediction. Semantic representation that supports the choice of an appropriate connective between pairs of clauses inherently addresses discourse coherence, which is important for tasks such as narrative understanding, argumentation, and discourse parsing. We propose a novel clause embedding method that applies graph learning to a data structure we refer to as a dependency-anchor graph. The dependency anchor graph incorporates two kinds of syntactic information, constituency structure and dependency relations, to highlight the subject and verb phrase relation. This enhances coherence-related aspects of representation. We design a neural model to learn a semantic representation for clauses from graph convolution over latent representations of the subject and verb phrase. We evaluate our method on two new datasets: a subset of a large corpus where the source texts are published novels, and a new dataset collected from students' essays. The results demonstrate a significant improvement over tree-based models, confirming the importance of emphasizing the subject and verb phrase. The performance gap between the two datasets illustrates the challenges of analyzing student's written text, plus a potential evaluation task for coherence modeling and an application for suggesting revisions to students.",Learning Clause Representation from Dependency-Anchor Graph for Connective Prediction,"Semantic representation that supports the choice of an appropriate connective between pairs of clauses inherently addresses discourse coherence, which is important for tasks such as narrative understanding, argumentation, and discourse parsing. We propose a novel clause embedding method that applies graph learning to a data structure we refer to as a dependency-anchor graph. The dependency anchor graph incorporates two kinds of syntactic information, constituency structure and dependency relations, to highlight the subject and verb phrase relation. This enhances coherence-related aspects of representation. We design a neural model to learn a semantic representation for clauses from graph convolution over latent representations of the subject and verb phrase. We evaluate our method on two new datasets: a subset of a large corpus where the source texts are published novels, and a new dataset collected from students' essays. The results demonstrate a significant improvement over tree-based models, confirming the importance of emphasizing the subject and verb phrase. The performance gap between the two datasets illustrates the challenges of analyzing student's written text, plus a potential evaluation task for coherence modeling and an application for suggesting revisions to students.",Learning Clause Representation from Dependency-Anchor Graph for Connective Prediction,"Semantic representation that supports the choice of an appropriate connective between pairs of clauses inherently addresses discourse coherence, which is important for tasks such as narrative understanding, argumentation, and discourse parsing. We propose a novel clause embedding method that applies graph learning to a data structure we refer to as a dependency-anchor graph. The dependency anchor graph incorporates two kinds of syntactic information, constituency structure and dependency relations, to highlight the subject and verb phrase relation. This enhances coherence-related aspects of representation. 
We design a neural model to learn a semantic representation for clauses from graph convolution over latent representations of the subject and verb phrase. We evaluate our method on two new datasets: a subset of a large corpus where the source texts are published novels, and a new dataset collected from students' essays. The results demonstrate a significant improvement over tree-based models, confirming the importance of emphasizing the subject and verb phrase. The performance gap between the two datasets illustrates the challenges of analyzing student's written text, plus a potential evaluation task for coherence modeling and an application for suggesting revisions to students.",,"Learning Clause Representation from Dependency-Anchor Graph for Connective Prediction. Semantic representation that supports the choice of an appropriate connective between pairs of clauses inherently addresses discourse coherence, which is important for tasks such as narrative understanding, argumentation, and discourse parsing. We propose a novel clause embedding method that applies graph learning to a data structure we refer to as a dependency-anchor graph. The dependency anchor graph incorporates two kinds of syntactic information, constituency structure and dependency relations, to highlight the subject and verb phrase relation. This enhances coherence-related aspects of representation. We design a neural model to learn a semantic representation for clauses from graph convolution over latent representations of the subject and verb phrase. We evaluate our method on two new datasets: a subset of a large corpus where the source texts are published novels, and a new dataset collected from students' essays. The results demonstrate a significant improvement over tree-based models, confirming the importance of emphasizing the subject and verb phrase. The performance gap between the two datasets illustrates the challenges of analyzing student's written text, plus a potential evaluation task for coherence modeling and an application for suggesting revisions to students.",2021
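The clause-representation row above combines an anchor-centered graph with graph convolution. The sketch below is a heavily simplified stand-in: it wires every token to assumed subject and verb anchors and applies one generic graph-convolution step over random toy embeddings. It is not the authors' dependency-anchor graph or model.

```python
"""Minimal sketch of one graph-convolution step over a tiny 'anchor' graph
(illustrative only): edges connect every token to the subject and verb anchors,
and one GCN layer mixes neighbour features into each token representation."""
import numpy as np

tokens = ["the", "storm", "flooded", "the", "streets"]
subject_idx, verb_idx = 1, 2          # assumed anchors: "storm", "flooded"

n, d = len(tokens), 8
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))           # toy token embeddings

# adjacency: self-loops plus edges from every token to the two anchors
A = np.eye(n)
for i in range(n):
    A[i, subject_idx] = A[subject_idx, i] = 1.0
    A[i, verb_idx] = A[verb_idx, i] = 1.0

# symmetric normalization D^{-1/2} A D^{-1/2}, then one GCN layer: relu(A_hat X W)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt
W = rng.normal(size=(d, d))
H = np.maximum(A_hat @ X @ W, 0.0)

clause_vec = H.mean(axis=0)           # simple clause representation: mean pooling
print(clause_vec.shape)               # (8,)
```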
nan-etal-2020-reasoning,https://aclanthology.org/2020.acl-main.141,0,,,,,,,"Reasoning with Latent Structure Refinement for Document-Level Relation Extraction. Document-level relation extraction requires integrating information within and across multiple sentences of a document and capturing complex interactions between inter-sentence entities. However, effective aggregation of relevant information in the document remains a challenging research question. Existing approaches construct static document-level graphs based on syntactic trees, co-references or heuristics from the unstructured text to model the dependencies. Unlike previous methods that may not be able to capture rich non-local interactions for inference, we propose a novel model that empowers the relational reasoning across sentences by automatically inducing the latent document-level graph. We further develop a refinement strategy, which enables the model to incrementally aggregate relevant information for multi-hop reasoning. Specifically, our model achieves an F1 score of 59.05 on a large-scale document-level dataset (DocRED), significantly improving over the previous results, and also yields new state-of-the-art results on the CDR and GDA dataset. Furthermore, extensive analyses show that the model is able to discover more accurate inter-sentence relations. * * Equally Contributed. † † Work done during internship at SUTD.",Reasoning with Latent Structure Refinement for Document-Level Relation Extraction,"Document-level relation extraction requires integrating information within and across multiple sentences of a document and capturing complex interactions between inter-sentence entities. However, effective aggregation of relevant information in the document remains a challenging research question. Existing approaches construct static document-level graphs based on syntactic trees, co-references or heuristics from the unstructured text to model the dependencies. Unlike previous methods that may not be able to capture rich non-local interactions for inference, we propose a novel model that empowers the relational reasoning across sentences by automatically inducing the latent document-level graph. We further develop a refinement strategy, which enables the model to incrementally aggregate relevant information for multi-hop reasoning. Specifically, our model achieves an F1 score of 59.05 on a large-scale document-level dataset (DocRED), significantly improving over the previous results, and also yields new state-of-the-art results on the CDR and GDA dataset. Furthermore, extensive analyses show that the model is able to discover more accurate inter-sentence relations. * * Equally Contributed. † † Work done during internship at SUTD.",Reasoning with Latent Structure Refinement for Document-Level Relation Extraction,"Document-level relation extraction requires integrating information within and across multiple sentences of a document and capturing complex interactions between inter-sentence entities. However, effective aggregation of relevant information in the document remains a challenging research question. Existing approaches construct static document-level graphs based on syntactic trees, co-references or heuristics from the unstructured text to model the dependencies. Unlike previous methods that may not be able to capture rich non-local interactions for inference, we propose a novel model that empowers the relational reasoning across sentences by automatically inducing the latent document-level graph. 
We further develop a refinement strategy, which enables the model to incrementally aggregate relevant information for multi-hop reasoning. Specifically, our model achieves an F1 score of 59.05 on a large-scale document-level dataset (DocRED), significantly improving over the previous results, and also yields new state-of-the-art results on the CDR and GDA dataset. Furthermore, extensive analyses show that the model is able to discover more accurate inter-sentence relations. * * Equally Contributed. † † Work done during internship at SUTD.","We would like to thank the anonymous reviewers for their thoughtful and constructive comments. This research is supported by Ministry of Education, Singapore, under its Academic Research Fund (AcRF) Tier 2 Programme (MOE AcRF Tier 2 Award No: MOE2017-T2-1-156). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not reflect the views of the Ministry of Education, Singapore.","Reasoning with Latent Structure Refinement for Document-Level Relation Extraction. Document-level relation extraction requires integrating information within and across multiple sentences of a document and capturing complex interactions between inter-sentence entities. However, effective aggregation of relevant information in the document remains a challenging research question. Existing approaches construct static document-level graphs based on syntactic trees, co-references or heuristics from the unstructured text to model the dependencies. Unlike previous methods that may not be able to capture rich non-local interactions for inference, we propose a novel model that empowers the relational reasoning across sentences by automatically inducing the latent document-level graph. We further develop a refinement strategy, which enables the model to incrementally aggregate relevant information for multi-hop reasoning. Specifically, our model achieves an F1 score of 59.05 on a large-scale document-level dataset (DocRED), significantly improving over the previous results, and also yields new state-of-the-art results on the CDR and GDA dataset. Furthermore, extensive analyses show that the model is able to discover more accurate inter-sentence relations. * * Equally Contributed. † † Work done during internship at SUTD.",2020
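The row above induces a latent document-level graph and refines it over iterations. The sketch below shows that induce-propagate-refine loop in a simplified dot-product form; the softmax adjacency, mixing weight, and toy mention vectors are assumptions, and the paper's structured-attention graph induction is not implemented here.

```python
"""Minimal sketch of an 'induce a latent graph, propagate, refine' loop
(a simplified dot-product version for illustration; the paper induces its graph
with structured attention rather than a plain softmax): each iteration builds a
soft adjacency from the current node representations, propagates information
over it, and the next iteration re-induces the graph from the updated state."""
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def refine(node_reps, iterations=3):
    H = node_reps
    for _ in range(iterations):
        scores = H @ H.T                    # pairwise affinities between mentions
        A = softmax(scores, axis=-1)        # soft, fully latent adjacency
        H = 0.5 * H + 0.5 * (A @ H)         # propagate and mix with previous state
    return H, A

rng = np.random.default_rng(0)
mentions = rng.normal(size=(6, 16))         # toy mention/entity representations
H, A = refine(mentions)
print(A.round(2))                           # latent graph after the last refinement
```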
martin-etal-2020-leveraging,https://aclanthology.org/2020.cllrd-1.5,0,,,,,,,"Leveraging Non-Specialists for Accurate and Time Efficient AMR Annotation. Meaning Representations (AMRs), a syntax-free representation of phrase semantics (Banarescu et al., 2013), are useful for capturing the meaning of a phrase and reflecting the relationship between concepts that are referred to. However, annotating AMRs is time consuming and expensive. The existing annotation process requires expertly trained workers who have knowledge of an extensive set of guidelines for parsing phrases. In this paper, we propose a cost-saving two-step process for the creation of a corpus of AMR-phrase pairs for spatial referring expressions. The first step uses non-specialists to perform simple annotations that can be leveraged in the second step to accelerate the annotation performed by the experts. We hypothesize that our process will decrease the cost per annotation and improve consistency across annotators. Few corpora of spatial referring expressions exist and the resulting language resource will be valuable for referring expression comprehension and generation modeling.",Leveraging Non-Specialists for Accurate and Time Efficient {AMR} Annotation,"Meaning Representations (AMRs), a syntax-free representation of phrase semantics (Banarescu et al., 2013), are useful for capturing the meaning of a phrase and reflecting the relationship between concepts that are referred to. However, annotating AMRs is time consuming and expensive. The existing annotation process requires expertly trained workers who have knowledge of an extensive set of guidelines for parsing phrases. In this paper, we propose a cost-saving two-step process for the creation of a corpus of AMR-phrase pairs for spatial referring expressions. The first step uses non-specialists to perform simple annotations that can be leveraged in the second step to accelerate the annotation performed by the experts. We hypothesize that our process will decrease the cost per annotation and improve consistency across annotators. Few corpora of spatial referring expressions exist and the resulting language resource will be valuable for referring expression comprehension and generation modeling.",Leveraging Non-Specialists for Accurate and Time Efficient AMR Annotation,"Meaning Representations (AMRs), a syntax-free representation of phrase semantics (Banarescu et al., 2013), are useful for capturing the meaning of a phrase and reflecting the relationship between concepts that are referred to. However, annotating AMRs is time consuming and expensive. The existing annotation process requires expertly trained workers who have knowledge of an extensive set of guidelines for parsing phrases. In this paper, we propose a cost-saving two-step process for the creation of a corpus of AMR-phrase pairs for spatial referring expressions. The first step uses non-specialists to perform simple annotations that can be leveraged in the second step to accelerate the annotation performed by the experts. We hypothesize that our process will decrease the cost per annotation and improve consistency across annotators. Few corpora of spatial referring expressions exist and the resulting language resource will be valuable for referring expression comprehension and generation modeling.",This work is partially supported by the National Science Foundation award number 1849357.,"Leveraging Non-Specialists for Accurate and Time Efficient AMR Annotation. 
Meaning Representations (AMRs), a syntax-free representation of phrase semantics (Banarescu et al., 2013), are useful for capturing the meaning of a phrase and reflecting the relationship between concepts that are referred to. However, annotating AMRs is time consuming and expensive. The existing annotation process requires expertly trained workers who have knowledge of an extensive set of guidelines for parsing phrases. In this paper, we propose a cost-saving two-step process for the creation of a corpus of AMR-phrase pairs for spatial referring expressions. The first step uses non-specialists to perform simple annotations that can be leveraged in the second step to accelerate the annotation performed by the experts. We hypothesize that our process will decrease the cost per annotation and improve consistency across annotators. Few corpora of spatial referring expressions exist and the resulting language resource will be valuable for referring expression comprehension and generation modeling.",2020
elder-etal-2020-make,https://aclanthology.org/2020.emnlp-main.230,0,,,,,,,"How to Make Neural Natural Language Generation as Reliable as Templates in Task-Oriented Dialogue. Neural Natural Language Generation (NLG) systems are well known for their unreliability. To overcome this issue, we propose a data augmentation approach which allows us to restrict the output of a network and guarantee reliability. While this restriction means generation will be less diverse than if randomly sampled, we include experiments that demonstrate the tendency of existing neural generation approaches to produce dull and repetitive text, and we argue that reliability is more important than diversity for this task. The system trained using this approach scored 100% in semantic accuracy on the E2E NLG Challenge dataset, the same as a template system.",How to Make Neural Natural Language Generation as Reliable as Templates in Task-Oriented Dialogue,"Neural Natural Language Generation (NLG) systems are well known for their unreliability. To overcome this issue, we propose a data augmentation approach which allows us to restrict the output of a network and guarantee reliability. While this restriction means generation will be less diverse than if randomly sampled, we include experiments that demonstrate the tendency of existing neural generation approaches to produce dull and repetitive text, and we argue that reliability is more important than diversity for this task. The system trained using this approach scored 100% in semantic accuracy on the E2E NLG Challenge dataset, the same as a template system.",How to Make Neural Natural Language Generation as Reliable as Templates in Task-Oriented Dialogue,"Neural Natural Language Generation (NLG) systems are well known for their unreliability. To overcome this issue, we propose a data augmentation approach which allows us to restrict the output of a network and guarantee reliability. While this restriction means generation will be less diverse than if randomly sampled, we include experiments that demonstrate the tendency of existing neural generation approaches to produce dull and repetitive text, and we argue that reliability is more important than diversity for this task. The system trained using this approach scored 100% in semantic accuracy on the E2E NLG Challenge dataset, the same as a template system.",We thank the anonymous reviewers for their helpful comments. This research is supported by Science Foundation Ireland in the ADAPT Centre for Digital Content Technology. The ADAPT Centre for Digital Content Technology is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund.,"How to Make Neural Natural Language Generation as Reliable as Templates in Task-Oriented Dialogue. Neural Natural Language Generation (NLG) systems are well known for their unreliability. To overcome this issue, we propose a data augmentation approach which allows us to restrict the output of a network and guarantee reliability. While this restriction means generation will be less diverse than if randomly sampled, we include experiments that demonstrate the tendency of existing neural generation approaches to produce dull and repetitive text, and we argue that reliability is more important than diversity for this task. The system trained using this approach scored 100% in semantic accuracy on the E2E NLG Challenge dataset, the same as a template system.",2020
nagesh-2015-exploring,https://aclanthology.org/N15-2006,0,,,,,,,"Exploring Relational Features and Learning under Distant Supervision for Information Extraction Tasks. Information Extraction (IE) has become an indispensable tool in our quest to handle the data deluge of the information age. IE can broadly be classified into Named-entity Recognition (NER) and Relation Extraction (RE). In this thesis, we view the task of IE as finding patterns in unstructured data, which can either take the form of features and/or be specified by constraints. In NER, we study the categorization of complex relational features and outline methods to learn feature combinations through induction. We demonstrate the efficacy of induction techniques in learning: i) rules for the identification of named entities in text-the novelty is the application of induction techniques to learn in a very expressive declarative rule language ii) a richer sequence labeling model-enabling optimal learning of discriminative features. In RE, our investigations are in the paradigm of distant supervision, which facilitates the creation of large albeit noisy training data. We devise an inference framework in which constraints can be easily specified in learning relation extractors. In addition, we reformulate the learning objective in a max-margin framework. To the best of our knowledge, our formulation is the first to optimize multi-variate non-linear performance measures such as Fβ for a latent variable structure prediction task.",Exploring Relational Features and Learning under Distant Supervision for Information Extraction Tasks,"Information Extraction (IE) has become an indispensable tool in our quest to handle the data deluge of the information age. IE can broadly be classified into Named-entity Recognition (NER) and Relation Extraction (RE). In this thesis, we view the task of IE as finding patterns in unstructured data, which can either take the form of features and/or be specified by constraints. In NER, we study the categorization of complex relational features and outline methods to learn feature combinations through induction. We demonstrate the efficacy of induction techniques in learning: i) rules for the identification of named entities in text-the novelty is the application of induction techniques to learn in a very expressive declarative rule language ii) a richer sequence labeling model-enabling optimal learning of discriminative features. In RE, our investigations are in the paradigm of distant supervision, which facilitates the creation of large albeit noisy training data. We devise an inference framework in which constraints can be easily specified in learning relation extractors. In addition, we reformulate the learning objective in a max-margin framework. To the best of our knowledge, our formulation is the first to optimize multi-variate non-linear performance measures such as Fβ for a latent variable structure prediction task.",Exploring Relational Features and Learning under Distant Supervision for Information Extraction Tasks,"Information Extraction (IE) has become an indispensable tool in our quest to handle the data deluge of the information age. IE can broadly be classified into Named-entity Recognition (NER) and Relation Extraction (RE). In this thesis, we view the task of IE as finding patterns in unstructured data, which can either take the form of features and/or be specified by constraints. 
In NER, we study the categorization of complex relational features and outline methods to learn feature combinations through induction. We demonstrate the efficacy of induction techniques in learning: i) rules for the identification of named entities in text-the novelty is the application of induction techniques to learn in a very expressive declarative rule language ii) a richer sequence labeling model-enabling optimal learning of discriminative features. In RE, our investigations are in the paradigm of distant supervision, which facilitates the creation of large albeit noisy training data. We devise an inference framework in which constraints can be easily specified in learning relation extractors. In addition, we reformulate the learning objective in a max-margin framework. To the best of our knowledge, our formulation is the first to optimize multi-variate non-linear performance measures such as Fβ for a latent variable structure prediction task.",,"Exploring Relational Features and Learning under Distant Supervision for Information Extraction Tasks. Information Extraction (IE) has become an indispensable tool in our quest to handle the data deluge of the information age. IE can broadly be classified into Named-entity Recognition (NER) and Relation Extraction (RE). In this thesis, we view the task of IE as finding patterns in unstructured data, which can either take the form of features and/or be specified by constraints. In NER, we study the categorization of complex relational features and outline methods to learn feature combinations through induction. We demonstrate the efficacy of induction techniques in learning: i) rules for the identification of named entities in text-the novelty is the application of induction techniques to learn in a very expressive declarative rule language ii) a richer sequence labeling model-enabling optimal learning of discriminative features. In RE, our investigations are in the paradigm of distant supervision, which facilitates the creation of large albeit noisy training data. We devise an inference framework in which constraints can be easily specified in learning relation extractors. In addition, we reformulate the learning objective in a max-margin framework. To the best of our knowledge, our formulation is the first to optimize multi-variate non-linear performance measures such as Fβ for a latent variable structure prediction task.",2015
iosif-etal-2012-associative,http://www.lrec-conf.org/proceedings/lrec2012/pdf/536_Paper.pdf,0,,,,,,,"Associative and Semantic Features Extracted From Web-Harvested Corpora. We address the problem of automatic classification of associative and semantic relations between words, and particularly those that hold between nouns. Lexical relations such as synonymy, hypernymy/hyponymy, constitute the fundamental types of semantic relations. Associative relations are harder to define, since they include a long list of diverse relations, e.g., ""Cause-Effect"", ""Instrument-Agency"". Motivated by findings from the literature of psycholinguistics and corpus linguistics, we propose features that take advantage of general linguistic properties. For evaluation we merged three datasets assembled and validated by cognitive scientists. A proposed priming coefficient that measures the degree of asymmetry in the order of appearance of the words in text achieves the best classification results, followed by context-based similarity metrics. The web-based features achieve classification accuracy that exceeds 85%.",Associative and Semantic Features Extracted From Web-Harvested Corpora,"We address the problem of automatic classification of associative and semantic relations between words, and particularly those that hold between nouns. Lexical relations such as synonymy, hypernymy/hyponymy, constitute the fundamental types of semantic relations. Associative relations are harder to define, since they include a long list of diverse relations, e.g., ""Cause-Effect"", ""Instrument-Agency"". Motivated by findings from the literature of psycholinguistics and corpus linguistics, we propose features that take advantage of general linguistic properties. For evaluation we merged three datasets assembled and validated by cognitive scientists. A proposed priming coefficient that measures the degree of asymmetry in the order of appearance of the words in text achieves the best classification results, followed by context-based similarity metrics. The web-based features achieve classification accuracy that exceeds 85%.",Associative and Semantic Features Extracted From Web-Harvested Corpora,"We address the problem of automatic classification of associative and semantic relations between words, and particularly those that hold between nouns. Lexical relations such as synonymy, hypernymy/hyponymy, constitute the fundamental types of semantic relations. Associative relations are harder to define, since they include a long list of diverse relations, e.g., ""Cause-Effect"", ""Instrument-Agency"". Motivated by findings from the literature of psycholinguistics and corpus linguistics, we propose features that take advantage of general linguistic properties. For evaluation we merged three datasets assembled and validated by cognitive scientists. A proposed priming coefficient that measures the degree of asymmetry in the order of appearance of the words in text achieves the best classification results, followed by context-based similarity metrics. The web-based features achieve classification accuracy that exceeds 85%.",,"Associative and Semantic Features Extracted From Web-Harvested Corpora. We address the problem of automatic classification of associative and semantic relations between words, and particularly those that hold between nouns. Lexical relations such as synonymy, hypernymy/hyponymy, constitute the fundamental types of semantic relations. 
Associative relations are harder to define, since they include a long list of diverse relations, e.g., ""Cause-Effect"", ""Instrument-Agency"". Motivated by findings from the literature of psycholinguistics and corpus linguistics, we propose features that take advantage of general linguistic properties. For evaluation we merged three datasets assembled and validated by cognitive scientists. A proposed priming coefficient that measures the degree of asymmetry in the order of appearance of the words in text achieves the best classification results, followed by context-based similarity metrics. The web-based features achieve classification accuracy that exceeds 85%.",2012
sayeed-etal-2012-grammatical,https://aclanthology.org/N12-1085,0,,,,,,,"Grammatical structures for word-level sentiment detection. Existing work in fine-grained sentiment analysis focuses on sentences and phrases but ignores the contribution of individual words and their grammatical connections. This is because of a lack of both (1) annotated data at the word level and (2) algorithms that can leverage syntactic information in a principled way. We address the first need by annotating articles from the information technology business press via crowdsourcing to provide training and testing data. To address the second need, we propose a suffix-tree data structure to represent syntactic relationships between opinion targets and words in a sentence that are opinion-bearing. We show that a factor graph derived from this data structure acquires these relationships with a small number of word-level features. We demonstrate that our supervised model performs better than baselines that ignore syntactic features and constraints.",Grammatical structures for word-level sentiment detection,"Existing work in fine-grained sentiment analysis focuses on sentences and phrases but ignores the contribution of individual words and their grammatical connections. This is because of a lack of both (1) annotated data at the word level and (2) algorithms that can leverage syntactic information in a principled way. We address the first need by annotating articles from the information technology business press via crowdsourcing to provide training and testing data. To address the second need, we propose a suffix-tree data structure to represent syntactic relationships between opinion targets and words in a sentence that are opinion-bearing. We show that a factor graph derived from this data structure acquires these relationships with a small number of word-level features. We demonstrate that our supervised model performs better than baselines that ignore syntactic features and constraints.",Grammatical structures for word-level sentiment detection,"Existing work in fine-grained sentiment analysis focuses on sentences and phrases but ignores the contribution of individual words and their grammatical connections. This is because of a lack of both (1) annotated data at the word level and (2) algorithms that can leverage syntactic information in a principled way. We address the first need by annotating articles from the information technology business press via crowdsourcing to provide training and testing data. To address the second need, we propose a suffix-tree data structure to represent syntactic relationships between opinion targets and words in a sentence that are opinion-bearing. We show that a factor graph derived from this data structure acquires these relationships with a small number of word-level features. We demonstrate that our supervised model performs better than baselines that ignore syntactic features and constraints.","This paper is based upon work supported by the US National Science Foundation under Grant IIS-0729459. Additional support came from the Cluster of Excellence ""Multimodal Computing and Innovation"", Germany. Jordan Boyd-Graber is also supported by US National Science Foundation Grant NSF grant #1018625 and the Army Research Laboratory through ARL Cooperative Agreement W911NF-09-2-0072. Any opinions, findings, conclusions, or recommendations expressed are the authors' and do not necessarily reflect those of the sponsors.","Grammatical structures for word-level sentiment detection. 
Existing work in fine-grained sentiment analysis focuses on sentences and phrases but ignores the contribution of individual words and their grammatical connections. This is because of a lack of both (1) annotated data at the word level and (2) algorithms that can leverage syntactic information in a principled way. We address the first need by annotating articles from the information technology business press via crowdsourcing to provide training and testing data. To address the second need, we propose a suffix-tree data structure to represent syntactic relationships between opinion targets and words in a sentence that are opinion-bearing. We show that a factor graph derived from this data structure acquires these relationships with a small number of word-level features. We demonstrate that our supervised model performs better than baselines that ignore syntactic features and constraints.",2012
feng-2003-cooperative,https://aclanthology.org/N03-3010,0,,,,,,,"Cooperative model-based language understanding. In this paper, we propose a novel Cooperative Model for natural language understanding in a dialogue system. We build this based on both Finite State Model (FSM) and Statistical Learning Model (SLM). FSM provides two strategies for language understanding and have a high accuracy but little robustness and flexibility. Statistical approach is much more robust but less accurate. Cooperative Model incorporates all the three strategies together and thus can suppress all the shortcomings of different strategies and has all the advantages of the three strategies.",Cooperative model-based language understanding,"In this paper, we propose a novel Cooperative Model for natural language understanding in a dialogue system. We build this based on both Finite State Model (FSM) and Statistical Learning Model (SLM). FSM provides two strategies for language understanding and have a high accuracy but little robustness and flexibility. Statistical approach is much more robust but less accurate. Cooperative Model incorporates all the three strategies together and thus can suppress all the shortcomings of different strategies and has all the advantages of the three strategies.",Cooperative model-based language understanding,"In this paper, we propose a novel Cooperative Model for natural language understanding in a dialogue system. We build this based on both Finite State Model (FSM) and Statistical Learning Model (SLM). FSM provides two strategies for language understanding and have a high accuracy but little robustness and flexibility. Statistical approach is much more robust but less accurate. Cooperative Model incorporates all the three strategies together and thus can suppress all the shortcomings of different strategies and has all the advantages of the three strategies.",The author would like to thank Deepak Ravichandran for his invaluable help of the whole work.,"Cooperative model-based language understanding. In this paper, we propose a novel Cooperative Model for natural language understanding in a dialogue system. We build this based on both Finite State Model (FSM) and Statistical Learning Model (SLM). FSM provides two strategies for language understanding and have a high accuracy but little robustness and flexibility. Statistical approach is much more robust but less accurate. Cooperative Model incorporates all the three strategies together and thus can suppress all the shortcomings of different strategies and has all the advantages of the three strategies.",2003
romanov-etal-2019-adversarial,https://aclanthology.org/N19-1088,0,,,,,,,"Adversarial Decomposition of Text Representation. In this paper, we present a method for adversarial decomposition of text representation. This method can be used to decompose a representation of an input sentence into several independent vectors, each of them responsible for a specific aspect of the input sentence. We evaluate the proposed method on two case studies: the conversion between different social registers and diachronic language change. We show that the proposed method is capable of fine-grained controlled change of these aspects of the input sentence. It is also learning a continuous (rather than categorical) representation of the style of the sentence, which is more linguistically realistic. The model uses adversarial-motivational training and includes a special motivational loss, which acts opposite to the discriminator and encourages a better decomposition. Furthermore, we evaluate the obtained meaning embeddings on a downstream task of paraphrase detection and show that they significantly outperform the embeddings of a regular autoencoder.",Adversarial Decomposition of Text Representation,"In this paper, we present a method for adversarial decomposition of text representation. This method can be used to decompose a representation of an input sentence into several independent vectors, each of them responsible for a specific aspect of the input sentence. We evaluate the proposed method on two case studies: the conversion between different social registers and diachronic language change. We show that the proposed method is capable of fine-grained controlled change of these aspects of the input sentence. It is also learning a continuous (rather than categorical) representation of the style of the sentence, which is more linguistically realistic. The model uses adversarial-motivational training and includes a special motivational loss, which acts opposite to the discriminator and encourages a better decomposition. Furthermore, we evaluate the obtained meaning embeddings on a downstream task of paraphrase detection and show that they significantly outperform the embeddings of a regular autoencoder.",Adversarial Decomposition of Text Representation,"In this paper, we present a method for adversarial decomposition of text representation. This method can be used to decompose a representation of an input sentence into several independent vectors, each of them responsible for a specific aspect of the input sentence. We evaluate the proposed method on two case studies: the conversion between different social registers and diachronic language change. We show that the proposed method is capable of fine-grained controlled change of these aspects of the input sentence. It is also learning a continuous (rather than categorical) representation of the style of the sentence, which is more linguistically realistic. The model uses adversarial-motivational training and includes a special motivational loss, which acts opposite to the discriminator and encourages a better decomposition. Furthermore, we evaluate the obtained meaning embeddings on a downstream task of paraphrase detection and show that they significantly outperform the embeddings of a regular autoencoder.",,"Adversarial Decomposition of Text Representation. In this paper, we present a method for adversarial decomposition of text representation. 
This method can be used to decompose a representation of an input sentence into several independent vectors, each of them responsible for a specific aspect of the input sentence. We evaluate the proposed method on two case studies: the conversion between different social registers and diachronic language change. We show that the proposed method is capable of fine-grained controlled change of these aspects of the input sentence. It also learns a continuous (rather than categorical) representation of the style of the sentence, which is more linguistically realistic. The model uses adversarial-motivational training and includes a special motivational loss, which acts opposite to the discriminator and encourages a better decomposition. Furthermore, we evaluate the obtained meaning embeddings on a downstream task of paraphrase detection and show that they significantly outperform the embeddings of a regular autoencoder.",2019
reeves-1982-terminology,https://aclanthology.org/1982.tc-1.12,0,,,,,,,"Terminology for translators. Should a technical translator be a subject specialist with additional linguistic skills? Or a trained linguist with some specialist knowledge? It is an old debate and plainly in practice successful translators can derive from both categories. Indeed in the past entry into the profession was often largely determined by personal circumstances -an engineer who had acquired linguistic knowledge through overseas postings might turn later in his career, or as a side-line, to translating engineering texts. A language graduate, finding him or usually herself, confined to earning a living from the home, acquired knowledge of a technical area in a self-teaching process. Today, however, the enormous growth of scientific discovery and technological innovation together with the internationalisation of trade make, as we all know, the systematic training of translators a necessity. Decisions therefore have to be taken about the most efficacious methods to be adopted in the training process and the old question of linguist versus specialist recurs with fresh urgency.
Or at least it would appear to. But there is a further complicating factor: the technical translator is principally concerned with the language in which the message is expressed, whereas the sender was principally concerned with the topic of the message. The sender used the special language of his area to describe and analyse the extra-linguistic reality that was his primary interest. But the translator's primary interest is the special language itself: in short a subject's terminology assumes first importance for the translator. And the moment we speak of terminology in this context as 'an aggregate of terms representing the system of concepts in an individual subject field'(1), we are reminded that the translator also needs to understand the principles according to which a particular terminology is established, the relationship between various monolingual terminologies and between the specialist terminologies of one language and those of another language(2). Thus the technical translator has to be an expert in three discrete disciplines: translation itself, a technical specialism and the theory and practice of terminology.",Terminology for translators,"Should a technical translator be a subject specialist with additional linguistic skills? Or a trained linguist with some specialist knowledge? It is an old debate and plainly in practice successful translators can derive from both categories. Indeed in the past entry into the profession was often largely determined by personal circumstances -an engineer who had acquired linguistic knowledge through overseas postings might turn later in his career, or as a side-line, to translating engineering texts. A language graduate, finding him or usually herself, confined to earning a living from the home, acquired knowledge of a technical area in a self-teaching process. Today, however, the enormous growth of scientific discovery and technological innovation together with the internationalisation of trade make, as we all know, the systematic training of translators a necessity. Decisions therefore have to be taken about the most efficacious methods to be adopted in the training process and the old question of linguist versus specialist recurs with fresh urgency.
Or at least it would appear to. But there is a further complicating factor: the technical translator is principally concerned with the language in which the message is expressed, whereas the sender was principally concerned with the topic of the message. The sender used the special language of his area to describe and analyse the extra-linguistic reality that was his primary interest. But the translator's primary interest is the special language itself: in short a subject's terminology assumes first importance for the translator. And the moment we speak of terminology in this context as 'an aggregate of terms representing the system of concepts in an individual subject field'(1), we are reminded that the translator also needs to understand the principles according to which a particular terminology is established, the relationship between various monolingual terminologies and between the specialist terminologies of one language and those of another language(2). Thus the technical translator has to be an expert in three discrete disciplines: translation itself, a technical specialism and the theory and practice of terminology.",Terminology for translators,"Should a technical translator be a subject specialist with additional linguistic skills? Or a trained linguist with some specialist knowledge? It is an old debate and plainly in practice successful translators can derive from both categories. Indeed in the past entry into the profession was often largely determined by personal circumstances -an engineer who had acquired linguistic knowledge through overseas postings might turn later in his career, or as a side-line, to translating engineering texts. A language graduate, finding him or usually herself, confined to earning a living from the home, acquired knowledge of a technical area in a self-teaching process. Today, however, the enormous growth of scientific discovery and technological innovation together with the internationalisation of trade make, as we all know, the systematic training of translators a necessity. Decisions therefore have to be taken about the most efficacious methods to be adopted in the training process and the old question of linguist versus specialist recurs with fresh urgency.
Or at least it would appear to. But there is a further complicating factor: the technical translator is principally concerned with the language in which the message is expressed, whereas the sender was principally concerned with the topic of the message. The sender used the special language of his area to describe and analyse the extra-linguistic reality that was his primary interest. But the translator's primary interest is the special language itself: in short a subject's terminology assumes first importance for the translator. And the moment we speak of terminology in this context as 'an aggregate of terms representing the system of concepts in an individual subject field'(1), we are reminded that the translator also needs to understand the principles according to which a particular terminology is established, the relationship between various monolingual terminologies and between the specialist terminologies of one language and those of another language(2). Thus the technical translator has to be an expert in three discrete disciplines: translation itself, a technical specialism and the theory and practice of terminology.",,"Terminology for translators. Should a technical translator be a subject specialist with additional linguistic skills? Or a trained linguist with some specialist knowledge? It is an old debate and plainly in practice successful translators can derive from both categories. Indeed in the past entry into the profession was often largely determined by personal circumstances -an engineer who had acquired linguistic knowledge through overseas postings might turn later in his career, or as a side-line, to translating engineering texts. A language graduate, finding him or usually herself, confined to earning a living from the home, acquired knowledge of a technical area in a self-teaching process. Today, however, the enormous growth of scientific discovery and technological innovation together with the internationalisation of trade make, as we all know, the systematic training of translators a necessity. Decisions therefore have to be taken about the most efficacious methods to be adopted in the training process and the old question of linguist versus specialist recurs with fresh urgency.
Or at least it would appear to. But there is a further complicating factor: the technical translator is principally concerned with the language in which the message is expressed, whereas the sender was principally concerned with the topic of the message. The sender used the special language of his area to describe and analyse the extra-linguistic reality that was his primary interest. But the translator's primary interest is the special language itself: in short a subject's terminology assumes first importance for the translator. And the moment we speak of terminology in this context as 'an aggregate of terms representing the system of concepts in an individual subject field'(1), we are reminded that the translator also needs to understand the principles according to which a particular terminology is established, the relationship between various monolingual terminologies and between the specialist terminologies of one language and those of another language(2). Thus the technical translator has to be an expert in three discrete disciplines: translation itself, a technical specialism and the theory and practice of terminology.",1982
yusupov-kuratov-2018-nips,https://aclanthology.org/C18-1312,0,,,,,,,"NIPS Conversational Intelligence Challenge 2017 Winner System: Skill-based Conversational Agent with Supervised Dialog Manager. We present bot#1337: a dialog system developed for the 1 st NIPS Conversational Intelligence Challenge 2017 (ConvAI). The aim of the competition was to implement a bot capable of conversing with humans based on a given passage of text. To enable conversation, we implemented a set of skills for our bot, including chitchat , topic detection, text summarization, question answering and question generation. The system has been trained in a supervised setting using a dialogue manager to select an appropriate skill for generating a response. The latter allows a developer to focus on the skill implementation rather than the finite state machine based dialog manager. The proposed system bot#1337 won the competition with an average dialogue quality score of 2.78 out of 5 given by human evaluators. Source code and trained models for the bot#1337 are available on GitHub.",{NIPS} Conversational Intelligence Challenge 2017 Winner System: Skill-based Conversational Agent with Supervised Dialog Manager,"We present bot#1337: a dialog system developed for the 1 st NIPS Conversational Intelligence Challenge 2017 (ConvAI). The aim of the competition was to implement a bot capable of conversing with humans based on a given passage of text. To enable conversation, we implemented a set of skills for our bot, including chitchat , topic detection, text summarization, question answering and question generation. The system has been trained in a supervised setting using a dialogue manager to select an appropriate skill for generating a response. The latter allows a developer to focus on the skill implementation rather than the finite state machine based dialog manager. The proposed system bot#1337 won the competition with an average dialogue quality score of 2.78 out of 5 given by human evaluators. Source code and trained models for the bot#1337 are available on GitHub.",NIPS Conversational Intelligence Challenge 2017 Winner System: Skill-based Conversational Agent with Supervised Dialog Manager,"We present bot#1337: a dialog system developed for the 1 st NIPS Conversational Intelligence Challenge 2017 (ConvAI). The aim of the competition was to implement a bot capable of conversing with humans based on a given passage of text. To enable conversation, we implemented a set of skills for our bot, including chitchat , topic detection, text summarization, question answering and question generation. The system has been trained in a supervised setting using a dialogue manager to select an appropriate skill for generating a response. The latter allows a developer to focus on the skill implementation rather than the finite state machine based dialog manager. The proposed system bot#1337 won the competition with an average dialogue quality score of 2.78 out of 5 given by human evaluators. Source code and trained models for the bot#1337 are available on GitHub.","We thank Mikhail Burtsev, Luiza Sayfullina and Mikhail Pavlov for comments that greatly improved the manuscript. We would also like to thank the Reason8.ai company for providing computational resources and grant for NIPS 2017 ticket. We thank Neural Systems and Deep Learning Lab of MIPT for ideas and support.","NIPS Conversational Intelligence Challenge 2017 Winner System: Skill-based Conversational Agent with Supervised Dialog Manager. 
We present bot#1337: a dialog system developed for the 1st NIPS Conversational Intelligence Challenge 2017 (ConvAI). The aim of the competition was to implement a bot capable of conversing with humans based on a given passage of text. To enable conversation, we implemented a set of skills for our bot, including chitchat, topic detection, text summarization, question answering and question generation. The system has been trained in a supervised setting using a dialogue manager to select an appropriate skill for generating a response. The latter allows a developer to focus on the skill implementation rather than the finite state machine based dialog manager. The proposed system bot#1337 won the competition with an average dialogue quality score of 2.78 out of 5 given by human evaluators. Source code and trained models for the bot#1337 are available on GitHub.",2018
trawinski-2003-licensing,https://aclanthology.org/W03-1813,0,,,,,,,"Licensing Complex Prepositions via Lexical Constraints. In this paper, we will investigate a cross-linguistic phenomenon referred to as complex prepositions (CPs), which is a frequent type of multiword expressions (MWEs) in many languages. Based on empirical data, we will point out the problems of the traditional treatment of CPs as complex lexical categories, and, thus, propose an analysis using the formal paradigm of the HPSG in the tradition of (Pollard and Sag, 1994). Our objective is to provide an approach to CPs which (1) convincingly explains empirical data, (2) is consistent with the underlying formal framework and does not require any extensions or modifications of the existing description apparatus, (3) is computationally tractable.",Licensing Complex Prepositions via Lexical Constraints,"In this paper, we will investigate a cross-linguistic phenomenon referred to as complex prepositions (CPs), which is a frequent type of multiword expressions (MWEs) in many languages. Based on empirical data, we will point out the problems of the traditional treatment of CPs as complex lexical categories, and, thus, propose an analysis using the formal paradigm of the HPSG in the tradition of (Pollard and Sag, 1994). Our objective is to provide an approach to CPs which (1) convincingly explains empirical data, (2) is consistent with the underlying formal framework and does not require any extensions or modifications of the existing description apparatus, (3) is computationally tractable.",Licensing Complex Prepositions via Lexical Constraints,"In this paper, we will investigate a cross-linguistic phenomenon referred to as complex prepositions (CPs), which is a frequent type of multiword expressions (MWEs) in many languages. Based on empirical data, we will point out the problems of the traditional treatment of CPs as complex lexical categories, and, thus, propose an analysis using the formal paradigm of the HPSG in the tradition of (Pollard and Sag, 1994). Our objective is to provide an approach to CPs which (1) convincingly explains empirical data, (2) is consistent with the underlying formal framework and does not require any extensions or modifications of the existing description apparatus, (3) is computationally tractable.","I would like to thank Manfred Sailer, Frank Richter, and the anonymous reviewers of the ACL-2003 Workshop on Multiword Expressions: Analysis, Acquisition and Treatment in Sapporo for their interesting comments on the issue presented in this paper and Carmella Payne for help with English.","Licensing Complex Prepositions via Lexical Constraints. In this paper, we will investigate a cross-linguistic phenomenon referred to as complex prepositions (CPs), which is a frequent type of multiword expressions (MWEs) in many languages. Based on empirical data, we will point out the problems of the traditional treatment of CPs as complex lexical categories, and, thus, propose an analysis using the formal paradigm of the HPSG in the tradition of (Pollard and Sag, 1994). Our objective is to provide an approach to CPs which (1) convincingly explains empirical data, (2) is consistent with the underlying formal framework and does not require any extensions or modifications of the existing description apparatus, (3) is computationally tractable.",2003
yeniterzi-oflazer-2010-syntax,https://aclanthology.org/P10-1047,0,,,,,,,"Syntax-to-Morphology Mapping in Factored Phrase-Based Statistical Machine Translation from English to Turkish. We present a novel scheme to apply factored phrase-based SMT to a language pair with very disparate morphological structures. Our approach relies on syntactic analysis on the source side (English) and then encodes a wide variety of local and non-local syntactic structures as complex structural tags which appear as additional factors in the training data. On the target side (Turkish), we only perform morphological analysis and disambiguation but treat the complete complex morphological tag as a factor, instead of separating morphemes. We incrementally explore capturing various syntactic substructures as complex tags on the English side, and evaluate how our translations improve in BLEU scores. Our maximal set of source and target side transformations, coupled with some additional techniques, provide an 39% relative improvement from a baseline 17.08 to 23.78 BLEU, all averaged over 10 training and test sets. Now that the syntactic analysis on the English side is available, we also experiment with more long distance constituent reordering to bring the English constituent order close to Turkish, but find that these transformations do not provide any additional consistent tangible gains when averaged over the 10 sets.",Syntax-to-Morphology Mapping in Factored Phrase-Based Statistical Machine Translation from {E}nglish to {T}urkish,"We present a novel scheme to apply factored phrase-based SMT to a language pair with very disparate morphological structures. Our approach relies on syntactic analysis on the source side (English) and then encodes a wide variety of local and non-local syntactic structures as complex structural tags which appear as additional factors in the training data. On the target side (Turkish), we only perform morphological analysis and disambiguation but treat the complete complex morphological tag as a factor, instead of separating morphemes. We incrementally explore capturing various syntactic substructures as complex tags on the English side, and evaluate how our translations improve in BLEU scores. Our maximal set of source and target side transformations, coupled with some additional techniques, provide an 39% relative improvement from a baseline 17.08 to 23.78 BLEU, all averaged over 10 training and test sets. Now that the syntactic analysis on the English side is available, we also experiment with more long distance constituent reordering to bring the English constituent order close to Turkish, but find that these transformations do not provide any additional consistent tangible gains when averaged over the 10 sets.",Syntax-to-Morphology Mapping in Factored Phrase-Based Statistical Machine Translation from English to Turkish,"We present a novel scheme to apply factored phrase-based SMT to a language pair with very disparate morphological structures. Our approach relies on syntactic analysis on the source side (English) and then encodes a wide variety of local and non-local syntactic structures as complex structural tags which appear as additional factors in the training data. On the target side (Turkish), we only perform morphological analysis and disambiguation but treat the complete complex morphological tag as a factor, instead of separating morphemes. 
We incrementally explore capturing various syntactic substructures as complex tags on the English side, and evaluate how our translations improve in BLEU scores. Our maximal set of source and target side transformations, coupled with some additional techniques, provide a 39% relative improvement from a baseline of 17.08 to 23.78 BLEU, all averaged over 10 training and test sets. Now that the syntactic analysis on the English side is available, we also experiment with more long distance constituent reordering to bring the English constituent order close to Turkish, but find that these transformations do not provide any additional consistent tangible gains when averaged over the 10 sets.",We thank Joakim Nivre for providing us with the parser. This publication was made possible by the generous support of the Qatar Foundation through Carnegie Mellon University's Seed Research program. The statements made herein are solely the responsibility of the authors.,"Syntax-to-Morphology Mapping in Factored Phrase-Based Statistical Machine Translation from English to Turkish. We present a novel scheme to apply factored phrase-based SMT to a language pair with very disparate morphological structures. Our approach relies on syntactic analysis on the source side (English) and then encodes a wide variety of local and non-local syntactic structures as complex structural tags which appear as additional factors in the training data. On the target side (Turkish), we only perform morphological analysis and disambiguation but treat the complete complex morphological tag as a factor, instead of separating morphemes. We incrementally explore capturing various syntactic substructures as complex tags on the English side, and evaluate how our translations improve in BLEU scores. Our maximal set of source and target side transformations, coupled with some additional techniques, provide a 39% relative improvement from a baseline of 17.08 to 23.78 BLEU, all averaged over 10 training and test sets. Now that the syntactic analysis on the English side is available, we also experiment with more long distance constituent reordering to bring the English constituent order close to Turkish, but find that these transformations do not provide any additional consistent tangible gains when averaged over the 10 sets.",2010
zhang-etal-2012-whitepaper,https://aclanthology.org/W12-4401,0,,,,,,,"Whitepaper of NEWS 2012 Shared Task on Machine Transliteration. Transliteration is defined as phonetic translation of names across languages. Transliteration of Named Entities (NEs) is necessary in many applications, such as machine translation, corpus alignment, cross-language IR, information extraction and automatic lexicon acquisition. All such systems call for high-performance transliteration, which is the focus of shared task in the NEWS 2012 workshop. The objective of the shared task is to promote machine transliteration research by providing a common benchmarking platform for the community to evaluate the state-of-the-art technologies.",Whitepaper of {NEWS} 2012 Shared Task on Machine Transliteration,"Transliteration is defined as phonetic translation of names across languages. Transliteration of Named Entities (NEs) is necessary in many applications, such as machine translation, corpus alignment, cross-language IR, information extraction and automatic lexicon acquisition. All such systems call for high-performance transliteration, which is the focus of shared task in the NEWS 2012 workshop. The objective of the shared task is to promote machine transliteration research by providing a common benchmarking platform for the community to evaluate the state-of-the-art technologies.",Whitepaper of NEWS 2012 Shared Task on Machine Transliteration,"Transliteration is defined as phonetic translation of names across languages. Transliteration of Named Entities (NEs) is necessary in many applications, such as machine translation, corpus alignment, cross-language IR, information extraction and automatic lexicon acquisition. All such systems call for high-performance transliteration, which is the focus of shared task in the NEWS 2012 workshop. The objective of the shared task is to promote machine transliteration research by providing a common benchmarking platform for the community to evaluate the state-of-the-art technologies.",,"Whitepaper of NEWS 2012 Shared Task on Machine Transliteration. Transliteration is defined as phonetic translation of names across languages. Transliteration of Named Entities (NEs) is necessary in many applications, such as machine translation, corpus alignment, cross-language IR, information extraction and automatic lexicon acquisition. All such systems call for high-performance transliteration, which is the focus of shared task in the NEWS 2012 workshop. The objective of the shared task is to promote machine transliteration research by providing a common benchmarking platform for the community to evaluate the state-of-the-art technologies.",2012
lee-lee-2014-postech,https://aclanthology.org/W14-1709,0,,,,,,,"POSTECH Grammatical Error Correction System in the CoNLL-2014 Shared Task. This paper describes the POSTECH grammatical error correction system. Various methods are proposed to correct errors such as rule-based, probability n-gram vector approaches and router-based approach. Google N-gram count corpus is used mainly as the correction resource. Correction candidates are extracted from NUCLE training data and each candidate is evaluated with development data to extract high precision rules and n-gram frames. Out of 13 participating teams, our system is ranked 4 th on both the original and revised annotation.",{POSTECH} Grammatical Error Correction System in the {C}o{NLL}-2014 Shared Task,"This paper describes the POSTECH grammatical error correction system. Various methods are proposed to correct errors such as rule-based, probability n-gram vector approaches and router-based approach. Google N-gram count corpus is used mainly as the correction resource. Correction candidates are extracted from NUCLE training data and each candidate is evaluated with development data to extract high precision rules and n-gram frames. Out of 13 participating teams, our system is ranked 4 th on both the original and revised annotation.",POSTECH Grammatical Error Correction System in the CoNLL-2014 Shared Task,"This paper describes the POSTECH grammatical error correction system. Various methods are proposed to correct errors such as rule-based, probability n-gram vector approaches and router-based approach. Google N-gram count corpus is used mainly as the correction resource. Correction candidates are extracted from NUCLE training data and each candidate is evaluated with development data to extract high precision rules and n-gram frames. Out of 13 participating teams, our system is ranked 4 th on both the original and revised annotation.","This research was supported by the MSIP(The Ministry of Science, ICT and Future Planning), Korea and Microsoft Research, under IT/SW Creative research program supervised by the NIPA(National IT Industry Promotion Agency) (NIPA-2013-H0503-13-1006) and this research was supported by the Basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Education, Science and Technology(2010-0019523).","POSTECH Grammatical Error Correction System in the CoNLL-2014 Shared Task. This paper describes the POSTECH grammatical error correction system. Various methods are proposed to correct errors such as rule-based, probability n-gram vector approaches and router-based approach. Google N-gram count corpus is used mainly as the correction resource. Correction candidates are extracted from NUCLE training data and each candidate is evaluated with development data to extract high precision rules and n-gram frames. Out of 13 participating teams, our system is ranked 4 th on both the original and revised annotation.",2014
aoki-yamamoto-2007-opinion,https://aclanthology.org/Y07-1007,0,,,,,,,"Opinion Extraction based on Syntactic Pieces. This paper addresses a task of opinion extraction from given documents and its positive/negative classification. We propose a sentence classification method using a notion of syntactic piece. Syntactic piece is a minimum unit of structure, and is used as an alternative processing unit of n-gram and whole tree structure. We compute its semantic orientation, and classify opinion sentences into positive or negative. We have conducted an experiment on more than 5000 opinion sentences of multiple domains, and have proven that our approach attains high performance at 91% precision.",Opinion Extraction based on Syntactic Pieces,"This paper addresses a task of opinion extraction from given documents and its positive/negative classification. We propose a sentence classification method using a notion of syntactic piece. Syntactic piece is a minimum unit of structure, and is used as an alternative processing unit of n-gram and whole tree structure. We compute its semantic orientation, and classify opinion sentences into positive or negative. We have conducted an experiment on more than 5000 opinion sentences of multiple domains, and have proven that our approach attains high performance at 91% precision.",Opinion Extraction based on Syntactic Pieces,"This paper addresses a task of opinion extraction from given documents and its positive/negative classification. We propose a sentence classification method using a notion of syntactic piece. Syntactic piece is a minimum unit of structure, and is used as an alternative processing unit of n-gram and whole tree structure. We compute its semantic orientation, and classify opinion sentences into positive or negative. We have conducted an experiment on more than 5000 opinion sentences of multiple domains, and have proven that our approach attains high performance at 91% precision.",,"Opinion Extraction based on Syntactic Pieces. This paper addresses a task of opinion extraction from given documents and its positive/negative classification. We propose a sentence classification method using a notion of syntactic piece. Syntactic piece is a minimum unit of structure, and is used as an alternative processing unit of n-gram and whole tree structure. We compute its semantic orientation, and classify opinion sentences into positive or negative. We have conducted an experiment on more than 5000 opinion sentences of multiple domains, and have proven that our approach attains high performance at 91% precision.",2007
junczys-dowmunt-grundkiewicz-2016-phrase,https://aclanthology.org/D16-1161,0,,,,,,,"Phrase-based Machine Translation is State-of-the-Art for Automatic Grammatical Error Correction. In this work, we study parameter tuning towards the M 2 metric, the standard metric for automatic grammar error correction (GEC) tasks. After implementing M 2 as a scorer in the Moses tuning framework, we investigate interactions of dense and sparse features, different optimizers, and tuning strategies for the CoNLL-2014 shared task. We notice erratic behavior when optimizing sparse feature weights with M 2 and offer partial solutions. We find that a bare-bones phrase-based SMT setup with task-specific parameter-tuning outperforms all previously published results for the CoNLL-2014 test set by a large margin (46.37% M 2 over previously 41.75%, by an SMT system with neural features) while being trained on the same, publicly available data. Our newly introduced dense and sparse features widen that gap, and we improve the state-of-the-art to 49.49% M 2 .",Phrase-based Machine Translation is State-of-the-Art for Automatic Grammatical Error Correction,"In this work, we study parameter tuning towards the M 2 metric, the standard metric for automatic grammar error correction (GEC) tasks. After implementing M 2 as a scorer in the Moses tuning framework, we investigate interactions of dense and sparse features, different optimizers, and tuning strategies for the CoNLL-2014 shared task. We notice erratic behavior when optimizing sparse feature weights with M 2 and offer partial solutions. We find that a bare-bones phrase-based SMT setup with task-specific parameter-tuning outperforms all previously published results for the CoNLL-2014 test set by a large margin (46.37% M 2 over previously 41.75%, by an SMT system with neural features) while being trained on the same, publicly available data. Our newly introduced dense and sparse features widen that gap, and we improve the state-of-the-art to 49.49% M 2 .",Phrase-based Machine Translation is State-of-the-Art for Automatic Grammatical Error Correction,"In this work, we study parameter tuning towards the M 2 metric, the standard metric for automatic grammar error correction (GEC) tasks. After implementing M 2 as a scorer in the Moses tuning framework, we investigate interactions of dense and sparse features, different optimizers, and tuning strategies for the CoNLL-2014 shared task. We notice erratic behavior when optimizing sparse feature weights with M 2 and offer partial solutions. We find that a bare-bones phrase-based SMT setup with task-specific parameter-tuning outperforms all previously published results for the CoNLL-2014 test set by a large margin (46.37% M 2 over previously 41.75%, by an SMT system with neural features) while being trained on the same, publicly available data. Our newly introduced dense and sparse features widen that gap, and we improve the state-of-the-art to 49.49% M 2 .","The authors would like to thank Colin Cherry for his help with Batch Mira hyper-parameters and Kenneth Heafield for many helpful comments and discussions. This work was partially funded by the Polish National Science Centre (Grant No. 2014/15/N/ST6/02330) and by Facebook. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Facebook.","Phrase-based Machine Translation is State-of-the-Art for Automatic Grammatical Error Correction. 
In this work, we study parameter tuning towards the M2 metric, the standard metric for automatic grammar error correction (GEC) tasks. After implementing M2 as a scorer in the Moses tuning framework, we investigate interactions of dense and sparse features, different optimizers, and tuning strategies for the CoNLL-2014 shared task. We notice erratic behavior when optimizing sparse feature weights with M2 and offer partial solutions. We find that a bare-bones phrase-based SMT setup with task-specific parameter-tuning outperforms all previously published results for the CoNLL-2014 test set by a large margin (46.37% M2 over previously 41.75%, by an SMT system with neural features) while being trained on the same, publicly available data. Our newly introduced dense and sparse features widen that gap, and we improve the state-of-the-art to 49.49% M2.",2016
childs-etal-1998-coreference,https://aclanthology.org/X98-1010,0,,,,,,,"Coreference Resolution Strategies From an Application Perspective. As part of our TIPSTER III research program, we have continued our research into strategies to resolve coreferences within a free text document; this research was begun during our TIPSTER II research program. In the TIPSTER II Proceedings paper, ""An Evaluation of Coreference Resolution Strategies for Acquiring Associated Information,"" the goal was to evaluate the contributions of various techniques for associating an entity with three types of information: 1) name variations, 2) descriptive phrases, and 3) location information.",Coreference Resolution Strategies From an Application Perspective,"As part of our TIPSTER III research program, we have continued our research into strategies to resolve coreferences within a free text document; this research was begun during our TIPSTER II research program. In the TIPSTER II Proceedings paper, ""An Evaluation of Coreference Resolution Strategies for Acquiring Associated Information,"" the goal was to evaluate the contributions of various techniques for associating an entity with three types of information: 1) name variations, 2) descriptive phrases, and 3) location information.",Coreference Resolution Strategies From an Application Perspective,"As part of our TIPSTER III research program, we have continued our research into strategies to resolve coreferences within a free text document; this research was begun during our TIPSTER II research program. In the TIPSTER II Proceedings paper, ""An Evaluation of Coreference Resolution Strategies for Acquiring Associated Information,"" the goal was to evaluate the contributions of various techniques for associating an entity with three types of information: 1) name variations, 2) descriptive phrases, and 3) location information.",,"Coreference Resolution Strategies From an Application Perspective. As part of our TIPSTER III research program, we have continued our research into strategies to resolve coreferences within a free text document; this research was begun during our TIPSTER II research program. In the TIPSTER II Proceedings paper, ""An Evaluation of Coreference Resolution Strategies for Acquiring Associated Information,"" the goal was to evaluate the contributions of various techniques for associating an entity with three types of information: 1) name variations, 2) descriptive phrases, and 3) location information.",1998
costa-jussa-etal-2020-multilingual,https://aclanthology.org/2020.cl-2.1,0,,,,,,,"Multilingual and Interlingual Semantic Representations for Natural Language Processing: A Brief Introduction. We introduce the Computational Linguistics special issue on Multilingual and Interlingual Semantic Representations for Natural Language Processing. We situate the special issue's five articles in the context of our fast-changing field, explaining our motivation for this project. We offer a brief summary of the work in the issue, which includes developments on lexical and sentential semantic representations, from symbolic and neural perspectives. 1. Motivation This special issue arose from our observation of two trends in the fields of computational linguistics and natural language processing. The first trend is a matter of increasing demand for language technologies that serve diverse populations, particularly those whose languages have received little attention in the research community.",Multilingual and Interlingual Semantic Representations for Natural Language Processing: A Brief Introduction,"We introduce the Computational Linguistics special issue on Multilingual and Interlingual Semantic Representations for Natural Language Processing. We situate the special issue's five articles in the context of our fast-changing field, explaining our motivation for this project. We offer a brief summary of the work in the issue, which includes developments on lexical and sentential semantic representations, from symbolic and neural perspectives. 1. Motivation This special issue arose from our observation of two trends in the fields of computational linguistics and natural language processing. The first trend is a matter of increasing demand for language technologies that serve diverse populations, particularly those whose languages have received little attention in the research community.",Multilingual and Interlingual Semantic Representations for Natural Language Processing: A Brief Introduction,"We introduce the Computational Linguistics special issue on Multilingual and Interlingual Semantic Representations for Natural Language Processing. We situate the special issue's five articles in the context of our fast-changing field, explaining our motivation for this project. We offer a brief summary of the work in the issue, which includes developments on lexical and sentential semantic representations, from symbolic and neural perspectives. 1. Motivation This special issue arose from our observation of two trends in the fields of computational linguistics and natural language processing. The first trend is a matter of increasing demand for language technologies that serve diverse populations, particularly those whose languages have received little attention in the research community.","We thank Kyle Lo for assistance with the S2ORC data. MRC is supported in part by a Google Faculty Research Award 2018, Spanish Ministerio de Economía y Competitividad, the European Regional Development Fund and the Agencia Estatal de Investigación, through the postdoctoral senior grant Ramón y Cajal, the contract TEC2015-69266-P (MINECO/FEDER,EU) and the contract PCIN-2017-079 (AEI/MINECO). N. A. S. is supported by National Science Foundation grant IIS-1562364. C. E. B. is funded by the German Federal Ministry of Education and Research under the funding code 01IW17001 (Deeplee). 
Responsibility for the content of this publication is with the authors.","Multilingual and Interlingual Semantic Representations for Natural Language Processing: A Brief Introduction. We introduce the Computational Linguistics special issue on Multilingual and Interlingual Semantic Representations for Natural Language Processing. We situate the special issue's five articles in the context of our fast-changing field, explaining our motivation for this project. We offer a brief summary of the work in the issue, which includes developments on lexical and sentential semantic representations, from symbolic and neural perspectives. 1. Motivation This special issue arose from our observation of two trends in the fields of computational linguistics and natural language processing. The first trend is a matter of increasing demand for language technologies that serve diverse populations, particularly those whose languages have received little attention in the research community.",2020
kotlerman-etal-2012-sentence,https://aclanthology.org/S12-1005,0,,,,,,,"Sentence Clustering via Projection over Term Clusters. This paper presents a novel sentence clustering scheme based on projecting sentences over term clusters. The scheme incorporates external knowledge to overcome lexical variability and small corpus size, and outperforms common sentence clustering methods on two real-life industrial datasets.",Sentence Clustering via Projection over Term Clusters,"This paper presents a novel sentence clustering scheme based on projecting sentences over term clusters. The scheme incorporates external knowledge to overcome lexical variability and small corpus size, and outperforms common sentence clustering methods on two real-life industrial datasets.",Sentence Clustering via Projection over Term Clusters,"This paper presents a novel sentence clustering scheme based on projecting sentences over term clusters. The scheme incorporates external knowledge to overcome lexical variability and small corpus size, and outperforms common sentence clustering methods on two real-life industrial datasets.",,"Sentence Clustering via Projection over Term Clusters. This paper presents a novel sentence clustering scheme based on projecting sentences over term clusters. The scheme incorporates external knowledge to overcome lexical variability and small corpus size, and outperforms common sentence clustering methods on two real-life industrial datasets.",2012
clodfelder-2003-lsa,https://aclanthology.org/W03-0319,0,,,,,,,"An LSA Implementation Against Parallel Texts in French and English. This paper presents the results of applying the Latent Semantic Analysis (LSA) methodology to a small collection of parallel texts in French and English. The goal of the analysis was to determine what the methodology might reveal regarding the difficulty level of either the machine-translation (MT) task or the text-alignment (TA) task. In a perfectly parallel corpus where the texts are exactly aligned, it is expected that the word distributions between the two languages be perfectly symmetrical. Where they are symmetrical, the difficulty level of the machine-translation or the text-alignment task should be low. The results of this analysis show that even in a perfectly aligned corpus, the word distributions between the two languages deviate and because they do, LSA may contribute much to our understanding of the difficulty of the MT and TA tasks. 1. Credits This paper discusses an implementation of the Latent Semantic Analysis (LSA) methodology against a small collection of perfectly parallel texts in French and English 1. The texts were made available by the HLT-NAACL and are taken from daily House journals of the Canadian Parliament. They were edited by Ulrich Germann. The LSA procedures were implemented in R, a system for statistical computation and graphics, and were written by John C.",An {LSA} Implementation Against Parallel Texts in {F}rench and {E}nglish,"This paper presents the results of applying the Latent Semantic Analysis (LSA) methodology to a small collection of parallel texts in French and English. The goal of the analysis was to determine what the methodology might reveal regarding the difficulty level of either the machine-translation (MT) task or the text-alignment (TA) task. In a perfectly parallel corpus where the texts are exactly aligned, it is expected that the word distributions between the two languages be perfectly symmetrical. Where they are symmetrical, the difficulty level of the machine-translation or the text-alignment task should be low. The results of this analysis show that even in a perfectly aligned corpus, the word distributions between the two languages deviate and because they do, LSA may contribute much to our understanding of the difficulty of the MT and TA tasks. 1. Credits This paper discusses an implementation of the Latent Semantic Analysis (LSA) methodology against a small collection of perfectly parallel texts in French and English 1. The texts were made available by the HLT-NAACL and are taken from daily House journals of the Canadian Parliament. They were edited by Ulrich Germann. The LSA procedures were implemented in R, a system for statistical computation and graphics, and were written by John C.",An LSA Implementation Against Parallel Texts in French and English,"This paper presents the results of applying the Latent Semantic Analysis (LSA) methodology to a small collection of parallel texts in French and English. The goal of the analysis was to determine what the methodology might reveal regarding the difficulty level of either the machine-translation (MT) task or the text-alignment (TA) task. In a perfectly parallel corpus where the texts are exactly aligned, it is expected that the word distributions between the two languages be perfectly symmetrical. Where they are symmetrical, the difficulty level of the machine-translation or the text-alignment task should be low. 
The results of this analysis show that even in a perfectly aligned corpus, the word distributions between the two languages deviate and because they do, LSA may contribute much to our understanding of the difficulty of the MT and TA tasks. 1. Credits This paper discusses an implementation of the Latent Semantic Analysis (LSA) methodology against a small collection of perfectly parallel texts in French and English 1. The texts were made available by the HLT-NAACL and are taken from daily House journals of the Canadian Parliament. They were edited by Ulrich Germann. The LSA procedures were implemented in R, a system for statistical computation and graphics, and were written by John C.",,"An LSA Implementation Against Parallel Texts in French and English. This paper presents the results of applying the Latent Semantic Analysis (LSA) methodology to a small collection of parallel texts in French and English. The goal of the analysis was to determine what the methodology might reveal regarding the difficulty level of either the machine-translation (MT) task or the text-alignment (TA) task. In a perfectly parallel corpus where the texts are exactly aligned, it is expected that the word distributions between the two languages be perfectly symmetrical. Where they are symmetrical, the difficulty level of the machine-translation or the text-alignment task should be low. The results of this analysis show that even in a perfectly aligned corpus, the word distributions between the two languages deviate and because they do, LSA may contribute much to our understanding of the difficulty of the MT and TA tasks. 1. Credits This paper discusses an implementation of the Latent Semantic Analysis (LSA) methodology against a small collection of perfectly parallel texts in French and English 1. The texts were made available by the HLT-NAACL and are taken from daily House journals of the Canadian Parliament. They were edited by Ulrich Germann. The LSA procedures were implemented in R, a system for statistical computation and graphics, and were written by John C.",2003
suglia-etal-2020-compguesswhat,https://aclanthology.org/2020.acl-main.682,0,,,,,,,"CompGuessWhat?!: A Multi-task Evaluation Framework for Grounded Language Learning. Approaches to Grounded Language Learning typically focus on a single task-based final performance measure that may not depend on desirable properties of the learned hidden representations, such as their ability to predict salient attributes or to generalise to unseen situations. To remedy this, we present GROLLA, an evaluation framework for Grounded Language Learning with Attributes with three subtasks: 1) Goal-oriented evaluation; 2) Object attribute prediction evaluation; and 3) Zero-shot evaluation. We also propose a new dataset CompGuessWhat?! as an instance of this framework for evaluating the quality of learned neural representations, in particular concerning attribute grounding. To this end, we extend the original GuessWhat?! dataset by including a semantic layer on top of the perceptual one. Specifically, we enrich the VisualGenome scene graphs associated with the GuessWhat?! images with abstract and situated attributes. By using diagnostic classifiers, we show that current models learn representations that are not expressive enough to encode object attributes (average F1 of 44.27). In addition, they do not learn strategies nor representations that are robust enough to perform well when novel scenes or objects are involved in gameplay (zero-shot best accuracy 50.06%).",{C}omp{G}uess{W}hat?!: A Multi-task Evaluation Framework for Grounded Language Learning,"Approaches to Grounded Language Learning typically focus on a single task-based final performance measure that may not depend on desirable properties of the learned hidden representations, such as their ability to predict salient attributes or to generalise to unseen situations. To remedy this, we present GROLLA, an evaluation framework for Grounded Language Learning with Attributes with three subtasks: 1) Goal-oriented evaluation; 2) Object attribute prediction evaluation; and 3) Zero-shot evaluation. We also propose a new dataset CompGuessWhat?! as an instance of this framework for evaluating the quality of learned neural representations, in particular concerning attribute grounding. To this end, we extend the original GuessWhat?! dataset by including a semantic layer on top of the perceptual one. Specifically, we enrich the VisualGenome scene graphs associated with the GuessWhat?! images with abstract and situated attributes. By using diagnostic classifiers, we show that current models learn representations that are not expressive enough to encode object attributes (average F1 of 44.27). In addition, they do not learn strategies nor representations that are robust enough to perform well when novel scenes or objects are involved in gameplay (zero-shot best accuracy 50.06%).",CompGuessWhat?!: A Multi-task Evaluation Framework for Grounded Language Learning,"Approaches to Grounded Language Learning typically focus on a single task-based final performance measure that may not depend on desirable properties of the learned hidden representations, such as their ability to predict salient attributes or to generalise to unseen situations. To remedy this, we present GROLLA, an evaluation framework for Grounded Language Learning with Attributes with three subtasks: 1) Goal-oriented evaluation; 2) Object attribute prediction evaluation; and 3) Zero-shot evaluation. We also propose a new dataset CompGuessWhat?! 
as an instance of this framework for evaluating the quality of learned neural representations, in particular concerning attribute grounding. To this end, we extend the original GuessWhat?! dataset by including a semantic layer on top of the perceptual one. Specifically, we enrich the VisualGenome scene graphs associated with the GuessWhat?! images with abstract and situated attributes. By using diagnostic classifiers, we show that current models learn representations that are not expressive enough to encode object attributes (average F1 of 44.27). In addition, they do not learn strategies nor representations that are robust enough to perform well when novel scenes or objects are involved in gameplay (zero-shot best accuracy 50.06%).",We thank Arash Eshghi and Yonatan Bisk for fruitful discussions in the early stages of the project.,"CompGuessWhat?!: A Multi-task Evaluation Framework for Grounded Language Learning. Approaches to Grounded Language Learning typically focus on a single task-based final performance measure that may not depend on desirable properties of the learned hidden representations, such as their ability to predict salient attributes or to generalise to unseen situations. To remedy this, we present GROLLA, an evaluation framework for Grounded Language Learning with Attributes with three subtasks: 1) Goal-oriented evaluation; 2) Object attribute prediction evaluation; and 3) Zero-shot evaluation. We also propose a new dataset CompGuessWhat?! as an instance of this framework for evaluating the quality of learned neural representations, in particular concerning attribute grounding. To this end, we extend the original GuessWhat?! dataset by including a semantic layer on top of the perceptual one. Specifically, we enrich the VisualGenome scene graphs associated with the GuessWhat?! images with abstract and situated attributes. By using diagnostic classifiers, we show that current models learn representations that are not expressive enough to encode object attributes (average F1 of 44.27). In addition, they do not learn strategies nor representations that are robust enough to perform well when novel scenes or objects are involved in gameplay (zero-shot best accuracy 50.06%).",2020
grishman-ksiezyk-1990-causal,https://aclanthology.org/C90-3023,0,,,,,,,"Causal and Temporal Text Analysis: The Role of the Domain Model. It is generally recognized that interpreting natural language input may require access to detailed knowledge of the domain involved. This is particularly true for multi-sentence discourse, where we must not only analyze the individual sentences but also establish the connections between them. Simple semantic constraints --an object classification hierarchy, a catalog of meaningful semantic relations --are not sufficient. However, the appropriate structure for integrating a language analyzer with a complex dynamic (time-dependent) model ---one which can scale up beyond 'toy' domains --is not yet well understood.
To explore these design issues, we have developed a system which uses a rich model of a real, nontrivial piece of equipment in order to analyze, in depth, reports of the failure of this equipment. This system has been fully implemented and demonstrated on actual failure reports. In outlining this system over the next few pages, we focus particularly on the language analysis components which require detailed domain knowledge, and how these requirements have affected the design of the domain model.",Causal and Temporal Text Analysis: The Role of the Domain Model,"It is generally recognized that interpreting natural language input may require access to detailed knowledge of the domain involved. This is particularly true for multi-sentence discourse, where we must not only analyze the individual sentences but also establish the connections between them. Simple semantic constraints --an object classification hierarchy, a catalog of meaningful semantic relations --are not sufficient. However, the appropriate structure for integrating a language analyzer with a complex dynamic (time-dependent) model ---one which can scale up beyond 'toy' domains --is not yet well understood.
To explore these design issues, we have developed a system which uses a rich model of a real, nontrivial piece of equipment in order to analyze, in depth, reports of the failure of this equipment. This system has been fully implemented and demonstrated on actual failure reports. In outlining this system over the next few pages, we focus particularly on the language analysis components which require detailed domain knowledge, and how these requirements have affected the design of the domain model.",Causal and Temporal Text Analysis: The Role of the Domain Model,"It is generally recognized that interpreting natural language input may require access to detailed knowledge of the domain involved. This is particularly true for multi-sentence discourse, where we must not only analyze the individual sentences but also establish the connections between them. Simple semantic constraints --an object classification hierarchy, a catalog of meaningful semantic relations --are not sufficient. However, the appropriate structure for integrating a language analyzer with a complex dynamic (time-dependent) model ---one which can scale up beyond 'toy' domains --is not yet well understood.
To explore these design issues, we have developed a system which uses a rich model of a real, nontrivial piece of equipment in order to analyze, in depth, reports of the failure of this equipment. This system has been fully implemented and demonstrated on actual failure reports. In outlining this system over the next few pages, we focus particularly on the language analysis components which require detailed domain knowledge, and how these requirements have affected the design of the domain model.",This research was supported by the Defense Advanced Research Projects Agency under Contract N00014-85-K-0163 from the Office of Naval Research.,"Causal and Temporal Text Analysis: The Role of the Domain Model. It is generally recognized that interpreting natural language input may require access to detailed knowledge of the domain involved. This is particularly true for multi-sentence discourse, where we must not only analyze the individual sentences but also establish the connections between them. Simple semantic constraints --an object classification hierarchy, a catalog of meaningful semantic relations --are not sufficient. However, the appropriate structure for integrating a language analyzer with a complex dynamic (time-dependent) model ---one which can scale up beyond 'toy' domains --is not yet well understood.
To explore these design issues, we have developed a system which uses a rich model of a real, nontrivial piece of equipment in order to analyze, in depth, reports of the failure of this equipment. This system has been fully implemented and demonstrated on actual failure reports. In outlining this system over the next few pages, we focus particularly on the language analysis components which require detailed domain knowledge, and how these requirements have affected the design of the domain model.",1990
seyffarth-2019-modeling,https://aclanthology.org/W19-1003,0,,,,,,,"Modeling the Induced Action Alternation and the Caused-Motion Construction with Tree Adjoining Grammar (TAG) and Semantic Frames. The induced action alternation and the caused-motion construction are two phenomena that allow English verbs to be interpreted as motion-causing events. This is possible when a verb is used with a direct object and a directional phrase, even when the verb does not lexically signify causativity or motion, as in ""Sylvia laughed Mary off the stage"". While participation in the induced action alternation is a lexical property of certain verbs, the caused-motion construction is not anchored in the lexicon. We model both phenomena with XMG-2 and use the TuLiPA parser to create compositional semantic frames for example sentences. We show how such frames represent the key differences between these two phenomena at the syntax-semantics interface, and how TAG can be used to derive distinct analyses for them.",Modeling the Induced Action Alternation and the Caused-Motion Construction with {T}ree {A}djoining {G}rammar ({TAG}) and Semantic Frames,"The induced action alternation and the caused-motion construction are two phenomena that allow English verbs to be interpreted as motion-causing events. This is possible when a verb is used with a direct object and a directional phrase, even when the verb does not lexically signify causativity or motion, as in ""Sylvia laughed Mary off the stage"". While participation in the induced action alternation is a lexical property of certain verbs, the caused-motion construction is not anchored in the lexicon. We model both phenomena with XMG-2 and use the TuLiPA parser to create compositional semantic frames for example sentences. We show how such frames represent the key differences between these two phenomena at the syntax-semantics interface, and how TAG can be used to derive distinct analyses for them.",Modeling the Induced Action Alternation and the Caused-Motion Construction with Tree Adjoining Grammar (TAG) and Semantic Frames,"The induced action alternation and the caused-motion construction are two phenomena that allow English verbs to be interpreted as motion-causing events. This is possible when a verb is used with a direct object and a directional phrase, even when the verb does not lexically signify causativity or motion, as in ""Sylvia laughed Mary off the stage"". While participation in the induced action alternation is a lexical property of certain verbs, the caused-motion construction is not anchored in the lexicon. We model both phenomena with XMG-2 and use the TuLiPA parser to create compositional semantic frames for example sentences. We show how such frames represent the key differences between these two phenomena at the syntax-semantics interface, and how TAG can be used to derive distinct analyses for them.",,"Modeling the Induced Action Alternation and the Caused-Motion Construction with Tree Adjoining Grammar (TAG) and Semantic Frames. The induced action alternation and the caused-motion construction are two phenomena that allow English verbs to be interpreted as motion-causing events. This is possible when a verb is used with a direct object and a directional phrase, even when the verb does not lexically signify causativity or motion, as in ""Sylvia laughed Mary off the stage"". While participation in the induced action alternation is a lexical property of certain verbs, the caused-motion construction is not anchored in the lexicon. 
We model both phenomena with XMG-2 and use the TuLiPA parser to create compositional semantic frames for example sentences. We show how such frames represent the key differences between these two phenomena at the syntax-semantics interface, and how TAG can be used to derive distinct analyses for them.",2019
wilks-1976-semantics,https://aclanthology.org/1976.earlymt-1.20,0,,,,,,,"Semantics and world knowledge in MT. I presented very simple and straightforward paragraphs from recent newspapers to show that even the most congenial real texts require, for their translation, some notions of inference, knowledge, and what I call ""preference rules"". In 1974, I went for a year to the Institute for Semantic and Cognitive Studies in Switzerland and then to the University of Edinburgh, where I have worked on theoretical defects in that Stanford model and ways of overcoming them in a later implementation.",Semantics and world knowledge in {MT},"I presented very simple and straightforward paragraphs from recent newspapers to show that even the most congenial real texts require, for their translation, some notions of inference, knowledge, and what I call ""preference rules"". In 1974, I went for a year to the Institute for Semantic and Cognitive Studies in Switzerland and then to the University of Edinburgh, where I have worked on theoretical defects in that Stanford model and ways of overcoming them in a later implementation.",Semantics and world knowledge in MT,"I presented very simple and straightforward paragraphs from recent newspapers to show that even the most congenial real texts require, for their translation, some notions of inference, knowledge, and what I call ""preference rules"". In 1974, I went for a year to the Institute for Semantic and Cognitive Studies in Switzerland and then to the University of Edinburgh, where I have worked on theoretical defects in that Stanford model and ways of overcoming them in a later implementation.",,"Semantics and world knowledge in MT. I presented very simple and straightforward paragraphs from recent newspapers to show that even the most congenial real texts require, for their translation, some notions of inference, knowledge, and what I call ""preference rules"". In 1974, I went for a year to the Institute for Semantic and Cognitive Studies in Switzerland and then to the University of Edinburgh, where I have worked on theoretical defects in that Stanford model and ways of overcoming them in a later implementation.",1976
di-marco-navigli-2013-clustering,https://aclanthology.org/J13-3008,0,,,,,,,"Clustering and Diversifying Web Search Results with Graph-Based Word Sense Induction. Web search result clustering aims to facilitate information search on the Web. Rather than the results of a query being presented as a flat list, they are grouped on the basis of their similarity and subsequently shown to the user as a list of clusters. Each cluster is intended to represent a different meaning of the input query, thus taking into account the lexical ambiguity (i.e., polysemy) issue. Existing Web clustering methods typically rely on some shallow notion of textual similarity between search result snippets, however. As a result, text snippets with no word in common tend to be clustered separately even if they share the same meaning, whereas snippets with words in common may be grouped together even if they refer to different meanings of the input query. In this article we present a novel approach to Web search result clustering based on the automatic discovery of word senses from raw text, a task referred to as Word Sense Induction. Key to our approach is to first acquire the various senses (i.e., meanings) of an ambiguous query and then cluster the search results based on their semantic similarity to the word senses induced. Our experiments, conducted on data sets of ambiguous queries, show that our approach outperforms both Web clustering and search engines.",Clustering and Diversifying Web Search Results with Graph-Based Word Sense Induction,"Web search result clustering aims to facilitate information search on the Web. Rather than the results of a query being presented as a flat list, they are grouped on the basis of their similarity and subsequently shown to the user as a list of clusters. Each cluster is intended to represent a different meaning of the input query, thus taking into account the lexical ambiguity (i.e., polysemy) issue. Existing Web clustering methods typically rely on some shallow notion of textual similarity between search result snippets, however. As a result, text snippets with no word in common tend to be clustered separately even if they share the same meaning, whereas snippets with words in common may be grouped together even if they refer to different meanings of the input query. In this article we present a novel approach to Web search result clustering based on the automatic discovery of word senses from raw text, a task referred to as Word Sense Induction. Key to our approach is to first acquire the various senses (i.e., meanings) of an ambiguous query and then cluster the search results based on their semantic similarity to the word senses induced. Our experiments, conducted on data sets of ambiguous queries, show that our approach outperforms both Web clustering and search engines.",Clustering and Diversifying Web Search Results with Graph-Based Word Sense Induction,"Web search result clustering aims to facilitate information search on the Web. Rather than the results of a query being presented as a flat list, they are grouped on the basis of their similarity and subsequently shown to the user as a list of clusters. Each cluster is intended to represent a different meaning of the input query, thus taking into account the lexical ambiguity (i.e., polysemy) issue. Existing Web clustering methods typically rely on some shallow notion of textual similarity between search result snippets, however. 
As a result, text snippets with no word in common tend to be clustered separately even if they share the same meaning, whereas snippets with words in common may be grouped together even if they refer to different meanings of the input query. In this article we present a novel approach to Web search result clustering based on the automatic discovery of word senses from raw text, a task referred to as Word Sense Induction. Key to our approach is to first acquire the various senses (i.e., meanings) of an ambiguous query and then cluster the search results based on their semantic similarity to the word senses induced. Our experiments, conducted on data sets of ambiguous queries, show that our approach outperforms both Web clustering and search engines.",The authors gratefully acknowledge the support of the ERC Starting Grant MultiJEDI no. 259234 ,"Clustering and Diversifying Web Search Results with Graph-Based Word Sense Induction. Web search result clustering aims to facilitate information search on the Web. Rather than the results of a query being presented as a flat list, they are grouped on the basis of their similarity and subsequently shown to the user as a list of clusters. Each cluster is intended to represent a different meaning of the input query, thus taking into account the lexical ambiguity (i.e., polysemy) issue. Existing Web clustering methods typically rely on some shallow notion of textual similarity between search result snippets, however. As a result, text snippets with no word in common tend to be clustered separately even if they share the same meaning, whereas snippets with words in common may be grouped together even if they refer to different meanings of the input query. In this article we present a novel approach to Web search result clustering based on the automatic discovery of word senses from raw text, a task referred to as Word Sense Induction. Key to our approach is to first acquire the various senses (i.e., meanings) of an ambiguous query and then cluster the search results based on their semantic similarity to the word senses induced. Our experiments, conducted on data sets of ambiguous queries, show that our approach outperforms both Web clustering and search engines.",2013
lisowska-underwood-2006-rote,http://www.lrec-conf.org/proceedings/lrec2006/pdf/187_pdf.pdf,0,,,,,,,ROTE: A Tool to Support Users in Defining the Relative Importance of Quality Characteristics. This paper describes the Relative Ordering Tool for Evaluation (ROTE) which is designed to support the process of building a parameterised quality model for evaluation. It is a very simple tool which enables users to specify the relative importance of quality characteristics (and associated metrics) to reflect the users' particular requirements. The tool allows users to order any number of quality characteristics by comparing them in a pair-wise fashion. The tool was developed in the context of a collaborative project developing a text mining system. A full scale evaluation of the text mining system was designed and executed for three different users and the ROTE tool was successfully applied by those users during that process. The tool will be made available for general use by the evaluation community.,{ROTE}: A Tool to Support Users in Defining the Relative Importance of Quality Characteristics,This paper describes the Relative Ordering Tool for Evaluation (ROTE) which is designed to support the process of building a parameterised quality model for evaluation. It is a very simple tool which enables users to specify the relative importance of quality characteristics (and associated metrics) to reflect the users' particular requirements. The tool allows users to order any number of quality characteristics by comparing them in a pair-wise fashion. The tool was developed in the context of a collaborative project developing a text mining system. A full scale evaluation of the text mining system was designed and executed for three different users and the ROTE tool was successfully applied by those users during that process. The tool will be made available for general use by the evaluation community.,ROTE: A Tool to Support Users in Defining the Relative Importance of Quality Characteristics,This paper describes the Relative Ordering Tool for Evaluation (ROTE) which is designed to support the process of building a parameterised quality model for evaluation. It is a very simple tool which enables users to specify the relative importance of quality characteristics (and associated metrics) to reflect the users' particular requirements. The tool allows users to order any number of quality characteristics by comparing them in a pair-wise fashion. The tool was developed in the context of a collaborative project developing a text mining system. A full scale evaluation of the text mining system was designed and executed for three different users and the ROTE tool was successfully applied by those users during that process. The tool will be made available for general use by the evaluation community.,,ROTE: A Tool to Support Users in Defining the Relative Importance of Quality Characteristics. This paper describes the Relative Ordering Tool for Evaluation (ROTE) which is designed to support the process of building a parameterised quality model for evaluation. It is a very simple tool which enables users to specify the relative importance of quality characteristics (and associated metrics) to reflect the users' particular requirements. The tool allows users to order any number of quality characteristics by comparing them in a pair-wise fashion. The tool was developed in the context of a collaborative project developing a text mining system. 
A full scale evaluation of the text mining system was designed and executed for three different users and the ROTE tool was successfully applied by those users during that process. The tool will be made available for general use by the evaluation community.,2006
xiao-guo-2012-multi,https://aclanthology.org/C12-1174,0,,,,,,,"Multi-View AdaBoost for Multilingual Subjectivity Analysis. Subjectivity analysis has received increasing attention in natural language processing field. Most of the subjectivity analysis works however are conducted on single languages. In this paper, we propose to perform multilingual subjectivity analysis by combining multi-view learning and AdaBoost techniques. We aim to show that by boosting multi-view classifiers we can develop more effective multilingual subjectivity analysis tools for new languages as well as increase the classification performance for English data. We empirically evaluate our two multi-view AdaBoost approaches on the multilingual MPQA dataset. The experimental results show the multi-view AdaBoost approaches significantly outperform existing monolingual and multilingual methods.",Multi-View {A}da{B}oost for Multilingual Subjectivity Analysis,"Subjectivity analysis has received increasing attention in natural language processing field. Most of the subjectivity analysis works however are conducted on single languages. In this paper, we propose to perform multilingual subjectivity analysis by combining multi-view learning and AdaBoost techniques. We aim to show that by boosting multi-view classifiers we can develop more effective multilingual subjectivity analysis tools for new languages as well as increase the classification performance for English data. We empirically evaluate our two multi-view AdaBoost approaches on the multilingual MPQA dataset. The experimental results show the multi-view AdaBoost approaches significantly outperform existing monolingual and multilingual methods.",Multi-View AdaBoost for Multilingual Subjectivity Analysis,"Subjectivity analysis has received increasing attention in natural language processing field. Most of the subjectivity analysis works however are conducted on single languages. In this paper, we propose to perform multilingual subjectivity analysis by combining multi-view learning and AdaBoost techniques. We aim to show that by boosting multi-view classifiers we can develop more effective multilingual subjectivity analysis tools for new languages as well as increase the classification performance for English data. We empirically evaluate our two multi-view AdaBoost approaches on the multilingual MPQA dataset. The experimental results show the multi-view AdaBoost approaches significantly outperform existing monolingual and multilingual methods.",,"Multi-View AdaBoost for Multilingual Subjectivity Analysis. Subjectivity analysis has received increasing attention in natural language processing field. Most of the subjectivity analysis works however are conducted on single languages. In this paper, we propose to perform multilingual subjectivity analysis by combining multi-view learning and AdaBoost techniques. We aim to show that by boosting multi-view classifiers we can develop more effective multilingual subjectivity analysis tools for new languages as well as increase the classification performance for English data. We empirically evaluate our two multi-view AdaBoost approaches on the multilingual MPQA dataset. The experimental results show the multi-view AdaBoost approaches significantly outperform existing monolingual and multilingual methods.",2012
maxwell-2015-accounting,https://aclanthology.org/W15-4809,0,,,,,,,"Accounting for Allomorphy in Finite-state Transducers. Building morphological parsers with existing finite state toolkits can result in something of a mis-match between the programming language of the toolkit and the linguistic concepts familiar to the average linguist. We illustrate this mismatch with a particular linguistic construct, suppletive allomorphy, and discuss ways to encode suppletive allomorphy in the Stuttgart Finite State tools (sfst). The complexity of the general solution motivates our work in providing an alternative formalism for morphology and phonology, one which can be translated automatically into sfst or other morphological parsing engines.",Accounting for Allomorphy in Finite-state Transducers,"Building morphological parsers with existing finite state toolkits can result in something of a mis-match between the programming language of the toolkit and the linguistic concepts familiar to the average linguist. We illustrate this mismatch with a particular linguistic construct, suppletive allomorphy, and discuss ways to encode suppletive allomorphy in the Stuttgart Finite State tools (sfst). The complexity of the general solution motivates our work in providing an alternative formalism for morphology and phonology, one which can be translated automatically into sfst or other morphological parsing engines.",Accounting for Allomorphy in Finite-state Transducers,"Building morphological parsers with existing finite state toolkits can result in something of a mis-match between the programming language of the toolkit and the linguistic concepts familiar to the average linguist. We illustrate this mismatch with a particular linguistic construct, suppletive allomorphy, and discuss ways to encode suppletive allomorphy in the Stuttgart Finite State tools (sfst). The complexity of the general solution motivates our work in providing an alternative formalism for morphology and phonology, one which can be translated automatically into sfst or other morphological parsing engines.",,"Accounting for Allomorphy in Finite-state Transducers. Building morphological parsers with existing finite state toolkits can result in something of a mis-match between the programming language of the toolkit and the linguistic concepts familiar to the average linguist. We illustrate this mismatch with a particular linguistic construct, suppletive allomorphy, and discuss ways to encode suppletive allomorphy in the Stuttgart Finite State tools (sfst). The complexity of the general solution motivates our work in providing an alternative formalism for morphology and phonology, one which can be translated automatically into sfst or other morphological parsing engines.",2015
nilsson-etal-2007-generalizing,https://aclanthology.org/P07-1122,0,,,,,,,"Generalizing Tree Transformations for Inductive Dependency Parsing. Previous studies in data-driven dependency parsing have shown that tree transformations can improve parsing accuracy for specific parsers and data sets. We investigate to what extent this can be generalized across languages/treebanks and parsers, focusing on pseudo-projective parsing, as a way of capturing non-projective dependencies, and transformations used to facilitate parsing of coordinate structures and verb groups. The results indicate that the beneficial effect of pseudo-projective parsing is independent of parsing strategy but sensitive to language or treebank specific properties. By contrast, the construction specific transformations appear to be more sensitive to parsing strategy but have a constant positive effect over several languages.",Generalizing Tree Transformations for Inductive Dependency Parsing,"Previous studies in data-driven dependency parsing have shown that tree transformations can improve parsing accuracy for specific parsers and data sets. We investigate to what extent this can be generalized across languages/treebanks and parsers, focusing on pseudo-projective parsing, as a way of capturing non-projective dependencies, and transformations used to facilitate parsing of coordinate structures and verb groups. The results indicate that the beneficial effect of pseudo-projective parsing is independent of parsing strategy but sensitive to language or treebank specific properties. By contrast, the construction specific transformations appear to be more sensitive to parsing strategy but have a constant positive effect over several languages.",Generalizing Tree Transformations for Inductive Dependency Parsing,"Previous studies in data-driven dependency parsing have shown that tree transformations can improve parsing accuracy for specific parsers and data sets. We investigate to what extent this can be generalized across languages/treebanks and parsers, focusing on pseudo-projective parsing, as a way of capturing non-projective dependencies, and transformations used to facilitate parsing of coordinate structures and verb groups. The results indicate that the beneficial effect of pseudo-projective parsing is independent of parsing strategy but sensitive to language or treebank specific properties. By contrast, the construction specific transformations appear to be more sensitive to parsing strategy but have a constant positive effect over several languages.",,"Generalizing Tree Transformations for Inductive Dependency Parsing. Previous studies in data-driven dependency parsing have shown that tree transformations can improve parsing accuracy for specific parsers and data sets. We investigate to what extent this can be generalized across languages/treebanks and parsers, focusing on pseudo-projective parsing, as a way of capturing non-projective dependencies, and transformations used to facilitate parsing of coordinate structures and verb groups. The results indicate that the beneficial effect of pseudo-projective parsing is independent of parsing strategy but sensitive to language or treebank specific properties. By contrast, the construction specific transformations appear to be more sensitive to parsing strategy but have a constant positive effect over several languages.",2007
liang-etal-2021-iterative-multi,https://aclanthology.org/2021.findings-emnlp.152,0,,,,,,,"An Iterative Multi-Knowledge Transfer Network for Aspect-Based Sentiment Analysis. Aspect-based sentiment analysis (ABSA) mainly involves three subtasks: aspect term extraction, opinion term extraction, and aspect-level sentiment classification, which are typically handled in a separate or joint manner. However, previous approaches do not well exploit the interactive relations among three subtasks and do not pertinently leverage the easily available document-level labeled domain/sentiment knowledge, which restricts their performances. To address these issues, we propose a novel Iterative Multi-Knowledge Transfer Network (IMKTN) for end-to-end ABSA. For one thing, through the interactive correlations between the ABSA subtasks, our IMKTN transfers the task-specific knowledge from any two of the three subtasks to another one at the token level by utilizing a well-designed routing algorithm, that is, any two of the three subtasks will help the third one. For another, our IMKTN pertinently transfers the document-level knowledge, i.e., domain-specific and sentiment-related knowledge, to the aspect-level subtasks to further enhance the corresponding performance. Experimental results on three benchmark datasets demonstrate the effectiveness and superiority of our approach.",An Iterative Multi-Knowledge Transfer Network for Aspect-Based Sentiment Analysis,"Aspect-based sentiment analysis (ABSA) mainly involves three subtasks: aspect term extraction, opinion term extraction, and aspect-level sentiment classification, which are typically handled in a separate or joint manner. However, previous approaches do not well exploit the interactive relations among three subtasks and do not pertinently leverage the easily available document-level labeled domain/sentiment knowledge, which restricts their performances. To address these issues, we propose a novel Iterative Multi-Knowledge Transfer Network (IMKTN) for end-to-end ABSA. For one thing, through the interactive correlations between the ABSA subtasks, our IMKTN transfers the task-specific knowledge from any two of the three subtasks to another one at the token level by utilizing a well-designed routing algorithm, that is, any two of the three subtasks will help the third one. For another, our IMKTN pertinently transfers the document-level knowledge, i.e., domain-specific and sentiment-related knowledge, to the aspect-level subtasks to further enhance the corresponding performance. Experimental results on three benchmark datasets demonstrate the effectiveness and superiority of our approach.",An Iterative Multi-Knowledge Transfer Network for Aspect-Based Sentiment Analysis,"Aspect-based sentiment analysis (ABSA) mainly involves three subtasks: aspect term extraction, opinion term extraction, and aspect-level sentiment classification, which are typically handled in a separate or joint manner. However, previous approaches do not well exploit the interactive relations among three subtasks and do not pertinently leverage the easily available document-level labeled domain/sentiment knowledge, which restricts their performances. To address these issues, we propose a novel Iterative Multi-Knowledge Transfer Network (IMKTN) for end-to-end ABSA. 
For one thing, through the interactive correlations between the ABSA subtasks, our IMKTN transfers the task-specific knowledge from any two of the three subtasks to another one at the token level by utilizing a well-designed routing algorithm, that is, any two of the three subtasks will help the third one. For another, our IMKTN pertinently transfers the document-level knowledge, i.e., domain-specific and sentiment-related knowledge, to the aspect-level subtasks to further enhance the corresponding performance. Experimental results on three benchmark datasets demonstrate the effectiveness and superiority of our approach.","The research work described in this paper has been supported by the National Key R&D Program of China (2019YFB1405200) and the National Natural Science Foundation of China (No. 61976015, 61976016, 61876198 and 61370130). The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve this paper.","An Iterative Multi-Knowledge Transfer Network for Aspect-Based Sentiment Analysis. Aspect-based sentiment analysis (ABSA) mainly involves three subtasks: aspect term extraction, opinion term extraction, and aspect-level sentiment classification, which are typically handled in a separate or joint manner. However, previous approaches do not well exploit the interactive relations among three subtasks and do not pertinently leverage the easily available document-level labeled domain/sentiment knowledge, which restricts their performances. To address these issues, we propose a novel Iterative Multi-Knowledge Transfer Network (IMKTN) for end-to-end ABSA. For one thing, through the interactive correlations between the ABSA subtasks, our IMKTN transfers the task-specific knowledge from any two of the three subtasks to another one at the token level by utilizing a well-designed routing algorithm, that is, any two of the three subtasks will help the third one. For another, our IMKTN pertinently transfers the document-level knowledge, i.e., domain-specific and sentiment-related knowledge, to the aspect-level subtasks to further enhance the corresponding performance. Experimental results on three benchmark datasets demonstrate the effectiveness and superiority of our approach.",2021
shi-etal-2021-neural,https://aclanthology.org/2021.emnlp-main.298,0,,,,,,,"Neural Natural Logic Inference for Interpretable Question Answering. Many open-domain question answering problems can be cast as a textual entailment task, where a question and candidate answers are concatenated to form hypotheses. A QA system then determines if the supporting knowledge bases, regarded as potential premises, entail the hypotheses. In this paper, we investigate a neural-symbolic QA approach that integrates natural logic reasoning within deep learning architectures, towards developing effective and yet explainable question answering models. The proposed model gradually bridges a hypothesis and candidate premises following natural logic inference steps to build proof paths. Entailment scores between the acquired intermediate hypotheses and candidate premises are measured to determine if a premise entails the hypothesis. As the natural logic reasoning process forms a tree-like, hierarchical structure, we embed hypotheses and premises in a Hyperbolic space rather than Euclidean space to acquire more precise representations. Empirically, our method outperforms prior work on answering multiple-choice science questions, achieving the best results on two publicly available datasets. The natural logic inference process inherently provides evidence to help explain the prediction process.",Neural Natural Logic Inference for Interpretable Question Answering,"Many open-domain question answering problems can be cast as a textual entailment task, where a question and candidate answers are concatenated to form hypotheses. A QA system then determines if the supporting knowledge bases, regarded as potential premises, entail the hypotheses. In this paper, we investigate a neural-symbolic QA approach that integrates natural logic reasoning within deep learning architectures, towards developing effective and yet explainable question answering models. The proposed model gradually bridges a hypothesis and candidate premises following natural logic inference steps to build proof paths. Entailment scores between the acquired intermediate hypotheses and candidate premises are measured to determine if a premise entails the hypothesis. As the natural logic reasoning process forms a tree-like, hierarchical structure, we embed hypotheses and premises in a Hyperbolic space rather than Euclidean space to acquire more precise representations. Empirically, our method outperforms prior work on answering multiple-choice science questions, achieving the best results on two publicly available datasets. The natural logic inference process inherently provides evidence to help explain the prediction process.",Neural Natural Logic Inference for Interpretable Question Answering,"Many open-domain question answering problems can be cast as a textual entailment task, where a question and candidate answers are concatenated to form hypotheses. A QA system then determines if the supporting knowledge bases, regarded as potential premises, entail the hypotheses. In this paper, we investigate a neural-symbolic QA approach that integrates natural logic reasoning within deep learning architectures, towards developing effective and yet explainable question answering models. The proposed model gradually bridges a hypothesis and candidate premises following natural logic inference steps to build proof paths. Entailment scores between the acquired intermediate hypotheses and candidate premises are measured to determine if a premise entails the hypothesis. 
As the natural logic reasoning process forms a tree-like, hierarchical structure, we embed hypotheses and premises in a Hyperbolic space rather than Euclidean space to acquire more precise representations. Empirically, our method outperforms prior work on answering multiple-choice science questions, achieving the best results on two publicly available datasets. The natural logic inference process inherently provides evidence to help explain the prediction process.","We thank the anonymous reviewers for their constructive comments, and gratefully acknowledge the support of the National Key Research and Development Program of China (2020AAA0106501), and the National Natural Science Foundation of China (61976073).","Neural Natural Logic Inference for Interpretable Question Answering. Many open-domain question answering problems can be cast as a textual entailment task, where a question and candidate answers are concatenated to form hypotheses. A QA system then determines if the supporting knowledge bases, regarded as potential premises, entail the hypotheses. In this paper, we investigate a neural-symbolic QA approach that integrates natural logic reasoning within deep learning architectures, towards developing effective and yet explainable question answering models. The proposed model gradually bridges a hypothesis and candidate premises following natural logic inference steps to build proof paths. Entailment scores between the acquired intermediate hypotheses and candidate premises are measured to determine if a premise entails the hypothesis. As the natural logic reasoning process forms a tree-like, hierarchical structure, we embed hypotheses and premises in a Hyperbolic space rather than Euclidean space to acquire more precise representations. Empirically, our method outperforms prior work on answering multiple-choice science questions, achieving the best results on two publicly available datasets. The natural logic inference process inherently provides evidence to help explain the prediction process.",2021
schofield-mehr-2016-gender,https://aclanthology.org/W16-0204,0,,,,,,,"Gender-Distinguishing Features in Film Dialogue. Film scripts provide a means of examining generalized western social perceptions of accepted human behavior. In particular, we focus on how dialogue in films describes gender, identifying linguistic and structural differences in speech for men and women and in same and different-gendered pairs. Using the Cornell Movie-Dialogs Corpus (Danescu-Niculescu-Mizil et al., 2012a), we identify significant linguistic and structural features of dialogue that differentiate genders in conversation and analyze how those effects relate to existing literature on gender in film. Author's Note (July 2020) The subsequent work below makes gender determinations based on a binary assignment assessed using statistics from most common baby names. We regret and recommend against this heuristic for several reasons:",Gender-Distinguishing Features in Film Dialogue,"Film scripts provide a means of examining generalized western social perceptions of accepted human behavior. In particular, we focus on how dialogue in films describes gender, identifying linguistic and structural differences in speech for men and women and in same and different-gendered pairs. Using the Cornell Movie-Dialogs Corpus (Danescu-Niculescu-Mizil et al., 2012a), we identify significant linguistic and structural features of dialogue that differentiate genders in conversation and analyze how those effects relate to existing literature on gender in film. Author's Note (July 2020) The subsequent work below makes gender determinations based on a binary assignment assessed using statistics from most common baby names. We regret and recommend against this heuristic for several reasons:",Gender-Distinguishing Features in Film Dialogue,"Film scripts provide a means of examining generalized western social perceptions of accepted human behavior. In particular, we focus on how dialogue in films describes gender, identifying linguistic and structural differences in speech for men and women and in same and different-gendered pairs. Using the Cornell Movie-Dialogs Corpus (Danescu-Niculescu-Mizil et al., 2012a), we identify significant linguistic and structural features of dialogue that differentiate genders in conversation and analyze how those effects relate to existing literature on gender in film. Author's Note (July 2020) The subsequent work below makes gender determinations based on a binary assignment assessed using statistics from most common baby names. We regret and recommend against this heuristic for several reasons:","We thank C. Danescu-Niculescu-Mizil, L. Lee, D. Mimno, J. Hessel, and the members of the NLP and Social Interaction course at Cornell for their support and ideas in developing this paper. We thank the workshop chairs and our anonymous reviewers for their thoughtful comments and suggestions.","Gender-Distinguishing Features in Film Dialogue. Film scripts provide a means of examining generalized western social perceptions of accepted human behavior. In particular, we focus on how dialogue in films describes gender, identifying linguistic and structural differences in speech for men and women and in same and different-gendered pairs. Using the Cornell Movie-Dialogs Corpus (Danescu-Niculescu-Mizil et al., 2012a), we identify significant linguistic and structural features of dialogue that differentiate genders in conversation and analyze how those effects relate to existing literature on gender in film. 
Author's Note (July 2020) The subsequent work below makes gender determinations based on a binary assignment assessed using statistics from most common baby names. We regret and recommend against this heuristic for several reasons:",2016
hying-2007-corpus,https://aclanthology.org/W07-1601,0,,,,,,,A Corpus-Based Analysis of Geometric Constraints on Projective Prepositions. This paper presents a corpus-based method for automatic evaluation of geometric constraints on projective prepositions. The method is used to find an appropriate model of geometric constraints for a two-dimensional domain. Two simple models are evaluated against the uses of projective prepositions in a corpus of natural language dialogues to find the best parameters of these models. Both models cover more than 96% of the data correctly. An extra treatment of negative uses of projective prepositions (e.g. A is not above B) improves both models getting close to full coverage.,A Corpus-Based Analysis of Geometric Constraints on Projective Prepositions,This paper presents a corpus-based method for automatic evaluation of geometric constraints on projective prepositions. The method is used to find an appropriate model of geometric constraints for a two-dimensional domain. Two simple models are evaluated against the uses of projective prepositions in a corpus of natural language dialogues to find the best parameters of these models. Both models cover more than 96% of the data correctly. An extra treatment of negative uses of projective prepositions (e.g. A is not above B) improves both models getting close to full coverage.,A Corpus-Based Analysis of Geometric Constraints on Projective Prepositions,This paper presents a corpus-based method for automatic evaluation of geometric constraints on projective prepositions. The method is used to find an appropriate model of geometric constraints for a two-dimensional domain. Two simple models are evaluated against the uses of projective prepositions in a corpus of natural language dialogues to find the best parameters of these models. Both models cover more than 96% of the data correctly. An extra treatment of negative uses of projective prepositions (e.g. A is not above B) improves both models getting close to full coverage.,,A Corpus-Based Analysis of Geometric Constraints on Projective Prepositions. This paper presents a corpus-based method for automatic evaluation of geometric constraints on projective prepositions. The method is used to find an appropriate model of geometric constraints for a two-dimensional domain. Two simple models are evaluated against the uses of projective prepositions in a corpus of natural language dialogues to find the best parameters of these models. Both models cover more than 96% of the data correctly. An extra treatment of negative uses of projective prepositions (e.g. A is not above B) improves both models getting close to full coverage.,2007
goutte-etal-2004-aligning,https://aclanthology.org/P04-1064,0,,,,,,,"Aligning words using matrix factorisation. Aligning words from sentences which are mutual translations is an important problem in different settings, such as bilingual terminology extraction, Machine Translation, or projection of linguistic features. Here, we view word alignment as matrix factorisation. In order to produce proper alignments, we show that factors must satisfy a number of constraints such as orthogonality. We then propose an algorithm for orthogonal non-negative matrix factorisation, based on a probabilistic model of the alignment data, and apply it to word alignment. This is illustrated on a French-English alignment task from the Hansard.",Aligning words using matrix factorisation,"Aligning words from sentences which are mutual translations is an important problem in different settings, such as bilingual terminology extraction, Machine Translation, or projection of linguistic features. Here, we view word alignment as matrix factorisation. In order to produce proper alignments, we show that factors must satisfy a number of constraints such as orthogonality. We then propose an algorithm for orthogonal non-negative matrix factorisation, based on a probabilistic model of the alignment data, and apply it to word alignment. This is illustrated on a French-English alignment task from the Hansard.",Aligning words using matrix factorisation,"Aligning words from sentences which are mutual translations is an important problem in different settings, such as bilingual terminology extraction, Machine Translation, or projection of linguistic features. Here, we view word alignment as matrix factorisation. In order to produce proper alignments, we show that factors must satisfy a number of constraints such as orthogonality. We then propose an algorithm for orthogonal non-negative matrix factorisation, based on a probabilistic model of the alignment data, and apply it to word alignment. This is illustrated on a French-English alignment task from the Hansard.",We acknowledge the Machine Learning group at XRCE for discussions related to the topic of word alignment. We would like to thank the three anonymous reviewers for their comments.,"Aligning words using matrix factorisation. Aligning words from sentences which are mutual translations is an important problem in different settings, such as bilingual terminology extraction, Machine Translation, or projection of linguistic features. Here, we view word alignment as matrix factorisation. In order to produce proper alignments, we show that factors must satisfy a number of constraints such as orthogonality. We then propose an algorithm for orthogonal non-negative matrix factorisation, based on a probabilistic model of the alignment data, and apply it to word alignment. This is illustrated on a French-English alignment task from the Hansard.",2004
schulte-im-walde-2010-comparing,http://www.lrec-conf.org/proceedings/lrec2010/pdf/632_Paper.pdf,0,,,,,,,"Comparing Computational Models of Selectional Preferences - Second-order Co-Occurrence vs. Latent Semantic Clusters. This paper presents a comparison of three computational approaches to selectional preferences: (i) an intuitive distributional approach that uses second-order co-occurrence of predicates and complement properties; (ii) an EM-based clustering approach that models the strengths of predicate-noun relationships by latent semantic clusters; and (iii) an extension of the latent semantic clusters by incorporating the MDL principle into the EM training, thus explicitly modelling the predicate-noun selectional preferences by WordNet classes. We describe various experiments on German data and two evaluations, and demonstrate that the simple distributional model outperforms the more complex cluster-based models in most cases, but does itself not always beat the powerful frequency baseline.",Comparing Computational Models of Selectional Preferences - Second-order Co-Occurrence vs. Latent Semantic Clusters,"This paper presents a comparison of three computational approaches to selectional preferences: (i) an intuitive distributional approach that uses second-order co-occurrence of predicates and complement properties; (ii) an EM-based clustering approach that models the strengths of predicate-noun relationships by latent semantic clusters; and (iii) an extension of the latent semantic clusters by incorporating the MDL principle into the EM training, thus explicitly modelling the predicate-noun selectional preferences by WordNet classes. We describe various experiments on German data and two evaluations, and demonstrate that the simple distributional model outperforms the more complex cluster-based models in most cases, but does itself not always beat the powerful frequency baseline.",Comparing Computational Models of Selectional Preferences - Second-order Co-Occurrence vs. Latent Semantic Clusters,"This paper presents a comparison of three computational approaches to selectional preferences: (i) an intuitive distributional approach that uses second-order co-occurrence of predicates and complement properties; (ii) an EM-based clustering approach that models the strengths of predicate-noun relationships by latent semantic clusters; and (iii) an extension of the latent semantic clusters by incorporating the MDL principle into the EM training, thus explicitly modelling the predicate-noun selectional preferences by WordNet classes. We describe various experiments on German data and two evaluations, and demonstrate that the simple distributional model outperforms the more complex cluster-based models in most cases, but does itself not always beat the powerful frequency baseline.",,"Comparing Computational Models of Selectional Preferences - Second-order Co-Occurrence vs. Latent Semantic Clusters. This paper presents a comparison of three computational approaches to selectional preferences: (i) an intuitive distributional approach that uses second-order co-occurrence of predicates and complement properties; (ii) an EM-based clustering approach that models the strengths of predicate-noun relationships by latent semantic clusters; and (iii) an extension of the latent semantic clusters by incorporating the MDL principle into the EM training, thus explicitly modelling the predicate-noun selectional preferences by WordNet classes. 
We describe various experiments on German data and two evaluations, and demonstrate that the simple distributional model outperforms the more complex cluster-based models in most cases, but does itself not always beat the powerful frequency baseline.",2010
nightingale-tanaka-2003-comparing,https://aclanthology.org/W03-0321,0,,,,,,,Comparing the Sentence Alignment Yield from Two News Corpora Using a Dictionary-Based Alignment System. ,Comparing the Sentence Alignment Yield from Two News Corpora Using a Dictionary-Based Alignment System,,Comparing the Sentence Alignment Yield from Two News Corpora Using a Dictionary-Based Alignment System,,,Comparing the Sentence Alignment Yield from Two News Corpora Using a Dictionary-Based Alignment System. ,2003
noklestad-softeland-2007-tagging,https://aclanthology.org/W07-2436,0,,,,,,,"Tagging a Norwegian Speech Corpus. This paper describes work on the grammatical tagging of a newly created Norwegian speech corpus: the first corpus of modern Norwegian speech. We use an iterative procedure to perform computer-aided manual tagging of a part of the corpus. This material is then used to train the final taggers, which are applied to the rest of the corpus. We experiment with taggers that are based on three different data-driven methods: memory-based learning, decision trees, and hidden Markov models, and find that the decision tree tagger performs best. We also test the effects of removing pauses and/or hesitations from the material before training and applying the taggers. We conclude that these attempts at cleaning up hurt the performance of the taggers, indicating that such material, rather than functioning as noise, actually contributes important information about the grammatical function of the words in their nearest context.",Tagging a {N}orwegian Speech Corpus,"This paper describes work on the grammatical tagging of a newly created Norwegian speech corpus: the first corpus of modern Norwegian speech. We use an iterative procedure to perform computer-aided manual tagging of a part of the corpus. This material is then used to train the final taggers, which are applied to the rest of the corpus. We experiment with taggers that are based on three different data-driven methods: memory-based learning, decision trees, and hidden Markov models, and find that the decision tree tagger performs best. We also test the effects of removing pauses and/or hesitations from the material before training and applying the taggers. We conclude that these attempts at cleaning up hurt the performance of the taggers, indicating that such material, rather than functioning as noise, actually contributes important information about the grammatical function of the words in their nearest context.",Tagging a Norwegian Speech Corpus,"This paper describes work on the grammatical tagging of a newly created Norwegian speech corpus: the first corpus of modern Norwegian speech. We use an iterative procedure to perform computer-aided manual tagging of a part of the corpus. This material is then used to train the final taggers, which are applied to the rest of the corpus. We experiment with taggers that are based on three different data-driven methods: memory-based learning, decision trees, and hidden Markov models, and find that the decision tree tagger performs best. We also test the effects of removing pauses and/or hesitations from the material before training and applying the taggers. We conclude that these attempts at cleaning up hurt the performance of the taggers, indicating that such material, rather than functioning as noise, actually contributes important information about the grammatical function of the words in their nearest context.",,"Tagging a Norwegian Speech Corpus. This paper describes work on the grammatical tagging of a newly created Norwegian speech corpus: the first corpus of modern Norwegian speech. We use an iterative procedure to perform computer-aided manual tagging of a part of the corpus. This material is then used to train the final taggers, which are applied to the rest of the corpus. We experiment with taggers that are based on three different data-driven methods: memory-based learning, decision trees, and hidden Markov models, and find that the decision tree tagger performs best. 
We also test the effects of removing pauses and/or hesitations from the material before training and applying the taggers. We conclude that these attempts at cleaning up hurt the performance of the taggers, indicating that such material, rather than functioning as noise, actually contributes important information about the grammatical function of the words in their nearest context.",2007
nikulasdottir-etal-2018-open,https://aclanthology.org/L18-1495,0,,,,,,,"Open ASR for Icelandic: Resources and a Baseline System. Developing language resources is an important task when creating a speech recognition system for a less-resourced language. In this paper we describe available language resources and their preparation for use in a large vocabulary speech recognition (LVSR) system for Icelandic. The content of a speech corpus is analysed and training and test sets compiled, a pronunciation dictionary is extended, and text normalization for language modeling performed. An ASR system based on neural networks is implemented using these resources and tested using different acoustic training sets. Experimental results show a clear increase in word-error-rate (WER) when using smaller training sets, indicating that extension of the speech corpus for training would improve the system. When testing on data with known vocabulary only, the WER is 7.99%, but on an open vocabulary test set the WER is 15.72%. Furthermore, impact of the content of the acoustic training corpus is examined. The current results indicate that an ASR system could profit from carefully selected phonotactical data, however, further experiments are needed to verify this impression.",Open {ASR} for {I}celandic: Resources and a Baseline System,"Developing language resources is an important task when creating a speech recognition system for a less-resourced language. In this paper we describe available language resources and their preparation for use in a large vocabulary speech recognition (LVSR) system for Icelandic. The content of a speech corpus is analysed and training and test sets compiled, a pronunciation dictionary is extended, and text normalization for language modeling performed. An ASR system based on neural networks is implemented using these resources and tested using different acoustic training sets. Experimental results show a clear increase in word-error-rate (WER) when using smaller training sets, indicating that extension of the speech corpus for training would improve the system. When testing on data with known vocabulary only, the WER is 7.99%, but on an open vocabulary test set the WER is 15.72%. Furthermore, impact of the content of the acoustic training corpus is examined. The current results indicate that an ASR system could profit from carefully selected phonotactical data, however, further experiments are needed to verify this impression.",Open ASR for Icelandic: Resources and a Baseline System,"Developing language resources is an important task when creating a speech recognition system for a less-resourced language. In this paper we describe available language resources and their preparation for use in a large vocabulary speech recognition (LVSR) system for Icelandic. The content of a speech corpus is analysed and training and test sets compiled, a pronunciation dictionary is extended, and text normalization for language modeling performed. An ASR system based on neural networks is implemented using these resources and tested using different acoustic training sets. Experimental results show a clear increase in word-error-rate (WER) when using smaller training sets, indicating that extension of the speech corpus for training would improve the system. When testing on data with known vocabulary only, the WER is 7.99%, but on an open vocabulary test set the WER is 15.72%. Furthermore, impact of the content of the acoustic training corpus is examined. 
The current results indicate that an ASR system could profit from carefully selected phonotactical data, however, further experiments are needed to verify this impression.",The project Open ASR for Icelandic was supported by the Icelandic Language Technology Fund (ILTF). ,"Open ASR for Icelandic: Resources and a Baseline System. Developing language resources is an important task when creating a speech recognition system for a less-resourced language. In this paper we describe available language resources and their preparation for use in a large vocabulary speech recognition (LVSR) system for Icelandic. The content of a speech corpus is analysed and training and test sets compiled, a pronunciation dictionary is extended, and text normalization for language modeling performed. An ASR system based on neural networks is implemented using these resources and tested using different acoustic training sets. Experimental results show a clear increase in word-error-rate (WER) when using smaller training sets, indicating that extension of the speech corpus for training would improve the system. When testing on data with known vocabulary only, the WER is 7.99%, but on an open vocabulary test set the WER is 15.72%. Furthermore, impact of the content of the acoustic training corpus is examined. The current results indicate that an ASR system could profit from carefully selected phonotactical data, however, further experiments are needed to verify this impression.",2018
wang-ng-2013-beam,https://aclanthology.org/N13-1050,0,,,,,,,"A Beam-Search Decoder for Normalization of Social Media Text with Application to Machine Translation. Social media texts are written in an informal style, which hinders other natural language processing (NLP) applications such as machine translation. Text normalization is thus important for processing of social media text. Previous work mostly focused on normalizing words by replacing an informal word with its formal form. In this paper, to further improve other downstream NLP applications, we argue that other normalization operations should also be performed, e.g., missing word recovery and punctuation correction. A novel beam-search decoder is proposed to effectively integrate various normalization operations. Empirical results show that our system obtains statistically significant improvements over two strong baselines in both normalization and translation tasks, for both Chinese and English.",A Beam-Search Decoder for Normalization of Social Media Text with Application to Machine Translation,"Social media texts are written in an informal style, which hinders other natural language processing (NLP) applications such as machine translation. Text normalization is thus important for processing of social media text. Previous work mostly focused on normalizing words by replacing an informal word with its formal form. In this paper, to further improve other downstream NLP applications, we argue that other normalization operations should also be performed, e.g., missing word recovery and punctuation correction. A novel beam-search decoder is proposed to effectively integrate various normalization operations. Empirical results show that our system obtains statistically significant improvements over two strong baselines in both normalization and translation tasks, for both Chinese and English.",A Beam-Search Decoder for Normalization of Social Media Text with Application to Machine Translation,"Social media texts are written in an informal style, which hinders other natural language processing (NLP) applications such as machine translation. Text normalization is thus important for processing of social media text. Previous work mostly focused on normalizing words by replacing an informal word with its formal form. In this paper, to further improve other downstream NLP applications, we argue that other normalization operations should also be performed, e.g., missing word recovery and punctuation correction. A novel beam-search decoder is proposed to effectively integrate various normalization operations. Empirical results show that our system obtains statistically significant improvements over two strong baselines in both normalization and translation tasks, for both Chinese and English.",We thank all the anonymous reviewers for their comments which have helped us improve this paper. This research is supported by the Singapore National Research Foundation under its International Research Centre @ Singapore Funding Initiative and administered by the IDM Programme Office.,"A Beam-Search Decoder for Normalization of Social Media Text with Application to Machine Translation. Social media texts are written in an informal style, which hinders other natural language processing (NLP) applications such as machine translation. Text normalization is thus important for processing of social media text. Previous work mostly focused on normalizing words by replacing an informal word with its formal form. 
In this paper, to further improve other downstream NLP applications, we argue that other normalization operations should also be performed, e.g., missing word recovery and punctuation correction. A novel beam-search decoder is proposed to effectively integrate various normalization operations. Empirical results show that our system obtains statistically significant improvements over two strong baselines in both normalization and translation tasks, for both Chinese and English.",2013
hazarika-etal-2018-conversational,https://aclanthology.org/N18-1193,0,,,,,,,"Conversational Memory Network for Emotion Recognition in Dyadic Dialogue Videos. Emotion recognition in conversations is crucial for the development of empathetic machines. Present methods mostly ignore the role of inter-speaker dependency relations while classifying emotions in conversations. In this paper, we address recognizing utterance-level emotions in dyadic conversational videos. We propose a deep neural framework, termed conversational memory network, which leverages contextual information from the conversation history. The framework takes a multimodal approach comprising audio, visual and textual features with gated recurrent units to model past utterances of each speaker into memories. Such memories are then merged using attention-based hops to capture inter-speaker dependencies. Experiments show an accuracy improvement of 3−4% over the state of the art.",Conversational Memory Network for Emotion Recognition in Dyadic Dialogue Videos,"Emotion recognition in conversations is crucial for the development of empathetic machines. Present methods mostly ignore the role of inter-speaker dependency relations while classifying emotions in conversations. In this paper, we address recognizing utterance-level emotions in dyadic conversational videos. We propose a deep neural framework, termed conversational memory network, which leverages contextual information from the conversation history. The framework takes a multimodal approach comprising audio, visual and textual features with gated recurrent units to model past utterances of each speaker into memories. Such memories are then merged using attention-based hops to capture inter-speaker dependencies. Experiments show an accuracy improvement of 3−4% over the state of the art.",Conversational Memory Network for Emotion Recognition in Dyadic Dialogue Videos,"Emotion recognition in conversations is crucial for the development of empathetic machines. Present methods mostly ignore the role of inter-speaker dependency relations while classifying emotions in conversations. In this paper, we address recognizing utterance-level emotions in dyadic conversational videos. We propose a deep neural framework, termed conversational memory network, which leverages contextual information from the conversation history. The framework takes a multimodal approach comprising audio, visual and textual features with gated recurrent units to model past utterances of each speaker into memories. Such memories are then merged using attention-based hops to capture inter-speaker dependencies. Experiments show an accuracy improvement of 3−4% over the state of the art.","This research was supported in part by the National Natural Science Foundation of China under Grant no. 61472266 and by the National University of Singapore (Suzhou) Research Institute, 377 Lin Quan Street, Suzhou Industrial Park, Jiang Su, People's Republic of China, 215123.","Conversational Memory Network for Emotion Recognition in Dyadic Dialogue Videos. Emotion recognition in conversations is crucial for the development of empathetic machines. Present methods mostly ignore the role of inter-speaker dependency relations while classifying emotions in conversations. In this paper, we address recognizing utterance-level emotions in dyadic conversational videos. We propose a deep neural framework, termed conversational memory network, which leverages contextual information from the conversation history. 
The framework takes a multimodal approach comprising audio, visual and textual features with gated recurrent units to model past utterances of each speaker into memories. Such memories are then merged using attention-based hops to capture inter-speaker dependencies. Experiments show an accuracy improvement of 3−4% over the state of the art.",2018
jin-1991-translation,https://aclanthology.org/1991.mtsummit-papers.14,0,,,,,,,"Translation Accuracy and Translation Efficiency. ULTRA (Universal Language Translator) is a multi-lingual bidirectional translation system between English, Spanish, German, Japanese and Chinese. It employs an interlingual structure to translate among these five languages. An interlingual representation is used as a deep structure through which any pair of these languages can be translated in either direction. This paper describes some techniques used in the Chinese system to solve problems in word ordering, language equivalency, Chinese verb constituent and prepositional phrase attachment. By means of these techniques translation quality has been significantly improved. Heuristic search, which results in translation efficiency, is also discussed.",Translation Accuracy and Translation Efficiency,"ULTRA (Universal Language Translator) is a multi-lingual bidirectional translation system between English, Spanish, German, Japanese and Chinese. It employs an interlingual structure to translate among these five languages. An interlingual representation is used as a deep structure through which any pair of these languages can be translated in either direction. This paper describes some techniques used in the Chinese system to solve problems in word ordering, language equivalency, Chinese verb constituent and prepositional phrase attachment. By means of these techniques translation quality has been significantly improved. Heuristic search, which results in translation efficiency, is also discussed.",Translation Accuracy and Translation Efficiency,"ULTRA (Universal Language Translator) is a multi-lingual bidirectional translation system between English, Spanish, German, Japanese and Chinese. It employs an interlingual structure to translate among these five languages. An interlingual representation is used as a deep structure through which any pair of these languages can be translated in either direction. This paper describes some techniques used in the Chinese system to solve problems in word ordering, language equivalency, Chinese verb constituent and prepositional phrase attachment. By means of these techniques translation quality has been significantly improved. Heuristic search, which results in translation efficiency, is also discussed.",,"Translation Accuracy and Translation Efficiency. ULTRA (Universal Language Translator) is a multi-lingual bidirectional translation system between English, Spanish, German, Japanese and Chinese. It employs an interlingual structure to translate among these five languages. An interlingual representation is used as a deep structure through which any pair of these languages can be translated in either direction. This paper describes some techniques used in the Chinese system to solve problems in word ordering, language equivalency, Chinese verb constituent and prepositional phrase attachment. By means of these techniques translation quality has been significantly improved. Heuristic search, which results in translation efficiency, is also discussed.",1991
gene-2021-post,https://aclanthology.org/2021.triton-1.22,0,,,,,,,"The Post-Editing Workflow: Training Challenges for LSPs, Post-Editors and Academia. Language technology is already largely adopted by most Language Service Providers (LSPs) and integrated into their traditional translation processes. In this context, there are many different approaches to applying Post-Editing (PE) of a machine translated text, involving different workflow processes and steps that can be more or less effective and favorable. In the present paper, we propose a 3-step Post-Editing Workflow (PEW). Drawing from industry insight, this paper aims to provide a basic framework for LSPs and Post-Editors on how to streamline Post-Editing workflows in order to improve quality, achieve higher profitability and better return on investment and standardize and facilitate internal processes in terms of management and linguist effort when it comes to PE services. We argue that a comprehensive PEW consists in three essential tasks: Pre-Editing, Post-Editing and Annotation/Machine Translation (MT) evaluation processes (Guerrero, 2018) supported by three essential roles: Pre-Editor, Post-Editor and Annotator (Gene, 2020). Furthermore, the present paper demonstrates the training challenges arising from this PEW, supported by empirical research results, as reflected in a digital survey among language industry professionals (Gene, 2020), which was conducted in the context of a Post-Editing Webinar. Its sample comprised 51 representatives of LSPs and 12 representatives of SLVs (Single Language Vendors) representatives.","The Post-Editing Workflow: Training Challenges for {LSP}s, Post-Editors and Academia","Language technology is already largely adopted by most Language Service Providers (LSPs) and integrated into their traditional translation processes. In this context, there are many different approaches to applying Post-Editing (PE) of a machine translated text, involving different workflow processes and steps that can be more or less effective and favorable. In the present paper, we propose a 3-step Post-Editing Workflow (PEW). Drawing from industry insight, this paper aims to provide a basic framework for LSPs and Post-Editors on how to streamline Post-Editing workflows in order to improve quality, achieve higher profitability and better return on investment and standardize and facilitate internal processes in terms of management and linguist effort when it comes to PE services. We argue that a comprehensive PEW consists in three essential tasks: Pre-Editing, Post-Editing and Annotation/Machine Translation (MT) evaluation processes (Guerrero, 2018) supported by three essential roles: Pre-Editor, Post-Editor and Annotator (Gene, 2020). Furthermore, the present paper demonstrates the training challenges arising from this PEW, supported by empirical research results, as reflected in a digital survey among language industry professionals (Gene, 2020), which was conducted in the context of a Post-Editing Webinar. Its sample comprised 51 representatives of LSPs and 12 representatives of SLVs (Single Language Vendors) representatives.","The Post-Editing Workflow: Training Challenges for LSPs, Post-Editors and Academia","Language technology is already largely adopted by most Language Service Providers (LSPs) and integrated into their traditional translation processes. 
In this context, there are many different approaches to applying Post-Editing (PE) of a machine translated text, involving different workflow processes and steps that can be more or less effective and favorable. In the present paper, we propose a 3-step Post-Editing Workflow (PEW). Drawing from industry insight, this paper aims to provide a basic framework for LSPs and Post-Editors on how to streamline Post-Editing workflows in order to improve quality, achieve higher profitability and better return on investment and standardize and facilitate internal processes in terms of management and linguist effort when it comes to PE services. We argue that a comprehensive PEW consists in three essential tasks: Pre-Editing, Post-Editing and Annotation/Machine Translation (MT) evaluation processes (Guerrero, 2018) supported by three essential roles: Pre-Editor, Post-Editor and Annotator (Gene, 2020). Furthermore, the present paper demonstrates the training challenges arising from this PEW, supported by empirical research results, as reflected in a digital survey among language industry professionals (Gene, 2020), which was conducted in the context of a Post-Editing Webinar. Its sample comprised 51 representatives of LSPs and 12 representatives of SLVs (Single Language Vendors) representatives.",,"The Post-Editing Workflow: Training Challenges for LSPs, Post-Editors and Academia. Language technology is already largely adopted by most Language Service Providers (LSPs) and integrated into their traditional translation processes. In this context, there are many different approaches to applying Post-Editing (PE) of a machine translated text, involving different workflow processes and steps that can be more or less effective and favorable. In the present paper, we propose a 3-step Post-Editing Workflow (PEW). Drawing from industry insight, this paper aims to provide a basic framework for LSPs and Post-Editors on how to streamline Post-Editing workflows in order to improve quality, achieve higher profitability and better return on investment and standardize and facilitate internal processes in terms of management and linguist effort when it comes to PE services. We argue that a comprehensive PEW consists in three essential tasks: Pre-Editing, Post-Editing and Annotation/Machine Translation (MT) evaluation processes (Guerrero, 2018) supported by three essential roles: Pre-Editor, Post-Editor and Annotator (Gene, 2020). Furthermore, the present paper demonstrates the training challenges arising from this PEW, supported by empirical research results, as reflected in a digital survey among language industry professionals (Gene, 2020), which was conducted in the context of a Post-Editing Webinar. Its sample comprised 51 representatives of LSPs and 12 representatives of SLVs (Single Language Vendors) representatives.",2021
quasthoff-wolff-2000-flexible,http://www.lrec-conf.org/proceedings/lrec2000/pdf/226.pdf,0,,,,,,,"A Flexible Infrastructure for Large Monolingual Corpora. In this paper we describe a flexible and portable infrastructure for setting up large monolingual language corpora. The approach is based on collecting a large amount of monolingual text from various sources. The input data is processed on the basis of a sentence-based text segmentation algorithm. We describe the entry structure of the corpus database as well as various query types and tools for information extraction. Among them, the extraction and usage of sentence-based word collocations is discussed in detail. Finally we give an overview of different applications for this language resource. A WWW interface allows for public access to most of the data and information extraction tools (http://wortschatz.uni-leipzig.de).",A Flexible Infrastructure for Large Monolingual Corpora,"In this paper we describe a flexible and portable infrastructure for setting up large monolingual language corpora. The approach is based on collecting a large amount of monolingual text from various sources. The input data is processed on the basis of a sentence-based text segmentation algorithm. We describe the entry structure of the corpus database as well as various query types and tools for information extraction. Among them, the extraction and usage of sentence-based word collocations is discussed in detail. Finally we give an overview of different applications for this language resource. A WWW interface allows for public access to most of the data and information extraction tools (http://wortschatz.uni-leipzig.de).",A Flexible Infrastructure for Large Monolingual Corpora,"In this paper we describe a flexible and portable infrastructure for setting up large monolingual language corpora. The approach is based on collecting a large amount of monolingual text from various sources. The input data is processed on the basis of a sentence-based text segmentation algorithm. We describe the entry structure of the corpus database as well as various query types and tools for information extraction. Among them, the extraction and usage of sentence-based word collocations is discussed in detail. Finally we give an overview of different applications for this language resource. A WWW interface allows for public access to most of the data and information extraction tools (http://wortschatz.uni-leipzig.de).",,"A Flexible Infrastructure for Large Monolingual Corpora. In this paper we describe a flexible and portable infrastructure for setting up large monolingual language corpora. The approach is based on collecting a large amount of monolingual text from various sources. The input data is processed on the basis of a sentence-based text segmentation algorithm. We describe the entry structure of the corpus database as well as various query types and tools for information extraction. Among them, the extraction and usage of sentence-based word collocations is discussed in detail. Finally we give an overview of different applications for this language resource. A WWW interface allows for public access to most of the data and information extraction tools (http://wortschatz.uni-leipzig.de).",2000
yuan-etal-2021-cambridge,https://aclanthology.org/2021.semeval-1.74,0,,,,,,,"Cambridge at SemEval-2021 Task 1: An Ensemble of Feature-Based and Neural Models for Lexical Complexity Prediction. This paper describes our submission to the SemEval-2021 shared task on Lexical Complexity Prediction. We approached it as a regression problem and present an ensemble combining four systems, one feature-based and three neural with fine-tuning, frequency pre-training and multi-task learning, achieving Pearson scores of 0.8264 and 0.7556 on the trial and test sets respectively (sub-task 1). We further present our analysis of the results and discuss our findings.",{C}ambridge at {S}em{E}val-2021 Task 1: An Ensemble of Feature-Based and Neural Models for Lexical Complexity Prediction,"This paper describes our submission to the SemEval-2021 shared task on Lexical Complexity Prediction. We approached it as a regression problem and present an ensemble combining four systems, one feature-based and three neural with fine-tuning, frequency pre-training and multi-task learning, achieving Pearson scores of 0.8264 and 0.7556 on the trial and test sets respectively (sub-task 1). We further present our analysis of the results and discuss our findings.",Cambridge at SemEval-2021 Task 1: An Ensemble of Feature-Based and Neural Models for Lexical Complexity Prediction,"This paper describes our submission to the SemEval-2021 shared task on Lexical Complexity Prediction. We approached it as a regression problem and present an ensemble combining four systems, one feature-based and three neural with fine-tuning, frequency pre-training and multi-task learning, achieving Pearson scores of 0.8264 and 0.7556 on the trial and test sets respectively (sub-task 1). We further present our analysis of the results and discuss our findings.","We thank Sian Gooding and Ekaterina Kochmar for support and advice. This paper reports on research supported by Cambridge Assessment, University of Cambridge. This work was performed using resources provided by the Cambridge Service for Data Driven Discovery operated by the University of Cambridge Research Computing Service, provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/P020259/1), and DiRAC funding from the Science and Technology Facilities Council. We acknowledge NVIDIA for an Academic Hardware Grant.","Cambridge at SemEval-2021 Task 1: An Ensemble of Feature-Based and Neural Models for Lexical Complexity Prediction. This paper describes our submission to the SemEval-2021 shared task on Lexical Complexity Prediction. We approached it as a regression problem and present an ensemble combining four systems, one feature-based and three neural with fine-tuning, frequency pre-training and multi-task learning, achieving Pearson scores of 0.8264 and 0.7556 on the trial and test sets respectively (sub-task 1). We further present our analysis of the results and discuss our findings.",2021
filice-etal-2017-kelp,https://aclanthology.org/S17-2053,0,,,,,,,"KeLP at SemEval-2017 Task 3: Learning Pairwise Patterns in Community Question Answering. This paper describes the KeLP system participating in the SemEval-2017 community Question Answering (cQA) task. The system is a refinement of the kernel-based sentence pair modeling we proposed for the previous year challenge. It is implemented within the Kernel-based Learning Platform called KeLP, from which we inherit the team's name. Our primary submission ranked first in subtask A, and third in subtasks B and C, being the only systems appearing in the top-3 ranking for all the English subtasks. This shows that the proposed framework, which has minor variations among the three subtasks, is extremely flexible and effective in tackling learning tasks defined on sentence pairs.",{K}e{LP} at {S}em{E}val-2017 Task 3: Learning Pairwise Patterns in Community Question Answering,"This paper describes the KeLP system participating in the SemEval-2017 community Question Answering (cQA) task. The system is a refinement of the kernel-based sentence pair modeling we proposed for the previous year challenge. It is implemented within the Kernel-based Learning Platform called KeLP, from which we inherit the team's name. Our primary submission ranked first in subtask A, and third in subtasks B and C, being the only systems appearing in the top-3 ranking for all the English subtasks. This shows that the proposed framework, which has minor variations among the three subtasks, is extremely flexible and effective in tackling learning tasks defined on sentence pairs.",KeLP at SemEval-2017 Task 3: Learning Pairwise Patterns in Community Question Answering,"This paper describes the KeLP system participating in the SemEval-2017 community Question Answering (cQA) task. The system is a refinement of the kernel-based sentence pair modeling we proposed for the previous year challenge. It is implemented within the Kernel-based Learning Platform called KeLP, from which we inherit the team's name. Our primary submission ranked first in subtask A, and third in subtasks B and C, being the only systems appearing in the top-3 ranking for all the English subtasks. This shows that the proposed framework, which has minor variations among the three subtasks, is extremely flexible and effective in tackling learning tasks defined on sentence pairs.","This work has been partially supported by the EC project CogNet, 671625 (H2020-ICT-2014-2, Research and Innovation action).","KeLP at SemEval-2017 Task 3: Learning Pairwise Patterns in Community Question Answering. This paper describes the KeLP system participating in the SemEval-2017 community Question Answering (cQA) task. The system is a refinement of the kernel-based sentence pair modeling we proposed for the previous year challenge. It is implemented within the Kernel-based Learning Platform called KeLP, from which we inherit the team's name. Our primary submission ranked first in subtask A, and third in subtasks B and C, being the only systems appearing in the top-3 ranking for all the English subtasks. This shows that the proposed framework, which has minor variations among the three subtasks, is extremely flexible and effective in tackling learning tasks defined on sentence pairs.",2017
zhou-etal-2021-low,https://aclanthology.org/2021.sustainlp-1.1,0,,,,,,,"Low Resource Quadratic Forms for Knowledge Graph Embeddings. We address the problem of link prediction between entities and relations of knowledge graphs. State of the art techniques that address this problem, while increasingly accurate, are computationally intensive. In this paper we cast link prediction as a sparse convex program whose solution defines a quadratic form that is used as a ranking function. The structure of our convex program is such that standard support vector machine software packages, which are numerically robust and efficient, can solve it. We show that on benchmark data sets, our model's performance is competitive with state of the art models, but training times can be reduced by a factor of 40 using only CPU-based (and not GPU-accelerated) computing resources. This approach may be suitable for applications where balancing the demands of graph completion performance against computational efficiency is a desirable trade-off.",Low Resource Quadratic Forms for Knowledge Graph Embeddings,"We address the problem of link prediction between entities and relations of knowledge graphs. State of the art techniques that address this problem, while increasingly accurate, are computationally intensive. In this paper we cast link prediction as a sparse convex program whose solution defines a quadratic form that is used as a ranking function. The structure of our convex program is such that standard support vector machine software packages, which are numerically robust and efficient, can solve it. We show that on benchmark data sets, our model's performance is competitive with state of the art models, but training times can be reduced by a factor of 40 using only CPU-based (and not GPU-accelerated) computing resources. This approach may be suitable for applications where balancing the demands of graph completion performance against computational efficiency is a desirable trade-off.",Low Resource Quadratic Forms for Knowledge Graph Embeddings,"We address the problem of link prediction between entities and relations of knowledge graphs. State of the art techniques that address this problem, while increasingly accurate, are computationally intensive. In this paper we cast link prediction as a sparse convex program whose solution defines a quadratic form that is used as a ranking function. The structure of our convex program is such that standard support vector machine software packages, which are numerically robust and efficient, can solve it. We show that on benchmark data sets, our model's performance is competitive with state of the art models, but training times can be reduced by a factor of 40 using only CPU-based (and not GPU-accelerated) computing resources. This approach may be suitable for applications where balancing the demands of graph completion performance against computational efficiency is a desirable trade-off.",,"Low Resource Quadratic Forms for Knowledge Graph Embeddings. We address the problem of link prediction between entities and relations of knowledge graphs. State of the art techniques that address this problem, while increasingly accurate, are computationally intensive. In this paper we cast link prediction as a sparse convex program whose solution defines a quadratic form that is used as a ranking function. The structure of our convex program is such that standard support vector machine software packages, which are numerically robust and efficient, can solve it. 
We show that on benchmark data sets, our model's performance is competitive with state of the art models, but training times can be reduced by a factor of 40 using only CPU-based (and not GPU-accelerated) computing resources. This approach may be suitable for applications where balancing the demands of graph completion performance against computational efficiency is a desirable trade-off.",2021
li-etal-2015-dependency-parsing,https://aclanthology.org/Y15-2039,0,,,,,,,"Dependency parsing for Chinese long sentence: A second-stage main structure parsing method. This paper explores the problem of parsing long Chinese sentences. Inspired by human sentence processing, a second-stage parsing method, referred to as main structure parsing in this paper, is proposed to improve parsing performance while maintaining high accuracy and efficiency on long Chinese sentences. Three different methods have been attempted in this paper, and the results show that the best performance comes from the method using the Chinese comma as the boundary of the sub-sentence. In our experiments on the Chinese Dependency Treebank 1.0 data, the approach improves long dependency accuracy by around 6.0% over the baseline parser and 3.2% over the previous best model.",Dependency parsing for {C}hinese long sentence: A second-stage main structure parsing method,"This paper explores the problem of parsing long Chinese sentences. Inspired by human sentence processing, a second-stage parsing method, referred to as main structure parsing in this paper, is proposed to improve parsing performance while maintaining high accuracy and efficiency on long Chinese sentences. Three different methods have been attempted in this paper, and the results show that the best performance comes from the method using the Chinese comma as the boundary of the sub-sentence. In our experiments on the Chinese Dependency Treebank 1.0 data, the approach improves long dependency accuracy by around 6.0% over the baseline parser and 3.2% over the previous best model.",Dependency parsing for Chinese long sentence: A second-stage main structure parsing method,"This paper explores the problem of parsing long Chinese sentences. Inspired by human sentence processing, a second-stage parsing method, referred to as main structure parsing in this paper, is proposed to improve parsing performance while maintaining high accuracy and efficiency on long Chinese sentences. Three different methods have been attempted in this paper, and the results show that the best performance comes from the method using the Chinese comma as the boundary of the sub-sentence. In our experiments on the Chinese Dependency Treebank 1.0 data, the approach improves long dependency accuracy by around 6.0% over the baseline parser and 3.2% over the previous best model.",,"Dependency parsing for Chinese long sentence: A second-stage main structure parsing method. This paper explores the problem of parsing long Chinese sentences. Inspired by human sentence processing, a second-stage parsing method, referred to as main structure parsing in this paper, is proposed to improve parsing performance while maintaining high accuracy and efficiency on long Chinese sentences. Three different methods have been attempted in this paper, and the results show that the best performance comes from the method using the Chinese comma as the boundary of the sub-sentence. In our experiments on the Chinese Dependency Treebank 1.0 data, the approach improves long dependency accuracy by around 6.0% over the baseline parser and 3.2% over the previous best model.",2015
tkachenko-etal-2018-searching,https://aclanthology.org/P18-1112,0,,,,,,,"Searching for the X-Factor: Exploring Corpus Subjectivity for Word Embeddings. We explore the notion of subjectivity, and hypothesize that word embeddings learnt from input corpora of varying levels of subjectivity behave differently on natural language processing tasks such as classifying a sentence by sentiment, subjectivity, or topic. Through systematic comparative analyses, we establish this to be the case indeed. Moreover, based on the discovery of the outsized role that sentiment words play on subjectivity-sensitive tasks such as sentiment classification, we develop a novel word embedding SentiVec which is infused with sentiment information from a lexical resource, and is shown to outperform baselines on such tasks.",Searching for the {X}-Factor: Exploring Corpus Subjectivity for Word Embeddings,"We explore the notion of subjectivity, and hypothesize that word embeddings learnt from input corpora of varying levels of subjectivity behave differently on natural language processing tasks such as classifying a sentence by sentiment, subjectivity, or topic. Through systematic comparative analyses, we establish this to be the case indeed. Moreover, based on the discovery of the outsized role that sentiment words play on subjectivity-sensitive tasks such as sentiment classification, we develop a novel word embedding SentiVec which is infused with sentiment information from a lexical resource, and is shown to outperform baselines on such tasks.",Searching for the X-Factor: Exploring Corpus Subjectivity for Word Embeddings,"We explore the notion of subjectivity, and hypothesize that word embeddings learnt from input corpora of varying levels of subjectivity behave differently on natural language processing tasks such as classifying a sentence by sentiment, subjectivity, or topic. Through systematic comparative analyses, we establish this to be the case indeed. Moreover, based on the discovery of the outsized role that sentiment words play on subjectivity-sensitive tasks such as sentiment classification, we develop a novel word embedding SentiVec which is infused with sentiment information from a lexical resource, and is shown to outperform baselines on such tasks.","This research is supported by the National Research Foundation, Prime Minister's Office, Singapore under its NRF Fellowship Programme (Award No. NRF-NRFF2016-07).","Searching for the X-Factor: Exploring Corpus Subjectivity for Word Embeddings. We explore the notion of subjectivity, and hypothesize that word embeddings learnt from input corpora of varying levels of subjectivity behave differently on natural language processing tasks such as classifying a sentence by sentiment, subjectivity, or topic. Through systematic comparative analyses, we establish this to be the case indeed. Moreover, based on the discovery of the outsized role that sentiment words play on subjectivity-sensitive tasks such as sentiment classification, we develop a novel word embedding SentiVec which is infused with sentiment information from a lexical resource, and is shown to outperform baselines on such tasks.",2018
shuster-etal-2020-image,https://aclanthology.org/2020.acl-main.219,0,,,,,,,"Image-Chat: Engaging Grounded Conversations. To achieve the long-term goal of machines being able to engage humans in conversation, our models should captivate the interest of their speaking partners. Communication grounded in images, whereby a dialogue is conducted based on a given photo, is a setup naturally appealing to humans (Hu et al., 2014). In this work we study large-scale architectures and datasets for this goal. We test a set of neural architectures using state-of-the-art image and text representations, considering various ways to fuse the components. To test such models, we collect a dataset of grounded human-human conversations, where speakers are asked to play roles given a provided emotional mood or style, as the use of such traits is also a key factor in engagingness (Guo et al., 2019). Our dataset, Image-Chat, consists of 202k dialogues over 202k images using 215 possible style traits. Automatic metrics and human evaluations of engagingness show the efficacy of our approach; in particular, we obtain state-of-the-art performance on the existing IGC task, and our best performing model is almost on par with humans on the Image-Chat test set (preferred 47.7% of the time).",Image-Chat: Engaging Grounded Conversations,"To achieve the long-term goal of machines being able to engage humans in conversation, our models should captivate the interest of their speaking partners. Communication grounded in images, whereby a dialogue is conducted based on a given photo, is a setup naturally appealing to humans (Hu et al., 2014). In this work we study large-scale architectures and datasets for this goal. We test a set of neural architectures using state-of-the-art image and text representations, considering various ways to fuse the components. To test such models, we collect a dataset of grounded human-human conversations, where speakers are asked to play roles given a provided emotional mood or style, as the use of such traits is also a key factor in engagingness (Guo et al., 2019). Our dataset, Image-Chat, consists of 202k dialogues over 202k images using 215 possible style traits. Automatic metrics and human evaluations of engagingness show the efficacy of our approach; in particular, we obtain state-of-the-art performance on the existing IGC task, and our best performing model is almost on par with humans on the Image-Chat test set (preferred 47.7% of the time).",Image-Chat: Engaging Grounded Conversations,"To achieve the long-term goal of machines being able to engage humans in conversation, our models should captivate the interest of their speaking partners. Communication grounded in images, whereby a dialogue is conducted based on a given photo, is a setup naturally appealing to humans (Hu et al., 2014). In this work we study large-scale architectures and datasets for this goal. We test a set of neural architectures using state-of-the-art image and text representations, considering various ways to fuse the components. To test such models, we collect a dataset of grounded human-human conversations, where speakers are asked to play roles given a provided emotional mood or style, as the use of such traits is also a key factor in engagingness (Guo et al., 2019). Our dataset, Image-Chat, consists of 202k dialogues over 202k images using 215 possible style traits. 
Automatic metrics and human evaluations of engagingness show the efficacy of our approach; in particular, we obtain state-of-the-art performance on the existing IGC task, and our best performing model is almost on par with humans on the Image-Chat test set (preferred 47.7% of the time).",,"Image-Chat: Engaging Grounded Conversations. To achieve the long-term goal of machines being able to engage humans in conversation, our models should captivate the interest of their speaking partners. Communication grounded in images, whereby a dialogue is conducted based on a given photo, is a setup naturally appealing to humans (Hu et al., 2014). In this work we study large-scale architectures and datasets for this goal. We test a set of neural architectures using state-of-the-art image and text representations, considering various ways to fuse the components. To test such models, we collect a dataset of grounded human-human conversations, where speakers are asked to play roles given a provided emotional mood or style, as the use of such traits is also a key factor in engagingness (Guo et al., 2019). Our dataset, Image-Chat, consists of 202k dialogues over 202k images using 215 possible style traits. Automatic metrics and human evaluations of engagingness show the efficacy of our approach; in particular, we obtain state-of-the-art performance on the existing IGC task, and our best performing model is almost on par with humans on the Image-Chat test set (preferred 47.7% of the time).",2020
xia-etal-2000-comparing,https://aclanthology.org/W00-1208,0,,,,,,,"Comparing Lexicalized Treebank Grammars Extracted from Chinese, Korean, and English Corpora. In this paper, we present a method for comparing Lexicalized Tree Adjoining Grammars extracted from annotated corpora for three languages: English, Chinese and Korean. This method makes it possible to do a quantitative comparison between the syntactic structures of each language, thereby providing a way of testing the Universal Grammar Hypothesis, the foundation of modern linguistic theories.","Comparing Lexicalized Treebank Grammars Extracted from {C}hinese, {K}orean, and {E}nglish Corpora","In this paper, we present a method for comparing Lexicalized Tree Adjoining Grammars extracted from annotated corpora for three languages: English, Chinese and Korean. This method makes it possible to do a quantitative comparison between the syntactic structures of each language, thereby providing a way of testing the Universal Grammar Hypothesis, the foundation of modern linguistic theories.","Comparing Lexicalized Treebank Grammars Extracted from Chinese, Korean, and English Corpora","In this paper, we present a method for comparing Lexicalized Tree Adjoining Grammars extracted from annotated corpora for three languages: English, Chinese and Korean. This method makes it possible to do a quantitative comparison between the syntactic structures of each language, thereby providing a way of testing the Universal Grammar Hypothesis, the foundation of modern linguistic theories.",,"Comparing Lexicalized Treebank Grammars Extracted from Chinese, Korean, and English Corpora. In this paper, we present a method for comparing Lexicalized Tree Adjoining Grammars extracted from annotated corpora for three languages: English, Chinese and Korean. This method makes it possible to do a quantitative comparison between the syntactic structures of each language, thereby providing a way of testing the Universal Grammar Hypothesis, the foundation of modern linguistic theories.",2000
lubis-etal-2018-unsupervised,https://aclanthology.org/W18-5017,1,,,,health,,,"Unsupervised Counselor Dialogue Clustering for Positive Emotion Elicitation in Neural Dialogue System. Positive emotion elicitation seeks to improve user's emotional state through dialogue system interaction, where a chatbased scenario is layered with an implicit goal to address user's emotional needs. Standard neural dialogue system approaches still fall short in this situation as they tend to generate only short, generic responses. Learning from expert actions is critical, as these potentially differ from standard dialogue acts. In this paper, we propose using a hierarchical neural network for response generation that is conditioned on 1) expert's action, 2) dialogue context, and 3) user emotion, encoded from user input. We construct a corpus of interactions between a counselor and 30 participants following a negative emotional exposure to learn expert actions and responses in a positive emotion elicitation scenario. Instead of relying on the expensive, labor intensive, and often ambiguous human annotations, we unsupervisedly cluster the expert's responses and use the resulting labels to train the network. Our experiments and evaluation show that the proposed approach yields lower perplexity and generates a larger variety of responses.",Unsupervised Counselor Dialogue Clustering for Positive Emotion Elicitation in Neural Dialogue System,"Positive emotion elicitation seeks to improve user's emotional state through dialogue system interaction, where a chatbased scenario is layered with an implicit goal to address user's emotional needs. Standard neural dialogue system approaches still fall short in this situation as they tend to generate only short, generic responses. Learning from expert actions is critical, as these potentially differ from standard dialogue acts. In this paper, we propose using a hierarchical neural network for response generation that is conditioned on 1) expert's action, 2) dialogue context, and 3) user emotion, encoded from user input. We construct a corpus of interactions between a counselor and 30 participants following a negative emotional exposure to learn expert actions and responses in a positive emotion elicitation scenario. Instead of relying on the expensive, labor intensive, and often ambiguous human annotations, we unsupervisedly cluster the expert's responses and use the resulting labels to train the network. Our experiments and evaluation show that the proposed approach yields lower perplexity and generates a larger variety of responses.",Unsupervised Counselor Dialogue Clustering for Positive Emotion Elicitation in Neural Dialogue System,"Positive emotion elicitation seeks to improve user's emotional state through dialogue system interaction, where a chatbased scenario is layered with an implicit goal to address user's emotional needs. Standard neural dialogue system approaches still fall short in this situation as they tend to generate only short, generic responses. Learning from expert actions is critical, as these potentially differ from standard dialogue acts. In this paper, we propose using a hierarchical neural network for response generation that is conditioned on 1) expert's action, 2) dialogue context, and 3) user emotion, encoded from user input. We construct a corpus of interactions between a counselor and 30 participants following a negative emotional exposure to learn expert actions and responses in a positive emotion elicitation scenario. 
Instead of relying on the expensive, labor intensive, and often ambiguous human annotations, we unsupervisedly cluster the expert's responses and use the resulting labels to train the network. Our experiments and evaluation show that the proposed approach yields lower perplexity and generates a larger variety of responses.",Part of this work was supported by JSPS KAKENHI Grant Numbers JP17H06101 and JP17K00237.,"Unsupervised Counselor Dialogue Clustering for Positive Emotion Elicitation in Neural Dialogue System. Positive emotion elicitation seeks to improve user's emotional state through dialogue system interaction, where a chatbased scenario is layered with an implicit goal to address user's emotional needs. Standard neural dialogue system approaches still fall short in this situation as they tend to generate only short, generic responses. Learning from expert actions is critical, as these potentially differ from standard dialogue acts. In this paper, we propose using a hierarchical neural network for response generation that is conditioned on 1) expert's action, 2) dialogue context, and 3) user emotion, encoded from user input. We construct a corpus of interactions between a counselor and 30 participants following a negative emotional exposure to learn expert actions and responses in a positive emotion elicitation scenario. Instead of relying on the expensive, labor intensive, and often ambiguous human annotations, we unsupervisedly cluster the expert's responses and use the resulting labels to train the network. Our experiments and evaluation show that the proposed approach yields lower perplexity and generates a larger variety of responses.",2018
van-halteren-2008-source,https://aclanthology.org/C08-1118,0,,,,,,,Source Language Markers in EUROPARL Translations. This paper shows that it is very often possible to identify the source language of medium-length speeches in the EUROPARL corpus on the basis of frequency counts of word n-grams (87.2%-96.7% accuracy depending on classification method). The paper also examines in detail which positive markers are most powerful and identifies a number of linguistic aspects as well as culture- and domain-related ones.,Source Language Markers in {EUROPARL} Translations,This paper shows that it is very often possible to identify the source language of medium-length speeches in the EUROPARL corpus on the basis of frequency counts of word n-grams (87.2%-96.7% accuracy depending on classification method). The paper also examines in detail which positive markers are most powerful and identifies a number of linguistic aspects as well as culture- and domain-related ones.,Source Language Markers in EUROPARL Translations,This paper shows that it is very often possible to identify the source language of medium-length speeches in the EUROPARL corpus on the basis of frequency counts of word n-grams (87.2%-96.7% accuracy depending on classification method). The paper also examines in detail which positive markers are most powerful and identifies a number of linguistic aspects as well as culture- and domain-related ones.,,Source Language Markers in EUROPARL Translations. This paper shows that it is very often possible to identify the source language of medium-length speeches in the EUROPARL corpus on the basis of frequency counts of word n-grams (87.2%-96.7% accuracy depending on classification method). The paper also examines in detail which positive markers are most powerful and identifies a number of linguistic aspects as well as culture- and domain-related ones.,2008
karimi-tang-2019-learning,https://aclanthology.org/N19-1347,1,,,,disinformation_and_fake_news,,,"Learning Hierarchical Discourse-level Structure for Fake News Detection. On the one hand, nowadays, fake news articles are easily propagated through various online media platforms and have become a grand threat to the trustworthiness of information. On the other hand, our understanding of the language of fake news is still minimal. Incorporating hierarchical discourse-level structure of fake and real news articles is one crucial step toward a better understanding of how these articles are structured. Nevertheless, this has rarely been investigated in the fake news detection domain and faces tremendous challenges. First, existing methods for capturing discourse-level structure rely on annotated corpora which are not available for fake news datasets. Second, how to extract out useful information from such discovered structures is another challenge. To address these challenges, we propose Hierarchical Discourselevel Structure for Fake news detection. HDSF learns and constructs a discourse-level structure for fake/real news articles in an automated and data-driven manner. Moreover, we identify insightful structure-related properties, which can explain the discovered structures and boost our understating of fake news. Conducted experiments show the effectiveness of the proposed approach. Further structural analysis suggests that real and fake news present substantial differences in the hierarchical discourse-level structures.",Learning Hierarchical Discourse-level Structure for Fake News Detection,"On the one hand, nowadays, fake news articles are easily propagated through various online media platforms and have become a grand threat to the trustworthiness of information. On the other hand, our understanding of the language of fake news is still minimal. Incorporating hierarchical discourse-level structure of fake and real news articles is one crucial step toward a better understanding of how these articles are structured. Nevertheless, this has rarely been investigated in the fake news detection domain and faces tremendous challenges. First, existing methods for capturing discourse-level structure rely on annotated corpora which are not available for fake news datasets. Second, how to extract out useful information from such discovered structures is another challenge. To address these challenges, we propose Hierarchical Discourselevel Structure for Fake news detection. HDSF learns and constructs a discourse-level structure for fake/real news articles in an automated and data-driven manner. Moreover, we identify insightful structure-related properties, which can explain the discovered structures and boost our understating of fake news. Conducted experiments show the effectiveness of the proposed approach. Further structural analysis suggests that real and fake news present substantial differences in the hierarchical discourse-level structures.",Learning Hierarchical Discourse-level Structure for Fake News Detection,"On the one hand, nowadays, fake news articles are easily propagated through various online media platforms and have become a grand threat to the trustworthiness of information. On the other hand, our understanding of the language of fake news is still minimal. Incorporating hierarchical discourse-level structure of fake and real news articles is one crucial step toward a better understanding of how these articles are structured. 
Nevertheless, this has rarely been investigated in the fake news detection domain and faces tremendous challenges. First, existing methods for capturing discourse-level structure rely on annotated corpora which are not available for fake news datasets. Second, how to extract out useful information from such discovered structures is another challenge. To address these challenges, we propose Hierarchical Discourselevel Structure for Fake news detection. HDSF learns and constructs a discourse-level structure for fake/real news articles in an automated and data-driven manner. Moreover, we identify insightful structure-related properties, which can explain the discovered structures and boost our understating of fake news. Conducted experiments show the effectiveness of the proposed approach. Further structural analysis suggests that real and fake news present substantial differences in the hierarchical discourse-level structures.",,"Learning Hierarchical Discourse-level Structure for Fake News Detection. On the one hand, nowadays, fake news articles are easily propagated through various online media platforms and have become a grand threat to the trustworthiness of information. On the other hand, our understanding of the language of fake news is still minimal. Incorporating hierarchical discourse-level structure of fake and real news articles is one crucial step toward a better understanding of how these articles are structured. Nevertheless, this has rarely been investigated in the fake news detection domain and faces tremendous challenges. First, existing methods for capturing discourse-level structure rely on annotated corpora which are not available for fake news datasets. Second, how to extract out useful information from such discovered structures is another challenge. To address these challenges, we propose Hierarchical Discourselevel Structure for Fake news detection. HDSF learns and constructs a discourse-level structure for fake/real news articles in an automated and data-driven manner. Moreover, we identify insightful structure-related properties, which can explain the discovered structures and boost our understating of fake news. Conducted experiments show the effectiveness of the proposed approach. Further structural analysis suggests that real and fake news present substantial differences in the hierarchical discourse-level structures.",2019
bari-etal-2021-uxla,https://aclanthology.org/2021.acl-long.154,0,,,,,,,"UXLA: A Robust Unsupervised Data Augmentation Framework for Zero-Resource Cross-Lingual NLP. Transfer learning has yielded state-of-the-art (SoTA) results in many supervised NLP tasks. However, annotated data for every target task in every target language is rare, especially for low-resource languages. We propose UXLA a novel unsupervised data augmentation framework for zero-resource transfer learning scenarios. In particular, UXLA aims to solve cross-lingual adaptation problems from a source language task distribution to an unknown target language task distribution, assuming no training label in the target language. At its core, UXLA performs simultaneous self-training with data augmentation and unsupervised sample selection. To show its effectiveness, we conduct extensive experiments on three diverse zero-resource cross-lingual transfer tasks. UXLA achieves SoTA results in all the tasks, outperforming the baselines by a good margin. With an in-depth framework dissection, we demonstrate the cumulative contributions of different components to its success.",{UXLA}: A Robust Unsupervised Data Augmentation Framework for Zero-Resource Cross-Lingual {NLP},"Transfer learning has yielded state-of-the-art (SoTA) results in many supervised NLP tasks. However, annotated data for every target task in every target language is rare, especially for low-resource languages. We propose UXLA a novel unsupervised data augmentation framework for zero-resource transfer learning scenarios. In particular, UXLA aims to solve cross-lingual adaptation problems from a source language task distribution to an unknown target language task distribution, assuming no training label in the target language. At its core, UXLA performs simultaneous self-training with data augmentation and unsupervised sample selection. To show its effectiveness, we conduct extensive experiments on three diverse zero-resource cross-lingual transfer tasks. UXLA achieves SoTA results in all the tasks, outperforming the baselines by a good margin. With an in-depth framework dissection, we demonstrate the cumulative contributions of different components to its success.",UXLA: A Robust Unsupervised Data Augmentation Framework for Zero-Resource Cross-Lingual NLP,"Transfer learning has yielded state-of-the-art (SoTA) results in many supervised NLP tasks. However, annotated data for every target task in every target language is rare, especially for low-resource languages. We propose UXLA a novel unsupervised data augmentation framework for zero-resource transfer learning scenarios. In particular, UXLA aims to solve cross-lingual adaptation problems from a source language task distribution to an unknown target language task distribution, assuming no training label in the target language. At its core, UXLA performs simultaneous self-training with data augmentation and unsupervised sample selection. To show its effectiveness, we conduct extensive experiments on three diverse zero-resource cross-lingual transfer tasks. UXLA achieves SoTA results in all the tasks, outperforming the baselines by a good margin. With an in-depth framework dissection, we demonstrate the cumulative contributions of different components to its success.",,"UXLA: A Robust Unsupervised Data Augmentation Framework for Zero-Resource Cross-Lingual NLP. Transfer learning has yielded state-of-the-art (SoTA) results in many supervised NLP tasks. 
However, annotated data for every target task in every target language is rare, especially for low-resource languages. We propose UXLA a novel unsupervised data augmentation framework for zero-resource transfer learning scenarios. In particular, UXLA aims to solve cross-lingual adaptation problems from a source language task distribution to an unknown target language task distribution, assuming no training label in the target language. At its core, UXLA performs simultaneous self-training with data augmentation and unsupervised sample selection. To show its effectiveness, we conduct extensive experiments on three diverse zero-resource cross-lingual transfer tasks. UXLA achieves SoTA results in all the tasks, outperforming the baselines by a good margin. With an in-depth framework dissection, we demonstrate the cumulative contributions of different components to its success.",2021
navigli-etal-2010-annotated,http://www.lrec-conf.org/proceedings/lrec2010/pdf/20_Paper.pdf,0,,,,,,,"An Annotated Dataset for Extracting Definitions and Hypernyms from the Web. This paper presents and analyzes an annotated corpus of definitions, created to train an algorithm for the automatic extraction of definitions and hypernyms from Web documents. As an additional resource, we also include a corpus of non-definitions with syntactic patterns similar to those of definition sentences, e.g.: ""An android is a robot"" vs. ""Snowcap is unmistakable"". Domain and style independence is obtained thanks to the annotation of a sample of the Wikipedia corpus and to a novel pattern generalization algorithm based on wordclass lattices (WCL). A lattice is a directed acyclic graph (DAG), a subclass of nondeterministic finite state automata (NFA). The lattice structure has the purpose of preserving the salient differences among distinct sequences, while eliminating redundant information. The WCL algorithm will be integrated into an improved version of the GlossExtractor Web application (Velardi et al., 2008). This paper is mostly concerned with a description of the corpus, the annotation strategy, and a linguistic analysis of the data. A summary of the WCL algorithm is also provided for the sake of completeness.",An Annotated Dataset for Extracting Definitions and Hypernyms from the Web,"This paper presents and analyzes an annotated corpus of definitions, created to train an algorithm for the automatic extraction of definitions and hypernyms from Web documents. As an additional resource, we also include a corpus of non-definitions with syntactic patterns similar to those of definition sentences, e.g.: ""An android is a robot"" vs. ""Snowcap is unmistakable"". Domain and style independence is obtained thanks to the annotation of a sample of the Wikipedia corpus and to a novel pattern generalization algorithm based on wordclass lattices (WCL). A lattice is a directed acyclic graph (DAG), a subclass of nondeterministic finite state automata (NFA). The lattice structure has the purpose of preserving the salient differences among distinct sequences, while eliminating redundant information. The WCL algorithm will be integrated into an improved version of the GlossExtractor Web application (Velardi et al., 2008). This paper is mostly concerned with a description of the corpus, the annotation strategy, and a linguistic analysis of the data. A summary of the WCL algorithm is also provided for the sake of completeness.",An Annotated Dataset for Extracting Definitions and Hypernyms from the Web,"This paper presents and analyzes an annotated corpus of definitions, created to train an algorithm for the automatic extraction of definitions and hypernyms from Web documents. As an additional resource, we also include a corpus of non-definitions with syntactic patterns similar to those of definition sentences, e.g.: ""An android is a robot"" vs. ""Snowcap is unmistakable"". Domain and style independence is obtained thanks to the annotation of a sample of the Wikipedia corpus and to a novel pattern generalization algorithm based on wordclass lattices (WCL). A lattice is a directed acyclic graph (DAG), a subclass of nondeterministic finite state automata (NFA). The lattice structure has the purpose of preserving the salient differences among distinct sequences, while eliminating redundant information. The WCL algorithm will be integrated into an improved version of the GlossExtractor Web application (Velardi et al., 2008). 
This paper is mostly concerned with a description of the corpus, the annotation strategy, and a linguistic analysis of the data. A summary of the WCL algorithm is also provided for the sake of completeness.",,"An Annotated Dataset for Extracting Definitions and Hypernyms from the Web. This paper presents and analyzes an annotated corpus of definitions, created to train an algorithm for the automatic extraction of definitions and hypernyms from Web documents. As an additional resource, we also include a corpus of non-definitions with syntactic patterns similar to those of definition sentences, e.g.: ""An android is a robot"" vs. ""Snowcap is unmistakable"". Domain and style independence is obtained thanks to the annotation of a sample of the Wikipedia corpus and to a novel pattern generalization algorithm based on wordclass lattices (WCL). A lattice is a directed acyclic graph (DAG), a subclass of nondeterministic finite state automata (NFA). The lattice structure has the purpose of preserving the salient differences among distinct sequences, while eliminating redundant information. The WCL algorithm will be integrated into an improved version of the GlossExtractor Web application (Velardi et al., 2008). This paper is mostly concerned with a description of the corpus, the annotation strategy, and a linguistic analysis of the data. A summary of the WCL algorithm is also provided for the sake of completeness.",2010
deville-etal-1996-anthem,https://aclanthology.org/1996.amta-1.27,1,,,,health,,,"ANTHEM: advanced natural language interface for multilingual text generation in healthcare (LRE 62-007). The ANTHEM project: ""Advanced Natural Language Interface for Multilingual Text Generation in Healthcare"" (LRE 62-007) is co-financed by the European Union within the ""Linguistic Research and Engineering"" program. The ANTHEM consortium is coordinated by W. Ceusters of RAMIT vzw (Ghent University Hospital) and further consists of the Institute of Modern Languages of the University of Namur (G. Deville), the IAI of the University of Saarbrücken (O. Streiter), the CRP-CU of Luxembourg (P. Mousel), the University of Liege (C. Gérardy), Datasoft Management nv -Oostende (J. Devlies) and the Military Hospital in Brussels (D. Penson).",{ANTHEM}: advanced natural language interface for multilingual text generation in healthcare ({LRE} 62-007),"The ANTHEM project: ""Advanced Natural Language Interface for Multilingual Text Generation in Healthcare"" (LRE 62-007) is co-financed by the European Union within the ""Linguistic Research and Engineering"" program. The ANTHEM consortium is coordinated by W. Ceusters of RAMIT vzw (Ghent University Hospital) and further consists of the Institute of Modern Languages of the University of Namur (G. Deville), the IAI of the University of Saarbrücken (O. Streiter), the CRP-CU of Luxembourg (P. Mousel), the University of Liege (C. Gérardy), Datasoft Management nv -Oostende (J. Devlies) and the Military Hospital in Brussels (D. Penson).",ANTHEM: advanced natural language interface for multilingual text generation in healthcare (LRE 62-007),"The ANTHEM project: ""Advanced Natural Language Interface for Multilingual Text Generation in Healthcare"" (LRE 62-007) is co-financed by the European Union within the ""Linguistic Research and Engineering"" program. The ANTHEM consortium is coordinated by W. Ceusters of RAMIT vzw (Ghent University Hospital) and further consists of the Institute of Modern Languages of the University of Namur (G. Deville), the IAI of the University of Saarbrücken (O. Streiter), the CRP-CU of Luxembourg (P. Mousel), the University of Liege (C. Gérardy), Datasoft Management nv -Oostende (J. Devlies) and the Military Hospital in Brussels (D. Penson).",,"ANTHEM: advanced natural language interface for multilingual text generation in healthcare (LRE 62-007). The ANTHEM project: ""Advanced Natural Language Interface for Multilingual Text Generation in Healthcare"" (LRE 62-007) is co-financed by the European Union within the ""Linguistic Research and Engineering"" program. The ANTHEM consortium is coordinated by W. Ceusters of RAMIT vzw (Ghent University Hospital) and further consists of the Institute of Modern Languages of the University of Namur (G. Deville), the IAI of the University of Saarbrücken (O. Streiter), the CRP-CU of Luxembourg (P. Mousel), the University of Liege (C. Gérardy), Datasoft Management nv -Oostende (J. Devlies) and the Military Hospital in Brussels (D. Penson).",1996
sapena-etal-2010-global,https://aclanthology.org/C10-2125,0,,,,,,,"A Global Relaxation Labeling Approach to Coreference Resolution. This paper presents a constraint-based graph partitioning approach to coreference resolution solved by relaxation labeling. The approach combines the strengths of groupwise classifiers and chain formation methods in one global method. Experiments show that our approach significantly outperforms systems based on separate classification and chain formation steps, and that it achieves the best results in the state of the art for the same dataset and metrics.",A Global Relaxation Labeling Approach to Coreference Resolution,"This paper presents a constraint-based graph partitioning approach to coreference resolution solved by relaxation labeling. The approach combines the strengths of groupwise classifiers and chain formation methods in one global method. Experiments show that our approach significantly outperforms systems based on separate classification and chain formation steps, and that it achieves the best results in the state of the art for the same dataset and metrics.",A Global Relaxation Labeling Approach to Coreference Resolution,"This paper presents a constraint-based graph partitioning approach to coreference resolution solved by relaxation labeling. The approach combines the strengths of groupwise classifiers and chain formation methods in one global method. Experiments show that our approach significantly outperforms systems based on separate classification and chain formation steps, and that it achieves the best results in the state of the art for the same dataset and metrics.",,"A Global Relaxation Labeling Approach to Coreference Resolution. This paper presents a constraint-based graph partitioning approach to coreference resolution solved by relaxation labeling. The approach combines the strengths of groupwise classifiers and chain formation methods in one global method. Experiments show that our approach significantly outperforms systems based on separate classification and chain formation steps, and that it achieves the best results in the state of the art for the same dataset and metrics.",2010
artetxe-etal-2015-building,https://aclanthology.org/2015.eamt-1.3,0,,,,,,,"Building hybrid machine translation systems by using an EBMT preprocessor to create partial translations. This paper presents a hybrid machine translation framework based on a preprocessor that translates fragments of the input text by using example-based machine translation techniques. The preprocessor resembles a translation memory with named-entity and chunk generalization, and generates a high quality partial translation that is then completed by the main translation engine, which can be either rule-based (RBMT) or statistical (SMT). Results are reported for both RBMT and SMT hybridization as well as the preprocessor on its own, showing the effectiveness of our approach.",Building hybrid machine translation systems by using an {EBMT} preprocessor to create partial translations,"This paper presents a hybrid machine translation framework based on a preprocessor that translates fragments of the input text by using example-based machine translation techniques. The preprocessor resembles a translation memory with named-entity and chunk generalization, and generates a high quality partial translation that is then completed by the main translation engine, which can be either rule-based (RBMT) or statistical (SMT). Results are reported for both RBMT and SMT hybridization as well as the preprocessor on its own, showing the effectiveness of our approach.",Building hybrid machine translation systems by using an EBMT preprocessor to create partial translations,"This paper presents a hybrid machine translation framework based on a preprocessor that translates fragments of the input text by using example-based machine translation techniques. The preprocessor resembles a translation memory with named-entity and chunk generalization, and generates a high quality partial translation that is then completed by the main translation engine, which can be either rule-based (RBMT) or statistical (SMT). Results are reported for both RBMT and SMT hybridization as well as the preprocessor on its own, showing the effectiveness of our approach.","The research leading to these results was carried out as part of the TACARDI project (Spanish Ministry of Education and Science, TIN2012-38523-C02-011, with FEDER funding) and the QTLeap project funded by the European Commission (FP7-ICT-2013.4.1-610516).","Building hybrid machine translation systems by using an EBMT preprocessor to create partial translations. This paper presents a hybrid machine translation framework based on a preprocessor that translates fragments of the input text by using example-based machine translation techniques. The preprocessor resembles a translation memory with named-entity and chunk generalization, and generates a high quality partial translation that is then completed by the main translation engine, which can be either rule-based (RBMT) or statistical (SMT). Results are reported for both RBMT and SMT hybridization as well as the preprocessor on its own, showing the effectiveness of our approach.",2015
pawar-etal-2015-noun,https://aclanthology.org/W15-5905,0,,,,,,,"Noun Phrase Chunking for Marathi using Distant Supervision. Information Extraction from Indian languages requires effective shallow parsing, especially identification of ""meaningful"" noun phrases. Particularly, for an agglutinative and free word order language like Marathi, this problem is quite challenging. We model this task of extracting noun phrases as a sequence labelling problem. A Distant Supervision framework is used to automatically create a large labelled data for training the sequence labelling model. The framework exploits a set of heuristic rules based on corpus statistics for the automatic labelling. Our approach puts together the benefits of heuristic rules, a large unlabelled corpus as well as supervised learning to model complex underlying characteristics of noun phrase occurrences. In comparison to a simple English-like chunking baseline and a publicly available Marathi Shallow Parser, our method demonstrates a better performance.",Noun Phrase Chunking for {M}arathi using Distant Supervision,"Information Extraction from Indian languages requires effective shallow parsing, especially identification of ""meaningful"" noun phrases. Particularly, for an agglutinative and free word order language like Marathi, this problem is quite challenging. We model this task of extracting noun phrases as a sequence labelling problem. A Distant Supervision framework is used to automatically create a large labelled data for training the sequence labelling model. The framework exploits a set of heuristic rules based on corpus statistics for the automatic labelling. Our approach puts together the benefits of heuristic rules, a large unlabelled corpus as well as supervised learning to model complex underlying characteristics of noun phrase occurrences. In comparison to a simple English-like chunking baseline and a publicly available Marathi Shallow Parser, our method demonstrates a better performance.",Noun Phrase Chunking for Marathi using Distant Supervision,"Information Extraction from Indian languages requires effective shallow parsing, especially identification of ""meaningful"" noun phrases. Particularly, for an agglutinative and free word order language like Marathi, this problem is quite challenging. We model this task of extracting noun phrases as a sequence labelling problem. A Distant Supervision framework is used to automatically create a large labelled data for training the sequence labelling model. The framework exploits a set of heuristic rules based on corpus statistics for the automatic labelling. Our approach puts together the benefits of heuristic rules, a large unlabelled corpus as well as supervised learning to model complex underlying characteristics of noun phrase occurrences. In comparison to a simple English-like chunking baseline and a publicly available Marathi Shallow Parser, our method demonstrates a better performance.",,"Noun Phrase Chunking for Marathi using Distant Supervision. Information Extraction from Indian languages requires effective shallow parsing, especially identification of ""meaningful"" noun phrases. Particularly, for an agglutinative and free word order language like Marathi, this problem is quite challenging. We model this task of extracting noun phrases as a sequence labelling problem. A Distant Supervision framework is used to automatically create a large labelled data for training the sequence labelling model. 
The framework exploits a set of heuristic rules based on corpus statistics for the automatic labelling. Our approach puts together the benefits of heuristic rules, a large unlabelled corpus as well as supervised learning to model complex underlying characteristics of noun phrase occurrences. In comparison to a simple English-like chunking baseline and a publicly available Marathi Shallow Parser, our method demonstrates a better performance.",2015
webb-etal-2008-cross,http://www.lrec-conf.org/proceedings/lrec2008/pdf/502_paper.pdf,0,,,,,,,"Cross-Domain Dialogue Act Tagging. We present recent work in the area of Cross-Domain Dialogue Act (DA) tagging. We have previously reported on the use of a simple dialogue act classifier based on purely intra-utterance features-principally involving word n-gram cue phrases automatically generated from a training corpus. Such a classifier performs surprisingly well, rivalling scores obtained using far more sophisticated language modelling techniques. In this paper, we apply these automatically extracted cues to a new annotated corpus, to determine the portability and generality of the cues we learn.",Cross-Domain Dialogue Act Tagging,"We present recent work in the area of Cross-Domain Dialogue Act (DA) tagging. We have previously reported on the use of a simple dialogue act classifier based on purely intra-utterance features-principally involving word n-gram cue phrases automatically generated from a training corpus. Such a classifier performs surprisingly well, rivalling scores obtained using far more sophisticated language modelling techniques. In this paper, we apply these automatically extracted cues to a new annotated corpus, to determine the portability and generality of the cues we learn.",Cross-Domain Dialogue Act Tagging,"We present recent work in the area of Cross-Domain Dialogue Act (DA) tagging. We have previously reported on the use of a simple dialogue act classifier based on purely intra-utterance features-principally involving word n-gram cue phrases automatically generated from a training corpus. Such a classifier performs surprisingly well, rivalling scores obtained using far more sophisticated language modelling techniques. In this paper, we apply these automatically extracted cues to a new annotated corpus, to determine the portability and generality of the cues we learn.",,"Cross-Domain Dialogue Act Tagging. We present recent work in the area of Cross-Domain Dialogue Act (DA) tagging. We have previously reported on the use of a simple dialogue act classifier based on purely intra-utterance features-principally involving word n-gram cue phrases automatically generated from a training corpus. Such a classifier performs surprisingly well, rivalling scores obtained using far more sophisticated language modelling techniques. In this paper, we apply these automatically extracted cues to a new annotated corpus, to determine the portability and generality of the cues we learn.",2008
volokh-neumann-2012-parsing,https://aclanthology.org/W12-5615,0,,,,,,,"Parsing Hindi with MDParser. We describe our participation in the MTPIL Hindi Parsing Shared Task-2012. Our system achieved the following results: 82.44% LAS/90.91% UAS (auto) and 85.31% LAS/92.88% UAS (gold). Our parser is based on the linear classification, which is suboptimal as far as the accuracy is concerned. The strong point of our approach is its speed. For parsing development the system requires 0.935 seconds, which corresponds to a parsing speed of 1318 sentences per second. The Hindi Treebank contains much less different part of speech tags than many other treebanks and therefore it was absolutely necessary to use the additional morphosyntactic features available in the treebank. We were able to build classifiers predicting those, using only the standard word form and part of speech features, with a high accuracy.",Parsing {H}indi with {MDP}arser,"We describe our participation in the MTPIL Hindi Parsing Shared Task-2012. Our system achieved the following results: 82.44% LAS/90.91% UAS (auto) and 85.31% LAS/92.88% UAS (gold). Our parser is based on the linear classification, which is suboptimal as far as the accuracy is concerned. The strong point of our approach is its speed. For parsing development the system requires 0.935 seconds, which corresponds to a parsing speed of 1318 sentences per second. The Hindi Treebank contains much less different part of speech tags than many other treebanks and therefore it was absolutely necessary to use the additional morphosyntactic features available in the treebank. We were able to build classifiers predicting those, using only the standard word form and part of speech features, with a high accuracy.",Parsing Hindi with MDParser,"We describe our participation in the MTPIL Hindi Parsing Shared Task-2012. Our system achieved the following results: 82.44% LAS/90.91% UAS (auto) and 85.31% LAS/92.88% UAS (gold). Our parser is based on the linear classification, which is suboptimal as far as the accuracy is concerned. The strong point of our approach is its speed. For parsing development the system requires 0.935 seconds, which corresponds to a parsing speed of 1318 sentences per second. The Hindi Treebank contains much less different part of speech tags than many other treebanks and therefore it was absolutely necessary to use the additional morphosyntactic features available in the treebank. We were able to build classifiers predicting those, using only the standard word form and part of speech features, with a high accuracy.",The work presented here was partially supported by a research grant from the German Federal Ministry of Education and Research (BMBF) to the DFKI project Deependance (FKZ. 01IW11003).,"Parsing Hindi with MDParser. We describe our participation in the MTPIL Hindi Parsing Shared Task-2012. Our system achieved the following results: 82.44% LAS/90.91% UAS (auto) and 85.31% LAS/92.88% UAS (gold). Our parser is based on the linear classification, which is suboptimal as far as the accuracy is concerned. The strong point of our approach is its speed. For parsing development the system requires 0.935 seconds, which corresponds to a parsing speed of 1318 sentences per second. The Hindi Treebank contains much less different part of speech tags than many other treebanks and therefore it was absolutely necessary to use the additional morphosyntactic features available in the treebank. 
We were able to build classifiers predicting those, using only the standard word form and part of speech features, with a high accuracy.",2012
moeljadi-etal-2015-building,https://aclanthology.org/W15-3302,0,,,,,,,Building an HPSG-based Indonesian Resource Grammar (INDRA). This paper presents the creation and the initial stage development of a broad,Building an {HPSG}-based {I}ndonesian Resource Grammar ({INDRA}),This paper presents the creation and the initial stage development of a broad,Building an HPSG-based Indonesian Resource Grammar (INDRA),This paper presents the creation and the initial stage development of a broad,Thanks to Michael Wayne Goodman and Dan Flickinger for teaching us how to use GitHub and FFTB. Thanks to Fam Rashel for helping us with POS Tagger and to Lian Tze Lim for helping us improve Wordnet Bahasa. This research was supported in part by the MOE Tier 2 grant That's what you meant: a Rich Representation for Manipulation of Meaning (MOE ARC41/13).,Building an HPSG-based Indonesian Resource Grammar (INDRA). This paper presents the creation and the initial stage development of a broad,2015
borovikov-etal-2009-edeal,https://aclanthology.org/2009.mtsummit-government.7,0,,,,,,,"The EDEAL Project for Automated Processing of African Languages. The EDEAL project seeks to identify, collect, evaluate, and enhance resources relevant to processing collected material in African languages. Its priority languages are Swahili, Hausa, Oromo, and Yoruba. Resources of interest include software for OCR, Machine Translation (MT), and Named Entity Extraction (NEE), as well as data resources for developing and evaluating tools for these languages, and approaches-whether automated or manual-for developing capabilities for languages that lack significant data resources and reference material. We have surveyed the available resources, and the project is now in its first execution phase, focused on providing end-to-end capabilities and solid data coverage for a single language; we have chosen Swahili since it has the best existing coverage to build on. The results of the work will be freely available to the U.S. Government community.",The {EDEAL} Project for Automated Processing of {A}frican Languages,"The EDEAL project seeks to identify, collect, evaluate, and enhance resources relevant to processing collected material in African languages. Its priority languages are Swahili, Hausa, Oromo, and Yoruba. Resources of interest include software for OCR, Machine Translation (MT), and Named Entity Extraction (NEE), as well as data resources for developing and evaluating tools for these languages, and approaches-whether automated or manual-for developing capabilities for languages that lack significant data resources and reference material. We have surveyed the available resources, and the project is now in its first execution phase, focused on providing end-to-end capabilities and solid data coverage for a single language; we have chosen Swahili since it has the best existing coverage to build on. The results of the work will be freely available to the U.S. Government community.",The EDEAL Project for Automated Processing of African Languages,"The EDEAL project seeks to identify, collect, evaluate, and enhance resources relevant to processing collected material in African languages. Its priority languages are Swahili, Hausa, Oromo, and Yoruba. Resources of interest include software for OCR, Machine Translation (MT), and Named Entity Extraction (NEE), as well as data resources for developing and evaluating tools for these languages, and approaches-whether automated or manual-for developing capabilities for languages that lack significant data resources and reference material. We have surveyed the available resources, and the project is now in its first execution phase, focused on providing end-to-end capabilities and solid data coverage for a single language; we have chosen Swahili since it has the best existing coverage to build on. The results of the work will be freely available to the U.S. Government community.","The work described here is performed by a team at CACI that includes, in addition to the authors, Marta Cruz, Mark Turner, and a large team of native speakers of different African languages.This work is sponsored by funding from the Defense Intelligence Agency (DIA) under contract GS-35F-0342N. We are very grateful for the wise guidance of Nick Bemish and Theresa Williams.","The EDEAL Project for Automated Processing of African Languages. The EDEAL project seeks to identify, collect, evaluate, and enhance resources relevant to processing collected material in African languages. 
Its priority languages are Swahili, Hausa, Oromo, and Yoruba. Resources of interest include software for OCR, Machine Translation (MT), and Named Entity Extraction (NEE), as well as data resources for developing and evaluating tools for these languages, and approaches-whether automated or manual-for developing capabilities for languages that lack significant data resources and reference material. We have surveyed the available resources, and the project is now in its first execution phase, focused on providing end-to-end capabilities and solid data coverage for a single language; we have chosen Swahili since it has the best existing coverage to build on. The results of the work will be freely available to the U.S. Government community.",2009
offersgaard-hansen-2016-facilitating,https://aclanthology.org/L16-1398,0,,,,,,,"Facilitating Metadata Interoperability in CLARIN-DK. The issue for CLARIN archives at the metadata level is to facilitate the user's possibility to describe their data, even with their own standard, and at the same time make these metadata meaningful for a variety of users with a variety of resource types, and ensure that the metadata are useful for search across all resources both at the national and at the European level. We see that different people from different research communities fill in the metadata in different ways even though the metadata was defined and documented. This has impacted when the metadata are harvested and displayed in different environments. A loss of information is at stake. In this paper we view the challenges of ensuring metadata interoperability through examples of propagation of metadata values from the CLARIN-DK archive to the VLO. We see that the CLARIN Community in many ways support interoperability, but argue that agreeing upon standards, making clear definitions of the semantics of the metadata and their content is inevitable for the interoperability to work successfully. The key points are clear and freely available definitions, accessible documentation and easily usable facilities and guidelines for the metadata creators.",Facilitating Metadata Interoperability in {CLARIN}-{DK},"The issue for CLARIN archives at the metadata level is to facilitate the user's possibility to describe their data, even with their own standard, and at the same time make these metadata meaningful for a variety of users with a variety of resource types, and ensure that the metadata are useful for search across all resources both at the national and at the European level. We see that different people from different research communities fill in the metadata in different ways even though the metadata was defined and documented. This has impacted when the metadata are harvested and displayed in different environments. A loss of information is at stake. In this paper we view the challenges of ensuring metadata interoperability through examples of propagation of metadata values from the CLARIN-DK archive to the VLO. We see that the CLARIN Community in many ways support interoperability, but argue that agreeing upon standards, making clear definitions of the semantics of the metadata and their content is inevitable for the interoperability to work successfully. The key points are clear and freely available definitions, accessible documentation and easily usable facilities and guidelines for the metadata creators.",Facilitating Metadata Interoperability in CLARIN-DK,"The issue for CLARIN archives at the metadata level is to facilitate the user's possibility to describe their data, even with their own standard, and at the same time make these metadata meaningful for a variety of users with a variety of resource types, and ensure that the metadata are useful for search across all resources both at the national and at the European level. We see that different people from different research communities fill in the metadata in different ways even though the metadata was defined and documented. This has impacted when the metadata are harvested and displayed in different environments. A loss of information is at stake. In this paper we view the challenges of ensuring metadata interoperability through examples of propagation of metadata values from the CLARIN-DK archive to the VLO. 
We see that the CLARIN Community in many ways support interoperability, but argue that agreeing upon standards, making clear definitions of the semantics of the metadata and their content is inevitable for the interoperability to work successfully. The key points are clear and freely available definitions, accessible documentation and easily usable facilities and guidelines for the metadata creators.",,"Facilitating Metadata Interoperability in CLARIN-DK. The issue for CLARIN archives at the metadata level is to facilitate the user's possibility to describe their data, even with their own standard, and at the same time make these metadata meaningful for a variety of users with a variety of resource types, and ensure that the metadata are useful for search across all resources both at the national and at the European level. We see that different people from different research communities fill in the metadata in different ways even though the metadata was defined and documented. This has impacted when the metadata are harvested and displayed in different environments. A loss of information is at stake. In this paper we view the challenges of ensuring metadata interoperability through examples of propagation of metadata values from the CLARIN-DK archive to the VLO. We see that the CLARIN Community in many ways support interoperability, but argue that agreeing upon standards, making clear definitions of the semantics of the metadata and their content is inevitable for the interoperability to work successfully. The key points are clear and freely available definitions, accessible documentation and easily usable facilities and guidelines for the metadata creators.",2016
ito-etal-2020-langsmith,https://aclanthology.org/2020.emnlp-demos.28,1,,,,industry_innovation_infrastructure,,,"Langsmith: An Interactive Academic Text Revision System. Despite the current diversity and inclusion initiatives in the academic community, researchers with a non-native command of English still face significant obstacles when writing papers in English. This paper presents the Langsmith editor, which assists inexperienced, non-native researchers to write English papers, especially in the natural language processing (NLP) field. Our system can suggest fluent, academic-style sentences to writers based on their rough, incomplete phrases or sentences. The system also encourages interaction between human writers and the computerized revision system. The experimental results demonstrated that Langsmith helps non-native English-speaker students write papers in English. The system is available at https://emnlp-demo.editor. langsmith.co.jp/. * The authors contributed equally 1 The 58th Annual Meeting of the Association for Computational Linguistics 2 See https://www.youtube.com/channel/ UCjHeZPe0tT6bWxVVvum1bFQ for the screencast.",Langsmith: An Interactive Academic Text Revision System,"Despite the current diversity and inclusion initiatives in the academic community, researchers with a non-native command of English still face significant obstacles when writing papers in English. This paper presents the Langsmith editor, which assists inexperienced, non-native researchers to write English papers, especially in the natural language processing (NLP) field. Our system can suggest fluent, academic-style sentences to writers based on their rough, incomplete phrases or sentences. The system also encourages interaction between human writers and the computerized revision system. The experimental results demonstrated that Langsmith helps non-native English-speaker students write papers in English. The system is available at https://emnlp-demo.editor. langsmith.co.jp/. * The authors contributed equally 1 The 58th Annual Meeting of the Association for Computational Linguistics 2 See https://www.youtube.com/channel/ UCjHeZPe0tT6bWxVVvum1bFQ for the screencast.",Langsmith: An Interactive Academic Text Revision System,"Despite the current diversity and inclusion initiatives in the academic community, researchers with a non-native command of English still face significant obstacles when writing papers in English. This paper presents the Langsmith editor, which assists inexperienced, non-native researchers to write English papers, especially in the natural language processing (NLP) field. Our system can suggest fluent, academic-style sentences to writers based on their rough, incomplete phrases or sentences. The system also encourages interaction between human writers and the computerized revision system. The experimental results demonstrated that Langsmith helps non-native English-speaker students write papers in English. The system is available at https://emnlp-demo.editor. langsmith.co.jp/. * The authors contributed equally 1 The 58th Annual Meeting of the Association for Computational Linguistics 2 See https://www.youtube.com/channel/ UCjHeZPe0tT6bWxVVvum1bFQ for the screencast.",We are grateful to Ana Brassard for her feedback on English. We also appreciate the participants of our user studies. This work was supported by Grant-in-Aid for JSPS Fellows Grant Number JP20J22697. 21 We conducted the one-side sign test. 
The difference is significant with p ≤ 0.05.,"Langsmith: An Interactive Academic Text Revision System. Despite the current diversity and inclusion initiatives in the academic community, researchers with a non-native command of English still face significant obstacles when writing papers in English. This paper presents the Langsmith editor, which assists inexperienced, non-native researchers to write English papers, especially in the natural language processing (NLP) field. Our system can suggest fluent, academic-style sentences to writers based on their rough, incomplete phrases or sentences. The system also encourages interaction between human writers and the computerized revision system. The experimental results demonstrated that Langsmith helps non-native English-speaker students write papers in English. The system is available at https://emnlp-demo.editor. langsmith.co.jp/. * The authors contributed equally 1 The 58th Annual Meeting of the Association for Computational Linguistics 2 See https://www.youtube.com/channel/ UCjHeZPe0tT6bWxVVvum1bFQ for the screencast.",2020
chang-etal-2019-bias,https://aclanthology.org/D19-2004,1,,,,social_equality,gender_equality,,Bias and Fairness in Natural Language Processing. ,Bias and Fairness in Natural Language Processing,,Bias and Fairness in Natural Language Processing,,,Bias and Fairness in Natural Language Processing. ,2019
mccoy-etal-2020-berts,https://aclanthology.org/2020.blackboxnlp-1.21,0,,,,,,,"BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance. If the same neural network architecture is trained multiple times on the same dataset, will it make similar linguistic generalizations across runs? To study this question, we finetuned 100 instances of BERT on the Multigenre Natural Language Inference (MNLI) dataset and evaluated them on the HANS dataset, which evaluates syntactic generalization in natural language inference. On the MNLI development set, the behavior of all instances was remarkably consistent, with accuracy ranging between 83.6% and 84.8%. In stark contrast, the same models varied widely in their generalization performance. For example, on the simple case of subject-object swap (e.g., determining that the doctor visited the lawyer does not entail the lawyer visited the doctor), accuracy ranged from 0.0% to 66.2%. Such variation is likely due to the presence of many local minima in the loss surface that are equally attractive to a low-bias learner such as a neural network; decreasing the variability may therefore require models with stronger inductive biases.",{BERT}s of a feather do not generalize together: Large variability in generalization across models with similar test set performance,"If the same neural network architecture is trained multiple times on the same dataset, will it make similar linguistic generalizations across runs? To study this question, we finetuned 100 instances of BERT on the Multigenre Natural Language Inference (MNLI) dataset and evaluated them on the HANS dataset, which evaluates syntactic generalization in natural language inference. On the MNLI development set, the behavior of all instances was remarkably consistent, with accuracy ranging between 83.6% and 84.8%. In stark contrast, the same models varied widely in their generalization performance. For example, on the simple case of subject-object swap (e.g., determining that the doctor visited the lawyer does not entail the lawyer visited the doctor), accuracy ranged from 0.0% to 66.2%. Such variation is likely due to the presence of many local minima in the loss surface that are equally attractive to a low-bias learner such as a neural network; decreasing the variability may therefore require models with stronger inductive biases.",BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance,"If the same neural network architecture is trained multiple times on the same dataset, will it make similar linguistic generalizations across runs? To study this question, we finetuned 100 instances of BERT on the Multigenre Natural Language Inference (MNLI) dataset and evaluated them on the HANS dataset, which evaluates syntactic generalization in natural language inference. On the MNLI development set, the behavior of all instances was remarkably consistent, with accuracy ranging between 83.6% and 84.8%. In stark contrast, the same models varied widely in their generalization performance. For example, on the simple case of subject-object swap (e.g., determining that the doctor visited the lawyer does not entail the lawyer visited the doctor), accuracy ranged from 0.0% to 66.2%. 
Such variation is likely due to the presence of many local minima in the loss surface that are equally attractive to a low-bias learner such as a neural network; decreasing the variability may therefore require models with stronger inductive biases.","We are grateful to Emily Pitler, Dipanjan Das, and the members of the Johns Hopkins Computation and Psycholinguistics lab group for helpful comments. Any errors are our own.This project is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 1746891 and by a gift to TL from Google, and it was conducted using computational resources from the Maryland Advanced Research Computing Center (MARCC). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation, Google, or MARCC.","BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance. If the same neural network architecture is trained multiple times on the same dataset, will it make similar linguistic generalizations across runs? To study this question, we finetuned 100 instances of BERT on the Multigenre Natural Language Inference (MNLI) dataset and evaluated them on the HANS dataset, which evaluates syntactic generalization in natural language inference. On the MNLI development set, the behavior of all instances was remarkably consistent, with accuracy ranging between 83.6% and 84.8%. In stark contrast, the same models varied widely in their generalization performance. For example, on the simple case of subject-object swap (e.g., determining that the doctor visited the lawyer does not entail the lawyer visited the doctor), accuracy ranged from 0.0% to 66.2%. Such variation is likely due to the presence of many local minima in the loss surface that are equally attractive to a low-bias learner such as a neural network; decreasing the variability may therefore require models with stronger inductive biases.",2020
jimenez-etal-2013-unal,https://aclanthology.org/S13-2020,0,,,,,,,"UNAL: Discriminating between Literal and Figurative Phrasal Usage Using Distributional Statistics and POS tags. In this paper we describe the system used to participate in the sub task 5b in the Phrasal Semantics challenge (task 5) in SemEval 2013. This sub task consists in discriminating literal and figurative usage of phrases with compositional and non-compositional meanings in context. The proposed approach is based on part-of-speech tags, stylistic features and distributional statistics gathered from the same development-training-test text collection. The system obtained a relative improvement in accuracy against the most-frequentclass baseline of 49.8% in the ""unseen contexts"" (LexSample) setting and 8.5% in ""unseen phrases"" (AllWords).",{UNAL}: Discriminating between Literal and Figurative Phrasal Usage Using Distributional Statistics and {POS} tags,"In this paper we describe the system used to participate in the sub task 5b in the Phrasal Semantics challenge (task 5) in SemEval 2013. This sub task consists in discriminating literal and figurative usage of phrases with compositional and non-compositional meanings in context. The proposed approach is based on part-of-speech tags, stylistic features and distributional statistics gathered from the same development-training-test text collection. The system obtained a relative improvement in accuracy against the most-frequentclass baseline of 49.8% in the ""unseen contexts"" (LexSample) setting and 8.5% in ""unseen phrases"" (AllWords).",UNAL: Discriminating between Literal and Figurative Phrasal Usage Using Distributional Statistics and POS tags,"In this paper we describe the system used to participate in the sub task 5b in the Phrasal Semantics challenge (task 5) in SemEval 2013. This sub task consists in discriminating literal and figurative usage of phrases with compositional and non-compositional meanings in context. The proposed approach is based on part-of-speech tags, stylistic features and distributional statistics gathered from the same development-training-test text collection. The system obtained a relative improvement in accuracy against the most-frequentclass baseline of 49.8% in the ""unseen contexts"" (LexSample) setting and 8.5% in ""unseen phrases"" (AllWords).","This research was funded in part by the Systems and Industrial Engineering Department, the Office of Student Welfare of the National University of Colombia, Bogotá, and through a grant from the Colombian Department for Science, Technology and Innovation, Colciencias, proj. 1101-521-28465 with funding from ""El Patrimonio Autónomo Fondo Nacional de Financiamiento para la Ciencia, la Tecnología y la Innovación, Francisco José de Caldas."" The third author recognizes the support from Mexican Government (SNI, COFAA-IPN, SIP 20131702, CONACYT 50206-H) and CONACYT-DST India (proj. 122030 ""Answer Validation through Textual Entailment"").","UNAL: Discriminating between Literal and Figurative Phrasal Usage Using Distributional Statistics and POS tags. In this paper we describe the system used to participate in the sub task 5b in the Phrasal Semantics challenge (task 5) in SemEval 2013. This sub task consists in discriminating literal and figurative usage of phrases with compositional and non-compositional meanings in context. The proposed approach is based on part-of-speech tags, stylistic features and distributional statistics gathered from the same development-training-test text collection. 
The system obtained a relative improvement in accuracy against the most-frequent-class baseline of 49.8% in the ""unseen contexts"" (LexSample) setting and 8.5% in ""unseen phrases"" (AllWords).",2013
zhao-etal-2015-auditory,https://aclanthology.org/Y15-1036,0,,,,,,,"Auditory Synaesthesia and Near Synonyms: A Corpus-Based Analysis of sheng1 and yin1 in Mandarin Chinese. This paper explores the nature of linguistic synaesthesia in the auditory domain through a corpus-based lexical semantic study of near synonyms. It has been established that the near synonyms 聲 sheng ""sound"" and 音 yin ""sound"" in Mandarin Chinese have different semantic functions in representing auditory production and auditory perception respectively. Thus, our study is devoted to testing whether linguistic synaesthesia is sensitive to this semantic dichotomy of cognition in particular, and to examining the relationship between linguistic synaesthesia and cognitive modelling in general. Based on the corpus, we find that the near synonyms exhibit both similarities and differences on synaesthesia. The similarities lie in that both 聲 and 音 are productive recipients of synaesthetic transfers, and vision acts as the source domain most frequently. Besides, the differences exist in selective constraints for 聲 and 音 with synaesthetic modifiers as well as syntactic functions of the whole combinations. We propose that the similarities can be explained by the cognitive characteristics of the sound, while the differences are determined by the influence of the semantic dichotomy of production/perception on synaesthesia. Therefore, linguistic synaesthesia is not a random association, but can be motivated and predicted by cognition. 1 The terms, ""lower domains"" and ""higher domains"", are copied from Ullmann (1957), where the former refers to touch, taste and smell, and the later includes hearing and vision.",Auditory Synaesthesia and Near Synonyms: A Corpus-Based Analysis of sheng1 and yin1 in {M}andarin {C}hinese,"This paper explores the nature of linguistic synaesthesia in the auditory domain through a corpus-based lexical semantic study of near synonyms. It has been established that the near synonyms 聲 sheng ""sound"" and 音 yin ""sound"" in Mandarin Chinese have different semantic functions in representing auditory production and auditory perception respectively. Thus, our study is devoted to testing whether linguistic synaesthesia is sensitive to this semantic dichotomy of cognition in particular, and to examining the relationship between linguistic synaesthesia and cognitive modelling in general. Based on the corpus, we find that the near synonyms exhibit both similarities and differences on synaesthesia. The similarities lie in that both 聲 and 音 are productive recipients of synaesthetic transfers, and vision acts as the source domain most frequently. Besides, the differences exist in selective constraints for 聲 and 音 with synaesthetic modifiers as well as syntactic functions of the whole combinations. We propose that the similarities can be explained by the cognitive characteristics of the sound, while the differences are determined by the influence of the semantic dichotomy of production/perception on synaesthesia. Therefore, linguistic synaesthesia is not a random association, but can be motivated and predicted by cognition. 
1 The terms, ""lower domains"" and ""higher domains"", are copied from Ullmann (1957), where the former refers to touch, taste and smell, and the later includes hearing and vision.",Auditory Synaesthesia and Near Synonyms: A Corpus-Based Analysis of sheng1 and yin1 in Mandarin Chinese,"This paper explores the nature of linguistic synaesthesia in the auditory domain through a corpus-based lexical semantic study of near synonyms. It has been established that the near synonyms 聲 sheng ""sound"" and 音 yin ""sound"" in Mandarin Chinese have different semantic functions in representing auditory production and auditory perception respectively. Thus, our study is devoted to testing whether linguistic synaesthesia is sensitive to this semantic dichotomy of cognition in particular, and to examining the relationship between linguistic synaesthesia and cognitive modelling in general. Based on the corpus, we find that the near synonyms exhibit both similarities and differences on synaesthesia. The similarities lie in that both 聲 and 音 are productive recipients of synaesthetic transfers, and vision acts as the source domain most frequently. Besides, the differences exist in selective constraints for 聲 and 音 with synaesthetic modifiers as well as syntactic functions of the whole combinations. We propose that the similarities can be explained by the cognitive characteristics of the sound, while the differences are determined by the influence of the semantic dichotomy of production/perception on synaesthesia. Therefore, linguistic synaesthesia is not a random association, but can be motivated and predicted by cognition. 1 The terms, ""lower domains"" and ""higher domains"", are copied from Ullmann (1957), where the former refers to touch, taste and smell, and the later includes hearing and vision.",We would like to give thanks to Dennis Tay from the Hong Kong Polytechnic University for his insightful comments on this work.,"Auditory Synaesthesia and Near Synonyms: A Corpus-Based Analysis of sheng1 and yin1 in Mandarin Chinese. This paper explores the nature of linguistic synaesthesia in the auditory domain through a corpus-based lexical semantic study of near synonyms. It has been established that the near synonyms 聲 sheng ""sound"" and 音 yin ""sound"" in Mandarin Chinese have different semantic functions in representing auditory production and auditory perception respectively. Thus, our study is devoted to testing whether linguistic synaesthesia is sensitive to this semantic dichotomy of cognition in particular, and to examining the relationship between linguistic synaesthesia and cognitive modelling in general. Based on the corpus, we find that the near synonyms exhibit both similarities and differences on synaesthesia. The similarities lie in that both 聲 and 音 are productive recipients of synaesthetic transfers, and vision acts as the source domain most frequently. Besides, the differences exist in selective constraints for 聲 and 音 with synaesthetic modifiers as well as syntactic functions of the whole combinations. We propose that the similarities can be explained by the cognitive characteristics of the sound, while the differences are determined by the influence of the semantic dichotomy of production/perception on synaesthesia. Therefore, linguistic synaesthesia is not a random association, but can be motivated and predicted by cognition. 
1 The terms, ""lower domains"" and ""higher domains"", are copied from Ullmann (1957), where the former refers to touch, taste and smell, and the latter includes hearing and vision.",2015
zheng-etal-2019-boundary,https://aclanthology.org/D19-1034,0,,,,,,,"A Boundary-aware Neural Model for Nested Named Entity Recognition. In natural language processing, it is common that many entities contain other entities inside them. Most existing works on named entity recognition (NER) only deal with flat entities but ignore nested ones. We propose a boundary-aware neural model for nested NER which leverages entity boundaries to predict entity categorical labels. Our model can locate entities precisely by detecting boundaries using sequence labeling models. Based on the detected boundaries, our model utilizes the boundary-relevant regions to predict entity categorical labels, which can decrease computation cost and relieve error propagation problem in layered sequence labeling model. We introduce multitask learning to capture the dependencies of entity boundaries and their categorical labels, which helps to improve the performance of identifying entities. We conduct our experiments on nested NER datasets and the experimental results demonstrate that our model outperforms other state-of-the-art methods.",A Boundary-aware Neural Model for Nested Named Entity Recognition,"In natural language processing, it is common that many entities contain other entities inside them. Most existing works on named entity recognition (NER) only deal with flat entities but ignore nested ones. We propose a boundary-aware neural model for nested NER which leverages entity boundaries to predict entity categorical labels. Our model can locate entities precisely by detecting boundaries using sequence labeling models. Based on the detected boundaries, our model utilizes the boundary-relevant regions to predict entity categorical labels, which can decrease computation cost and relieve error propagation problem in layered sequence labeling model. We introduce multitask learning to capture the dependencies of entity boundaries and their categorical labels, which helps to improve the performance of identifying entities. We conduct our experiments on nested NER datasets and the experimental results demonstrate that our model outperforms other state-of-the-art methods.",A Boundary-aware Neural Model for Nested Named Entity Recognition,"In natural language processing, it is common that many entities contain other entities inside them. Most existing works on named entity recognition (NER) only deal with flat entities but ignore nested ones. We propose a boundary-aware neural model for nested NER which leverages entity boundaries to predict entity categorical labels. Our model can locate entities precisely by detecting boundaries using sequence labeling models. Based on the detected boundaries, our model utilizes the boundary-relevant regions to predict entity categorical labels, which can decrease computation cost and relieve error propagation problem in layered sequence labeling model. We introduce multitask learning to capture the dependencies of entity boundaries and their categorical labels, which helps to improve the performance of identifying entities. We conduct our experiments on nested NER datasets and the experimental results demonstrate that our model outperforms other state-of-the-art methods.","This work was supported by the Fundamental Research Funds for the Central Universities, SCUT (No. 2017ZD048, D2182480), the Science and Technology Planning Project of Guangdong Province (No.2017B050506004), the Science and Technology Programs of Guangzhou (No. 
201704030076,201802010027,201902010046) and a CUHK Research Committee Funding (Direct Grants) (Project Code: EE16963).","A Boundary-aware Neural Model for Nested Named Entity Recognition. In natural language processing, it is common that many entities contain other entities inside them. Most existing works on named entity recognition (NER) only deal with flat entities but ignore nested ones. We propose a boundary-aware neural model for nested NER which leverages entity boundaries to predict entity categorical labels. Our model can locate entities precisely by detecting boundaries using sequence labeling models. Based on the detected boundaries, our model utilizes the boundary-relevant regions to predict entity categorical labels, which can decrease computation cost and relieve error propagation problem in layered sequence labeling model. We introduce multitask learning to capture the dependencies of entity boundaries and their categorical labels, which helps to improve the performance of identifying entities. We conduct our experiments on nested NER datasets and the experimental results demonstrate that our model outperforms other state-of-the-art methods.",2019
fares-etal-2019-arabic,https://aclanthology.org/W19-4626,0,,,,,,,Arabic Dialect Identification with Deep Learning and Hybrid Frequency Based Features. Studies on Dialectical Arabic are growing more important by the day as it becomes the primary written and spoken form of Arabic online in informal settings. Among the important problems that should be explored is that of dialect identification. This paper reports different techniques that can be applied towards such goal and reports their performance on the Multi Arabic Dialect Applications and Resources (MADAR) Arabic Dialect Corpora. Our results show that improving on traditional systems using frequency based features and non deep learning classifiers is a challenging task. We propose different models based on different word and document representations. Our top model is able to achieve an F1 macro averaged score of 65.66 on MADAR's smallscale parallel corpus of 25 dialects and Modern Standard Arabic (MSA).,{A}rabic Dialect Identification with Deep Learning and Hybrid Frequency Based Features,Studies on Dialectical Arabic are growing more important by the day as it becomes the primary written and spoken form of Arabic online in informal settings. Among the important problems that should be explored is that of dialect identification. This paper reports different techniques that can be applied towards such goal and reports their performance on the Multi Arabic Dialect Applications and Resources (MADAR) Arabic Dialect Corpora. Our results show that improving on traditional systems using frequency based features and non deep learning classifiers is a challenging task. We propose different models based on different word and document representations. Our top model is able to achieve an F1 macro averaged score of 65.66 on MADAR's smallscale parallel corpus of 25 dialects and Modern Standard Arabic (MSA).,Arabic Dialect Identification with Deep Learning and Hybrid Frequency Based Features,Studies on Dialectical Arabic are growing more important by the day as it becomes the primary written and spoken form of Arabic online in informal settings. Among the important problems that should be explored is that of dialect identification. This paper reports different techniques that can be applied towards such goal and reports their performance on the Multi Arabic Dialect Applications and Resources (MADAR) Arabic Dialect Corpora. Our results show that improving on traditional systems using frequency based features and non deep learning classifiers is a challenging task. We propose different models based on different word and document representations. Our top model is able to achieve an F1 macro averaged score of 65.66 on MADAR's smallscale parallel corpus of 25 dialects and Modern Standard Arabic (MSA).,,Arabic Dialect Identification with Deep Learning and Hybrid Frequency Based Features. Studies on Dialectical Arabic are growing more important by the day as it becomes the primary written and spoken form of Arabic online in informal settings. Among the important problems that should be explored is that of dialect identification. This paper reports different techniques that can be applied towards such goal and reports their performance on the Multi Arabic Dialect Applications and Resources (MADAR) Arabic Dialect Corpora. Our results show that improving on traditional systems using frequency based features and non deep learning classifiers is a challenging task. We propose different models based on different word and document representations. 
Our top model is able to achieve an F1 macro averaged score of 65.66 on MADAR's small-scale parallel corpus of 25 dialects and Modern Standard Arabic (MSA).,2019
hillard-etal-2003-detection,https://aclanthology.org/N03-2012,1,,,,partnership,,,"Detection Of Agreement vs. Disagreement In Meetings: Training With Unlabeled Data. To support summarization of automatically transcribed meetings, we introduce a classifier to recognize agreement or disagreement utterances, utilizing both word-based and prosodic cues. We show that hand-labeling efforts can be minimized by using unsupervised training on a large unlabeled data set combined with supervised training on a small amount of data. For ASR transcripts with over 45% WER, the system recovers nearly 80% of agree/disagree utterances with a confusion rate of only 3%.",Detection Of Agreement vs. Disagreement In Meetings: Training With Unlabeled Data,"To support summarization of automatically transcribed meetings, we introduce a classifier to recognize agreement or disagreement utterances, utilizing both word-based and prosodic cues. We show that hand-labeling efforts can be minimized by using unsupervised training on a large unlabeled data set combined with supervised training on a small amount of data. For ASR transcripts with over 45% WER, the system recovers nearly 80% of agree/disagree utterances with a confusion rate of only 3%.",Detection Of Agreement vs. Disagreement In Meetings: Training With Unlabeled Data,"To support summarization of automatically transcribed meetings, we introduce a classifier to recognize agreement or disagreement utterances, utilizing both word-based and prosodic cues. We show that hand-labeling efforts can be minimized by using unsupervised training on a large unlabeled data set combined with supervised training on a small amount of data. For ASR transcripts with over 45% WER, the system recovers nearly 80% of agree/disagree utterances with a confusion rate of only 3%.","This work is supported in part by the NSF under grants 0121396 and 0619921, DARPA grant N660019928924, and NASA grant NCC 2-1256. Any opinions, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of these agencies.","Detection Of Agreement vs. Disagreement In Meetings: Training With Unlabeled Data. To support summarization of automatically transcribed meetings, we introduce a classifier to recognize agreement or disagreement utterances, utilizing both word-based and prosodic cues. We show that hand-labeling efforts can be minimized by using unsupervised training on a large unlabeled data set combined with supervised training on a small amount of data. For ASR transcripts with over 45% WER, the system recovers nearly 80% of agree/disagree utterances with a confusion rate of only 3%.",2003
zajac-1999-aspects,https://aclanthology.org/W99-0506,0,,,,,,,"On Some Aspects of Lexical Standardization. In developing and using many large multi-lingual multi-purpose lexicons at CRL, we identified three distinct problem areas: (1) an appropriate metalanguage (formalism) for representing and processing lexical knowledge, (2) a standard generic lexical framework defining a common lexical entry structure (names of features and types of content), and (3) shared universal linguistic types. In this paper, we present the solutions developed at CRL addressing dimensions 1 and 2, and we mention the ongoing research addressing dimension 3.",On Some Aspects of Lexical Standardization,"In developing and using many large multi-lingual multi-purpose lexicons at CRL, we identified three distinct problem areas: (1) an appropriate metalanguage (formalism) for representing and processing lexical knowledge, (2) a standard generic lexical framework defining a common lexical entry structure (names of features and types of content), and (3) shared universal linguistic types. In this paper, we present the solutions developed at CRL addressing dimensions 1 and 2, and we mention the ongoing research addressing dimension 3.",On Some Aspects of Lexical Standardization,"In developing and using many large multi-lingual multi-purpose lexicons at CRL, we identified three distinct problem areas: (1) an appropriate metalanguage (formalism) for representing and processing lexical knowledge, (2) a standard generic lexical framework defining a common lexical entry structure (names of features and types of content), and (3) shared universal linguistic types. In this paper, we present the solutions developed at CRL addressing dimensions 1 and 2, and we mention the ongoing research addressing dimension 3.",,"On Some Aspects of Lexical Standardization. In developing and using many large multi-lingual multi-purpose lexicons at CRL, we identified three distinct problem areas: (1) an appropriate metalanguage (formalism) for representing and processing lexical knowledge, (2) a standard generic lexical framework defining a common lexical entry structure (names of features and types of content), and (3) shared universal linguistic types. In this paper, we present the solutions developed at CRL addressing dimensions 1 and 2, and we mention the ongoing research addressing dimension 3.",1999
dagan-2009-time,https://aclanthology.org/W09-3701,0,,,,,,,"It's time for a semantic inference engine. A common computational goal is to encapsulate the modeling of a target phenomenon within a unified and comprehensive ""engine"", which addresses a broad range of the required processing tasks. This goal is followed in common modeling of the morphological and syntactic levels of natural language, where most processing tasks are encapsulated within morphological analyzers and syntactic parsers. In this talk I suggest that computational modeling of the semantic level should also focus on encapsulating the various processing tasks within a unified module (engine). The input/output specification of such engine (API) can be based on the textual entailment paradigm, which will be described in brief and suggested as an attractive framework for applied semantic inference. The talk will illustrate an initial proposal for the engine's API, designed to be embedded within the prominent language processing applications. Finally, I will sketch the entailment formalism and efficient inference algorithm developed at Bar-Ilan University, which illustrates a principled transformational (rather than interpretational) approach towards developing a comprehensive semantic engine.",It{'}s time for a semantic inference engine,"A common computational goal is to encapsulate the modeling of a target phenomenon within a unified and comprehensive ""engine"", which addresses a broad range of the required processing tasks. This goal is followed in common modeling of the morphological and syntactic levels of natural language, where most processing tasks are encapsulated within morphological analyzers and syntactic parsers. In this talk I suggest that computational modeling of the semantic level should also focus on encapsulating the various processing tasks within a unified module (engine). The input/output specification of such engine (API) can be based on the textual entailment paradigm, which will be described in brief and suggested as an attractive framework for applied semantic inference. The talk will illustrate an initial proposal for the engine's API, designed to be embedded within the prominent language processing applications. Finally, I will sketch the entailment formalism and efficient inference algorithm developed at Bar-Ilan University, which illustrates a principled transformational (rather than interpretational) approach towards developing a comprehensive semantic engine.",It's time for a semantic inference engine,"A common computational goal is to encapsulate the modeling of a target phenomenon within a unified and comprehensive ""engine"", which addresses a broad range of the required processing tasks. This goal is followed in common modeling of the morphological and syntactic levels of natural language, where most processing tasks are encapsulated within morphological analyzers and syntactic parsers. In this talk I suggest that computational modeling of the semantic level should also focus on encapsulating the various processing tasks within a unified module (engine). The input/output specification of such engine (API) can be based on the textual entailment paradigm, which will be described in brief and suggested as an attractive framework for applied semantic inference. The talk will illustrate an initial proposal for the engine's API, designed to be embedded within the prominent language processing applications. 
Finally, I will sketch the entailment formalism and efficient inference algorithm developed at Bar-Ilan University, which illustrates a principled transformational (rather than interpretational) approach towards developing a comprehensive semantic engine.",,"It's time for a semantic inference engine. A common computational goal is to encapsulate the modeling of a target phenomenon within a unified and comprehensive ""engine"", which addresses a broad range of the required processing tasks. This goal is followed in common modeling of the morphological and syntactic levels of natural language, where most processing tasks are encapsulated within morphological analyzers and syntactic parsers. In this talk I suggest that computational modeling of the semantic level should also focus on encapsulating the various processing tasks within a unified module (engine). The input/output specification of such engine (API) can be based on the textual entailment paradigm, which will be described in brief and suggested as an attractive framework for applied semantic inference. The talk will illustrate an initial proposal for the engine's API, designed to be embedded within the prominent language processing applications. Finally, I will sketch the entailment formalism and efficient inference algorithm developed at Bar-Ilan University, which illustrates a principled transformational (rather than interpretational) approach towards developing a comprehensive semantic engine.",2009
el-baff-etal-2018-challenge,https://aclanthology.org/K18-1044,1,,,,peace_justice_and_strong_institutions,,,"Challenge or Empower: Revisiting Argumentation Quality in a News Editorial Corpus. News editorials are said to shape public opinion, which makes them a powerful tool and an important source of political argumentation. However, rarely do editorials change anyone's stance on an issue completely, nor do they tend to argue explicitly (but rather follow a subtle rhetorical strategy). So, what does argumentation quality mean for editorials then? We develop the notion that an effective editorial challenges readers with opposing stance, and at the same time empowers the arguing skills of readers that share the editorial's stance-or even challenges both sides. To study argumentation quality based on this notion, we introduce a new corpus with 1000 editorials from the New York Times, annotated for their perceived effect along with the annotators' political orientations. Analyzing the corpus, we find that annotators with different orientation disagree on the effect significantly. While only 1% of all editorials changed anyone's stance, more than 5% meet our notion. We conclude that our corpus serves as a suitable resource for studying the argumentation quality of news editorials.",Challenge or Empower: Revisiting Argumentation Quality in a News Editorial Corpus,"News editorials are said to shape public opinion, which makes them a powerful tool and an important source of political argumentation. However, rarely do editorials change anyone's stance on an issue completely, nor do they tend to argue explicitly (but rather follow a subtle rhetorical strategy). So, what does argumentation quality mean for editorials then? We develop the notion that an effective editorial challenges readers with opposing stance, and at the same time empowers the arguing skills of readers that share the editorial's stance-or even challenges both sides. To study argumentation quality based on this notion, we introduce a new corpus with 1000 editorials from the New York Times, annotated for their perceived effect along with the annotators' political orientations. Analyzing the corpus, we find that annotators with different orientation disagree on the effect significantly. While only 1% of all editorials changed anyone's stance, more than 5% meet our notion. We conclude that our corpus serves as a suitable resource for studying the argumentation quality of news editorials.",Challenge or Empower: Revisiting Argumentation Quality in a News Editorial Corpus,"News editorials are said to shape public opinion, which makes them a powerful tool and an important source of political argumentation. However, rarely do editorials change anyone's stance on an issue completely, nor do they tend to argue explicitly (but rather follow a subtle rhetorical strategy). So, what does argumentation quality mean for editorials then? We develop the notion that an effective editorial challenges readers with opposing stance, and at the same time empowers the arguing skills of readers that share the editorial's stance-or even challenges both sides. To study argumentation quality based on this notion, we introduce a new corpus with 1000 editorials from the New York Times, annotated for their perceived effect along with the annotators' political orientations. Analyzing the corpus, we find that annotators with different orientation disagree on the effect significantly. 
While only 1% of all editorials changed anyone's stance, more than 5% meet our notion. We conclude that our corpus serves as a suitable resource for studying the argumentation quality of news editorials.",,"Challenge or Empower: Revisiting Argumentation Quality in a News Editorial Corpus. News editorials are said to shape public opinion, which makes them a powerful tool and an important source of political argumentation. However, rarely do editorials change anyone's stance on an issue completely, nor do they tend to argue explicitly (but rather follow a subtle rhetorical strategy). So, what does argumentation quality mean for editorials then? We develop the notion that an effective editorial challenges readers with opposing stance, and at the same time empowers the arguing skills of readers that share the editorial's stance-or even challenges both sides. To study argumentation quality based on this notion, we introduce a new corpus with 1000 editorials from the New York Times, annotated for their perceived effect along with the annotators' political orientations. Analyzing the corpus, we find that annotators with different orientation disagree on the effect significantly. While only 1% of all editorials changed anyone's stance, more than 5% meet our notion. We conclude that our corpus serves as a suitable resource for studying the argumentation quality of news editorials.",2018
ji-etal-2021-discrete,https://aclanthology.org/2021.naacl-main.431,0,,,,,,,"Discrete Argument Representation Learning for Interactive Argument Pair Identification. In this paper, we focus on identifying interactive argument pairs from two posts with opposite stances to a certain topic. Considering opinions are exchanged from different perspectives of the discussing topic, we study the discrete representations for arguments to capture varying aspects in argumentation languages (e.g., the debate focus and the participant behavior). Moreover, we utilize hierarchical structure to model post-wise information incorporating contextual knowledge. Experimental results on the large-scale dataset collected from CMV show that our proposed framework can significantly outperform the competitive baselines. Further analyses reveal why our model yields superior performance and prove the usefulness of our learned representations.",Discrete Argument Representation Learning for Interactive Argument Pair Identification,"In this paper, we focus on identifying interactive argument pairs from two posts with opposite stances to a certain topic. Considering opinions are exchanged from different perspectives of the discussing topic, we study the discrete representations for arguments to capture varying aspects in argumentation languages (e.g., the debate focus and the participant behavior). Moreover, we utilize hierarchical structure to model post-wise information incorporating contextual knowledge. Experimental results on the large-scale dataset collected from CMV show that our proposed framework can significantly outperform the competitive baselines. Further analyses reveal why our model yields superior performance and prove the usefulness of our learned representations.",Discrete Argument Representation Learning for Interactive Argument Pair Identification,"In this paper, we focus on identifying interactive argument pairs from two posts with opposite stances to a certain topic. Considering opinions are exchanged from different perspectives of the discussing topic, we study the discrete representations for arguments to capture varying aspects in argumentation languages (e.g., the debate focus and the participant behavior). Moreover, we utilize hierarchical structure to model post-wise information incorporating contextual knowledge. Experimental results on the large-scale dataset collected from CMV show that our proposed framework can significantly outperform the competitive baselines. Further analyses reveal why our model yields superior performance and prove the usefulness of our learned representations.","This work is partially supported by National Natural Science Foundation of China (No.71991471), Science and Technology Commission of Shanghai Municipality Grant (No.20dz1200600). Jing Li is supported by CCF-Tencent Rhino-Bird Young Faculty Open Research Fund (R-ZDCJ), the Hong Kong Polytechnic University internal funds (1-BE2W and 1-ZVRH), and NSFC Young Scientists Fund 62006203.","Discrete Argument Representation Learning for Interactive Argument Pair Identification. In this paper, we focus on identifying interactive argument pairs from two posts with opposite stances to a certain topic. Considering opinions are exchanged from different perspectives of the discussing topic, we study the discrete representations for arguments to capture varying aspects in argumentation languages (e.g., the debate focus and the participant behavior). 
Moreover, we utilize hierarchical structure to model post-wise information incorporating contextual knowledge. Experimental results on the large-scale dataset collected from CMV show that our proposed framework can significantly outperform the competitive baselines. Further analyses reveal why our model yields superior performance and prove the usefulness of our learned representations.",2021
shen-etal-2006-jhu,https://aclanthology.org/2006.iwslt-evaluation.8,0,,,,,,,"The JHU workshop 2006 IWSLT system. This paper describes the SMT we built during the 2006 JHU Summer Workshop for the IWSLT 2006 evaluation. Our effort focuses on two parts of the speech translation problem: 1) efficient decoding of word lattices and 2) novel applications of factored translation models to IWSLT-specific problems. In this paper, we present results from the open-track Chinese-to-English condition. Improvements of 5-10% relative BLEU are obtained over a high performing baseline. We introduce a new open-source decoder that implements the state-of-the-art in statistical machine translation.",The {JHU} workshop 2006 {IWSLT} system,"This paper describes the SMT we built during the 2006 JHU Summer Workshop for the IWSLT 2006 evaluation. Our effort focuses on two parts of the speech translation problem: 1) efficient decoding of word lattices and 2) novel applications of factored translation models to IWSLT-specific problems. In this paper, we present results from the open-track Chinese-to-English condition. Improvements of 5-10% relative BLEU are obtained over a high performing baseline. We introduce a new open-source decoder that implements the state-of-the-art in statistical machine translation.",The JHU workshop 2006 IWSLT system,"This paper describes the SMT we built during the 2006 JHU Summer Workshop for the IWSLT 2006 evaluation. Our effort focuses on two parts of the speech translation problem: 1) efficient decoding of word lattices and 2) novel applications of factored translation models to IWSLT-specific problems. In this paper, we present results from the open-track Chinese-to-English condition. Improvements of 5-10% relative BLEU are obtained over a high performing baseline. We introduce a new open-source decoder that implements the state-of-the-art in statistical machine translation.","We would like to thank our JHU summer workshop team members (Philipp Koehn, Hieu Hoang, Chris Dyer, Ondrej Bojar, Chris Callison-Burch, Brooke Cowan, Christine Moran, Alexandra Constantin and Evan Herbst) who made this construction of this system possible. We wish to acknowledge their diligent efforts to make the moses decoder stable in a six-week period.We would also like to thank the staff and faculty of CLSP at John's Hopkins University for graciously hosting us during the summer workshop.","The JHU workshop 2006 IWSLT system. This paper describes the SMT we built during the 2006 JHU Summer Workshop for the IWSLT 2006 evaluation. Our effort focuses on two parts of the speech translation problem: 1) efficient decoding of word lattices and 2) novel applications of factored translation models to IWSLT-specific problems. In this paper, we present results from the open-track Chinese-to-English condition. Improvements of 5-10% relative BLEU are obtained over a high performing baseline. We introduce a new open-source decoder that implements the state-of-the-art in statistical machine translation.",2006
ali-etal-2013-hear,https://aclanthology.org/I13-1077,1,,,,health,,,"Can I Hear You? Sentiment Analysis on Medical Forums. Text mining studies have started to investigae relations between positive and negative opinions and patients' physical health. Several studies linked the personal lexicon with health and the health-related behavior of the individual. However, few text mining studies were performed to analyze opinions expressed in a large volume of user-written Web content. Our current study focused on performing sentiment analysis on several medical forums dedicated to Hearing Loss (HL). We categorized messages posted on the forums as positive, negative and neutral. Our study had two stages: first, we applied manual annotation of the posts with two annotators and have 82.01% overall agreement with kappa 0.65 and then we applied Machine Learning techniques to classify the posts.",Can {I} Hear You? Sentiment Analysis on Medical Forums,"Text mining studies have started to investigae relations between positive and negative opinions and patients' physical health. Several studies linked the personal lexicon with health and the health-related behavior of the individual. However, few text mining studies were performed to analyze opinions expressed in a large volume of user-written Web content. Our current study focused on performing sentiment analysis on several medical forums dedicated to Hearing Loss (HL). We categorized messages posted on the forums as positive, negative and neutral. Our study had two stages: first, we applied manual annotation of the posts with two annotators and have 82.01% overall agreement with kappa 0.65 and then we applied Machine Learning techniques to classify the posts.",Can I Hear You? Sentiment Analysis on Medical Forums,"Text mining studies have started to investigae relations between positive and negative opinions and patients' physical health. Several studies linked the personal lexicon with health and the health-related behavior of the individual. However, few text mining studies were performed to analyze opinions expressed in a large volume of user-written Web content. Our current study focused on performing sentiment analysis on several medical forums dedicated to Hearing Loss (HL). We categorized messages posted on the forums as positive, negative and neutral. Our study had two stages: first, we applied manual annotation of the posts with two annotators and have 82.01% overall agreement with kappa 0.65 and then we applied Machine Learning techniques to classify the posts.",This work in part has been funded by a Natural Sciences and Engineering Research Council of Canada Discovery Research Grant and by a Children's Hospital of Eastern Ontario Department of Surgery Research Grant.,"Can I Hear You? Sentiment Analysis on Medical Forums. Text mining studies have started to investigae relations between positive and negative opinions and patients' physical health. Several studies linked the personal lexicon with health and the health-related behavior of the individual. However, few text mining studies were performed to analyze opinions expressed in a large volume of user-written Web content. Our current study focused on performing sentiment analysis on several medical forums dedicated to Hearing Loss (HL). We categorized messages posted on the forums as positive, negative and neutral. 
Our study had two stages: first, we applied manual annotation of the posts with two annotators and have 82.01% overall agreement with kappa 0.65 and then we applied Machine Learning techniques to classify the posts.",2013
strzalkowski-vauthey-1991-fast,https://aclanthology.org/H91-1068,0,,,,,,,"Fast Text Processing for Information Retrieval. We describe an advanced text processing system for information retrieval from natural language document collections. We use both syntactic processing as well as statistical term clustering to obtain a representation of documents which would be more accurate than those obtained with more traditional keyword methods. A reliable top-down parser has been developed that allows for fast processing of large amounts of text, and for a precise identification of desired types of phrases for statistical analysis. Two statistical measures are computed: the measure of informational contribution of words in phrases, and the similarity measure between words.",Fast Text Processing for Information Retrieval,"We describe an advanced text processing system for information retrieval from natural language document collections. We use both syntactic processing as well as statistical term clustering to obtain a representation of documents which would be more accurate than those obtained with more traditional keyword methods. A reliable top-down parser has been developed that allows for fast processing of large amounts of text, and for a precise identification of desired types of phrases for statistical analysis. Two statistical measures are computed: the measure of informational contribution of words in phrases, and the similarity measure between words.",Fast Text Processing for Information Retrieval,"We describe an advanced text processing system for information retrieval from natural language document collections. We use both syntactic processing as well as statistical term clustering to obtain a representation of documents which would be more accurate than those obtained with more traditional keyword methods. A reliable top-down parser has been developed that allows for fast processing of large amounts of text, and for a precise identification of desired types of phrases for statistical analysis. Two statistical measures are computed: the measure of informational contribution of words in phrases, and the similarity measure between words.",,"Fast Text Processing for Information Retrieval. We describe an advanced text processing system for information retrieval from natural language document collections. We use both syntactic processing as well as statistical term clustering to obtain a representation of documents which would be more accurate than those obtained with more traditional keyword methods. A reliable top-down parser has been developed that allows for fast processing of large amounts of text, and for a precise identification of desired types of phrases for statistical analysis. Two statistical measures are computed: the measure of informational contribution of words in phrases, and the similarity measure between words.",1991
kokkinakis-gerdin-2009-issues,https://aclanthology.org/W09-4505,0,,,,,,,"Issues on Quality Assessment of SNOMED CT® Subsets -- Term Validation and Term Extraction. The aim of this paper is to apply and develop methods based on Natural Language Processing for automatically testing the validity, reliability and coverage of various Swedish SNOMED-CT subsets, the Systematized NOmenclature of MEDicine-Clinical Terms a multiaxial, hierarchical classification system which is currently being translated from English to Swedish. Our work has been developed across two dimensions. Initially a Swedish electronic text collection of scientific medical documents has been collected and processed to a uniform format. Secondly, a term processing activity has been taken place. In the first phase of this activity, various SNOMED CT subsets have been mapped to the text collection for evaluating the validity and reliability of the translated terms. In parallel, a large number of term candidates have been extracted from the corpus in order to examine the coverage of SNOMED CT. Term candidates that are currently not included in the Swedish SNOMED CT can be either parts of compounds, parts of potential multiword terms, terms that are not yet been translated or potentially new candidates. In order to achieve these goals a number of automatic term recognition algorithms have been applied to the corpus. The results of the later process is to be reviewed by domain experts (relevant to the subsets extracted) through a relevant interface who can decide whether a new set of terms can be incorporated in the Swedish translation of SNOMED CT or not.",Issues on Quality Assessment of {SNOMED} {CT}® Subsets {--} Term Validation and Term Extraction,"The aim of this paper is to apply and develop methods based on Natural Language Processing for automatically testing the validity, reliability and coverage of various Swedish SNOMED-CT subsets, the Systematized NOmenclature of MEDicine-Clinical Terms a multiaxial, hierarchical classification system which is currently being translated from English to Swedish. Our work has been developed across two dimensions. Initially a Swedish electronic text collection of scientific medical documents has been collected and processed to a uniform format. Secondly, a term processing activity has been taken place. In the first phase of this activity, various SNOMED CT subsets have been mapped to the text collection for evaluating the validity and reliability of the translated terms. In parallel, a large number of term candidates have been extracted from the corpus in order to examine the coverage of SNOMED CT. Term candidates that are currently not included in the Swedish SNOMED CT can be either parts of compounds, parts of potential multiword terms, terms that are not yet been translated or potentially new candidates. In order to achieve these goals a number of automatic term recognition algorithms have been applied to the corpus. 
The results of the later process is to be reviewed by domain experts (relevant to the subsets extracted) through a relevant interface who can decide whether a new set of terms can be incorporated in the Swedish translation of SNOMED CT or not.",Issues on Quality Assessment of SNOMED CT® Subsets -- Term Validation and Term Extraction,"The aim of this paper is to apply and develop methods based on Natural Language Processing for automatically testing the validity, reliability and coverage of various Swedish SNOMED-CT subsets, the Systematized NOmenclature of MEDicine-Clinical Terms a multiaxial, hierarchical classification system which is currently being translated from English to Swedish. Our work has been developed across two dimensions. Initially a Swedish electronic text collection of scientific medical documents has been collected and processed to a uniform format. Secondly, a term processing activity has been taken place. In the first phase of this activity, various SNOMED CT subsets have been mapped to the text collection for evaluating the validity and reliability of the translated terms. In parallel, a large number of term candidates have been extracted from the corpus in order to examine the coverage of SNOMED CT. Term candidates that are currently not included in the Swedish SNOMED CT can be either parts of compounds, parts of potential multiword terms, terms that are not yet been translated or potentially new candidates. In order to achieve these goals a number of automatic term recognition algorithms have been applied to the corpus. The results of the later process is to be reviewed by domain experts (relevant to the subsets extracted) through a relevant interface who can decide whether a new set of terms can be incorporated in the Swedish translation of SNOMED CT or not.",We would like to thank the editors of the Journal of the Swedish Medical Association and DiabetologNytt for making the electronic versions available to this study.,"Issues on Quality Assessment of SNOMED CT® Subsets -- Term Validation and Term Extraction. The aim of this paper is to apply and develop methods based on Natural Language Processing for automatically testing the validity, reliability and coverage of various Swedish SNOMED-CT subsets, the Systematized NOmenclature of MEDicine-Clinical Terms a multiaxial, hierarchical classification system which is currently being translated from English to Swedish. Our work has been developed across two dimensions. Initially a Swedish electronic text collection of scientific medical documents has been collected and processed to a uniform format. Secondly, a term processing activity has been taken place. In the first phase of this activity, various SNOMED CT subsets have been mapped to the text collection for evaluating the validity and reliability of the translated terms. In parallel, a large number of term candidates have been extracted from the corpus in order to examine the coverage of SNOMED CT. Term candidates that are currently not included in the Swedish SNOMED CT can be either parts of compounds, parts of potential multiword terms, terms that are not yet been translated or potentially new candidates. In order to achieve these goals a number of automatic term recognition algorithms have been applied to the corpus. The results of the later process is to be reviewed by domain experts (relevant to the subsets extracted) through a relevant interface who can decide whether a new set of terms can be incorporated in the Swedish translation of SNOMED CT or not.",2009
xiao-etal-2013-learning,https://aclanthology.org/D13-1016,0,,,,,,,"Learning Latent Word Representations for Domain Adaptation using Supervised Word Clustering. Domain adaptation has been popularly studied on exploiting labeled information from a source domain to learn a prediction model in a target domain. In this paper, we develop a novel representation learning approach to address domain adaptation for text classification with automatically induced discriminative latent features, which are generalizable across domains while informative to the prediction task. Specifically, we propose a hierarchical multinomial Naive Bayes model with latent variables to conduct supervised word clustering on labeled documents from both source and target domains, and then use the produced cluster distribution of each word as its latent feature representation for domain adaptation. We train this latent graphical model using a simple expectation-maximization (EM) algorithm. We empirically evaluate the proposed method with both cross-domain document categorization tasks on Reuters-21578 dataset and cross-domain sentiment classification tasks on Amazon product review dataset. The experimental results demonstrate that our proposed approach achieves superior performance compared with alternative methods.",Learning Latent Word Representations for Domain Adaptation using Supervised Word Clustering,"Domain adaptation has been popularly studied on exploiting labeled information from a source domain to learn a prediction model in a target domain. In this paper, we develop a novel representation learning approach to address domain adaptation for text classification with automatically induced discriminative latent features, which are generalizable across domains while informative to the prediction task. Specifically, we propose a hierarchical multinomial Naive Bayes model with latent variables to conduct supervised word clustering on labeled documents from both source and target domains, and then use the produced cluster distribution of each word as its latent feature representation for domain adaptation. We train this latent graphical model using a simple expectation-maximization (EM) algorithm. We empirically evaluate the proposed method with both cross-domain document categorization tasks on Reuters-21578 dataset and cross-domain sentiment classification tasks on Amazon product review dataset. The experimental results demonstrate that our proposed approach achieves superior performance compared with alternative methods.",Learning Latent Word Representations for Domain Adaptation using Supervised Word Clustering,"Domain adaptation has been popularly studied on exploiting labeled information from a source domain to learn a prediction model in a target domain. In this paper, we develop a novel representation learning approach to address domain adaptation for text classification with automatically induced discriminative latent features, which are generalizable across domains while informative to the prediction task. Specifically, we propose a hierarchical multinomial Naive Bayes model with latent variables to conduct supervised word clustering on labeled documents from both source and target domains, and then use the produced cluster distribution of each word as its latent feature representation for domain adaptation. We train this latent graphical model using a simple expectation-maximization (EM) algorithm. 
We empirically evaluate the proposed method with both cross-domain document categorization tasks on Reuters-21578 dataset and cross-domain sentiment classification tasks on Amazon product review dataset. The experimental results demonstrate that our proposed approach achieves superior performance compared with alternative methods.",,"Learning Latent Word Representations for Domain Adaptation using Supervised Word Clustering. Domain adaptation has been popularly studied on exploiting labeled information from a source domain to learn a prediction model in a target domain. In this paper, we develop a novel representation learning approach to address domain adaptation for text classification with automatically induced discriminative latent features, which are generalizable across domains while informative to the prediction task. Specifically, we propose a hierarchical multinomial Naive Bayes model with latent variables to conduct supervised word clustering on labeled documents from both source and target domains, and then use the produced cluster distribution of each word as its latent feature representation for domain adaptation. We train this latent graphical model using a simple expectation-maximization (EM) algorithm. We empirically evaluate the proposed method with both cross-domain document categorization tasks on Reuters-21578 dataset and cross-domain sentiment classification tasks on Amazon product review dataset. The experimental results demonstrate that our proposed approach achieves superior performance compared with alternative methods.",2013
iida-etal-2010-incorporating,https://aclanthology.org/P10-1128,0,,,,,,,"Incorporating Extra-Linguistic Information into Reference Resolution in Collaborative Task Dialogue. This paper proposes an approach to reference resolution in situated dialogues by exploiting extra-linguistic information. Recently, investigations of referential behaviours involved in situations in the real world have received increasing attention by researchers (Di Eugenio et al., 2000; Byron, 2005; van Deemter, 2007; Spanger et al., 2009). In order to create an accurate reference resolution model, we need to handle extra-linguistic information as well as textual information examined by existing approaches (Soon et al., 2001; Ng and Cardie, 2002, etc.). In this paper, we incorporate extra-linguistic information into an existing corpus-based reference resolution model, and investigate its effects on reference resolution problems within a corpus of Japanese dialogues. The results demonstrate that our proposed model achieves an accuracy of 79.0% for this task.",Incorporating Extra-Linguistic Information into Reference Resolution in Collaborative Task Dialogue,"This paper proposes an approach to reference resolution in situated dialogues by exploiting extra-linguistic information. Recently, investigations of referential behaviours involved in situations in the real world have received increasing attention by researchers (Di Eugenio et al., 2000; Byron, 2005; van Deemter, 2007; Spanger et al., 2009). In order to create an accurate reference resolution model, we need to handle extra-linguistic information as well as textual information examined by existing approaches (Soon et al., 2001; Ng and Cardie, 2002, etc.). In this paper, we incorporate extra-linguistic information into an existing corpus-based reference resolution model, and investigate its effects on reference resolution problems within a corpus of Japanese dialogues. The results demonstrate that our proposed model achieves an accuracy of 79.0% for this task.",Incorporating Extra-Linguistic Information into Reference Resolution in Collaborative Task Dialogue,"This paper proposes an approach to reference resolution in situated dialogues by exploiting extra-linguistic information. Recently, investigations of referential behaviours involved in situations in the real world have received increasing attention by researchers (Di Eugenio et al., 2000; Byron, 2005; van Deemter, 2007; Spanger et al., 2009). In order to create an accurate reference resolution model, we need to handle extra-linguistic information as well as textual information examined by existing approaches (Soon et al., 2001; Ng and Cardie, 2002, etc.). In this paper, we incorporate extra-linguistic information into an existing corpus-based reference resolution model, and investigate its effects on reference resolution problems within a corpus of Japanese dialogues. The results demonstrate that our proposed model achieves an accuracy of 79.0% for this task.",,"Incorporating Extra-Linguistic Information into Reference Resolution in Collaborative Task Dialogue. This paper proposes an approach to reference resolution in situated dialogues by exploiting extra-linguistic information. Recently, investigations of referential behaviours involved in situations in the real world have received increasing attention by researchers (Di Eugenio et al., 2000; Byron, 2005; van Deemter, 2007; Spanger et al., 2009). 
In order to create an accurate reference resolution model, we need to handle extra-linguistic information as well as textual information examined by existing approaches (Soon et al., 2001; Ng and Cardie, 2002, etc.). In this paper, we incorporate extra-linguistic information into an existing corpus-based reference resolution model, and investigate its effects on reference resolution problems within a corpus of Japanese dialogues. The results demonstrate that our proposed model achieves an accuracy of 79.0% for this task.",2010
imamura-2002-application,https://aclanthology.org/2002.tmi-papers.9,0,,,,,,,"Application of translation knowledge acquired by hierarchical phrase alignment for pattern-based MT. Hierarchical phrase alignment is a method for extracting equivalent phrases from bilingual sentences, even though they belong to different language families. The method automatically extracts transfer knowledge from about 125K English and Japanese bilingual sentences and then applies it to a pattern-based MT system. The translation quality is then evaluated. The knowledge needs to be cleaned, since the corpus contains various translations and the phrase alignment contains errors. Various cleaning methods are applied in this paper. The results indicate that when the best cleaning method is used, the knowledge acquired by hierarchical phrase alignment is comparable to manually acquired knowledge.",Application of translation knowledge acquired by hierarchical phrase alignment for pattern-based {MT},"Hierarchical phrase alignment is a method for extracting equivalent phrases from bilingual sentences, even though they belong to different language families. The method automatically extracts transfer knowledge from about 125K English and Japanese bilingual sentences and then applies it to a pattern-based MT system. The translation quality is then evaluated. The knowledge needs to be cleaned, since the corpus contains various translations and the phrase alignment contains errors. Various cleaning methods are applied in this paper. The results indicate that when the best cleaning method is used, the knowledge acquired by hierarchical phrase alignment is comparable to manually acquired knowledge.",Application of translation knowledge acquired by hierarchical phrase alignment for pattern-based MT,"Hierarchical phrase alignment is a method for extracting equivalent phrases from bilingual sentences, even though they belong to different language families. The method automatically extracts transfer knowledge from about 125K English and Japanese bilingual sentences and then applies it to a pattern-based MT system. The translation quality is then evaluated. The knowledge needs to be cleaned, since the corpus contains various translations and the phrase alignment contains errors. Various cleaning methods are applied in this paper. The results indicate that when the best cleaning method is used, the knowledge acquired by hierarchical phrase alignment is comparable to manually acquired knowledge.",,"Application of translation knowledge acquired by hierarchical phrase alignment for pattern-based MT. Hierarchical phrase alignment is a method for extracting equivalent phrases from bilingual sentences, even though they belong to different language families. The method automatically extracts transfer knowledge from about 125K English and Japanese bilingual sentences and then applies it to a pattern-based MT system. The translation quality is then evaluated. The knowledge needs to be cleaned, since the corpus contains various translations and the phrase alignment contains errors. Various cleaning methods are applied in this paper. The results indicate that when the best cleaning method is used, the knowledge acquired by hierarchical phrase alignment is comparable to manually acquired knowledge.",2002
hua-wang-2017-pilot,https://aclanthology.org/W17-4513,0,,,,,,,"A Pilot Study of Domain Adaptation Effect for Neural Abstractive Summarization. We study the problem of domain adaptation for neural abstractive summarization. We make initial efforts in investigating what information can be transferred to a new domain. Experimental results on news stories and opinion articles indicate that neural summarization model benefits from pre-training based on extractive summaries. We also find that the combination of in-domain and out-of-domain setup yields better summaries when in-domain data is insufficient. Further analysis shows that, the model is capable to select salient content even trained on out-of-domain data, but requires in-domain data to capture the style for a target domain.",A Pilot Study of Domain Adaptation Effect for Neural Abstractive Summarization,"We study the problem of domain adaptation for neural abstractive summarization. We make initial efforts in investigating what information can be transferred to a new domain. Experimental results on news stories and opinion articles indicate that neural summarization model benefits from pre-training based on extractive summaries. We also find that the combination of in-domain and out-of-domain setup yields better summaries when in-domain data is insufficient. Further analysis shows that, the model is capable to select salient content even trained on out-of-domain data, but requires in-domain data to capture the style for a target domain.",A Pilot Study of Domain Adaptation Effect for Neural Abstractive Summarization,"We study the problem of domain adaptation for neural abstractive summarization. We make initial efforts in investigating what information can be transferred to a new domain. Experimental results on news stories and opinion articles indicate that neural summarization model benefits from pre-training based on extractive summaries. We also find that the combination of in-domain and out-of-domain setup yields better summaries when in-domain data is insufficient. Further analysis shows that, the model is capable to select salient content even trained on out-of-domain data, but requires in-domain data to capture the style for a target domain.",This work was supported in part by National Science Foundation Grant IIS-1566382 and a GPU gift from Nvidia. We thank three anonymous reviewers for their valuable suggestions on various aspects of this work.,"A Pilot Study of Domain Adaptation Effect for Neural Abstractive Summarization. We study the problem of domain adaptation for neural abstractive summarization. We make initial efforts in investigating what information can be transferred to a new domain. Experimental results on news stories and opinion articles indicate that neural summarization model benefits from pre-training based on extractive summaries. We also find that the combination of in-domain and out-of-domain setup yields better summaries when in-domain data is insufficient. Further analysis shows that, the model is capable to select salient content even trained on out-of-domain data, but requires in-domain data to capture the style for a target domain.",2017
cotterell-etal-2016-sigmorphon,https://aclanthology.org/W16-2002,0,,,,,,,"The SIGMORPHON 2016 Shared Task---Morphological Reinflection. The 2016 SIGMORPHON Shared Task was devoted to the problem of morphological reinflection. It introduced morphological datasets for 10 languages with diverse typological characteristics. The shared task drew submissions from 9 teams representing 11 institutions reflecting a variety of approaches to addressing supervised learning of reinflection. For the simplest task, inflection generation from lemmas, the best system averaged 95.56% exact-match accuracy across all languages, ranging from Maltese (88.99%) to Hungarian (99.30%). With the relatively large training datasets provided, recurrent neural network architectures consistently performed best-in fact, there was a significant margin between neural and non-neural approaches. The best neural approach, averaged over all tasks and languages, outperformed the best nonneural one by 13.76% absolute; on individual tasks and languages the gap in accuracy sometimes exceeded 60%. Overall, the results show a strong state of the art, and serve as encouragement for future shared tasks that explore morphological analysis and generation with varying degrees of supervision.",The {SIGMORPHON} 2016 Shared {T}ask{---}{M}orphological Reinflection,"The 2016 SIGMORPHON Shared Task was devoted to the problem of morphological reinflection. It introduced morphological datasets for 10 languages with diverse typological characteristics. The shared task drew submissions from 9 teams representing 11 institutions reflecting a variety of approaches to addressing supervised learning of reinflection. For the simplest task, inflection generation from lemmas, the best system averaged 95.56% exact-match accuracy across all languages, ranging from Maltese (88.99%) to Hungarian (99.30%). With the relatively large training datasets provided, recurrent neural network architectures consistently performed best-in fact, there was a significant margin between neural and non-neural approaches. The best neural approach, averaged over all tasks and languages, outperformed the best nonneural one by 13.76% absolute; on individual tasks and languages the gap in accuracy sometimes exceeded 60%. Overall, the results show a strong state of the art, and serve as encouragement for future shared tasks that explore morphological analysis and generation with varying degrees of supervision.",The SIGMORPHON 2016 Shared Task---Morphological Reinflection,"The 2016 SIGMORPHON Shared Task was devoted to the problem of morphological reinflection. It introduced morphological datasets for 10 languages with diverse typological characteristics. The shared task drew submissions from 9 teams representing 11 institutions reflecting a variety of approaches to addressing supervised learning of reinflection. For the simplest task, inflection generation from lemmas, the best system averaged 95.56% exact-match accuracy across all languages, ranging from Maltese (88.99%) to Hungarian (99.30%). With the relatively large training datasets provided, recurrent neural network architectures consistently performed best-in fact, there was a significant margin between neural and non-neural approaches. The best neural approach, averaged over all tasks and languages, outperformed the best nonneural one by 13.76% absolute; on individual tasks and languages the gap in accuracy sometimes exceeded 60%. 
Overall, the results show a strong state of the art, and serve as encouragement for future shared tasks that explore morphological analysis and generation with varying degrees of supervision.",,"The SIGMORPHON 2016 Shared Task---Morphological Reinflection. The 2016 SIGMORPHON Shared Task was devoted to the problem of morphological reinflection. It introduced morphological datasets for 10 languages with diverse typological characteristics. The shared task drew submissions from 9 teams representing 11 institutions reflecting a variety of approaches to addressing supervised learning of reinflection. For the simplest task, inflection generation from lemmas, the best system averaged 95.56% exact-match accuracy across all languages, ranging from Maltese (88.99%) to Hungarian (99.30%). With the relatively large training datasets provided, recurrent neural network architectures consistently performed best-in fact, there was a significant margin between neural and non-neural approaches. The best neural approach, averaged over all tasks and languages, outperformed the best nonneural one by 13.76% absolute; on individual tasks and languages the gap in accuracy sometimes exceeded 60%. Overall, the results show a strong state of the art, and serve as encouragement for future shared tasks that explore morphological analysis and generation with varying degrees of supervision.",2016
wang-etal-2021-easy,https://aclanthology.org/2021.findings-acl.415,0,,,,,,,"As Easy as 1, 2, 3: Behavioural Testing of NMT Systems for Numerical Translation. Mistranslated numbers have the potential to cause serious effects, such as financial loss or medical misinformation. In this work we develop comprehensive assessments of the robustness of neural machine translation systems to numerical text via behavioural testing. We explore a variety of numerical translation capabilities a system is expected to exhibit and design effective test examples to expose system underperformance. We find that numerical mistranslation is a general issue: major commercial systems and state-of-the-art research models fail on many of our test examples, for high-and low-resource languages. Our tests reveal novel errors that have not previously been reported in NMT systems, to the best of our knowledge. Lastly, we discuss strategies to mitigate numerical mistranslation.","As Easy as 1, 2, 3: Behavioural Testing of {NMT} Systems for Numerical Translation","Mistranslated numbers have the potential to cause serious effects, such as financial loss or medical misinformation. In this work we develop comprehensive assessments of the robustness of neural machine translation systems to numerical text via behavioural testing. We explore a variety of numerical translation capabilities a system is expected to exhibit and design effective test examples to expose system underperformance. We find that numerical mistranslation is a general issue: major commercial systems and state-of-the-art research models fail on many of our test examples, for high-and low-resource languages. Our tests reveal novel errors that have not previously been reported in NMT systems, to the best of our knowledge. Lastly, we discuss strategies to mitigate numerical mistranslation.","As Easy as 1, 2, 3: Behavioural Testing of NMT Systems for Numerical Translation","Mistranslated numbers have the potential to cause serious effects, such as financial loss or medical misinformation. In this work we develop comprehensive assessments of the robustness of neural machine translation systems to numerical text via behavioural testing. We explore a variety of numerical translation capabilities a system is expected to exhibit and design effective test examples to expose system underperformance. We find that numerical mistranslation is a general issue: major commercial systems and state-of-the-art research models fail on many of our test examples, for high-and low-resource languages. Our tests reveal novel errors that have not previously been reported in NMT systems, to the best of our knowledge. Lastly, we discuss strategies to mitigate numerical mistranslation.",We thank all anonymous reviewers for their constructive comments. The authors acknowledge funding support by Facebook.,"As Easy as 1, 2, 3: Behavioural Testing of NMT Systems for Numerical Translation. Mistranslated numbers have the potential to cause serious effects, such as financial loss or medical misinformation. In this work we develop comprehensive assessments of the robustness of neural machine translation systems to numerical text via behavioural testing. We explore a variety of numerical translation capabilities a system is expected to exhibit and design effective test examples to expose system underperformance. 
We find that numerical mistranslation is a general issue: major commercial systems and state-of-the-art research models fail on many of our test examples, for high- and low-resource languages. Our tests reveal novel errors that have not previously been reported in NMT systems, to the best of our knowledge. Lastly, we discuss strategies to mitigate numerical mistranslation.",2021
pulman-1980-parsing,https://aclanthology.org/C80-1009,0,,,,,,,"Parsing and Syntactic Theory. It is argued that many constraints on syntactic rules are a consequence of simple assumptions about parsing mechanisms. If generally true, this suggests an interesting new line of research for syntactic theory.",Parsing and Syntactic Theory,"It is argued that many constraints on syntactic rules are a consequence of simple assumptions about parsing mechanisms. If generally true, this suggests an interesting new line of research for syntactic theory.",Parsing and Syntactic Theory,"It is argued that many constraints on syntactic rules are a consequence of simple assumptions about parsing mechanisms. If generally true, this suggests an interesting new line of research for syntactic theory.",,"Parsing and Syntactic Theory. It is argued that many constraints on syntactic rules are a consequence of simple assumptions about parsing mechanisms. If generally true, this suggests an interesting new line of research for syntactic theory.",1980
hajishirzi-etal-2013-joint,https://aclanthology.org/D13-1029,0,,,,,,,"Joint Coreference Resolution and Named-Entity Linking with Multi-Pass Sieves. Many errors in coreference resolution come from semantic mismatches due to inadequate world knowledge. Errors in named-entity linking (NEL), on the other hand, are often caused by superficial modeling of entity context. This paper demonstrates that these two tasks are complementary. We introduce NECO, a new model for named entity linking and coreference resolution, which solves both problems jointly, reducing the errors made on each. NECO extends the Stanford deterministic coreference system by automatically linking mentions to Wikipedia and introducing new NEL-informed mention-merging sieves. Linking improves mention-detection and enables new semantic attributes to be incorporated from Freebase, while coreference provides better context modeling by propagating named-entity links within mention clusters. Experiments show consistent improvements across a number of datasets and experimental conditions, including over 11% reduction in MUC coreference error and nearly 21% reduction in F1 NEL error on ACE 2004 newswire data.",Joint Coreference Resolution and Named-Entity Linking with Multi-Pass Sieves,"Many errors in coreference resolution come from semantic mismatches due to inadequate world knowledge. Errors in named-entity linking (NEL), on the other hand, are often caused by superficial modeling of entity context. This paper demonstrates that these two tasks are complementary. We introduce NECO, a new model for named entity linking and coreference resolution, which solves both problems jointly, reducing the errors made on each. NECO extends the Stanford deterministic coreference system by automatically linking mentions to Wikipedia and introducing new NEL-informed mention-merging sieves. Linking improves mention-detection and enables new semantic attributes to be incorporated from Freebase, while coreference provides better context modeling by propagating named-entity links within mention clusters. Experiments show consistent improvements across a number of datasets and experimental conditions, including over 11% reduction in MUC coreference error and nearly 21% reduction in F1 NEL error on ACE 2004 newswire data.",Joint Coreference Resolution and Named-Entity Linking with Multi-Pass Sieves,"Many errors in coreference resolution come from semantic mismatches due to inadequate world knowledge. Errors in named-entity linking (NEL), on the other hand, are often caused by superficial modeling of entity context. This paper demonstrates that these two tasks are complementary. We introduce NECO, a new model for named entity linking and coreference resolution, which solves both problems jointly, reducing the errors made on each. NECO extends the Stanford deterministic coreference system by automatically linking mentions to Wikipedia and introducing new NEL-informed mention-merging sieves. Linking improves mention-detection and enables new semantic attributes to be incorporated from Freebase, while coreference provides better context modeling by propagating named-entity links within mention clusters. 
Experiments show consistent improvements across a number of datasets and experimental conditions, including over 11% reduction in MUC coreference error and nearly 21% reduction in F1 NEL error on ACE 2004 newswire data.","The research was supported in part by grants from DARPA under the DEFT program through the AFRL (FA8750-13-2-0019) and the CSSG (N11AP20020), the ONR (N00014-12-1-0211), and the NSF (IIS-1115966). Support was also provided by a gift from Google, an NSF Graduate Research Fellowship, and the WRF / TJ Cable Professorship. The authors thank Greg Durrett, Heeyoung Lee, Mitchell Koch, Xiao Ling, Mark Yatskar, Kenton Lee, Eunsol Choi, Gabriel Schubiner, Nicholas FitzGerald, Tom Kwiatkowski, and the anonymous reviewers for helpful comments and feedback on the work.","Joint Coreference Resolution and Named-Entity Linking with Multi-Pass Sieves. Many errors in coreference resolution come from semantic mismatches due to inadequate world knowledge. Errors in named-entity linking (NEL), on the other hand, are often caused by superficial modeling of entity context. This paper demonstrates that these two tasks are complementary. We introduce NECO, a new model for named entity linking and coreference resolution, which solves both problems jointly, reducing the errors made on each. NECO extends the Stanford deterministic coreference system by automatically linking mentions to Wikipedia and introducing new NEL-informed mention-merging sieves. Linking improves mention-detection and enables new semantic attributes to be incorporated from Freebase, while coreference provides better context modeling by propagating named-entity links within mention clusters. Experiments show consistent improvements across a number of datasets and experimental conditions, including over 11% reduction in MUC coreference error and nearly 21% reduction in F1 NEL error on ACE 2004 newswire data.",2013
moraes-etal-2014-adapting,https://aclanthology.org/W14-4409,1,,,,education,,,"Adapting Graph Summaries to the Users' Reading Levels. Deciding on the complexity of a generated text in NLG systems is a contentious task. Some systems propose the generation of simple text for low-skilled readers; some choose what they anticipate to be a ""good measure"" of complexity by balancing sentence length and number of sentences (using scales such as the D-level sentence complexity) for the text; while others target high-skilled readers. In this work, we discuss an approach that aims to leverage the experience of the reader when reading generated text by matching the syntactic complexity of the generated text to the reading level of the surrounding text. We propose an approach for sentence aggregation and lexical choice that allows generated summaries of line graphs in multimodal articles available online to match the reading level of the text of the article in which the graphs appear. The technique is developed in the context of the SIGHT (Summarizing Information Graphics Textually) system. This paper tackles the micro planning phase of sentence generation discussing additionally the steps of lexical choice, and pronominalization.",Adapting Graph Summaries to the Users{'} Reading Levels,"Deciding on the complexity of a generated text in NLG systems is a contentious task. Some systems propose the generation of simple text for low-skilled readers; some choose what they anticipate to be a ""good measure"" of complexity by balancing sentence length and number of sentences (using scales such as the D-level sentence complexity) for the text; while others target high-skilled readers. In this work, we discuss an approach that aims to leverage the experience of the reader when reading generated text by matching the syntactic complexity of the generated text to the reading level of the surrounding text. We propose an approach for sentence aggregation and lexical choice that allows generated summaries of line graphs in multimodal articles available online to match the reading level of the text of the article in which the graphs appear. The technique is developed in the context of the SIGHT (Summarizing Information Graphics Textually) system. This paper tackles the micro planning phase of sentence generation discussing additionally the steps of lexical choice, and pronominalization.",Adapting Graph Summaries to the Users' Reading Levels,"Deciding on the complexity of a generated text in NLG systems is a contentious task. Some systems propose the generation of simple text for low-skilled readers; some choose what they anticipate to be a ""good measure"" of complexity by balancing sentence length and number of sentences (using scales such as the D-level sentence complexity) for the text; while others target high-skilled readers. In this work, we discuss an approach that aims to leverage the experience of the reader when reading generated text by matching the syntactic complexity of the generated text to the reading level of the surrounding text. We propose an approach for sentence aggregation and lexical choice that allows generated summaries of line graphs in multimodal articles available online to match the reading level of the text of the article in which the graphs appear. The technique is developed in the context of the SIGHT (Summarizing Information Graphics Textually) system. 
This paper tackles the micro planning phase of sentence generation discussing additionally the steps of lexical choice, and pronominalization.",,"Adapting Graph Summaries to the Users' Reading Levels. Deciding on the complexity of a generated text in NLG systems is a contentious task. Some systems propose the generation of simple text for low-skilled readers; some choose what they anticipate to be a ""good measure"" of complexity by balancing sentence length and number of sentences (using scales such as the D-level sentence complexity) for the text; while others target high-skilled readers. In this work, we discuss an approach that aims to leverage the experience of the reader when reading generated text by matching the syntactic complexity of the generated text to the reading level of the surrounding text. We propose an approach for sentence aggregation and lexical choice that allows generated summaries of line graphs in multimodal articles available online to match the reading level of the text of the article in which the graphs appear. The technique is developed in the context of the SIGHT (Summarizing Information Graphics Textually) system. This paper tackles the micro planning phase of sentence generation discussing additionally the steps of lexical choice, and pronominalization.",2014
navigli-velardi-2002-automatic,http://www.lrec-conf.org/proceedings/lrec2002/pdf/47.pdf,0,,,,,,,Automatic Adaptation of WordNet to Domains. ,Automatic Adaptation of {W}ord{N}et to Domains,,Automatic Adaptation of WordNet to Domains,,,Automatic Adaptation of WordNet to Domains. ,2002
calixto-etal-2019-latent,https://aclanthology.org/P19-1642,0,,,,,,,"Latent Variable Model for Multi-modal Translation. In this work, we propose to model the interaction between visual and textual features for multi-modal neural machine translation (MMT) through a latent variable model. This latent variable can be seen as a multi-modal stochastic embedding of an image and its description in a foreign language. It is used in a target-language decoder and also to predict image features. Importantly, our model formulation utilises visual and textual inputs during training but does not require that images be available at test time. We show that our latent variable MMT formulation improves considerably over strong baselines, including a multi-task learning approach (Elliott and Kádár, 2017) and a conditional variational auto-encoder approach (Toyama et al., 2016). Finally, we show improvements due to (i) predicting image features in addition to only conditioning on them, (ii) imposing a constraint on the KL term to promote models with nonnegligible mutual information between inputs and latent variable, and (iii) by training on additional target-language image descriptions (i.e. synthetic data).",Latent Variable Model for Multi-modal Translation,"In this work, we propose to model the interaction between visual and textual features for multi-modal neural machine translation (MMT) through a latent variable model. This latent variable can be seen as a multi-modal stochastic embedding of an image and its description in a foreign language. It is used in a target-language decoder and also to predict image features. Importantly, our model formulation utilises visual and textual inputs during training but does not require that images be available at test time. We show that our latent variable MMT formulation improves considerably over strong baselines, including a multi-task learning approach (Elliott and Kádár, 2017) and a conditional variational auto-encoder approach (Toyama et al., 2016). Finally, we show improvements due to (i) predicting image features in addition to only conditioning on them, (ii) imposing a constraint on the KL term to promote models with nonnegligible mutual information between inputs and latent variable, and (iii) by training on additional target-language image descriptions (i.e. synthetic data).",Latent Variable Model for Multi-modal Translation,"In this work, we propose to model the interaction between visual and textual features for multi-modal neural machine translation (MMT) through a latent variable model. This latent variable can be seen as a multi-modal stochastic embedding of an image and its description in a foreign language. It is used in a target-language decoder and also to predict image features. Importantly, our model formulation utilises visual and textual inputs during training but does not require that images be available at test time. We show that our latent variable MMT formulation improves considerably over strong baselines, including a multi-task learning approach (Elliott and Kádár, 2017) and a conditional variational auto-encoder approach (Toyama et al., 2016). Finally, we show improvements due to (i) predicting image features in addition to only conditioning on them, (ii) imposing a constraint on the KL term to promote models with nonnegligible mutual information between inputs and latent variable, and (iii) by training on additional target-language image descriptions (i.e. 
synthetic data).",This work is supported by the Dutch Organisation for Scientific Research (NWO) VICI Grant nr. 277-89-002.,"Latent Variable Model for Multi-modal Translation. In this work, we propose to model the interaction between visual and textual features for multi-modal neural machine translation (MMT) through a latent variable model. This latent variable can be seen as a multi-modal stochastic embedding of an image and its description in a foreign language. It is used in a target-language decoder and also to predict image features. Importantly, our model formulation utilises visual and textual inputs during training but does not require that images be available at test time. We show that our latent variable MMT formulation improves considerably over strong baselines, including a multi-task learning approach (Elliott and Kádár, 2017) and a conditional variational auto-encoder approach (Toyama et al., 2016). Finally, we show improvements due to (i) predicting image features in addition to only conditioning on them, (ii) imposing a constraint on the KL term to promote models with nonnegligible mutual information between inputs and latent variable, and (iii) by training on additional target-language image descriptions (i.e. synthetic data).",2019
reynaert-etal-2010-balancing,http://www.lrec-conf.org/proceedings/lrec2010/pdf/549_Paper.pdf,0,,,,,,,"Balancing SoNaR: IPR versus Processing Issues in a 500-Million-Word Written Dutch Reference Corpus. In The Low Countries, a major reference corpus for written Dutch is currently being built. In this paper, we discuss the interplay between data acquisition and data processing during the creation of the SoNaR Corpus. Based on recent developments in traditional corpus compiling and new web harvesting approaches, SoNaR is designed to contain 500 million words, balanced over 36 text types including both traditional and new media texts. Beside its balanced design, every text sample included in SoNaR will have its IPR issues settled to the largest extent possible. This data collection task presents many challenges because every decision taken on the level of text acquisition has ramifications for the level of processing and the general usability of the corpus later on. As far as the traditional text types are concerned, each text brings its own processing requirements and issues. For new media texts-SMS, chat-the problem is even more complex, issues such as anonimity, recognizability and citation right, all present problems that have to be tackled one way or another. The solutions may actually lead to the creation of two corpora: a gigaword SoNaR, IPR-cleared for research purposes, and the smallerof commissioned size-more privacy compliant SoNaR, IPR-cleared for commercial purposes as well.",Balancing {S}o{N}a{R}: {IPR} versus Processing Issues in a 500-Million-Word Written {D}utch Reference Corpus,"In The Low Countries, a major reference corpus for written Dutch is currently being built. In this paper, we discuss the interplay between data acquisition and data processing during the creation of the SoNaR Corpus. Based on recent developments in traditional corpus compiling and new web harvesting approaches, SoNaR is designed to contain 500 million words, balanced over 36 text types including both traditional and new media texts. Beside its balanced design, every text sample included in SoNaR will have its IPR issues settled to the largest extent possible. This data collection task presents many challenges because every decision taken on the level of text acquisition has ramifications for the level of processing and the general usability of the corpus later on. As far as the traditional text types are concerned, each text brings its own processing requirements and issues. For new media texts-SMS, chat-the problem is even more complex, issues such as anonimity, recognizability and citation right, all present problems that have to be tackled one way or another. The solutions may actually lead to the creation of two corpora: a gigaword SoNaR, IPR-cleared for research purposes, and the smallerof commissioned size-more privacy compliant SoNaR, IPR-cleared for commercial purposes as well.",Balancing SoNaR: IPR versus Processing Issues in a 500-Million-Word Written Dutch Reference Corpus,"In The Low Countries, a major reference corpus for written Dutch is currently being built. In this paper, we discuss the interplay between data acquisition and data processing during the creation of the SoNaR Corpus. Based on recent developments in traditional corpus compiling and new web harvesting approaches, SoNaR is designed to contain 500 million words, balanced over 36 text types including both traditional and new media texts. 
Beside its balanced design, every text sample included in SoNaR will have its IPR issues settled to the largest extent possible. This data collection task presents many challenges because every decision taken on the level of text acquisition has ramifications for the level of processing and the general usability of the corpus later on. As far as the traditional text types are concerned, each text brings its own processing requirements and issues. For new media texts (SMS, chat), the problem is even more complex: issues such as anonymity, recognizability and citation right all present problems that have to be tackled one way or another. The solutions may actually lead to the creation of two corpora: a gigaword SoNaR, IPR-cleared for research purposes, and the smaller, commissioned-size, more privacy-compliant SoNaR, IPR-cleared for commercial purposes as well.",The SoNaR project is funded by the Nederlandse Taalunie (NTU: Dutch Language Union) within the framework of the STEVIN programme under grant number STE07014. See also http://taalunieversum.org/taal/technologie/stevin/,"Balancing SoNaR: IPR versus Processing Issues in a 500-Million-Word Written Dutch Reference Corpus. In The Low Countries, a major reference corpus for written Dutch is currently being built. In this paper, we discuss the interplay between data acquisition and data processing during the creation of the SoNaR Corpus. Based on recent developments in traditional corpus compiling and new web harvesting approaches, SoNaR is designed to contain 500 million words, balanced over 36 text types including both traditional and new media texts. Beside its balanced design, every text sample included in SoNaR will have its IPR issues settled to the largest extent possible. This data collection task presents many challenges because every decision taken on the level of text acquisition has ramifications for the level of processing and the general usability of the corpus later on. As far as the traditional text types are concerned, each text brings its own processing requirements and issues. For new media texts (SMS, chat), the problem is even more complex: issues such as anonymity, recognizability and citation right all present problems that have to be tackled one way or another. The solutions may actually lead to the creation of two corpora: a gigaword SoNaR, IPR-cleared for research purposes, and the smaller, commissioned-size, more privacy-compliant SoNaR, IPR-cleared for commercial purposes as well.",2010
li-etal-2009-chinese,https://aclanthology.org/W09-0433,0,,,,,,,"Chinese Syntactic Reordering for Adequate Generation of Korean Verbal Phrases in Chinese-to-Korean SMT. Chinese and Korean belong to different language families in terms of word-order and morphological typology. Chinese is an SVO and morphologically poor language while Korean is an SOV and morphologically rich one. In Chinese-to-Korean SMT systems, systematic differences between the verbal systems of the two languages make the generation of Korean verbal phrases difficult. To resolve the difficulties, we address two issues in this paper. The first issue is that the verb position is different from the viewpoint of word-order typology. The second is the difficulty of complex morphology generation of Korean verbs from the viewpoint of morphological typology. We propose a Chinese syntactic reordering that is better at generating Korean verbal phrases in Chinese-to-Korean SMT. Specifically, we consider reordering rules targeting Chinese verb phrases (VPs), preposition phrases (PPs), and modality-bearing words that are closely related to Korean verbal phrases. We verify our system with two corpora of different domains. Our proposed approach significantly improves the performance of our system over a baseline phrased-based SMT system. The relative improvements in the two corpora are +9.32% and +5.43%, respectively.",{C}hinese Syntactic Reordering for Adequate Generation of {K}orean Verbal Phrases in {C}hinese-to-{K}orean {SMT},"Chinese and Korean belong to different language families in terms of word-order and morphological typology. Chinese is an SVO and morphologically poor language while Korean is an SOV and morphologically rich one. In Chinese-to-Korean SMT systems, systematic differences between the verbal systems of the two languages make the generation of Korean verbal phrases difficult. To resolve the difficulties, we address two issues in this paper. The first issue is that the verb position is different from the viewpoint of word-order typology. The second is the difficulty of complex morphology generation of Korean verbs from the viewpoint of morphological typology. We propose a Chinese syntactic reordering that is better at generating Korean verbal phrases in Chinese-to-Korean SMT. Specifically, we consider reordering rules targeting Chinese verb phrases (VPs), preposition phrases (PPs), and modality-bearing words that are closely related to Korean verbal phrases. We verify our system with two corpora of different domains. Our proposed approach significantly improves the performance of our system over a baseline phrased-based SMT system. The relative improvements in the two corpora are +9.32% and +5.43%, respectively.",Chinese Syntactic Reordering for Adequate Generation of Korean Verbal Phrases in Chinese-to-Korean SMT,"Chinese and Korean belong to different language families in terms of word-order and morphological typology. Chinese is an SVO and morphologically poor language while Korean is an SOV and morphologically rich one. In Chinese-to-Korean SMT systems, systematic differences between the verbal systems of the two languages make the generation of Korean verbal phrases difficult. To resolve the difficulties, we address two issues in this paper. The first issue is that the verb position is different from the viewpoint of word-order typology. The second is the difficulty of complex morphology generation of Korean verbs from the viewpoint of morphological typology. 
We propose a Chinese syntactic reordering that is better at generating Korean verbal phrases in Chinese-to-Korean SMT. Specifically, we consider reordering rules targeting Chinese verb phrases (VPs), preposition phrases (PPs), and modality-bearing words that are closely related to Korean verbal phrases. We verify our system with two corpora of different domains. Our proposed approach significantly improves the performance of our system over a baseline phrase-based SMT system. The relative improvements in the two corpora are +9.32% and +5.43%, respectively.",This work was supported in part by MKE & II-TA through the IT Leading R&D Support Project and also in part by the BK 21 Project in 2009.,"Chinese Syntactic Reordering for Adequate Generation of Korean Verbal Phrases in Chinese-to-Korean SMT. Chinese and Korean belong to different language families in terms of word-order and morphological typology. Chinese is an SVO and morphologically poor language while Korean is an SOV and morphologically rich one. In Chinese-to-Korean SMT systems, systematic differences between the verbal systems of the two languages make the generation of Korean verbal phrases difficult. To resolve the difficulties, we address two issues in this paper. The first issue is that the verb position is different from the viewpoint of word-order typology. The second is the difficulty of complex morphology generation of Korean verbs from the viewpoint of morphological typology. We propose a Chinese syntactic reordering that is better at generating Korean verbal phrases in Chinese-to-Korean SMT. Specifically, we consider reordering rules targeting Chinese verb phrases (VPs), preposition phrases (PPs), and modality-bearing words that are closely related to Korean verbal phrases. We verify our system with two corpora of different domains. Our proposed approach significantly improves the performance of our system over a baseline phrase-based SMT system. The relative improvements in the two corpora are +9.32% and +5.43%, respectively.",2009
yuen-etal-2004-morpheme,https://aclanthology.org/C04-1145,0,,,,,,,"Morpheme-based Derivation of Bipolar Semantic Orientation of Chinese Words. The evaluative character of a word is called its semantic orientation (SO). A positive SO indicates desirability (e.g. Good, Honest) and a negative SO indicates undesirability (e.g., Bad, Ugly). This paper presents a method, based on Turney (2003), for inferring the SO of a word from its statistical association with strongly-polarized words and morphemes in Chinese. It is noted that morphemes are much less numerous than words, and that also a small number of fundamental morphemes may be used in the modified system to great advantage. The algorithm was tested on 1,249 words (604 positive and 645 negative) in a corpus of 34 million words, and was run with 20 and 40 polarized words respectively, giving a high precision (79.96% to 81.05%), but a low recall (45.56% to 59.57%). The algorithm was then run with 20 polarized morphemes, or single characters, in the same corpus, giving a high precision of 80.23% and a high recall of 85.03%. We concluded that morphemes in Chinese, as in any language, constitute a distinct sub-lexical unit which, though small in number, has greater linguistic significance than words, as seen by the significant enhancement of results with a much smaller corpus than that required by Turney.",Morpheme-based Derivation of Bipolar Semantic Orientation of {C}hinese Words,"The evaluative character of a word is called its semantic orientation (SO). A positive SO indicates desirability (e.g. Good, Honest) and a negative SO indicates undesirability (e.g., Bad, Ugly). This paper presents a method, based on Turney (2003), for inferring the SO of a word from its statistical association with strongly-polarized words and morphemes in Chinese. It is noted that morphemes are much less numerous than words, and that also a small number of fundamental morphemes may be used in the modified system to great advantage. The algorithm was tested on 1,249 words (604 positive and 645 negative) in a corpus of 34 million words, and was run with 20 and 40 polarized words respectively, giving a high precision (79.96% to 81.05%), but a low recall (45.56% to 59.57%). The algorithm was then run with 20 polarized morphemes, or single characters, in the same corpus, giving a high precision of 80.23% and a high recall of 85.03%. We concluded that morphemes in Chinese, as in any language, constitute a distinct sub-lexical unit which, though small in number, has greater linguistic significance than words, as seen by the significant enhancement of results with a much smaller corpus than that required by Turney.",Morpheme-based Derivation of Bipolar Semantic Orientation of Chinese Words,"The evaluative character of a word is called its semantic orientation (SO). A positive SO indicates desirability (e.g. Good, Honest) and a negative SO indicates undesirability (e.g., Bad, Ugly). This paper presents a method, based on Turney (2003), for inferring the SO of a word from its statistical association with strongly-polarized words and morphemes in Chinese. It is noted that morphemes are much less numerous than words, and that also a small number of fundamental morphemes may be used in the modified system to great advantage. The algorithm was tested on 1,249 words (604 positive and 645 negative) in a corpus of 34 million words, and was run with 20 and 40 polarized words respectively, giving a high precision (79.96% to 81.05%), but a low recall (45.56% to 59.57%). 
The algorithm was then run with 20 polarized morphemes, or single characters, in the same corpus, giving a high precision of 80.23% and a high recall of 85.03%. We concluded that morphemes in Chinese, as in any language, constitute a distinct sub-lexical unit which, though small in number, has greater linguistic significance than words, as seen by the significant enhancement of results with a much smaller corpus than that required by Turney.",,"Morpheme-based Derivation of Bipolar Semantic Orientation of Chinese Words. The evaluative character of a word is called its semantic orientation (SO). A positive SO indicates desirability (e.g. Good, Honest) and a negative SO indicates undesirability (e.g., Bad, Ugly). This paper presents a method, based on Turney (2003), for inferring the SO of a word from its statistical association with strongly-polarized words and morphemes in Chinese. It is noted that morphemes are much less numerous than words, and that also a small number of fundamental morphemes may be used in the modified system to great advantage. The algorithm was tested on 1,249 words (604 positive and 645 negative) in a corpus of 34 million words, and was run with 20 and 40 polarized words respectively, giving a high precision (79.96% to 81.05%), but a low recall (45.56% to 59.57%). The algorithm was then run with 20 polarized morphemes, or single characters, in the same corpus, giving a high precision of 80.23% and a high recall of 85.03%. We concluded that morphemes in Chinese, as in any language, constitute a distinct sub-lexical unit which, though small in number, has greater linguistic significance than words, as seen by the significant enhancement of results with a much smaller corpus than that required by Turney.",2004
demirsahin-etal-2020-open,https://aclanthology.org/2020.lrec-1.804,0,,,,,,,"Open-source Multi-speaker Corpora of the English Accents in the British Isles. This paper presents a dataset of transcribed highquality audio of English sentences recorded by volunteers speaking with different accents of the British Isles. The dataset is intended for linguistic analysis as well as use for speech technologies. The recording scripts were curated specifically for accent elicitation, covering a variety of phonological phenomena and providing a high phoneme coverage. The scripts include pronunciations of global locations, major airlines and common personal names in different accents; and native speaker pronunciations of local words. Overlapping lines for all speakers were included for idiolect elicitation, which include the same or similar lines with other existing resources such as the CSTR VCTK corpus and the Speech Accent Archive to allow for easy comparison of personal and regional accents. The resulting corpora include over 31 hours of recordings from 120 volunteers who selfidentify as native speakers of Southern England, Midlands, Northern England, Welsh, Scottish and Irish varieties of English.",Open-source Multi-speaker Corpora of the {E}nglish Accents in the {B}ritish Isles,"This paper presents a dataset of transcribed highquality audio of English sentences recorded by volunteers speaking with different accents of the British Isles. The dataset is intended for linguistic analysis as well as use for speech technologies. The recording scripts were curated specifically for accent elicitation, covering a variety of phonological phenomena and providing a high phoneme coverage. The scripts include pronunciations of global locations, major airlines and common personal names in different accents; and native speaker pronunciations of local words. Overlapping lines for all speakers were included for idiolect elicitation, which include the same or similar lines with other existing resources such as the CSTR VCTK corpus and the Speech Accent Archive to allow for easy comparison of personal and regional accents. The resulting corpora include over 31 hours of recordings from 120 volunteers who selfidentify as native speakers of Southern England, Midlands, Northern England, Welsh, Scottish and Irish varieties of English.",Open-source Multi-speaker Corpora of the English Accents in the British Isles,"This paper presents a dataset of transcribed highquality audio of English sentences recorded by volunteers speaking with different accents of the British Isles. The dataset is intended for linguistic analysis as well as use for speech technologies. The recording scripts were curated specifically for accent elicitation, covering a variety of phonological phenomena and providing a high phoneme coverage. The scripts include pronunciations of global locations, major airlines and common personal names in different accents; and native speaker pronunciations of local words. Overlapping lines for all speakers were included for idiolect elicitation, which include the same or similar lines with other existing resources such as the CSTR VCTK corpus and the Speech Accent Archive to allow for easy comparison of personal and regional accents. 
The resulting corpora include over 31 hours of recordings from 120 volunteers who self-identify as native speakers of Southern England, Midlands, Northern England, Welsh, Scottish and Irish varieties of English.","The authors would like to thank Dawn Knight, Anna Jones and Alex Thomas from Cardiff University for their assistance in collecting the Welsh English data presented in this paper. The authors also thank Richard Sproat for his comments on the earlier drafts of this paper. Finally, the authors thank the anonymous reviewers for many helpful suggestions.","Open-source Multi-speaker Corpora of the English Accents in the British Isles. This paper presents a dataset of transcribed high-quality audio of English sentences recorded by volunteers speaking with different accents of the British Isles. The dataset is intended for linguistic analysis as well as use for speech technologies. The recording scripts were curated specifically for accent elicitation, covering a variety of phonological phenomena and providing a high phoneme coverage. The scripts include pronunciations of global locations, major airlines and common personal names in different accents; and native speaker pronunciations of local words. Overlapping lines for all speakers were included for idiolect elicitation, which include the same or similar lines with other existing resources such as the CSTR VCTK corpus and the Speech Accent Archive to allow for easy comparison of personal and regional accents. The resulting corpora include over 31 hours of recordings from 120 volunteers who self-identify as native speakers of Southern England, Midlands, Northern England, Welsh, Scottish and Irish varieties of English.",2020
kruengkrai-etal-2021-multi,https://aclanthology.org/2021.findings-acl.217,1,,,,disinformation_and_fake_news,,,"A Multi-Level Attention Model for Evidence-Based Fact Checking. Evidence-based fact checking aims to verify the truthfulness of a claim against evidence extracted from textual sources. Learning a representation that effectively captures relations between a claim and evidence can be challenging. Recent state-of-the-art approaches have developed increasingly sophisticated models based on graph structures. We present a simple model that can be trained on sequence structures. Our model enables inter-sentence attentions at different levels and can benefit from joint training. Results on a large-scale dataset for Fact Extraction and VERification (FEVER) show that our model outperforms the graph-based approaches and yields 1.09% and 1.42% improvements in label accuracy and FEVER score, respectively, over the best published model. 1",A Multi-Level Attention Model for Evidence-Based Fact Checking,"Evidence-based fact checking aims to verify the truthfulness of a claim against evidence extracted from textual sources. Learning a representation that effectively captures relations between a claim and evidence can be challenging. Recent state-of-the-art approaches have developed increasingly sophisticated models based on graph structures. We present a simple model that can be trained on sequence structures. Our model enables inter-sentence attentions at different levels and can benefit from joint training. Results on a large-scale dataset for Fact Extraction and VERification (FEVER) show that our model outperforms the graph-based approaches and yields 1.09% and 1.42% improvements in label accuracy and FEVER score, respectively, over the best published model. 1",A Multi-Level Attention Model for Evidence-Based Fact Checking,"Evidence-based fact checking aims to verify the truthfulness of a claim against evidence extracted from textual sources. Learning a representation that effectively captures relations between a claim and evidence can be challenging. Recent state-of-the-art approaches have developed increasingly sophisticated models based on graph structures. We present a simple model that can be trained on sequence structures. Our model enables inter-sentence attentions at different levels and can benefit from joint training. Results on a large-scale dataset for Fact Extraction and VERification (FEVER) show that our model outperforms the graph-based approaches and yields 1.09% and 1.42% improvements in label accuracy and FEVER score, respectively, over the best published model. 1","We thank Erica Cooper (NII) for providing valuable feedback on an earlier draft of this paper. This work is supported by JST CREST Grants (JPMJCR18A6 and JPMJCR20D3) and MEXT KAKENHI Grants (21H04906), Japan.","A Multi-Level Attention Model for Evidence-Based Fact Checking. Evidence-based fact checking aims to verify the truthfulness of a claim against evidence extracted from textual sources. Learning a representation that effectively captures relations between a claim and evidence can be challenging. Recent state-of-the-art approaches have developed increasingly sophisticated models based on graph structures. We present a simple model that can be trained on sequence structures. Our model enables inter-sentence attentions at different levels and can benefit from joint training. 
Results on a large-scale dataset for Fact Extraction and VERification (FEVER) show that our model outperforms the graph-based approaches and yields 1.09% and 1.42% improvements in label accuracy and FEVER score, respectively, over the best published model. 1",2021
liu-etal-2021-self,https://aclanthology.org/2021.naacl-main.334,1,,,,health,,,"Self-Alignment Pretraining for Biomedical Entity Representations. Despite the widespread success of self-supervised learning via masked language models (MLM), accurately capturing fine-grained semantic relationships in the biomedical domain remains a challenge. This is of paramount importance for entity-level tasks such as entity linking where the ability to model entity relations (especially synonymy) is pivotal. To address this challenge, we propose SAPBERT, a pretraining scheme that self-aligns the representation space of biomedical entities. We design a scalable metric learning framework that can leverage UMLS, a massive collection of biomedical ontologies with 4M+ concepts. In contrast with previous pipeline-based hybrid systems, SAPBERT offers an elegant one-model-for-all solution to the problem of medical entity linking (MEL), achieving a new state-of-the-art (SOTA) on six MEL benchmarking datasets. In the scientific domain, we achieve SOTA even without task-specific supervision. With substantial improvement over various domain-specific pretrained MLMs such as BIOBERT, SCIBERT and PUBMEDBERT, our pretraining scheme proves to be both effective and robust. 1",Self-Alignment Pretraining for Biomedical Entity Representations,"Despite the widespread success of self-supervised learning via masked language models (MLM), accurately capturing fine-grained semantic relationships in the biomedical domain remains a challenge. This is of paramount importance for entity-level tasks such as entity linking where the ability to model entity relations (especially synonymy) is pivotal. To address this challenge, we propose SAPBERT, a pretraining scheme that self-aligns the representation space of biomedical entities. We design a scalable metric learning framework that can leverage UMLS, a massive collection of biomedical ontologies with 4M+ concepts. In contrast with previous pipeline-based hybrid systems, SAPBERT offers an elegant one-model-for-all solution to the problem of medical entity linking (MEL), achieving a new state-of-the-art (SOTA) on six MEL benchmarking datasets. In the scientific domain, we achieve SOTA even without task-specific supervision. With substantial improvement over various domain-specific pretrained MLMs such as BIOBERT, SCIBERT and PUBMEDBERT, our pretraining scheme proves to be both effective and robust. 1",Self-Alignment Pretraining for Biomedical Entity Representations,"Despite the widespread success of self-supervised learning via masked language models (MLM), accurately capturing fine-grained semantic relationships in the biomedical domain remains a challenge. This is of paramount importance for entity-level tasks such as entity linking where the ability to model entity relations (especially synonymy) is pivotal. To address this challenge, we propose SAPBERT, a pretraining scheme that self-aligns the representation space of biomedical entities. We design a scalable metric learning framework that can leverage UMLS, a massive collection of biomedical ontologies with 4M+ concepts. In contrast with previous pipeline-based hybrid systems, SAPBERT offers an elegant one-model-for-all solution to the problem of medical entity linking (MEL), achieving a new state-of-the-art (SOTA) on six MEL benchmarking datasets. In the scientific domain, we achieve SOTA even without task-specific supervision. 
With substantial improvement over various domain-specific pretrained MLMs such as BIOBERT, SCIBERT and PUBMEDBERT, our pretraining scheme proves to be both effective and robust. 1",We thank the three reviewers and the Area Chair for their insightful comments and suggestions. FL is supported by Grace & Thomas C.H. Chan Cambridge Scholarship. NC and MB would like to,"Self-Alignment Pretraining for Biomedical Entity Representations. Despite the widespread success of self-supervised learning via masked language models (MLM), accurately capturing fine-grained semantic relationships in the biomedical domain remains a challenge. This is of paramount importance for entity-level tasks such as entity linking where the ability to model entity relations (especially synonymy) is pivotal. To address this challenge, we propose SAPBERT, a pretraining scheme that self-aligns the representation space of biomedical entities. We design a scalable metric learning framework that can leverage UMLS, a massive collection of biomedical ontologies with 4M+ concepts. In contrast with previous pipeline-based hybrid systems, SAPBERT offers an elegant one-model-for-all solution to the problem of medical entity linking (MEL), achieving a new state-of-the-art (SOTA) on six MEL benchmarking datasets. In the scientific domain, we achieve SOTA even without task-specific supervision. With substantial improvement over various domain-specific pretrained MLMs such as BIOBERT, SCIBERT and PUBMEDBERT, our pretraining scheme proves to be both effective and robust. 1",2021
liu-emerson-2022-learning,https://aclanthology.org/2022.acl-long.275,0,,,,,,,"Learning Functional Distributional Semantics with Visual Data. Functional Distributional Semantics is a recently proposed framework for learning distributional semantics that provides linguistic interpretability. It models the meaning of a word as a binary classifier rather than a numerical vector. In this work, we propose a method to train a Functional Distributional Semantics model with grounded visual data. We train it on the Visual Genome dataset, which is closer to the kind of data encountered in human language acquisition than a large text corpus. On four external evaluation datasets, our model outperforms previous work on learning semantics from Visual Genome. 1",Learning Functional Distributional Semantics with Visual Data,"Functional Distributional Semantics is a recently proposed framework for learning distributional semantics that provides linguistic interpretability. It models the meaning of a word as a binary classifier rather than a numerical vector. In this work, we propose a method to train a Functional Distributional Semantics model with grounded visual data. We train it on the Visual Genome dataset, which is closer to the kind of data encountered in human language acquisition than a large text corpus. On four external evaluation datasets, our model outperforms previous work on learning semantics from Visual Genome. 1",Learning Functional Distributional Semantics with Visual Data,"Functional Distributional Semantics is a recently proposed framework for learning distributional semantics that provides linguistic interpretability. It models the meaning of a word as a binary classifier rather than a numerical vector. In this work, we propose a method to train a Functional Distributional Semantics model with grounded visual data. We train it on the Visual Genome dataset, which is closer to the kind of data encountered in human language acquisition than a large text corpus. On four external evaluation datasets, our model outperforms previous work on learning semantics from Visual Genome. 1",,"Learning Functional Distributional Semantics with Visual Data. Functional Distributional Semantics is a recently proposed framework for learning distributional semantics that provides linguistic interpretability. It models the meaning of a word as a binary classifier rather than a numerical vector. In this work, we propose a method to train a Functional Distributional Semantics model with grounded visual data. We train it on the Visual Genome dataset, which is closer to the kind of data encountered in human language acquisition than a large text corpus. On four external evaluation datasets, our model outperforms previous work on learning semantics from Visual Genome. 1",2022
pultrova-2019-correlation,https://aclanthology.org/W19-8504,0,,,,,,,"Correlation between the gradability of Latin adjectives and the ability to form qualitative abstract nouns. Comparison is distinctly limited in scope among grammatical categories in that it is unable, for semantic reasons, to produce comparative and superlative forms for many representatives of the word class to which it applies as a category (adjectives and their derived adverbs). In Latin and other dead languages, it is nontrivial to decide with certainty whether an adjective is gradable or not: being non-native speakers, we cannot rely on linguistic intuition; nor can a definitive answer be reached by consulting the corpus of Latin texts (the fact that an item is not attested in the surviving corpus obviously does not mean that it did not exist in Latin). What needs to be found are properties of adjectives correlated with gradability/ nongradability that are directly discernible at the level of written language. The present contribution gives one such property, showing that there is a strong correlation between gradability and the ability of an adjective to form abstract nouns. 1 Comparison: conceptual vs grammatical category Comparison is a grammatical category that has for a long time practically escaped the attention of linguists studying Latin. Only relatively recently were detailed studies published on the phenomenon of comparison on a cognitive and functional basis, 1 investigating how two or more entities could be compared in a language, what patterns are used in these various ways of comparison in Latin, and what different meanings comparatives and superlatives may have. These studies clearly demonstratewhich is true in other languages as wellthat it does not hold that comparison in Latin is always carried out using the forms of comparative and superlative, nor does it hold that comparatives and superlatives always perform the basic function of simple comparison of two or more entities. It follows that it is useful, even necessary, as with other grammatical categories, to differentiate between comparison on the one hand as a conceptual category that is expressed at the level of the whole proposition (""Paul is higher than John"" = ""John is not as high as Paul""), and on the other hand comparison as a grammatical/morphological category (""the formal modification of some predicative wordmost often an adjective-representing a parameter of gradation or comparison"" 2). The present author is currently working on a monograph that examines the morphological category of Latin comparison. Put simply, she does not ask which means may be employed in Latin to express comparison, but how the forms of comparative and superlative are used. The present contribution deals with one question falling within the scope of this work. 2 Specific nature of category of comparison The grammatical category of comparison is distinctly limited, not being able to produce the forms of comparative and superlative from all the representatives of the word class to which it applies as a category (i.e. adjectives and their derived adverbs). A certain degree of limitation is not exceptional in itself (e.g. in the category of number there are singularia tantum and pluralia tantum; in the category of verb voice, intransitive verbs, for instance, cannot form personal passive forms; etc.); however, comparison is restricted to an exceptional degree. 
For example, according to the Czech National Corpus,",Correlation between the gradability of {L}atin adjectives and the ability to form qualitative abstract nouns,"Comparison is distinctly limited in scope among grammatical categories in that it is unable, for semantic reasons, to produce comparative and superlative forms for many representatives of the word class to which it applies as a category (adjectives and their derived adverbs). In Latin and other dead languages, it is nontrivial to decide with certainty whether an adjective is gradable or not: being non-native speakers, we cannot rely on linguistic intuition; nor can a definitive answer be reached by consulting the corpus of Latin texts (the fact that an item is not attested in the surviving corpus obviously does not mean that it did not exist in Latin). What needs to be found are properties of adjectives correlated with gradability/ nongradability that are directly discernible at the level of written language. The present contribution gives one such property, showing that there is a strong correlation between gradability and the ability of an adjective to form abstract nouns. 1 Comparison: conceptual vs grammatical category Comparison is a grammatical category that has for a long time practically escaped the attention of linguists studying Latin. Only relatively recently were detailed studies published on the phenomenon of comparison on a cognitive and functional basis, 1 investigating how two or more entities could be compared in a language, what patterns are used in these various ways of comparison in Latin, and what different meanings comparatives and superlatives may have. These studies clearly demonstratewhich is true in other languages as wellthat it does not hold that comparison in Latin is always carried out using the forms of comparative and superlative, nor does it hold that comparatives and superlatives always perform the basic function of simple comparison of two or more entities. It follows that it is useful, even necessary, as with other grammatical categories, to differentiate between comparison on the one hand as a conceptual category that is expressed at the level of the whole proposition (""Paul is higher than John"" = ""John is not as high as Paul""), and on the other hand comparison as a grammatical/morphological category (""the formal modification of some predicative wordmost often an adjective-representing a parameter of gradation or comparison"" 2). The present author is currently working on a monograph that examines the morphological category of Latin comparison. Put simply, she does not ask which means may be employed in Latin to express comparison, but how the forms of comparative and superlative are used. The present contribution deals with one question falling within the scope of this work. 2 Specific nature of category of comparison The grammatical category of comparison is distinctly limited, not being able to produce the forms of comparative and superlative from all the representatives of the word class to which it applies as a category (i.e. adjectives and their derived adverbs). A certain degree of limitation is not exceptional in itself (e.g. in the category of number there are singularia tantum and pluralia tantum; in the category of verb voice, intransitive verbs, for instance, cannot form personal passive forms; etc.); however, comparison is restricted to an exceptional degree. 
For example, according to the Czech National Corpus,",Correlation between the gradability of Latin adjectives and the ability to form qualitative abstract nouns,"Comparison is distinctly limited in scope among grammatical categories in that it is unable, for semantic reasons, to produce comparative and superlative forms for many representatives of the word class to which it applies as a category (adjectives and their derived adverbs). In Latin and other dead languages, it is nontrivial to decide with certainty whether an adjective is gradable or not: being non-native speakers, we cannot rely on linguistic intuition; nor can a definitive answer be reached by consulting the corpus of Latin texts (the fact that an item is not attested in the surviving corpus obviously does not mean that it did not exist in Latin). What needs to be found are properties of adjectives correlated with gradability/ nongradability that are directly discernible at the level of written language. The present contribution gives one such property, showing that there is a strong correlation between gradability and the ability of an adjective to form abstract nouns. 1 Comparison: conceptual vs grammatical category Comparison is a grammatical category that has for a long time practically escaped the attention of linguists studying Latin. Only relatively recently were detailed studies published on the phenomenon of comparison on a cognitive and functional basis, 1 investigating how two or more entities could be compared in a language, what patterns are used in these various ways of comparison in Latin, and what different meanings comparatives and superlatives may have. These studies clearly demonstratewhich is true in other languages as wellthat it does not hold that comparison in Latin is always carried out using the forms of comparative and superlative, nor does it hold that comparatives and superlatives always perform the basic function of simple comparison of two or more entities. It follows that it is useful, even necessary, as with other grammatical categories, to differentiate between comparison on the one hand as a conceptual category that is expressed at the level of the whole proposition (""Paul is higher than John"" = ""John is not as high as Paul""), and on the other hand comparison as a grammatical/morphological category (""the formal modification of some predicative wordmost often an adjective-representing a parameter of gradation or comparison"" 2). The present author is currently working on a monograph that examines the morphological category of Latin comparison. Put simply, she does not ask which means may be employed in Latin to express comparison, but how the forms of comparative and superlative are used. The present contribution deals with one question falling within the scope of this work. 2 Specific nature of category of comparison The grammatical category of comparison is distinctly limited, not being able to produce the forms of comparative and superlative from all the representatives of the word class to which it applies as a category (i.e. adjectives and their derived adverbs). A certain degree of limitation is not exceptional in itself (e.g. in the category of number there are singularia tantum and pluralia tantum; in the category of verb voice, intransitive verbs, for instance, cannot form personal passive forms; etc.); however, comparison is restricted to an exceptional degree. 
For example, according to the Czech National Corpus,",,"Correlation between the gradability of Latin adjectives and the ability to form qualitative abstract nouns. Comparison is distinctly limited in scope among grammatical categories in that it is unable, for semantic reasons, to produce comparative and superlative forms for many representatives of the word class to which it applies as a category (adjectives and their derived adverbs). In Latin and other dead languages, it is nontrivial to decide with certainty whether an adjective is gradable or not: being non-native speakers, we cannot rely on linguistic intuition; nor can a definitive answer be reached by consulting the corpus of Latin texts (the fact that an item is not attested in the surviving corpus obviously does not mean that it did not exist in Latin). What needs to be found are properties of adjectives correlated with gradability/ nongradability that are directly discernible at the level of written language. The present contribution gives one such property, showing that there is a strong correlation between gradability and the ability of an adjective to form abstract nouns. 1 Comparison: conceptual vs grammatical category Comparison is a grammatical category that has for a long time practically escaped the attention of linguists studying Latin. Only relatively recently were detailed studies published on the phenomenon of comparison on a cognitive and functional basis, 1 investigating how two or more entities could be compared in a language, what patterns are used in these various ways of comparison in Latin, and what different meanings comparatives and superlatives may have. These studies clearly demonstratewhich is true in other languages as wellthat it does not hold that comparison in Latin is always carried out using the forms of comparative and superlative, nor does it hold that comparatives and superlatives always perform the basic function of simple comparison of two or more entities. It follows that it is useful, even necessary, as with other grammatical categories, to differentiate between comparison on the one hand as a conceptual category that is expressed at the level of the whole proposition (""Paul is higher than John"" = ""John is not as high as Paul""), and on the other hand comparison as a grammatical/morphological category (""the formal modification of some predicative wordmost often an adjective-representing a parameter of gradation or comparison"" 2). The present author is currently working on a monograph that examines the morphological category of Latin comparison. Put simply, she does not ask which means may be employed in Latin to express comparison, but how the forms of comparative and superlative are used. The present contribution deals with one question falling within the scope of this work. 2 Specific nature of category of comparison The grammatical category of comparison is distinctly limited, not being able to produce the forms of comparative and superlative from all the representatives of the word class to which it applies as a category (i.e. adjectives and their derived adverbs). A certain degree of limitation is not exceptional in itself (e.g. in the category of number there are singularia tantum and pluralia tantum; in the category of verb voice, intransitive verbs, for instance, cannot form personal passive forms; etc.); however, comparison is restricted to an exceptional degree. For example, according to the Czech National Corpus,",2019
webber-di-eugenio-1990-free,https://aclanthology.org/C90-2068,0,,,,,,,"Free Adjuncts in Natural Language Instructions. In this paper, we give a brief account of our project Animation from Instructions, the view of instructions it reflects, and the semantics of one construction-the free adjunct-that is common in Natural Language instructions. *We thank Mark Steedman, Hans Karlgren and Breck Baldwin for comments and advice. They are not to blame for any errors in the translation of their advice into the present form. The research was supported by DARPA grant no. N0014-85-K0018, and ARO grant no. DAAL03-89-C0031. 1This is not to suggest that animation can be driven solely from that common representation: other types of knowledge are clearly needed as well-including knowledge of motor skills and other performance characteristics.",Free Adjuncts in Natural Language Instructions,"In this paper, we give a brief account of our project Animation from Instructions, the view of instructions it reflects, and the semantics of one construction-the free adjunct-that is common in Natural Language instructions. *We thank Mark Steedman, Hans Karlgren and Breck Baldwin for comments and advice. They are not to blame for any errors in the translation of their advice into the present form. The research was supported by DARPA grant no. N0014-85-K0018, and ARO grant no. DAAL03-89-C0031. 1This is not to suggest that animation can be driven solely from that common representation: other types of knowledge are clearly needed as well-including knowledge of motor skills and other performance characteristics.",Free Adjuncts in Natural Language Instructions,"In this paper, we give a brief account of our project Animation from Instructions, the view of instructions it reflects, and the semantics of one construction-the free adjunct-that is common in Natural Language instructions. *We thank Mark Steedman, Hans Karlgren and Breck Baldwin for comments and advice. They are not to blame for any errors in the translation of their advice into the present form. The research was supported by DARPA grant no. N0014-85-K0018, and ARO grant no. DAAL03-89-C0031. 1This is not to suggest that animation can be driven solely from that common representation: other types of knowledge are clearly needed as well-including knowledge of motor skills and other performance characteristics.",,"Free Adjuncts in Natural Language Instructions. In this paper, we give a brief account of our project Animation from Instructions, the view of instructions it reflects, and the semantics of one construction-the free adjunct-that is common in Natural Language instructions. *We thank Mark Steedman, Hans Karlgren and Breck Baldwin for comments and advice. They are not to blame for any errors in the translation of their advice into the present form. The research was supported by DARPA grant no. N0014-85-K0018, and ARO grant no. DAAL03-89-C0031. 1This is not to suggest that animation can be driven solely from that common representation: other types of knowledge are clearly needed as well-including knowledge of motor skills and other performance characteristics.",1990
watanabe-sumita-2002-bidirectional,https://aclanthology.org/C02-1050,0,,,,,,,"Bidirectional Decoding for Statistical Machine Translation. This paper describes the right-to-left decoding method, which translates an input string by generating in right-to-left direction. In addition, presented is the bidirectional decoding method, that can take both of the advantages of left-to-right and right-to-left decoding method by generating output in both ways and by merging hypothesized partial outputs of two directions. The experimental results on Japanese and English translation showed that the right-to-left was better for English-to-Japanese translation, while the left-to-right was suitable for Japanese-to-English translation. It was also observed that the bidirectional method was better for English-to-Japanese translation.",Bidirectional Decoding for Statistical Machine Translation,"This paper describes the right-to-left decoding method, which translates an input string by generating in right-to-left direction. In addition, presented is the bidirectional decoding method, that can take both of the advantages of left-to-right and right-to-left decoding method by generating output in both ways and by merging hypothesized partial outputs of two directions. The experimental results on Japanese and English translation showed that the right-to-left was better for English-to-Japanese translation, while the left-to-right was suitable for Japanese-to-English translation. It was also observed that the bidirectional method was better for English-to-Japanese translation.",Bidirectional Decoding for Statistical Machine Translation,"This paper describes the right-to-left decoding method, which translates an input string by generating in right-to-left direction. In addition, presented is the bidirectional decoding method, that can take both of the advantages of left-to-right and right-to-left decoding method by generating output in both ways and by merging hypothesized partial outputs of two directions. The experimental results on Japanese and English translation showed that the right-to-left was better for English-to-Japanese translation, while the left-to-right was suitable for Japanese-to-English translation. It was also observed that the bidirectional method was better for English-to-Japanese translation.","The research reported here was supported in part by a contract with the Telecommunications Advancement Organization of Japan entitled, ""A study of speech dialogue translation technology based on a large corpus"".","Bidirectional Decoding for Statistical Machine Translation. This paper describes the right-to-left decoding method, which translates an input string by generating in right-to-left direction. In addition, presented is the bidirectional decoding method, that can take both of the advantages of left-to-right and right-to-left decoding method by generating output in both ways and by merging hypothesized partial outputs of two directions. The experimental results on Japanese and English translation showed that the right-to-left was better for English-to-Japanese translation, while the left-to-right was suitable for Japanese-to-English translation. It was also observed that the bidirectional method was better for English-to-Japanese translation.",2002
lee-1999-spoken,https://aclanthology.org/Y99-1019,0,,,,,,,"Spoken Language Systems - Technical Challenges for Speech and Natural Language Processing. Speech is the most natural means of communication among humans. It is also believed that spoken language processing will play a major role in establishing a universal interface between humans and machines. Most of the existing spoken language systems are rather primitive. For example, speech synthesizers for reading unrestrict text of any language is only producing machine-sounding speech. Automatic speech recognizers are capable of recognizing spoken language from a selective population doing a highly restricted task. In this talk, we present some examples of spoken language translation and dialogue systems and examine the capabilities and limitations of current spoken language technologies. We also discuss technical challenges for language researchers to help realize the vision of natural human-machine communication to allow humans to converse with machines in any language to access information and solve problems.",Spoken Language Systems - Technical Challenges for Speech and Natural Language Processing,"Speech is the most natural means of communication among humans. It is also believed that spoken language processing will play a major role in establishing a universal interface between humans and machines. Most of the existing spoken language systems are rather primitive. For example, speech synthesizers for reading unrestrict text of any language is only producing machine-sounding speech. Automatic speech recognizers are capable of recognizing spoken language from a selective population doing a highly restricted task. In this talk, we present some examples of spoken language translation and dialogue systems and examine the capabilities and limitations of current spoken language technologies. We also discuss technical challenges for language researchers to help realize the vision of natural human-machine communication to allow humans to converse with machines in any language to access information and solve problems.",Spoken Language Systems - Technical Challenges for Speech and Natural Language Processing,"Speech is the most natural means of communication among humans. It is also believed that spoken language processing will play a major role in establishing a universal interface between humans and machines. Most of the existing spoken language systems are rather primitive. For example, speech synthesizers for reading unrestrict text of any language is only producing machine-sounding speech. Automatic speech recognizers are capable of recognizing spoken language from a selective population doing a highly restricted task. In this talk, we present some examples of spoken language translation and dialogue systems and examine the capabilities and limitations of current spoken language technologies. We also discuss technical challenges for language researchers to help realize the vision of natural human-machine communication to allow humans to converse with machines in any language to access information and solve problems.",,"Spoken Language Systems - Technical Challenges for Speech and Natural Language Processing. Speech is the most natural means of communication among humans. It is also believed that spoken language processing will play a major role in establishing a universal interface between humans and machines. Most of the existing spoken language systems are rather primitive. 
For example, speech synthesizers for reading unrestrict text of any language is only producing machine-sounding speech. Automatic speech recognizers are capable of recognizing spoken language from a selective population doing a highly restricted task. In this talk, we present some examples of spoken language translation and dialogue systems and examine the capabilities and limitations of current spoken language technologies. We also discuss technical challenges for language researchers to help realize the vision of natural human-machine communication to allow humans to converse with machines in any language to access information and solve problems.",1999
peng-hu-2016-web,https://aclanthology.org/2016.amta-users.4,0,,,,,,,"Web App UI Layout Sniffer. layout doesn't work. efficiency is painfully low. Consequently, it dramatically slows down product delivery in today's",Web App {UI} Layout Sniffer,"layout doesn't work. efficiency is painfully low. Consequently, it dramatically slows down product delivery in today's",Web App UI Layout Sniffer,"layout doesn't work. efficiency is painfully low. Consequently, it dramatically slows down product delivery in today's",,"Web App UI Layout Sniffer. layout doesn't work. efficiency is painfully low. Consequently, it dramatically slows down product delivery in today's",2016
wang-cardie-2012-focused,https://aclanthology.org/W12-1642,0,,,,,,,"Focused Meeting Summarization via Unsupervised Relation Extraction. We present a novel unsupervised framework for focused meeting summarization that views the problem as an instance of relation extraction. We adapt an existing in-domain relation learner (Chen et al., 2011) by exploiting a set of task-specific constraints and features. We evaluate the approach on a decision summarization task and show that it outperforms unsupervised utterance-level extractive summarization baselines as well as an existing generic relation-extraction-based summarization method. Moreover, our approach produces summaries competitive with those generated by supervised methods in terms of the standard ROUGE score.",Focused Meeting Summarization via Unsupervised Relation Extraction,"We present a novel unsupervised framework for focused meeting summarization that views the problem as an instance of relation extraction. We adapt an existing in-domain relation learner (Chen et al., 2011) by exploiting a set of task-specific constraints and features. We evaluate the approach on a decision summarization task and show that it outperforms unsupervised utterance-level extractive summarization baselines as well as an existing generic relation-extraction-based summarization method. Moreover, our approach produces summaries competitive with those generated by supervised methods in terms of the standard ROUGE score.",Focused Meeting Summarization via Unsupervised Relation Extraction,"We present a novel unsupervised framework for focused meeting summarization that views the problem as an instance of relation extraction. We adapt an existing in-domain relation learner (Chen et al., 2011) by exploiting a set of task-specific constraints and features. We evaluate the approach on a decision summarization task and show that it outperforms unsupervised utterance-level extractive summarization baselines as well as an existing generic relation-extraction-based summarization method. Moreover, our approach produces summaries competitive with those generated by supervised methods in terms of the standard ROUGE score.","Acknowledgments This work was supported in part by National Science Foundation Grants IIS-0968450 and IIS-1111176, and by a gift from Google.","Focused Meeting Summarization via Unsupervised Relation Extraction. We present a novel unsupervised framework for focused meeting summarization that views the problem as an instance of relation extraction. We adapt an existing in-domain relation learner (Chen et al., 2011) by exploiting a set of task-specific constraints and features. We evaluate the approach on a decision summarization task and show that it outperforms unsupervised utterance-level extractive summarization baselines as well as an existing generic relation-extraction-based summarization method. Moreover, our approach produces summaries competitive with those generated by supervised methods in terms of the standard ROUGE score.",2012
ritchie-etal-2006-find,https://aclanthology.org/W06-0804,0,,,,,,,"How to Find Better Index Terms Through Citations. We consider the question of how information from the textual context of citations in scientific papers could improve indexing of the cited papers. We first present examples which show that the context should in principle provide better and new index terms. We then discuss linguistic phenomena around citations and which type of processing would improve the automatic determination of the right context. We present a case study, studying the effect of combining the existing index terms of a paper with additional terms from papers citing that paper in our corpus. Finally, we discuss the need for experimentation for the practical validation of our claim.",How to Find Better Index Terms Through Citations,"We consider the question of how information from the textual context of citations in scientific papers could improve indexing of the cited papers. We first present examples which show that the context should in principle provide better and new index terms. We then discuss linguistic phenomena around citations and which type of processing would improve the automatic determination of the right context. We present a case study, studying the effect of combining the existing index terms of a paper with additional terms from papers citing that paper in our corpus. Finally, we discuss the need for experimentation for the practical validation of our claim.",How to Find Better Index Terms Through Citations,"We consider the question of how information from the textual context of citations in scientific papers could improve indexing of the cited papers. We first present examples which show that the context should in principle provide better and new index terms. We then discuss linguistic phenomena around citations and which type of processing would improve the automatic determination of the right context. We present a case study, studying the effect of combining the existing index terms of a paper with additional terms from papers citing that paper in our corpus. Finally, we discuss the need for experimentation for the practical validation of our claim.",,"How to Find Better Index Terms Through Citations. We consider the question of how information from the textual context of citations in scientific papers could improve indexing of the cited papers. We first present examples which show that the context should in principle provide better and new index terms. We then discuss linguistic phenomena around citations and which type of processing would improve the automatic determination of the right context. We present a case study, studying the effect of combining the existing index terms of a paper with additional terms from papers citing that paper in our corpus. Finally, we discuss the need for experimentation for the practical validation of our claim.",2006
lee-etal-2017-ntnu,https://aclanthology.org/S17-2165,1,,,,industry_innovation_infrastructure,,,"The NTNU System at SemEval-2017 Task 10: Extracting Keyphrases and Relations from Scientific Publications Using Multiple Conditional Random Fields. This study describes the design of the NTNU system for the ScienceIE task at the SemEval 2017 workshop. We use self-defined feature templates and multiple conditional random fields with extracted features to identify keyphrases along with categorized labels and their relations from scientific publications. A total of 16 teams participated in evaluation scenario 1 (subtasks A, B, and C), with only 7 teams competing in all subtasks. Our best micro-averaging F1 across the three subtasks is 0.23, ranking in the middle among all 16 submissions.",The {NTNU} System at {S}em{E}val-2017 Task 10: Extracting Keyphrases and Relations from Scientific Publications Using Multiple Conditional Random Fields,"This study describes the design of the NTNU system for the ScienceIE task at the SemEval 2017 workshop. We use self-defined feature templates and multiple conditional random fields with extracted features to identify keyphrases along with categorized labels and their relations from scientific publications. A total of 16 teams participated in evaluation scenario 1 (subtasks A, B, and C), with only 7 teams competing in all subtasks. Our best micro-averaging F1 across the three subtasks is 0.23, ranking in the middle among all 16 submissions.",The NTNU System at SemEval-2017 Task 10: Extracting Keyphrases and Relations from Scientific Publications Using Multiple Conditional Random Fields,"This study describes the design of the NTNU system for the ScienceIE task at the SemEval 2017 workshop. We use self-defined feature templates and multiple conditional random fields with extracted features to identify keyphrases along with categorized labels and their relations from scientific publications. A total of 16 teams participated in evaluation scenario 1 (subtasks A, B, and C), with only 7 teams competing in all subtasks. Our best micro-averaging F1 across the three subtasks is 0.23, ranking in the middle among all 16 submissions.","This study was partially supported by the Ministry of Science and Technology, under the grant MOST 105-2221-E-003-020-MY2 and the ""Aim for the Top University Project"" and ""Center of Learning Technology for Chinese"" of National Taiwan Normal University, sponsored by the Ministry of Education, Taiwan.","The NTNU System at SemEval-2017 Task 10: Extracting Keyphrases and Relations from Scientific Publications Using Multiple Conditional Random Fields. This study describes the design of the NTNU system for the ScienceIE task at the SemEval 2017 workshop. We use self-defined feature templates and multiple conditional random fields with extracted features to identify keyphrases along with categorized labels and their relations from scientific publications. A total of 16 teams participated in evaluation scenario 1 (subtasks A, B, and C), with only 7 teams competing in all subtasks. Our best micro-averaging F1 across the three subtasks is 0.23, ranking in the middle among all 16 submissions.",2017
polisciuc-etal-2015-understanding,https://aclanthology.org/W15-2810,1,,,,sustainable_cities,,,"Understanding Urban Land Use through the Visualization of Points of Interest. Semantic data regarding points of interest in urban areas are hard to visualize. Due to the high number of points and categories they belong, as well as the associated textual information, maps become heavily cluttered and hard to read. Using traditional visualization techniques (e.g. dot distribution maps, typographic maps) partially solve this problem. Although, these techniques address different issues of the problem, their combination is hard and typically results in an efficient visualization. In our approach, we present a method to represent clusters of points of interest as shapes, which is based on vacuum package metaphor. The calculated shapes characterize sets of points and allow their use as containers for textual information. Additionally, we present a strategy for placing text onto polygons. The suggested method can be used in interactive visual exploration of semantic data distributed in space, and for creating maps with similar characteristics of dot distribution maps, but using shapes instead of points.",Understanding Urban Land Use through the Visualization of Points of Interest,"Semantic data regarding points of interest in urban areas are hard to visualize. Due to the high number of points and categories they belong, as well as the associated textual information, maps become heavily cluttered and hard to read. Using traditional visualization techniques (e.g. dot distribution maps, typographic maps) partially solve this problem. Although, these techniques address different issues of the problem, their combination is hard and typically results in an efficient visualization. In our approach, we present a method to represent clusters of points of interest as shapes, which is based on vacuum package metaphor. The calculated shapes characterize sets of points and allow their use as containers for textual information. Additionally, we present a strategy for placing text onto polygons. The suggested method can be used in interactive visual exploration of semantic data distributed in space, and for creating maps with similar characteristics of dot distribution maps, but using shapes instead of points.",Understanding Urban Land Use through the Visualization of Points of Interest,"Semantic data regarding points of interest in urban areas are hard to visualize. Due to the high number of points and categories they belong, as well as the associated textual information, maps become heavily cluttered and hard to read. Using traditional visualization techniques (e.g. dot distribution maps, typographic maps) partially solve this problem. Although, these techniques address different issues of the problem, their combination is hard and typically results in an efficient visualization. In our approach, we present a method to represent clusters of points of interest as shapes, which is based on vacuum package metaphor. The calculated shapes characterize sets of points and allow their use as containers for textual information. Additionally, we present a strategy for placing text onto polygons. 
The suggested method can be used in interactive visual exploration of semantic data distributed in space, and for creating maps with similar characteristics of dot distribution maps, but using shapes instead of points.",This work was supported by the InfoCrowds project -FCT-PTDC/ECM-TRA/1898/2012FCT.,"Understanding Urban Land Use through the Visualization of Points of Interest. Semantic data regarding points of interest in urban areas are hard to visualize. Due to the high number of points and categories they belong, as well as the associated textual information, maps become heavily cluttered and hard to read. Using traditional visualization techniques (e.g. dot distribution maps, typographic maps) partially solve this problem. Although, these techniques address different issues of the problem, their combination is hard and typically results in an efficient visualization. In our approach, we present a method to represent clusters of points of interest as shapes, which is based on vacuum package metaphor. The calculated shapes characterize sets of points and allow their use as containers for textual information. Additionally, we present a strategy for placing text onto polygons. The suggested method can be used in interactive visual exploration of semantic data distributed in space, and for creating maps with similar characteristics of dot distribution maps, but using shapes instead of points.",2015
rothlisberger-2002-cls,https://aclanthology.org/2002.tc-1.11,0,,,,,,,"CLS Workflow - a translation workflow system. As the translation industry is faced with ever more challenging deadlines to meet and production costs to keep under tight control, translation companies need to find efficient ways of managing their work processes. CLS Corporate Language Services AG, a translation provider for the financial services and telecoms industries has tackled this issue by developing their own workflow application based on Lotus Notes. The system's various modules have been developed and enhanced over the last four years. Current enhancement projects include interfaces to the accounting tool used in the company, web-based information systems for clients and translation providers and the close integration of some of the CAT tools the company uses.",{CLS} Workflow - a translation workflow system,"As the translation industry is faced with ever more challenging deadlines to meet and production costs to keep under tight control, translation companies need to find efficient ways of managing their work processes. CLS Corporate Language Services AG, a translation provider for the financial services and telecoms industries has tackled this issue by developing their own workflow application based on Lotus Notes. The system's various modules have been developed and enhanced over the last four years. Current enhancement projects include interfaces to the accounting tool used in the company, web-based information systems for clients and translation providers and the close integration of some of the CAT tools the company uses.",CLS Workflow - a translation workflow system,"As the translation industry is faced with ever more challenging deadlines to meet and production costs to keep under tight control, translation companies need to find efficient ways of managing their work processes. CLS Corporate Language Services AG, a translation provider for the financial services and telecoms industries has tackled this issue by developing their own workflow application based on Lotus Notes. The system's various modules have been developed and enhanced over the last four years. Current enhancement projects include interfaces to the accounting tool used in the company, web-based information systems for clients and translation providers and the close integration of some of the CAT tools the company uses.",,"CLS Workflow - a translation workflow system. As the translation industry is faced with ever more challenging deadlines to meet and production costs to keep under tight control, translation companies need to find efficient ways of managing their work processes. CLS Corporate Language Services AG, a translation provider for the financial services and telecoms industries has tackled this issue by developing their own workflow application based on Lotus Notes. The system's various modules have been developed and enhanced over the last four years. Current enhancement projects include interfaces to the accounting tool used in the company, web-based information systems for clients and translation providers and the close integration of some of the CAT tools the company uses.",2002
dobrovoljc-etal-2019-improving,https://aclanthology.org/W19-8004,0,,,,,,,"Improving UD processing via satellite resources for morphology. This paper presents the conversion of the reference language resources for Croatian and Slovenian morphology processing to UD morphological specifications. We show that the newly available training corpora and inflectional dictionaries improve the baseline stanfordnlp performance obtained on officially released UD datasets for lemmatization, morphology prediction and dependency parsing, illustrating the potential value of such satellite UD resources for languages with rich morphology.",Improving {UD} processing via satellite resources for morphology,"This paper presents the conversion of the reference language resources for Croatian and Slovenian morphology processing to UD morphological specifications. We show that the newly available training corpora and inflectional dictionaries improve the baseline stanfordnlp performance obtained on officially released UD datasets for lemmatization, morphology prediction and dependency parsing, illustrating the potential value of such satellite UD resources for languages with rich morphology.",Improving UD processing via satellite resources for morphology,"This paper presents the conversion of the reference language resources for Croatian and Slovenian morphology processing to UD morphological specifications. We show that the newly available training corpora and inflectional dictionaries improve the baseline stanfordnlp performance obtained on officially released UD datasets for lemmatization, morphology prediction and dependency parsing, illustrating the potential value of such satellite UD resources for languages with rich morphology.","The authors acknowledge the financial support from the Slovenian Research Agency through the research core funding no. P6-0411 (Language resources and technologies for Slovene language), the research project no. J6-8256 (New grammar of contemporary standard Slovene: sources and methods) and the Slovenian research infrastructure CLARIN.SI.","Improving UD processing via satellite resources for morphology. This paper presents the conversion of the reference language resources for Croatian and Slovenian morphology processing to UD morphological specifications. We show that the newly available training corpora and inflectional dictionaries improve the baseline stanfordnlp performance obtained on officially released UD datasets for lemmatization, morphology prediction and dependency parsing, illustrating the potential value of such satellite UD resources for languages with rich morphology.",2019
wolf-sonkin-etal-2018-structured,https://aclanthology.org/P18-1245,0,,,,,,,"A Structured Variational Autoencoder for Contextual Morphological Inflection. Statistical morphological inflectors are typically trained on fully supervised, type-level data. One remaining open research question is the following: How can we effectively exploit raw, token-level data to improve their performance? To this end, we introduce a novel generative latent-variable model for the semi-supervised learning of inflection generation. To enable posterior inference over the latent variables, we derive an efficient variational inference procedure based on the wake-sleep algorithm. We experiment on 23 languages, using the Universal Dependencies corpora in a simulated low-resource setting, and find improvements of over 10% absolute accuracy in some cases.",A Structured Variational Autoencoder for Contextual Morphological Inflection,"Statistical morphological inflectors are typically trained on fully supervised, type-level data. One remaining open research question is the following: How can we effectively exploit raw, token-level data to improve their performance? To this end, we introduce a novel generative latent-variable model for the semi-supervised learning of inflection generation. To enable posterior inference over the latent variables, we derive an efficient variational inference procedure based on the wake-sleep algorithm. We experiment on 23 languages, using the Universal Dependencies corpora in a simulated low-resource setting, and find improvements of over 10% absolute accuracy in some cases.",A Structured Variational Autoencoder for Contextual Morphological Inflection,"Statistical morphological inflectors are typically trained on fully supervised, type-level data. One remaining open research question is the following: How can we effectively exploit raw, token-level data to improve their performance? To this end, we introduce a novel generative latent-variable model for the semi-supervised learning of inflection generation. To enable posterior inference over the latent variables, we derive an efficient variational inference procedure based on the wake-sleep algorithm. We experiment on 23 languages, using the Universal Dependencies corpora in a simulated low-resource setting, and find improvements of over 10% absolute accuracy in some cases.",,"A Structured Variational Autoencoder for Contextual Morphological Inflection. Statistical morphological inflectors are typically trained on fully supervised, type-level data. One remaining open research question is the following: How can we effectively exploit raw, token-level data to improve their performance? To this end, we introduce a novel generative latent-variable model for the semi-supervised learning of inflection generation. To enable posterior inference over the latent variables, we derive an efficient variational inference procedure based on the wake-sleep algorithm. We experiment on 23 languages, using the Universal Dependencies corpora in a simulated low-resource setting, and find improvements of over 10% absolute accuracy in some cases.",2018
ahmed-etal-2020-multilingual,https://aclanthology.org/2020.lrec-1.516,0,,,,,,,"Multilingual Corpus Creation for Multilingual Semantic Similarity Task. In natural language processing, the performance of a semantic similarity task relies heavily on the availability of a large corpus. Various monolingual corpora are available (mainly English); but multilingual resources are very limited. In this work, we describe a semiautomated framework to create a multilingual corpus which can be used for the multilingual semantic similarity task. The similar sentence pairs are obtained by crawling bilingual websites, whereas the dissimilar sentence pairs are selected by applying topic modeling and an Open-AI GPT model on the similar sentence pairs. We focus on websites in the government, insurance, and banking domains to collect English-French and English-Spanish sentence pairs; however, this corpus creation approach can be applied to any other industry vertical provided that a bilingual website exists. We also show experimental results for multilingual semantic similarity to verify the quality of the corpus and demonstrate its usage.",Multilingual Corpus Creation for Multilingual Semantic Similarity Task,"In natural language processing, the performance of a semantic similarity task relies heavily on the availability of a large corpus. Various monolingual corpora are available (mainly English); but multilingual resources are very limited. In this work, we describe a semiautomated framework to create a multilingual corpus which can be used for the multilingual semantic similarity task. The similar sentence pairs are obtained by crawling bilingual websites, whereas the dissimilar sentence pairs are selected by applying topic modeling and an Open-AI GPT model on the similar sentence pairs. We focus on websites in the government, insurance, and banking domains to collect English-French and English-Spanish sentence pairs; however, this corpus creation approach can be applied to any other industry vertical provided that a bilingual website exists. We also show experimental results for multilingual semantic similarity to verify the quality of the corpus and demonstrate its usage.",Multilingual Corpus Creation for Multilingual Semantic Similarity Task,"In natural language processing, the performance of a semantic similarity task relies heavily on the availability of a large corpus. Various monolingual corpora are available (mainly English); but multilingual resources are very limited. In this work, we describe a semiautomated framework to create a multilingual corpus which can be used for the multilingual semantic similarity task. The similar sentence pairs are obtained by crawling bilingual websites, whereas the dissimilar sentence pairs are selected by applying topic modeling and an Open-AI GPT model on the similar sentence pairs. We focus on websites in the government, insurance, and banking domains to collect English-French and English-Spanish sentence pairs; however, this corpus creation approach can be applied to any other industry vertical provided that a bilingual website exists. We also show experimental results for multilingual semantic similarity to verify the quality of the corpus and demonstrate its usage.",This research was supported by Mitacs through the Mitacs Accelerate program. We also acknowledge the helpful comments provided by the reviewers.,"Multilingual Corpus Creation for Multilingual Semantic Similarity Task. 
In natural language processing, the performance of a semantic similarity task relies heavily on the availability of a large corpus. Various monolingual corpora are available (mainly English); but multilingual resources are very limited. In this work, we describe a semiautomated framework to create a multilingual corpus which can be used for the multilingual semantic similarity task. The similar sentence pairs are obtained by crawling bilingual websites, whereas the dissimilar sentence pairs are selected by applying topic modeling and an Open-AI GPT model on the similar sentence pairs. We focus on websites in the government, insurance, and banking domains to collect English-French and English-Spanish sentence pairs; however, this corpus creation approach can be applied to any other industry vertical provided that a bilingual website exists. We also show experimental results for multilingual semantic similarity to verify the quality of the corpus and demonstrate its usage.",2020
hermann-etal-2012-unsupervised,https://aclanthology.org/S12-1021,0,,,,,,,"An Unsupervised Ranking Model for Noun-Noun Compositionality. We propose an unsupervised system that learns continuous degrees of lexicality for noun-noun compounds, beating a strong baseline on several tasks. We demonstrate that the distributional representations of compounds and their parts can be used to learn a fine-grained representation of semantic contribution. Finally, we argue such a representation captures compositionality better than the current status-quo which treats compositionality as a binary classification problem.",An Unsupervised Ranking Model for Noun-Noun Compositionality,"We propose an unsupervised system that learns continuous degrees of lexicality for noun-noun compounds, beating a strong baseline on several tasks. We demonstrate that the distributional representations of compounds and their parts can be used to learn a fine-grained representation of semantic contribution. Finally, we argue such a representation captures compositionality better than the current status-quo which treats compositionality as a binary classification problem.",An Unsupervised Ranking Model for Noun-Noun Compositionality,"We propose an unsupervised system that learns continuous degrees of lexicality for noun-noun compounds, beating a strong baseline on several tasks. We demonstrate that the distributional representations of compounds and their parts can be used to learn a fine-grained representation of semantic contribution. Finally, we argue such a representation captures compositionality better than the current status-quo which treats compositionality as a binary classification problem.",The authors would like to acknowledge the use of the Oxford Supercomputing Centre (OSC) in carrying out this work.,"An Unsupervised Ranking Model for Noun-Noun Compositionality. We propose an unsupervised system that learns continuous degrees of lexicality for noun-noun compounds, beating a strong baseline on several tasks. We demonstrate that the distributional representations of compounds and their parts can be used to learn a fine-grained representation of semantic contribution. Finally, we argue such a representation captures compositionality better than the current status-quo which treats compositionality as a binary classification problem.",2012
babych-etal-2009-evaluation,https://aclanthology.org/2009.eamt-1.6,0,,,,,,,"Evaluation-Guided Pre-Editing of Source Text: Improving MT-Tractability of Light Verb Constructions. This paper reports an experiment on evaluating and improving MT quality of light-verb constructions (LVCs) - combinations of a 'semantically depleted' verb and its complement. Our method uses construction-level human evaluation for systematic discovery of mistranslated contexts and creating automatic pre-editing rules, which make the constructions more tractable for Rule-Based Machine Translation (RBMT) systems. For rewritten phrases we achieve about 40% reduction in the number of incomprehensible translations into English from both French and Russian. The proposed method can be used for enhancing automatic pre-editing functionality of state-of-the-art MT systems. It will allow MT users to create their own rewriting rules for frequently mistranslated constructions and contexts, going beyond existing systems' capabilities offered by user dictionaries and do-not-translate lists.",Evaluation-Guided Pre-Editing of Source Text: Improving {MT}-Tractability of Light Verb Constructions,"This paper reports an experiment on evaluating and improving MT quality of light-verb constructions (LVCs) - combinations of a 'semantically depleted' verb and its complement. Our method uses construction-level human evaluation for systematic discovery of mistranslated contexts and creating automatic pre-editing rules, which make the constructions more tractable for Rule-Based Machine Translation (RBMT) systems. For rewritten phrases we achieve about 40% reduction in the number of incomprehensible translations into English from both French and Russian. The proposed method can be used for enhancing automatic pre-editing functionality of state-of-the-art MT systems. It will allow MT users to create their own rewriting rules for frequently mistranslated constructions and contexts, going beyond existing systems' capabilities offered by user dictionaries and do-not-translate lists.",Evaluation-Guided Pre-Editing of Source Text: Improving MT-Tractability of Light Verb Constructions,"This paper reports an experiment on evaluating and improving MT quality of light-verb constructions (LVCs) - combinations of a 'semantically depleted' verb and its complement. Our method uses construction-level human evaluation for systematic discovery of mistranslated contexts and creating automatic pre-editing rules, which make the constructions more tractable for Rule-Based Machine Translation (RBMT) systems. For rewritten phrases we achieve about 40% reduction in the number of incomprehensible translations into English from both French and Russian. The proposed method can be used for enhancing automatic pre-editing functionality of state-of-the-art MT systems. It will allow MT users to create their own rewriting rules for frequently mistranslated constructions and contexts, going beyond existing systems' capabilities offered by user dictionaries and do-not-translate lists.",,"Evaluation-Guided Pre-Editing of Source Text: Improving MT-Tractability of Light Verb Constructions. This paper reports an experiment on evaluating and improving MT quality of light-verb constructions (LVCs) - combinations of a 'semantically depleted' verb and its complement. 
Our method uses construction-level human evaluation for systematic discovery of mistranslated contexts and creating automatic pre-editing rules, which make the constructions more tractable for Rule-Based Machine Translation (RBMT) systems. For rewritten phrases we achieve about 40% reduction in the number of incomprehensible translations into English from both French and Russian. The proposed method can be used for enhancing automatic pre-editing functionality of state-of-the-art MT systems. It will allow MT users to create their own rewriting rules for frequently mistranslated constructions and contexts, going beyond existing systems' capabilities offered by user dictionaries and do-not-translate lists.",2009
pan-etal-2019-twitter,https://aclanthology.org/P19-1252,0,,,,,,,"Twitter Homophily: Network Based Prediction of User's Occupation. In this paper, we investigate the importance of social network information compared to content information in the prediction of a Twitter user's occupational class. We show that the content information of a user's tweets, the profile descriptions of a user's follower/following community, and the user's social network provide useful information for classifying a user's occupational group. In our study, we extend an existing dataset for this problem, and we achieve significantly better performance by using social network homophily that has not been fully exploited in previous work. In our analysis, we found that by using the graph convolutional network to exploit social homophily, we can achieve competitive performance on this dataset with just a small fraction of the training data.",{T}witter Homophily: Network Based Prediction of User{'}s Occupation,"In this paper, we investigate the importance of social network information compared to content information in the prediction of a Twitter user's occupational class. We show that the content information of a user's tweets, the profile descriptions of a user's follower/following community, and the user's social network provide useful information for classifying a user's occupational group. In our study, we extend an existing dataset for this problem, and we achieve significantly better performance by using social network homophily that has not been fully exploited in previous work. In our analysis, we found that by using the graph convolutional network to exploit social homophily, we can achieve competitive performance on this dataset with just a small fraction of the training data.",Twitter Homophily: Network Based Prediction of User's Occupation,"In this paper, we investigate the importance of social network information compared to content information in the prediction of a Twitter user's occupational class. We show that the content information of a user's tweets, the profile descriptions of a user's follower/following community, and the user's social network provide useful information for classifying a user's occupational group. In our study, we extend an existing dataset for this problem, and we achieve significantly better performance by using social network homophily that has not been fully exploited in previous work. In our analysis, we found that by using the graph convolutional network to exploit social homophily, we can achieve competitive performance on this dataset with just a small fraction of the training data.",We would like to thank the reviewers for their helpful comments on our work. This work is supported by DSO grant DSOCL17061.,"Twitter Homophily: Network Based Prediction of User's Occupation. In this paper, we investigate the importance of social network information compared to content information in the prediction of a Twitter user's occupational class. We show that the content information of a user's tweets, the profile descriptions of a user's follower/following community, and the user's social network provide useful information for classifying a user's occupational group. In our study, we extend an existing dataset for this problem, and we achieve significantly better performance by using social network homophily that has not been fully exploited in previous work. 
In our analysis, we found that by using the graph convolutional network to exploit social homophily, we can achieve competitive performance on this dataset with just a small fraction of the training data.",2019
nguyen-chiang-2017-transfer,https://aclanthology.org/I17-2050,0,,,,,,,"Transfer Learning across Low-Resource, Related Languages for Neural Machine Translation. We present a simple method to improve neural translation of a low-resource language pair using parallel data from a related, also low-resource, language pair. The method is based on the transfer method of Zoph et al., but whereas their method ignores any source vocabulary overlap, ours exploits it. First, we split words using Byte Pair Encoding (BPE) to increase vocabulary overlap. Then, we train a model on the first language pair and transfer its parameters, including its source word embeddings, to another model and continue training on the second language pair. Our experiments show that transfer learning helps word-based translation only slightly, but when used on top of a much stronger BPE baseline, it yields larger improvements of up to 4.3 BLEU.","Transfer Learning across Low-Resource, Related Languages for Neural Machine Translation","We present a simple method to improve neural translation of a low-resource language pair using parallel data from a related, also low-resource, language pair. The method is based on the transfer method of Zoph et al., but whereas their method ignores any source vocabulary overlap, ours exploits it. First, we split words using Byte Pair Encoding (BPE) to increase vocabulary overlap. Then, we train a model on the first language pair and transfer its parameters, including its source word embeddings, to another model and continue training on the second language pair. Our experiments show that transfer learning helps word-based translation only slightly, but when used on top of a much stronger BPE baseline, it yields larger improvements of up to 4.3 BLEU.","Transfer Learning across Low-Resource, Related Languages for Neural Machine Translation","We present a simple method to improve neural translation of a low-resource language pair using parallel data from a related, also low-resource, language pair. The method is based on the transfer method of Zoph et al., but whereas their method ignores any source vocabulary overlap, ours exploits it. First, we split words using Byte Pair Encoding (BPE) to increase vocabulary overlap. Then, we train a model on the first language pair and transfer its parameters, including its source word embeddings, to another model and continue training on the second language pair. Our experiments show that transfer learning helps word-based translation only slightly, but when used on top of a much stronger BPE baseline, it yields larger improvements of up to 4.3 BLEU.","This research was supported in part by University of Southern California subcontract 67108176 under DARPA contract HR0011-15-C-0115. Nguyen was supported by a fellowship from the Vietnam Education Foundation. We would like to express our great appreciation to Dr. Sharon Hu for letting us use her group's GPU cluster (supported by NSF award 1629914), and to NVIDIA corporation for the donation of a Titan X GPU.","Transfer Learning across Low-Resource, Related Languages for Neural Machine Translation. We present a simple method to improve neural translation of a low-resource language pair using parallel data from a related, also low-resource, language pair. The method is based on the transfer method of Zoph et al., but whereas their method ignores any source vocabulary overlap, ours exploits it. First, we split words using Byte Pair Encoding (BPE) to increase vocabulary overlap. 
Then, we train a model on the first language pair and transfer its parameters, including its source word embeddings, to another model and continue training on the second language pair. Our experiments show that transfer learning helps word-based translation only slightly, but when used on top of a much stronger BPE baseline, it yields larger improvements of up to 4.3 BLEU.",2017
kochmar-shutova-2017-modelling,https://aclanthology.org/W17-5033,1,,,,education,,,"Modelling semantic acquisition in second language learning. Using methods of statistical analysis, we investigate how semantic knowledge is acquired in English as a second language and evaluate the pace of development across a number of predicate types and content word combinations, as well as across the levels of language proficiency and native languages. Our exploratory study helps identify the most problematic areas for language learners with different backgrounds and at different stages of learning.",Modelling semantic acquisition in second language learning,"Using methods of statistical analysis, we investigate how semantic knowledge is acquired in English as a second language and evaluate the pace of development across a number of predicate types and content word combinations, as well as across the levels of language proficiency and native languages. Our exploratory study helps identify the most problematic areas for language learners with different backgrounds and at different stages of learning.",Modelling semantic acquisition in second language learning,"Using methods of statistical analysis, we investigate how semantic knowledge is acquired in English as a second language and evaluate the pace of development across a number of predicate types and content word combinations, as well as across the levels of language proficiency and native languages. Our exploratory study helps identify the most problematic areas for language learners with different backgrounds and at different stages of learning.",We are grateful to the BEA reviewers for their helpful and instructive feedback. Ekaterina Kochmar's research is supported by Cambridge English Language Assessment via the ALTA Institute. Ekaterina Shutova's research is supported by the Leverhulme Trust Early Career Fellowship.,"Modelling semantic acquisition in second language learning. Using methods of statistical analysis, we investigate how semantic knowledge is acquired in English as a second language and evaluate the pace of development across a number of predicate types and content word combinations, as well as across the levels of language proficiency and native languages. Our exploratory study helps identify the most problematic areas for language learners with different backgrounds and at different stages of learning.",2017
alfonseca-etal-2013-heady,https://aclanthology.org/P13-1122,0,,,,,,,"HEADY: News headline abstraction through event pattern clustering. This paper presents HEADY: a novel, abstractive approach for headline generation from news collections. From a web-scale corpus of English news, we mine syntactic patterns that a Noisy-OR model generalizes into event descriptions. At inference time, we query the model with the patterns observed in an unseen news collection, identify the event that better captures the gist of the collection and retrieve the most appropriate pattern to generate a headline. HEADY improves over a state-of-the-art open-domain title abstraction method, bridging half of the gap that separates it from extractive methods using human-generated titles in manual evaluations, and performs comparably to human-generated headlines as evaluated with ROUGE.",{HEADY}: News headline abstraction through event pattern clustering,"This paper presents HEADY: a novel, abstractive approach for headline generation from news collections. From a web-scale corpus of English news, we mine syntactic patterns that a Noisy-OR model generalizes into event descriptions. At inference time, we query the model with the patterns observed in an unseen news collection, identify the event that better captures the gist of the collection and retrieve the most appropriate pattern to generate a headline. HEADY improves over a state-of-the-art open-domain title abstraction method, bridging half of the gap that separates it from extractive methods using human-generated titles in manual evaluations, and performs comparably to human-generated headlines as evaluated with ROUGE.",HEADY: News headline abstraction through event pattern clustering,"This paper presents HEADY: a novel, abstractive approach for headline generation from news collections. From a web-scale corpus of English news, we mine syntactic patterns that a Noisy-OR model generalizes into event descriptions. At inference time, we query the model with the patterns observed in an unseen news collection, identify the event that better captures the gist of the collection and retrieve the most appropriate pattern to generate a headline. HEADY improves over a state-of-the-art open-domain title abstraction method, bridging half of the gap that separates it from extractive methods using human-generated titles in manual evaluations, and performs comparably to human-generated headlines as evaluated with ROUGE.",The research leading to these results has received funding from: the EU's 7th Framework Programme (FP7/2007-2013) under grant agreement number 257790; the Spanish Ministry of Science and Innovation's project Holopedia (TIN2010-21128-C02); and the Regional Government of Madrid's MA2VICMR (S2009/TIC1542). We would like to thank Katja Filippova and the anonymous reviewers for their insightful comments.,"HEADY: News headline abstraction through event pattern clustering. This paper presents HEADY: a novel, abstractive approach for headline generation from news collections. From a web-scale corpus of English news, we mine syntactic patterns that a Noisy-OR model generalizes into event descriptions. At inference time, we query the model with the patterns observed in an unseen news collection, identify the event that better captures the gist of the collection and retrieve the most appropriate pattern to generate a headline. 
HEADY improves over a state-of-the-art open-domain title abstraction method, bridging half of the gap that separates it from extractive methods using human-generated titles in manual evaluations, and performs comparably to human-generated headlines as evaluated with ROUGE.",2013
gupta-etal-2014-text,https://aclanthology.org/S14-1010,0,,,,,,,"Text Summarization through Entailment-based Minimum Vertex Cover. Sentence Connectivity is a textual characteristic that may be incorporated intelligently for the selection of sentences of a well meaning summary. However, the existing summarization methods do not utilize its potential fully. The present paper introduces a novel method for single-document text summarization. It poses the text summarization task as an optimization problem, and attempts to solve it using Weighted Minimum Vertex Cover (WMVC), a graph-based algorithm. Textual entailment, an established indicator of semantic relationships between text units, is used to measure sentence connectivity and construct the graph on which WMVC operates. Experiments on a standard summarization dataset show that the suggested algorithm outperforms related methods.",Text Summarization through Entailment-based Minimum Vertex Cover,"Sentence Connectivity is a textual characteristic that may be incorporated intelligently for the selection of sentences of a well meaning summary. However, the existing summarization methods do not utilize its potential fully. The present paper introduces a novel method for single-document text summarization. It poses the text summarization task as an optimization problem, and attempts to solve it using Weighted Minimum Vertex Cover (WMVC), a graph-based algorithm. Textual entailment, an established indicator of semantic relationships between text units, is used to measure sentence connectivity and construct the graph on which WMVC operates. Experiments on a standard summarization dataset show that the suggested algorithm outperforms related methods.",Text Summarization through Entailment-based Minimum Vertex Cover,"Sentence Connectivity is a textual characteristic that may be incorporated intelligently for the selection of sentences of a well meaning summary. However, the existing summarization methods do not utilize its potential fully. The present paper introduces a novel method for single-document text summarization. It poses the text summarization task as an optimization problem, and attempts to solve it using Weighted Minimum Vertex Cover (WMVC), a graph-based algorithm. Textual entailment, an established indicator of semantic relationships between text units, is used to measure sentence connectivity and construct the graph on which WMVC operates. Experiments on a standard summarization dataset show that the suggested algorithm outperforms related methods.",,"Text Summarization through Entailment-based Minimum Vertex Cover. Sentence Connectivity is a textual characteristic that may be incorporated intelligently for the selection of sentences of a well meaning summary. However, the existing summarization methods do not utilize its potential fully. The present paper introduces a novel method for single-document text summarization. It poses the text summarization task as an optimization problem, and attempts to solve it using Weighted Minimum Vertex Cover (WMVC), a graph-based algorithm. Textual entailment, an established indicator of semantic relationships between text units, is used to measure sentence connectivity and construct the graph on which WMVC operates. Experiments on a standard summarization dataset show that the suggested algorithm outperforms related methods.",2014
wilson-wiebe-2003-annotating,https://aclanthology.org/W03-2102,1,,,,peace_justice_and_strong_institutions,,,"Annotating Opinions in the World Press. In this paper we present a detailed scheme for annotating expressions of opinions, beliefs, emotions, sentiment and speculation (private states) in the news and other discourse. We explore inter-annotator agreement for individual private state expressions, and show that these low-level annotations are useful for producing higher-level subjective sentence annotations.",Annotating Opinions in the World Press,"In this paper we present a detailed scheme for annotating expressions of opinions, beliefs, emotions, sentiment and speculation (private states) in the news and other discourse. We explore inter-annotator agreement for individual private state expressions, and show that these low-level annotations are useful for producing higher-level subjective sentence annotations.",Annotating Opinions in the World Press,"In this paper we present a detailed scheme for annotating expressions of opinions, beliefs, emotions, sentiment and speculation (private states) in the news and other discourse. We explore inter-annotator agreement for individual private state expressions, and show that these low-level annotations are useful for producing higher-level subjective sentence annotations.",,"Annotating Opinions in the World Press. In this paper we present a detailed scheme for annotating expressions of opinions, beliefs, emotions, sentiment and speculation (private states) in the news and other discourse. We explore inter-annotator agreement for individual private state expressions, and show that these low-level annotations are useful for producing higher-level subjective sentence annotations.",2003
biju-etal-2022-input,https://aclanthology.org/2022.findings-acl.4,0,,,,,,,"Input-specific Attention Subnetworks for Adversarial Detection. Self-attention heads are characteristic of Transformer models and have been well studied for interpretability and pruning. In this work, we demonstrate an altogether different utility of attention heads, namely for adversarial detection. Specifically, we propose a method to construct input-specific attention subnetworks (IAS) from which we extract three features to discriminate between authentic and adversarial inputs. The resultant detector significantly improves (by over 7.5%) the state-of-the-art adversarial detection accuracy for the BERT encoder on 10 NLU datasets with 11 different adversarial attack types. We also demonstrate that our method (a) is more accurate for larger models which are likely to have more spurious correlations and thus vulnerable to adversarial attack, and (b) performs well even with modest training sets of adversarial examples.",Input-specific Attention Subnetworks for Adversarial Detection,"Self-attention heads are characteristic of Transformer models and have been well studied for interpretability and pruning. In this work, we demonstrate an altogether different utility of attention heads, namely for adversarial detection. Specifically, we propose a method to construct input-specific attention subnetworks (IAS) from which we extract three features to discriminate between authentic and adversarial inputs. The resultant detector significantly improves (by over 7.5%) the state-of-the-art adversarial detection accuracy for the BERT encoder on 10 NLU datasets with 11 different adversarial attack types. We also demonstrate that our method (a) is more accurate for larger models which are likely to have more spurious correlations and thus vulnerable to adversarial attack, and (b) performs well even with modest training sets of adversarial examples.",Input-specific Attention Subnetworks for Adversarial Detection,"Self-attention heads are characteristic of Transformer models and have been well studied for interpretability and pruning. In this work, we demonstrate an altogether different utility of attention heads, namely for adversarial detection. Specifically, we propose a method to construct input-specific attention subnetworks (IAS) from which we extract three features to discriminate between authentic and adversarial inputs. The resultant detector significantly improves (by over 7.5%) the state-of-the-art adversarial detection accuracy for the BERT encoder on 10 NLU datasets with 11 different adversarial attack types. We also demonstrate that our method (a) is more accurate for larger models which are likely to have more spurious correlations and thus vulnerable to adversarial attack, and (b) performs well even with modest training sets of adversarial examples.",We thank Samsung and IITM Pravartak for supporting our work through their joint fellowship program. We also wish to thank the anonymous reviewers for their efforts in evaluating our work and providing us with constructive feedback.,"Input-specific Attention Subnetworks for Adversarial Detection. Self-attention heads are characteristic of Transformer models and have been well studied for interpretability and pruning. In this work, we demonstrate an altogether different utility of attention heads, namely for adversarial detection. Specifically, we propose a method to construct input-specific attention subnetworks (IAS) from which we extract three features to discriminate between authentic and adversarial inputs. The resultant detector significantly improves (by over 7.5%) the state-of-the-art adversarial detection accuracy for the BERT encoder on 10 NLU datasets with 11 different adversarial attack types. We also demonstrate that our method (a) is more accurate for larger models which are likely to have more spurious correlations and thus vulnerable to adversarial attack, and (b) performs well even with modest training sets of adversarial examples.",2022
gonzalez-etal-2012-graphical,https://aclanthology.org/P12-3024,0,,,,,,,"A Graphical Interface for MT Evaluation and Error Analysis. Error analysis in machine translation is a necessary step in order to investigate the strengths and weaknesses of the MT systems under development and allow fair comparisons among them. This work presents an application that shows how a set of heterogeneous automatic metrics can be used to evaluate a test bed of automatic translations. To do so, we have set up an online graphical interface for the ASIYA toolkit, a rich repository of evaluation measures working at different linguistic levels. The current implementation of the interface shows constituency and dependency trees as well as shallow syntactic and semantic annotations, and word alignments. The intelligent visualization of the linguistic structures used by the metrics, as well as a set of navigational functionalities, may lead towards advanced methods for automatic error analysis.",A Graphical Interface for {MT} Evaluation and Error Analysis,"Error analysis in machine translation is a necessary step in order to investigate the strengths and weaknesses of the MT systems under development and allow fair comparisons among them. This work presents an application that shows how a set of heterogeneous automatic metrics can be used to evaluate a test bed of automatic translations. To do so, we have set up an online graphical interface for the ASIYA toolkit, a rich repository of evaluation measures working at different linguistic levels. The current implementation of the interface shows constituency and dependency trees as well as shallow syntactic and semantic annotations, and word alignments. The intelligent visualization of the linguistic structures used by the metrics, as well as a set of navigational functionalities, may lead towards advanced methods for automatic error analysis.",A Graphical Interface for MT Evaluation and Error Analysis,"Error analysis in machine translation is a necessary step in order to investigate the strengths and weaknesses of the MT systems under development and allow fair comparisons among them. This work presents an application that shows how a set of heterogeneous automatic metrics can be used to evaluate a test bed of automatic translations. To do so, we have set up an online graphical interface for the ASIYA toolkit, a rich repository of evaluation measures working at different linguistic levels. The current implementation of the interface shows constituency and dependency trees as well as shallow syntactic and semantic annotations, and word alignments. The intelligent visualization of the linguistic structures used by the metrics, as well as a set of navigational functionalities, may lead towards advanced methods for automatic error analysis.","This research has been partially funded by the Spanish Ministry of Education and Science (OpenMT-2, TIN2009-14675-C03) and the European Community's Seventh Framework Programme under grant agreement numbers 247762 (FAUST project, FP7- ICT-2009-4-247762) and 247914 (MOLTO project, FP7- ICT-2009-4-247914).","A Graphical Interface for MT Evaluation and Error Analysis. Error analysis in machine translation is a necessary step in order to investigate the strengths and weaknesses of the MT systems under development and allow fair comparisons among them. This work presents an application that shows how a set of heterogeneous automatic metrics can be used to evaluate a test bed of automatic translations. 
To do so, we have set up an online graphical interface for the ASIYA toolkit, a rich repository of evaluation measures working at different linguistic levels. The current implementation of the interface shows constituency and dependency trees as well as shallow syntactic and semantic annotations, and word alignments. The intelligent visualization of the linguistic structures used by the metrics, as well as a set of navigational functionalities, may lead towards advanced methods for automatic error analysis.",2012
nishiguchi-2010-ccg,https://aclanthology.org/Y10-1057,0,,,,,,,"CCG of Japanese Sentence-final Particles. The aim of this paper is to provide formalization of Japanese sentence-final particles in the framework of Combinatory Categorial Grammar (CCG) (Steedman 1996, 2000, Szabolcsi 1987). While certain amount of literature has discussed the descriptive meaning of Japanese sentence-final particles (Takubo and Kinsui 1997, Chino 2001), little formal account has been provided except for McCready (2007)'s analysis from the viewpoint of dynamic semantics and relevance theory. I analyze particles such as yo and ne as verum focus operators (Höhle 1992, Romero and Han 2004).",{CCG} of {J}apanese Sentence-final Particles,"The aim of this paper is to provide formalization of Japanese sentence-final particles in the framework of Combinatory Categorial Grammar (CCG) (Steedman 1996, 2000, Szabolcsi 1987). While certain amount of literature has discussed the descriptive meaning of Japanese sentence-final particles (Takubo and Kinsui 1997, Chino 2001), little formal account has been provided except for McCready (2007)'s analysis from the viewpoint of dynamic semantics and relevance theory. I analyze particles such as yo and ne as verum focus operators (Höhle 1992, Romero and Han 2004).",CCG of Japanese Sentence-final Particles,"The aim of this paper is to provide formalization of Japanese sentence-final particles in the framework of Combinatory Categorial Grammar (CCG) (Steedman 1996, 2000, Szabolcsi 1987). While certain amount of literature has discussed the descriptive meaning of Japanese sentence-final particles (Takubo and Kinsui 1997, Chino 2001), little formal account has been provided except for McCready (2007)'s analysis from the viewpoint of dynamic semantics and relevance theory. I analyze particles such as yo and ne as verum focus operators (Höhle 1992, Romero and Han 2004).",,"CCG of Japanese Sentence-final Particles. The aim of this paper is to provide formalization of Japanese sentence-final particles in the framework of Combinatory Categorial Grammar (CCG) (Steedman 1996, 2000, Szabolcsi 1987). While certain amount of literature has discussed the descriptive meaning of Japanese sentence-final particles (Takubo and Kinsui 1997, Chino 2001), little formal account has been provided except for McCready (2007)'s analysis from the viewpoint of dynamic semantics and relevance theory. I analyze particles such as yo and ne as verum focus operators (Höhle 1992, Romero and Han 2004).",2010
yangarber-etal-2002-unsupervised,https://aclanthology.org/C02-1154,0,,,,,,,Unsupervised Learning of Generalized Names. ,Unsupervised Learning of Generalized Names,,Unsupervised Learning of Generalized Names,,,Unsupervised Learning of Generalized Names. ,2002
ravichandran-hovy-2002-learning,https://aclanthology.org/P02-1006,0,,,,,,,"Learning surface text patterns for a Question Answering System. In this paper we explore the power of surface text patterns for open-domain question answering systems. In order to obtain an optimal set of patterns, we have developed a method for learning such patterns automatically. A tagged corpus is built from the Internet in a bootstrapping process by providing a few hand-crafted examples of each question type to Altavista. Patterns are then automatically extracted from the returned documents and standardized. We calculate the precision of each pattern, and the average precision for each question type. These patterns are then applied to find answers to new questions. Using the TREC-10 question set, we report results for two cases: answers determined from the TREC-10 corpus and from the web.",Learning surface text patterns for a Question Answering System,"In this paper we explore the power of surface text patterns for open-domain question answering systems. In order to obtain an optimal set of patterns, we have developed a method for learning such patterns automatically. A tagged corpus is built from the Internet in a bootstrapping process by providing a few hand-crafted examples of each question type to Altavista. Patterns are then automatically extracted from the returned documents and standardized. We calculate the precision of each pattern, and the average precision for each question type. These patterns are then applied to find answers to new questions. Using the TREC-10 question set, we report results for two cases: answers determined from the TREC-10 corpus and from the web.",Learning surface text patterns for a Question Answering System,"In this paper we explore the power of surface text patterns for open-domain question answering systems. In order to obtain an optimal set of patterns, we have developed a method for learning such patterns automatically. A tagged corpus is built from the Internet in a bootstrapping process by providing a few hand-crafted examples of each question type to Altavista. Patterns are then automatically extracted from the returned documents and standardized. We calculate the precision of each pattern, and the average precision for each question type. These patterns are then applied to find answers to new questions. Using the TREC-10 question set, we report results for two cases: answers determined from the TREC-10 corpus and from the web.",This work was supported by the Advanced Research and Development Activity (ARDA)'s Advanced Question Answering for Intelligence (AQUAINT) Program under contract number MDA908-02-C-0007.,"Learning surface text patterns for a Question Answering System. In this paper we explore the power of surface text patterns for open-domain question answering systems. In order to obtain an optimal set of patterns, we have developed a method for learning such patterns automatically. A tagged corpus is built from the Internet in a bootstrapping process by providing a few hand-crafted examples of each question type to Altavista. Patterns are then automatically extracted from the returned documents and standardized. We calculate the precision of each pattern, and the average precision for each question type. These patterns are then applied to find answers to new questions. Using the TREC-10 question set, we report results for two cases: answers determined from the TREC-10 corpus and from the web.",2002
hahn-powell-etal-2017-swanson,https://aclanthology.org/P17-4018,1,,,,industry_innovation_infrastructure,,,"Swanson linking revisited: Accelerating literature-based discovery across domains using a conceptual influence graph. We introduce a modular approach for literature-based discovery consisting of a machine reading and knowledge assembly component that together produce a graph of influence relations (e.g., ""A promotes B"") from a collection of publications. A search engine is used to explore direct and indirect influence chains. Query results are substantiated with textual evidence, ranked according to their relevance, and presented in both a table-based view, as well as a network graph visualization. Our approach operates in both domain-specific settings, where there are knowledge bases and ontologies available to guide reading, and in multi-domain settings where such resources are absent. We demonstrate that this deep reading and search system reduces the effort needed to uncover ""undiscovered public knowledge"", and that with the aid of this tool a domain expert was able to drastically reduce her model building time from months to two days.",Swanson linking revisited: Accelerating literature-based discovery across domains using a conceptual influence graph,"We introduce a modular approach for literature-based discovery consisting of a machine reading and knowledge assembly component that together produce a graph of influence relations (e.g., ""A promotes B"") from a collection of publications. A search engine is used to explore direct and indirect influence chains. Query results are substantiated with textual evidence, ranked according to their relevance, and presented in both a table-based view, as well as a network graph visualization. Our approach operates in both domain-specific settings, where there are knowledge bases and ontologies available to guide reading, and in multi-domain settings where such resources are absent. We demonstrate that this deep reading and search system reduces the effort needed to uncover ""undiscovered public knowledge"", and that with the aid of this tool a domain expert was able to drastically reduce her model building time from months to two days.",Swanson linking revisited: Accelerating literature-based discovery across domains using a conceptual influence graph,"We introduce a modular approach for literature-based discovery consisting of a machine reading and knowledge assembly component that together produce a graph of influence relations (e.g., ""A promotes B"") from a collection of publications. A search engine is used to explore direct and indirect influence chains. Query results are substantiated with textual evidence, ranked according to their relevance, and presented in both a table-based view, as well as a network graph visualization. Our approach operates in both domain-specific settings, where there are knowledge bases and ontologies available to guide reading, and in multi-domain settings where such resources are absent. We demonstrate that this deep reading and search system reduces the effort needed to uncover ""undiscovered public knowledge"", and that with the aid of this tool a domain expert was able to drastically reduce her model building time from months to two days.","This work was funded by the DARPA Big Mechanism program under ARO contract W911NF-14-1-0395 and by the Bill and Melinda Gates Foundation HBGDki Initiative. 
The authors declare a financial interest in lum.ai, which licenses the intellectual property involved in this research. This interest has been properly disclosed to the University of Arizona Institutional Review Committee and is managed in accordance with its conflict of interest policies.","Swanson linking revisited: Accelerating literature-based discovery across domains using a conceptual influence graph. We introduce a modular approach for literature-based discovery consisting of a machine reading and knowledge assembly component that together produce a graph of influence relations (e.g., ""A promotes B"") from a collection of publications. A search engine is used to explore direct and indirect influence chains. Query results are substantiated with textual evidence, ranked according to their relevance, and presented in both a table-based view, as well as a network graph visualization. Our approach operates in both domain-specific settings, where there are knowledge bases and ontologies available to guide reading, and in multi-domain settings where such resources are absent. We demonstrate that this deep reading and search system reduces the effort needed to uncover ""undiscovered public knowledge"", and that with the aid of this tool a domain expert was able to drastically reduce her model building time from months to two days.",2017
berwick-1980-computational,https://aclanthology.org/P80-1014,0,,,,,,,"Computational Analogues of Constraints on Grammars: A Model of Syntactic Acquisition. A principal goal of modern linguistics is to account for the apparently rapid and uniform acquisition of syntactic knowledge, given the relatively impoverished input that evidently serves as the basis for the induction of that knowledge - the so-called projection problem. At least since Chomsky, the usual response to the projection problem has been to characterize knowledge of language as a grammar, and then proceed by restricting so severely the class of grammars available for acquisition that the induction task is greatly simplified - perhaps trivialized. consistent with our knowledge of what language is and of which stages the child passes through in learning it."" [2, page 218] In particular, although the final psycholinguistic evidence is not yet in, children do not appear to receive negative evidence as a basis for the induction of syntactic rules. That is, they do not receive direct reinforcement for what is not a syntactically well-formed sentence (see Brown and Hanlon [3] and Newport, Gleitman, and Gleitman [4] for discussion). If syntactic acquisition can proceed using just positive examples, then it would seem completely unnecessary to move to any enrichment of the input data that is as yet unsupported by psycholinguistic evidence.",Computational Analogues of Constraints on Grammars: A Model of Syntactic Acquisition,"A principal goal of modern linguistics is to account for the apparently rapid and uniform acquisition of syntactic knowledge, given the relatively impoverished input that evidently serves as the basis for the induction of that knowledge - the so-called projection problem. At least since Chomsky, the usual response to the projection problem has been to characterize knowledge of language as a grammar, and then proceed by restricting so severely the class of grammars available for acquisition that the induction task is greatly simplified - perhaps trivialized. consistent with our knowledge of what language is and of which stages the child passes through in learning it."" [2, page 218] In particular, although the final psycholinguistic evidence is not yet in, children do not appear to receive negative evidence as a basis for the induction of syntactic rules. That is, they do not receive direct reinforcement for what is not a syntactically well-formed sentence (see Brown and Hanlon [3] and Newport, Gleitman, and Gleitman [4] for discussion). If syntactic acquisition can proceed using just positive examples, then it would seem completely unnecessary to move to any enrichment of the input data that is as yet unsupported by psycholinguistic evidence.",Computational Analogues of Constraints on Grammars: A Model of Syntactic Acquisition,"A principal goal of modern linguistics is to account for the apparently rapid and uniform acquisition of syntactic knowledge, given the relatively impoverished input that evidently serves as the basis for the induction of that knowledge - the so-called projection problem. At least since Chomsky, the usual response to the projection problem has been to characterize knowledge of language as a grammar, and then proceed by restricting so severely the class of grammars available for acquisition that the induction task is greatly simplified - perhaps trivialized. 
consistent with our knowledge of what language is and of which stages the child passes through in learning it."" [2, page 218] In particular, although the final psycholinguistic evidence is not yet in, children do not appear to receive negative evidence as a basis for the induction of syntactic rules. That is, they do not receive direct reinforcement for what is not a syntactically well-formed sentence (see Brown and Hanlon [3] and Newport, Gleitman, and Gleitman [4] for discussion). If syntactic acquisition can proceed using just positive examples, then it would seem completely unnecessary to move to any enrichment of the input data that is as yet unsupported by psycholinguistic evidence.",,"Computational Analogues of Constraints on Grammars: A Model of Syntactic Acquisition. A principal goal of modern linguistics is to account for the apparently rapid and uniform acquisition of syntactic knowledge, given the relatively impoverished input that evidently serves as the basis for the induction of that knowledge - the so-called projection problem. At least since Chomsky, the usual response to the projection problem has been to characterize knowledge of language as a grammar, and then proceed by restricting so severely the class of grammars available for acquisition that the induction task is greatly simplified - perhaps trivialized. consistent with our knowledge of what language is and of which stages the child passes through in learning it."" [2, page 218] In particular, although the final psycholinguistic evidence is not yet in, children do not appear to receive negative evidence as a basis for the induction of syntactic rules. That is, they do not receive direct reinforcement for what is not a syntactically well-formed sentence (see Brown and Hanlon [3] and Newport, Gleitman, and Gleitman [4] for discussion). If syntactic acquisition can proceed using just positive examples, then it would seem completely unnecessary to move to any enrichment of the input data that is as yet unsupported by psycholinguistic evidence.",1980
conlon-evens-1992-computers,https://aclanthology.org/C92-4190,0,,,,,,,"Can Computers Handle Adverbs?. The adverb is the most complicated, and perhaps also the most interesting part of speech. Past research in natural language processing, however, has not dealt seriously with adverbs, though linguists have done significant work on this word class. The current paper draws on this linguistic research to organize an adverbial lexicon which will be useful for information retrieval and natural language processing systems.",Can Computers Handle Adverbs?,"The adverb is the most complicated, and perhaps also the most interesting part of speech. Past research in natural language processing, however, has not dealt seriously with adverbs, though linguists have done significant work on this word class. The current paper draws on this linguistic research to organize an adverbial lexicon which will be useful for information retrieval and natural language processing systems.",Can Computers Handle Adverbs?,"The adverb is the most complicated, and perhaps also the most interesting part of speech. Past research in natural language processing, however, has not dealt seriously with adverbs, though linguists have done significant work on this word class. The current paper draws on this linguistic research to organize an adverbial lexicon which will be useful for information retrieval and natural language processing systems.",,"Can Computers Handle Adverbs?. The adverb is the most complicated, and perhaps also the most interesting part of speech. Past research in natural language processing, however, has not dealt seriously with adverbs, though linguists have done significant work on this word class. The current paper draws on this linguistic research to organize an adverbial lexicon which will be useful for information retrieval and natural language processing systems.",1992
czulo-etal-2020-beyond,https://aclanthology.org/2020.framenet-1.1,0,,,,,,,"Beyond lexical semantics: notes on pragmatic frames. FrameNets as an incarnation of frame semantics have been set up to deal with lexicographic issues (cf. Fillmore and Baker 2010, among others). They are thus concerned with lexical units (LUs) and conceptual structures which categorize these together. These lexically-evoked frames, however, generally do not reflect pragmatic properties of constructions (LUs and other types of non-lexical constructions), such as expressing illocutions or establishing relations between speaker and hearer. From the viewpoint of a multilingual annotation effort, the Global FrameNet Shared Annotation Task, we discuss two phenomena, greetings and tag questions, highlighting the necessity both to investigate the role between construction and frame annotation and to develop pragmatic frames (and constructions) related to different facets of social interaction and situation-bound usage restrictions that are not explicitly lexicalized.",Beyond lexical semantics: notes on pragmatic frames,"FrameNets as an incarnation of frame semantics have been set up to deal with lexicographic issues (cf. Fillmore and Baker 2010, among others). They are thus concerned with lexical units (LUs) and conceptual structures which categorize these together. These lexically-evoked frames, however, generally do not reflect pragmatic properties of constructions (LUs and other types of non-lexical constructions), such as expressing illocutions or establishing relations between speaker and hearer. From the viewpoint of a multilingual annotation effort, the Global FrameNet Shared Annotation Task, we discuss two phenomena, greetings and tag questions, highlighting the necessity both to investigate the role between construction and frame annotation and to develop pragmatic frames (and constructions) related to different facets of social interaction and situation-bound usage restrictions that are not explicitly lexicalized.",Beyond lexical semantics: notes on pragmatic frames,"FrameNets as an incarnation of frame semantics have been set up to deal with lexicographic issues (cf. Fillmore and Baker 2010, among others). They are thus concerned with lexical units (LUs) and conceptual structures which categorize these together. These lexically-evoked frames, however, generally do not reflect pragmatic properties of constructions (LUs and other types of non-lexical constructions), such as expressing illocutions or establishing relations between speaker and hearer. From the viewpoint of a multilingual annotation effort, the Global FrameNet Shared Annotation Task, we discuss two phenomena, greetings and tag questions, highlighting the necessity both to investigate the role between construction and frame annotation and to develop pragmatic frames (and constructions) related to different facets of social interaction and situation-bound usage restrictions that are not explicitly lexicalized.","Research presented in this paper is funded by CAPES/PROBRAL and DAAD PPP Programs, under the grant numbers 88887.144043/2017-00 and 57390800, respectively.","Beyond lexical semantics: notes on pragmatic frames. FrameNets as an incarnation of frame semantics have been set up to deal with lexicographic issues (cf. Fillmore and Baker 2010, among others). They are thus concerned with lexical units (LUs) and conceptual structures which categorize these together. 
These lexically-evoked frames, however, generally do not reflect pragmatic properties of constructions (LUs and other types of non-lexical constructions), such as expressing illocutions or establishing relations between speaker and hearer. From the viewpoint of a multilingual annotation effort, the Global FrameNet Shared Annotation Task, we discuss two phenomena, greetings and tag questions, highlighting the necessity both to investigate the role between construction and frame annotation and to develop pragmatic frames (and constructions) related to different facets of social interaction and situation-bound usage restrictions that are not explicitly lexicalized.",2020
lison-bibauw-2017-dialogues,https://aclanthology.org/W17-5546,0,,,,,,,"Not All Dialogues are Created Equal: Instance Weighting for Neural Conversational Models. Neural conversational models require substantial amounts of dialogue data to estimate their parameters and are therefore usually learned on large corpora such as chat forums, Twitter discussions or movie subtitles. These corpora are, however, often challenging to work with, notably due to their frequent lack of turn segmentation and the presence of multiple references external to the dialogue itself. This paper shows that these challenges can be mitigated by adding a weighting model into the neural architecture. The weighting model, which is itself estimated from dialogue data, associates each training example to a numerical weight that reflects its intrinsic quality for dialogue modelling. At training time, these sample weights are included into the empirical loss to be minimised. Evaluation results on retrieval-based models trained on movie and TV subtitles demonstrate that the inclusion of such a weighting model improves the model performance on unsupervised metrics.",Not All Dialogues are Created Equal: Instance Weighting for Neural Conversational Models,"Neural conversational models require substantial amounts of dialogue data to estimate their parameters and are therefore usually learned on large corpora such as chat forums, Twitter discussions or movie subtitles. These corpora are, however, often challenging to work with, notably due to their frequent lack of turn segmentation and the presence of multiple references external to the dialogue itself. This paper shows that these challenges can be mitigated by adding a weighting model into the neural architecture. The weighting model, which is itself estimated from dialogue data, associates each training example to a numerical weight that reflects its intrinsic quality for dialogue modelling. At training time, these sample weights are included into the empirical loss to be minimised. Evaluation results on retrieval-based models trained on movie and TV subtitles demonstrate that the inclusion of such a weighting model improves the model performance on unsupervised metrics.",Not All Dialogues are Created Equal: Instance Weighting for Neural Conversational Models,"Neural conversational models require substantial amounts of dialogue data to estimate their parameters and are therefore usually learned on large corpora such as chat forums, Twitter discussions or movie subtitles. These corpora are, however, often challenging to work with, notably due to their frequent lack of turn segmentation and the presence of multiple references external to the dialogue itself. This paper shows that these challenges can be mitigated by adding a weighting model into the neural architecture. The weighting model, which is itself estimated from dialogue data, associates each training example to a numerical weight that reflects its intrinsic quality for dialogue modelling. At training time, these sample weights are included into the empirical loss to be minimised. Evaluation results on retrieval-based models trained on movie and TV subtitles demonstrate that the inclusion of such a weighting model improves the model performance on unsupervised metrics.",,"Not All Dialogues are Created Equal: Instance Weighting for Neural Conversational Models. 
Neural conversational models require substantial amounts of dialogue data to estimate their parameters and are therefore usually learned on large corpora such as chat forums, Twitter discussions or movie subtitles. These corpora are, however, often challenging to work with, notably due to their frequent lack of turn segmentation and the presence of multiple references external to the dialogue itself. This paper shows that these challenges can be mitigated by adding a weighting model into the neural architecture. The weighting model, which is itself estimated from dialogue data, associates each training example to a numerical weight that reflects its intrinsic quality for dialogue modelling. At training time, these sample weights are included into the empirical loss to be minimised. Evaluation results on retrieval-based models trained on movie and TV subtitles demonstrate that the inclusion of such a weighting model improves the model performance on unsupervised metrics.",2017
wu-etal-2022-generating,https://aclanthology.org/2022.acl-long.190,0,,,,,,,"Generating Data to Mitigate Spurious Correlations in Natural Language Inference Datasets. Natural language processing models often exploit spurious correlations between task-independent features and labels in datasets to perform well only within the distributions they are trained on, while not generalising to different task distributions. We propose to tackle this problem by generating a debiased version of a dataset, which can then be used to train a debiased, off-the-shelf model, by simply replacing its training data. Our approach consists of 1) a method for training data generators to generate high-quality, label-consistent data samples; and 2) a filtering mechanism for removing data points that contribute to spurious correlations, measured in terms of z-statistics. We generate debiased versions of the SNLI and MNLI datasets, and we evaluate on a large suite of debiased, out-of-distribution, and adversarial test sets. Results show that models trained on our debiased datasets generalise better than those trained on the original datasets in all settings. On the majority of the datasets, our method outperforms or performs comparably to previous state-of-the-art debiasing strategies, and when combined with an orthogonal technique, product-of-experts, it improves further and outperforms previous best results of SNLI-hard and MNLI-hard.",Generating Data to Mitigate Spurious Correlations in Natural Language Inference Datasets,"Natural language processing models often exploit spurious correlations between task-independent features and labels in datasets to perform well only within the distributions they are trained on, while not generalising to different task distributions. We propose to tackle this problem by generating a debiased version of a dataset, which can then be used to train a debiased, off-the-shelf model, by simply replacing its training data. Our approach consists of 1) a method for training data generators to generate high-quality, label-consistent data samples; and 2) a filtering mechanism for removing data points that contribute to spurious correlations, measured in terms of z-statistics. We generate debiased versions of the SNLI and MNLI datasets, and we evaluate on a large suite of debiased, out-of-distribution, and adversarial test sets. Results show that models trained on our debiased datasets generalise better than those trained on the original datasets in all settings. On the majority of the datasets, our method outperforms or performs comparably to previous state-of-the-art debiasing strategies, and when combined with an orthogonal technique, product-of-experts, it improves further and outperforms previous best results of SNLI-hard and MNLI-hard.",Generating Data to Mitigate Spurious Correlations in Natural Language Inference Datasets,"Natural language processing models often exploit spurious correlations between task-independent features and labels in datasets to perform well only within the distributions they are trained on, while not generalising to different task distributions. We propose to tackle this problem by generating a debiased version of a dataset, which can then be used to train a debiased, off-the-shelf model, by simply replacing its training data. 
Our approach consists of 1) a method for training data generators to generate high-quality, label-consistent data samples; and 2) a filtering mechanism for removing data points that contribute to spurious correlations, measured in terms of z-statistics. We generate debiased versions of the SNLI and MNLI datasets, and we evaluate on a large suite of debiased, out-of-distribution, and adversarial test sets. Results show that models trained on our debiased datasets generalise better than those trained on the original datasets in all settings. On the majority of the datasets, our method outperforms or performs comparably to previous state-of-the-art debiasing strategies, and when combined with an orthogonal technique, product-of-experts, it improves further and outperforms previous best results of SNLI-hard and MNLI-hard.","The authors would like to thank Max Bartolo, Alexis Ross, Doug Downey, Jesse Dodge, Pasquale Minervini, and Sebastian Riedel for their helpful discussion and feedback.","Generating Data to Mitigate Spurious Correlations in Natural Language Inference Datasets. Natural language processing models often exploit spurious correlations between task-independent features and labels in datasets to perform well only within the distributions they are trained on, while not generalising to different task distributions. We propose to tackle this problem by generating a debiased version of a dataset, which can then be used to train a debiased, off-the-shelf model, by simply replacing its training data. Our approach consists of 1) a method for training data generators to generate high-quality, label-consistent data samples; and 2) a filtering mechanism for removing data points that contribute to spurious correlations, measured in terms of z-statistics. We generate debiased versions of the SNLI and MNLI datasets, and we evaluate on a large suite of debiased, out-of-distribution, and adversarial test sets. Results show that models trained on our debiased datasets generalise better than those trained on the original datasets in all settings. On the majority of the datasets, our method outperforms or performs comparably to previous state-of-the-art debiasing strategies, and when combined with an orthogonal technique, product-of-experts, it improves further and outperforms previous best results of SNLI-hard and MNLI-hard.",2022
yildiz-etal-2014-constructing,https://aclanthology.org/P14-2019,0,,,,,,,"Constructing a Turkish-English Parallel TreeBank. In this paper, we report our preliminary efforts in building an English-Turkish parallel treebank corpus for statistical machine translation. In the corpus, we manually generated parallel trees for about 5,000 sentences from Penn Treebank. English sentences in our set have a maximum of 15 tokens, including punctuation. We constrained the translated trees to the reordering of the children and the replacement of the leaf nodes with appropriate glosses. We also report the tools that we built and used in our tree translation task.",Constructing a {T}urkish-{E}nglish Parallel {T}ree{B}ank,"In this paper, we report our preliminary efforts in building an English-Turkish parallel treebank corpus for statistical machine translation. In the corpus, we manually generated parallel trees for about 5,000 sentences from Penn Treebank. English sentences in our set have a maximum of 15 tokens, including punctuation. We constrained the translated trees to the reordering of the children and the replacement of the leaf nodes with appropriate glosses. We also report the tools that we built and used in our tree translation task.",Constructing a Turkish-English Parallel TreeBank,"In this paper, we report our preliminary efforts in building an English-Turkish parallel treebank corpus for statistical machine translation. In the corpus, we manually generated parallel trees for about 5,000 sentences from Penn Treebank. English sentences in our set have a maximum of 15 tokens, including punctuation. We constrained the translated trees to the reordering of the children and the replacement of the leaf nodes with appropriate glosses. We also report the tools that we built and used in our tree translation task.",,"Constructing a Turkish-English Parallel TreeBank. In this paper, we report our preliminary efforts in building an English-Turkish parallel treebank corpus for statistical machine translation. In the corpus, we manually generated parallel trees for about 5,000 sentences from Penn Treebank. English sentences in our set have a maximum of 15 tokens, including punctuation. We constrained the translated trees to the reordering of the children and the replacement of the leaf nodes with appropriate glosses. We also report the tools that we built and used in our tree translation task.",2014
rosner-etal-2014-modeling,http://www.lrec-conf.org/proceedings/lrec2014/pdf/321_Paper.pdf,0,,,,,,,Modeling and evaluating dialog success in the LAST MINUTE corpus. The LAST MINUTE corpus comprises records and transcripts of naturalistic problem solving dialogs between N = 130 subjects and a companion system simulated in a Wizard of Oz experiment. Our goal is to detect dialog situations where subjects might break up the dialog with the system which might happen when the subject is unsuccessful. We present a dialog act based representation of the dialog courses in the problem solving phase of the experiment and propose and evaluate measures for dialog success or failure derived from this representation. This dialog act representation refines our previous coarse measure as it enables the correct classification of many dialog sequences that were ambiguous before. The dialog act representation is useful for the identification of different subject groups and the exploration of interesting dialog courses in the corpus. We find young females to be most successful in the challenging last part of the problem solving phase and young subjects to have the initiative in the dialog more often than the elderly.,Modeling and evaluating dialog success in the {LAST} {MINUTE} corpus,The LAST MINUTE corpus comprises records and transcripts of naturalistic problem solving dialogs between N = 130 subjects and a companion system simulated in a Wizard of Oz experiment. Our goal is to detect dialog situations where subjects might break up the dialog with the system which might happen when the subject is unsuccessful. We present a dialog act based representation of the dialog courses in the problem solving phase of the experiment and propose and evaluate measures for dialog success or failure derived from this representation. This dialog act representation refines our previous coarse measure as it enables the correct classification of many dialog sequences that were ambiguous before. The dialog act representation is useful for the identification of different subject groups and the exploration of interesting dialog courses in the corpus. We find young females to be most successful in the challenging last part of the problem solving phase and young subjects to have the initiative in the dialog more often than the elderly.,Modeling and evaluating dialog success in the LAST MINUTE corpus,The LAST MINUTE corpus comprises records and transcripts of naturalistic problem solving dialogs between N = 130 subjects and a companion system simulated in a Wizard of Oz experiment. Our goal is to detect dialog situations where subjects might break up the dialog with the system which might happen when the subject is unsuccessful. We present a dialog act based representation of the dialog courses in the problem solving phase of the experiment and propose and evaluate measures for dialog success or failure derived from this representation. This dialog act representation refines our previous coarse measure as it enables the correct classification of many dialog sequences that were ambiguous before. The dialog act representation is useful for the identification of different subject groups and the exploration of interesting dialog courses in the corpus. 
We find young females to be most successful in the challenging last part of the problem solving phase and young subjects to have the initiative in the dialog more often than the elderly.,"The presented study is performed in the framework of the Transregional Collaborative Research Centre SFB/TRR 62 ""A Companion-Technology for Cognitive Technical Systems"" funded by the German Research Foundation (DFG). The responsibility for the content of this paper lies with the authors.",Modeling and evaluating dialog success in the LAST MINUTE corpus. The LAST MINUTE corpus comprises records and transcripts of naturalistic problem solving dialogs between N = 130 subjects and a companion system simulated in a Wizard of Oz experiment. Our goal is to detect dialog situations where subjects might break up the dialog with the system which might happen when the subject is unsuccessful. We present a dialog act based representation of the dialog courses in the problem solving phase of the experiment and propose and evaluate measures for dialog success or failure derived from this representation. This dialog act representation refines our previous coarse measure as it enables the correct classification of many dialog sequences that were ambiguous before. The dialog act representation is useful for the identification of different subject groups and the exploration of interesting dialog courses in the corpus. We find young females to be most successful in the challenging last part of the problem solving phase and young subjects to have the initiative in the dialog more often than the elderly.,2014
sahlgren-etal-2021-basically,https://aclanthology.org/2021.nodalida-main.39,0,,,,,,,"It's Basically the Same Language Anyway: the Case for a Nordic Language Model. When is it beneficial for a research community to organize a broader collaborative effort on a topic, and when should we instead promote individual efforts? In this opinion piece, we argue that we are at a stage in the development of large-scale language models where a collaborative effort is desirable, despite the fact that the preconditions for making individual contributions have never been better. We consider a number of arguments for collaboratively developing a large-scale Nordic language model, including environmental considerations, cost, data availability, language typology, cultural similarity, and transparency. Our primary goal is to raise awareness and foster a discussion about our potential impact and responsibility as an NLP community.",It{'}s Basically the Same Language Anyway: the Case for a Nordic Language Model,"When is it beneficial for a research community to organize a broader collaborative effort on a topic, and when should we instead promote individual efforts? In this opinion piece, we argue that we are at a stage in the development of large-scale language models where a collaborative effort is desirable, despite the fact that the preconditions for making individual contributions have never been better. We consider a number of arguments for collaboratively developing a large-scale Nordic language model, including environmental considerations, cost, data availability, language typology, cultural similarity, and transparency. Our primary goal is to raise awareness and foster a discussion about our potential impact and responsibility as an NLP community.",It's Basically the Same Language Anyway: the Case for a Nordic Language Model,"When is it beneficial for a research community to organize a broader collaborative effort on a topic, and when should we instead promote individual efforts? In this opinion piece, we argue that we are at a stage in the development of large-scale language models where a collaborative effort is desirable, despite the fact that the preconditions for making individual contributions have never been better. We consider a number of arguments for collaboratively developing a large-scale Nordic language model, including environmental considerations, cost, data availability, language typology, cultural similarity, and transparency. Our primary goal is to raise awareness and foster a discussion about our potential impact and responsibility as an NLP community.",,"It's Basically the Same Language Anyway: the Case for a Nordic Language Model. When is it beneficial for a research community to organize a broader collaborative effort on a topic, and when should we instead promote individual efforts? In this opinion piece, we argue that we are at a stage in the development of large-scale language models where a collaborative effort is desirable, despite the fact that the preconditions for making individual contributions have never been better. We consider a number of arguments for collaboratively developing a large-scale Nordic language model, including environmental considerations, cost, data availability, language typology, cultural similarity, and transparency. Our primary goal is to raise awareness and foster a discussion about our potential impact and responsibility as an NLP community.",2021
lee-bryant-2002-contextual,https://aclanthology.org/C02-1124,1,,,,industry_innovation_infrastructure,,,Contextual Natural Language Processing and DAML for Understanding Software Requirements Specifications. ,Contextual Natural Language Processing and {DAML} for Understanding Software Requirements Specifications,,Contextual Natural Language Processing and DAML for Understanding Software Requirements Specifications,,,Contextual Natural Language Processing and DAML for Understanding Software Requirements Specifications. ,2002
yoshikawa-etal-2012-identifying,https://aclanthology.org/C12-2134,0,,,,,,,Identifying Temporal Relations by Sentence and Document Optimizations. This paper presents a temporal relation identification method optimizing relations at sentence and document levels. Temporal relation identification is to identify temporal orders between events and time expressions. Various approaches of this task have been studied through the shared tasks TempEval (Verhagen et al.,Identifying Temporal Relations by Sentence and Document Optimizations,This paper presents a temporal relation identification method optimizing relations at sentence and document levels. Temporal relation identification is to identify temporal orders between events and time expressions. Various approaches of this task have been studied through the shared tasks TempEval (Verhagen et al.,Identifying Temporal Relations by Sentence and Document Optimizations,This paper presents a temporal relation identification method optimizing relations at sentence and document levels. Temporal relation identification is to identify temporal orders between events and time expressions. Various approaches of this task have been studied through the shared tasks TempEval (Verhagen et al.,,Identifying Temporal Relations by Sentence and Document Optimizations. This paper presents a temporal relation identification method optimizing relations at sentence and document levels. Temporal relation identification is to identify temporal orders between events and time expressions. Various approaches of this task have been studied through the shared tasks TempEval (Verhagen et al.,2012
yamron-etal-1994-automatic-component,https://aclanthology.org/H94-1096,0,,,,,,,"The Automatic Component of the LINGSTAT Machine-Aided Translation System. LINGSTAT is an interactive machine-aided translation system designed to increase the productivity of a translator. It is aimed both at experienced users whose goal is high quality translation, and inexperienced users with little knowledge of the source whose goal is simply to extract information from foreign language text. The system makes use of statistical information gathered from parallel and single-language corpora, but also draws from linguistic sources of knowledge. The first problem to be studied is Japanese to English translation, and work is progressing on a Spanish to English system.
In the newest version of LINGSTAT, the user is provided with a draft translation of the source document, which may be used for reference or modified. The translation process in LINGSTAT consists of the following steps: 1) tokenization and morphological analysis; 2) parsing; 3) rearrangement of the source into English order; 4) annotation and selection of glosses.",The Automatic Component of the {LINGSTAT} Machine-Aided Translation System,"LINGSTAT is an interactive machine-aided translation system designed to increase the productivity of a translator. It is aimed both at experienced users whose goal is high quality translation, and inexperienced users with little knowledge of the source whose goal is simply to extract information from foreign language text. The system makes use of statistical information gathered from parallel and single-language corpora, but also draws from linguistic sources of knowledge. The first problem to be studied is Japanese to English translation, and work is progressing on a Spanish to English system.
In the newest version of LINGSTAT, the user is provided with a draft translation of the source document, which may be used for reference or modified. The translation process in LINGSTAT consists of the following steps: 1) tokenization and morphological analysis; 2) parsing; 3) rearrangement of the source into English order; 4) annotation and selection of glosses.",The Automatic Component of the LINGSTAT Machine-Aided Translation System,"LINGSTAT is an interactive machine-aided translation system designed to increase the productivity of a translator. It is aimed both at experienced users whose goal is high quality translation, and inexperienced users with little knowledge of the source whose goal is simply to extract information from foreign language text. The system makes use of statistical information gathered from parallel and single-language corpora, but also draws from linguistic sources of knowledge. The first problem to be studied is Japanese to English translation, and work is progressing on a Spanish to English system.
In the newest version of LINGSTAT, the user is provided with a draft translation of the source document, which may be used for reference or modified. The translation process in LINGSTAT consists of the following steps: 1) tokenization and morphological analysis; 2) parsing; 3) rearrangement of the source into English order; 4) annotation and selection of glosses.",,"The Automatic Component of the LINGSTAT Machine-Aided Translation System. LINGSTAT is an interactive machine-aided translation system designed to increase the productivity of a translator. It is aimed both at experienced users whose goal is high quality translation, and inexperienced users with little knowledge of the source whose goal is simply to extract information from foreign language text. The system makes use of statistical information gathered from parallel and single-language corpora, but also draws from linguistic sources of knowledge. The first problem to be studied is Japanese to English translation, and work is progressing on a Spanish to English system.
In the newest version of LINGSTAT, the user is provided with a draft translation of the source document, which may be used for reference or modified. The translation process in LINGSTAT consists of the following steps: 1) tokenization and morphological analysis; 2) parsing; 3) rearrangement of the source into English order; 4) annotation and selection of glosses.",1994
nimishakavi-etal-2016-relation,https://aclanthology.org/D16-1040,0,,,,,,,"Relation Schema Induction using Tensor Factorization with Side Information. Given a set of documents from a specific domain (e.g., medical research journals), how do we automatically build a Knowledge Graph (KG) for that domain? Automatic identification of relations and their schemas, i.e., type signature of arguments of relations (e.g., undergo(Patient, Surgery)), is an important first step towards this goal. We refer to this problem as Relation Schema Induction (RSI). In this paper, we propose Schema Induction using Coupled Tensor Factorization (SICTF), a novel tensor factorization method for relation schema induction. SICTF factorizes Open Information Extraction (OpenIE) triples extracted from a domain corpus along with additional side information in a principled way to induce relation schemas. To the best of our knowledge, this is the first application of tensor factorization for the RSI problem. Through extensive experiments on multiple real-world datasets, we find that SICTF is not only more accurate than state-of-the-art baselines, but also significantly faster (about 14x faster).",Relation Schema Induction using Tensor Factorization with Side Information,"Given a set of documents from a specific domain (e.g., medical research journals), how do we automatically build a Knowledge Graph (KG) for that domain? Automatic identification of relations and their schemas, i.e., type signature of arguments of relations (e.g., undergo(Patient, Surgery)), is an important first step towards this goal. We refer to this problem as Relation Schema Induction (RSI). In this paper, we propose Schema Induction using Coupled Tensor Factorization (SICTF), a novel tensor factorization method for relation schema induction. SICTF factorizes Open Information Extraction (OpenIE) triples extracted from a domain corpus along with additional side information in a principled way to induce relation schemas. To the best of our knowledge, this is the first application of tensor factorization for the RSI problem. Through extensive experiments on multiple real-world datasets, we find that SICTF is not only more accurate than state-of-the-art baselines, but also significantly faster (about 14x faster).",Relation Schema Induction using Tensor Factorization with Side Information,"Given a set of documents from a specific domain (e.g., medical research journals), how do we automatically build a Knowledge Graph (KG) for that domain? Automatic identification of relations and their schemas, i.e., type signature of arguments of relations (e.g., undergo(Patient, Surgery)), is an important first step towards this goal. We refer to this problem as Relation Schema Induction (RSI). In this paper, we propose Schema Induction using Coupled Tensor Factorization (SICTF), a novel tensor factorization method for relation schema induction. SICTF factorizes Open Information Extraction (OpenIE) triples extracted from a domain corpus along with additional side information in a principled way to induce relation schemas. To the best of our knowledge, this is the first application of tensor factorization for the RSI problem. Through extensive experiments on multiple real-world datasets, we find that SICTF is not only more accurate than state-of-the-art baselines, but also significantly faster (about 14x faster).","Thanks to the members of MALL Lab, IISc who read our drafts and gave valuable feedback and we also thank the reviewers for their constructive reviews. 
This research has been supported in part by Bosch Engineering and Business Solutions and Google.","Relation Schema Induction using Tensor Factorization with Side Information. Given a set of documents from a specific domain (e.g., medical research journals), how do we automatically build a Knowledge Graph (KG) for that domain? Automatic identification of relations and their schemas, i.e., type signature of arguments of relations (e.g., undergo(Patient, Surgery)), is an important first step towards this goal. We refer to this problem as Relation Schema Induction (RSI). In this paper, we propose Schema Induction using Coupled Tensor Factorization (SICTF), a novel tensor factorization method for relation schema induction. SICTF factorizes Open Information Extraction (OpenIE) triples extracted from a domain corpus along with additional side information in a principled way to induce relation schemas. To the best of our knowledge, this is the first application of tensor factorization for the RSI problem. Through extensive experiments on multiple real-world datasets, we find that SICTF is not only more accurate than state-of-the-art baselines, but also significantly faster (about 14x faster).",2016
peters-braschler-2002-importance,http://www.lrec-conf.org/proceedings/lrec2002/pdf/163.pdf,0,,,,,,,"The Importance of Evaluation for Cross-Language System Development: the CLEF Experience. The aim of the Cross-Language Evaluation Forum (CLEF) is to develop and maintain an infrastructure for the evaluation of information retrieval systems operating on European languages in both monolingual and cross-language contexts, and to create testsuites of reusable data that can be employed by system developers for benchmarking purposes. Two CLEF evaluation campaigns have been held so far (CLEF 2000 and CLEF 2001); CLEF 2002 is now under way. The paper describes the objectives and the organisation of these campaigns, and gives a first assessment of the results. In conclusion, plans for future CLEF campaigns are reported.",The Importance of Evaluation for Cross-Language System Development: the {CLEF} Experience,"The aim of the Cross-Language Evaluation Forum (CLEF) is to develop and maintain an infrastructure for the evaluation of information retrieval systems operating on European languages in both monolingual and cross-language contexts, and to create testsuites of reusable data that can be employed by system developers for benchmarking purposes. Two CLEF evaluation campaigns have been held so far (CLEF 2000 and CLEF 2001); CLEF 2002 is now under way. The paper describes the objectives and the organisation of these campaigns, and gives a first assessment of the results. In conclusion, plans for future CLEF campaigns are reported.",The Importance of Evaluation for Cross-Language System Development: the CLEF Experience,"The aim of the Cross-Language Evaluation Forum (CLEF) is to develop and maintain an infrastructure for the evaluation of information retrieval systems operating on European languages in both monolingual and cross-language contexts, and to create testsuites of reusable data that can be employed by system developers for benchmarking purposes. Two CLEF evaluation campaigns have been held so far (CLEF 2000 and CLEF 2001); CLEF 2002 is now under way. The paper describes the objectives and the organisation of these campaigns, and gives a first assessment of the results. In conclusion, plans for future CLEF campaigns are reported.",We gratefully acknowledge the support of all the data providers and copyright holders: ,"The Importance of Evaluation for Cross-Language System Development: the CLEF Experience. The aim of the Cross-Language Evaluation Forum (CLEF) is to develop and maintain an infrastructure for the evaluation of information retrieval systems operating on European languages in both monolingual and cross-language contexts, and to create testsuites of reusable data that can be employed by system developers for benchmarking purposes. Two CLEF evaluation campaigns have been held so far (CLEF 2000 and CLEF 2001); CLEF 2002 is now under way. The paper describes the objectives and the organisation of these campaigns, and gives a first assessment of the results. In conclusion, plans for future CLEF campaigns are reported.",2002
stanovsky-etal-2017-integrating,https://aclanthology.org/P17-2056,0,,,,,,,"Integrating Deep Linguistic Features in Factuality Prediction over Unified Datasets. Previous models for the assessment of commitment towards a predicate in a sentence (also known as factuality prediction) were trained and tested against a specific annotated dataset, subsequently limiting the generality of their results. In this work we propose an intuitive method for mapping three previously annotated corpora onto a single factuality scale, thereby enabling models to be tested across these corpora. In addition, we design a novel model for factuality prediction by first extending a previous rule-based factuality prediction system and applying it over an abstraction of dependency trees, and then using the output of this system in a supervised classifier. We show that this model outperforms previous methods on all three datasets. We make both the unified factuality corpus and our new model publicly available.",Integrating Deep Linguistic Features in Factuality Prediction over Unified Datasets,"Previous models for the assessment of commitment towards a predicate in a sentence (also known as factuality prediction) were trained and tested against a specific annotated dataset, subsequently limiting the generality of their results. In this work we propose an intuitive method for mapping three previously annotated corpora onto a single factuality scale, thereby enabling models to be tested across these corpora. In addition, we design a novel model for factuality prediction by first extending a previous rule-based factuality prediction system and applying it over an abstraction of dependency trees, and then using the output of this system in a supervised classifier. We show that this model outperforms previous methods on all three datasets. We make both the unified factuality corpus and our new model publicly available.",Integrating Deep Linguistic Features in Factuality Prediction over Unified Datasets,"Previous models for the assessment of commitment towards a predicate in a sentence (also known as factuality prediction) were trained and tested against a specific annotated dataset, subsequently limiting the generality of their results. In this work we propose an intuitive method for mapping three previously annotated corpora onto a single factuality scale, thereby enabling models to be tested across these corpora. In addition, we design a novel model for factuality prediction by first extending a previous rule-based factuality prediction system and applying it over an abstraction of dependency trees, and then using the output of this system in a supervised classifier. We show that this model outperforms previous methods on all three datasets. We make both the unified factuality corpus and our new model publicly available.","We would like to thank the anonymous reviewers for their helpful comments. This work was supported in part by grants from the MAGNET program of the Israeli Office of the Chief Scientist (OCS) and by the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1).","Integrating Deep Linguistic Features in Factuality Prediction over Unified Datasets. Previous models for the assessment of commitment towards a predicate in a sentence (also known as factuality prediction) were trained and tested against a specific annotated dataset, subsequently limiting the generality of their results. 
In this work we propose an intuitive method for mapping three previously annotated corpora onto a single factuality scale, thereby enabling models to be tested across these corpora. In addition, we design a novel model for factuality prediction by first extending a previous rule-based factuality prediction system and applying it over an abstraction of dependency trees, and then using the output of this system in a supervised classifier. We show that this model outperforms previous methods on all three datasets. We make both the unified factuality corpus and our new model publicly available.",2017
ettinger-2020-bert,https://aclanthology.org/2020.tacl-1.3,0,,,,,,,"What BERT Is Not: Lessons from a New Suite of Psycholinguistic Diagnostics for Language Models. Pre-training by language modeling has become a popular and successful approach to NLP tasks, but we have yet to understand exactly what linguistic capacities these pre-training processes confer upon models. In this paper we introduce a suite of diagnostics drawn from human language experiments, which allow us to ask targeted questions about information used by language models for generating predictions in context. As a case study, we apply these diagnostics to the popular BERT model, finding that it can generally distinguish good from bad completions involving shared category or role reversal, albeit with less sensitivity than humans, and it robustly retrieves noun hypernyms, but it struggles with challenging inference and role-based event prediction and, in particular, it shows clear insensitivity to the contextual impacts of negation.",What {BERT} Is Not: Lessons from a New Suite of Psycholinguistic Diagnostics for Language Models,"Pre-training by language modeling has become a popular and successful approach to NLP tasks, but we have yet to understand exactly what linguistic capacities these pre-training processes confer upon models. In this paper we introduce a suite of diagnostics drawn from human language experiments, which allow us to ask targeted questions about information used by language models for generating predictions in context. As a case study, we apply these diagnostics to the popular BERT model, finding that it can generally distinguish good from bad completions involving shared category or role reversal, albeit with less sensitivity than humans, and it robustly retrieves noun hypernyms, but it struggles with challenging inference and role-based event prediction and, in particular, it shows clear insensitivity to the contextual impacts of negation.",What BERT Is Not: Lessons from a New Suite of Psycholinguistic Diagnostics for Language Models,"Pre-training by language modeling has become a popular and successful approach to NLP tasks, but we have yet to understand exactly what linguistic capacities these pre-training processes confer upon models. In this paper we introduce a suite of diagnostics drawn from human language experiments, which allow us to ask targeted questions about information used by language models for generating predictions in context. As a case study, we apply these diagnostics to the popular BERT model, finding that it can generally distinguish good from bad completions involving shared category or role reversal, albeit with less sensitivity than humans, and it robustly retrieves noun hypernyms, but it struggles with challenging inference and role-based event prediction and, in particular, it shows clear insensitivity to the contextual impacts of negation.","We would like to thank Tal Linzen, Kevin Gimpel, Yoav Goldberg, Marco Baroni, and several anonymous reviewers for valuable feedback on earlier versions of this paper. We also thank members of the Toyota Technological Institute at Chicago for useful discussion of these and related issues.","What BERT Is Not: Lessons from a New Suite of Psycholinguistic Diagnostics for Language Models. Pre-training by language modeling has become a popular and successful approach to NLP tasks, but we have yet to understand exactly what linguistic capacities these pre-training processes confer upon models. 
In this paper we introduce a suite of diagnostics drawn from human language experiments, which allow us to ask targeted questions about information used by language models for generating predictions in context. As a case study, we apply these diagnostics to the popular BERT model, finding that it can generally distinguish good from bad completions involving shared category or role reversal, albeit with less sensitivity than humans, and it robustly retrieves noun hypernyms, but it struggles with challenging inference and role-based event prediction and, in particular, it shows clear insensitivity to the contextual impacts of negation.",2020
chang-1994-word,https://aclanthology.org/C94-2198,0,,,,,,,"Word Class Discovery for Postprocessing Chinese Handwriting Recognition. This article presents a novel Chinese class n-gram model for contextual postprocessing of handwriting recognition results. The word classes in the model are automatically discovered by a corpus-based simulated annealing procedure. Three other language models, least-word, word-frequency, and the powerful interword character bigram model, have been constructed for comparison. Extensive experiments on large text corpora show that the discovered class bigram model outperforms the other three competing models.",Word Class Discovery for Postprocessing {C}hinese Handwriting Recognition,"This article presents a novel Chinese class n-gram model for contextual postprocessing of handwriting recognition results. The word classes in the model are automatically discovered by a corpus-based simulated annealing procedure. Three other language models, least-word, word-frequency, and the powerful interword character bigram model, have been constructed for comparison. Extensive experiments on large text corpora show that the discovered class bigram model outperforms the other three competing models.",Word Class Discovery for Postprocessing Chinese Handwriting Recognition,"This article presents a novel Chinese class n-gram model for contextual postprocessing of handwriting recognition results. The word classes in the model are automatically discovered by a corpus-based simulated annealing procedure. Three other language models, least-word, word-frequency, and the powerful interword character bigram model, have been constructed for comparison. Extensive experiments on large text corpora show that the discovered class bigram model outperforms the other three competing models.","Thanks are due to the Chinese Handwriting Recognition group, ATC/CCL/ITRI for the character recognizer, especially Y.-C. Lai for preparing the recognition results. This paper is a partial result of the project no. 37112100 conducted by the ITRI under sponsorship of the Ministry of Economic Affairs, R.O.C.","Word Class Discovery for Postprocessing Chinese Handwriting Recognition. This article presents a novel Chinese class n-gram model for contextual postprocessing of handwriting recognition results. The word classes in the model are automatically discovered by a corpus-based simulated annealing procedure. Three other language models, least-word, word-frequency, and the powerful interword character bigram model, have been constructed for comparison. Extensive experiments on large text corpora show that the discovered class bigram model outperforms the other three competing models.",1994
mccoy-1986-role,https://aclanthology.org/H86-1018,0,,,,,,,"The Role of Perspective in Responding to Property Misconceptions. In order to adequately respond to misconceptions involving an object's properties, we must have a context-sensitive method for determining object similarity. Such a method is introduced here. Some of the necessary contextual information is captured by a new notion of object perspective. It is shown how object perspective can be used to account for different responses to a given misconception in different contexts.",The Role of Perspective in Responding to Property Misconceptions,"In order to adequately respond to misconceptions involving an object's properties, we must have a context-sensitive method for determining object similarity. Such a method is introduced here. Some of the necessary contextual information is captured by a new notion of object perspective. It is shown how object perspective can be used to account for different responses to a given misconception in different contexts.",The Role of Perspective in Responding to Property Misconceptions,"In order to adequately respond to misconceptions involving an object's properties, we must have a context-sensitive method for determining object similarity. Such a method is introduced here. Some of the necessary contextual information is captured by a new notion of object perspective. It is shown how object perspective can be used to account for different responses to a given misconception in different contexts.",,"The Role of Perspective in Responding to Property Misconceptions. In order to adequately respond to misconceptions involving an object's properties, we must have a context-sensitive method for determining object similarity. Such a method is introduced here. Some of the necessary contextual information is captured by a new notion of object perspective. It is shown how object perspective can be used to account for different responses to a given misconception in different contexts.",1986
rabinovich-etal-2017-personalized,https://aclanthology.org/E17-1101,0,,,,,,,"Personalized Machine Translation: Preserving Original Author Traits. The language that we produce reflects our personality, and various personal and demographic characteristics can be detected in natural language texts. We focus on one particular personal trait of the author, gender, and study how it is manifested in original texts and in translations. We show that author's gender has a powerful, clear signal in original texts, but this signal is obfuscated in human and machine translation. We then propose simple domain-adaptation techniques that help retain the original gender traits in the translation, without harming the quality of the translation, thereby creating more personalized machine translation systems.",Personalized Machine Translation: Preserving Original Author Traits,"The language that we produce reflects our personality, and various personal and demographic characteristics can be detected in natural language texts. We focus on one particular personal trait of the author, gender, and study how it is manifested in original texts and in translations. We show that author's gender has a powerful, clear signal in original texts, but this signal is obfuscated in human and machine translation. We then propose simple domain-adaptation techniques that help retain the original gender traits in the translation, without harming the quality of the translation, thereby creating more personalized machine translation systems.",Personalized Machine Translation: Preserving Original Author Traits,"The language that we produce reflects our personality, and various personal and demographic characteristics can be detected in natural language texts. We focus on one particular personal trait of the author, gender, and study how it is manifested in original texts and in translations. We show that author's gender has a powerful, clear signal in original texts, but this signal is obfuscated in human and machine translation. We then propose simple domain-adaptation techniques that help retain the original gender traits in the translation, without harming the quality of the translation, thereby creating more personalized machine translation systems.","This research was partly supported by the H2020 QT21 project (645452, Lucia Specia). We are grateful to Sergiu Nisioi for sharing the initial collection of properties of Members of the European Parliament. We also thank our anonymous reviewers for their constructive feedback.","Personalized Machine Translation: Preserving Original Author Traits. The language that we produce reflects our personality, and various personal and demographic characteristics can be detected in natural language texts. We focus on one particular personal trait of the author, gender, and study how it is manifested in original texts and in translations. We show that author's gender has a powerful, clear signal in original texts, but this signal is obfuscated in human and machine translation. We then propose simple domain-adaptation techniques that help retain the original gender traits in the translation, without harming the quality of the translation, thereby creating more personalized machine translation systems.",2017
candito-constant-2014-strategies,https://aclanthology.org/P14-1070,0,,,,,,,"Strategies for Contiguous Multiword Expression Analysis and Dependency Parsing. In this paper, we investigate various strategies to predict both syntactic dependency parsing and contiguous multiword expression (MWE) recognition, testing them on the dependency version of French Treebank (Abeillé and Barrier, 2004), as instantiated in the SPMRL Shared Task (Seddah et al., 2013). Our work focuses on using an alternative representation of syntactically regular MWEs, which captures their syntactic internal structure. We obtain a system with comparable performance to that of previous works on this dataset, but which predicts both syntactic dependencies and the internal structure of MWEs. This can be useful for capturing the various degrees of semantic compositionality of MWEs.",Strategies for Contiguous Multiword Expression Analysis and Dependency Parsing,"In this paper, we investigate various strategies to predict both syntactic dependency parsing and contiguous multiword expression (MWE) recognition, testing them on the dependency version of French Treebank (Abeillé and Barrier, 2004), as instantiated in the SPMRL Shared Task (Seddah et al., 2013). Our work focuses on using an alternative representation of syntactically regular MWEs, which captures their syntactic internal structure. We obtain a system with comparable performance to that of previous works on this dataset, but which predicts both syntactic dependencies and the internal structure of MWEs. This can be useful for capturing the various degrees of semantic compositionality of MWEs.",Strategies for Contiguous Multiword Expression Analysis and Dependency Parsing,"In this paper, we investigate various strategies to predict both syntactic dependency parsing and contiguous multiword expression (MWE) recognition, testing them on the dependency version of French Treebank (Abeillé and Barrier, 2004), as instantiated in the SPMRL Shared Task (Seddah et al., 2013). Our work focuses on using an alternative representation of syntactically regular MWEs, which captures their syntactic internal structure. We obtain a system with comparable performance to that of previous works on this dataset, but which predicts both syntactic dependencies and the internal structure of MWEs. This can be useful for capturing the various degrees of semantic compositionality of MWEs.",,"Strategies for Contiguous Multiword Expression Analysis and Dependency Parsing. In this paper, we investigate various strategies to predict both syntactic dependency parsing and contiguous multiword expression (MWE) recognition, testing them on the dependency version of French Treebank (Abeillé and Barrier, 2004), as instantiated in the SPMRL Shared Task (Seddah et al., 2013). Our work focuses on using an alternative representation of syntactically regular MWEs, which captures their syntactic internal structure. We obtain a system with comparable performance to that of previous works on this dataset, but which predicts both syntactic dependencies and the internal structure of MWEs. This can be useful for capturing the various degrees of semantic compositionality of MWEs.",2014
ws-1998-treatment,https://aclanthology.org/W98-0600,0,,,,,,,"The Computational Treatment of Nominals.",The Computational Treatment of Nominals,,The Computational Treatment of Nominals,,,"The Computational Treatment of Nominals.",1998
pavalanathan-eisenstein-2015-confounds,https://aclanthology.org/D15-1256,0,,,,,,,"Confounds and Consequences in Geotagged Twitter Data. Twitter is often used in quantitative studies that identify geographically-preferred topics, writing styles, and entities. These studies rely on either GPS coordinates attached to individual messages, or on the user-supplied location field in each profile. In this paper, we compare these data acquisition techniques and quantify the biases that they introduce; we also measure their effects on linguistic analysis and text-based geolocation. GPS-tagging and self-reported locations yield measurably different corpora, and these linguistic differences are partially attributable to differences in dataset composition by age and gender. Using a latent variable model to induce age and gender, we show how these demographic variables interact with geography to affect language use. We also show that the accuracy of text-based geolocation varies with population demographics, giving the best results for men above the age of 40.",Confounds and Consequences in Geotagged {T}witter Data,"Twitter is often used in quantitative studies that identify geographically-preferred topics, writing styles, and entities. These studies rely on either GPS coordinates attached to individual messages, or on the user-supplied location field in each profile. In this paper, we compare these data acquisition techniques and quantify the biases that they introduce; we also measure their effects on linguistic analysis and text-based geolocation. GPS-tagging and self-reported locations yield measurably different corpora, and these linguistic differences are partially attributable to differences in dataset composition by age and gender. Using a latent variable model to induce age and gender, we show how these demographic variables interact with geography to affect language use. We also show that the accuracy of text-based geolocation varies with population demographics, giving the best results for men above the age of 40.",Confounds and Consequences in Geotagged Twitter Data,"Twitter is often used in quantitative studies that identify geographically-preferred topics, writing styles, and entities. These studies rely on either GPS coordinates attached to individual messages, or on the user-supplied location field in each profile. In this paper, we compare these data acquisition techniques and quantify the biases that they introduce; we also measure their effects on linguistic analysis and text-based geolocation. GPS-tagging and self-reported locations yield measurably different corpora, and these linguistic differences are partially attributable to differences in dataset composition by age and gender. Using a latent variable model to induce age and gender, we show how these demographic variables interact with geography to affect language use. We also show that the accuracy of text-based geolocation varies with population demographics, giving the best results for men above the age of 40.","Thanks to the anonymous reviewers for their useful and constructive feedback on our submission. The following members of the Georgia Tech Computational Linguistics Laboratory offered feedback throughout the research process: Naman Goyal, Yangfeng Ji, Vinodh Krishan, Ana Smith, Yijie Wang, and Yi Yang. 
This research was supported by the National Science Foundation under awards IIS-1111142 and RI-1452443, by the National Institutes of Health under award number R01GM112697-01, and by the Air Force Office of Scientific Research. The content is solely the responsibility of the authors and does not necessarily represent the official views of these sponsors.","Confounds and Consequences in Geotagged Twitter Data. Twitter is often used in quantitative studies that identify geographically-preferred topics, writing styles, and entities. These studies rely on either GPS coordinates attached to individual messages, or on the user-supplied location field in each profile. In this paper, we compare these data acquisition techniques and quantify the biases that they introduce; we also measure their effects on linguistic analysis and text-based geolocation. GPS-tagging and self-reported locations yield measurably different corpora, and these linguistic differences are partially attributable to differences in dataset composition by age and gender. Using a latent variable model to induce age and gender, we show how these demographic variables interact with geography to affect language use. We also show that the accuracy of text-based geolocation varies with population demographics, giving the best results for men above the age of 40.",2015
chen-bunescu-2017-exploration,https://aclanthology.org/I17-2075,0,,,,,,,"An Exploration of Data Augmentation and RNN Architectures for Question Ranking in Community Question Answering. The automation of tasks in community question answering (cQA) is dominated by machine learning approaches, whose performance is often limited by the number of training examples. Starting from a neural sequence learning approach with attention, we explore the impact of two data augmentation techniques on question ranking performance: a method that swaps reference questions with their paraphrases, and training on examples automatically selected from external datasets. Both methods are shown to lead to substantial gains in accuracy over a strong baseline. Further improvements are obtained by changing the model architecture to mirror the structure seen in the data.",An Exploration of Data Augmentation and {RNN} Architectures for Question Ranking in Community Question Answering,"The automation of tasks in community question answering (cQA) is dominated by machine learning approaches, whose performance is often limited by the number of training examples. Starting from a neural sequence learning approach with attention, we explore the impact of two data augmentation techniques on question ranking performance: a method that swaps reference questions with their paraphrases, and training on examples automatically selected from external datasets. Both methods are shown to lead to substantial gains in accuracy over a strong baseline. Further improvements are obtained by changing the model architecture to mirror the structure seen in the data.",An Exploration of Data Augmentation and RNN Architectures for Question Ranking in Community Question Answering,"The automation of tasks in community question answering (cQA) is dominated by machine learning approaches, whose performance is often limited by the number of training examples. Starting from a neural sequence learning approach with attention, we explore the impact of two data augmentation techniques on question ranking performance: a method that swaps reference questions with their paraphrases, and training on examples automatically selected from external datasets. Both methods are shown to lead to substantial gains in accuracy over a strong baseline. Further improvements are obtained by changing the model architecture to mirror the structure seen in the data.",We would like to thank the anonymous reviewers for their helpful comments. This work was supported by an allocation of computing time from the Ohio Supercomputer Center.,"An Exploration of Data Augmentation and RNN Architectures for Question Ranking in Community Question Answering. The automation of tasks in community question answering (cQA) is dominated by machine learning approaches, whose performance is often limited by the number of training examples. Starting from a neural sequence learning approach with attention, we explore the impact of two data augmentation techniques on question ranking performance: a method that swaps reference questions with their paraphrases, and training on examples automatically selected from external datasets. Both methods are shown to lead to substantial gains in accuracy over a strong baseline. Further improvements are obtained by changing the model architecture to mirror the structure seen in the data.",2017
surana-chinagundi-2022-ginius,https://aclanthology.org/2022.ltedi-1.43,1,,,,health,,,"giniUs @LT-EDI-ACL2022: Aasha: Transformers based Hope-EDI. This paper describes team giniUs' submission to the Hope Speech Detection for Equality, Diversity and Inclusion Shared Task organised by LT-EDI ACL 2022. We have fine-tuned the RoBERTa-large pre-trained model and extracted the last four Decoder layers to build a binary classifier. Our best result on the leaderboard achieves a weighted F1 score of 0.86 and a Macro F1 score of 0.51 for English. We rank fourth in the English task. We have open-sourced our code implementations on GitHub to facilitate easy reproducibility by the scientific community.",gini{U}s @{LT}-{EDI}-{ACL}2022: Aasha: Transformers based Hope-{EDI},"This paper describes team giniUs' submission to the Hope Speech Detection for Equality, Diversity and Inclusion Shared Task organised by LT-EDI ACL 2022. We have fine-tuned the RoBERTa-large pre-trained model and extracted the last four Decoder layers to build a binary classifier. Our best result on the leaderboard achieves a weighted F1 score of 0.86 and a Macro F1 score of 0.51 for English. We rank fourth in the English task. We have open-sourced our code implementations on GitHub to facilitate easy reproducibility by the scientific community.",giniUs @LT-EDI-ACL2022: Aasha: Transformers based Hope-EDI,"This paper describes team giniUs' submission to the Hope Speech Detection for Equality, Diversity and Inclusion Shared Task organised by LT-EDI ACL 2022. We have fine-tuned the RoBERTa-large pre-trained model and extracted the last four Decoder layers to build a binary classifier. Our best result on the leaderboard achieves a weighted F1 score of 0.86 and a Macro F1 score of 0.51 for English. We rank fourth in the English task. We have open-sourced our code implementations on GitHub to facilitate easy reproducibility by the scientific community.",,"giniUs @LT-EDI-ACL2022: Aasha: Transformers based Hope-EDI. This paper describes team giniUs' submission to the Hope Speech Detection for Equality, Diversity and Inclusion Shared Task organised by LT-EDI ACL 2022. We have fine-tuned the RoBERTa-large pre-trained model and extracted the last four Decoder layers to build a binary classifier. Our best result on the leaderboard achieves a weighted F1 score of 0.86 and a Macro F1 score of 0.51 for English. We rank fourth in the English task. We have open-sourced our code implementations on GitHub to facilitate easy reproducibility by the scientific community.",2022
okabe-etal-2005-query,https://aclanthology.org/H05-1121,0,,,,,,,"Query Expansion with the Minimum User Feedback by Transductive Learning. Query expansion techniques generally select new query terms from a set of top ranked documents. Although a user's manual judgment of those documents would much help to select good expansion terms, it is difficult to get enough feedback from users in practical situations. In this paper we propose a query expansion technique which performs well even if a user notifies just a relevant document and a non-relevant document. In order to tackle this specific condition, we introduce two refinements to a well-known query expansion technique. One is application of a transductive learning technique in order to increase relevant documents. The other is a modified parameter estimation method which laps the predictions by multiple learning trials and tries to differentiate the importance of candidate terms for expansion in relevant documents. Experimental results show that our technique outperforms some traditional query expansion methods in several evaluation measures.",Query Expansion with the Minimum User Feedback by Transductive Learning,"Query expansion techniques generally select new query terms from a set of top ranked documents. Although a user's manual judgment of those documents would much help to select good expansion terms, it is difficult to get enough feedback from users in practical situations. In this paper we propose a query expansion technique which performs well even if a user notifies just a relevant document and a non-relevant document. In order to tackle this specific condition, we introduce two refinements to a well-known query expansion technique. One is application of a transductive learning technique in order to increase relevant documents. The other is a modified parameter estimation method which laps the predictions by multiple learning trials and tries to differentiate the importance of candidate terms for expansion in relevant documents. Experimental results show that our technique outperforms some traditional query expansion methods in several evaluation measures.",Query Expansion with the Minimum User Feedback by Transductive Learning,"Query expansion techniques generally select new query terms from a set of top ranked documents. Although a user's manual judgment of those documents would much help to select good expansion terms, it is difficult to get enough feedback from users in practical situations. In this paper we propose a query expansion technique which performs well even if a user notifies just a relevant document and a non-relevant document. In order to tackle this specific condition, we introduce two refinements to a well-known query expansion technique. One is application of a transductive learning technique in order to increase relevant documents. The other is a modified parameter estimation method which laps the predictions by multiple learning trials and tries to differentiate the importance of candidate terms for expansion in relevant documents. Experimental results show that our technique outperforms some traditional query expansion methods in several evaluation measures.",,"Query Expansion with the Minimum User Feedback by Transductive Learning. Query expansion techniques generally select new query terms from a set of top ranked documents. Although a user's manual judgment of those documents would much help to select good expansion terms, it is difficult to get enough feedback from users in practical situations. 
In this paper we propose a query expansion technique which performs well even if a user notifies just a relevant document and a non-relevant document. In order to tackle this specific condition, we introduce two refinements to a well-known query expansion technique. One is application of a transductive learning technique in order to increase relevant documents. The other is a modified parameter estimation method which laps the predictions by multiple learning trials and tries to differentiate the importance of candidate terms for expansion in relevant documents. Experimental results show that our technique outperforms some traditional query expansion methods in several evaluation measures.",2005
wang-etal-2012-exploiting,https://aclanthology.org/C12-2128,0,,,,,,,"Exploiting Discourse Relations for Sentiment Analysis. The overall sentiment of a text is critically affected by its discourse structure. By splitting a text into text spans with different discourse relations, we automatically train the weights of different relations in accordance with their importance, and then make use of discourse structure knowledge to improve sentiment classification. In this paper, we utilize explicit connectives to predict discourse relations, and then propose several methods to incorporate discourse relation knowledge to the task of sentiment analysis. All our methods integrating discourse relations perform better than the baseline methods, validating the effectiveness of using discourse relations in Chinese sentiment analysis. We also automatically find out the most influential discourse relations and connectives in sentiment analysis.",Exploiting Discourse Relations for Sentiment Analysis,"The overall sentiment of a text is critically affected by its discourse structure. By splitting a text into text spans with different discourse relations, we automatically train the weights of different relations in accordance with their importance, and then make use of discourse structure knowledge to improve sentiment classification. In this paper, we utilize explicit connectives to predict discourse relations, and then propose several methods to incorporate discourse relation knowledge to the task of sentiment analysis. All our methods integrating discourse relations perform better than the baseline methods, validating the effectiveness of using discourse relations in Chinese sentiment analysis. We also automatically find out the most influential discourse relations and connectives in sentiment analysis.",Exploiting Discourse Relations for Sentiment Analysis,"The overall sentiment of a text is critically affected by its discourse structure. By splitting a text into text spans with different discourse relations, we automatically train the weights of different relations in accordance with their importance, and then make use of discourse structure knowledge to improve sentiment classification. In this paper, we utilize explicit connectives to predict discourse relations, and then propose several methods to incorporate discourse relation knowledge to the task of sentiment analysis. All our methods integrating discourse relations perform better than the baseline methods, validating the effectiveness of using discourse relations in Chinese sentiment analysis. We also automatically find out the most influential discourse relations and connectives in sentiment analysis.",,"Exploiting Discourse Relations for Sentiment Analysis. The overall sentiment of a text is critically affected by its discourse structure. By splitting a text into text spans with different discourse relations, we automatically train the weights of different relations in accordance with their importance, and then make use of discourse structure knowledge to improve sentiment classification. In this paper, we utilize explicit connectives to predict discourse relations, and then propose several methods to incorporate discourse relation knowledge to the task of sentiment analysis. All our methods integrating discourse relations perform better than the baseline methods, validating the effectiveness of using discourse relations in Chinese sentiment analysis. 
We also automatically find out the most influential discourse relations and connectives in sentiment analysis.",2012
nimb-2004-corpus,http://www.lrec-conf.org/proceedings/lrec2004/pdf/284.pdf,0,,,,,,,"A Corpus-based Syntactic Lexicon for Adverbs. A word class often neglected in the field of NLP resources, namely adverbs, has lately been described in a computational lexicon produced at CST as one of the results of a Ph.D.-project. The adverb lexicon, which is integrated in the Danish STO lexicon, gives detailed syntactic information on the type of modification and position, as well as on other syntactic properties of approx 800 Danish adverbs. One of the aims of the lexicon has been to establish a clear distinction between syntactic and semantic information-where other lexicons often generalize over the syntactic behavior of semantic classes of adverbs, every adverb is described with respect to its proper syntactic behavior in a text corpus, revealing very individual syntactic properties. Syntactic information on adverbs is needed in NLP systems generating text to ensure correct placing in the phrase they modify. Also in systems analyzing text, this information is needed in order to attach the adverbs to the right node in the syntactic parse trees. Within the field of linguistic research, several results can be deduced from the lexicon, e.g. knowledge of syntactic classes of Danish adverbs.",A Corpus-based Syntactic Lexicon for Adverbs,"A word class often neglected in the field of NLP resources, namely adverbs, has lately been described in a computational lexicon produced at CST as one of the results of a Ph.D.-project. The adverb lexicon, which is integrated in the Danish STO lexicon, gives detailed syntactic information on the type of modification and position, as well as on other syntactic properties of approx 800 Danish adverbs. One of the aims of the lexicon has been to establish a clear distinction between syntactic and semantic information-where other lexicons often generalize over the syntactic behavior of semantic classes of adverbs, every adverb is described with respect to its proper syntactic behavior in a text corpus, revealing very individual syntactic properties. Syntactic information on adverbs is needed in NLP systems generating text to ensure correct placing in the phrase they modify. Also in systems analyzing text, this information is needed in order to attach the adverbs to the right node in the syntactic parse trees. Within the field of linguistic research, several results can be deduced from the lexicon, e.g. knowledge of syntactic classes of Danish adverbs.",A Corpus-based Syntactic Lexicon for Adverbs,"A word class often neglected in the field of NLP resources, namely adverbs, has lately been described in a computational lexicon produced at CST as one of the results of a Ph.D.-project. The adverb lexicon, which is integrated in the Danish STO lexicon, gives detailed syntactic information on the type of modification and position, as well as on other syntactic properties of approx 800 Danish adverbs. One of the aims of the lexicon has been to establish a clear distinction between syntactic and semantic information-where other lexicons often generalize over the syntactic behavior of semantic classes of adverbs, every adverb is described with respect to its proper syntactic behavior in a text corpus, revealing very individual syntactic properties. Syntactic information on adverbs is needed in NLP systems generating text to ensure correct placing in the phrase they modify. 
Also in systems analyzing text, this information is needed in order to attach the adverbs to the right node in the syntactic parse trees. Within the field of linguistic research, several results can be deduced from the lexicon, e.g. knowledge of syntactic classes of Danish adverbs.",,"A Corpus-based Syntactic Lexicon for Adverbs. A word class often neglected in the field of NLP resources, namely adverbs, has lately been described in a computational lexicon produced at CST as one of the results of a Ph.D.-project. The adverb lexicon, which is integrated in the Danish STO lexicon, gives detailed syntactic information on the type of modification and position, as well as on other syntactic properties of approx 800 Danish adverbs. One of the aims of the lexicon has been to establish a clear distinction between syntactic and semantic information-where other lexicons often generalize over the syntactic behavior of semantic classes of adverbs, every adverb is described with respect to its proper syntactic behavior in a text corpus, revealing very individual syntactic properties. Syntactic information on adverbs is needed in NLP systems generating text to ensure correct placing in the phrase they modify. Also in systems analyzing text, this information is needed in order to attach the adverbs to the right node in the syntactic parse trees. Within the field of linguistic research, several results can be deduced from the lexicon, e.g. knowledge of syntactic classes of Danish adverbs.",2004
muis-etal-2018-low,https://aclanthology.org/C18-1007,0,,,,,,,"Low-resource Cross-lingual Event Type Detection via Distant Supervision with Minimal Effort. The use of machine learning for NLP generally requires resources for training. Tasks performed in a low-resource language usually rely on labeled data in another, typically resource-rich, language. However, there might not be enough labeled data even in a resource-rich language such as English. In such cases, one approach is to use a hand-crafted approach that utilizes only a small bilingual dictionary with minimal manual verification to create distantly supervised data. Another is to explore typical machine learning techniques, for example adversarial training of bilingual word representations. We find that in event-type detection task-the task to classify [parts of] documents into a fixed set of labels-they give about the same performance. We explore ways in which the two methods can be complementary and also see how to best utilize a limited budget for manual annotation to maximize performance gain.",Low-resource Cross-lingual Event Type Detection via Distant Supervision with Minimal Effort,"The use of machine learning for NLP generally requires resources for training. Tasks performed in a low-resource language usually rely on labeled data in another, typically resource-rich, language. However, there might not be enough labeled data even in a resource-rich language such as English. In such cases, one approach is to use a hand-crafted approach that utilizes only a small bilingual dictionary with minimal manual verification to create distantly supervised data. Another is to explore typical machine learning techniques, for example adversarial training of bilingual word representations. We find that in event-type detection task-the task to classify [parts of] documents into a fixed set of labels-they give about the same performance. We explore ways in which the two methods can be complementary and also see how to best utilize a limited budget for manual annotation to maximize performance gain.",Low-resource Cross-lingual Event Type Detection via Distant Supervision with Minimal Effort,"The use of machine learning for NLP generally requires resources for training. Tasks performed in a low-resource language usually rely on labeled data in another, typically resource-rich, language. However, there might not be enough labeled data even in a resource-rich language such as English. In such cases, one approach is to use a hand-crafted approach that utilizes only a small bilingual dictionary with minimal manual verification to create distantly supervised data. Another is to explore typical machine learning techniques, for example adversarial training of bilingual word representations. We find that in event-type detection task-the task to classify [parts of] documents into a fixed set of labels-they give about the same performance. We explore ways in which the two methods can be complementary and also see how to best utilize a limited budget for manual annotation to maximize performance gain.","We acknowledge NIST for coordinating the SF type evaluation and providing the test data. NIST serves to coordinate the evaluations in order to support research and to help advance the state-of-the-art. NIST evaluations are not viewed as a competition, and such results reported by NIST are not to be construed, or represented, as endorsements of any participants system, or as official findings on the part of NIST or the U.S. Government. 
We thank Lori Levin for the inputs for an earlier version of this paper. This project was sponsored by the Defense Advanced Research Projects Agency (DARPA) Information Innovation Office (I2O), program: Low Resource Languages for Emergent Incidents (LORELEI), issued by DARPA/I2O under Contract No. HR0011-15-C-0114.","Low-resource Cross-lingual Event Type Detection via Distant Supervision with Minimal Effort. The use of machine learning for NLP generally requires resources for training. Tasks performed in a low-resource language usually rely on labeled data in another, typically resource-rich, language. However, there might not be enough labeled data even in a resource-rich language such as English. In such cases, one approach is to use a hand-crafted approach that utilizes only a small bilingual dictionary with minimal manual verification to create distantly supervised data. Another is to explore typical machine learning techniques, for example adversarial training of bilingual word representations. We find that in event-type detection task-the task to classify [parts of] documents into a fixed set of labels-they give about the same performance. We explore ways in which the two methods can be complementary and also see how to best utilize a limited budget for manual annotation to maximize performance gain.",2018
zhong-etal-2019-closer,https://aclanthology.org/D19-5410,0,,,,,,,"A Closer Look at Data Bias in Neural Extractive Summarization Models. In this paper, we take stock of the current state of summarization datasets and explore how different factors of datasets influence the generalization behaviour of neural extractive summarization models. Specifically, we first propose several properties of datasets, which matter for the generalization of summarization models. Then we build the connection between priors residing in datasets and model designs, analyzing how different properties of datasets influence the choices of model structure design and training methods. Finally, by taking a typical dataset as an example, we rethink the process of the model design based on the experience of the above analysis. We demonstrate that when we have a deep understanding of the characteristics of datasets, a simple approach can bring significant improvements to the existing state-of-the-art model.",A Closer Look at Data Bias in Neural Extractive Summarization Models,"In this paper, we take stock of the current state of summarization datasets and explore how different factors of datasets influence the generalization behaviour of neural extractive summarization models. Specifically, we first propose several properties of datasets, which matter for the generalization of summarization models. Then we build the connection between priors residing in datasets and model designs, analyzing how different properties of datasets influence the choices of model structure design and training methods. Finally, by taking a typical dataset as an example, we rethink the process of the model design based on the experience of the above analysis. We demonstrate that when we have a deep understanding of the characteristics of datasets, a simple approach can bring significant improvements to the existing state-of-the-art model.",A Closer Look at Data Bias in Neural Extractive Summarization Models,"In this paper, we take stock of the current state of summarization datasets and explore how different factors of datasets influence the generalization behaviour of neural extractive summarization models. Specifically, we first propose several properties of datasets, which matter for the generalization of summarization models. Then we build the connection between priors residing in datasets and model designs, analyzing how different properties of datasets influence the choices of model structure design and training methods. Finally, by taking a typical dataset as an example, we rethink the process of the model design based on the experience of the above analysis. We demonstrate that when we have a deep understanding of the characteristics of datasets, a simple approach can bring significant improvements to the existing state-of-the-art model.",,"A Closer Look at Data Bias in Neural Extractive Summarization Models. In this paper, we take stock of the current state of summarization datasets and explore how different factors of datasets influence the generalization behaviour of neural extractive summarization models. Specifically, we first propose several properties of datasets, which matter for the generalization of summarization models. 
Then we build the connection between priors residing in datasets and model designs, analyzing how different properties of datasets influence the choices of model structure design and training methods. Finally, by taking a typical dataset as an example, we rethink the process of the model design based on the experience of the above analysis. We demonstrate that when we have a deep understanding of the characteristics of datasets, a simple approach can bring significant improvements to the existing state-of-the-art model.",2019
karamanolakis-etal-2020-txtract,https://aclanthology.org/2020.acl-main.751,0,,,,business_use,,,"TXtract: Taxonomy-Aware Knowledge Extraction for Thousands of Product Categories. Extracting structured knowledge from product profiles is crucial for various applications in e-Commerce. State-of-the-art approaches for knowledge extraction were each designed for a single category of product, and thus do not apply to real-life e-Commerce scenarios, which often contain thousands of diverse categories. This paper proposes TXtract, a taxonomy-aware knowledge extraction model that applies to thousands of product categories organized in a hierarchical taxonomy. Through category conditional self-attention and multi-task learning, our approach is both scalable, as it trains a single model for thousands of categories, and effective, as it extracts category-specific attribute values. Experiments on products from a taxonomy with 4,000 categories show that TXtract outperforms state-of-the-art approaches by up to 10% in F1 and 15% in coverage across all categories.",{TX}tract: Taxonomy-Aware Knowledge Extraction for Thousands of Product Categories,"Extracting structured knowledge from product profiles is crucial for various applications in e-Commerce. State-of-the-art approaches for knowledge extraction were each designed for a single category of product, and thus do not apply to real-life e-Commerce scenarios, which often contain thousands of diverse categories. This paper proposes TXtract, a taxonomy-aware knowledge extraction model that applies to thousands of product categories organized in a hierarchical taxonomy. Through category conditional self-attention and multi-task learning, our approach is both scalable, as it trains a single model for thousands of categories, and effective, as it extracts category-specific attribute values. Experiments on products from a taxonomy with 4,000 categories show that TXtract outperforms state-of-the-art approaches by up to 10% in F1 and 15% in coverage across all categories.",TXtract: Taxonomy-Aware Knowledge Extraction for Thousands of Product Categories,"Extracting structured knowledge from product profiles is crucial for various applications in e-Commerce. State-of-the-art approaches for knowledge extraction were each designed for a single category of product, and thus do not apply to real-life e-Commerce scenarios, which often contain thousands of diverse categories. This paper proposes TXtract, a taxonomy-aware knowledge extraction model that applies to thousands of product categories organized in a hierarchical taxonomy. Through category conditional self-attention and multi-task learning, our approach is both scalable, as it trains a single model for thousands of categories, and effective, as it extracts category-specific attribute values. Experiments on products from a taxonomy with 4,000 categories show that TXtract outperforms state-of-the-art approaches by up to 10% in F1 and 15% in coverage across all categories.","The authors would like to sincerely thank Ron Benson, Christos Faloutsos, Andrey Kan, Yan Liang, Yaqing Wang, and Tong Zhao for their insightful comments on the paper, and Gabriel Blanco, Alexandre Manduca, Saurabh Deshpande, Jay Ren, and Johanna Umana for their constructive feedback on data integration for the experiments.","TXtract: Taxonomy-Aware Knowledge Extraction for Thousands of Product Categories. Extracting structured knowledge from product profiles is crucial for various applications in e-Commerce. 
State-of-the-art approaches for knowledge extraction were each designed for a single category of product, and thus do not apply to real-life e-Commerce scenarios, which often contain thousands of diverse categories. This paper proposes TXtract, a taxonomy-aware knowledge extraction model that applies to thousands of product categories organized in a hierarchical taxonomy. Through category conditional self-attention and multi-task learning, our approach is both scalable, as it trains a single model for thousands of categories, and effective, as it extracts category-specific attribute values. Experiments on products from a taxonomy with 4,000 categories show that TXtract outperforms state-of-the-art approaches by up to 10% in F1 and 15% in coverage across all categories.",2020
leung-etal-2016-developing,https://aclanthology.org/W16-5403,0,,,,,,,"Developing Universal Dependencies for Mandarin Chinese. This article proposes a Universal Dependency Annotation Scheme for Mandarin Chinese, including POS tags and dependency analysis. We identify cases of idiosyncrasy of Mandarin Chinese that are difficult to fit into the current schema which has mainly been based on the descriptions of various Indo-European languages. We discuss differences between our scheme and those of the Stanford Chinese Dependencies and the Chinese Dependency Treebank.",Developing {U}niversal {D}ependencies for {M}andarin {C}hinese,"This article proposes a Universal Dependency Annotation Scheme for Mandarin Chinese, including POS tags and dependency analysis. We identify cases of idiosyncrasy of Mandarin Chinese that are difficult to fit into the current schema which has mainly been based on the descriptions of various Indo-European languages. We discuss differences between our scheme and those of the Stanford Chinese Dependencies and the Chinese Dependency Treebank.",Developing Universal Dependencies for Mandarin Chinese,"This article proposes a Universal Dependency Annotation Scheme for Mandarin Chinese, including POS tags and dependency analysis. We identify cases of idiosyncrasy of Mandarin Chinese that are difficult to fit into the current schema which has mainly been based on the descriptions of various Indo-European languages. We discuss differences between our scheme and those of the Stanford Chinese Dependencies and the Chinese Dependency Treebank.",This work was supported by a grant from the PROCORE-France/Hong Kong Joint Research Scheme sponsored by the Research Grants Council and the Consulate General of France in Hong Kong (Reference No.: F-CityU107/15 and N° 35322RG); and by a Strategic Research Grant (Project No. 7004494) from City University of Hong Kong.,"Developing Universal Dependencies for Mandarin Chinese. This article proposes a Universal Dependency Annotation Scheme for Mandarin Chinese, including POS tags and dependency analysis. We identify cases of idiosyncrasy of Mandarin Chinese that are difficult to fit into the current schema which has mainly been based on the descriptions of various Indo-European languages. We discuss differences between our scheme and those of the Stanford Chinese Dependencies and the Chinese Dependency Treebank.",2016
mondal-etal-2021-classification,https://aclanthology.org/2021.smm4h-1.29,1,,,,health,,,"Classification of COVID19 tweets using Machine Learning Approaches. The reported work is a description of our participation in the ""Classification of COVID19 tweets containing symptoms"" shared task, organized by the ""Social Media Mining for Health Applications (SMM4H)"" workshop. The literature describes two machine learning approaches that were used to build a three-class classification system, that categorizes tweets related to COVID19, into three classes, viz., self-reports, non-personal reports, and literature/news mentions. The steps for preprocessing tweets, feature extraction, and the development of the machine learning models, are described extensively in the documentation. Both the developed learning models, when evaluated by the organizers, garnered F1 scores of 0.93 and 0.92 respectively.",Classification of {COVID}19 tweets using Machine Learning Approaches,"The reported work is a description of our participation in the ""Classification of COVID19 tweets containing symptoms"" shared task, organized by the ""Social Media Mining for Health Applications (SMM4H)"" workshop. The literature describes two machine learning approaches that were used to build a three-class classification system, that categorizes tweets related to COVID19, into three classes, viz., self-reports, non-personal reports, and literature/news mentions. The steps for preprocessing tweets, feature extraction, and the development of the machine learning models, are described extensively in the documentation. Both the developed learning models, when evaluated by the organizers, garnered F1 scores of 0.93 and 0.92 respectively.",Classification of COVID19 tweets using Machine Learning Approaches,"The reported work is a description of our participation in the ""Classification of COVID19 tweets containing symptoms"" shared task, organized by the ""Social Media Mining for Health Applications (SMM4H)"" workshop. The literature describes two machine learning approaches that were used to build a three-class classification system, that categorizes tweets related to COVID19, into three classes, viz., self-reports, non-personal reports, and literature/news mentions. The steps for preprocessing tweets, feature extraction, and the development of the machine learning models, are described extensively in the documentation. Both the developed learning models, when evaluated by the organizers, garnered F1 scores of 0.93 and 0.92 respectively.",,"Classification of COVID19 tweets using Machine Learning Approaches. The reported work is a description of our participation in the ""Classification of COVID19 tweets containing symptoms"" shared task, organized by the ""Social Media Mining for Health Applications (SMM4H)"" workshop. The literature describes two machine learning approaches that were used to build a three-class classification system, that categorizes tweets related to COVID19, into three classes, viz., self-reports, non-personal reports, and literature/news mentions. The steps for preprocessing tweets, feature extraction, and the development of the machine learning models, are described extensively in the documentation. Both the developed learning models, when evaluated by the organizers, garnered F1 scores of 0.93 and 0.92 respectively.",2021
gero-etal-2022-sparks,https://aclanthology.org/2022.in2writing-1.12,1,,,,industry_innovation_infrastructure,,,"Sparks: Inspiration for Science Writing using Language Models. Large-scale language models are rapidly improving, performing well on a variety of tasks with little to no customization. In this work we investigate how language models can support science writing, a challenging writing task that is both open-ended and highly constrained. We present a system for generating ""sparks"", sentences related to a scientific concept intended to inspire writers. We run a user study with 13 STEM graduate students and find three main use cases of sparks-inspiration, translation, and perspective-each of which correlates with a unique interaction pattern. We also find that while participants were more likely to select higher quality sparks, the overall quality of sparks seen by a given participant did not correlate with their satisfaction with the tool.",Sparks: Inspiration for Science Writing using Language Models,"Large-scale language models are rapidly improving, performing well on a variety of tasks with little to no customization. In this work we investigate how language models can support science writing, a challenging writing task that is both open-ended and highly constrained. We present a system for generating ""sparks"", sentences related to a scientific concept intended to inspire writers. We run a user study with 13 STEM graduate students and find three main use cases of sparks-inspiration, translation, and perspective-each of which correlates with a unique interaction pattern. We also find that while participants were more likely to select higher quality sparks, the overall quality of sparks seen by a given participant did not correlate with their satisfaction with the tool.",Sparks: Inspiration for Science Writing using Language Models,"Large-scale language models are rapidly improving, performing well on a variety of tasks with little to no customization. In this work we investigate how language models can support science writing, a challenging writing task that is both open-ended and highly constrained. We present a system for generating ""sparks"", sentences related to a scientific concept intended to inspire writers. We run a user study with 13 STEM graduate students and find three main use cases of sparks-inspiration, translation, and perspective-each of which correlates with a unique interaction pattern. We also find that while participants were more likely to select higher quality sparks, the overall quality of sparks seen by a given participant did not correlate with their satisfaction with the tool.",,"Sparks: Inspiration for Science Writing using Language Models. Large-scale language models are rapidly improving, performing well on a variety of tasks with little to no customization. In this work we investigate how language models can support science writing, a challenging writing task that is both open-ended and highly constrained. We present a system for generating ""sparks"", sentences related to a scientific concept intended to inspire writers. We run a user study with 13 STEM graduate students and find three main use cases of sparks-inspiration, translation, and perspective-each of which correlates with a unique interaction pattern. We also find that while participants were more likely to select higher quality sparks, the overall quality of sparks seen by a given participant did not correlate with their satisfaction with the tool.",2022
zhang-etal-2019-paws,https://aclanthology.org/N19-1131,0,,,,,,,"PAWS: Paraphrase Adversaries from Word Scrambling. Existing paraphrase identification datasets lack sentence pairs that have high lexical overlap without being paraphrases. Models trained on such data fail to distinguish pairs like flights from New York to Florida and flights from Florida to New York. This paper introduces PAWS (Paraphrase Adversaries from Word Scrambling), a new dataset with 108,463 well-formed paraphrase and non-paraphrase pairs with high lexical overlap. Challenging pairs are generated by controlled word swapping and back translation, followed by fluency and paraphrase judgments by human raters. State-of-the-art models trained on existing datasets have dismal performance on PAWS (<40% accuracy); however, including PAWS training data for these models improves their accuracy to 85% while maintaining performance on existing tasks. In contrast, models that do not capture non-local contextual information fail even with PAWS training examples. As such, PAWS provides an effective instrument for driving further progress on models that better exploit structure, context, and pairwise comparisons.",{PAWS}: Paraphrase Adversaries from Word Scrambling,"Existing paraphrase identification datasets lack sentence pairs that have high lexical overlap without being paraphrases. Models trained on such data fail to distinguish pairs like flights from New York to Florida and flights from Florida to New York. This paper introduces PAWS (Paraphrase Adversaries from Word Scrambling), a new dataset with 108,463 well-formed paraphrase and non-paraphrase pairs with high lexical overlap. Challenging pairs are generated by controlled word swapping and back translation, followed by fluency and paraphrase judgments by human raters. State-of-the-art models trained on existing datasets have dismal performance on PAWS (<40% accuracy); however, including PAWS training data for these models improves their accuracy to 85% while maintaining performance on existing tasks. In contrast, models that do not capture non-local contextual information fail even with PAWS training examples. As such, PAWS provides an effective instrument for driving further progress on models that better exploit structure, context, and pairwise comparisons.",PAWS: Paraphrase Adversaries from Word Scrambling,"Existing paraphrase identification datasets lack sentence pairs that have high lexical overlap without being paraphrases. Models trained on such data fail to distinguish pairs like flights from New York to Florida and flights from Florida to New York. This paper introduces PAWS (Paraphrase Adversaries from Word Scrambling), a new dataset with 108,463 well-formed paraphrase and non-paraphrase pairs with high lexical overlap. Challenging pairs are generated by controlled word swapping and back translation, followed by fluency and paraphrase judgments by human raters. State-of-the-art models trained on existing datasets have dismal performance on PAWS (<40% accuracy); however, including PAWS training data for these models improves their accuracy to 85% while maintaining performance on existing tasks. In contrast, models that do not capture non-local contextual information fail even with PAWS training examples. 
As such, PAWS provides an effective instrument for driving further progress on models that better exploit structure, context, and pairwise comparisons.","We would like to thank our anonymous reviewers and the Google AI Language team, especially Emily Pitler, for the insightful comments that contributed to this paper. Many thanks also to the Data Compute team, especially Ashwin Kakarla and Henry Jicha, for their help with the annotations","PAWS: Paraphrase Adversaries from Word Scrambling. Existing paraphrase identification datasets lack sentence pairs that have high lexical overlap without being paraphrases. Models trained on such data fail to distinguish pairs like flights from New York to Florida and flights from Florida to New York. This paper introduces PAWS (Paraphrase Adversaries from Word Scrambling), a new dataset with 108,463 well-formed paraphrase and non-paraphrase pairs with high lexical overlap. Challenging pairs are generated by controlled word swapping and back translation, followed by fluency and paraphrase judgments by human raters. State-of-the-art models trained on existing datasets have dismal performance on PAWS (<40% accuracy); however, including PAWS training data for these models improves their accuracy to 85% while maintaining performance on existing tasks. In contrast, models that do not capture non-local contextual information fail even with PAWS training examples. As such, PAWS provides an effective instrument for driving further progress on models that better exploit structure, context, and pairwise comparisons.",2019
sassano-kurohashi-2010-using,https://aclanthology.org/P10-1037,0,,,,,,,"Using Smaller Constituents Rather Than Sentences in Active Learning for Japanese Dependency Parsing. We investigate active learning methods for Japanese dependency parsing. We propose active learning methods of using partial dependency relations in a given sentence for parsing and evaluate their effectiveness empirically. Furthermore, we utilize syntactic constraints of Japanese to obtain more labeled examples from precious labeled ones that annotators give. Experimental results show that our proposed methods improve considerably the learning curve of Japanese dependency parsing. In order to achieve an accuracy of over 88.3%, one of our methods requires only 34.4% of labeled examples as compared to passive learning.",Using Smaller Constituents Rather Than Sentences in Active Learning for {J}apanese Dependency Parsing,"We investigate active learning methods for Japanese dependency parsing. We propose active learning methods of using partial dependency relations in a given sentence for parsing and evaluate their effectiveness empirically. Furthermore, we utilize syntactic constraints of Japanese to obtain more labeled examples from precious labeled ones that annotators give. Experimental results show that our proposed methods improve considerably the learning curve of Japanese dependency parsing. In order to achieve an accuracy of over 88.3%, one of our methods requires only 34.4% of labeled examples as compared to passive learning.",Using Smaller Constituents Rather Than Sentences in Active Learning for Japanese Dependency Parsing,"We investigate active learning methods for Japanese dependency parsing. We propose active learning methods of using partial dependency relations in a given sentence for parsing and evaluate their effectiveness empirically. Furthermore, we utilize syntactic constraints of Japanese to obtain more labeled examples from precious labeled ones that annotators give. Experimental results show that our proposed methods improve considerably the learning curve of Japanese dependency parsing. In order to achieve an accuracy of over 88.3%, one of our methods requires only 34.4% of labeled examples as compared to passive learning.",We would like to thank the anonymous reviewers and Tomohide Shibata for their valuable comments.,"Using Smaller Constituents Rather Than Sentences in Active Learning for Japanese Dependency Parsing. We investigate active learning methods for Japanese dependency parsing. We propose active learning methods of using partial dependency relations in a given sentence for parsing and evaluate their effectiveness empirically. Furthermore, we utilize syntactic constraints of Japanese to obtain more labeled examples from precious labeled ones that annotators give. Experimental results show that our proposed methods improve considerably the learning curve of Japanese dependency parsing. In order to achieve an accuracy of over 88.3%, one of our methods requires only 34.4% of labeled examples as compared to passive learning.",2010
shimorina-belz-2022-human,https://aclanthology.org/2022.humeval-1.6,0,,,,,,,"The Human Evaluation Datasheet: A Template for Recording Details of Human Evaluation Experiments in NLP. This paper presents the Human Evaluation Datasheet (HEDS), a template for recording the details of individual human evaluation experiments in Natural Language Processing (NLP), and reports on first experience of researchers using HEDS sheets in practice. Originally taking inspiration from seminal papers by Bender and Friedman (2018), Mitchell et al. (2019), and Gebru et al. (2020), HEDS facilitates the recording of properties of human evaluations in sufficient detail, and with sufficient standardisation, to support comparability, meta-evaluation, and reproducibility assessments for human evaluations. These are crucial for scientifically principled evaluation, but the overhead of completing a detailed datasheet is substantial, and we discuss possible ways of addressing this and other issues observed in practice.",The Human Evaluation Datasheet: A Template for Recording Details of Human Evaluation Experiments in {NLP},"This paper presents the Human Evaluation Datasheet (HEDS), a template for recording the details of individual human evaluation experiments in Natural Language Processing (NLP), and reports on first experience of researchers using HEDS sheets in practice. Originally taking inspiration from seminal papers by Bender and Friedman (2018), Mitchell et al. (2019), and Gebru et al. (2020), HEDS facilitates the recording of properties of human evaluations in sufficient detail, and with sufficient standardisation, to support comparability, meta-evaluation, and reproducibility assessments for human evaluations. These are crucial for scientifically principled evaluation, but the overhead of completing a detailed datasheet is substantial, and we discuss possible ways of addressing this and other issues observed in practice.",The Human Evaluation Datasheet: A Template for Recording Details of Human Evaluation Experiments in NLP,"This paper presents the Human Evaluation Datasheet (HEDS), a template for recording the details of individual human evaluation experiments in Natural Language Processing (NLP), and reports on first experience of researchers using HEDS sheets in practice. Originally taking inspiration from seminal papers by Bender and Friedman (2018), Mitchell et al. (2019), and Gebru et al. (2020), HEDS facilitates the recording of properties of human evaluations in sufficient detail, and with sufficient standardisation, to support comparability, meta-evaluation, and reproducibility assessments for human evaluations. These are crucial for scientifically principled evaluation, but the overhead of completing a detailed datasheet is substantial, and we discuss possible ways of addressing this and other issues observed in practice.",,"The Human Evaluation Datasheet: A Template for Recording Details of Human Evaluation Experiments in NLP. This paper presents the Human Evaluation Datasheet (HEDS), a template for recording the details of individual human evaluation experiments in Natural Language Processing (NLP), and reports on first experience of researchers using HEDS sheets in practice. Originally taking inspiration from seminal papers by Bender and Friedman (2018), Mitchell et al. (2019), and Gebru et al. 
(2020), HEDS facilitates the recording of properties of human evaluations in sufficient detail, and with sufficient standardisation, to support comparability, meta-evaluation, and reproducibility assessments for human evaluations. These are crucial for scientifically principled evaluation, but the overhead of completing a detailed datasheet is substantial, and we discuss possible ways of addressing this and other issues observed in practice.",2022
hayes-2004-publisher,https://aclanthology.org/W04-3109,1,,,,industry_innovation_infrastructure,,,"Publisher Perspective on Broad Full-text Literature Access for Text Mining in Academic and Corporate Endeavors. There is a great deal of interest in obtaining access to the vast stores of full-text literature held by the various publishers. The need to balance a reduction of the restrictions on access with the protection of the revenue streams of the publishers is critical. Without the publishers, the content would not be available and support for several scientific societies would also disappear. On the other hand, the value of the literature holdings, while it appears to be quite high, is not currently adequately exploited. Text mining and more effective information retrieval is necessary to take full advantage of the information captured by the millions of electronic journal articles currently available.",Publisher Perspective on Broad Full-text Literature Access for Text Mining in Academic and Corporate Endeavors,"There is a great deal of interest in obtaining access to the vast stores of full-text literature held by the various publishers. The need to balance a reduction of the restrictions on access with the protection of the revenue streams of the publishers is critical. Without the publishers, the content would not be available and support for several scientific societies would also disappear. On the other hand, the value of the literature holdings, while it appears to be quite high, is not currently adequately exploited. Text mining and more effective information retrieval is necessary to take full advantage of the information captured by the millions of electronic journal articles currently available.",Publisher Perspective on Broad Full-text Literature Access for Text Mining in Academic and Corporate Endeavors,"There is a great deal of interest in obtaining access to the vast stores of full-text literature held by the various publishers. The need to balance a reduction of the restrictions on access with the protection of the revenue streams of the publishers is critical. Without the publishers, the content would not be available and support for several scientific societies would also disappear. On the other hand, the value of the literature holdings, while it appears to be quite high, is not currently adequately exploited. Text mining and more effective information retrieval is necessary to take full advantage of the information captured by the millions of electronic journal articles currently available.",,"Publisher Perspective on Broad Full-text Literature Access for Text Mining in Academic and Corporate Endeavors. There is a great deal of interest in obtaining access to the vast stores of full-text literature held by the various publishers. The need to balance a reduction of the restrictions on access with the protection of the revenue streams of the publishers is critical. Without the publishers, the content would not be available and support for several scientific societies would also disappear. On the other hand, the value of the literature holdings, while it appears to be quite high, is not currently adequately exploited. Text mining and more effective information retrieval is necessary to take full advantage of the information captured by the millions of electronic journal articles currently available.",2004
schloder-fernandez-2014-role,https://aclanthology.org/W14-4321,0,,,,,,,The Role of Polarity in Inferring Acceptance and Rejection in Dialogue. We study the role that logical polarity plays in determining the rejection or acceptance function of an utterance in dialogue. We develop a model inspired by recent work on the semantics of negation and polarity particles and test it on annotated data from two spoken dialogue corpora: the Switchboard Corpus and the AMI Meeting Corpus. Our experiments show that taking into account the relative polarity of a proposal under discussion and of its response greatly helps to distinguish rejections from acceptances in both corpora.,The Role of Polarity in Inferring Acceptance and Rejection in Dialogue,We study the role that logical polarity plays in determining the rejection or acceptance function of an utterance in dialogue. We develop a model inspired by recent work on the semantics of negation and polarity particles and test it on annotated data from two spoken dialogue corpora: the Switchboard Corpus and the AMI Meeting Corpus. Our experiments show that taking into account the relative polarity of a proposal under discussion and of its response greatly helps to distinguish rejections from acceptances in both corpora.,The Role of Polarity in Inferring Acceptance and Rejection in Dialogue,We study the role that logical polarity plays in determining the rejection or acceptance function of an utterance in dialogue. We develop a model inspired by recent work on the semantics of negation and polarity particles and test it on annotated data from two spoken dialogue corpora: the Switchboard Corpus and the AMI Meeting Corpus. Our experiments show that taking into account the relative polarity of a proposal under discussion and of its response greatly helps to distinguish rejections from acceptances in both corpora.,,The Role of Polarity in Inferring Acceptance and Rejection in Dialogue. We study the role that logical polarity plays in determining the rejection or acceptance function of an utterance in dialogue. We develop a model inspired by recent work on the semantics of negation and polarity particles and test it on annotated data from two spoken dialogue corpora: the Switchboard Corpus and the AMI Meeting Corpus. Our experiments show that taking into account the relative polarity of a proposal under discussion and of its response greatly helps to distinguish rejections from acceptances in both corpora.,2014
liu-etal-2007-forest,https://aclanthology.org/P07-1089,0,,,,,,,"Forest-to-String Statistical Translation Rules. In this paper, we propose forest-to-string rules to enhance the expressive power of tree-to-string translation models. A forest-to-string rule is capable of capturing non-syntactic phrase pairs by describing the correspondence between multiple parse trees and one string. To integrate these rules into tree-to-string translation models, auxiliary rules are introduced to provide a generalization level. Experimental results show that, on the NIST 2005 Chinese-English test set, the tree-to-string model augmented with forest-to-string rules achieves a relative improvement of 4.3% in terms of BLEU score over the original model which allows tree-to-string rules only.",Forest-to-String Statistical Translation Rules,"In this paper, we propose forest-to-string rules to enhance the expressive power of tree-to-string translation models. A forest-to-string rule is capable of capturing non-syntactic phrase pairs by describing the correspondence between multiple parse trees and one string. To integrate these rules into tree-to-string translation models, auxiliary rules are introduced to provide a generalization level. Experimental results show that, on the NIST 2005 Chinese-English test set, the tree-to-string model augmented with forest-to-string rules achieves a relative improvement of 4.3% in terms of BLEU score over the original model which allows tree-to-string rules only.",Forest-to-String Statistical Translation Rules,"In this paper, we propose forest-to-string rules to enhance the expressive power of tree-to-string translation models. A forest-to-string rule is capable of capturing non-syntactic phrase pairs by describing the correspondence between multiple parse trees and one string. To integrate these rules into tree-to-string translation models, auxiliary rules are introduced to provide a generalization level. Experimental results show that, on the NIST 2005 Chinese-English test set, the tree-to-string model augmented with forest-to-string rules achieves a relative improvement of 4.3% in terms of BLEU score over the original model which allows tree-to-string rules only.","This work was supported by National Natural Science Foundation of China, Contract No. 60603095 and 60573188.","Forest-to-String Statistical Translation Rules. In this paper, we propose forest-to-string rules to enhance the expressive power of tree-to-string translation models. A forest-to-string rule is capable of capturing non-syntactic phrase pairs by describing the correspondence between multiple parse trees and one string. To integrate these rules into tree-to-string translation models, auxiliary rules are introduced to provide a generalization level. Experimental results show that, on the NIST 2005 Chinese-English test set, the tree-to-string model augmented with forest-to-string rules achieves a relative improvement of 4.3% in terms of BLEU score over the original model which allows tree-to-string rules only.",2007
junczys-dowmunt-grundkiewicz-2014-amu,https://aclanthology.org/W14-1703,0,,,,,,,"The AMU System in the CoNLL-2014 Shared Task: Grammatical Error Correction by Data-Intensive and Feature-Rich Statistical Machine Translation. Statistical machine translation toolkits like Moses have not been designed with grammatical error correction in mind. In order to achieve competitive results in this area, it is not enough to simply add more data. Optimization procedures need to be customized, task-specific features should be introduced. Only then can the decoder take advantage of relevant data. We demonstrate the validity of the above claims by combining web-scale language models and large-scale error-corrected texts with parameter tuning according to the task metric and correction-specific features. Our system achieves a result of 35.0% F 0.5 on the blind CoNLL-2014 test set, ranking on third place. A similar system, equipped with identical models but without tuned parameters and specialized features, stagnates at 25.4%.",The {AMU} System in the {C}o{NLL}-2014 Shared Task: Grammatical Error Correction by Data-Intensive and Feature-Rich Statistical Machine Translation,"Statistical machine translation toolkits like Moses have not been designed with grammatical error correction in mind. In order to achieve competitive results in this area, it is not enough to simply add more data. Optimization procedures need to be customized, task-specific features should be introduced. Only then can the decoder take advantage of relevant data. We demonstrate the validity of the above claims by combining web-scale language models and large-scale error-corrected texts with parameter tuning according to the task metric and correction-specific features. Our system achieves a result of 35.0% F 0.5 on the blind CoNLL-2014 test set, ranking on third place. A similar system, equipped with identical models but without tuned parameters and specialized features, stagnates at 25.4%.",The AMU System in the CoNLL-2014 Shared Task: Grammatical Error Correction by Data-Intensive and Feature-Rich Statistical Machine Translation,"Statistical machine translation toolkits like Moses have not been designed with grammatical error correction in mind. In order to achieve competitive results in this area, it is not enough to simply add more data. Optimization procedures need to be customized, task-specific features should be introduced. Only then can the decoder take advantage of relevant data. We demonstrate the validity of the above claims by combining web-scale language models and large-scale error-corrected texts with parameter tuning according to the task metric and correction-specific features. Our system achieves a result of 35.0% F 0.5 on the blind CoNLL-2014 test set, ranking on third place. A similar system, equipped with identical models but without tuned parameters and specialized features, stagnates at 25.4%.",,"The AMU System in the CoNLL-2014 Shared Task: Grammatical Error Correction by Data-Intensive and Feature-Rich Statistical Machine Translation. Statistical machine translation toolkits like Moses have not been designed with grammatical error correction in mind. In order to achieve competitive results in this area, it is not enough to simply add more data. Optimization procedures need to be customized, task-specific features should be introduced. Only then can the decoder take advantage of relevant data. 
We demonstrate the validity of the above claims by combining web-scale language models and large-scale error-corrected texts with parameter tuning according to the task metric and correction-specific features. Our system achieves a result of 35.0% F 0.5 on the blind CoNLL-2014 test set, ranking on third place. A similar system, equipped with identical models but without tuned parameters and specialized features, stagnates at 25.4%.",2014
wu-etal-2011-answering,https://aclanthology.org/I11-1107,0,,,,,,,"Answering Complex Questions via Exploiting Social Q\&A Collection. This paper regards social Q&A collections, such as Yahoo! Answer as a knowledge repository and investigates techniques to mine knowledge from them for improving a sentence-based complex question answering (QA) system. In particular, we present a question-type-specific method (QTSM) that studies at extracting question-type-dependent cue expressions from the social Q&A pairs in which question types are the same as the submitted question. The QTSM is also compared with question-specific and monolingual translation-based methods presented in previous work. Thereinto, the question-specific method (QSM) aims at extracting question-dependent answer words from social Q&A pairs in which questions are similar to the submitted question. The monolingual translation-based method (MTM) learns word-to-word translation probabilities from all social Q&A pairs without consideration of question and question type. Experiments on extension of the NTCIR 2008 Chinese test data set verify the performance ranking of these methods as: QTSM > QSM, MTM. The largest F 3 improvements of the proposed QTSM over the QSM and MTM reach 6.0% and 5.8%, respectively.",Answering Complex Questions via Exploiting Social {Q}{\&}{A} Collection,"This paper regards social Q&A collections, such as Yahoo! Answer as a knowledge repository and investigates techniques to mine knowledge from them for improving a sentence-based complex question answering (QA) system. In particular, we present a question-type-specific method (QTSM) that studies at extracting question-type-dependent cue expressions from the social Q&A pairs in which question types are the same as the submitted question. The QTSM is also compared with question-specific and monolingual translation-based methods presented in previous work. Thereinto, the question-specific method (QSM) aims at extracting question-dependent answer words from social Q&A pairs in which questions are similar to the submitted question. The monolingual translation-based method (MTM) learns word-to-word translation probabilities from all social Q&A pairs without consideration of question and question type. Experiments on extension of the NTCIR 2008 Chinese test data set verify the performance ranking of these methods as: QTSM > {QSM, MTM}. The largest F 3 improvements of the proposed QTSM over the QSM and MTM reach 6.0% and 5.8%, respectively.",Answering Complex Questions via Exploiting Social Q\&A Collection,"This paper regards social Q&A collections, such as Yahoo! Answer as a knowledge repository and investigates techniques to mine knowledge from them for improving a sentence-based complex question answering (QA) system. In particular, we present a question-type-specific method (QTSM) that studies at extracting question-type-dependent cue expressions from the social Q&A pairs in which question types are the same as the submitted question. The QTSM is also compared with question-specific and monolingual translation-based methods presented in previous work. Thereinto, the question-specific method (QSM) aims at extracting question-dependent answer words from social Q&A pairs in which questions are similar to the submitted question. The monolingual translation-based method (MTM) learns word-to-word translation probabilities from all social Q&A pairs without consideration of question and question type. 
Experiments on extension of the NTCIR 2008 Chinese test data set verify the performance ranking of these methods as: QTSM > QSM, MTM. The largest F 3 improvements of the proposed QTSM over the QSM and MTM reach 6.0% and 5.8%, respectively.",,"Answering Complex Questions via Exploiting Social Q\&A Collection. This paper regards social Q&A collections, such as Yahoo! Answer as a knowledge repository and investigates techniques to mine knowledge from them for improving a sentence-based complex question answering (QA) system. In particular, we present a question-type-specific method (QTSM) that studies at extracting question-type-dependent cue expressions from the social Q&A pairs in which question types are the same as the submitted question. The QTSM is also compared with question-specific and monolingual translation-based methods presented in previous work. Thereinto, the question-specific method (QSM) aims at extracting question-dependent answer words from social Q&A pairs in which questions are similar to the submitted question. The monolingual translation-based method (MTM) learns word-to-word translation probabilities from all social Q&A pairs without consideration of question and question type. Experiments on extension of the NTCIR 2008 Chinese test data set verify the performance ranking of these methods as: QTSM > QSM, MTM. The largest F 3 improvements of the proposed QTSM over the QSM and MTM reach 6.0% and 5.8%, respectively.",2011
swanson-yamangil-2012-correction,https://aclanthology.org/N12-1037,1,,,,education,,,"Correction Detection and Error Type Selection as an ESL Educational Aid. We present a classifier that discriminates between types of corrections made by teachers of English in student essays. We define a set of linguistically motivated feature templates for a log-linear classification model, train this classifier on sentence pairs extracted from the Cambridge Learner Corpus, and achieve 89% accuracy improving upon a 33% baseline. Furthermore, we incorporate our classifier into a novel application that takes as input a set of corrected essays that have been sentence aligned with their originals and outputs the individual corrections classified by error type. We report the F-Score of our implementation on this task.",Correction Detection and Error Type Selection as an {ESL} Educational Aid,"We present a classifier that discriminates between types of corrections made by teachers of English in student essays. We define a set of linguistically motivated feature templates for a log-linear classification model, train this classifier on sentence pairs extracted from the Cambridge Learner Corpus, and achieve 89% accuracy improving upon a 33% baseline. Furthermore, we incorporate our classifier into a novel application that takes as input a set of corrected essays that have been sentence aligned with their originals and outputs the individual corrections classified by error type. We report the F-Score of our implementation on this task.",Correction Detection and Error Type Selection as an ESL Educational Aid,"We present a classifier that discriminates between types of corrections made by teachers of English in student essays. We define a set of linguistically motivated feature templates for a log-linear classification model, train this classifier on sentence pairs extracted from the Cambridge Learner Corpus, and achieve 89% accuracy improving upon a 33% baseline. Furthermore, we incorporate our classifier into a novel application that takes as input a set of corrected essays that have been sentence aligned with their originals and outputs the individual corrections classified by error type. We report the F-Score of our implementation on this task.",,"Correction Detection and Error Type Selection as an ESL Educational Aid. We present a classifier that discriminates between types of corrections made by teachers of English in student essays. We define a set of linguistically motivated feature templates for a log-linear classification model, train this classifier on sentence pairs extracted from the Cambridge Learner Corpus, and achieve 89% accuracy improving upon a 33% baseline. Furthermore, we incorporate our classifier into a novel application that takes as input a set of corrected essays that have been sentence aligned with their originals and outputs the individual corrections classified by error type. We report the F-Score of our implementation on this task.",2012
federico-etal-2011-overview,https://aclanthology.org/2011.iwslt-evaluation.1,0,,,,,,,"Overview of the IWSLT 2011 evaluation campaign. We report here on the eighth Evaluation Campaign organized by the IWSLT workshop. This year, the IWSLT evaluation focused on the automatic translation of public talks and included tracks for speech recognition, speech translation, text translation, and system combination. Unlike previous years, all data supplied for the evaluation has been publicly released on the workshop website, and is at the disposal of researchers interested in working on our benchmarks and in comparing their results with those published at the workshop. This paper provides an overview of the IWSLT 2011 Evaluation Campaign, which includes: descriptions of the supplied data and evaluation specifications of each track, the list of participants specifying their submitted runs, a detailed description of the subjective evaluation carried out, the main findings of each exercise drawn from the results and the system descriptions prepared by the participants, and, finally, several detailed tables reporting all the evaluation results.",Overview of the {IWSLT} 2011 evaluation campaign,"We report here on the eighth Evaluation Campaign organized by the IWSLT workshop. This year, the IWSLT evaluation focused on the automatic translation of public talks and included tracks for speech recognition, speech translation, text translation, and system combination. Unlike previous years, all data supplied for the evaluation has been publicly released on the workshop website, and is at the disposal of researchers interested in working on our benchmarks and in comparing their results with those published at the workshop. This paper provides an overview of the IWSLT 2011 Evaluation Campaign, which includes: descriptions of the supplied data and evaluation specifications of each track, the list of participants specifying their submitted runs, a detailed description of the subjective evaluation carried out, the main findings of each exercise drawn from the results and the system descriptions prepared by the participants, and, finally, several detailed tables reporting all the evaluation results.",Overview of the IWSLT 2011 evaluation campaign,"We report here on the eighth Evaluation Campaign organized by the IWSLT workshop. This year, the IWSLT evaluation focused on the automatic translation of public talks and included tracks for speech recognition, speech translation, text translation, and system combination. Unlike previous years, all data supplied for the evaluation has been publicly released on the workshop website, and is at the disposal of researchers interested in working on our benchmarks and in comparing their results with those published at the workshop. This paper provides an overview of the IWSLT 2011 Evaluation Campaign, which includes: descriptions of the supplied data and evaluation specifications of each track, the list of participants specifying their submitted runs, a detailed description of the subjective evaluation carried out, the main findings of each exercise drawn from the results and the system descriptions prepared by the participants, and, finally, several detailed tables reporting all the evaluation results.",,"Overview of the IWSLT 2011 evaluation campaign. We report here on the eighth Evaluation Campaign organized by the IWSLT workshop. 
This year, the IWSLT evaluation focused on the automatic translation of public talks and included tracks for speech recognition, speech translation, text translation, and system combination. Unlike previous years, all data supplied for the evaluation has been publicly released on the workshop website, and is at the disposal of researchers interested in working on our benchmarks and in comparing their results with those published at the workshop. This paper provides an overview of the IWSLT 2011 Evaluation Campaign, which includes: descriptions of the supplied data and evaluation specifications of each track, the list of participants specifying their submitted runs, a detailed description of the subjective evaluation carried out, the main findings of each exercise drawn from the results and the system descriptions prepared by the participants, and, finally, several detailed tables reporting all the evaluation results.",2011
preotiuc-pietro-ungar-2018-user,https://aclanthology.org/C18-1130,0,,,,,,,"User-Level Race and Ethnicity Predictors from Twitter Text. User demographic inference from social media text has the potential to improve a range of downstream applications, including real-time passive polling or quantifying demographic bias. This study focuses on developing models for user-level race and ethnicity prediction. We introduce a data set of users who self-report their race/ethnicity through a survey, in contrast to previous approaches that use distantly supervised data or perceived labels. We develop predictive models from text which accurately predict the membership of a user to the four largest racial and ethnic groups with up to .884 AUC and make these available to the research community.",User-Level Race and Ethnicity Predictors from {T}witter Text,"User demographic inference from social media text has the potential to improve a range of downstream applications, including real-time passive polling or quantifying demographic bias. This study focuses on developing models for user-level race and ethnicity prediction. We introduce a data set of users who self-report their race/ethnicity through a survey, in contrast to previous approaches that use distantly supervised data or perceived labels. We develop predictive models from text which accurately predict the membership of a user to the four largest racial and ethnic groups with up to .884 AUC and make these available to the research community.",User-Level Race and Ethnicity Predictors from Twitter Text,"User demographic inference from social media text has the potential to improve a range of downstream applications, including real-time passive polling or quantifying demographic bias. This study focuses on developing models for user-level race and ethnicity prediction. We introduce a data set of users who self-report their race/ethnicity through a survey, in contrast to previous approaches that use distantly supervised data or perceived labels. We develop predictive models from text which accurately predict the membership of a user to the four largest racial and ethnic groups with up to .884 AUC and make these available to the research community.","The authors acknowledge the support of the Templeton Religion Trust, grant TRT-0048.","User-Level Race and Ethnicity Predictors from Twitter Text. User demographic inference from social media text has the potential to improve a range of downstream applications, including real-time passive polling or quantifying demographic bias. This study focuses on developing models for user-level race and ethnicity prediction. We introduce a data set of users who self-report their race/ethnicity through a survey, in contrast to previous approaches that use distantly supervised data or perceived labels. We develop predictive models from text which accurately predict the membership of a user to the four largest racial and ethnic groups with up to .884 AUC and make these available to the research community.",2018
li-etal-2014-annotating,http://www.lrec-conf.org/proceedings/lrec2014/pdf/250_Paper.pdf,0,,,,,,,"Annotating Relation Mentions in Tabloid Press. This paper presents a new resource for the training and evaluation needed by relation extraction experiments. The corpus consists of annotations of mentions for three semantic relations: marriage, parent-child, siblings, selected from the domain of biographic facts about persons and their social relationships. The corpus contains more than one hundred news articles from Tabloid Press. In the current corpus, we only consider the relation mentions occurring in the individual sentences. We provide multi-level annotations which specify the marked facts from relation, argument, entity, down to the token level, thus allowing for detailed analysis of linguistic phenomena and their interactions. A generic markup tool Recon developed at the DFKI LT lab has been utilised for the annotation task. The corpus has been annotated by two human experts, supported by additional conflict resolution conducted by a third expert. As shown in the evaluation, the annotation is of high quality as proved by the stated inter-annotator agreements both on sentence level and on relation-mention level. The current corpus is already in active use in our research for evaluation of the relation extraction performance of our automatically learned extraction patterns.",Annotating Relation Mentions in Tabloid Press,"This paper presents a new resource for the training and evaluation needed by relation extraction experiments. The corpus consists of annotations of mentions for three semantic relations: marriage, parent-child, siblings, selected from the domain of biographic facts about persons and their social relationships. The corpus contains more than one hundred news articles from Tabloid Press. In the current corpus, we only consider the relation mentions occurring in the individual sentences. We provide multi-level annotations which specify the marked facts from relation, argument, entity, down to the token level, thus allowing for detailed analysis of linguistic phenomena and their interactions. A generic markup tool Recon developed at the DFKI LT lab has been utilised for the annotation task. The corpus has been annotated by two human experts, supported by additional conflict resolution conducted by a third expert. As shown in the evaluation, the annotation is of high quality as proved by the stated inter-annotator agreements both on sentence level and on relation-mention level. The current corpus is already in active use in our research for evaluation of the relation extraction performance of our automatically learned extraction patterns.",Annotating Relation Mentions in Tabloid Press,"This paper presents a new resource for the training and evaluation needed by relation extraction experiments. The corpus consists of annotations of mentions for three semantic relations: marriage, parent-child, siblings, selected from the domain of biographic facts about persons and their social relationships. The corpus contains more than one hundred news articles from Tabloid Press. In the current corpus, we only consider the relation mentions occurring in the individual sentences. We provide multi-level annotations which specify the marked facts from relation, argument, entity, down to the token level, thus allowing for detailed analysis of linguistic phenomena and their interactions. A generic markup tool Recon developed at the DFKI LT lab has been utilised for the annotation task. 
The corpus has been annotated by two human experts, supported by additional conflict resolution conducted by a third expert. As shown in the evaluation, the annotation is of high quality as proved by the stated inter-annotator agreements both on sentence level and on relation-mention level. The current corpus is already in active use in our research for evaluation of the relation extraction performance of our automatically learned extraction patterns.",This research was partially supported by the German Federal Ministry of Education and Research (BMBF) through the project Deependance (contract 01IW11003) and by Google through a Focused Research Award for the project LUcKY granted in July 2013.,"Annotating Relation Mentions in Tabloid Press. This paper presents a new resource for the training and evaluation needed by relation extraction experiments. The corpus consists of annotations of mentions for three semantic relations: marriage, parent-child, siblings, selected from the domain of biographic facts about persons and their social relationships. The corpus contains more than one hundred news articles from Tabloid Press. In the current corpus, we only consider the relation mentions occurring in the individual sentences. We provide multi-level annotations which specify the marked facts from relation, argument, entity, down to the token level, thus allowing for detailed analysis of linguistic phenomena and their interactions. A generic markup tool Recon developed at the DFKI LT lab has been utilised for the annotation task. The corpus has been annotated by two human experts, supported by additional conflict resolution conducted by a third expert. As shown in the evaluation, the annotation is of high quality as proved by the stated inter-annotator agreements both on sentence level and on relation-mention level. The current corpus is already in active use in our research for evaluation of the relation extraction performance of our automatically learned extraction patterns.",2014
resnik-etal-2013-using,https://aclanthology.org/D13-1133,1,,,,health,education,,"Using Topic Modeling to Improve Prediction of Neuroticism and Depression in College Students. We investigate the value-add of topic modeling in text analysis for depression, and for neuroticism as a strongly associated personality measure. Using Pennebaker's Linguistic Inquiry and Word Count (LIWC) lexicon to provide baseline features, we show that straightforward topic modeling using Latent Dirichlet Allocation (LDA) yields interpretable, psychologically relevant ""themes"" that add value in prediction of clinical assessments.",Using Topic Modeling to Improve Prediction of Neuroticism and Depression in College Students,"We investigate the value-add of topic modeling in text analysis for depression, and for neuroticism as a strongly associated personality measure. Using Pennebaker's Linguistic Inquiry and Word Count (LIWC) lexicon to provide baseline features, we show that straightforward topic modeling using Latent Dirichlet Allocation (LDA) yields interpretable, psychologically relevant ""themes"" that add value in prediction of clinical assessments.",Using Topic Modeling to Improve Prediction of Neuroticism and Depression in College Students,"We investigate the value-add of topic modeling in text analysis for depression, and for neuroticism as a strongly associated personality measure. Using Pennebaker's Linguistic Inquiry and Word Count (LIWC) lexicon to provide baseline features, we show that straightforward topic modeling using Latent Dirichlet Allocation (LDA) yields interpretable, psychologically relevant ""themes"" that add value in prediction of clinical assessments.","We are grateful to Jamie Pennebaker for the LIWC lexicon and for allowing us to use data from Pennebaker and King (1999) and Rude et al. (2004), to the three psychologists who kindly took the time to provide human ratings, and to our reviewers for helpful comments. This work has been supported in part by NSF grant IIS-1211153.","Using Topic Modeling to Improve Prediction of Neuroticism and Depression in College Students. We investigate the value-add of topic modeling in text analysis for depression, and for neuroticism as a strongly associated personality measure. Using Pennebaker's Linguistic Inquiry and Word Count (LIWC) lexicon to provide baseline features, we show that straightforward topic modeling using Latent Dirichlet Allocation (LDA) yields interpretable, psychologically relevant ""themes"" that add value in prediction of clinical assessments.",2013
sowa-1979-semantics,https://aclanthology.org/P79-1010,0,,,,,,,"Semantics of Conceptual Graphs. Conceptual graphs are both a language for representing knowledge and patterns for constructing models. They form models in the AI sense of structures that approximate some actual or possible system in the real world. They also form models in the logical sense of structures for which some set of axioms are true. When combined with recent developments in nonstandard logic and semantics, conceptual graphs can form a bridge between heuristic techniques of AI and formal techniques of model theory.",Semantics of Conceptual Graphs,"Conceptual graphs are both a language for representing knowledge and patterns for constructing models. They form models in the AI sense of structures that approximate some actual or possible system in the real world. They also form models in the logical sense of structures for which some set of axioms are true. When combined with recent developments in nonstandard logic and semantics, conceptual graphs can form a bridge between heuristic techniques of AI and formal techniques of model theory.",Semantics of Conceptual Graphs,"Conceptual graphs are both a language for representing knowledge and patterns for constructing models. They form models in the AI sense of structures that approximate some actual or possible system in the real world. They also form models in the logical sense of structures for which some set of axioms are true. When combined with recent developments in nonstandard logic and semantics, conceptual graphs can form a bridge between heuristic techniques of AI and formal techniques of model theory.",,"Semantics of Conceptual Graphs. Conceptual graphs are both a language for representing knowledge and patterns for constructing models. They form models in the AI sense of structures that approximate some actual or possible system in the real world. They also form models in the logical sense of structures for which some set of axioms are true. When combined with recent developments in nonstandard logic and semantics, conceptual graphs can form a bridge between heuristic techniques of AI and formal techniques of model theory.",1979
zelasko-2018-expanding,https://aclanthology.org/L18-1295,0,,,,,,,"Expanding Abbreviations in a Strongly Inflected Language: Are Morphosyntactic Tags Sufficient?. In this paper, the problem of recovery of morphological information lost in abbreviated forms is addressed with a focus on highly inflected languages. Evidence is presented that the correct inflected form of an expanded abbreviation can in many cases be deduced solely from the morphosyntactic tags of the context. The prediction model is a deep bidirectional LSTM network with tag embedding. The training and evaluation data are gathered by finding the words which could have been abbreviated and using their corresponding morphosyntactic tags as the labels, while the tags of the context words are used as the input features for classification. The network is trained on over 10 million words from the Polish Sejm Corpus and achieves 74.2% prediction accuracy on a smaller, but more general National Corpus of Polish. The analysis of errors suggests that performance in this task may improve if some prior knowledge about the abbreviated word is incorporated into the model.",Expanding Abbreviations in a Strongly Inflected Language: Are Morphosyntactic Tags Sufficient?,"In this paper, the problem of recovery of morphological information lost in abbreviated forms is addressed with a focus on highly inflected languages. Evidence is presented that the correct inflected form of an expanded abbreviation can in many cases be deduced solely from the morphosyntactic tags of the context. The prediction model is a deep bidirectional LSTM network with tag embedding. The training and evaluation data are gathered by finding the words which could have been abbreviated and using their corresponding morphosyntactic tags as the labels, while the tags of the context words are used as the input features for classification. The network is trained on over 10 million words from the Polish Sejm Corpus and achieves 74.2% prediction accuracy on a smaller, but more general National Corpus of Polish. The analysis of errors suggests that performance in this task may improve if some prior knowledge about the abbreviated word is incorporated into the model.",Expanding Abbreviations in a Strongly Inflected Language: Are Morphosyntactic Tags Sufficient?,"In this paper, the problem of recovery of morphological information lost in abbreviated forms is addressed with a focus on highly inflected languages. Evidence is presented that the correct inflected form of an expanded abbreviation can in many cases be deduced solely from the morphosyntactic tags of the context. The prediction model is a deep bidirectional LSTM network with tag embedding. The training and evaluation data are gathered by finding the words which could have been abbreviated and using their corresponding morphosyntactic tags as the labels, while the tags of the context words are used as the input features for classification. The network is trained on over 10 million words from the Polish Sejm Corpus and achieves 74.2% prediction accuracy on a smaller, but more general National Corpus of Polish. The analysis of errors suggests that performance in this task may improve if some prior knowledge about the abbreviated word is incorporated into the model.",,"Expanding Abbreviations in a Strongly Inflected Language: Are Morphosyntactic Tags Sufficient?. In this paper, the problem of recovery of morphological information lost in abbreviated forms is addressed with a focus on highly inflected languages. 
Evidence is presented that the correct inflected form of an expanded abbreviation can in many cases be deduced solely from the morphosyntactic tags of the context. The prediction model is a deep bidirectional LSTM network with tag embedding. The training and evaluation data are gathered by finding the words which could have been abbreviated and using their corresponding morphosyntactic tags as the labels, while the tags of the context words are used as the input features for classification. The network is trained on over 10 million words from the Polish Sejm Corpus and achieves 74.2% prediction accuracy on a smaller, but more general National Corpus of Polish. The analysis of errors suggests that performance in this task may improve if some prior knowledge about the abbreviated word is incorporated into the model.",2018
mihalcea-moldovan-1999-method,https://aclanthology.org/P99-1020,0,,,,,,,"A Method for Word Sense Disambiguation of Unrestricted Text. Selecting the most appropriate sense for an ambiguous word in a sentence is a central problem in Natural Language Processing. In this paper, we present a method that attempts to disambiguate all the nouns, verbs, adverbs and adjectives in a text, using the senses provided in WordNet. The senses are ranked using two sources of information: (1) the Internet for gathering statistics for word-word cooccurrences and (2)WordNet for measuring the semantic density for a pair of words. We report an average accuracy of 80% for the first ranked sense, and 91% for the first two ranked senses. Extensions of this method for larger windows of more than two words are considered.",A Method for Word Sense Disambiguation of Unrestricted Text,"Selecting the most appropriate sense for an ambiguous word in a sentence is a central problem in Natural Language Processing. In this paper, we present a method that attempts to disambiguate all the nouns, verbs, adverbs and adjectives in a text, using the senses provided in WordNet. The senses are ranked using two sources of information: (1) the Internet for gathering statistics for word-word cooccurrences and (2)WordNet for measuring the semantic density for a pair of words. We report an average accuracy of 80% for the first ranked sense, and 91% for the first two ranked senses. Extensions of this method for larger windows of more than two words are considered.",A Method for Word Sense Disambiguation of Unrestricted Text,"Selecting the most appropriate sense for an ambiguous word in a sentence is a central problem in Natural Language Processing. In this paper, we present a method that attempts to disambiguate all the nouns, verbs, adverbs and adjectives in a text, using the senses provided in WordNet. The senses are ranked using two sources of information: (1) the Internet for gathering statistics for word-word cooccurrences and (2)WordNet for measuring the semantic density for a pair of words. We report an average accuracy of 80% for the first ranked sense, and 91% for the first two ranked senses. Extensions of this method for larger windows of more than two words are considered.",,"A Method for Word Sense Disambiguation of Unrestricted Text. Selecting the most appropriate sense for an ambiguous word in a sentence is a central problem in Natural Language Processing. In this paper, we present a method that attempts to disambiguate all the nouns, verbs, adverbs and adjectives in a text, using the senses provided in WordNet. The senses are ranked using two sources of information: (1) the Internet for gathering statistics for word-word cooccurrences and (2)WordNet for measuring the semantic density for a pair of words. We report an average accuracy of 80% for the first ranked sense, and 91% for the first two ranked senses. Extensions of this method for larger windows of more than two words are considered.",1999
ben-abacha-zweigenbaum-2011-medical,https://aclanthology.org/W11-0207,1,,,,health,,,Medical Entity Recognition: A Comparaison of Semantic and Statistical Methods. Medical Entity Recognition is a crucial step towards efficient medical texts analysis. In this paper we present and compare three methods based on domain-knowledge and machine-learning techniques. We study two research directions through these approaches: (i) a first direction where noun phrases are extracted in a first step with a chunker before the final classification step and (ii) a second direction where machine learning techniques are used to identify simultaneously entities boundaries and categories. Each of the presented approaches is tested on a standard corpus of clinical texts. The obtained results show that the hybrid approach based on both machine learning and domain knowledge obtains the best performance.,Medical Entity Recognition: A Comparaison of Semantic and Statistical Methods,Medical Entity Recognition is a crucial step towards efficient medical texts analysis. In this paper we present and compare three methods based on domain-knowledge and machine-learning techniques. We study two research directions through these approaches: (i) a first direction where noun phrases are extracted in a first step with a chunker before the final classification step and (ii) a second direction where machine learning techniques are used to identify simultaneously entities boundaries and categories. Each of the presented approaches is tested on a standard corpus of clinical texts. The obtained results show that the hybrid approach based on both machine learning and domain knowledge obtains the best performance.,Medical Entity Recognition: A Comparaison of Semantic and Statistical Methods,Medical Entity Recognition is a crucial step towards efficient medical texts analysis. In this paper we present and compare three methods based on domain-knowledge and machine-learning techniques. We study two research directions through these approaches: (i) a first direction where noun phrases are extracted in a first step with a chunker before the final classification step and (ii) a second direction where machine learning techniques are used to identify simultaneously entities boundaries and categories. Each of the presented approaches is tested on a standard corpus of clinical texts. The obtained results show that the hybrid approach based on both machine learning and domain knowledge obtains the best performance.,This work has been partially supported by OSEO under the Quaero program.,Medical Entity Recognition: A Comparaison of Semantic and Statistical Methods. Medical Entity Recognition is a crucial step towards efficient medical texts analysis. In this paper we present and compare three methods based on domain-knowledge and machine-learning techniques. We study two research directions through these approaches: (i) a first direction where noun phrases are extracted in a first step with a chunker before the final classification step and (ii) a second direction where machine learning techniques are used to identify simultaneously entities boundaries and categories. Each of the presented approaches is tested on a standard corpus of clinical texts. The obtained results show that the hybrid approach based on both machine learning and domain knowledge obtains the best performance.,2011
vincze-almasi-2014-non,https://aclanthology.org/W14-0116,0,,,,,,,"Non-Lexicalized Concepts in Wordnets: A Case Study of English and Hungarian. Here, we investigate non-lexicalized synsets found in the Hungarian wordnet, and compare them to the English one, in the context of wordnet building principles. We propose some strategies that may be used to overcome difficulties concerning non-lexicalized synsets in wordnets constructed using the expand method. It is shown that the merge model could also have been applied to Hungarian, and with the help of the above-mentioned strategies, a wordnet based on the expand model can be transformed into a wordnet similar to that constructed with the merge model.",Non-Lexicalized Concepts in Wordnets: A Case Study of {E}nglish and {H}ungarian,"Here, we investigate non-lexicalized synsets found in the Hungarian wordnet, and compare them to the English one, in the context of wordnet building principles. We propose some strategies that may be used to overcome difficulties concerning non-lexicalized synsets in wordnets constructed using the expand method. It is shown that the merge model could also have been applied to Hungarian, and with the help of the above-mentioned strategies, a wordnet based on the expand model can be transformed into a wordnet similar to that constructed with the merge model.",Non-Lexicalized Concepts in Wordnets: A Case Study of English and Hungarian,"Here, we investigate non-lexicalized synsets found in the Hungarian wordnet, and compare them to the English one, in the context of wordnet building principles. We propose some strategies that may be used to overcome difficulties concerning non-lexicalized synsets in wordnets constructed using the expand method. It is shown that the merge model could also have been applied to Hungarian, and with the help of the above-mentioned strategies, a wordnet based on the expand model can be transformed into a wordnet similar to that constructed with the merge model.","This work was in part supported by the European Union and co-funded by the European Social Fund through the project Telemedicine-focused research activities in the fields of mathematics, informatics and medical sciences (grant no.: TÁMOP-4.2.2.A-11/1/KONV-2012-0073).","Non-Lexicalized Concepts in Wordnets: A Case Study of English and Hungarian. Here, we investigate non-lexicalized synsets found in the Hungarian wordnet, and compare them to the English one, in the context of wordnet building principles. We propose some strategies that may be used to overcome difficulties concerning non-lexicalized synsets in wordnets constructed using the expand method. It is shown that the merge model could also have been applied to Hungarian, and with the help of the above-mentioned strategies, a wordnet based on the expand model can be transformed into a wordnet similar to that constructed with the merge model.",2014
dillinger-seligman-2006-conversertm,https://aclanthology.org/W06-3706,1,,,,health,,,"ConverserTM: Highly Interactive Speech-to-Speech Translation for Healthcare. We describe a highly interactive system for bidirectional, broad-coverage spoken language communication in the healthcare area. The paper briefly reviews the system's interactive foundations, and then goes on to discuss in greater depth our Translation Shortcuts facility, which minimizes the need for interactive verification of sentences after they have been vetted. This facility also considerably speeds throughput while maintaining accuracy, and allows use by minimally literate patients for whom any mode of text entry might be difficult.",Converser{\mbox{$^\mbox{TM}$}}: Highly Interactive Speech-to-Speech Translation for Healthcare,"We describe a highly interactive system for bidirectional, broad-coverage spoken language communication in the healthcare area. The paper briefly reviews the system's interactive foundations, and then goes on to discuss in greater depth our Translation Shortcuts facility, which minimizes the need for interactive verification of sentences after they have been vetted. This facility also considerably speeds throughput while maintaining accuracy, and allows use by minimally literate patients for whom any mode of text entry might be difficult.",ConverserTM: Highly Interactive Speech-to-Speech Translation for Healthcare,"We describe a highly interactive system for bidirectional, broad-coverage spoken language communication in the healthcare area. The paper briefly reviews the system's interactive foundations, and then goes on to discuss in greater depth our Translation Shortcuts facility, which minimizes the need for interactive verification of sentences after they have been vetted. This facility also considerably speeds throughput while maintaining accuracy, and allows use by minimally literate patients for whom any mode of text entry might be difficult.",,"ConverserTM: Highly Interactive Speech-to-Speech Translation for Healthcare. We describe a highly interactive system for bidirectional, broad-coverage spoken language communication in the healthcare area. The paper briefly reviews the system's interactive foundations, and then goes on to discuss in greater depth our Translation Shortcuts facility, which minimizes the need for interactive verification of sentences after they have been vetted. This facility also considerably speeds throughput while maintaining accuracy, and allows use by minimally literate patients for whom any mode of text entry might be difficult.",2006
dadu-pant-2020-sarcasm,https://aclanthology.org/2020.figlang-1.6,0,,,,,,,"Sarcasm Detection using Context Separators in Online Discourse. Sarcasm is an intricate form of speech, where meaning is conveyed implicitly. Being a convoluted form of expression, detecting sarcasm is an assiduous problem. The difficulty in recognition of sarcasm has many pitfalls, including misunderstandings in everyday communications, which leads us to an increasing focus on automated sarcasm detection. In the second edition of the Figurative Language Processing (FigLang 2020) workshop, the shared task of sarcasm detection released two datasets, containing responses along with their context sampled from Twitter and Reddit. In this work, we use RoBERTa large to detect sarcasm in both the datasets. We further assert the importance of context in improving the performance of contextual word embedding based models by using three different types of inputs-Response-only, Context-Response, and Context-Response (Separated). We show that our proposed architecture performs competitively for both the datasets. We also show that the addition of a separation token between context and target response results in an improvement of 5.13% in the F1-score in the Reddit dataset.",Sarcasm Detection using Context Separators in Online Discourse,"Sarcasm is an intricate form of speech, where meaning is conveyed implicitly. Being a convoluted form of expression, detecting sarcasm is an assiduous problem. The difficulty in recognition of sarcasm has many pitfalls, including misunderstandings in everyday communications, which leads us to an increasing focus on automated sarcasm detection. In the second edition of the Figurative Language Processing (FigLang 2020) workshop, the shared task of sarcasm detection released two datasets, containing responses along with their context sampled from Twitter and Reddit. In this work, we use RoBERTa large to detect sarcasm in both the datasets. We further assert the importance of context in improving the performance of contextual word embedding based models by using three different types of inputs-Response-only, Context-Response, and Context-Response (Separated). We show that our proposed architecture performs competitively for both the datasets. We also show that the addition of a separation token between context and target response results in an improvement of 5.13% in the F1-score in the Reddit dataset.",Sarcasm Detection using Context Separators in Online Discourse,"Sarcasm is an intricate form of speech, where meaning is conveyed implicitly. Being a convoluted form of expression, detecting sarcasm is an assiduous problem. The difficulty in recognition of sarcasm has many pitfalls, including misunderstandings in everyday communications, which leads us to an increasing focus on automated sarcasm detection. In the second edition of the Figurative Language Processing (FigLang 2020) workshop, the shared task of sarcasm detection released two datasets, containing responses along with their context sampled from Twitter and Reddit. In this work, we use RoBERTa large to detect sarcasm in both the datasets. We further assert the importance of context in improving the performance of contextual word embedding based models by using three different types of inputs-Response-only, Context-Response, and Context-Response (Separated). We show that our proposed architecture performs competitively for both the datasets. 
We also show that the addition of a separation token between context and target response results in an improvement of 5.13% in the F1-score in the Reddit dataset.",,"Sarcasm Detection using Context Separators in Online Discourse. Sarcasm is an intricate form of speech, where meaning is conveyed implicitly. Being a convoluted form of expression, detecting sarcasm is an assiduous problem. The difficulty in recognition of sarcasm has many pitfalls, including misunderstandings in everyday communications, which leads us to an increasing focus on automated sarcasm detection. In the second edition of the Figurative Language Processing (FigLang 2020) workshop, the shared task of sarcasm detection released two datasets, containing responses along with their context sampled from Twitter and Reddit. In this work, we use RoBERTa large to detect sarcasm in both the datasets. We further assert the importance of context in improving the performance of contextual word embedding based models by using three different types of inputs-Response-only, Context-Response, and Context-Response (Separated). We show that our proposed architecture performs competitively for both the datasets. We also show that the addition of a separation token between context and target response results in an improvement of 5.13% in the F1-score in the Reddit dataset.",2020
jardine-teufel-2014-topical,https://aclanthology.org/E14-1053,0,,,,,,,"Topical PageRank: A Model of Scientific Expertise for Bibliographic Search. We model scientific expertise as a mixture of topics and authority. Authority is calculated based on the network properties of each topic network. ThemedPageRank, our combination of LDA-derived topics with PageRank, differs from previous models in that topics influence both the bias and transition probabilities of PageRank. It also incorporates the age of documents. Our model is general in that it can be applied to all tasks which require an estimate of document-document, document-query, document-topic and topic-query similarities. We present two evaluations, one on the task of restoring the reference lists of 10,000 articles, the other on the task of automatically creating reading lists that mimic reading lists created by experts. In both evaluations, our system beats state-of-the-art, as well as Google Scholar and Google Search indexed against the corpus. Our experiments also allow us to quantify the beneficial effect of our two proposed modifications to PageRank.",Topical {P}age{R}ank: A Model of Scientific Expertise for Bibliographic Search,"We model scientific expertise as a mixture of topics and authority. Authority is calculated based on the network properties of each topic network. ThemedPageRank, our combination of LDA-derived topics with PageRank, differs from previous models in that topics influence both the bias and transition probabilities of PageRank. It also incorporates the age of documents. Our model is general in that it can be applied to all tasks which require an estimate of document-document, document-query, document-topic and topic-query similarities. We present two evaluations, one on the task of restoring the reference lists of 10,000 articles, the other on the task of automatically creating reading lists that mimic reading lists created by experts. In both evaluations, our system beats state-of-the-art, as well as Google Scholar and Google Search indexed against the corpus. Our experiments also allow us to quantify the beneficial effect of our two proposed modifications to PageRank.",Topical PageRank: A Model of Scientific Expertise for Bibliographic Search,"We model scientific expertise as a mixture of topics and authority. Authority is calculated based on the network properties of each topic network. ThemedPageRank, our combination of LDA-derived topics with PageRank, differs from previous models in that topics influence both the bias and transition probabilities of PageRank. It also incorporates the age of documents. Our model is general in that it can be applied to all tasks which require an estimate of document-document, document-query, document-topic and topic-query similarities. We present two evaluations, one on the task of restoring the reference lists of 10,000 articles, the other on the task of automatically creating reading lists that mimic reading lists created by experts. In both evaluations, our system beats state-of-the-art, as well as Google Scholar and Google Search indexed against the corpus. Our experiments also allow us to quantify the beneficial effect of our two proposed modifications to PageRank.",,"Topical PageRank: A Model of Scientific Expertise for Bibliographic Search. We model scientific expertise as a mixture of topics and authority. Authority is calculated based on the network properties of each topic network. 
ThemedPageRank, our combination of LDA-derived topics with PageRank, differs from previous models in that topics influence both the bias and transition probabilities of PageRank. It also incorporates the age of documents. Our model is general in that it can be applied to all tasks which require an estimate of document-document, document-query, document-topic and topic-query similarities. We present two evaluations, one on the task of restoring the reference lists of 10,000 articles, the other on the task of automatically creating reading lists that mimic reading lists created by experts. In both evaluations, our system beats state-of-the-art, as well as Google Scholar and Google Search indexed against the corpus. Our experiments also allow us to quantify the beneficial effect of our two proposed modifications to PageRank.",2014
merlo-1997-attaching,https://aclanthology.org/W97-0317,0,,,,,,,"Attaching Multiple Prepositional Phrases: Backed-off Estimation Generalized. There has recently been considerable interest in the use of lexically-based statistical techniques to resolve prepositional phrase attachments. To our knowledge, however, these investigations have only considered the problem of attaching the first PP, i.e., in a [V NP PP] configuration. In this paper, we consider one technique which has been successfully applied to this problem, backed-off estimation, and demonstrate how it can be extended to deal with the problem of multiple PP attachment. The multiple PP attachment introduces two related problems: sparser data (since multiple PPs are naturally rarer), and greater syntactic ambiguity (more attachment configurations which must be distinguished). We present an algorithm which solves this problem through re-use of the relatively rich data obtained from first PP training, in resolving subsequent PP attachments.",Attaching Multiple Prepositional Phrases: Backed-off Estimation Generalized,"There has recently been considerable interest in the use of lexically-based statistical techniques to resolve prepositional phrase attachments. To our knowledge, however, these investigations have only considered the problem of attaching the first PP, i.e., in a [V NP PP] configuration. In this paper, we consider one technique which has been successfully applied to this problem, backed-off estimation, and demonstrate how it can be extended to deal with the problem of multiple PP attachment. The multiple PP attachment introduces two related problems: sparser data (since multiple PPs are naturally rarer), and greater syntactic ambiguity (more attachment configurations which must be distinguished). We present an algorithm which solves this problem through re-use of the relatively rich data obtained from first PP training, in resolving subsequent PP attachments.",Attaching Multiple Prepositional Phrases: Backed-off Estimation Generalized,"There has recently been considerable interest in the use of lexically-based statistical techniques to resolve prepositional phrase attachments. To our knowledge, however, these investigations have only considered the problem of attaching the first PP, i.e., in a [V NP PP] configuration. In this paper, we consider one technique which has been successfully applied to this problem, backed-off estimation, and demonstrate how it can be extended to deal with the problem of multiple PP attachment. The multiple PP attachment introduces two related problems: sparser data (since multiple PPs are naturally rarer), and greater syntactic ambiguity (more attachment configurations which must be distinguished). We present an algorithm which solves this problem through re-use of the relatively rich data obtained from first PP training, in resolving subsequent PP attachments.","We gratefully acknowledge the support of the British Council and the Swiss National Science Foundation on grant 83BC044708 to the first two authors, and on grant 12-43283.95 and fellowship 8210-46569 from the Swiss NSF to the first author. We thank the audiences at Edinburgh and Pennsylvania for their useful comments. All errors remain our responsibility.","Attaching Multiple Prepositional Phrases: Backed-off Estimation Generalized. There has recently been considerable interest in the use of lexically-based statistical techniques to resolve prepositional phrase attachments. 
To our knowledge, however, these investigations have only considered the problem of attaching the first PP, i.e., in a [V NP PP] configuration. In this paper, we consider one technique which has been successfully applied to this problem, backed-off estimation, and demonstrate how it can be extended to deal with the problem of multiple PP attachment. The multiple PP attachment introduces two related problems: sparser data (since multiple PPs are naturally rarer), and greater syntactic ambiguity (more attachment configurations which must be distinguished). We present an algorithm which solves this problem through re-use of the relatively rich data obtained from first PP training, in resolving subsequent PP attachments.",1997
louis-newman-2012-summarization,https://aclanthology.org/C12-2075,0,,,,finance,,,Summarization of Business-Related Tweets: A Concept-Based Approach. We present a method for summarizing the collection of tweets related to a business. Our procedure aggregates tweets into subtopic clusters which are then ranked and summarized by a few representative tweets from each cluster. Central to our approach is the ability to group diverse tweets into clusters. The broad clustering is induced by first learning a small set of business-related concepts automatically from free text and then subdividing the tweets into these concepts. Cluster ranking is performed using an importance score which combines topic coherence and sentiment value of the tweets. We also discuss alternative methods to summarize these tweets and evaluate the approaches using a small user study. Results show that the concept-based summaries are ranked favourably by the users.,Summarization of Business-Related Tweets: A Concept-Based Approach,We present a method for summarizing the collection of tweets related to a business. Our procedure aggregates tweets into subtopic clusters which are then ranked and summarized by a few representative tweets from each cluster. Central to our approach is the ability to group diverse tweets into clusters. The broad clustering is induced by first learning a small set of business-related concepts automatically from free text and then subdividing the tweets into these concepts. Cluster ranking is performed using an importance score which combines topic coherence and sentiment value of the tweets. We also discuss alternative methods to summarize these tweets and evaluate the approaches using a small user study. Results show that the concept-based summaries are ranked favourably by the users.,Summarization of Business-Related Tweets: A Concept-Based Approach,We present a method for summarizing the collection of tweets related to a business. Our procedure aggregates tweets into subtopic clusters which are then ranked and summarized by a few representative tweets from each cluster. Central to our approach is the ability to group diverse tweets into clusters. The broad clustering is induced by first learning a small set of business-related concepts automatically from free text and then subdividing the tweets into these concepts. Cluster ranking is performed using an importance score which combines topic coherence and sentiment value of the tweets. We also discuss alternative methods to summarize these tweets and evaluate the approaches using a small user study. Results show that the concept-based summaries are ranked favourably by the users.,,Summarization of Business-Related Tweets: A Concept-Based Approach. We present a method for summarizing the collection of tweets related to a business. Our procedure aggregates tweets into subtopic clusters which are then ranked and summarized by a few representative tweets from each cluster. Central to our approach is the ability to group diverse tweets into clusters. The broad clustering is induced by first learning a small set of business-related concepts automatically from free text and then subdividing the tweets into these concepts. Cluster ranking is performed using an importance score which combines topic coherence and sentiment value of the tweets. We also discuss alternative methods to summarize these tweets and evaluate the approaches using a small user study. Results show that the concept-based summaries are ranked favourably by the users.,2012
koufakou-scott-2020-lexicon,https://aclanthology.org/2020.trac-1.24,1,,,,hate_speech,,,"Lexicon-Enhancement of Embedding-based Approaches Towards the Detection of Abusive Language. Detecting abusive language is a significant research topic, which has received a lot of attention recently. Our work focuses on detecting personal attacks in online conversations. As previous research on this task has largely used deep learning based on embeddings, we explore the use of lexicons to enhance embedding-based methods in an effort to see how these methods apply in the particular task of detecting personal attacks. The methods implemented and experimented with in this paper are quite different from each other, not only in the type of lexicons they use (sentiment or semantic), but also in the way they use the knowledge from the lexicons, in order to construct or to change embeddings that are ultimately fed into the learning model. The sentiment lexicon approaches focus on integrating sentiment information (in the form of sentiment embeddings) into the learning model. The semantic lexicon approaches focus on transforming the original word embeddings so that they better represent relationships extracted from a semantic lexicon. Based on our experimental results, semantic lexicon methods are superior to the rest of the methods in this paper, with at least 4% macro-averaged F1 improvement over the baseline.",Lexicon-Enhancement of Embedding-based Approaches Towards the Detection of Abusive Language,"Detecting abusive language is a significant research topic, which has received a lot of attention recently. Our work focuses on detecting personal attacks in online conversations. As previous research on this task has largely used deep learning based on embeddings, we explore the use of lexicons to enhance embedding-based methods in an effort to see how these methods apply in the particular task of detecting personal attacks. The methods implemented and experimented with in this paper are quite different from each other, not only in the type of lexicons they use (sentiment or semantic), but also in the way they use the knowledge from the lexicons, in order to construct or to change embeddings that are ultimately fed into the learning model. The sentiment lexicon approaches focus on integrating sentiment information (in the form of sentiment embeddings) into the learning model. The semantic lexicon approaches focus on transforming the original word embeddings so that they better represent relationships extracted from a semantic lexicon. Based on our experimental results, semantic lexicon methods are superior to the rest of the methods in this paper, with at least 4% macro-averaged F1 improvement over the baseline.",Lexicon-Enhancement of Embedding-based Approaches Towards the Detection of Abusive Language,"Detecting abusive language is a significant research topic, which has received a lot of attention recently. Our work focuses on detecting personal attacks in online conversations. As previous research on this task has largely used deep learning based on embeddings, we explore the use of lexicons to enhance embedding-based methods in an effort to see how these methods apply in the particular task of detecting personal attacks. 
The methods implemented and experimented with in this paper are quite different from each other, not only in the type of lexicons they use (sentiment or semantic), but also in the way they use the knowledge from the lexicons, in order to construct or to change embeddings that are ultimately fed into the learning model. The sentiment lexicon approaches focus on integrating sentiment information (in the form of sentiment embeddings) into the learning model. The semantic lexicon approaches focus on transforming the original word embeddings so that they better represent relationships extracted from a semantic lexicon. Based on our experimental results, semantic lexicon methods are superior to the rest of the methods in this paper, with at least 4% macro-averaged F1 improvement over the baseline.",We gratefully acknowledge the Google Cloud Platform (GCP) research credits program and the TensorFlow Research Cloud (TFRC) program.,"Lexicon-Enhancement of Embedding-based Approaches Towards the Detection of Abusive Language. Detecting abusive language is a significant research topic, which has received a lot of attention recently. Our work focuses on detecting personal attacks in online conversations. As previous research on this task has largely used deep learning based on embeddings, we explore the use of lexicons to enhance embedding-based methods in an effort to see how these methods apply in the particular task of detecting personal attacks. The methods implemented and experimented with in this paper are quite different from each other, not only in the type of lexicons they use (sentiment or semantic), but also in the way they use the knowledge from the lexicons, in order to construct or to change embeddings that are ultimately fed into the learning model. The sentiment lexicon approaches focus on integrating sentiment information (in the form of sentiment embeddings) into the learning model. The semantic lexicon approaches focus on transforming the original word embeddings so that they better represent relationships extracted from a semantic lexicon. Based on our experimental results, semantic lexicon methods are superior to the rest of the methods in this paper, with at least 4% macro-averaged F1 improvement over the baseline.",2020
nguyen-etal-2010-nonparametric,https://aclanthology.org/C10-1092,0,,,,,,,"Nonparametric Word Segmentation for Machine Translation. We present an unsupervised word segmentation model for machine translation. The model uses existing monolingual segmentation techniques and models the joint distribution over source sentence segmentations and alignments to the target sentence. During inference, the monolingual segmentation model and the bilingual word alignment model are coupled so that the alignments to the target sentence guide the segmentation of the source sentence. The experiments show improvements on Arabic-English and Chinese-English translation tasks.",Nonparametric Word Segmentation for Machine Translation,"We present an unsupervised word segmentation model for machine translation. The model uses existing monolingual segmentation techniques and models the joint distribution over source sentence segmentations and alignments to the target sentence. During inference, the monolingual segmentation model and the bilingual word alignment model are coupled so that the alignments to the target sentence guide the segmentation of the source sentence. The experiments show improvements on Arabic-English and Chinese-English translation tasks.",Nonparametric Word Segmentation for Machine Translation,"We present an unsupervised word segmentation model for machine translation. The model uses existing monolingual segmentation techniques and models the joint distribution over source sentence segmentations and alignments to the target sentence. During inference, the monolingual segmentation model and the bilingual word alignment model are coupled so that the alignments to the target sentence guide the segmentation of the source sentence. The experiments show improvements on Arabic-English and Chinese-English translation tasks.","We thank Kevin Gimpel for interesting discussions and technical advice. We also thank the anonymous reviewers for useful feedback. This work was supported by DARPA Gale project, NSF grants 0844507 and 0915187.","Nonparametric Word Segmentation for Machine Translation. We present an unsupervised word segmentation model for machine translation. The model uses existing monolingual segmentation techniques and models the joint distribution over source sentence segmentations and alignments to the target sentence. During inference, the monolingual segmentation model and the bilingual word alignment model are coupled so that the alignments to the target sentence guide the segmentation of the source sentence. The experiments show improvements on Arabic-English and Chinese-English translation tasks.",2010
lazaridou-etal-2016-red,https://aclanthology.org/P16-2035,0,,,,,,,"The red one!: On learning to refer to things based on discriminative properties. As a first step towards agents learning to communicate about their visual environment, we propose a system that, given visual representations of a referent (CAT) and a context (SOFA), identifies their discriminative attributes, i.e., properties that distinguish them (has_tail). Moreover, although supervision is only provided in terms of discriminativeness of attributes for pairs, the model learns to assign plausible attributes to specific objects (SOFA-has_cushion). Finally, we present a preliminary experiment confirming the referential success of the predicted discriminative attributes.",The red one!: On learning to refer to things based on discriminative properties,"As a first step towards agents learning to communicate about their visual environment, we propose a system that, given visual representations of a referent (CAT) and a context (SOFA), identifies their discriminative attributes, i.e., properties that distinguish them (has_tail). Moreover, although supervision is only provided in terms of discriminativeness of attributes for pairs, the model learns to assign plausible attributes to specific objects (SOFA-has_cushion). Finally, we present a preliminary experiment confirming the referential success of the predicted discriminative attributes.",The red one!: On learning to refer to things based on discriminative properties,"As a first step towards agents learning to communicate about their visual environment, we propose a system that, given visual representations of a referent (CAT) and a context (SOFA), identifies their discriminative attributes, i.e., properties that distinguish them (has_tail). Moreover, although supervision is only provided in terms of discriminativeness of attributes for pairs, the model learns to assign plausible attributes to specific objects (SOFA-has_cushion). Finally, we present a preliminary experiment confirming the referential success of the predicted discriminative attributes.",This work was supported by ERC 2011 Starting Independent Research Grant n. 283554 (COM-POSES). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the GPUs used for this research.,"The red one!: On learning to refer to things based on discriminative properties. As a first step towards agents learning to communicate about their visual environment, we propose a system that, given visual representations of a referent (CAT) and a context (SOFA), identifies their discriminative attributes, i.e., properties that distinguish them (has_tail). Moreover, although supervision is only provided in terms of discriminativeness of attributes for pairs, the model learns to assign plausible attributes to specific objects (SOFA-has_cushion). Finally, we present a preliminary experiment confirming the referential success of the predicted discriminative attributes.",2016
torabi-asr-demberg-2012-implicitness,https://aclanthology.org/C12-1163,0,,,,,,,"Implicitness of Discourse Relations. The annotations of explicit and implicit discourse connectives in the Penn Discourse Treebank make it possible to investigate on a large scale how different types of discourse relations are expressed. Assuming an account of the Uniform Information Density hypothesis, we expect that discourse relations should be expressed explicitly with a discourse connector when they are unexpected, but may be implicit when the discourse relation can be anticipated. We investigate whether discourse relations which have been argued to be expected by the comprehender exhibit a higher ratio of implicit connectors. We find support for two hypotheses put forth in previous research which suggest that continuous and causal relations are presupposed by language users when processing consecutive sentences in a text. We then proceed to analyze the effect of Implicit Causality (IC) verbs (which have been argued to raise an expectation for an explanation) as a local cue for an upcoming causal relation.",Implicitness of Discourse Relations,"The annotations of explicit and implicit discourse connectives in the Penn Discourse Treebank make it possible to investigate on a large scale how different types of discourse relations are expressed. Assuming an account of the Uniform Information Density hypothesis, we expect that discourse relations should be expressed explicitly with a discourse connector when they are unexpected, but may be implicit when the discourse relation can be anticipated. We investigate whether discourse relations which have been argued to be expected by the comprehender exhibit a higher ratio of implicit connectors. We find support for two hypotheses put forth in previous research which suggest that continuous and causal relations are presupposed by language users when processing consecutive sentences in a text. We then proceed to analyze the effect of Implicit Causality (IC) verbs (which have been argued to raise an expectation for an explanation) as a local cue for an upcoming causal relation.",Implicitness of Discourse Relations,"The annotations of explicit and implicit discourse connectives in the Penn Discourse Treebank make it possible to investigate on a large scale how different types of discourse relations are expressed. Assuming an account of the Uniform Information Density hypothesis, we expect that discourse relations should be expressed explicitly with a discourse connector when they are unexpected, but may be implicit when the discourse relation can be anticipated. We investigate whether discourse relations which have been argued to be expected by the comprehender exhibit a higher ratio of implicit connectors. We find support for two hypotheses put forth in previous research which suggest that continuous and causal relations are presupposed by language users when processing consecutive sentences in a text. We then proceed to analyze the effect of Implicit Causality (IC) verbs (which have been argued to raise an expectation for an explanation) as a local cue for an upcoming causal relation.",,"Implicitness of Discourse Relations. The annotations of explicit and implicit discourse connectives in the Penn Discourse Treebank make it possible to investigate on a large scale how different types of discourse relations are expressed. 
Assuming an account of the Uniform Information Density hypothesis, we expect that discourse relations should be expressed explicitly with a discourse connector when they are unexpected, but may be implicit when the discourse relation can be anticipated. We investigate whether discourse relations which have been argued to be expected by the comprehender exhibit a higher ratio of implicit connectors. We find support for two hypotheses put forth in previous research which suggest that continuous and causal relations are presupposed by language users when processing consecutive sentences in a text. We then proceed to analyze the effect of Implicit Causality (IC) verbs (which have been argued to raise an expectation for an explanation) as a local cue for an upcoming causal relation.",2012
batista-navarro-ananiadou-2011-building,https://aclanthology.org/W11-0210,1,,,,health,industry_innovation_infrastructure,,"Building a Coreference-Annotated Corpus from the Domain of Biochemistry. One of the reasons for which the resolution of coreferences has remained a challenging information extraction task, especially in the biomedical domain, is the lack of training data in the form of annotated corpora. In order to address this issue, we developed the HANAPIN corpus. It consists of full-text articles from biochemistry literature, covering entities of several semantic types: chemical compounds, drug targets (e.g., proteins, enzymes, cell lines, pathogens), diseases, organisms and drug effects. All of the coreferring expressions pertaining to these semantic types were annotated based on the annotation scheme that we developed. We observed four general types of coreferences in the corpus: sortal, pronominal, abbreviation and numerical. Using the MASI distance metric, we obtained 84% in computing the inter-annotator agreement in terms of Krippendorff's alpha. Consisting of 20 full-text, open-access articles, the corpus will enable other researchers to use it as a resource for their own coreference resolution methodologies.",Building a Coreference-Annotated Corpus from the Domain of Biochemistry,"One of the reasons for which the resolution of coreferences has remained a challenging information extraction task, especially in the biomedical domain, is the lack of training data in the form of annotated corpora. In order to address this issue, we developed the HANAPIN corpus. It consists of full-text articles from biochemistry literature, covering entities of several semantic types: chemical compounds, drug targets (e.g., proteins, enzymes, cell lines, pathogens), diseases, organisms and drug effects. All of the coreferring expressions pertaining to these semantic types were annotated based on the annotation scheme that we developed. We observed four general types of coreferences in the corpus: sortal, pronominal, abbreviation and numerical. Using the MASI distance metric, we obtained 84% in computing the inter-annotator agreement in terms of Krippendorff's alpha. Consisting of 20 full-text, open-access articles, the corpus will enable other researchers to use it as a resource for their own coreference resolution methodologies.",Building a Coreference-Annotated Corpus from the Domain of Biochemistry,"One of the reasons for which the resolution of coreferences has remained a challenging information extraction task, especially in the biomedical domain, is the lack of training data in the form of annotated corpora. In order to address this issue, we developed the HANAPIN corpus. It consists of full-text articles from biochemistry literature, covering entities of several semantic types: chemical compounds, drug targets (e.g., proteins, enzymes, cell lines, pathogens), diseases, organisms and drug effects. All of the coreferring expressions pertaining to these semantic types were annotated based on the annotation scheme that we developed. We observed four general types of coreferences in the corpus: sortal, pronominal, abbreviation and numerical. Using the MASI distance metric, we obtained 84% in computing the inter-annotator agreement in terms of Krippendorff's alpha. 
Consisting of 20 full-text, open-access articles, the corpus will enable other researchers to use it as a resource for their own coreference resolution methodologies.","The UK National Centre for Text Mining is funded by the UK Joint Information Systems Committee (JISC). The authors would also like to acknowledge the Office of the Chancellor, in collaboration with the Office of the Vice-Chancellor for Research and Development, of the University of the Philippines Diliman for funding support through the Outright Research Grant. The authors also thank Paul Thompson for his feedback on the annotation guidelines, and the anonymous reviewers for their helpful comments.","Building a Coreference-Annotated Corpus from the Domain of Biochemistry. One of the reasons for which the resolution of coreferences has remained a challenging information extraction task, especially in the biomedical domain, is the lack of training data in the form of annotated corpora. In order to address this issue, we developed the HANAPIN corpus. It consists of full-text articles from biochemistry literature, covering entities of several semantic types: chemical compounds, drug targets (e.g., proteins, enzymes, cell lines, pathogens), diseases, organisms and drug effects. All of the coreferring expressions pertaining to these semantic types were annotated based on the annotation scheme that we developed. We observed four general types of coreferences in the corpus: sortal, pronominal, abbreviation and numerical. Using the MASI distance metric, we obtained 84% in computing the inter-annotator agreement in terms of Krippendorff's alpha. Consisting of 20 full-text, open-access articles, the corpus will enable other researchers to use it as a resource for their own coreference resolution methodologies.",2011
mogele-etal-2006-smartweb,http://www.lrec-conf.org/proceedings/lrec2006/pdf/277_pdf.pdf,0,,,,,,,"SmartWeb UMTS Speech Data Collection: The SmartWeb Handheld Corpus. In this paper we outline the German speech data collection for the SmartWeb project, which is funded by the German Ministry of Science and Education. We focus on the SmartWeb Handheld Corpus (SHC), which has been collected by the Bavarian Archive for Speech Signals (BAS) at the Phonetic Institute (IPSK) of Munich University. Signals of SHC are being recorded in real-life environments (indoor and outdoor) with real background noise as well as real transmission line errors. We developed a new elicitation method and recording technique, called situational prompting, which facilitates collecting realistic dialogue speech data in a cost efficient way. We can show that almost realistic speech queries to a dialogue system issued over a mobile PDA or smart phone can be collected very efficiently using an automatic speech server. We describe the technical and linguistic features of the resulting speech corpus, which will be publicly available at BAS or ELDA.",{S}mart{W}eb {UMTS} Speech Data Collection: The {S}mart{W}eb Handheld Corpus,"In this paper we outline the German speech data collection for the SmartWeb project, which is funded by the German Ministry of Science and Education. We focus on the SmartWeb Handheld Corpus (SHC), which has been collected by the Bavarian Archive for Speech Signals (BAS) at the Phonetic Institute (IPSK) of Munich University. Signals of SHC are being recorded in real-life environments (indoor and outdoor) with real background noise as well as real transmission line errors. We developed a new elicitation method and recording technique, called situational prompting, which facilitates collecting realistic dialogue speech data in a cost efficient way. We can show that almost realistic speech queries to a dialogue system issued over a mobile PDA or smart phone can be collected very efficiently using an automatic speech server. We describe the technical and linguistic features of the resulting speech corpus, which will be publicly available at BAS or ELDA.",SmartWeb UMTS Speech Data Collection: The SmartWeb Handheld Corpus,"In this paper we outline the German speech data collection for the SmartWeb project, which is funded by the German Ministry of Science and Education. We focus on the SmartWeb Handheld Corpus (SHC), which has been collected by the Bavarian Archive for Speech Signals (BAS) at the Phonetic Institute (IPSK) of Munich University. Signals of SHC are being recorded in real-life environments (indoor and outdoor) with real background noise as well as real transmission line errors. We developed a new elicitation method and recording technique, called situational prompting, which facilitates collecting realistic dialogue speech data in a cost efficient way. We can show that almost realistic speech queries to a dialogue system issued over a mobile PDA or smart phone can be collected very efficiently using an automatic speech server. We describe the technical and linguistic features of the resulting speech corpus, which will be publicly available at BAS or ELDA.",,"SmartWeb UMTS Speech Data Collection: The SmartWeb Handheld Corpus. In this paper we outline the German speech data collection for the SmartWeb project, which is funded by the German Ministry of Science and Education. 
We focus on the SmartWeb Handheld Corpus (SHC), which has been collected by the Bavarian Archive for Speech Signals (BAS) at the Phonetic Institute (IPSK) of Munich University. Signals of SHC are being recorded in real-life environments (indoor and outdoor) with real background noise as well as real transmission line errors. We developed a new elicitation method and recording technique, called situational prompting, which facilitates collecting realistic dialogue speech data in a cost efficient way. We can show that almost realistic speech queries to a dialogue system issued over a mobile PDA or smart phone can be collected very efficiently using an automatic speech server. We describe the technical and linguistic features of the resulting speech corpus, which will be publicly available at BAS or ELDA.",2006
mao-etal-2021-lightweight,https://aclanthology.org/2021.acl-long.226,0,,,,,,,"Lightweight Cross-Lingual Sentence Representation Learning. Large-scale models for learning fixed-dimensional cross-lingual sentence representations like LASER (Artetxe and Schwenk, 2019b) lead to significant improvement in performance on downstream tasks. However, further increases and modifications based on such large-scale models are usually impractical due to memory limitations. In this work, we introduce a lightweight dual-transformer architecture with just 2 layers for generating memory-efficient cross-lingual sentence representations. We explore different training tasks and observe that current cross-lingual training tasks leave a lot to be desired for this shallow architecture. To ameliorate this, we propose a novel cross-lingual language model, which combines the existing single-word masked language model with the newly proposed cross-lingual token-level reconstruction task. We further augment the training task by the introduction of two computationally-lite sentence-level contrastive learning tasks to enhance the alignment of cross-lingual sentence representation space, which compensates for the learning bottleneck of the lightweight transformer for generative tasks. Our comparisons with competing models on cross-lingual sentence retrieval and multilingual document classification confirm the effectiveness of the newly proposed training tasks for a shallow model.",Lightweight Cross-Lingual Sentence Representation Learning,"Large-scale models for learning fixed-dimensional cross-lingual sentence representations like LASER (Artetxe and Schwenk, 2019b) lead to significant improvement in performance on downstream tasks. However, further increases and modifications based on such large-scale models are usually impractical due to memory limitations. In this work, we introduce a lightweight dual-transformer architecture with just 2 layers for generating memory-efficient cross-lingual sentence representations. We explore different training tasks and observe that current cross-lingual training tasks leave a lot to be desired for this shallow architecture. To ameliorate this, we propose a novel cross-lingual language model, which combines the existing single-word masked language model with the newly proposed cross-lingual token-level reconstruction task. We further augment the training task by the introduction of two computationally-lite sentence-level contrastive learning tasks to enhance the alignment of cross-lingual sentence representation space, which compensates for the learning bottleneck of the lightweight transformer for generative tasks. Our comparisons with competing models on cross-lingual sentence retrieval and multilingual document classification confirm the effectiveness of the newly proposed training tasks for a shallow model.",Lightweight Cross-Lingual Sentence Representation Learning,"Large-scale models for learning fixed-dimensional cross-lingual sentence representations like LASER (Artetxe and Schwenk, 2019b) lead to significant improvement in performance on downstream tasks. However, further increases and modifications based on such large-scale models are usually impractical due to memory limitations. In this work, we introduce a lightweight dual-transformer architecture with just 2 layers for generating memory-efficient cross-lingual sentence representations. 
We explore different training tasks and observe that current cross-lingual training tasks leave a lot to be desired for this shallow architecture. To ameliorate this, we propose a novel cross-lingual language model, which combines the existing single-word masked language model with the newly proposed cross-lingual token-level reconstruction task. We further augment the training task by the introduction of two computationally-lite sentence-level contrastive learning tasks to enhance the alignment of cross-lingual sentence representation space, which compensates for the learning bottleneck of the lightweight transformer for generative tasks. Our comparisons with competing models on cross-lingual sentence retrieval and multilingual document classification confirm the effectiveness of the newly proposed training tasks for a shallow model.","We would like to thank all the reviewers for their valuable comments and suggestions to improve this paper. This work was partially supported by Grant-in-Aid for Young Scientists #19K20343, JSPS.","Lightweight Cross-Lingual Sentence Representation Learning. Large-scale models for learning fixed-dimensional cross-lingual sentence representations like LASER (Artetxe and Schwenk, 2019b) lead to significant improvement in performance on downstream tasks. However, further increases and modifications based on such large-scale models are usually impractical due to memory limitations. In this work, we introduce a lightweight dual-transformer architecture with just 2 layers for generating memory-efficient cross-lingual sentence representations. We explore different training tasks and observe that current cross-lingual training tasks leave a lot to be desired for this shallow architecture. To ameliorate this, we propose a novel cross-lingual language model, which combines the existing single-word masked language model with the newly proposed cross-lingual token-level reconstruction task. We further augment the training task by the introduction of two computationally-lite sentence-level contrastive learning tasks to enhance the alignment of cross-lingual sentence representation space, which compensates for the learning bottleneck of the lightweight transformer for generative tasks. Our comparisons with competing models on cross-lingual sentence retrieval and multilingual document classification confirm the effectiveness of the newly proposed training tasks for a shallow model.",2021
boyd-graber-etal-2012-besting,https://aclanthology.org/D12-1118,0,,,,,,,"Besting the Quiz Master: Crowdsourcing Incremental Classification Games. Cost-sensitive classification, where the features used in machine learning tasks have a cost, has been explored as a means of balancing knowledge against the expense of incrementally obtaining new features. We introduce a setting where humans engage in classification with incrementally revealed features: the collegiate trivia circuit. By providing the community with a web-based system to practice, we collected tens of thousands of implicit word-by-word ratings of how useful features are for eliciting correct answers. Observing humans' classification process, we improve the performance of a state-of-the-art classifier. We also use the dataset to evaluate a system to compete in the incremental classification task through a reduction of reinforcement learning to classification. Our system learns when to answer a question, performing better than baselines and most human players.",Besting the Quiz Master: Crowdsourcing Incremental Classification Games,"Cost-sensitive classification, where the features used in machine learning tasks have a cost, has been explored as a means of balancing knowledge against the expense of incrementally obtaining new features. We introduce a setting where humans engage in classification with incrementally revealed features: the collegiate trivia circuit. By providing the community with a web-based system to practice, we collected tens of thousands of implicit word-by-word ratings of how useful features are for eliciting correct answers. Observing humans' classification process, we improve the performance of a state-of-the-art classifier. We also use the dataset to evaluate a system to compete in the incremental classification task through a reduction of reinforcement learning to classification. Our system learns when to answer a question, performing better than baselines and most human players.",Besting the Quiz Master: Crowdsourcing Incremental Classification Games,"Cost-sensitive classification, where the features used in machine learning tasks have a cost, has been explored as a means of balancing knowledge against the expense of incrementally obtaining new features. We introduce a setting where humans engage in classification with incrementally revealed features: the collegiate trivia circuit. By providing the community with a web-based system to practice, we collected tens of thousands of implicit word-by-word ratings of how useful features are for eliciting correct answers. Observing humans' classification process, we improve the performance of a state-of-the-art classifier. We also use the dataset to evaluate a system to compete in the incremental classification task through a reduction of reinforcement learning to classification. Our system learns when to answer a question, performing better than baselines and most human players.","We thank the many players who played our online quiz bowl to provide our data (and hopefully had fun doing so) and Carlo Angiuli, Arnav Moudgil, and Jerry Vinokurov for providing access to quiz bowl questions. This research was supported by NSF grant #1018625. Jordan Boyd-Graber is also supported by the Army Research Laboratory through ARL Cooperative Agreement W911NF-09-2-0072. Any opinions, findings, conclusions, or recommendations expressed are the authors' and do not necessarily reflect those of the sponsors.","Besting the Quiz Master: Crowdsourcing Incremental Classification Games. 
Cost-sensitive classification, where the features used in machine learning tasks have a cost, has been explored as a means of balancing knowledge against the expense of incrementally obtaining new features. We introduce a setting where humans engage in classification with incrementally revealed features: the collegiate trivia circuit. By providing the community with a web-based system to practice, we collected tens of thousands of implicit word-by-word ratings of how useful features are for eliciting correct answers. Observing humans' classification process, we improve the performance of a state-of-the-art classifier. We also use the dataset to evaluate a system to compete in the incremental classification task through a reduction of reinforcement learning to classification. Our system learns when to answer a question, performing better than baselines and most human players.",2012
kovatchev-etal-2021-vectors,https://aclanthology.org/2021.acl-long.96,1,,,,education,,,"Can vectors read minds better than experts? Comparing data augmentation strategies for the automated scoring of children's mindreading ability. In this paper we implement and compare 7 different data augmentation strategies for the task of automatic scoring of children's ability to understand others' thoughts, feelings, and desires (or ""mindreading""). We recruit in-domain experts to re-annotate augmented samples and determine to what extent each strategy preserves the original rating. We also carry out multiple experiments to measure how much each augmentation strategy improves the performance of automatic scoring systems. To determine the capabilities of automatic systems to generalize to unseen data, we create UK-MIND-20, a new corpus of children's performance on tests of mindreading, consisting of 10,320 question-answer pairs. We obtain a new state-of-the-art performance on the MIND-CA corpus, improving macro-F1-score by 6 points. Results indicate that both the number of training examples and the quality of the augmentation strategies affect the performance of the systems. The task-specific augmentations generally outperform task-agnostic augmentations. Automatic augmentations based on vectors (GloVe, FastText) perform the worst. We find that systems trained on MIND-CA generalize well to UK-MIND-20. We demonstrate that data augmentation strategies also improve the performance on unseen data.",Can vectors read minds better than experts? Comparing data augmentation strategies for the automated scoring of children{'}s mindreading ability,"In this paper we implement and compare 7 different data augmentation strategies for the task of automatic scoring of children's ability to understand others' thoughts, feelings, and desires (or ""mindreading""). We recruit in-domain experts to re-annotate augmented samples and determine to what extent each strategy preserves the original rating. We also carry out multiple experiments to measure how much each augmentation strategy improves the performance of automatic scoring systems. To determine the capabilities of automatic systems to generalize to unseen data, we create UK-MIND-20, a new corpus of children's performance on tests of mindreading, consisting of 10,320 question-answer pairs. We obtain a new state-of-the-art performance on the MIND-CA corpus, improving macro-F1-score by 6 points. Results indicate that both the number of training examples and the quality of the augmentation strategies affect the performance of the systems. The task-specific augmentations generally outperform task-agnostic augmentations. Automatic augmentations based on vectors (GloVe, FastText) perform the worst. We find that systems trained on MIND-CA generalize well to UK-MIND-20. We demonstrate that data augmentation strategies also improve the performance on unseen data.",Can vectors read minds better than experts? Comparing data augmentation strategies for the automated scoring of children's mindreading ability,"In this paper we implement and compare 7 different data augmentation strategies for the task of automatic scoring of children's ability to understand others' thoughts, feelings, and desires (or ""mindreading""). We recruit in-domain experts to re-annotate augmented samples and determine to what extent each strategy preserves the original rating. 
We also carry out multiple experiments to measure how much each augmentation strategy improves the performance of automatic scoring systems. To determine the capabilities of automatic systems to generalize to unseen data, we create UK-MIND-20, a new corpus of children's performance on tests of mindreading, consisting of 10,320 question-answer pairs. We obtain a new state-of-the-art performance on the MIND-CA corpus, improving macro-F1-score by 6 points. Results indicate that both the number of training examples and the quality of the augmentation strategies affect the performance of the systems. The task-specific augmentations generally outperform task-agnostic augmentations. Automatic augmentations based on vectors (GloVe, FastText) perform the worst. We find that systems trained on MIND-CA generalize well to UK-MIND-20. We demonstrate that data augmentation strategies also improve the performance on unseen data.",We would like to thank Imogen Grumley Traynor and Irene Luque Aguilera for the annotation and the creation of the lists of synonyms and phrases. We also want to thank the anonymous reviewers for their feedback and suggestions. This project was funded by a grant from Wellcome to R. T. Devine.,"Can vectors read minds better than experts? Comparing data augmentation strategies for the automated scoring of children's mindreading ability. In this paper we implement and compare 7 different data augmentation strategies for the task of automatic scoring of children's ability to understand others' thoughts, feelings, and desires (or ""mindreading""). We recruit in-domain experts to re-annotate augmented samples and determine to what extent each strategy preserves the original rating. We also carry out multiple experiments to measure how much each augmentation strategy improves the performance of automatic scoring systems. To determine the capabilities of automatic systems to generalize to unseen data, we create UK-MIND-20, a new corpus of children's performance on tests of mindreading, consisting of 10,320 question-answer pairs. We obtain a new state-of-the-art performance on the MIND-CA corpus, improving macro-F1-score by 6 points. Results indicate that both the number of training examples and the quality of the augmentation strategies affect the performance of the systems. The task-specific augmentations generally outperform task-agnostic augmentations. Automatic augmentations based on vectors (GloVe, FastText) perform the worst. We find that systems trained on MIND-CA generalize well to UK-MIND-20. We demonstrate that data augmentation strategies also improve the performance on unseen data.",2021
qin-etal-2021-dont,https://aclanthology.org/2021.emnlp-main.182,0,,,,,,,"Don't be Contradicted with Anything! CI-ToD: Towards Benchmarking Consistency for Task-oriented Dialogue System. Consistency Identification has obtained remarkable success on open-domain dialogue, which can be used for preventing inconsistent response generation. However, in contrast to the rapid development in open-domain dialogue, few efforts have been made to the task-oriented dialogue direction. In this paper, we argue that consistency problem is more urgent in task-oriented domain. To facilitate the research, we introduce CI-ToD, a novel dataset for Consistency Identification in Task-oriented Dialog system. In addition, we not only annotate the single label to enable the model to judge whether the system response is contradictory, but also provide more fine-grained labels (i.e., Dialogue History Inconsistency, User Query Inconsistency and Knowledge Base Inconsistency) to encourage model to know what inconsistent sources lead to it. Empirical results show that state-of-the-art methods only achieve 51.3%, which is far behind the human performance of 93.2%, indicating that there is ample room for improving consistency identification ability. Finally, we conduct exhaustive experiments and qualitative analysis to comprehend key challenges and provide guidance for future directions. All datasets and models are publicly available at https://github.com/yizhen20133868/CI-ToD.",Don{'}t be Contradicted with Anything! {CI}-{T}o{D}: Towards Benchmarking Consistency for Task-oriented Dialogue System,"Consistency Identification has obtained remarkable success on open-domain dialogue, which can be used for preventing inconsistent response generation. However, in contrast to the rapid development in open-domain dialogue, few efforts have been made to the task-oriented dialogue direction. In this paper, we argue that consistency problem is more urgent in task-oriented domain. To facilitate the research, we introduce CI-ToD, a novel dataset for Consistency Identification in Task-oriented Dialog system. In addition, we not only annotate the single label to enable the model to judge whether the system response is contradictory, but also provide more fine-grained labels (i.e., Dialogue History Inconsistency, User Query Inconsistency and Knowledge Base Inconsistency) to encourage model to know what inconsistent sources lead to it. Empirical results show that state-of-the-art methods only achieve 51.3%, which is far behind the human performance of 93.2%, indicating that there is ample room for improving consistency identification ability. Finally, we conduct exhaustive experiments and qualitative analysis to comprehend key challenges and provide guidance for future directions. All datasets and models are publicly available at https://github.com/yizhen20133868/CI-ToD.",Don't be Contradicted with Anything! CI-ToD: Towards Benchmarking Consistency for Task-oriented Dialogue System,"Consistency Identification has obtained remarkable success on open-domain dialogue, which can be used for preventing inconsistent response generation. However, in contrast to the rapid development in open-domain dialogue, few efforts have been made to the task-oriented dialogue direction. In this paper, we argue that consistency problem is more urgent in task-oriented domain. To facilitate the research, we introduce CI-ToD, a novel dataset for Consistency Identification in Task-oriented Dialog system. 
In addition, we not only annotate the single label to enable the model to judge whether the system response is contradictory, but also provide more fine-grained labels (i.e., Dialogue History Inconsistency, User Query Inconsistency and Knowledge Base Inconsistency) to encourage model to know what inconsistent sources lead to it. Empirical results show that state-of-the-art methods only achieve 51.3%, which is far behind the human performance of 93.2%, indicating that there is ample room for improving consistency identification ability. Finally, we conduct exhaustive experiments and qualitative analysis to comprehend key challenges and provide guidance for future directions. All datasets and models are publicly available at https://github.com/yizhen20133868/CI-ToD.",This work was supported by the National Key R&D Program of China via grant 2020AAA0106501 and the National Natural Science Foundation of China (NSFC) via grant 61976072 and 61772153. This work was also supported by the Zhejiang Lab's International Talent Fund for Young Professionals.,"Don't be Contradicted with Anything! CI-ToD: Towards Benchmarking Consistency for Task-oriented Dialogue System. Consistency Identification has obtained remarkable success on open-domain dialogue, which can be used for preventing inconsistent response generation. However, in contrast to the rapid development in open-domain dialogue, few efforts have been made to the task-oriented dialogue direction. In this paper, we argue that consistency problem is more urgent in task-oriented domain. To facilitate the research, we introduce CI-ToD, a novel dataset for Consistency Identification in Task-oriented Dialog system. In addition, we not only annotate the single label to enable the model to judge whether the system response is contradictory, but also provide more fine-grained labels (i.e., Dialogue History Inconsistency, User Query Inconsistency and Knowledge Base Inconsistency) to encourage model to know what inconsistent sources lead to it. Empirical results show that state-of-the-art methods only achieve 51.3%, which is far behind the human performance of 93.2%, indicating that there is ample room for improving consistency identification ability. Finally, we conduct exhaustive experiments and qualitative analysis to comprehend key challenges and provide guidance for future directions. All datasets and models are publicly available at https://github.com/yizhen20133868/CI-ToD.",2021
turian-etal-2010-word,https://aclanthology.org/P10-1040,0,,,,,,,"Word Representations: A Simple and General Method for Semi-Supervised Learning. If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here:",Word Representations: A Simple and General Method for Semi-Supervised Learning,"If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here:",Word Representations: A Simple and General Method for Semi-Supervised Learning,"If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here:","Thank you to Magnus Sahlgren, Bob Carpenter, Percy Liang, Alexander Yates, and the anonymous reviewers for useful discussion. Thank you to Andriy Mnih for inducing his embeddings on RCV1 for us. Joseph Turian and Yoshua Bengio acknowledge the following agencies for research funding and computing support: NSERC, RQCHP, CIFAR. Lev Ratinov was supported by the Air Force Research Laboratory (AFRL) under prime contract no. FA8750-09-C-0181. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the author and do not necessarily reflect the view of the Air Force Research Laboratory (AFRL).","Word Representations: A Simple and General Method for Semi-Supervised Learning. If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. 
You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here:",2010
yaseen-etal-2006-building,http://www.lrec-conf.org/proceedings/lrec2006/pdf/131_pdf.pdf,0,,,,,,,"Building Annotated Written and Spoken Arabic LRs in NEMLAR Project. The NEMLAR project: Network for Euro-Mediterranean LAnguage Resource and human language technology development and support; (www.nemlar.org) is a project supported by the EC with partners from Europe and the Middle East; whose objective is to build a network of specialized partners to promote and support the development of Arabic Language Resources in the Mediterranean region. The project focused on identifying the state of the art of LRs in the region, assessing priority requirements through consultations with language industry and communication players, and establishing a protocol for developing and identifying a Basic Language Resource Kit (BLARK) for Arabic, and to assess first priority requirements. The BLARK is defined as the minimal set of language resources that is necessary to do any pre-competitive research and education, in addition to the development of crucial components for any future NLP industry. Following the identification of high priority resources the NEMLAR partners agreed to focus on, and produce three main resources, which are: 1) Annotated Arabic written corpus of about 500 K words, 2) Arabic speech corpus for TTS applications of 2x5 hours, and 3) Arabic broadcast news speech corpus of 40 hours Modern Standard Arabic. For each of the resources underlying linguistic models and assumptions of the corpus, technical specifications, methodologies for the collection and building of the resources, validation and verification mechanisms were put and applied for the three LRs.",Building Annotated Written and Spoken {A}rabic {LR}s in {NEMLAR} Project,"The NEMLAR project: Network for Euro-Mediterranean LAnguage Resource and human language technology development and support; (www.nemlar.org) is a project supported by the EC with partners from Europe and the Middle East; whose objective is to build a network of specialized partners to promote and support the development of Arabic Language Resources in the Mediterranean region. The project focused on identifying the state of the art of LRs in the region, assessing priority requirements through consultations with language industry and communication players, and establishing a protocol for developing and identifying a Basic Language Resource Kit (BLARK) for Arabic, and to assess first priority requirements. The BLARK is defined as the minimal set of language resources that is necessary to do any pre-competitive research and education, in addition to the development of crucial components for any future NLP industry. Following the identification of high priority resources the NEMLAR partners agreed to focus on, and produce three main resources, which are: 1) Annotated Arabic written corpus of about 500 K words, 2) Arabic speech corpus for TTS applications of 2x5 hours, and 3) Arabic broadcast news speech corpus of 40 hours Modern Standard Arabic. 
For each of the resources underlying linguistic models and assumptions of the corpus, technical specifications, methodologies for the collection and building of the resources, validation and verification mechanisms were put and applied for the three LRs.",Building Annotated Written and Spoken Arabic LRs in NEMLAR Project,"The NEMLAR project: Network for Euro-Mediterranean LAnguage Resource and human language technology development and support; (www.nemlar.org) is a project supported by the EC with partners from Europe and the Middle East; whose objective is to build a network of specialized partners to promote and support the development of Arabic Language Resources in the Mediterranean region. The project focused on identifying the state of the art of LRs in the region, assessing priority requirements through consultations with language industry and communication players, and establishing a protocol for developing and identifying a Basic Language Resource Kit (BLARK) for Arabic, and to assess first priority requirements. The BLARK is defined as the minimal set of language resources that is necessary to do any pre-competitive research and education, in addition to the development of crucial components for any future NLP industry. Following the identification of high priority resources the NEMLAR partners agreed to focus on, and produce three main resources, which are: 1) Annotated Arabic written corpus of about 500 K words, 2) Arabic speech corpus for TTS applications of 2x5 hours, and 3) Arabic broadcast news speech corpus of 40 hours Modern Standard Arabic. For each of the resources underlying linguistic models and assumptions of the corpus, technical specifications, methodologies for the collection and building of the resources, validation and verification mechanisms were put and applied for the three LRs.","The authors wish to thank the European Commission for the support granted through the INCO-MED programme. The INCO-MED programme has enhanced the development of the cultural dialogue and partnerships across the Mediterranean, as well as the advancement of science to the benefit of all involved parties. It was wise to select language technology as one of the areas to support.The authors also want to thank all of the project participants, cf. www.NEMLAR.org for their contributions.","Building Annotated Written and Spoken Arabic LRs in NEMLAR Project. The NEMLAR project: Network for Euro-Mediterranean LAnguage Resource and human language technology development and support; (www.nemlar.org) is a project supported by the EC with partners from Europe and the Middle East; whose objective is to build a network of specialized partners to promote and support the development of Arabic Language Resources in the Mediterranean region. The project focused on identifying the state of the art of LRs in the region, assessing priority requirements through consultations with language industry and communication players, and establishing a protocol for developing and identifying a Basic Language Resource Kit (BLARK) for Arabic, and to assess first priority requirements. The BLARK is defined as the minimal set of language resources that is necessary to do any pre-competitive research and education, in addition to the development of crucial components for any future NLP industry. 
Following the identification of high priority resources the NEMLAR partners agreed to focus on, and produce three main resources, which are: 1) Annotated Arabic written corpus of about 500 K words, 2) Arabic speech corpus for TTS applications of 2x5 hours, and 3) Arabic broadcast news speech corpus of 40 hours Modern Standard Arabic. For each of the resources underlying linguistic models and assumptions of the corpus, technical specifications, methodologies for the collection and building of the resources, validation and verification mechanisms were put and applied for the three LRs.",2006
chae-2004-analysis,https://aclanthology.org/Y04-1006,0,,,,,,,"An Analysis of the Korean [manyak ... V-telato] Construction : An Indexed Phrase Structure Grammar Approach. Concord adverbial constructions in Korean show unbounded dependency relationships between two non-empty entities. There are two different types of unboundedness involved: one between a concord adverbial and a verbal ending and the other between the adverbial as a modifier and a predicate. In addition, these unboundedness relationships exhibit properties of ""downward movement"" phenomena. In this paper, we examine the Indexed Phrase Structure Grammar analysis of the constructions presented in Chae (2003, 2004), and propose to introduce a new feature to solve its conceptual problem. Then, we provide an analysis of conditional-concessive constructions, which is a subtype of concord adverbial constructions. These constructions are special in the sense that they contain a seemingly incompatible combination of a conditional adverbial and a concessive verbal ending. We argue that they are basically conditional constructions despite their concessive meaning.",An Analysis of the {K}orean [manyak ... {V}-telato] Construction : An Indexed Phrase Structure Grammar Approach,"Concord adverbial constructions in Korean show unbounded dependency relationships between two non-empty entities. There are two different types of unboundedness involved: one between a concord adverbial and a verbal ending and the other between the adverbial as a modifier and a predicate. In addition, these unboundedness relationships exhibit properties of ""downward movement"" phenomena. In this paper, we examine the Indexed Phrase Structure Grammar analysis of the constructions presented in Chae (2003, 2004), and propose to introduce a new feature to solve its conceptual problem. Then, we provide an analysis of conditional-concessive constructions, which is a subtype of concord adverbial constructions. These constructions are special in the sense that they contain a seemingly incompatible combination of a conditional adverbial and a concessive verbal ending. We argue that they are basically conditional constructions despite their concessive meaning.",An Analysis of the Korean [manyak ... V-telato] Construction : An Indexed Phrase Structure Grammar Approach,"Concord adverbial constructions in Korean show unbounded dependency relationships between two non-empty entities. There are two different types of unboundedness involved: one between a concord adverbial and a verbal ending and the other between the adverbial as a modifier and a predicate. In addition, these unboundedness relationships exhibit properties of ""downward movement"" phenomena. In this paper, we examine the Indexed Phrase Structure Grammar analysis of the constructions presented in Chae (2003, 2004), and propose to introduce a new feature to solve its conceptual problem. Then, we provide an analysis of conditional-concessive constructions, which is a subtype of concord adverbial constructions. These constructions are special in the sense that they contain a seemingly incompatible combination of a conditional adverbial and a concessive verbal ending. We argue that they are basically conditional constructions despite their concessive meaning.","An earlier version of this paper was presented at a monthly meeting of the Korean Society for Language and Information on April 24, 2004. 
I appreciate valuable comments and suggestions from Beom-mo Kang, Yong-Beom Kim, Seungho Nam, Jae-Hak Yoon and others.","An Analysis of the Korean [manyak ... V-telato] Construction : An Indexed Phrase Structure Grammar Approach. Concord adverbial constructions in Korean show unbounded dependency relationships between two non-empty entities. There are two different types of unboundedness involved: one between a concord adverbial and a verbal ending and the other between the adverbial as a modifier and a predicate. In addition, these unboundedness relationships exhibit properties of ""downward movement"" phenomena. In this paper, we examine the Indexed Phrase Structure Grammar analysis of the constructions presented in Chae (2003, 2004), and propose to introduce a new feature to solve its conceptual problem. Then, we provide an analysis of conditional-concessive constructions, which is a subtype of concord adverbial constructions. These constructions are special in the sense that they contain a seemingly incompatible combination of a conditional adverbial and a concessive verbal ending. We argue that they are basically conditional constructions despite their concessive meaning.",2004
saggion-2007-shef,https://aclanthology.org/S07-1063,0,,,,,,,SHEF: Semantic Tagging and Summarization Techniques Applied to Cross-document Coreference. We describe experiments for the cross-document coreference task in SemEval 2007. Our cross-document coreference system uses an in-house agglomerative clustering implementation to group documents referring to the same entity. Clustering uses vector representations created by summarization and semantic tagging analysis components. We present evaluation results for four system configurations demonstrating the potential of the applied techniques.,{SHEF}: Semantic Tagging and Summarization Techniques Applied to Cross-document Coreference,We describe experiments for the cross-document coreference task in SemEval 2007. Our cross-document coreference system uses an in-house agglomerative clustering implementation to group documents referring to the same entity. Clustering uses vector representations created by summarization and semantic tagging analysis components. We present evaluation results for four system configurations demonstrating the potential of the applied techniques.,SHEF: Semantic Tagging and Summarization Techniques Applied to Cross-document Coreference,We describe experiments for the cross-document coreference task in SemEval 2007. Our cross-document coreference system uses an in-house agglomerative clustering implementation to group documents referring to the same entity. Clustering uses vector representations created by summarization and semantic tagging analysis components. We present evaluation results for four system configurations demonstrating the potential of the applied techniques.,This work was partially supported by the EU-funded MUSING project (IST-2004-027097) and the EU-funded LIRICS project (eContent project 22236).,SHEF: Semantic Tagging and Summarization Techniques Applied to Cross-document Coreference. We describe experiments for the cross-document coreference task in SemEval 2007. Our cross-document coreference system uses an in-house agglomerative clustering implementation to group documents referring to the same entity. Clustering uses vector representations created by summarization and semantic tagging analysis components. We present evaluation results for four system configurations demonstrating the potential of the applied techniques.,2007
guo-etal-2020-evidence,https://aclanthology.org/2020.acl-main.544,0,,,,,,,"Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEncoder. Generating inferential texts about an event in different perspectives requires reasoning over different contexts that the event occurs. Existing works usually ignore the context that is not explicitly provided, resulting in a context-independent semantic representation that struggles to support the generation. To address this, we propose an approach that automatically finds evidence for an event from a large text corpus, and leverages the evidence to guide the generation of inferential texts. Our approach works in an encoderdecoder manner and is equipped with a Vector Quantised-Variational Autoencoder, where the encoder outputs representations from a distribution over discrete variables. Such discrete representations enable automatically selecting relevant evidence, which not only facilitates evidence-aware generation, but also provides a natural way to uncover rationales behind the generation. Our approach provides state-ofthe-art performance on both Event2Mind and ATOMIC datasets. More importantly, we find that with discrete representations, our model selectively uses evidence to generate different inferential texts.",Evidence-Aware Inferential Text Generation with Vector Quantised Variational {A}uto{E}ncoder,"Generating inferential texts about an event in different perspectives requires reasoning over different contexts that the event occurs. Existing works usually ignore the context that is not explicitly provided, resulting in a context-independent semantic representation that struggles to support the generation. To address this, we propose an approach that automatically finds evidence for an event from a large text corpus, and leverages the evidence to guide the generation of inferential texts. Our approach works in an encoderdecoder manner and is equipped with a Vector Quantised-Variational Autoencoder, where the encoder outputs representations from a distribution over discrete variables. Such discrete representations enable automatically selecting relevant evidence, which not only facilitates evidence-aware generation, but also provides a natural way to uncover rationales behind the generation. Our approach provides state-ofthe-art performance on both Event2Mind and ATOMIC datasets. More importantly, we find that with discrete representations, our model selectively uses evidence to generate different inferential texts.",Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEncoder,"Generating inferential texts about an event in different perspectives requires reasoning over different contexts that the event occurs. Existing works usually ignore the context that is not explicitly provided, resulting in a context-independent semantic representation that struggles to support the generation. To address this, we propose an approach that automatically finds evidence for an event from a large text corpus, and leverages the evidence to guide the generation of inferential texts. Our approach works in an encoderdecoder manner and is equipped with a Vector Quantised-Variational Autoencoder, where the encoder outputs representations from a distribution over discrete variables. Such discrete representations enable automatically selecting relevant evidence, which not only facilitates evidence-aware generation, but also provides a natural way to uncover rationales behind the generation. 
Our approach provides state-of-the-art performance on both Event2Mind and ATOMIC datasets. More importantly, we find that with discrete representations, our model selectively uses evidence to generate different inferential texts.","Daya Guo and Jian Yin are supported by the National Natural Science Foundation of China (U1711262, U1611264, U1711261, U1811261, U1811264, U1911203), National Key R&D Program of China (2018YFB1004404), Guangdong Basic and Applied Basic Research Foundation (2019B1515130001), Key R&D Program of Guangdong Province (2018B010107005). Jian Yin is the corresponding author.","Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEncoder. Generating inferential texts about an event in different perspectives requires reasoning over different contexts that the event occurs. Existing works usually ignore the context that is not explicitly provided, resulting in a context-independent semantic representation that struggles to support the generation. To address this, we propose an approach that automatically finds evidence for an event from a large text corpus, and leverages the evidence to guide the generation of inferential texts. Our approach works in an encoder-decoder manner and is equipped with a Vector Quantised-Variational Autoencoder, where the encoder outputs representations from a distribution over discrete variables. Such discrete representations enable automatically selecting relevant evidence, which not only facilitates evidence-aware generation, but also provides a natural way to uncover rationales behind the generation. Our approach provides state-of-the-art performance on both Event2Mind and ATOMIC datasets. More importantly, we find that with discrete representations, our model selectively uses evidence to generate different inferential texts.",2020
dong-1990-transtar,https://aclanthology.org/C90-3066,0,,,,business_use,,,Transtar - A Commercial English-Chinese MT System. ,Transtar - A Commercial {E}nglish-{C}hinese {MT} System,,Transtar - A Commercial English-Chinese MT System,,,Transtar - A Commercial English-Chinese MT System. ,1990
romero-etal-2021-task,https://aclanthology.org/2021.sigdial-1.46,0,,,,,,,"A Task-Oriented Dialogue Architecture via Transformer Neural Language Models and Symbolic Injection. Recently, transformer language models have been applied to build both task-and non-taskoriented dialogue systems. Although transformers perform well on most of the NLP tasks, they perform poorly on context retrieval and symbolic reasoning. Our work aims to address this limitation by embedding the model in an operational loop that blends both natural language generation and symbolic injection. We evaluated our system on the multi-domain DSTC8 data set and reported joint goal accuracy of 75.8% (ranked among the first half positions), intent accuracy of 97.4% (which is higher than the reported literature), and a 15% improvement for success rate compared to a baseline with no symbolic injection. These promising results suggest that transformer language models can not only generate proper system responses but also symbolic representations that can further be used to enhance the overall quality of the dialogue management as well as serving as scaffolding for complex conversational reasoning.",A Task-Oriented Dialogue Architecture via Transformer Neural Language Models and Symbolic Injection,"Recently, transformer language models have been applied to build both task-and non-taskoriented dialogue systems. Although transformers perform well on most of the NLP tasks, they perform poorly on context retrieval and symbolic reasoning. Our work aims to address this limitation by embedding the model in an operational loop that blends both natural language generation and symbolic injection. We evaluated our system on the multi-domain DSTC8 data set and reported joint goal accuracy of 75.8% (ranked among the first half positions), intent accuracy of 97.4% (which is higher than the reported literature), and a 15% improvement for success rate compared to a baseline with no symbolic injection. These promising results suggest that transformer language models can not only generate proper system responses but also symbolic representations that can further be used to enhance the overall quality of the dialogue management as well as serving as scaffolding for complex conversational reasoning.",A Task-Oriented Dialogue Architecture via Transformer Neural Language Models and Symbolic Injection,"Recently, transformer language models have been applied to build both task-and non-taskoriented dialogue systems. Although transformers perform well on most of the NLP tasks, they perform poorly on context retrieval and symbolic reasoning. Our work aims to address this limitation by embedding the model in an operational loop that blends both natural language generation and symbolic injection. We evaluated our system on the multi-domain DSTC8 data set and reported joint goal accuracy of 75.8% (ranked among the first half positions), intent accuracy of 97.4% (which is higher than the reported literature), and a 15% improvement for success rate compared to a baseline with no symbolic injection. These promising results suggest that transformer language models can not only generate proper system responses but also symbolic representations that can further be used to enhance the overall quality of the dialogue management as well as serving as scaffolding for complex conversational reasoning.",,"A Task-Oriented Dialogue Architecture via Transformer Neural Language Models and Symbolic Injection. 
Recently, transformer language models have been applied to build both task- and non-task-oriented dialogue systems. Although transformers perform well on most of the NLP tasks, they perform poorly on context retrieval and symbolic reasoning. Our work aims to address this limitation by embedding the model in an operational loop that blends both natural language generation and symbolic injection. We evaluated our system on the multi-domain DSTC8 data set and reported joint goal accuracy of 75.8% (ranked among the first half positions), intent accuracy of 97.4% (which is higher than the reported literature), and a 15% improvement for success rate compared to a baseline with no symbolic injection. These promising results suggest that transformer language models can not only generate proper system responses but also symbolic representations that can further be used to enhance the overall quality of the dialogue management as well as serving as scaffolding for complex conversational reasoning.",2021
baker-sato-2003-framenet,https://aclanthology.org/P03-2030,0,,,,,,,"The FrameNet Data and Software. The FrameNet project has developed a lexical knowledge base providing a unique level of detail as to the the possible syntactic realizations of the specific semantic roles evoked by each predicator, for roughly 7,000 lexical units, on the basis of annotating more than 100,000 example sentences extracted from corpora. An interim version of the FrameNet data was released in October, 2002 and is being widely used. A new, more portable version of the FrameNet software is also being made available to researchers elsewhere, including the Spanish FrameNet project. This demo and poster will briefly explain the principles of Frame Semantics and demonstrate the new unified tools for lexicon building and annotation and also FrameSQL, a search tool for finding patterns in annotated sentences. We will discuss the content and format of the data releases and how the software and data can be used by other NLP researchers.",The {F}rame{N}et Data and Software,"The FrameNet project has developed a lexical knowledge base providing a unique level of detail as to the the possible syntactic realizations of the specific semantic roles evoked by each predicator, for roughly 7,000 lexical units, on the basis of annotating more than 100,000 example sentences extracted from corpora. An interim version of the FrameNet data was released in October, 2002 and is being widely used. A new, more portable version of the FrameNet software is also being made available to researchers elsewhere, including the Spanish FrameNet project. This demo and poster will briefly explain the principles of Frame Semantics and demonstrate the new unified tools for lexicon building and annotation and also FrameSQL, a search tool for finding patterns in annotated sentences. We will discuss the content and format of the data releases and how the software and data can be used by other NLP researchers.",The FrameNet Data and Software,"The FrameNet project has developed a lexical knowledge base providing a unique level of detail as to the the possible syntactic realizations of the specific semantic roles evoked by each predicator, for roughly 7,000 lexical units, on the basis of annotating more than 100,000 example sentences extracted from corpora. An interim version of the FrameNet data was released in October, 2002 and is being widely used. A new, more portable version of the FrameNet software is also being made available to researchers elsewhere, including the Spanish FrameNet project. This demo and poster will briefly explain the principles of Frame Semantics and demonstrate the new unified tools for lexicon building and annotation and also FrameSQL, a search tool for finding patterns in annotated sentences. We will discuss the content and format of the data releases and how the software and data can be used by other NLP researchers.",,"The FrameNet Data and Software. The FrameNet project has developed a lexical knowledge base providing a unique level of detail as to the the possible syntactic realizations of the specific semantic roles evoked by each predicator, for roughly 7,000 lexical units, on the basis of annotating more than 100,000 example sentences extracted from corpora. An interim version of the FrameNet data was released in October, 2002 and is being widely used. A new, more portable version of the FrameNet software is also being made available to researchers elsewhere, including the Spanish FrameNet project. 
This demo and poster will briefly explain the principles of Frame Semantics and demonstrate the new unified tools for lexicon building and annotation and also FrameSQL, a search tool for finding patterns in annotated sentences. We will discuss the content and format of the data releases and how the software and data can be used by other NLP researchers.",2003
saravanan-etal-2008-automatic,https://aclanthology.org/I08-1063,1,,,,peace_justice_and_strong_institutions,,,"Automatic Identification of Rhetorical Roles using Conditional Random Fields for Legal Document Summarization. In this paper, we propose a machine learning approach to rhetorical role identification from legal documents. In our approach, we annotate roles in sample documents with the help of legal experts and take them as training data. Conditional random field model has been trained with the data to perform rhetorical role identification with reinforcement of rich feature sets. The understanding of structure of a legal document and the application of mathematical model can brings out an effective summary in the final stage. Other important new findings in this work include that the training of a model for one sub-domain can be extended to another sub-domains with very limited augmentation of feature sets. Moreover, we can significantly improve extraction-based summarization results by modifying the ranking of sentences with the importance of specific roles.",Automatic Identification of Rhetorical Roles using Conditional Random Fields for Legal Document Summarization,"In this paper, we propose a machine learning approach to rhetorical role identification from legal documents. In our approach, we annotate roles in sample documents with the help of legal experts and take them as training data. Conditional random field model has been trained with the data to perform rhetorical role identification with reinforcement of rich feature sets. The understanding of structure of a legal document and the application of mathematical model can brings out an effective summary in the final stage. Other important new findings in this work include that the training of a model for one sub-domain can be extended to another sub-domains with very limited augmentation of feature sets. Moreover, we can significantly improve extraction-based summarization results by modifying the ranking of sentences with the importance of specific roles.",Automatic Identification of Rhetorical Roles using Conditional Random Fields for Legal Document Summarization,"In this paper, we propose a machine learning approach to rhetorical role identification from legal documents. In our approach, we annotate roles in sample documents with the help of legal experts and take them as training data. Conditional random field model has been trained with the data to perform rhetorical role identification with reinforcement of rich feature sets. The understanding of structure of a legal document and the application of mathematical model can brings out an effective summary in the final stage. Other important new findings in this work include that the training of a model for one sub-domain can be extended to another sub-domains with very limited augmentation of feature sets. Moreover, we can significantly improve extraction-based summarization results by modifying the ranking of sentences with the importance of specific roles.",,"Automatic Identification of Rhetorical Roles using Conditional Random Fields for Legal Document Summarization. In this paper, we propose a machine learning approach to rhetorical role identification from legal documents. In our approach, we annotate roles in sample documents with the help of legal experts and take them as training data. Conditional random field model has been trained with the data to perform rhetorical role identification with reinforcement of rich feature sets. 
The understanding of structure of a legal document and the application of mathematical model can brings out an effective summary in the final stage. Other important new findings in this work include that the training of a model for one sub-domain can be extended to another sub-domains with very limited augmentation of feature sets. Moreover, we can significantly improve extraction-based summarization results by modifying the ranking of sentences with the importance of specific roles.",2008
prevot-etal-2013-quantitative-comparative,https://aclanthology.org/Y13-1007,0,,,,,,,"A Quantitative Comparative Study of Prosodic and Discourse Units, the Case of French and Taiwan Mandarin. Studies of spontaneous conversational speech grounded on large and richly annotated corpora are still rare due to the scarcity of such resources. Comparative studies based on such resources are even more rarely found because of the extra-need of comparability in terms of content, genre and speaking style. The present paper presents our efforts for establishing such a dataset for two typologically diverse languages: French and Taiwan Mandarin. To the primary data, we added morphosyntactic, chunking, prosodic and discourse annotation in order to be able to carry out quantitative comparative studies of the syntaxdiscourse-prosody interfaces. We introduced our work on the data creation itself as well as some preliminary results of the boundary alignment between prosodic and discourse units and how POS and chunks are distributed on these boundaries.","A Quantitative Comparative Study of Prosodic and Discourse Units, the Case of {F}rench and {T}aiwan {M}andarin","Studies of spontaneous conversational speech grounded on large and richly annotated corpora are still rare due to the scarcity of such resources. Comparative studies based on such resources are even more rarely found because of the extra-need of comparability in terms of content, genre and speaking style. The present paper presents our efforts for establishing such a dataset for two typologically diverse languages: French and Taiwan Mandarin. To the primary data, we added morphosyntactic, chunking, prosodic and discourse annotation in order to be able to carry out quantitative comparative studies of the syntaxdiscourse-prosody interfaces. We introduced our work on the data creation itself as well as some preliminary results of the boundary alignment between prosodic and discourse units and how POS and chunks are distributed on these boundaries.","A Quantitative Comparative Study of Prosodic and Discourse Units, the Case of French and Taiwan Mandarin","Studies of spontaneous conversational speech grounded on large and richly annotated corpora are still rare due to the scarcity of such resources. Comparative studies based on such resources are even more rarely found because of the extra-need of comparability in terms of content, genre and speaking style. The present paper presents our efforts for establishing such a dataset for two typologically diverse languages: French and Taiwan Mandarin. To the primary data, we added morphosyntactic, chunking, prosodic and discourse annotation in order to be able to carry out quantitative comparative studies of the syntaxdiscourse-prosody interfaces. We introduced our work on the data creation itself as well as some preliminary results of the boundary alignment between prosodic and discourse units and how POS and chunks are distributed on these boundaries.","This work has been realized thanks to the support of the France-Taiwan ORCHID Program, under grant 100-2911-I-001-504 and the NSC project 100-2410-H-001-093 granted to the second author, as well as ANR OTIM BLAN08-2-349062 for initial work on the French data. We would like also to thank our colleagues for the help at various stage of the Data preparation, in particular Roxane Bertrand, Yi-Fen Liu, Robert Espesser, Stéphane Rauzy, Brigitte Bigi, and Philippe Blache. 
","A Quantitative Comparative Study of Prosodic and Discourse Units, the Case of French and Taiwan Mandarin. Studies of spontaneous conversational speech grounded on large and richly annotated corpora are still rare due to the scarcity of such resources. Comparative studies based on such resources are even more rarely found because of the extra-need of comparability in terms of content, genre and speaking style. The present paper presents our efforts for establishing such a dataset for two typologically diverse languages: French and Taiwan Mandarin. To the primary data, we added morphosyntactic, chunking, prosodic and discourse annotation in order to be able to carry out quantitative comparative studies of the syntaxdiscourse-prosody interfaces. We introduced our work on the data creation itself as well as some preliminary results of the boundary alignment between prosodic and discourse units and how POS and chunks are distributed on these boundaries.",2013
basu-roy-chowdhury-etal-2019-instance,https://aclanthology.org/D19-6120,0,,,,,,,"Instance-based Inductive Deep Transfer Learning by Cross-Dataset Querying with Locality Sensitive Hashing. Supervised learning models are typically trained on a single dataset and the performance of these models rely heavily on the size of the dataset i.e., the amount of data available with ground truth. Learning algorithms try to generalize solely based on the data that it is presented with during the training. In this work, we propose an inductive transfer learning method that can augment learning models by infusing similar instances from different learning tasks in Natural Language Processing (NLP) domain. We propose to use instance representations from a source dataset, without inheriting anything else from the source learning model. Representations of the instances of source and target datasets are learned, retrieval of relevant source instances is performed using soft-attention mechanism and locality sensitive hashing and then augmented into the model during training on the target dataset. Therefore, while learning from a training data, we also simultaneously exploit and infuse relevant local instance-level information from an external data. Using this approach we have shown significant improvements over the baseline for three major news classification datasets. Experimental evaluations also show that the proposed approach reduces dependency on labeled data by a significant margin for comparable performance. With our proposed cross dataset learning procedure we show that one can achieve competitive/better performance than learning from a single dataset.",Instance-based Inductive Deep Transfer Learning by Cross-Dataset Querying with Locality Sensitive Hashing,"Supervised learning models are typically trained on a single dataset and the performance of these models rely heavily on the size of the dataset i.e., the amount of data available with ground truth. Learning algorithms try to generalize solely based on the data that it is presented with during the training. In this work, we propose an inductive transfer learning method that can augment learning models by infusing similar instances from different learning tasks in Natural Language Processing (NLP) domain. We propose to use instance representations from a source dataset, without inheriting anything else from the source learning model. Representations of the instances of source and target datasets are learned, retrieval of relevant source instances is performed using soft-attention mechanism and locality sensitive hashing and then augmented into the model during training on the target dataset. Therefore, while learning from a training data, we also simultaneously exploit and infuse relevant local instance-level information from an external data. Using this approach we have shown significant improvements over the baseline for three major news classification datasets. Experimental evaluations also show that the proposed approach reduces dependency on labeled data by a significant margin for comparable performance. 
With our proposed cross dataset learning procedure we show that one can achieve competitive/better performance than learning from a single dataset.",Instance-based Inductive Deep Transfer Learning by Cross-Dataset Querying with Locality Sensitive Hashing,"Supervised learning models are typically trained on a single dataset and the performance of these models rely heavily on the size of the dataset i.e., the amount of data available with ground truth. Learning algorithms try to generalize solely based on the data that it is presented with during the training. In this work, we propose an inductive transfer learning method that can augment learning models by infusing similar instances from different learning tasks in Natural Language Processing (NLP) domain. We propose to use instance representations from a source dataset, without inheriting anything else from the source learning model. Representations of the instances of source and target datasets are learned, retrieval of relevant source instances is performed using soft-attention mechanism and locality sensitive hashing and then augmented into the model during training on the target dataset. Therefore, while learning from a training data, we also simultaneously exploit and infuse relevant local instance-level information from an external data. Using this approach we have shown significant improvements over the baseline for three major news classification datasets. Experimental evaluations also show that the proposed approach reduces dependency on labeled data by a significant margin for comparable performance. With our proposed cross dataset learning procedure we show that one can achieve competitive/better performance than learning from a single dataset.",,"Instance-based Inductive Deep Transfer Learning by Cross-Dataset Querying with Locality Sensitive Hashing. Supervised learning models are typically trained on a single dataset and the performance of these models rely heavily on the size of the dataset i.e., the amount of data available with ground truth. Learning algorithms try to generalize solely based on the data that it is presented with during the training. In this work, we propose an inductive transfer learning method that can augment learning models by infusing similar instances from different learning tasks in Natural Language Processing (NLP) domain. We propose to use instance representations from a source dataset, without inheriting anything else from the source learning model. Representations of the instances of source and target datasets are learned, retrieval of relevant source instances is performed using soft-attention mechanism and locality sensitive hashing and then augmented into the model during training on the target dataset. Therefore, while learning from a training data, we also simultaneously exploit and infuse relevant local instance-level information from an external data. Using this approach we have shown significant improvements over the baseline for three major news classification datasets. Experimental evaluations also show that the proposed approach reduces dependency on labeled data by a significant margin for comparable performance. With our proposed cross dataset learning procedure we show that one can achieve competitive/better performance than learning from a single dataset.",2019
shaalan-etal-2009-syntactic,https://aclanthology.org/2009.mtsummit-caasl.9,0,,,,,,,"Syntactic Generation of Arabic in Interlingua-based Machine Translation Framework. Arabic is a highly inflectional language, with a rich morphology, relatively free word order, and two types of sentences: nominal and verbal. Arabic natural language processing in general is still underdeveloped and Arabic natural language generation (NLG) is even less developed. In particular, Arabic natural language generation from Interlingua was only investigated using template-based approaches. Moreover, tools used for other languages are not easily adaptable to Arabic due to the Arabic language complexity at both the morphological and syntactic levels. In this paper, we report our attempt at developing a rule-based Arabic generator for task-oriented interlingua-based spoken dialogues. Examples of syntactic generation results from the Arabic generator will be given and will illustrate how the system works. Our proposed syntactic generator has been effectively evaluated using real test data and achieved satisfactory results.",Syntactic Generation of {A}rabic in Interlingua-based Machine Translation Framework,"Arabic is a highly inflectional language, with a rich morphology, relatively free word order, and two types of sentences: nominal and verbal. Arabic natural language processing in general is still underdeveloped and Arabic natural language generation (NLG) is even less developed. In particular, Arabic natural language generation from Interlingua was only investigated using template-based approaches. Moreover, tools used for other languages are not easily adaptable to Arabic due to the Arabic language complexity at both the morphological and syntactic levels. In this paper, we report our attempt at developing a rule-based Arabic generator for task-oriented interlingua-based spoken dialogues. Examples of syntactic generation results from the Arabic generator will be given and will illustrate how the system works. Our proposed syntactic generator has been effectively evaluated using real test data and achieved satisfactory results.",Syntactic Generation of Arabic in Interlingua-based Machine Translation Framework,"Arabic is a highly inflectional language, with a rich morphology, relatively free word order, and two types of sentences: nominal and verbal. Arabic natural language processing in general is still underdeveloped and Arabic natural language generation (NLG) is even less developed. In particular, Arabic natural language generation from Interlingua was only investigated using template-based approaches. Moreover, tools used for other languages are not easily adaptable to Arabic due to the Arabic language complexity at both the morphological and syntactic levels. In this paper, we report our attempt at developing a rule-based Arabic generator for task-oriented interlingua-based spoken dialogues. Examples of syntactic generation results from the Arabic generator will be given and will illustrate how the system works. Our proposed syntactic generator has been effectively evaluated using real test data and achieved satisfactory results.",,"Syntactic Generation of Arabic in Interlingua-based Machine Translation Framework. Arabic is a highly inflectional language, with a rich morphology, relatively free word order, and two types of sentences: nominal and verbal. Arabic natural language processing in general is still underdeveloped and Arabic natural language generation (NLG) is even less developed. 
In particular, Arabic natural language generation from Interlingua was only investigated using template-based approaches. Moreover, tools used for other languages are not easily adaptable to Arabic due to the Arabic language complexity at both the morphological and syntactic levels. In this paper, we report our attempt at developing a rule-based Arabic generator for task-oriented interlingua-based spoken dialogues. Examples of syntactic generation results from the Arabic generator will be given and will illustrate how the system works. Our proposed syntactic generator has been effectively evaluated using real test data and achieved satisfactory results.",2009
cruz-etal-2017-annotating,https://aclanthology.org/W17-1808,1,,,,health,,,"Annotating Negation in Spanish Clinical Texts. In this paper we present ongoing work on annotating negation in Spanish clinical documents. A corpus of anamnesis and radiology reports has been annotated by two domain expert annotators with negation markers and negated events. The Dice coefficient for inter-annotator agreement is higher than 0.94 for negation markers and higher than 0.72 for negated events. The corpus will be publicly released when the annotation process is finished, constituting the first corpus annotated with negation for Spanish clinical reports available for the NLP community.",Annotating Negation in {S}panish Clinical Texts,"In this paper we present ongoing work on annotating negation in Spanish clinical documents. A corpus of anamnesis and radiology reports has been annotated by two domain expert annotators with negation markers and negated events. The Dice coefficient for inter-annotator agreement is higher than 0.94 for negation markers and higher than 0.72 for negated events. The corpus will be publicly released when the annotation process is finished, constituting the first corpus annotated with negation for Spanish clinical reports available for the NLP community.",Annotating Negation in Spanish Clinical Texts,"In this paper we present ongoing work on annotating negation in Spanish clinical documents. A corpus of anamnesis and radiology reports has been annotated by two domain expert annotators with negation markers and negated events. The Dice coefficient for inter-annotator agreement is higher than 0.94 for negation markers and higher than 0.72 for negated events. The corpus will be publicly released when the annotation process is finished, constituting the first corpus annotated with negation for Spanish clinical reports available for the NLP community.","This work has been partially funded by the Andalusian Regional Government (Bidamir Project TIC-07629) and the Spanish Government (IPHealth Project TIN2013-47153-C3-2-R). RM is supported by the Netherlands Organization for Scientific Research (NWO) via the Spinoza-prize awarded to Piek Vossen (SPI 30-673, 2014-2019).","Annotating Negation in Spanish Clinical Texts. In this paper we present ongoing work on annotating negation in Spanish clinical documents. A corpus of anamnesis and radiology reports has been annotated by two domain expert annotators with negation markers and negated events. The Dice coefficient for inter-annotator agreement is higher than 0.94 for negation markers and higher than 0.72 for negated events. The corpus will be publicly released when the annotation process is finished, constituting the first corpus annotated with negation for Spanish clinical reports available for the NLP community.",2017
wu-etal-2021-counterfactual,https://aclanthology.org/2021.naacl-main.156,1,,,,health,,,"Counterfactual Supporting Facts Extraction for Explainable Medical Record Based Diagnosis with Graph Network. Providing a reliable explanation for clinical diagnosis based on the Electronic Medical Record (EMR) is fundamental to the application of Artificial Intelligence in the medical field. Current methods mostly treat the EMR as a text sequence and provide explanations based on a precise medical knowledge base, which is disease-specific and difficult to obtain for experts in reality. Therefore, we propose a counterfactual multi-granularity graph supporting facts extraction (CMGE) method to extract supporting facts from the irregular EMR itself without external knowledge bases in this paper. Specifically, we first structure the sequence of the EMR into a hierarchical graph network and then obtain the causal relationship between multi-granularity features and diagnosis results through counterfactual intervention on the graph. Features having the strongest causal connection with the results provide interpretive support for the diagnosis. Experimental results on real Chinese EMRs of the lymphedema demonstrate that our method can diagnose four types of EMRs correctly, and can provide accurate supporting facts for the results. More importantly, the results on different diseases demonstrate the robustness of our approach, which represents the potential application in the medical field 1 .",Counterfactual Supporting Facts Extraction for Explainable Medical Record Based Diagnosis with Graph Network,"Providing a reliable explanation for clinical diagnosis based on the Electronic Medical Record (EMR) is fundamental to the application of Artificial Intelligence in the medical field. Current methods mostly treat the EMR as a text sequence and provide explanations based on a precise medical knowledge base, which is disease-specific and difficult to obtain for experts in reality. Therefore, we propose a counterfactual multi-granularity graph supporting facts extraction (CMGE) method to extract supporting facts from the irregular EMR itself without external knowledge bases in this paper. Specifically, we first structure the sequence of the EMR into a hierarchical graph network and then obtain the causal relationship between multi-granularity features and diagnosis results through counterfactual intervention on the graph. Features having the strongest causal connection with the results provide interpretive support for the diagnosis. Experimental results on real Chinese EMRs of the lymphedema demonstrate that our method can diagnose four types of EMRs correctly, and can provide accurate supporting facts for the results. More importantly, the results on different diseases demonstrate the robustness of our approach, which represents the potential application in the medical field 1 .",Counterfactual Supporting Facts Extraction for Explainable Medical Record Based Diagnosis with Graph Network,"Providing a reliable explanation for clinical diagnosis based on the Electronic Medical Record (EMR) is fundamental to the application of Artificial Intelligence in the medical field. Current methods mostly treat the EMR as a text sequence and provide explanations based on a precise medical knowledge base, which is disease-specific and difficult to obtain for experts in reality. 
Therefore, we propose a counterfactual multi-granularity graph supporting facts extraction (CMGE) method to extract supporting facts from the irregular EMR itself without external knowledge bases in this paper. Specifically, we first structure the sequence of the EMR into a hierarchical graph network and then obtain the causal relationship between multi-granularity features and diagnosis results through counterfactual intervention on the graph. Features having the strongest causal connection with the results provide interpretive support for the diagnosis. Experimental results on real Chinese EMRs of the lymphedema demonstrate that our method can diagnose four types of EMRs correctly, and can provide accurate supporting facts for the results. More importantly, the results on different diseases demonstrate the robustness of our approach, which represents the potential application in the medical field 1 .",This work is supported by the National Key Research and Development Program of China (No.2018YFB1005104) and the Key Research Program of the Chinese Academy of Sciences (ZDBS-SSW-JSC006).,"Counterfactual Supporting Facts Extraction for Explainable Medical Record Based Diagnosis with Graph Network. Providing a reliable explanation for clinical diagnosis based on the Electronic Medical Record (EMR) is fundamental to the application of Artificial Intelligence in the medical field. Current methods mostly treat the EMR as a text sequence and provide explanations based on a precise medical knowledge base, which is disease-specific and difficult to obtain for experts in reality. Therefore, we propose a counterfactual multi-granularity graph supporting facts extraction (CMGE) method to extract supporting facts from the irregular EMR itself without external knowledge bases in this paper. Specifically, we first structure the sequence of the EMR into a hierarchical graph network and then obtain the causal relationship between multi-granularity features and diagnosis results through counterfactual intervention on the graph. Features having the strongest causal connection with the results provide interpretive support for the diagnosis. Experimental results on real Chinese EMRs of the lymphedema demonstrate that our method can diagnose four types of EMRs correctly, and can provide accurate supporting facts for the results. More importantly, the results on different diseases demonstrate the robustness of our approach, which represents the potential application in the medical field 1 .",2021
esteve-etal-2010-epac,http://www.lrec-conf.org/proceedings/lrec2010/pdf/650_Paper.pdf,0,,,,,,,"The EPAC Corpus: Manual and Automatic Annotations of Conversational Speech in French Broadcast News. This paper presents the EPAC corpus which is composed by a set of 100 hours of conversational speech manually transcribed and by the outputs of automatic tools (automatic segmentation, transcription, POS tagging, etc.) applied on the entire French ESTER 1 audio corpus: this concerns about 1700 hours of audio recordings from radiophonic shows. This corpus was built during the EPAC project funded by the French Research Agency (ANR) from 2007 to 2010. This corpus increases significantly the amount of French manually transcribed audio recordings easily available and it is now included as a part of the ESTER 1 corpus in the ELRA catalog without additional cost. By providing a large set of automatic outputs of speech processing tools, the EPAC corpus should be useful to researchers who want to work on such data without having to develop and deal with such tools. These automatic annotations are various: segmentation and speaker diarization, one-best hypotheses from the LIUM automatic speech recognition system with confidence measures, but also word-lattices and confusion networks, named entities, part-of-speech tags, chunks, etc. The 100 hours of speech manually transcribed were split into three data sets in order to get an official training corpus, an official development corpus and an official test corpus. These data sets were used to develop and to evaluate some automatic tools which have been used to process the 1700 hours of audio recording. For example, on the EPAC test data set our ASR system yields a word error rate equals to 17.25%.",The {EPAC} Corpus: Manual and Automatic Annotations of Conversational Speech in {F}rench Broadcast News,"This paper presents the EPAC corpus which is composed by a set of 100 hours of conversational speech manually transcribed and by the outputs of automatic tools (automatic segmentation, transcription, POS tagging, etc.) applied on the entire French ESTER 1 audio corpus: this concerns about 1700 hours of audio recordings from radiophonic shows. This corpus was built during the EPAC project funded by the French Research Agency (ANR) from 2007 to 2010. This corpus increases significantly the amount of French manually transcribed audio recordings easily available and it is now included as a part of the ESTER 1 corpus in the ELRA catalog without additional cost. By providing a large set of automatic outputs of speech processing tools, the EPAC corpus should be useful to researchers who want to work on such data without having to develop and deal with such tools. These automatic annotations are various: segmentation and speaker diarization, one-best hypotheses from the LIUM automatic speech recognition system with confidence measures, but also word-lattices and confusion networks, named entities, part-of-speech tags, chunks, etc. The 100 hours of speech manually transcribed were split into three data sets in order to get an official training corpus, an official development corpus and an official test corpus. These data sets were used to develop and to evaluate some automatic tools which have been used to process the 1700 hours of audio recording. 
For example, on the EPAC test data set our ASR system yields a word error rate equals to 17.25%.",The EPAC Corpus: Manual and Automatic Annotations of Conversational Speech in French Broadcast News,"This paper presents the EPAC corpus which is composed by a set of 100 hours of conversational speech manually transcribed and by the outputs of automatic tools (automatic segmentation, transcription, POS tagging, etc.) applied on the entire French ESTER 1 audio corpus: this concerns about 1700 hours of audio recordings from radiophonic shows. This corpus was built during the EPAC project funded by the French Research Agency (ANR) from 2007 to 2010. This corpus increases significantly the amount of French manually transcribed audio recordings easily available and it is now included as a part of the ESTER 1 corpus in the ELRA catalog without additional cost. By providing a large set of automatic outputs of speech processing tools, the EPAC corpus should be useful to researchers who want to work on such data without having to develop and deal with such tools. These automatic annotations are various: segmentation and speaker diarization, one-best hypotheses from the LIUM automatic speech recognition system with confidence measures, but also word-lattices and confusion networks, named entities, part-of-speech tags, chunks, etc. The 100 hours of speech manually transcribed were split into three data sets in order to get an official training corpus, an official development corpus and an official test corpus. These data sets were used to develop and to evaluate some automatic tools which have been used to process the 1700 hours of audio recording. For example, on the EPAC test data set our ASR system yields a word error rate equals to 17.25%.",This research was supported by the ANR (Agence Nationale de la Recherche) under contract number ANR-06-MDCA-006.,"The EPAC Corpus: Manual and Automatic Annotations of Conversational Speech in French Broadcast News. This paper presents the EPAC corpus which is composed by a set of 100 hours of conversational speech manually transcribed and by the outputs of automatic tools (automatic segmentation, transcription, POS tagging, etc.) applied on the entire French ESTER 1 audio corpus: this concerns about 1700 hours of audio recordings from radiophonic shows. This corpus was built during the EPAC project funded by the French Research Agency (ANR) from 2007 to 2010. This corpus increases significantly the amount of French manually transcribed audio recordings easily available and it is now included as a part of the ESTER 1 corpus in the ELRA catalog without additional cost. By providing a large set of automatic outputs of speech processing tools, the EPAC corpus should be useful to researchers who want to work on such data without having to develop and deal with such tools. These automatic annotations are various: segmentation and speaker diarization, one-best hypotheses from the LIUM automatic speech recognition system with confidence measures, but also word-lattices and confusion networks, named entities, part-of-speech tags, chunks, etc. The 100 hours of speech manually transcribed were split into three data sets in order to get an official training corpus, an official development corpus and an official test corpus. These data sets were used to develop and to evaluate some automatic tools which have been used to process the 1700 hours of audio recording. For example, on the EPAC test data set our ASR system yields a word error rate equals to 17.25%.",2010
sagot-martinez-alonso-2017-improving,https://aclanthology.org/W17-6304,0,,,,,,,"Improving neural tagging with lexical information. Neural part-of-speech tagging has achieved competitive results with the incorporation of character-based and pre-trained word embeddings. In this paper, we show that a state-of-the-art bi-LSTM tagger can benefit from using information from morphosyntactic lexicons as additional input. The tagger, trained on several dozen languages, shows a consistent, average improvement when using lexical information, even when also using character-based embeddings, thus showing the complementarity of the different sources of lexical information. The improvements are particularly important for the smaller datasets.",Improving neural tagging with lexical information,"Neural part-of-speech tagging has achieved competitive results with the incorporation of character-based and pre-trained word embeddings. In this paper, we show that a state-of-the-art bi-LSTM tagger can benefit from using information from morphosyntactic lexicons as additional input. The tagger, trained on several dozen languages, shows a consistent, average improvement when using lexical information, even when also using character-based embeddings, thus showing the complementarity of the different sources of lexical information. The improvements are particularly important for the smaller datasets.",Improving neural tagging with lexical information,"Neural part-of-speech tagging has achieved competitive results with the incorporation of character-based and pre-trained word embeddings. In this paper, we show that a state-of-the-art bi-LSTM tagger can benefit from using information from morphosyntactic lexicons as additional input. The tagger, trained on several dozen languages, shows a consistent, average improvement when using lexical information, even when also using character-based embeddings, thus showing the complementarity of the different sources of lexical information. The improvements are particularly important for the smaller datasets.",,"Improving neural tagging with lexical information. Neural part-of-speech tagging has achieved competitive results with the incorporation of character-based and pre-trained word embeddings. In this paper, we show that a state-of-the-art bi-LSTM tagger can benefit from using information from morphosyntactic lexicons as additional input. The tagger, trained on several dozen languages, shows a consistent, average improvement when using lexical information, even when also using character-based embeddings, thus showing the complementarity of the different sources of lexical information. The improvements are particularly important for the smaller datasets.",2017
de-vriend-etal-2002-using,http://www.lrec-conf.org/proceedings/lrec2002/pdf/264.pdf,0,,,,,,,"Using Grammatical Description as a Metalanguage Resource. The present paper is concerned with the advantages of a digitised descriptive grammar over its traditional print version. First we discuss the process of up-conversion of the ANS material and the main advantages the E-ANS has for the editorial staff. Then from the perspective of language resources, we discuss different applications of the grammatical descriptions for both human and machine users. The discussion is based on our experiences during the project 'Elektronisering van de ANS', a project in progress that is aimed at developing a digital version of the Dutch reference grammar Algemene Nederlandse Spraakkunst (ANS).",Using Grammatical Description as a Metalanguage Resource,"The present paper is concerned with the advantages of a digitised descriptive grammar over its traditional print version. First we discuss the process of up-conversion of the ANS material and the main advantages the E-ANS has for the editorial staff. Then from the perspective of language resources, we discuss different applications of the grammatical descriptions for both human and machine users. The discussion is based on our experiences during the project 'Elektronisering van de ANS', a project in progress that is aimed at developing a digital version of the Dutch reference grammar Algemene Nederlandse Spraakkunst (ANS).",Using Grammatical Description as a Metalanguage Resource,"The present paper is concerned with the advantages of a digitised descriptive grammar over its traditional print version. First we discuss the process of up-conversion of the ANS material and the main advantages the E-ANS has for the editorial staff. Then from the perspective of language resources, we discuss different applications of the grammatical descriptions for both human and machine users. The discussion is based on our experiences during the project 'Elektronisering van de ANS', a project in progress that is aimed at developing a digital version of the Dutch reference grammar Algemene Nederlandse Spraakkunst (ANS).","We would like to thank the members of the steering committee of this project for their comments and suggestions on the work presented in this paper: Gosse Bouma, Walter Daelemans, Carel Jansen, Gerard Kempen, Luuk Van Waes.","Using Grammatical Description as a Metalanguage Resource. The present paper is concerned with the advantages of a digitised descriptive grammar over its traditional print version. First we discuss the process of up-conversion of the ANS material and the main advantages the E-ANS has for the editorial staff. Then from the perspective of language resources, we discuss different applications of the grammatical descriptions for both human and machine users. The discussion is based on our experiences during the project 'Elektronisering van de ANS', a project in progress that is aimed at developing a digital version of the Dutch reference grammar Algemene Nederlandse Spraakkunst (ANS).",2002
tomeh-etal-2009-complexity,https://aclanthology.org/2009.mtsummit-papers.17,0,,,,,,,"Complexity-Based Phrase-Table Filtering for Statistical Machine Translation. We describe an approach for filtering phrase tables in a Statistical Machine Translation system, which relies on a statistical independence measure called Noise, first introduced in (Moore, 2004). While previous work by (Johnson et al., 2007) also addressed the question of phrase table filtering, it relied on a simpler independence measure, the p-value, which is theoretically less satisfying than the Noise in this context. In this paper, we use Noise as the filtering criterion, and show that when we partition the bi-phrase tables in several sub-classes according to their complexity, using Noise leads to improvements in BLEU score that are unreachable using pvalue, while allowing a similar amount of pruning of the phrase tables.",Complexity-Based Phrase-Table Filtering for Statistical Machine Translation,"We describe an approach for filtering phrase tables in a Statistical Machine Translation system, which relies on a statistical independence measure called Noise, first introduced in (Moore, 2004). While previous work by (Johnson et al., 2007) also addressed the question of phrase table filtering, it relied on a simpler independence measure, the p-value, which is theoretically less satisfying than the Noise in this context. In this paper, we use Noise as the filtering criterion, and show that when we partition the bi-phrase tables in several sub-classes according to their complexity, using Noise leads to improvements in BLEU score that are unreachable using pvalue, while allowing a similar amount of pruning of the phrase tables.",Complexity-Based Phrase-Table Filtering for Statistical Machine Translation,"We describe an approach for filtering phrase tables in a Statistical Machine Translation system, which relies on a statistical independence measure called Noise, first introduced in (Moore, 2004). While previous work by (Johnson et al., 2007) also addressed the question of phrase table filtering, it relied on a simpler independence measure, the p-value, which is theoretically less satisfying than the Noise in this context. In this paper, we use Noise as the filtering criterion, and show that when we partition the bi-phrase tables in several sub-classes according to their complexity, using Noise leads to improvements in BLEU score that are unreachable using pvalue, while allowing a similar amount of pruning of the phrase tables.","This work was supported by the European Commission under the IST Project SMART (FP6-033917). Thanks to Eric Gaussier for his support at the be-ginning of this project, and to Sara Stymne and the anonymous reviewers for detailed and insightful comments.","Complexity-Based Phrase-Table Filtering for Statistical Machine Translation. We describe an approach for filtering phrase tables in a Statistical Machine Translation system, which relies on a statistical independence measure called Noise, first introduced in (Moore, 2004). While previous work by (Johnson et al., 2007) also addressed the question of phrase table filtering, it relied on a simpler independence measure, the p-value, which is theoretically less satisfying than the Noise in this context. 
In this paper, we use Noise as the filtering criterion, and show that when we partition the bi-phrase tables in several sub-classes according to their complexity, using Noise leads to improvements in BLEU score that are unreachable using pvalue, while allowing a similar amount of pruning of the phrase tables.",2009
kulkarni-boyer-2018-toward,https://aclanthology.org/W18-0532,1,,,,education,,,"Toward Data-Driven Tutorial Question Answering with Deep Learning Conversational Models. There has been an increase in popularity of data-driven question answering systems given their recent success. This paper explores the possibility of building a tutorial question answering system for Java programming from data sampled from a community-based question answering forum. This paper reports on the creation of a dataset that could support building such a tutorial question answering system and discusses the methodology to create the 106,386 question strong dataset. We investigate how retrieval-based and generative models perform on the given dataset. The work also investigates the usefulness of using hybrid approaches such as combining retrieval-based and generative models. The results indicate that building datadriven tutorial systems using communitybased question answering forums holds significant promise.",Toward Data-Driven Tutorial Question Answering with Deep Learning Conversational Models,"There has been an increase in popularity of data-driven question answering systems given their recent success. This paper explores the possibility of building a tutorial question answering system for Java programming from data sampled from a community-based question answering forum. This paper reports on the creation of a dataset that could support building such a tutorial question answering system and discusses the methodology to create the 106,386 question strong dataset. We investigate how retrieval-based and generative models perform on the given dataset. The work also investigates the usefulness of using hybrid approaches such as combining retrieval-based and generative models. The results indicate that building datadriven tutorial systems using communitybased question answering forums holds significant promise.",Toward Data-Driven Tutorial Question Answering with Deep Learning Conversational Models,"There has been an increase in popularity of data-driven question answering systems given their recent success. This paper explores the possibility of building a tutorial question answering system for Java programming from data sampled from a community-based question answering forum. This paper reports on the creation of a dataset that could support building such a tutorial question answering system and discusses the methodology to create the 106,386 question strong dataset. We investigate how retrieval-based and generative models perform on the given dataset. The work also investigates the usefulness of using hybrid approaches such as combining retrieval-based and generative models. The results indicate that building datadriven tutorial systems using communitybased question answering forums holds significant promise.",,"Toward Data-Driven Tutorial Question Answering with Deep Learning Conversational Models. There has been an increase in popularity of data-driven question answering systems given their recent success. This paper explores the possibility of building a tutorial question answering system for Java programming from data sampled from a community-based question answering forum. This paper reports on the creation of a dataset that could support building such a tutorial question answering system and discusses the methodology to create the 106,386 question strong dataset. We investigate how retrieval-based and generative models perform on the given dataset. 
The work also investigates the usefulness of using hybrid approaches such as combining retrieval-based and generative models. The results indicate that building datadriven tutorial systems using communitybased question answering forums holds significant promise.",2018
felt-etal-2015-making,https://aclanthology.org/K15-1020,0,,,,,,,"Making the Most of Crowdsourced Document Annotations: Confused Supervised LDA. Corpus labeling projects frequently use low-cost workers from microtask marketplaces; however, these workers are often inexperienced or have misaligned incentives. Crowdsourcing models must be robust to the resulting systematic and nonsystematic inaccuracies. We introduce a novel crowdsourcing model that adapts the discrete supervised topic model sLDA to handle multiple corrupt, usually conflicting (hence ""confused"") supervision signals. Our model achieves significant gains over previous work in the accuracy of deduced ground truth.",Making the Most of Crowdsourced Document Annotations: Confused Supervised {LDA},"Corpus labeling projects frequently use low-cost workers from microtask marketplaces; however, these workers are often inexperienced or have misaligned incentives. Crowdsourcing models must be robust to the resulting systematic and nonsystematic inaccuracies. We introduce a novel crowdsourcing model that adapts the discrete supervised topic model sLDA to handle multiple corrupt, usually conflicting (hence ""confused"") supervision signals. Our model achieves significant gains over previous work in the accuracy of deduced ground truth.",Making the Most of Crowdsourced Document Annotations: Confused Supervised LDA,"Corpus labeling projects frequently use low-cost workers from microtask marketplaces; however, these workers are often inexperienced or have misaligned incentives. Crowdsourcing models must be robust to the resulting systematic and nonsystematic inaccuracies. We introduce a novel crowdsourcing model that adapts the discrete supervised topic model sLDA to handle multiple corrupt, usually conflicting (hence ""confused"") supervision signals. Our model achieves significant gains over previous work in the accuracy of deduced ground truth.","Acknowledgments This work was supported by the collaborative NSF Grant IIS-1409739 (BYU) and IIS-1409287 (UMD). Boyd-Graber is also supported by NSF grants IIS-1320538 and NCSE-1422492. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsor.","Making the Most of Crowdsourced Document Annotations: Confused Supervised LDA. Corpus labeling projects frequently use low-cost workers from microtask marketplaces; however, these workers are often inexperienced or have misaligned incentives. Crowdsourcing models must be robust to the resulting systematic and nonsystematic inaccuracies. We introduce a novel crowdsourcing model that adapts the discrete supervised topic model sLDA to handle multiple corrupt, usually conflicting (hence ""confused"") supervision signals. Our model achieves significant gains over previous work in the accuracy of deduced ground truth.",2015
fernando-2013-segmenting,https://aclanthology.org/W13-3004,0,,,,,,,"Segmenting Temporal Intervals for Tense and Aspect. Timelines interpreting interval temporal logic formulas are segmented into strings which serve as semantic representations for tense and aspect. The strings have bounded but refinable granularity, suitable for analyzing (im)perfectivity, durativity, telicity, and various relations including branching.",Segmenting Temporal Intervals for Tense and Aspect,"Timelines interpreting interval temporal logic formulas are segmented into strings which serve as semantic representations for tense and aspect. The strings have bounded but refinable granularity, suitable for analyzing (im)perfectivity, durativity, telicity, and various relations including branching.",Segmenting Temporal Intervals for Tense and Aspect,"Timelines interpreting interval temporal logic formulas are segmented into strings which serve as semantic representations for tense and aspect. The strings have bounded but refinable granularity, suitable for analyzing (im)perfectivity, durativity, telicity, and various relations including branching.",,"Segmenting Temporal Intervals for Tense and Aspect. Timelines interpreting interval temporal logic formulas are segmented into strings which serve as semantic representations for tense and aspect. The strings have bounded but refinable granularity, suitable for analyzing (im)perfectivity, durativity, telicity, and various relations including branching.",2013
bernth-1997-easyenglish,https://aclanthology.org/A97-1024,0,,,,,,,"EasyEnglish: A Tool for Improving Document Quality. We describe the authoring tool, EasyEnglish, which is part of IBM's internal SGML editing environment, Information Development Workbench. EasyEnglish helps writers produce clearer and simpler English by pointing out ambiguity and complexity as well as performing some standard grammar checking. Where appropriate, EasyEnglish makes suggestions for rephrasings that may be substituted directly into the text by using the editor interface. EasyEnglish is based on a full parse by English Slot Grammar; this makes it possible to produce a higher degree of accuracy in error messages as well as handle a large variety of texts.",{E}asy{E}nglish: A Tool for Improving Document Quality,"We describe the authoring tool, EasyEnglish, which is part of IBM's internal SGML editing environment, Information Development Workbench. EasyEnglish helps writers produce clearer and simpler English by pointing out ambiguity and complexity as well as performing some standard grammar checking. Where appropriate, EasyEnglish makes suggestions for rephrasings that may be substituted directly into the text by using the editor interface. EasyEnglish is based on a full parse by English Slot Grammar; this makes it possible to produce a higher degree of accuracy in error messages as well as handle a large variety of texts.",EasyEnglish: A Tool for Improving Document Quality,"We describe the authoring tool, EasyEnglish, which is part of IBM's internal SGML editing environment, Information Development Workbench. EasyEnglish helps writers produce clearer and simpler English by pointing out ambiguity and complexity as well as performing some standard grammar checking. Where appropriate, EasyEnglish makes suggestions for rephrasings that may be substituted directly into the text by using the editor interface. EasyEnglish is based on a full parse by English Slot Grammar; this makes it possible to produce a higher degree of accuracy in error messages as well as handle a large variety of texts.","I would like to thank the following persons for contributions to EasyEnglish and to this paper: Michael McCord of IBM Research for use of his ESG grammar and parser, for contributing ideas to the design and implementation, for extensive work on the lexicons and lexical utilities, and for commenting on this paper; Andrew Tanabe of the IBM AS/400 Division for contributing ideas for some of the rules, for coordinating users and user input, for extensive testing, and for his role in incorporating EasyEnglish in IDWB; Sue Medeiros of IBM Research for reading and commenting on this paper.","EasyEnglish: A Tool for Improving Document Quality. We describe the authoring tool, EasyEnglish, which is part of IBM's internal SGML editing environment, Information Development Workbench. EasyEnglish helps writers produce clearer and simpler English by pointing out ambiguity and complexity as well as performing some standard grammar checking. Where appropriate, EasyEnglish makes suggestions for rephrasings that may be substituted directly into the text by using the editor interface. EasyEnglish is based on a full parse by English Slot Grammar; this makes it possible to produce a higher degree of accuracy in error messages as well as handle a large variety of texts.",1997
galvez-etal-2020-unifying,https://aclanthology.org/2020.sigdial-1.27,0,,,,,,,"A unifying framework for modeling acoustic/prosodic entrainment: definition and evaluation on two large corpora. Acoustic/prosodic (a/p) entrainment has been associated with multiple positive social aspects of human-human conversations. However, research on its effects is still preliminary, first because how to model it is far from standardized, and second because most of the reported findings rely on small corpora or on corpora collected in experimental setups. The present article has a twofold purpose: 1) it proposes a unifying statistical framework for modeling a/p entrainment, and 2) it tests on two large corpora of spontaneous telephone interactions whether three metrics derived from this framework predict positive social aspects of the conversations. The corpora differ in their spoken language, domain, and positive social outcome attached. To our knowledge, this is the first article studying relations between a/p entrainment and positive social outcomes in such large corpora of spontaneous dialog. Our results suggest that our metrics effectively predict, up to some extent, positive social aspects of conversations, which not only validates the methodology, but also provides further insights into the elusive topic of entrainment in human-human conversation.",A unifying framework for modeling acoustic/prosodic entrainment: definition and evaluation on two large corpora,"Acoustic/prosodic (a/p) entrainment has been associated with multiple positive social aspects of human-human conversations. However, research on its effects is still preliminary, first because how to model it is far from standardized, and second because most of the reported findings rely on small corpora or on corpora collected in experimental setups. The present article has a twofold purpose: 1) it proposes a unifying statistical framework for modeling a/p entrainment, and 2) it tests on two large corpora of spontaneous telephone interactions whether three metrics derived from this framework predict positive social aspects of the conversations. The corpora differ in their spoken language, domain, and positive social outcome attached. To our knowledge, this is the first article studying relations between a/p entrainment and positive social outcomes in such large corpora of spontaneous dialog. Our results suggest that our metrics effectively predict, up to some extent, positive social aspects of conversations, which not only validates the methodology, but also provides further insights into the elusive topic of entrainment in human-human conversation.",A unifying framework for modeling acoustic/prosodic entrainment: definition and evaluation on two large corpora,"Acoustic/prosodic (a/p) entrainment has been associated with multiple positive social aspects of human-human conversations. However, research on its effects is still preliminary, first because how to model it is far from standardized, and second because most of the reported findings rely on small corpora or on corpora collected in experimental setups. The present article has a twofold purpose: 1) it proposes a unifying statistical framework for modeling a/p entrainment, and 2) it tests on two large corpora of spontaneous telephone interactions whether three metrics derived from this framework predict positive social aspects of the conversations. The corpora differ in their spoken language, domain, and positive social outcome attached. 
To our knowledge, this is the first article studying relations between a/p entrainment and positive social outcomes in such large corpora of spontaneous dialog. Our results suggest that our metrics effectively predict, up to some extent, positive social aspects of conversations, which not only validates the methodology, but also provides further insights into the elusive topic of entrainment in human-human conversation.",,"A unifying framework for modeling acoustic/prosodic entrainment: definition and evaluation on two large corpora. Acoustic/prosodic (a/p) entrainment has been associated with multiple positive social aspects of human-human conversations. However, research on its effects is still preliminary, first because how to model it is far from standardized, and second because most of the reported findings rely on small corpora or on corpora collected in experimental setups. The present article has a twofold purpose: 1) it proposes a unifying statistical framework for modeling a/p entrainment, and 2) it tests on two large corpora of spontaneous telephone interactions whether three metrics derived from this framework predict positive social aspects of the conversations. The corpora differ in their spoken language, domain, and positive social outcome attached. To our knowledge, this is the first article studying relations between a/p entrainment and positive social outcomes in such large corpora of spontaneous dialog. Our results suggest that our metrics effectively predict, up to some extent, positive social aspects of conversations, which not only validates the methodology, but also provides further insights into the elusive topic of entrainment in human-human conversation.",2020
ustalov-etal-2018-unsupervised,https://aclanthology.org/L18-1164,0,,,,,,,"An Unsupervised Word Sense Disambiguation System for Under-Resourced Languages. In this paper, we present Watasense, an unsupervised system for word sense disambiguation. Given a sentence, the system chooses the most relevant sense of each input word with respect to the semantic similarity between the given sentence and the synset constituting the sense of the target word. Watasense has two modes of operation. The sparse mode uses the traditional vector space model to estimate the most similar word sense corresponding to its context. The dense mode, instead, uses synset embeddings to cope with the sparsity problem. We describe the architecture of the present system and also conduct its evaluation on three different lexical semantic resources for Russian. We found that the dense mode substantially outperforms the sparse one on all datasets according to the adjusted Rand index.",An Unsupervised Word Sense Disambiguation System for Under-Resourced Languages,"In this paper, we present Watasense, an unsupervised system for word sense disambiguation. Given a sentence, the system chooses the most relevant sense of each input word with respect to the semantic similarity between the given sentence and the synset constituting the sense of the target word. Watasense has two modes of operation. The sparse mode uses the traditional vector space model to estimate the most similar word sense corresponding to its context. The dense mode, instead, uses synset embeddings to cope with the sparsity problem. We describe the architecture of the present system and also conduct its evaluation on three different lexical semantic resources for Russian. We found that the dense mode substantially outperforms the sparse one on all datasets according to the adjusted Rand index.",An Unsupervised Word Sense Disambiguation System for Under-Resourced Languages,"In this paper, we present Watasense, an unsupervised system for word sense disambiguation. Given a sentence, the system chooses the most relevant sense of each input word with respect to the semantic similarity between the given sentence and the synset constituting the sense of the target word. Watasense has two modes of operation. The sparse mode uses the traditional vector space model to estimate the most similar word sense corresponding to its context. The dense mode, instead, uses synset embeddings to cope with the sparsity problem. We describe the architecture of the present system and also conduct its evaluation on three different lexical semantic resources for Russian. We found that the dense mode substantially outperforms the sparse one on all datasets according to the adjusted Rand index.",,"An Unsupervised Word Sense Disambiguation System for Under-Resourced Languages. In this paper, we present Watasense, an unsupervised system for word sense disambiguation. Given a sentence, the system chooses the most relevant sense of each input word with respect to the semantic similarity between the given sentence and the synset constituting the sense of the target word. Watasense has two modes of operation. The sparse mode uses the traditional vector space model to estimate the most similar word sense corresponding to its context. The dense mode, instead, uses synset embeddings to cope with the sparsity problem. We describe the architecture of the present system and also conduct its evaluation on three different lexical semantic resources for Russian. 
We found that the dense mode substantially outperforms the sparse one on all datasets according to the adjusted Rand index.",2018
zelenko-etal-2004-coreference,https://aclanthology.org/W04-0704,0,,,,,,,"Coreference Resolution for Information Extraction. We compare several approaches to coreference resolution in the context of information extraction. We present a loss-based coding framework for coreference resolution and a greedy algorithm for approximate coreference coding, in conjunction with Perceptron and logistic regression learning algorithms. We experimentally evaluate the presented approaches using the Automatic Content Extraction evaluation methodology, with promising results.",Coreference Resolution for Information Extraction,"We compare several approaches to coreference resolution in the context of information extraction. We present a loss-based coding framework for coreference resolution and a greedy algorithm for approximate coreference coding, in conjunction with Perceptron and logistic regression learning algorithms. We experimentally evaluate the presented approaches using the Automatic Content Extraction evaluation methodology, with promising results.",Coreference Resolution for Information Extraction,"We compare several approaches to coreference resolution in the context of information extraction. We present a loss-based coding framework for coreference resolution and a greedy algorithm for approximate coreference coding, in conjunction with Perceptron and logistic regression learning algorithms. We experimentally evaluate the presented approaches using the Automatic Content Extraction evaluation methodology, with promising results.",,"Coreference Resolution for Information Extraction. We compare several approaches to coreference resolution in the context of information extraction. We present a loss-based coding framework for coreference resolution and a greedy algorithm for approximate coreference coding, in conjunction with Perceptron and logistic regression learning algorithms. We experimentally evaluate the presented approaches using the Automatic Content Extraction evaluation methodology, with promising results.
Ðº¸¾¼¼¾µº Ï Ð×Ó Ö Ù Ø ÓÖ Ö Ò Ó Ò ÔÖÓ ¹ Ð Ñ ØÓ ÓÖÖ Ð Ø ÓÒ ÐÙ×Ø Ö Ò ÔÖÓ Ð Ñ¸ ÙØ Ù× « Ö ÒØ ÔÔÖÓÜ Ñ Ø ÓÒ Ð ÓÖ Ø Ñ ÓÖ Ø× ×Ó¹ ÐÙØ ÓÒº ÁÒ Ø × Ò Ó ØÖ Ò Ò Ø ¸Û ÒÓØ Ô¹ ÔÐ Ø ÓÒ Ó ÐÙ×Ø Ö Ò ÓÖ ÓÖ Ö Ò Ó ÒÓÙÒ Ô Ö × ×´ Ö Ò Ï ×Ø «¸½ µº AE Ñ ÐÝØ ÒÓÙÒ Ô Ö × ØØÖ ÙØ × Ö Ù× ØÓ ¬Ò ×Ø Ò ÙÒ Ø ÓÒ Ø Ø × Ù× Û Ø Ò ÙÖ ×¹ Ø ÐÙ×Ø Ö Ò Ð ÓÖ Ø Ñ ØÓ ÔÖÓ Ù ÐÙ×Ø Ö Ò Ó ÒÓÙÒ Ô Ö × × Ø Ø Ñ× ØÓ ÓÖÖ ×ÔÓÒ ØÓ Ø ÓÖ Ö Ò Ô ÖØ Ø ÓÒ Ó Ø ÓÖÖ ×ÔÓÒ Ò ÒÓÙÒ Ô Ö × ÒØ Ø ×º ÁÒ Ø ÓÒ ØÓ Ø ÛÓÖ ÓÒ ÓÖ Ö Ò Ö ×¹ ÓÐÙØ ÓÒ Û Ø Ò Ó ÙÑ ÒØ×¸Ø Ö × Ò Ñ Ö ¹ Ò Ó Ý ÓÒ ÑÓÖ Ò Ö Ð ÒØ ØÝ ÙÒ ÖØ ÒØÝ ÔÖÓ Ð Ñ¸Û × ÓÒ ÖÒ Û Ø Ø ÖÑ Ò Ò Û Ø Ö ØÛÓ Ö ÓÖ × × Ö Ø × Ñ ÒØ ØÝ È ×ÙÐ Ø Ðº¸¾¼¼¿µº ¿ ÓÖ Ö Ò Ê ×ÓÐÙØ ÓÒ Ö Ø",2004
wu-weld-2010-open,https://aclanthology.org/P10-1013,0,,,,,,,"Open Information Extraction Using Wikipedia. Information-extraction (IE) systems seek to distill semantic relations from naturallanguage text, but most systems use supervised learning of relation-specific examples and are thus limited by the availability of training data. Open IE systems such as TextRunner, on the other hand, aim to handle the unbounded number of relations found on the Web. But how well can these open systems perform? This paper presents WOE, an open IE system which improves dramatically on TextRunner's precision and recall. The key to WOE's performance is a novel form of self-supervised learning for open extractors-using heuristic matches between Wikipedia infobox attribute values and corresponding sentences to construct training data. Like TextRunner, WOE's extractor eschews lexicalized features and handles an unbounded set of semantic relations. WOE can operate in two modes: when restricted to POS tag features, it runs as quickly as TextRunner, but when set to use dependency-parse features its precision and recall rise even higher.",Open Information Extraction Using {W}ikipedia,"Information-extraction (IE) systems seek to distill semantic relations from naturallanguage text, but most systems use supervised learning of relation-specific examples and are thus limited by the availability of training data. Open IE systems such as TextRunner, on the other hand, aim to handle the unbounded number of relations found on the Web. But how well can these open systems perform? This paper presents WOE, an open IE system which improves dramatically on TextRunner's precision and recall. The key to WOE's performance is a novel form of self-supervised learning for open extractors-using heuristic matches between Wikipedia infobox attribute values and corresponding sentences to construct training data. Like TextRunner, WOE's extractor eschews lexicalized features and handles an unbounded set of semantic relations. WOE can operate in two modes: when restricted to POS tag features, it runs as quickly as TextRunner, but when set to use dependency-parse features its precision and recall rise even higher.",Open Information Extraction Using Wikipedia,"Information-extraction (IE) systems seek to distill semantic relations from naturallanguage text, but most systems use supervised learning of relation-specific examples and are thus limited by the availability of training data. Open IE systems such as TextRunner, on the other hand, aim to handle the unbounded number of relations found on the Web. But how well can these open systems perform? This paper presents WOE, an open IE system which improves dramatically on TextRunner's precision and recall. The key to WOE's performance is a novel form of self-supervised learning for open extractors-using heuristic matches between Wikipedia infobox attribute values and corresponding sentences to construct training data. Like TextRunner, WOE's extractor eschews lexicalized features and handles an unbounded set of semantic relations. WOE can operate in two modes: when restricted to POS tag features, it runs as quickly as TextRunner, but when set to use dependency-parse features its precision and recall rise even higher.","We thank Oren Etzioni and Michele Banko from Turing Center at the University of Washington for providing the code of their software and useful discussions. 
We also thank Alan Ritter, Mausam, Peng Dai, Raphael Hoffmann, Xiao Ling, Stefan Schoenmackers, Andrey Kolobov and Daniel Suskin for valuable comments. This material is based upon work supported by the WRF / TJ Cable Professorship, a gift from Google and by the Air Force Research Laboratory (AFRL) under prime contract no. FA8750-09-C-0181. Any opinions,","Open Information Extraction Using Wikipedia. Information-extraction (IE) systems seek to distill semantic relations from naturallanguage text, but most systems use supervised learning of relation-specific examples and are thus limited by the availability of training data. Open IE systems such as TextRunner, on the other hand, aim to handle the unbounded number of relations found on the Web. But how well can these open systems perform? This paper presents WOE, an open IE system which improves dramatically on TextRunner's precision and recall. The key to WOE's performance is a novel form of self-supervised learning for open extractors-using heuristic matches between Wikipedia infobox attribute values and corresponding sentences to construct training data. Like TextRunner, WOE's extractor eschews lexicalized features and handles an unbounded set of semantic relations. WOE can operate in two modes: when restricted to POS tag features, it runs as quickly as TextRunner, but when set to use dependency-parse features its precision and recall rise even higher.",2010
bangalore-etal-2001-impact,https://aclanthology.org/W01-0520,0,,,,,,,"Impact of Quality and Quantity of Corpora on Stochastic Generation. EQUATION
EQUATION",Impact of Quality and Quantity of Corpora on Stochastic Generation,"EQUATION
EQUATION",Impact of Quality and Quantity of Corpora on Stochastic Generation,"EQUATION
EQUATION",,"Impact of Quality and Quantity of Corpora on Stochastic Generation. EQUATION
EQUATION",2001
gurcke-etal-2021-assessing,https://aclanthology.org/2021.argmining-1.7,0,,,,,,,"Assessing the Sufficiency of Arguments through Conclusion Generation. The premises of an argument give evidence or other reasons to support a conclusion. However, the amount of support required depends on the generality of a conclusion, the nature of the individual premises, and similar. An argument whose premises make its conclusion rationally worthy to be drawn is called sufficient in argument quality research. Previous work tackled sufficiency assessment as a standard text classification problem, not modeling the inherent relation of premises and conclusion. In this paper, we hypothesize that the conclusion of a sufficient argument can be generated from its premises. To study this hypothesis, we explore the potential of assessing sufficiency based on the output of large-scale pre-trained language models. Our best model variant achieves an F 1-score of .885, outperforming the previous state-of-the-art and being on par with human experts. While manual evaluation reveals the quality of the generated conclusions, their impact remains low ultimately.",Assessing the Sufficiency of Arguments through Conclusion Generation,"The premises of an argument give evidence or other reasons to support a conclusion. However, the amount of support required depends on the generality of a conclusion, the nature of the individual premises, and similar. An argument whose premises make its conclusion rationally worthy to be drawn is called sufficient in argument quality research. Previous work tackled sufficiency assessment as a standard text classification problem, not modeling the inherent relation of premises and conclusion. In this paper, we hypothesize that the conclusion of a sufficient argument can be generated from its premises. To study this hypothesis, we explore the potential of assessing sufficiency based on the output of large-scale pre-trained language models. Our best model variant achieves an F 1-score of .885, outperforming the previous state-of-the-art and being on par with human experts. While manual evaluation reveals the quality of the generated conclusions, their impact remains low ultimately.",Assessing the Sufficiency of Arguments through Conclusion Generation,"The premises of an argument give evidence or other reasons to support a conclusion. However, the amount of support required depends on the generality of a conclusion, the nature of the individual premises, and similar. An argument whose premises make its conclusion rationally worthy to be drawn is called sufficient in argument quality research. Previous work tackled sufficiency assessment as a standard text classification problem, not modeling the inherent relation of premises and conclusion. In this paper, we hypothesize that the conclusion of a sufficient argument can be generated from its premises. To study this hypothesis, we explore the potential of assessing sufficiency based on the output of large-scale pre-trained language models. Our best model variant achieves an F 1-score of .885, outperforming the previous state-of-the-art and being on par with human experts. While manual evaluation reveals the quality of the generated conclusions, their impact remains low ultimately.","We thank Katharina Brennig, Simon Seidl, Abdullah Burak, Frederike Gurcke and Dr. Maurice Gurcke for their feedback. We gratefully acknowledge the computing time provided the described experiments by the Paderborn Center for Parallel Computing (PC 2 ). 
This project has been partially funded by the German Research Foundation (DFG) within the project OASiS, project number 455913891, as part of the Priority Program ""Robust Argumentation Machines (RATIO)"" (SPP-1999).","Assessing the Sufficiency of Arguments through Conclusion Generation. The premises of an argument give evidence or other reasons to support a conclusion. However, the amount of support required depends on the generality of a conclusion, the nature of the individual premises, and similar. An argument whose premises make its conclusion rationally worthy to be drawn is called sufficient in argument quality research. Previous work tackled sufficiency assessment as a standard text classification problem, not modeling the inherent relation of premises and conclusion. In this paper, we hypothesize that the conclusion of a sufficient argument can be generated from its premises. To study this hypothesis, we explore the potential of assessing sufficiency based on the output of large-scale pre-trained language models. Our best model variant achieves an F 1-score of .885, outperforming the previous state-of-the-art and being on par with human experts. While manual evaluation reveals the quality of the generated conclusions, their impact remains low ultimately.",2021
wedlake-1992-introduction,https://aclanthology.org/1992.tc-1.1,0,,,,,,,"An Introduction to quality assurance and a guide to the implementation of BS5750. This paper introduces the philosophy of Quality Assurance and traces the development of the British Standard for Quality Systems-BS 5750. The key components of the Quality System are covered and there is a discussion on how to choose a Quality System which is most appropriate to the needs of the particular organisation. A comprehensive guide (including flowcharts) is also given which addresses the nature and scope of tasks which must be undertaken in implementing a Quality System commensurate with the requirements of a recognised international standard such as BS 5750. QUALITY ASSURANCE-AN INTRODUCTION The concept of seeking a guarantee in return for goods or items exchanged is not new. In fact, the well-known phrase 'my word is my bond', still in use today, is a form of guarantee or assurance that an agreement reached or an obligation undertaken will be honoured. Guarantees in respect of items purchased (in exchange for money) are usually not verbal agreements but take the form of signed receipts, which imply that items bought will be fit for the purpose for which they were advertised or intended and that someone is accountable if they fail to live up to those expectations.",An Introduction to quality assurance and a guide to the implementation of {BS}5750,"This paper introduces the philosophy of Quality Assurance and traces the development of the British Standard for Quality Systems-BS 5750. The key components of the Quality System are covered and there is a discussion on how to choose a Quality System which is most appropriate to the needs of the particular organisation. A comprehensive guide (including flowcharts) is also given which addresses the nature and scope of tasks which must be undertaken in implementing a Quality System commensurate with the requirements of a recognised international standard such as BS 5750. QUALITY ASSURANCE-AN INTRODUCTION The concept of seeking a guarantee in return for goods or items exchanged is not new. In fact, the well-known phrase 'my word is my bond', still in use today, is a form of guarantee or assurance that an agreement reached or an obligation undertaken will be honoured. Guarantees in respect of items purchased (in exchange for money) are usually not verbal agreements but take the form of signed receipts, which imply that items bought will be fit for the purpose for which they were advertised or intended and that someone is accountable if they fail to live up to those expectations.",An Introduction to quality assurance and a guide to the implementation of BS5750,"This paper introduces the philosophy of Quality Assurance and traces the development of the British Standard for Quality Systems-BS 5750. The key components of the Quality System are covered and there is a discussion on how to choose a Quality System which is most appropriate to the needs of the particular organisation. A comprehensive guide (including flowcharts) is also given which addresses the nature and scope of tasks which must be undertaken in implementing a Quality System commensurate with the requirements of a recognised international standard such as BS 5750. QUALITY ASSURANCE-AN INTRODUCTION The concept of seeking a guarantee in return for goods or items exchanged is not new. 
In fact, the well-known phrase 'my word is my bond', still in use today, is a form of guarantee or assurance that an agreement reached or an obligation undertaken will be honoured. Guarantees in respect of items purchased (in exchange for money) are usually not verbal agreements but take the form of signed receipts, which imply that items bought will be fit for the purpose for which they were advertised or intended and that someone is accountable if they fail to live up to those expectations.",,"An Introduction to quality assurance and a guide to the implementation of BS5750. This paper introduces the philosophy of Quality Assurance and traces the development of the British Standard for Quality Systems-BS 5750. The key components of the Quality System are covered and there is a discussion on how to choose a Quality System which is most appropriate to the needs of the particular organisation. A comprehensive guide (including flowcharts) is also given which addresses the nature and scope of tasks which must be undertaken in implementing a Quality System commensurate with the requirements of a recognised international standard such as BS 5750. QUALITY ASSURANCE-AN INTRODUCTION The concept of seeking a guarantee in return for goods or items exchanged is not new. In fact, the well-known phrase 'my word is my bond', still in use today, is a form of guarantee or assurance that an agreement reached or an obligation undertaken will be honoured. Guarantees in respect of items purchased (in exchange for money) are usually not verbal agreements but take the form of signed receipts, which imply that items bought will be fit for the purpose for which they were advertised or intended and that someone is accountable if they fail to live up to those expectations.",1992
marivate-etal-2020-investigating,https://aclanthology.org/2020.rail-1.3,0,,,,,,,"Investigating an Approach for Low Resource Language Dataset Creation, Curation and Classification: Setswana and Sepedi. The recent advances in Natural Language Processing have only been a boon for well represented languages, negating research in lesser known global languages. This is in part due to the availability of curated data and research resources. One of the current challenges concerning low-resourced languages are clear guidelines on the collection, curation and preparation of datasets for different use-cases. In this work, we take on the task of creating two datasets that are focused on news headlines (i.e short text) for Setswana and Sepedi and the creation of a news topic classification task from these datasets. In this study, we document our work, propose baselines for classification, and investigate an approach on data augmentation better suited to low-resourced languages in order to improve the performance of the classifiers.","Investigating an Approach for Low Resource Language Dataset Creation, Curation and Classification: Setswana and Sepedi","The recent advances in Natural Language Processing have only been a boon for well represented languages, negating research in lesser known global languages. This is in part due to the availability of curated data and research resources. One of the current challenges concerning low-resourced languages are clear guidelines on the collection, curation and preparation of datasets for different use-cases. In this work, we take on the task of creating two datasets that are focused on news headlines (i.e short text) for Setswana and Sepedi and the creation of a news topic classification task from these datasets. In this study, we document our work, propose baselines for classification, and investigate an approach on data augmentation better suited to low-resourced languages in order to improve the performance of the classifiers.","Investigating an Approach for Low Resource Language Dataset Creation, Curation and Classification: Setswana and Sepedi","The recent advances in Natural Language Processing have only been a boon for well represented languages, negating research in lesser known global languages. This is in part due to the availability of curated data and research resources. One of the current challenges concerning low-resourced languages are clear guidelines on the collection, curation and preparation of datasets for different use-cases. In this work, we take on the task of creating two datasets that are focused on news headlines (i.e short text) for Setswana and Sepedi and the creation of a news topic classification task from these datasets. In this study, we document our work, propose baselines for classification, and investigate an approach on data augmentation better suited to low-resourced languages in order to improve the performance of the classifiers.",,"Investigating an Approach for Low Resource Language Dataset Creation, Curation and Classification: Setswana and Sepedi. The recent advances in Natural Language Processing have only been a boon for well represented languages, negating research in lesser known global languages. This is in part due to the availability of curated data and research resources. One of the current challenges concerning low-resourced languages are clear guidelines on the collection, curation and preparation of datasets for different use-cases. 
In this work, we take on the task of creating two datasets that are focused on news headlines (i.e short text) for Setswana and Sepedi and the creation of a news topic classification task from these datasets. In this study, we document our work, propose baselines for classification, and investigate an approach on data augmentation better suited to low-resourced languages in order to improve the performance of the classifiers.",2020
heck-etal-2015-naist,https://aclanthology.org/2015.iwslt-evaluation.17,0,,,,,,,The NAIST English speech recognition system for IWSLT 2015. ,The {NAIST} {E}nglish speech recognition system for {IWSLT} 2015,,The NAIST English speech recognition system for IWSLT 2015,,,The NAIST English speech recognition system for IWSLT 2015. ,2015
grishina-2017-combining,https://aclanthology.org/W17-4809,0,,,,,,,"Combining the output of two coreference resolution systems for two source languages to improve annotation projection. Although parallel coreference corpora can to a high degree support the development of SMT systems, there are no large-scale parallel datasets available due to the complexity of the annotation task and the variability in annotation schemes. In this study, we exploit an annotation projection method to combine the output of two coreference resolution systems for two different source languages (English, German) in order to create an annotated corpus for a third language (Russian). We show that our technique is superior to projecting annotations from a single source language, and we provide an in-depth analysis of the projected annotations in order to assess the perspectives of our approach.",Combining the output of two coreference resolution systems for two source languages to improve annotation projection,"Although parallel coreference corpora can to a high degree support the development of SMT systems, there are no large-scale parallel datasets available due to the complexity of the annotation task and the variability in annotation schemes. In this study, we exploit an annotation projection method to combine the output of two coreference resolution systems for two different source languages (English, German) in order to create an annotated corpus for a third language (Russian). We show that our technique is superior to projecting annotations from a single source language, and we provide an in-depth analysis of the projected annotations in order to assess the perspectives of our approach.",Combining the output of two coreference resolution systems for two source languages to improve annotation projection,"Although parallel coreference corpora can to a high degree support the development of SMT systems, there are no large-scale parallel datasets available due to the complexity of the annotation task and the variability in annotation schemes. In this study, we exploit an annotation projection method to combine the output of two coreference resolution systems for two different source languages (English, German) in order to create an annotated corpus for a third language (Russian). We show that our technique is superior to projecting annotations from a single source language, and we provide an in-depth analysis of the projected annotations in order to assess the perspectives of our approach.",,"Combining the output of two coreference resolution systems for two source languages to improve annotation projection. Although parallel coreference corpora can to a high degree support the development of SMT systems, there are no large-scale parallel datasets available due to the complexity of the annotation task and the variability in annotation schemes. In this study, we exploit an annotation projection method to combine the output of two coreference resolution systems for two different source languages (English, German) in order to create an annotated corpus for a third language (Russian). We show that our technique is superior to projecting annotations from a single source language, and we provide an in-depth analysis of the projected annotations in order to assess the perspectives of our approach.",2017
duppada-etal-2018-seernet,https://aclanthology.org/S18-1002,0,,,,,,,"SeerNet at SemEval-2018 Task 1: Domain Adaptation for Affect in Tweets. The paper describes the best performing system for the SemEval-2018 Affect in Tweets (English) sub-tasks. The system focuses on the ordinal classification and regression sub-tasks for valence and emotion. For ordinal classification valence is classified into 7 different classes ranging from-3 to 3 whereas emotion is classified into 4 different classes 0 to 3 separately for each emotion namely anger, fear, joy and sadness. The regression sub-tasks estimate the intensity of valence and each emotion. The system performs domain adaptation of 4 different models and creates an ensemble to give the final prediction. The proposed system achieved 1 st position out of 75 teams which participated in the fore-mentioned subtasks. We outperform the baseline model by margins ranging from 49.2% to 76.4%, thus, pushing the state-of-the-art significantly.",{S}eer{N}et at {S}em{E}val-2018 Task 1: Domain Adaptation for Affect in Tweets,"The paper describes the best performing system for the SemEval-2018 Affect in Tweets (English) sub-tasks. The system focuses on the ordinal classification and regression sub-tasks for valence and emotion. For ordinal classification valence is classified into 7 different classes ranging from-3 to 3 whereas emotion is classified into 4 different classes 0 to 3 separately for each emotion namely anger, fear, joy and sadness. The regression sub-tasks estimate the intensity of valence and each emotion. The system performs domain adaptation of 4 different models and creates an ensemble to give the final prediction. The proposed system achieved 1 st position out of 75 teams which participated in the fore-mentioned subtasks. We outperform the baseline model by margins ranging from 49.2% to 76.4%, thus, pushing the state-of-the-art significantly.",SeerNet at SemEval-2018 Task 1: Domain Adaptation for Affect in Tweets,"The paper describes the best performing system for the SemEval-2018 Affect in Tweets (English) sub-tasks. The system focuses on the ordinal classification and regression sub-tasks for valence and emotion. For ordinal classification valence is classified into 7 different classes ranging from-3 to 3 whereas emotion is classified into 4 different classes 0 to 3 separately for each emotion namely anger, fear, joy and sadness. The regression sub-tasks estimate the intensity of valence and each emotion. The system performs domain adaptation of 4 different models and creates an ensemble to give the final prediction. The proposed system achieved 1 st position out of 75 teams which participated in the fore-mentioned subtasks. We outperform the baseline model by margins ranging from 49.2% to 76.4%, thus, pushing the state-of-the-art significantly.",,"SeerNet at SemEval-2018 Task 1: Domain Adaptation for Affect in Tweets. The paper describes the best performing system for the SemEval-2018 Affect in Tweets (English) sub-tasks. The system focuses on the ordinal classification and regression sub-tasks for valence and emotion. For ordinal classification valence is classified into 7 different classes ranging from-3 to 3 whereas emotion is classified into 4 different classes 0 to 3 separately for each emotion namely anger, fear, joy and sadness. The regression sub-tasks estimate the intensity of valence and each emotion. The system performs domain adaptation of 4 different models and creates an ensemble to give the final prediction. 
The proposed system achieved 1 st position out of 75 teams which participated in the fore-mentioned subtasks. We outperform the baseline model by margins ranging from 49.2% to 76.4%, thus, pushing the state-of-the-art significantly.",2018
mitchell-lapata-2008-vector,https://aclanthology.org/P08-1028,0,,,,,,,"Vector-based Models of Semantic Composition. This paper proposes a framework for representing the meaning of phrases and sentences in vector space. Central to our approach is vector composition which we operationalize in terms of additive and multiplicative functions. Under this framework, we introduce a wide range of composition models which we evaluate empirically on a sentence similarity task. Experimental results demonstrate that the multiplicative models are superior to the additive alternatives when compared against human judgments.",Vector-based Models of Semantic Composition,"This paper proposes a framework for representing the meaning of phrases and sentences in vector space. Central to our approach is vector composition which we operationalize in terms of additive and multiplicative functions. Under this framework, we introduce a wide range of composition models which we evaluate empirically on a sentence similarity task. Experimental results demonstrate that the multiplicative models are superior to the additive alternatives when compared against human judgments.",Vector-based Models of Semantic Composition,"This paper proposes a framework for representing the meaning of phrases and sentences in vector space. Central to our approach is vector composition which we operationalize in terms of additive and multiplicative functions. Under this framework, we introduce a wide range of composition models which we evaluate empirically on a sentence similarity task. Experimental results demonstrate that the multiplicative models are superior to the additive alternatives when compared against human judgments.",,"Vector-based Models of Semantic Composition. This paper proposes a framework for representing the meaning of phrases and sentences in vector space. Central to our approach is vector composition which we operationalize in terms of additive and multiplicative functions. Under this framework, we introduce a wide range of composition models which we evaluate empirically on a sentence similarity task. Experimental results demonstrate that the multiplicative models are superior to the additive alternatives when compared against human judgments.",2008
fujii-etal-2012-effects,http://www.lrec-conf.org/proceedings/lrec2012/pdf/714_Paper.pdf,0,,,,,,,"Effects of Document Clustering in Modeling Wikipedia-style Term Descriptions. Reflecting the rapid growth of science, technology, and culture, it has become common practice to consult tools on the World Wide Web for various terms. Existing search engines provide an enormous volume of information, but retrieved information is not organized. Hand-compiled encyclopedias provide organized information, but the quantity of information is limited. In this paper, aiming to integrate the advantages of both tools, we propose a method to organize a search result based on multiple viewpoints as in Wikipedia. Because viewpoints required for explanation are different depending on the type of a term, such as animal and disease, we model articles in Wikipedia to extract a viewpoint structure for each term type. To identify a set of term types, we independently use manual annotation and automatic document clustering for Wikipedia articles. We also propose an effective feature for clustering of Wikipedia articles. We experimentally show that the document clustering reduces the cost for the manual annotation while maintaining the accuracy for modeling Wikipedia articles.",Effects of Document Clustering in Modeling {W}ikipedia-style Term Descriptions,"Reflecting the rapid growth of science, technology, and culture, it has become common practice to consult tools on the World Wide Web for various terms. Existing search engines provide an enormous volume of information, but retrieved information is not organized. Hand-compiled encyclopedias provide organized information, but the quantity of information is limited. In this paper, aiming to integrate the advantages of both tools, we propose a method to organize a search result based on multiple viewpoints as in Wikipedia. Because viewpoints required for explanation are different depending on the type of a term, such as animal and disease, we model articles in Wikipedia to extract a viewpoint structure for each term type. To identify a set of term types, we independently use manual annotation and automatic document clustering for Wikipedia articles. We also propose an effective feature for clustering of Wikipedia articles. We experimentally show that the document clustering reduces the cost for the manual annotation while maintaining the accuracy for modeling Wikipedia articles.",Effects of Document Clustering in Modeling Wikipedia-style Term Descriptions,"Reflecting the rapid growth of science, technology, and culture, it has become common practice to consult tools on the World Wide Web for various terms. Existing search engines provide an enormous volume of information, but retrieved information is not organized. Hand-compiled encyclopedias provide organized information, but the quantity of information is limited. In this paper, aiming to integrate the advantages of both tools, we propose a method to organize a search result based on multiple viewpoints as in Wikipedia. Because viewpoints required for explanation are different depending on the type of a term, such as animal and disease, we model articles in Wikipedia to extract a viewpoint structure for each term type. To identify a set of term types, we independently use manual annotation and automatic document clustering for Wikipedia articles. We also propose an effective feature for clustering of Wikipedia articles. 
We experimentally show that the document clustering reduces the cost for the manual annotation while maintaining the accuracy for modeling Wikipedia articles.",,"Effects of Document Clustering in Modeling Wikipedia-style Term Descriptions. Reflecting the rapid growth of science, technology, and culture, it has become common practice to consult tools on the World Wide Web for various terms. Existing search engines provide an enormous volume of information, but retrieved information is not organized. Hand-compiled encyclopedias provide organized information, but the quantity of information is limited. In this paper, aiming to integrate the advantages of both tools, we propose a method to organize a search result based on multiple viewpoints as in Wikipedia. Because viewpoints required for explanation are different depending on the type of a term, such as animal and disease, we model articles in Wikipedia to extract a viewpoint structure for each term type. To identify a set of term types, we independently use manual annotation and automatic document clustering for Wikipedia articles. We also propose an effective feature for clustering of Wikipedia articles. We experimentally show that the document clustering reduces the cost for the manual annotation while maintaining the accuracy for modeling Wikipedia articles.",2012
kim-1996-internally,https://aclanthology.org/Y96-1042,0,,,,,,,"Internally Headed Relative Clause Constructions in Korean. This paper attempts to analyze some grammatical aspects of the so called internally-headed relative clause construction in Korean. This paper proposes that the meaning of the external head kes is underspecified in the sense that its semantic content is filled in by co-indexing it to the internal head under appropriate conditions. This paper also argues that interpretation of kes is determined by the verb following it. Dealing also with the pragmatics of the construction, this paper argues that the crucial characteristics of the construction in question resides in pragmatics rather than in semantics.",Internally Headed Relative Clause Constructions in {K}orean,"This paper attempts to analyze some grammatical aspects of the so called internally-headed relative clause construction in Korean. This paper proposes that the meaning of the external head kes is underspecified in the sense that its semantic content is filled in by co-indexing it to the internal head under appropriate conditions. This paper also argues that interpretation of kes is determined by the verb following it. Dealing also with the pragmatics of the construction, this paper argues that the crucial characteristics of the construction in question resides in pragmatics rather than in semantics.",Internally Headed Relative Clause Constructions in Korean,"This paper attempts to analyze some grammatical aspects of the so called internally-headed relative clause construction in Korean. This paper proposes that the meaning of the external head kes is underspecified in the sense that its semantic content is filled in by co-indexing it to the internal head under appropriate conditions. This paper also argues that interpretation of kes is determined by the verb following it. Dealing also with the pragmatics of the construction, this paper argues that the crucial characteristics of the construction in question resides in pragmatics rather than in semantics.",,"Internally Headed Relative Clause Constructions in Korean. This paper attempts to analyze some grammatical aspects of the so called internally-headed relative clause construction in Korean. This paper proposes that the meaning of the external head kes is underspecified in the sense that its semantic content is filled in by co-indexing it to the internal head under appropriate conditions. This paper also argues that interpretation of kes is determined by the verb following it. Dealing also with the pragmatics of the construction, this paper argues that the crucial characteristics of the construction in question resides in pragmatics rather than in semantics.",1996
heinroth-etal-2012-adaptive,http://www.lrec-conf.org/proceedings/lrec2012/pdf/169_Paper.pdf,0,,,,,,,"Adaptive Speech Understanding for Intuitive Model-based Spoken Dialogues. In this paper we present three approaches towards adaptive speech understanding. The target system is a model-based Adaptive Spoken Dialogue Manager, the OwlSpeak ASDM. We enhanced this system in order to properly react on non-understandings in real-life situations where intuitive communication is required. OwlSpeak provides a model-based spoken interface to an Intelligent Environment depending on and adapting to the current context. It utilises a set of ontologies used as dialogue models that can be combined dynamically during runtime. Besides the benefits the system showed in practice, real-life evaluations also conveyed some limitations of the model-based approach. Since it is unfeasible to model all variations of the communication between the user and the system beforehand, various situations where the system did not correctly understand the user input have been observed. Thus we present three enhancements towards a more sophisticated use of the ontology-based dialogue models and show how grammars may dynamically be adapted in order to understand intuitive user utterances. The evaluation of our approaches revealed the incorporation of a lexical-semantic knowledgebase into the recognition process to be the most promising approach.",Adaptive Speech Understanding for Intuitive Model-based Spoken Dialogues,"In this paper we present three approaches towards adaptive speech understanding. The target system is a model-based Adaptive Spoken Dialogue Manager, the OwlSpeak ASDM. We enhanced this system in order to properly react on non-understandings in real-life situations where intuitive communication is required. OwlSpeak provides a model-based spoken interface to an Intelligent Environment depending on and adapting to the current context. It utilises a set of ontologies used as dialogue models that can be combined dynamically during runtime. Besides the benefits the system showed in practice, real-life evaluations also conveyed some limitations of the model-based approach. Since it is unfeasible to model all variations of the communication between the user and the system beforehand, various situations where the system did not correctly understand the user input have been observed. Thus we present three enhancements towards a more sophisticated use of the ontology-based dialogue models and show how grammars may dynamically be adapted in order to understand intuitive user utterances. The evaluation of our approaches revealed the incorporation of a lexical-semantic knowledgebase into the recognition process to be the most promising approach.",Adaptive Speech Understanding for Intuitive Model-based Spoken Dialogues,"In this paper we present three approaches towards adaptive speech understanding. The target system is a model-based Adaptive Spoken Dialogue Manager, the OwlSpeak ASDM. We enhanced this system in order to properly react on non-understandings in real-life situations where intuitive communication is required. OwlSpeak provides a model-based spoken interface to an Intelligent Environment depending on and adapting to the current context. It utilises a set of ontologies used as dialogue models that can be combined dynamically during runtime. Besides the benefits the system showed in practice, real-life evaluations also conveyed some limitations of the model-based approach. 
Since it is unfeasible to model all variations of the communication between the user and the system beforehand, various situations where the system did not correctly understand the user input have been observed. Thus we present three enhancements towards a more sophisticated use of the ontology-based dialogue models and show how grammars may dynamically be adapted in order to understand intuitive user utterances. The evaluation of our approaches revealed the incorporation of a lexical-semantic knowledgebase into the recognition process to be the most promising approach.","The research leading to these results has received funding from the Transregional Collaborative Research Centre SF-B/TRR 62 ""Companion-Technology for Cognitive Technical Systems"" funded by the German Research Foundation (DFG).","Adaptive Speech Understanding for Intuitive Model-based Spoken Dialogues. In this paper we present three approaches towards adaptive speech understanding. The target system is a model-based Adaptive Spoken Dialogue Manager, the OwlSpeak ASDM. We enhanced this system in order to properly react on non-understandings in real-life situations where intuitive communication is required. OwlSpeak provides a model-based spoken interface to an Intelligent Environment depending on and adapting to the current context. It utilises a set of ontologies used as dialogue models that can be combined dynamically during runtime. Besides the benefits the system showed in practice, real-life evaluations also conveyed some limitations of the model-based approach. Since it is unfeasible to model all variations of the communication between the user and the system beforehand, various situations where the system did not correctly understand the user input have been observed. Thus we present three enhancements towards a more sophisticated use of the ontology-based dialogue models and show how grammars may dynamically be adapted in order to understand intuitive user utterances. The evaluation of our approaches revealed the incorporation of a lexical-semantic knowledgebase into the recognition process to be the most promising approach.",2012
mireshghallah-etal-2022-mix,https://aclanthology.org/2022.acl-long.31,0,,,,,,,"Mix and Match: Learning-free Controllable Text Generation using Energy Language Models. Recent work on controlled text generation has either required attribute-based fine-tuning of the base language model (LM), or has restricted the parameterization of the attribute discriminator to be compatible with the base autoregressive LM. In this work, we propose Mix and Match LM, a global score-based alternative for controllable text generation that combines arbitrary pre-trained black-box models for achieving the desired attributes in the generated text without involving any fine-tuning or structural assumptions about the black-box models. We interpret the task of controllable generation as drawing samples from an energy-based model whose energy values are a linear combination of scores from black-box models that are separately responsible for fluency, the control attribute, and faithfulness to any conditioning context. We use a Metropolis-Hastings sampling scheme to sample from this energy-based model using bidirectional context and global attribute features. We validate the effectiveness of our approach on various controlled generation and style-based text revision tasks by outperforming recently proposed methods that involve extra training, fine-tuning, or restrictive assumptions over the form of models.",Mix and Match: Learning-free Controllable Text Generation using Energy Language Models,"Recent work on controlled text generation has either required attribute-based fine-tuning of the base language model (LM), or has restricted the parameterization of the attribute discriminator to be compatible with the base autoregressive LM. In this work, we propose Mix and Match LM, a global score-based alternative for controllable text generation that combines arbitrary pre-trained black-box models for achieving the desired attributes in the generated text without involving any fine-tuning or structural assumptions about the black-box models. We interpret the task of controllable generation as drawing samples from an energy-based model whose energy values are a linear combination of scores from black-box models that are separately responsible for fluency, the control attribute, and faithfulness to any conditioning context. We use a Metropolis-Hastings sampling scheme to sample from this energy-based model using bidirectional context and global attribute features. We validate the effectiveness of our approach on various controlled generation and style-based text revision tasks by outperforming recently proposed methods that involve extra training, fine-tuning, or restrictive assumptions over the form of models.",Mix and Match: Learning-free Controllable Text Generation using Energy Language Models,"Recent work on controlled text generation has either required attribute-based fine-tuning of the base language model (LM), or has restricted the parameterization of the attribute discriminator to be compatible with the base autoregressive LM. In this work, we propose Mix and Match LM, a global score-based alternative for controllable text generation that combines arbitrary pre-trained black-box models for achieving the desired attributes in the generated text without involving any fine-tuning or structural assumptions about the black-box models. 
We interpret the task of controllable generation as drawing samples from an energy-based model whose energy values are a linear combination of scores from black-box models that are separately responsible for fluency, the control attribute, and faithfulness to any conditioning context. We use a Metropolis-Hastings sampling scheme to sample from this energy-based model using bidirectional context and global attribute features. We validate the effectiveness of our approach on various controlled generation and style-based text revision tasks by outperforming recently proposed methods that involve extra training, fine-tuning, or restrictive assumptions over the form of models.",The authors would like to thank the anonymous reviewers and meta-reviewers for their helpful feedback. We also thank our colleagues at the UCSD/CMU Berg Lab for their helpful comments and feedback.,"Mix and Match: Learning-free Controllable Text Generation using Energy Language Models. Recent work on controlled text generation has either required attribute-based fine-tuning of the base language model (LM), or has restricted the parameterization of the attribute discriminator to be compatible with the base autoregressive LM. In this work, we propose Mix and Match LM, a global score-based alternative for controllable text generation that combines arbitrary pre-trained black-box models for achieving the desired attributes in the generated text without involving any fine-tuning or structural assumptions about the black-box models. We interpret the task of controllable generation as drawing samples from an energy-based model whose energy values are a linear combination of scores from black-box models that are separately responsible for fluency, the control attribute, and faithfulness to any conditioning context. We use a Metropolis-Hastings sampling scheme to sample from this energy-based model using bidirectional context and global attribute features. We validate the effectiveness of our approach on various controlled generation and style-based text revision tasks by outperforming recently proposed methods that involve extra training, fine-tuning, or restrictive assumptions over the form of models.",2022
uslu-etal-2017-textimager,https://aclanthology.org/E17-3005,0,,,,,,,"TextImager as a Generic Interface to R. R is a very powerful framework for statistical modeling. Thus, it is of high importance to integrate R with state-of-the-art tools in NLP. In this paper, we present the functionality and architecture of such an integration by means of TextImager. We use the OpenCPU API to integrate R based on our own R-Server. This allows for communicating with R-packages and combining them with TextImager's NLP components.",{T}ext{I}mager as a Generic Interface to {R},"R is a very powerful framework for statistical modeling. Thus, it is of high importance to integrate R with state-of-the-art tools in NLP. In this paper, we present the functionality and architecture of such an integration by means of TextImager. We use the OpenCPU API to integrate R based on our own R-Server. This allows for communicating with R-packages and combining them with TextImager's NLP components.",TextImager as a Generic Interface to R,"R is a very powerful framework for statistical modeling. Thus, it is of high importance to integrate R with state-of-the-art tools in NLP. In this paper, we present the functionality and architecture of such an integration by means of TextImager. We use the OpenCPU API to integrate R based on our own R-Server. This allows for communicating with R-packages and combining them with TextImager's NLP components.",,"TextImager as a Generic Interface to R. R is a very powerful framework for statistical modeling. Thus, it is of high importance to integrate R with state-of-the-art tools in NLP. In this paper, we present the functionality and architecture of such an integration by means of TextImager. We use the OpenCPU API to integrate R based on our own R-Server. This allows for communicating with R-packages and combining them with TextImager's NLP components.",2017
kraus-etal-2020-comparison,https://aclanthology.org/2020.lrec-1.54,0,,,,,,,"A Comparison of Explicit and Implicit Proactive Dialogue Strategies for Conversational Recommendation. Recommendation systems aim at facilitating information retrieval for users by taking into account their preferences. Based on previous user behaviour, such a system suggests items or provides information that a user might like or find useful. Nonetheless, how to provide suggestions is still an open question. Depending on the way a recommendation is communicated influences the user's perception of the system. This paper presents an empirical study on the effects of proactive dialogue strategies on user acceptance. Therefore, an explicit strategy based on user preferences provided directly by the user, and an implicit proactive strategy, using autonomously gathered information, are compared. The results show that proactive dialogue systems significantly affect the perception of human-computer interaction. Although no significant differences are found between implicit and explicit strategies, proactivity significantly influences the user experience compared to reactive system behaviour. The study contributes new insights to the human-agent interaction and the voice user interface design. Furthermore, interesting tendencies are discovered that motivate future work.",A Comparison of Explicit and Implicit Proactive Dialogue Strategies for Conversational Recommendation,"Recommendation systems aim at facilitating information retrieval for users by taking into account their preferences. Based on previous user behaviour, such a system suggests items or provides information that a user might like or find useful. Nonetheless, how to provide suggestions is still an open question. Depending on the way a recommendation is communicated influences the user's perception of the system. This paper presents an empirical study on the effects of proactive dialogue strategies on user acceptance. Therefore, an explicit strategy based on user preferences provided directly by the user, and an implicit proactive strategy, using autonomously gathered information, are compared. The results show that proactive dialogue systems significantly affect the perception of human-computer interaction. Although no significant differences are found between implicit and explicit strategies, proactivity significantly influences the user experience compared to reactive system behaviour. The study contributes new insights to the human-agent interaction and the voice user interface design. Furthermore, interesting tendencies are discovered that motivate future work.",A Comparison of Explicit and Implicit Proactive Dialogue Strategies for Conversational Recommendation,"Recommendation systems aim at facilitating information retrieval for users by taking into account their preferences. Based on previous user behaviour, such a system suggests items or provides information that a user might like or find useful. Nonetheless, how to provide suggestions is still an open question. Depending on the way a recommendation is communicated influences the user's perception of the system. This paper presents an empirical study on the effects of proactive dialogue strategies on user acceptance. Therefore, an explicit strategy based on user preferences provided directly by the user, and an implicit proactive strategy, using autonomously gathered information, are compared. 
The results show that proactive dialogue systems significantly affect the perception of human-computer interaction. Although no significant differences are found between implicit and explicit strategies, proactivity significantly influences the user experience compared to reactive system behaviour. The study contributes new insights to the human-agent interaction and the voice user interface design. Furthermore, interesting tendencies are discovered that motivate future work.",,"A Comparison of Explicit and Implicit Proactive Dialogue Strategies for Conversational Recommendation. Recommendation systems aim at facilitating information retrieval for users by taking into account their preferences. Based on previous user behaviour, such a system suggests items or provides information that a user might like or find useful. Nonetheless, how to provide suggestions is still an open question. Depending on the way a recommendation is communicated influences the user's perception of the system. This paper presents an empirical study on the effects of proactive dialogue strategies on user acceptance. Therefore, an explicit strategy based on user preferences provided directly by the user, and an implicit proactive strategy, using autonomously gathered information, are compared. The results show that proactive dialogue systems significantly affect the perception of human-computer interaction. Although no significant differences are found between implicit and explicit strategies, proactivity significantly influences the user experience compared to reactive system behaviour. The study contributes new insights to the human-agent interaction and the voice user interface design. Furthermore, interesting tendencies are discovered that motivate future work.",2020
aji-etal-2021-paracotta,https://aclanthology.org/2021.paclic-1.56,0,,,,,,,"ParaCotta: Synthetic Multilingual Paraphrase Corpora from the Most Diverse Translation Sample Pair. We release our synthetic parallel paraphrase corpus across 17 languages: Arabic, Catalan,",{P}ara{C}otta: Synthetic Multilingual Paraphrase Corpora from the Most Diverse Translation Sample Pair,"We release our synthetic parallel paraphrase corpus across 17 languages: Arabic, Catalan,",ParaCotta: Synthetic Multilingual Paraphrase Corpora from the Most Diverse Translation Sample Pair,"We release our synthetic parallel paraphrase corpus across 17 languages: Arabic, Catalan,",,"ParaCotta: Synthetic Multilingual Paraphrase Corpora from the Most Diverse Translation Sample Pair. We release our synthetic parallel paraphrase corpus across 17 languages: Arabic, Catalan,",2021
yao-van-durme-2014-information,https://aclanthology.org/P14-1090,0,,,,,,,"Information Extraction over Structured Data: Question Answering with Freebase. Answering natural language questions using the Freebase knowledge base has recently been explored as a platform for advancing the state of the art in open domain semantic parsing. Those efforts map questions to sophisticated meaning representations that are then attempted to be matched against viable answer candidates in the knowledge base. Here we show that relatively modest information extraction techniques, when paired with a webscale corpus, can outperform these sophisticated approaches by roughly 34% relative gain.",Information Extraction over Structured Data: Question Answering with {F}reebase,"Answering natural language questions using the Freebase knowledge base has recently been explored as a platform for advancing the state of the art in open domain semantic parsing. Those efforts map questions to sophisticated meaning representations that are then attempted to be matched against viable answer candidates in the knowledge base. Here we show that relatively modest information extraction techniques, when paired with a webscale corpus, can outperform these sophisticated approaches by roughly 34% relative gain.",Information Extraction over Structured Data: Question Answering with Freebase,"Answering natural language questions using the Freebase knowledge base has recently been explored as a platform for advancing the state of the art in open domain semantic parsing. Those efforts map questions to sophisticated meaning representations that are then attempted to be matched against viable answer candidates in the knowledge base. Here we show that relatively modest information extraction techniques, when paired with a webscale corpus, can outperform these sophisticated approaches by roughly 34% relative gain.","Acknowledgments We thank the Allen Institute for Artificial Intelligence for funding this work. We are also grateful to Jonathan Berant, Tom Kwiatkowski, Qingqing Cai, Adam Lopez, Chris Callison-Burch and Peter Clark for helpful discussion and to the reviewers for insightful comments.","Information Extraction over Structured Data: Question Answering with Freebase. Answering natural language questions using the Freebase knowledge base has recently been explored as a platform for advancing the state of the art in open domain semantic parsing. Those efforts map questions to sophisticated meaning representations that are then attempted to be matched against viable answer candidates in the knowledge base. Here we show that relatively modest information extraction techniques, when paired with a webscale corpus, can outperform these sophisticated approaches by roughly 34% relative gain.",2014
bannard-2007-measure,https://aclanthology.org/W07-1101,0,,,,,,,"A Measure of Syntactic Flexibility for Automatically Identifying Multiword Expressions in Corpora. Natural languages contain many multi-word sequences that do not display the variety of syntactic processes we would expect given their phrase type, and consequently must be included in the lexicon as multiword units. This paper describes a method for identifying such items in corpora, focussing on English verb-noun combinations. In an evaluation using a set of dictionary-published MWEs we show that our method achieves greater accuracy than existing MWE extraction methods based on lexical association.",A Measure of Syntactic Flexibility for Automatically Identifying Multiword Expressions in Corpora,"Natural languages contain many multi-word sequences that do not display the variety of syntactic processes we would expect given their phrase type, and consequently must be included in the lexicon as multiword units. This paper describes a method for identifying such items in corpora, focussing on English verb-noun combinations. In an evaluation using a set of dictionary-published MWEs we show that our method achieves greater accuracy than existing MWE extraction methods based on lexical association.",A Measure of Syntactic Flexibility for Automatically Identifying Multiword Expressions in Corpora,"Natural languages contain many multi-word sequences that do not display the variety of syntactic processes we would expect given their phrase type, and consequently must be included in the lexicon as multiword units. This paper describes a method for identifying such items in corpora, focussing on English verb-noun combinations. In an evaluation using a set of dictionary-published MWEs we show that our method achieves greater accuracy than existing MWE extraction methods based on lexical association.","Thanks to Tim Baldwin, Francis Bond, Ted Briscoe, Chris Callison-Burch, Mirella Lapata, Alex Las-carides, Andrew Smith, Takaaki Tanaka and two anonymous reviewers for helpful ideas and comments.","A Measure of Syntactic Flexibility for Automatically Identifying Multiword Expressions in Corpora. Natural languages contain many multi-word sequences that do not display the variety of syntactic processes we would expect given their phrase type, and consequently must be included in the lexicon as multiword units. This paper describes a method for identifying such items in corpora, focussing on English verb-noun combinations. In an evaluation using a set of dictionary-published MWEs we show that our method achieves greater accuracy than existing MWE extraction methods based on lexical association.",2007
ma-etal-2002-models,http://www.lrec-conf.org/proceedings/lrec2002/pdf/141.pdf,0,,,,,,,"Models and Tools for Collaborative Annotation. The Annotation Graph Toolkit (AGTK) is a collection of software which facilitates development of linguistic annotation tools. AGTK provides a database interface which allows applications to use a database server for persistent storage. This paper discusses various modes of collaborative annotation and how they can be supported with tools built using AGTK and its database interface. We describe the relational database schema and API, and describe a version of the TableTrans tool which supports collaborative annotation. The remainder of the paper discusses a high-level query language for annotation graphs, along with optimizations, in support of expressive and efficient access to the annotations held on a large central server. The paper demonstrates that it is straightforward to support a variety of different levels of collaborative annotation with existing AGTK-based tools, with a minimum of additional programming effort.",Models and Tools for Collaborative Annotation,"The Annotation Graph Toolkit (AGTK) is a collection of software which facilitates development of linguistic annotation tools. AGTK provides a database interface which allows applications to use a database server for persistent storage. This paper discusses various modes of collaborative annotation and how they can be supported with tools built using AGTK and its database interface. We describe the relational database schema and API, and describe a version of the TableTrans tool which supports collaborative annotation. The remainder of the paper discusses a high-level query language for annotation graphs, along with optimizations, in support of expressive and efficient access to the annotations held on a large central server. The paper demonstrates that it is straightforward to support a variety of different levels of collaborative annotation with existing AGTK-based tools, with a minimum of additional programming effort.",Models and Tools for Collaborative Annotation,"The Annotation Graph Toolkit (AGTK) is a collection of software which facilitates development of linguistic annotation tools. AGTK provides a database interface which allows applications to use a database server for persistent storage. This paper discusses various modes of collaborative annotation and how they can be supported with tools built using AGTK and its database interface. We describe the relational database schema and API, and describe a version of the TableTrans tool which supports collaborative annotation. The remainder of the paper discusses a high-level query language for annotation graphs, along with optimizations, in support of expressive and efficient access to the annotations held on a large central server. The paper demonstrates that it is straightforward to support a variety of different levels of collaborative annotation with existing AGTK-based tools, with a minimum of additional programming effort.",This material is based upon work supported by the National Science Foundation under Grant Nos. 9978056 and 9980009 (Talkbank).,"Models and Tools for Collaborative Annotation. The Annotation Graph Toolkit (AGTK) is a collection of software which facilitates development of linguistic annotation tools. AGTK provides a database interface which allows applications to use a database server for persistent storage. 
This paper discusses various modes of collaborative annotation and how they can be supported with tools built using AGTK and its database interface. We describe the relational database schema and API, and describe a version of the TableTrans tool which supports collaborative annotation. The remainder of the paper discusses a high-level query language for annotation graphs, along with optimizations, in support of expressive and efficient access to the annotations held on a large central server. The paper demonstrates that it is straightforward to support a variety of different levels of collaborative annotation with existing AGTK-based tools, with a minimum of additional programming effort.",2002
dugan-etal-2022-feasibility,https://aclanthology.org/2022.findings-acl.151,1,,,,education,,,"A Feasibility Study of Answer-Unaware Question Generation for Education. We conduct a feasibility study into the applicability of answer-agnostic question generation models to textbook passages. We show that a significant portion of errors in such systems arise from asking irrelevant or uninterpretable questions and that such errors can be ameliorated by providing summarized input. We find that giving these models human-written summaries instead of the original text results in a significant increase in acceptability of generated questions (33% → 83%) as determined by expert annotators. We also find that, in the absence of human-written summaries, automatic summarization can serve as a good middle ground.",A Feasibility Study of Answer-Unaware Question Generation for Education,"We conduct a feasibility study into the applicability of answer-agnostic question generation models to textbook passages. We show that a significant portion of errors in such systems arise from asking irrelevant or uninterpretable questions and that such errors can be ameliorated by providing summarized input. We find that giving these models human-written summaries instead of the original text results in a significant increase in acceptability of generated questions (33% → 83%) as determined by expert annotators. We also find that, in the absence of human-written summaries, automatic summarization can serve as a good middle ground.",A Feasibility Study of Answer-Unaware Question Generation for Education,"We conduct a feasibility study into the applicability of answer-agnostic question generation models to textbook passages. We show that a significant portion of errors in such systems arise from asking irrelevant or uninterpretable questions and that such errors can be ameliorated by providing summarized input. We find that giving these models human-written summaries instead of the original text results in a significant increase in acceptability of generated questions (33% → 83%) as determined by expert annotators. We also find that, in the absence of human-written summaries, automatic summarization can serve as a good middle ground.",,"A Feasibility Study of Answer-Unaware Question Generation for Education. We conduct a feasibility study into the applicability of answer-agnostic question generation models to textbook passages. We show that a significant portion of errors in such systems arise from asking irrelevant or uninterpretable questions and that such errors can be ameliorated by providing summarized input. We find that giving these models human-written summaries instead of the original text results in a significant increase in acceptability of generated questions (33% → 83%) as determined by expert annotators. We also find that, in the absence of human-written summaries, automatic summarization can serve as a good middle ground.",2022
rangarajan-sridhar-etal-2014-framework,https://aclanthology.org/C14-1092,1,,,,industry_innovation_infrastructure,,,"A Framework for Translating SMS Messages. Short Messaging Service (SMS) has become a popular form of communication. While it is predominantly used for monolingual communication, it can be extremely useful for facilitating cross-lingual communication through statistical machine translation. In this work we present an application of statistical machine translation to SMS messages. We decouple the SMS translation task into normalization followed by translation so that one can exploit existing bitext resources and present a novel unsupervised normalization approach using distributed representation of words learned through neural networks. We describe several surrogate data that are good approximations to real SMS data feeds and use a hybrid translation approach using finite-state transducers. Both objective and subjective evaluation indicate that our approach is highly suitable for translating SMS messages.",A Framework for Translating {SMS} Messages,"Short Messaging Service (SMS) has become a popular form of communication. While it is predominantly used for monolingual communication, it can be extremely useful for facilitating cross-lingual communication through statistical machine translation. In this work we present an application of statistical machine translation to SMS messages. We decouple the SMS translation task into normalization followed by translation so that one can exploit existing bitext resources and present a novel unsupervised normalization approach using distributed representation of words learned through neural networks. We describe several surrogate data that are good approximations to real SMS data feeds and use a hybrid translation approach using finite-state transducers. Both objective and subjective evaluation indicate that our approach is highly suitable for translating SMS messages.",A Framework for Translating SMS Messages,"Short Messaging Service (SMS) has become a popular form of communication. While it is predominantly used for monolingual communication, it can be extremely useful for facilitating cross-lingual communication through statistical machine translation. In this work we present an application of statistical machine translation to SMS messages. We decouple the SMS translation task into normalization followed by translation so that one can exploit existing bitext resources and present a novel unsupervised normalization approach using distributed representation of words learned through neural networks. We describe several surrogate data that are good approximations to real SMS data feeds and use a hybrid translation approach using finite-state transducers. Both objective and subjective evaluation indicate that our approach is highly suitable for translating SMS messages.",,"A Framework for Translating SMS Messages. Short Messaging Service (SMS) has become a popular form of communication. While it is predominantly used for monolingual communication, it can be extremely useful for facilitating cross-lingual communication through statistical machine translation. In this work we present an application of statistical machine translation to SMS messages. We decouple the SMS translation task into normalization followed by translation so that one can exploit existing bitext resources and present a novel unsupervised normalization approach using distributed representation of words learned through neural networks. 
We describe several surrogate data that are good approximations to real SMS data feeds and use a hybrid translation approach using finite-state transducers. Both objective and subjective evaluation indicate that our approach is highly suitable for translating SMS messages.",2014
petukhova-bunt-2010-towards,http://www.lrec-conf.org/proceedings/lrec2010/pdf/195_Paper.pdf,0,,,,,,,"Towards an Integrated Scheme for Semantic Annotation of Multimodal Dialogue Data. This paper investigates the applicability of existing dialogue act annotation schemes, designed for the analysis of spoken dialogue, to the semantic annotation of multimodal data, and the way a dialogue act annotation scheme can be extended to cover dialogue phenomena from multiple modalities.",Towards an Integrated Scheme for Semantic Annotation of Multimodal Dialogue Data,"This paper investigates the applicability of existing dialogue act annotation schemes, designed for the analysis of spoken dialogue, to the semantic annotation of multimodal data, and the way a dialogue act annotation scheme can be extended to cover dialogue phenomena from multiple modalities.",Towards an Integrated Scheme for Semantic Annotation of Multimodal Dialogue Data,"This paper investigates the applicability of existing dialogue act annotation schemes, designed for the analysis of spoken dialogue, to the semantic annotation of multimodal data, and the way a dialogue act annotation scheme can be extended to cover dialogue phenomena from multiple modalities.","This research was conducted within the project Multidimensional Dialogue Modelling, sponsored by the Netherlands Organisation for Scientific Research (NWO), under grant reference 017.003.090. We are also very thankful to the anonymous reviewers for their valuable comments.","Towards an Integrated Scheme for Semantic Annotation of Multimodal Dialogue Data. This paper investigates the applicability of existing dialogue act annotation schemes, designed for the analysis of spoken dialogue, to the semantic annotation of multimodal data, and the way a dialogue act annotation scheme can be extended to cover dialogue phenomena from multiple modalities.",2010
mccoy-1984-correcting,https://aclanthology.org/P84-1090,0,,,,,,,"Correcting Object-Related Misconceptions: How Should The System Respond?. This paper describes a computational method for correcting users' misconceptions concerning the objects modelled by a computer system. The method involves classifying object-related misconceptions according to the knowledge-base feature involved in the incorrect information. For each resulting class sub-types are identified, according to the structure of the knowledge base, which indicate what information may be supporting the misconception and therefore what information to include in the response. Such a characterization, along with a model of what the user knows, enables the system to reason in a domain-independent way about how best to correct the user.",Correcting Object-Related Misconceptions: How Should The System Respond?,"This paper describes a computational method for correcting users' misconceptions concerning the objects modelled by a computer system. The method involves classifying object-related misconceptions according to the knowledge-base feature involved in the incorrect information. For each resulting class sub-types are identified, according to the structure of the knowledge base, which indicate what information may be supporting the misconception and therefore what information to include in the response. Such a characterization, along with a model of what the user knows, enables the system to reason in a domain-independent way about how best to correct the user.",Correcting Object-Related Misconceptions: How Should The System Respond?,"This paper describes a computational method for correcting users' misconceptions concerning the objects modelled by a computer system. The method involves classifying object-related misconceptions according to the knowledge-base feature involved in the incorrect information. For each resulting class sub-types are identified, according to the structure of the knowledge base, which indicate what information may be supporting the misconception and therefore what information to include in the response. Such a characterization, along with a model of what the user knows, enables the system to reason in a domain-independent way about how best to correct the user.","I would like to thank Julia Hirschberg, Aravind Joshi, Martha Pollack, and Bonnie Webber for their many helpful comments concerning this work.","Correcting Object-Related Misconceptions: How Should The System Respond?. This paper describes a computational method for correcting users' misconceptions concerning the objects modelled by a computer system. The method involves classifying object-related misconceptions according to the knowledge-base feature involved in the incorrect information. For each resulting class sub-types are identified, according to the structure of the knowledge base, which indicate what information may be supporting the misconception and therefore what information to include in the response. Such a characterization, along with a model of what the user knows, enables the system to reason in a domain-independent way about how best to correct the user.",1984
carter-1994-improving,https://aclanthology.org/A94-1010,0,,,,,,,"Improving Language Models by Clustering Training Sentences. Many of the kinds of language model used in speech understanding suffer from imperfect modeling of intra-sentential contextual influences. I argue that this problem can be addressed by clustering the sentences in a training corpus automatically into subcorpora on the criterion of entropy reduction, and calculating separate language model parameters for each cluster. This kind of clustering offers a way to represent important contextual effects and can therefore significantly improve the performance of a model. It also offers a reasonably automatic means to gather evidence on whether a more complex, context-sensitive model using the same general kind of linguistic information is likely to reward the effort that would be required to develop it: if clustering improves the performance of a model, this proves the existence of further context dependencies, not exploited by the unclustered model. As evidence for these claims, I present results showing that clustering improves some models but not others for the ATIS domain. These results are consistent with other findings for such models, suggesting that the existence or otherwise of an improvement brought about by clustering is indeed a good pointer to whether it is worth developing further the unclustered model.",Improving Language Models by Clustering Training Sentences,"Many of the kinds of language model used in speech understanding suffer from imperfect modeling of intra-sentential contextual influences. I argue that this problem can be addressed by clustering the sentences in a training corpus automatically into subcorpora on the criterion of entropy reduction, and calculating separate language model parameters for each cluster. This kind of clustering offers a way to represent important contextual effects and can therefore significantly improve the performance of a model. It also offers a reasonably automatic means to gather evidence on whether a more complex, context-sensitive model using the same general kind of linguistic information is likely to reward the effort that would be required to develop it: if clustering improves the performance of a model, this proves the existence of further context dependencies, not exploited by the unclustered model. As evidence for these claims, I present results showing that clustering improves some models but not others for the ATIS domain. These results are consistent with other findings for such models, suggesting that the existence or otherwise of an improvement brought about by clustering is indeed a good pointer to whether it is worth developing further the unclustered model.",Improving Language Models by Clustering Training Sentences,"Many of the kinds of language model used in speech understanding suffer from imperfect modeling of intra-sentential contextual influences. I argue that this problem can be addressed by clustering the sentences in a training corpus automatically into subcorpora on the criterion of entropy reduction, and calculating separate language model parameters for each cluster. This kind of clustering offers a way to represent important contextual effects and can therefore significantly improve the performance of a model. 
It also offers a reasonably automatic means to gather evidence on whether a more complex, context-sensitive model using the same general kind of linguistic information is likely to reward the effort that would be required to develop it: if clustering improves the performance of a model, this proves the existence of further context dependencies, not exploited by the unclustered model. As evidence for these claims, I present results showing that clustering improves some models but not others for the ATIS domain. These results are consistent with other findings for such models, suggesting that the existence or otherwise of an improvement brought about by clustering is indeed a good pointer to whether it is worth developing further the unclustered model.","This research was partly funded by the Defence Research Agency, Malvern, UK, under assignment M85T51XX.I am grateful to Manny Rayner and Ian Lewin for useful comments on earlier versions of this paper. Responsibility for any remaining errors or unclarities rests in the customary place.","Improving Language Models by Clustering Training Sentences. Many of the kinds of language model used in speech understanding suffer from imperfect modeling of intra-sentential contextual influences. I argue that this problem can be addressed by clustering the sentences in a training corpus automatically into subcorpora on the criterion of entropy reduction, and calculating separate language model parameters for each cluster. This kind of clustering offers a way to represent important contextual effects and can therefore significantly improve the performance of a model. It also offers a reasonably automatic means to gather evidence on whether a more complex, context-sensitive model using the same general kind of linguistic information is likely to reward the effort that would be required to develop it: if clustering improves the performance of a model, this proves the existence of further context dependencies, not exploited by the unclustered model. As evidence for these claims, I present results showing that clustering improves some models but not others for the ATIS domain. These results are consistent with other findings for such models, suggesting that the existence or otherwise of an improvement brought about by clustering is indeed a good pointer to whether it is worth developing further the unclustered model.",1994
li-etal-2016-litway,https://aclanthology.org/W16-3004,1,,,,health,,,"LitWay, Discriminative Extraction for Different Bio-Events. Even a simple biological phenomenon may introduce a complex network of molecular interactions. Scientific literature is one of the trustful resources delivering knowledge of these networks. We propose LitWay, a system for extracting semantic relations from texts. LitWay utilizes a hybrid method that combines both a rule-based method and a machine learning-based method. It is tested on the SeeDev task of BioNLP-ST 2016, achieves the state-of-the-art performance with the F-score of 43.2%, ranking first of all participating teams. To further reveal the linguistic characteristics of each event, we test the system solely with syntactic rules or machine learning, and different combinations of two methods. We find that it is difficult for one method to achieve good performance for all semantic relation types due to the complication of bio-events in the literatures.","{L}it{W}ay, Discriminative Extraction for Different Bio-Events","Even a simple biological phenomenon may introduce a complex network of molecular interactions. Scientific literature is one of the trustful resources delivering knowledge of these networks. We propose LitWay, a system for extracting semantic relations from texts. LitWay utilizes a hybrid method that combines both a rule-based method and a machine learning-based method. It is tested on the SeeDev task of BioNLP-ST 2016, achieves the state-of-the-art performance with the F-score of 43.2%, ranking first of all participating teams. To further reveal the linguistic characteristics of each event, we test the system solely with syntactic rules or machine learning, and different combinations of two methods. We find that it is difficult for one method to achieve good performance for all semantic relation types due to the complication of bio-events in the literatures.","LitWay, Discriminative Extraction for Different Bio-Events","Even a simple biological phenomenon may introduce a complex network of molecular interactions. Scientific literature is one of the trustful resources delivering knowledge of these networks. We propose LitWay, a system for extracting semantic relations from texts. LitWay utilizes a hybrid method that combines both a rule-based method and a machine learning-based method. It is tested on the SeeDev task of BioNLP-ST 2016, achieves the state-of-the-art performance with the F-score of 43.2%, ranking first of all participating teams. To further reveal the linguistic characteristics of each event, we test the system solely with syntactic rules or machine learning, and different combinations of two methods. We find that it is difficult for one method to achieve good performance for all semantic relation types due to the complication of bio-events in the literatures.",,"LitWay, Discriminative Extraction for Different Bio-Events. Even a simple biological phenomenon may introduce a complex network of molecular interactions. Scientific literature is one of the trustful resources delivering knowledge of these networks. We propose LitWay, a system for extracting semantic relations from texts. LitWay utilizes a hybrid method that combines both a rule-based method and a machine learning-based method. It is tested on the SeeDev task of BioNLP-ST 2016, achieves the state-of-the-art performance with the F-score of 43.2%, ranking first of all participating teams. 
To further reveal the linguistic characteristics of each event, we test the system solely with syntactic rules or machine learning, and different combinations of two methods. We find that it is difficult for one method to achieve good performance for all semantic relation types due to the complication of bio-events in the literatures.",2016
pianta-etal-2008-textpro,http://www.lrec-conf.org/proceedings/lrec2008/pdf/645_paper.pdf,0,,,,,,,"The TextPro Tool Suite. We present TextPro, a suite of modular Natural Language Processing (NLP) tools for analysis of Italian and English texts. The suite has been designed so as to integrate and reuse state of the art NLP components developed by researchers at FBK. The current version of the tool suite provides functions ranging from tokenization to chunking and Named Entity Recognition (NER). The system's architecture is organized as a pipeline of processors wherein each stage accepts data from an initial input or from an output of a previous stage, executes a specific task, and sends the resulting data to the next stage, or to the output of the pipeline. TextPro performed the best on the task of Italian NER and Italian PoS Tagging at EVALITA 2007. When tested on a number of other standard English benchmarks, TextPro confirms that it performs as state of the art system. Distributions for Linux, Solaris and Windows are available, for both research and commercial purposes. A web-service version of the system is under development.",The {T}ext{P}ro Tool Suite,"We present TextPro, a suite of modular Natural Language Processing (NLP) tools for analysis of Italian and English texts. The suite has been designed so as to integrate and reuse state of the art NLP components developed by researchers at FBK. The current version of the tool suite provides functions ranging from tokenization to chunking and Named Entity Recognition (NER). The system's architecture is organized as a pipeline of processors wherein each stage accepts data from an initial input or from an output of a previous stage, executes a specific task, and sends the resulting data to the next stage, or to the output of the pipeline. TextPro performed the best on the task of Italian NER and Italian PoS Tagging at EVALITA 2007. When tested on a number of other standard English benchmarks, TextPro confirms that it performs as state of the art system. Distributions for Linux, Solaris and Windows are available, for both research and commercial purposes. A web-service version of the system is under development.",The TextPro Tool Suite,"We present TextPro, a suite of modular Natural Language Processing (NLP) tools for analysis of Italian and English texts. The suite has been designed so as to integrate and reuse state of the art NLP components developed by researchers at FBK. The current version of the tool suite provides functions ranging from tokenization to chunking and Named Entity Recognition (NER). The system's architecture is organized as a pipeline of processors wherein each stage accepts data from an initial input or from an output of a previous stage, executes a specific task, and sends the resulting data to the next stage, or to the output of the pipeline. TextPro performed the best on the task of Italian NER and Italian PoS Tagging at EVALITA 2007. When tested on a number of other standard English benchmarks, TextPro confirms that it performs as state of the art system. Distributions for Linux, Solaris and Windows are available, for both research and commercial purposes. A web-service version of the system is under development.","This work has been funded partly by the following projects: the ONTOTEXT sponsored by the Autonomous Province of Trento under the FUP-2004 research program, and partly by the Meaning and PATExpert (http://www.patexpert.org) projects sponsored by the European Commission. 
We wish to thank Taku Kudo and Yuji Matsumoto for making available YamCha.","The TextPro Tool Suite. We present TextPro, a suite of modular Natural Language Processing (NLP) tools for analysis of Italian and English texts. The suite has been designed so as to integrate and reuse state of the art NLP components developed by researchers at FBK. The current version of the tool suite provides functions ranging from tokenization to chunking and Named Entity Recognition (NER). The system's architecture is organized as a pipeline of processors wherein each stage accepts data from an initial input or from an output of a previous stage, executes a specific task, and sends the resulting data to the next stage, or to the output of the pipeline. TextPro performed the best on the task of Italian NER and Italian PoS Tagging at EVALITA 2007. When tested on a number of other standard English benchmarks, TextPro confirms that it performs as state of the art system. Distributions for Linux, Solaris and Windows are available, for both research and commercial purposes. A web-service version of the system is under development.",2008
lin-mitamura-2004-keyword,https://link.springer.com/chapter/10.1007/978-3-540-30194-3_19,0,,,,,,,Keyword translation from English to Chinese for multilingual QA. ,Keyword translation from {E}nglish to {C}hinese for multilingual {QA},,Keyword translation from English to Chinese for multilingual QA,,,Keyword translation from English to Chinese for multilingual QA. ,2004
collier-etal-1999-genia,https://aclanthology.org/E99-1043,1,,,,health,industry_innovation_infrastructure,,"The GENIA project: corpus-based knowledge acquisition and information extraction from genome research papers. We present an outline of the genome information acquisition (GENIA) project for automatically extracting biochemical information from journal papers and abstracts. GENIA will be available over the Internet and is designed to aid in information extraction, retrieval and visualisation and to help reduce information overload on researchers. The vast repository of papers available online in databases such as MEDLINE is a natural environment in which to develop language engineering methods and tools and is an opportunity to show how language engineering can play a key role on the Internet.",The {GENIA} project: corpus-based knowledge acquisition and information extraction from genome research papers,"We present an outline of the genome information acquisition (GENIA) project for automatically extracting biochemical information from journal papers and abstracts. GENIA will be available over the Internet and is designed to aid in information extraction, retrieval and visualisation and to help reduce information overload on researchers. The vast repository of papers available online in databases such as MEDLINE is a natural environment in which to develop language engineering methods and tools and is an opportunity to show how language engineering can play a key role on the Internet.",The GENIA project: corpus-based knowledge acquisition and information extraction from genome research papers,"We present an outline of the genome information acquisition (GENIA) project for automatically extracting biochemical information from journal papers and abstracts. GENIA will be available over the Internet and is designed to aid in information extraction, retrieval and visualisation and to help reduce information overload on researchers. The vast repository of papers available online in databases such as MEDLINE is a natural environment in which to develop language engineering methods and tools and is an opportunity to show how language engineering can play a key role on the Internet.",,"The GENIA project: corpus-based knowledge acquisition and information extraction from genome research papers. We present an outline of the genome information acquisition (GENIA) project for automatically extracting biochemical information from journal papers and abstracts. GENIA will be available over the Internet and is designed to aid in information extraction, retrieval and visualisation and to help reduce information overload on researchers. The vast repository of papers available online in databases such as MEDLINE is a natural environment in which to develop language engineering methods and tools and is an opportunity to show how language engineering can play a key role on the Internet.",1999
cases-etal-2019-recursive,https://aclanthology.org/N19-1365,0,,,,,,,"Recursive Routing Networks: Learning to Compose Modules for Language Understanding. We introduce Recursive Routing Networks (RRNs), which are modular, adaptable models that learn effectively in diverse environments. RRNs consist of a set of functions, typically organized into a grid, and a meta-learner decision-making component called the router. The model jointly optimizes the parameters of the functions and the meta-learner's policy for routing inputs through those functions. RRNs can be incorporated into existing architectures in a number of ways; we explore adding them to word representation layers, recurrent network hidden layers, and classifier layers. Our evaluation task is natural language inference (NLI). Using the MULTINLI corpus, we show that an RRN's routing decisions reflect the high-level genre structure of that corpus. To show that RRNs can learn to specialize to more fine-grained semantic distinctions, we introduce a new corpus of NLI examples involving implicative predicates, and show that the model components become fine-tuned to the inferential signatures that are characteristic of these predicates.",Recursive Routing Networks: Learning to Compose Modules for Language Understanding,"We introduce Recursive Routing Networks (RRNs), which are modular, adaptable models that learn effectively in diverse environments. RRNs consist of a set of functions, typically organized into a grid, and a meta-learner decision-making component called the router. The model jointly optimizes the parameters of the functions and the meta-learner's policy for routing inputs through those functions. RRNs can be incorporated into existing architectures in a number of ways; we explore adding them to word representation layers, recurrent network hidden layers, and classifier layers. Our evaluation task is natural language inference (NLI). Using the MULTINLI corpus, we show that an RRN's routing decisions reflect the high-level genre structure of that corpus. To show that RRNs can learn to specialize to more fine-grained semantic distinctions, we introduce a new corpus of NLI examples involving implicative predicates, and show that the model components become fine-tuned to the inferential signatures that are characteristic of these predicates.",Recursive Routing Networks: Learning to Compose Modules for Language Understanding,"We introduce Recursive Routing Networks (RRNs), which are modular, adaptable models that learn effectively in diverse environments. RRNs consist of a set of functions, typically organized into a grid, and a meta-learner decision-making component called the router. The model jointly optimizes the parameters of the functions and the meta-learner's policy for routing inputs through those functions. RRNs can be incorporated into existing architectures in a number of ways; we explore adding them to word representation layers, recurrent network hidden layers, and classifier layers. Our evaluation task is natural language inference (NLI). Using the MULTINLI corpus, we show that an RRN's routing decisions reflect the high-level genre structure of that corpus. 
To show that RRNs can learn to specialize to more fine-grained semantic distinctions, we introduce a new corpus of NLI examples involving implicative predicates, and show that the model components become fine-tuned to the inferential signatures that are characteristic of these predicates.","We thank George Supaniratisai, Arun Chaganty, Kenny Xu and Abi See for valuable discussions, and the anonymous reviewers for their useful suggestions. Clemens Rosenbaum was a recipient of an IBM PhD Fellowship while working on this publication. We acknowledge the Office of the Vice Provost for Undergraduate Education at Stanford for the summer internships for Atticus Geiger, Olivia Li and Sandhini Agarwal. This research is based in part upon work supported by the Stanford Data Science Initiative, by the NSF under Grant No. BCS-1456077, by the NSF Award IIS-1514268, and by the Air Force Research Laboratory and DARPA under agreement number FA8750-18-2-0126. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory and DARPA or the U.S. Government.","Recursive Routing Networks: Learning to Compose Modules for Language Understanding. We introduce Recursive Routing Networks (RRNs), which are modular, adaptable models that learn effectively in diverse environments. RRNs consist of a set of functions, typically organized into a grid, and a meta-learner decision-making component called the router. The model jointly optimizes the parameters of the functions and the meta-learner's policy for routing inputs through those functions. RRNs can be incorporated into existing architectures in a number of ways; we explore adding them to word representation layers, recurrent network hidden layers, and classifier layers. Our evaluation task is natural language inference (NLI). Using the MULTINLI corpus, we show that an RRN's routing decisions reflect the high-level genre structure of that corpus. To show that RRNs can learn to specialize to more fine-grained semantic distinctions, we introduce a new corpus of NLI examples involving implicative predicates, and show that the model components become fine-tuned to the inferential signatures that are characteristic of these predicates.",2019
rapaport-shapiro-1984-quasi,https://aclanthology.org/P84-1016,0,,,,,,,"Quasi-Indexical Reference in Propositional Semantic Networks. We discuss how a deductive question-answering system can represent the beliefs or other cognitive states of users, of other (interacting) systems, and of itself.",Quasi-Indexical Reference in Propositional Semantic Networks,"We discuss how a deductive question-answering system can represent the beliefs or other cognitive states of users, of other (interacting) systems, and of itself.",Quasi-Indexical Reference in Propositional Semantic Networks,"We discuss how a deductive question-answering system can represent the beliefs or other cognitive states of users, of other (interacting) systems, and of itself.",,"Quasi-Indexical Reference in Propositional Semantic Networks. We discuss how a deductive question-answering system can represent the beliefs or other cognitive states of users, of other (interacting) systems, and of itself.",1984
ryu-1996-argument,https://aclanthology.org/Y96-1034,0,,,,,,,"Argument Structure and Unaccusativity in the Constraint-based Lexicon. This paper addresses the issue of Split Intransitivity (SI) and Unaccusative Mismatches (UMs), proposing a constraint-based approach to SI and UMs within a recent framework of Head-driven Phrase Structure Grammar. I argue against the widely accepted dichotomous distinction of intransitive verbs, which has been advanced by the Unaccusative Hypothesis [Perlmutter (1978)]. I then propose a quadripartitive distinction of intransitive verbs on the basis of the distribution of subject argument in the semantically motivated argument structure, and show that this quadripartitive distinction allows a better understanding of SI and UMs. The main idea of this proposal will be summarized as the Quadripartitive Split Intransitivity Hypothesis (QSIH).",Argument Structure and Unaccusativity in the Constraint-based Lexicon,"This paper addresses the issue of Split Intransitivity (SI) and Unaccusative Mismatches (UMs), proposing a constraint-based approach to SI and UMs within a recent framework of Head-driven Phrase Structure Grammar. I argue against the widely accepted dichotomous distinction of intransitive verbs, which has been advanced by the Unaccusative Hypothesis [Perlmutter (1978)]. I then propose a quadripartitive distinction of intransitive verbs on the basis of the distribution of subject argument in the semantically motivated argument structure, and show that this quadripartitive distinction allows a better understanding of SI and UMs. The main idea of this proposal will be summarized as the Quadripartitive Split Intransitivity Hypothesis (QSIH).",Argument Structure and Unaccusativity in the Constraint-based Lexicon,"This paper addresses the issue of Split Intransitivity (SI) and Unaccusative Mismatches (UMs), proposing a constraint-based approach to SI and UMs within a recent framework of Head-driven Phrase Structure Grammar. I argue against the widely accepted dichotomous distinction of intransitive verbs, which has been advanced by the Unaccusative Hypothesis [Perlmutter (1978)]. I then propose a quadripartitive distinction of intransitive verbs on the basis of the distribution of subject argument in the semantically motivated argument structure, and show that this quadripartitive distinction allows a better understanding of SI and UMs. The main idea of this proposal will be summarized as the Quadripartitive Split Intransitivity Hypothesis (QSIH).",,"Argument Structure and Unaccusativity in the Constraint-based Lexicon. This paper addresses the issue of Split Intransitivity (SI) and Unaccusative Mismatches (UMs), proposing a constraint-based approach to SI and UMs within a recent framework of Head-driven Phrase Structure Grammar. I argue against the widely accepted dichotomous distinction of intransitive verbs, which has been advanced by the Unaccusative Hypothesis [Perlmutter (1978)]. I then propose a quadripartitive distinction of intransitive verbs on the basis of the distribution of subject argument in the semantically motivated argument structure, and show that this quadripartitive distinction allows a better understanding of SI and UMs. The main idea of this proposal will be summarized as the Quadripartitive Split Intransitivity Hypothesis (QSIH).",1996
liu-1995-preferred,https://aclanthology.org/Y95-1029,0,,,,,,,"Preferred Clause Structure in Mandarin Spoken and Written Discourse. This paper studies the preferred clause structure in Mandarin. Tao's [1] pioneering work proposed the following ""preferred clause structure in conversational Mandarin"":",Preferred Clause Structure in {M}andarin Spoken and Written Discourse,"This paper studies the preferred clause structure in Mandarin. Tao's [1] pioneering work proposed the following ""preferred clause structure in conversational Mandarin"":",Preferred Clause Structure in Mandarin Spoken and Written Discourse,"This paper studies the preferred clause structure in Mandarin. Tao's [1] pioneering work proposed the following ""preferred clause structure in conversational Mandarin"":",,"Preferred Clause Structure in Mandarin Spoken and Written Discourse. This paper studies the preferred clause structure in Mandarin. Tao's [1] pioneering work proposed the following ""preferred clause structure in conversational Mandarin"":",1995
windhouwer-2012-relcat,http://www.lrec-conf.org/proceedings/lrec2012/pdf/954_Paper.pdf,0,,,,,,,"RELcat: a Relation Registry for ISOcat data categories. The ISOcat Data Category Registry contains basically a flat and easily extensible list of data category specifications. To foster reuse and standardization only very shallow relationships among data categories are stored in the registry. However, to assist crosswalks, possibly based on personal views, between various (application) domains and to overcome possible proliferation of data categories more types of ontological relationships need to be specified. RELcat is a first prototype of a Relation Registry, which allows storing arbitrary relationships. These relationships can reflect the personal view of one linguist or a larger community. The basis of the registry is a relation type taxonomy that can easily be extended. This allows on one hand to load existing sets of relations specified in, for example, an OWL (2) ontology or SKOS taxonomy. And on the other hand allows algorithms that query the registry to traverse the stored semantic network to remain ignorant of the original source vocabulary. This paper describes first experiences with RELcat and explains some initial design decisions.",{REL}cat: a Relation Registry for {ISO}cat data categories,"The ISOcat Data Category Registry contains basically a flat and easily extensible list of data category specifications. To foster reuse and standardization only very shallow relationships among data categories are stored in the registry. However, to assist crosswalks, possibly based on personal views, between various (application) domains and to overcome possible proliferation of data categories more types of ontological relationships need to be specified. RELcat is a first prototype of a Relation Registry, which allows storing arbitrary relationships. These relationships can reflect the personal view of one linguist or a larger community. The basis of the registry is a relation type taxonomy that can easily be extended. This allows on one hand to load existing sets of relations specified in, for example, an OWL (2) ontology or SKOS taxonomy. And on the other hand allows algorithms that query the registry to traverse the stored semantic network to remain ignorant of the original source vocabulary. This paper describes first experiences with RELcat and explains some initial design decisions.",RELcat: a Relation Registry for ISOcat data categories,"The ISOcat Data Category Registry contains basically a flat and easily extensible list of data category specifications. To foster reuse and standardization only very shallow relationships among data categories are stored in the registry. However, to assist crosswalks, possibly based on personal views, between various (application) domains and to overcome possible proliferation of data categories more types of ontological relationships need to be specified. RELcat is a first prototype of a Relation Registry, which allows storing arbitrary relationships. These relationships can reflect the personal view of one linguist or a larger community. The basis of the registry is a relation type taxonomy that can easily be extended. This allows on one hand to load existing sets of relations specified in, for example, an OWL (2) ontology or SKOS taxonomy. And on the other hand allows algorithms that query the registry to traverse the stored semantic network to remain ignorant of the original source vocabulary. 
This paper describes first experiences with RELcat and explains some initial design decisions.","Thanks to early adaptors Matej Durco (SMC4LRT), Irina Nevskaya (RELISH) and Ineke Schuurman (CLARIN-NL/VL) for driving this first version of RELcat forward.","RELcat: a Relation Registry for ISOcat data categories. The ISOcat Data Category Registry contains basically a flat and easily extensible list of data category specifications. To foster reuse and standardization only very shallow relationships among data categories are stored in the registry. However, to assist crosswalks, possibly based on personal views, between various (application) domains and to overcome possible proliferation of data categories more types of ontological relationships need to be specified. RELcat is a first prototype of a Relation Registry, which allows storing arbitrary relationships. These relationships can reflect the personal view of one linguist or a larger community. The basis of the registry is a relation type taxonomy that can easily be extended. This allows on one hand to load existing sets of relations specified in, for example, an OWL (2) ontology or SKOS taxonomy. And on the other hand allows algorithms that query the registry to traverse the stored semantic network to remain ignorant of the original source vocabulary. This paper describes first experiences with RELcat and explains some initial design decisions.",2012
strobelt-etal-2021-lmdiff,https://aclanthology.org/2021.emnlp-demo.12,0,,,,,,,"LMdiff: A Visual Diff Tool to Compare Language Models. While different language models are ubiquitous in NLP, it is hard to contrast their outputs and identify which contexts one can handle better than the other. To address this question, we introduce LMDIFF, a tool that visually compares probability distributions of two models that differ, e.g., through finetuning, distillation, or simply training with different parameter sizes. LMDIFF allows the generation of hypotheses about model behavior by investigating text instances token by token and further assists in choosing these interesting text instances by identifying the most interesting phrases from large corpora. We showcase the applicability of LMDIFF for hypothesis generation across multiple case studies. A demo is available at http://lmdiff.net.",{LM}diff: A Visual Diff Tool to Compare Language Models,"While different language models are ubiquitous in NLP, it is hard to contrast their outputs and identify which contexts one can handle better than the other. To address this question, we introduce LMDIFF, a tool that visually compares probability distributions of two models that differ, e.g., through finetuning, distillation, or simply training with different parameter sizes. LMDIFF allows the generation of hypotheses about model behavior by investigating text instances token by token and further assists in choosing these interesting text instances by identifying the most interesting phrases from large corpora. We showcase the applicability of LMDIFF for hypothesis generation across multiple case studies. A demo is available at http://lmdiff.net.",LMdiff: A Visual Diff Tool to Compare Language Models,"While different language models are ubiquitous in NLP, it is hard to contrast their outputs and identify which contexts one can handle better than the other. To address this question, we introduce LMDIFF, a tool that visually compares probability distributions of two models that differ, e.g., through finetuning, distillation, or simply training with different parameter sizes. LMDIFF allows the generation of hypotheses about model behavior by investigating text instances token by token and further assists in choosing these interesting text instances by identifying the most interesting phrases from large corpora. We showcase the applicability of LMDIFF for hypothesis generation across multiple case studies. A demo is available at http://lmdiff.net.",We thank Ankur Parikh and Ian Tenney for helpful comments on an earlier draft of this paper. This work was supported by the MIT-IBM Watson AI Lab. This work has been developed in part during the BigScience Summer of Language Models 2021.,"LMdiff: A Visual Diff Tool to Compare Language Models. While different language models are ubiquitous in NLP, it is hard to contrast their outputs and identify which contexts one can handle better than the other. To address this question, we introduce LMDIFF, a tool that visually compares probability distributions of two models that differ, e.g., through finetuning, distillation, or simply training with different parameter sizes. LMDIFF allows the generation of hypotheses about model behavior by investigating text instances token by token and further assists in choosing these interesting text instances by identifying the most interesting phrases from large corpora. We showcase the applicability of LMDIFF for hypothesis generation across multiple case studies. 
A demo is available at http://lmdiff.net.",2021
taghipour-ng-2016-neural,https://aclanthology.org/D16-1193,1,,,,education,,,"A Neural Approach to Automated Essay Scoring. Traditional automated essay scoring systems rely on carefully designed features to evaluate and score essays. The performance of such systems is tightly bound to the quality of the underlying features. However, it is laborious to manually design the most informative features for such a system. In this paper, we develop an approach based on recurrent neural networks to learn the relation between an essay and its assigned score, without any feature engineering. We explore several neural network models for the task of automated essay scoring and perform some analysis to get some insights of the models. The results show that our best system, which is based on long short-term memory networks, outperforms a strong baseline by 5.6% in terms of quadratic weighted Kappa, without requiring any feature engineering.",A Neural Approach to Automated Essay Scoring,"Traditional automated essay scoring systems rely on carefully designed features to evaluate and score essays. The performance of such systems is tightly bound to the quality of the underlying features. However, it is laborious to manually design the most informative features for such a system. In this paper, we develop an approach based on recurrent neural networks to learn the relation between an essay and its assigned score, without any feature engineering. We explore several neural network models for the task of automated essay scoring and perform some analysis to get some insights of the models. The results show that our best system, which is based on long short-term memory networks, outperforms a strong baseline by 5.6% in terms of quadratic weighted Kappa, without requiring any feature engineering.",A Neural Approach to Automated Essay Scoring,"Traditional automated essay scoring systems rely on carefully designed features to evaluate and score essays. The performance of such systems is tightly bound to the quality of the underlying features. However, it is laborious to manually design the most informative features for such a system. In this paper, we develop an approach based on recurrent neural networks to learn the relation between an essay and its assigned score, without any feature engineering. We explore several neural network models for the task of automated essay scoring and perform some analysis to get some insights of the models. The results show that our best system, which is based on long short-term memory networks, outperforms a strong baseline by 5.6% in terms of quadratic weighted Kappa, without requiring any feature engineering.",This research is supported by Singapore Ministry of Education Academic Research Fund Tier 2 grant MOE2013-T2-1-150. We are also grateful to the anonymous reviewers for their helpful comments.,"A Neural Approach to Automated Essay Scoring. Traditional automated essay scoring systems rely on carefully designed features to evaluate and score essays. The performance of such systems is tightly bound to the quality of the underlying features. However, it is laborious to manually design the most informative features for such a system. In this paper, we develop an approach based on recurrent neural networks to learn the relation between an essay and its assigned score, without any feature engineering. We explore several neural network models for the task of automated essay scoring and perform some analysis to get some insights of the models. 
The results show that our best system, which is based on long short-term memory networks, outperforms a strong baseline by 5.6% in terms of quadratic weighted Kappa, without requiring any feature engineering.",2016
papageorgiou-etal-2000-unified,http://www.lrec-conf.org/proceedings/lrec2000/pdf/181.pdf,0,,,,,,,"A Unified POS Tagging Architecture and its Application to Greek. This paper proposes a flexible and unified tagging architecture that could be incorporated into a number of applications like information extraction, cross-language information retrieval, term extraction, or summarization, while providing an essential component for subsequent syntactic processing or lexicographical work. A feature-based multi-tiered approach (FBT tagger) is introduced to part-of-speech tagging. FBT is a variant of the well-known transformation based learning paradigm aiming at improving the quality of tagging highly inflective languages such as Greek. Additionally, a large experiment concerning the Greek language is conducted and results are presented for a variety of text genres, including financial reports, newswires, press releases and technical manuals. Finally, the adopted evaluation methodology is discussed.",A Unified {POS} Tagging Architecture and its Application to {G}reek,"This paper proposes a flexible and unified tagging architecture that could be incorporated into a number of applications like information extraction, cross-language information retrieval, term extraction, or summarization, while providing an essential component for subsequent syntactic processing or lexicographical work. A feature-based multi-tiered approach (FBT tagger) is introduced to part-of-speech tagging. FBT is a variant of the well-known transformation based learning paradigm aiming at improving the quality of tagging highly inflective languages such as Greek. Additionally, a large experiment concerning the Greek language is conducted and results are presented for a variety of text genres, including financial reports, newswires, press releases and technical manuals. Finally, the adopted evaluation methodology is discussed.",A Unified POS Tagging Architecture and its Application to Greek,"This paper proposes a flexible and unified tagging architecture that could be incorporated into a number of applications like information extraction, cross-language information retrieval, term extraction, or summarization, while providing an essential component for subsequent syntactic processing or lexicographical work. A feature-based multi-tiered approach (FBT tagger) is introduced to part-of-speech tagging. FBT is a variant of the well-known transformation based learning paradigm aiming at improving the quality of tagging highly inflective languages such as Greek. Additionally, a large experiment concerning the Greek language is conducted and results are presented for a variety of text genres, including financial reports, newswires, press releases and technical manuals. Finally, the adopted evaluation methodology is discussed.",,"A Unified POS Tagging Architecture and its Application to Greek. This paper proposes a flexible and unified tagging architecture that could be incorporated into a number of applications like information extraction, cross-language information retrieval, term extraction, or summarization, while providing an essential component for subsequent syntactic processing or lexicographical work. A feature-based multi-tiered approach (FBT tagger) is introduced to part-of-speech tagging. FBT is a variant of the well-known transformation based learning paradigm aiming at improving the quality of tagging highly inflective languages such as Greek. 
Additionally, a large experiment concerning the Greek language is conducted and results are presented for a variety of text genres, including financial reports, newswires, press releases and technical manuals. Finally, the adopted evaluation methodology is discussed.",2000
reiter-etal-2008-resource,https://aclanthology.org/W08-2231,0,,,,,,,"A Resource-Poor Approach for Linking Ontology Classes to Wikipedia Articles. The applicability of ontologies for natural language processing depends on the ability to link ontological concepts and relations to their realisations in texts. We present a general, resource-poor account to create such a linking automatically by extracting Wikipedia articles corresponding to ontology classes. We evaluate our approach in an experiment with the Music Ontology. We consider linking as a promising starting point for subsequent steps of information extraction.",A Resource-Poor Approach for Linking Ontology Classes to {W}ikipedia Articles,"The applicability of ontologies for natural language processing depends on the ability to link ontological concepts and relations to their realisations in texts. We present a general, resource-poor account to create such a linking automatically by extracting Wikipedia articles corresponding to ontology classes. We evaluate our approach in an experiment with the Music Ontology. We consider linking as a promising starting point for subsequent steps of information extraction.",A Resource-Poor Approach for Linking Ontology Classes to Wikipedia Articles,"The applicability of ontologies for natural language processing depends on the ability to link ontological concepts and relations to their realisations in texts. We present a general, resource-poor account to create such a linking automatically by extracting Wikipedia articles corresponding to ontology classes. We evaluate our approach in an experiment with the Music Ontology. We consider linking as a promising starting point for subsequent steps of information extraction.",Acknowledgements. We kindly thank our annotators for their effort and Rüdiger Wolf for technical support.,"A Resource-Poor Approach for Linking Ontology Classes to Wikipedia Articles. The applicability of ontologies for natural language processing depends on the ability to link ontological concepts and relations to their realisations in texts. We present a general, resource-poor account to create such a linking automatically by extracting Wikipedia articles corresponding to ontology classes. We evaluate our approach in an experiment with the Music Ontology. We consider linking as a promising starting point for subsequent steps of information extraction.",2008
saleh-etal-2014-study,https://aclanthology.org/C14-1020,0,,,,,,,"A Study of using Syntactic and Semantic Structures for Concept Segmentation and Labeling. This paper presents an empirical study on using syntactic and semantic information for Concept Segmentation and Labeling (CSL), a well-known component in spoken language understanding. Our approach is based on reranking N-best outputs from a state-of-the-art CSL parser. We perform extensive experimentation by comparing different tree-based kernels with a variety of representations of the available linguistic information, including semantic concepts, words, POS tags, shallow and full syntax, and discourse trees. The results show that the structured representation with the semantic concepts yields significant improvement over the base CSL parser, much larger compared to learning with an explicit feature vector representation. We also show that shallow syntax helps improve the results and that discourse relations can be partially beneficial.",A Study of using Syntactic and Semantic Structures for Concept Segmentation and Labeling,"This paper presents an empirical study on using syntactic and semantic information for Concept Segmentation and Labeling (CSL), a well-known component in spoken language understanding. Our approach is based on reranking N-best outputs from a state-of-the-art CSL parser. We perform extensive experimentation by comparing different tree-based kernels with a variety of representations of the available linguistic information, including semantic concepts, words, POS tags, shallow and full syntax, and discourse trees. The results show that the structured representation with the semantic concepts yields significant improvement over the base CSL parser, much larger compared to learning with an explicit feature vector representation. We also show that shallow syntax helps improve the results and that discourse relations can be partially beneficial.",A Study of using Syntactic and Semantic Structures for Concept Segmentation and Labeling,"This paper presents an empirical study on using syntactic and semantic information for Concept Segmentation and Labeling (CSL), a well-known component in spoken language understanding. Our approach is based on reranking N-best outputs from a state-of-the-art CSL parser. We perform extensive experimentation by comparing different tree-based kernels with a variety of representations of the available linguistic information, including semantic concepts, words, POS tags, shallow and full syntax, and discourse trees. The results show that the structured representation with the semantic concepts yields significant improvement over the base CSL parser, much larger compared to learning with an explicit feature vector representation. We also show that shallow syntax helps improve the results and that discourse relations can be partially beneficial.",This research is developed by the Arabic Language Technologies (ALT) group at Qatar Computing Research Institute (QCRI) within the Qatar Foundation in collaboration with MIT. It is part of the Interactive sYstems for Answer Search (Iyas) project.,"A Study of using Syntactic and Semantic Structures for Concept Segmentation and Labeling. This paper presents an empirical study on using syntactic and semantic information for Concept Segmentation and Labeling (CSL), a well-known component in spoken language understanding. Our approach is based on reranking N-best outputs from a state-of-the-art CSL parser. 
We perform extensive experimentation by comparing different tree-based kernels with a variety of representations of the available linguistic information, including semantic concepts, words, POS tags, shallow and full syntax, and discourse trees. The results show that the structured representation with the semantic concepts yields significant improvement over the base CSL parser, much larger compared to learning with an explicit feature vector representation. We also show that shallow syntax helps improve the results and that discourse relations can be partially beneficial.",2014
malmasi-etal-2015-norwegian,https://aclanthology.org/R15-1053,0,,,,,,,"Norwegian Native Language Identification. We present a study of Native Language Identification (NLI) using data from learners of Norwegian, a language not yet used for this task. NLI is the task of predicting a writer's first language using only their writings in a learned language. We find that three feature types, function words, part-of-speech n-grams and a hybrid part-of-speech/function word mixture n-gram model are useful here. Our system achieves an accuracy of 79% against a baseline of 13% for predicting an author's L1. The same features can distinguish non-native writing with 99% accuracy. We also find that part-of-speech n-gram performance on this data deviates from previous NLI results, possibly due to the use of manually post-corrected tags.",{N}orwegian Native Language Identification,"We present a study of Native Language Identification (NLI) using data from learners of Norwegian, a language not yet used for this task. NLI is the task of predicting a writer's first language using only their writings in a learned language. We find that three feature types, function words, part-of-speech n-grams and a hybrid part-of-speech/function word mixture n-gram model are useful here. Our system achieves an accuracy of 79% against a baseline of 13% for predicting an author's L1. The same features can distinguish non-native writing with 99% accuracy. We also find that part-of-speech n-gram performance on this data deviates from previous NLI results, possibly due to the use of manually post-corrected tags.",Norwegian Native Language Identification,"We present a study of Native Language Identification (NLI) using data from learners of Norwegian, a language not yet used for this task. NLI is the task of predicting a writer's first language using only their writings in a learned language. We find that three feature types, function words, part-of-speech n-grams and a hybrid part-of-speech/function word mixture n-gram model are useful here. Our system achieves an accuracy of 79% against a baseline of 13% for predicting an author's L1. The same features can distinguish non-native writing with 99% accuracy. We also find that part-of-speech n-gram performance on this data deviates from previous NLI results, possibly due to the use of manually post-corrected tags.",We would like to thank Kari Tenfjord and Paul Meurer for providing access to the ASK corpus and their assistance in using the data.,"Norwegian Native Language Identification. We present a study of Native Language Identification (NLI) using data from learners of Norwegian, a language not yet used for this task. NLI is the task of predicting a writer's first language using only their writings in a learned language. We find that three feature types, function words, part-of-speech n-grams and a hybrid part-of-speech/function word mixture n-gram model are useful here. Our system achieves an accuracy of 79% against a baseline of 13% for predicting an author's L1. The same features can distinguish non-native writing with 99% accuracy. We also find that part-of-speech n-gram performance on this data deviates from previous NLI results, possibly due to the use of manually post-corrected tags.",2015
siblini-etal-2021-towards,https://aclanthology.org/2021.acl-short.130,0,,,,,,,"Towards a more Robust Evaluation for Conversational Question Answering. With the explosion of chatbot applications, Conversational Question Answering (CQA) has generated a lot of interest in recent years. Among proposals, reading comprehension models which take advantage of the conversation history (previous QA) seem to answer better than those which only consider the current question. Nevertheless, we note that the CQA evaluation protocol has a major limitation. In particular, models are allowed, at each turn of the conversation, to access the ground truth answers of the previous turns. Not only does this severely prevent their applications in fully autonomous chatbots, it also leads to unsuspected biases in their behavior. In this paper, we highlight this effect and propose new tools for evaluation and training in order to guard against the noted issues. The new results that we bring come to reinforce methods of the current state of the art.",Towards a more Robust Evaluation for Conversational Question Answering,"With the explosion of chatbot applications, Conversational Question Answering (CQA) has generated a lot of interest in recent years. Among proposals, reading comprehension models which take advantage of the conversation history (previous QA) seem to answer better than those which only consider the current question. Nevertheless, we note that the CQA evaluation protocol has a major limitation. In particular, models are allowed, at each turn of the conversation, to access the ground truth answers of the previous turns. Not only does this severely prevent their applications in fully autonomous chatbots, it also leads to unsuspected biases in their behavior. In this paper, we highlight this effect and propose new tools for evaluation and training in order to guard against the noted issues. The new results that we bring come to reinforce methods of the current state of the art.",Towards a more Robust Evaluation for Conversational Question Answering,"With the explosion of chatbot applications, Conversational Question Answering (CQA) has generated a lot of interest in recent years. Among proposals, reading comprehension models which take advantage of the conversation history (previous QA) seem to answer better than those which only consider the current question. Nevertheless, we note that the CQA evaluation protocol has a major limitation. In particular, models are allowed, at each turn of the conversation, to access the ground truth answers of the previous turns. Not only does this severely prevent their applications in fully autonomous chatbots, it also leads to unsuspected biases in their behavior. In this paper, we highlight this effect and propose new tools for evaluation and training in order to guard against the noted issues. The new results that we bring come to reinforce methods of the current state of the art.",,"Towards a more Robust Evaluation for Conversational Question Answering. With the explosion of chatbot applications, Conversational Question Answering (CQA) has generated a lot of interest in recent years. Among proposals, reading comprehension models which take advantage of the conversation history (previous QA) seem to answer better than those which only consider the current question. Nevertheless, we note that the CQA evaluation protocol has a major limitation. In particular, models are allowed, at each turn of the conversation, to access the ground truth answers of the previous turns. 
Not only does this severely prevent their applications in fully autonomous chatbots, it also leads to unsuspected biases in their behavior. In this paper, we highlight this effect and propose new tools for evaluation and training in order to guard against the noted issues. The new results that we bring come to reinforce methods of the current state of the art.",2021
dudy-etal-2018-multi,https://aclanthology.org/W18-1210,0,,,,,,,"A Multi-Context Character Prediction Model for a Brain-Computer Interface. Brain-computer interfaces and other augmentative and alternative communication devices introduce language-modeling challenges distinct from other character-entry methods. In particular, the acquired signal of the EEG (electroencephalogram) signal is noisier, which, in turn, makes the user intent harder to decipher. In order to adapt to this condition, we propose to maintain ambiguous history for every time step, and to employ, apart from the character language model, word information to produce a more robust prediction system. We present preliminary results that compare this proposed Online-Context Language Model (OCLM) to current algorithms that are used in this type of setting. Evaluations on both perplexity and predictive accuracy demonstrate promising results when dealing with ambiguous histories in order to provide to the front end a distribution of the next character the user might type.",A Multi-Context Character Prediction Model for a Brain-Computer Interface,"Brain-computer interfaces and other augmentative and alternative communication devices introduce language-modeling challenges distinct from other character-entry methods. In particular, the acquired signal of the EEG (electroencephalogram) signal is noisier, which, in turn, makes the user intent harder to decipher. In order to adapt to this condition, we propose to maintain ambiguous history for every time step, and to employ, apart from the character language model, word information to produce a more robust prediction system. We present preliminary results that compare this proposed Online-Context Language Model (OCLM) to current algorithms that are used in this type of setting. Evaluations on both perplexity and predictive accuracy demonstrate promising results when dealing with ambiguous histories in order to provide to the front end a distribution of the next character the user might type.",A Multi-Context Character Prediction Model for a Brain-Computer Interface,"Brain-computer interfaces and other augmentative and alternative communication devices introduce language-modeling challenges distinct from other character-entry methods. In particular, the acquired signal of the EEG (electroencephalogram) signal is noisier, which, in turn, makes the user intent harder to decipher. In order to adapt to this condition, we propose to maintain ambiguous history for every time step, and to employ, apart from the character language model, word information to produce a more robust prediction system. We present preliminary results that compare this proposed Online-Context Language Model (OCLM) to current algorithms that are used in this type of setting. Evaluations on both perplexity and predictive accuracy demonstrate promising results when dealing with ambiguous histories in order to provide to the front end a distribution of the next character the user might type.","We would like to thank the reviewers of the SCLeM workshop for their insightful comments and feedback. We also would like to thank Brian Roark for his helpful advice, as well as our clinical team in the Institute on Development & Disability at OHSU. Research reported in this paper was supported by the National Institute on Deafness and Other Communication Disorders of the NIH under award number 5R01DC009834-09. 
The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.","A Multi-Context Character Prediction Model for a Brain-Computer Interface. Brain-computer interfaces and other augmentative and alternative communication devices introduce language-modeling challenges distinct from other character-entry methods. In particular, the acquired signal of the EEG (electroencephalogram) signal is noisier, which, in turn, makes the user intent harder to decipher. In order to adapt to this condition, we propose to maintain ambiguous history for every time step, and to employ, apart from the character language model, word information to produce a more robust prediction system. We present preliminary results that compare this proposed Online-Context Language Model (OCLM) to current algorithms that are used in this type of setting. Evaluations on both perplexity and predictive accuracy demonstrate promising results when dealing with ambiguous histories in order to provide to the front end a distribution of the next character the user might type.",2018
danieli-etal-2004-evaluation,http://www.lrec-conf.org/proceedings/lrec2004/pdf/371.pdf,0,,,,,,,"Evaluation of Consensus on the Annotation of Prosodic Breaks in the Romance Corpus of Spontaneous Speech ``C-ORAL-ROM''. C-ORAL-ROM, Integrated Reference Corpora For Spoken Romance Languages, is a multilingual corpus of spontaneous speech delivered within the IST Program. Corpora are tagged with respect to terminal and non terminal prosodic breaks. Terminal breaks are considered the most perceptively relevant cues to determine the utterance boundaries in spontaneous speech resources. The paper presents the evaluation of the inter-annotator agreement accomplished by an institution external to the consortium and shows the level of reliability of the tagging delivered and the annotation scheme adopted. The data show, at cross-linguistic level, a very high K coefficient (between 7.7 and 9.2, according to the language resource). A strong level of agreement specifically for terminal breaks has also been recorded. The data thus show that the annotation of the utterances identified in terms of their prosodic breaks is able to capture relevant perceptual facts, and it appears that the proposed coding scheme can be applied in a highly replicable way.",Evaluation of Consensus on the Annotation of Prosodic Breaks in the {R}omance Corpus of Spontaneous Speech {``}{C}-{ORAL}-{ROM}{''},"C-ORAL-ROM, Integrated Reference Corpora For Spoken Romance Languages, is a multilingual corpus of spontaneous speech delivered within the IST Program. Corpora are tagged with respect to terminal and non terminal prosodic breaks. Terminal breaks are considered the most perceptively relevant cues to determine the utterance boundaries in spontaneous speech resources. The paper presents the evaluation of the inter-annotator agreement accomplished by an institution external to the consortium and shows the level of reliability of the tagging delivered and the annotation scheme adopted. The data show, at cross-linguistic level, a very high K coefficient (between 7.7 and 9.2, according to the language resource). A strong level of agreement specifically for terminal breaks has also been recorded. The data thus show that the annotation of the utterances identified in terms of their prosodic breaks is able to capture relevant perceptual facts, and it appears that the proposed coding scheme can be applied in a highly replicable way.",Evaluation of Consensus on the Annotation of Prosodic Breaks in the Romance Corpus of Spontaneous Speech ``C-ORAL-ROM'',"C-ORAL-ROM, Integrated Reference Corpora For Spoken Romance Languages, is a multilingual corpus of spontaneous speech delivered within the IST Program. Corpora are tagged with respect to terminal and non terminal prosodic breaks. Terminal breaks are considered the most perceptively relevant cues to determine the utterance boundaries in spontaneous speech resources. The paper presents the evaluation of the inter-annotator agreement accomplished by an institution external to the consortium and shows the level of reliability of the tagging delivered and the annotation scheme adopted. The data show, at cross-linguistic level, a very high K coefficient (between 7.7 and 9.2, according to the language resource). A strong level of agreement specifically for terminal breaks has also been recorded. 
The data thus show that the annotation of the utterances identified in terms of their prosodic breaks is able to capture relevant perceptual facts, and it appears that the proposed coding scheme can be applied in a highly replicable way.",,"Evaluation of Consensus on the Annotation of Prosodic Breaks in the Romance Corpus of Spontaneous Speech ``C-ORAL-ROM''. C-ORAL-ROM, Integrated Reference Corpora For Spoken Romance Languages, is a multilingual corpus of spontaneous speech delivered within the IST Program. Corpora are tagged with respect to terminal and non terminal prosodic breaks. Terminal breaks are considered the most perceptively relevant cues to determine the utterance boundaries in spontaneous speech resources. The paper presents the evaluation of the inter-annotator agreement accomplished by an institution external to the consortium and shows the level of reliability of the tagging delivered and the annotation scheme adopted. The data show, at cross-linguistic level, a very high K coefficient (between 7.7 and 9.2, according to the language resource). A strong level of agreement specifically for terminal breaks has also been recorded. The data thus show that the annotation of the utterances identified in terms of their prosodic breaks is able to capture relevant perceptual facts, and it appears that the proposed coding scheme can be applied in a highly replicable way.",2004
quirk-etal-2015-language,https://aclanthology.org/P15-1085,0,,,,,,,"Language to Code: Learning Semantic Parsers for If-This-Then-That Recipes. Using natural language to write programs is a touchstone problem for computational linguistics. We present an approach that learns to map natural-language descriptions of simple ""if-then"" rules to executable code. By training and testing on a large corpus of naturally-occurring programs (called ""recipes"") and their natural language descriptions, we demonstrate the ability to effectively map language to code. We compare a number of semantic parsing approaches on the highly noisy training data collected from ordinary users, and find that loosely synchronous systems perform best.",Language to Code: Learning Semantic Parsers for If-This-Then-That Recipes,"Using natural language to write programs is a touchstone problem for computational linguistics. We present an approach that learns to map natural-language descriptions of simple ""if-then"" rules to executable code. By training and testing on a large corpus of naturally-occurring programs (called ""recipes"") and their natural language descriptions, we demonstrate the ability to effectively map language to code. We compare a number of semantic parsing approaches on the highly noisy training data collected from ordinary users, and find that loosely synchronous systems perform best.",Language to Code: Learning Semantic Parsers for If-This-Then-That Recipes,"Using natural language to write programs is a touchstone problem for computational linguistics. We present an approach that learns to map natural-language descriptions of simple ""if-then"" rules to executable code. By training and testing on a large corpus of naturally-occurring programs (called ""recipes"") and their natural language descriptions, we demonstrate the ability to effectively map language to code. We compare a number of semantic parsing approaches on the highly noisy training data collected from ordinary users, and find that loosely synchronous systems perform best.",The authors would like to thank William Dolan and the anonymous reviewers for their helpful advice and suggestions.,"Language to Code: Learning Semantic Parsers for If-This-Then-That Recipes. Using natural language to write programs is a touchstone problem for computational linguistics. We present an approach that learns to map natural-language descriptions of simple ""if-then"" rules to executable code. By training and testing on a large corpus of naturally-occurring programs (called ""recipes"") and their natural language descriptions, we demonstrate the ability to effectively map language to code. We compare a number of semantic parsing approaches on the highly noisy training data collected from ordinary users, and find that loosely synchronous systems perform best.",2015
sakaji-etal-2019-financial,https://aclanthology.org/W19-5507,0,,,,finance,,,"Financial Text Data Analytics Framework for Business Confidence Indices and Inter-Industry Relations. In this paper, we propose a novel framework for analyzing inter-industry relations using the contact histories of local banks. Contact histories are data recorded when employees communicate with customers. By analyzing contact histories, we can determine business confidence levels in the local region and analyze inter-industry relations using industrial data that is attached to the contact history. However, it is often difficult for bankers to create analysis programs. Therefore, we propose a banker-friendly inter-industry relations analysis framework. In this study, we generated regional business confidence indices and used them to analyze inter-industry relations.",Financial Text Data Analytics Framework for Business Confidence Indices and Inter-Industry Relations,"In this paper, we propose a novel framework for analyzing inter-industry relations using the contact histories of local banks. Contact histories are data recorded when employees communicate with customers. By analyzing contact histories, we can determine business confidence levels in the local region and analyze inter-industry relations using industrial data that is attached to the contact history. However, it is often difficult for bankers to create analysis programs. Therefore, we propose a banker-friendly inter-industry relations analysis framework. In this study, we generated regional business confidence indices and used them to analyze inter-industry relations.",Financial Text Data Analytics Framework for Business Confidence Indices and Inter-Industry Relations,"In this paper, we propose a novel framework for analyzing inter-industry relations using the contact histories of local banks. Contact histories are data recorded when employees communicate with customers. By analyzing contact histories, we can determine business confidence levels in the local region and analyze inter-industry relations using industrial data that is attached to the contact history. However, it is often difficult for bankers to create analysis programs. Therefore, we propose a banker-friendly inter-industry relations analysis framework. In this study, we generated regional business confidence indices and used them to analyze inter-industry relations.",,"Financial Text Data Analytics Framework for Business Confidence Indices and Inter-Industry Relations. In this paper, we propose a novel framework for analyzing inter-industry relations using the contact histories of local banks. Contact histories are data recorded when employees communicate with customers. By analyzing contact histories, we can determine business confidence levels in the local region and analyze inter-industry relations using industrial data that is attached to the contact history. However, it is often difficult for bankers to create analysis programs. Therefore, we propose a banker-friendly inter-industry relations analysis framework. In this study, we generated regional business confidence indices and used them to analyze inter-industry relations.",2019
wu-etal-2021-newsbert-distilling,https://aclanthology.org/2021.findings-emnlp.280,1,,,,peace_justice_and_strong_institutions,,,"NewsBERT: Distilling Pre-trained Language Model for Intelligent News Application. Pre-trained language models (PLMs) like BERT have made great progress in NLP. News articles usually contain rich textual information, and PLMs have the potentials to enhance news text modeling for various intelligent news applications like news recommendation and retrieval. However, most existing PLMs are in huge size with hundreds of millions of parameters. Many online news applications need to serve millions of users with low latency, which poses great challenge to incorporating PLMs in these scenarios. Knowledge distillation techniques can compress a large PLM into a much smaller one and meanwhile keep good performance. However, existing language models are pre-trained and distilled on general corpus like Wikipedia, which have gaps with the news domain and may be suboptimal for news intelligence. In this paper, we propose NewsBERT, which can distill PLMs for efficient and effective news intelligence. In our approach, we design a teacher-student joint learning and distillation framework to collaboratively learn both teacher and student models, where the student model can learn from the learning experience of the teacher model. In addition, we propose a momentum distillation method by incorporating the gradients of teacher model into the update of student model to better transfer the knowledge learned by the teacher model. Thorough experiments on two real-world datasets with three tasks show that NewsBERT can empower various intelligent news applications with much smaller models.",{N}ews{BERT}: Distilling Pre-trained Language Model for Intelligent News Application,"Pre-trained language models (PLMs) like BERT have made great progress in NLP. News articles usually contain rich textual information, and PLMs have the potentials to enhance news text modeling for various intelligent news applications like news recommendation and retrieval. However, most existing PLMs are in huge size with hundreds of millions of parameters. Many online news applications need to serve millions of users with low latency, which poses great challenge to incorporating PLMs in these scenarios. Knowledge distillation techniques can compress a large PLM into a much smaller one and meanwhile keep good performance. However, existing language models are pre-trained and distilled on general corpus like Wikipedia, which have gaps with the news domain and may be suboptimal for news intelligence. In this paper, we propose NewsBERT, which can distill PLMs for efficient and effective news intelligence. In our approach, we design a teacher-student joint learning and distillation framework to collaboratively learn both teacher and student models, where the student model can learn from the learning experience of the teacher model. In addition, we propose a momentum distillation method by incorporating the gradients of teacher model into the update of student model to better transfer the knowledge learned by the teacher model. Thorough experiments on two real-world datasets with three tasks show that NewsBERT can empower various intelligent news applications with much smaller models.",NewsBERT: Distilling Pre-trained Language Model for Intelligent News Application,"Pre-trained language models (PLMs) like BERT have made great progress in NLP. 
News articles usually contain rich textual information, and PLMs have the potentials to enhance news text modeling for various intelligent news applications like news recommendation and retrieval. However, most existing PLMs are in huge size with hundreds of millions of parameters. Many online news applications need to serve millions of users with low latency, which poses great challenge to incorporating PLMs in these scenarios. Knowledge distillation techniques can compress a large PLM into a much smaller one and meanwhile keep good performance. However, existing language models are pre-trained and distilled on general corpus like Wikipedia, which have gaps with the news domain and may be suboptimal for news intelligence. In this paper, we propose NewsBERT, which can distill PLMs for efficient and effective news intelligence. In our approach, we design a teacher-student joint learning and distillation framework to collaboratively learn both teacher and student models, where the student model can learn from the learning experience of the teacher model. In addition, we propose a momentum distillation method by incorporating the gradients of teacher model into the update of student model to better transfer the knowledge learned by the teacher model. Thorough experiments on two real-world datasets with three tasks show that NewsBERT can empower various intelligent news applications with much smaller models.","This work was supported by the National Natural Science Foundation of China under Grant numbers 82090053, 61862002, and Tsinghua-Toyota Research Funds 20213930033.","NewsBERT: Distilling Pre-trained Language Model for Intelligent News Application. Pre-trained language models (PLMs) like BERT have made great progress in NLP. News articles usually contain rich textual information, and PLMs have the potentials to enhance news text modeling for various intelligent news applications like news recommendation and retrieval. However, most existing PLMs are in huge size with hundreds of millions of parameters. Many online news applications need to serve millions of users with low latency, which poses great challenge to incorporating PLMs in these scenarios. Knowledge distillation techniques can compress a large PLM into a much smaller one and meanwhile keep good performance. However, existing language models are pre-trained and distilled on general corpus like Wikipedia, which have gaps with the news domain and may be suboptimal for news intelligence. In this paper, we propose NewsBERT, which can distill PLMs for efficient and effective news intelligence. In our approach, we design a teacher-student joint learning and distillation framework to collaboratively learn both teacher and student models, where the student model can learn from the learning experience of the teacher model. In addition, we propose a momentum distillation method by incorporating the gradients of teacher model into the update of student model to better transfer the knowledge learned by the teacher model. Thorough experiments on two real-world datasets with three tasks show that NewsBERT can empower various intelligent news applications with much smaller models.",2021
bengler-2000-automotive,http://www.lrec-conf.org/proceedings/lrec2000/pdf/312.pdf,0,,,,,,,Automotive Speech-Recognition - Success Conditions Beyond Recognition Rates. From a car-manufacturer's point of view it is very important to integrate evaluation procedures into the MMI development process. Focusing the usability evaluation of speech-input and speech-output systems aspects beyond recognition rates must be fulfilled. Two of these conditions will be discussed based upon user studies conducted in 1999: • Mental-workload and distraction • Learnability,Automotive Speech-Recognition - Success Conditions Beyond Recognition Rates,From a car-manufacturer's point of view it is very important to integrate evaluation procedures into the MMI development process. Focusing the usability evaluation of speech-input and speech-output systems aspects beyond recognition rates must be fulfilled. Two of these conditions will be discussed based upon user studies conducted in 1999: • Mental-workload and distraction • Learnability,Automotive Speech-Recognition - Success Conditions Beyond Recognition Rates,From a car-manufacturer's point of view it is very important to integrate evaluation procedures into the MMI development process. Focusing the usability evaluation of speech-input and speech-output systems aspects beyond recognition rates must be fulfilled. Two of these conditions will be discussed based upon user studies conducted in 1999: • Mental-workload and distraction • Learnability,,Automotive Speech-Recognition - Success Conditions Beyond Recognition Rates. From a car-manufacturer's point of view it is very important to integrate evaluation procedures into the MMI development process. Focusing the usability evaluation of speech-input and speech-output systems aspects beyond recognition rates must be fulfilled. Two of these conditions will be discussed based upon user studies conducted in 1999: • Mental-workload and distraction • Learnability,2000
lin-2002-web,http://www.lrec-conf.org/proceedings/lrec2002/pdf/85.pdf,0,,,,,,,"The Web as a Resource for Question Answering: Perspectives and Challenges. The vast amounts of information readily available on the World Wide Web can be effectively used for question answering in two fundamentally different ways. In the federated approach, techniques for handling semistructured data are applied to access Web sources as if they were databases, allowing large classes of common questions to be answered uniformly. In the distributed approach, large-scale text-processing techniques are used to extract answers directly from unstructured Web documents. Because the Web is orders of magnitude larger than any human-collected corpus, question answering systems can capitalize on its unparalleled levels of data redundancy. Analysis of real-world user questions reveals that the federated and distributed approaches complement each other nicely, suggesting a hybrid approach in future question answering systems.",The Web as a Resource for Question Answering: Perspectives and Challenges,"The vast amounts of information readily available on the World Wide Web can be effectively used for question answering in two fundamentally different ways. In the federated approach, techniques for handling semistructured data are applied to access Web sources as if they were databases, allowing large classes of common questions to be answered uniformly. In the distributed approach, large-scale text-processing techniques are used to extract answers directly from unstructured Web documents. Because the Web is orders of magnitude larger than any human-collected corpus, question answering systems can capitalize on its unparalleled levels of data redundancy. Analysis of real-world user questions reveals that the federated and distributed approaches complement each other nicely, suggesting a hybrid approach in future question answering systems.",The Web as a Resource for Question Answering: Perspectives and Challenges,"The vast amounts of information readily available on the World Wide Web can be effectively used for question answering in two fundamentally different ways. In the federated approach, techniques for handling semistructured data are applied to access Web sources as if they were databases, allowing large classes of common questions to be answered uniformly. In the distributed approach, large-scale text-processing techniques are used to extract answers directly from unstructured Web documents. 
Because the Web is orders of magnitude larger than any human-collected corpus, question answering systems can capitalize on its unparalleled levels of data redundancy. Analysis of real-world user questions reveals that the federated and distributed approaches complement each other nicely, suggesting a hybrid approach in future question answering systems.","I'd like to thank Boris Katz, Greg Marton, and Vineet Sinha for their helpful comments on earlier drafts.","The Web as a Resource for Question Answering: Perspectives and Challenges. The vast amounts of information readily available on the World Wide Web can be effectively used for question answering in two fundamentally different ways. In the federated approach, techniques for handling semistructured data are applied to access Web sources as if they were databases, allowing large classes of common questions to be answered uniformly. In the distributed approach, large-scale text-processing techniques are used to extract answers directly from unstructured Web documents. Because the Web is orders of magnitude larger than any human-collected corpus, question answering systems can capitalize on its unparalleled levels of data redundancy. Analysis of real-world user questions reveals that the federated and distributed approaches complement each other nicely, suggesting a hybrid approach in future question answering systems.",2002
schulte-im-walde-2006-experiments,https://aclanthology.org/J06-2001,0,,,,,,,"Experiments on the Automatic Induction of German Semantic Verb Classes. This article presents clustering experiments on German verbs: A statistical grammar model for German serves as the source for a distributional verb description at the lexical syntax-semantics interface, and the unsupervised clustering algorithm k-means uses the empirical verb properties to perform an automatic induction of verb classes. Various evaluation measures are applied to compare the clustering results to gold standard German semantic verb classes under different criteria. The primary goals of the experiments are (1) to empirically utilize and investigate the well-established relationship between verb meaning and verb behavior within a cluster analysis and (2) to investigate the required technical parameters of a cluster analysis with respect to this specific linguistic task. The clustering methodology is developed on a small-scale verb set and then applied to a larger-scale verb set including 883 German verbs.",Experiments on the Automatic Induction of {G}erman Semantic Verb Classes,"This article presents clustering experiments on German verbs: A statistical grammar model for German serves as the source for a distributional verb description at the lexical syntax-semantics interface, and the unsupervised clustering algorithm k-means uses the empirical verb properties to perform an automatic induction of verb classes. Various evaluation measures are applied to compare the clustering results to gold standard German semantic verb classes under different criteria. The primary goals of the experiments are (1) to empirically utilize and investigate the well-established relationship between verb meaning and verb behavior within a cluster analysis and (2) to investigate the required technical parameters of a cluster analysis with respect to this specific linguistic task. The clustering methodology is developed on a small-scale verb set and then applied to a larger-scale verb set including 883 German verbs.",Experiments on the Automatic Induction of German Semantic Verb Classes,"This article presents clustering experiments on German verbs: A statistical grammar model for German serves as the source for a distributional verb description at the lexical syntax-semantics interface, and the unsupervised clustering algorithm k-means uses the empirical verb properties to perform an automatic induction of verb classes. Various evaluation measures are applied to compare the clustering results to gold standard German semantic verb classes under different criteria. The primary goals of the experiments are (1) to empirically utilize and investigate the well-established relationship between verb meaning and verb behavior within a cluster analysis and (2) to investigate the required technical parameters of a cluster analysis with respect to this specific linguistic task. The clustering methodology is developed on a small-scale verb set and then applied to a larger-scale verb set including 883 German verbs.","The work reported here was performed while the author was a member of the DFG-funded PhD program ""Graduiertenkolleg"" Sprachliche Repräsentationen und ihre Interpretation at the Institute for Natural Language Processing (IMS), University of Stuttgart, Germany. 
Many thanks to Helmut Schmid, Stefan Evert, Frank Keller, Scott McDonald, Alissa Melinger, Chris Brew, Hinrich Schütze, Jonas Kuhn, and the two anonymous reviewers for their valuable comments on previous versions of this article.","Experiments on the Automatic Induction of German Semantic Verb Classes. This article presents clustering experiments on German verbs: A statistical grammar model for German serves as the source for a distributional verb description at the lexical syntax-semantics interface, and the unsupervised clustering algorithm k-means uses the empirical verb properties to perform an automatic induction of verb classes. Various evaluation measures are applied to compare the clustering results to gold standard German semantic verb classes under different criteria. The primary goals of the experiments are (1) to empirically utilize and investigate the well-established relationship between verb meaning and verb behavior within a cluster analysis and (2) to investigate the required technical parameters of a cluster analysis with respect to this specific linguistic task. The clustering methodology is developed on a small-scale verb set and then applied to a larger-scale verb set including 883 German verbs.",2006
turchi-etal-2014-adaptive,https://aclanthology.org/P14-1067,0,,,,,,,"Adaptive Quality Estimation for Machine Translation. The automatic estimation of machine translation (MT) output quality is a hard task in which the selection of the appropriate algorithm and the most predictive features over reasonably sized training sets plays a crucial role. When moving from controlled lab evaluations to real-life scenarios the task becomes even harder. For current MT quality estimation (QE) systems, additional complexity comes from the difficulty to model user and domain changes. Indeed, the instability of the systems with respect to data coming from different distributions calls for adaptive solutions that react to new operating conditions. To tackle this issue we propose an online framework for adaptive QE that targets reactivity and robustness to user and domain changes. Contrastive experiments in different testing conditions involving user and domain changes demonstrate the effectiveness of our approach.",Adaptive Quality Estimation for Machine Translation,"The automatic estimation of machine translation (MT) output quality is a hard task in which the selection of the appropriate algorithm and the most predictive features over reasonably sized training sets plays a crucial role. When moving from controlled lab evaluations to real-life scenarios the task becomes even harder. For current MT quality estimation (QE) systems, additional complexity comes from the difficulty to model user and domain changes. Indeed, the instability of the systems with respect to data coming from different distributions calls for adaptive solutions that react to new operating conditions. To tackle this issue we propose an online framework for adaptive QE that targets reactivity and robustness to user and domain changes. Contrastive experiments in different testing conditions involving user and domain changes demonstrate the effectiveness of our approach.",Adaptive Quality Estimation for Machine Translation,"The automatic estimation of machine translation (MT) output quality is a hard task in which the selection of the appropriate algorithm and the most predictive features over reasonably sized training sets plays a crucial role. When moving from controlled lab evaluations to real-life scenarios the task becomes even harder. For current MT quality estimation (QE) systems, additional complexity comes from the difficulty to model user and domain changes. Indeed, the instability of the systems with respect to data coming from different distributions calls for adaptive solutions that react to new operating conditions. To tackle this issue we propose an online framework for adaptive QE that targets reactivity and robustness to user and domain changes. Contrastive experiments in different testing conditions involving user and domain changes demonstrate the effectiveness of our approach.",This work has been partially supported by the ECfunded project MateCat (ICT-2011.4.2-287688).,"Adaptive Quality Estimation for Machine Translation. The automatic estimation of machine translation (MT) output quality is a hard task in which the selection of the appropriate algorithm and the most predictive features over reasonably sized training sets plays a crucial role. When moving from controlled lab evaluations to real-life scenarios the task becomes even harder. For current MT quality estimation (QE) systems, additional complexity comes from the difficulty to model user and domain changes. 
Indeed, the instability of the systems with respect to data coming from different distributions calls for adaptive solutions that react to new operating conditions. To tackle this issue we propose an online framework for adaptive QE that targets reactivity and robustness to user and domain changes. Contrastive experiments in different testing conditions involving user and domain changes demonstrate the effectiveness of our approach.",2014
kongthon-etal-2011-semantic,https://aclanthology.org/W11-3106,0,,,,business_use,,,A Semantic Based Question Answering System for Thailand Tourism Information. This paper reports our ongoing research work to create a semantic based question answering system for Thailand tourism information. Our proposed system focuses on mapping expressions in Thai natural language into ontology query language (SPARQL).,A Semantic Based Question Answering System for {T}hailand Tourism Information,This paper reports our ongoing research work to create a semantic based question answering system for Thailand tourism information. Our proposed system focuses on mapping expressions in Thai natural language into ontology query language (SPARQL).,A Semantic Based Question Answering System for Thailand Tourism Information,This paper reports our ongoing research work to create a semantic based question answering system for Thailand tourism information. Our proposed system focuses on mapping expressions in Thai natural language into ontology query language (SPARQL).,,A Semantic Based Question Answering System for Thailand Tourism Information. This paper reports our ongoing research work to create a semantic based question answering system for Thailand tourism information. Our proposed system focuses on mapping expressions in Thai natural language into ontology query language (SPARQL).,2011
schatzmann-etal-2007-agenda,https://aclanthology.org/N07-2038,0,,,,,,,"Agenda-Based User Simulation for Bootstrapping a POMDP Dialogue System. This paper investigates the problem of bootstrapping a statistical dialogue manager without access to training data and proposes a new probabilistic agenda-based method for simulating user behaviour. In experiments with a statistical POMDP dialogue system, the simulator was realistic enough to successfully test the prototype system and train a dialogue policy. An extensive study with human subjects showed that the learned policy was highly competitive, with task completion rates above 90%.",Agenda-Based User Simulation for Bootstrapping a {POMDP} Dialogue System,"This paper investigates the problem of bootstrapping a statistical dialogue manager without access to training data and proposes a new probabilistic agenda-based method for simulating user behaviour. In experiments with a statistical POMDP dialogue system, the simulator was realistic enough to successfully test the prototype system and train a dialogue policy. An extensive study with human subjects showed that the learned policy was highly competitive, with task completion rates above 90%.",Agenda-Based User Simulation for Bootstrapping a POMDP Dialogue System,"This paper investigates the problem of bootstrapping a statistical dialogue manager without access to training data and proposes a new probabilistic agenda-based method for simulating user behaviour. In experiments with a statistical POMDP dialogue system, the simulator was realistic enough to successfully test the prototype system and train a dialogue policy. An extensive study with human subjects showed that the learned policy was highly competitive, with task completion rates above 90%.",,"Agenda-Based User Simulation for Bootstrapping a POMDP Dialogue System. This paper investigates the problem of bootstrapping a statistical dialogue manager without access to training data and proposes a new probabilistic agenda-based method for simulating user behaviour. In experiments with a statistical POMDP dialogue system, the simulator was realistic enough to successfully test the prototype system and train a dialogue policy. An extensive study with human subjects showed that the learned policy was highly competitive, with task completion rates above 90%.",2007
hazem-hernandez-2019-meta,https://aclanthology.org/R19-1055,0,,,,,,,"Meta-Embedding Sentence Representation for Textual Similarity. Word embedding models are now widely used in most NLP applications. Despite their effectiveness, there is no clear evidence about the choice of the most appropriate model. It often depends on the nature of the task and on the quality and size of the used data sets. This remains true for bottom-up sentence embedding models. However, no straightforward investigation has been conducted so far. In this paper, we propose a systematic study of the impact of the main word embedding models on sentence representation. By contrasting in-domain and pre-trained embedding models, we show under which conditions they can be jointly used for bottom-up sentence embeddings. Finally, we propose the first bottom-up meta-embedding representation at the sentence level for textual similarity. Significant improvements are observed in several tasks including question-to-question similarity, paraphrasing and next utterance ranking.",Meta-Embedding Sentence Representation for Textual Similarity,"Word embedding models are now widely used in most NLP applications. Despite their effectiveness, there is no clear evidence about the choice of the most appropriate model. It often depends on the nature of the task and on the quality and size of the used data sets. This remains true for bottom-up sentence embedding models. However, no straightforward investigation has been conducted so far. In this paper, we propose a systematic study of the impact of the main word embedding models on sentence representation. By contrasting in-domain and pre-trained embedding models, we show under which conditions they can be jointly used for bottom-up sentence embeddings. Finally, we propose the first bottom-up meta-embedding representation at the sentence level for textual similarity. Significant improvements are observed in several tasks including question-to-question similarity, paraphrasing and next utterance ranking.",Meta-Embedding Sentence Representation for Textual Similarity,"Word embedding models are now widely used in most NLP applications. Despite their effectiveness, there is no clear evidence about the choice of the most appropriate model. It often depends on the nature of the task and on the quality and size of the used data sets. This remains true for bottom-up sentence embedding models. However, no straightforward investigation has been conducted so far. In this paper, we propose a systematic study of the impact of the main word embedding models on sentence representation. By contrasting in-domain and pre-trained embedding models, we show under which conditions they can be jointly used for bottom-up sentence embeddings. Finally, we propose the first bottom-up meta-embedding representation at the sentence level for textual similarity. Significant improvements are observed in several tasks including question-to-question similarity, paraphrasing and next utterance ranking.",,"Meta-Embedding Sentence Representation for Textual Similarity. Word embedding models are now widely used in most NLP applications. Despite their effectiveness, there is no clear evidence about the choice of the most appropriate model. It often depends on the nature of the task and on the quality and size of the used data sets. This remains true for bottom-up sentence embedding models. However, no straightforward investigation has been conducted so far. 
In this paper, we propose a systematic study of the impact of the main word embedding models on sentence representation. By contrasting in-domain and pre-trained embedding models, we show under which conditions they can be jointly used for bottom-up sentence embeddings. Finally, we propose the first bottom-up meta-embedding representation at the sentence level for textual similarity. Significant improvements are observed in several tasks including question-to-question similarity, paraphrasing and next utterance ranking.",2019
carlberger-etal-2001-improving,https://aclanthology.org/W01-1703,0,,,,,,,"Improving Precision in Information Retrieval for Swedish using Stemming. We will in this paper present an evaluation of how much stemming improves precision in information retrieval for Swedish texts. To perform this, we built an information retrieval tool with optional stemming and created a tagged corpus in Swedish. We know that stemming in information retrieval for English, Dutch and Slovenian gives better precision the more inflecting the language is, but precision depends also on query length and document length. Our final results were that stemming improved both precision and recall by 15 and 18 percent, respectively, for Swedish texts having an average length of 181 words.",Improving Precision in Information Retrieval for {S}wedish using Stemming,"We will in this paper present an evaluation of how much stemming improves precision in information retrieval for Swedish texts. To perform this, we built an information retrieval tool with optional stemming and created a tagged corpus in Swedish. We know that stemming in information retrieval for English, Dutch and Slovenian gives better precision the more inflecting the language is, but precision depends also on query length and document length. Our final results were that stemming improved both precision and recall by 15 and 18 percent, respectively, for Swedish texts having an average length of 181 words.",Improving Precision in Information Retrieval for Swedish using Stemming,"We will in this paper present an evaluation of how much stemming improves precision in information retrieval for Swedish texts. To perform this, we built an information retrieval tool with optional stemming and created a tagged corpus in Swedish. We know that stemming in information retrieval for English, Dutch and Slovenian gives better precision the more inflecting the language is, but precision depends also on query length and document length. Our final results were that stemming improved both precision and recall by 15 and 18 percent, respectively, for Swedish texts having an average length of 181 words.",We would like to thank the search engine team and specifically Jesper Ekhall at Euroseek AB for their support with the integration of our stemming algorithms in their search engine and allowing us to use their search engine in our experiments.,"Improving Precision in Information Retrieval for Swedish using Stemming. We will in this paper present an evaluation of how much stemming improves precision in information retrieval for Swedish texts. To perform this, we built an information retrieval tool with optional stemming and created a tagged corpus in Swedish. We know that stemming in information retrieval for English, Dutch and Slovenian gives better precision the more inflecting the language is, but precision depends also on query length and document length. Our final results were that stemming improved both precision and recall by 15 and 18 percent, respectively, for Swedish texts having an average length of 181 words.",2001
dredze-crammer-2008-active,https://aclanthology.org/P08-2059,0,,,,,,,Active Learning with Confidence. Active learning is a machine learning approach to achieving high-accuracy with a small amount of labels by letting the learning algorithm choose instances to be labeled. Most of previous approaches based on discriminative learning use the margin for choosing instances. We present a method for incorporating confidence into the margin by using a newly introduced online learning algorithm and show empirically that confidence improves active learning.,Active Learning with Confidence,Active learning is a machine learning approach to achieving high-accuracy with a small amount of labels by letting the learning algorithm choose instances to be labeled. Most of previous approaches based on discriminative learning use the margin for choosing instances. We present a method for incorporating confidence into the margin by using a newly introduced online learning algorithm and show empirically that confidence improves active learning.,Active Learning with Confidence,Active learning is a machine learning approach to achieving high-accuracy with a small amount of labels by letting the learning algorithm choose instances to be labeled. Most of previous approaches based on discriminative learning use the margin for choosing instances. We present a method for incorporating confidence into the margin by using a newly introduced online learning algorithm and show empirically that confidence improves active learning.,,Active Learning with Confidence. Active learning is a machine learning approach to achieving high-accuracy with a small amount of labels by letting the learning algorithm choose instances to be labeled. Most of previous approaches based on discriminative learning use the margin for choosing instances. We present a method for incorporating confidence into the margin by using a newly introduced online learning algorithm and show empirically that confidence improves active learning.,2008
ji-etal-2020-span,https://aclanthology.org/2020.coling-main.8,0,,,,,,,"Span-based Joint Entity and Relation Extraction with Attention-based Span-specific and Contextual Semantic Representations. Span-based joint extraction models have shown their efficiency on entity recognition and relation extraction. These models regard text spans as candidate entities and span tuples as candidate relation tuples. Span semantic representations are shared in both entity recognition and relation extraction, while existing models cannot well capture semantics of these candidate entities and relations. To address these problems, we introduce a span-based joint extraction framework with attention-based semantic representations. Specially, attentions are utilized to calculate semantic representations, including span-specific and contextual ones. We further investigate effects of four attention variants in generating contextual semantic representations. Experiments show that our model outperforms previous systems and achieves state-of-the-art results on ACE2005, CoNLL2004 and ADE.",Span-based Joint Entity and Relation Extraction with Attention-based Span-specific and Contextual Semantic Representations,"Span-based joint extraction models have shown their efficiency on entity recognition and relation extraction. These models regard text spans as candidate entities and span tuples as candidate relation tuples. Span semantic representations are shared in both entity recognition and relation extraction, while existing models cannot well capture semantics of these candidate entities and relations. To address these problems, we introduce a span-based joint extraction framework with attention-based semantic representations. Specially, attentions are utilized to calculate semantic representations, including span-specific and contextual ones. We further investigate effects of four attention variants in generating contextual semantic representations. Experiments show that our model outperforms previous systems and achieves state-of-the-art results on ACE2005, CoNLL2004 and ADE.",Span-based Joint Entity and Relation Extraction with Attention-based Span-specific and Contextual Semantic Representations,"Span-based joint extraction models have shown their efficiency on entity recognition and relation extraction. These models regard text spans as candidate entities and span tuples as candidate relation tuples. Span semantic representations are shared in both entity recognition and relation extraction, while existing models cannot well capture semantics of these candidate entities and relations. To address these problems, we introduce a span-based joint extraction framework with attention-based semantic representations. Specially, attentions are utilized to calculate semantic representations, including span-specific and contextual ones. We further investigate effects of four attention variants in generating contextual semantic representations. Experiments show that our model outperforms previous systems and achieves state-of-the-art results on ACE2005, CoNLL2004 and ADE.",The work is supported by the National Key Research and Development Program of China (2018YFB1004502) and the National Natural Science Foundation of China (61532001).,"Span-based Joint Entity and Relation Extraction with Attention-based Span-specific and Contextual Semantic Representations. Span-based joint extraction models have shown their efficiency on entity recognition and relation extraction. 
These models regard text spans as candidate entities and span tuples as candidate relation tuples. Span semantic representations are shared in both entity recognition and relation extraction, while existing models cannot well capture semantics of these candidate entities and relations. To address these problems, we introduce a span-based joint extraction framework with attention-based semantic representations. Specially, attentions are utilized to calculate semantic representations, including span-specific and contextual ones. We further investigate effects of four attention variants in generating contextual semantic representations. Experiments show that our model outperforms previous systems and achieves state-of-the-art results on ACE2005, CoNLL2004 and ADE.",2020
poelitz-bartz-2014-enhancing,https://aclanthology.org/W14-0606,0,,,,,,,"Enhancing the possibilities of corpus-based investigations: Word sense disambiguation on query results of large text corpora. Common large digital text corpora do not distinguish between different meanings of word forms, intense manual effort has to be done for disambiguation tasks when querying for homonyms or polysemes. To improve this situation, we ran experiments with automatic word sense disambiguation methods operating directly on the output of the corpus query. In this paper, we present experiments with topic models to cluster search result snippets in order to separate occurrences of homonymous or polysemous queried words by their meanings.",Enhancing the possibilities of corpus-based investigations: Word sense disambiguation on query results of large text corpora,"Common large digital text corpora do not distinguish between different meanings of word forms, intense manual effort has to be done for disambiguation tasks when querying for homonyms or polysemes. To improve this situation, we ran experiments with automatic word sense disambiguation methods operating directly on the output of the corpus query. In this paper, we present experiments with topic models to cluster search result snippets in order to separate occurrences of homonymous or polysemous queried words by their meanings.",Enhancing the possibilities of corpus-based investigations: Word sense disambiguation on query results of large text corpora,"Common large digital text corpora do not distinguish between different meanings of word forms, intense manual effort has to be done for disambiguation tasks when querying for homonyms or polysemes. To improve this situation, we ran experiments with automatic word sense disambiguation methods operating directly on the output of the corpus query. In this paper, we present experiments with topic models to cluster search result snippets in order to separate occurrences of homonymous or polysemous queried words by their meanings.",,"Enhancing the possibilities of corpus-based investigations: Word sense disambiguation on query results of large text corpora. Common large digital text corpora do not distinguish between different meanings of word forms, intense manual effort has to be done for disambiguation tasks when querying for homonyms or polysemes. To improve this situation, we ran experiments with automatic word sense disambiguation methods operating directly on the output of the corpus query. In this paper, we present experiments with topic models to cluster search result snippets in order to separate occurrences of homonymous or polysemous queried words by their meanings.",2014
rondeau-hazen-2018-systematic,https://aclanthology.org/W18-2602,0,,,,,,,"Systematic Error Analysis of the Stanford Question Answering Dataset. We analyzed the outputs of multiple question answering (QA) models applied to the Stanford Question Answering Dataset (SQuAD) to identify the core challenges for QA systems on this data set. Through an iterative process, challenging aspects were hypothesized through qualitative analysis of the common error cases. A classifier was then constructed to predict whether SQuAD test examples were likely to be difficult for systems to answer based on features associated with the hypothesized aspects. The classifier's performance was used to accept or reject each aspect as an indicator of difficulty. With this approach, we ensured that our hypotheses were systematically tested and not simply accepted based on our pre-existing biases. Our explanations are not accepted based on human evaluation of individual examples. This process also enabled us to identify the primary QA strategy learned by the models, i.e., systems determined the acceptable answer type for a question and then selected the acceptable answer span of that type containing the highest density of words present in the question within its local vicinity in the passage.",Systematic Error Analysis of the {S}tanford Question Answering Dataset,"We analyzed the outputs of multiple question answering (QA) models applied to the Stanford Question Answering Dataset (SQuAD) to identify the core challenges for QA systems on this data set. Through an iterative process, challenging aspects were hypothesized through qualitative analysis of the common error cases. A classifier was then constructed to predict whether SQuAD test examples were likely to be difficult for systems to answer based on features associated with the hypothesized aspects. The classifier's performance was used to accept or reject each aspect as an indicator of difficulty. With this approach, we ensured that our hypotheses were systematically tested and not simply accepted based on our pre-existing biases. Our explanations are not accepted based on human evaluation of individual examples. This process also enabled us to identify the primary QA strategy learned by the models, i.e., systems determined the acceptable answer type for a question and then selected the acceptable answer span of that type containing the highest density of words present in the question within its local vicinity in the passage.",Systematic Error Analysis of the Stanford Question Answering Dataset,"We analyzed the outputs of multiple question answering (QA) models applied to the Stanford Question Answering Dataset (SQuAD) to identify the core challenges for QA systems on this data set. Through an iterative process, challenging aspects were hypothesized through qualitative analysis of the common error cases. A classifier was then constructed to predict whether SQuAD test examples were likely to be difficult for systems to answer based on features associated with the hypothesized aspects. The classifier's performance was used to accept or reject each aspect as an indicator of difficulty. With this approach, we ensured that our hypotheses were systematically tested and not simply accepted based on our pre-existing biases. Our explanations are not accepted based on human evaluation of individual examples. 
This process also enabled us to identify the primary QA strategy learned by the models, i.e., systems determined the acceptable answer type for a question and then selected the acceptable answer span of that type containing the highest density of words present in the question within its local vicinity in the passage.","We would like to thank Eric Lin, Peter Potash, Yadollah Yaghoobzadeh, and Kaheer Suleman for their feedback and helpful comments. We also thanks the anonymous reviewers for their comments.","Systematic Error Analysis of the Stanford Question Answering Dataset. We analyzed the outputs of multiple question answering (QA) models applied to the Stanford Question Answering Dataset (SQuAD) to identify the core challenges for QA systems on this data set. Through an iterative process, challenging aspects were hypothesized through qualitative analysis of the common error cases. A classifier was then constructed to predict whether SQuAD test examples were likely to be difficult for systems to answer based on features associated with the hypothesized aspects. The classifier's performance was used to accept or reject each aspect as an indicator of difficulty. With this approach, we ensured that our hypotheses were systematically tested and not simply accepted based on our pre-existing biases. Our explanations are not accepted based on human evaluation of individual examples. This process also enabled us to identify the primary QA strategy learned by the models, i.e., systems determined the acceptable answer type for a question and then selected the acceptable answer span of that type containing the highest density of words present in the question within its local vicinity in the passage.",2018
wang-etal-2021-fine-grained,https://aclanthology.org/2021.findings-emnlp.9,0,,,,,,,"Fine-grained Semantic Alignment Network for Weakly Supervised Temporal Language Grounding. Temporal language grounding (TLG) aims to localize a video segment in an untrimmed video based on a natural language description. To alleviate the expensive cost of manual annotations for temporal boundary labels, we are dedicated to the weakly supervised setting, where only video-level descriptions are provided for training. Most of the existing weakly supervised methods generate a candidate segment set and learn cross-modal alignment through a MIL-based framework. However, the temporal structure of the video as well as the complicated semantics in the sentence are lost during the learning. In this work, we propose a novel candidatefree framework: Fine-grained Semantic Alignment Network (FSAN), for weakly supervised TLG. Instead of view the sentence and candidate moments as a whole, FSAN learns token-by-clip cross-modal semantic alignment by an iterative cross-modal interaction module, generates a fine-grained cross-modal semantic alignment map, and performs grounding directly on top of the map. Extensive experiments are conducted on two widelyused benchmarks: ActivityNet-Captions, and DiDeMo, where our FSAN achieves state-ofthe-art performance.",Fine-grained Semantic Alignment Network for Weakly Supervised Temporal Language Grounding,"Temporal language grounding (TLG) aims to localize a video segment in an untrimmed video based on a natural language description. To alleviate the expensive cost of manual annotations for temporal boundary labels, we are dedicated to the weakly supervised setting, where only video-level descriptions are provided for training. Most of the existing weakly supervised methods generate a candidate segment set and learn cross-modal alignment through a MIL-based framework. However, the temporal structure of the video as well as the complicated semantics in the sentence are lost during the learning. In this work, we propose a novel candidatefree framework: Fine-grained Semantic Alignment Network (FSAN), for weakly supervised TLG. Instead of view the sentence and candidate moments as a whole, FSAN learns token-by-clip cross-modal semantic alignment by an iterative cross-modal interaction module, generates a fine-grained cross-modal semantic alignment map, and performs grounding directly on top of the map. Extensive experiments are conducted on two widelyused benchmarks: ActivityNet-Captions, and DiDeMo, where our FSAN achieves state-ofthe-art performance.",Fine-grained Semantic Alignment Network for Weakly Supervised Temporal Language Grounding,"Temporal language grounding (TLG) aims to localize a video segment in an untrimmed video based on a natural language description. To alleviate the expensive cost of manual annotations for temporal boundary labels, we are dedicated to the weakly supervised setting, where only video-level descriptions are provided for training. Most of the existing weakly supervised methods generate a candidate segment set and learn cross-modal alignment through a MIL-based framework. However, the temporal structure of the video as well as the complicated semantics in the sentence are lost during the learning. In this work, we propose a novel candidatefree framework: Fine-grained Semantic Alignment Network (FSAN), for weakly supervised TLG. 
Instead of viewing the sentence and candidate moments as a whole, FSAN learns token-by-clip cross-modal semantic alignment by an iterative cross-modal interaction module, generates a fine-grained cross-modal semantic alignment map, and performs grounding directly on top of the map. Extensive experiments are conducted on two widely used benchmarks: ActivityNet-Captions, and DiDeMo, where our FSAN achieves state-of-the-art performance.",This work was supported by the National Natural Science Foundation of China under Contract 61632019.,"Fine-grained Semantic Alignment Network for Weakly Supervised Temporal Language Grounding. Temporal language grounding (TLG) aims to localize a video segment in an untrimmed video based on a natural language description. To alleviate the expensive cost of manual annotations for temporal boundary labels, we are dedicated to the weakly supervised setting, where only video-level descriptions are provided for training. Most of the existing weakly supervised methods generate a candidate segment set and learn cross-modal alignment through a MIL-based framework. However, the temporal structure of the video as well as the complicated semantics in the sentence are lost during the learning. In this work, we propose a novel candidate-free framework: Fine-grained Semantic Alignment Network (FSAN), for weakly supervised TLG. Instead of viewing the sentence and candidate moments as a whole, FSAN learns token-by-clip cross-modal semantic alignment by an iterative cross-modal interaction module, generates a fine-grained cross-modal semantic alignment map, and performs grounding directly on top of the map. Extensive experiments are conducted on two widely used benchmarks: ActivityNet-Captions, and DiDeMo, where our FSAN achieves state-of-the-art performance.",2021
zeng-etal-2021-gene,https://aclanthology.org/2021.textgraphs-1.5,0,,,,,,,"GENE: Global Event Network Embedding. Current methods for event representation ignore related events in a corpus-level global context. For a deep and comprehensive understanding of complex events, we introduce a new task, Event Network Embedding, which aims to represent events by capturing the connections among events. We propose a novel framework, Global Event Network Embedding (GENE), that encodes the event network with a multi-view graph encoder while preserving the graph topology and node semantics. The graph encoder is trained by minimizing both structural and semantic losses. We develop a new series of structured probing tasks, and show that our approach effectively outperforms baseline models on node typing, argument role classification, and event coreference resolution. 1",{GENE}: Global Event Network Embedding,"Current methods for event representation ignore related events in a corpus-level global context. For a deep and comprehensive understanding of complex events, we introduce a new task, Event Network Embedding, which aims to represent events by capturing the connections among events. We propose a novel framework, Global Event Network Embedding (GENE), that encodes the event network with a multi-view graph encoder while preserving the graph topology and node semantics. The graph encoder is trained by minimizing both structural and semantic losses. We develop a new series of structured probing tasks, and show that our approach effectively outperforms baseline models on node typing, argument role classification, and event coreference resolution. 1",GENE: Global Event Network Embedding,"Current methods for event representation ignore related events in a corpus-level global context. For a deep and comprehensive understanding of complex events, we introduce a new task, Event Network Embedding, which aims to represent events by capturing the connections among events. We propose a novel framework, Global Event Network Embedding (GENE), that encodes the event network with a multi-view graph encoder while preserving the graph topology and node semantics. The graph encoder is trained by minimizing both structural and semantic losses. We develop a new series of structured probing tasks, and show that our approach effectively outperforms baseline models on node typing, argument role classification, and event coreference resolution. 1","This research is based upon work supported in part by U.S. DARPA KAIROS Program No. FA8750-19-2-1004, U.S. DARPA AIDA Program No. FA8750-18-2-0014, Air Force No. FA8650-17-C-7715. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.","GENE: Global Event Network Embedding. Current methods for event representation ignore related events in a corpus-level global context. For a deep and comprehensive understanding of complex events, we introduce a new task, Event Network Embedding, which aims to represent events by capturing the connections among events. We propose a novel framework, Global Event Network Embedding (GENE), that encodes the event network with a multi-view graph encoder while preserving the graph topology and node semantics. 
The graph encoder is trained by minimizing both structural and semantic losses. We develop a new series of structured probing tasks, and show that our approach effectively outperforms baseline models on node typing, argument role classification, and event coreference resolution.",2021
bagherbeygi-shamsfard-2012-corpus,http://www.lrec-conf.org/proceedings/lrec2012/pdf/1013_Paper.pdf,0,,,,,,,"Corpus based Semi-Automatic Extraction of Persian Compound Verbs and their Relations. Nowadays, Wordnet is used in natural language processing as one of the major linguistic resources. Having such a resource for Persian language helps researchers in computational linguistics and natural language processing fields to develop more accurate systems with higher performances. In this research, we propose a model for semi-automatic construction of Persian wordnet of verbs. Compound verbs are a very productive structure in Persian and number of compound verbs is much greater than simple verbs in this language This research is aimed at finding the structure of Persian compound verbs and the relations between verb components. The main idea behind developing this system is using the wordnet of other POS categories (here means noun and adjective) to extract Persian compound verbs, their synsets and their relations. This paper focuses on three main tasks: 1.extracting compound verbs 2.extracting verbal synsets and 3.extracting the relations among verbal synsets such as hypernymy, antonymy and cause.",Corpus based Semi-Automatic Extraction of {P}ersian Compound Verbs and their Relations,"Nowadays, Wordnet is used in natural language processing as one of the major linguistic resources. Having such a resource for Persian language helps researchers in computational linguistics and natural language processing fields to develop more accurate systems with higher performances. In this research, we propose a model for semi-automatic construction of Persian wordnet of verbs. Compound verbs are a very productive structure in Persian and number of compound verbs is much greater than simple verbs in this language This research is aimed at finding the structure of Persian compound verbs and the relations between verb components. The main idea behind developing this system is using the wordnet of other POS categories (here means noun and adjective) to extract Persian compound verbs, their synsets and their relations. This paper focuses on three main tasks: 1.extracting compound verbs 2.extracting verbal synsets and 3.extracting the relations among verbal synsets such as hypernymy, antonymy and cause.",Corpus based Semi-Automatic Extraction of Persian Compound Verbs and their Relations,"Nowadays, Wordnet is used in natural language processing as one of the major linguistic resources. Having such a resource for Persian language helps researchers in computational linguistics and natural language processing fields to develop more accurate systems with higher performances. In this research, we propose a model for semi-automatic construction of Persian wordnet of verbs. Compound verbs are a very productive structure in Persian and number of compound verbs is much greater than simple verbs in this language This research is aimed at finding the structure of Persian compound verbs and the relations between verb components. The main idea behind developing this system is using the wordnet of other POS categories (here means noun and adjective) to extract Persian compound verbs, their synsets and their relations. This paper focuses on three main tasks: 1.extracting compound verbs 2.extracting verbal synsets and 3.extracting the relations among verbal synsets such as hypernymy, antonymy and cause.",,"Corpus based Semi-Automatic Extraction of Persian Compound Verbs and their Relations. 
Nowadays, Wordnet is used in natural language processing as one of the major linguistic resources. Having such a resource for the Persian language helps researchers in computational linguistics and natural language processing fields to develop more accurate systems with higher performance. In this research, we propose a model for semi-automatic construction of a Persian wordnet of verbs. Compound verbs are a very productive structure in Persian, and the number of compound verbs is much greater than that of simple verbs in this language. This research is aimed at finding the structure of Persian compound verbs and the relations between verb components. The main idea behind developing this system is using the wordnet of other POS categories (here, noun and adjective) to extract Persian compound verbs, their synsets and their relations. This paper focuses on three main tasks: 1. extracting compound verbs, 2. extracting verbal synsets, and 3. extracting the relations among verbal synsets such as hypernymy, antonymy and cause.",2012
geertzen-etal-2007-multidimensional,https://aclanthology.org/2007.sigdial-1.26,0,,,,,,,"A Multidimensional Approach to Utterance Segmentation and Dialogue Act Classification. In this paper we present a multidimensional approach to utterance segmentation and automatic dialogue act classification. We show that the use of multiple dimensions in distinguishing and annotating units not only supports a more accurate analysis of human communication, but can also help to solve some notorious problems concerning the segmentation of dialogue into functional units. We introduce the use of per-dimension segmentation for dialogue act taxonomies that feature multi-functionality and show that better classification results are obtained when using a separate segmentation for each dimension than when using one segmentation that fits all dimensions. Three machine learning techniques are applied and compared on the task of automatic classification of multiple communicative functions of utterances. The results are encouraging and indicate that communicative functions in important dimensions are easily machine-learnable.",A Multidimensional Approach to Utterance Segmentation and Dialogue Act Classification,"In this paper we present a multidimensional approach to utterance segmentation and automatic dialogue act classification. We show that the use of multiple dimensions in distinguishing and annotating units not only supports a more accurate analysis of human communication, but can also help to solve some notorious problems concerning the segmentation of dialogue into functional units. We introduce the use of per-dimension segmentation for dialogue act taxonomies that feature multi-functionality and show that better classification results are obtained when using a separate segmentation for each dimension than when using one segmentation that fits all dimensions. Three machine learning techniques are applied and compared on the task of automatic classification of multiple communicative functions of utterances. The results are encouraging and indicate that communicative functions in important dimensions are easily machine-learnable.",A Multidimensional Approach to Utterance Segmentation and Dialogue Act Classification,"In this paper we present a multidimensional approach to utterance segmentation and automatic dialogue act classification. We show that the use of multiple dimensions in distinguishing and annotating units not only supports a more accurate analysis of human communication, but can also help to solve some notorious problems concerning the segmentation of dialogue into functional units. We introduce the use of per-dimension segmentation for dialogue act taxonomies that feature multi-functionality and show that better classification results are obtained when using a separate segmentation for each dimension than when using one segmentation that fits all dimensions. Three machine learning techniques are applied and compared on the task of automatic classification of multiple communicative functions of utterances. The results are encouraging and indicate that communicative functions in important dimensions are easily machine-learnable.",,"A Multidimensional Approach to Utterance Segmentation and Dialogue Act Classification. In this paper we present a multidimensional approach to utterance segmentation and automatic dialogue act classification. 
We show that the use of multiple dimensions in distinguishing and annotating units not only supports a more accurate analysis of human communication, but can also help to solve some notorious problems concerning the segmentation of dialogue into functional units. We introduce the use of per-dimension segmentation for dialogue act taxonomies that feature multi-functionality and show that better classification results are obtained when using a separate segmentation for each dimension than when using one segmentation that fits all dimensions. Three machine learning techniques are applied and compared on the task of automatic classification of multiple communicative functions of utterances. The results are encouraging and indicate that communicative functions in important dimensions are easily machine-learnable.",2007
zanzotto-etal-2006-discovering,https://aclanthology.org/P06-1107,0,,,,,,,"Discovering Asymmetric Entailment Relations between Verbs Using Selectional Preferences. In this paper we investigate a novel method to detect asymmetric entailment relations between verbs. Our starting point is the idea that some point-wise verb selectional preferences carry relevant semantic information. Experiments using Word-Net as a gold standard show promising results. Where applicable, our method, used in combination with other approaches, significantly increases the performance of entailment detection. A combined approach including our model improves the AROC of 5% absolute points with respect to standard models.",Discovering Asymmetric Entailment Relations between Verbs Using Selectional Preferences,"In this paper we investigate a novel method to detect asymmetric entailment relations between verbs. Our starting point is the idea that some point-wise verb selectional preferences carry relevant semantic information. Experiments using Word-Net as a gold standard show promising results. Where applicable, our method, used in combination with other approaches, significantly increases the performance of entailment detection. A combined approach including our model improves the AROC of 5% absolute points with respect to standard models.",Discovering Asymmetric Entailment Relations between Verbs Using Selectional Preferences,"In this paper we investigate a novel method to detect asymmetric entailment relations between verbs. Our starting point is the idea that some point-wise verb selectional preferences carry relevant semantic information. Experiments using Word-Net as a gold standard show promising results. Where applicable, our method, used in combination with other approaches, significantly increases the performance of entailment detection. A combined approach including our model improves the AROC of 5% absolute points with respect to standard models.",,"Discovering Asymmetric Entailment Relations between Verbs Using Selectional Preferences. In this paper we investigate a novel method to detect asymmetric entailment relations between verbs. Our starting point is the idea that some point-wise verb selectional preferences carry relevant semantic information. Experiments using Word-Net as a gold standard show promising results. Where applicable, our method, used in combination with other approaches, significantly increases the performance of entailment detection. A combined approach including our model improves the AROC of 5% absolute points with respect to standard models.",2006
guthrie-etal-2008-unsupervised,http://www.lrec-conf.org/proceedings/lrec2008/pdf/866_paper.pdf,0,,,,,,,"An Unsupervised Probabilistic Approach for the Detection of Outliers in Corpora. Many applications of computational linguistics are greatly influenced by the quality of corpora available and as automatically generated corpora continue to play an increasingly common role, it is essential that we not overlook the importance of well-constructed and homogeneous corpora. This paper describes an automatic approach to improving the homogeneity of corpora using an unsupervised method of statistical outlier detection to find documents and segments that do not belong in a corpus. We consider collections of corpora that are homogeneous with respect to topic (i.e. about the same subject), or genre (written for the same audience or from the same source) and use a combination of stylistic and lexical features of the texts to automatically identify pieces of text in these collections that break the homogeneity. These pieces of text that are significantly different from the rest of the corpus are likely to be errors that are out of place and should be removed from the corpus before it is used for other tasks. We evaluate our techniques by running extensive experiments over large artificially constructed corpora that each contain single pieces of text from a different topic, author, or genre than the rest of the collection and measure the accuracy of identifying these pieces of text without the use of training data. We show that when these pieces of text are reasonably large (1,000 words) we can reliably identify them in a corpus.",An Unsupervised Probabilistic Approach for the Detection of Outliers in Corpora,"Many applications of computational linguistics are greatly influenced by the quality of corpora available and as automatically generated corpora continue to play an increasingly common role, it is essential that we not overlook the importance of well-constructed and homogeneous corpora. This paper describes an automatic approach to improving the homogeneity of corpora using an unsupervised method of statistical outlier detection to find documents and segments that do not belong in a corpus. We consider collections of corpora that are homogeneous with respect to topic (i.e. about the same subject), or genre (written for the same audience or from the same source) and use a combination of stylistic and lexical features of the texts to automatically identify pieces of text in these collections that break the homogeneity. These pieces of text that are significantly different from the rest of the corpus are likely to be errors that are out of place and should be removed from the corpus before it is used for other tasks. We evaluate our techniques by running extensive experiments over large artificially constructed corpora that each contain single pieces of text from a different topic, author, or genre than the rest of the collection and measure the accuracy of identifying these pieces of text without the use of training data. We show that when these pieces of text are reasonably large (1,000 words) we can reliably identify them in a corpus.",An Unsupervised Probabilistic Approach for the Detection of Outliers in Corpora,"Many applications of computational linguistics are greatly influenced by the quality of corpora available and as automatically generated corpora continue to play an increasingly common role, it is essential that we not overlook the importance of well-constructed and homogeneous corpora. 
This paper describes an automatic approach to improving the homogeneity of corpora using an unsupervised method of statistical outlier detection to find documents and segments that do not belong in a corpus. We consider collections of corpora that are homogeneous with respect to topic (i.e. about the same subject), or genre (written for the same audience or from the same source) and use a combination of stylistic and lexical features of the texts to automatically identify pieces of text in these collections that break the homogeneity. These pieces of text that are significantly different from the rest of the corpus are likely to be errors that are out of place and should be removed from the corpus before it is used for other tasks. We evaluate our techniques by running extensive experiments over large artificially constructed corpora that each contain single pieces of text from a different topic, author, or genre than the rest of the collection and measure the accuracy of identifying these pieces of text without the use of training data. We show that when these pieces of text are reasonably large (1,000 words) we can reliably identify them in a corpus.",,"An Unsupervised Probabilistic Approach for the Detection of Outliers in Corpora. Many applications of computational linguistics are greatly influenced by the quality of corpora available and as automatically generated corpora continue to play an increasingly common role, it is essential that we not overlook the importance of well-constructed and homogeneous corpora. This paper describes an automatic approach to improving the homogeneity of corpora using an unsupervised method of statistical outlier detection to find documents and segments that do not belong in a corpus. We consider collections of corpora that are homogeneous with respect to topic (i.e. about the same subject), or genre (written for the same audience or from the same source) and use a combination of stylistic and lexical features of the texts to automatically identify pieces of text in these collections that break the homogeneity. These pieces of text that are significantly different from the rest of the corpus are likely to be errors that are out of place and should be removed from the corpus before it is used for other tasks. We evaluate our techniques by running extensive experiments over large artificially constructed corpora that each contain single pieces of text from a different topic, author, or genre than the rest of the collection and measure the accuracy of identifying these pieces of text without the use of training data. We show that when these pieces of text are reasonably large (1,000 words) we can reliably identify them in a corpus.",2008
al-natsheh-etal-2017-udl,https://aclanthology.org/S17-2013,0,,,,,,,"UdL at SemEval-2017 Task 1: Semantic Textual Similarity Estimation of English Sentence Pairs Using Regression Model over Pairwise Features. This paper describes the model UdL we proposed to solve the semantic textual similarity task of SemEval 2017 workshop. The track we participated in was estimating the semantics relatedness of a given set of sentence pairs in English. The best run out of three submitted runs of our model achieved a Pearson correlation score of 0.8004 compared to a hidden human annotation of 250 pairs. We used random forest ensemble learning to map an expandable set of extracted pairwise features into a semantic similarity estimated value bounded between 0 and 5. Most of these features were calculated using word embedding vectors similarity to align Part of Speech (PoS) and Name Entities (NE) tagged tokens of each sentence pair. Among other pairwise features, we experimented a classical tf-idf weighted Bag of Words (BoW) vector model but with character-based range of n-grams instead of words. This sentence vector BoW-based feature gave a relatively high importance value percentage in the feature importances analysis of the ensemble learning.",{U}d{L} at {S}em{E}val-2017 Task 1: Semantic Textual Similarity Estimation of {E}nglish Sentence Pairs Using Regression Model over Pairwise Features,"This paper describes the model UdL we proposed to solve the semantic textual similarity task of SemEval 2017 workshop. The track we participated in was estimating the semantics relatedness of a given set of sentence pairs in English. The best run out of three submitted runs of our model achieved a Pearson correlation score of 0.8004 compared to a hidden human annotation of 250 pairs. We used random forest ensemble learning to map an expandable set of extracted pairwise features into a semantic similarity estimated value bounded between 0 and 5. Most of these features were calculated using word embedding vectors similarity to align Part of Speech (PoS) and Name Entities (NE) tagged tokens of each sentence pair. Among other pairwise features, we experimented a classical tf-idf weighted Bag of Words (BoW) vector model but with character-based range of n-grams instead of words. This sentence vector BoW-based feature gave a relatively high importance value percentage in the feature importances analysis of the ensemble learning.",UdL at SemEval-2017 Task 1: Semantic Textual Similarity Estimation of English Sentence Pairs Using Regression Model over Pairwise Features,"This paper describes the model UdL we proposed to solve the semantic textual similarity task of SemEval 2017 workshop. The track we participated in was estimating the semantics relatedness of a given set of sentence pairs in English. The best run out of three submitted runs of our model achieved a Pearson correlation score of 0.8004 compared to a hidden human annotation of 250 pairs. We used random forest ensemble learning to map an expandable set of extracted pairwise features into a semantic similarity estimated value bounded between 0 and 5. Most of these features were calculated using word embedding vectors similarity to align Part of Speech (PoS) and Name Entities (NE) tagged tokens of each sentence pair. Among other pairwise features, we experimented a classical tf-idf weighted Bag of Words (BoW) vector model but with character-based range of n-grams instead of words. 
This sentence vector BoW-based feature gave a relatively high importance value percentage in the feature importances analysis of the ensemble learning.","We would like to thank ARC6 Auvergne-Rhône-Alpes that funds the current PhD studies of the first author and the program ""Investissements d'Avenir"" ISTEX for funding the post-doctoral position of the second author.","UdL at SemEval-2017 Task 1: Semantic Textual Similarity Estimation of English Sentence Pairs Using Regression Model over Pairwise Features. This paper describes the model UdL we proposed to solve the semantic textual similarity task of SemEval 2017 workshop. The track we participated in was estimating the semantics relatedness of a given set of sentence pairs in English. The best run out of three submitted runs of our model achieved a Pearson correlation score of 0.8004 compared to a hidden human annotation of 250 pairs. We used random forest ensemble learning to map an expandable set of extracted pairwise features into a semantic similarity estimated value bounded between 0 and 5. Most of these features were calculated using word embedding vectors similarity to align Part of Speech (PoS) and Name Entities (NE) tagged tokens of each sentence pair. Among other pairwise features, we experimented a classical tf-idf weighted Bag of Words (BoW) vector model but with character-based range of n-grams instead of words. This sentence vector BoW-based feature gave a relatively high importance value percentage in the feature importances analysis of the ensemble learning.",2017
braschler-etal-2000-evaluation,http://www.lrec-conf.org/proceedings/lrec2000/pdf/70.pdf,0,,,,,,,"The Evaluation of Systems for Cross-language Information Retrieval. We describe the creation of an infrastructure for the testing of cross-language text retrieval systems within the context of the Text REtrieval Conferences (TREC) organised by the US National Institute of Standards and Technology (NIST). The approach adopted and the issues that had to be taken into consideration when building a multilingual test suite and developing appropriate evaluation procedures to test cross-language systems are described. From 2000 on, a cross-language evaluation activity for European languages known as CLEF (Cross-Language Evaluation Forum) will be coordinated in Europe, while TREC will focus on Asian languages. The implications of the move to Europe and the intentions for the future are discussed.",The Evaluation of Systems for Cross-language Information Retrieval,"We describe the creation of an infrastructure for the testing of cross-language text retrieval systems within the context of the Text REtrieval Conferences (TREC) organised by the US National Institute of Standards and Technology (NIST). The approach adopted and the issues that had to be taken into consideration when building a multilingual test suite and developing appropriate evaluation procedures to test cross-language systems are described. From 2000 on, a cross-language evaluation activity for European languages known as CLEF (Cross-Language Evaluation Forum) will be coordinated in Europe, while TREC will focus on Asian languages. The implications of the move to Europe and the intentions for the future are discussed.",The Evaluation of Systems for Cross-language Information Retrieval,"We describe the creation of an infrastructure for the testing of cross-language text retrieval systems within the context of the Text REtrieval Conferences (TREC) organised by the US National Institute of Standards and Technology (NIST). The approach adopted and the issues that had to be taken into consideration when building a multilingual test suite and developing appropriate evaluation procedures to test cross-language systems are described. From 2000 on, a cross-language evaluation activity for European languages known as CLEF (Cross-Language Evaluation Forum) will be coordinated in Europe, while TREC will focus on Asian languages. The implications of the move to Europe and the intentions for the future are discussed.","We gratefully acknowledge the support of all the data providers and copyright holders, and in particular: Newswires: Associated Press, USA; SDA -Schweizerische Depeschenagentur, Switzerland.","The Evaluation of Systems for Cross-language Information Retrieval. We describe the creation of an infrastructure for the testing of cross-language text retrieval systems within the context of the Text REtrieval Conferences (TREC) organised by the US National Institute of Standards and Technology (NIST). The approach adopted and the issues that had to be taken into consideration when building a multilingual test suite and developing appropriate evaluation procedures to test cross-language systems are described. From 2000 on, a cross-language evaluation activity for European languages known as CLEF (Cross-Language Evaluation Forum) will be coordinated in Europe, while TREC will focus on Asian languages. The implications of the move to Europe and the intentions for the future are discussed.",2000
oneill-mctear-1999-object,https://aclanthology.org/E99-1004,0,,,,,,,"An Object-Oriented Approach to the Design of Dialogue Management Functionality. Dialogues may be seen as comprising commonplace routines on the one hand and specialized, task-specific interactions on the other. Object-orientation is an established means of separating the generic from the specialized. The system under discussion combines this object-oriented approach with a self-organizing, mixed-initiative dialogue strategy, raising the possibility of dialogue systems that can be assembled from ready-made components and tailored, specialized components.",An Object-Oriented Approach to the Design of Dialogue Management Functionality,"Dialogues may be seen as comprising commonplace routines on the one hand and specialized, task-specific interactions on the other. Object-orientation is an established means of separating the generic from the specialized. The system under discussion combines this object-oriented approach with a self-organizing, mixed-initiative dialogue strategy, raising the possibility of dialogue systems that can be assembled from ready-made components and tailored, specialized components.",An Object-Oriented Approach to the Design of Dialogue Management Functionality,"Dialogues may be seen as comprising commonplace routines on the one hand and specialized, task-specific interactions on the other. Object-orientation is an established means of separating the generic from the specialized. The system under discussion combines this object-oriented approach with a self-organizing, mixed-initiative dialogue strategy, raising the possibility of dialogue systems that can be assembled from ready-made components and tailored, specialized components.",,"An Object-Oriented Approach to the Design of Dialogue Management Functionality. Dialogues may be seen as comprising commonplace routines on the one hand and specialized, task-specific interactions on the other. Object-orientation is an established means of separating the generic from the specialized. The system under discussion combines this object-oriented approach with a self-organizing, mixed-initiative dialogue strategy, raising the possibility of dialogue systems that can be assembled from ready-made components and tailored, specialized components.",1999
roemmele-etal-2021-answerquest,https://aclanthology.org/2021.eacl-demos.6,0,,,,,,,"AnswerQuest: A System for Generating Question-Answer Items from Multi-Paragraph Documents. One strategy for facilitating reading comprehension is to present information in a question-and-answer format. We demo a system that integrates the tasks of question answering (QA) and question generation (QG) in order to produce Q&A items that convey the content of multi-paragraph documents. We report some experiments for QA and QG that yield improvements on both tasks, and assess how they interact to produce a list of Q&A items for a text. The demo is accessible at qna.sdl.com.",{A}nswer{Q}uest: A System for Generating Question-Answer Items from Multi-Paragraph Documents,"One strategy for facilitating reading comprehension is to present information in a question-and-answer format. We demo a system that integrates the tasks of question answering (QA) and question generation (QG) in order to produce Q&A items that convey the content of multi-paragraph documents. We report some experiments for QA and QG that yield improvements on both tasks, and assess how they interact to produce a list of Q&A items for a text. The demo is accessible at qna.sdl.com.",AnswerQuest: A System for Generating Question-Answer Items from Multi-Paragraph Documents,"One strategy for facilitating reading comprehension is to present information in a question-and-answer format. We demo a system that integrates the tasks of question answering (QA) and question generation (QG) in order to produce Q&A items that convey the content of multi-paragraph documents. We report some experiments for QA and QG that yield improvements on both tasks, and assess how they interact to produce a list of Q&A items for a text. The demo is accessible at qna.sdl.com.",,"AnswerQuest: A System for Generating Question-Answer Items from Multi-Paragraph Documents. One strategy for facilitating reading comprehension is to present information in a question-and-answer format. We demo a system that integrates the tasks of question answering (QA) and question generation (QG) in order to produce Q&A items that convey the content of multi-paragraph documents. We report some experiments for QA and QG that yield improvements on both tasks, and assess how they interact to produce a list of Q&A items for a text. The demo is accessible at qna.sdl.com.",2021
ballard-tinkham-1984-phrase,https://aclanthology.org/J84-2001,0,,,,,,,"A Phrase-Structured Grammatical Framework for Transportable Natural Language Processing. We present methods of dealing with the syntactic problems that arise in the construction of natural language processors that seek to allow users, as opposed to computational linguists, to customize an interface to operate with a new domain of data. In particular, we describe a grammatical formalism, based on augmented phrase-structure rules, which allows a parser to perform many important domain-specific disambiguations by reference to a pre-defined grammar and a collection of auxiliary files produced during an initial knowledge acquisition session with the user. We illustrate the workings of this formalism with examples from the grammar developed for our Layered Domain Class (LDC) system, though similarly motivated systems ought also to benefit from our formalisms. In addition to showing the theoretical advantage of providing many of the fine-tuning capabilities of so-called semantic grammars within the context of a domain-independent grammar, we demonstrate several practical benefits to our approach. The results of three experiments with our grammar and parser are also given.",A Phrase-Structured Grammatical Framework for Transportable Natural Language Processing,"We present methods of dealing with the syntactic problems that arise in the construction of natural language processors that seek to allow users, as opposed to computational linguists, to customize an interface to operate with a new domain of data. In particular, we describe a grammatical formalism, based on augmented phrase-structure rules, which allows a parser to perform many important domain-specific disambiguations by reference to a pre-defined grammar and a collection of auxiliary files produced during an initial knowledge acquisition session with the user. We illustrate the workings of this formalism with examples from the grammar developed for our Layered Domain Class (LDC) system, though similarly motivated systems ought also to benefit from our formalisms. In addition to showing the theoretical advantage of providing many of the fine-tuning capabilities of so-called semantic grammars within the context of a domain-independent grammar, we demonstrate several practical benefits to our approach. The results of three experiments with our grammar and parser are also given.",A Phrase-Structured Grammatical Framework for Transportable Natural Language Processing,"We present methods of dealing with the syntactic problems that arise in the construction of natural language processors that seek to allow users, as opposed to computational linguists, to customize an interface to operate with a new domain of data. In particular, we describe a grammatical formalism, based on augmented phrase-structure rules, which allows a parser to perform many important domain-specific disambiguations by reference to a pre-defined grammar and a collection of auxiliary files produced during an initial knowledge acquisition session with the user. We illustrate the workings of this formalism with examples from the grammar developed for our Layered Domain Class (LDC) system, though similarly motivated systems ought also to benefit from our formalisms. In addition to showing the theoretical advantage of providing many of the fine-tuning capabilities of so-called semantic grammars within the context of a domain-independent grammar, we demonstrate several practical benefits to our approach. 
The results of three experiments with our grammar and parser are also given.",,"A Phrase-Structured Grammatical Framework for Transportable Natural Language Processing. We present methods of dealing with the syntactic problems that arise in the construction of natural language processors that seek to allow users, as opposed to computational linguists, to customize an interface to operate with a new domain of data. In particular, we describe a grammatical formalism, based on augmented phrase-structure rules, which allows a parser to perform many important domain-specific disambiguations by reference to a pre-defined grammar and a collection of auxiliary files produced during an initial knowledge acquisition session with the user. We illustrate the workings of this formalism with examples from the grammar developed for our Layered Domain Class (LDC) system, though similarly motivated systems ought also to benefit from our formalisms. In addition to showing the theoretical advantage of providing many of the fine-tuning capabilities of so-called semantic grammars within the context of a domain-independent grammar, we demonstrate several practical benefits to our approach. The results of three experiments with our grammar and parser are also given.",1984
martschat-etal-2015-analyzing,https://aclanthology.org/N15-3002,0,,,,,,,Analyzing and Visualizing Coreference Resolution Errors. We present a toolkit for coreference resolution error analysis. It implements a recently proposed analysis framework and contains rich components for analyzing and visualizing recall and precision errors.,Analyzing and Visualizing Coreference Resolution Errors,We present a toolkit for coreference resolution error analysis. It implements a recently proposed analysis framework and contains rich components for analyzing and visualizing recall and precision errors.,Analyzing and Visualizing Coreference Resolution Errors,We present a toolkit for coreference resolution error analysis. It implements a recently proposed analysis framework and contains rich components for analyzing and visualizing recall and precision errors.,"This work has been funded by the Klaus Tschira Foundation, Heidelberg, Germany. The first author has been supported by a HITS PhD scholarship.",Analyzing and Visualizing Coreference Resolution Errors. We present a toolkit for coreference resolution error analysis. It implements a recently proposed analysis framework and contains rich components for analyzing and visualizing recall and precision errors.,2015
hoffman-etal-1963-application,https://aclanthology.org/1963.earlymt-1.16,0,,,,,,,Application of decision tables to syntactic analysis. ,Application of decision tables to syntactic analysis,,Application of decision tables to syntactic analysis,,,Application of decision tables to syntactic analysis. ,1963
beltagy-etal-2019-scibert,https://aclanthology.org/D19-1371,1,,,,industry_innovation_infrastructure,,,"SciBERT: A Pretrained Language Model for Scientific Text. Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive. We release SCIBERT, a pretrained language model based on BERT (Devlin et al., 2019) to address the lack of high-quality, large-scale labeled scientific data. SCIBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks. We evaluate on a suite of tasks including sequence tagging, sentence classification and dependency parsing, with datasets from a variety of scientific domains. We demonstrate statistically significant improvements over BERT and achieve new state-of-the-art results on several of these tasks. The code and pretrained models are available at https://github.com/allenai/scibert/.",{S}ci{BERT}: A Pretrained Language Model for Scientific Text,"Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive. We release SCIBERT, a pretrained language model based on BERT (Devlin et al., 2019) to address the lack of high-quality, large-scale labeled scientific data. SCIBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks. We evaluate on a suite of tasks including sequence tagging, sentence classification and dependency parsing, with datasets from a variety of scientific domains. We demonstrate statistically significant improvements over BERT and achieve new state-of-the-art results on several of these tasks. The code and pretrained models are available at https://github.com/allenai/scibert/.",SciBERT: A Pretrained Language Model for Scientific Text,"Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive. We release SCIBERT, a pretrained language model based on BERT (Devlin et al., 2019) to address the lack of high-quality, large-scale labeled scientific data. SCIBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks. We evaluate on a suite of tasks including sequence tagging, sentence classification and dependency parsing, with datasets from a variety of scientific domains. We demonstrate statistically significant improvements over BERT and achieve new state-of-the-art results on several of these tasks. The code and pretrained models are available at https://github.com/allenai/scibert/.","We thank the anonymous reviewers for their comments and suggestions. We also thank Waleed Ammar, Noah Smith, Yoav Goldberg, Daniel King, Doug Downey, and Dan Weld for their helpful discussions and feedback. All experiments were performed on beaker.org and supported in part by credits from Google Cloud.","SciBERT: A Pretrained Language Model for Scientific Text. Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive. We release SCIBERT, a pretrained language model based on BERT (Devlin et al., 2019) to address the lack of high-quality, large-scale labeled scientific data. SCIBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks. 
We evaluate on a suite of tasks including sequence tagging, sentence classification and dependency parsing, with datasets from a variety of scientific domains. We demonstrate statistically significant improvements over BERT and achieve new state-of-the-art results on several of these tasks. The code and pretrained models are available at https://github.com/allenai/scibert/.",2019
fillwock-traum-2018-identification,https://aclanthology.org/L18-1629,0,,,,,,,"Identification of Personal Information Shared in Chat-Oriented Dialogue. We present an analysis of how personal information is shared in chat-oriented dialogue. We develop an annotation scheme, including entity-types, attributes, and values, that can be used to annotate the presence and type of personal information in these dialogues. A collection of attribute types is identified from the annotation of three chat-oriented dialogue corpora and a taxonomy of personal information pertinent to chat-oriented dialogue is presented. We examine similarities and differences in the frequency of specific attributes in the three corpora and observe that there is much overlap between the attribute types which are shared between dialogue participants in these different settings. The work presented here suggests that there is a common set of attribute types that frequently occur within chat-oriented dialogue in general. This resource can be used in the development of chat-oriented dialogue systems by providing common topics that a dialogue system should be able to talk about.",Identification of Personal Information Shared in Chat-Oriented Dialogue,"We present an analysis of how personal information is shared in chat-oriented dialogue. We develop an annotation scheme, including entity-types, attributes, and values, that can be used to annotate the presence and type of personal information in these dialogues. A collection of attribute types is identified from the annotation of three chat-oriented dialogue corpora and a taxonomy of personal information pertinent to chat-oriented dialogue is presented. We examine similarities and differences in the frequency of specific attributes in the three corpora and observe that there is much overlap between the attribute types which are shared between dialogue participants in these different settings. The work presented here suggests that there is a common set of attribute types that frequently occur within chat-oriented dialogue in general. This resource can be used in the development of chat-oriented dialogue systems by providing common topics that a dialogue system should be able to talk about.",Identification of Personal Information Shared in Chat-Oriented Dialogue,"We present an analysis of how personal information is shared in chat-oriented dialogue. We develop an annotation scheme, including entity-types, attributes, and values, that can be used to annotate the presence and type of personal information in these dialogues. A collection of attribute types is identified from the annotation of three chat-oriented dialogue corpora and a taxonomy of personal information pertinent to chat-oriented dialogue is presented. We examine similarities and differences in the frequency of specific attributes in the three corpora and observe that there is much overlap between the attribute types which are shared between dialogue participants in these different settings. The work presented here suggests that there is a common set of attribute types that frequently occur within chat-oriented dialogue in general. This resource can be used in the development of chat-oriented dialogue systems by providing common topics that a dialogue system should be able to talk about.",,"Identification of Personal Information Shared in Chat-Oriented Dialogue. We present an analysis of how personal information is shared in chat-oriented dialogue. 
We develop an annotation scheme, including entity-types, attributes, and values, that can be used to annotate the presence and type of personal information in these dialogues. A collection of attribute types is identified from the annotation of three chat-oriented dialogue corpora and a taxonomy of personal information pertinent to chat-oriented dialogue is presented. We examine similarities and differences in the frequency of specific attributes in the three corpora and observe that there is much overlap between the attribute types which are shared between dialogue participants in these different settings. The work presented here suggests that there is a common set of attribute types that frequently occur within chat-oriented dialogue in general. This resource can be used in the development of chat-oriented dialogue systems by providing common topics that a dialogue system should be able to talk about.",2018
keesing-etal-2020-convolutional,https://aclanthology.org/2020.alta-1.13,0,,,,,,,"Convolutional and Recurrent Neural Networks for Spoken Emotion Recognition. We test four models proposed in the speech emotion recognition (SER) literature on 15 public and academic licensed datasets in speaker-independent cross-validation. Results indicate differences in the performance of the models which is partly dependent on the dataset and features used. We also show that a standard utterance-level feature set still performs competitively with neural models on some datasets. This work serves as a starting point for future model comparisons, in addition to open-sourcing the testing code.",Convolutional and Recurrent Neural Networks for Spoken Emotion Recognition,"We test four models proposed in the speech emotion recognition (SER) literature on 15 public and academic licensed datasets in speaker-independent cross-validation. Results indicate differences in the performance of the models which is partly dependent on the dataset and features used. We also show that a standard utterance-level feature set still performs competitively with neural models on some datasets. This work serves as a starting point for future model comparisons, in addition to open-sourcing the testing code.",Convolutional and Recurrent Neural Networks for Spoken Emotion Recognition,"We test four models proposed in the speech emotion recognition (SER) literature on 15 public and academic licensed datasets in speaker-independent cross-validation. Results indicate differences in the performance of the models which is partly dependent on the dataset and features used. We also show that a standard utterance-level feature set still performs competitively with neural models on some datasets. This work serves as a starting point for future model comparisons, in addition to open-sourcing the testing code.",The authors would like to thank the University of Auckland for funding this research through a PhD scholarship. We would like to thank in particular the School of Computer Science for providing the computer hardware to train and test these models. We would also like to thank the anonymous reviewers who submitted helpful feedback on this paper.,"Convolutional and Recurrent Neural Networks for Spoken Emotion Recognition. We test four models proposed in the speech emotion recognition (SER) literature on 15 public and academic licensed datasets in speaker-independent cross-validation. Results indicate differences in the performance of the models which is partly dependent on the dataset and features used. We also show that a standard utterance-level feature set still performs competitively with neural models on some datasets. This work serves as a starting point for future model comparisons, in addition to open-sourcing the testing code.",2020
kunilovskaya-etal-2021-fiction,https://aclanthology.org/2021.ranlp-1.84,0,,,,,,,Fiction in Russian Translation: A Translationese Study. This paper presents a translationese study based on the parallel data from the Russian National Corpus (RNC). We explored differences between literary texts originally authored in Russian and fiction translated into Russian from 11 languages. The texts are represented with frequency-based features that capture structural and lexical properties of language. Binary classification results indicate that literary translations can be distinguished from non-translations with an accuracy ranging from 82 to 92% depending on the source language and feature set. Multiclass classification confirms that translations from distant languages are more distinct from non-translations than translations from languages that are typologically close to Russian. It also demonstrates that translations from same-family source languages share translationese properties. Structural features return more consistent results than features relying on external resources and capturing lexical properties of texts in both translationese detection and source language identification tasks.,Fiction in {R}ussian Translation: A Translationese Study,This paper presents a translationese study based on the parallel data from the Russian National Corpus (RNC). We explored differences between literary texts originally authored in Russian and fiction translated into Russian from 11 languages. The texts are represented with frequency-based features that capture structural and lexical properties of language. Binary classification results indicate that literary translations can be distinguished from non-translations with an accuracy ranging from 82 to 92% depending on the source language and feature set. Multiclass classification confirms that translations from distant languages are more distinct from non-translations than translations from languages that are typologically close to Russian. It also demonstrates that translations from same-family source languages share translationese properties. Structural features return more consistent results than features relying on external resources and capturing lexical properties of texts in both translationese detection and source language identification tasks.,Fiction in Russian Translation: A Translationese Study,This paper presents a translationese study based on the parallel data from the Russian National Corpus (RNC). We explored differences between literary texts originally authored in Russian and fiction translated into Russian from 11 languages. The texts are represented with frequency-based features that capture structural and lexical properties of language. Binary classification results indicate that literary translations can be distinguished from non-translations with an accuracy ranging from 82 to 92% depending on the source language and feature set. Multiclass classification confirms that translations from distant languages are more distinct from non-translations than translations from languages that are typologically close to Russian. It also demonstrates that translations from same-family source languages share translationese properties. Structural features return more consistent results than features relying on external resources and capturing lexical properties of texts in both translationese detection and source language identification tasks.,,Fiction in Russian Translation: A Translationese Study. 
This paper presents a translationese study based on the parallel data from the Russian National Corpus (RNC). We explored differences between literary texts originally authored in Russian and fiction translated into Russian from 11 languages. The texts are represented with frequency-based features that capture structural and lexical properties of language. Binary classification results indicate that literary translations can be distinguished from non-translations with an accuracy ranging from 82 to 92% depending on the source language and feature set. Multiclass classification confirms that translations from distant languages are more distinct from non-translations than translations from languages that are typologically close to Russian. It also demonstrates that translations from same-family source languages share translationese properties. Structural features return more consistent results than features relying on external resources and capturing lexical properties of texts in both translationese detection and source language identification tasks.,2021
kim-etal-2021-mostly,https://aclanthology.org/2021.naloma-1.9,0,,,,,,,"A (Mostly) Symbolic System for Monotonic Inference with Unscoped Episodic Logical Forms. We implement the formalization of natural logic-like monotonic inference using Unscoped Episodic Logical Forms (ULFs) by Kim et al. (2020). We demonstrate this system's capacity to handle a variety of challenging semantic phenomena using the FraCaS dataset (Cooper et al., 1996). These results give empirical evidence for prior claims that ULF is an appropriate representation to mediate natural logic-like inferences.",A (Mostly) Symbolic System for Monotonic Inference with Unscoped Episodic Logical Forms,"We implement the formalization of natural logic-like monotonic inference using Unscoped Episodic Logical Forms (ULFs) by Kim et al. (2020). We demonstrate this system's capacity to handle a variety of challenging semantic phenomena using the FraCaS dataset (Cooper et al., 1996). These results give empirical evidence for prior claims that ULF is an appropriate representation to mediate natural logic-like inferences.",A (Mostly) Symbolic System for Monotonic Inference with Unscoped Episodic Logical Forms,"We implement the formalization of natural logic-like monotonic inference using Unscoped Episodic Logical Forms (ULFs) by Kim et al. (2020). We demonstrate this system's capacity to handle a variety of challenging semantic phenomena using the FraCaS dataset (Cooper et al., 1996). These results give empirical evidence for prior claims that ULF is an appropriate representation to mediate natural logic-like inferences.","This work was supported by NSF EAGER grant NSF IIS-1908595, DARPA CwC subcontract W911NF-15-1-0542, and a Sproull Graduate Fellowship from the University of Rochester. We are grateful to the anonymous reviewers for their helpful feedback.","A (Mostly) Symbolic System for Monotonic Inference with Unscoped Episodic Logical Forms. We implement the formalization of natural logic-like monotonic inference using Unscoped Episodic Logical Forms (ULFs) by Kim et al. (2020). We demonstrate this system's capacity to handle a variety of challenging semantic phenomena using the FraCaS dataset (Cooper et al., 1996). These results give empirical evidence for prior claims that ULF is an appropriate representation to mediate natural logic-like inferences.",2021
litman-etal-2006-characterizing,https://aclanthology.org/J06-3004,0,,,,,,,"Characterizing and Predicting Corrections in Spoken Dialogue Systems. This article focuses on the analysis and prediction of corrections, defined as turns where a user tries to correct a prior error made by a spoken dialogue system. We describe our labeling procedure of various corrections types and statistical analyses of their features in a corpus collected from a train information spoken dialogue system. We then present results of machinelearning experiments designed to identify user corrections of speech recognition errors. We investigate the predictive power of features automatically computable from the prosody of the turn, the speech recognition process, experimental conditions, and the dialogue history. Our best-performing features reduce classification error from baselines of 25.70-28.99% to 15.72%.",Characterizing and Predicting Corrections in Spoken Dialogue Systems,"This article focuses on the analysis and prediction of corrections, defined as turns where a user tries to correct a prior error made by a spoken dialogue system. We describe our labeling procedure of various corrections types and statistical analyses of their features in a corpus collected from a train information spoken dialogue system. We then present results of machinelearning experiments designed to identify user corrections of speech recognition errors. We investigate the predictive power of features automatically computable from the prosody of the turn, the speech recognition process, experimental conditions, and the dialogue history. Our best-performing features reduce classification error from baselines of 25.70-28.99% to 15.72%.",Characterizing and Predicting Corrections in Spoken Dialogue Systems,"This article focuses on the analysis and prediction of corrections, defined as turns where a user tries to correct a prior error made by a spoken dialogue system. We describe our labeling procedure of various corrections types and statistical analyses of their features in a corpus collected from a train information spoken dialogue system. We then present results of machinelearning experiments designed to identify user corrections of speech recognition errors. We investigate the predictive power of features automatically computable from the prosody of the turn, the speech recognition process, experimental conditions, and the dialogue history. Our best-performing features reduce classification error from baselines of 25.70-28.99% to 15.72%.",Marc Swerts is also affiliated with the University of Antwerp. His research is sponsored by the Netherlands Organisation for Scientific Research (NWO). This work was performed when the authors were at AT&T Labs-Research.,"Characterizing and Predicting Corrections in Spoken Dialogue Systems. This article focuses on the analysis and prediction of corrections, defined as turns where a user tries to correct a prior error made by a spoken dialogue system. We describe our labeling procedure of various corrections types and statistical analyses of their features in a corpus collected from a train information spoken dialogue system. We then present results of machinelearning experiments designed to identify user corrections of speech recognition errors. We investigate the predictive power of features automatically computable from the prosody of the turn, the speech recognition process, experimental conditions, and the dialogue history. 
Our best-performing features reduce classification error from baselines of 25.70-28.99% to 15.72%.",2006
belz-2005-statistical,https://aclanthology.org/W05-1601,0,,,,,,,"Statistical Generation: Three Methods Compared and Evaluated. Statistical NLG has largely meant n-gram modelling which has the considerable advantages of lending robustness to NLG systems, and of making automatic adaptation to new domains from raw corpora possible. On the downside, n-gram models are expensive to use as selection mechanisms and have a built-in bias towards shorter realisations. This paper looks at treebank-training of generators, an alternative method for building statistical models for NLG from raw corpora, and two different ways of using treebank-trained models during generation. Results show that the treebank-trained generators achieve improvements similar to a 2-gram generator over a baseline of random selection. However, the treebank-trained generators achieve this at a much lower cost than the 2-gram generator, and without its strong preference for shorter realisations.",Statistical Generation: Three Methods Compared and Evaluated,"Statistical NLG has largely meant n-gram modelling which has the considerable advantages of lending robustness to NLG systems, and of making automatic adaptation to new domains from raw corpora possible. On the downside, n-gram models are expensive to use as selection mechanisms and have a built-in bias towards shorter realisations. This paper looks at treebank-training of generators, an alternative method for building statistical models for NLG from raw corpora, and two different ways of using treebank-trained models during generation. Results show that the treebank-trained generators achieve improvements similar to a 2-gram generator over a baseline of random selection. However, the treebank-trained generators achieve this at a much lower cost than the 2-gram generator, and without its strong preference for shorter realisations.",Statistical Generation: Three Methods Compared and Evaluated,"Statistical NLG has largely meant n-gram modelling which has the considerable advantages of lending robustness to NLG systems, and of making automatic adaptation to new domains from raw corpora possible. On the downside, n-gram models are expensive to use as selection mechanisms and have a built-in bias towards shorter realisations. This paper looks at treebank-training of generators, an alternative method for building statistical models for NLG from raw corpora, and two different ways of using treebank-trained models during generation. Results show that the treebank-trained generators achieve improvements similar to a 2-gram generator over a baseline of random selection. However, the treebank-trained generators achieve this at a much lower cost than the 2-gram generator, and without its strong preference for shorter realisations.","The research reported in this paper is part of the CoGenT project, an ongoing research project supported under UK EP-SRC Grant GR/S24480/01. Many thanks to John Carroll, Roger Evans and Richard Power, as well as to the anonymous reviewers, for very helpful comments.","Statistical Generation: Three Methods Compared and Evaluated. Statistical NLG has largely meant n-gram modelling which has the considerable advantages of lending robustness to NLG systems, and of making automatic adaptation to new domains from raw corpora possible. On the downside, n-gram models are expensive to use as selection mechanisms and have a built-in bias towards shorter realisations. 
This paper looks at treebank-training of generators, an alternative method for building statistical models for NLG from raw corpora, and two different ways of using treebank-trained models during generation. Results show that the treebank-trained generators achieve improvements similar to a 2-gram generator over a baseline of random selection. However, the treebank-trained generators achieve this at a much lower cost than the 2-gram generator, and without its strong preference for shorter realisations.",2005
bergmair-2009-proposal,https://aclanthology.org/W09-2502,0,,,,,,,"A Proposal on Evaluation Measures for RTE. We outline problems with the interpretation of accuracy in the presence of bias, arguing that the issue is a particularly pressing concern for RTE evaluation. Furthermore, we argue that average precision scores are unsuitable for RTE, and should not be reported. We advocate mutual information as a new evaluation measure that should be reported in addition to accuracy and confidence-weighted score.",A Proposal on Evaluation Measures for {RTE},"We outline problems with the interpretation of accuracy in the presence of bias, arguing that the issue is a particularly pressing concern for RTE evaluation. Furthermore, we argue that average precision scores are unsuitable for RTE, and should not be reported. We advocate mutual information as a new evaluation measure that should be reported in addition to accuracy and confidence-weighted score.",A Proposal on Evaluation Measures for RTE,"We outline problems with the interpretation of accuracy in the presence of bias, arguing that the issue is a particularly pressing concern for RTE evaluation. Furthermore, we argue that average precision scores are unsuitable for RTE, and should not be reported. We advocate mutual information as a new evaluation measure that should be reported in addition to accuracy and confidence-weighted score.","I would like to thank the anonymous reviewers and my colleague Ekaterina Shutova for providing many helpful comments and my supervisor Ann Copestake for reading multiple drafts of this paper and providing a great number of suggestions within a very short timeframe. All errors and omissions are, of course, entirely my own. I gratefully acknowledge financial support by the Austrian Academy of Sciences.","A Proposal on Evaluation Measures for RTE. We outline problems with the interpretation of accuracy in the presence of bias, arguing that the issue is a particularly pressing concern for RTE evaluation. Furthermore, we argue that average precision scores are unsuitable for RTE, and should not be reported. We advocate mutual information as a new evaluation measure that should be reported in addition to accuracy and confidence-weighted score.",2009
hellwig-etal-2018-multi,https://aclanthology.org/L18-1011,0,,,,,,,"Multi-layer Annotation of the Rigveda. The paper introduces a multi-level annotation of the Rigveda, a fundamental Sanskrit text composed in the 2nd millennium BCE that is important for South-Asian and Indo-European linguistics, as well as Cultural Studies. We describe the individual annotation levels, including phonetics, morphology, lexicon, and syntax, and show how these different levels of annotation are merged to create a novel annotated corpus of Vedic Sanskrit. Vedic Sanskrit is a complex, but computationally under-resourced language. Therefore, creating this resource required considerable domain adaptation of existing computational tools, which is discussed in this paper. Because parts of the annotations are selective, we propose a bi-directional LSTM based sequential model to supplement missing verb-argument links.",Multi-layer Annotation of the Rigveda,"The paper introduces a multi-level annotation of the Rigveda, a fundamental Sanskrit text composed in the 2nd millennium BCE that is important for South-Asian and Indo-European linguistics, as well as Cultural Studies. We describe the individual annotation levels, including phonetics, morphology, lexicon, and syntax, and show how these different levels of annotation are merged to create a novel annotated corpus of Vedic Sanskrit. Vedic Sanskrit is a complex, but computationally under-resourced language. Therefore, creating this resource required considerable domain adaptation of existing computational tools, which is discussed in this paper. Because parts of the annotations are selective, we propose a bi-directional LSTM based sequential model to supplement missing verb-argument links.",Multi-layer Annotation of the Rigveda,"The paper introduces a multi-level annotation of the Rigveda, a fundamental Sanskrit text composed in the 2nd millennium BCE that is important for South-Asian and Indo-European linguistics, as well as Cultural Studies. We describe the individual annotation levels, including phonetics, morphology, lexicon, and syntax, and show how these different levels of annotation are merged to create a novel annotated corpus of Vedic Sanskrit. Vedic Sanskrit is a complex, but computationally under-resourced language. Therefore, creating this resource required considerable domain adaptation of existing computational tools, which is discussed in this paper. Because parts of the annotations are selective, we propose a bi-directional LSTM based sequential model to supplement missing verb-argument links.","Research for this project was partially funded by the Cluster of Excellence ""Multimodal Computing and Interaction"" of German Science Foundation (DFG). We thank the Akademie der Wissenschaften und der Literatur Mainz for hosting the annotated corpus.","Multi-layer Annotation of the Rigveda. The paper introduces a multi-level annotation of the Rigveda, a fundamental Sanskrit text composed in the 2nd millennium BCE that is important for South-Asian and Indo-European linguistics, as well as Cultural Studies. We describe the individual annotation levels, including phonetics, morphology, lexicon, and syntax, and show how these different levels of annotation are merged to create a novel annotated corpus of Vedic Sanskrit. Vedic Sanskrit is a complex, but computationally under-resourced language. Therefore, creating this resource required considerable domain adaptation of existing computational tools, which is discussed in this paper. 
Because parts of the annotations are selective, we propose a bi-directional LSTM based sequential model to supplement missing verb-argument links.",2018
ciobanu-etal-2015-readability,https://aclanthology.org/R15-1014,0,,,,,,,"Readability Assessment of Translated Texts. In this paper we investigate how readability varies between texts originally written in English and texts translated into English. For quantification, we analyze several factors that are relevant in assessing readability-shallow, lexical and morpho-syntactic features-and we employ the widely used Flesch-Kincaid formula to measure the variation of the readability level between original English texts and texts translated into English. Finally, we analyze whether the readability features have enough discriminative power to distinguish between originals and translations.",Readability Assessment of Translated Texts,"In this paper we investigate how readability varies between texts originally written in English and texts translated into English. For quantification, we analyze several factors that are relevant in assessing readability-shallow, lexical and morpho-syntactic features-and we employ the widely used Flesch-Kincaid formula to measure the variation of the readability level between original English texts and texts translated into English. Finally, we analyze whether the readability features have enough discriminative power to distinguish between originals and translations.",Readability Assessment of Translated Texts,"In this paper we investigate how readability varies between texts originally written in English and texts translated into English. For quantification, we analyze several factors that are relevant in assessing readability-shallow, lexical and morpho-syntactic features-and we employ the widely used Flesch-Kincaid formula to measure the variation of the readability level between original English texts and texts translated into English. Finally, we analyze whether the readability features have enough discriminative power to distinguish between originals and translations.","We thank the anonymous reviewers for their helpful and constructive comments. The contribution of the authors to this paper is equal. Liviu P. Dinu was supported by UEFISCDI, PNII-ID-PCE-2011-3-0959.","Readability Assessment of Translated Texts. In this paper we investigate how readability varies between texts originally written in English and texts translated into English. For quantification, we analyze several factors that are relevant in assessing readability-shallow, lexical and morpho-syntactic features-and we employ the widely used Flesch-Kincaid formula to measure the variation of the readability level between original English texts and texts translated into English. Finally, we analyze whether the readability features have enough discriminative power to distinguish between originals and translations.",2015
miller-etal-2014-employing,https://aclanthology.org/W14-5308,0,,,,,,,Employing Phonetic Speech Recognition for Language and Dialect Specific Search. We discuss the notion of language and dialect-specific search in the context of audio indexing. A system is described where users can find dialect or language-specific pronunciations of Afghan placenames in Dari and Pashto. We explore the efficacy of a phonetic speech recognition system employed in this task.,Employing Phonetic Speech Recognition for Language and Dialect Specific Search,We discuss the notion of language and dialect-specific search in the context of audio indexing. A system is described where users can find dialect or language-specific pronunciations of Afghan placenames in Dari and Pashto. We explore the efficacy of a phonetic speech recognition system employed in this task.,Employing Phonetic Speech Recognition for Language and Dialect Specific Search,We discuss the notion of language and dialect-specific search in the context of audio indexing. A system is described where users can find dialect or language-specific pronunciations of Afghan placenames in Dari and Pashto. We explore the efficacy of a phonetic speech recognition system employed in this task.,,Employing Phonetic Speech Recognition for Language and Dialect Specific Search. We discuss the notion of language and dialect-specific search in the context of audio indexing. A system is described where users can find dialect or language-specific pronunciations of Afghan placenames in Dari and Pashto. We explore the efficacy of a phonetic speech recognition system employed in this task.,2014
pecar-2018-towards,https://aclanthology.org/P18-3001,0,,,,business_use,,,"Towards Opinion Summarization of Customer Reviews. In recent years, the number of texts has grown rapidly. For example, most reviewbased portals, like Yelp or Amazon, contain thousands of user-generated reviews. It is impossible for any human reader to process even the most relevant of these documents. The most promising tool to solve this task is a text summarization. Most existing approaches, however, work on small, homogeneous, English datasets, and do not account to multi-linguality, opinion shift, and domain effects. In this paper, we introduce our research plan to use neural networks on user-generated travel reviews to generate summaries that take into account shifting opinions over time. We outline future directions in summarization to address all of these issues. By resolving the existing problems, we will make it easier for users of review-sites to make more informed decisions.",Towards Opinion Summarization of Customer Reviews,"In recent years, the number of texts has grown rapidly. For example, most reviewbased portals, like Yelp or Amazon, contain thousands of user-generated reviews. It is impossible for any human reader to process even the most relevant of these documents. The most promising tool to solve this task is a text summarization. Most existing approaches, however, work on small, homogeneous, English datasets, and do not account to multi-linguality, opinion shift, and domain effects. In this paper, we introduce our research plan to use neural networks on user-generated travel reviews to generate summaries that take into account shifting opinions over time. We outline future directions in summarization to address all of these issues. By resolving the existing problems, we will make it easier for users of review-sites to make more informed decisions.",Towards Opinion Summarization of Customer Reviews,"In recent years, the number of texts has grown rapidly. For example, most reviewbased portals, like Yelp or Amazon, contain thousands of user-generated reviews. It is impossible for any human reader to process even the most relevant of these documents. The most promising tool to solve this task is a text summarization. Most existing approaches, however, work on small, homogeneous, English datasets, and do not account to multi-linguality, opinion shift, and domain effects. In this paper, we introduce our research plan to use neural networks on user-generated travel reviews to generate summaries that take into account shifting opinions over time. We outline future directions in summarization to address all of these issues. By resolving the existing problems, we will make it easier for users of review-sites to make more informed decisions.",I would like to thank my supervisors Marian Simko and Maria Bielikova. This work has been partially supported by the STU Grant scheme for Support of Young Researchers and grants No. VG 1/0646/15 and No. KEGA 028STU-4/2017.,"Towards Opinion Summarization of Customer Reviews. In recent years, the number of texts has grown rapidly. For example, most reviewbased portals, like Yelp or Amazon, contain thousands of user-generated reviews. It is impossible for any human reader to process even the most relevant of these documents. The most promising tool to solve this task is a text summarization. Most existing approaches, however, work on small, homogeneous, English datasets, and do not account to multi-linguality, opinion shift, and domain effects. 
In this paper, we introduce our research plan to use neural networks on user-generated travel reviews to generate summaries that take into account shifting opinions over time. We outline future directions in summarization to address all of these issues. By resolving the existing problems, we will make it easier for users of review-sites to make more informed decisions.",2018
montgomery-1997-fulcrum,https://aclanthology.org/1997.mtsummit-plenaries.5,0,,,,,,,"The Fulcrum Approach to Machine Translation. In a paper from a distinguished collection of papers prepared for a 1959 course entitled ""Computer Programming and Artificial Intelligence,"" Paul Garvin described two types of machine translation problems ""in terms of the two components of the term: machine problems, and translation problems."" While the machine problems made us crazy, the translation problems made us think differently about language than we might otherwise have done, which has had some advantages and some disadvantages in the long run. I will save anecdotes about the former and comments about the latter for the discussion.
In this paper I will focus on the translation problems and, in particular, the translation approach that was developed by Paul Garvin, with whom I was associated, initially at Georgetown University, and later in the Synthetic Intelligence Department of the Ramo-Wooldridge Corporation and successor corporations: Thompson Ramo Wooldridge and Bunker-Ramo.",The Fulcrum Approach to Machine Translation,"In a paper from a distinguished collection of papers prepared for a 1959 course entitled ""Computer Programming and Artificial Intelligence,"" Paul Garvin described two types of machine translation problems ""in terms of the two components of the term: machine problems, and translation problems."" While the machine problems made us crazy, the translation problems made us think differently about language than we might otherwise have done, which has had some advantages and some disadvantages in the long run. I will save anecdotes about the former and comments about the latter for the discussion.
In this paper I will focus on the translation problems and, in particular, the translation approach that was developed by Paul Garvin, with whom I was associated, initially at Georgetown University, and later in the Synthetic Intelligence Department of the Ramo-Wooldridge Corporation and successor corporations: Thompson Ramo Wooldridge and Bunker-Ramo.",The Fulcrum Approach to Machine Translation,"In a paper from a distinguished collection of papers prepared for a 1959 course entitled ""Computer Programming and Artificial Intelligence,"" Paul Garvin described two types of machine translation problems ""in terms of the two components of the term: machine problems, and translation problems."" While the machine problems made us crazy, the translation problems made us think differently about language than we might otherwise have done, which has had some advantages and some disadvantages in the long run. I will save anecdotes about the former and comments about the latter for the discussion.
In this paper I will focus on the translation problems and, in particular, the translation approach that was developed by Paul Garvin, with whom I was associated, initially at Georgetown University, and later in the Synthetic Intelligence Department of the Ramo-Wooldridge Corporation and successor corporations: Thompson Ramo Wooldridge and Bunker-Ramo.",,"The Fulcrum Approach to Machine Translation. In a paper from a distinguished collection of papers prepared for a 1959 course entitled ""Computer Programming and Artificial Intelligence,"" Paul Garvin described two types of machine translation problems ""in terms of the two components of the term: machine problems, and translation problems."" While the machine problems made us crazy, the translation problems made us think differently about language than we might otherwise have done, which has had some advantages and some disadvantages in the long run. I will save anecdotes about the former and comments about the latter for the discussion.
In this paper I will focus on the translation problems and, in particular, the translation approach that was developed by Paul Garvin, with whom I was associated, initially at Georgetown University, and later in the Synthetic Intelligence Department of the Ramo-Wooldridge Corporation and successor corporations: Thompson Ramo Wooldridge and Bunker-Ramo.",1997
corpas-pastor-etal-2008-translation,https://aclanthology.org/2008.amta-papers.5,0,,,,,,,"Translation universals: do they exist? A corpus-based NLP study of convergence and simplification. Convergence and simplification are two of the so-called universals in translation studies. The first one postulates that translated texts tend to be more similar than non-translated texts. The second one postulates that translated texts are simpler, easier-to-understand than non-translated ones. This paper discusses the results of a project which applies NLP techniques over comparable corpora of translated and non-translated texts in Spanish seeking to establish whether these two universals hold (Corpas Pastor, 2008).",Translation universals: do they exist? A corpus-based {NLP} study of convergence and simplification,"Convergence and simplification are two of the so-called universals in translation studies. The first one postulates that translated texts tend to be more similar than non-translated texts. The second one postulates that translated texts are simpler, easier-to-understand than non-translated ones. This paper discusses the results of a project which applies NLP techniques over comparable corpora of translated and non-translated texts in Spanish seeking to establish whether these two universals hold (Corpas Pastor, 2008).",Translation universals: do they exist? A corpus-based NLP study of convergence and simplification,"Convergence and simplification are two of the so-called universals in translation studies. The first one postulates that translated texts tend to be more similar than non-translated texts. The second one postulates that translated texts are simpler, easier-to-understand than non-translated ones. This paper discusses the results of a project which applies NLP techniques over comparable corpora of translated and non-translated texts in Spanish seeking to establish whether these two universals hold (Corpas Pastor, 2008).",,"Translation universals: do they exist? A corpus-based NLP study of convergence and simplification. Convergence and simplification are two of the so-called universals in translation studies. The first one postulates that translated texts tend to be more similar than non-translated texts. The second one postulates that translated texts are simpler, easier-to-understand than non-translated ones. This paper discusses the results of a project which applies NLP techniques over comparable corpora of translated and non-translated texts in Spanish seeking to establish whether these two universals hold (Corpas Pastor, 2008).",2008
brun-2012-learning,https://aclanthology.org/C12-2017,0,,,,,,,"Learning Opinionated Patterns for Contextual Opinion Detection. This paper tackles the problem of polar vocabulary ambiguity. While some opinionated words keep their polarity in any context and/or across any domain (except for the ironic style that goes beyond the present article), some others have an ambiguous polarity which is highly dependent on the context or the domain: in this case, the opinion is generally carried by complex expressions (""patterns"") rather than single words. In this paper, we propose and evaluate an original hybrid method, based on syntactic information extraction and clustering techniques, to automatically learn such patterns and integrate them into an opinion detection system.",Learning Opinionated Patterns for Contextual Opinion Detection,"This paper tackles the problem of polar vocabulary ambiguity. While some opinionated words keep their polarity in any context and/or across any domain (except for the ironic style that goes beyond the present article), some others have an ambiguous polarity which is highly dependent on the context or the domain: in this case, the opinion is generally carried by complex expressions (""patterns"") rather than single words. In this paper, we propose and evaluate an original hybrid method, based on syntactic information extraction and clustering techniques, to automatically learn such patterns and integrate them into an opinion detection system.",Learning Opinionated Patterns for Contextual Opinion Detection,"This paper tackles the problem of polar vocabulary ambiguity. While some opinionated words keep their polarity in any context and/or across any domain (except for the ironic style that goes beyond the present article), some others have an ambiguous polarity which is highly dependent on the context or the domain: in this case, the opinion is generally carried by complex expressions (""patterns"") rather than single words. In this paper, we propose and evaluate an original hybrid method, based on syntactic information extraction and clustering techniques, to automatically learn such patterns and integrate them into an opinion detection system.",,"Learning Opinionated Patterns for Contextual Opinion Detection. This paper tackles the problem of polar vocabulary ambiguity. While some opinionated words keep their polarity in any context and/or across any domain (except for the ironic style that goes beyond the present article), some others have an ambiguous polarity which is highly dependent on the context or the domain: in this case, the opinion is generally carried by complex expressions (""patterns"") rather than single words. In this paper, we propose and evaluate an original hybrid method, based on syntactic information extraction and clustering techniques, to automatically learn such patterns and integrate them into an opinion detection system.",2012
turton-etal-2021-deriving,https://aclanthology.org/2021.repl4nlp-1.26,0,,,,,,,"Deriving Contextualised Semantic Features from BERT (and Other Transformer Model) Embeddings. Models based on the transformer architecture, such as BERT, have marked a crucial step forward in the field of Natural Language Processing. Importantly, they allow the creation of word embeddings that capture important semantic information about words in context. However, as single entities, these embeddings are difficult to interpret and the models used to create them have been described as opaque. Binder and colleagues proposed an intuitive embedding space where each dimension is based on one of 65 core semantic features. Unfortunately, the space only exists for a small data-set of 535 words, limiting its uses. Previous work (Utsumi, 2018, 2020; Turton et al., 2020) has shown that Binder features can be derived from static embeddings and successfully extrapolated to a large new vocabulary. Taking the next step, this paper demonstrates that Binder features can be derived from the BERT embedding space. This provides two things; (1) semantic feature values derived from contextualised word embeddings and (2) insights into how semantic features are represented across the different layers of the BERT model.",Deriving Contextualised Semantic Features from {BERT} (and Other Transformer Model) Embeddings,"Models based on the transformer architecture, such as BERT, have marked a crucial step forward in the field of Natural Language Processing. Importantly, they allow the creation of word embeddings that capture important semantic information about words in context. However, as single entities, these embeddings are difficult to interpret and the models used to create them have been described as opaque. Binder and colleagues proposed an intuitive embedding space where each dimension is based on one of 65 core semantic features. Unfortunately, the space only exists for a small data-set of 535 words, limiting its uses. Previous work (Utsumi, 2018, 2020; Turton et al., 2020) has shown that Binder features can be derived from static embeddings and successfully extrapolated to a large new vocabulary. Taking the next step, this paper demonstrates that Binder features can be derived from the BERT embedding space. This provides two things; (1) semantic feature values derived from contextualised word embeddings and (2) insights into how semantic features are represented across the different layers of the BERT model.",Deriving Contextualised Semantic Features from BERT (and Other Transformer Model) Embeddings,"Models based on the transformer architecture, such as BERT, have marked a crucial step forward in the field of Natural Language Processing. Importantly, they allow the creation of word embeddings that capture important semantic information about words in context. However, as single entities, these embeddings are difficult to interpret and the models used to create them have been described as opaque. Binder and colleagues proposed an intuitive embedding space where each dimension is based on one of 65 core semantic features. Unfortunately, the space only exists for a small data-set of 535 words, limiting its uses. Previous work (Utsumi, 2018, 2020; Turton et al., 2020) has shown that Binder features can be derived from static embeddings and successfully extrapolated to a large new vocabulary. Taking the next step, this paper demonstrates that Binder features can be derived from the BERT embedding space. 
This provides two things; (1) semantic feature values derived from contextualised word embeddings and (2) insights into how semantic features are represented across the different layers of the BERT model.",,"Deriving Contextualised Semantic Features from BERT (and Other Transformer Model) Embeddings. Models based on the transformer architecture, such as BERT, have marked a crucial step forward in the field of Natural Language Processing. Importantly, they allow the creation of word embeddings that capture important semantic information about words in context. However, as single entities, these embeddings are difficult to interpret and the models used to create them have been described as opaque. Binder and colleagues proposed an intuitive embedding space where each dimension is based on one of 65 core semantic features. Unfortunately, the space only exists for a small data-set of 535 words, limiting its uses. Previous work (Utsumi, 2018, 2020; Turton et al., 2020) has shown that Binder features can be derived from static embeddings and successfully extrapolated to a large new vocabulary. Taking the next step, this paper demonstrates that Binder features can be derived from the BERT embedding space. This provides two things; (1) semantic feature values derived from contextualised word embeddings and (2) insights into how semantic features are represented across the different layers of the BERT model.",2021
gemes-recski-2021-tuw,https://aclanthology.org/2021.germeval-1.10,1,,,,hate_speech,,,"TUW-Inf at GermEval2021: Rule-based and Hybrid Methods for Detecting Toxic, Engaging, and Fact-Claiming Comments. This paper describes our methods submitted for the GermEval 2021 shared task on identifying toxic, engaging and fact-claiming comments in social media texts (Risch et al., 2021). We explore simple strategies for semi-automatic generation of rule-based systems with high precision and low recall, and use them to achieve slight overall improvements over a standard BERT-based classifier.","{TUW}-{I}nf at {G}erm{E}val2021: Rule-based and Hybrid Methods for Detecting Toxic, Engaging, and Fact-Claiming Comments","This paper describes our methods submitted for the GermEval 2021 shared task on identifying toxic, engaging and fact-claiming comments in social media texts (Risch et al., 2021). We explore simple strategies for semi-automatic generation of rule-based systems with high precision and low recall, and use them to achieve slight overall improvements over a standard BERT-based classifier.","TUW-Inf at GermEval2021: Rule-based and Hybrid Methods for Detecting Toxic, Engaging, and Fact-Claiming Comments","This paper describes our methods submitted for the GermEval 2021 shared task on identifying toxic, engaging and fact-claiming comments in social media texts (Risch et al., 2021). We explore simple strategies for semi-automatic generation of rule-based systems with high precision and low recall, and use them to achieve slight overall improvements over a standard BERT-based classifier.",Research conducted in collaboration with Botium GmbH.,"TUW-Inf at GermEval2021: Rule-based and Hybrid Methods for Detecting Toxic, Engaging, and Fact-Claiming Comments. This paper describes our methods submitted for the GermEval 2021 shared task on identifying toxic, engaging and fact-claiming comments in social media texts (Risch et al., 2021). We explore simple strategies for semi-automatic generation of rule-based systems with high precision and low recall, and use them to achieve slight overall improvements over a standard BERT-based classifier.",2021
temnikova-cohen-2013-recognizing,https://aclanthology.org/W13-1909,1,,,,industry_innovation_infrastructure,,,"Recognizing Sublanguages in Scientific Journal Articles through Closure Properties. It has long been realized that sublanguages are relevant to natural language processing and text mining. However, practical methods for recognizing or characterizing them have been lacking. This paper describes a publicly available set of tools for sublanguage recognition. Closure properties are used to assess the goodness of fit of two biomedical corpora to the sublanguage model. Scientific journal articles are compared to general English text, and it is shown that the journal articles fit the sublanguage model, while the general English text does not. A number of examples of implications of the sublanguage characteristics for natural language processing are pointed out. The software is made publicly available at [edited for anonymization].",Recognizing Sublanguages in Scientific Journal Articles through Closure Properties,"It has long been realized that sublanguages are relevant to natural language processing and text mining. However, practical methods for recognizing or characterizing them have been lacking. This paper describes a publicly available set of tools for sublanguage recognition. Closure properties are used to assess the goodness of fit of two biomedical corpora to the sublanguage model. Scientific journal articles are compared to general English text, and it is shown that the journal articles fit the sublanguage model, while the general English text does not. A number of examples of implications of the sublanguage characteristics for natural language processing are pointed out. The software is made publicly available at [edited for anonymization].",Recognizing Sublanguages in Scientific Journal Articles through Closure Properties,"It has long been realized that sublanguages are relevant to natural language processing and text mining. However, practical methods for recognizing or characterizing them have been lacking. This paper describes a publicly available set of tools for sublanguage recognition. Closure properties are used to assess the goodness of fit of two biomedical corpora to the sublanguage model. Scientific journal articles are compared to general English text, and it is shown that the journal articles fit the sublanguage model, while the general English text does not. A number of examples of implications of the sublanguage characteristics for natural language processing are pointed out. The software is made publicly available at [edited for anonymization].","Irina Temnikova's work on the research reported in this paper was supported by the project AComIn ""Advanced Computing for Innovation"", grant 316087, funded by the FP7 Capacity Programme (Research Potential of Convergence Re-gions). Kevin Bretonnel Cohen's work was supported by grants NIH 5R01 LM009254-07 and NIH 5R01 LM008111-08 to Lawrence E. Hunter, NIH 1R01MH096906-01A1 to Tal Yarkoni, NIH R01 LM011124 to John Pestian, and NSF IIS-1207592 to Lawrence E. Hunter and Barbara Grimpe. The authors thank Tony McEnery and Andrew Wilson for advice on dealing with the tag sets.","Recognizing Sublanguages in Scientific Journal Articles through Closure Properties. It has long been realized that sublanguages are relevant to natural language processing and text mining. However, practical methods for recognizing or characterizing them have been lacking. 
This paper describes a publicly available set of tools for sublanguage recognition. Closure properties are used to assess the goodness of fit of two biomedical corpora to the sublanguage model. Scientific journal articles are compared to general English text, and it is shown that the journal articles fit the sublanguage model, while the general English text does not. A number of examples of implications of the sublanguage characteristics for natural language processing are pointed out. The software is made publicly available at [edited for anonymization].",2013
krishnamurthy-mitchell-2012-weakly,https://aclanthology.org/D12-1069,0,,,,,,,"Weakly Supervised Training of Semantic Parsers. We present a method for training a semantic parser using only a knowledge base and an unlabeled text corpus, without any individually annotated sentences. Our key observation is that multiple forms of weak supervision can be combined to train an accurate semantic parser: semantic supervision from a knowledge base, and syntactic supervision from dependency-parsed sentences. We apply our approach to train a semantic parser that uses 77 relations from Freebase in its knowledge representation. This semantic parser extracts instances of binary relations with state-of-the-art accuracy, while simultaneously recovering much richer semantic structures, such as conjunctions of multiple relations with partially shared arguments. We demonstrate recovery of this richer structure by extracting logical forms from natural language queries against Freebase. On this task, the trained semantic parser achieves 80% precision and 56% recall, despite never having seen an annotated logical form.",Weakly Supervised Training of Semantic Parsers,"We present a method for training a semantic parser using only a knowledge base and an unlabeled text corpus, without any individually annotated sentences. Our key observation is that multiple forms of weak supervision can be combined to train an accurate semantic parser: semantic supervision from a knowledge base, and syntactic supervision from dependency-parsed sentences. We apply our approach to train a semantic parser that uses 77 relations from Freebase in its knowledge representation. This semantic parser extracts instances of binary relations with state-of-the-art accuracy, while simultaneously recovering much richer semantic structures, such as conjunctions of multiple relations with partially shared arguments. We demonstrate recovery of this richer structure by extracting logical forms from natural language queries against Freebase. On this task, the trained semantic parser achieves 80% precision and 56% recall, despite never having seen an annotated logical form.",Weakly Supervised Training of Semantic Parsers,"We present a method for training a semantic parser using only a knowledge base and an unlabeled text corpus, without any individually annotated sentences. Our key observation is that multiple forms of weak supervision can be combined to train an accurate semantic parser: semantic supervision from a knowledge base, and syntactic supervision from dependency-parsed sentences. We apply our approach to train a semantic parser that uses 77 relations from Freebase in its knowledge representation. This semantic parser extracts instances of binary relations with state-of-the-art accuracy, while simultaneously recovering much richer semantic structures, such as conjunctions of multiple relations with partially shared arguments. We demonstrate recovery of this richer structure by extracting logical forms from natural language queries against Freebase. On this task, the trained semantic parser achieves 80% precision and 56% recall, despite never having seen an annotated logical form.","This research has been supported in part by DARPA under contract number FA8750-09-C-0179, and by a grant from Google. Additionally, we thank Yahoo! for use of their M45 cluster. 
We also gratefully acknowledge the contributions of our colleagues on the NELL project, Justin Betteridge for collecting the Freebase relations, Jamie Callan and colleagues for the web crawl, and Thomas Kollar and Matt Gardner for helpful comments on earlier drafts of this paper.","Weakly Supervised Training of Semantic Parsers. We present a method for training a semantic parser using only a knowledge base and an unlabeled text corpus, without any individually annotated sentences. Our key observation is that multiple forms of weak supervision can be combined to train an accurate semantic parser: semantic supervision from a knowledge base, and syntactic supervision from dependency-parsed sentences. We apply our approach to train a semantic parser that uses 77 relations from Freebase in its knowledge representation. This semantic parser extracts instances of binary relations with state-of-the-art accuracy, while simultaneously recovering much richer semantic structures, such as conjunctions of multiple relations with partially shared arguments. We demonstrate recovery of this richer structure by extracting logical forms from natural language queries against Freebase. On this task, the trained semantic parser achieves 80% precision and 56% recall, despite never having seen an annotated logical form.",2012
benotti-blackburn-2021-recipe,https://aclanthology.org/2021.naacl-main.320,0,,,,,,,"A recipe for annotating grounded clarifications. In order to interpret the communicative intents of an utterance, it needs to be grounded in something that is outside of language; that is, grounded in world modalities. In this paper we argue that dialogue clarification mechanisms make explicit the process of interpreting the communicative intents of the speaker's utterances by grounding them in the various modalities in which the dialogue is situated. This paper frames dialogue clarification mechanisms as an understudied research problem and a key missing piece in the giant jigsaw puzzle of natural language understanding. We discuss both the theoretical background and practical challenges posed by this problem, and propose a recipe for obtaining grounding annotations. We conclude by highlighting ethical issues that need to be addressed in future work. 1 We are suspicious of the common assumption that requests for information regarding references that are grounded in vision (e.g. the red or the blue jacket?) are clarifications, whereas requests for information grounded in other modalities are not (e.g. do I take the stairs up or down?). 2 See also the supplement on ethical considerations.",A recipe for annotating grounded clarifications,"In order to interpret the communicative intents of an utterance, it needs to be grounded in something that is outside of language; that is, grounded in world modalities. In this paper we argue that dialogue clarification mechanisms make explicit the process of interpreting the communicative intents of the speaker's utterances by grounding them in the various modalities in which the dialogue is situated. This paper frames dialogue clarification mechanisms as an understudied research problem and a key missing piece in the giant jigsaw puzzle of natural language understanding. We discuss both the theoretical background and practical challenges posed by this problem, and propose a recipe for obtaining grounding annotations. We conclude by highlighting ethical issues that need to be addressed in future work. 1 We are suspicious of the common assumption that requests for information regarding references that are grounded in vision (e.g. the red or the blue jacket?) are clarifications, whereas requests for information grounded in other modalities are not (e.g. do I take the stairs up or down?). 2 See also the supplement on ethical considerations.",A recipe for annotating grounded clarifications,"In order to interpret the communicative intents of an utterance, it needs to be grounded in something that is outside of language; that is, grounded in world modalities. In this paper we argue that dialogue clarification mechanisms make explicit the process of interpreting the communicative intents of the speaker's utterances by grounding them in the various modalities in which the dialogue is situated. This paper frames dialogue clarification mechanisms as an understudied research problem and a key missing piece in the giant jigsaw puzzle of natural language understanding. We discuss both the theoretical background and practical challenges posed by this problem, and propose a recipe for obtaining grounding annotations. We conclude by highlighting ethical issues that need to be addressed in future work. 1 We are suspicious of the common assumption that requests for information regarding references that are grounded in vision (e.g. the red or the blue jacket?) 
are clarifications, whereas requests for information grounded in other modalities are not (e.g. do I take the stairs up or down?). 2 See also the supplement on ethical considerations.",We thank the anonymous reviewers for their detailed reviews and insightful comments.,"A recipe for annotating grounded clarifications. In order to interpret the communicative intents of an utterance, it needs to be grounded in something that is outside of language; that is, grounded in world modalities. In this paper we argue that dialogue clarification mechanisms make explicit the process of interpreting the communicative intents of the speaker's utterances by grounding them in the various modalities in which the dialogue is situated. This paper frames dialogue clarification mechanisms as an understudied research problem and a key missing piece in the giant jigsaw puzzle of natural language understanding. We discuss both the theoretical background and practical challenges posed by this problem, and propose a recipe for obtaining grounding annotations. We conclude by highlighting ethical issues that need to be addressed in future work. 1 We are suspicious of the common assumption that requests for information regarding references that are grounded in vision (e.g. the red or the blue jacket?) are clarifications, whereas requests for information grounded in other modalities are not (e.g. do I take the stairs up or down?). 2 See also the supplement on ethical considerations.",2021
chen-etal-2022-discrete,https://aclanthology.org/2022.acl-long.145,0,,,,,,,"Discrete Opinion Tree Induction for Aspect-based Sentiment Analysis. Dependency trees have been intensively used with graph neural networks for aspect-based sentiment classification. Though being effective, such methods rely on external dependency parsers, which can be unavailable for low-resource languages or perform worse in low-resource domains. In addition, dependency trees are also not optimized for aspect-based sentiment classification. In this paper, we propose an aspect-specific and language-agnostic discrete latent opinion tree model as an alternative structure to explicit dependency trees. To ease the learning of complicated structured latent variables, we build a connection between aspect-to-context attention scores and syntactic distances, inducing trees from the attention scores. Results on six English benchmarks, one Chinese dataset and one Korean dataset show that our model can achieve competitive performance and interpretability.",Discrete Opinion Tree Induction for Aspect-based Sentiment Analysis,"Dependency trees have been intensively used with graph neural networks for aspect-based sentiment classification. Though being effective, such methods rely on external dependency parsers, which can be unavailable for low-resource languages or perform worse in low-resource domains. In addition, dependency trees are also not optimized for aspect-based sentiment classification. In this paper, we propose an aspect-specific and language-agnostic discrete latent opinion tree model as an alternative structure to explicit dependency trees. To ease the learning of complicated structured latent variables, we build a connection between aspect-to-context attention scores and syntactic distances, inducing trees from the attention scores. Results on six English benchmarks, one Chinese dataset and one Korean dataset show that our model can achieve competitive performance and interpretability.",Discrete Opinion Tree Induction for Aspect-based Sentiment Analysis,"Dependency trees have been intensively used with graph neural networks for aspect-based sentiment classification. Though being effective, such methods rely on external dependency parsers, which can be unavailable for low-resource languages or perform worse in low-resource domains. In addition, dependency trees are also not optimized for aspect-based sentiment classification. In this paper, we propose an aspect-specific and language-agnostic discrete latent opinion tree model as an alternative structure to explicit dependency trees. To ease the learning of complicated structured latent variables, we build a connection between aspect-to-context attention scores and syntactic distances, inducing trees from the attention scores. Results on six English benchmarks, one Chinese dataset and one Korean dataset show that our model can achieve competitive performance and interpretability.","Zhiyang Teng and Yue Zhang are the corresponding authors. Our thanks to anonymous reviewers for their insightful comments and suggestions. We appreciate Prof. Pengyuan Liu sharing the Chinese Hotel dataset, Prof. Jingjing Wang sharing the reinforcement learning code of Wang et al. 2019 Wu et al. (2020) upon our request. We thank Dr. Xuebin Wang for providing us with 2 V100 GPU cards for use. 
This publication is conducted with the financial support of ""Pioneer"" and ""Leading Goose"" R&D Program of Zhejiang under Grant Number 2022SDXHDX0003.","Discrete Opinion Tree Induction for Aspect-based Sentiment Analysis. Dependency trees have been intensively used with graph neural networks for aspect-based sentiment classification. Though being effective, such methods rely on external dependency parsers, which can be unavailable for low-resource languages or perform worse in low-resource domains. In addition, dependency trees are also not optimized for aspect-based sentiment classification. In this paper, we propose an aspect-specific and language-agnostic discrete latent opinion tree model as an alternative structure to explicit dependency trees. To ease the learning of complicated structured latent variables, we build a connection between aspect-to-context attention scores and syntactic distances, inducing trees from the attention scores. Results on six English benchmarks, one Chinese dataset and one Korean dataset show that our model can achieve competitive performance and interpretability.",2022
espla-gomis-etal-2016-ualacant,https://aclanthology.org/W16-2383,0,,,,,,,"UAlacant word-level and phrase-level machine translation quality estimation systems at WMT 2016. This paper describes the Universitat d'Alacant submissions (labeled as UAlacant) to the machine translation quality estimation (MTQE) shared task at WMT 2016, where we have participated in the word-level and phrase-level MTQE subtasks. Our systems use external sources of bilingual information as a black box to spot sub-segment correspondences between the source segment and the translation hypothesis. For our submissions, two sources of bilingual information have been used: machine translation (Lucy LT KWIK Translator and Google Translate) and the bilingual concordancer Reverso Context. Building upon the word-level approach implemented for WMT 2015, a method for phrase-based MTQE is proposed which builds on the probabilities obtained for word-level MTQE. For each sub-task we have submitted two systems: one using the features produced exclusively based on online sources of bilingual information, and one combining them with the baseline features provided by the organisers of the task.",{UA}lacant word-level and phrase-level machine translation quality estimation systems at {WMT} 2016,"This paper describes the Universitat d'Alacant submissions (labeled as UAlacant) to the machine translation quality estimation (MTQE) shared task at WMT 2016, where we have participated in the word-level and phrase-level MTQE subtasks. Our systems use external sources of bilingual information as a black box to spot sub-segment correspondences between the source segment and the translation hypothesis. For our submissions, two sources of bilingual information have been used: machine translation (Lucy LT KWIK Translator and Google Translate) and the bilingual concordancer Reverso Context. Building upon the word-level approach implemented for WMT 2015, a method for phrase-based MTQE is proposed which builds on the probabilities obtained for word-level MTQE. For each sub-task we have submitted two systems: one using the features produced exclusively based on online sources of bilingual information, and one combining them with the baseline features provided by the organisers of the task.",UAlacant word-level and phrase-level machine translation quality estimation systems at WMT 2016,"This paper describes the Universitat d'Alacant submissions (labeled as UAlacant) to the machine translation quality estimation (MTQE) shared task at WMT 2016, where we have participated in the word-level and phrase-level MTQE subtasks. Our systems use external sources of bilingual information as a black box to spot sub-segment correspondences between the source segment and the translation hypothesis. For our submissions, two sources of bilingual information have been used: machine translation (Lucy LT KWIK Translator and Google Translate) and the bilingual concordancer Reverso Context. Building upon the word-level approach implemented for WMT 2015, a method for phrase-based MTQE is proposed which builds on the probabilities obtained for word-level MTQE. For each sub-task we have submitted two systems: one using the features produced exclusively based on online sources of bilingual information, and one combining them with the baseline features provided by the organisers of the task.","Work partially funded by the European Commission through project PIAP-GA-2012-324414 (Abu-MaTran) and by the Spanish government through project TIN2015-69632-R (Effortune). 
We specially thank Reverso-Softissimo and Prompsit Language Engineering for providing the access to the Reverso Context concordancer, the University Research Program for Google Translate that granted us access to the Google Translate service, and Anna Civil from Lucy Software for providing access to the Lucy LT machine translation system.","UAlacant word-level and phrase-level machine translation quality estimation systems at WMT 2016. This paper describes the Universitat d'Alacant submissions (labeled as UAlacant) to the machine translation quality estimation (MTQE) shared task at WMT 2016, where we have participated in the word-level and phrase-level MTQE subtasks. Our systems use external sources of bilingual information as a black box to spot sub-segment correspondences between the source segment and the translation hypothesis. For our submissions, two sources of bilingual information have been used: machine translation (Lucy LT KWIK Translator and Google Translate) and the bilingual concordancer Reverso Context. Building upon the word-level approach implemented for WMT 2015, a method for phrase-based MTQE is proposed which builds on the probabilities obtained for word-level MTQE. For each sub-task we have submitted two systems: one using the features produced exclusively based on online sources of bilingual information, and one combining them with the baseline features provided by the organisers of the task.",2016
ceska-fox-2009-influence,https://aclanthology.org/R09-1011,1,,,,industry_innovation_infrastructure,peace_justice_and_strong_institutions,,"The Influence of Text Pre-processing on Plagiarism Detection. This paper explores the influence of text preprocessing techniques on plagiarism detection. We examine stop-word removal, lemmatization, number replacement, synonymy recognition, and word generalization. We also look into the influence of punctuation and word-order within N-grams. All these techniques are evaluated according to their impact on F1-measure and speed of execution. Our experiments were performed on a Czech corpus of plagiarized documents about politics. At the end of this paper, we propose what we consider to be the best combination of text pre-processing techniques.",The Influence of Text Pre-processing on Plagiarism Detection,"This paper explores the influence of text preprocessing techniques on plagiarism detection. We examine stop-word removal, lemmatization, number replacement, synonymy recognition, and word generalization. We also look into the influence of punctuation and word-order within N-grams. All these techniques are evaluated according to their impact on F1-measure and speed of execution. Our experiments were performed on a Czech corpus of plagiarized documents about politics. At the end of this paper, we propose what we consider to be the best combination of text pre-processing techniques.",The Influence of Text Pre-processing on Plagiarism Detection,"This paper explores the influence of text preprocessing techniques on plagiarism detection. We examine stop-word removal, lemmatization, number replacement, synonymy recognition, and word generalization. We also look into the influence of punctuation and word-order within N-grams. All these techniques are evaluated according to their impact on F1-measure and speed of execution. Our experiments were performed on a Czech corpus of plagiarized documents about politics. At the end of this paper, we propose what we consider to be the best combination of text pre-processing techniques.","This research was supported in part by National Research Programme II, project 2C06009 (COT-SEWing). Special thanks go to Michal Toman who helped us to employ the disambiguation process.","The Influence of Text Pre-processing on Plagiarism Detection. This paper explores the influence of text preprocessing techniques on plagiarism detection. We examine stop-word removal, lemmatization, number replacement, synonymy recognition, and word generalization. We also look into the influence of punctuation and word-order within N-grams. All these techniques are evaluated according to their impact on F1-measure and speed of execution. Our experiments were performed on a Czech corpus of plagiarized documents about politics. At the end of this paper, we propose what we consider to be the best combination of text pre-processing techniques.",2009
lefrancois-gandon-2013-reasoning,https://aclanthology.org/W13-3719,0,,,,,,,"Reasoning with Dependency Structures and Lexicographic Definitions Using Unit Graphs. We are interested in a graph-based Knowledge Representation (KR) formalism that would allow for the representation, manipulation, query, and reasoning over dependency structures, and linguistic knowledge of the lexicon in the Meaning-Text Theory framework. Neither the semantic web formalisms nor the conceptual graphs appear to be suitable for this task, and this led to the introduction of the new Unit Graphs (UG) framework. In this paper we will overview the foundational concepts of this framework: the UGs are defined over a UG-support that contains: i) a hierarchy of unit types which is strongly driven by the actantial structure of unit types, ii) a hierarchy of circumstantial symbols, and iii) a set of unit identifiers. Based on these foundational concepts and on the definition of UGs, this paper justifies the use of a deep semantic representation level to represent meanings of lexical units. Rules over UGs are then introduced, and lexicographic definitions of lexical units are added to the hierarchy of unit types. Finally this paper provides UGs with semantics (in the logical sense), and pose the entailment problem, so as to enable the reasoning in the UGs framework.",Reasoning with Dependency Structures and Lexicographic Definitions Using Unit Graphs,"We are interested in a graph-based Knowledge Representation (KR) formalism that would allow for the representation, manipulation, query, and reasoning over dependency structures, and linguistic knowledge of the lexicon in the Meaning-Text Theory framework. Neither the semantic web formalisms nor the conceptual graphs appear to be suitable for this task, and this led to the introduction of the new Unit Graphs (UG) framework. In this paper we will overview the foundational concepts of this framework: the UGs are defined over a UG-support that contains: i) a hierarchy of unit types which is strongly driven by the actantial structure of unit types, ii) a hierarchy of circumstantial symbols, and iii) a set of unit identifiers. Based on these foundational concepts and on the definition of UGs, this paper justifies the use of a deep semantic representation level to represent meanings of lexical units. Rules over UGs are then introduced, and lexicographic definitions of lexical units are added to the hierarchy of unit types. Finally this paper provides UGs with semantics (in the logical sense), and pose the entailment problem, so as to enable the reasoning in the UGs framework.",Reasoning with Dependency Structures and Lexicographic Definitions Using Unit Graphs,"We are interested in a graph-based Knowledge Representation (KR) formalism that would allow for the representation, manipulation, query, and reasoning over dependency structures, and linguistic knowledge of the lexicon in the Meaning-Text Theory framework. Neither the semantic web formalisms nor the conceptual graphs appear to be suitable for this task, and this led to the introduction of the new Unit Graphs (UG) framework. In this paper we will overview the foundational concepts of this framework: the UGs are defined over a UG-support that contains: i) a hierarchy of unit types which is strongly driven by the actantial structure of unit types, ii) a hierarchy of circumstantial symbols, and iii) a set of unit identifiers. 
Based on these foundational concepts and on the definition of UGs, this paper justifies the use of a deep semantic representation level to represent meanings of lexical units. Rules over UGs are then introduced, and lexicographic definitions of lexical units are added to the hierarchy of unit types. Finally this paper provides UGs with semantics (in the logical sense), and pose the entailment problem, so as to enable the reasoning in the UGs framework.",,"Reasoning with Dependency Structures and Lexicographic Definitions Using Unit Graphs. We are interested in a graph-based Knowledge Representation (KR) formalism that would allow for the representation, manipulation, query, and reasoning over dependency structures, and linguistic knowledge of the lexicon in the Meaning-Text Theory framework. Neither the semantic web formalisms nor the conceptual graphs appear to be suitable for this task, and this led to the introduction of the new Unit Graphs (UG) framework. In this paper we will overview the foundational concepts of this framework: the UGs are defined over a UG-support that contains: i) a hierarchy of unit types which is strongly driven by the actantial structure of unit types, ii) a hierarchy of circumstantial symbols, and iii) a set of unit identifiers. Based on these foundational concepts and on the definition of UGs, this paper justifies the use of a deep semantic representation level to represent meanings of lexical units. Rules over UGs are then introduced, and lexicographic definitions of lexical units are added to the hierarchy of unit types. Finally this paper provides UGs with semantics (in the logical sense), and pose the entailment problem, so as to enable the reasoning in the UGs framework.",2013
gali-etal-2008-aggregating,https://aclanthology.org/I08-5005,0,,,,,,,"Aggregating Machine Learning and Rule Based Heuristics for Named Entity Recognition. This paper, submitted as an entry for the NERSSEAL-2008 shared task, describes a system built for Named Entity Recognition for South and South East Asian Languages. Our paper combines machine learning techniques with language-specific heuristics to model the problem of NER for Indian languages. The system has been tested on five languages: Telugu, Hindi, Bengali, Urdu and Oriya. It uses CRF (Conditional Random Fields) based machine learning, followed by post-processing which involves using some heuristics or rules. The system is specifically tuned for Hindi and Telugu; we also report the results for the other four languages.",Aggregating Machine Learning and Rule Based Heuristics for Named Entity Recognition,"This paper, submitted as an entry for the NERSSEAL-2008 shared task, describes a system built for Named Entity Recognition for South and South East Asian Languages. Our paper combines machine learning techniques with language-specific heuristics to model the problem of NER for Indian languages. The system has been tested on five languages: Telugu, Hindi, Bengali, Urdu and Oriya. It uses CRF (Conditional Random Fields) based machine learning, followed by post-processing which involves using some heuristics or rules. The system is specifically tuned for Hindi and Telugu; we also report the results for the other four languages.",Aggregating Machine Learning and Rule Based Heuristics for Named Entity Recognition,"This paper, submitted as an entry for the NERSSEAL-2008 shared task, describes a system built for Named Entity Recognition for South and South East Asian Languages. Our paper combines machine learning techniques with language-specific heuristics to model the problem of NER for Indian languages. The system has been tested on five languages: Telugu, Hindi, Bengali, Urdu and Oriya. It uses CRF (Conditional Random Fields) based machine learning, followed by post-processing which involves using some heuristics or rules. The system is specifically tuned for Hindi and Telugu; we also report the results for the other four languages.",We would like to thank the organizer Mr. Anil Kumar Singh deeply for his continuous support during the shared task.,"Aggregating Machine Learning and Rule Based Heuristics for Named Entity Recognition. This paper, submitted as an entry for the NERSSEAL-2008 shared task, describes a system built for Named Entity Recognition for South and South East Asian Languages. Our paper combines machine learning techniques with language-specific heuristics to model the problem of NER for Indian languages. The system has been tested on five languages: Telugu, Hindi, Bengali, Urdu and Oriya. It uses CRF (Conditional Random Fields) based machine learning, followed by post-processing which involves using some heuristics or rules. The system is specifically tuned for Hindi and Telugu; we also report the results for the other four languages.",2008
osborne-2013-distribution,https://aclanthology.org/W13-3730,0,,,,,,,"The Distribution of Floating Quantifiers: A Dependency Grammar Analysis. This contribution provides a dependency grammar analysis of the distribution of floating quantifiers in English and German. Floating quantifiers are deemed to be ""base generated"", meaning that they are not moved into their surface position by a transformation. Their distribution is similar to that of modal adverbs. The nominal (noun or pronoun) over which they quantify is an argument of the predicate to which they attach. Variation in their placement across English and German is due to independent word order principles associated with each language.",The Distribution of Floating Quantifiers: A Dependency Grammar Analysis,"This contribution provides a dependency grammar analysis of the distribution of floating quantifiers in English and German. Floating quantifiers are deemed to be ""base generated"", meaning that they are not moved into their surface position by a transformation. Their distribution is similar to that of modal adverbs. The nominal (noun or pronoun) over which they quantify is an argument of the predicate to which they attach. Variation in their placement across English and German is due to independent word order principles associated with each language.",The Distribution of Floating Quantifiers: A Dependency Grammar Analysis,"This contribution provides a dependency grammar analysis of the distribution of floating quantifiers in English and German. Floating quantifiers are deemed to be ""base generated"", meaning that they are not moved into their surface position by a transformation. Their distribution is similar to that of modal adverbs. The nominal (noun or pronoun) over which they quantify is an argument of the predicate to which they attach. Variation in their placement across English and German is due to independent word order principles associated with each language.",,"The Distribution of Floating Quantifiers: A Dependency Grammar Analysis. This contribution provides a dependency grammar analysis of the distribution of floating quantifiers in English and German. Floating quantifiers are deemed to be ""base generated"", meaning that they are not moved into their surface position by a transformation. Their distribution is similar to that of modal adverbs. The nominal (noun or pronoun) over which they quantify is an argument of the predicate to which they attach. Variation in their placement across English and German is due to independent word order principles associated with each language.",2013
hossain-etal-2021-nlp-cuet,https://aclanthology.org/2021.ltedi-1.25,1,,,,health,,,"NLP-CUET@LT-EDI-EACL2021: Multilingual Code-Mixed Hope Speech Detection using Cross-lingual Representation Learner. In recent years, several systems have been developed to regulate the spread of negativity and eliminate aggressive, offensive or abusive contents from online platforms. Nevertheless, a limited number of studies have been carried out to identify positive, encouraging and supportive contents. In this work, our goal is to identify whether a social media post/comment contains hope speech or not. We propose three distinct models to identify hope speech in English, Tamil and Malayalam languages to serve this purpose. To attain this goal, we employed various machine learning (support vector machine, logistic regression, ensemble), deep learning (convolutional neural network + long short-term memory) and transformer (m-BERT, Indic-BERT, XLNet, XLM-Roberta) based methods. Results indicate that XLM-Roberta outdoes all other techniques by gaining a weighted F1-score of 0.93, 0.60 and 0.85 respectively for English, Tamil and Malayalam languages. Our team has achieved 1st, 2nd and 1st rank in these three tasks respectively.",{NLP}-{CUET}@{LT}-{EDI}-{EACL}2021: Multilingual Code-Mixed Hope Speech Detection using Cross-lingual Representation Learner,"In recent years, several systems have been developed to regulate the spread of negativity and eliminate aggressive, offensive or abusive contents from online platforms. Nevertheless, a limited number of studies have been carried out to identify positive, encouraging and supportive contents. In this work, our goal is to identify whether a social media post/comment contains hope speech or not. We propose three distinct models to identify hope speech in English, Tamil and Malayalam languages to serve this purpose. To attain this goal, we employed various machine learning (support vector machine, logistic regression, ensemble), deep learning (convolutional neural network + long short-term memory) and transformer (m-BERT, Indic-BERT, XLNet, XLM-Roberta) based methods. Results indicate that XLM-Roberta outdoes all other techniques by gaining a weighted F1-score of 0.93, 0.60 and 0.85 respectively for English, Tamil and Malayalam languages. Our team has achieved 1st, 2nd and 1st rank in these three tasks respectively.",NLP-CUET@LT-EDI-EACL2021: Multilingual Code-Mixed Hope Speech Detection using Cross-lingual Representation Learner,"In recent years, several systems have been developed to regulate the spread of negativity and eliminate aggressive, offensive or abusive contents from online platforms. Nevertheless, a limited number of studies have been carried out to identify positive, encouraging and supportive contents. In this work, our goal is to identify whether a social media post/comment contains hope speech or not. We propose three distinct models to identify hope speech in English, Tamil and Malayalam languages to serve this purpose. To attain this goal, we employed various machine learning (support vector machine, logistic regression, ensemble), deep learning (convolutional neural network + long short-term memory) and transformer (m-BERT, Indic-BERT, XLNet, XLM-Roberta) based methods. Results indicate that XLM-Roberta outdoes all other techniques by gaining a weighted F1-score of 0.93, 0.60 and 0.85 respectively for English, Tamil and Malayalam languages. 
Our team has achieved 1st, 2nd and 1st rank in these three tasks respectively.",,"NLP-CUET@LT-EDI-EACL2021: Multilingual Code-Mixed Hope Speech Detection using Cross-lingual Representation Learner. In recent years, several systems have been developed to regulate the spread of negativity and eliminate aggressive, offensive or abusive contents from online platforms. Nevertheless, a limited number of studies have been carried out to identify positive, encouraging and supportive contents. In this work, our goal is to identify whether a social media post/comment contains hope speech or not. We propose three distinct models to identify hope speech in English, Tamil and Malayalam languages to serve this purpose. To attain this goal, we employed various machine learning (support vector machine, logistic regression, ensemble), deep learning (convolutional neural network + long short-term memory) and transformer (m-BERT, Indic-BERT, XLNet, XLM-Roberta) based methods. Results indicate that XLM-Roberta outdoes all other techniques by gaining a weighted F1-score of 0.93, 0.60 and 0.85 respectively for English, Tamil and Malayalam languages. Our team has achieved 1st, 2nd and 1st rank in these three tasks respectively.",2021
morales-etal-2007-multivariate,https://aclanthology.org/W07-2421,0,,,,,,,"Multivariate Cepstral Feature Compensation on Band-limited Data for Robust Speech Recognition. This paper describes a new method for compensating bandwidth mismatch for automatic speech recognition using multivariate linear combinations of feature vector components. It is shown that multivariate compensation is superior to methods based on linear compensations of individual features. Performance is evaluated on a real microphone-telephone mismatch condition (this involves noise compensation and bandwidth extension of real data), as well as on several artificial bandwidth limitations. Speech recognition accuracy using this approach is similar to that of acoustic model compensation methods for small to moderate mismatches, and allows keeping active a single acoustic model set for multiple bandwidth limitations.",Multivariate Cepstral Feature Compensation on Band-limited Data for Robust Speech Recognition,"This paper describes a new method for compensating bandwidth mismatch for automatic speech recognition using multivariate linear combinations of feature vector components. It is shown that multivariate compensation is superior to methods based on linear compensations of individual features. Performance is evaluated on a real microphone-telephone mismatch condition (this involves noise compensation and bandwidth extension of real data), as well as on several artificial bandwidth limitations. Speech recognition accuracy using this approach is similar to that of acoustic model compensation methods for small to moderate mismatches, and allows keeping active a single acoustic model set for multiple bandwidth limitations.",Multivariate Cepstral Feature Compensation on Band-limited Data for Robust Speech Recognition,"This paper describes a new method for compensating bandwidth mismatch for automatic speech recognition using multivariate linear combinations of feature vector components. It is shown that multivariate compensation is superior to methods based on linear compensations of individual features. Performance is evaluated on a real microphone-telephone mismatch condition (this involves noise compensation and bandwidth extension of real data), as well as on several artificial bandwidth limitations. Speech recognition accuracy using this approach is similar to that of acoustic model compensation methods for small to moderate mismatches, and allows keeping active a single acoustic model set for multiple bandwidth limitations.",This research is supported in part by an MCyT project (TIC 2006-13141-C03).,"Multivariate Cepstral Feature Compensation on Band-limited Data for Robust Speech Recognition. This paper describes a new method for compensating bandwidth mismatch for automatic speech recognition using multivariate linear combinations of feature vector components. It is shown that multivariate compensation is superior to methods based on linear compensations of individual features. Performance is evaluated on a real microphone-telephone mismatch condition (this involves noise compensation and bandwidth extension of real data), as well as on several artificial bandwidth limitations. Speech recognition accuracy using this approach is similar to that of acoustic model compensation methods for small to moderate mismatches, and allows keeping active a single acoustic model set for multiple bandwidth limitations.",2007
jansche-2003-parametric,https://aclanthology.org/P03-1037,0,,,,,,,"Parametric Models of Linguistic Count Data. It is well known that occurrence counts of words in documents are often modeled poorly by standard distributions like the binomial or Poisson. Observed counts vary more than simple models predict, prompting the use of overdispersed models like Gamma-Poisson or Beta-binomial mixtures as robust alternatives. Another deficiency of standard models is due to the fact that most words never occur in a given document, resulting in large amounts of zero counts. We propose using zero-inflated models for dealing with this, and evaluate competing models on a Naive Bayes text classification task. Simple zero-inflated models can account for practically relevant variation, and can be easier to work with than overdispersed models.",Parametric Models of Linguistic Count Data,"It is well known that occurrence counts of words in documents are often modeled poorly by standard distributions like the binomial or Poisson. Observed counts vary more than simple models predict, prompting the use of overdispersed models like Gamma-Poisson or Beta-binomial mixtures as robust alternatives. Another deficiency of standard models is due to the fact that most words never occur in a given document, resulting in large amounts of zero counts. We propose using zero-inflated models for dealing with this, and evaluate competing models on a Naive Bayes text classification task. Simple zero-inflated models can account for practically relevant variation, and can be easier to work with than overdispersed models.",Parametric Models of Linguistic Count Data,"It is well known that occurrence counts of words in documents are often modeled poorly by standard distributions like the binomial or Poisson. Observed counts vary more than simple models predict, prompting the use of overdispersed models like Gamma-Poisson or Beta-binomial mixtures as robust alternatives. Another deficiency of standard models is due to the fact that most words never occur in a given document, resulting in large amounts of zero counts. We propose using zero-inflated models for dealing with this, and evaluate competing models on a Naive Bayes text classification task. Simple zero-inflated models can account for practically relevant variation, and can be easier to work with than overdispersed models.",Thanks to Chris Brew and three anonymous reviewers for valuable feedback. Cue the usual disclaimers.,"Parametric Models of Linguistic Count Data. It is well known that occurrence counts of words in documents are often modeled poorly by standard distributions like the binomial or Poisson. Observed counts vary more than simple models predict, prompting the use of overdispersed models like Gamma-Poisson or Beta-binomial mixtures as robust alternatives. Another deficiency of standard models is due to the fact that most words never occur in a given document, resulting in large amounts of zero counts. We propose using zero-inflated models for dealing with this, and evaluate competing models on a Naive Bayes text classification task. Simple zero-inflated models can account for practically relevant variation, and can be easier to work with than overdispersed models.",2003
collier-etal-1998-refining,https://aclanthology.org/W98-1109,0,,,,,,,"Refining the Automatic Identification of Conceptual Relations in Large-scale Corpora. In the ACRONYM Project, we have taken the Firthian view (e.g. Firth 1957) that context is part of the meaning of the word, and measured similarity of meaning between words through second-order collocation. Using large-scale, free text corpora of UK journalism, we have generated collocational data for all words except for highfrequency grammatical words, and have found that semantically related word pairings can be identified, whilst syntactic relations are disfavoured. We have then moved on to refine this system, to deal with multi-word terms and identify changing conceptual relationships across time. The system, conceived in the late 80's and developed in 1994-97, differs from others of the 90's in purpose, scope, methodology and results, and comparisons will be drawn in the course of the paper.",Refining the Automatic Identification of Conceptual Relations in Large-scale Corpora,"In the ACRONYM Project, we have taken the Firthian view (e.g. Firth 1957) that context is part of the meaning of the word, and measured similarity of meaning between words through second-order collocation. Using large-scale, free text corpora of UK journalism, we have generated collocational data for all words except for highfrequency grammatical words, and have found that semantically related word pairings can be identified, whilst syntactic relations are disfavoured. We have then moved on to refine this system, to deal with multi-word terms and identify changing conceptual relationships across time. The system, conceived in the late 80's and developed in 1994-97, differs from others of the 90's in purpose, scope, methodology and results, and comparisons will be drawn in the course of the paper.",Refining the Automatic Identification of Conceptual Relations in Large-scale Corpora,"In the ACRONYM Project, we have taken the Firthian view (e.g. Firth 1957) that context is part of the meaning of the word, and measured similarity of meaning between words through second-order collocation. Using large-scale, free text corpora of UK journalism, we have generated collocational data for all words except for highfrequency grammatical words, and have found that semantically related word pairings can be identified, whilst syntactic relations are disfavoured. We have then moved on to refine this system, to deal with multi-word terms and identify changing conceptual relationships across time. The system, conceived in the late 80's and developed in 1994-97, differs from others of the 90's in purpose, scope, methodology and results, and comparisons will be drawn in the course of the paper.",,"Refining the Automatic Identification of Conceptual Relations in Large-scale Corpora. In the ACRONYM Project, we have taken the Firthian view (e.g. Firth 1957) that context is part of the meaning of the word, and measured similarity of meaning between words through second-order collocation. Using large-scale, free text corpora of UK journalism, we have generated collocational data for all words except for highfrequency grammatical words, and have found that semantically related word pairings can be identified, whilst syntactic relations are disfavoured. We have then moved on to refine this system, to deal with multi-word terms and identify changing conceptual relationships across time. 
The system, conceived in the late 80's and developed in 1994-97, differs from others of the 90's in purpose, scope, methodology and results, and comparisons will be drawn in the course of the paper.",1998
bjerva-augenstein-2018-phonology,https://aclanthology.org/N18-1083,0,,,,,,,"From Phonology to Syntax: Unsupervised Linguistic Typology at Different Levels with Language Embeddings. A core part of linguistic typology is the classification of languages according to linguistic properties, such as those detailed in the World Atlas of Language Structure (WALS). Doing this manually is prohibitively time-consuming, which is in part evidenced by the fact that only 100 out of over 7,000 languages spoken in the world are fully covered in WALS. We learn distributed language representations, which can be used to predict typological properties on a massively multilingual scale. Additionally, quantitative and qualitative analyses of these language embeddings can tell us how language similarities are encoded in NLP models for tasks at different typological levels. The representations are learned in an unsupervised manner alongside tasks at three typological levels: phonology (grapheme-to-phoneme prediction, and phoneme reconstruction), morphology (morphological inflection), and syntax (part-of-speech tagging). We consider more than 800 languages and find significant differences in the language representations encoded, depending on the target task. For instance, although Norwegian Bokmål and Danish are typologically close to one another, they are phonologically distant, which is reflected in their language embeddings growing relatively distant in a phonological task. We are also able to predict typological features in WALS with high accuracies, even for unseen language families.",From Phonology to Syntax: Unsupervised Linguistic Typology at Different Levels with Language Embeddings,"A core part of linguistic typology is the classification of languages according to linguistic properties, such as those detailed in the World Atlas of Language Structure (WALS). Doing this manually is prohibitively time-consuming, which is in part evidenced by the fact that only 100 out of over 7,000 languages spoken in the world are fully covered in WALS. We learn distributed language representations, which can be used to predict typological properties on a massively multilingual scale. Additionally, quantitative and qualitative analyses of these language embeddings can tell us how language similarities are encoded in NLP models for tasks at different typological levels. The representations are learned in an unsupervised manner alongside tasks at three typological levels: phonology (grapheme-to-phoneme prediction, and phoneme reconstruction), morphology (morphological inflection), and syntax (part-of-speech tagging). We consider more than 800 languages and find significant differences in the language representations encoded, depending on the target task. For instance, although Norwegian Bokmål and Danish are typologically close to one another, they are phonologically distant, which is reflected in their language embeddings growing relatively distant in a phonological task. We are also able to predict typological features in WALS with high accuracies, even for unseen language families.",From Phonology to Syntax: Unsupervised Linguistic Typology at Different Levels with Language Embeddings,"A core part of linguistic typology is the classification of languages according to linguistic properties, such as those detailed in the World Atlas of Language Structure (WALS). 
Doing this manually is prohibitively time-consuming, which is in part evidenced by the fact that only 100 out of over 7,000 languages spoken in the world are fully covered in WALS. We learn distributed language representations, which can be used to predict typological properties on a massively multilingual scale. Additionally, quantitative and qualitative analyses of these language embeddings can tell us how language similarities are encoded in NLP models for tasks at different typological levels. The representations are learned in an unsupervised manner alongside tasks at three typological levels: phonology (grapheme-to-phoneme prediction, and phoneme reconstruction), morphology (morphological inflection), and syntax (part-of-speech tagging). We consider more than 800 languages and find significant differences in the language representations encoded, depending on the target task. For instance, although Norwegian Bokmål and Danish are typologically close to one another, they are phonologically distant, which is reflected in their language embeddings growing relatively distant in a phonological task. We are also able to predict typological features in WALS with high accuracies, even for unseen language families.",We would also like to thank RobertÖstling for giving us access to the pre-trained language embeddings. Isabelle Augenstein is supported by Eurostars grant Number E10138. We further gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.,"From Phonology to Syntax: Unsupervised Linguistic Typology at Different Levels with Language Embeddings. A core part of linguistic typology is the classification of languages according to linguistic properties, such as those detailed in the World Atlas of Language Structure (WALS). Doing this manually is prohibitively time-consuming, which is in part evidenced by the fact that only 100 out of over 7,000 languages spoken in the world are fully covered in WALS. We learn distributed language representations, which can be used to predict typological properties on a massively multilingual scale. Additionally, quantitative and qualitative analyses of these language embeddings can tell us how language similarities are encoded in NLP models for tasks at different typological levels. The representations are learned in an unsupervised manner alongside tasks at three typological levels: phonology (grapheme-to-phoneme prediction, and phoneme reconstruction), morphology (morphological inflection), and syntax (part-of-speech tagging). We consider more than 800 languages and find significant differences in the language representations encoded, depending on the target task. For instance, although Norwegian Bokmål and Danish are typologically close to one another, they are phonologically distant, which is reflected in their language embeddings growing relatively distant in a phonological task. We are also able to predict typological features in WALS with high accuracies, even for unseen language families.",2018
godard-etal-2018-adaptor,https://aclanthology.org/W18-5804,0,,,,,,,"Adaptor Grammars for the Linguist: Word Segmentation Experiments for Very Low-Resource Languages. Computational Language Documentation attempts to make the most recent research in speech and language technologies available to linguists working on language preservation and documentation. In this paper, we pursue two main goals along these lines. The first is to improve upon a strong baseline for the unsupervised word discovery task on two very lowresource Bantu languages, taking advantage of the expertise of linguists on these particular languages. The second consists in exploring the Adaptor Grammar framework as a decision and prediction tool for linguists studying a new language. We experiment 162 grammar configurations for each language and show that using Adaptor Grammars for word segmentation enables us to test hypotheses about a language. Specializing a generic grammar with language specific knowledge leads to great improvements for the word discovery task, ultimately achieving a leap of about 30% token F-score from the results of a strong baseline.",{A}daptor {G}rammars for the Linguist: Word Segmentation Experiments for Very Low-Resource Languages,"Computational Language Documentation attempts to make the most recent research in speech and language technologies available to linguists working on language preservation and documentation. In this paper, we pursue two main goals along these lines. The first is to improve upon a strong baseline for the unsupervised word discovery task on two very lowresource Bantu languages, taking advantage of the expertise of linguists on these particular languages. The second consists in exploring the Adaptor Grammar framework as a decision and prediction tool for linguists studying a new language. We experiment 162 grammar configurations for each language and show that using Adaptor Grammars for word segmentation enables us to test hypotheses about a language. Specializing a generic grammar with language specific knowledge leads to great improvements for the word discovery task, ultimately achieving a leap of about 30% token F-score from the results of a strong baseline.",Adaptor Grammars for the Linguist: Word Segmentation Experiments for Very Low-Resource Languages,"Computational Language Documentation attempts to make the most recent research in speech and language technologies available to linguists working on language preservation and documentation. In this paper, we pursue two main goals along these lines. The first is to improve upon a strong baseline for the unsupervised word discovery task on two very lowresource Bantu languages, taking advantage of the expertise of linguists on these particular languages. The second consists in exploring the Adaptor Grammar framework as a decision and prediction tool for linguists studying a new language. We experiment 162 grammar configurations for each language and show that using Adaptor Grammars for word segmentation enables us to test hypotheses about a language. Specializing a generic grammar with language specific knowledge leads to great improvements for the word discovery task, ultimately achieving a leap of about 30% token F-score from the results of a strong baseline.",We thank the anonymous reviewers for their insightful comments. We also thank Ramy Eskander for his help in the early stages of this research. 
This work was partly funded by French ANR and German DFG under grant ANR-14-CE35-0002 (BULB project).,"Adaptor Grammars for the Linguist: Word Segmentation Experiments for Very Low-Resource Languages. Computational Language Documentation attempts to make the most recent research in speech and language technologies available to linguists working on language preservation and documentation. In this paper, we pursue two main goals along these lines. The first is to improve upon a strong baseline for the unsupervised word discovery task on two very lowresource Bantu languages, taking advantage of the expertise of linguists on these particular languages. The second consists in exploring the Adaptor Grammar framework as a decision and prediction tool for linguists studying a new language. We experiment 162 grammar configurations for each language and show that using Adaptor Grammars for word segmentation enables us to test hypotheses about a language. Specializing a generic grammar with language specific knowledge leads to great improvements for the word discovery task, ultimately achieving a leap of about 30% token F-score from the results of a strong baseline.",2018
king-2008-osu,https://aclanthology.org/W08-1137,0,,,,,,,"OSU-GP: Attribute Selection Using Genetic Programming. This system's approach to the attribute selection task was to use a genetic programming algorithm to search for a solution to the task. The evolved programs for the furniture and people domain exhibit quite naive behavior, and the DICE and MASI scores on the training sets reflect the poor humanlikeness of the programs.",{OSU}-{GP}: Attribute Selection Using Genetic Programming,"This system's approach to the attribute selection task was to use a genetic programming algorithm to search for a solution to the task. The evolved programs for the furniture and people domain exhibit quite naive behavior, and the DICE and MASI scores on the training sets reflect the poor humanlikeness of the programs.",OSU-GP: Attribute Selection Using Genetic Programming,"This system's approach to the attribute selection task was to use a genetic programming algorithm to search for a solution to the task. The evolved programs for the furniture and people domain exhibit quite naive behavior, and the DICE and MASI scores on the training sets reflect the poor humanlikeness of the programs.",,"OSU-GP: Attribute Selection Using Genetic Programming. This system's approach to the attribute selection task was to use a genetic programming algorithm to search for a solution to the task. The evolved programs for the furniture and people domain exhibit quite naive behavior, and the DICE and MASI scores on the training sets reflect the poor humanlikeness of the programs.",2008
herbelot-2020-solve,https://aclanthology.org/2020.conll-1.27,0,,,,,,,"Re-solve it: simulating the acquisition of core semantic competences from small data. Many tasks are considered to be 'solved' in the computational linguistics literature, but the corresponding algorithms operate in ways which are radically different from human cognition. I illustrate this by coming back to the notion of semantic competence, which includes basic linguistic skills encompassing both referential phenomena and generic knowledge, in particular a) the ability to denote, b) the mastery of the lexicon, or c) the ability to model one's language use on others. Even though each of those faculties has been extensively tested individually, there is still no computational model that would account for their joint acquisition under the conditions experienced by a human. In this paper, I focus on one particular aspect of this problem: the amount of linguistic data available to the child or machine. I show that given the first competence mentioned above (a denotation function), the other two can in fact be learned from very limited data (2.8M token), reaching state-of-the-art performance. I argue that both the nature of the data and the way it is presented to the system matter to acquisition.",Re-solve it: simulating the acquisition of core semantic competences from small data,"Many tasks are considered to be 'solved' in the computational linguistics literature, but the corresponding algorithms operate in ways which are radically different from human cognition. I illustrate this by coming back to the notion of semantic competence, which includes basic linguistic skills encompassing both referential phenomena and generic knowledge, in particular a) the ability to denote, b) the mastery of the lexicon, or c) the ability to model one's language use on others. Even though each of those faculties has been extensively tested individually, there is still no computational model that would account for their joint acquisition under the conditions experienced by a human. In this paper, I focus on one particular aspect of this problem: the amount of linguistic data available to the child or machine. I show that given the first competence mentioned above (a denotation function), the other two can in fact be learned from very limited data (2.8M token), reaching state-of-the-art performance. I argue that both the nature of the data and the way it is presented to the system matter to acquisition.",Re-solve it: simulating the acquisition of core semantic competences from small data,"Many tasks are considered to be 'solved' in the computational linguistics literature, but the corresponding algorithms operate in ways which are radically different from human cognition. I illustrate this by coming back to the notion of semantic competence, which includes basic linguistic skills encompassing both referential phenomena and generic knowledge, in particular a) the ability to denote, b) the mastery of the lexicon, or c) the ability to model one's language use on others. Even though each of those faculties has been extensively tested individually, there is still no computational model that would account for their joint acquisition under the conditions experienced by a human. In this paper, I focus on one particular aspect of this problem: the amount of linguistic data available to the child or machine. 
I show that given the first competence mentioned above (a denotation function), the other two can in fact be learned from very limited data (2.8M token), reaching state-of-the-art performance. I argue that both the nature of the data and the way it is presented to the system matter to acquisition.","I thank Ann Copestake and Katrin Erk for reading an early draft of this paper, as well as the participants to the GeCKo workshop in Barcelona for their helpful comments. I would also like to thank the anonymous reviewers for their helpful suggestions and comments. Finally, I gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan V GPU used for this research.","Re-solve it: simulating the acquisition of core semantic competences from small data. Many tasks are considered to be 'solved' in the computational linguistics literature, but the corresponding algorithms operate in ways which are radically different from human cognition. I illustrate this by coming back to the notion of semantic competence, which includes basic linguistic skills encompassing both referential phenomena and generic knowledge, in particular a) the ability to denote, b) the mastery of the lexicon, or c) the ability to model one's language use on others. Even though each of those faculties has been extensively tested individually, there is still no computational model that would account for their joint acquisition under the conditions experienced by a human. In this paper, I focus on one particular aspect of this problem: the amount of linguistic data available to the child or machine. I show that given the first competence mentioned above (a denotation function), the other two can in fact be learned from very limited data (2.8M token), reaching state-of-the-art performance. I argue that both the nature of the data and the way it is presented to the system matter to acquisition.",2020
agirre-martinez-2000-exploring,https://aclanthology.org/W00-1702,0,,,,,,,"Exploring Automatic Word Sense Disambiguation with Decision Lists and the Web. The most effective paradigm for word sense disambiguation, supervised learning, seems to be stuck because of the knowledge acquisition bottleneck. In this paper we take an in-depth study of the performance of decision lists on two publicly available corpora and an additional corpus automatically acquired from the Web, using the fine-grained highly polysemous senses in WordNet. Decision lists are shown a versatile state-of-the-art technique. The experiments reveal, among other facts, that SemCor can be an acceptable (0.7 precision for polysemous words) starting point for an all-words system. The results on the DSO corpus show that for some highly polysemous words 0.7 precision seems to be the current state-of-the-art limit. On the other hand, independently constructed hand-tagged corpora are not mutually useful, and a corpus automatically acquired from the Web is shown to fail. 'church1' => GLOSS 'a group of Christians' Why is one >> church << satisfied and the other oppressed ? : 'church2' => MONOSEMOUS SYNONYM 'church building' The result was a congregation formed at that place, and a >> church << erected. :",Exploring Automatic Word Sense Disambiguation with Decision Lists and the Web,"The most effective paradigm for word sense disambiguation, supervised learning, seems to be stuck because of the knowledge acquisition bottleneck. In this paper we take an in-depth study of the performance of decision lists on two publicly available corpora and an additional corpus automatically acquired from the Web, using the fine-grained highly polysemous senses in WordNet. Decision lists are shown a versatile state-of-the-art technique. The experiments reveal, among other facts, that SemCor can be an acceptable (0.7 precision for polysemous words) starting point for an all-words system. The results on the DSO corpus show that for some highly polysemous words 0.7 precision seems to be the current state-of-the-art limit. On the other hand, independently constructed hand-tagged corpora are not mutually useful, and a corpus automatically acquired from the Web is shown to fail. 'church1' => GLOSS 'a group of Christians' Why is one >> church << satisfied and the other oppressed ? : 'church2' => MONOSEMOUS SYNONYM 'church building' The result was a congregation formed at that place, and a >> church << erected. :",Exploring Automatic Word Sense Disambiguation with Decision Lists and the Web,"The most effective paradigm for word sense disambiguation, supervised learning, seems to be stuck because of the knowledge acquisition bottleneck. In this paper we take an in-depth study of the performance of decision lists on two publicly available corpora and an additional corpus automatically acquired from the Web, using the fine-grained highly polysemous senses in WordNet. Decision lists are shown a versatile state-of-the-art technique. The experiments reveal, among other facts, that SemCor can be an acceptable (0.7 precision for polysemous words) starting point for an all-words system. The results on the DSO corpus show that for some highly polysemous words 0.7 precision seems to be the current state-of-the-art limit. On the other hand, independently constructed hand-tagged corpora are not mutually useful, and a corpus automatically acquired from the Web is shown to fail. 
'church1' => GLOSS 'a group of Christians' Why is one >> church << satisfied and the other oppressed ? : 'church2' => MONOSEMOUS SYNONYM 'church building' The result was a congregation formed at that place, and a >> church << erected. :","The work here presented received funds from projects OF319-99 (Government of Gipuzkoa), EX1998-30 (Basque Country Government) and 2FD1997-1503 (European Commission).","Exploring Automatic Word Sense Disambiguation with Decision Lists and the Web. The most effective paradigm for word sense disambiguation, supervised learning, seems to be stuck because of the knowledge acquisition bottleneck. In this paper we take an in-depth study of the performance of decision lists on two publicly available corpora and an additional corpus automatically acquired from the Web, using the fine-grained highly polysemous senses in WordNet. Decision lists are shown a versatile state-of-the-art technique. The experiments reveal, among other facts, that SemCor can be an acceptable (0.7 precision for polysemous words) starting point for an all-words system. The results on the DSO corpus show that for some highly polysemous words 0.7 precision seems to be the current state-of-the-art limit. On the other hand, independently constructed hand-tagged corpora are not mutually useful, and a corpus automatically acquired from the Web is shown to fail. 'church1' => GLOSS 'a group of Christians' Why is one >> church << satisfied and the other oppressed ? : 'church2' => MONOSEMOUS SYNONYM 'church building' The result was a congregation formed at that place, and a >> church << erected. :",2000
reynaert-2014-ticclops,https://aclanthology.org/C14-2012,0,,,,,,,"TICCLops: Text-Induced Corpus Clean-up as online processing system. We present the 'online processing system' version of Text-Induced Corpus Clean-up, a web service and application open for use to researchers. The system has over the past years been developed to provide mainly OCR error post-correction, but can just as fruitfully be employed to automatically correct texts for spelling errors, or to transcribe texts in an older spelling into the modern variant of the language. It has recently been re-implemented as a distributable and scalable software system in C++, designed to be easily adaptable for use with a broad range of languages and diachronical language varieties. Its new code base is now fit for production work and to be released as open source.",{TICCL}ops: Text-Induced Corpus Clean-up as online processing system,"We present the 'online processing system' version of Text-Induced Corpus Clean-up, a web service and application open for use to researchers. The system has over the past years been developed to provide mainly OCR error post-correction, but can just as fruitfully be employed to automatically correct texts for spelling errors, or to transcribe texts in an older spelling into the modern variant of the language. It has recently been re-implemented as a distributable and scalable software system in C++, designed to be easily adaptable for use with a broad range of languages and diachronical language varieties. Its new code base is now fit for production work and to be released as open source.",TICCLops: Text-Induced Corpus Clean-up as online processing system,"We present the 'online processing system' version of Text-Induced Corpus Clean-up, a web service and application open for use to researchers. The system has over the past years been developed to provide mainly OCR error post-correction, but can just as fruitfully be employed to automatically correct texts for spelling errors, or to transcribe texts in an older spelling into the modern variant of the language. It has recently been re-implemented as a distributable and scalable software system in C++, designed to be easily adaptable for use with a broad range of languages and diachronical language varieties. Its new code base is now fit for production work and to be released as open source.","The author, Martin Reynaert, and TiCC senior scientific programmer Ko van der Sloot gratefully acknowledge support from CLARIN-NL in projects @PhilosTEI (CLARIN-NL-12-006) and OpenSoNaR (CLARIN-NL-12-013). The author further acknowledges support from NWO in project Nederlab.","TICCLops: Text-Induced Corpus Clean-up as online processing system. We present the 'online processing system' version of Text-Induced Corpus Clean-up, a web service and application open for use to researchers. The system has over the past years been developed to provide mainly OCR error post-correction, but can just as fruitfully be employed to automatically correct texts for spelling errors, or to transcribe texts in an older spelling into the modern variant of the language. It has recently been re-implemented as a distributable and scalable software system in C++, designed to be easily adaptable for use with a broad range of languages and diachronical language varieties. Its new code base is now fit for production work and to be released as open source.",2014
malmaud-etal-2015-whats,https://aclanthology.org/N15-1015,0,,,,,,,"What's Cookin'? Interpreting Cooking Videos using Text, Speech and Vision. We present a novel method for aligning a sequence of instructions to a video of someone carrying out a task. In particular, we focus on the cooking domain, where the instructions correspond to the recipe. Our technique relies on an HMM to align the recipe steps to the (automatically generated) speech transcript. We then refine this alignment using a state-of-the-art visual food detector, based on a deep convolutional neural network. We show that our technique outperforms simpler techniques based on keyword spotting. It also enables interesting applications, such as automatically illustrating recipes with keyframes, and searching within a video for events of interest.","What{'}s Cookin{'}? Interpreting Cooking Videos using Text, Speech and Vision","We present a novel method for aligning a sequence of instructions to a video of someone carrying out a task. In particular, we focus on the cooking domain, where the instructions correspond to the recipe. Our technique relies on an HMM to align the recipe steps to the (automatically generated) speech transcript. We then refine this alignment using a state-of-the-art visual food detector, based on a deep convolutional neural network. We show that our technique outperforms simpler techniques based on keyword spotting. It also enables interesting applications, such as automatically illustrating recipes with keyframes, and searching within a video for events of interest.","What's Cookin'? Interpreting Cooking Videos using Text, Speech and Vision","We present a novel method for aligning a sequence of instructions to a video of someone carrying out a task. In particular, we focus on the cooking domain, where the instructions correspond to the recipe. Our technique relies on an HMM to align the recipe steps to the (automatically generated) speech transcript. We then refine this alignment using a state-of-the-art visual food detector, based on a deep convolutional neural network. We show that our technique outperforms simpler techniques based on keyword spotting. It also enables interesting applications, such as automatically illustrating recipes with keyframes, and searching within a video for events of interest.","Acknowledgments. We would like to thank Alex Gorban and Anoop Korattikara for helping with some of the experiments, and Nancy Chang for feedback on the paper.","What's Cookin'? Interpreting Cooking Videos using Text, Speech and Vision. We present a novel method for aligning a sequence of instructions to a video of someone carrying out a task. In particular, we focus on the cooking domain, where the instructions correspond to the recipe. Our technique relies on an HMM to align the recipe steps to the (automatically generated) speech transcript. We then refine this alignment using a state-of-the-art visual food detector, based on a deep convolutional neural network. We show that our technique outperforms simpler techniques based on keyword spotting. It also enables interesting applications, such as automatically illustrating recipes with keyframes, and searching within a video for events of interest.",2015
kubler-2008-page,https://aclanthology.org/W08-1008,0,,,,,,,"The PaGe 2008 Shared Task on Parsing German. The ACL 2008 Workshop on Parsing German features a shared task on parsing German. The goal of the shared task was to find reasons for the radically different behavior of parsers on the different treebanks and between constituent and dependency representations. In this paper, we describe the task and the data sets. In addition, we provide an overview of the test results and a first analysis.",The {P}a{G}e 2008 Shared Task on Parsing {G}erman,"The ACL 2008 Workshop on Parsing German features a shared task on parsing German. The goal of the shared task was to find reasons for the radically different behavior of parsers on the different treebanks and between constituent and dependency representations. In this paper, we describe the task and the data sets. In addition, we provide an overview of the test results and a first analysis.",The PaGe 2008 Shared Task on Parsing German,"The ACL 2008 Workshop on Parsing German features a shared task on parsing German. The goal of the shared task was to find reasons for the radically different behavior of parsers on the different treebanks and between constituent and dependency representations. In this paper, we describe the task and the data sets. In addition, we provide an overview of the test results and a first analysis.","First and foremost, we want to thank all the people and organizations that generously provided us with treebank data and without whom the shared task would have been literally impossible: Erhard Hinrichs, University of Tübingen (TüBa-D/Z), and Hans Uszkoreit, Saarland University and DFKI (TIGER).Secondly, we would like to thank Wolfgang Maier and Yannick Versley who performed the data conversions necessary for the shared task. Additionally, Wolfgang provided the scripts for the constituent evaluation.","The PaGe 2008 Shared Task on Parsing German. The ACL 2008 Workshop on Parsing German features a shared task on parsing German. The goal of the shared task was to find reasons for the radically different behavior of parsers on the different treebanks and between constituent and dependency representations. In this paper, we describe the task and the data sets. In addition, we provide an overview of the test results and a first analysis.",2008
aksu-etal-2022-n,https://aclanthology.org/2022.findings-acl.131,0,,,,,,,"N-Shot Learning for Augmenting Task-Oriented Dialogue State Tracking. Augmentation of task-oriented dialogues has followed standard methods used for plain-text such as back-translation, word-level manipulation, and paraphrasing despite its richly annotated structure. In this work, we introduce an augmentation framework that utilizes belief state annotations to match turns from various dialogues and form new synthetic dialogues in a bottom-up manner. Unlike other augmentation strategies, it operates with as few as five examples. Our augmentation strategy yields significant improvements when both adapting a DST model to a new domain, and when adapting a language model to the DST task, on evaluations with TRADE and TOD-BERT models. Further analysis shows that our model performs better on seen values during training, and it is also more robust to unseen values. We conclude that exploiting belief state annotations enhances dialogue augmentation and results in improved models in n-shot training scenarios.",N-Shot Learning for Augmenting Task-Oriented Dialogue State Tracking,"Augmentation of task-oriented dialogues has followed standard methods used for plain-text such as back-translation, word-level manipulation, and paraphrasing despite its richly annotated structure. In this work, we introduce an augmentation framework that utilizes belief state annotations to match turns from various dialogues and form new synthetic dialogues in a bottom-up manner. Unlike other augmentation strategies, it operates with as few as five examples. Our augmentation strategy yields significant improvements when both adapting a DST model to a new domain, and when adapting a language model to the DST task, on evaluations with TRADE and TOD-BERT models. Further analysis shows that our model performs better on seen values during training, and it is also more robust to unseen values. We conclude that exploiting belief state annotations enhances dialogue augmentation and results in improved models in n-shot training scenarios.",N-Shot Learning for Augmenting Task-Oriented Dialogue State Tracking,"Augmentation of task-oriented dialogues has followed standard methods used for plain-text such as back-translation, word-level manipulation, and paraphrasing despite its richly annotated structure. In this work, we introduce an augmentation framework that utilizes belief state annotations to match turns from various dialogues and form new synthetic dialogues in a bottom-up manner. Unlike other augmentation strategies, it operates with as few as five examples. Our augmentation strategy yields significant improvements when both adapting a DST model to a new domain, and when adapting a language model to the DST task, on evaluations with TRADE and TOD-BERT models. Further analysis shows that our model performs better on seen values during training, and it is also more robust to unseen values. We conclude that exploiting belief state annotations enhances dialogue augmentation and results in improved models in n-shot training scenarios.","This research was supported by the SINGA scholarship from A*STAR and by the National Research Foundation, Prime Minister's Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) programme. We would like to thank anonymous reviewers for their insightful feedback on how to improve the paper.","N-Shot Learning for Augmenting Task-Oriented Dialogue State Tracking. 
Augmentation of task-oriented dialogues has followed standard methods used for plain-text such as back-translation, word-level manipulation, and paraphrasing despite its richly annotated structure. In this work, we introduce an augmentation framework that utilizes belief state annotations to match turns from various dialogues and form new synthetic dialogues in a bottom-up manner. Unlike other augmentation strategies, it operates with as few as five examples. Our augmentation strategy yields significant improvements when both adapting a DST model to a new domain, and when adapting a language model to the DST task, on evaluations with TRADE and TOD-BERT models. Further analysis shows that our model performs better on seen values during training, and it is also more robust to unseen values. We conclude that exploiting belief state annotations enhances dialogue augmentation and results in improved models in n-shot training scenarios.",2022
radford-etal-2018-adult,https://aclanthology.org/W18-0614,1,,,,health,,,"Can adult mental health be predicted by childhood future-self narratives? Insights from the CLPsych 2018 Shared Task. The CLPsych 2018 Shared Task B explores how childhood essays can predict psychological distress throughout the author's life. Our main aim was to build tools to help our psychologists understand the data, propose features and interpret predictions. We submitted two linear regression models: MODELA uses simple demographic and wordcount features, while MODELB uses linguistic, entity, typographic, expert-gazetteer, and readability features. Our models perform best at younger prediction ages, with our best unofficial score at 23 of 0.426 disattenuated Pearson correlation. This task is challenging and although predictive performance is limited, we propose that tight integration of expertise across computational linguistics and clinical psychology is a productive direction.",Can adult mental health be predicted by childhood future-self narratives? Insights from the {CLP}sych 2018 Shared Task,"The CLPsych 2018 Shared Task B explores how childhood essays can predict psychological distress throughout the author's life. Our main aim was to build tools to help our psychologists understand the data, propose features and interpret predictions. We submitted two linear regression models: MODELA uses simple demographic and wordcount features, while MODELB uses linguistic, entity, typographic, expert-gazetteer, and readability features. Our models perform best at younger prediction ages, with our best unofficial score at 23 of 0.426 disattenuated Pearson correlation. This task is challenging and although predictive performance is limited, we propose that tight integration of expertise across computational linguistics and clinical psychology is a productive direction.",Can adult mental health be predicted by childhood future-self narratives? Insights from the CLPsych 2018 Shared Task,"The CLPsych 2018 Shared Task B explores how childhood essays can predict psychological distress throughout the author's life. Our main aim was to build tools to help our psychologists understand the data, propose features and interpret predictions. We submitted two linear regression models: MODELA uses simple demographic and wordcount features, while MODELB uses linguistic, entity, typographic, expert-gazetteer, and readability features. Our models perform best at younger prediction ages, with our best unofficial score at 23 of 0.426 disattenuated Pearson correlation. This task is challenging and although predictive performance is limited, we propose that tight integration of expertise across computational linguistics and clinical psychology is a productive direction.",This study was approved by the University of New South Wales Human Research Ethics Advisory Panel (ref. HC180171). We thank the CLPsych reviewers for their thoughtful comments. KMK is funded by the Australian National Health and Medical Research Council (NHMRC) fellowship #1088313. KR is supported by the ARC-NHMRC Dementia Research Development Fellowship #1103312. LL is supported by the Serpentine Foundation Postdoctoral Fellowship. RP is supported by the Dementia Collaborative Research Centre.,"Can adult mental health be predicted by childhood future-self narratives? Insights from the CLPsych 2018 Shared Task. The CLPsych 2018 Shared Task B explores how childhood essays can predict psychological distress throughout the author's life. 
Our main aim was to build tools to help our psychologists understand the data, propose features and interpret predictions. We submitted two linear regression models: MODELA uses simple demographic and wordcount features, while MODELB uses linguistic, entity, typographic, expert-gazetteer, and readability features. Our models perform best at younger prediction ages, with our best unofficial score at 23 of 0.426 disattenuated Pearson correlation. This task is challenging and although predictive performance is limited, we propose that tight integration of expertise across computational linguistics and clinical psychology is a productive direction.",2018
chandrasekaran-etal-2018-punny,https://aclanthology.org/N18-2121,0,,,,,,,"Punny Captions: Witty Wordplay in Image Descriptions. Wit is a form of rich interaction that is often grounded in a specific situation (e.g., a comment in response to an event). In this work, we attempt to build computational models that can produce witty descriptions for a given image. Inspired by a cognitive account of humor appreciation, we employ linguistic wordplay, specifically puns, in image descriptions. We develop two approaches which involve retrieving witty descriptions for a given image from a large corpus of sentences, or generating them via an encoder-decoder neural network architecture. We compare our approach against meaningful baseline approaches via human studies and show substantial improvements. We find that when a human is subject to similar constraints as the model regarding word usage and style, people vote the image descriptions generated by our model to be slightly wittier than human-written witty descriptions. Unsurprisingly, humans are almost always wittier than the model when they are free to choose the vocabulary, style, etc.",Punny Captions: Witty Wordplay in Image Descriptions,"Wit is a form of rich interaction that is often grounded in a specific situation (e.g., a comment in response to an event). In this work, we attempt to build computational models that can produce witty descriptions for a given image. Inspired by a cognitive account of humor appreciation, we employ linguistic wordplay, specifically puns, in image descriptions. We develop two approaches which involve retrieving witty descriptions for a given image from a large corpus of sentences, or generating them via an encoder-decoder neural network architecture. We compare our approach against meaningful baseline approaches via human studies and show substantial improvements. We find that when a human is subject to similar constraints as the model regarding word usage and style, people vote the image descriptions generated by our model to be slightly wittier than human-written witty descriptions. Unsurprisingly, humans are almost always wittier than the model when they are free to choose the vocabulary, style, etc.",Punny Captions: Witty Wordplay in Image Descriptions,"Wit is a form of rich interaction that is often grounded in a specific situation (e.g., a comment in response to an event). In this work, we attempt to build computational models that can produce witty descriptions for a given image. Inspired by a cognitive account of humor appreciation, we employ linguistic wordplay, specifically puns, in image descriptions. We develop two approaches which involve retrieving witty descriptions for a given image from a large corpus of sentences, or generating them via an encoder-decoder neural network architecture. We compare our approach against meaningful baseline approaches via human studies and show substantial improvements. We find that when a human is subject to similar constraints as the model regarding word usage and style, people vote the image descriptions generated by our model to be slightly wittier than human-written witty descriptions. Unsurprisingly, humans are almost always wittier than the model when they are free to choose the vocabulary, style, etc.","We thank Shubham Toshniwal for his advice regarding the automatic speech recognition model. 
This work was supported in part by: a NSF CAREER award, ONR YIP award, ONR Grant N00014-14-12713, PGA Family Foundation award, Google FRA, Amazon ARA, DARPA XAI grant to DP and NVIDIA GPU donations, Google FRA, IBM Faculty Award, and Bloomberg Data Science Research Grant to MB.","Punny Captions: Witty Wordplay in Image Descriptions. Wit is a form of rich interaction that is often grounded in a specific situation (e.g., a comment in response to an event). In this work, we attempt to build computational models that can produce witty descriptions for a given image. Inspired by a cognitive account of humor appreciation, we employ linguistic wordplay, specifically puns, in image descriptions. We develop two approaches which involve retrieving witty descriptions for a given image from a large corpus of sentences, or generating them via an encoder-decoder neural network architecture. We compare our approach against meaningful baseline approaches via human studies and show substantial improvements. We find that when a human is subject to similar constraints as the model regarding word usage and style, people vote the image descriptions generated by our model to be slightly wittier than human-written witty descriptions. Unsurprisingly, humans are almost always wittier than the model when they are free to choose the vocabulary, style, etc.",2018
rothe-etal-2021-simple,https://aclanthology.org/2021.acl-short.89,0,,,,,,,"A Simple Recipe for Multilingual Grammatical Error Correction. This paper presents a simple recipe to train state-of-the-art multilingual Grammatical Error Correction (GEC) models. We achieve this by first proposing a language-agnostic method to generate a large number of synthetic examples. The second ingredient is to use large-scale multilingual language models (up to 11B parameters). Once fine-tuned on language-specific supervised sets we surpass the previous state-of-the-art results on GEC benchmarks in four languages: English, Czech, German and Russian. Having established a new set of baselines for GEC, we make our results easily reproducible and accessible by releasing a CLANG-8 dataset. It is produced by using our best model, which we call gT5, to clean the targets of a widely used yet noisy LANG-8 dataset. CLANG-8 greatly simplifies typical GEC training pipelines composed of multiple fine-tuning stages; we demonstrate that performing a single fine-tuning step on CLANG-8 with the off-the-shelf language models yields further accuracy improvements over an already top-performing gT5 model for English.",A Simple Recipe for Multilingual Grammatical Error Correction,"This paper presents a simple recipe to train state-of-the-art multilingual Grammatical Error Correction (GEC) models. We achieve this by first proposing a language-agnostic method to generate a large number of synthetic examples. The second ingredient is to use large-scale multilingual language models (up to 11B parameters). Once fine-tuned on language-specific supervised sets we surpass the previous state-of-the-art results on GEC benchmarks in four languages: English, Czech, German and Russian. Having established a new set of baselines for GEC, we make our results easily reproducible and accessible by releasing a CLANG-8 dataset. It is produced by using our best model, which we call gT5, to clean the targets of a widely used yet noisy LANG-8 dataset. CLANG-8 greatly simplifies typical GEC training pipelines composed of multiple fine-tuning stages; we demonstrate that performing a single fine-tuning step on CLANG-8 with the off-the-shelf language models yields further accuracy improvements over an already top-performing gT5 model for English.",A Simple Recipe for Multilingual Grammatical Error Correction,"This paper presents a simple recipe to train state-of-the-art multilingual Grammatical Error Correction (GEC) models. We achieve this by first proposing a language-agnostic method to generate a large number of synthetic examples. The second ingredient is to use large-scale multilingual language models (up to 11B parameters). Once fine-tuned on language-specific supervised sets we surpass the previous state-of-the-art results on GEC benchmarks in four languages: English, Czech, German and Russian. Having established a new set of baselines for GEC, we make our results easily reproducible and accessible by releasing a CLANG-8 dataset. It is produced by using our best model, which we call gT5, to clean the targets of a widely used yet noisy LANG-8 dataset. 
CLANG-8 greatly simplifies typical GEC training pipelines composed of multiple fine-tuning stages; we demonstrate that performing a single fine-tuning step on CLANG-8 with the off-the-shelf language models yields further accuracy improvements over an already top-performing gT5 model for English.","We would like to thank Costanza Conforti, Shankar Kumar, Felix Stahlberg and Samer Hassan for useful discussions as well as their help with training and evaluating the models.","A Simple Recipe for Multilingual Grammatical Error Correction. This paper presents a simple recipe to train state-of-the-art multilingual Grammatical Error Correction (GEC) models. We achieve this by first proposing a language-agnostic method to generate a large number of synthetic examples. The second ingredient is to use large-scale multilingual language models (up to 11B parameters). Once fine-tuned on language-specific supervised sets we surpass the previous state-of-the-art results on GEC benchmarks in four languages: English, Czech, German and Russian. Having established a new set of baselines for GEC, we make our results easily reproducible and accessible by releasing a CLANG-8 dataset. It is produced by using our best model, which we call gT5, to clean the targets of a widely used yet noisy LANG-8 dataset. CLANG-8 greatly simplifies typical GEC training pipelines composed of multiple fine-tuning stages; we demonstrate that performing a single fine-tuning step on CLANG-8 with the off-the-shelf language models yields further accuracy improvements over an already top-performing gT5 model for English.",2021
singh-etal-2020-newssweeper,https://aclanthology.org/2020.semeval-1.231,1,,,,disinformation_and_fake_news,,,"newsSweeper at SemEval-2020 Task 11: Context-Aware Rich Feature Representations for Propaganda Classification. This paper describes our submissions to SemEval 2020 Task 11: Detection of Propaganda Techniques in News Articles for each of the two subtasks of Span Identification and Technique Classification. We make use of a pre-trained BERT language model enhanced with tagging techniques developed for the task of Named Entity Recognition (NER), to develop a system for identifying propaganda spans in the text. For the second subtask, we incorporate contextual features in a pre-trained RoBERTa model for the classification of propaganda techniques. We were ranked 5th in the propaganda technique classification subtask.",news{S}weeper at {S}em{E}val-2020 Task 11: Context-Aware Rich Feature Representations for Propaganda Classification,"This paper describes our submissions to SemEval 2020 Task 11: Detection of Propaganda Techniques in News Articles for each of the two subtasks of Span Identification and Technique Classification. We make use of a pre-trained BERT language model enhanced with tagging techniques developed for the task of Named Entity Recognition (NER), to develop a system for identifying propaganda spans in the text. For the second subtask, we incorporate contextual features in a pre-trained RoBERTa model for the classification of propaganda techniques. We were ranked 5th in the propaganda technique classification subtask.",newsSweeper at SemEval-2020 Task 11: Context-Aware Rich Feature Representations for Propaganda Classification,"This paper describes our submissions to SemEval 2020 Task 11: Detection of Propaganda Techniques in News Articles for each of the two subtasks of Span Identification and Technique Classification. We make use of a pre-trained BERT language model enhanced with tagging techniques developed for the task of Named Entity Recognition (NER), to develop a system for identifying propaganda spans in the text. For the second subtask, we incorporate contextual features in a pre-trained RoBERTa model for the classification of propaganda techniques. We were ranked 5th in the propaganda technique classification subtask.",,"newsSweeper at SemEval-2020 Task 11: Context-Aware Rich Feature Representations for Propaganda Classification. This paper describes our submissions to SemEval 2020 Task 11: Detection of Propaganda Techniques in News Articles for each of the two subtasks of Span Identification and Technique Classification. We make use of a pre-trained BERT language model enhanced with tagging techniques developed for the task of Named Entity Recognition (NER), to develop a system for identifying propaganda spans in the text. For the second subtask, we incorporate contextual features in a pre-trained RoBERTa model for the classification of propaganda techniques. We were ranked 5th in the propaganda technique classification subtask.",2020
aramaki-etal-2007-uth,https://aclanthology.org/S07-1103,0,,,,,,,"UTH: SVM-based Semantic Relation Classification using Physical Sizes. Although researchers have shown increasing interest in extracting/classifying semantic relations, most previous studies have basically relied on lexical patterns between terms. This paper proposes a novel way to accomplish the task: a system that captures a physical size of an entity. Experimental results revealed that our proposed method is feasible and prevents the problems inherent in other methods.",{UTH}: {SVM}-based Semantic Relation Classification using Physical Sizes,"Although researchers have shown increasing interest in extracting/classifying semantic relations, most previous studies have basically relied on lexical patterns between terms. This paper proposes a novel way to accomplish the task: a system that captures a physical size of an entity. Experimental results revealed that our proposed method is feasible and prevents the problems inherent in other methods.",UTH: SVM-based Semantic Relation Classification using Physical Sizes,"Although researchers have shown increasing interest in extracting/classifying semantic relations, most previous studies have basically relied on lexical patterns between terms. This paper proposes a novel way to accomplish the task: a system that captures a physical size of an entity. Experimental results revealed that our proposed method is feasible and prevents the problems inherent in other methods.",,"UTH: SVM-based Semantic Relation Classification using Physical Sizes. Although researchers have shown increasing interest in extracting/classifying semantic relations, most previous studies have basically relied on lexical patterns between terms. This paper proposes a novel way to accomplish the task: a system that captures a physical size of an entity. Experimental results revealed that our proposed method is feasible and prevents the problems inherent in other methods.",2007
kacmarcik-etal-2000-robust,https://aclanthology.org/C00-1057,0,,,,,,,"Robust Segmentation of Japanese Text into a Lattice for Parsing. We describe a segmentation component that utilizes minimal syntactic knowledge to produce a lattice of word candidates for a broad coverage Japanese NL parser. The segmenter is a finite state morphological analyzer and text normalizer designed to handle the orthographic variations characteristic of written Japanese, including alternate spellings, script variation, vowel extensions and word-internal parenthetical material. This architecture differs from conventional Japanese wordbreakers in that it does not attempt to simultaneously attack the problems of identifying segmentation candidates and choosing the most probable analysis. To minimize duplication of effort between components and to give the segmenter greater freedom to address orthography issues, the task of choosing the best analysis is handled by the parser, which has access to a much richer set of linguistic information. By maximizing recall in the segmenter and allowing a precision of 34.7%, our parser currently achieves a breaking accuracy of ~97% over a wide variety of corpora.",Robust Segmentation of {J}apanese Text into a Lattice for Parsing,"We describe a segmentation component that utilizes minimal syntactic knowledge to produce a lattice of word candidates for a broad coverage Japanese NL parser. The segmenter is a finite state morphological analyzer and text normalizer designed to handle the orthographic variations characteristic of written Japanese, including alternate spellings, script variation, vowel extensions and word-internal parenthetical material. This architecture differs from conventional Japanese wordbreakers in that it does not attempt to simultaneously attack the problems of identifying segmentation candidates and choosing the most probable analysis. To minimize duplication of effort between components and to give the segmenter greater freedom to address orthography issues, the task of choosing the best analysis is handled by the parser, which has access to a much richer set of linguistic information. By maximizing recall in the segmenter and allowing a precision of 34.7%, our parser currently achieves a breaking accuracy of ~97% over a wide variety of corpora.",Robust Segmentation of Japanese Text into a Lattice for Parsing,"We describe a segmentation component that utilizes minimal syntactic knowledge to produce a lattice of word candidates for a broad coverage Japanese NL parser. The segmenter is a finite state morphological analyzer and text normalizer designed to handle the orthographic variations characteristic of written Japanese, including alternate spellings, script variation, vowel extensions and word-internal parenthetical material. This architecture differs from conventional Japanese wordbreakers in that it does not attempt to simultaneously attack the problems of identifying segmentation candidates and choosing the most probable analysis. To minimize duplication of effort between components and to give the segmenter greater freedom to address orthography issues, the task of choosing the best analysis is handled by the parser, which has access to a much richer set of linguistic information. By maximizing recall in the segmenter and allowing a precision of 34.7%, our parser currently achieves a breaking accuracy of ~97% over a wide variety of corpora.",,"Robust Segmentation of Japanese Text into a Lattice for Parsing. 
We describe a segmentation component that utilizes minimal syntactic knowledge to produce a lattice of word candidates for a broad coverage Japanese NL parser. The segmenter is a finite state morphological analyzer and text normalizer designed to handle the orthographic variations characteristic of written Japanese, including alternate spellings, script variation, vowel extensions and word-internal parenthetical material. This architecture differs from conventional Japanese wordbreakers in that it does not attempt to simultaneously attack the problems of identifying segmentation candidates and choosing the most probable analysis. To minimize duplication of effort between components and to give the segmenter greater freedom to address orthography issues, the task of choosing the best analysis is handled by the parser, which has access to a much richer set of linguistic information. By maximizing recall in the segmenter and allowing a precision of 34.7%, our parser currently achieves a breaking accuracy of ~97% over a wide variety of corpora.",2000
rodriguez-penagos-2004-metalinguistic,https://aclanthology.org/W04-1802,0,,,,,,,"Metalinguistic Information Extraction for Terminology. This paper describes and evaluates the Metalinguistic Operation Processor (MOP) system for automatic compilation of metalinguistic information from technical and scientific documents. This system is designed to extract non-standard terminological resources that we have called Metalinguistic Information Databases (or MIDs), in order to help update changing glossaries, knowledge bases and ontologies, as well as to reflect the metastable dynamics of special-domain knowledge.",Metalinguistic Information Extraction for Terminology,"This paper describes and evaluates the Metalinguistic Operation Processor (MOP) system for automatic compilation of metalinguistic information from technical and scientific documents. This system is designed to extract non-standard terminological resources that we have called Metalinguistic Information Databases (or MIDs), in order to help update changing glossaries, knowledge bases and ontologies, as well as to reflect the metastable dynamics of special-domain knowledge.",Metalinguistic Information Extraction for Terminology,"This paper describes and evaluates the Metalinguistic Operation Processor (MOP) system for automatic compilation of metalinguistic information from technical and scientific documents. This system is designed to extract non-standard terminological resources that we have called Metalinguistic Information Databases (or MIDs), in order to help update changing glossaries, knowledge bases and ontologies, as well as to reflect the metastable dynamics of special-domain knowledge.",,"Metalinguistic Information Extraction for Terminology. This paper describes and evaluates the Metalinguistic Operation Processor (MOP) system for automatic compilation of metalinguistic information from technical and scientific documents. This system is designed to extract non-standard terminological resources that we have called Metalinguistic Information Databases (or MIDs), in order to help update changing glossaries, knowledge bases and ontologies, as well as to reflect the metastable dynamics of special-domain knowledge.",2004
wang-etal-2022-miner,https://aclanthology.org/2022.acl-long.383,0,,,,,,,"MINER: Improving Out-of-Vocabulary Named Entity Recognition from an Information Theoretic Perspective. NER model has achieved promising performance on standard NER benchmarks. However, recent studies show that previous approaches may over-rely on entity mention information, resulting in poor performance on out-of-vocabulary (OOV) entity recognition. In this work, we propose MINER, a novel NER learning framework, to remedy this issue from an information-theoretic perspective. The proposed approach contains two mutual information-based training objectives: i) generalizing information maximization, which enhances representation via deep understanding of context and entity surface forms; ii) superfluous information minimization, which discourages representation from rote memorizing entity names or exploiting biased cues in data. Experiments on various settings and datasets demonstrate that it achieves better performance in predicting OOV entities.",{MINER}: Improving Out-of-Vocabulary Named Entity Recognition from an Information Theoretic Perspective,"NER model has achieved promising performance on standard NER benchmarks. However, recent studies show that previous approaches may over-rely on entity mention information, resulting in poor performance on out-of-vocabulary (OOV) entity recognition. In this work, we propose MINER, a novel NER learning framework, to remedy this issue from an information-theoretic perspective. The proposed approach contains two mutual information-based training objectives: i) generalizing information maximization, which enhances representation via deep understanding of context and entity surface forms; ii) superfluous information minimization, which discourages representation from rote memorizing entity names or exploiting biased cues in data. Experiments on various settings and datasets demonstrate that it achieves better performance in predicting OOV entities.",MINER: Improving Out-of-Vocabulary Named Entity Recognition from an Information Theoretic Perspective,"NER model has achieved promising performance on standard NER benchmarks. However, recent studies show that previous approaches may over-rely on entity mention information, resulting in poor performance on out-of-vocabulary (OOV) entity recognition. In this work, we propose MINER, a novel NER learning framework, to remedy this issue from an information-theoretic perspective. The proposed approach contains two mutual information-based training objectives: i) generalizing information maximization, which enhances representation via deep understanding of context and entity surface forms; ii) superfluous information minimization, which discourages representation from rote memorizing entity names or exploiting biased cues in data. Experiments on various settings and datasets demonstrate that it achieves better performance in predicting OOV entities.","The authors would like to thank the anonymous reviewers for their helpful comments, Ting Wu and Yiding Tan for their early contribution. This work was partially funded by China National Key RD Program (No. 2018YFB1005104), National Natural Science Foundation of China (No. 62076069, 61976056). This research was sponsored by Hikvision Cooperation Fund, Beijing Academy of Artificial Intelligence(BAAI), and CAAI-Huawei MindSpore Open Fund.","MINER: Improving Out-of-Vocabulary Named Entity Recognition from an Information Theoretic Perspective. 
NER model has achieved promising performance on standard NER benchmarks. However, recent studies show that previous approaches may over-rely on entity mention information, resulting in poor performance on out-of-vocabulary (OOV) entity recognition. In this work, we propose MINER, a novel NER learning framework, to remedy this issue from an information-theoretic perspective. The proposed approach contains two mutual information-based training objectives: i) generalizing information maximization, which enhances representation via deep understanding of context and entity surface forms; ii) superfluous information minimization, which discourages representation from rote memorizing entity names or exploiting biased cues in data. Experiments on various settings and datasets demonstrate that it achieves better performance in predicting OOV entities.",2022
moore-2004-improving,https://aclanthology.org/P04-1066,0,,,,,,,"Improving IBM Word Alignment Model 1. We investigate a number of simple methods for improving the word-alignment accuracy of IBM Model 1. We demonstrate reduction in alignment error rate of approximately 30% resulting from (1) giving extra weight to the probability of alignment to the null word, (2) smoothing probability estimates for rare words, and (3) using a simple heuristic estimation method to initialize, or replace, EM training of model parameters.",Improving {IBM} Word Alignment Model 1,"We investigate a number of simple methods for improving the word-alignment accuracy of IBM Model 1. We demonstrate reduction in alignment error rate of approximately 30% resulting from (1) giving extra weight to the probability of alignment to the null word, (2) smoothing probability estimates for rare words, and (3) using a simple heuristic estimation method to initialize, or replace, EM training of model parameters.",Improving IBM Word Alignment Model 1,"We investigate a number of simple methods for improving the word-alignment accuracy of IBM Model 1. We demonstrate reduction in alignment error rate of approximately 30% resulting from (1) giving extra weight to the probability of alignment to the null word, (2) smoothing probability estimates for rare words, and (3) using a simple heuristic estimation method to initialize, or replace, EM training of model parameters.",,"Improving IBM Word Alignment Model 1. We investigate a number of simple methods for improving the word-alignment accuracy of IBM Model 1. We demonstrate reduction in alignment error rate of approximately 30% resulting from (1) giving extra weight to the probability of alignment to the null word, (2) smoothing probability estimates for rare words, and (3) using a simple heuristic estimation method to initialize, or replace, EM training of model parameters.",2004
muaz-etal-2009-analysis,https://aclanthology.org/W09-3404,0,,,,,,,"Analysis and Development of Urdu POS Tagged Corpus. In this paper, two corpora of Urdu (with 110K and 120K words) tagged with different POS tagsets are used to train TnT and Tree taggers. Error analysis of both taggers is done to identify frequent confusions in tagging. Based on the analysis of tagging, and syntactic structure of Urdu, a more refined tagset is derived. The existing tagged corpora are tagged with the new tagset to develop a single corpus of 230K words and the TnT tagger is retrained. The results show improvement in tagging accuracy for individual corpora to 94.2% and also for the merged corpus to 91%. Implications of these results are discussed.",Analysis and Development of {U}rdu {POS} Tagged Corpus,"In this paper, two corpora of Urdu (with 110K and 120K words) tagged with different POS tagsets are used to train TnT and Tree taggers. Error analysis of both taggers is done to identify frequent confusions in tagging. Based on the analysis of tagging, and syntactic structure of Urdu, a more refined tagset is derived. The existing tagged corpora are tagged with the new tagset to develop a single corpus of 230K words and the TnT tagger is retrained. The results show improvement in tagging accuracy for individual corpora to 94.2% and also for the merged corpus to 91%. Implications of these results are discussed.",Analysis and Development of Urdu POS Tagged Corpus,"In this paper, two corpora of Urdu (with 110K and 120K words) tagged with different POS tagsets are used to train TnT and Tree taggers. Error analysis of both taggers is done to identify frequent confusions in tagging. Based on the analysis of tagging, and syntactic structure of Urdu, a more refined tagset is derived. The existing tagged corpora are tagged with the new tagset to develop a single corpus of 230K words and the TnT tagger is retrained. The results show improvement in tagging accuracy for individual corpora to 94.2% and also for the merged corpus to 91%. Implications of these results are discussed.",,"Analysis and Development of Urdu POS Tagged Corpus. In this paper, two corpora of Urdu (with 110K and 120K words) tagged with different POS tagsets are used to train TnT and Tree taggers. Error analysis of both taggers is done to identify frequent confusions in tagging. Based on the analysis of tagging, and syntactic structure of Urdu, a more refined tagset is derived. The existing tagged corpora are tagged with the new tagset to develop a single corpus of 230K words and the TnT tagger is retrained. The results show improvement in tagging accuracy for individual corpora to 94.2% and also for the merged corpus to 91%. Implications of these results are discussed.",2009
lewis-2014-getting,https://aclanthology.org/2014.tc-1.15,0,,,,,,,"Getting the best out of a mixed bag. This paper discusses the development and implementation of an approach to the combination of Rule Based Machine Translation, Statistical Machine Translation and Translation Memory technologies. The machine translation system itself draws upon translation memories and both syntactically and statistically generated phrase tables, unresolved sentences being fed to a Rules Engine. The output of the process is a TMX file containing a varying mixture of TM-generated and MT-generated sentences. The author has designed this workflow using his own language engineering tools written in Java.",Getting the best out of a mixed bag,"This paper discusses the development and implementation of an approach to the combination of Rule Based Machine Translation, Statistical Machine Translation and Translation Memory technologies. The machine translation system itself draws upon translation memories and both syntactically and statistically generated phrase tables, unresolved sentences being fed to a Rules Engine. The output of the process is a TMX file containing a varying mixture of TM-generated and MT-generated sentences. The author has designed this workflow using his own language engineering tools written in Java.",Getting the best out of a mixed bag,"This paper discusses the development and implementation of an approach to the combination of Rule Based Machine Translation, Statistical Machine Translation and Translation Memory technologies. The machine translation system itself draws upon translation memories and both syntactically and statistically generated phrase tables, unresolved sentences being fed to a Rules Engine. The output of the process is a TMX file containing a varying mixture of TM-generated and MT-generated sentences. The author has designed this workflow using his own language engineering tools written in Java.",,"Getting the best out of a mixed bag. This paper discusses the development and implementation of an approach to the combination of Rule Based Machine Translation, Statistical Machine Translation and Translation Memory technologies. The machine translation system itself draws upon translation memories and both syntactically and statistically generated phrase tables, unresolved sentences being fed to a Rules Engine. The output of the process is a TMX file containing a varying mixture of TM-generated and MT-generated sentences. The author has designed this workflow using his own language engineering tools written in Java.",2014
knoth-etal-2011-using,https://aclanthology.org/W11-3602,0,,,,,,,"Using Explicit Semantic Analysis for Cross-Lingual Link Discovery. This paper explores how to automatically generate cross-language links between resources in large document collections. The paper presents new methods for Cross-Lingual Link Discovery (CLLD) based on Explicit Semantic Analysis (ESA). The methods are applicable to any multilingual document collection. In this report, we present their comparative study on the Wikipedia corpus and provide new insights into the evaluation of link discovery systems. In particular, we measure the agreement of human annotators in linking articles in different language versions of Wikipedia, and compare it to the results achieved by the presented methods.",Using Explicit Semantic Analysis for Cross-Lingual Link Discovery,"This paper explores how to automatically generate cross-language links between resources in large document collections. The paper presents new methods for Cross-Lingual Link Discovery (CLLD) based on Explicit Semantic Analysis (ESA). The methods are applicable to any multilingual document collection. In this report, we present their comparative study on the Wikipedia corpus and provide new insights into the evaluation of link discovery systems. In particular, we measure the agreement of human annotators in linking articles in different language versions of Wikipedia, and compare it to the results achieved by the presented methods.",Using Explicit Semantic Analysis for Cross-Lingual Link Discovery,"This paper explores how to automatically generate cross-language links between resources in large document collections. The paper presents new methods for Cross-Lingual Link Discovery (CLLD) based on Explicit Semantic Analysis (ESA). The methods are applicable to any multilingual document collection. In this report, we present their comparative study on the Wikipedia corpus and provide new insights into the evaluation of link discovery systems. In particular, we measure the agreement of human annotators in linking articles in different language versions of Wikipedia, and compare it to the results achieved by the presented methods.",,"Using Explicit Semantic Analysis for Cross-Lingual Link Discovery. This paper explores how to automatically generate cross-language links between resources in large document collections. The paper presents new methods for Cross-Lingual Link Discovery (CLLD) based on Explicit Semantic Analysis (ESA). The methods are applicable to any multilingual document collection. In this report, we present their comparative study on the Wikipedia corpus and provide new insights into the evaluation of link discovery systems. In particular, we measure the agreement of human annotators in linking articles in different language versions of Wikipedia, and compare it to the results achieved by the presented methods.",2011
sil-yates-2011-extracting,https://aclanthology.org/R11-1001,0,,,,,,,"Extracting STRIPS Representations of Actions and Events. Knowledge about how the world changes over time is a vital component of commonsense knowledge for Artificial Intelligence (AI) and natural language understanding. Actions and events are fundamental components to any knowledge about changes in the state of the world: the states before and after an event differ in regular and predictable ways. We describe a novel system that tackles the problem of extracting knowledge from text about how actions and events change the world over time. We leverage standard language-processing tools, like semantic role labelers and coreference resolvers, as well as large-corpus statistics like pointwise mutual information, to identify STRIPS representations of actions and events, a type of representation commonly used in AI planning systems. In experiments on Web text, our extractor's Area under the Curve (AUC) improves by more than 31% over the closest system from the literature for identifying the preconditions and add effects of actions. In addition, we also extract significant aspects of STRIPS representations that are missing from previous work, including delete effects and arguments.",Extracting {STRIPS} Representations of Actions and Events,"Knowledge about how the world changes over time is a vital component of commonsense knowledge for Artificial Intelligence (AI) and natural language understanding. Actions and events are fundamental components to any knowledge about changes in the state of the world: the states before and after an event differ in regular and predictable ways. We describe a novel system that tackles the problem of extracting knowledge from text about how actions and events change the world over time. We leverage standard language-processing tools, like semantic role labelers and coreference resolvers, as well as large-corpus statistics like pointwise mutual information, to identify STRIPS representations of actions and events, a type of representation commonly used in AI planning systems. In experiments on Web text, our extractor's Area under the Curve (AUC) improves by more than 31% over the closest system from the literature for identifying the preconditions and add effects of actions. In addition, we also extract significant aspects of STRIPS representations that are missing from previous work, including delete effects and arguments.",Extracting STRIPS Representations of Actions and Events,"Knowledge about how the world changes over time is a vital component of commonsense knowledge for Artificial Intelligence (AI) and natural language understanding. Actions and events are fundamental components to any knowledge about changes in the state of the world: the states before and after an event differ in regular and predictable ways. We describe a novel system that tackles the problem of extracting knowledge from text about how actions and events change the world over time. We leverage standard language-processing tools, like semantic role labelers and coreference resolvers, as well as large-corpus statistics like pointwise mutual information, to identify STRIPS representations of actions and events, a type of representation commonly used in AI planning systems. In experiments on Web text, our extractor's Area under the Curve (AUC) improves by more than 31% over the closest system from the literature for identifying the preconditions and add effects of actions. 
In addition, we also extract significant aspects of STRIPS representations that are missing from previous work, including delete effects and arguments.",,"Extracting STRIPS Representations of Actions and Events. Knowledge about how the world changes over time is a vital component of commonsense knowledge for Artificial Intelligence (AI) and natural language understanding. Actions and events are fundamental components to any knowledge about changes in the state of the world: the states before and after an event differ in regular and predictable ways. We describe a novel system that tackles the problem of extracting knowledge from text about how actions and events change the world over time. We leverage standard language-processing tools, like semantic role labelers and coreference resolvers, as well as large-corpus statistics like pointwise mutual information, to identify STRIPS representations of actions and events, a type of representation commonly used in AI planning systems. In experiments on Web text, our extractor's Area under the Curve (AUC) improves by more than 31% over the closest system from the literature for identifying the preconditions and add effects of actions. In addition, we also extract significant aspects of STRIPS representations that are missing from previous work, including delete effects and arguments.",2011
dinkar-etal-2020-importance,https://aclanthology.org/2020.emnlp-main.641,0,,,,,,,"The importance of fillers for text representations of speech transcripts. While being an essential component of spoken language, fillers (e.g. ""um"" or ""uh"") often remain overlooked in Spoken Language Understanding (SLU) tasks. We explore the possibility of representing them with deep contextualised embeddings, showing improvements on modelling spoken language and two downstream tasks-predicting a speaker's stance and expressed confidence.",The importance of fillers for text representations of speech transcripts,"While being an essential component of spoken language, fillers (e.g. ""um"" or ""uh"") often remain overlooked in Spoken Language Understanding (SLU) tasks. We explore the possibility of representing them with deep contextualised embeddings, showing improvements on modelling spoken language and two downstream tasks-predicting a speaker's stance and expressed confidence.",The importance of fillers for text representations of speech transcripts,"While being an essential component of spoken language, fillers (e.g. ""um"" or ""uh"") often remain overlooked in Spoken Language Understanding (SLU) tasks. We explore the possibility of representing them with deep contextualised embeddings, showing improvements on modelling spoken language and two downstream tasks-predicting a speaker's stance and expressed confidence.",This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 765955 and the French National Research Agency's grant ANR-17-MAOI.,"The importance of fillers for text representations of speech transcripts. While being an essential component of spoken language, fillers (e.g. ""um"" or ""uh"") often remain overlooked in Spoken Language Understanding (SLU) tasks. We explore the possibility of representing them with deep contextualised embeddings, showing improvements on modelling spoken language and two downstream tasks-predicting a speaker's stance and expressed confidence.",2020
jing-etal-2019-show,https://aclanthology.org/P19-1657,1,,,,health,,,"Show, Describe and Conclude: On Exploiting the Structure Information of Chest X-ray Reports. Chest X-Ray (CXR) images are commonly used for clinical screening and diagnosis. Automatically writing reports for these images can considerably lighten the workload of radiologists for summarizing descriptive findings and conclusive impressions. The complex structures between and within sections of the reports pose a great challenge to the automatic report generation. Specifically, the section Impression is a diagnostic summarization over the section Findings; and the appearance of normality dominates each section over that of abnormality. Existing studies rarely explore and consider this fundamental structure information. In this work, we propose a novel framework which exploits the structure information between and within report sections for generating CXR imaging reports. First, we propose a two-stage strategy that explicitly models the relationship between Findings and Impression. Second, we design a novel cooperative multi-agent system that implicitly captures the imbalanced distribution between abnormality and normality. Experiments on two CXR report datasets show that our method achieves state-of-the-art performance in terms of various evaluation metrics. Our results expose that the proposed approach is able to generate high-quality medical reports through integrating the structure information. Findings: The cardiac silhouette is enlarged and has a globular appearance. Mild bibasilar dependent atelectasis. No pneumothorax or large pleural effusion. No acute bone abnormality. Impression: Cardiomegaly with globular appearance of the cardiac silhouette. Considerations would include pericardial effusion or dilated cardiomyopathy.","Show, Describe and Conclude: On Exploiting the Structure Information of Chest {X}-ray Reports","Chest X-Ray (CXR) images are commonly used for clinical screening and diagnosis. Automatically writing reports for these images can considerably lighten the workload of radiologists for summarizing descriptive findings and conclusive impressions. The complex structures between and within sections of the reports pose a great challenge to the automatic report generation. Specifically, the section Impression is a diagnostic summarization over the section Findings; and the appearance of normality dominates each section over that of abnormality. Existing studies rarely explore and consider this fundamental structure information. In this work, we propose a novel framework which exploits the structure information between and within report sections for generating CXR imaging reports. First, we propose a two-stage strategy that explicitly models the relationship between Findings and Impression. Second, we design a novel cooperative multi-agent system that implicitly captures the imbalanced distribution between abnormality and normality. Experiments on two CXR report datasets show that our method achieves state-of-the-art performance in terms of various evaluation metrics. Our results expose that the proposed approach is able to generate high-quality medical reports through integrating the structure information. Findings: The cardiac silhouette is enlarged and has a globular appearance. Mild bibasilar dependent atelectasis. No pneumothorax or large pleural effusion. No acute bone abnormality. Impression: Cardiomegaly with globular appearance of the cardiac silhouette. 
Considerations would include pericardial effusion or dilated cardiomyopathy.","Show, Describe and Conclude: On Exploiting the Structure Information of Chest X-ray Reports","Chest X-Ray (CXR) images are commonly used for clinical screening and diagnosis. Automatically writing reports for these images can considerably lighten the workload of radiologists for summarizing descriptive findings and conclusive impressions. The complex structures between and within sections of the reports pose a great challenge to the automatic report generation. Specifically, the section Impression is a diagnostic summarization over the section Findings; and the appearance of normality dominates each section over that of abnormality. Existing studies rarely explore and consider this fundamental structure information. In this work, we propose a novel framework which exploits the structure information between and within report sections for generating CXR imaging reports. First, we propose a two-stage strategy that explicitly models the relationship between Findings and Impression. Second, we design a novel cooperative multi-agent system that implicitly captures the imbalanced distribution between abnormality and normality. Experiments on two CXR report datasets show that our method achieves state-of-the-art performance in terms of various evaluation metrics. Our results expose that the proposed approach is able to generate high-quality medical reports through integrating the structure information. Findings: The cardiac silhouette is enlarged and has a globular appearance. Mild bibasilar dependent atelectasis. No pneumothorax or large pleural effusion. No acute bone abnormality. Impression: Cardiomegaly with globular appearance of the cardiac silhouette. Considerations would include pericardial effusion or dilated cardiomyopathy.",,"Show, Describe and Conclude: On Exploiting the Structure Information of Chest X-ray Reports. Chest X-Ray (CXR) images are commonly used for clinical screening and diagnosis. Automatically writing reports for these images can considerably lighten the workload of radiologists for summarizing descriptive findings and conclusive impressions. The complex structures between and within sections of the reports pose a great challenge to the automatic report generation. Specifically, the section Impression is a diagnostic summarization over the section Findings; and the appearance of normality dominates each section over that of abnormality. Existing studies rarely explore and consider this fundamental structure information. In this work, we propose a novel framework which exploits the structure information between and within report sections for generating CXR imaging reports. First, we propose a two-stage strategy that explicitly models the relationship between Findings and Impression. Second, we design a novel cooperative multi-agent system that implicitly captures the imbalanced distribution between abnormality and normality. Experiments on two CXR report datasets show that our method achieves state-of-the-art performance in terms of various evaluation metrics. Our results expose that the proposed approach is able to generate high-quality medical reports through integrating the structure information. Findings: The cardiac silhouette is enlarged and has a globular appearance. Mild bibasilar dependent atelectasis. No pneumothorax or large pleural effusion. No acute bone abnormality. Impression: Cardiomegaly with globular appearance of the cardiac silhouette. 
Considerations would include pericardial effusion or dilated cardiomyopathy.",2019
imperial-ong-2021-microscope,https://aclanthology.org/2021.paclic-1.1,0,,,,,,,"Under the Microscope: Interpreting Readability Assessment Models for Filipino. Readability assessment is the process of identifying the level of ease or difficulty of a certain piece of text for its intended audience. Approaches have evolved from the use of arithmetic formulas to more complex pattern-recognizing models trained using machine learning algorithms. While using these approaches provide competitive results, limited work is done on analyzing how linguistic variables affect model inference quantitatively. In this work, we dissect machine learning-based readability assessment models in Filipino by performing global and local model interpretation to understand the contributions of varying linguistic features and discuss its implications in the context of the Filipino language. Results show that using a model trained with top features from global interpretation obtained higher performance than the ones using features selected by Spearman correlation. Likewise, we also empirically observed local feature weight boundaries for discriminating reading difficulty at an extremely fine-grained level and their corresponding effects if values are perturbed.",Under the Microscope: Interpreting Readability Assessment Models for {F}ilipino,"Readability assessment is the process of identifying the level of ease or difficulty of a certain piece of text for its intended audience. Approaches have evolved from the use of arithmetic formulas to more complex pattern-recognizing models trained using machine learning algorithms. While using these approaches provide competitive results, limited work is done on analyzing how linguistic variables affect model inference quantitatively. In this work, we dissect machine learning-based readability assessment models in Filipino by performing global and local model interpretation to understand the contributions of varying linguistic features and discuss its implications in the context of the Filipino language. Results show that using a model trained with top features from global interpretation obtained higher performance than the ones using features selected by Spearman correlation. Likewise, we also empirically observed local feature weight boundaries for discriminating reading difficulty at an extremely fine-grained level and their corresponding effects if values are perturbed.",Under the Microscope: Interpreting Readability Assessment Models for Filipino,"Readability assessment is the process of identifying the level of ease or difficulty of a certain piece of text for its intended audience. Approaches have evolved from the use of arithmetic formulas to more complex pattern-recognizing models trained using machine learning algorithms. While using these approaches provide competitive results, limited work is done on analyzing how linguistic variables affect model inference quantitatively. In this work, we dissect machine learning-based readability assessment models in Filipino by performing global and local model interpretation to understand the contributions of varying linguistic features and discuss its implications in the context of the Filipino language. Results show that using a model trained with top features from global interpretation obtained higher performance than the ones using features selected by Spearman correlation. 
Likewise, we also empirically observed local feature weight boundaries for discriminating reading difficulty at an extremely fine-grained level and their corresponding effects if values are perturbed.",Acknowledgment The authors would like to thank the anonymous reviewers for their valuable feedback and to Dr. Ani Almario of Adarna House for allowing us to use their children's book dataset for this study. This work is also supported by the DOST National Research Council of the Philippines (NRCP).,"Under the Microscope: Interpreting Readability Assessment Models for Filipino. Readability assessment is the process of identifying the level of ease or difficulty of a certain piece of text for its intended audience. Approaches have evolved from the use of arithmetic formulas to more complex pattern-recognizing models trained using machine learning algorithms. While using these approaches provide competitive results, limited work is done on analyzing how linguistic variables affect model inference quantitatively. In this work, we dissect machine learning-based readability assessment models in Filipino by performing global and local model interpretation to understand the contributions of varying linguistic features and discuss its implications in the context of the Filipino language. Results show that using a model trained with top features from global interpretation obtained higher performance than the ones using features selected by Spearman correlation. Likewise, we also empirically observed local feature weight boundaries for discriminating reading difficulty at an extremely fine-grained level and their corresponding effects if values are perturbed.",2021
rutherford-thanyawong-2019-written,https://aclanthology.org/W19-4710,0,,,,,,,"Written on Leaves or in Stones?: Computational Evidence for the Era of Authorship of Old Thai Prose. We aim to provide computational evidence for the era of authorship of two important old Thai texts: Traiphumikatha and Pumratchatham. The era of authorship of these two books is still an ongoing debate among Thai literature scholars. Analysis of old Thai texts present a challenge for standard natural language processing techniques, due to the lack of corpora necessary for building old Thai word and syllable segmentation. We propose an accurate and interpretable model to classify each segment as one of the three eras of authorship (Sukhothai, Ayuddhya, or Rattanakosin) without sophisticated linguistic preprocessing. Contrary to previous hypotheses, our model suggests that both books were written during the Sukhothai era. Moreover, the second half of the Pumratchtham is uncharacteristic of the Sukhothai era, which may have confounded literary scholars in the past. Further, our model reveals that the most indicative linguistic changes stem from unidirectional grammaticalized words and polyfunctional words, which show up as most dominant features in the model.",Written on Leaves or in Stones?: Computational Evidence for the Era of Authorship of Old {T}hai Prose,"We aim to provide computational evidence for the era of authorship of two important old Thai texts: Traiphumikatha and Pumratchatham. The era of authorship of these two books is still an ongoing debate among Thai literature scholars. Analysis of old Thai texts present a challenge for standard natural language processing techniques, due to the lack of corpora necessary for building old Thai word and syllable segmentation. We propose an accurate and interpretable model to classify each segment as one of the three eras of authorship (Sukhothai, Ayuddhya, or Rattanakosin) without sophisticated linguistic preprocessing. Contrary to previous hypotheses, our model suggests that both books were written during the Sukhothai era. Moreover, the second half of the Pumratchtham is uncharacteristic of the Sukhothai era, which may have confounded literary scholars in the past. Further, our model reveals that the most indicative linguistic changes stem from unidirectional grammaticalized words and polyfunctional words, which show up as most dominant features in the model.",Written on Leaves or in Stones?: Computational Evidence for the Era of Authorship of Old Thai Prose,"We aim to provide computational evidence for the era of authorship of two important old Thai texts: Traiphumikatha and Pumratchatham. The era of authorship of these two books is still an ongoing debate among Thai literature scholars. Analysis of old Thai texts present a challenge for standard natural language processing techniques, due to the lack of corpora necessary for building old Thai word and syllable segmentation. We propose an accurate and interpretable model to classify each segment as one of the three eras of authorship (Sukhothai, Ayuddhya, or Rattanakosin) without sophisticated linguistic preprocessing. Contrary to previous hypotheses, our model suggests that both books were written during the Sukhothai era. Moreover, the second half of the Pumratchtham is uncharacteristic of the Sukhothai era, which may have confounded literary scholars in the past. 
Further, our model reveals that the most indicative linguistic changes stem from unidirectional grammaticalized words and polyfunctional words, which show up as most dominant features in the model.",This research is funded by Grants for Development of New Faculty Staff at Chulalongkorn University.,"Written on Leaves or in Stones?: Computational Evidence for the Era of Authorship of Old Thai Prose. We aim to provide computational evidence for the era of authorship of two important old Thai texts: Traiphumikatha and Pumratchatham. The era of authorship of these two books is still an ongoing debate among Thai literature scholars. Analysis of old Thai texts present a challenge for standard natural language processing techniques, due to the lack of corpora necessary for building old Thai word and syllable segmentation. We propose an accurate and interpretable model to classify each segment as one of the three eras of authorship (Sukhothai, Ayuddhya, or Rattanakosin) without sophisticated linguistic preprocessing. Contrary to previous hypotheses, our model suggests that both books were written during the Sukhothai era. Moreover, the second half of the Pumratchtham is uncharacteristic of the Sukhothai era, which may have confounded literary scholars in the past. Further, our model reveals that the most indicative linguistic changes stem from unidirectional grammaticalized words and polyfunctional words, which show up as most dominant features in the model.",2019
garrido-alenda-etal-2002-incremental,https://aclanthology.org/2002.tmi-papers.7,0,,,,,,,"Incremental construction and maintenance of morphological analysers based on augmented letter transducers. We define deterministic augmented letter transducers (DALTs), a class of finite-state transducers which provide an efficient way of implementing morphological analysers which tokenize their input (i.e., divide texts in tokens or words) as they analyse it, and show how these morphological analysers may be maintained (i.e., how surface form-lexical form transductions may be added or removed from them) while keeping them minimal; efficient algorithms for both operations are given in detail. The algorithms may also be applied to the incremental construction and maintenance of other lexical modules in a machine translation system such as the lexical transfer module or the morphological generator.",Incremental construction and maintenance of morphological analysers based on augmented letter transducers,"We define deterministic augmented letter transducers (DALTs), a class of finite-state transducers which provide an efficient way of implementing morphological analysers which tokenize their input (i.e., divide texts in tokens or words) as they analyse it, and show how these morphological analysers may be maintained (i.e., how surface form-lexical form transductions may be added or removed from them) while keeping them minimal; efficient algorithms for both operations are given in detail. The algorithms may also be applied to the incremental construction and maintenance of other lexical modules in a machine translation system such as the lexical transfer module or the morphological generator.",Incremental construction and maintenance of morphological analysers based on augmented letter transducers,"We define deterministic augmented letter transducers (DALTs), a class of finite-state transducers which provide an efficient way of implementing morphological analysers which tokenize their input (i.e., divide texts in tokens or words) as they analyse it, and show how these morphological analysers may be maintained (i.e., how surface form-lexical form transductions may be added or removed from them) while keeping them minimal; efficient algorithms for both operations are given in detail. The algorithms may also be applied to the incremental construction and maintenance of other lexical modules in a machine translation system such as the lexical transfer module or the morphological generator.",,"Incremental construction and maintenance of morphological analysers based on augmented letter transducers. We define deterministic augmented letter transducers (DALTs), a class of finite-state transducers which provide an efficient way of implementing morphological analysers which tokenize their input (i.e., divide texts in tokens or words) as they analyse it, and show how these morphological analysers may be maintained (i.e., how surface form-lexical form transductions may be added or removed from them) while keeping them minimal; efficient algorithms for both operations are given in detail. The algorithms may also be applied to the incremental construction and maintenance of other lexical modules in a machine translation system such as the lexical transfer module or the morphological generator.",2002
hamon-etal-1998-step,https://aclanthology.org/C98-1079,1,,,,industry_innovation_infrastructure,,,"A step towards the detection of semantic variants of terms in technical documents. This paper reports the results of a preliminary experiment on the detection of semantic variants of terms in a French technical document. The general goal of our work is to help the structuration of terminologies. Two kinds of semantic variants can be found in traditional terminologies : strict synonymy links and fuzzier relations like see-also. We have designed three rules which exploit general dictionary information to infer synonymy relations between complex candidate terms. The results have been examined by a human terminologist. The expert has judged that half of the overall pairs of terms are relevant for the semantic variation. He validated an important part of the detected links as synonymy. Moreover, it appeared that numerous errors are due to few misinterpreted links: they could be eliminated by few exception rules.",A step towards the detection of semantic variants of terms in technical documents,"This paper reports the results of a preliminary experiment on the detection of semantic variants of terms in a French technical document. The general goal of our work is to help the structuration of terminologies. Two kinds of semantic variants can be found in traditional terminologies : strict synonymy links and fuzzier relations like see-also. We have designed three rules which exploit general dictionary information to infer synonymy relations between complex candidate terms. The results have been examined by a human terminologist. The expert has judged that half of the overall pairs of terms are relevant for the semantic variation. He validated an important part of the detected links as synonymy. Moreover, it appeared that numerous errors are due to few misinterpreted links: they could be eliminated by few exception rules.",A step towards the detection of semantic variants of terms in technical documents,"This paper reports the results of a preliminary experiment on the detection of semantic variants of terms in a French technical document. The general goal of our work is to help the structuration of terminologies. Two kinds of semantic variants can be found in traditional terminologies : strict synonymy links and fuzzier relations like see-also. We have designed three rules which exploit general dictionary information to infer synonymy relations between complex candidate terms. The results have been examined by a human terminologist. The expert has judged that half of the overall pairs of terms are relevant for the semantic variation. He validated an important part of the detected links as synonymy. Moreover, it appeared that numerous errors are due to few misinterpreted links: they could be eliminated by few exception rules.","This work is the result of a collaboration with the Direction des Etudes et Recherche (DER) d'Electricit6 de France (EDF). We thank Marie-Luce Picard from EDF and Beno[t Habert from ENS Fontenay-St Cloud for their help, Didier Bourigault and Jean-Yves Hamon from the Institut de la Langue FranQaise (INaLF) for the dictionary and Henry Boecon-Gibod for the validation of the results.","A step towards the detection of semantic variants of terms in technical documents. This paper reports the results of a preliminary experiment on the detection of semantic variants of terms in a French technical document. The general goal of our work is to help the structuration of terminologies. 
Two kinds of semantic variants can be found in traditional terminologies : strict synonymy links and fuzzier relations like see-also. We have designed three rules which exploit general dictionary information to infer synonymy relations between complex candidate terms. The results have been examined by a human terminologist. The expert has judged that half of the overall pairs of terms are relevant for the semantic variation. He validated an important part of the detected links as synonymy. Moreover, it appeared that numerous errors are due to few misinterpreted links: they could be eliminated by few exception rules.",1998
mehri-eskenazi-2020-usr,https://aclanthology.org/2020.acl-main.64,0,,,,,,,"USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation. The lack of meaningful automatic evaluation metrics for dialog has impeded open-domain dialog research. Standard language generation metrics have been shown to be ineffective for evaluating dialog models. To this end, this paper presents USR, an UnSupervised and Reference-free evaluation metric for dialog. USR is a reference-free metric that trains unsupervised models to measure several desirable qualities of dialog. USR is shown to strongly correlate with human judgment on both Topical-Chat (turn-level: 0.42, system-level: 1.0) and PersonaChat (turn-level: 0.48 and system-level: 1.0). USR additionally produces interpretable measures for several desirable properties of dialog.",{USR}: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation,"The lack of meaningful automatic evaluation metrics for dialog has impeded open-domain dialog research. Standard language generation metrics have been shown to be ineffective for evaluating dialog models. To this end, this paper presents USR, an UnSupervised and Reference-free evaluation metric for dialog. USR is a reference-free metric that trains unsupervised models to measure several desirable qualities of dialog. USR is shown to strongly correlate with human judgment on both Topical-Chat (turn-level: 0.42, system-level: 1.0) and PersonaChat (turn-level: 0.48 and system-level: 1.0). USR additionally produces interpretable measures for several desirable properties of dialog.",USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation,"The lack of meaningful automatic evaluation metrics for dialog has impeded open-domain dialog research. Standard language generation metrics have been shown to be ineffective for evaluating dialog models. To this end, this paper presents USR, an UnSupervised and Reference-free evaluation metric for dialog. USR is a reference-free metric that trains unsupervised models to measure several desirable qualities of dialog. USR is shown to strongly correlate with human judgment on both Topical-Chat (turn-level: 0.42, system-level: 1.0) and PersonaChat (turn-level: 0.48 and system-level: 1.0). USR additionally produces interpretable measures for several desirable properties of dialog.","We thank the following individuals for their help with annotation: Evgeniia Razumovskaia, Felix Labelle, Mckenna Brown and Yulan Feng.","USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation. The lack of meaningful automatic evaluation metrics for dialog has impeded open-domain dialog research. Standard language generation metrics have been shown to be ineffective for evaluating dialog models. To this end, this paper presents USR, an UnSupervised and Reference-free evaluation metric for dialog. USR is a reference-free metric that trains unsupervised models to measure several desirable qualities of dialog. USR is shown to strongly correlate with human judgment on both Topical-Chat (turn-level: 0.42, system-level: 1.0) and PersonaChat (turn-level: 0.48 and system-level: 1.0). USR additionally produces interpretable measures for several desirable properties of dialog.",2020
partanen-rueter-2019-survey,https://aclanthology.org/W19-8009,0,,,,,,,"Survey of Uralic Universal Dependencies development. This paper attempts to evaluate some of the systematic differences in Uralic Universal Dependencies treebanks from a perspective that would help to introduce reasonable improvements in treebank annotation consistency within this language family. The study finds that the coverage of Uralic languages in the project is already relatively high, and the majority of typically Uralic features are already present and can be discussed on the basis of existing treebanks. Some of the idiosyncrasies found in individual treebanks stem from language-internal grammar traditions, and could be a target for harmonization in later phases.",Survey of Uralic {U}niversal {D}ependencies development,"This paper attempts to evaluate some of the systematic differences in Uralic Universal Dependencies treebanks from a perspective that would help to introduce reasonable improvements in treebank annotation consistency within this language family. The study finds that the coverage of Uralic languages in the project is already relatively high, and the majority of typically Uralic features are already present and can be discussed on the basis of existing treebanks. Some of the idiosyncrasies found in individual treebanks stem from language-internal grammar traditions, and could be a target for harmonization in later phases.",Survey of Uralic Universal Dependencies development,"This paper attempts to evaluate some of the systematic differences in Uralic Universal Dependencies treebanks from a perspective that would help to introduce reasonable improvements in treebank annotation consistency within this language family. The study finds that the coverage of Uralic languages in the project is already relatively high, and the majority of typically Uralic features are already present and can be discussed on the basis of existing treebanks. Some of the idiosyncrasies found in individual treebanks stem from language-internal grammar traditions, and could be a target for harmonization in later phases.",,"Survey of Uralic Universal Dependencies development. This paper attempts to evaluate some of the systematic differences in Uralic Universal Dependencies treebanks from a perspective that would help to introduce reasonable improvements in treebank annotation consistency within this language family. The study finds that the coverage of Uralic languages in the project is already relatively high, and the majority of typically Uralic features are already present and can be discussed on the basis of existing treebanks. Some of the idiosyncrasies found in individual treebanks stem from language-internal grammar traditions, and could be a target for harmonization in later phases.",2019
shinnou-sasaki-2008-spectral,http://www.lrec-conf.org/proceedings/lrec2008/pdf/62_paper.pdf,0,,,,,,,"Spectral Clustering for a Large Data Set by Reducing the Similarity Matrix Size. Spectral clustering is a powerful clustering method for document data set. However, spectral clustering needs to solve an eigenvalue problem of the matrix converted from the similarity matrix corresponding to the data set. Therefore, it is not practical to use spectral clustering for a large data set. To overcome this problem, we propose the method to reduce the similarity matrix size. First, using k-means, we obtain a clustering result for the given data set. From each cluster, we pick up some data, which are near to the central of the cluster. We take these data as one data. We call these data set as ""committee."" Data except for committees remain one data. For these data, we construct the similarity matrix. Definitely, the size of this similarity matrix is reduced so much that we can perform spectral clustering using the reduced similarity matrix",Spectral Clustering for a Large Data Set by Reducing the Similarity Matrix Size,"Spectral clustering is a powerful clustering method for document data set. However, spectral clustering needs to solve an eigenvalue problem of the matrix converted from the similarity matrix corresponding to the data set. Therefore, it is not practical to use spectral clustering for a large data set. To overcome this problem, we propose the method to reduce the similarity matrix size. First, using k-means, we obtain a clustering result for the given data set. From each cluster, we pick up some data, which are near to the central of the cluster. We take these data as one data. We call these data set as ""committee."" Data except for committees remain one data. For these data, we construct the similarity matrix. Definitely, the size of this similarity matrix is reduced so much that we can perform spectral clustering using the reduced similarity matrix",Spectral Clustering for a Large Data Set by Reducing the Similarity Matrix Size,"Spectral clustering is a powerful clustering method for document data set. However, spectral clustering needs to solve an eigenvalue problem of the matrix converted from the similarity matrix corresponding to the data set. Therefore, it is not practical to use spectral clustering for a large data set. To overcome this problem, we propose the method to reduce the similarity matrix size. First, using k-means, we obtain a clustering result for the given data set. From each cluster, we pick up some data, which are near to the central of the cluster. We take these data as one data. We call these data set as ""committee."" Data except for committees remain one data. For these data, we construct the similarity matrix. Definitely, the size of this similarity matrix is reduced so much that we can perform spectral clustering using the reduced similarity matrix","This research was partially supported by the Ministry of Education, Science, Sports and Culture, Grant-in-Aid for Scientific Research on Priority Areas, Japanese Corpus , 19011001, 2007. ","Spectral Clustering for a Large Data Set by Reducing the Similarity Matrix Size. Spectral clustering is a powerful clustering method for document data set. However, spectral clustering needs to solve an eigenvalue problem of the matrix converted from the similarity matrix corresponding to the data set. Therefore, it is not practical to use spectral clustering for a large data set. 
To overcome this problem, we propose the method to reduce the similarity matrix size. First, using k-means, we obtain a clustering result for the given data set. From each cluster, we pick up some data, which are near to the central of the cluster. We take these data as one data. We call these data set as ""committee."" Data except for committees remain one data. For these data, we construct the similarity matrix. Definitely, the size of this similarity matrix is reduced so much that we can perform spectral clustering using the reduced similarity matrix.",2008
bartolini-etal-2004-semantic,http://www.lrec-conf.org/proceedings/lrec2004/pdf/709.pdf,1,,,,peace_justice_and_strong_institutions,,,"Semantic Mark-up of Italian Legal Texts Through NLP-based Techniques. In this paper we illustrate an approach to information extraction from legal texts using SALEM. SALEM is an NLP architecture for semantic annotation and indexing of Italian legislative texts, developed by ILC in close collaboration with ITTIG-CNR, Florence. Results of SALEM performance on a test sample of about 500 Italian law paragraphs are provided.",Semantic Mark-up of {I}talian Legal Texts Through {NLP}-based Techniques,"In this paper we illustrate an approach to information extraction from legal texts using SALEM. SALEM is an NLP architecture for semantic annotation and indexing of Italian legislative texts, developed by ILC in close collaboration with ITTIG-CNR, Florence. Results of SALEM performance on a test sample of about 500 Italian law paragraphs are provided.",Semantic Mark-up of Italian Legal Texts Through NLP-based Techniques,"In this paper we illustrate an approach to information extraction from legal texts using SALEM. SALEM is an NLP architecture for semantic annotation and indexing of Italian legislative texts, developed by ILC in close collaboration with ITTIG-CNR, Florence. Results of SALEM performance on a test sample of about 500 Italian law paragraphs are provided.",,"Semantic Mark-up of Italian Legal Texts Through NLP-based Techniques. In this paper we illustrate an approach to information extraction from legal texts using SALEM. SALEM is an NLP architecture for semantic annotation and indexing of Italian legislative texts, developed by ILC in close collaboration with ITTIG-CNR, Florence. Results of SALEM performance on a test sample of about 500 Italian law paragraphs are provided.",2004
huo-etal-2019-graph,https://aclanthology.org/D19-5319,0,,,,,,,"Graph Enhanced Cross-Domain Text-to-SQL Generation. Semantic parsing is a fundamental problem in natural language understanding, as it involves the mapping of natural language to structured forms such as executable queries or logic-like knowledge representations. Existing deep learning approaches for semantic parsing have shown promise on a variety of benchmark data sets, particularly on textto-SQL parsing. However, most text-to-SQL parsers do not generalize to unseen data sets in different domains. In this paper, we propose a new cross-domain learning scheme to perform text-to-SQL translation and demonstrate its use on Spider, a large-scale cross-domain text-to-SQL data set. We improve upon a state-of-the-art Spider model, SyntaxSQLNet, by constructing a graph of column names for all databases and using graph neural networks to compute their embeddings. The resulting embeddings offer better cross-domain representations and SQL queries, as evidenced by substantial improvement on the Spider data set compared to SyntaxSQLNet.",Graph Enhanced Cross-Domain Text-to-{SQL} Generation,"Semantic parsing is a fundamental problem in natural language understanding, as it involves the mapping of natural language to structured forms such as executable queries or logic-like knowledge representations. Existing deep learning approaches for semantic parsing have shown promise on a variety of benchmark data sets, particularly on textto-SQL parsing. However, most text-to-SQL parsers do not generalize to unseen data sets in different domains. In this paper, we propose a new cross-domain learning scheme to perform text-to-SQL translation and demonstrate its use on Spider, a large-scale cross-domain text-to-SQL data set. We improve upon a state-of-the-art Spider model, SyntaxSQLNet, by constructing a graph of column names for all databases and using graph neural networks to compute their embeddings. The resulting embeddings offer better cross-domain representations and SQL queries, as evidenced by substantial improvement on the Spider data set compared to SyntaxSQLNet.",Graph Enhanced Cross-Domain Text-to-SQL Generation,"Semantic parsing is a fundamental problem in natural language understanding, as it involves the mapping of natural language to structured forms such as executable queries or logic-like knowledge representations. Existing deep learning approaches for semantic parsing have shown promise on a variety of benchmark data sets, particularly on textto-SQL parsing. However, most text-to-SQL parsers do not generalize to unseen data sets in different domains. In this paper, we propose a new cross-domain learning scheme to perform text-to-SQL translation and demonstrate its use on Spider, a large-scale cross-domain text-to-SQL data set. We improve upon a state-of-the-art Spider model, SyntaxSQLNet, by constructing a graph of column names for all databases and using graph neural networks to compute their embeddings. The resulting embeddings offer better cross-domain representations and SQL queries, as evidenced by substantial improvement on the Spider data set compared to SyntaxSQLNet.",,"Graph Enhanced Cross-Domain Text-to-SQL Generation. Semantic parsing is a fundamental problem in natural language understanding, as it involves the mapping of natural language to structured forms such as executable queries or logic-like knowledge representations. 
Existing deep learning approaches for semantic parsing have shown promise on a variety of benchmark data sets, particularly on text-to-SQL parsing. However, most text-to-SQL parsers do not generalize to unseen data sets in different domains. In this paper, we propose a new cross-domain learning scheme to perform text-to-SQL translation and demonstrate its use on Spider, a large-scale cross-domain text-to-SQL data set. We improve upon a state-of-the-art Spider model, SyntaxSQLNet, by constructing a graph of column names for all databases and using graph neural networks to compute their embeddings. The resulting embeddings offer better cross-domain representations and SQL queries, as evidenced by substantial improvement on the Spider data set compared to SyntaxSQLNet.",2019
petiwala-etal-2012-textbook,https://aclanthology.org/W12-5806,1,,,,education,,,Textbook Construction from Lecture Transcripts. ,Textbook Construction from Lecture Transcripts,,Textbook Construction from Lecture Transcripts,,,Textbook Construction from Lecture Transcripts. ,2012
huang-etal-1997-segmentation,https://aclanthology.org/O97-4003,0,,,,,,,"Segmentation Standard for Chinese Natural Language Processing. This paper proposes a segmentation standard for Chinese natural language processing. The standard is proposed to achieve linguistic felicity, computational feasibility, and data uniformity. Linguistic felicity is maintained by a definition of segmentation unit that is equivalent to the theoretical definition of word, as well as a set of segmentation principles that are equivalent to a functional definition of a word. Computational feasibility is ensured by the fact that the above functional definitions are procedural in nature and can be converted to segmentation algorithms as well as by the implementable heuristic guidelines which deal with specific linguistic categories. Data uniformity is achieved by stratification of the standard itself and by defining a standard lexicon as part of the standard.",Segmentation Standard for {C}hinese Natural Language Processing,"This paper proposes a segmentation standard for Chinese natural language processing. The standard is proposed to achieve linguistic felicity, computational feasibility, and data uniformity. Linguistic felicity is maintained by a definition of segmentation unit that is equivalent to the theoretical definition of word, as well as a set of segmentation principles that are equivalent to a functional definition of a word. Computational feasibility is ensured by the fact that the above functional definitions are procedural in nature and can be converted to segmentation algorithms as well as by the implementable heuristic guidelines which deal with specific linguistic categories. Data uniformity is achieved by stratification of the standard itself and by defining a standard lexicon as part of the standard.",Segmentation Standard for Chinese Natural Language Processing,"This paper proposes a segmentation standard for Chinese natural language processing. The standard is proposed to achieve linguistic felicity, computational feasibility, and data uniformity. Linguistic felicity is maintained by a definition of segmentation unit that is equivalent to the theoretical definition of word, as well as a set of segmentation principles that are equivalent to a functional definition of a word. Computational feasibility is ensured by the fact that the above functional definitions are procedural in nature and can be converted to segmentation algorithms as well as by the implementable heuristic guidelines which deal with specific linguistic categories. Data uniformity is achieved by stratification of the standard itself and by defining a standard lexicon as part of the standard.","Research reported in this paper is partially supported by the Standardization Bureau of Taiwan, ROC. The authors are indebted to the following taskforce committee members for their invaluable contribution to the project: Claire H.H. Chang, One-Soon Her, Shuan-fan Huang, James H.Y. Tai, Charles T.C Tang, Jyun-shen Chang, Hsin-hsi Chen, Hsi-jiann Lee, Jhing-fa Wang, Chao-Huang Chang, Chiu-tang Chen, Una Y.L. Hsu, Jyn-jie Kuo, Hui-chun Ma, and Lin-Mei Wei. We would like to thank the three CLCLP reviewers for their constructive comments. We are also indebted to our colleagues at CKIP, Academia Sinica for their unfailing support as well as helpful suggestions. Any remaining errors are, of course, ours.","Segmentation Standard for Chinese Natural Language Processing. 
This paper proposes a segmentation standard for Chinese natural language processing. The standard is proposed to achieve linguistic felicity, computational feasibility, and data uniformity. Linguistic felicity is maintained by a definition of segmentation unit that is equivalent to the theoretical definition of word, as well as a set of segmentation principles that are equivalent to a functional definition of a word. Computational feasibility is ensured by the fact that the above functional definitions are procedural in nature and can be converted to segmentation algorithms as well as by the implementable heuristic guidelines which deal with specific linguistic categories. Data uniformity is achieved by stratification of the standard itself and by defining a standard lexicon as part of the standard.",1997
xia-etal-2022-structured,https://aclanthology.org/2022.acl-long.107,0,,,,,,,"Structured Pruning Learns Compact and Accurate Models. The growing size of neural language models has led to increased attention in model compression. The two predominant approaches are pruning, which gradually removes weights from a pre-trained model, and distillation, which trains a smaller compact model to match a larger one. Pruning methods can significantly reduce the model size but hardly achieve large speedups as distillation. However, distillation methods require large amounts of unlabeled data and are expensive to train. In this work, we propose a task-specific structured pruning method CoFi 1 (Coarse-and Fine-grained Pruning), which delivers highly parallelizable subnetworks and matches the distillation methods in both accuracy and latency, without resorting to any unlabeled data. Our key insight is to jointly prune coarse-grained (e.g., layers) and fine-grained (e.g., heads and hidden units) modules, which controls the pruning decision of each parameter with masks of different granularity. We also devise a layerwise distillation strategy to transfer knowledge from unpruned to pruned models during optimization. Our experiments on GLUE and SQuAD datasets show that CoFi yields models with over 10× speedups with a small accuracy drop, showing its effectiveness and efficiency compared to previous pruning and distillation approaches. 2",Structured Pruning Learns Compact and Accurate Models,"The growing size of neural language models has led to increased attention in model compression. The two predominant approaches are pruning, which gradually removes weights from a pre-trained model, and distillation, which trains a smaller compact model to match a larger one. Pruning methods can significantly reduce the model size but hardly achieve large speedups as distillation. However, distillation methods require large amounts of unlabeled data and are expensive to train. In this work, we propose a task-specific structured pruning method CoFi 1 (Coarse-and Fine-grained Pruning), which delivers highly parallelizable subnetworks and matches the distillation methods in both accuracy and latency, without resorting to any unlabeled data. Our key insight is to jointly prune coarse-grained (e.g., layers) and fine-grained (e.g., heads and hidden units) modules, which controls the pruning decision of each parameter with masks of different granularity. We also devise a layerwise distillation strategy to transfer knowledge from unpruned to pruned models during optimization. Our experiments on GLUE and SQuAD datasets show that CoFi yields models with over 10× speedups with a small accuracy drop, showing its effectiveness and efficiency compared to previous pruning and distillation approaches. 2",Structured Pruning Learns Compact and Accurate Models,"The growing size of neural language models has led to increased attention in model compression. The two predominant approaches are pruning, which gradually removes weights from a pre-trained model, and distillation, which trains a smaller compact model to match a larger one. Pruning methods can significantly reduce the model size but hardly achieve large speedups as distillation. However, distillation methods require large amounts of unlabeled data and are expensive to train. 
In this work, we propose a task-specific structured pruning method CoFi 1 (Coarse-and Fine-grained Pruning), which delivers highly parallelizable subnetworks and matches the distillation methods in both accuracy and latency, without resorting to any unlabeled data. Our key insight is to jointly prune coarse-grained (e.g., layers) and fine-grained (e.g., heads and hidden units) modules, which controls the pruning decision of each parameter with masks of different granularity. We also devise a layerwise distillation strategy to transfer knowledge from unpruned to pruned models during optimization. Our experiments on GLUE and SQuAD datasets show that CoFi yields models with over 10× speedups with a small accuracy drop, showing its effectiveness and efficiency compared to previous pruning and distillation approaches. 2","The authors thank Tao Lei from Google Research, Ameet Deshpande, Dan Friedman, Sadhika Malladi from Princeton University and the anonymous reviewers for their valuable feedback on our paper. This research is supported by a Hisashi and Masae Kobayashi *67 Fellowship and a Google Research Scholar Award.","Structured Pruning Learns Compact and Accurate Models. The growing size of neural language models has led to increased attention in model compression. The two predominant approaches are pruning, which gradually removes weights from a pre-trained model, and distillation, which trains a smaller compact model to match a larger one. Pruning methods can significantly reduce the model size but hardly achieve large speedups as distillation. However, distillation methods require large amounts of unlabeled data and are expensive to train. In this work, we propose a task-specific structured pruning method CoFi 1 (Coarse-and Fine-grained Pruning), which delivers highly parallelizable subnetworks and matches the distillation methods in both accuracy and latency, without resorting to any unlabeled data. Our key insight is to jointly prune coarse-grained (e.g., layers) and fine-grained (e.g., heads and hidden units) modules, which controls the pruning decision of each parameter with masks of different granularity. We also devise a layerwise distillation strategy to transfer knowledge from unpruned to pruned models during optimization. Our experiments on GLUE and SQuAD datasets show that CoFi yields models with over 10× speedups with a small accuracy drop, showing its effectiveness and efficiency compared to previous pruning and distillation approaches. 2",2022
zhang-lan-2015-ecnu,https://aclanthology.org/S15-2125,0,,,,,,,"ECNU: Extracting Effective Features from Multiple Sequential Sentences for Target-dependent Sentiment Analysis in Reviews. This paper describes our systems submitted to the target-dependent sentiment polarity classification subtask in aspect based sentiment analysis (ABSA) task (i.e., Task 12) in Se-mEval 2015. To settle this problem, we extracted several effective features from three sequential sentences, including sentiment lexicon, linguistic and domain specific features. Then we employed these features to construct classifiers using supervised classification algorithm. In laptop domain, our systems ranked 2nd out of 6 constrained submissions and 2nd out of 7 unconstrained submissions. In restaurant domain, the rankings are 5th out of 6 and 2nd out of 8 respectively.",{ECNU}: Extracting Effective Features from Multiple Sequential Sentences for Target-dependent Sentiment Analysis in Reviews,"This paper describes our systems submitted to the target-dependent sentiment polarity classification subtask in aspect based sentiment analysis (ABSA) task (i.e., Task 12) in Se-mEval 2015. To settle this problem, we extracted several effective features from three sequential sentences, including sentiment lexicon, linguistic and domain specific features. Then we employed these features to construct classifiers using supervised classification algorithm. In laptop domain, our systems ranked 2nd out of 6 constrained submissions and 2nd out of 7 unconstrained submissions. In restaurant domain, the rankings are 5th out of 6 and 2nd out of 8 respectively.",ECNU: Extracting Effective Features from Multiple Sequential Sentences for Target-dependent Sentiment Analysis in Reviews,"This paper describes our systems submitted to the target-dependent sentiment polarity classification subtask in aspect based sentiment analysis (ABSA) task (i.e., Task 12) in Se-mEval 2015. To settle this problem, we extracted several effective features from three sequential sentences, including sentiment lexicon, linguistic and domain specific features. Then we employed these features to construct classifiers using supervised classification algorithm. In laptop domain, our systems ranked 2nd out of 6 constrained submissions and 2nd out of 7 unconstrained submissions. In restaurant domain, the rankings are 5th out of 6 and 2nd out of 8 respectively.",This research is supported by grants from Science and Technology Commission of Shanghai Municipality under research grant no. (14DZ2260800 and 15ZR1410700) and Shanghai Collaborative Innovation Center of Trustworthy Software for Internet of Things (ZF1213).,"ECNU: Extracting Effective Features from Multiple Sequential Sentences for Target-dependent Sentiment Analysis in Reviews. This paper describes our systems submitted to the target-dependent sentiment polarity classification subtask in aspect based sentiment analysis (ABSA) task (i.e., Task 12) in Se-mEval 2015. To settle this problem, we extracted several effective features from three sequential sentences, including sentiment lexicon, linguistic and domain specific features. Then we employed these features to construct classifiers using supervised classification algorithm. In laptop domain, our systems ranked 2nd out of 6 constrained submissions and 2nd out of 7 unconstrained submissions. In restaurant domain, the rankings are 5th out of 6 and 2nd out of 8 respectively.",2015
cheng-etal-2020-dynamically,https://aclanthology.org/2020.findings-emnlp.121,0,,,,,,,"Dynamically Updating Event Representations for Temporal Relation Classification with Multi-category Learning. Temporal relation classification is a pair-wise task for identifying the relation of a temporal link (TLINK) between two mentions, i.e. event, time and document creation time (DCT). It leads to two crucial limits: 1) Two TLINKs involving a common mention do not share information. 2) Existing models with independent classifiers for each TLINK category (E2E, E2T and E2D) 1 hinder from using the whole data. This paper presents an event centric model that allows to manage dynamic event representations across multiple TLINKs. Our model deals with three TLINK categories with multi-task learning to leverage the full size of data. The experimental results show that our proposal outperforms state-of-the-art models and two transfer learning baselines on both the English and Japanese data.",Dynamically Updating Event Representations for Temporal Relation Classification with Multi-category Learning,"Temporal relation classification is a pair-wise task for identifying the relation of a temporal link (TLINK) between two mentions, i.e. event, time and document creation time (DCT). It leads to two crucial limits: 1) Two TLINKs involving a common mention do not share information. 2) Existing models with independent classifiers for each TLINK category (E2E, E2T and E2D) 1 hinder from using the whole data. This paper presents an event centric model that allows to manage dynamic event representations across multiple TLINKs. Our model deals with three TLINK categories with multi-task learning to leverage the full size of data. The experimental results show that our proposal outperforms state-of-the-art models and two transfer learning baselines on both the English and Japanese data.",Dynamically Updating Event Representations for Temporal Relation Classification with Multi-category Learning,"Temporal relation classification is a pair-wise task for identifying the relation of a temporal link (TLINK) between two mentions, i.e. event, time and document creation time (DCT). It leads to two crucial limits: 1) Two TLINKs involving a common mention do not share information. 2) Existing models with independent classifiers for each TLINK category (E2E, E2T and E2D) 1 hinder from using the whole data. This paper presents an event centric model that allows to manage dynamic event representations across multiple TLINKs. Our model deals with three TLINK categories with multi-task learning to leverage the full size of data. The experimental results show that our proposal outperforms state-of-the-art models and two transfer learning baselines on both the English and Japanese data.",,"Dynamically Updating Event Representations for Temporal Relation Classification with Multi-category Learning. Temporal relation classification is a pair-wise task for identifying the relation of a temporal link (TLINK) between two mentions, i.e. event, time and document creation time (DCT). It leads to two crucial limits: 1) Two TLINKs involving a common mention do not share information. 2) Existing models with independent classifiers for each TLINK category (E2E, E2T and E2D) 1 hinder from using the whole data. This paper presents an event centric model that allows to manage dynamic event representations across multiple TLINKs. Our model deals with three TLINK categories with multi-task learning to leverage the full size of data. 
The experimental results show that our proposal outperforms state-of-the-art models and two transfer learning baselines on both the English and Japanese data.",2020
cohn-blunsom-2009-bayesian,https://aclanthology.org/D09-1037,0,,,,,,,"A Bayesian Model of Syntax-Directed Tree to String Grammar Induction. Tree based translation models are a compelling means of integrating linguistic information into machine translation. Syntax can inform lexical selection and reordering choices and thereby improve translation quality. Research to date has focussed primarily on decoding with such models, but less on the difficult problem of inducing the bilingual grammar from data. We propose a generative Bayesian model of tree-to-string translation which induces grammars that are both smaller and produce better translations than the previous heuristic two-stage approach which employs a separate word alignment step.",A {B}ayesian Model of Syntax-Directed Tree to String Grammar Induction,"Tree based translation models are a compelling means of integrating linguistic information into machine translation. Syntax can inform lexical selection and reordering choices and thereby improve translation quality. Research to date has focussed primarily on decoding with such models, but less on the difficult problem of inducing the bilingual grammar from data. We propose a generative Bayesian model of tree-to-string translation which induces grammars that are both smaller and produce better translations than the previous heuristic two-stage approach which employs a separate word alignment step.",A Bayesian Model of Syntax-Directed Tree to String Grammar Induction,"Tree based translation models are a compelling means of integrating linguistic information into machine translation. Syntax can inform lexical selection and reordering choices and thereby improve translation quality. Research to date has focussed primarily on decoding with such models, but less on the difficult problem of inducing the bilingual grammar from data. We propose a generative Bayesian model of tree-to-string translation which induces grammars that are both smaller and produce better translations than the previous heuristic two-stage approach which employs a separate word alignment step.",The authors acknowledge the support of the EP-SRC (grants GR/T04557/01 and EP/D074959/1). This work has made use of the resources provided by the Edinburgh Compute and Data Facility (ECDF). The ECDF is partially supported by the eDIKT initiative.,"A Bayesian Model of Syntax-Directed Tree to String Grammar Induction. Tree based translation models are a compelling means of integrating linguistic information into machine translation. Syntax can inform lexical selection and reordering choices and thereby improve translation quality. Research to date has focussed primarily on decoding with such models, but less on the difficult problem of inducing the bilingual grammar from data. We propose a generative Bayesian model of tree-to-string translation which induces grammars that are both smaller and produce better translations than the previous heuristic two-stage approach which employs a separate word alignment step.",2009
stathopoulos-etal-2018-variable,https://aclanthology.org/N18-1028,0,,,,,,,"Variable Typing: Assigning Meaning to Variables in Mathematical Text. Information about the meaning of mathematical variables in text is useful in NLP/IR tasks such as symbol disambiguation, topic modeling and mathematical information retrieval (MIR). We introduce variable typing, the task of assigning one mathematical type (multiword technical terms referring to mathematical concepts) to each variable in a sentence of mathematical text. As part of this work, we also introduce a new annotated data set composed of 33,524 data points extracted from scientific documents published on arXiv. Our intrinsic evaluation demonstrates that our data set is sufficient to successfully train and evaluate current classifiers from three different model architectures. The best performing model is evaluated on an extrinsic task: MIR, by producing a typed formula index. Our results show that the best performing MIR models make use of our typed index, compared to a formula index only containing raw symbols, thereby demonstrating the usefulness of variable typing. Let P be a parabolic subgroup of GL(n) with Levi decomposition P = M N , where N is the unipotent radical.",Variable Typing: Assigning Meaning to Variables in Mathematical Text,"Information about the meaning of mathematical variables in text is useful in NLP/IR tasks such as symbol disambiguation, topic modeling and mathematical information retrieval (MIR). We introduce variable typing, the task of assigning one mathematical type (multiword technical terms referring to mathematical concepts) to each variable in a sentence of mathematical text. As part of this work, we also introduce a new annotated data set composed of 33,524 data points extracted from scientific documents published on arXiv. Our intrinsic evaluation demonstrates that our data set is sufficient to successfully train and evaluate current classifiers from three different model architectures. The best performing model is evaluated on an extrinsic task: MIR, by producing a typed formula index. Our results show that the best performing MIR models make use of our typed index, compared to a formula index only containing raw symbols, thereby demonstrating the usefulness of variable typing. Let P be a parabolic subgroup of GL(n) with Levi decomposition P = M N , where N is the unipotent radical.",Variable Typing: Assigning Meaning to Variables in Mathematical Text,"Information about the meaning of mathematical variables in text is useful in NLP/IR tasks such as symbol disambiguation, topic modeling and mathematical information retrieval (MIR). We introduce variable typing, the task of assigning one mathematical type (multiword technical terms referring to mathematical concepts) to each variable in a sentence of mathematical text. As part of this work, we also introduce a new annotated data set composed of 33,524 data points extracted from scientific documents published on arXiv. Our intrinsic evaluation demonstrates that our data set is sufficient to successfully train and evaluate current classifiers from three different model architectures. The best performing model is evaluated on an extrinsic task: MIR, by producing a typed formula index. Our results show that the best performing MIR models make use of our typed index, compared to a formula index only containing raw symbols, thereby demonstrating the usefulness of variable typing. 
Let P be a parabolic subgroup of GL(n) with Levi decomposition P = M N , where N is the unipotent radical.",,"Variable Typing: Assigning Meaning to Variables in Mathematical Text. Information about the meaning of mathematical variables in text is useful in NLP/IR tasks such as symbol disambiguation, topic modeling and mathematical information retrieval (MIR). We introduce variable typing, the task of assigning one mathematical type (multiword technical terms referring to mathematical concepts) to each variable in a sentence of mathematical text. As part of this work, we also introduce a new annotated data set composed of 33,524 data points extracted from scientific documents published on arXiv. Our intrinsic evaluation demonstrates that our data set is sufficient to successfully train and evaluate current classifiers from three different model architectures. The best performing model is evaluated on an extrinsic task: MIR, by producing a typed formula index. Our results show that the best performing MIR models make use of our typed index, compared to a formula index only containing raw symbols, thereby demonstrating the usefulness of variable typing. Let P be a parabolic subgroup of GL(n) with Levi decomposition P = M N , where N is the unipotent radical.",2018
anthony-patrick-2004-dependency,https://aclanthology.org/W04-0815,0,,,,,,,Dependency based logical form transformations. This paper describes a system developed for the transformation of English sentences into a first order logical form representation. The methodology is centered on the use of a dependency grammar based parser. We demonstrate the suitability of applying a dependency parser based solution to the given task and in turn explain some of the limitations and challenges involved when using such an approach. The efficiencies and deficiencies of our approach are discussed as well as considerations for further enhancements.,Dependency based logical form transformations,This paper describes a system developed for the transformation of English sentences into a first order logical form representation. The methodology is centered on the use of a dependency grammar based parser. We demonstrate the suitability of applying a dependency parser based solution to the given task and in turn explain some of the limitations and challenges involved when using such an approach. The efficiencies and deficiencies of our approach are discussed as well as considerations for further enhancements.,Dependency based logical form transformations,This paper describes a system developed for the transformation of English sentences into a first order logical form representation. The methodology is centered on the use of a dependency grammar based parser. We demonstrate the suitability of applying a dependency parser based solution to the given task and in turn explain some of the limitations and challenges involved when using such an approach. The efficiencies and deficiencies of our approach are discussed as well as considerations for further enhancements.,,Dependency based logical form transformations. This paper describes a system developed for the transformation of English sentences into a first order logical form representation. The methodology is centered on the use of a dependency grammar based parser. We demonstrate the suitability of applying a dependency parser based solution to the given task and in turn explain some of the limitations and challenges involved when using such an approach. The efficiencies and deficiencies of our approach are discussed as well as considerations for further enhancements.,2004
terrell-mutlu-2012-regression,https://aclanthology.org/W12-1639,0,,,,,,,"A Regression-based Approach to Modeling Addressee Backchannels. During conversations, addressees produce conversational acts-verbal and nonverbal backchannels-that facilitate turn-taking, acknowledge speakership, and communicate common ground without disrupting the speaker's speech. These acts play a key role in achieving fluent conversations. Therefore, gaining a deeper understanding of how these acts interact with speaker behaviors in shaping conversations might offer key insights into the design of technologies such as computer-mediated communication systems and embodied conversational agents. In this paper, we explore how a regression-based approach might offer such insights into modeling predictive relationships between speaker behaviors and addressee backchannels in a storytelling scenario. Our results reveal speaker eye contact as a significant predictor of verbal, nonverbal, and bimodal backchannels and utterance boundaries as predictors of nonverbal and bimodal backchannels.",A Regression-based Approach to Modeling Addressee Backchannels,"During conversations, addressees produce conversational acts-verbal and nonverbal backchannels-that facilitate turn-taking, acknowledge speakership, and communicate common ground without disrupting the speaker's speech. These acts play a key role in achieving fluent conversations. Therefore, gaining a deeper understanding of how these acts interact with speaker behaviors in shaping conversations might offer key insights into the design of technologies such as computer-mediated communication systems and embodied conversational agents. In this paper, we explore how a regression-based approach might offer such insights into modeling predictive relationships between speaker behaviors and addressee backchannels in a storytelling scenario. Our results reveal speaker eye contact as a significant predictor of verbal, nonverbal, and bimodal backchannels and utterance boundaries as predictors of nonverbal and bimodal backchannels.",A Regression-based Approach to Modeling Addressee Backchannels,"During conversations, addressees produce conversational acts-verbal and nonverbal backchannels-that facilitate turn-taking, acknowledge speakership, and communicate common ground without disrupting the speaker's speech. These acts play a key role in achieving fluent conversations. Therefore, gaining a deeper understanding of how these acts interact with speaker behaviors in shaping conversations might offer key insights into the design of technologies such as computer-mediated communication systems and embodied conversational agents. In this paper, we explore how a regression-based approach might offer such insights into modeling predictive relationships between speaker behaviors and addressee backchannels in a storytelling scenario. Our results reveal speaker eye contact as a significant predictor of verbal, nonverbal, and bimodal backchannels and utterance boundaries as predictors of nonverbal and bimodal backchannels.",We would like to thank Faisal Khan for his help in data collection and processing. This work was supported by National Science Foundation award 1149970.,"A Regression-based Approach to Modeling Addressee Backchannels. During conversations, addressees produce conversational acts-verbal and nonverbal backchannels-that facilitate turn-taking, acknowledge speakership, and communicate common ground without disrupting the speaker's speech. 
These acts play a key role in achieving fluent conversations. Therefore, gaining a deeper understanding of how these acts interact with speaker behaviors in shaping conversations might offer key insights into the design of technologies such as computer-mediated communication systems and embodied conversational agents. In this paper, we explore how a regression-based approach might offer such insights into modeling predictive relationships between speaker behaviors and addressee backchannels in a storytelling scenario. Our results reveal speaker eye contact as a significant predictor of verbal, nonverbal, and bimodal backchannels and utterance boundaries as predictors of nonverbal and bimodal backchannels.",2012
chen-etal-2020-reconstructing,https://aclanthology.org/2020.aacl-main.81,0,,,,,,,"Reconstructing Event Regions for Event Extraction via Graph Attention Networks. Event information is usually scattered across multiple sentences within a document. The local sentence-level event extractors often yield many noisy event role filler extractions in the absence of a broader view of the documentlevel context. Filtering spurious extractions and aggregating event information in a document remains a challenging problem. Following the observation that a document has several relevant event regions densely populated with event role fillers, we build graphs with candidate role filler extractions enriched by sentential embeddings as nodes, and use graph attention networks to identify event regions in a document and aggregate event information. We characterize edges between candidate extractions in a graph into rich vector representations to facilitate event region identification. The experimental results on two datasets of two languages show that our approach yields new state-of-the-art performance for the challenging event extraction task.",Reconstructing Event Regions for Event Extraction via Graph Attention Networks,"Event information is usually scattered across multiple sentences within a document. The local sentence-level event extractors often yield many noisy event role filler extractions in the absence of a broader view of the documentlevel context. Filtering spurious extractions and aggregating event information in a document remains a challenging problem. Following the observation that a document has several relevant event regions densely populated with event role fillers, we build graphs with candidate role filler extractions enriched by sentential embeddings as nodes, and use graph attention networks to identify event regions in a document and aggregate event information. We characterize edges between candidate extractions in a graph into rich vector representations to facilitate event region identification. The experimental results on two datasets of two languages show that our approach yields new state-of-the-art performance for the challenging event extraction task.",Reconstructing Event Regions for Event Extraction via Graph Attention Networks,"Event information is usually scattered across multiple sentences within a document. The local sentence-level event extractors often yield many noisy event role filler extractions in the absence of a broader view of the documentlevel context. Filtering spurious extractions and aggregating event information in a document remains a challenging problem. Following the observation that a document has several relevant event regions densely populated with event role fillers, we build graphs with candidate role filler extractions enriched by sentential embeddings as nodes, and use graph attention networks to identify event regions in a document and aggregate event information. We characterize edges between candidate extractions in a graph into rich vector representations to facilitate event region identification. The experimental results on two datasets of two languages show that our approach yields new state-of-the-art performance for the challenging event extraction task.","This work is supported by the Natural Key RD Program of China (No.2018YFB1005100), the National Natural Science Foundation of China (No.61922085, No.U1936207, No.61806201) and the Key Research Program of the Chinese Academy of Sciences (Grant NO. ZDBS-SSW-JSC006). 
This work is also supported by CCF-Tencent Open Research Fund, Beijing Academy of Artificial Intelligence (BAAI2019QN0301) and independent research project of National Laboratory of Pattern Recognition.","Reconstructing Event Regions for Event Extraction via Graph Attention Networks. Event information is usually scattered across multiple sentences within a document. The local sentence-level event extractors often yield many noisy event role filler extractions in the absence of a broader view of the document-level context. Filtering spurious extractions and aggregating event information in a document remains a challenging problem. Following the observation that a document has several relevant event regions densely populated with event role fillers, we build graphs with candidate role filler extractions enriched by sentential embeddings as nodes, and use graph attention networks to identify event regions in a document and aggregate event information. We characterize edges between candidate extractions in a graph into rich vector representations to facilitate event region identification. The experimental results on two datasets of two languages show that our approach yields new state-of-the-art performance for the challenging event extraction task.",2020
zhu-etal-2013-improved,https://aclanthology.org/P13-1019,0,,,,,,,Improved Bayesian Logistic Supervised Topic Models with Data Augmentation. Supervised topic models with a logistic likelihood have two issues that potentially limit their practical use: 1) response variables are usually over-weighted by document word counts; and 2) existing variational inference methods make strict mean-field assumptions. We address these issues by: 1) introducing a regularization constant to better balance the two parts based on an optimization formulation of Bayesian inference; and 2) developing a simple Gibbs sampling algorithm by introducing auxiliary Polya-Gamma variables and collapsing out Dirichlet variables. Our augment-and-collapse sampling algorithm has analytical forms of each conditional distribution without making any restricting assumptions and can be easily parallelized. Empirical results demonstrate significant improvements on prediction performance and time efficiency.,Improved {B}ayesian Logistic Supervised Topic Models with Data Augmentation,Supervised topic models with a logistic likelihood have two issues that potentially limit their practical use: 1) response variables are usually over-weighted by document word counts; and 2) existing variational inference methods make strict mean-field assumptions. We address these issues by: 1) introducing a regularization constant to better balance the two parts based on an optimization formulation of Bayesian inference; and 2) developing a simple Gibbs sampling algorithm by introducing auxiliary Polya-Gamma variables and collapsing out Dirichlet variables. Our augment-and-collapse sampling algorithm has analytical forms of each conditional distribution without making any restricting assumptions and can be easily parallelized. Empirical results demonstrate significant improvements on prediction performance and time efficiency.,Improved Bayesian Logistic Supervised Topic Models with Data Augmentation,Supervised topic models with a logistic likelihood have two issues that potentially limit their practical use: 1) response variables are usually over-weighted by document word counts; and 2) existing variational inference methods make strict mean-field assumptions. We address these issues by: 1) introducing a regularization constant to better balance the two parts based on an optimization formulation of Bayesian inference; and 2) developing a simple Gibbs sampling algorithm by introducing auxiliary Polya-Gamma variables and collapsing out Dirichlet variables. Our augment-and-collapse sampling algorithm has analytical forms of each conditional distribution without making any restricting assumptions and can be easily parallelized. Empirical results demonstrate significant improvements on prediction performance and time efficiency.,,Improved Bayesian Logistic Supervised Topic Models with Data Augmentation. Supervised topic models with a logistic likelihood have two issues that potentially limit their practical use: 1) response variables are usually over-weighted by document word counts; and 2) existing variational inference methods make strict mean-field assumptions. We address these issues by: 1) introducing a regularization constant to better balance the two parts based on an optimization formulation of Bayesian inference; and 2) developing a simple Gibbs sampling algorithm by introducing auxiliary Polya-Gamma variables and collapsing out Dirichlet variables. 
Our augment-and-collapse sampling algorithm has analytical forms of each conditional distribution without making any restricting assumptions and can be easily parallelized. Empirical results demonstrate significant improvements on prediction performance and time efficiency.,2013
diab-etal-2004-automatic,https://aclanthology.org/N04-4038,0,,,,,,,"Automatic Tagging of Arabic Text: From Raw Text to Base Phrase Chunks. To date, there are no fully automated systems addressing the community's need for fundamental language processing tools for Arabic text. In this paper, we present a Support Vector Machine (SVM) based approach to automatically tokenize (segmenting off clitics), part-of-speech (POS) tag and annotate base phrases (BPs) in Arabic text. We adapt highly accurate tools that have been developed for English text and apply them to Arabic text. Using standard evaluation metrics, we report that the SVM-TOK tokenizer achieves an Fβ=1 score of 99.12, the SVM-POS tagger achieves an accuracy of 95.49%, and the SVM-BP chunker yields an Fβ=1 score of 92.08.",Automatic Tagging of {A}rabic Text: From Raw Text to Base Phrase Chunks,"To date, there are no fully automated systems addressing the community's need for fundamental language processing tools for Arabic text. In this paper, we present a Support Vector Machine (SVM) based approach to automatically tokenize (segmenting off clitics), part-of-speech (POS) tag and annotate base phrases (BPs) in Arabic text. We adapt highly accurate tools that have been developed for English text and apply them to Arabic text. Using standard evaluation metrics, we report that the SVM-TOK tokenizer achieves an Fβ=1 score of 99.12, the SVM-POS tagger achieves an accuracy of 95.49%, and the SVM-BP chunker yields an Fβ=1 score of 92.08.",Automatic Tagging of Arabic Text: From Raw Text to Base Phrase Chunks,"To date, there are no fully automated systems addressing the community's need for fundamental language processing tools for Arabic text. In this paper, we present a Support Vector Machine (SVM) based approach to automatically tokenize (segmenting off clitics), part-of-speech (POS) tag and annotate base phrases (BPs) in Arabic text. We adapt highly accurate tools that have been developed for English text and apply them to Arabic text. Using standard evaluation metrics, we report that the SVM-TOK tokenizer achieves an Fβ=1 score of 99.12, the SVM-POS tagger achieves an accuracy of 95.49%, and the SVM-BP chunker yields an Fβ=1 score of 92.08.",,"Automatic Tagging of Arabic Text: From Raw Text to Base Phrase Chunks. To date, there are no fully automated systems addressing the community's need for fundamental language processing tools for Arabic text. In this paper, we present a Support Vector Machine (SVM) based approach to automatically tokenize (segmenting off clitics), part-of-speech (POS) tag and annotate base phrases (BPs) in Arabic text. We adapt highly accurate tools that have been developed for English text and apply them to Arabic text. Using standard evaluation metrics, we report that the SVM-TOK tokenizer achieves an Fβ=1 score of 99.12, the SVM-POS tagger achieves an accuracy of 95.49%, and the SVM-BP chunker yields an Fβ=1 score of 92.08.",2004
anastasopoulos-etal-2020-tico,https://aclanthology.org/2020.nlpcovid19-2.5,1,,,,health,,,"TICO-19: the Translation Initiative for COvid-19. The COVID-19 pandemic is the worst pandemic to strike the world in over a century. Crucial to stemming the tide of the SARS-CoV-2 virus is communicating to vulnerable populations the means by which they can protect themselves. To this end, the collaborators forming the Translation Initiative for COvid-19 (TICO-19) have made test and development data available to AI and MT researchers in 35 different languages in order to foster the development of tools and resources for improving access to information about COVID-19 in these languages. In addition to 9 high-resourced, ""pivot"" languages, the team is targeting 26 lesser resourced languages, in particular languages of Africa, South Asia and Southeast Asia, whose populations may be the most vulnerable to the spread of the virus. The same data is translated into all of the languages represented, meaning that testing or development can be done for any pairing of languages in the set. Further, the team is converting the test and development data into translation memories (TMXs) that can be used by localizers from and to any of the languages.",{TICO}-19: the Translation Initiative for {CO}vid-19,"The COVID-19 pandemic is the worst pandemic to strike the world in over a century. Crucial to stemming the tide of the SARS-CoV-2 virus is communicating to vulnerable populations the means by which they can protect themselves. To this end, the collaborators forming the Translation Initiative for COvid-19 (TICO-19) have made test and development data available to AI and MT researchers in 35 different languages in order to foster the development of tools and resources for improving access to information about COVID-19 in these languages. In addition to 9 high-resourced, ""pivot"" languages, the team is targeting 26 lesser resourced languages, in particular languages of Africa, South Asia and Southeast Asia, whose populations may be the most vulnerable to the spread of the virus. The same data is translated into all of the languages represented, meaning that testing or development can be done for any pairing of languages in the set. Further, the team is converting the test and development data into translation memories (TMXs) that can be used by localizers from and to any of the languages.",TICO-19: the Translation Initiative for COvid-19,"The COVID-19 pandemic is the worst pandemic to strike the world in over a century. Crucial to stemming the tide of the SARS-CoV-2 virus is communicating to vulnerable populations the means by which they can protect themselves. To this end, the collaborators forming the Translation Initiative for COvid-19 (TICO-19) have made test and development data available to AI and MT researchers in 35 different languages in order to foster the development of tools and resources for improving access to information about COVID-19 in these languages. In addition to 9 high-resourced, ""pivot"" languages, the team is targeting 26 lesser resourced languages, in particular languages of Africa, South Asia and Southeast Asia, whose populations may be the most vulnerable to the spread of the virus. The same data is translated into all of the languages represented, meaning that testing or development can be done for any pairing of languages in the set. 
Further, the team is converting the test and development data into translation memories (TMXs) that can be used by localizers from and to any of the languages.","We would like to thank the people who made this effort possible: Tanya Badeka, Jen Wang, William Wong, Rebekkah Hogan, Cynthia Gao, Rachael Brunckhorst, Ian Hill, Bob Jung, Jason Smith, Susan Kim Chan, Romina Stella, Keith Stevens. We also extend our gratitude to the many translators and the quality reviewers whose hard work are represented in our benchmarks and in our translation memories. Some of the languages were very difficult to source, and the burden in these cases often fell to a very small number of translators. We thank you for the many hours you spent translating and, in many cases, re-translating content.","TICO-19: the Translation Initiative for COvid-19. The COVID-19 pandemic is the worst pandemic to strike the world in over a century. Crucial to stemming the tide of the SARS-CoV-2 virus is communicating to vulnerable populations the means by which they can protect themselves. To this end, the collaborators forming the Translation Initiative for COvid-19 (TICO-19) have made test and development data available to AI and MT researchers in 35 different languages in order to foster the development of tools and resources for improving access to information about COVID-19 in these languages. In addition to 9 high-resourced, ""pivot"" languages, the team is targeting 26 lesser resourced languages, in particular languages of Africa, South Asia and Southeast Asia, whose populations may be the most vulnerable to the spread of the virus. The same data is translated into all of the languages represented, meaning that testing or development can be done for any pairing of languages in the set. Further, the team is converting the test and development data into translation memories (TMXs) that can be used by localizers from and to any of the languages.",2020
giannakopoulos-etal-2017-multiling,https://aclanthology.org/W17-1001,0,,,,,,,"MultiLing 2017 Overview. In this brief report we present an overview of the MultiLing 2017 effort and workshop, as implemented within EACL 2017. MultiLing is a community-driven initiative that pushes the state-of-the-art in Automatic Summarization by providing data sets and fostering further research and development of summarization systems. This year the scope of the workshop was widened, bringing together researchers that work on summarization across sources, languages and genres. We summarize the main tasks planned and implemented this year, also providing insights on next steps.",{M}ulti{L}ing 2017 Overview,"In this brief report we present an overview of the MultiLing 2017 effort and workshop, as implemented within EACL 2017. MultiLing is a community-driven initiative that pushes the state-of-the-art in Automatic Summarization by providing data sets and fostering further research and development of summarization systems. This year the scope of the workshop was widened, bringing together researchers that work on summarization across sources, languages and genres. We summarize the main tasks planned and implemented this year, also providing insights on next steps.",MultiLing 2017 Overview,"In this brief report we present an overview of the MultiLing 2017 effort and workshop, as implemented within EACL 2017. MultiLing is a community-driven initiative that pushes the state-of-the-art in Automatic Summarization by providing data sets and fostering further research and development of summarization systems. This year the scope of the workshop was widened, bringing together researchers that work on summarization across sources, languages and genres. We summarize the main tasks planned and implemented this year, also providing insights on next steps.","This work was supported by project MediaGist, EUs FP7 People Programme (Marie Curie Actions), no. 630786, MediaGist.","MultiLing 2017 Overview. In this brief report we present an overview of the MultiLing 2017 effort and workshop, as implemented within EACL 2017. MultiLing is a community-driven initiative that pushes the state-of-the-art in Automatic Summarization by providing data sets and fostering further research and development of summarization systems. This year the scope of the workshop was widened, bringing together researchers that work on summarization across sources, languages and genres. We summarize the main tasks planned and implemented this year, also providing insights on next steps.",2017
kireyev-2009-semantic,https://aclanthology.org/N09-1060,0,,,,,,,"Semantic-based Estimation of Term Informativeness. The idea that some words carry more semantic content than others, has led to the notion of term specificity, or informativeness. Computational estimation of this quantity is important for various applications such as information retrieval. We propose a new method of computing term specificity, based on modeling the rate of learning of word meaning in Latent Semantic Analysis (LSA). We analyze the performance of this method both qualitatively and quantitatively and demonstrate that it shows excellent performance compared to existing methods on a broad range of tests. We also demonstrate how it can be used to improve existing applications in information retrieval and summarization.",Semantic-based Estimation of Term Informativeness,"The idea that some words carry more semantic content than others, has led to the notion of term specificity, or informativeness. Computational estimation of this quantity is important for various applications such as information retrieval. We propose a new method of computing term specificity, based on modeling the rate of learning of word meaning in Latent Semantic Analysis (LSA). We analyze the performance of this method both qualitatively and quantitatively and demonstrate that it shows excellent performance compared to existing methods on a broad range of tests. We also demonstrate how it can be used to improve existing applications in information retrieval and summarization.",Semantic-based Estimation of Term Informativeness,"The idea that some words carry more semantic content than others, has led to the notion of term specificity, or informativeness. Computational estimation of this quantity is important for various applications such as information retrieval. We propose a new method of computing term specificity, based on modeling the rate of learning of word meaning in Latent Semantic Analysis (LSA). We analyze the performance of this method both qualitatively and quantitatively and demonstrate that it shows excellent performance compared to existing methods on a broad range of tests. We also demonstrate how it can be used to improve existing applications in information retrieval and summarization.",,"Semantic-based Estimation of Term Informativeness. The idea that some words carry more semantic content than others, has led to the notion of term specificity, or informativeness. Computational estimation of this quantity is important for various applications such as information retrieval. We propose a new method of computing term specificity, based on modeling the rate of learning of word meaning in Latent Semantic Analysis (LSA). We analyze the performance of this method both qualitatively and quantitatively and demonstrate that it shows excellent performance compared to existing methods on a broad range of tests. We also demonstrate how it can be used to improve existing applications in information retrieval and summarization.",2009
semmar-laib-2017-building,https://doi.org/10.26615/978-954-452-049-6_085,0,,,,,,,Building Multiword Expressions Bilingual Lexicons for Domain Adaptation of an Example-Based Machine Translation System. ,Building Multiword Expressions Bilingual Lexicons for Domain Adaptation of an Example-Based Machine Translation System,,Building Multiword Expressions Bilingual Lexicons for Domain Adaptation of an Example-Based Machine Translation System,,,Building Multiword Expressions Bilingual Lexicons for Domain Adaptation of an Example-Based Machine Translation System. ,2017
inoue-etal-2022-learning,https://aclanthology.org/2022.findings-acl.81,0,,,,,,,"Learning and Evaluating Character Representations in Novels. We address the problem of learning fixed-length vector representations of characters in novels. Recent advances in word embeddings have proven successful in learning entity representations from short texts, but fall short on longer documents because they do not capture full book-level information. To overcome the weakness of such text-based embeddings, we propose two novel methods for representing characters: (i) graph neural network-based embeddings from a full corpus-based character network; and (ii) low-dimensional embeddings constructed from the occurrence pattern of characters in each novel. We test the quality of these character embeddings using a new benchmark suite to evaluate character representations, encompassing 12 different tasks. We show that our representation techniques combined with text-based embeddings lead to the best character representations, outperforming text-based embeddings in four tasks. Our dataset is made publicly available to stimulate additional work in this area.",Learning and Evaluating Character Representations in Novels,"We address the problem of learning fixed-length vector representations of characters in novels. Recent advances in word embeddings have proven successful in learning entity representations from short texts, but fall short on longer documents because they do not capture full book-level information. To overcome the weakness of such text-based embeddings, we propose two novel methods for representing characters: (i) graph neural network-based embeddings from a full corpus-based character network; and (ii) low-dimensional embeddings constructed from the occurrence pattern of characters in each novel. We test the quality of these character embeddings using a new benchmark suite to evaluate character representations, encompassing 12 different tasks. We show that our representation techniques combined with text-based embeddings lead to the best character representations, outperforming text-based embeddings in four tasks. Our dataset is made publicly available to stimulate additional work in this area.",Learning and Evaluating Character Representations in Novels,"We address the problem of learning fixed-length vector representations of characters in novels. Recent advances in word embeddings have proven successful in learning entity representations from short texts, but fall short on longer documents because they do not capture full book-level information. To overcome the weakness of such text-based embeddings, we propose two novel methods for representing characters: (i) graph neural network-based embeddings from a full corpus-based character network; and (ii) low-dimensional embeddings constructed from the occurrence pattern of characters in each novel. We test the quality of these character embeddings using a new benchmark suite to evaluate character representations, encompassing 12 different tasks. We show that our representation techniques combined with text-based embeddings lead to the best character representations, outperforming text-based embeddings in four tasks. Our dataset is made publicly available to stimulate additional work in this area.",We would like to thank anonymous reviewers for valuable and insightful feedback.,"Learning and Evaluating Character Representations in Novels. We address the problem of learning fixed-length vector representations of characters in novels. 
Recent advances in word embeddings have proven successful in learning entity representations from short texts, but fall short on longer documents because they do not capture full book-level information. To overcome the weakness of such text-based embeddings, we propose two novel methods for representing characters: (i) graph neural network-based embeddings from a full corpus-based character network; and (ii) low-dimensional embeddings constructed from the occurrence pattern of characters in each novel. We test the quality of these character embeddings using a new benchmark suite to evaluate character representations, encompassing 12 different tasks. We show that our representation techniques combined with text-based embeddings lead to the best character representations, outperforming text-based embeddings in four tasks. Our dataset is made publicly available to stimulate additional work in this area.",2022
netisopakul-chattupan-2015-thai,https://aclanthology.org/Y15-1022,0,,,,,,,"Thai Stock News Sentiment Classification using Wordpair Features. Thai stock brokers issue daily stock news for their customers. One broker labels these news with plus, minus and zero sign to indicate the type of recommendation. This paper proposed to classify Thai stock news by extracting important texts from the news. The extracted text is in a form of a 'wordpair'. Three wordpair sets, manual wordpairs extraction (ME), manual wordpairs addition (MA), and automate wordpairs combination (AC), are constructed and compared for their precision, recall and f-measure. Using this broker's news as a training set and unseen stock news from other brokers as a testing set, the experiment shows that all three sets have similar results for the training set but the second and the third set have better classification results in classifying stock news from unseen brokers.",{T}hai Stock News Sentiment Classification using Wordpair Features,"Thai stock brokers issue daily stock news for their customers. One broker labels these news with plus, minus and zero sign to indicate the type of recommendation. This paper proposed to classify Thai stock news by extracting important texts from the news. The extracted text is in a form of a 'wordpair'. Three wordpair sets, manual wordpairs extraction (ME), manual wordpairs addition (MA), and automate wordpairs combination (AC), are constructed and compared for their precision, recall and f-measure. Using this broker's news as a training set and unseen stock news from other brokers as a testing set, the experiment shows that all three sets have similar results for the training set but the second and the third set have better classification results in classifying stock news from unseen brokers.",Thai Stock News Sentiment Classification using Wordpair Features,"Thai stock brokers issue daily stock news for their customers. One broker labels these news with plus, minus and zero sign to indicate the type of recommendation. This paper proposed to classify Thai stock news by extracting important texts from the news. The extracted text is in a form of a 'wordpair'. Three wordpair sets, manual wordpairs extraction (ME), manual wordpairs addition (MA), and automate wordpairs combination (AC), are constructed and compared for their precision, recall and f-measure. Using this broker's news as a training set and unseen stock news from other brokers as a testing set, the experiment shows that all three sets have similar results for the training set but the second and the third set have better classification results in classifying stock news from unseen brokers.",,"Thai Stock News Sentiment Classification using Wordpair Features. Thai stock brokers issue daily stock news for their customers. One broker labels these news with plus, minus and zero sign to indicate the type of recommendation. This paper proposed to classify Thai stock news by extracting important texts from the news. The extracted text is in a form of a 'wordpair'. Three wordpair sets, manual wordpairs extraction (ME), manual wordpairs addition (MA), and automate wordpairs combination (AC), are constructed and compared for their precision, recall and f-measure. 
Using this broker's news as a training set and unseen stock news from other brokers as a testing set, the experiment shows that all three sets have similar results for the training set but the second and the third set have better classification results in classifying stock news from unseen brokers.",2015
ma-etal-2017-text,https://aclanthology.org/P17-3009,0,,,,,,,"Text-based Speaker Identification on Multiparty Dialogues Using Multi-document Convolutional Neural Networks. We propose a convolutional neural network model for text-based speaker identification on multiparty dialogues extracted from the TV show, Friends. While most previous works on this task rely heavily on acoustic features, our approach attempts to identify speakers in dialogues using their speech patterns as captured by transcriptions to the TV show. It has been shown that different individual speakers exhibit distinct idiolectal styles. Several convolutional neural network models are developed to discriminate between differing speech patterns. Our results confirm the promise of text-based approaches, with the best performing model showing an accuracy improvement of over 6% upon the baseline CNN model.",Text-based Speaker Identification on Multiparty Dialogues Using Multi-document Convolutional Neural Networks,"We propose a convolutional neural network model for text-based speaker identification on multiparty dialogues extracted from the TV show, Friends. While most previous works on this task rely heavily on acoustic features, our approach attempts to identify speakers in dialogues using their speech patterns as captured by transcriptions to the TV show. It has been shown that different individual speakers exhibit distinct idiolectal styles. Several convolutional neural network models are developed to discriminate between differing speech patterns. Our results confirm the promise of text-based approaches, with the best performing model showing an accuracy improvement of over 6% upon the baseline CNN model.",Text-based Speaker Identification on Multiparty Dialogues Using Multi-document Convolutional Neural Networks,"We propose a convolutional neural network model for text-based speaker identification on multiparty dialogues extracted from the TV show, Friends. While most previous works on this task rely heavily on acoustic features, our approach attempts to identify speakers in dialogues using their speech patterns as captured by transcriptions to the TV show. It has been shown that different individual speakers exhibit distinct idiolectal styles. Several convolutional neural network models are developed to discriminate between differing speech patterns. Our results confirm the promise of text-based approaches, with the best performing model showing an accuracy improvement of over 6% upon the baseline CNN model.",,"Text-based Speaker Identification on Multiparty Dialogues Using Multi-document Convolutional Neural Networks. We propose a convolutional neural network model for text-based speaker identification on multiparty dialogues extracted from the TV show, Friends. While most previous works on this task rely heavily on acoustic features, our approach attempts to identify speakers in dialogues using their speech patterns as captured by transcriptions to the TV show. It has been shown that different individual speakers exhibit distinct idiolectal styles. Several convolutional neural network models are developed to discriminate between differing speech patterns. Our results confirm the promise of text-based approaches, with the best performing model showing an accuracy improvement of over 6% upon the baseline CNN model.",2017
deng-nakamura-2005-investigating,https://aclanthology.org/I05-2025,0,,,,,,,"Investigating the Features that Affect Cue Usage of Non-native Speakers of English. At present, the population of non-native speakers is twice that of native speakers. It is necessary to explore the text generation strategies for non-native users. However, little has been done in this field. This study investigates the features that affect the placement (where to place a cue) of because for non-native speakers. A machine learning program-C4.5 was applied to induce the classification models of the placement.",Investigating the Features that Affect Cue Usage of Non-native Speakers of {E}nglish,"At present, the population of non-native speakers is twice that of native speakers. It is necessary to explore the text generation strategies for non-native users. However, little has been done in this field. This study investigates the features that affect the placement (where to place a cue) of because for non-native speakers. A machine learning program-C4.5 was applied to induce the classification models of the placement.",Investigating the Features that Affect Cue Usage of Non-native Speakers of English,"At present, the population of non-native speakers is twice that of native speakers. It is necessary to explore the text generation strategies for non-native users. However, little has been done in this field. This study investigates the features that affect the placement (where to place a cue) of because for non-native speakers. A machine learning program-C4.5 was applied to induce the classification models of the placement.",,"Investigating the Features that Affect Cue Usage of Non-native Speakers of English. At present, the population of non-native speakers is twice that of native speakers. It is necessary to explore the text generation strategies for non-native users. However, little has been done in this field. This study investigates the features that affect the placement (where to place a cue) of because for non-native speakers. A machine learning program-C4.5 was applied to induce the classification models of the placement.",2005
jain-gandhi-2022-comprehensive,https://aclanthology.org/2022.findings-acl.270,0,,,,,,,"Comprehensive Multi-Modal Interactions for Referring Image Segmentation. We investigate Referring Image Segmentation (RIS), which outputs a segmentation map corresponding to the natural language description. Addressing RIS efficiently requires considering the interactions happening across visual and linguistic modalities and the interactions within each modality. Existing methods are limited because they either compute different forms of interactions sequentially (leading to error propagation) or ignore intramodal interactions. We address this limitation by performing all three interactions simultaneously through a Synchronous Multi-Modal Fusion Module (SFM). Moreover, to produce refined segmentation masks, we propose a novel Hierarchical Cross-Modal Aggregation Module (HCAM), where linguistic features facilitate the exchange of contextual information across the visual hierarchy. We present thorough ablation studies and validate our approach's performance on four benchmark datasets, showing considerable performance gains over the existing state-of-the-art (SOTA) methods.",Comprehensive Multi-Modal Interactions for Referring Image Segmentation,"We investigate Referring Image Segmentation (RIS), which outputs a segmentation map corresponding to the natural language description. Addressing RIS efficiently requires considering the interactions happening across visual and linguistic modalities and the interactions within each modality. Existing methods are limited because they either compute different forms of interactions sequentially (leading to error propagation) or ignore intramodal interactions. We address this limitation by performing all three interactions simultaneously through a Synchronous Multi-Modal Fusion Module (SFM). Moreover, to produce refined segmentation masks, we propose a novel Hierarchical Cross-Modal Aggregation Module (HCAM), where linguistic features facilitate the exchange of contextual information across the visual hierarchy. We present thorough ablation studies and validate our approach's performance on four benchmark datasets, showing considerable performance gains over the existing state-of-the-art (SOTA) methods.",Comprehensive Multi-Modal Interactions for Referring Image Segmentation,"We investigate Referring Image Segmentation (RIS), which outputs a segmentation map corresponding to the natural language description. Addressing RIS efficiently requires considering the interactions happening across visual and linguistic modalities and the interactions within each modality. Existing methods are limited because they either compute different forms of interactions sequentially (leading to error propagation) or ignore intramodal interactions. We address this limitation by performing all three interactions simultaneously through a Synchronous Multi-Modal Fusion Module (SFM). Moreover, to produce refined segmentation masks, we propose a novel Hierarchical Cross-Modal Aggregation Module (HCAM), where linguistic features facilitate the exchange of contextual information across the visual hierarchy. We present thorough ablation studies and validate our approach's performance on four benchmark datasets, showing considerable performance gains over the existing state-of-the-art (SOTA) methods.",,"Comprehensive Multi-Modal Interactions for Referring Image Segmentation. 
We investigate Referring Image Segmentation (RIS), which outputs a segmentation map corresponding to the natural language description. Addressing RIS efficiently requires considering the interactions happening across visual and linguistic modalities and the interactions within each modality. Existing methods are limited because they either compute different forms of interactions sequentially (leading to error propagation) or ignore intramodal interactions. We address this limitation by performing all three interactions simultaneously through a Synchronous Multi-Modal Fusion Module (SFM). Moreover, to produce refined segmentation masks, we propose a novel Hierarchical Cross-Modal Aggregation Module (HCAM), where linguistic features facilitate the exchange of contextual information across the visual hierarchy. We present thorough ablation studies and validate our approach's performance on four benchmark datasets, showing considerable performance gains over the existing state-of-the-art (SOTA) methods.",2022
faraj-etal-2021-sarcasmdet,https://aclanthology.org/2021.wanlp-1.44,0,,,,,,,"SarcasmDet at Sarcasm Detection Task 2021 in Arabic using AraBERT Pretrained Model. This paper presents one of the top five winning solutions for the Shared Task on Sarcasm and Sentiment Detection in Arabic (sub-task1 Sarcasm Detection). The goal of the sub-task is to identify whether a tweet is sarcastic or not. Our solution has been developed using ensemble technique with AraBERT pre-trained model. This paper describes the architecture of the submitted solution in the shared task. It also provides in detail the experiments and the hyperparameters tuning that lead to this outperforming result. Besides, the paper discusses and analyzes the results by comparing all the models that we have trained or tested to build a robust model in a table design. Our model is ranked fifth out of 27 teams with an F1-score of 0.5989 of the sarcastic class. It is worth mentioning that our model achieved the highest accuracy score of 0.7830 in this competition.",{S}arcasm{D}et at Sarcasm Detection Task 2021 in {A}rabic using {A}ra{BERT} Pretrained Model,"This paper presents one of the top five winning solutions for the Shared Task on Sarcasm and Sentiment Detection in Arabic (sub-task1 Sarcasm Detection). The goal of the sub-task is to identify whether a tweet is sarcastic or not. Our solution has been developed using ensemble technique with AraBERT pre-trained model. This paper describes the architecture of the submitted solution in the shared task. It also provides in detail the experiments and the hyperparameters tuning that lead to this outperforming result. Besides, the paper discusses and analyzes the results by comparing all the models that we have trained or tested to build a robust model in a table design. Our model is ranked fifth out of 27 teams with an F1-score of 0.5989 of the sarcastic class. It is worth mentioning that our model achieved the highest accuracy score of 0.7830 in this competition.",SarcasmDet at Sarcasm Detection Task 2021 in Arabic using AraBERT Pretrained Model,"This paper presents one of the top five winning solutions for the Shared Task on Sarcasm and Sentiment Detection in Arabic (sub-task1 Sarcasm Detection). The goal of the sub-task is to identify whether a tweet is sarcastic or not. Our solution has been developed using ensemble technique with AraBERT pre-trained model. This paper describes the architecture of the submitted solution in the shared task. It also provides in detail the experiments and the hyperparameters tuning that lead to this outperforming result. Besides, the paper discusses and analyzes the results by comparing all the models that we have trained or tested to build a robust model in a table design. Our model is ranked fifth out of 27 teams with an F1-score of 0.5989 of the sarcastic class. It is worth mentioning that our model achieved the highest accuracy score of 0.7830 in this competition.",,"SarcasmDet at Sarcasm Detection Task 2021 in Arabic using AraBERT Pretrained Model. This paper presents one of the top five winning solutions for the Shared Task on Sarcasm and Sentiment Detection in Arabic (sub-task1 Sarcasm Detection). The goal of the sub-task is to identify whether a tweet is sarcastic or not. Our solution has been developed using ensemble technique with AraBERT pre-trained model. This paper describes the architecture of the submitted solution in the shared task. 
It also provides in detail the experiments and the hyperparameters tuning that lead to this outperforming result. Besides, the paper discusses and analyzes the results by comparing all the models that we have trained or tested to build a robust model in a table design. Our model is ranked fifth out of 27 teams with an F1-score of 0.5989 of the sarcastic class. It is worth mentioning that our model achieved the highest accuracy score of 0.7830 in this competition.",2021
chang-etal-2014-semantic-frame,https://aclanthology.org/Y14-1011,0,,,,,,,"Semantic Frame-based Statistical Approach for Topic Detection. We propose a statistical frame-based approach (FBA) for natural language processing, and demonstrate its advantage over traditional machine learning methods by using topic detection as a case study. FBA perceives and identifies semantic knowledge in a more general manner by collecting important linguistic patterns within documents through a unique flexible matching scheme that allows word insertion, deletion and substitution (IDS) to capture linguistic structures within the text. In addition, FBA can also overcome major issues of the rule-based approach by reducing human effort through its highly automated pattern generation and summarization. Using Yahoo! Chinese news corpus containing about 140,000 news articles, we provide a comprehensive performance evaluation that demonstrates the effectiveness of FBA in detecting the topic of a document by exploiting the semantic association and the context within the text. Moreover, it outperforms common topic models like Naïve Bayes, Vector Space Model, and LDA-SVM. On the other hand, there are several machine learning-based approaches. For instance, Nallapati et al. (2004) attempted to find characteristics of topics by clustering keywords using statistical similarity. The clusters are then connected chronologically to form a time-line of the topic. Furthermore, many previous methods treated topic detection as a supervised classification problem (Blei et al., 2003; Zhang and Wang, 2010). These approaches can achieve substantial performance without much human involvement. However, to manifest topic as",Semantic Frame-based Statistical Approach for Topic Detection,"We propose a statistical frame-based approach (FBA) for natural language processing, and demonstrate its advantage over traditional machine learning methods by using topic detection as a case study. FBA perceives and identifies semantic knowledge in a more general manner by collecting important linguistic patterns within documents through a unique flexible matching scheme that allows word insertion, deletion and substitution (IDS) to capture linguistic structures within the text. In addition, FBA can also overcome major issues of the rule-based approach by reducing human effort through its highly automated pattern generation and summarization. Using Yahoo! Chinese news corpus containing about 140,000 news articles, we provide a comprehensive performance evaluation that demonstrates the effectiveness of FBA in detecting the topic of a document by exploiting the semantic association and the context within the text. Moreover, it outperforms common topic models like Naïve Bayes, Vector Space Model, and LDA-SVM. On the other hand, there are several machine learning-based approaches. For instance, Nallapati et al. (2004) attempted to find characteristics of topics by clustering keywords using statistical similarity. The clusters are then connected chronologically to form a time-line of the topic. Furthermore, many previous methods treated topic detection as a supervised classification problem (Blei et al., 2003; Zhang and Wang, 2010). These approaches can achieve substantial performance without much human involvement. 
However, to manifest topic as",Semantic Frame-based Statistical Approach for Topic Detection,"We propose a statistical frame-based approach (FBA) for natural language processing, and demonstrate its advantage over traditional machine learning methods by using topic detection as a case study. FBA perceives and identifies semantic knowledge in a more general manner by collecting important linguistic patterns within documents through a unique flexible matching scheme that allows word insertion, deletion and substitution (IDS) to capture linguistic structures within the text. In addition, FBA can also overcome major issues of the rule-based approach by reducing human effort through its highly automated pattern generation and summarization. Using Yahoo! Chinese news corpus containing about 140,000 news articles, we provide a comprehensive performance evaluation that demonstrates the effectiveness of FBA in detecting the topic of a document by exploiting the semantic association and the context within the text. Moreover, it outperforms common topic models like Naïve Bayes, Vector Space Model, and LDA-SVM. On the other hand, there are several machine learning-based approaches. For instance, Nallapati et al. (2004) attempted to find characteristics of topics by clustering keywords using statistical similarity. The clusters are then connected chronologically to form a time-line of the topic. Furthermore, many previous methods treated topic detection as a supervised classification problem (Blei et al., 2003; Zhang and Wang, 2010). These approaches can achieve substantial performance without much human involvement. However, to manifest topic as","This study is conducted under the NSC 102-3114-Y-307-026 ""A Research on Social Influence and Decision Support Analytics"" of the Institute for Information Industry which is subsidized by the National Science Council.","Semantic Frame-based Statistical Approach for Topic Detection. We propose a statistical frame-based approach (FBA) for natural language processing, and demonstrate its advantage over traditional machine learning methods by using topic detection as a case study. FBA perceives and identifies semantic knowledge in a more general manner by collecting important linguistic patterns within documents through a unique flexible matching scheme that allows word insertion, deletion and substitution (IDS) to capture linguistic structures within the text. In addition, FBA can also overcome major issues of the rule-based approach by reducing human effort through its highly automated pattern generation and summarization. Using Yahoo! Chinese news corpus containing about 140,000 news articles, we provide a comprehensive performance evaluation that demonstrates the effectiveness of FBA in detecting the topic of a document by exploiting the semantic association and the context within the text. Moreover, it outperforms common topic models like Naïve Bayes, Vector Space Model, and LDA-SVM. On the other hand, there are several machine learning-based approaches. For instance, Nallapati et al. (2004) attempted to find characteristics of topics by clustering keywords using statistical similarity. The clusters are then connected chronologically to form a time-line of the topic. Furthermore, many previous methods treated topic detection as a supervised classification problem (Blei et al., 2003; Zhang and Wang, 2010). These approaches can achieve substantial performance without much human involvement. However, to manifest topic as",2014
ling-etal-2015-design,https://aclanthology.org/Q15-1023,0,,,,,,,"Design Challenges for Entity Linking. Recent research on entity linking (EL) has introduced a plethora of promising techniques, ranging from deep neural networks to joint inference. But despite numerous papers there is surprisingly little understanding of the state of the art in EL. We attack this confusion by analyzing differences between several versions of the EL problem and presenting a simple yet effective, modular, unsupervised system, called VINCULUM, for entity linking. We conduct an extensive evaluation on nine data sets, comparing VINCULUM with two state-of-the-art systems, and elucidate key aspects of the system that include mention extraction, candidate generation, entity type prediction, entity coreference, and coherence.",Design Challenges for Entity Linking,"Recent research on entity linking (EL) has introduced a plethora of promising techniques, ranging from deep neural networks to joint inference. But despite numerous papers there is surprisingly little understanding of the state of the art in EL. We attack this confusion by analyzing differences between several versions of the EL problem and presenting a simple yet effective, modular, unsupervised system, called VINCULUM, for entity linking. We conduct an extensive evaluation on nine data sets, comparing VINCULUM with two state-of-the-art systems, and elucidate key aspects of the system that include mention extraction, candidate generation, entity type prediction, entity coreference, and coherence.",Design Challenges for Entity Linking,"Recent research on entity linking (EL) has introduced a plethora of promising techniques, ranging from deep neural networks to joint inference. But despite numerous papers there is surprisingly little understanding of the state of the art in EL. We attack this confusion by analyzing differences between several versions of the EL problem and presenting a simple yet effective, modular, unsupervised system, called VINCULUM, for entity linking. We conduct an extensive evaluation on nine data sets, comparing VINCULUM with two state-of-the-art systems, and elucidate key aspects of the system that include mention extraction, candidate generation, entity type prediction, entity coreference, and coherence.","The authors thank Luke Zettlemoyer, Tony Fader, Kenton Lee, Mark Yatskar for constructive suggestions on an early draft and all members of the LoudLab group and the LIL group for helpful discussions. We also thank the action editor and the anonymous reviewers for valuable comments. This work is supported in part by the Air Force Research Laboratory (AFRL) under prime contract no. FA8750-13-2-0019, an ONR grant N00014-12-1-0211, a WRF / TJ Cable Professorship, a gift from Google, an ARO grant number W911NF-13-1-0246, and by TerraSwarm, one of six centers of STARnet, a Semiconductor Research Corporation program sponsored by MARCO and DARPA. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the view of DARPA, AFRL, or the US government.","Design Challenges for Entity Linking. Recent research on entity linking (EL) has introduced a plethora of promising techniques, ranging from deep neural networks to joint inference. But despite numerous papers there is surprisingly little understanding of the state of the art in EL. 
We attack this confusion by analyzing differences between several versions of the EL problem and presenting a simple yet effective, modular, unsupervised system, called VINCULUM, for entity linking. We conduct an extensive evaluation on nine data sets, comparing VINCULUM with two state-of-the-art systems, and elucidate key aspects of the system that include mention extraction, candidate generation, entity type prediction, entity coreference, and coherence.",2015
zhong-etal-2020-element,https://aclanthology.org/2020.emnlp-main.540,1,,,,peace_justice_and_strong_institutions,,,"An Element-aware Multi-representation Model for Law Article Prediction. Existing works have proved that using law articles as external knowledge can improve the performance of the Legal Judgment Prediction. However, they do not fully use law article information and most of the current work is only for single label samples. In this paper, we propose a Law Article Element-aware Multi-representation Model (LEMM), which can make full use of law article information and can be used for multi-label samples. The model uses the labeled elements of law articles to extract fact description features from multiple angles. It generates multiple representations of a fact for classification. Every label has a law-aware fact representation to encode more information. To capture the dependencies between law articles, the model also introduces a self-attention mechanism between multiple representations. Compared with baseline models like TopJudge, this model improves the accuracy of 5.84%, the macro F1 of 6.42%, and the micro F1 of 4.28%.",An Element-aware Multi-representation Model for Law Article Prediction,"Existing works have proved that using law articles as external knowledge can improve the performance of the Legal Judgment Prediction. However, they do not fully use law article information and most of the current work is only for single label samples. In this paper, we propose a Law Article Element-aware Multi-representation Model (LEMM), which can make full use of law article information and can be used for multi-label samples. The model uses the labeled elements of law articles to extract fact description features from multiple angles. It generates multiple representations of a fact for classification. Every label has a law-aware fact representation to encode more information. To capture the dependencies between law articles, the model also introduces a self-attention mechanism between multiple representations. Compared with baseline models like TopJudge, this model improves the accuracy of 5.84%, the macro F1 of 6.42%, and the micro F1 of 4.28%.",An Element-aware Multi-representation Model for Law Article Prediction,"Existing works have proved that using law articles as external knowledge can improve the performance of the Legal Judgment Prediction. However, they do not fully use law article information and most of the current work is only for single label samples. In this paper, we propose a Law Article Element-aware Multi-representation Model (LEMM), which can make full use of law article information and can be used for multi-label samples. The model uses the labeled elements of law articles to extract fact description features from multiple angles. It generates multiple representations of a fact for classification. Every label has a law-aware fact representation to encode more information. To capture the dependencies between law articles, the model also introduces a self-attention mechanism between multiple representations. Compared with baseline models like TopJudge, this model improves the accuracy of 5.84%, the macro F1 of 6.42%, and the micro F1 of 4.28%.",We thank all reviewers for the valuable comments. This work is supported by the National Natural Science Foundation of China (No. 61472191 and No. 61772278).,"An Element-aware Multi-representation Model for Law Article Prediction. 
Existing works have proved that using law articles as external knowledge can improve the performance of the Legal Judgment Prediction. However, they do not fully use law article information and most of the current work is only for single label samples. In this paper, we propose a Law Article Element-aware Multi-representation Model (LEMM), which can make full use of law article information and can be used for multi-label samples. The model uses the labeled elements of law articles to extract fact description features from multiple angles. It generates multiple representations of a fact for classification. Every label has a law-aware fact representation to encode more information. To capture the dependencies between law articles, the model also introduces a self-attention mechanism between multiple representations. Compared with baseline models like TopJudge, this model improves the accuracy of 5.84%, the macro F1 of 6.42%, and the micro F1 of 4.28%.",2020
goldwater-etal-2008-words,https://aclanthology.org/P08-1044,0,,,,,,,"Which Words Are Hard to Recognize? Prosodic, Lexical, and Disfluency Factors that Increase ASR Error Rates. Many factors are thought to increase the chances of misrecognizing a word in ASR, including low frequency, nearby disfluencies, short duration, and being at the start of a turn. However, few of these factors have been formally examined. This paper analyzes a variety of lexical, prosodic, and disfluency factors to determine which are likely to increase ASR error rates. Findings include the following. (1) For disfluencies, effects depend on the type of disfluency: errors increase by up to 15% (absolute) for words near fragments, but decrease by up to 7.2% (absolute) for words near repetitions. This decrease seems to be due to longer word duration. (2) For prosodic features, there are more errors for words with extreme values than words with typical values. (3) Although our results are based on output from a system with speaker adaptation, speaker differences are a major factor influencing error rates, and the effects of features such as frequency, pitch, and intensity may vary between speakers.","Which Words Are Hard to Recognize? Prosodic, Lexical, and Disfluency Factors that Increase {ASR} Error Rates","Many factors are thought to increase the chances of misrecognizing a word in ASR, including low frequency, nearby disfluencies, short duration, and being at the start of a turn. However, few of these factors have been formally examined. This paper analyzes a variety of lexical, prosodic, and disfluency factors to determine which are likely to increase ASR error rates. Findings include the following. (1) For disfluencies, effects depend on the type of disfluency: errors increase by up to 15% (absolute) for words near fragments, but decrease by up to 7.2% (absolute) for words near repetitions. This decrease seems to be due to longer word duration. (2) For prosodic features, there are more errors for words with extreme values than words with typical values. (3) Although our results are based on output from a system with speaker adaptation, speaker differences are a major factor influencing error rates, and the effects of features such as frequency, pitch, and intensity may vary between speakers.","Which Words Are Hard to Recognize? Prosodic, Lexical, and Disfluency Factors that Increase ASR Error Rates","Many factors are thought to increase the chances of misrecognizing a word in ASR, including low frequency, nearby disfluencies, short duration, and being at the start of a turn. However, few of these factors have been formally examined. This paper analyzes a variety of lexical, prosodic, and disfluency factors to determine which are likely to increase ASR error rates. Findings include the following. (1) For disfluencies, effects depend on the type of disfluency: errors increase by up to 15% (absolute) for words near fragments, but decrease by up to 7.2% (absolute) for words near repetitions. This decrease seems to be due to longer word duration. (2) For prosodic features, there are more errors for words with extreme values than words with typical values. (3) Although our results are based on output from a system with speaker adaptation, speaker differences are a major factor influencing error rates, and the effects of features such as frequency, pitch, and intensity may vary between speakers.","This work was supported by the Edinburgh-Stanford LINK and ONR MURI award N000140510388. 
We thank Andreas Stolcke for providing the ASR output, language model, and forced alignments used here, and Raghunandan Kumaran and Katrin Kirchhoff for earlier datasets and additional help.","Which Words Are Hard to Recognize? Prosodic, Lexical, and Disfluency Factors that Increase ASR Error Rates. Many factors are thought to increase the chances of misrecognizing a word in ASR, including low frequency, nearby disfluencies, short duration, and being at the start of a turn. However, few of these factors have been formally examined. This paper analyzes a variety of lexical, prosodic, and disfluency factors to determine which are likely to increase ASR error rates. Findings include the following. (1) For disfluencies, effects depend on the type of disfluency: errors increase by up to 15% (absolute) for words near fragments, but decrease by up to 7.2% (absolute) for words near repetitions. This decrease seems to be due to longer word duration. (2) For prosodic features, there are more errors for words with extreme values than words with typical values. (3) Although our results are based on output from a system with speaker adaptation, speaker differences are a major factor influencing error rates, and the effects of features such as frequency, pitch, and intensity may vary between speakers.",2008
mass-etal-2022-conversational,https://aclanthology.org/2022.dialdoc-1.7,0,,,,,,,"Conversational Search with Mixed-Initiative - Asking Good Clarification Questions backed-up by Passage Retrieval. We deal with the scenario of conversational search, where user queries are under-specified or ambiguous. This calls for a mixed-initiative setup. User-asks (queries) and system-answers, as well as system-asks (clarification questions) and user response, in order to clarify her information needs. We focus on the task of selecting the next clarification question, given the conversation context. Our method leverages passage retrieval from a background content to fine-tune two deep-learning models for ranking candidate clarification questions. We evaluated our method on two different use-cases. The first is an open domain conversational search in a large web collection. The second is a taskoriented customer-support setup. We show that our method performs well on both use-cases.",Conversational Search with Mixed-Initiative - Asking Good Clarification Questions backed-up by Passage Retrieval,"We deal with the scenario of conversational search, where user queries are under-specified or ambiguous. This calls for a mixed-initiative setup. User-asks (queries) and system-answers, as well as system-asks (clarification questions) and user response, in order to clarify her information needs. We focus on the task of selecting the next clarification question, given the conversation context. Our method leverages passage retrieval from a background content to fine-tune two deep-learning models for ranking candidate clarification questions. We evaluated our method on two different use-cases. The first is an open domain conversational search in a large web collection. The second is a taskoriented customer-support setup. We show that our method performs well on both use-cases.",Conversational Search with Mixed-Initiative - Asking Good Clarification Questions backed-up by Passage Retrieval,"We deal with the scenario of conversational search, where user queries are under-specified or ambiguous. This calls for a mixed-initiative setup. User-asks (queries) and system-answers, as well as system-asks (clarification questions) and user response, in order to clarify her information needs. We focus on the task of selecting the next clarification question, given the conversation context. Our method leverages passage retrieval from a background content to fine-tune two deep-learning models for ranking candidate clarification questions. We evaluated our method on two different use-cases. The first is an open domain conversational search in a large web collection. The second is a taskoriented customer-support setup. We show that our method performs well on both use-cases.",,"Conversational Search with Mixed-Initiative - Asking Good Clarification Questions backed-up by Passage Retrieval. We deal with the scenario of conversational search, where user queries are under-specified or ambiguous. This calls for a mixed-initiative setup. User-asks (queries) and system-answers, as well as system-asks (clarification questions) and user response, in order to clarify her information needs. We focus on the task of selecting the next clarification question, given the conversation context. Our method leverages passage retrieval from a background content to fine-tune two deep-learning models for ranking candidate clarification questions. We evaluated our method on two different use-cases. 
The first is an open domain conversational search in a large web collection. The second is a task-oriented customer-support setup. We show that our method performs well on both use-cases.",2022
cinkova-etal-2016-graded,https://aclanthology.org/L16-1137,0,,,,,,,"Graded and Word-Sense-Disambiguation Decisions in Corpus Pattern Analysis: a Pilot Study. We present a pilot analysis of a new linguistic resource, VPS-GradeUp (available at http://hdl.handle.net/11234/1-1585). The resource contains 11,400 graded human decisions on usage patterns of 29 English lexical verbs, randomly selected from the Pattern Dictionary of English Verbs (Hanks, 2000 2014). The selection was random and based on their frequency and the number of senses their lemmas have in PDEV. This data set has been created to observe the interannotator agreement on PDEV patterns produced using the Corpus Pattern Analysis (Hanks, 2013). Apart from the graded decisions, the data set also contains traditional Word-Sense-Disambiguation (WSD) labels. We analyze the associations between the graded annotation and WSD annotation. The results of the respective annotations do not correlate with the size of the usage pattern inventory for the respective verbs lemmas, which makes the data set worth further linguistic analysis.",Graded and Word-Sense-Disambiguation Decisions in Corpus Pattern Analysis: a Pilot Study,"We present a pilot analysis of a new linguistic resource, VPS-GradeUp (available at http://hdl.handle.net/11234/1-1585). The resource contains 11,400 graded human decisions on usage patterns of 29 English lexical verbs, randomly selected from the Pattern Dictionary of English Verbs (Hanks, 2000 2014). The selection was random and based on their frequency and the number of senses their lemmas have in PDEV. This data set has been created to observe the interannotator agreement on PDEV patterns produced using the Corpus Pattern Analysis (Hanks, 2013). Apart from the graded decisions, the data set also contains traditional Word-Sense-Disambiguation (WSD) labels. We analyze the associations between the graded annotation and WSD annotation. The results of the respective annotations do not correlate with the size of the usage pattern inventory for the respective verbs lemmas, which makes the data set worth further linguistic analysis.",Graded and Word-Sense-Disambiguation Decisions in Corpus Pattern Analysis: a Pilot Study,"We present a pilot analysis of a new linguistic resource, VPS-GradeUp (available at http://hdl.handle.net/11234/1-1585). The resource contains 11,400 graded human decisions on usage patterns of 29 English lexical verbs, randomly selected from the Pattern Dictionary of English Verbs (Hanks, 2000 2014). The selection was random and based on their frequency and the number of senses their lemmas have in PDEV. This data set has been created to observe the interannotator agreement on PDEV patterns produced using the Corpus Pattern Analysis (Hanks, 2013). Apart from the graded decisions, the data set also contains traditional Word-Sense-Disambiguation (WSD) labels. We analyze the associations between the graded annotation and WSD annotation. The results of the respective annotations do not correlate with the size of the usage pattern inventory for the respective verbs lemmas, which makes the data set worth further linguistic analysis.","This work has been using language resources developed and/or stored and/or distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth, and Sports of the Czech Republic (project LM2015071). For most implementation we used R (R Core Team, 2015).","Graded and Word-Sense-Disambiguation Decisions in Corpus Pattern Analysis: a Pilot Study. 
We present a pilot analysis of a new linguistic resource, VPS-GradeUp (available at http://hdl.handle.net/11234/1-1585). The resource contains 11,400 graded human decisions on usage patterns of 29 English lexical verbs, randomly selected from the Pattern Dictionary of English Verbs (Hanks, 2000 2014). The selection was random and based on their frequency and the number of senses their lemmas have in PDEV. This data set has been created to observe the interannotator agreement on PDEV patterns produced using the Corpus Pattern Analysis (Hanks, 2013). Apart from the graded decisions, the data set also contains traditional Word-Sense-Disambiguation (WSD) labels. We analyze the associations between the graded annotation and WSD annotation. The results of the respective annotations do not correlate with the size of the usage pattern inventory for the respective verbs lemmas, which makes the data set worth further linguistic analysis.",2016
steele-specia-2018-vis,https://aclanthology.org/N18-5015,0,,,,,,,"Vis-Eval Metric Viewer: A Visualisation Tool for Inspecting and Evaluating Metric Scores of Machine Translation Output. Machine Translation systems are usually evaluated and compared using automated evaluation metrics such as BLEU and METEOR to score the generated translations against human translations. However, the interaction with the output from the metrics is relatively limited and results are commonly a single score along with a few additional statistics. Whilst this may be enough for system comparison it does not provide much useful feedback or a means for inspecting translations and their respective scores. Vis-Eval Metric Viewer (VEMV) is a tool designed to provide visualisation of multiple evaluation scores so they can be easily interpreted by a user. VEMV takes in the source, reference, and hypothesis files as parameters, and scores the hypotheses using several popular evaluation metrics simultaneously. Scores are produced at both the sentence and dataset level and results are written locally to a series of HTML files that can be viewed on a web browser. The individual scored sentences can easily be inspected using powerful search and selection functions and results can be visualised with graphical representations of the scores and distributions.",Vis-Eval Metric Viewer: A Visualisation Tool for Inspecting and Evaluating Metric Scores of Machine Translation Output,"Machine Translation systems are usually evaluated and compared using automated evaluation metrics such as BLEU and METEOR to score the generated translations against human translations. However, the interaction with the output from the metrics is relatively limited and results are commonly a single score along with a few additional statistics. Whilst this may be enough for system comparison it does not provide much useful feedback or a means for inspecting translations and their respective scores. Vis-Eval Metric Viewer (VEMV) is a tool designed to provide visualisation of multiple evaluation scores so they can be easily interpreted by a user. VEMV takes in the source, reference, and hypothesis files as parameters, and scores the hypotheses using several popular evaluation metrics simultaneously. Scores are produced at both the sentence and dataset level and results are written locally to a series of HTML files that can be viewed on a web browser. The individual scored sentences can easily be inspected using powerful search and selection functions and results can be visualised with graphical representations of the scores and distributions.",Vis-Eval Metric Viewer: A Visualisation Tool for Inspecting and Evaluating Metric Scores of Machine Translation Output,"Machine Translation systems are usually evaluated and compared using automated evaluation metrics such as BLEU and METEOR to score the generated translations against human translations. However, the interaction with the output from the metrics is relatively limited and results are commonly a single score along with a few additional statistics. Whilst this may be enough for system comparison it does not provide much useful feedback or a means for inspecting translations and their respective scores. Vis-Eval Metric Viewer (VEMV) is a tool designed to provide visualisation of multiple evaluation scores so they can be easily interpreted by a user. VEMV takes in the source, reference, and hypothesis files as parameters, and scores the hypotheses using several popular evaluation metrics simultaneously. 
Scores are produced at both the sentence and dataset level and results are written locally to a series of HTML files that can be viewed on a web browser. The individual scored sentences can easily be inspected using powerful search and selection functions and results can be visualised with graphical representations of the scores and distributions.",,"Vis-Eval Metric Viewer: A Visualisation Tool for Inspecting and Evaluating Metric Scores of Machine Translation Output. Machine Translation systems are usually evaluated and compared using automated evaluation metrics such as BLEU and METEOR to score the generated translations against human translations. However, the interaction with the output from the metrics is relatively limited and results are commonly a single score along with a few additional statistics. Whilst this may be enough for system comparison it does not provide much useful feedback or a means for inspecting translations and their respective scores. Vis-Eval Metric Viewer (VEMV) is a tool designed to provide visualisation of multiple evaluation scores so they can be easily interpreted by a user. VEMV takes in the source, reference, and hypothesis files as parameters, and scores the hypotheses using several popular evaluation metrics simultaneously. Scores are produced at both the sentence and dataset level and results are written locally to a series of HTML files that can be viewed on a web browser. The individual scored sentences can easily be inspected using powerful search and selection functions and results can be visualised with graphical representations of the scores and distributions.",2018
diwersy-2014-varitext,https://aclanthology.org/W14-5306,0,,,,,,,The Varitext platform and the Corpus des vari\'et\'es nationales du fran\ccais (CoVaNa-FR) as resources for the study of French from a pluricentric perspective. This paper reports on the francophone corpus archive Corpus des variétés nationales du français (CoVaNa-FR) and the lexico-statistical platform Varitext. It outlines the design and data format of the samples as well as presenting various usage scenarios related to the applications featured by the platform's toolbox.,The Varitext platform and the Corpus des vari{\'e}t{\'e}s nationales du fran{\c{c}}ais ({C}o{V}a{N}a-{FR}) as resources for the study of {F}rench from a pluricentric perspective,This paper reports on the francophone corpus archive Corpus des variétés nationales du français (CoVaNa-FR) and the lexico-statistical platform Varitext. It outlines the design and data format of the samples as well as presenting various usage scenarios related to the applications featured by the platform's toolbox.,The Varitext platform and the Corpus des vari\'et\'es nationales du fran\ccais (CoVaNa-FR) as resources for the study of French from a pluricentric perspective,This paper reports on the francophone corpus archive Corpus des variétés nationales du français (CoVaNa-FR) and the lexico-statistical platform Varitext. It outlines the design and data format of the samples as well as presenting various usage scenarios related to the applications featured by the platform's toolbox.,The author wishes to thank the reviewers for their valuable comments which helped to clarify the main points of the paper.,The Varitext platform and the Corpus des vari\'et\'es nationales du fran\ccais (CoVaNa-FR) as resources for the study of French from a pluricentric perspective. This paper reports on the francophone corpus archive Corpus des variétés nationales du français (CoVaNa-FR) and the lexico-statistical platform Varitext. It outlines the design and data format of the samples as well as presenting various usage scenarios related to the applications featured by the platform's toolbox.,2014
eschenbach-etal-1989-remarks,https://aclanthology.org/E89-1022,0,,,,,,,"Remarks on Plural Anaphora. The interpretation of plural anaphora often requires the construction of complex reference objects (RefOs) out of RefOs which were formerly introduced not by plural terms but by a number of singular terms only. Often, several complex RefOs can be constructed, but only one of them is the preferred referent for the plural anaphor in question. As a means of explanation for preferred and non-preferred interpretations of plural anaphora, the concept of a Common Association Basis (CAB) for the potential atomic parts of a complex object is introduced in the following. CABs pose conceptual constraints on the formation of complex RefOs in general. We argue that in cases where a suitable CAB for the atomic RefOs introduced in the text exists, the corresponding complex RefO is constructed as early as in the course of processing the antecedent sentence and put into the focus domain of the discourse model. Thus, the search for a referent for a plural anaphor is constrained to a limited domain of RefOs according to the general principles of focus theory in NLP. Further principles of interpretation are suggested which guide the resolution of plural anaphora in cases where more than one suitable complex RefO is in focus. * The research on this paper was supported in part by the Deutsche Forschungsgemeinschaft (DFG) under grant Ha 1237/2-1. GAP is the acronym for ""Gruppierungs-und Abgrenzungsgrozesse beim Aufbau sprachlich angeregter mentaler Modelle"" (Processes of grouping and separation in the construction of mental models from texts), a research project carried out in the DFG-program ""Kognitive Linguistik"".",Remarks on Plural Anaphora,"The interpretation of plural anaphora often requires the construction of complex reference objects (RefOs) out of RefOs which were formerly introduced not by plural terms but by a number of singular terms only. Often, several complex RefOs can be constructed, but only one of them is the preferred referent for the plural anaphor in question. As a means of explanation for preferred and non-preferred interpretations of plural anaphora, the concept of a Common Association Basis (CAB) for the potential atomic parts of a complex object is introduced in the following. CABs pose conceptual constraints on the formation of complex RefOs in general. We argue that in cases where a suitable CAB for the atomic RefOs introduced in the text exists, the corresponding complex RefO is constructed as early as in the course of processing the antecedent sentence and put into the focus domain of the discourse model. Thus, the search for a referent for a plural anaphor is constrained to a limited domain of RefOs according to the general principles of focus theory in NLP. Further principles of interpretation are suggested which guide the resolution of plural anaphora in cases where more than one suitable complex RefO is in focus. * The research on this paper was supported in part by the Deutsche Forschungsgemeinschaft (DFG) under grant Ha 1237/2-1. 
GAP is the acronym for ""Gruppierungs-und Abgrenzungsgrozesse beim Aufbau sprachlich angeregter mentaler Modelle"" (Processes of grouping and separation in the construction of mental models from texts), a research project carried out in the DFG-program ""Kognitive Linguistik"".",Remarks on Plural Anaphora,"The interpretation of plural anaphora often requires the construction of complex reference objects (RefOs) out of RefOs which were formerly introduced not by plural terms but by a number of singular terms only. Often, several complex RefOs can be constructed, but only one of them is the preferred referent for the plural anaphor in question. As a means of explanation for preferred and non-preferred interpretations of plural anaphora, the concept of a Common Association Basis (CAB) for the potential atomic parts of a complex object is introduced in the following. CABs pose conceptual constraints on the formation of complex RefOs in general. We argue that in cases where a suitable CAB for the atomic RefOs introduced in the text exists, the corresponding complex RefO is constructed as early as in the course of processing the antecedent sentence and put into the focus domain of the discourse model. Thus, the search for a referent for a plural anaphor is constrained to a limited domain of RefOs according to the general principles of focus theory in NLP. Further principles of interpretation are suggested which guide the resolution of plural anaphora in cases where more than one suitable complex RefO is in focus. * The research on this paper was supported in part by the Deutsche Forschungsgemeinschaft (DFG) under grant Ha 1237/2-1. GAP is the acronym for ""Gruppierungs-und Abgrenzungsgrozesse beim Aufbau sprachlich angeregter mentaler Modelle"" (Processes of grouping and separation in the construction of mental models from texts), a research project carried out in the DFG-program ""Kognitive Linguistik"".","We thank Ewald Lang, Geoff Simmons (who also corrected our English) and Andrea Schopp for stimulating discussions and three anonymous referees from ACL for their comments on an earlier version of this paper.","Remarks on Plural Anaphora. The interpretation of plural anaphora often requires the construction of complex reference objects (RefOs) out of RefOs which were formerly introduced not by plural terms but by a number of singular terms only. Often, several complex RefOs can be constructed, but only one of them is the preferred referent for the plural anaphor in question. As a means of explanation for preferred and non-preferred interpretations of plural anaphora, the concept of a Common Association Basis (CAB) for the potential atomic parts of a complex object is introduced in the following. CABs pose conceptual constraints on the formation of complex RefOs in general. We argue that in cases where a suitable CAB for the atomic RefOs introduced in the text exists, the corresponding complex RefO is constructed as early as in the course of processing the antecedent sentence and put into the focus domain of the discourse model. Thus, the search for a referent for a plural anaphor is constrained to a limited domain of RefOs according to the general principles of focus theory in NLP. Further principles of interpretation are suggested which guide the resolution of plural anaphora in cases where more than one suitable complex RefO is in focus. * The research on this paper was supported in part by the Deutsche Forschungsgemeinschaft (DFG) under grant Ha 1237/2-1. 
GAP is the acronym for ""Gruppierungs-und Abgrenzungsgrozesse beim Aufbau sprachlich angeregter mentaler Modelle"" (Processes of grouping and separation in the construction of mental models from texts), a research project carried out in the DFG-program ""Kognitive Linguistik"".",1989
sjobergh-araki-2008-multi,http://www.lrec-conf.org/proceedings/lrec2008/pdf/133_paper.pdf,1,,,,hate_speech,,,"A Multi-Lingual Dictionary of Dirty Words. We present a multilingual dictionary of dirty words. We have collected about 3,200 dirty words in several languages and built a database of these. The language with the most words in the database is English, though there are several hundred dirty words in for instance Japanese too. Words are classified into their general meaning, such as what part of the human anatomy they refer to. Words can also be assigned a nuance label to indicate if it is a cute word used when speaking to children, a very rude word, a clinical word etc. The database is available online and will hopefully be enlarged over time. It has already been used in research on for instance automatic joke generation and emotion detection.",A Multi-Lingual Dictionary of Dirty Words,"We present a multilingual dictionary of dirty words. We have collected about 3,200 dirty words in several languages and built a database of these. The language with the most words in the database is English, though there are several hundred dirty words in for instance Japanese too. Words are classified into their general meaning, such as what part of the human anatomy they refer to. Words can also be assigned a nuance label to indicate if it is a cute word used when speaking to children, a very rude word, a clinical word etc. The database is available online and will hopefully be enlarged over time. It has already been used in research on for instance automatic joke generation and emotion detection.",A Multi-Lingual Dictionary of Dirty Words,"We present a multilingual dictionary of dirty words. We have collected about 3,200 dirty words in several languages and built a database of these. The language with the most words in the database is English, though there are several hundred dirty words in for instance Japanese too. Words are classified into their general meaning, such as what part of the human anatomy they refer to. Words can also be assigned a nuance label to indicate if it is a cute word used when speaking to children, a very rude word, a clinical word etc. The database is available online and will hopefully be enlarged over time. It has already been used in research on for instance automatic joke generation and emotion detection.","This work was done as part of a project funded by the Japanese Society for the Promotion of Science (JSPS). We would like to thank some of the anonymous reviewers for interesting suggestions for extending our work. We would also like to thank the volunteers who have contributed dirty words to the dictionary, especially Svetoslav Dankov who also helped out with various practical things.","A Multi-Lingual Dictionary of Dirty Words. We present a multilingual dictionary of dirty words. We have collected about 3,200 dirty words in several languages and built a database of these. The language with the most words in the database is English, though there are several hundred dirty words in for instance Japanese too. Words are classified into their general meaning, such as what part of the human anatomy they refer to. Words can also be assigned a nuance label to indicate if it is a cute word used when speaking to children, a very rude word, a clinical word etc. The database is available online and will hopefully be enlarged over time. It has already been used in research on for instance automatic joke generation and emotion detection.",2008
fernandez-etal-2007-referring,https://aclanthology.org/2007.sigdial-1.25,0,,,,,,,"Referring under Restricted Interactivity Conditions. We report results on how the collaborative process of referring in task-oriented dialogue is affected by the restrictive interactivity of a turn-taking policy commonly used in dialogue systems, namely push-to-talk. Our findings show that the restriction did not have a negative effect. Instead, the stricter control imposed at the interaction level favoured longer, more effective referring expressions, and induced a stricter and more structured performance at the level of the task.",Referring under Restricted Interactivity Conditions,"We report results on how the collaborative process of referring in task-oriented dialogue is affected by the restrictive interactivity of a turn-taking policy commonly used in dialogue systems, namely push-to-talk. Our findings show that the restriction did not have a negative effect. Instead, the stricter control imposed at the interaction level favoured longer, more effective referring expressions, and induced a stricter and more structured performance at the level of the task.",Referring under Restricted Interactivity Conditions,"We report results on how the collaborative process of referring in task-oriented dialogue is affected by the restrictive interactivity of a turn-taking policy commonly used in dialogue systems, namely push-to-talk. Our findings show that the restriction did not have a negative effect. Instead, the stricter control imposed at the interaction level favoured longer, more effective referring expressions, and induced a stricter and more structured performance at the level of the task.",Acknowledgements. This work was supported by the EU Marie Curie Programme (first author) and the DFG Emmy Noether Programme (last author). Thanks to the anonymous reviewers for their helpful comments.,"Referring under Restricted Interactivity Conditions. We report results on how the collaborative process of referring in task-oriented dialogue is affected by the restrictive interactivity of a turn-taking policy commonly used in dialogue systems, namely push-to-talk. Our findings show that the restriction did not have a negative effect. Instead, the stricter control imposed at the interaction level favoured longer, more effective referring expressions, and induced a stricter and more structured performance at the level of the task.",2007
suzuki-2004-phrase,http://www.lrec-conf.org/proceedings/lrec2004/pdf/272.pdf,0,,,,,,,"Phrase-Based Dependency Evaluation of a Japanese Parser. Extraction of predicate-argument structure is an important task that requires evaluation for many applications, yet annotated resources of predicate-argument structure are currently scarce, especially for languages other than English. This paper presents an evaluation of a Japanese parser based on dependency relations as proposed by Lin (1995, 1998), but using phrase dependency instead of word dependency. Phrase-based dependency analysis has been the preferred form of Japanese syntactic analysis, yet the use of annotated resources in this format has so far been limited to training and evaluation of dependency analyzers. We will show that (1) evaluation based on phrase-dependency is particularly well-suited for Japanese, even for an evaluation of phrase-structure grammar, and that (2) in spite of shortcomings, the proposed evaluation method has the advantage of utilizing currently available surface-based annotations in a way that is relevant to predicate-argument structure.",Phrase-Based Dependency Evaluation of a {J}apanese Parser,"Extraction of predicate-argument structure is an important task that requires evaluation for many applications, yet annotated resources of predicate-argument structure are currently scarce, especially for languages other than English. This paper presents an evaluation of a Japanese parser based on dependency relations as proposed by Lin (1995, 1998), but using phrase dependency instead of word dependency. Phrase-based dependency analysis has been the preferred form of Japanese syntactic analysis, yet the use of annotated resources in this format has so far been limited to training and evaluation of dependency analyzers. We will show that (1) evaluation based on phrase-dependency is particularly well-suited for Japanese, even for an evaluation of phrase-structure grammar, and that (2) in spite of shortcomings, the proposed evaluation method has the advantage of utilizing currently available surface-based annotations in a way that is relevant to predicate-argument structure.",Phrase-Based Dependency Evaluation of a Japanese Parser,"Extraction of predicate-argument structure is an important task that requires evaluation for many applications, yet annotated resources of predicate-argument structure are currently scarce, especially for languages other than English. This paper presents an evaluation of a Japanese parser based on dependency relations as proposed by Lin (1995, 1998), but using phrase dependency instead of word dependency. Phrase-based dependency analysis has been the preferred form of Japanese syntactic analysis, yet the use of annotated resources in this format has so far been limited to training and evaluation of dependency analyzers. We will show that (1) evaluation based on phrase-dependency is particularly well-suited for Japanese, even for an evaluation of phrase-structure grammar, and that (2) in spite of shortcomings, the proposed evaluation method has the advantage of utilizing currently available surface-based annotations in a way that is relevant to predicate-argument structure.",I would like to thank Mari Brunson for producing the KCstyle annotation for various data sets for our experiments.,"Phrase-Based Dependency Evaluation of a Japanese Parser. 
Extraction of predicate-argument structure is an important task that requires evaluation for many applications, yet annotated resources of predicate-argument structure are currently scarce, especially for languages other than English. This paper presents an evaluation of a Japanese parser based on dependency relations as proposed by Lin (1995, 1998), but using phrase dependency instead of word dependency. Phrase-based dependency analysis has been the preferred form of Japanese syntactic analysis, yet the use of annotated resources in this format has so far been limited to training and evaluation of dependency analyzers. We will show that (1) evaluation based on phrase-dependency is particularly well-suited for Japanese, even for an evaluation of phrase-structure grammar, and that (2) in spite of shortcomings, the proposed evaluation method has the advantage of utilizing currently available surface-based annotations in a way that is relevant to predicate-argument structure.",2004
abu-jbara-etal-2011-towards,https://aclanthology.org/P11-2043,0,,,,,,,"Towards Style Transformation from Written-Style to Audio-Style. In this paper, we address the problem of optimizing the style of textual content to make it more suitable to being listened to by a user as opposed to being read. We study the differences between the written style and the audio style by consulting the linguistics and journalism literatures. Guided by this study, we suggest a number of linguistic features to distinguish between the two styles. We show the correctness of our features and the impact of style transformation on the user experience through statistical analysis, a style classification task, and a user study.",Towards Style Transformation from Written-Style to Audio-Style,"In this paper, we address the problem of optimizing the style of textual content to make it more suitable to being listened to by a user as opposed to being read. We study the differences between the written style and the audio style by consulting the linguistics and journalism literatures. Guided by this study, we suggest a number of linguistic features to distinguish between the two styles. We show the correctness of our features and the impact of style transformation on the user experience through statistical analysis, a style classification task, and a user study.",Towards Style Transformation from Written-Style to Audio-Style,"In this paper, we address the problem of optimizing the style of textual content to make it more suitable to being listened to by a user as opposed to being read. We study the differences between the written style and the audio style by consulting the linguistics and journalism literatures. Guided by this study, we suggest a number of linguistic features to distinguish between the two styles. We show the correctness of our features and the impact of style transformation on the user experience through statistical analysis, a style classification task, and a user study.",,"Towards Style Transformation from Written-Style to Audio-Style. In this paper, we address the problem of optimizing the style of textual content to make it more suitable to being listened to by a user as opposed to being read. We study the differences between the written style and the audio style by consulting the linguistics and journalism literatures. Guided by this study, we suggest a number of linguistic features to distinguish between the two styles. We show the correctness of our features and the impact of style transformation on the user experience through statistical analysis, a style classification task, and a user study.",2011
sperber-etal-2019-attention,https://aclanthology.org/Q19-1020,0,,,,,,,"Attention-Passing Models for Robust and Data-Efficient End-to-End Speech Translation. Speech translation has traditionally been approached through cascaded models consisting of a speech recognizer trained on a corpus of transcribed speech, and a machine translation system trained on parallel texts. Several recent works have shown the feasibility of collapsing the cascade into a single, direct model that can be trained in an end-to-end fashion on a corpus of translated speech. However, experiments are inconclusive on whether the cascade or the direct model is stronger, and have only been conducted under the unrealistic assumption that both are trained on equal amounts of data, ignoring other available speech recognition and machine translation corpora. In this paper, we demonstrate that direct speech translation models require more data to perform well than cascaded models, and although they allow including auxiliary data through multi-task training, they are poor at exploiting such data, putting them at a severe disadvantage. As a remedy, we propose the use of endto-end trainable models with two attention mechanisms, the first establishing source speech to source text alignments, the second modeling source to target text alignment. We show that such models naturally decompose into multitask-trainable recognition and translation tasks and propose an attention-passing technique that alleviates error propagation issues in a previous formulation of a model with two attention stages. Our proposed model outperforms all examined baselines and is able to exploit auxiliary training data much more effectively than direct attentional models.",Attention-Passing Models for Robust and Data-Efficient End-to-End Speech Translation,"Speech translation has traditionally been approached through cascaded models consisting of a speech recognizer trained on a corpus of transcribed speech, and a machine translation system trained on parallel texts. Several recent works have shown the feasibility of collapsing the cascade into a single, direct model that can be trained in an end-to-end fashion on a corpus of translated speech. However, experiments are inconclusive on whether the cascade or the direct model is stronger, and have only been conducted under the unrealistic assumption that both are trained on equal amounts of data, ignoring other available speech recognition and machine translation corpora. In this paper, we demonstrate that direct speech translation models require more data to perform well than cascaded models, and although they allow including auxiliary data through multi-task training, they are poor at exploiting such data, putting them at a severe disadvantage. As a remedy, we propose the use of endto-end trainable models with two attention mechanisms, the first establishing source speech to source text alignments, the second modeling source to target text alignment. We show that such models naturally decompose into multitask-trainable recognition and translation tasks and propose an attention-passing technique that alleviates error propagation issues in a previous formulation of a model with two attention stages. 
Our proposed model outperforms all examined baselines and is able to exploit auxiliary training data much more effectively than direct attentional models.",Attention-Passing Models for Robust and Data-Efficient End-to-End Speech Translation,"Speech translation has traditionally been approached through cascaded models consisting of a speech recognizer trained on a corpus of transcribed speech, and a machine translation system trained on parallel texts. Several recent works have shown the feasibility of collapsing the cascade into a single, direct model that can be trained in an end-to-end fashion on a corpus of translated speech. However, experiments are inconclusive on whether the cascade or the direct model is stronger, and have only been conducted under the unrealistic assumption that both are trained on equal amounts of data, ignoring other available speech recognition and machine translation corpora. In this paper, we demonstrate that direct speech translation models require more data to perform well than cascaded models, and although they allow including auxiliary data through multi-task training, they are poor at exploiting such data, putting them at a severe disadvantage. As a remedy, we propose the use of endto-end trainable models with two attention mechanisms, the first establishing source speech to source text alignments, the second modeling source to target text alignment. We show that such models naturally decompose into multitask-trainable recognition and translation tasks and propose an attention-passing technique that alleviates error propagation issues in a previous formulation of a model with two attention stages. Our proposed model outperforms all examined baselines and is able to exploit auxiliary training data much more effectively than direct attentional models.","We thank Adam Lopez, Stefan Constantin, and the anonymous reviewers for their helpful comments. The work leading to these results has received funding from the European Union under grant agreement no. 825460.","Attention-Passing Models for Robust and Data-Efficient End-to-End Speech Translation. Speech translation has traditionally been approached through cascaded models consisting of a speech recognizer trained on a corpus of transcribed speech, and a machine translation system trained on parallel texts. Several recent works have shown the feasibility of collapsing the cascade into a single, direct model that can be trained in an end-to-end fashion on a corpus of translated speech. However, experiments are inconclusive on whether the cascade or the direct model is stronger, and have only been conducted under the unrealistic assumption that both are trained on equal amounts of data, ignoring other available speech recognition and machine translation corpora. In this paper, we demonstrate that direct speech translation models require more data to perform well than cascaded models, and although they allow including auxiliary data through multi-task training, they are poor at exploiting such data, putting them at a severe disadvantage. As a remedy, we propose the use of endto-end trainable models with two attention mechanisms, the first establishing source speech to source text alignments, the second modeling source to target text alignment. We show that such models naturally decompose into multitask-trainable recognition and translation tasks and propose an attention-passing technique that alleviates error propagation issues in a previous formulation of a model with two attention stages. 
Our proposed model outperforms all examined baselines and is able to exploit auxiliary training data much more effectively than direct attentional models.",2019
erbs-etal-2013-hierarchy,https://aclanthology.org/R13-1033,0,,,,,,,"Hierarchy Identification for Automatically Generating Table-of-Contents. A table-of-contents (TOC) provides a quick reference to a document's content and structure. We present the first study on identifying the hierarchical structure for automatically generating a TOC using only textual features instead of structural hints e.g. from HTML-tags. We create two new datasets to evaluate our approaches for hierarchy identification. We find that our algorithm performs on a level that is sufficient for a fully automated system. For documents without given segment titles, we extend our work by automatically generating segment titles. We make the datasets and our experimental framework publicly available in order to foster future research in TOC generation.",Hierarchy Identification for Automatically Generating Table-of-Contents,"A table-of-contents (TOC) provides a quick reference to a document's content and structure. We present the first study on identifying the hierarchical structure for automatically generating a TOC using only textual features instead of structural hints e.g. from HTML-tags. We create two new datasets to evaluate our approaches for hierarchy identification. We find that our algorithm performs on a level that is sufficient for a fully automated system. For documents without given segment titles, we extend our work by automatically generating segment titles. We make the datasets and our experimental framework publicly available in order to foster future research in TOC generation.",Hierarchy Identification for Automatically Generating Table-of-Contents,"A table-of-contents (TOC) provides a quick reference to a document's content and structure. We present the first study on identifying the hierarchical structure for automatically generating a TOC using only textual features instead of structural hints e.g. from HTML-tags. We create two new datasets to evaluate our approaches for hierarchy identification. We find that our algorithm performs on a level that is sufficient for a fully automated system. For documents without given segment titles, we extend our work by automatically generating segment titles. We make the datasets and our experimental framework publicly available in order to foster future research in TOC generation.",This work has been supported by the Volkswagen Foundation as part of the ,"Hierarchy Identification for Automatically Generating Table-of-Contents. A table-of-contents (TOC) provides a quick reference to a document's content and structure. We present the first study on identifying the hierarchical structure for automatically generating a TOC using only textual features instead of structural hints e.g. from HTML-tags. We create two new datasets to evaluate our approaches for hierarchy identification. We find that our algorithm performs on a level that is sufficient for a fully automated system. For documents without given segment titles, we extend our work by automatically generating segment titles. We make the datasets and our experimental framework publicly available in order to foster future research in TOC generation.",2013
li-wu-2016-multi,https://aclanthology.org/C16-1185,0,,,,,,,"Multi-level Gated Recurrent Neural Network for dialog act classification. In this paper we focus on the problem of dialog act (DA) labelling. This problem has recently attracted a lot of attention as it is an important sub-part of an automatic dialog model, which is currently in great demand. Traditional methods tend to see this problem as a sequence labelling task and deal with it by applying classifiers with rich features. Most of the current neural network models still omit the sequential information in the conversation. Henceforth, we apply a novel multi-level gated recurrent neural network (GRNN) with non-textual information to predict the DA tag. Our model not only utilizes textual information, but also makes use of non-textual and contextual information. In comparison, our model has shown significant improvement over previous works on the Switchboard Dialog Act (SWDA) data by over 6%.",Multi-level Gated Recurrent Neural Network for dialog act classification,"In this paper we focus on the problem of dialog act (DA) labelling. This problem has recently attracted a lot of attention as it is an important sub-part of an automatic dialog model, which is currently in great demand. Traditional methods tend to see this problem as a sequence labelling task and deal with it by applying classifiers with rich features. Most of the current neural network models still omit the sequential information in the conversation. Henceforth, we apply a novel multi-level gated recurrent neural network (GRNN) with non-textual information to predict the DA tag. Our model not only utilizes textual information, but also makes use of non-textual and contextual information. In comparison, our model has shown significant improvement over previous works on the Switchboard Dialog Act (SWDA) data by over 6%.",Multi-level Gated Recurrent Neural Network for dialog act classification,"In this paper we focus on the problem of dialog act (DA) labelling. This problem has recently attracted a lot of attention as it is an important sub-part of an automatic dialog model, which is currently in great demand. Traditional methods tend to see this problem as a sequence labelling task and deal with it by applying classifiers with rich features. Most of the current neural network models still omit the sequential information in the conversation. Henceforth, we apply a novel multi-level gated recurrent neural network (GRNN) with non-textual information to predict the DA tag. Our model not only utilizes textual information, but also makes use of non-textual and contextual information. In comparison, our model has shown significant improvement over previous works on the Switchboard Dialog Act (SWDA) data by over 6%.",,"Multi-level Gated Recurrent Neural Network for dialog act classification. In this paper we focus on the problem of dialog act (DA) labelling. This problem has recently attracted a lot of attention as it is an important sub-part of an automatic dialog model, which is currently in great demand. Traditional methods tend to see this problem as a sequence labelling task and deal with it by applying classifiers with rich features. Most of the current neural network models still omit the sequential information in the conversation. Henceforth, we apply a novel multi-level gated recurrent neural network (GRNN) with non-textual information to predict the DA tag. Our model not only utilizes textual information, but also makes use of non-textual and contextual information. 
In comparison, our model has shown significant improvement over previous works on the Switchboard Dialog Act (SWDA) data by over 6%.",2016
ravi-knight-2009-learning,https://aclanthology.org/N09-1005,0,,,,,,,Learning Phoneme Mappings for Transliteration without Parallel Data. We present a method for performing machine transliteration without any parallel resources. We frame the transliteration task as a decipherment problem and show that it is possible to learn cross-language phoneme mapping tables using only monolingual resources. We compare various methods and evaluate their accuracies on a standard name transliteration task.,Learning Phoneme Mappings for Transliteration without Parallel Data,We present a method for performing machine transliteration without any parallel resources. We frame the transliteration task as a decipherment problem and show that it is possible to learn cross-language phoneme mapping tables using only monolingual resources. We compare various methods and evaluate their accuracies on a standard name transliteration task.,Learning Phoneme Mappings for Transliteration without Parallel Data,We present a method for performing machine transliteration without any parallel resources. We frame the transliteration task as a decipherment problem and show that it is possible to learn cross-language phoneme mapping tables using only monolingual resources. We compare various methods and evaluate their accuracies on a standard name transliteration task.,This research was supported by the Defense Advanced Research Projects Agency under SRI International's prime Contract Number NBCHD040058.,Learning Phoneme Mappings for Transliteration without Parallel Data. We present a method for performing machine transliteration without any parallel resources. We frame the transliteration task as a decipherment problem and show that it is possible to learn cross-language phoneme mapping tables using only monolingual resources. We compare various methods and evaluate their accuracies on a standard name transliteration task.,2009
tokunaga-etal-2011-discriminative,https://aclanthology.org/W11-3502,0,,,,,,,"Discriminative Method for Japanese Kana-Kanji Input Method. The most popular type of input method in Japan is kana-kanji conversion, conversion from a string of kana to a mixed kanjikana string. However there is no study using discriminative methods like structured SVMs for kana-kanji conversion. One of the reasons is that learning a discriminative model from a large data set is often intractable. However, due to progress of recent researches, large scale learning of discriminative models become feasible in these days. In the present paper, we investigate whether discriminative methods such as structured SVMs can improve the accuracy of kana-kanji conversion. To the best of our knowledge, this is the first study comparing a generative model and a discriminative model for kana-kanji conversion. An experiment revealed that a discriminative method can improve the performance by approximately 3%.",Discriminative Method for {J}apanese Kana-Kanji Input Method,"The most popular type of input method in Japan is kana-kanji conversion, conversion from a string of kana to a mixed kanjikana string. However there is no study using discriminative methods like structured SVMs for kana-kanji conversion. One of the reasons is that learning a discriminative model from a large data set is often intractable. However, due to progress of recent researches, large scale learning of discriminative models become feasible in these days. In the present paper, we investigate whether discriminative methods such as structured SVMs can improve the accuracy of kana-kanji conversion. To the best of our knowledge, this is the first study comparing a generative model and a discriminative model for kana-kanji conversion. An experiment revealed that a discriminative method can improve the performance by approximately 3%.",Discriminative Method for Japanese Kana-Kanji Input Method,"The most popular type of input method in Japan is kana-kanji conversion, conversion from a string of kana to a mixed kanjikana string. However there is no study using discriminative methods like structured SVMs for kana-kanji conversion. One of the reasons is that learning a discriminative model from a large data set is often intractable. However, due to progress of recent researches, large scale learning of discriminative models become feasible in these days. In the present paper, we investigate whether discriminative methods such as structured SVMs can improve the accuracy of kana-kanji conversion. To the best of our knowledge, this is the first study comparing a generative model and a discriminative model for kana-kanji conversion. An experiment revealed that a discriminative method can improve the performance by approximately 3%.",,"Discriminative Method for Japanese Kana-Kanji Input Method. The most popular type of input method in Japan is kana-kanji conversion, conversion from a string of kana to a mixed kanjikana string. However there is no study using discriminative methods like structured SVMs for kana-kanji conversion. One of the reasons is that learning a discriminative model from a large data set is often intractable. However, due to progress of recent researches, large scale learning of discriminative models become feasible in these days. In the present paper, we investigate whether discriminative methods such as structured SVMs can improve the accuracy of kana-kanji conversion. 
To the best of our knowledge, this is the first study comparing a generative model and a discriminative model for kana-kanji conversion. An experiment revealed that a discriminative method can improve the performance by approximately 3%.",2011
andreas-klein-2015-log,https://aclanthology.org/N15-1027,0,,,,,,,"When and why are log-linear models self-normalizing?. Several techniques have recently been proposed for training ""self-normalized"" discriminative models. These attempt to find parameter settings for which unnormalized model scores approximate the true label probability. However, the theoretical properties of such techniques (and of self-normalization generally) have not been investigated. This paper examines the conditions under which we can expect self-normalization to work. We characterize a general class of distributions that admit self-normalization, and prove generalization bounds for procedures that minimize empirical normalizer variance. Motivated by these results, we describe a novel variant of an established procedure for training self-normalized models. The new procedure avoids computing normalizers for most training examples, and decreases training time by as much as factor of ten while preserving model quality.",When and why are log-linear models self-normalizing?,"Several techniques have recently been proposed for training ""self-normalized"" discriminative models. These attempt to find parameter settings for which unnormalized model scores approximate the true label probability. However, the theoretical properties of such techniques (and of self-normalization generally) have not been investigated. This paper examines the conditions under which we can expect self-normalization to work. We characterize a general class of distributions that admit self-normalization, and prove generalization bounds for procedures that minimize empirical normalizer variance. Motivated by these results, we describe a novel variant of an established procedure for training self-normalized models. The new procedure avoids computing normalizers for most training examples, and decreases training time by as much as factor of ten while preserving model quality.",When and why are log-linear models self-normalizing?,"Several techniques have recently been proposed for training ""self-normalized"" discriminative models. These attempt to find parameter settings for which unnormalized model scores approximate the true label probability. However, the theoretical properties of such techniques (and of self-normalization generally) have not been investigated. This paper examines the conditions under which we can expect self-normalization to work. We characterize a general class of distributions that admit self-normalization, and prove generalization bounds for procedures that minimize empirical normalizer variance. Motivated by these results, we describe a novel variant of an established procedure for training self-normalized models. The new procedure avoids computing normalizers for most training examples, and decreases training time by as much as factor of ten while preserving model quality.","The authors would like to thank Peter Bartlett, Robert Nishihara and Maxim Rabinovich for useful discussions. This work was partially supported by BBN under DARPA contract HR0011-12-C-0014. The first author is supported by a National Science Foundation Graduate Fellowship.","When and why are log-linear models self-normalizing?. Several techniques have recently been proposed for training ""self-normalized"" discriminative models. These attempt to find parameter settings for which unnormalized model scores approximate the true label probability. 
However, the theoretical properties of such techniques (and of self-normalization generally) have not been investigated. This paper examines the conditions under which we can expect self-normalization to work. We characterize a general class of distributions that admit self-normalization, and prove generalization bounds for procedures that minimize empirical normalizer variance. Motivated by these results, we describe a novel variant of an established procedure for training self-normalized models. The new procedure avoids computing normalizers for most training examples, and decreases training time by as much as a factor of ten while preserving model quality.",2015
ws-2002-coling,https://aclanthology.org/W02-1100,0,,,,,,,COLING-02: SEMANET: Building and Using Semantic Networks. ,{COLING}-02: {SEMANET}: Building and Using Semantic Networks,,COLING-02: SEMANET: Building and Using Semantic Networks,,,COLING-02: SEMANET: Building and Using Semantic Networks. ,2002
vestre-1991-algorithm,https://aclanthology.org/E91-1044,0,,,,,,,"An Algorithm for Generating Non-Redundant Quantifier Scopings. This paper describes an algorithm for generating quantifier scopings. The algorithm is designed to generate only logically non-redundant scopings and to partially order the scopings with a given :default scoping first. Removing logical redundancy is not only interesting per se, but also drastically reduces the processing time. The input and output formats are described through a few access and construction functions. Thus, the algorithm is interesting for a modular linguistic theory, which is flexible with respect to syntactic and semantic framework.",An Algorithm for Generating Non-Redundant Quantifier Scopings,"This paper describes an algorithm for generating quantifier scopings. The algorithm is designed to generate only logically non-redundant scopings and to partially order the scopings with a given :default scoping first. Removing logical redundancy is not only interesting per se, but also drastically reduces the processing time. The input and output formats are described through a few access and construction functions. Thus, the algorithm is interesting for a modular linguistic theory, which is flexible with respect to syntactic and semantic framework.",An Algorithm for Generating Non-Redundant Quantifier Scopings,"This paper describes an algorithm for generating quantifier scopings. The algorithm is designed to generate only logically non-redundant scopings and to partially order the scopings with a given :default scoping first. Removing logical redundancy is not only interesting per se, but also drastically reduces the processing time. The input and output formats are described through a few access and construction functions. Thus, the algorithm is interesting for a modular linguistic theory, which is flexible with respect to syntactic and semantic framework.",,"An Algorithm for Generating Non-Redundant Quantifier Scopings. This paper describes an algorithm for generating quantifier scopings. The algorithm is designed to generate only logically non-redundant scopings and to partially order the scopings with a given :default scoping first. Removing logical redundancy is not only interesting per se, but also drastically reduces the processing time. The input and output formats are described through a few access and construction functions. Thus, the algorithm is interesting for a modular linguistic theory, which is flexible with respect to syntactic and semantic framework.",1991
khan-etal-2013-generative,https://aclanthology.org/W13-5409,0,,,,,,,Generative Lexicon Theory and Linguistic Linked Open Data. In this paper we look at how Generative Lexicon theory can assist in providing a more thorough definition of word senses as links between items in a RDF-based lexicon and concepts in an ontology. We focus on the definition of lexical sense in lemon and show its limitations before defining a new model based on lemon and which we term lemonGL. This new model is an initial attempt at providing a way of structuring lexico-ontological resources as linked data in such a way as to allow a rich representation of word meaning (following the GL theory) while at the same time (attempting to) remain faithful to the separation between the lexicon and the ontology as recommended by the lemon model.,{G}enerative {L}exicon Theory and Linguistic Linked Open Data,In this paper we look at how Generative Lexicon theory can assist in providing a more thorough definition of word senses as links between items in a RDF-based lexicon and concepts in an ontology. We focus on the definition of lexical sense in lemon and show its limitations before defining a new model based on lemon and which we term lemonGL. This new model is an initial attempt at providing a way of structuring lexico-ontological resources as linked data in such a way as to allow a rich representation of word meaning (following the GL theory) while at the same time (attempting to) remain faithful to the separation between the lexicon and the ontology as recommended by the lemon model.,Generative Lexicon Theory and Linguistic Linked Open Data,In this paper we look at how Generative Lexicon theory can assist in providing a more thorough definition of word senses as links between items in a RDF-based lexicon and concepts in an ontology. We focus on the definition of lexical sense in lemon and show its limitations before defining a new model based on lemon and which we term lemonGL. This new model is an initial attempt at providing a way of structuring lexico-ontological resources as linked data in such a way as to allow a rich representation of word meaning (following the GL theory) while at the same time (attempting to) remain faithful to the separation between the lexicon and the ontology as recommended by the lemon model.,,Generative Lexicon Theory and Linguistic Linked Open Data. In this paper we look at how Generative Lexicon theory can assist in providing a more thorough definition of word senses as links between items in a RDF-based lexicon and concepts in an ontology. We focus on the definition of lexical sense in lemon and show its limitations before defining a new model based on lemon and which we term lemonGL. This new model is an initial attempt at providing a way of structuring lexico-ontological resources as linked data in such a way as to allow a rich representation of word meaning (following the GL theory) while at the same time (attempting to) remain faithful to the separation between the lexicon and the ontology as recommended by the lemon model.,2013
samardzic-etal-2015-automatic,https://aclanthology.org/W15-3710,0,,,,,,,"Automatic interlinear glossing as two-level sequence classification. Interlinear glossing is a type of annotation of morphosyntactic categories and crosslinguistic lexical correspondences that allows linguists to analyse sentences in languages that they do not necessarily speak. Automatising this annotation is necessary in order to provide glossed corpora big enough to be used for quantitative studies. In this paper, we present experiments on the automatic glossing of Chintang. We decompose the task of glossing into steps suitable for statistical processing. We first perform grammatical glossing as standard supervised part-of-speech tagging. We then add lexical glosses from a stand-off dictionary applying context disambiguation in a similar way to word lemmatisation. We obtain the highest accuracy score of 96% for grammatical and 94% for lexical glossing.",Automatic interlinear glossing as two-level sequence classification,"Interlinear glossing is a type of annotation of morphosyntactic categories and crosslinguistic lexical correspondences that allows linguists to analyse sentences in languages that they do not necessarily speak. Automatising this annotation is necessary in order to provide glossed corpora big enough to be used for quantitative studies. In this paper, we present experiments on the automatic glossing of Chintang. We decompose the task of glossing into steps suitable for statistical processing. We first perform grammatical glossing as standard supervised part-of-speech tagging. We then add lexical glosses from a stand-off dictionary applying context disambiguation in a similar way to word lemmatisation. We obtain the highest accuracy score of 96% for grammatical and 94% for lexical glossing.",Automatic interlinear glossing as two-level sequence classification,"Interlinear glossing is a type of annotation of morphosyntactic categories and crosslinguistic lexical correspondences that allows linguists to analyse sentences in languages that they do not necessarily speak. Automatising this annotation is necessary in order to provide glossed corpora big enough to be used for quantitative studies. In this paper, we present experiments on the automatic glossing of Chintang. We decompose the task of glossing into steps suitable for statistical processing. We first perform grammatical glossing as standard supervised part-of-speech tagging. We then add lexical glosses from a stand-off dictionary applying context disambiguation in a similar way to word lemmatisation. We obtain the highest accuracy score of 96% for grammatical and 94% for lexical glossing.",,"Automatic interlinear glossing as two-level sequence classification. Interlinear glossing is a type of annotation of morphosyntactic categories and crosslinguistic lexical correspondences that allows linguists to analyse sentences in languages that they do not necessarily speak. Automatising this annotation is necessary in order to provide glossed corpora big enough to be used for quantitative studies. In this paper, we present experiments on the automatic glossing of Chintang. We decompose the task of glossing into steps suitable for statistical processing. We first perform grammatical glossing as standard supervised part-of-speech tagging. We then add lexical glosses from a stand-off dictionary applying context disambiguation in a similar way to word lemmatisation. We obtain the highest accuracy score of 96% for grammatical and 94% for lexical glossing.",2015
belz-etal-2010-finding,https://aclanthology.org/W10-4237,0,,,,,,,"Finding Common Ground: Towards a Surface Realisation Shared Task. In many areas of NLP reuse of utility tools such as parsers and POS taggers is now common, but this is still rare in NLG. The subfield of surface realisation has perhaps come closest, but at present we still lack a basis on which different surface realisers could be compared, chiefly because of the wide variety of different input representations used by different realisers. This paper outlines an idea for a shared task in surface realisation, where inputs are provided in a common-ground representation formalism which participants map to the types of input required by their system. These inputs are derived from existing annotated corpora developed for language analysis (parsing etc.). Outputs (realisations) are evaluated by automatic comparison against the human-authored text in the corpora as well as by human assessors.",Finding Common Ground: Towards a Surface Realisation Shared Task,"In many areas of NLP reuse of utility tools such as parsers and POS taggers is now common, but this is still rare in NLG. The subfield of surface realisation has perhaps come closest, but at present we still lack a basis on which different surface realisers could be compared, chiefly because of the wide variety of different input representations used by different realisers. This paper outlines an idea for a shared task in surface realisation, where inputs are provided in a common-ground representation formalism which participants map to the types of input required by their system. These inputs are derived from existing annotated corpora developed for language analysis (parsing etc.). Outputs (realisations) are evaluated by automatic comparison against the human-authored text in the corpora as well as by human assessors.",Finding Common Ground: Towards a Surface Realisation Shared Task,"In many areas of NLP reuse of utility tools such as parsers and POS taggers is now common, but this is still rare in NLG. The subfield of surface realisation has perhaps come closest, but at present we still lack a basis on which different surface realisers could be compared, chiefly because of the wide variety of different input representations used by different realisers. This paper outlines an idea for a shared task in surface realisation, where inputs are provided in a common-ground representation formalism which participants map to the types of input required by their system. These inputs are derived from existing annotated corpora developed for language analysis (parsing etc.). Outputs (realisations) are evaluated by automatic comparison against the human-authored text in the corpora as well as by human assessors.",,"Finding Common Ground: Towards a Surface Realisation Shared Task. In many areas of NLP reuse of utility tools such as parsers and POS taggers is now common, but this is still rare in NLG. The subfield of surface realisation has perhaps come closest, but at present we still lack a basis on which different surface realisers could be compared, chiefly because of the wide variety of different input representations used by different realisers. This paper outlines an idea for a shared task in surface realisation, where inputs are provided in a common-ground representation formalism which participants map to the types of input required by their system. These inputs are derived from existing annotated corpora developed for language analysis (parsing etc.). 
Outputs (realisations) are evaluated by automatic comparison against the human-authored text in the corpora as well as by human assessors.",2010
satta-1992-recognition,https://aclanthology.org/P92-1012,0,,,,,,,"Recognition of Linear Context-Free Rewriting Systems. The class of linear context-free rewriting systems has been introduced as a generalization of a class of grammar formalisms known as mildly context-sensitive. The recognition problem for linear context-free rewriting languages is studied at length here, presenting evidence that, even in some restricted cases, it cannot be solved efficiently. This entails the existence of a gap between, for example, tree adjoining languages and the subclass of linear context-free rewriting languages that generalizes the former class; such a gap is attributed to ""crossing configurations"". A few other interesting consequences of the main result are discussed, that concern the recognition problem for linear context-free rewriting languages.",Recognition of Linear Context-Free Rewriting Systems,"The class of linear context-free rewriting systems has been introduced as a generalization of a class of grammar formalisms known as mildly context-sensitive. The recognition problem for linear context-free rewriting languages is studied at length here, presenting evidence that, even in some restricted cases, it cannot be solved efficiently. This entails the existence of a gap between, for example, tree adjoining languages and the subclass of linear context-free rewriting languages that generalizes the former class; such a gap is attributed to ""crossing configurations"". A few other interesting consequences of the main result are discussed, that concern the recognition problem for linear context-free rewriting languages.",Recognition of Linear Context-Free Rewriting Systems,"The class of linear context-free rewriting systems has been introduced as a generalization of a class of grammar formalisms known as mildly context-sensitive. The recognition problem for linear context-free rewriting languages is studied at length here, presenting evidence that, even in some restricted cases, it cannot be solved efficiently. This entails the existence of a gap between, for example, tree adjoining languages and the subclass of linear context-free rewriting languages that generalizes the former class; such a gap is attributed to ""crossing configurations"". A few other interesting consequences of the main result are discussed, that concern the recognition problem for linear context-free rewriting languages.",,"Recognition of Linear Context-Free Rewriting Systems. The class of linear context-free rewriting systems has been introduced as a generalization of a class of grammar formalisms known as mildly context-sensitive. The recognition problem for linear context-free rewriting languages is studied at length here, presenting evidence that, even in some restricted cases, it cannot be solved efficiently. This entails the existence of a gap between, for example, tree adjoining languages and the subclass of linear context-free rewriting languages that generalizes the former class; such a gap is attributed to ""crossing configurations"". A few other interesting consequences of the main result are discussed, that concern the recognition problem for linear context-free rewriting languages.",1992
kiso-etal-2011-hits,https://aclanthology.org/P11-2006,0,,,,,,,"HITS-based Seed Selection and Stop List Construction for Bootstrapping. In bootstrapping (seed set expansion), selecting good seeds and creating stop lists are two effective ways to reduce semantic drift, but these methods generally need human supervision. In this paper, we propose a graph-based approach to helping editors choose effective seeds and stop list instances, applicable to Pantel and Pennacchiotti's Espresso bootstrapping algorithm. The idea is to select seeds and create a stop list using the rankings of instances and patterns computed by Kleinberg's HITS algorithm. Experimental results on a variation of the lexical sample task show the effectiveness of our method.",{HITS}-based Seed Selection and Stop List Construction for Bootstrapping,"In bootstrapping (seed set expansion), selecting good seeds and creating stop lists are two effective ways to reduce semantic drift, but these methods generally need human supervision. In this paper, we propose a graph-based approach to helping editors choose effective seeds and stop list instances, applicable to Pantel and Pennacchiotti's Espresso bootstrapping algorithm. The idea is to select seeds and create a stop list using the rankings of instances and patterns computed by Kleinberg's HITS algorithm. Experimental results on a variation of the lexical sample task show the effectiveness of our method.",HITS-based Seed Selection and Stop List Construction for Bootstrapping,"In bootstrapping (seed set expansion), selecting good seeds and creating stop lists are two effective ways to reduce semantic drift, but these methods generally need human supervision. In this paper, we propose a graph-based approach to helping editors choose effective seeds and stop list instances, applicable to Pantel and Pennacchiotti's Espresso bootstrapping algorithm. The idea is to select seeds and create a stop list using the rankings of instances and patterns computed by Kleinberg's HITS algorithm. Experimental results on a variation of the lexical sample task show the effectiveness of our method.",We thank Masayuki Asahara and Kazuo Hara for helpful discussions and the anonymous reviewers for valuable comments. MS was partially supported by Kakenhi Grant-in-Aid for Scientific Research C 21500141.,"HITS-based Seed Selection and Stop List Construction for Bootstrapping. In bootstrapping (seed set expansion), selecting good seeds and creating stop lists are two effective ways to reduce semantic drift, but these methods generally need human supervision. In this paper, we propose a graph-based approach to helping editors choose effective seeds and stop list instances, applicable to Pantel and Pennacchiotti's Espresso bootstrapping algorithm. The idea is to select seeds and create a stop list using the rankings of instances and patterns computed by Kleinberg's HITS algorithm. Experimental results on a variation of the lexical sample task show the effectiveness of our method.",2011
olaussen-2011-evaluating,https://aclanthology.org/W11-4653,0,,,,,,,"Evaluating the speech quality of the Norwegian synthetic voice Brage. This document describes the method, results and conclusions from my master's thesis in Nordic studies. My aim was to assess the speech quality of the Norwegian Filibuster text-to-speech system with the synthetic voice Brage. The assessment was carried out with a survey and an intelligibility test at phoneme, word and sentence level. The evaluation criteria used in the study were intelligibility, naturalness, likeability, acceptance and suitability.",Evaluating the speech quality of the {N}orwegian synthetic voice Brage,"This document describes the method, results and conclusions from my master's thesis in Nordic studies. My aim was to assess the speech quality of the Norwegian Filibuster text-to-speech system with the synthetic voice Brage. The assessment was carried out with a survey and an intelligibility test at phoneme, word and sentence level. The evaluation criteria used in the study were intelligibility, naturalness, likeability, acceptance and suitability.",Evaluating the speech quality of the Norwegian synthetic voice Brage,"This document describes the method, results and conclusions from my master's thesis in Nordic studies. My aim was to assess the speech quality of the Norwegian Filibuster text-to-speech system with the synthetic voice Brage. The assessment was carried out with a survey and an intelligibility test at phoneme, word and sentence level. The evaluation criteria used in the study were intelligibility, naturalness, likeability, acceptance and suitability.",,"Evaluating the speech quality of the Norwegian synthetic voice Brage. This document describes the method, results and conclusions from my master's thesis in Nordic studies. My aim was to assess the speech quality of the Norwegian Filibuster text-to-speech system with the synthetic voice Brage. The assessment was carried out with a survey and an intelligibility test at phoneme, word and sentence level. The evaluation criteria used in the study were intelligibility, naturalness, likeability, acceptance and suitability.",2011
dligach-etal-2017-neural,https://aclanthology.org/E17-2118,0,,,,,,,"Neural Temporal Relation Extraction. We experiment with neural architectures for temporal relation extraction and establish a new state-of-the-art for several scenarios. We find that neural models with only tokens as input outperform state-of-the-art hand-engineered feature-based models, that convolutional neural networks outperform LSTM models, and that encoding relation arguments with XML tags outperforms a traditional position-based encoding.",Neural Temporal Relation Extraction,"We experiment with neural architectures for temporal relation extraction and establish a new state-of-the-art for several scenarios. We find that neural models with only tokens as input outperform state-of-the-art hand-engineered feature-based models, that convolutional neural networks outperform LSTM models, and that encoding relation arguments with XML tags outperforms a traditional position-based encoding.",Neural Temporal Relation Extraction,"We experiment with neural architectures for temporal relation extraction and establish a new state-of-the-art for several scenarios. We find that neural models with only tokens as input outperform state-of-the-art hand-engineered feature-based models, that convolutional neural networks outperform LSTM models, and that encoding relation arguments with XML tags outperforms a traditional position-based encoding.",This work was partially funded by the US National Institutes of Health (U24CA184407; R01 LM 10090; R01GM114355). The Titan X GPU used for this research was donated by the NVIDIA Corporation.,"Neural Temporal Relation Extraction. We experiment with neural architectures for temporal relation extraction and establish a new state-of-the-art for several scenarios. We find that neural models with only tokens as input outperform state-of-the-art hand-engineered feature-based models, that convolutional neural networks outperform LSTM models, and that encoding relation arguments with XML tags outperforms a traditional position-based encoding.",2017
meshgi-etal-2022-uncertainty,https://aclanthology.org/2022.wassa-1.8,0,,,,,,,"Uncertainty Regularized Multi-Task Learning. By sharing parameters and providing task-independent shared features, multi-task deep neural networks are considered one of the most interesting ways for parallel learning from different tasks and domains. However, fine-tuning on one task may compromise the performance of other tasks or restrict the generalization of the shared learned features. To address this issue, we propose to use task uncertainty to gauge the effect of the shared feature changes on other tasks and prevent the model from overfitting or over-generalizing. We conducted an experiment on 16 text classification tasks, and findings showed that the proposed method consistently improves the performance of the baseline, facilitates the knowledge transfer of learned features to unseen data, and provides explicit control over the generalization of the shared model.",Uncertainty Regularized Multi-Task Learning,"By sharing parameters and providing task-independent shared features, multi-task deep neural networks are considered one of the most interesting ways for parallel learning from different tasks and domains. However, fine-tuning on one task may compromise the performance of other tasks or restrict the generalization of the shared learned features. To address this issue, we propose to use task uncertainty to gauge the effect of the shared feature changes on other tasks and prevent the model from overfitting or over-generalizing. We conducted an experiment on 16 text classification tasks, and findings showed that the proposed method consistently improves the performance of the baseline, facilitates the knowledge transfer of learned features to unseen data, and provides explicit control over the generalization of the shared model.",Uncertainty Regularized Multi-Task Learning,"By sharing parameters and providing task-independent shared features, multi-task deep neural networks are considered one of the most interesting ways for parallel learning from different tasks and domains. However, fine-tuning on one task may compromise the performance of other tasks or restrict the generalization of the shared learned features. To address this issue, we propose to use task uncertainty to gauge the effect of the shared feature changes on other tasks and prevent the model from overfitting or over-generalizing. We conducted an experiment on 16 text classification tasks, and findings showed that the proposed method consistently improves the performance of the baseline, facilitates the knowledge transfer of learned features to unseen data, and provides explicit control over the generalization of the shared model.",,"Uncertainty Regularized Multi-Task Learning. By sharing parameters and providing task-independent shared features, multi-task deep neural networks are considered one of the most interesting ways for parallel learning from different tasks and domains. However, fine-tuning on one task may compromise the performance of other tasks or restrict the generalization of the shared learned features. To address this issue, we propose to use task uncertainty to gauge the effect of the shared feature changes on other tasks and prevent the model from overfitting or over-generalizing. 
We conducted an experiment on 16 text classification tasks, and findings showed that the proposed method consistently improves the performance of the baseline, facilitates the knowledge transfer of learned features to unseen data, and provides explicit control over the generalization of the shared model.",2022
gaspari-2006-added-value,https://aclanthology.org/2006.amta-users.3,1,,,,decent_work_and_economy,,,"The Added Value of Free Online MT Services. This paper reports on an experiment investigating how effective free online machine translation (MT) is in helping Internet users to access the contents of websites written only in languages they do not know. This study explores the extent to which using Internet-based MT tools affects the confidence of web-surfers in the reliability of the information they find on websites available only in languages unfamiliar to them. The results of a case study for the language pair Italian-English involving 101 participants show that the chances of identifying correctly basic information (i.e. understanding the nature of websites and finding contact telephone numbers from their web-pages) are consistently enhanced to varying degrees (up to nearly 20%) by translating online content into a familiar language. In addition, confidence ratings given by users to the reliability and accuracy of the information they find are significantly higher (with increases between 5 and 11%) when they translate websites into their preferred language with free online MT services.",The Added Value of Free Online {MT} Services,"This paper reports on an experiment investigating how effective free online machine translation (MT) is in helping Internet users to access the contents of websites written only in languages they do not know. This study explores the extent to which using Internet-based MT tools affects the confidence of web-surfers in the reliability of the information they find on websites available only in languages unfamiliar to them. The results of a case study for the language pair Italian-English involving 101 participants show that the chances of identifying correctly basic information (i.e. understanding the nature of websites and finding contact telephone numbers from their web-pages) are consistently enhanced to varying degrees (up to nearly 20%) by translating online content into a familiar language. In addition, confidence ratings given by users to the reliability and accuracy of the information they find are significantly higher (with increases between 5 and 11%) when they translate websites into their preferred language with free online MT services.",The Added Value of Free Online MT Services,"This paper reports on an experiment investigating how effective free online machine translation (MT) is in helping Internet users to access the contents of websites written only in languages they do not know. This study explores the extent to which using Internet-based MT tools affects the confidence of web-surfers in the reliability of the information they find on websites available only in languages unfamiliar to them. The results of a case study for the language pair Italian-English involving 101 participants show that the chances of identifying correctly basic information (i.e. understanding the nature of websites and finding contact telephone numbers from their web-pages) are consistently enhanced to varying degrees (up to nearly 20%) by translating online content into a familiar language. 
In addition, confidence ratings given by users to the reliability and accuracy of the information they find are significantly higher (with increases between 5 and 11%) when they translate websites into their preferred language with free online MT services.","The author wishes to thank his colleagues at the Universities of Manchester, Salford and Liverpool Hope in the United Kingdom for their assistance in distributing the questionnaires to their students. Special thanks also to all the students who volunteered to fill in the questionnaire on which this study was based.","The Added Value of Free Online MT Services. This paper reports on an experiment investigating how effective free online machine translation (MT) is in helping Internet users to access the contents of websites written only in languages they do not know. This study explores the extent to which using Internet-based MT tools affects the confidence of web-surfers in the reliability of the information they find on websites available only in languages unfamiliar to them. The results of a case study for the language pair Italian-English involving 101 participants show that the chances of identifying correctly basic information (i.e. understanding the nature of websites and finding contact telephone numbers from their web-pages) are consistently enhanced to varying degrees (up to nearly 20%) by translating online content into a familiar language. In addition, confidence ratings given by users to the reliability and accuracy of the information they find are significantly higher (with increases between 5 and 11%) when they translate websites into their preferred language with free online MT services.",2006
mathur-etal-2018-offend,https://aclanthology.org/W18-5118,1,,,,hate_speech,,,"Did you offend me? Classification of Offensive Tweets in Hinglish Language. The use of code-switched languages (e.g., Hinglish, which is derived by the blending of Hindi with the English language) is getting much popular on Twitter due to their ease of communication in native languages. However, spelling variations and absence of grammar rules introduce ambiguity and make it difficult to understand the text automatically. This paper presents the Multi-Input Multi-Channel Transfer Learning based model (MIMCT) to detect offensive (hate speech or abusive) Hinglish tweets from the proposed Hinglish Offensive Tweet (HOT) dataset using transfer learning coupled with multiple feature inputs. Specifically, it takes multiple primary word embedding along with secondary extracted features as inputs to train a multi-channel CNN-LSTM architecture that has been pre-trained on English tweets through transfer learning. The proposed MIMCT model outperforms the baseline supervised classification models, transfer learning based CNN and LSTM models to establish itself as the state of the art in the unexplored domain of Hinglish offensive text classification.",Did you offend me? Classification of Offensive Tweets in {H}inglish Language,"The use of code-switched languages (e.g., Hinglish, which is derived by the blending of Hindi with the English language) is getting much popular on Twitter due to their ease of communication in native languages. However, spelling variations and absence of grammar rules introduce ambiguity and make it difficult to understand the text automatically. This paper presents the Multi-Input Multi-Channel Transfer Learning based model (MIMCT) to detect offensive (hate speech or abusive) Hinglish tweets from the proposed Hinglish Offensive Tweet (HOT) dataset using transfer learning coupled with multiple feature inputs. Specifically, it takes multiple primary word embedding along with secondary extracted features as inputs to train a multi-channel CNN-LSTM architecture that has been pre-trained on English tweets through transfer learning. The proposed MIMCT model outperforms the baseline supervised classification models, transfer learning based CNN and LSTM models to establish itself as the state of the art in the unexplored domain of Hinglish offensive text classification.",Did you offend me? Classification of Offensive Tweets in Hinglish Language,"The use of code-switched languages (e.g., Hinglish, which is derived by the blending of Hindi with the English language) is getting much popular on Twitter due to their ease of communication in native languages. However, spelling variations and absence of grammar rules introduce ambiguity and make it difficult to understand the text automatically. This paper presents the Multi-Input Multi-Channel Transfer Learning based model (MIMCT) to detect offensive (hate speech or abusive) Hinglish tweets from the proposed Hinglish Offensive Tweet (HOT) dataset using transfer learning coupled with multiple feature inputs. Specifically, it takes multiple primary word embedding along with secondary extracted features as inputs to train a multi-channel CNN-LSTM architecture that has been pre-trained on English tweets through transfer learning. 
The proposed MIMCT model outperforms the baseline supervised classification models, transfer learning based CNN and LSTM models to establish itself as the state of the art in the unexplored domain of Hinglish offensive text classification.",,"Did you offend me? Classification of Offensive Tweets in Hinglish Language. The use of code-switched languages (e.g., Hinglish, which is derived by the blending of Hindi with the English language) is getting much popular on Twitter due to their ease of communication in native languages. However, spelling variations and absence of grammar rules introduce ambiguity and make it difficult to understand the text automatically. This paper presents the Multi-Input Multi-Channel Transfer Learning based model (MIMCT) to detect offensive (hate speech or abusive) Hinglish tweets from the proposed Hinglish Offensive Tweet (HOT) dataset using transfer learning coupled with multiple feature inputs. Specifically, it takes multiple primary word embedding along with secondary extracted features as inputs to train a multi-channel CNN-LSTM architecture that has been pre-trained on English tweets through transfer learning. The proposed MIMCT model outperforms the baseline supervised classification models, transfer learning based CNN and LSTM models to establish itself as the state of the art in the unexplored domain of Hinglish offensive text classification.",2018
wall-1960-system,https://aclanthology.org/1960.earlymt-nsmt.62,0,,,,,,,"System Design of a Computer for Russian-English Translation. This paper presents the general specifications for a digital data-processing system which would be desirable for machine translation according to the experience of the group at the University of Washington. First the problem of lexicon storage will be considered.",System Design of a Computer for {R}ussian-{E}nglish Translation,"This paper presents the general specifications for a digital data-processing system which would be desirable for machine translation according to the experience of the group at the University of Washington. First the problem of lexicon storage will be considered.",System Design of a Computer for Russian-English Translation,"This paper presents the general specifications for a digital data-processing system which would be desirable for machine translation according to the experience of the group at the University of Washington. First the problem of lexicon storage will be considered.",,"System Design of a Computer for Russian-English Translation. This paper presents the general specifications for a digital data-processing system which would be desirable for machine translation according to the experience of the group at the University of Washington. First the problem of lexicon storage will be considered.",1960
shen-etal-2003-effective,https://aclanthology.org/W03-1307,1,,,,health,,,"Effective Adaptation of Hidden Markov Model-based Named Entity Recognizer for Biomedical Domain. In this paper, we explore how to adapt a general Hidden Markov Model-based named entity recognizer effectively to biomedical domain. We integrate various features, including simple deterministic features, morphological features, POS features and semantic trigger features, to capture various evidences especially for biomedical named entity and evaluate their contributions. We also present a simple algorithm to solve the abbreviation problem and a rule-based method to deal with the cascaded phenomena in biomedical domain. Our experiments on GENIA V3.0 and GENIA V1.1 achieve the 66.1 and 62.5 F-measure respectively, which outperform the previous best published results by 8.1 F-measure when using the same training and testing data.",Effective Adaptation of Hidden {M}arkov Model-based Named Entity Recognizer for Biomedical Domain,"In this paper, we explore how to adapt a general Hidden Markov Model-based named entity recognizer effectively to biomedical domain. We integrate various features, including simple deterministic features, morphological features, POS features and semantic trigger features, to capture various evidences especially for biomedical named entity and evaluate their contributions. We also present a simple algorithm to solve the abbreviation problem and a rule-based method to deal with the cascaded phenomena in biomedical domain. Our experiments on GENIA V3.0 and GENIA V1.1 achieve the 66.1 and 62.5 F-measure respectively, which outperform the previous best published results by 8.1 F-measure when using the same training and testing data.",Effective Adaptation of Hidden Markov Model-based Named Entity Recognizer for Biomedical Domain,"In this paper, we explore how to adapt a general Hidden Markov Model-based named entity recognizer effectively to biomedical domain. We integrate various features, including simple deterministic features, morphological features, POS features and semantic trigger features, to capture various evidences especially for biomedical named entity and evaluate their contributions. We also present a simple algorithm to solve the abbreviation problem and a rule-based method to deal with the cascaded phenomena in biomedical domain. Our experiments on GENIA V3.0 and GENIA V1.1 achieve the 66.1 and 62.5 F-measure respectively, which outperform the previous best published results by 8.1 F-measure when using the same training and testing data.",We would like to thank Mr. Tan Soon Heng for his support of biomedical knowledge.,"Effective Adaptation of Hidden Markov Model-based Named Entity Recognizer for Biomedical Domain. In this paper, we explore how to adapt a general Hidden Markov Model-based named entity recognizer effectively to biomedical domain. We integrate various features, including simple deterministic features, morphological features, POS features and semantic trigger features, to capture various evidences especially for biomedical named entity and evaluate their contributions. We also present a simple algorithm to solve the abbreviation problem and a rule-based method to deal with the cascaded phenomena in biomedical domain. Our experiments on GENIA V3.0 and GENIA V1.1 achieve the 66.1 and 62.5 F-measure respectively, which outperform the previous best published results by 8.1 F-measure when using the same training and testing data.",2003
wei-jia-2021-statistical,https://aclanthology.org/2021.acl-long.533,0,,,,,,,"The statistical advantage of automatic NLG metrics at the system level. Estimating the expected output quality of generation systems is central to NLG. This paper qualifies the notion that automatic metrics are not as good as humans in estimating system-level quality. Statistically, humans are unbiased, high variance estimators, while metrics are biased, low variance estimators. We compare these estimators by their error in pairwise prediction (which generation system is better?) using the bootstrap. Measuring this error is complicated: predictions are evaluated against noisy, human predicted labels instead of the ground truth, and metric predictions fluctuate based on the test sets they were calculated on. By applying a bias-variance-noise decomposition, we adjust this error to a noise-free, infinite test set setting. Our analysis compares the adjusted error of metrics to humans and a derived, perfect segment-level annotator, both of which are unbiased estimators dependent on the number of judgments collected. In MT, we identify two settings where metrics outperform humans due to a statistical advantage in variance: when the number of human judgments used is small, and when the quality difference between compared systems is small.",The statistical advantage of automatic {NLG} metrics at the system level,"Estimating the expected output quality of generation systems is central to NLG. This paper qualifies the notion that automatic metrics are not as good as humans in estimating system-level quality. Statistically, humans are unbiased, high variance estimators, while metrics are biased, low variance estimators. We compare these estimators by their error in pairwise prediction (which generation system is better?) using the bootstrap. Measuring this error is complicated: predictions are evaluated against noisy, human predicted labels instead of the ground truth, and metric predictions fluctuate based on the test sets they were calculated on. By applying a bias-variance-noise decomposition, we adjust this error to a noise-free, infinite test set setting. Our analysis compares the adjusted error of metrics to humans and a derived, perfect segment-level annotator, both of which are unbiased estimators dependent on the number of judgments collected. In MT, we identify two settings where metrics outperform humans due to a statistical advantage in variance: when the number of human judgments used is small, and when the quality difference between compared systems is small.",The statistical advantage of automatic NLG metrics at the system level,"Estimating the expected output quality of generation systems is central to NLG. This paper qualifies the notion that automatic metrics are not as good as humans in estimating system-level quality. Statistically, humans are unbiased, high variance estimators, while metrics are biased, low variance estimators. We compare these estimators by their error in pairwise prediction (which generation system is better?) using the bootstrap. Measuring this error is complicated: predictions are evaluated against noisy, human predicted labels instead of the ground truth, and metric predictions fluctuate based on the test sets they were calculated on. By applying a bias-variance-noise decomposition, we adjust this error to a noise-free, infinite test set setting. 
Our analysis compares the adjusted error of metrics to humans and a derived, perfect segment-level annotator, both of which are unbiased estimators dependent on the number of judgments collected. In MT, we identify two settings where metrics outperform humans due to a statistical advantage in variance: when the number of human judgments used is small, and when the quality difference between compared systems is small.","Discussions with Nitika Mathur, Markus Freitag, and Thibault Sellam led to several insights. Nelson Liu and Tianyi Zhang provided feedback on our first draft, and anonymous reviewers provided feedback on the submitted draft. Nanyun Peng advised the first author, and on this work. Alex Fabbri provided a scored version of the SummEval dataset. We thank all who have made our work possible.","The statistical advantage of automatic NLG metrics at the system level. Estimating the expected output quality of generation systems is central to NLG. This paper qualifies the notion that automatic metrics are not as good as humans in estimating system-level quality. Statistically, humans are unbiased, high variance estimators, while metrics are biased, low variance estimators. We compare these estimators by their error in pairwise prediction (which generation system is better?) using the bootstrap. Measuring this error is complicated: predictions are evaluated against noisy, human predicted labels instead of the ground truth, and metric predictions fluctuate based on the test sets they were calculated on. By applying a bias-variance-noise decomposition, we adjust this error to a noise-free, infinite test set setting. Our analysis compares the adjusted error of metrics to humans and a derived, perfect segment-level annotator, both of which are unbiased estimators dependent on the number of judgments collected. In MT, we identify two settings where metrics outperform humans due to a statistical advantage in variance: when the number of human judgments used is small, and when the quality difference between compared systems is small.",2021
goldwater-etal-2000-building,https://aclanthology.org/W00-0312,0,,,,,,,"Building a Robust Dialogue System with Limited Data. We describe robustness techniques used in the CommandTalk system at the recognition level, the parsing level, and the dialogue level, and how these were influenced by the lack of domain data. We used interviews with subject matter experts (SME's) to develop a single grammar for recognition, understanding, and generation, thus eliminating the need for a robust parser. We broadened the coverage of the recognition grammar by allowing word insertions and deletions, and we implemented clarification and correction subdialogues to increase robustness at the dialogue level. We discuss the applicability of these techniques to other domains.",Building a Robust Dialogue System with Limited Data,"We describe robustness techniques used in the CommandTalk system at the recognition level, the parsing level, and the dialogue level, and how these were influenced by the lack of domain data. We used interviews with subject matter experts (SME's) to develop a single grammar for recognition, understanding, and generation, thus eliminating the need for a robust parser. We broadened the coverage of the recognition grammar by allowing word insertions and deletions, and we implemented clarification and correction subdialogues to increase robustness at the dialogue level. We discuss the applicability of these techniques to other domains.",Building a Robust Dialogue System with Limited Data,"We describe robustness techniques used in the CommandTalk system at the recognition level, the parsing level, and the dialogue level, and how these were influenced by the lack of domain data. We used interviews with subject matter experts (SME's) to develop a single grammar for recognition, understanding, and generation, thus eliminating the need for a robust parser. We broadened the coverage of the recognition grammar by allowing word insertions and deletions, and we implemented clarification and correction subdialogues to increase robustness at the dialogue level. We discuss the applicability of these techniques to other domains.",,"Building a Robust Dialogue System with Limited Data. We describe robustness techniques used in the CommandTalk system at the recognition level, the parsing level, and the dialogue level, and how these were influenced by the lack of domain data. We used interviews with subject matter experts (SME's) to develop a single grammar for recognition, understanding, and generation, thus eliminating the need for a robust parser. We broadened the coverage of the recognition grammar by allowing word insertions and deletions, and we implemented clarification and correction subdialogues to increase robustness at the dialogue level. We discuss the applicability of these techniques to other domains.",2000
novikova-etal-2018-rankme,https://aclanthology.org/N18-2012,0,,,,,,,"RankME: Reliable Human Ratings for Natural Language Generation. Human evaluation for natural language generation (NLG) often suffers from inconsistent user ratings. While previous research tends to attribute this problem to individual user preferences, we show that the quality of human judgements can also be improved by experimental design. We present a novel rank-based magnitude estimation method (RankME), which combines the use of continuous scales and relative assessments. We show that RankME significantly improves the reliability and consistency of human ratings compared to traditional evaluation methods. In addition, we show that it is possible to evaluate NLG systems according to multiple, distinct criteria, which is important for error analysis. Finally, we demonstrate that RankME, in combination with Bayesian estimation of system quality, is a cost-effective alternative for ranking multiple NLG systems.",{R}ank{ME}: Reliable Human Ratings for Natural Language Generation,"Human evaluation for natural language generation (NLG) often suffers from inconsistent user ratings. While previous research tends to attribute this problem to individual user preferences, we show that the quality of human judgements can also be improved by experimental design. We present a novel rank-based magnitude estimation method (RankME), which combines the use of continuous scales and relative assessments. We show that RankME significantly improves the reliability and consistency of human ratings compared to traditional evaluation methods. In addition, we show that it is possible to evaluate NLG systems according to multiple, distinct criteria, which is important for error analysis. Finally, we demonstrate that RankME, in combination with Bayesian estimation of system quality, is a cost-effective alternative for ranking multiple NLG systems.",RankME: Reliable Human Ratings for Natural Language Generation,"Human evaluation for natural language generation (NLG) often suffers from inconsistent user ratings. While previous research tends to attribute this problem to individual user preferences, we show that the quality of human judgements can also be improved by experimental design. We present a novel rank-based magnitude estimation method (RankME), which combines the use of continuous scales and relative assessments. We show that RankME significantly improves the reliability and consistency of human ratings compared to traditional evaluation methods. In addition, we show that it is possible to evaluate NLG systems according to multiple, distinct criteria, which is important for error analysis. Finally, we demonstrate that RankME, in combination with Bayesian estimation of system quality, is a cost-effective alternative for ranking multiple NLG systems.",This research received funding from the EPSRC projects DILiGENt (EP/M005429/1) and MaDrI-gAL (EP/N017536/1). The Titan Xp used for this research was donated by the NVIDIA Corporation.,"RankME: Reliable Human Ratings for Natural Language Generation. Human evaluation for natural language generation (NLG) often suffers from inconsistent user ratings. While previous research tends to attribute this problem to individual user preferences, we show that the quality of human judgements can also be improved by experimental design. We present a novel rank-based magnitude estimation method (RankME), which combines the use of continuous scales and relative assessments. 
We show that RankME significantly improves the reliability and consistency of human ratings compared to traditional evaluation methods. In addition, we show that it is possible to evaluate NLG systems according to multiple, distinct criteria, which is important for error analysis. Finally, we demonstrate that RankME, in combination with Bayesian estimation of system quality, is a cost-effective alternative for ranking multiple NLG systems.",2018
kiyono-etal-2018-reducing,https://aclanthology.org/Y18-1034,0,,,,,,,"Reducing Odd Generation from Neural Headline Generation. The Encoder-Decoder model is widely used in natural language generation tasks. However, the model sometimes suffers from repeated redundant generation, misses important phrases, and includes irrelevant entities. Toward solving these problems we propose a novel sourceside token prediction module. Our method jointly estimates the probability distributions over source and target vocabularies to capture the correspondence between source and target tokens. Experiments show that the proposed model outperforms the current state-of-the-art method in the headline generation task. We also show that our method can learn a reasonable token-wise correspondence without knowing any true alignment 1. * This work is a product of collaborative research program of Tohoku University and NTT Communication Science Laboratories. 1 Our code for reproducing the experiments is available at https://github.com/butsugiri/UAM Unfortunately, as often discussed in the community, EncDec sometimes generates sentences with repeating phrases or completely irrelevant phrases and the reason for their generation cannot be interpreted intuitively. Moreover, EncDec also sometimes generates sentences that lack important phrases. We refer to these observations as the odd generation problem (odd-gen) in EncDec. The following table shows typical examples of odd-gen actually generated by a typical EncDec. (1) Repeating Phrases Gold: duran duran group fashionable again EncDec: duran duran duran duran (2) Lack of Important Phrases Gold: graf says goodbye to tennis due to injuries EncDec: graf retires (3) Irrelevant Phrases Gold: u.s. troops take first position in serb-held bosnia EncDec: precede sarajevo This paper tackles for reducing the odd-gen in the task of abstractive summarization. In machine translation literature, coverage (Tu et al., 2016; Mi et al., 2016) and reconstruction (Tu et al., 2017) are promising extensions of EncDec to address the odd-gen. These models take advantage of the fact that machine translation is the loss-less generation (lossless-gen) task, where the semantic information of source-and target-side sequence is equivalent. However, as discussed in previous studies, abstractive summarization is a lossy-compression generation (lossy-gen) task. Here, the task is to delete certain semantic information from the source to generate target-side sequence.",Reducing Odd Generation from Neural Headline Generation,"The Encoder-Decoder model is widely used in natural language generation tasks. However, the model sometimes suffers from repeated redundant generation, misses important phrases, and includes irrelevant entities. Toward solving these problems we propose a novel sourceside token prediction module. Our method jointly estimates the probability distributions over source and target vocabularies to capture the correspondence between source and target tokens. Experiments show that the proposed model outperforms the current state-of-the-art method in the headline generation task. We also show that our method can learn a reasonable token-wise correspondence without knowing any true alignment 1. * This work is a product of collaborative research program of Tohoku University and NTT Communication Science Laboratories. 
1 Our code for reproducing the experiments is available at https://github.com/butsugiri/UAM Unfortunately, as often discussed in the community, EncDec sometimes generates sentences with repeating phrases or completely irrelevant phrases and the reason for their generation cannot be interpreted intuitively. Moreover, EncDec also sometimes generates sentences that lack important phrases. We refer to these observations as the odd generation problem (odd-gen) in EncDec. The following table shows typical examples of odd-gen actually generated by a typical EncDec. (1) Repeating Phrases Gold: duran duran group fashionable again EncDec: duran duran duran duran (2) Lack of Important Phrases Gold: graf says goodbye to tennis due to injuries EncDec: graf retires (3) Irrelevant Phrases Gold: u.s. troops take first position in serb-held bosnia EncDec: precede sarajevo This paper tackles for reducing the odd-gen in the task of abstractive summarization. In machine translation literature, coverage (Tu et al., 2016; Mi et al., 2016) and reconstruction (Tu et al., 2017) are promising extensions of EncDec to address the odd-gen. These models take advantage of the fact that machine translation is the loss-less generation (lossless-gen) task, where the semantic information of source-and target-side sequence is equivalent. However, as discussed in previous studies, abstractive summarization is a lossy-compression generation (lossy-gen) task. Here, the task is to delete certain semantic information from the source to generate target-side sequence.",Reducing Odd Generation from Neural Headline Generation,"The Encoder-Decoder model is widely used in natural language generation tasks. However, the model sometimes suffers from repeated redundant generation, misses important phrases, and includes irrelevant entities. Toward solving these problems we propose a novel sourceside token prediction module. Our method jointly estimates the probability distributions over source and target vocabularies to capture the correspondence between source and target tokens. Experiments show that the proposed model outperforms the current state-of-the-art method in the headline generation task. We also show that our method can learn a reasonable token-wise correspondence without knowing any true alignment 1. * This work is a product of collaborative research program of Tohoku University and NTT Communication Science Laboratories. 1 Our code for reproducing the experiments is available at https://github.com/butsugiri/UAM Unfortunately, as often discussed in the community, EncDec sometimes generates sentences with repeating phrases or completely irrelevant phrases and the reason for their generation cannot be interpreted intuitively. Moreover, EncDec also sometimes generates sentences that lack important phrases. We refer to these observations as the odd generation problem (odd-gen) in EncDec. The following table shows typical examples of odd-gen actually generated by a typical EncDec. (1) Repeating Phrases Gold: duran duran group fashionable again EncDec: duran duran duran duran (2) Lack of Important Phrases Gold: graf says goodbye to tennis due to injuries EncDec: graf retires (3) Irrelevant Phrases Gold: u.s. troops take first position in serb-held bosnia EncDec: precede sarajevo This paper tackles for reducing the odd-gen in the task of abstractive summarization. 
In machine translation literature, coverage (Tu et al., 2016; Mi et al., 2016) and reconstruction (Tu et al., 2017) are promising extensions of EncDec to address the odd-gen. These models take advantage of the fact that machine translation is the loss-less generation (lossless-gen) task, where the semantic information of source-and target-side sequence is equivalent. However, as discussed in previous studies, abstractive summarization is a lossy-compression generation (lossy-gen) task. Here, the task is to delete certain semantic information from the source to generate target-side sequence.",We are grateful to anonymous reviewers for their insightful comments. We thank Sosuke Kobayashi for providing helpful comments. We also thank Qingyu Zhou for providing a dataset and information for a fair comparison.,"Reducing Odd Generation from Neural Headline Generation. The Encoder-Decoder model is widely used in natural language generation tasks. However, the model sometimes suffers from repeated redundant generation, misses important phrases, and includes irrelevant entities. Toward solving these problems we propose a novel sourceside token prediction module. Our method jointly estimates the probability distributions over source and target vocabularies to capture the correspondence between source and target tokens. Experiments show that the proposed model outperforms the current state-of-the-art method in the headline generation task. We also show that our method can learn a reasonable token-wise correspondence without knowing any true alignment 1. * This work is a product of collaborative research program of Tohoku University and NTT Communication Science Laboratories. 1 Our code for reproducing the experiments is available at https://github.com/butsugiri/UAM Unfortunately, as often discussed in the community, EncDec sometimes generates sentences with repeating phrases or completely irrelevant phrases and the reason for their generation cannot be interpreted intuitively. Moreover, EncDec also sometimes generates sentences that lack important phrases. We refer to these observations as the odd generation problem (odd-gen) in EncDec. The following table shows typical examples of odd-gen actually generated by a typical EncDec. (1) Repeating Phrases Gold: duran duran group fashionable again EncDec: duran duran duran duran (2) Lack of Important Phrases Gold: graf says goodbye to tennis due to injuries EncDec: graf retires (3) Irrelevant Phrases Gold: u.s. troops take first position in serb-held bosnia EncDec: precede sarajevo This paper tackles for reducing the odd-gen in the task of abstractive summarization. In machine translation literature, coverage (Tu et al., 2016; Mi et al., 2016) and reconstruction (Tu et al., 2017) are promising extensions of EncDec to address the odd-gen. These models take advantage of the fact that machine translation is the loss-less generation (lossless-gen) task, where the semantic information of source-and target-side sequence is equivalent. However, as discussed in previous studies, abstractive summarization is a lossy-compression generation (lossy-gen) task. Here, the task is to delete certain semantic information from the source to generate target-side sequence.",2018
bhargava-penn-2021-proof,https://aclanthology.org/2021.iwpt-1.2,0,,,,,,,"Proof Net Structure for Neural Lambek Categorial Parsing. In this paper, we present the first statistical parser for Lambek categorial grammar (LCG), a grammatical formalism for which the graphical proof method known as proof nets is applicable. Our parser incorporates proof net structure and constraints into a system based on self-attention networks via novel model elements. Our experiments on an English LCG corpus show that incorporating term graph structure is helpful to the model, improving both parsing accuracy and coverage. Moreover, we derive novel loss functions by expressing proof net constraints as differentiable functions of our model output, enabling us to train our parser without ground-truth derivations.",Proof Net Structure for Neural {L}ambek Categorial Parsing,"In this paper, we present the first statistical parser for Lambek categorial grammar (LCG), a grammatical formalism for which the graphical proof method known as proof nets is applicable. Our parser incorporates proof net structure and constraints into a system based on self-attention networks via novel model elements. Our experiments on an English LCG corpus show that incorporating term graph structure is helpful to the model, improving both parsing accuracy and coverage. Moreover, we derive novel loss functions by expressing proof net constraints as differentiable functions of our model output, enabling us to train our parser without ground-truth derivations.",Proof Net Structure for Neural Lambek Categorial Parsing,"In this paper, we present the first statistical parser for Lambek categorial grammar (LCG), a grammatical formalism for which the graphical proof method known as proof nets is applicable. Our parser incorporates proof net structure and constraints into a system based on self-attention networks via novel model elements. Our experiments on an English LCG corpus show that incorporating term graph structure is helpful to the model, improving both parsing accuracy and coverage. Moreover, we derive novel loss functions by expressing proof net constraints as differentiable functions of our model output, enabling us to train our parser without ground-truth derivations.","We thank Elizabeth Patitsas as well as our anonymous reviewers for their feedback. This research was enabled in part by support provided by NSERC, SHARCNET, and Compute Canada.","Proof Net Structure for Neural Lambek Categorial Parsing. In this paper, we present the first statistical parser for Lambek categorial grammar (LCG), a grammatical formalism for which the graphical proof method known as proof nets is applicable. Our parser incorporates proof net structure and constraints into a system based on self-attention networks via novel model elements. Our experiments on an English LCG corpus show that incorporating term graph structure is helpful to the model, improving both parsing accuracy and coverage. Moreover, we derive novel loss functions by expressing proof net constraints as differentiable functions of our model output, enabling us to train our parser without ground-truth derivations.",2021
zhao-etal-2010-paraphrasing,https://aclanthology.org/C10-1148,0,,,,,,,"Paraphrasing with Search Engine Query Logs. This paper proposes a method that extracts paraphrases from search engine query logs. The method first extracts paraphrase query-title pairs based on an assumption that a search query and its corresponding clicked document titles may mean the same thing. It then extracts paraphrase query-query and title-title pairs from the query-title paraphrases with a pivot approach. Paraphrases extracted in each step are validated with a binary classifier. We evaluate the method using a query log from Baidu 1 , a Chinese search engine. Experimental results show that the proposed method is effective, which extracts more than 3.5 million pairs of paraphrases with a precision of over 70%. The results also show that the extracted paraphrases can be used to generate high-quality paraphrase patterns.",Paraphrasing with Search Engine Query Logs,"This paper proposes a method that extracts paraphrases from search engine query logs. The method first extracts paraphrase query-title pairs based on an assumption that a search query and its corresponding clicked document titles may mean the same thing. It then extracts paraphrase query-query and title-title pairs from the query-title paraphrases with a pivot approach. Paraphrases extracted in each step are validated with a binary classifier. We evaluate the method using a query log from Baidu 1 , a Chinese search engine. Experimental results show that the proposed method is effective, which extracts more than 3.5 million pairs of paraphrases with a precision of over 70%. The results also show that the extracted paraphrases can be used to generate high-quality paraphrase patterns.",Paraphrasing with Search Engine Query Logs,"This paper proposes a method that extracts paraphrases from search engine query logs. The method first extracts paraphrase query-title pairs based on an assumption that a search query and its corresponding clicked document titles may mean the same thing. It then extracts paraphrase query-query and title-title pairs from the query-title paraphrases with a pivot approach. Paraphrases extracted in each step are validated with a binary classifier. We evaluate the method using a query log from Baidu 1 , a Chinese search engine. Experimental results show that the proposed method is effective, which extracts more than 3.5 million pairs of paraphrases with a precision of over 70%. The results also show that the extracted paraphrases can be used to generate high-quality paraphrase patterns.","We would like to thank Wanxiang Che, Hua Wu, and the anonymous reviewers for their useful comments on this paper.","Paraphrasing with Search Engine Query Logs. This paper proposes a method that extracts paraphrases from search engine query logs. The method first extracts paraphrase query-title pairs based on an assumption that a search query and its corresponding clicked document titles may mean the same thing. It then extracts paraphrase query-query and title-title pairs from the query-title paraphrases with a pivot approach. Paraphrases extracted in each step are validated with a binary classifier. We evaluate the method using a query log from Baidu 1 , a Chinese search engine. Experimental results show that the proposed method is effective, which extracts more than 3.5 million pairs of paraphrases with a precision of over 70%. The results also show that the extracted paraphrases can be used to generate high-quality paraphrase patterns.",2010
al-boni-etal-2015-model,https://aclanthology.org/P15-2126,0,,,,,,,"Model Adaptation for Personalized Opinion Analysis. Humans are idiosyncratic and variable: towards the same topic, they might hold different opinions or express the same opinion in various ways. It is hence important to model opinions at the level of individual users; however it is impractical to estimate independent sentiment classification models for each user with limited data. In this paper, we adopt a modelbased transfer learning solution-using linear transformations over the parameters of a generic model-for personalized opinion analysis. Extensive experimental results on a large collection of Amazon reviews confirm our method significantly outperformed a user-independent generic opinion model as well as several state-ofthe-art transfer learning algorithms.",Model Adaptation for Personalized Opinion Analysis,"Humans are idiosyncratic and variable: towards the same topic, they might hold different opinions or express the same opinion in various ways. It is hence important to model opinions at the level of individual users; however it is impractical to estimate independent sentiment classification models for each user with limited data. In this paper, we adopt a modelbased transfer learning solution-using linear transformations over the parameters of a generic model-for personalized opinion analysis. Extensive experimental results on a large collection of Amazon reviews confirm our method significantly outperformed a user-independent generic opinion model as well as several state-ofthe-art transfer learning algorithms.",Model Adaptation for Personalized Opinion Analysis,"Humans are idiosyncratic and variable: towards the same topic, they might hold different opinions or express the same opinion in various ways. It is hence important to model opinions at the level of individual users; however it is impractical to estimate independent sentiment classification models for each user with limited data. In this paper, we adopt a modelbased transfer learning solution-using linear transformations over the parameters of a generic model-for personalized opinion analysis. Extensive experimental results on a large collection of Amazon reviews confirm our method significantly outperformed a user-independent generic opinion model as well as several state-ofthe-art transfer learning algorithms.","This research was funded in part by grant W911NF-10-2-0051 from the United States Army Research Laboratory. Also, Hongning Wang is partially supported by the Yahoo Academic Career Enhancement Award.","Model Adaptation for Personalized Opinion Analysis. Humans are idiosyncratic and variable: towards the same topic, they might hold different opinions or express the same opinion in various ways. It is hence important to model opinions at the level of individual users; however it is impractical to estimate independent sentiment classification models for each user with limited data. In this paper, we adopt a modelbased transfer learning solution-using linear transformations over the parameters of a generic model-for personalized opinion analysis. Extensive experimental results on a large collection of Amazon reviews confirm our method significantly outperformed a user-independent generic opinion model as well as several state-ofthe-art transfer learning algorithms.",2015
zaenen-2016-modality,https://aclanthology.org/2016.lilt-14.1,0,,,,,,,"Modality: logic, semantics, annotation and machine learning. Up to rather recently Natural Language Processing has not given much attention to modality. As long as the main task was to determined what a text was about (Information Retrieval) or who the participants in an eventuality were (Information Extraction), this neglect was understandable. With the focus moving to questions of natural language understanding and inferencing as well as to sentiment and opinion analysis, it becomes necessary to distinguish between actual and envisioned eventualities and to draw conclusions about the attitude of the writer or speaker towards the eventualities referred to. This means, i.a., to be able to distinguish 'John went to Paris' and 'John wanted to go to Paris'. To do this one has to calculate the effect of different linguistic operators on the eventuality predication. 1 Modality has different shades of meaning that are subtle, and often difficult to distinguish, being able to express hypothetical situations (he could/may come in), desired or undesired (permitted or non-permitted situations (he can/may come in/enter), or (physical) abilities: he can enter. The study of modality often focusses on the semantics and pragmatics of the modal auxiliaries because of their notorious ambiguity but modality can also be expressed through other means than auxiliaries, such as adverbial modification and non-auxiliary verbs such as want or believe. In fact, the same modality can be expressed by different linguistic means, e.g. 'Maybe he is already home' or 'He may already","Modality: logic, semantics, annotation and machine learning","Up to rather recently Natural Language Processing has not given much attention to modality. As long as the main task was to determined what a text was about (Information Retrieval) or who the participants in an eventuality were (Information Extraction), this neglect was understandable. With the focus moving to questions of natural language understanding and inferencing as well as to sentiment and opinion analysis, it becomes necessary to distinguish between actual and envisioned eventualities and to draw conclusions about the attitude of the writer or speaker towards the eventualities referred to. This means, i.a., to be able to distinguish 'John went to Paris' and 'John wanted to go to Paris'. To do this one has to calculate the effect of different linguistic operators on the eventuality predication. 1 Modality has different shades of meaning that are subtle, and often difficult to distinguish, being able to express hypothetical situations (he could/may come in), desired or undesired (permitted or non-permitted situations (he can/may come in/enter), or (physical) abilities: he can enter. The study of modality often focusses on the semantics and pragmatics of the modal auxiliaries because of their notorious ambiguity but modality can also be expressed through other means than auxiliaries, such as adverbial modification and non-auxiliary verbs such as want or believe. In fact, the same modality can be expressed by different linguistic means, e.g. 'Maybe he is already home' or 'He may already","Modality: logic, semantics, annotation and machine learning","Up to rather recently Natural Language Processing has not given much attention to modality. 
As long as the main task was to determined what a text was about (Information Retrieval) or who the participants in an eventuality were (Information Extraction), this neglect was understandable. With the focus moving to questions of natural language understanding and inferencing as well as to sentiment and opinion analysis, it becomes necessary to distinguish between actual and envisioned eventualities and to draw conclusions about the attitude of the writer or speaker towards the eventualities referred to. This means, i.a., to be able to distinguish 'John went to Paris' and 'John wanted to go to Paris'. To do this one has to calculate the effect of different linguistic operators on the eventuality predication. 1 Modality has different shades of meaning that are subtle, and often difficult to distinguish, being able to express hypothetical situations (he could/may come in), desired or undesired (permitted or non-permitted situations (he can/may come in/enter), or (physical) abilities: he can enter. The study of modality often focusses on the semantics and pragmatics of the modal auxiliaries because of their notorious ambiguity but modality can also be expressed through other means than auxiliaries, such as adverbial modification and non-auxiliary verbs such as want or believe. In fact, the same modality can be expressed by different linguistic means, e.g. 'Maybe he is already home' or 'He may already",,"Modality: logic, semantics, annotation and machine learning. Up to rather recently Natural Language Processing has not given much attention to modality. As long as the main task was to determined what a text was about (Information Retrieval) or who the participants in an eventuality were (Information Extraction), this neglect was understandable. With the focus moving to questions of natural language understanding and inferencing as well as to sentiment and opinion analysis, it becomes necessary to distinguish between actual and envisioned eventualities and to draw conclusions about the attitude of the writer or speaker towards the eventualities referred to. This means, i.a., to be able to distinguish 'John went to Paris' and 'John wanted to go to Paris'. To do this one has to calculate the effect of different linguistic operators on the eventuality predication. 1 Modality has different shades of meaning that are subtle, and often difficult to distinguish, being able to express hypothetical situations (he could/may come in), desired or undesired (permitted or non-permitted situations (he can/may come in/enter), or (physical) abilities: he can enter. The study of modality often focusses on the semantics and pragmatics of the modal auxiliaries because of their notorious ambiguity but modality can also be expressed through other means than auxiliaries, such as adverbial modification and non-auxiliary verbs such as want or believe. In fact, the same modality can be expressed by different linguistic means, e.g. 'Maybe he is already home' or 'He may already",2016
moschitti-2010-kernel,https://aclanthology.org/C10-5001,0,,,,,,,Kernel Engineering for Fast and Easy Design of Natural Language Applications. ,Kernel Engineering for Fast and Easy Design of Natural Language Applications,,Kernel Engineering for Fast and Easy Design of Natural Language Applications,,,Kernel Engineering for Fast and Easy Design of Natural Language Applications. ,2010
ichikawa-etal-2005-ebonsai,https://aclanthology.org/I05-2019,0,,,,,,,"eBonsai: An Integrated Environment for Annotating Treebanks. Syntactically annotated corpora (treebanks) play an important role in recent statistical natural language processing. However, building a large treebank is labor intensive and time consuming work. To remedy this problem, there have been many attempts to develop software tools for annotating treebanks. This paper presents an integrated environment for annotating a treebank, called eBonsai. eBonsai helps annotators to choose a correct syntactic structure of a sentence from outputs of a parser, allowing the annotators to retrieve similar sentences in the treebank for referring to their structures.",e{B}onsai: An Integrated Environment for Annotating Treebanks,"Syntactically annotated corpora (treebanks) play an important role in recent statistical natural language processing. However, building a large treebank is labor intensive and time consuming work. To remedy this problem, there have been many attempts to develop software tools for annotating treebanks. This paper presents an integrated environment for annotating a treebank, called eBonsai. eBonsai helps annotators to choose a correct syntactic structure of a sentence from outputs of a parser, allowing the annotators to retrieve similar sentences in the treebank for referring to their structures.",eBonsai: An Integrated Environment for Annotating Treebanks,"Syntactically annotated corpora (treebanks) play an important role in recent statistical natural language processing. However, building a large treebank is labor intensive and time consuming work. To remedy this problem, there have been many attempts to develop software tools for annotating treebanks. This paper presents an integrated environment for annotating a treebank, called eBonsai. eBonsai helps annotators to choose a correct syntactic structure of a sentence from outputs of a parser, allowing the annotators to retrieve similar sentences in the treebank for referring to their structures.",,"eBonsai: An Integrated Environment for Annotating Treebanks. Syntactically annotated corpora (treebanks) play an important role in recent statistical natural language processing. However, building a large treebank is labor intensive and time consuming work. To remedy this problem, there have been many attempts to develop software tools for annotating treebanks. This paper presents an integrated environment for annotating a treebank, called eBonsai. eBonsai helps annotators to choose a correct syntactic structure of a sentence from outputs of a parser, allowing the annotators to retrieve similar sentences in the treebank for referring to their structures.",2005
kim-etal-2010-computational,https://aclanthology.org/Y10-1050,0,,,,,,,"A Computational Treatment of Korean Serial Verb Constructions. The so-called serial verb construction (SVC) is a complex predicate structure consisting of two or more verbal heads but denotes one single event. This paper first discusses the grammatical properties of Korean SVCs and provides a lexicalist, construction-based analysis couched upon a typed-feature structure grammar. We also show the results of implementing the grammar in the LKB (Linguistics Knowledge Building) system couched upon the existing KRG (Korean Resource Grammar), which has been developed since 2003. The implementation results provide us with a feasible direction of expanding the analysis to cover a wider range of relevant data.",A Computational Treatment of {K}orean Serial Verb Constructions,"The so-called serial verb construction (SVC) is a complex predicate structure consisting of two or more verbal heads but denotes one single event. This paper first discusses the grammatical properties of Korean SVCs and provides a lexicalist, construction-based analysis couched upon a typed-feature structure grammar. We also show the results of implementing the grammar in the LKB (Linguistics Knowledge Building) system couched upon the existing KRG (Korean Resource Grammar), which has been developed since 2003. The implementation results provide us with a feasible direction of expanding the analysis to cover a wider range of relevant data.",A Computational Treatment of Korean Serial Verb Constructions,"The so-called serial verb construction (SVC) is a complex predicate structure consisting of two or more verbal heads but denotes one single event. This paper first discusses the grammatical properties of Korean SVCs and provides a lexicalist, construction-based analysis couched upon a typed-feature structure grammar. We also show the results of implementing the grammar in the LKB (Linguistics Knowledge Building) system couched upon the existing KRG (Korean Resource Grammar), which has been developed since 2003. The implementation results provide us with a feasible direction of expanding the analysis to cover a wider range of relevant data.",,"A Computational Treatment of Korean Serial Verb Constructions. The so-called serial verb construction (SVC) is a complex predicate structure consisting of two or more verbal heads but denotes one single event. This paper first discusses the grammatical properties of Korean SVCs and provides a lexicalist, construction-based analysis couched upon a typed-feature structure grammar. We also show the results of implementing the grammar in the LKB (Linguistics Knowledge Building) system couched upon the existing KRG (Korean Resource Grammar), which has been developed since 2003. The implementation results provide us with a feasible direction of expanding the analysis to cover a wider range of relevant data.",2010
tao-etal-2019-one,https://aclanthology.org/P19-1001,0,,,,,,,"One Time of Interaction May Not Be Enough: Go Deep with an Interaction-over-Interaction Network for Response Selection in Dialogues. Currently, researchers have paid great attention to retrieval-based dialogues in opendomain. In particular, people study the problem by investigating context-response matching for multi-turn response selection based on publicly recognized benchmark data sets. State-of-the-art methods require a response to interact with each utterance in a context from the beginning, but the interaction is performed in a shallow way. In this work, we let utterance-response interaction go deep by proposing an interaction-over-interaction network (IoI). The model performs matching by stacking multiple interaction blocks in which residual information from one time of interaction initiates the interaction process again. Thus, matching information within an utterance-response pair is extracted from the interaction of the pair in an iterative fashion, and the information flows along the chain of the blocks via representations. Evaluation results on three benchmark data sets indicate that IoI can significantly outperform state-of-theart methods in terms of various matching metrics. Through further analysis, we also unveil how the depth of interaction affects the performance of IoI.",One Time of Interaction May Not Be Enough: Go Deep with an Interaction-over-Interaction Network for Response Selection in Dialogues,"Currently, researchers have paid great attention to retrieval-based dialogues in opendomain. In particular, people study the problem by investigating context-response matching for multi-turn response selection based on publicly recognized benchmark data sets. State-of-the-art methods require a response to interact with each utterance in a context from the beginning, but the interaction is performed in a shallow way. In this work, we let utterance-response interaction go deep by proposing an interaction-over-interaction network (IoI). The model performs matching by stacking multiple interaction blocks in which residual information from one time of interaction initiates the interaction process again. Thus, matching information within an utterance-response pair is extracted from the interaction of the pair in an iterative fashion, and the information flows along the chain of the blocks via representations. Evaluation results on three benchmark data sets indicate that IoI can significantly outperform state-of-theart methods in terms of various matching metrics. Through further analysis, we also unveil how the depth of interaction affects the performance of IoI.",One Time of Interaction May Not Be Enough: Go Deep with an Interaction-over-Interaction Network for Response Selection in Dialogues,"Currently, researchers have paid great attention to retrieval-based dialogues in opendomain. In particular, people study the problem by investigating context-response matching for multi-turn response selection based on publicly recognized benchmark data sets. State-of-the-art methods require a response to interact with each utterance in a context from the beginning, but the interaction is performed in a shallow way. In this work, we let utterance-response interaction go deep by proposing an interaction-over-interaction network (IoI). The model performs matching by stacking multiple interaction blocks in which residual information from one time of interaction initiates the interaction process again. 
Thus, matching information within an utterance-response pair is extracted from the interaction of the pair in an iterative fashion, and the information flows along the chain of the blocks via representations. Evaluation results on three benchmark data sets indicate that IoI can significantly outperform state-of-theart methods in terms of various matching metrics. Through further analysis, we also unveil how the depth of interaction affects the performance of IoI.","We would like to thank the anonymous reviewers for their constructive comments. This work was supported by the National Key Research and Development Program of China (No. 2017YFC0804001), the National Science Foundation of China (NSFC Nos. 61672058 and 61876196).","One Time of Interaction May Not Be Enough: Go Deep with an Interaction-over-Interaction Network for Response Selection in Dialogues. Currently, researchers have paid great attention to retrieval-based dialogues in opendomain. In particular, people study the problem by investigating context-response matching for multi-turn response selection based on publicly recognized benchmark data sets. State-of-the-art methods require a response to interact with each utterance in a context from the beginning, but the interaction is performed in a shallow way. In this work, we let utterance-response interaction go deep by proposing an interaction-over-interaction network (IoI). The model performs matching by stacking multiple interaction blocks in which residual information from one time of interaction initiates the interaction process again. Thus, matching information within an utterance-response pair is extracted from the interaction of the pair in an iterative fashion, and the information flows along the chain of the blocks via representations. Evaluation results on three benchmark data sets indicate that IoI can significantly outperform state-of-theart methods in terms of various matching metrics. Through further analysis, we also unveil how the depth of interaction affects the performance of IoI.",2019
osborne-baldridge-2004-ensemble,https://aclanthology.org/N04-1012,0,,,,,,,"Ensemble-based Active Learning for Parse Selection. Supervised estimation methods are widely seen as being superior to semi and fully unsupervised methods. However, supervised methods crucially rely upon training sets that need to be manually annotated. This can be very expensive, especially when skilled annotators are required. Active learning (AL) promises to help reduce this annotation cost. Within the complex domain of HPSG parse selection, we show that ideas from ensemble learning can help further reduce the cost of annotation. Our main results show that at times, an ensemble model trained with randomly sampled examples can outperform a single model trained using AL. However, converting the single-model AL method into an ensemble-based AL method shows that even this much stronger baseline model can be improved upon. Our best results show a ¢ ¤ £ ¦ ¥ reduction in annotation cost compared with single-model random sampling.",Ensemble-based Active Learning for Parse Selection,"Supervised estimation methods are widely seen as being superior to semi and fully unsupervised methods. However, supervised methods crucially rely upon training sets that need to be manually annotated. This can be very expensive, especially when skilled annotators are required. Active learning (AL) promises to help reduce this annotation cost. Within the complex domain of HPSG parse selection, we show that ideas from ensemble learning can help further reduce the cost of annotation. Our main results show that at times, an ensemble model trained with randomly sampled examples can outperform a single model trained using AL. However, converting the single-model AL method into an ensemble-based AL method shows that even this much stronger baseline model can be improved upon. Our best results show a ¢ ¤ £ ¦ ¥ reduction in annotation cost compared with single-model random sampling.",Ensemble-based Active Learning for Parse Selection,"Supervised estimation methods are widely seen as being superior to semi and fully unsupervised methods. However, supervised methods crucially rely upon training sets that need to be manually annotated. This can be very expensive, especially when skilled annotators are required. Active learning (AL) promises to help reduce this annotation cost. Within the complex domain of HPSG parse selection, we show that ideas from ensemble learning can help further reduce the cost of annotation. Our main results show that at times, an ensemble model trained with randomly sampled examples can outperform a single model trained using AL. However, converting the single-model AL method into an ensemble-based AL method shows that even this much stronger baseline model can be improved upon. Our best results show a ¢ ¤ £ ¦ ¥ reduction in annotation cost compared with single-model random sampling.","We would like to thank Markus Becker, Steve Clark, and the anonymous reviewers for their comments. Jeremiah Crim developed some of the feature extraction code and conglomerate features, and Alex Lascarides made suggestions for the semantic features. This work was supported by Edinburgh-Stanford Link R36763, ROSIE project.","Ensemble-based Active Learning for Parse Selection. Supervised estimation methods are widely seen as being superior to semi and fully unsupervised methods. However, supervised methods crucially rely upon training sets that need to be manually annotated. 
This can be very expensive, especially when skilled annotators are required. Active learning (AL) promises to help reduce this annotation cost. Within the complex domain of HPSG parse selection, we show that ideas from ensemble learning can help further reduce the cost of annotation. Our main results show that at times, an ensemble model trained with randomly sampled examples can outperform a single model trained using AL. However, converting the single-model AL method into an ensemble-based AL method shows that even this much stronger baseline model can be improved upon. Our best results show a ¢ ¤ £ ¦ ¥ reduction in annotation cost compared with single-model random sampling.",2004
ribeiro-etal-2018-semantically,https://aclanthology.org/P18-1079,0,,,,,,,"Semantically Equivalent Adversarial Rules for Debugging NLP models. Complex machine learning models for NLP are often brittle, making different predictions for input instances that are extremely similar semantically. To automatically detect this behavior for individual instances, we present semantically equivalent adversaries (SEAs)-semantic-preserving perturbations that induce changes in the model's predictions. We generalize these adversaries into semantically equivalent adversarial rules (SEARs)-simple, universal replacement rules that induce adversaries on many instances. We demonstrate the usefulness and flexibility of SEAs and SEARs by detecting bugs in black-box state-of-the-art models for three domains: machine comprehension, visual questionanswering, and sentiment analysis. Via user studies, we demonstrate that we generate high-quality local adversaries for more instances than humans, and that SEARs induce four times as many mistakes as the bugs discovered by human experts. SEARs are also actionable: retraining models using data augmentation significantly reduces bugs, while maintaining accuracy.",Semantically Equivalent Adversarial Rules for Debugging {NLP} models,"Complex machine learning models for NLP are often brittle, making different predictions for input instances that are extremely similar semantically. To automatically detect this behavior for individual instances, we present semantically equivalent adversaries (SEAs)-semantic-preserving perturbations that induce changes in the model's predictions. We generalize these adversaries into semantically equivalent adversarial rules (SEARs)-simple, universal replacement rules that induce adversaries on many instances. We demonstrate the usefulness and flexibility of SEAs and SEARs by detecting bugs in black-box state-of-the-art models for three domains: machine comprehension, visual questionanswering, and sentiment analysis. Via user studies, we demonstrate that we generate high-quality local adversaries for more instances than humans, and that SEARs induce four times as many mistakes as the bugs discovered by human experts. SEARs are also actionable: retraining models using data augmentation significantly reduces bugs, while maintaining accuracy.",Semantically Equivalent Adversarial Rules for Debugging NLP models,"Complex machine learning models for NLP are often brittle, making different predictions for input instances that are extremely similar semantically. To automatically detect this behavior for individual instances, we present semantically equivalent adversaries (SEAs)-semantic-preserving perturbations that induce changes in the model's predictions. We generalize these adversaries into semantically equivalent adversarial rules (SEARs)-simple, universal replacement rules that induce adversaries on many instances. We demonstrate the usefulness and flexibility of SEAs and SEARs by detecting bugs in black-box state-of-the-art models for three domains: machine comprehension, visual questionanswering, and sentiment analysis. Via user studies, we demonstrate that we generate high-quality local adversaries for more instances than humans, and that SEARs induce four times as many mistakes as the bugs discovered by human experts. SEARs are also actionable: retraining models using data augmentation significantly reduces bugs, while maintaining accuracy.","We are grateful to Dan Weld, Robert L. Logan IV, and to the anonymous reviewers for their feedback. 
This work was supported in part by ONR award #N00014-13-1-0023, in part by NSF award #IIS-1756023, and in part by funding from FICO. The views expressed are of the authors and do not reflect the policy or opinion of the funding agencies.","Semantically Equivalent Adversarial Rules for Debugging NLP models. Complex machine learning models for NLP are often brittle, making different predictions for input instances that are extremely similar semantically. To automatically detect this behavior for individual instances, we present semantically equivalent adversaries (SEAs)-semantic-preserving perturbations that induce changes in the model's predictions. We generalize these adversaries into semantically equivalent adversarial rules (SEARs)-simple, universal replacement rules that induce adversaries on many instances. We demonstrate the usefulness and flexibility of SEAs and SEARs by detecting bugs in black-box state-of-the-art models for three domains: machine comprehension, visual questionanswering, and sentiment analysis. Via user studies, we demonstrate that we generate high-quality local adversaries for more instances than humans, and that SEARs induce four times as many mistakes as the bugs discovered by human experts. SEARs are also actionable: retraining models using data augmentation significantly reduces bugs, while maintaining accuracy.",2018
zhao-etal-2020-robust,https://aclanthology.org/2020.coling-main.248,0,,,,,,,"Robust Machine Reading Comprehension by Learning Soft labels. Neural models have achieved great success on the task of machine reading comprehension (MRC), which are typically trained on hard labels. We argue that hard labels limit the model capability on generalization due to the label sparseness problem. In this paper, we propose a robust training method for MRC models to address this problem. Our method consists of three strategies, 1) label smoothing, 2) word overlapping, 3) distribution prediction. All of them help to train models on soft labels. We validate our approach on the representative architecture-ALBERT. Experimental results show that our method can greatly boost the baseline with 1% improvement in average, and achieve state-of-the-art performance on NewsQA and QUOREF. Paragraph: One of the first Norman mercenaries to serve as a Byzantine general was Hervé in the 1050s. By then however, there were already Norman mercenaries serving as far away as Trebizond and Georgia.. .. Question: When did Hervé serve as a Byzantine general? Answer1: 1050s Answer2: in the 1050s Figure 1: An example of multiple answer in extractive reading comprehension † Work done while the first author was an intern at Tencent.",Robust Machine Reading Comprehension by Learning Soft labels,"Neural models have achieved great success on the task of machine reading comprehension (MRC), which are typically trained on hard labels. We argue that hard labels limit the model capability on generalization due to the label sparseness problem. In this paper, we propose a robust training method for MRC models to address this problem. Our method consists of three strategies, 1) label smoothing, 2) word overlapping, 3) distribution prediction. All of them help to train models on soft labels. We validate our approach on the representative architecture-ALBERT. Experimental results show that our method can greatly boost the baseline with 1% improvement in average, and achieve state-of-the-art performance on NewsQA and QUOREF. Paragraph: One of the first Norman mercenaries to serve as a Byzantine general was Hervé in the 1050s. By then however, there were already Norman mercenaries serving as far away as Trebizond and Georgia.. .. Question: When did Hervé serve as a Byzantine general? Answer1: 1050s Answer2: in the 1050s Figure 1: An example of multiple answer in extractive reading comprehension † Work done while the first author was an intern at Tencent.",Robust Machine Reading Comprehension by Learning Soft labels,"Neural models have achieved great success on the task of machine reading comprehension (MRC), which are typically trained on hard labels. We argue that hard labels limit the model capability on generalization due to the label sparseness problem. In this paper, we propose a robust training method for MRC models to address this problem. Our method consists of three strategies, 1) label smoothing, 2) word overlapping, 3) distribution prediction. All of them help to train models on soft labels. We validate our approach on the representative architecture-ALBERT. Experimental results show that our method can greatly boost the baseline with 1% improvement in average, and achieve state-of-the-art performance on NewsQA and QUOREF. Paragraph: One of the first Norman mercenaries to serve as a Byzantine general was Hervé in the 1050s. By then however, there were already Norman mercenaries serving as far away as Trebizond and Georgia.. .. 
Question: When did Hervé serve as a Byzantine general? Answer1: 1050s Answer2: in the 1050s Figure 1: An example of multiple answer in extractive reading comprehension † Work done while the first author was an intern at Tencent.","We thank anonymous reviewers for their insightful comments. This work is sponsored in part by the National Key Research and Development Program of China (2018YFC0830700) and the National Natural Science Foundation of China (61806075). And the work of M. Yang was also supported in part by the project granted by Zhizhesihai(Beijing) Technology Co., Ltd.","Robust Machine Reading Comprehension by Learning Soft labels. Neural models have achieved great success on the task of machine reading comprehension (MRC), which are typically trained on hard labels. We argue that hard labels limit the model capability on generalization due to the label sparseness problem. In this paper, we propose a robust training method for MRC models to address this problem. Our method consists of three strategies, 1) label smoothing, 2) word overlapping, 3) distribution prediction. All of them help to train models on soft labels. We validate our approach on the representative architecture-ALBERT. Experimental results show that our method can greatly boost the baseline with 1% improvement in average, and achieve state-of-the-art performance on NewsQA and QUOREF. Paragraph: One of the first Norman mercenaries to serve as a Byzantine general was Hervé in the 1050s. By then however, there were already Norman mercenaries serving as far away as Trebizond and Georgia.. .. Question: When did Hervé serve as a Byzantine general? Answer1: 1050s Answer2: in the 1050s Figure 1: An example of multiple answer in extractive reading comprehension † Work done while the first author was an intern at Tencent.",2020
zhao-huang-2013-minibatch,https://aclanthology.org/N13-1038,0,,,,,,,"Minibatch and Parallelization for Online Large Margin Structured Learning. Online learning algorithms such as perceptron and MIRA have become popular for many NLP tasks thanks to their simpler architecture and faster convergence over batch learning methods. However, while batch learning such as CRF is easily parallelizable, online learning is much harder to parallelize: previous efforts often witness a decrease in the converged accuracy, and the speedup is typically very small (∼3) even with many (10+) processors. We instead present a much simpler architecture based on ""mini-batches"", which is trivially parallelizable. We show that, unlike previous methods, minibatch learning (in serial mode) actually improves the converged accuracy for both perceptron and MIRA learning, and when combined with simple parallelization, minibatch leads to very significant speedups (up to 9x on 12 processors) on stateof-the-art parsing and tagging systems.",Minibatch and Parallelization for Online Large Margin Structured Learning,"Online learning algorithms such as perceptron and MIRA have become popular for many NLP tasks thanks to their simpler architecture and faster convergence over batch learning methods. However, while batch learning such as CRF is easily parallelizable, online learning is much harder to parallelize: previous efforts often witness a decrease in the converged accuracy, and the speedup is typically very small (∼3) even with many (10+) processors. We instead present a much simpler architecture based on ""mini-batches"", which is trivially parallelizable. We show that, unlike previous methods, minibatch learning (in serial mode) actually improves the converged accuracy for both perceptron and MIRA learning, and when combined with simple parallelization, minibatch leads to very significant speedups (up to 9x on 12 processors) on stateof-the-art parsing and tagging systems.",Minibatch and Parallelization for Online Large Margin Structured Learning,"Online learning algorithms such as perceptron and MIRA have become popular for many NLP tasks thanks to their simpler architecture and faster convergence over batch learning methods. However, while batch learning such as CRF is easily parallelizable, online learning is much harder to parallelize: previous efforts often witness a decrease in the converged accuracy, and the speedup is typically very small (∼3) even with many (10+) processors. We instead present a much simpler architecture based on ""mini-batches"", which is trivially parallelizable. We show that, unlike previous methods, minibatch learning (in serial mode) actually improves the converged accuracy for both perceptron and MIRA learning, and when combined with simple parallelization, minibatch leads to very significant speedups (up to 9x on 12 processors) on stateof-the-art parsing and tagging systems.","We thank Ryan McDonald, Yoav Goldberg, and Hal Daumé, III for helpful discussions, and the anonymous reviewers for suggestions. This work was partially supported by DARPA FA8750-13-2-0041 ""Deep Exploration and Filtering of Text"" (DEFT) Program and by Queens College for equipment.","Minibatch and Parallelization for Online Large Margin Structured Learning. Online learning algorithms such as perceptron and MIRA have become popular for many NLP tasks thanks to their simpler architecture and faster convergence over batch learning methods. 
However, while batch learning such as CRF is easily parallelizable, online learning is much harder to parallelize: previous efforts often witness a decrease in the converged accuracy, and the speedup is typically very small (∼3) even with many (10+) processors. We instead present a much simpler architecture based on ""mini-batches"", which is trivially parallelizable. We show that, unlike previous methods, minibatch learning (in serial mode) actually improves the converged accuracy for both perceptron and MIRA learning, and when combined with simple parallelization, minibatch leads to very significant speedups (up to 9x on 12 processors) on state-of-the-art parsing and tagging systems.",2013
vasiljevs-etal-2012-creation,http://www.lrec-conf.org/proceedings/lrec2012/pdf/744_Paper.pdf,0,,,,,,,"Creation of an Open Shared Language Resource Repository in the Nordic and Baltic Countries. The META-NORD project has contributed to an open infrastructure for language resources (data and tools) under the META-NET umbrella. This paper presents the key objectives of META-NORD and reports on the results achieved in the first year of the project. META-NORD has mapped and described the national language technology landscape in the Nordic and Baltic countries in terms of language use, language technology and resources, main actors in the academy, industry, government and society; identified and collected the first batch of language resources in the Nordic and Baltic countries; documented, processed, linked, and upgraded the identified language resources to agreed standards and guidelines. The three horizontal multilingual actions in META-NORD are overviewed in this paper: linking and validating Nordic and Baltic wordnets, the harmonisation of multilingual Nordic and Baltic treebanks, and consolidating multilingual terminology resources across European countries. This paper also touches upon intellectual property rights for the sharing of language resources.",Creation of an Open Shared Language Resource Repository in the Nordic and Baltic Countries,"The META-NORD project has contributed to an open infrastructure for language resources (data and tools) under the META-NET umbrella. This paper presents the key objectives of META-NORD and reports on the results achieved in the first year of the project. META-NORD has mapped and described the national language technology landscape in the Nordic and Baltic countries in terms of language use, language technology and resources, main actors in the academy, industry, government and society; identified and collected the first batch of language resources in the Nordic and Baltic countries; documented, processed, linked, and upgraded the identified language resources to agreed standards and guidelines. The three horizontal multilingual actions in META-NORD are overviewed in this paper: linking and validating Nordic and Baltic wordnets, the harmonisation of multilingual Nordic and Baltic treebanks, and consolidating multilingual terminology resources across European countries. This paper also touches upon intellectual property rights for the sharing of language resources.",Creation of an Open Shared Language Resource Repository in the Nordic and Baltic Countries,"The META-NORD project has contributed to an open infrastructure for language resources (data and tools) under the META-NET umbrella. This paper presents the key objectives of META-NORD and reports on the results achieved in the first year of the project. META-NORD has mapped and described the national language technology landscape in the Nordic and Baltic countries in terms of language use, language technology and resources, main actors in the academy, industry, government and society; identified and collected the first batch of language resources in the Nordic and Baltic countries; documented, processed, linked, and upgraded the identified language resources to agreed standards and guidelines. The three horizontal multilingual actions in META-NORD are overviewed in this paper: linking and validating Nordic and Baltic wordnets, the harmonisation of multilingual Nordic and Baltic treebanks, and consolidating multilingual terminology resources across European countries. 
This paper also touches upon intellectual property rights for the sharing of language resources.","The META-NORD project has received funding from the European Commission through the ICT PSP Programme, grant agreement no 270899.","Creation of an Open Shared Language Resource Repository in the Nordic and Baltic Countries. The META-NORD project has contributed to an open infrastructure for language resources (data and tools) under the META-NET umbrella. This paper presents the key objectives of META-NORD and reports on the results achieved in the first year of the project. META-NORD has mapped and described the national language technology landscape in the Nordic and Baltic countries in terms of language use, language technology and resources, main actors in the academy, industry, government and society; identified and collected the first batch of language resources in the Nordic and Baltic countries; documented, processed, linked, and upgraded the identified language resources to agreed standards and guidelines. The three horizontal multilingual actions in META-NORD are overviewed in this paper: linking and validating Nordic and Baltic wordnets, the harmonisation of multilingual Nordic and Baltic treebanks, and consolidating multilingual terminology resources across European countries. This paper also touches upon intellectual property rights for the sharing of language resources.",2012
ws-2000-anlp-naacl,https://aclanthology.org/W00-0500,0,,,,,,,ANLP-NAACL 2000 Workshop: Embedded Machine Translation Systems. ,{ANLP}-{NAACL} 2000 Workshop: Embedded Machine Translation Systems,,ANLP-NAACL 2000 Workshop: Embedded Machine Translation Systems,,,ANLP-NAACL 2000 Workshop: Embedded Machine Translation Systems. ,2000
han-sun-2014-semantic,https://aclanthology.org/P14-2117,0,,,,,,,"Semantic Consistency: A Local Subspace Based Method for Distant Supervised Relation Extraction. One fundamental problem of distant supervision is the noisy training corpus problem. In this paper, we propose a new distant supervision method, called Semantic Consistency, which can identify reliable instances from noisy instances by inspecting whether an instance is located in a semantically consistent region. Specifically, we propose a semantic consistency model, which first models the local subspace around an instance as a sparse linear combination of training instances, then estimate the semantic consistency by exploiting the characteristics of the local subspace. Experimental results verified the effectiveness of our method.",Semantic Consistency: A Local Subspace Based Method for Distant Supervised Relation Extraction,"One fundamental problem of distant supervision is the noisy training corpus problem. In this paper, we propose a new distant supervision method, called Semantic Consistency, which can identify reliable instances from noisy instances by inspecting whether an instance is located in a semantically consistent region. Specifically, we propose a semantic consistency model, which first models the local subspace around an instance as a sparse linear combination of training instances, then estimate the semantic consistency by exploiting the characteristics of the local subspace. Experimental results verified the effectiveness of our method.",Semantic Consistency: A Local Subspace Based Method for Distant Supervised Relation Extraction,"One fundamental problem of distant supervision is the noisy training corpus problem. In this paper, we propose a new distant supervision method, called Semantic Consistency, which can identify reliable instances from noisy instances by inspecting whether an instance is located in a semantically consistent region. Specifically, we propose a semantic consistency model, which first models the local subspace around an instance as a sparse linear combination of training instances, then estimate the semantic consistency by exploiting the characteristics of the local subspace. Experimental results verified the effectiveness of our method.",,"Semantic Consistency: A Local Subspace Based Method for Distant Supervised Relation Extraction. One fundamental problem of distant supervision is the noisy training corpus problem. In this paper, we propose a new distant supervision method, called Semantic Consistency, which can identify reliable instances from noisy instances by inspecting whether an instance is located in a semantically consistent region. Specifically, we propose a semantic consistency model, which first models the local subspace around an instance as a sparse linear combination of training instances, then estimate the semantic consistency by exploiting the characteristics of the local subspace. Experimental results verified the effectiveness of our method.",2014
vela-etal-2019-improving,https://aclanthology.org/W19-6702,0,,,,,,,"Improving CAT Tools in the Translation Workflow: New Approaches and Evaluation. This paper describes strategies to improve an existing web-based computeraided translation (CAT) tool entitled CATaLog Online. CATaLog Online provides a post-editing environment with simple yet helpful project management tools. It offers translation suggestions from translation memories (TM), machine translation (MT), and automatic post-editing (APE) and records detailed logs of post-editing activities. To test the new approaches proposed in this paper, we carried out a user study on an English-German translation task using CATaLog Online. User feedback revealed that the users preferred using CATaLog Online over existing CAT tools in some respects, especially by selecting the output of the MT system and taking advantage of the color scheme for TM suggestions.",Improving {CAT} Tools in the Translation Workflow: New Approaches and Evaluation,"This paper describes strategies to improve an existing web-based computeraided translation (CAT) tool entitled CATaLog Online. CATaLog Online provides a post-editing environment with simple yet helpful project management tools. It offers translation suggestions from translation memories (TM), machine translation (MT), and automatic post-editing (APE) and records detailed logs of post-editing activities. To test the new approaches proposed in this paper, we carried out a user study on an English-German translation task using CATaLog Online. User feedback revealed that the users preferred using CATaLog Online over existing CAT tools in some respects, especially by selecting the output of the MT system and taking advantage of the color scheme for TM suggestions.",Improving CAT Tools in the Translation Workflow: New Approaches and Evaluation,"This paper describes strategies to improve an existing web-based computeraided translation (CAT) tool entitled CATaLog Online. CATaLog Online provides a post-editing environment with simple yet helpful project management tools. It offers translation suggestions from translation memories (TM), machine translation (MT), and automatic post-editing (APE) and records detailed logs of post-editing activities. To test the new approaches proposed in this paper, we carried out a user study on an English-German translation task using CATaLog Online. User feedback revealed that the users preferred using CATaLog Online over existing CAT tools in some respects, especially by selecting the output of the MT system and taking advantage of the color scheme for TM suggestions.","We would like to thank the participants of this user study for their valuable contribution. We further thank the MT Summit anonymous reviewers for their insightful feedback.This research was funded in part by the Ger- /2007-2013) under REA grant agreement no 317471. We are also thankful to Pangeanic, Valencia, Spain for kindly providing us with professional translators for these experiments.","Improving CAT Tools in the Translation Workflow: New Approaches and Evaluation. This paper describes strategies to improve an existing web-based computeraided translation (CAT) tool entitled CATaLog Online. CATaLog Online provides a post-editing environment with simple yet helpful project management tools. It offers translation suggestions from translation memories (TM), machine translation (MT), and automatic post-editing (APE) and records detailed logs of post-editing activities. 
To test the new approaches proposed in this paper, we carried out a user study on an English-German translation task using CATaLog Online. User feedback revealed that the users preferred using CATaLog Online over existing CAT tools in some respects, especially by selecting the output of the MT system and taking advantage of the color scheme for TM suggestions.",2019
lopez-etal-2016-encoding,https://aclanthology.org/L16-1177,0,,,,,,,"Encoding Adjective Scales for Fine-grained Resources. We propose an automatic approach towards determining the relative location of adjectives on a common scale based on their strength. We focus on adjectives expressing different degrees of goodness occurring in French product (perfumes) reviews. Using morphosyntactic patterns, we extract from the reviews short phrases consisting of a noun that encodes a particular aspect of the perfume and an adjective modifying that noun. We then associate each such n-gram with the corresponding product aspect and its related star rating. Next, based on the star scores, we generate adjective scales reflecting the relative strength of specific adjectives associated with a shared attribute of the product. An automatic ordering of the adjectives ""correct"" (correct), ""sympa"" (nice), ""bon"" (good) and ""excellent"" (excellent) according to their score in our resource is consistent with an intuitive scale based on human judgments. Our long-term objective is to generate different adjective scales in an empirical manner, which could allow the enrichment of lexical resources.",Encoding Adjective Scales for Fine-grained Resources,"We propose an automatic approach towards determining the relative location of adjectives on a common scale based on their strength. We focus on adjectives expressing different degrees of goodness occurring in French product (perfumes) reviews. Using morphosyntactic patterns, we extract from the reviews short phrases consisting of a noun that encodes a particular aspect of the perfume and an adjective modifying that noun. We then associate each such n-gram with the corresponding product aspect and its related star rating. Next, based on the star scores, we generate adjective scales reflecting the relative strength of specific adjectives associated with a shared attribute of the product. An automatic ordering of the adjectives ""correct"" (correct), ""sympa"" (nice), ""bon"" (good) and ""excellent"" (excellent) according to their score in our resource is consistent with an intuitive scale based on human judgments. Our long-term objective is to generate different adjective scales in an empirical manner, which could allow the enrichment of lexical resources.",Encoding Adjective Scales for Fine-grained Resources,"We propose an automatic approach towards determining the relative location of adjectives on a common scale based on their strength. We focus on adjectives expressing different degrees of goodness occurring in French product (perfumes) reviews. Using morphosyntactic patterns, we extract from the reviews short phrases consisting of a noun that encodes a particular aspect of the perfume and an adjective modifying that noun. We then associate each such n-gram with the corresponding product aspect and its related star rating. Next, based on the star scores, we generate adjective scales reflecting the relative strength of specific adjectives associated with a shared attribute of the product. An automatic ordering of the adjectives ""correct"" (correct), ""sympa"" (nice), ""bon"" (good) and ""excellent"" (excellent) according to their score in our resource is consistent with an intuitive scale based on human judgments. Our long-term objective is to generate different adjective scales in an empirical manner, which could allow the enrichment of lexical resources.",,"Encoding Adjective Scales for Fine-grained Resources. 
We propose an automatic approach towards determining the relative location of adjectives on a common scale based on their strength. We focus on adjectives expressing different degrees of goodness occurring in French product (perfumes) reviews. Using morphosyntactic patterns, we extract from the reviews short phrases consisting of a noun that encodes a particular aspect of the perfume and an adjective modifying that noun. We then associate each such n-gram with the corresponding product aspect and its related star rating. Next, based on the star scores, we generate adjective scales reflecting the relative strength of specific adjectives associated with a shared attribute of the product. An automatic ordering of the adjectives ""correct"" (correct), ""sympa"" (nice), ""bon"" (good) and ""excellent"" (excellent) according to their score in our resource is consistent with an intuitive scale based on human judgments. Our long-term objective is to generate different adjective scales in an empirical manner, which could allow the enrichment of lexical resources.",2016
abdelali-etal-2021-qadi,https://aclanthology.org/2021.wanlp-1.1,0,,,,,,,"QADI: Arabic Dialect Identification in the Wild. Proper dialect identification is important for a variety of Arabic NLP applications. In this paper, we present a method for rapidly constructing a tweet dataset containing a wide range of country-level Arabic dialects-covering 18 different countries in the Middle East and North Africa region. Our method relies on applying multiple filters to identify users who belong to different countries based on their account descriptions and to eliminate tweets that either write mainly in Modern Standard Arabic or mostly use vulgar language. The resultant dataset contains 540k tweets from 2,525 users who are evenly distributed across 18 Arab countries. Using intrinsic evaluation, we show that the labels of a set of randomly selected tweets are 91.5% accurate. For extrinsic evaluation, we are able to build effective countrylevel dialect identification on tweets with a macro-averaged F1-score of 60.6% across 18 classes.",{QADI}: {A}rabic Dialect Identification in the Wild,"Proper dialect identification is important for a variety of Arabic NLP applications. In this paper, we present a method for rapidly constructing a tweet dataset containing a wide range of country-level Arabic dialects-covering 18 different countries in the Middle East and North Africa region. Our method relies on applying multiple filters to identify users who belong to different countries based on their account descriptions and to eliminate tweets that either write mainly in Modern Standard Arabic or mostly use vulgar language. The resultant dataset contains 540k tweets from 2,525 users who are evenly distributed across 18 Arab countries. Using intrinsic evaluation, we show that the labels of a set of randomly selected tweets are 91.5% accurate. For extrinsic evaluation, we are able to build effective countrylevel dialect identification on tweets with a macro-averaged F1-score of 60.6% across 18 classes.",QADI: Arabic Dialect Identification in the Wild,"Proper dialect identification is important for a variety of Arabic NLP applications. In this paper, we present a method for rapidly constructing a tweet dataset containing a wide range of country-level Arabic dialects-covering 18 different countries in the Middle East and North Africa region. Our method relies on applying multiple filters to identify users who belong to different countries based on their account descriptions and to eliminate tweets that either write mainly in Modern Standard Arabic or mostly use vulgar language. The resultant dataset contains 540k tweets from 2,525 users who are evenly distributed across 18 Arab countries. Using intrinsic evaluation, we show that the labels of a set of randomly selected tweets are 91.5% accurate. For extrinsic evaluation, we are able to build effective countrylevel dialect identification on tweets with a macro-averaged F1-score of 60.6% across 18 classes.",9 https://en.wikipedia.org/wiki/ Egyptian_Arabic,"QADI: Arabic Dialect Identification in the Wild. Proper dialect identification is important for a variety of Arabic NLP applications. In this paper, we present a method for rapidly constructing a tweet dataset containing a wide range of country-level Arabic dialects-covering 18 different countries in the Middle East and North Africa region. 
Our method relies on applying multiple filters to identify users who belong to different countries based on their account descriptions and to eliminate tweets that either write mainly in Modern Standard Arabic or mostly use vulgar language. The resultant dataset contains 540k tweets from 2,525 users who are evenly distributed across 18 Arab countries. Using intrinsic evaluation, we show that the labels of a set of randomly selected tweets are 91.5% accurate. For extrinsic evaluation, we are able to build effective countrylevel dialect identification on tweets with a macro-averaged F1-score of 60.6% across 18 classes.",2021
minnema-herbelot-2019-brain,https://aclanthology.org/P19-2021,1,,,,health,industry_innovation_infrastructure,,"From Brain Space to Distributional Space: The Perilous Journeys of fMRI Decoding. Recent work in cognitive neuroscience has introduced models for predicting distributional word meaning representations from brain imaging data. Such models have great potential, but the quality of their predictions has not yet been thoroughly evaluated from a computational linguistics point of view. Due to the limited size of available brain imaging datasets, standard quality metrics (e.g. similarity judgments and analogies) cannot be used. Instead, we investigate the use of several alternative measures for evaluating the predicted distributional space against a corpus-derived distributional space. We show that a stateof-the-art decoder, while performing impressively on metrics that are commonly used in cognitive neuroscience, performs unexpectedly poorly on our metrics. To address this, we propose strategies for improving the model's performance. Despite returning promising results, our experiments also demonstrate that much work remains to be done before distributional representations can reliably be predicted from brain data.",From Brain Space to Distributional Space: The Perilous Journeys of f{MRI} Decoding,"Recent work in cognitive neuroscience has introduced models for predicting distributional word meaning representations from brain imaging data. Such models have great potential, but the quality of their predictions has not yet been thoroughly evaluated from a computational linguistics point of view. Due to the limited size of available brain imaging datasets, standard quality metrics (e.g. similarity judgments and analogies) cannot be used. Instead, we investigate the use of several alternative measures for evaluating the predicted distributional space against a corpus-derived distributional space. We show that a stateof-the-art decoder, while performing impressively on metrics that are commonly used in cognitive neuroscience, performs unexpectedly poorly on our metrics. To address this, we propose strategies for improving the model's performance. Despite returning promising results, our experiments also demonstrate that much work remains to be done before distributional representations can reliably be predicted from brain data.",From Brain Space to Distributional Space: The Perilous Journeys of fMRI Decoding,"Recent work in cognitive neuroscience has introduced models for predicting distributional word meaning representations from brain imaging data. Such models have great potential, but the quality of their predictions has not yet been thoroughly evaluated from a computational linguistics point of view. Due to the limited size of available brain imaging datasets, standard quality metrics (e.g. similarity judgments and analogies) cannot be used. Instead, we investigate the use of several alternative measures for evaluating the predicted distributional space against a corpus-derived distributional space. We show that a stateof-the-art decoder, while performing impressively on metrics that are commonly used in cognitive neuroscience, performs unexpectedly poorly on our metrics. To address this, we propose strategies for improving the model's performance. 
Despite returning promising results, our experiments also demonstrate that much work remains to be done before distributional representations can reliably be predicted from brain data.","The first author of this paper (GM) was enrolled in the European Master Program in Language and Communication Technologies (LCT) while writing the paper, and was supported by the European Union Erasmus Mundus program.","From Brain Space to Distributional Space: The Perilous Journeys of fMRI Decoding. Recent work in cognitive neuroscience has introduced models for predicting distributional word meaning representations from brain imaging data. Such models have great potential, but the quality of their predictions has not yet been thoroughly evaluated from a computational linguistics point of view. Due to the limited size of available brain imaging datasets, standard quality metrics (e.g. similarity judgments and analogies) cannot be used. Instead, we investigate the use of several alternative measures for evaluating the predicted distributional space against a corpus-derived distributional space. We show that a stateof-the-art decoder, while performing impressively on metrics that are commonly used in cognitive neuroscience, performs unexpectedly poorly on our metrics. To address this, we propose strategies for improving the model's performance. Despite returning promising results, our experiments also demonstrate that much work remains to be done before distributional representations can reliably be predicted from brain data.",2019
chen-etal-2019-capturing,https://aclanthology.org/D19-1544,0,,,,,,,"Capturing Argument Interaction in Semantic Role Labeling with Capsule Networks. Semantic role labeling (SRL) involves extracting propositions (i.e. predicates and their typed arguments) from natural language sentences. State-of-the-art SRL models rely on powerful encoders (e.g., LSTMs) and do not model non-local interaction between arguments. We propose a new approach to modeling these interactions while maintaining efficient inference. Specifically, we use Capsule Networks (Sabour et al., 2017): each proposition is encoded as a tuple of capsules, one capsule per argument type (i.e. role). These tuples serve as embeddings of entire propositions. In every network layer, the capsules interact with each other and with representations of words in the sentence. Each iteration results in updated proposition embeddings and updated predictions about the SRL structure. Our model substantially outperforms the non-refinement baseline model on all 7 CoNLL-2009 languages and achieves state-of-the-art results on 5 languages (including English) for dependency SRL. We analyze the types of mistakes corrected by the refinement procedure. For example, each role is typically (but not always) filled with at most one argument. Whereas enforcing this approximate constraint is not useful with the modern SRL system, the iterative procedure corrects the mistakes by capturing this intuition in a flexible and context-sensitive way.",Capturing Argument Interaction in Semantic Role Labeling with Capsule Networks,"Semantic role labeling (SRL) involves extracting propositions (i.e. predicates and their typed arguments) from natural language sentences. State-of-the-art SRL models rely on powerful encoders (e.g., LSTMs) and do not model non-local interaction between arguments. We propose a new approach to modeling these interactions while maintaining efficient inference. Specifically, we use Capsule Networks (Sabour et al., 2017): each proposition is encoded as a tuple of capsules, one capsule per argument type (i.e. role). These tuples serve as embeddings of entire propositions. In every network layer, the capsules interact with each other and with representations of words in the sentence. Each iteration results in updated proposition embeddings and updated predictions about the SRL structure. Our model substantially outperforms the non-refinement baseline model on all 7 CoNLL-2009 languages and achieves state-of-the-art results on 5 languages (including English) for dependency SRL. We analyze the types of mistakes corrected by the refinement procedure. For example, each role is typically (but not always) filled with at most one argument. Whereas enforcing this approximate constraint is not useful with the modern SRL system, the iterative procedure corrects the mistakes by capturing this intuition in a flexible and context-sensitive way.",Capturing Argument Interaction in Semantic Role Labeling with Capsule Networks,"Semantic role labeling (SRL) involves extracting propositions (i.e. predicates and their typed arguments) from natural language sentences. State-of-the-art SRL models rely on powerful encoders (e.g., LSTMs) and do not model non-local interaction between arguments. We propose a new approach to modeling these interactions while maintaining efficient inference. Specifically, we use Capsule Networks (Sabour et al., 2017): each proposition is encoded as a tuple of capsules, one capsule per argument type (i.e. role). 
These tuples serve as embeddings of entire propositions. In every network layer, the capsules interact with each other and with representations of words in the sentence. Each iteration results in updated proposition embeddings and updated predictions about the SRL structure. Our model substantially outperforms the non-refinement baseline model on all 7 CoNLL-2009 languages and achieves state-of-the-art results on 5 languages (including English) for dependency SRL. We analyze the types of mistakes corrected by the refinement procedure. For example, each role is typically (but not always) filled with at most one argument. Whereas enforcing this approximate constraint is not useful with the modern SRL system, the iterative procedure corrects the mistakes by capturing this intuition in a flexible and context-sensitive way.","We thank Diego Marcheggiani, Jonathan Mallinson and Philip Williams for constructive feedback and suggestions, as well as anonymous reviewers for their comments. The project was supported by the European Research Council (ERC StG BroadSem 678254) and the Dutch National Science Foundation (NWO VIDI 639.022.518).","Capturing Argument Interaction in Semantic Role Labeling with Capsule Networks. Semantic role labeling (SRL) involves extracting propositions (i.e. predicates and their typed arguments) from natural language sentences. State-of-the-art SRL models rely on powerful encoders (e.g., LSTMs) and do not model non-local interaction between arguments. We propose a new approach to modeling these interactions while maintaining efficient inference. Specifically, we use Capsule Networks (Sabour et al., 2017): each proposition is encoded as a tuple of capsules, one capsule per argument type (i.e. role). These tuples serve as embeddings of entire propositions. In every network layer, the capsules interact with each other and with representations of words in the sentence. Each iteration results in updated proposition embeddings and updated predictions about the SRL structure. Our model substantially outperforms the non-refinement baseline model on all 7 CoNLL-2009 languages and achieves state-of-the-art results on 5 languages (including English) for dependency SRL. We analyze the types of mistakes corrected by the refinement procedure. For example, each role is typically (but not always) filled with at most one argument. Whereas enforcing this approximate constraint is not useful with the modern SRL system, the iterative procedure corrects the mistakes by capturing this intuition in a flexible and context-sensitive way.",2019
robin-favero-2000-content,https://aclanthology.org/W00-1417,0,,,,,,,"Content aggregation in natural language hypertext summarization of OLAP and Data Mining Discoveries. We present a new approach to paratactic content aggregation in the context of generating hypertext summaries of OLAP and data mining discoveries. Two key properties make this approach innovative and interesting: (1) it encapsulates aggregation inside the sentence planning component, and (2) it relies on a domain independent algorithm working on a data structure that abstracts from lexical and syntactic knowledge.",Content aggregation in natural language hypertext summarization of {OLAP} and Data Mining Discoveries,"We present a new approach to paratactic content aggregation in the context of generating hypertext summaries of OLAP and data mining discoveries. Two key properties make this approach innovative and interesting: (1) it encapsulates aggregation inside the sentence planning component, and (2) it relies on a domain independent algorithm working on a data structure that abstracts from lexical and syntactic knowledge.",Content aggregation in natural language hypertext summarization of OLAP and Data Mining Discoveries,"We present a new approach to paratactic content aggregation in the context of generating hypertext summaries of OLAP and data mining discoveries. Two key properties make this approach innovative and interesting: (1) it encapsulates aggregation inside the sentence planning component, and (2) it relies on a domain independent algorithm working on a data structure that abstracts from lexical and syntactic knowledge.",,"Content aggregation in natural language hypertext summarization of OLAP and Data Mining Discoveries. We present a new approach to paratactic content aggregation in the context of generating hypertext summaries of OLAP and data mining discoveries. Two key properties make this approach innovative and interesting: (1) it encapsulates aggregation inside the sentence planning component, and (2) it relies on a domain independent algorithm working on a data structure that abstracts from lexical and syntactic knowledge.",2000
alfonseca-manandhar-2002-proposal,http://www.lrec-conf.org/proceedings/lrec2002/pdf/38.pdf,0,,,,,,,"Proposal for Evaluating Ontology Refinement Methods. Ontologies are a tool for Knowledge Representation that is now widely used, but the effort employed to build an ontology is still high. There are a few automatic and semi-automatic methods for extending ontologies with domain-specific information, but they use different training and test data, and different evaluation metrics. The work described in this paper is an attempt to build a benchmark corpus that can be used for comparing these systems. We provide standard evaluation metrics as well as two different annotated corpora: one in which every unknown word has been labelled with the places where it should be added onto the ontology, and other in which only the high-frequency unknown terms have been annotated",Proposal for Evaluating Ontology Refinement Methods,"Ontologies are a tool for Knowledge Representation that is now widely used, but the effort employed to build an ontology is still high. There are a few automatic and semi-automatic methods for extending ontologies with domain-specific information, but they use different training and test data, and different evaluation metrics. The work described in this paper is an attempt to build a benchmark corpus that can be used for comparing these systems. We provide standard evaluation metrics as well as two different annotated corpora: one in which every unknown word has been labelled with the places where it should be added onto the ontology, and other in which only the high-frequency unknown terms have been annotated",Proposal for Evaluating Ontology Refinement Methods,"Ontologies are a tool for Knowledge Representation that is now widely used, but the effort employed to build an ontology is still high. There are a few automatic and semi-automatic methods for extending ontologies with domain-specific information, but they use different training and test data, and different evaluation metrics. The work described in this paper is an attempt to build a benchmark corpus that can be used for comparing these systems. We provide standard evaluation metrics as well as two different annotated corpora: one in which every unknown word has been labelled with the places where it should be added onto the ontology, and other in which only the high-frequency unknown terms have been annotated","This work does not attempt to evaluate learning of nontaxonomic relations (e.g. meronymy, holonymy, telic, etc.), but we believe that similar evaluation metrics could be used (Maedche and Staab, 2000) . Further work can be done on this topic.","Proposal for Evaluating Ontology Refinement Methods. Ontologies are a tool for Knowledge Representation that is now widely used, but the effort employed to build an ontology is still high. There are a few automatic and semi-automatic methods for extending ontologies with domain-specific information, but they use different training and test data, and different evaluation metrics. The work described in this paper is an attempt to build a benchmark corpus that can be used for comparing these systems. We provide standard evaluation metrics as well as two different annotated corpora: one in which every unknown word has been labelled with the places where it should be added onto the ontology, and other in which only the high-frequency unknown terms have been annotated",2002
chiang-su-1996-statistical,https://aclanthology.org/W96-0110,0,,,,,,,"Statistical Models for Deep-structure Disambiguation. In this paper, an integrated score function is proposed to resolve the ambiguity of deep-structure, which includes the cases of constituents and the senses of words. With the integrated score function, different knowledge sources, including part-of-speech, syntax and semantics, are integrated in a uniform formulation. Based on this formulation, different models for case identification and word-sense disambiguation are derived. In the baseline system, the values of parameters are estimated by using the maximum likelihood estimation method. The accuracy rates of 56.3% for parse tree, 77.5% for case and 86.2% for word sense are obtained when the baseline system is tested on a corpus of 800 sentences. Afterwards, to reduce the estimation error caused by the maximum likelihood estimation, the Good-Turing's smoothing method is applied. In addition, a robust discriminative learning algorithm is also derived to minimize the testing set error rate. By applying these algorithms, the accuracy rates of 77% for parse tree, 88.9% for case, and 88.6% for sense are obtained. Compared with the baseline system, 17.4% error reduction rate for sense discrimination, 50.7% for case identification, and 47.4% for parsing accuracy are obtained. These results clearly demonstrate the superiority of the proposed models for deep-structure disambiguation.",Statistical Models for Deep-structure Disambiguation,"In this paper, an integrated score function is proposed to resolve the ambiguity of deep-structure, which includes the cases of constituents and the senses of words. With the integrated score function, different knowledge sources, including part-of-speech, syntax and semantics, are integrated in a uniform formulation. Based on this formulation, different models for case identification and word-sense disambiguation are derived. In the baseline system, the values of parameters are estimated by using the maximum likelihood estimation method. The accuracy rates of 56.3% for parse tree, 77.5% for case and 86.2% for word sense are obtained when the baseline system is tested on a corpus of 800 sentences. Afterwards, to reduce the estimation error caused by the maximum likelihood estimation, the Good-Turing's smoothing method is applied. In addition, a robust discriminative learning algorithm is also derived to minimize the testing set error rate. By applying these algorithms, the accuracy rates of 77% for parse tree, 88.9% for case, and 88.6% for sense are obtained. Compared with the baseline system, 17.4% error reduction rate for sense discrimination, 50.7% for case identification, and 47.4% for parsing accuracy are obtained. These results clearly demonstrate the superiority of the proposed models for deep-structure disambiguation.",Statistical Models for Deep-structure Disambiguation,"In this paper, an integrated score function is proposed to resolve the ambiguity of deep-structure, which includes the cases of constituents and the senses of words. With the integrated score function, different knowledge sources, including part-of-speech, syntax and semantics, are integrated in a uniform formulation. Based on this formulation, different models for case identification and word-sense disambiguation are derived. In the baseline system, the values of parameters are estimated by using the maximum likelihood estimation method. 
The accuracy rates of 56.3% for parse tree, 77.5% for case and 86.2% for word sense are obtained when the baseline system is tested on a corpus of 800 sentences. Afterwards, to reduce the estimation error caused by the maximum likelihood estimation, the Good-Turing's smoothing method is applied. In addition, a robust discriminative learning algorithm is also derived to minimize the testing set error rate. By applying these algorithms, the accuracy rates of 77% for parse tree, 88.9% for case, and 88.6% for sense are obtained. Compared with the baseline system, 17.4% error reduction rate for sense discrimination, 50.7% for case identification, and 47.4% for parsing accuracy are obtained. These results clearly demonstrate the superiority of the proposed models for deep-structure disambiguation.",,"Statistical Models for Deep-structure Disambiguation. In this paper, an integrated score function is proposed to resolve the ambiguity of deep-structure, which includes the cases of constituents and the senses of words. With the integrated score function, different knowledge sources, including part-of-speech, syntax and semantics, are integrated in a uniform formulation. Based on this formulation, different models for case identification and word-sense disambiguation are derived. In the baseline system, the values of parameters are estimated by using the maximum likelihood estimation method. The accuracy rates of 56.3% for parse tree, 77.5% for case and 86.2% for word sense are obtained when the baseline system is tested on a corpus of 800 sentences. Afterwards, to reduce the estimation error caused by the maximum likelihood estimation, the Good-Turing's smoothing method is applied. In addition, a robust discriminative learning algorithm is also derived to minimize the testing set error rate. By applying these algorithms, the accuracy rates of 77% for parse tree, 88.9% for case, and 88.6% for sense are obtained. Compared with the baseline system, 17.4% error reduction rate for sense discrimination, 50.7% for case identification, and 47.4% for parsing accuracy are obtained. These results clearly demonstrate the superiority of the proposed models for deep-structure disambiguation.",1996
tang-etal-2011-clgvsm,https://aclanthology.org/I11-1065,0,,,,,,,"CLGVSM: Adapting Generalized Vector Space Model to Cross-lingual Document Clustering. Cross-lingual document clustering (CLDC) is the task to automatically organize a large collection of cross-lingual documents into groups considering content or topic. Different from the traditional hard matching strategy, this paper extends traditional generalized vector space model (GVSM) to handle cross-lingual cases, referred to as CLGVSM, by incorporating cross-lingual word similarity measures. With this model, we further compare different word similarity measures in cross-lingual document clustering. To select cross-lingual features effectively, we also propose a softmatching based feature selection method in CLGVSM. Experimental results on benchmarking data set show that (1) the proposed CLGVSM is very effective for cross-document clustering, outperforming the two strong baselines vector space model (VSM) and latent semantic analysis (LSA) significantly; and (2) the new feature selection method can further improve CLGVSM.",{CLGVSM}: Adapting Generalized Vector Space Model to Cross-lingual Document Clustering,"Cross-lingual document clustering (CLDC) is the task to automatically organize a large collection of cross-lingual documents into groups considering content or topic. Different from the traditional hard matching strategy, this paper extends traditional generalized vector space model (GVSM) to handle cross-lingual cases, referred to as CLGVSM, by incorporating cross-lingual word similarity measures. With this model, we further compare different word similarity measures in cross-lingual document clustering. To select cross-lingual features effectively, we also propose a softmatching based feature selection method in CLGVSM. Experimental results on benchmarking data set show that (1) the proposed CLGVSM is very effective for cross-document clustering, outperforming the two strong baselines vector space model (VSM) and latent semantic analysis (LSA) significantly; and (2) the new feature selection method can further improve CLGVSM.",CLGVSM: Adapting Generalized Vector Space Model to Cross-lingual Document Clustering,"Cross-lingual document clustering (CLDC) is the task to automatically organize a large collection of cross-lingual documents into groups considering content or topic. Different from the traditional hard matching strategy, this paper extends traditional generalized vector space model (GVSM) to handle cross-lingual cases, referred to as CLGVSM, by incorporating cross-lingual word similarity measures. With this model, we further compare different word similarity measures in cross-lingual document clustering. To select cross-lingual features effectively, we also propose a softmatching based feature selection method in CLGVSM. Experimental results on benchmarking data set show that (1) the proposed CLGVSM is very effective for cross-document clustering, outperforming the two strong baselines vector space model (VSM) and latent semantic analysis (LSA) significantly; and (2) the new feature selection method can further improve CLGVSM.",This work is partially supported by NSFC (60703051) and MOST (2009DFA12970). We thank the reviewers for the valuable comments.,"CLGVSM: Adapting Generalized Vector Space Model to Cross-lingual Document Clustering. Cross-lingual document clustering (CLDC) is the task to automatically organize a large collection of cross-lingual documents into groups considering content or topic. 
Different from the traditional hard matching strategy, this paper extends traditional generalized vector space model (GVSM) to handle cross-lingual cases, referred to as CLGVSM, by incorporating cross-lingual word similarity measures. With this model, we further compare different word similarity measures in cross-lingual document clustering. To select cross-lingual features effectively, we also propose a softmatching based feature selection method in CLGVSM. Experimental results on benchmarking data set show that (1) the proposed CLGVSM is very effective for cross-document clustering, outperforming the two strong baselines vector space model (VSM) and latent semantic analysis (LSA) significantly; and (2) the new feature selection method can further improve CLGVSM.",2011
guo-etal-2020-text,https://aclanthology.org/2020.coling-main.542,1,,,,health,,,"Text Classification by Contrastive Learning and Cross-lingual Data Augmentation for Alzheimer's Disease Detection. Data scarcity is always a constraint on analyzing speech transcriptions for automatic Alzheimer's disease (AD) detection, especially when the subjects are non-English speakers. To deal with this issue, this paper first proposes a contrastive learning method to obtain effective representations for text classification based on monolingual embeddings of BERT. Furthermore, a cross-lingual data augmentation method is designed by building autoencoders to learn the text representations shared by both languages. Experiments on a Mandarin AD corpus show that the contrastive learning method can achieve better detection accuracy than conventional CNN-based and BERTbased methods. Our cross-lingual data augmentation method also outperforms other compared methods when using another English AD corpus for augmentation. Finally, a best detection accuracy of 81.6% is obtained by our proposed methods on the Mandarin AD corpus.",Text Classification by Contrastive Learning and Cross-lingual Data Augmentation for {A}lzheimer{'}s Disease Detection,"Data scarcity is always a constraint on analyzing speech transcriptions for automatic Alzheimer's disease (AD) detection, especially when the subjects are non-English speakers. To deal with this issue, this paper first proposes a contrastive learning method to obtain effective representations for text classification based on monolingual embeddings of BERT. Furthermore, a cross-lingual data augmentation method is designed by building autoencoders to learn the text representations shared by both languages. Experiments on a Mandarin AD corpus show that the contrastive learning method can achieve better detection accuracy than conventional CNN-based and BERTbased methods. Our cross-lingual data augmentation method also outperforms other compared methods when using another English AD corpus for augmentation. Finally, a best detection accuracy of 81.6% is obtained by our proposed methods on the Mandarin AD corpus.",Text Classification by Contrastive Learning and Cross-lingual Data Augmentation for Alzheimer's Disease Detection,"Data scarcity is always a constraint on analyzing speech transcriptions for automatic Alzheimer's disease (AD) detection, especially when the subjects are non-English speakers. To deal with this issue, this paper first proposes a contrastive learning method to obtain effective representations for text classification based on monolingual embeddings of BERT. Furthermore, a cross-lingual data augmentation method is designed by building autoencoders to learn the text representations shared by both languages. Experiments on a Mandarin AD corpus show that the contrastive learning method can achieve better detection accuracy than conventional CNN-based and BERTbased methods. Our cross-lingual data augmentation method also outperforms other compared methods when using another English AD corpus for augmentation. Finally, a best detection accuracy of 81.6% is obtained by our proposed methods on the Mandarin AD corpus.",,"Text Classification by Contrastive Learning and Cross-lingual Data Augmentation for Alzheimer's Disease Detection. Data scarcity is always a constraint on analyzing speech transcriptions for automatic Alzheimer's disease (AD) detection, especially when the subjects are non-English speakers. 
To deal with this issue, this paper first proposes a contrastive learning method to obtain effective representations for text classification based on monolingual embeddings of BERT. Furthermore, a cross-lingual data augmentation method is designed by building autoencoders to learn the text representations shared by both languages. Experiments on a Mandarin AD corpus show that the contrastive learning method can achieve better detection accuracy than conventional CNN-based and BERTbased methods. Our cross-lingual data augmentation method also outperforms other compared methods when using another English AD corpus for augmentation. Finally, a best detection accuracy of 81.6% is obtained by our proposed methods on the Mandarin AD corpus.",2020
schabes-etal-1988-parsing,https://aclanthology.org/C88-2121,0,,,,,,,Parsing Strategies with `Lexicalized' Grammars: Application to Tree Adjoining Grammars. In this paper we present a general parsing strategy that arose from the development of an Earley-type parsing algorithm for TAGs (Schabes and Joshi 1988) and from recent linguistic work in TAGs (Abeille 1988).,Parsing Strategies with {`}Lexicalized{'} Grammars: Application to {T}ree {A}djoining {G}rammars,In this paper we present a general parsing strategy that arose from the development of an Earley-type parsing algorithm for TAGs (Schabes and Joshi 1988) and from recent linguistic work in TAGs (Abeille 1988).,Parsing Strategies with `Lexicalized' Grammars: Application to Tree Adjoining Grammars,In this paper we present a general parsing strategy that arose from the development of an Earley-type parsing algorithm for TAGs (Schabes and Joshi 1988) and from recent linguistic work in TAGs (Abeille 1988).,,Parsing Strategies with `Lexicalized' Grammars: Application to Tree Adjoining Grammars. In this paper we present a general parsing strategy that arose from the development of an Earley-type parsing algorithm for TAGs (Schabes and Joshi 1988) and from recent linguistic work in TAGs (Abeille 1988).,1988
loukachevitch-etal-2018-comparing,https://aclanthology.org/2018.gwc-1.5,0,,,,,,,"Comparing Two Thesaurus Representations for Russian. In the paper we presented a new Russian wordnet, RuWordNet, which was semi-automatically obtained by transformation of the existing Russian thesaurus RuThes. At the first step, the basic structure of wordnets was reproduced: synsets' hierarchy for each part of speech and the basic set of relations between synsets (hyponym-hypernym, part-whole, antonyms). At the second stage, we added causation, entailment and domain relations between synsets. Also derivation relations were established for single words and the component structure for phrases included in RuWordNet. The described procedure of transformation highlights the specific features of each type of thesaurus representations.",Comparing Two Thesaurus Representations for {R}ussian,"In the paper we presented a new Russian wordnet, RuWordNet, which was semi-automatically obtained by transformation of the existing Russian thesaurus RuThes. At the first step, the basic structure of wordnets was reproduced: synsets' hierarchy for each part of speech and the basic set of relations between synsets (hyponym-hypernym, part-whole, antonyms). At the second stage, we added causation, entailment and domain relations between synsets. Also derivation relations were established for single words and the component structure for phrases included in RuWordNet. The described procedure of transformation highlights the specific features of each type of thesaurus representations.",Comparing Two Thesaurus Representations for Russian,"In the paper we presented a new Russian wordnet, RuWordNet, which was semi-automatically obtained by transformation of the existing Russian thesaurus RuThes. At the first step, the basic structure of wordnets was reproduced: synsets' hierarchy for each part of speech and the basic set of relations between synsets (hyponym-hypernym, part-whole, antonyms). At the second stage, we added causation, entailment and domain relations between synsets. Also derivation relations were established for single words and the component structure for phrases included in RuWordNet. The described procedure of transformation highlights the specific features of each type of thesaurus representations.","This work is partially supported by Russian Scientific Foundation, according to the research project No. 16-18-020.","Comparing Two Thesaurus Representations for Russian. In the paper we presented a new Russian wordnet, RuWordNet, which was semi-automatically obtained by transformation of the existing Russian thesaurus RuThes. At the first step, the basic structure of wordnets was reproduced: synsets' hierarchy for each part of speech and the basic set of relations between synsets (hyponym-hypernym, part-whole, antonyms). At the second stage, we added causation, entailment and domain relations between synsets. Also derivation relations were established for single words and the component structure for phrases included in RuWordNet. The described procedure of transformation highlights the specific features of each type of thesaurus representations.",2018
bhandwaldar-zadrozny-2018-uncc,https://aclanthology.org/W18-5308,1,,,,health,,,"UNCC QA: Biomedical Question Answering system. In this paper, we detail our submission to the BioASQ competition's Biomedical Semantic Question and Answering task. Our system uses extractive summarization techniques to generate answers and has scored highest ROUGE-2 and ROUGE-SU4 in all test batch sets. Our contributions are a named-entity based method for answering factoid and list questions, and an extractive summarization technique for building paragraph-sized summaries, based on lexical chains. Our system got highest ROUGE-2 and ROUGE-SU4 scores for ideal-type answers in all test batch sets. We also discuss the limitations of the described system, such as the lack of evaluation on other criteria (e.g. manual). Also, for factoid- and list-type questions our system got low accuracy (which suggests that our algorithm needs to improve in the ranking of entities).",{UNCC} {QA}: Biomedical Question Answering system,"In this paper, we detail our submission to the BioASQ competition's Biomedical Semantic Question and Answering task. Our system uses extractive summarization techniques to generate answers and has scored highest ROUGE-2 and ROUGE-SU4 in all test batch sets. Our contributions are a named-entity based method for answering factoid and list questions, and an extractive summarization technique for building paragraph-sized summaries, based on lexical chains. Our system got highest ROUGE-2 and ROUGE-SU4 scores for ideal-type answers in all test batch sets. We also discuss the limitations of the described system, such as the lack of evaluation on other criteria (e.g. manual). Also, for factoid- and list-type questions our system got low accuracy (which suggests that our algorithm needs to improve in the ranking of entities).",UNCC QA: Biomedical Question Answering system,"In this paper, we detail our submission to the BioASQ competition's Biomedical Semantic Question and Answering task. Our system uses extractive summarization techniques to generate answers and has scored highest ROUGE-2 and ROUGE-SU4 in all test batch sets. Our contributions are a named-entity based method for answering factoid and list questions, and an extractive summarization technique for building paragraph-sized summaries, based on lexical chains. Our system got highest ROUGE-2 and ROUGE-SU4 scores for ideal-type answers in all test batch sets. We also discuss the limitations of the described system, such as the lack of evaluation on other criteria (e.g. manual). Also, for factoid- and list-type questions our system got low accuracy (which suggests that our algorithm needs to improve in the ranking of entities).",Acknowledgment. We would like to thank the referees for their comments and suggestions. All the remaining faults are ours.,"UNCC QA: Biomedical Question Answering system. In this paper, we detail our submission to the BioASQ competition's Biomedical Semantic Question and Answering task. Our system uses extractive summarization techniques to generate answers and has scored highest ROUGE-2 and ROUGE-SU4 in all test batch sets. Our contributions are a named-entity based method for answering factoid and list questions, and an extractive summarization technique for building paragraph-sized summaries, based on lexical chains. Our system got highest ROUGE-2 and ROUGE-SU4 scores for ideal-type answers in all test batch sets. We also discuss the limitations of the described system, such as the lack of evaluation on other criteria (e.g. manual). 
Also, for factoid- and list-type questions our system got low accuracy (which suggests that our algorithm needs to improve in the ranking of entities).",2018
chatzichrisafis-etal-2006-evaluating,https://aclanthology.org/W06-3702,1,,,,health,,,"Evaluating Task Performance for a Unidirectional Controlled Language Medical Speech Translation System. We present a task-level evaluation of the French to English version of MedSLT, a medium-vocabulary unidirectional controlled language medical speech translation system designed for doctor-patient diagnosis interviews. Our main goal was to establish task performance levels of novice users and compare them to expert users. Tests were carried out on eight medical students with no previous exposure to the system, with each student using the system for a total of three sessions. By the end of the third session, all the students were able to use the system confidently, with an average task completion time of about 4 minutes.",Evaluating Task Performance for a Unidirectional Controlled Language Medical Speech Translation System,"We present a task-level evaluation of the French to English version of MedSLT, a medium-vocabulary unidirectional controlled language medical speech translation system designed for doctor-patient diagnosis interviews. Our main goal was to establish task performance levels of novice users and compare them to expert users. Tests were carried out on eight medical students with no previous exposure to the system, with each student using the system for a total of three sessions. By the end of the third session, all the students were able to use the system confidently, with an average task completion time of about 4 minutes.",Evaluating Task Performance for a Unidirectional Controlled Language Medical Speech Translation System,"We present a task-level evaluation of the French to English version of MedSLT, a medium-vocabulary unidirectional controlled language medical speech translation system designed for doctor-patient diagnosis interviews. Our main goal was to establish task performance levels of novice users and compare them to expert users. Tests were carried out on eight medical students with no previous exposure to the system, with each student using the system for a total of three sessions. By the end of the third session, all the students were able to use the system confidently, with an average task completion time of about 4 minutes.","We would like to thank Agnes Lisowska, Alia Rahal, and Nancy Underwood for being impartial judges over our system's results.This work was funded by the Swiss National Science Foundation.","Evaluating Task Performance for a Unidirectional Controlled Language Medical Speech Translation System. We present a task-level evaluation of the French to English version of MedSLT, a medium-vocabulary unidirectional controlled language medical speech translation system designed for doctor-patient diagnosis interviews. Our main goal was to establish task performance levels of novice users and compare them to expert users. Tests were carried out on eight medical students with no previous exposure to the system, with each student using the system for a total of three sessions. By the end of the third session, all the students were able to use the system confidently, with an average task completion time of about 4 minutes.",2006
kunath-weinberger-2010-wisdom,https://aclanthology.org/W10-0726,0,,,,,,,"The Wisdom of the Crowd's Ear: Speech Accent Rating and Annotation with Amazon Mechanical Turk. Human listeners can almost instantaneously judge whether or not another speaker is part of their speech community. The basis of this judgment is the speaker's accent. Even though humans judge speech accents with ease, it has been tremendously difficult to automatically evaluate and rate accents in any consistent manner. This paper describes an experiment using the Amazon Mechanical Turk to develop an automatic speech accent rating dataset.",The Wisdom of the Crowd{'}s Ear: Speech Accent Rating and Annotation with {A}mazon {M}echanical {T}urk,"Human listeners can almost instantaneously judge whether or not another speaker is part of their speech community. The basis of this judgment is the speaker's accent. Even though humans judge speech accents with ease, it has been tremendously difficult to automatically evaluate and rate accents in any consistent manner. This paper describes an experiment using the Amazon Mechanical Turk to develop an automatic speech accent rating dataset.",The Wisdom of the Crowd's Ear: Speech Accent Rating and Annotation with Amazon Mechanical Turk,"Human listeners can almost instantaneously judge whether or not another speaker is part of their speech community. The basis of this judgment is the speaker's accent. Even though humans judge speech accents with ease, it has been tremendously difficult to automatically evaluate and rate accents in any consistent manner. This paper describes an experiment using the Amazon Mechanical Turk to develop an automatic speech accent rating dataset.",The authors would like to thank Amazon.com and the workshop organizers for providing MTurk credits to perform this research.,"The Wisdom of the Crowd's Ear: Speech Accent Rating and Annotation with Amazon Mechanical Turk. Human listeners can almost instantaneously judge whether or not another speaker is part of their speech community. The basis of this judgment is the speaker's accent. Even though humans judge speech accents with ease, it has been tremendously difficult to automatically evaluate and rate accents in any consistent manner. This paper describes an experiment using the Amazon Mechanical Turk to develop an automatic speech accent rating dataset.",2010
lin-etal-2021-rumor,https://aclanthology.org/2021.emnlp-main.786,1,,,,disinformation_and_fake_news,,,"Rumor Detection on Twitter with Claim-Guided Hierarchical Graph Attention Networks. Rumors are rampant in the era of social media. Conversation structures provide valuable clues to differentiate between real and fake claims. However, existing rumor detection methods are either limited to the strict relation of user responses or oversimplify the conversation structure. In this study, to substantially reinforce the interaction of user opinions while alleviating the negative impact imposed by irrelevant posts, we first represent the conversation thread as an undirected interaction graph. We then present a Claim-guided Hierarchical Graph Attention Network for rumor classification, which enhances the representation learning for responsive posts considering the entire social contexts and attends over the posts that can semantically infer the target claim. Extensive experiments on three Twitter datasets demonstrate that our rumor detection method achieves much better performance than state-of-the-art methods and exhibits a superior capacity for detecting rumors at early stages.",Rumor Detection on {T}witter with Claim-Guided Hierarchical Graph Attention Networks,"Rumors are rampant in the era of social media. Conversation structures provide valuable clues to differentiate between real and fake claims. However, existing rumor detection methods are either limited to the strict relation of user responses or oversimplify the conversation structure. In this study, to substantially reinforce the interaction of user opinions while alleviating the negative impact imposed by irrelevant posts, we first represent the conversation thread as an undirected interaction graph. We then present a Claim-guided Hierarchical Graph Attention Network for rumor classification, which enhances the representation learning for responsive posts considering the entire social contexts and attends over the posts that can semantically infer the target claim. Extensive experiments on three Twitter datasets demonstrate that our rumor detection method achieves much better performance than state-of-the-art methods and exhibits a superior capacity for detecting rumors at early stages.",Rumor Detection on Twitter with Claim-Guided Hierarchical Graph Attention Networks,"Rumors are rampant in the era of social media. Conversation structures provide valuable clues to differentiate between real and fake claims. However, existing rumor detection methods are either limited to the strict relation of user responses or oversimplify the conversation structure. In this study, to substantially reinforce the interaction of user opinions while alleviating the negative impact imposed by irrelevant posts, we first represent the conversation thread as an undirected interaction graph. We then present a Claim-guided Hierarchical Graph Attention Network for rumor classification, which enhances the representation learning for responsive posts considering the entire social contexts and attends over the posts that can semantically infer the target claim. Extensive experiments on three Twitter datasets demonstrate that our rumor detection method achieves much better performance than state-of-the-art methods and exhibits a superior capacity for detecting rumors at early stages.",We thank all anonymous reviewers for their helpful comments and suggestions. 
This work was partially supported by the Foundation of Guizhou Provincial Key Laboratory of Public Big Data (No.2019BDKFJJ002). Jing Ma was supported by HKBU direct grant (Ref. AIS 21-22/02).,"Rumor Detection on Twitter with Claim-Guided Hierarchical Graph Attention Networks. Rumors are rampant in the era of social media. Conversation structures provide valuable clues to differentiate between real and fake claims. However, existing rumor detection methods are either limited to the strict relation of user responses or oversimplify the conversation structure. In this study, to substantially reinforce the interaction of user opinions while alleviating the negative impact imposed by irrelevant posts, we first represent the conversation thread as an undirected interaction graph. We then present a Claim-guided Hierarchical Graph Attention Network for rumor classification, which enhances the representation learning for responsive posts considering the entire social contexts and attends over the posts that can semantically infer the target claim. Extensive experiments on three Twitter datasets demonstrate that our rumor detection method achieves much better performance than state-of-the-art methods and exhibits a superior capacity for detecting rumors at early stages.",2021
al-negheimish-etal-2021-numerical,https://aclanthology.org/2021.emnlp-main.759,0,,,,,,,"Numerical reasoning in machine reading comprehension tasks: are we there yet?. Numerical reasoning based machine reading comprehension is a task that involves reading comprehension along with using arithmetic operations such as addition, subtraction, sorting, and counting. The DROP benchmark (Dua et al., 2019) is a recent dataset that has inspired the design of NLP models aimed at solving this task. The current standings of these models in the DROP leaderboard, over standard metrics, suggest that the models have achieved near-human performance. However, does this mean that these models have learned to reason? In this paper, we present a controlled study on some of the top-performing model architectures for the task of numerical reasoning. Our observations suggest that the standard metrics are incapable of measuring progress towards such tasks.",Numerical reasoning in machine reading comprehension tasks: are we there yet?,"Numerical reasoning based machine reading comprehension is a task that involves reading comprehension along with using arithmetic operations such as addition, subtraction, sorting, and counting. The DROP benchmark (Dua et al., 2019) is a recent dataset that has inspired the design of NLP models aimed at solving this task. The current standings of these models in the DROP leaderboard, over standard metrics, suggest that the models have achieved near-human performance. However, does this mean that these models have learned to reason? In this paper, we present a controlled study on some of the top-performing model architectures for the task of numerical reasoning. Our observations suggest that the standard metrics are incapable of measuring progress towards such tasks.",Numerical reasoning in machine reading comprehension tasks: are we there yet?,"Numerical reasoning based machine reading comprehension is a task that involves reading comprehension along with using arithmetic operations such as addition, subtraction, sorting, and counting. The DROP benchmark (Dua et al., 2019) is a recent dataset that has inspired the design of NLP models aimed at solving this task. The current standings of these models in the DROP leaderboard, over standard metrics, suggest that the models have achieved near-human performance. However, does this mean that these models have learned to reason? In this paper, we present a controlled study on some of the top-performing model architectures for the task of numerical reasoning. Our observations suggest that the standard metrics are incapable of measuring progress towards such tasks.","This research has been supported by a PhD scholarship from King Saud University. We thank our anonymous reviewers for their constructive comments and suggestions, and SPIKE research group members for their feedback throughout this work.","Numerical reasoning in machine reading comprehension tasks: are we there yet?. Numerical reasoning based machine reading comprehension is a task that involves reading comprehension along with using arithmetic operations such as addition, subtraction, sorting, and counting. The DROP benchmark (Dua et al., 2019) is a recent dataset that has inspired the design of NLP models aimed at solving this task. The current standings of these models in the DROP leaderboard, over standard metrics, suggest that the models have achieved near-human performance. However, does this mean that these models have learned to reason? 
In this paper, we present a controlled study on some of the top-performing model architectures for the task of numerical reasoning. Our observations suggest that the standard metrics are incapable of measuring progress towards such tasks.",2021
ishizaki-kato-1998-exploring,https://aclanthology.org/C98-1092,0,,,,,,,"Exploring the Characteristics of Multi-Party Dialogues. This paper describes novel results on the characteristics of three-party dialogues by quantitatively comparing them with those of two-party dialogues. In previous dialogue research, two-party dialogues have mainly been focussed on, because data collection of multi-party dialogues is difficult and there are very few theories handling them, although research on multi-party dialogues is expected to be of much use in building computer-supported collaborative work environments and computer-assisted instruction systems. In this paper, we firstly describe our data collection method for multi-party dialogues using a meeting scheduling task, which enables us to compare three-party dialogues with two-party ones. Then we quantitatively compare these two kinds of dialogues, such as the number of characters and turns and patterns of information exchanges. Lastly we show that patterns of information exchanges in speaker alternation and initiative-taking can be used to characterise three-party dialogues.",Exploring the Characteristics of Multi-Party Dialogues,"This paper describes novel results on the characteristics of three-party dialogues by quantitatively comparing them with those of two-party dialogues. In previous dialogue research, two-party dialogues have mainly been focussed on, because data collection of multi-party dialogues is difficult and there are very few theories handling them, although research on multi-party dialogues is expected to be of much use in building computer-supported collaborative work environments and computer-assisted instruction systems. In this paper, we firstly describe our data collection method for multi-party dialogues using a meeting scheduling task, which enables us to compare three-party dialogues with two-party ones. Then we quantitatively compare these two kinds of dialogues, such as the number of characters and turns and patterns of information exchanges. Lastly we show that patterns of information exchanges in speaker alternation and initiative-taking can be used to characterise three-party dialogues.",Exploring the Characteristics of Multi-Party Dialogues,"This paper describes novel results on the characteristics of three-party dialogues by quantitatively comparing them with those of two-party dialogues. In previous dialogue research, two-party dialogues have mainly been focussed on, because data collection of multi-party dialogues is difficult and there are very few theories handling them, although research on multi-party dialogues is expected to be of much use in building computer-supported collaborative work environments and computer-assisted instruction systems. In this paper, we firstly describe our data collection method for multi-party dialogues using a meeting scheduling task, which enables us to compare three-party dialogues with two-party ones. Then we quantitatively compare these two kinds of dialogues, such as the number of characters and turns and patterns of information exchanges. Lastly we show that patterns of information exchanges in speaker alternation and initiative-taking can be used to characterise three-party dialogues.",,"Exploring the Characteristics of Multi-Party Dialogues. This paper describes novel results on the characteristics of three-party dialogues by quantitatively comparing them with those of two-party dialogues. 
In previous dialogue research, two-party dialogues have mainly been focussed on, because data collection of multi-party dialogues is difficult and there are very few theories handling them, although research on multi-party dialogues is expected to be of much use in building computer-supported collaborative work environments and computer-assisted instruction systems. In this paper, we firstly describe our data collection method for multi-party dialogues using a meeting scheduling task, which enables us to compare three-party dialogues with two-party ones. Then we quantitatively compare these two kinds of dialogues, such as the number of characters and turns and patterns of information exchanges. Lastly we show that patterns of information exchanges in speaker alternation and initiative-taking can be used to characterise three-party dialogues.",1998
berovic-etal-2012-croatian,http://www.lrec-conf.org/proceedings/lrec2012/pdf/719_Paper.pdf,0,,,,,,,"Croatian Dependency Treebank: Recent Development and Initial Experiments. We present the current state of development of the Croatian Dependency Treebank-with special emphasis on adapting the Prague Dependency Treebank formalism to Croatian language specifics-and illustrate its possible applications in an experiment with dependency parsing using MaltParser. The treebank currently contains approximately 2870 sentences, out of which 2699 sentences and 66930 tokens were used in this experiment. Three linear-time projective algorithms implemented by the MaltParser system-Nivre eager, Nivre standard and stack projective-running on default settings were used in the experiment. The highest performing system, implementing the Nivre eager algorithm, scored (LAS 71.31 UAS 80.93 LA 83.87) within our experiment setup. The results obtained serve as an illustration of the treebank's usefulness in natural language processing research and as a baseline for further research in dependency parsing of Croatian.",{C}roatian Dependency Treebank: Recent Development and Initial Experiments,"We present the current state of development of the Croatian Dependency Treebank-with special emphasis on adapting the Prague Dependency Treebank formalism to Croatian language specifics-and illustrate its possible applications in an experiment with dependency parsing using MaltParser. The treebank currently contains approximately 2870 sentences, out of which 2699 sentences and 66930 tokens were used in this experiment. Three linear-time projective algorithms implemented by the MaltParser system-Nivre eager, Nivre standard and stack projective-running on default settings were used in the experiment. The highest performing system, implementing the Nivre eager algorithm, scored (LAS 71.31 UAS 80.93 LA 83.87) within our experiment setup. The results obtained serve as an illustration of the treebank's usefulness in natural language processing research and as a baseline for further research in dependency parsing of Croatian.",Croatian Dependency Treebank: Recent Development and Initial Experiments,"We present the current state of development of the Croatian Dependency Treebank-with special emphasis on adapting the Prague Dependency Treebank formalism to Croatian language specifics-and illustrate its possible applications in an experiment with dependency parsing using MaltParser. The treebank currently contains approximately 2870 sentences, out of which 2699 sentences and 66930 tokens were used in this experiment. Three linear-time projective algorithms implemented by the MaltParser system-Nivre eager, Nivre standard and stack projective-running on default settings were used in the experiment. The highest performing system, implementing the Nivre eager algorithm, scored (LAS 71.31 UAS 80.93 LA 83.87) within our experiment setup. 
The results obtained serve as an illustration of the treebank's usefulness in natural language processing research and as a baseline for further research in dependency parsing of Croatian.","Special thanks to our colleagues Tena Gnjatović and Ida Raffaelli from the Department of Linguistics, Faculty of Humanities and Social Sciences, University of Zagreb, for substantial contributions to the process of manual annotation of sentences for HOBS. The results presented here were partially obtained from research within projects ACCURAT (FP7, grant 248347), CESAR (ICT-PSP, grant 271022) funded by EC, and partially from projects 130-1300646-0645 and 130-1300646-1776 funded by the Ministry of Science, Education and Sports of the Republic of Croatia.","Croatian Dependency Treebank: Recent Development and Initial Experiments. We present the current state of development of the Croatian Dependency Treebank-with special emphasis on adapting the Prague Dependency Treebank formalism to Croatian language specifics-and illustrate its possible applications in an experiment with dependency parsing using MaltParser. The treebank currently contains approximately 2870 sentences, out of which 2699 sentences and 66930 tokens were used in this experiment. Three linear-time projective algorithms implemented by the MaltParser system-Nivre eager, Nivre standard and stack projective-running on default settings were used in the experiment. The highest performing system, implementing the Nivre eager algorithm, scored (LAS 71.31 UAS 80.93 LA 83.87) within our experiment setup. The results obtained serve as an illustration of the treebank's usefulness in natural language processing research and as a baseline for further research in dependency parsing of Croatian.",2012
arsenos-siolas-2020-ntuaails,https://aclanthology.org/2020.semeval-1.195,1,,,,disinformation_and_fake_news,,,"NTUAAILS at SemEval-2020 Task 11: Propaganda Detection and Classification with biLSTMs and ELMo. This paper describes the NTUAAILS submission for SemEval 2020 Task 11 Detection of Propaganda Techniques in News Articles. This task comprises two different sub-tasks, namely A: Span Identification (SI), B: Technique Classification (TC). The goal for the SI sub-task is to identify specific fragments, in a given plain text, containing at least one propaganda technique. The TC sub-task aims to identify the applied propaganda technique in a given text fragment. A different model was trained for each sub-task. Our best performing system for the SI task consists of pre-trained ELMo word embeddings followed by a residual bidirectional LSTM network. For the TC sub-task, pre-trained word embeddings from GloVe were fed to a bidirectional LSTM neural network. The models achieved rank 28 among 36 teams with F1 score of 0.335 and rank 25 among 31 teams with 0.463 F1 score for SI and TC sub-tasks respectively. Our results indicate that the proposed deep learning models, although relatively simple in architecture and fast to train, achieve satisfactory results in the tasks at hand.",{NTUAAILS} at {S}em{E}val-2020 Task 11: Propaganda Detection and Classification with bi{LSTM}s and {ELM}o,"This paper describes the NTUAAILS submission for SemEval 2020 Task 11 Detection of Propaganda Techniques in News Articles. This task comprises two different sub-tasks, namely A: Span Identification (SI), B: Technique Classification (TC). The goal for the SI sub-task is to identify specific fragments, in a given plain text, containing at least one propaganda technique. The TC sub-task aims to identify the applied propaganda technique in a given text fragment. A different model was trained for each sub-task. Our best performing system for the SI task consists of pre-trained ELMo word embeddings followed by a residual bidirectional LSTM network. For the TC sub-task, pre-trained word embeddings from GloVe were fed to a bidirectional LSTM neural network. The models achieved rank 28 among 36 teams with F1 score of 0.335 and rank 25 among 31 teams with 0.463 F1 score for SI and TC sub-tasks respectively. Our results indicate that the proposed deep learning models, although relatively simple in architecture and fast to train, achieve satisfactory results in the tasks at hand.",NTUAAILS at SemEval-2020 Task 11: Propaganda Detection and Classification with biLSTMs and ELMo,"This paper describes the NTUAAILS submission for SemEval 2020 Task 11 Detection of Propaganda Techniques in News Articles. This task comprises two different sub-tasks, namely A: Span Identification (SI), B: Technique Classification (TC). The goal for the SI sub-task is to identify specific fragments, in a given plain text, containing at least one propaganda technique. The TC sub-task aims to identify the applied propaganda technique in a given text fragment. A different model was trained for each sub-task. Our best performing system for the SI task consists of pre-trained ELMo word embeddings followed by a residual bidirectional LSTM network. For the TC sub-task, pre-trained word embeddings from GloVe were fed to a bidirectional LSTM neural network. The models achieved rank 28 among 36 teams with F1 score of 0.335 and rank 25 among 31 teams with 0.463 F1 score for SI and TC sub-tasks respectively. 
Our results indicate that the proposed deep learning models, although relatively simple in architecture and fast to train, achieve satisfactory results in the tasks at hand.",,"NTUAAILS at SemEval-2020 Task 11: Propaganda Detection and Classification with biLSTMs and ELMo. This paper describes the NTUAAILS submission for SemEval 2020 Task 11 Detection of Propaganda Techniques in News Articles. This task comprises two different sub-tasks, namely A: Span Identification (SI), B: Technique Classification (TC). The goal for the SI sub-task is to identify specific fragments, in a given plain text, containing at least one propaganda technique. The TC sub-task aims to identify the applied propaganda technique in a given text fragment. A different model was trained for each sub-task. Our best performing system for the SI task consists of pre-trained ELMo word embeddings followed by a residual bidirectional LSTM network. For the TC sub-task, pre-trained word embeddings from GloVe were fed to a bidirectional LSTM neural network. The models achieved rank 28 among 36 teams with F1 score of 0.335 and rank 25 among 31 teams with 0.463 F1 score for SI and TC sub-tasks respectively. Our results indicate that the proposed deep learning models, although relatively simple in architecture and fast to train, achieve satisfactory results in the tasks at hand.",2020
takeuchi-etal-2004-construction,https://aclanthology.org/W04-1814,0,,,,,,,Construction of Grammar Based Term Extraction Model for Japanese. ,Construction of Grammar Based Term Extraction Model for {J}apanese,,Construction of Grammar Based Term Extraction Model for Japanese,,,Construction of Grammar Based Term Extraction Model for Japanese. ,2004
shapiro-1978-path-based,https://aclanthology.org/T78-1031,0,,,,,,,"Path-Based and Node-Based Inference in Semantic Networks. Two styles of performing inference in semantic networks are presented and compared. Path-based inference allows an arc or a path of arcs between two given nodes to be inferred from the existence of another specified path between the same two nodes. Path-based inference rules may be written using a binary relational calculus notation. Node-based inference allows a structure of nodes to be inferred from the existence of an instance of a pattern of node structures. Node-based inference rules can be constructed in a semantic network using a variant of a predicate calculus notation. Path-based inference is more efficient, while node-based inference is more general. A method is described of combining the two styles in a single system in order to take advantage of the strengths of each. Applications of path-based inference rules to the representation of the extensional equivalence of intensional concepts, and to the explication of inheritance in hierarchies are sketched. I.",Path-Based and Node-Based Inference in Semantic Networks,"Two styles of performing inference in semantic networks are presented and compared. Path-based inference allows an arc or a path of arcs between two given nodes to be inferred from the existence of another specified path between the same two nodes. Path-based inference rules may be written using a binary relational calculus notation. Node-based inference allows a structure of nodes to be inferred from the existence of an instance of a pattern of node structures. Node-based inference rules can be constructed in a semantic network using a variant of a predicate calculus notation. Path-based inference is more efficient, while node-based inference is more general. A method is described of combining the two styles in a single system in order to take advantage of the strengths of each. Applications of path-based inference rules to the representation of the extensional equivalence of intensional concepts, and to the explication of inheritance in hierarchies are sketched. I.",Path-Based and Node-Based Inference in Semantic Networks,"Two styles of performing inference in semantic networks are presented and compared. Path-based inference allows an arc or a path of arcs between two given nodes to be inferred from the existence of another specified path between the same two nodes. Path-based inference rules may be written using a binary relational calculus notation. Node-based inference allows a structure of nodes to be inferred from the existence of an instance of a pattern of node structures. Node-based inference rules can be constructed in a semantic network using a variant of a predicate calculus notation. Path-based inference is more efficient, while node-based inference is more general. A method is described of combining the two styles in a single system in order to take advantage of the strengths of each. Applications of path-based inference rules to the representation of the extensional equivalence of intensional concepts, and to the explication of inheritance in hierarchies are sketched. I.",,"Path-Based and Node-Based Inference in Semantic Networks. Two styles of performing inference in semantic networks are presented and compared. Path-based inference allows an arc or a path of arcs between two given nodes to be inferred from the existence of another specified path between the same two nodes. 
Path-based inference rules may be written using a binary relational calculus notation. Node-based inference allows a structure of nodes to be inferred from the existence of an instance of a pattern of node structures. Node-based inference rules can be constructed in a semantic network using a variant of a predicate calculus notation. Path-based inference is more efficient, while node-based inference is more general. A method is described of combining the two styles in a single system in order to take advantage of the strengths of each. Applications of path-based inference rules to the representation of the extensional equivalence of intensional concepts, and to the explication of inheritance in hierarchies are sketched. I.",1978
shnarch-etal-2020-unsupervised,https://aclanthology.org/2020.findings-emnlp.243,0,,,,,,,"Unsupervised Expressive Rules Provide Explainability and Assist Human Experts Grasping New Domains. Approaching new data can be quite deterrent; you do not know how your categories of interest are realized in it, commonly, there is no labeled data at hand, and the performance of domain adaptation methods is unsatisfactory. Aiming to assist domain experts in their first steps into a new task over a new corpus, we present an unsupervised approach to reveal complex rules which cluster the unexplored corpus by its prominent categories (or facets). These rules are human-readable, thus providing an important ingredient which has become in short supply lately-explainability. Each rule provides an explanation for the commonality of all the texts it clusters together. We present an extensive evaluation of the usefulness of these rules in identifying target categories, as well as a user study which assesses their interpretability.",Unsupervised Expressive Rules Provide Explainability and Assist Human Experts Grasping New Domains,"Approaching new data can be quite deterrent; you do not know how your categories of interest are realized in it, commonly, there is no labeled data at hand, and the performance of domain adaptation methods is unsatisfactory. Aiming to assist domain experts in their first steps into a new task over a new corpus, we present an unsupervised approach to reveal complex rules which cluster the unexplored corpus by its prominent categories (or facets). These rules are human-readable, thus providing an important ingredient which has become in short supply lately-explainability. Each rule provides an explanation for the commonality of all the texts it clusters together. We present an extensive evaluation of the usefulness of these rules in identifying target categories, as well as a user study which assesses their interpretability.",Unsupervised Expressive Rules Provide Explainability and Assist Human Experts Grasping New Domains,"Approaching new data can be quite deterrent; you do not know how your categories of interest are realized in it, commonly, there is no labeled data at hand, and the performance of domain adaptation methods is unsatisfactory. Aiming to assist domain experts in their first steps into a new task over a new corpus, we present an unsupervised approach to reveal complex rules which cluster the unexplored corpus by its prominent categories (or facets). These rules are human-readable, thus providing an important ingredient which has become in short supply lately-explainability. Each rule provides an explanation for the commonality of all the texts it clusters together. We present an extensive evaluation of the usefulness of these rules in identifying target categories, as well as a user study which assesses their interpretability.",,"Unsupervised Expressive Rules Provide Explainability and Assist Human Experts Grasping New Domains. Approaching new data can be quite deterrent; you do not know how your categories of interest are realized in it, commonly, there is no labeled data at hand, and the performance of domain adaptation methods is unsatisfactory. Aiming to assist domain experts in their first steps into a new task over a new corpus, we present an unsupervised approach to reveal complex rules which cluster the unexplored corpus by its prominent categories (or facets). 
These rules are human-readable, thus providing an important ingredient which has become in short supply lately-explainability. Each rule provides an explanation for the commonality of all the texts it clusters together. We present an extensive evaluation of the usefulness of these rules in identifying target categories, as well as a user study which assesses their interpretability.",2020
yu-etal-2018-syntaxsqlnet,https://aclanthology.org/D18-1193,0,,,,,,,"SyntaxSQLNet: Syntax Tree Networks for Complex and Cross-Domain Text-to-SQL Task. Most existing studies in text-to-SQL tasks do not require generating complex SQL queries with multiple clauses or sub-queries, and generalizing to new, unseen databases. In this paper we propose SyntaxSQLNet, a syntax tree network to address the complex and cross-domain text-to-SQL generation task. SyntaxSQLNet employs a SQL specific syntax tree-based decoder with SQL generation path history and table-aware column attention encoders. We evaluate SyntaxSQLNet on a new large-scale text-to-SQL corpus containing databases with multiple tables and complex SQL queries containing multiple SQL clauses and nested queries. We use a database split setting where databases in the test set are unseen during training. Experimental results show that SyntaxSQLNet can handle a significantly greater number of complex SQL examples than prior work, outperforming the previous state-of-the-art model by 9.5% in exact matching accuracy. To our knowledge, we are the first to study this complex text-to-SQL task. Our task and models with the latest updates are available at https://yale-lily.github.io/seq2sql/spider.",{S}yntax{SQLN}et: Syntax Tree Networks for Complex and Cross-Domain Text-to-{SQL} Task,"Most existing studies in text-to-SQL tasks do not require generating complex SQL queries with multiple clauses or sub-queries, and generalizing to new, unseen databases. In this paper we propose SyntaxSQLNet, a syntax tree network to address the complex and cross-domain text-to-SQL generation task. SyntaxSQLNet employs a SQL specific syntax tree-based decoder with SQL generation path history and table-aware column attention encoders. We evaluate SyntaxSQLNet on a new large-scale text-to-SQL corpus containing databases with multiple tables and complex SQL queries containing multiple SQL clauses and nested queries. We use a database split setting where databases in the test set are unseen during training. Experimental results show that SyntaxSQLNet can handle a significantly greater number of complex SQL examples than prior work, outperforming the previous state-of-the-art model by 9.5% in exact matching accuracy. To our knowledge, we are the first to study this complex text-to-SQL task. Our task and models with the latest updates are available at https://yale-lily.github.io/seq2sql/spider.",SyntaxSQLNet: Syntax Tree Networks for Complex and Cross-Domain Text-to-SQL Task,"Most existing studies in text-to-SQL tasks do not require generating complex SQL queries with multiple clauses or sub-queries, and generalizing to new, unseen databases. In this paper we propose SyntaxSQLNet, a syntax tree network to address the complex and cross-domain text-to-SQL generation task. SyntaxSQLNet employs a SQL specific syntax tree-based decoder with SQL generation path history and table-aware column attention encoders. We evaluate SyntaxSQLNet on a new large-scale text-to-SQL corpus containing databases with multiple tables and complex SQL queries containing multiple SQL clauses and nested queries. We use a database split setting where databases in the test set are unseen during training. Experimental results show that SyntaxSQLNet can handle a significantly greater number of complex SQL examples than prior work, outperforming the previous state-of-the-art model by 9.5% in exact matching accuracy. To our knowledge, we are the first to study this complex text-to-SQL task. 
Our task and models with the latest updates are available at https://yale-lily.github.io/seq2sql/spider.","We thank Graham Neubig, Tianze Shi, and three anonymous reviewers for their helpful feedback and discussion on this work.","SyntaxSQLNet: Syntax Tree Networks for Complex and Cross-Domain Text-to-SQL Task. Most existing studies in text-to-SQL tasks do not require generating complex SQL queries with multiple clauses or sub-queries, and generalizing to new, unseen databases. In this paper we propose SyntaxSQLNet, a syntax tree network to address the complex and cross-domain text-to-SQL generation task. SyntaxSQLNet employs a SQL specific syntax tree-based decoder with SQL generation path history and table-aware column attention encoders. We evaluate SyntaxSQLNet on a new large-scale text-to-SQL corpus containing databases with multiple tables and complex SQL queries containing multiple SQL clauses and nested queries. We use a database split setting where databases in the test set are unseen during training. Experimental results show that SyntaxSQLNet can handle a significantly greater number of complex SQL examples than prior work, outperforming the previous state-of-the-art model by 9.5% in exact matching accuracy. To our knowledge, we are the first to study this complex text-to-SQL task. Our task and models with the latest updates are available at https://yale-lily.github.io/seq2sql/spider.",2018
shen-etal-2017-conditional,https://aclanthology.org/P17-2080,0,,,,,,,"A Conditional Variational Framework for Dialog Generation. Deep latent variable models have been shown to facilitate the response generation for open-domain dialog systems. However, these latent variables are highly randomized, leading to uncontrollable generated responses. In this paper, we propose a framework allowing conditional response generation based on specific attributes. These attributes can be either manually assigned or automatically detected. Moreover, the dialog states for both speakers are modeled separately in order to reflect personal features. We validate this framework on two different scenarios, where the attribute refers to genericness and sentiment states respectively. The experiment result testified the potential of our model, where meaningful responses can be generated in accordance with the specified attributes.",A Conditional Variational Framework for Dialog Generation,"Deep latent variable models have been shown to facilitate the response generation for open-domain dialog systems. However, these latent variables are highly randomized, leading to uncontrollable generated responses. In this paper, we propose a framework allowing conditional response generation based on specific attributes. These attributes can be either manually assigned or automatically detected. Moreover, the dialog states for both speakers are modeled separately in order to reflect personal features. We validate this framework on two different scenarios, where the attribute refers to genericness and sentiment states respectively. The experiment result testified the potential of our model, where meaningful responses can be generated in accordance with the specified attributes.",A Conditional Variational Framework for Dialog Generation,"Deep latent variable models have been shown to facilitate the response generation for open-domain dialog systems. However, these latent variables are highly randomized, leading to uncontrollable generated responses. In this paper, we propose a framework allowing conditional response generation based on specific attributes. These attributes can be either manually assigned or automatically detected. Moreover, the dialog states for both speakers are modeled separately in order to reflect personal features. We validate this framework on two different scenarios, where the attribute refers to genericness and sentiment states respectively. The experiment result testified the potential of our model, where meaningful responses can be generated in accordance with the specified attributes.","This work was supported by the National Natural Science of China under Grant No. 61602451, 61672445 and JSPS KAKENHI Grant Numbers 15H02754, 16K12546.","A Conditional Variational Framework for Dialog Generation. Deep latent variable models have been shown to facilitate the response generation for open-domain dialog systems. However, these latent variables are highly randomized, leading to uncontrollable generated responses. In this paper, we propose a framework allowing conditional response generation based on specific attributes. These attributes can be either manually assigned or automatically detected. Moreover, the dialog states for both speakers are modeled separately in order to reflect personal features. We validate this framework on two different scenarios, where the attribute refers to genericness and sentiment states respectively. 
The experiment result testified the potential of our model, where meaningful responses can be generated in accordance with the specified attributes.",2017
tursun-cakici-2017-noisy,https://aclanthology.org/W17-4412,0,,,,,,,"Noisy Uyghur Text Normalization. Uyghur is the second largest and most actively used social media language in China. However, a non-negligible part of Uyghur text appearing in social media is unsystematically written with the Latin alphabet, and it continues to increase in size. Uyghur text in this format is incomprehensible and ambiguous even to native Uyghur speakers. In addition, Uyghur texts in this form lack the potential for any kind of advancement for the NLP tasks related to the Uyghur language. Restoring and preventing noisy Uyghur text written with unsystematic Latin alphabets will be essential to the protection of Uyghur language and improving the accuracy of Uyghur NLP tasks. To this purpose, in this work we propose and compare the noisy channel model and the neural encoder-decoder model as normalizing methods.",Noisy {U}yghur Text Normalization,"Uyghur is the second largest and most actively used social media language in China. However, a non-negligible part of Uyghur text appearing in social media is unsystematically written with the Latin alphabet, and it continues to increase in size. Uyghur text in this format is incomprehensible and ambiguous even to native Uyghur speakers. In addition, Uyghur texts in this form lack the potential for any kind of advancement for the NLP tasks related to the Uyghur language. Restoring and preventing noisy Uyghur text written with unsystematic Latin alphabets will be essential to the protection of Uyghur language and improving the accuracy of Uyghur NLP tasks. To this purpose, in this work we propose and compare the noisy channel model and the neural encoder-decoder model as normalizing methods.",Noisy Uyghur Text Normalization,"Uyghur is the second largest and most actively used social media language in China. However, a non-negligible part of Uyghur text appearing in social media is unsystematically written with the Latin alphabet, and it continues to increase in size. Uyghur text in this format is incomprehensible and ambiguous even to native Uyghur speakers. In addition, Uyghur texts in this form lack the potential for any kind of advancement for the NLP tasks related to the Uyghur language. Restoring and preventing noisy Uyghur text written with unsystematic Latin alphabets will be essential to the protection of Uyghur language and improving the accuracy of Uyghur NLP tasks. To this purpose, in this work we propose and compare the noisy channel model and the neural encoder-decoder model as normalizing methods.",We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPU used for this research.,"Noisy Uyghur Text Normalization. Uyghur is the second largest and most actively used social media language in China. However, a non-negligible part of Uyghur text appearing in social media is unsystematically written with the Latin alphabet, and it continues to increase in size. Uyghur text in this format is incomprehensible and ambiguous even to native Uyghur speakers. In addition, Uyghur texts in this form lack the potential for any kind of advancement for the NLP tasks related to the Uyghur language. Restoring and preventing noisy Uyghur text written with unsystematic Latin alphabets will be essential to the protection of Uyghur language and improving the accuracy of Uyghur NLP tasks. To this purpose, in this work we propose and compare the noisy channel model and the neural encoder-decoder model as normalizing methods.",2017
baldridge-kruijff-2002-coupling,https://aclanthology.org/P02-1041,0,,,,,,,"Coupling CCG and Hybrid Logic Dependency Semantics. Categorial grammar has traditionally used the λ-calculus to represent meaning. We present an alternative, dependency-based perspective on linguistic meaning and situate it in the computational setting. This perspective is formalized in terms of hybrid logic and has a rich yet perspicuous propositional ontology that enables a wide variety of semantic phenomena to be represented in a single meaning formalism. Finally, we show how we can couple this formalization to Combinatory Categorial Grammar to produce interpretations compositionally.",Coupling {CCG} and Hybrid Logic Dependency Semantics,"Categorial grammar has traditionally used the λ-calculus to represent meaning. We present an alternative, dependency-based perspective on linguistic meaning and situate it in the computational setting. This perspective is formalized in terms of hybrid logic and has a rich yet perspicuous propositional ontology that enables a wide variety of semantic phenomena to be represented in a single meaning formalism. Finally, we show how we can couple this formalization to Combinatory Categorial Grammar to produce interpretations compositionally.",Coupling CCG and Hybrid Logic Dependency Semantics,"Categorial grammar has traditionally used the λ-calculus to represent meaning. We present an alternative, dependency-based perspective on linguistic meaning and situate it in the computational setting. This perspective is formalized in terms of hybrid logic and has a rich yet perspicuous propositional ontology that enables a wide variety of semantic phenomena to be represented in a single meaning formalism. Finally, we show how we can couple this formalization to Combinatory Categorial Grammar to produce interpretations compositionally.","We would like to thank Patrick Blackburn, Johan Bos, Nissim Francez, Alex Lascarides, Mark Steedman, Bonnie Webber and the ACL reviewers for helpful comments on earlier versions of this paper. All errors are, of course, our own. Jason Baldridge's work is supported in part by Overseas Research Student Award ORS/98014014. Geert-Jan Kruijff's work is supported by the DFG Sonderforschungsbereich 378 Resource-Sensitive Cognitive Processes, Project NEGRA EM6.","Coupling CCG and Hybrid Logic Dependency Semantics. Categorial grammar has traditionally used the λ-calculus to represent meaning. We present an alternative, dependency-based perspective on linguistic meaning and situate it in the computational setting. This perspective is formalized in terms of hybrid logic and has a rich yet perspicuous propositional ontology that enables a wide variety of semantic phenomena to be represented in a single meaning formalism. Finally, we show how we can couple this formalization to Combinatory Categorial Grammar to produce interpretations compositionally.",2002
li-etal-2021-retrieve,https://aclanthology.org/2021.findings-acl.39,0,,,,,,,"Retrieve \& Memorize: Dialog Policy Learning with Multi-Action Memory. Dialogue policy learning, a subtask that determines the content of system response generation and then the degree of task completion, is essential for task-oriented dialogue systems. However, the unbalanced distribution of system actions in dialogue datasets often causes difficulty in learning to generate desired actions and responses. In this paper, we propose a retrieve-and-memorize framework to enhance the learning of system actions. Specifically, we first design a neural context-aware retrieval module to retrieve multiple candidate system actions from the training set given a dialogue context. Then, we propose a memory-augmented multi-decoder network to generate the system actions conditioned on the candidate actions, which allows the network to adaptively select key information in the candidate actions and ignore noises. We conduct experiments on the large-scale multi-domain task-oriented dialogue dataset MultiWOZ 2.0 and MultiWOZ 2.1. Experimental results show that our method achieves competitive performance among several state-of-the-art models in the context-to-response generation task.",Retrieve {\&} Memorize: Dialog Policy Learning with Multi-Action Memory,"Dialogue policy learning, a subtask that determines the content of system response generation and then the degree of task completion, is essential for task-oriented dialogue systems. However, the unbalanced distribution of system actions in dialogue datasets often causes difficulty in learning to generate desired actions and responses. In this paper, we propose a retrieve-and-memorize framework to enhance the learning of system actions. Specifically, we first design a neural context-aware retrieval module to retrieve multiple candidate system actions from the training set given a dialogue context. Then, we propose a memory-augmented multi-decoder network to generate the system actions conditioned on the candidate actions, which allows the network to adaptively select key information in the candidate actions and ignore noises. We conduct experiments on the large-scale multi-domain task-oriented dialogue dataset MultiWOZ 2.0 and MultiWOZ 2.1. Experimental results show that our method achieves competitive performance among several state-of-the-art models in the context-to-response generation task.",Retrieve \& Memorize: Dialog Policy Learning with Multi-Action Memory,"Dialogue policy learning, a subtask that determines the content of system response generation and then the degree of task completion, is essential for task-oriented dialogue systems. However, the unbalanced distribution of system actions in dialogue datasets often causes difficulty in learning to generate desired actions and responses. In this paper, we propose a retrieve-and-memorize framework to enhance the learning of system actions. Specifically, we first design a neural context-aware retrieval module to retrieve multiple candidate system actions from the training set given a dialogue context. Then, we propose a memory-augmented multi-decoder network to generate the system actions conditioned on the candidate actions, which allows the network to adaptively select key information in the candidate actions and ignore noises. We conduct experiments on the large-scale multi-domain task-oriented dialogue dataset MultiWOZ 2.0 and MultiWOZ 2.1. 
Experimental results show that our method achieves competitive performance among several state-of-the-art models in the context-to-response generation task.",The paper was supported by the National Natural Science Foundation of China (No.61906217) and the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (No.2017ZT07X355).,"Retrieve \& Memorize: Dialog Policy Learning with Multi-Action Memory. Dialogue policy learning, a subtask that determines the content of system response generation and then the degree of task completion, is essential for task-oriented dialogue systems. However, the unbalanced distribution of system actions in dialogue datasets often causes difficulty in learning to generate desired actions and responses. In this paper, we propose a retrieve-and-memorize framework to enhance the learning of system actions. Specifically, we first design a neural context-aware retrieval module to retrieve multiple candidate system actions from the training set given a dialogue context. Then, we propose a memory-augmented multi-decoder network to generate the system actions conditioned on the candidate actions, which allows the network to adaptively select key information in the candidate actions and ignore noises. We conduct experiments on the large-scale multi-domain task-oriented dialogue dataset MultiWOZ 2.0 and MultiWOZ 2.1. Experimental results show that our method achieves competitive performance among several state-of-the-art models in the context-to-response generation task.",2021
wu-etal-2019-open,https://aclanthology.org/D19-1021,0,,,,,,,"Open Relation Extraction: Relational Knowledge Transfer from Supervised Data to Unsupervised Data. Open relation extraction (OpenRE) aims to extract relational facts from the open-domain corpus. To this end, it discovers relation patterns between named entities and then clusters those semantically equivalent patterns into a united relation cluster. Most OpenRE methods typically confine themselves to unsupervised paradigms, without taking advantage of existing relational facts in knowledge bases (KBs) and their high-quality labeled instances. To address this issue, we propose Relational Siamese Networks (RSNs) to learn similarity metrics of relations from labeled data of pre-defined relations, and then transfer the relational knowledge to identify novel relations in unlabeled data. Experiment results on two real-world datasets show that our framework can achieve significant improvements as compared with other state-of-the-art methods. Our code is available at https://github.com/thunlp/RSN.",Open Relation Extraction: Relational Knowledge Transfer from Supervised Data to Unsupervised Data,"Open relation extraction (OpenRE) aims to extract relational facts from the open-domain corpus. To this end, it discovers relation patterns between named entities and then clusters those semantically equivalent patterns into a united relation cluster. Most OpenRE methods typically confine themselves to unsupervised paradigms, without taking advantage of existing relational facts in knowledge bases (KBs) and their high-quality labeled instances. To address this issue, we propose Relational Siamese Networks (RSNs) to learn similarity metrics of relations from labeled data of pre-defined relations, and then transfer the relational knowledge to identify novel relations in unlabeled data. Experiment results on two real-world datasets show that our framework can achieve significant improvements as compared with other state-of-the-art methods. Our code is available at https://github.com/thunlp/RSN.",Open Relation Extraction: Relational Knowledge Transfer from Supervised Data to Unsupervised Data,"Open relation extraction (OpenRE) aims to extract relational facts from the open-domain corpus. To this end, it discovers relation patterns between named entities and then clusters those semantically equivalent patterns into a united relation cluster. Most OpenRE methods typically confine themselves to unsupervised paradigms, without taking advantage of existing relational facts in knowledge bases (KBs) and their high-quality labeled instances. To address this issue, we propose Relational Siamese Networks (RSNs) to learn similarity metrics of relations from labeled data of pre-defined relations, and then transfer the relational knowledge to identify novel relations in unlabeled data. Experiment results on two real-world datasets show that our framework can achieve significant improvements as compared with other state-of-the-art methods. Our code is available at https://github.com/thunlp/RSN.","This work is supported by the National Key Research and Development Program of China (No. 2018YFB1004503) and the National Natural Science Foundation of China (NSFC No. 61572273, 61661146007). Ruidong Wu is also supported by Tsinghua University Initiative Scientific Research Program.","Open Relation Extraction: Relational Knowledge Transfer from Supervised Data to Unsupervised Data. 
Open relation extraction (OpenRE) aims to extract relational facts from the open-domain corpus. To this end, it discovers relation patterns between named entities and then clusters those semantically equivalent patterns into a united relation cluster. Most OpenRE methods typically confine themselves to unsupervised paradigms, without taking advantage of existing relational facts in knowledge bases (KBs) and their high-quality labeled instances. To address this issue, we propose Relational Siamese Networks (RSNs) to learn similarity metrics of relations from labeled data of pre-defined relations, and then transfer the relational knowledge to identify novel relations in unlabeled data. Experiment results on two real-world datasets show that our framework can achieve significant improvements as compared with other state-of-the-art methods. Our code is available at https://github.com/thunlp/RSN.",2019
sevgili-etal-2019-improving,https://aclanthology.org/P19-2044,0,,,,,,,"Improving Neural Entity Disambiguation with Graph Embeddings. Entity Disambiguation (ED) is the task of linking an ambiguous entity mention to a corresponding entry in a knowledge base. Current methods have mostly focused on unstructured text data to learn representations of entities, however, there is structured information in the knowledge base itself that should be useful to disambiguate entities. In this work, we propose a method that uses graph embeddings for integrating structured information from the knowledge base with unstructured information from text-based representations. Our experiments confirm that graph embeddings trained on a graph of hyperlinks between Wikipedia articles improve the performance of a simple feed-forward neural ED model and a state-of-the-art neural ED system.",Improving Neural Entity Disambiguation with Graph Embeddings,"Entity Disambiguation (ED) is the task of linking an ambiguous entity mention to a corresponding entry in a knowledge base. Current methods have mostly focused on unstructured text data to learn representations of entities, however, there is structured information in the knowledge base itself that should be useful to disambiguate entities. In this work, we propose a method that uses graph embeddings for integrating structured information from the knowledge base with unstructured information from text-based representations. Our experiments confirm that graph embeddings trained on a graph of hyperlinks between Wikipedia articles improve the performance of a simple feed-forward neural ED model and a state-of-the-art neural ED system.",Improving Neural Entity Disambiguation with Graph Embeddings,"Entity Disambiguation (ED) is the task of linking an ambiguous entity mention to a corresponding entry in a knowledge base. Current methods have mostly focused on unstructured text data to learn representations of entities, however, there is structured information in the knowledge base itself that should be useful to disambiguate entities. In this work, we propose a method that uses graph embeddings for integrating structured information from the knowledge base with unstructured information from text-based representations. Our experiments confirm that graph embeddings trained on a graph of hyperlinks between Wikipedia articles improve the performance of a simple feed-forward neural ED model and a state-of-the-art neural ED system.",We thank the SRW mentor Matt Gardner and anonymous reviewers for their most useful feedback on this work. The work was partially supported by a Deutscher Akademischer Austauschdienst (DAAD) doctoral stipend and the DFG-funded JOIN-T project BI 1544/4.,"Improving Neural Entity Disambiguation with Graph Embeddings. Entity Disambiguation (ED) is the task of linking an ambiguous entity mention to a corresponding entry in a knowledge base. Current methods have mostly focused on unstructured text data to learn representations of entities, however, there is structured information in the knowledge base itself that should be useful to disambiguate entities. In this work, we propose a method that uses graph embeddings for integrating structured information from the knowledge base with unstructured information from text-based representations. Our experiments confirm that graph embeddings trained on a graph of hyperlinks between Wikipedia articles improve the performance of a simple feed-forward neural ED model and a state-of-the-art neural ED system.",2019
wang-etal-2015-feature,https://aclanthology.org/P15-1110,0,,,,,,,"Feature Optimization for Constituent Parsing via Neural Networks. The performance of discriminative constituent parsing relies crucially on feature engineering, and effective features usually have to be carefully selected through a painful manual process. In this paper, we propose to automatically learn a set of effective features via neural networks. Specifically, we build a feedforward neural network model, which takes as input a few primitive units (words, POS tags and certain contextual tokens) from the local context, induces the feature representation in the hidden layer and makes parsing predictions in the output layer. The network simultaneously learns the feature representation and the prediction model parameters using a back propagation algorithm. By pre-training the model on a large amount of automatically parsed data, and then fine-tuning on the manually annotated Treebank data, our parser achieves the highest F1 score at 86.6% on Chinese Treebank 5.1, and a competitive F1 score at 90.7% on English Treebank. More importantly, our parser generalizes well on cross-domain test sets, where we significantly outperform Berkeley parser by 3.4 points on average for Chinese and 2.5 points for English.",Feature Optimization for Constituent Parsing via Neural Networks,"The performance of discriminative constituent parsing relies crucially on feature engineering, and effective features usually have to be carefully selected through a painful manual process. In this paper, we propose to automatically learn a set of effective features via neural networks. Specifically, we build a feedforward neural network model, which takes as input a few primitive units (words, POS tags and certain contextual tokens) from the local context, induces the feature representation in the hidden layer and makes parsing predictions in the output layer. The network simultaneously learns the feature representation and the prediction model parameters using a back propagation algorithm. By pre-training the model on a large amount of automatically parsed data, and then fine-tuning on the manually annotated Treebank data, our parser achieves the highest F1 score at 86.6% on Chinese Treebank 5.1, and a competitive F1 score at 90.7% on English Treebank. More importantly, our parser generalizes well on cross-domain test sets, where we significantly outperform Berkeley parser by 3.4 points on average for Chinese and 2.5 points for English.",Feature Optimization for Constituent Parsing via Neural Networks,"The performance of discriminative constituent parsing relies crucially on feature engineering, and effective features usually have to be carefully selected through a painful manual process. In this paper, we propose to automatically learn a set of effective features via neural networks. Specifically, we build a feedforward neural network model, which takes as input a few primitive units (words, POS tags and certain contextual tokens) from the local context, induces the feature representation in the hidden layer and makes parsing predictions in the output layer. The network simultaneously learns the feature representation and the prediction model parameters using a back propagation algorithm. By pre-training the model on a large amount of automatically parsed data, and then fine-tuning on the manually annotated Treebank data, our parser achieves the highest F1 score at 86.6% on Chinese Treebank 5.1, and a competitive F1 score at 90.7% on English Treebank. 
More importantly, our parser generalizes well on cross-domain test sets, where we significantly outperform Berkeley parser by 3.4 points on average for Chinese and 2.5 points for English.",We thank the anonymous reviewers for comments. Haitao Mi is supported by DARPA HR0011-12-C-0015 (BOLT) and Nianwen Xue is supported by DARPA HR0011-11-C-0145 (BOLT). The views and findings in this paper are those of the authors and are not endorsed by the DARPA.,"Feature Optimization for Constituent Parsing via Neural Networks. The performance of discriminative constituent parsing relies crucially on feature engineering, and effective features usually have to be carefully selected through a painful manual process. In this paper, we propose to automatically learn a set of effective features via neural networks. Specifically, we build a feedforward neural network model, which takes as input a few primitive units (words, POS tags and certain contextual tokens) from the local context, induces the feature representation in the hidden layer and makes parsing predictions in the output layer. The network simultaneously learns the feature representation and the prediction model parameters using a back propagation algorithm. By pre-training the model on a large amount of automatically parsed data, and then fine-tuning on the manually annotated Treebank data, our parser achieves the highest F1 score at 86.6% on Chinese Treebank 5.1, and a competitive F1 score at 90.7% on English Treebank. More importantly, our parser generalizes well on cross-domain test sets, where we significantly outperform Berkeley parser by 3.4 points on average for Chinese and 2.5 points for English.",2015
das-bandyopadhyay-2010-towards,https://aclanthology.org/Y10-1092,0,,,,,,,"Towards the Global SentiWordNet. The discipline in which sentiment/opinion/emotion is identified and classified in human-written text is well known as sentiment analysis. A typical computational approach to sentiment analysis starts with prior polarity lexicons, where entries are tagged with their prior, out-of-context polarity as human beings perceive it using cognitive knowledge. To date, research efforts found in the sentiment analysis literature deal mostly with English texts. In this article, we propose an interactive gaming (Dr Sentiment) technology to create and validate SentiWordNet in 56 languages by involving the Internet population. Dr Sentiment is a fictitious character who interacts with players through a series of questions, finally reveals the behavioral or sentimental status of each player, and stores the lexicons as polarized by the players during play. The interactive gaming technology is then compared with multiple other automatic linguistic techniques, such as WordNet-based, dictionary-based, corpus-based or generative approaches, for generating SentiWordNet(s) for Indian languages and other international languages as well. A number of automatic, semi-automatic and manual validation and evaluation methodologies have been adopted to measure the coverage and credibility of the developed SentiWordNet(s).",Towards the Global {S}enti{W}ord{N}et,"The discipline in which sentiment/opinion/emotion is identified and classified in human-written text is well known as sentiment analysis. A typical computational approach to sentiment analysis starts with prior polarity lexicons, where entries are tagged with their prior, out-of-context polarity as human beings perceive it using cognitive knowledge. To date, research efforts found in the sentiment analysis literature deal mostly with English texts. In this article, we propose an interactive gaming (Dr Sentiment) technology to create and validate SentiWordNet in 56 languages by involving the Internet population. Dr Sentiment is a fictitious character who interacts with players through a series of questions, finally reveals the behavioral or sentimental status of each player, and stores the lexicons as polarized by the players during play. The interactive gaming technology is then compared with multiple other automatic linguistic techniques, such as WordNet-based, dictionary-based, corpus-based or generative approaches, for generating SentiWordNet(s) for Indian languages and other international languages as well. A number of automatic, semi-automatic and manual validation and evaluation methodologies have been adopted to measure the coverage and credibility of the developed SentiWordNet(s).",Towards the Global SentiWordNet,"The discipline in which sentiment/opinion/emotion is identified and classified in human-written text is well known as sentiment analysis. A typical computational approach to sentiment analysis starts with prior polarity lexicons, where entries are tagged with their prior, out-of-context polarity as human beings perceive it using cognitive knowledge. To date, research efforts found in the sentiment analysis literature deal mostly with English texts. In this article, we propose an interactive gaming (Dr Sentiment) technology to create and validate SentiWordNet in 56 languages by involving the Internet population. 
Dr Sentiment is a fictitious character who interacts with players through a series of questions, finally reveals the behavioral or sentimental status of each player, and stores the lexicons as polarized by the players during play. The interactive gaming technology is then compared with multiple other automatic linguistic techniques, such as WordNet-based, dictionary-based, corpus-based or generative approaches, for generating SentiWordNet(s) for Indian languages and other international languages as well. A number of automatic, semi-automatic and manual validation and evaluation methodologies have been adopted to measure the coverage and credibility of the developed SentiWordNet(s).",,"Towards the Global SentiWordNet. The discipline in which sentiment/opinion/emotion is identified and classified in human-written text is well known as sentiment analysis. A typical computational approach to sentiment analysis starts with prior polarity lexicons, where entries are tagged with their prior, out-of-context polarity as human beings perceive it using cognitive knowledge. To date, research efforts found in the sentiment analysis literature deal mostly with English texts. In this article, we propose an interactive gaming (Dr Sentiment) technology to create and validate SentiWordNet in 56 languages by involving the Internet population. Dr Sentiment is a fictitious character who interacts with players through a series of questions, finally reveals the behavioral or sentimental status of each player, and stores the lexicons as polarized by the players during play. The interactive gaming technology is then compared with multiple other automatic linguistic techniques, such as WordNet-based, dictionary-based, corpus-based or generative approaches, for generating SentiWordNet(s) for Indian languages and other international languages as well. A number of automatic, semi-automatic and manual validation and evaluation methodologies have been adopted to measure the coverage and credibility of the developed SentiWordNet(s).",2010
al-saleh-menai-2018-ant,https://aclanthology.org/C18-1062,0,,,,,,,"Ant Colony System for Multi-Document Summarization. This paper proposes an extractive multi-document summarization approach based on an ant colony system to optimize the information coverage of summary sentences. The implemented system was evaluated on both English and Arabic versions of the corpus of the Text Analysis Conference 2011 MultiLing Pilot by using ROUGE metrics. The evaluation results are promising in comparison to those of the participating systems. Indeed, our system achieved the best scores based on several ROUGE metrics.",Ant Colony System for Multi-Document Summarization,"This paper proposes an extractive multi-document summarization approach based on an ant colony system to optimize the information coverage of summary sentences. The implemented system was evaluated on both English and Arabic versions of the corpus of the Text Analysis Conference 2011 MultiLing Pilot by using ROUGE metrics. The evaluation results are promising in comparison to those of the participating systems. Indeed, our system achieved the best scores based on several ROUGE metrics.",Ant Colony System for Multi-Document Summarization,"This paper proposes an extractive multi-document summarization approach based on an ant colony system to optimize the information coverage of summary sentences. The implemented system was evaluated on both English and Arabic versions of the corpus of the Text Analysis Conference 2011 MultiLing Pilot by using ROUGE metrics. The evaluation results are promising in comparison to those of the participating systems. Indeed, our system achieved the best scores based on several ROUGE metrics.",,"Ant Colony System for Multi-Document Summarization. This paper proposes an extractive multi-document summarization approach based on an ant colony system to optimize the information coverage of summary sentences. The implemented system was evaluated on both English and Arabic versions of the corpus of the Text Analysis Conference 2011 MultiLing Pilot by using ROUGE metrics. The evaluation results are promising in comparison to those of the participating systems. Indeed, our system achieved the best scores based on several ROUGE metrics.",2018
trippel-etal-2014-towards,http://www.lrec-conf.org/proceedings/lrec2014/pdf/1011_Paper.pdf,0,,,,,,,"Towards automatic quality assessment of component metadata. Measuring the quality of metadata is only possible by assessing the quality of the underlying schema and the metadata instance. We propose some factors that are measurable automatically for metadata according to the CMD framework, taking into account the variability of schemas that can be defined in this framework. The factors include, among others, the number of elements, the (re-)use of reusable components, and the number of filled-in elements. The resulting score can serve as an indicator of the overall quality of the CMD instance, used for feedback to metadata providers or to provide an overview of the overall quality of metadata within a repository. The score is independent of specific schemas and generalizable. An overall assessment of harvested metadata is provided in the form of statistical summaries and the distribution, based on a corpus of harvested metadata. The score is implemented in XQuery and can be used in tools, editors and repositories.",Towards automatic quality assessment of component metadata,"Measuring the quality of metadata is only possible by assessing the quality of the underlying schema and the metadata instance. We propose some factors that are measurable automatically for metadata according to the CMD framework, taking into account the variability of schemas that can be defined in this framework. The factors include, among others, the number of elements, the (re-)use of reusable components, and the number of filled-in elements. The resulting score can serve as an indicator of the overall quality of the CMD instance, used for feedback to metadata providers or to provide an overview of the overall quality of metadata within a repository. The score is independent of specific schemas and generalizable. An overall assessment of harvested metadata is provided in the form of statistical summaries and the distribution, based on a corpus of harvested metadata. The score is implemented in XQuery and can be used in tools, editors and repositories.",Towards automatic quality assessment of component metadata,"Measuring the quality of metadata is only possible by assessing the quality of the underlying schema and the metadata instance. We propose some factors that are measurable automatically for metadata according to the CMD framework, taking into account the variability of schemas that can be defined in this framework. The factors include, among others, the number of elements, the (re-)use of reusable components, and the number of filled-in elements. The resulting score can serve as an indicator of the overall quality of the CMD instance, used for feedback to metadata providers or to provide an overview of the overall quality of metadata within a repository. The score is independent of specific schemas and generalizable. An overall assessment of harvested metadata is provided in the form of statistical summaries and the distribution, based on a corpus of harvested metadata. The score is implemented in XQuery and can be used in tools, editors and repositories.",,"Towards automatic quality assessment of component metadata. Measuring the quality of metadata is only possible by assessing the quality of the underlying schema and the metadata instance. We propose some factors that are measurable automatically for metadata according to the CMD framework, taking into account the variability of schemas that can be defined in this framework. 
The factors include, among others, the number of elements, the (re-)use of reusable components, and the number of filled-in elements. The resulting score can serve as an indicator of the overall quality of the CMD instance, used for feedback to metadata providers or to provide an overview of the overall quality of metadata within a repository. The score is independent of specific schemas and generalizable. An overall assessment of harvested metadata is provided in the form of statistical summaries and the distribution, based on a corpus of harvested metadata. The score is implemented in XQuery and can be used in tools, editors and repositories.",2014
hori-etal-2004-evaluation,https://aclanthology.org/W04-1014,0,,,,,,,"Evaluation Measures Considering Sentence Concatenation for Automatic Summarization by Sentence or Word Extraction. Automatic summaries of text generated through sentence or word extraction have been evaluated by comparing them with manual summaries generated by humans, using numerical evaluation measures based on precision or accuracy. Although sentence extraction has previously been evaluated based only on the precision of a single sentence, sentence concatenations in the summaries should be evaluated as well. We have evaluated the appropriateness of sentence concatenations in summaries by using evaluation measures used for evaluating word concatenations in summaries through word extraction. We determined that measures considering sentence concatenation reflect human judgment much better than those based only on the precision of a single sentence.",Evaluation Measures Considering Sentence Concatenation for Automatic Summarization by Sentence or Word Extraction,"Automatic summaries of text generated through sentence or word extraction have been evaluated by comparing them with manual summaries generated by humans, using numerical evaluation measures based on precision or accuracy. Although sentence extraction has previously been evaluated based only on the precision of a single sentence, sentence concatenations in the summaries should be evaluated as well. We have evaluated the appropriateness of sentence concatenations in summaries by using evaluation measures used for evaluating word concatenations in summaries through word extraction. We determined that measures considering sentence concatenation reflect human judgment much better than those based only on the precision of a single sentence.",Evaluation Measures Considering Sentence Concatenation for Automatic Summarization by Sentence or Word Extraction,"Automatic summaries of text generated through sentence or word extraction have been evaluated by comparing them with manual summaries generated by humans, using numerical evaluation measures based on precision or accuracy. Although sentence extraction has previously been evaluated based only on the precision of a single sentence, sentence concatenations in the summaries should be evaluated as well. We have evaluated the appropriateness of sentence concatenations in summaries by using evaluation measures used for evaluating word concatenations in summaries through word extraction. We determined that measures considering sentence concatenation reflect human judgment much better than those based only on the precision of a single sentence.",We thank NHK (Japan Broadcasting Corporation) for providing the broadcast news database. We also thank Prof. Sadaoki Furui at Tokyo Institute of Technology for providing the summaries of the broadcast news speech.,"Evaluation Measures Considering Sentence Concatenation for Automatic Summarization by Sentence or Word Extraction. Automatic summaries of text generated through sentence or word extraction have been evaluated by comparing them with manual summaries generated by humans, using numerical evaluation measures based on precision or accuracy. Although sentence extraction has previously been evaluated based only on the precision of a single sentence, sentence concatenations in the summaries should be evaluated as well. 
We have evaluated the appropriateness of sentence concatenations in summaries by using evaluation measures used for evaluating word concatenations in summaries through word extraction. We determined that measures considering sentence concatenation reflect human judgment much better than those based only on the precision of a single sentence.",2004
salehi-etal-2016-determining,https://aclanthology.org/C16-1046,0,,,,,,,"Determining the Multiword Expression Inventory of a Surprise Language. Much previous research on multiword expressions (MWEs) has focused on the token- and type-level tasks of MWE identification and extraction, respectively. Such studies typically target known prevalent MWE types in a given language. This paper describes the first attempt to learn the MWE inventory of a ""surprise"" language for which we have no explicit prior knowledge of MWE patterns, certainly no annotated MWE data, and not even a parallel corpus. Our proposed model is trained on a treebank with MWE relations of a source language, and can be applied to the monolingual corpus of the surprise language to identify its MWE construction types.",Determining the Multiword Expression Inventory of a Surprise Language,"Much previous research on multiword expressions (MWEs) has focused on the token- and type-level tasks of MWE identification and extraction, respectively. Such studies typically target known prevalent MWE types in a given language. This paper describes the first attempt to learn the MWE inventory of a ""surprise"" language for which we have no explicit prior knowledge of MWE patterns, certainly no annotated MWE data, and not even a parallel corpus. Our proposed model is trained on a treebank with MWE relations of a source language, and can be applied to the monolingual corpus of the surprise language to identify its MWE construction types.",Determining the Multiword Expression Inventory of a Surprise Language,"Much previous research on multiword expressions (MWEs) has focused on the token- and type-level tasks of MWE identification and extraction, respectively. Such studies typically target known prevalent MWE types in a given language. This paper describes the first attempt to learn the MWE inventory of a ""surprise"" language for which we have no explicit prior knowledge of MWE patterns, certainly no annotated MWE data, and not even a parallel corpus. Our proposed model is trained on a treebank with MWE relations of a source language, and can be applied to the monolingual corpus of the surprise language to identify its MWE construction types.","We wish to thank Long Duong for help with the transfer-based dependency parsing, Jan Snajder for his kind assistance with the Croatian annotation, and Dan Flickinger, Lars Hellan, Ned Letcher and João Silva for valuable advice in the early stages of development of this work. We would also like to thank the anonymous reviewers for their insightful comments and valuable suggestions. NICTA is funded by the Australian government as represented by Department of Broadband, Communication and Digital Economy, and the Australian Research Council through the ICT Centre of Excellence programme.","Determining the Multiword Expression Inventory of a Surprise Language. Much previous research on multiword expressions (MWEs) has focused on the token- and type-level tasks of MWE identification and extraction, respectively. Such studies typically target known prevalent MWE types in a given language. This paper describes the first attempt to learn the MWE inventory of a ""surprise"" language for which we have no explicit prior knowledge of MWE patterns, certainly no annotated MWE data, and not even a parallel corpus. Our proposed model is trained on a treebank with MWE relations of a source language, and can be applied to the monolingual corpus of the surprise language to identify its MWE construction types.",2016
wu-etal-2003-totalrecall,https://aclanthology.org/O03-3005,0,,,,,,,"TotalRecall: A Bilingual Concordance in National Digital Learning Project - CANDLE. This paper describes a Web-based English-Chinese concordance system, TotalRecall, being developed in National Digital Learning Project-CANDLE, to promote translation reuse and encourage authentic and idiomatic use in second language learning. We exploited and structured existing high-quality translations from the bilingual Sinorama Magazine to build the concordance of authentic text and translation. Novel approaches were taken to provide high-precision bilingual alignment on the sentence, phrase and word levels. A browser-based user interface was also developed for ease of access over the Internet. Users can search for a word, phrase or expression in English or Chinese. The Web-based user interface facilitates the recording of the user actions to provide data for further research.",{T}otal{R}ecall: A Bilingual Concordance in National Digital Learning Project - {CANDLE},"This paper describes a Web-based English-Chinese concordance system, TotalRecall, being developed in National Digital Learning Project-CANDLE, to promote translation reuse and encourage authentic and idiomatic use in second language learning. We exploited and structured existing high-quality translations from the bilingual Sinorama Magazine to build the concordance of authentic text and translation. Novel approaches were taken to provide high-precision bilingual alignment on the sentence, phrase and word levels. A browser-based user interface was also developed for ease of access over the Internet. Users can search for a word, phrase or expression in English or Chinese. The Web-based user interface facilitates the recording of the user actions to provide data for further research.",TotalRecall: A Bilingual Concordance in National Digital Learning Project - CANDLE,"This paper describes a Web-based English-Chinese concordance system, TotalRecall, being developed in National Digital Learning Project-CANDLE, to promote translation reuse and encourage authentic and idiomatic use in second language learning. We exploited and structured existing high-quality translations from the bilingual Sinorama Magazine to build the concordance of authentic text and translation. Novel approaches were taken to provide high-precision bilingual alignment on the sentence, phrase and word levels. A browser-based user interface was also developed for ease of access over the Internet. Users can search for a word, phrase or expression in English or Chinese. The Web-based user interface facilitates the recording of the user actions to provide data for further research.","We acknowledge the support for this study through grants from National Science Council and Ministry of Education, Taiwan (NSC 90-2411-H-007-033-MC and MOE EX-91-E-FA06-4-4) and a special grant for preparing the Sinorama Corpus for distribution by the Association for Computational Linguistics and Chinese Language Processing.","TotalRecall: A Bilingual Concordance in National Digital Learning Project - CANDLE. This paper describes a Web-based English-Chinese concordance system, TotalRecall, being developed in National Digital Learning Project-CANDLE, to promote translation reuse and encourage authentic and idiomatic use in second language learning. We exploited and structured existing high-quality translations from the bilingual Sinorama Magazine to build the concordance of authentic text and translation. 
Novel approaches were taken to provide high-precision bilingual alignment on the sentence, phrase and word levels. A browser-based user interface was also developed for ease of access over the Internet. Users can search for a word, phrase or expression in English or Chinese. The Web-based user interface facilitates the recording of the user actions to provide data for further research.",2003
chakrabarty-etal-2021-mermaid,https://aclanthology.org/2021.naacl-main.336,0,,,,,,,"MERMAID: Metaphor Generation with Symbolism and Discriminative Decoding. Generating metaphors is a challenging task as it requires a proper understanding of abstract concepts, making connections between unrelated concepts, and deviating from the literal meaning. In this paper, we aim to generate a metaphoric sentence given a literal expression by replacing relevant verbs. Based on a theoretically-grounded connection between metaphors and symbols, we propose a method to automatically construct a parallel corpus by transforming a large number of metaphorical sentences from the Gutenberg Poetry corpus (Jacobs, 2018) to their literal counterpart using recent advances in masked language modeling coupled with commonsense inference. For the generation task, we incorporate a metaphor discriminator to guide the decoding of a sequence to sequence model finetuned on our parallel data to generate high quality metaphors. Human evaluation on an independent test set of literal statements shows that our best model generates metaphors better than three well-crafted baselines 66% of the time on average. Moreover, a task-based evaluation shows that human-written poems enhanced with metaphors proposed by our model are preferred 68% of the time compared to poems without metaphors.",{MERMAID}: Metaphor Generation with Symbolism and Discriminative Decoding,"Generating metaphors is a challenging task as it requires a proper understanding of abstract concepts, making connections between unrelated concepts, and deviating from the literal meaning. In this paper, we aim to generate a metaphoric sentence given a literal expression by replacing relevant verbs. Based on a theoretically-grounded connection between metaphors and symbols, we propose a method to automatically construct a parallel corpus by transforming a large number of metaphorical sentences from the Gutenberg Poetry corpus (Jacobs, 2018) to their literal counterpart using recent advances in masked language modeling coupled with commonsense inference. For the generation task, we incorporate a metaphor discriminator to guide the decoding of a sequence to sequence model finetuned on our parallel data to generate high quality metaphors. Human evaluation on an independent test set of literal statements shows that our best model generates metaphors better than three well-crafted baselines 66% of the time on average. Moreover, a task-based evaluation shows that human-written poems enhanced with metaphors proposed by our model are preferred 68% of the time compared to poems without metaphors.",MERMAID: Metaphor Generation with Symbolism and Discriminative Decoding,"Generating metaphors is a challenging task as it requires a proper understanding of abstract concepts, making connections between unrelated concepts, and deviating from the literal meaning. In this paper, we aim to generate a metaphoric sentence given a literal expression by replacing relevant verbs. Based on a theoretically-grounded connection between metaphors and symbols, we propose a method to automatically construct a parallel corpus by transforming a large number of metaphorical sentences from the Gutenberg Poetry corpus (Jacobs, 2018) to their literal counterpart using recent advances in masked language modeling coupled with commonsense inference. 
For the generation task, we incorporate a metaphor discriminator to guide the decoding of a sequence to sequence model finetuned on our parallel data to generate high quality metaphors. Human evaluation on an independent test set of literal statements shows that our best model generates metaphors better than three well-crafted baselines 66% of the time on average. Moreover, a task-based evaluation shows that human-written poems enhanced with metaphors proposed by our model are preferred 68% of the time compared to poems without metaphors.","This work was supported in part by the MCS program under Cooperative Agreement N66001-19-2-4032, and the CwC program under Contract W911NF-15-1-0543 with the US Defense Advanced Research Projects Agency (DARPA). The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. The authors would like to thank the members of PLUS-Lab at the University of California Los Angeles and University of Southern California and the anonymous reviewers for helpful comments.","MERMAID: Metaphor Generation with Symbolism and Discriminative Decoding. Generating metaphors is a challenging task as it requires a proper understanding of abstract concepts, making connections between unrelated concepts, and deviating from the literal meaning. In this paper, we aim to generate a metaphoric sentence given a literal expression by replacing relevant verbs. Based on a theoretically-grounded connection between metaphors and symbols, we propose a method to automatically construct a parallel corpus by transforming a large number of metaphorical sentences from the Gutenberg Poetry corpus (Jacobs, 2018) to their literal counterpart using recent advances in masked language modeling coupled with commonsense inference. For the generation task, we incorporate a metaphor discriminator to guide the decoding of a sequence to sequence model finetuned on our parallel data to generate high quality metaphors. Human evaluation on an independent test set of literal statements shows that our best model generates metaphors better than three well-crafted baselines 66% of the time on average. Moreover, a task-based evaluation shows that human-written poems enhanced with metaphors proposed by our model are preferred 68% of the time compared to poems without metaphors.",2021
daille-morin-2005-french,https://aclanthology.org/I05-1062,0,,,,,,,"French-English Terminology Extraction from Comparable Corpora. This article presents a method of extracting bilingual lexica composed of single-word terms (SWTs) and multi-word terms (MWTs) from comparable corpora of a technical domain. First, this method extracts MWTs in each language, and then uses statistical methods to align single words and MWTs by exploiting the term contexts. After explaining the difficulties involved in aligning MWTs and specifying our approach, we show the adopted process for bilingual terminology extraction and the resources used in our experiments. Finally, we evaluate our approach and demonstrate its significance, particularly in relation to non-compositional MWT alignment.",{F}rench-{E}nglish Terminology Extraction from Comparable Corpora,"This article presents a method of extracting bilingual lexica composed of single-word terms (SWTs) and multi-word terms (MWTs) from comparable corpora of a technical domain. First, this method extracts MWTs in each language, and then uses statistical methods to align single words and MWTs by exploiting the term contexts. After explaining the difficulties involved in aligning MWTs and specifying our approach, we show the adopted process for bilingual terminology extraction and the resources used in our experiments. Finally, we evaluate our approach and demonstrate its significance, particularly in relation to non-compositional MWT alignment.",French-English Terminology Extraction from Comparable Corpora,"This article presents a method of extracting bilingual lexica composed of single-word terms (SWTs) and multi-word terms (MWTs) from comparable corpora of a technical domain. First, this method extracts MWTs in each language, and then uses statistical methods to align single words and MWTs by exploiting the term contexts. After explaining the difficulties involved in aligning MWTs and specifying our approach, we show the adopted process for bilingual terminology extraction and the resources used in our experiments. Finally, we evaluate our approach and demonstrate its significance, particularly in relation to non-compositional MWT alignment.","We are particularly grateful to Samuel Dufour-Kowalski, who undertook the computer programs. This work has also benefited from his comments.","French-English Terminology Extraction from Comparable Corpora. This article presents a method of extracting bilingual lexica composed of single-word terms (SWTs) and multi-word terms (MWTs) from comparable corpora of a technical domain. First, this method extracts MWTs in each language, and then uses statistical methods to align single words and MWTs by exploiting the term contexts. After explaining the difficulties involved in aligning MWTs and specifying our approach, we show the adopted process for bilingual terminology extraction and the resources used in our experiments. Finally, we evaluate our approach and demonstrate its significance, particularly in relation to non-compositional MWT alignment.",2005
grundkiewicz-etal-2015-human,https://aclanthology.org/D15-1052,0,,,,,,,Human Evaluation of Grammatical Error Correction Systems. The paper presents the results of the first large-scale human evaluation of automatic grammatical error correction (GEC) systems. Twelve participating systems and the unchanged input of the CoNLL-2014 shared task have been reassessed in a WMT-inspired human evaluation procedure. Methods introduced for the Workshop of Machine Translation evaluation campaigns have been adapted to GEC and extended where necessary. The produced rankings are used to evaluate standard metrics for grammatical error correction in terms of correlation with human judgment.,Human Evaluation of Grammatical Error Correction Systems,The paper presents the results of the first large-scale human evaluation of automatic grammatical error correction (GEC) systems. Twelve participating systems and the unchanged input of the CoNLL-2014 shared task have been reassessed in a WMT-inspired human evaluation procedure. Methods introduced for the Workshop of Machine Translation evaluation campaigns have been adapted to GEC and extended where necessary. The produced rankings are used to evaluate standard metrics for grammatical error correction in terms of correlation with human judgment.,Human Evaluation of Grammatical Error Correction Systems,The paper presents the results of the first large-scale human evaluation of automatic grammatical error correction (GEC) systems. Twelve participating systems and the unchanged input of the CoNLL-2014 shared task have been reassessed in a WMT-inspired human evaluation procedure. Methods introduced for the Workshop of Machine Translation evaluation campaigns have been adapted to GEC and extended where necessary. The produced rankings are used to evaluate standard metrics for grammatical error correction in terms of correlation with human judgment.,"Partially funded by the Polish National Science Centre (Grant No. 2014/15/N/ST6/02330). The authors would like to thank the following judges for their hard work on the ranking task: Sam Bennett, Peter Dunne, Stacia Levy, Kenneth Turner, and John Winward.",Human Evaluation of Grammatical Error Correction Systems. The paper presents the results of the first large-scale human evaluation of automatic grammatical error correction (GEC) systems. Twelve participating systems and the unchanged input of the CoNLL-2014 shared task have been reassessed in a WMT-inspired human evaluation procedure. Methods introduced for the Workshop of Machine Translation evaluation campaigns have been adapted to GEC and extended where necessary. The produced rankings are used to evaluate standard metrics for grammatical error correction in terms of correlation with human judgment.,2015
schrodt-2020-keynote,https://aclanthology.org/2020.aespen-1.3,0,,,,,,,"Keynote Abstract: Current Open Questions for Operational Event Data. In this brief keynote, I will address what I see as five major issues in terms of development for operational event data sets (that is, event data intended for real-time monitoring and forecasting, rather than purely for academic research). First, there are no currently active real-time systems with fully open and transparent pipelines: instead, one or more components are proprietary. Ideally we need several of these, using different approaches (and in particular, comparisons between classical dictionary- and rule-based coders versus newer coders based on machine-learning approaches). Second, the CAMEO event ontology needs to be replaced by a more general system that includes, for example, political codes for electoral competition, legislative debate, and parliamentary coalition formation, as well as a robust set of codes for non-political events such as natural disasters, disease, and economic dislocations. Third, the issue of duplicate stories needs to be addressed (for example, the ICEWS system can generate as many as 150 coded events from a single occurrence on the ground), either to reduce these sets of related stories to a single set of events, or at least to label clusters of related stories as is already done in a number of systems (for example European Media Monitor).
Fourth, a systematic analysis needs to be done as to the additional information provided by hundreds of highly local sources (which have varying degrees of veracity and independence from states and local elites) as opposed to a relatively small number of international sources: obviously this will vary depending on the specific question being asked but has yet to be addressed at all. Finally, and this will overlap with academic work, a number of open benchmarks need to be constructed for the calibration of both coding systems and resulting models: these could be historical but need to include an easily licensed (or open) very large set of texts covering a substantial period of time, probably along the lines of the Linguistic Data Consortium Gigaword sets; if licensed, these need to be accessible to individual researchers and NGOs, not just academic institutions.",Keynote Abstract: Current Open Questions for Operational Event Data,"In this brief keynote, I will address what I see as five major issues in terms of development for operational event data sets (that is, event data intended for real-time monitoring and forecasting, rather than purely for academic research). First, there are no currently active real-time systems with fully open and transparent pipelines: instead, one or more components are proprietary. Ideally we need several of these, using different approaches (and in particular, comparisons between classical dictionary- and rule-based coders versus newer coders based on machine-learning approaches). Second, the CAMEO event ontology needs to be replaced by a more general system that includes, for example, political codes for electoral competition, legislative debate, and parliamentary coalition formation, as well as a robust set of codes for non-political events such as natural disasters, disease, and economic dislocations. Third, the issue of duplicate stories needs to be addressed (for example, the ICEWS system can generate as many as 150 coded events from a single occurrence on the ground), either to reduce these sets of related stories to a single set of events, or at least to label clusters of related stories as is already done in a number of systems (for example European Media Monitor).
Fourth, a systematic analysis needs to be done as to the additional information provided by hundreds of highly local sources (which have varying degrees of veracity and independence from states and local elites) as opposed to a relatively small number of international sources: obviously this will vary depending on the specific question being asked but has yet to be addressed at all. Finally, and this will overlap with academic work, a number of open benchmarks need to be constructed for the calibration of both coding systems and resulting models: these could be historical but need to include an easily licensed (or open) very large set of texts covering a substantial period of time, probably along the lines of the Linguistic Data Consortium Gigaword sets; if licensed, these need to be accessible to individual researchers and NGOs, not just academic institutions.",Keynote Abstract: Current Open Questions for Operational Event Data,"In this brief keynote, I will address what I see as five major issues in terms of development for operational event data sets (that is, event data intended for real-time monitoring and forecasting, rather than purely for academic research). First, there are no currently active real-time systems with fully open and transparent pipelines: instead, one or more components are proprietary. Ideally we need several of these, using different approaches (and in particular, comparisons between classical dictionary- and rule-based coders versus newer coders based on machine-learning approaches). Second, the CAMEO event ontology needs to be replaced by a more general system that includes, for example, political codes for electoral competition, legislative debate, and parliamentary coalition formation, as well as a robust set of codes for non-political events such as natural disasters, disease, and economic dislocations. Third, the issue of duplicate stories needs to be addressed (for example, the ICEWS system can generate as many as 150 coded events from a single occurrence on the ground), either to reduce these sets of related stories to a single set of events, or at least to label clusters of related stories as is already done in a number of systems (for example European Media Monitor).
Fourth, a systematic analysis needs to be done as to the additional information provided by hundreds of highly local sources (which have varying degrees of veracity and independence from states and local elites) as opposed to a relatively small number of international sources: obviously this will vary depending on the specific question being asked but has yet to be addressed at all. Finally, and this will overlap with academic work, a number of open benchmarks need to be constructed for the calibration of both coding systems and resulting models: these could be historical but need to include an easily licensed (or open) very large set of texts covering a substantial period of time, probably along the lines of the Linguistic Data Consortium Gigaword sets; if licensed, these need to be accessible to individual researchers and NGOs, not just academic institutions.",,"Keynote Abstract: Current Open Questions for Operational Event Data. In this brief keynote, I will address what I see as five major issues in terms of development for operational event data sets (that is, event data intended for real-time monitoring and forecasting, rather than purely for academic research). First, there are no currently active real-time systems with fully open and transparent pipelines: instead, one or more components are proprietary. Ideally we need several of these, using different approaches (and in particular, comparisons between classical dictionary- and rule-based coders versus newer coders based on machine-learning approaches). Second, the CAMEO event ontology needs to be replaced by a more general system that includes, for example, political codes for electoral competition, legislative debate, and parliamentary coalition formation, as well as a robust set of codes for non-political events such as natural disasters, disease, and economic dislocations. Third, the issue of duplicate stories needs to be addressed (for example, the ICEWS system can generate as many as 150 coded events from a single occurrence on the ground), either to reduce these sets of related stories to a single set of events, or at least to label clusters of related stories as is already done in a number of systems (for example European Media Monitor).
Fourth, a systematic analysis needs to be done as to the additional information provided by hundreds of highly local sources (which have varying degrees of veracity and independence from states and local elites) as opposed to a relatively small number of international sources: obviously this will vary depending on the specific question being asked but has yet to be addressed at all. Finally, and this will overlap with academic work, a number of open benchmarks need to be constructed for the calibration of both coding systems and resulting models: these could be historical but need to include an easily licensed (or open) very large set of texts covering a substantial period of time, probably along the lines of the Linguistic Data Consortium Gigaword sets; if licensed, these need to be accessible to individual researchers and NGOs, not just academic institutions.",2020
baquero-arnal-etal-2019-mllp,https://aclanthology.org/W19-5423,0,,,,,,,"The MLLP-UPV Spanish-Portuguese and Portuguese-Spanish Machine Translation Systems for WMT19 Similar Language Translation Task. This paper describes the participation of the MLLP research group of the Universitat Politècnica de València in the WMT 2019 Similar Language Translation Shared Task. We have submitted systems for the Portuguese ↔ Spanish language pair, in both directions. They are based on the Transformer architecture as well as on a novel architecture called 2D alternating RNN. Both systems have been domain adapted through fine-tuning that has been shown to be very effective.",The {MLLP}-{UPV} {S}panish-{P}ortuguese and {P}ortuguese-{S}panish Machine Translation Systems for {WMT}19 Similar Language Translation Task,"This paper describes the participation of the MLLP research group of the Universitat Politècnica de València in the WMT 2019 Similar Language Translation Shared Task. We have submitted systems for the Portuguese ↔ Spanish language pair, in both directions. They are based on the Transformer architecture as well as on a novel architecture called 2D alternating RNN. Both systems have been domain adapted through fine-tuning that has been shown to be very effective.",The MLLP-UPV Spanish-Portuguese and Portuguese-Spanish Machine Translation Systems for WMT19 Similar Language Translation Task,"This paper describes the participation of the MLLP research group of the Universitat Politècnica de València in the WMT 2019 Similar Language Translation Shared Task. We have submitted systems for the Portuguese ↔ Spanish language pair, in both directions. They are based on the Transformer architecture as well as on a novel architecture called 2D alternating RNN. Both systems have been domain adapted through fine-tuning that has been shown to be very effective.",The research leading to these results has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 761758 X5gon ,"The MLLP-UPV Spanish-Portuguese and Portuguese-Spanish Machine Translation Systems for WMT19 Similar Language Translation Task. This paper describes the participation of the MLLP research group of the Universitat Politècnica de València in the WMT 2019 Similar Language Translation Shared Task. We have submitted systems for the Portuguese ↔ Spanish language pair, in both directions. They are based on the Transformer architecture as well as on a novel architecture called 2D alternating RNN. Both systems have been domain adapted through fine-tuning that has been shown to be very effective.",2019
loukachevitch-dobrov-2004-development,http://www.lrec-conf.org/proceedings/lrec2004/pdf/343.pdf,0,,,,,,,"Development of Bilingual Domain-Specific Ontology for Automatic Conceptual Indexing. In the paper we describe development, means of evaluation and applications of Russian-English Sociopolitical Thesaurus specially developed as a linguistic resource for automatic text processing applications. The Sociopolitical domain is not a domain of social research but a broad domain of social relations including economic, political, military, cultural, sports and other subdomains. The knowledge of this domain is necessary for automatic text processing of such important documents as official documents, legislative acts, newspaper articles.",Development of Bilingual Domain-Specific Ontology for Automatic Conceptual Indexing,"In the paper we describe development, means of evaluation and applications of Russian-English Sociopolitical Thesaurus specially developed as a linguistic resource for automatic text processing applications. The Sociopolitical domain is not a domain of social research but a broad domain of social relations including economic, political, military, cultural, sports and other subdomains. The knowledge of this domain is necessary for automatic text processing of such important documents as official documents, legislative acts, newspaper articles.",Development of Bilingual Domain-Specific Ontology for Automatic Conceptual Indexing,"In the paper we describe development, means of evaluation and applications of Russian-English Sociopolitical Thesaurus specially developed as a linguistic resource for automatic text processing applications. The Sociopolitical domain is not a domain of social research but a broad domain of social relations including economic, political, military, cultural, sports and other subdomains. The knowledge of this domain is necessary for automatic text processing of such important documents as official documents, legislative acts, newspaper articles.",Partial support for this work is provided by the Russian Foundation for Basic Research through grant # 03-01-00472.,"Development of Bilingual Domain-Specific Ontology for Automatic Conceptual Indexing. In the paper we describe development, means of evaluation and applications of Russian-English Sociopolitical Thesaurus specially developed as a linguistic resource for automatic text processing applications. The Sociopolitical domain is not a domain of social research but a broad domain of social relations including economic, political, military, cultural, sports and other subdomains. The knowledge of this domain is necessary for automatic text processing of such important documents as official documents, legislative acts, newspaper articles.",2004
takamichi-saruwatari-2018-cpjd,https://aclanthology.org/L18-1067,0,,,,,,,"CPJD Corpus: Crowdsourced Parallel Speech Corpus of Japanese Dialects. Public parallel corpora of dialects can accelerate related studies such as spoken language processing. Various corpora have been collected using a well-equipped recording environment, such as voice recording in an anechoic room. However, due to geographical and expense issues, it is impossible to use such a perfect recording environment for collecting all existing dialects. To address this problem, we used web-based recording and crowdsourcing platforms to construct a crowdsourced parallel speech corpus of Japanese dialects (CPJD corpus) including parallel text and speech data of 21 Japanese dialects. We recruited native dialect speakers on the crowdsourcing platform, and the hired speakers recorded their dialect speech using their personal computer or smartphone in their homes. This paper shows the results of the data collection and analyzes the audio data in terms of the signal-to-noise ratio and mispronunciations.",{CPJD} Corpus: Crowdsourced Parallel Speech Corpus of {J}apanese Dialects,"Public parallel corpora of dialects can accelerate related studies such as spoken language processing. Various corpora have been collected using a well-equipped recording environment, such as voice recording in an anechoic room. However, due to geographical and expense issues, it is impossible to use such a perfect recording environment for collecting all existing dialects. To address this problem, we used web-based recording and crowdsourcing platforms to construct a crowdsourced parallel speech corpus of Japanese dialects (CPJD corpus) including parallel text and speech data of 21 Japanese dialects. We recruited native dialect speakers on the crowdsourcing platform, and the hired speakers recorded their dialect speech using their personal computer or smartphone in their homes. This paper shows the results of the data collection and analyzes the audio data in terms of the signal-to-noise ratio and mispronunciations.",CPJD Corpus: Crowdsourced Parallel Speech Corpus of Japanese Dialects,"Public parallel corpora of dialects can accelerate related studies such as spoken language processing. Various corpora have been collected using a well-equipped recording environment, such as voice recording in an anechoic room. However, due to geographical and expense issues, it is impossible to use such a perfect recording environment for collecting all existing dialects. To address this problem, we used web-based recording and crowdsourcing platforms to construct a crowdsourced parallel speech corpus of Japanese dialects (CPJD corpus) including parallel text and speech data of 21 Japanese dialects. We recruited native dialect speakers on the crowdsourcing platform, and the hired speakers recorded their dialect speech using their personal computer or smartphone in their homes. This paper shows the results of the data collection and analyzes the audio data in terms of the signal-to-noise ratio and mispronunciations.",Part of this work was supported by the SECOM Science and Technology Foundation.,"CPJD Corpus: Crowdsourced Parallel Speech Corpus of Japanese Dialects. Public parallel corpora of dialects can accelerate related studies such as spoken language processing. Various corpora have been collected using a well-equipped recording environment, such as voice recording in an anechoic room. 
However, due to geographical and expense issues, it is impossible to use such a perfect recording environment for collecting all existing dialects. To address this problem, we used web-based recording and crowdsourcing platforms to construct a crowdsourced parallel speech corpus of Japanese dialects (CPJD corpus) including parallel text and speech data of 21 Japanese dialects. We recruited native dialect speakers on the crowdsourcing platform, and the hired speakers recorded their dialect speech using their personal computer or smartphone in their homes. This paper shows the results of the data collection and analyzes the audio data in terms of the signal-to-noise ratio and mispronunciations.",2018
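The CPJD entry above reports analyzing the crowdsourced recordings in terms of signal-to-noise ratio. As a rough illustration of how such an SNR estimate might be computed, here is a minimal sketch assuming a 16-bit mono WAV file and a crude energy-threshold split between speech and noise frames; this is not the corpus authors' actual measurement pipeline, and the file name is hypothetical.

```python
# Rough SNR estimate for a speech recording: a sketch, not the CPJD authors' pipeline.
# Assumes a 16-bit mono WAV file; frames are labeled "speech" or "noise" by a simple
# energy threshold, which is only a crude stand-in for proper voice activity detection.
import wave
import numpy as np

def estimate_snr(path, frame_ms=25):
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        samples = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    samples = samples.astype(np.float64)
    frame_len = int(rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    energies = (frames ** 2).mean(axis=1)        # mean power per frame
    threshold = np.percentile(energies, 30)      # bottom 30% of frames treated as noise
    noise_power = energies[energies <= threshold].mean()
    speech_power = energies[energies > threshold].mean()
    return 10 * np.log10(speech_power / max(noise_power, 1e-12))

# Example with a hypothetical file name:
# print(f"SNR ~ {estimate_snr('dialect_recording.wav'):.1f} dB")
```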
jiang-zhai-2006-exploiting,https://aclanthology.org/N06-1010,0,,,,,,,"Exploiting Domain Structure for Named Entity Recognition. Named Entity Recognition (NER) is a fundamental task in text mining and natural language understanding. Current approaches to NER (mostly based on supervised learning) perform well on domains similar to the training domain, but they tend to adapt poorly to slightly different domains. We present several strategies for exploiting the domain structure in the training data to learn a more robust named entity recognizer that can perform well on a new domain. First, we propose a simple yet effective way to automatically rank features based on their generalizabilities across domains. We then train a classifier with strong emphasis on the most generalizable features. This emphasis is imposed by putting a rank-based prior on a logistic regression model. We further propose a domain-aware cross validation strategy to help choose an appropriate parameter for the rank-based prior. We evaluated the proposed method with a task of recognizing named entities (genes) in biology text involving three species. The experiment results show that the new domain-aware approach outperforms a state-of-the-art baseline method in adapting to new domains, especially when there is a great difference between the new domain and the training domain.",Exploiting Domain Structure for Named Entity Recognition,"Named Entity Recognition (NER) is a fundamental task in text mining and natural language understanding. Current approaches to NER (mostly based on supervised learning) perform well on domains similar to the training domain, but they tend to adapt poorly to slightly different domains. We present several strategies for exploiting the domain structure in the training data to learn a more robust named entity recognizer that can perform well on a new domain. First, we propose a simple yet effective way to automatically rank features based on their generalizabilities across domains. We then train a classifier with strong emphasis on the most generalizable features. This emphasis is imposed by putting a rank-based prior on a logistic regression model. We further propose a domain-aware cross validation strategy to help choose an appropriate parameter for the rank-based prior. We evaluated the proposed method with a task of recognizing named entities (genes) in biology text involving three species. The experiment results show that the new domain-aware approach outperforms a state-of-the-art baseline method in adapting to new domains, especially when there is a great difference between the new domain and the training domain.",Exploiting Domain Structure for Named Entity Recognition,"Named Entity Recognition (NER) is a fundamental task in text mining and natural language understanding. Current approaches to NER (mostly based on supervised learning) perform well on domains similar to the training domain, but they tend to adapt poorly to slightly different domains. We present several strategies for exploiting the domain structure in the training data to learn a more robust named entity recognizer that can perform well on a new domain. First, we propose a simple yet effective way to automatically rank features based on their generalizabilities across domains. We then train a classifier with strong emphasis on the most generalizable features. This emphasis is imposed by putting a rank-based prior on a logistic regression model. 
We further propose a domain-aware cross validation strategy to help choose an appropriate parameter for the rank-based prior. We evaluated the proposed method with a task of recognizing named entities (genes) in biology text involving three species. The experiment results show that the new domain-aware approach outperforms a state-of-the-art baseline method in adapting to new domains, especially when there is a great difference between the new domain and the training domain.","This work was in part supported by the National Science Foundation under award numbers 0425852, 0347933, and 0428472. We would like to thank Bruce Schatz, Xin He, Qiaozhu Mei, Xu Ling, and some other BeeSpace project members for useful discussions. We would like to thank Mark Sammons for his help with FEX. We would also like to thank the anonymous reviewers for their comments.","Exploiting Domain Structure for Named Entity Recognition. Named Entity Recognition (NER) is a fundamental task in text mining and natural language understanding. Current approaches to NER (mostly based on supervised learning) perform well on domains similar to the training domain, but they tend to adapt poorly to slightly different domains. We present several strategies for exploiting the domain structure in the training data to learn a more robust named entity recognizer that can perform well on a new domain. First, we propose a simple yet effective way to automatically rank features based on their generalizabilities across domains. We then train a classifier with strong emphasis on the most generalizable features. This emphasis is imposed by putting a rank-based prior on a logistic regression model. We further propose a domain-aware cross validation strategy to help choose an appropriate parameter for the rank-based prior. We evaluated the proposed method with a task of recognizing named entities (genes) in biology text involving three species. The experiment results show that the new domain-aware approach outperforms a state-of-the-art baseline method in adapting to new domains, especially when there is a great difference between the new domain and the training domain.",2006
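The entry above ranks features by cross-domain generalizability and encodes that ranking as a rank-based prior on logistic regression. The following is a minimal sketch of one way such a prior could be realized, as rank-dependent L2 penalties: the generalizability score (minimum per-domain label correlation) and the linear penalty schedule are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a rank-based Gaussian prior for logistic regression: features ranked as more
# "generalizable" across training domains receive a weaker L2 penalty. Illustrative only;
# the ranking score and penalty schedule are assumptions, not Jiang & Zhai's exact scheme.
import numpy as np

def rank_features_by_generalizability(X_by_domain, y_by_domain):
    """Toy generalizability score: a feature's minimum |correlation with the label|
    across training domains (a feature must help in every domain to rank highly)."""
    scores = []
    for j in range(X_by_domain[0].shape[1]):
        per_domain = [abs(np.corrcoef(X[:, j], y)[0, 1])
                      if X[:, j].std() > 0 and np.std(y) > 0 else 0.0
                      for X, y in zip(X_by_domain, y_by_domain)]
        scores.append(min(per_domain))
    return np.argsort(-np.array(scores))          # best-ranked feature index first

def fit_rank_prior_logreg(X, y, ranks, base_lambda=0.1, growth=0.05, lr=0.1, epochs=500):
    """Binary logistic regression whose per-feature L2 strength increases with rank."""
    n, d = X.shape
    lam = np.empty(d)
    lam[ranks] = base_lambda + growth * np.arange(d)  # weaker prior for top-ranked features
    w = np.zeros(d)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) / n + lam * w            # logistic loss gradient + L2 term
        w -= lr * grad
    return w
```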
nn-1976-finite-string-volume-13-number-4,https://aclanthology.org/J76-2010,0,,,,,,,"The FINITE STRING, Volume 13, Number 4 (continued). Each year the federal government contracts for billions of dollars of work to support efforts deemed to be in the national interest. A significant percentage of the contract services are in the form of Research and Development (R & D) or programmatic work which colleges and universities are particularly well-suited to perform. The government commits these funds in either of two ways: grants or contracts. University researchers are generally more familiar with the grant procedure than with the contract procedure. Under a grant program, a given federal agency is authorized to grant funds to non-profit institutions, frequently educational institutions, for the purpose of supporting research or a in a given general area. A body of general conditions is established by the Congress and refined by the applicable agency to set parameters for the program as a whole. A specific grant for a program can be made so long as it fits within the general standards (the Guidelines) of the program and meets whatever qualitative standards for review that have been established.","The {F}INITE {S}TRING, Volume 13, Number 4 (continued)","Each year the federal government contracts for billions of dollars of work to support efforts deemed to be in the national interest. A significant percentage of the contract services are in the form of Research and Development (R & D) or programmatic work which colleges and universities are particularly well-suited to perform. The government commits these funds in either of two ways: grants or contracts. University researchers are generally more familiar with the grant procedure than with the contract procedure. Under a grant program, a given federal agency is authorized to grant funds to non-profit institutions, frequently educational institutions, for the purpose of supporting research or a in a given general area. A body of general conditions is established by the Congress and refined by the applicable agency to set parameters for the program as a whole. A specific grant for a program can be made so long as it fits within the general standards (the Guidelines) of the program and meets whatever qualitative standards for review that have been established.","The FINITE STRING, Volume 13, Number 4 (continued)","Each year the federal government contracts for billions of dollars of work to support efforts deemed to be in the national interest. A significant percentage of the contract services are in the form of Research and Development (R & D) or programmatic work which colleges and universities are particularly well-suited to perform. The government commits these funds in either of two ways: grants or contracts. University researchers are generally more familiar with the grant procedure than with the contract procedure. Under a grant program, a given federal agency is authorized to grant funds to non-profit institutions, frequently educational institutions, for the purpose of supporting research or a in a given general area. A body of general conditions is established by the Congress and refined by the applicable agency to set parameters for the program as a whole. 
A specific grant for a program can be made so long as it fits within the general standards (the Guidelines) of the program and meets whatever qualitative standards for review that have been established.",,"The FINITE STRING, Volume 13, Number 4 (continued). Each year the federal government contracts for billions of dollars of work to support efforts deemed to be in the national interest. A significant percentage of the contract services are in the form of Research and Development (R & D) or programmatic work which colleges and universities are particularly well-suited to perform. The government commits these funds in either of two ways: grants or contracts. University researchers are generally more familiar with the grant procedure than with the contract procedure. Under a grant program, a given federal agency is authorized to grant funds to non-profit institutions, frequently educational institutions, for the purpose of supporting research or a in a given general area. A body of general conditions is established by the Congress and refined by the applicable agency to set parameters for the program as a whole. A specific grant for a program can be made so long as it fits within the general standards (the Guidelines) of the program and meets whatever qualitative standards for review that have been established.",1976
wang-etal-1999-lexicon,https://aclanthology.org/Y99-1023,0,,,,,,,"The Lexicon in FCIDB : A Friendly Chinese Interface for DBMS. FCIDB (Friendly Chinese Interface for DataBase management systems) can understand users' queries in the Chinese language. It works like a translator that translates Chinese queries into SQL commands. In the translation process, the lexicon of FCIDB plays a key role in both parsing and word segmentation. We designed some questionnaires to collect the frequently occurring words and add them to the public 'lexicon in FCIDB. FCIDB will produce a private lexicon for every new connected database. This paper will focus on the words included in the public lexicon and in the private lexicon. We also discuss the function, the structure, and the contents of the lexicon in FCIDB.",The Lexicon in {FCIDB} : A Friendly {C}hinese Interface for {DBMS},"FCIDB (Friendly Chinese Interface for DataBase management systems) can understand users' queries in the Chinese language. It works like a translator that translates Chinese queries into SQL commands. In the translation process, the lexicon of FCIDB plays a key role in both parsing and word segmentation. We designed some questionnaires to collect the frequently occurring words and add them to the public 'lexicon in FCIDB. FCIDB will produce a private lexicon for every new connected database. This paper will focus on the words included in the public lexicon and in the private lexicon. We also discuss the function, the structure, and the contents of the lexicon in FCIDB.",The Lexicon in FCIDB : A Friendly Chinese Interface for DBMS,"FCIDB (Friendly Chinese Interface for DataBase management systems) can understand users' queries in the Chinese language. It works like a translator that translates Chinese queries into SQL commands. In the translation process, the lexicon of FCIDB plays a key role in both parsing and word segmentation. We designed some questionnaires to collect the frequently occurring words and add them to the public 'lexicon in FCIDB. FCIDB will produce a private lexicon for every new connected database. This paper will focus on the words included in the public lexicon and in the private lexicon. We also discuss the function, the structure, and the contents of the lexicon in FCIDB.","We carried out an experiment to explore the lexicon. We constructed two different databases and designed questionnaires to collect queries. The results helped us to identify which words we needed in the public and private lexicon.We still need to simplify the word definition process to make it easier for users to add terminology and to move from one database to another. Now, the system can be an interface with ACCESS and Visual dBASE. In the future, we hope to port it to other systems.","The Lexicon in FCIDB : A Friendly Chinese Interface for DBMS. FCIDB (Friendly Chinese Interface for DataBase management systems) can understand users' queries in the Chinese language. It works like a translator that translates Chinese queries into SQL commands. In the translation process, the lexicon of FCIDB plays a key role in both parsing and word segmentation. We designed some questionnaires to collect the frequently occurring words and add them to the public 'lexicon in FCIDB. FCIDB will produce a private lexicon for every new connected database. This paper will focus on the words included in the public lexicon and in the private lexicon. We also discuss the function, the structure, and the contents of the lexicon in FCIDB.",1999
mclauchlan-2004-thesauruses,https://aclanthology.org/W04-2410,0,,,,,,,"Thesauruses for Prepositional Phrase Attachment. Probabilistic models have been effective in resolving prepositional phrase attachment ambiguity, but sparse data remains a significant problem. We propose a solution based on similarity-based smoothing, where the probability of new PPs is estimated with information from similar examples generated using a thesaurus. Three thesauruses are compared on this task: two existing generic thesauruses and a new specialist PP thesaurus tailored for this problem. We also compare three smoothing techniques for prepositional phrases. We find that the similarity scores provided by the thesaurus tend to weight distant neighbours too highly, and describe a better score based on the rank of a word in the list of similar words. Our smoothing methods are applied to an existing PP attachment model and we obtain significant improvements over the baseline.",Thesauruses for Prepositional Phrase Attachment,"Probabilistic models have been effective in resolving prepositional phrase attachment ambiguity, but sparse data remains a significant problem. We propose a solution based on similarity-based smoothing, where the probability of new PPs is estimated with information from similar examples generated using a thesaurus. Three thesauruses are compared on this task: two existing generic thesauruses and a new specialist PP thesaurus tailored for this problem. We also compare three smoothing techniques for prepositional phrases. We find that the similarity scores provided by the thesaurus tend to weight distant neighbours too highly, and describe a better score based on the rank of a word in the list of similar words. Our smoothing methods are applied to an existing PP attachment model and we obtain significant improvements over the baseline.",Thesauruses for Prepositional Phrase Attachment,"Probabilistic models have been effective in resolving prepositional phrase attachment ambiguity, but sparse data remains a significant problem. We propose a solution based on similarity-based smoothing, where the probability of new PPs is estimated with information from similar examples generated using a thesaurus. Three thesauruses are compared on this task: two existing generic thesauruses and a new specialist PP thesaurus tailored for this problem. We also compare three smoothing techniques for prepositional phrases. We find that the similarity scores provided by the thesaurus tend to weight distant neighbours too highly, and describe a better score based on the rank of a word in the list of similar words. Our smoothing methods are applied to an existing PP attachment model and we obtain significant improvements over the baseline.","Many thanks to Julie Weeds and Adam Kilgarriff for providing the specialist and WASPS thesauruses, and for useful discussions. Thanks also to the anonymous reviewers for many helpful comments.","Thesauruses for Prepositional Phrase Attachment. Probabilistic models have been effective in resolving prepositional phrase attachment ambiguity, but sparse data remains a significant problem. We propose a solution based on similarity-based smoothing, where the probability of new PPs is estimated with information from similar examples generated using a thesaurus. Three thesauruses are compared on this task: two existing generic thesauruses and a new specialist PP thesaurus tailored for this problem. We also compare three smoothing techniques for prepositional phrases. 
We find that the similarity scores provided by the thesaurus tend to weight distant neighbours too highly, and describe a better score based on the rank of a word in the list of similar words. Our smoothing methods are applied to an existing PP attachment model and we obtain significant improvements over the baseline.",2004
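The row above smooths PP-attachment probabilities with thesaurus neighbours, weighting each neighbour by its rank in the similarity list rather than by its raw similarity score. Here is a minimal sketch of rank-weighted smoothing over counts; the 1/rank weighting, the relative-frequency model, and the tiny example data are illustrative assumptions, not the paper's exact smoothing scheme.

```python
# Sketch of similarity-based smoothing with rank-derived weights: the attachment
# probability for an unseen (verb, prep, noun) tuple is estimated from thesaurus
# neighbours of the noun, each weighted by 1/rank rather than its raw similarity score.
from collections import Counter

def smoothed_attach_prob(verb, prep, noun, attach_counts, total_counts, thesaurus,
                         max_neighbours=10):
    """attach_counts / total_counts: Counters over (verb, prep, noun) tuples, where
    attach_counts holds verb-attachment decisions. thesaurus maps a noun to a list of
    similar nouns ordered from most to least similar."""
    neighbours = [noun] + thesaurus.get(noun, [])[:max_neighbours]
    num, den = 0.0, 0.0
    for rank, neighbour in enumerate(neighbours, start=1):
        weight = 1.0 / rank                      # rank-based weight
        key = (verb, prep, neighbour)
        num += weight * attach_counts.get(key, 0)
        den += weight * total_counts.get(key, 0)
    return num / den if den > 0 else 0.5         # back off to an uninformative prior

# Tiny usage example with made-up counts:
attach = Counter({("eat", "with", "fork"): 9})
total = Counter({("eat", "with", "fork"): 10, ("eat", "with", "spoon"): 4})
thes = {"chopsticks": ["fork", "spoon"]}
print(smoothed_attach_prob("eat", "with", "chopsticks", attach, total, thes))
```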
elkaref-hassan-2021-joint,https://aclanthology.org/2021.smm4h-1.16,1,,,,peace_justice_and_strong_institutions,,,"A Joint Training Approach to Tweet Classification and Adverse Effect Extraction and Normalization for SMM4H 2021. In this work we describe our submissions to the Social Media Mining for Health (SMM4H) 2021 Shared Task (Magge et al., 2021). We investigated the effectiveness of a joint training approach to Task 1, specifically classification, extraction and normalization of Adverse Drug Effect (ADE) mentions in English tweets. Our approach performed well on the normalization task, achieving an above average f1 score of 24%, but less so on classification and extraction, with f1 scores of 22% and 37% respectively. Our experiments also showed that a larger dataset with more negative results led to stronger results than a smaller more balanced dataset, even when both datasets have the same positive examples. Finally we also submitted a tuned BERT model for Task 6: Classification of Covid-19 tweets containing symptoms, which achieved an above average f1 score of 96%.",A Joint Training Approach to Tweet Classification and Adverse Effect Extraction and Normalization for {SMM}4{H} 2021,"In this work we describe our submissions to the Social Media Mining for Health (SMM4H) 2021 Shared Task (Magge et al., 2021). We investigated the effectiveness of a joint training approach to Task 1, specifically classification, extraction and normalization of Adverse Drug Effect (ADE) mentions in English tweets. Our approach performed well on the normalization task, achieving an above average f1 score of 24%, but less so on classification and extraction, with f1 scores of 22% and 37% respectively. Our experiments also showed that a larger dataset with more negative results led to stronger results than a smaller more balanced dataset, even when both datasets have the same positive examples. Finally we also submitted a tuned BERT model for Task 6: Classification of Covid-19 tweets containing symptoms, which achieved an above average f1 score of 96%.",A Joint Training Approach to Tweet Classification and Adverse Effect Extraction and Normalization for SMM4H 2021,"In this work we describe our submissions to the Social Media Mining for Health (SMM4H) 2021 Shared Task (Magge et al., 2021). We investigated the effectiveness of a joint training approach to Task 1, specifically classification, extraction and normalization of Adverse Drug Effect (ADE) mentions in English tweets. Our approach performed well on the normalization task, achieving an above average f1 score of 24%, but less so on classification and extraction, with f1 scores of 22% and 37% respectively. Our experiments also showed that a larger dataset with more negative results led to stronger results than a smaller more balanced dataset, even when both datasets have the same positive examples. Finally we also submitted a tuned BERT model for Task 6: Classification of Covid-19 tweets containing symptoms, which achieved an above average f1 score of 96%.",,"A Joint Training Approach to Tweet Classification and Adverse Effect Extraction and Normalization for SMM4H 2021. In this work we describe our submissions to the Social Media Mining for Health (SMM4H) 2021 Shared Task (Magge et al., 2021). We investigated the effectiveness of a joint training approach to Task 1, specifically classification, extraction and normalization of Adverse Drug Effect (ADE) mentions in English tweets. 
Our approach performed well on the normalization task, achieving an above average f1 score of 24%, but less so on classification and extraction, with f1 scores of 22% and 37% respectively. Our experiments also showed that a larger dataset with more negative results led to stronger results than a smaller more balanced dataset, even when both datasets have the same positive examples. Finally we also submitted a tuned BERT model for Task 6: Classification of Covid-19 tweets containing symptoms, which achieved an above average f1 score of 96%.",2021
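The SMM4H entry above mentions a tuned BERT model for classifying COVID-19 symptom tweets. Below is a minimal inference-time sketch using the Hugging Face transformers API with a generic public checkpoint; the checkpoint name, two-label setup, and example tweets are placeholders, not the shared-task submission's actual model or data.

```python
# Minimal sketch of BERT-based tweet classification with Hugging Face transformers.
# "bert-base-uncased" with an untrained two-label head is a placeholder; the shared-task
# submission used its own fine-tuned weights, which are not reproduced here.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.eval()

tweets = ["I lost my sense of smell and have a dry cough.",
          "Watching the game tonight, feeling great!"]
batch = tokenizer(tweets, padding=True, truncation=True, max_length=128, return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits
preds = logits.argmax(dim=-1).tolist()   # 0/1 class indices; meaningful only after fine-tuning
print(preds)
```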
gu-etal-2018-language,https://aclanthology.org/D18-1493,0,,,,,,,"Language Modeling with Sparse Product of Sememe Experts. Most language modeling methods rely on large-scale data to statistically learn the sequential patterns of words. In this paper, we argue that words are atomic language units but not necessarily atomic semantic units. Inspired by HowNet, we use sememes, the minimum semantic units in human languages, to represent the implicit semantics behind words for language modeling, named Sememe-Driven Language Model (SDLM). More specifically, to predict the next word, SDLM first estimates the sememe distribution given textual context. Afterwards, it regards each sememe as a distinct semantic expert, and these experts jointly identify the most probable senses and the corresponding word. In this way, SDLM enables language models to work beyond word-level manipulation to fine-grained sememe-level semantics, and offers us more powerful tools to fine-tune language models and improve the interpretability as well as the robustness of language models. Experiments on language modeling and the downstream application of headline generation demonstrate the significant effectiveness of SDLM. Source code and data used in the experiments can be accessed at https:// github.com/thunlp/SDLM-pytorch.",Language Modeling with Sparse Product of Sememe Experts,"Most language modeling methods rely on large-scale data to statistically learn the sequential patterns of words. In this paper, we argue that words are atomic language units but not necessarily atomic semantic units. Inspired by HowNet, we use sememes, the minimum semantic units in human languages, to represent the implicit semantics behind words for language modeling, named Sememe-Driven Language Model (SDLM). More specifically, to predict the next word, SDLM first estimates the sememe distribution given textual context. Afterwards, it regards each sememe as a distinct semantic expert, and these experts jointly identify the most probable senses and the corresponding word. In this way, SDLM enables language models to work beyond word-level manipulation to fine-grained sememe-level semantics, and offers us more powerful tools to fine-tune language models and improve the interpretability as well as the robustness of language models. Experiments on language modeling and the downstream application of headline generation demonstrate the significant effectiveness of SDLM. Source code and data used in the experiments can be accessed at https:// github.com/thunlp/SDLM-pytorch.",Language Modeling with Sparse Product of Sememe Experts,"Most language modeling methods rely on large-scale data to statistically learn the sequential patterns of words. In this paper, we argue that words are atomic language units but not necessarily atomic semantic units. Inspired by HowNet, we use sememes, the minimum semantic units in human languages, to represent the implicit semantics behind words for language modeling, named Sememe-Driven Language Model (SDLM). More specifically, to predict the next word, SDLM first estimates the sememe distribution given textual context. Afterwards, it regards each sememe as a distinct semantic expert, and these experts jointly identify the most probable senses and the corresponding word. 
In this way, SDLM enables language models to work beyond word-level manipulation to fine-grained sememe-level semantics, and offers us more powerful tools to fine-tune language models and improve the interpretability as well as the robustness of language models. Experiments on language modeling and the downstream application of headline generation demonstrate the significant effectiveness of SDLM. Source code and data used in the experiments can be accessed at https:// github.com/thunlp/SDLM-pytorch.","This work is supported by the 973 Program (No. 2014CB340501), the National Natural Science Foundation of China (NSFC No. 61572273) and the research fund of Tsinghua University-Tencent Joint Laboratory for Internet Innovation Technology. This work is also funded by China Association for Science and Technology (2016QNRC001). Hao Zhu and Jun Yan are supported by Tsinghua University Initiative Scientific Research Program. We thank all members of Tsinghua NLP lab. We also thank anonymous reviewers for their careful reading and their insightful comments.","Language Modeling with Sparse Product of Sememe Experts. Most language modeling methods rely on large-scale data to statistically learn the sequential patterns of words. In this paper, we argue that words are atomic language units but not necessarily atomic semantic units. Inspired by HowNet, we use sememes, the minimum semantic units in human languages, to represent the implicit semantics behind words for language modeling, named Sememe-Driven Language Model (SDLM). More specifically, to predict the next word, SDLM first estimates the sememe distribution given textual context. Afterwards, it regards each sememe as a distinct semantic expert, and these experts jointly identify the most probable senses and the corresponding word. In this way, SDLM enables language models to work beyond word-level manipulation to fine-grained sememe-level semantics, and offers us more powerful tools to fine-tune language models and improve the interpretability as well as the robustness of language models. Experiments on language modeling and the downstream application of headline generation demonstrate the significant effectiveness of SDLM. Source code and data used in the experiments can be accessed at https:// github.com/thunlp/SDLM-pytorch.",2018
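The SDLM entry above predicts a sememe distribution from context and then combines per-sememe experts to score senses and words. The following toy sketch shows a sparse product-of-experts readout over a made-up sememe-word annotation matrix; the dimensions, sememe inventory, and scoring function are all illustrative and simpler than the released SDLM-pytorch code.

```python
# Toy sketch of a sparse product-of-sememe-experts readout: the context defines a
# distribution over sememes, each sememe contributes expert scores only for the words it
# annotates, and the weighted scores are combined in log space before a softmax.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["apple", "river", "bank", "money"]
sememes = ["fruit", "water", "institution", "finance"]
# Binary annotation matrix: which sememes describe which word (toy HowNet-style links).
annotation = np.array([[1, 0, 0, 0],     # apple: fruit
                       [0, 1, 0, 0],     # river: water
                       [0, 1, 1, 0],     # bank: water (riverbank) or institution
                       [0, 0, 1, 1]])    # money: institution, finance

def predict_word(context_vec, sememe_proj, expert_scores):
    """context_vec: (d,) hidden state; sememe_proj: (d, S); expert_scores: (S, V)."""
    sememe_probs = np.exp(context_vec @ sememe_proj)
    sememe_probs /= sememe_probs.sum()                      # q(sememe | context)
    # Each expert only scores the words it annotates (sparsity), weighted by its probability.
    logits = (sememe_probs[:, None] * expert_scores * annotation.T).sum(axis=0)
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

d, S, V = 8, len(sememes), len(vocab)
context = rng.normal(size=d)
print(dict(zip(vocab, predict_word(context, rng.normal(size=(d, S)),
                                   rng.normal(size=(S, V))).round(3))))
```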
chen-chen-2006-high,https://aclanthology.org/P06-2011,0,,,,,,,"A High-Accurate Chinese-English NE Backward Translation System Combining Both Lexical Information and Web Statistics. Named entity translation is indispensable in cross language information retrieval nowadays. We propose an approach of combining lexical information, web statistics, and inverse search based on Google to backward translate a Chinese named entity (NE) into English. Our system achieves a high Top-1 accuracy of 87.6%, which is a relatively good performance reported in this area until present.",A High-Accurate {C}hinese-{E}nglish {NE} Backward Translation System Combining Both Lexical Information and Web Statistics,"Named entity translation is indispensable in cross language information retrieval nowadays. We propose an approach of combining lexical information, web statistics, and inverse search based on Google to backward translate a Chinese named entity (NE) into English. Our system achieves a high Top-1 accuracy of 87.6%, which is a relatively good performance reported in this area until present.",A High-Accurate Chinese-English NE Backward Translation System Combining Both Lexical Information and Web Statistics,"Named entity translation is indispensable in cross language information retrieval nowadays. We propose an approach of combining lexical information, web statistics, and inverse search based on Google to backward translate a Chinese named entity (NE) into English. Our system achieves a high Top-1 accuracy of 87.6%, which is a relatively good performance reported in this area until present.",,"A High-Accurate Chinese-English NE Backward Translation System Combining Both Lexical Information and Web Statistics. Named entity translation is indispensable in cross language information retrieval nowadays. We propose an approach of combining lexical information, web statistics, and inverse search based on Google to backward translate a Chinese named entity (NE) into English. Our system achieves a high Top-1 accuracy of 87.6%, which is a relatively good performance reported in this area until present.",2006
hopkins-may-2013-models,https://aclanthology.org/P13-1139,0,,,,,,,"Models of Translation Competitions. What do we want to learn from a translation competition and how do we learn it with confidence? We argue that a disproportionate focus on ranking competition participants has led to lots of different rankings, but little insight about which rankings we should trust. In response, we provide the first framework that allows an empirical comparison of different analyses of competition results. We then use this framework to compare several analytical models on data from the Workshop on Machine Translation (WMT).",Models of Translation Competitions,"What do we want to learn from a translation competition and how do we learn it with confidence? We argue that a disproportionate focus on ranking competition participants has led to lots of different rankings, but little insight about which rankings we should trust. In response, we provide the first framework that allows an empirical comparison of different analyses of competition results. We then use this framework to compare several analytical models on data from the Workshop on Machine Translation (WMT).",Models of Translation Competitions,"What do we want to learn from a translation competition and how do we learn it with confidence? We argue that a disproportionate focus on ranking competition participants has led to lots of different rankings, but little insight about which rankings we should trust. In response, we provide the first framework that allows an empirical comparison of different analyses of competition results. We then use this framework to compare several analytical models on data from the Workshop on Machine Translation (WMT).",,"Models of Translation Competitions. What do we want to learn from a translation competition and how do we learn it with confidence? We argue that a disproportionate focus on ranking competition participants has led to lots of different rankings, but little insight about which rankings we should trust. In response, we provide the first framework that allows an empirical comparison of different analyses of competition results. We then use this framework to compare several analytical models on data from the Workshop on Machine Translation (WMT).",2013
fischer-laubli-2020-whats,https://aclanthology.org/2020.eamt-1.23,0,,,,,,,"What's the Difference Between Professional Human and Machine Translation? A Blind Multi-language Study on Domain-specific MT. Machine translation (MT) has been shown to produce a number of errors that require human post-editing, but the extent to which professional human translation (HT) contains such errors has not yet been compared to MT. We compile pretranslated documents in which MT and HT are interleaved, and ask professional translators to flag errors and post-edit these documents in a blind evaluation. We find that the post-editing effort for MT segments is only higher in two out of three language pairs, and that the number of segments with wrong terminology, omissions, and typographical problems is similar in HT.",What{'}s the Difference Between Professional Human and Machine Translation? A Blind Multi-language Study on Domain-specific {MT},"Machine translation (MT) has been shown to produce a number of errors that require human post-editing, but the extent to which professional human translation (HT) contains such errors has not yet been compared to MT. We compile pretranslated documents in which MT and HT are interleaved, and ask professional translators to flag errors and post-edit these documents in a blind evaluation. We find that the post-editing effort for MT segments is only higher in two out of three language pairs, and that the number of segments with wrong terminology, omissions, and typographical problems is similar in HT.",What's the Difference Between Professional Human and Machine Translation? A Blind Multi-language Study on Domain-specific MT,"Machine translation (MT) has been shown to produce a number of errors that require human post-editing, but the extent to which professional human translation (HT) contains such errors has not yet been compared to MT. We compile pretranslated documents in which MT and HT are interleaved, and ask professional translators to flag errors and post-edit these documents in a blind evaluation. We find that the post-editing effort for MT segments is only higher in two out of three language pairs, and that the number of segments with wrong terminology, omissions, and typographical problems is similar in HT.",,"What's the Difference Between Professional Human and Machine Translation? A Blind Multi-language Study on Domain-specific MT. Machine translation (MT) has been shown to produce a number of errors that require human post-editing, but the extent to which professional human translation (HT) contains such errors has not yet been compared to MT. We compile pretranslated documents in which MT and HT are interleaved, and ask professional translators to flag errors and post-edit these documents in a blind evaluation. We find that the post-editing effort for MT segments is only higher in two out of three language pairs, and that the number of segments with wrong terminology, omissions, and typographical problems is similar in HT.",2020
nghiem-ananiadou-2018-aplenty,https://aclanthology.org/D18-2019,0,,,,,,,"APLenty: annotation tool for creating high-quality datasets using active and proactive learning. In this paper, we present APLenty, an annotation tool for creating high-quality sequence labeling datasets using active and proactive learning. A major innovation of our tool is the integration of automatic annotation with active learning and proactive learning. This makes the task of creating labeled datasets easier, less time-consuming and requiring less human effort. APLenty is highly flexible and can be adapted to various other tasks.",{APL}enty: annotation tool for creating high-quality datasets using active and proactive learning,"In this paper, we present APLenty, an annotation tool for creating high-quality sequence labeling datasets using active and proactive learning. A major innovation of our tool is the integration of automatic annotation with active learning and proactive learning. This makes the task of creating labeled datasets easier, less time-consuming and requiring less human effort. APLenty is highly flexible and can be adapted to various other tasks.",APLenty: annotation tool for creating high-quality datasets using active and proactive learning,"In this paper, we present APLenty, an annotation tool for creating high-quality sequence labeling datasets using active and proactive learning. A major innovation of our tool is the integration of automatic annotation with active learning and proactive learning. This makes the task of creating labeled datasets easier, less time-consuming and requiring less human effort. APLenty is highly flexible and can be adapted to various other tasks.",This research has been carried out with funding from BBSRC BB/P025684/1 and BB/M006891/1. We would like to thank the anonymous reviewers for their helpful comments.,"APLenty: annotation tool for creating high-quality datasets using active and proactive learning. In this paper, we present APLenty, an annotation tool for creating high-quality sequence labeling datasets using active and proactive learning. A major innovation of our tool is the integration of automatic annotation with active learning and proactive learning. This makes the task of creating labeled datasets easier, less time-consuming and requiring less human effort. APLenty is highly flexible and can be adapted to various other tasks.",2018
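The APLenty entry above integrates automatic annotation with active learning. As a general illustration of the active-learning side (not the tool's implementation), here is a least-confidence sampling sketch for sequence labeling: the unlabeled sentences whose best tag sequences the model is least confident about are sent to the annotator first. The probability model and data are stand-ins.

```python
# Sketch of least-confidence active learning for sequence labeling: rank unlabeled
# sentences by how unsure the model is about their most likely tag sequence, and route
# the most uncertain ones to the human annotator first.
import numpy as np

def least_confidence_ranking(tag_marginals):
    """tag_marginals: list of (sentence_length, n_tags) arrays of per-token tag
    probabilities. Confidence = product of each token's best-tag probability."""
    scores = []
    for i, marg in enumerate(tag_marginals):
        confidence = float(np.prod(marg.max(axis=1)))
        scores.append((1.0 - confidence, i))     # higher score = more uncertain
    return [idx for _, idx in sorted(scores, reverse=True)]

# Toy example: three "sentences" with made-up tag marginals over 3 tags.
rng = np.random.default_rng(1)
pool = [rng.dirichlet(np.ones(3), size=n) for n in (4, 6, 5)]
print("annotation order:", least_confidence_ranking(pool))
```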
naskar-bandyopadhyay-2005-use,https://aclanthology.org/2005.mtsummit-posters.21,0,,,,,,,Use of Machine Translation in India: Current Status. A survey of the machine translation systems that have been developed in India for translation from English to Indian languages and among Indian languages reveals that the MT softwares are used in field testing or are available as web translation service. These systems are also used for teaching machine translation to the students and researchers. Most of these systems are in the English-Hindi or Indian language-Indian language domain. The translation domains are mostly government documents/reports and news stories. There are a number of other MT systems that are at their various phases of development and have been demonstrated at various forums. Many of these systems cover other Indian languages beside Hindi.,Use of Machine Translation in {I}ndia: Current Status,A survey of the machine translation systems that have been developed in India for translation from English to Indian languages and among Indian languages reveals that the MT softwares are used in field testing or are available as web translation service. These systems are also used for teaching machine translation to the students and researchers. Most of these systems are in the English-Hindi or Indian language-Indian language domain. The translation domains are mostly government documents/reports and news stories. There are a number of other MT systems that are at their various phases of development and have been demonstrated at various forums. Many of these systems cover other Indian languages beside Hindi.,Use of Machine Translation in India: Current Status,A survey of the machine translation systems that have been developed in India for translation from English to Indian languages and among Indian languages reveals that the MT softwares are used in field testing or are available as web translation service. These systems are also used for teaching machine translation to the students and researchers. Most of these systems are in the English-Hindi or Indian language-Indian language domain. The translation domains are mostly government documents/reports and news stories. There are a number of other MT systems that are at their various phases of development and have been demonstrated at various forums. Many of these systems cover other Indian languages beside Hindi.,,Use of Machine Translation in India: Current Status. A survey of the machine translation systems that have been developed in India for translation from English to Indian languages and among Indian languages reveals that the MT softwares are used in field testing or are available as web translation service. These systems are also used for teaching machine translation to the students and researchers. Most of these systems are in the English-Hindi or Indian language-Indian language domain. The translation domains are mostly government documents/reports and news stories. There are a number of other MT systems that are at their various phases of development and have been demonstrated at various forums. Many of these systems cover other Indian languages beside Hindi.,2005
wang-etal-2005-web,https://aclanthology.org/I05-1046,0,,,,,,,"Web-Based Unsupervised Learning for Query Formulation in Question Answering. Converting questions to effective queries is crucial to open-domain question answering systems. In this paper, we present a web-based unsupervised learning approach for transforming a given natural-language question to an effective query. The method involves querying a search engine for Web passages that contain the answer to the question, extracting patterns that characterize fine-grained classification for answers, and linking these patterns with n-grams in answer passages. Independent evaluation on a set of questions shows that the proposed approach outperforms a naive keywordbased approach in terms of mean reciprocal rank and human effort.",Web-Based Unsupervised Learning for Query Formulation in Question Answering,"Converting questions to effective queries is crucial to open-domain question answering systems. In this paper, we present a web-based unsupervised learning approach for transforming a given natural-language question to an effective query. The method involves querying a search engine for Web passages that contain the answer to the question, extracting patterns that characterize fine-grained classification for answers, and linking these patterns with n-grams in answer passages. Independent evaluation on a set of questions shows that the proposed approach outperforms a naive keywordbased approach in terms of mean reciprocal rank and human effort.",Web-Based Unsupervised Learning for Query Formulation in Question Answering,"Converting questions to effective queries is crucial to open-domain question answering systems. In this paper, we present a web-based unsupervised learning approach for transforming a given natural-language question to an effective query. The method involves querying a search engine for Web passages that contain the answer to the question, extracting patterns that characterize fine-grained classification for answers, and linking these patterns with n-grams in answer passages. Independent evaluation on a set of questions shows that the proposed approach outperforms a naive keywordbased approach in terms of mean reciprocal rank and human effort.",,"Web-Based Unsupervised Learning for Query Formulation in Question Answering. Converting questions to effective queries is crucial to open-domain question answering systems. In this paper, we present a web-based unsupervised learning approach for transforming a given natural-language question to an effective query. The method involves querying a search engine for Web passages that contain the answer to the question, extracting patterns that characterize fine-grained classification for answers, and linking these patterns with n-grams in answer passages. Independent evaluation on a set of questions shows that the proposed approach outperforms a naive keywordbased approach in terms of mean reciprocal rank and human effort.",2005
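The query-formulation entry above evaluates retrieval quality by mean reciprocal rank. For reference, a short sketch of how MRR is computed over ranked result lists; the relevance judgments below are invented, not the paper's data.

```python
# Mean reciprocal rank (MRR): for each question, take 1/rank of the first relevant
# passage in the returned list (0 if none is relevant), then average over questions.
def mean_reciprocal_rank(ranked_relevance):
    """ranked_relevance: one list of booleans per question, ordered by rank."""
    total = 0.0
    for rels in ranked_relevance:
        rr = 0.0
        for rank, is_relevant in enumerate(rels, start=1):
            if is_relevant:
                rr = 1.0 / rank
                break
        total += rr
    return total / len(ranked_relevance)

runs = [[False, True, False],   # first relevant hit at rank 2 -> 0.5
        [True],                 # rank 1 -> 1.0
        [False, False, False]]  # no hit -> 0.0
print(mean_reciprocal_rank(runs))  # (0.5 + 1.0 + 0.0) / 3 = 0.5
```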
zhang-bansal-2021-finding,https://aclanthology.org/2021.emnlp-main.531,0,,,,,,,"Finding a Balanced Degree of Automation for Summary Evaluation. Human evaluation for summarization tasks is reliable but brings in issues of reproducibility and high costs. Automatic metrics are cheap and reproducible but sometimes poorly correlated with human judgment. In this work, we propose flexible semi-automatic to automatic summary evaluation metrics, following the Pyramid human evaluation method. Semi-automatic Lite 2 Pyramid retains the reusable human-labeled Summary Content Units (SCUs) for reference(s) but replaces the manual work of judging SCUs' presence in system summaries with a natural language inference (NLI) model. Fully automatic Lite 3 Pyramid further substitutes SCUs with automatically extracted Semantic Triplet Units (STUs) via a semantic role labeling (SRL) model. Finally, we propose in-between metrics, Lite 2.x Pyramid, where we use a simple regressor to predict how well the STUs can simulate SCUs and retain SCUs that are more difficult to simulate, which provides a smooth transition and balance between automation and manual evaluation. Comparing to 15 existing metrics, we evaluate human-metric correlations on 3 existing meta-evaluation datasets and our newly-collected PyrXSum (with 100/10 XSum examples/systems). It shows that Lite 2 Pyramid consistently has the best summary-level correlations; Lite 3 Pyramid works better than or comparable to other automatic metrics; Lite 2.x Pyramid trades off small correlation drops for larger manual effort reduction, which can reduce costs for future data collection.",Finding a Balanced Degree of Automation for Summary Evaluation,"Human evaluation for summarization tasks is reliable but brings in issues of reproducibility and high costs. Automatic metrics are cheap and reproducible but sometimes poorly correlated with human judgment. In this work, we propose flexible semi-automatic to automatic summary evaluation metrics, following the Pyramid human evaluation method. Semi-automatic Lite 2 Pyramid retains the reusable human-labeled Summary Content Units (SCUs) for reference(s) but replaces the manual work of judging SCUs' presence in system summaries with a natural language inference (NLI) model. Fully automatic Lite 3 Pyramid further substitutes SCUs with automatically extracted Semantic Triplet Units (STUs) via a semantic role labeling (SRL) model. Finally, we propose in-between metrics, Lite 2.x Pyramid, where we use a simple regressor to predict how well the STUs can simulate SCUs and retain SCUs that are more difficult to simulate, which provides a smooth transition and balance between automation and manual evaluation. Comparing to 15 existing metrics, we evaluate human-metric correlations on 3 existing meta-evaluation datasets and our newly-collected PyrXSum (with 100/10 XSum examples/systems). It shows that Lite 2 Pyramid consistently has the best summary-level correlations; Lite 3 Pyramid works better than or comparable to other automatic metrics; Lite 2.x Pyramid trades off small correlation drops for larger manual effort reduction, which can reduce costs for future data collection.",Finding a Balanced Degree of Automation for Summary Evaluation,"Human evaluation for summarization tasks is reliable but brings in issues of reproducibility and high costs. Automatic metrics are cheap and reproducible but sometimes poorly correlated with human judgment. 
In this work, we propose flexible semi-automatic to automatic summary evaluation metrics, following the Pyramid human evaluation method. Semi-automatic Lite 2 Pyramid retains the reusable human-labeled Summary Content Units (SCUs) for reference(s) but replaces the manual work of judging SCUs' presence in system summaries with a natural language inference (NLI) model. Fully automatic Lite 3 Pyramid further substitutes SCUs with automatically extracted Semantic Triplet Units (STUs) via a semantic role labeling (SRL) model. Finally, we propose in-between metrics, Lite 2.x Pyramid, where we use a simple regressor to predict how well the STUs can simulate SCUs and retain SCUs that are more difficult to simulate, which provides a smooth transition and balance between automation and manual evaluation. Comparing to 15 existing metrics, we evaluate human-metric correlations on 3 existing meta-evaluation datasets and our newly-collected PyrXSum (with 100/10 XSum examples/systems). It shows that Lite 2 Pyramid consistently has the best summary-level correlations; Lite 3 Pyramid works better than or comparable to other automatic metrics; Lite 2.x Pyramid trades off small correlation drops for larger manual effort reduction, which can reduce costs for future data collection.",We thank the reviewers for their helpful comments. We thank Xiang Zhou for useful discussions and thank Steven Chen for proofreading SCUs for PyrXSum. This work was supported by NSF-CAREER Award 1846185.,"Finding a Balanced Degree of Automation for Summary Evaluation. Human evaluation for summarization tasks is reliable but brings in issues of reproducibility and high costs. Automatic metrics are cheap and reproducible but sometimes poorly correlated with human judgment. In this work, we propose flexible semi-automatic to automatic summary evaluation metrics, following the Pyramid human evaluation method. Semi-automatic Lite 2 Pyramid retains the reusable human-labeled Summary Content Units (SCUs) for reference(s) but replaces the manual work of judging SCUs' presence in system summaries with a natural language inference (NLI) model. Fully automatic Lite 3 Pyramid further substitutes SCUs with automatically extracted Semantic Triplet Units (STUs) via a semantic role labeling (SRL) model. Finally, we propose in-between metrics, Lite 2.x Pyramid, where we use a simple regressor to predict how well the STUs can simulate SCUs and retain SCUs that are more difficult to simulate, which provides a smooth transition and balance between automation and manual evaluation. Comparing to 15 existing metrics, we evaluate human-metric correlations on 3 existing meta-evaluation datasets and our newly-collected PyrXSum (with 100/10 XSum examples/systems). It shows that Lite 2 Pyramid consistently has the best summary-level correlations; Lite 3 Pyramid works better than or comparable to other automatic metrics; Lite 2.x Pyramid trades off small correlation drops for larger manual effort reduction, which can reduce costs for future data collection.",2021
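The entry above replaces human judgments of whether each Summary Content Unit is present in a system summary with an NLI model. In that spirit, here is a minimal sketch of the scoring step: the summary is the premise, each SCU the hypothesis, and a unit counts as present when the entailment probability clears a threshold. The checkpoint, the 0.5 threshold, and the example texts are assumptions, not the paper's exact configuration.

```python
# Sketch of NLI-based content-unit scoring: the system summary is the premise and each
# human-written content unit is the hypothesis; a unit counts as "present" when the
# entailment probability exceeds a threshold. Checkpoint and threshold are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "roberta-large-mnli"                       # a generic public NLI model
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
model.eval()
entail_id = next(i for label, i in model.config.label2id.items()
                 if "entail" in label.lower())

def content_score(summary, content_units, threshold=0.5):
    batch = tokenizer([summary] * len(content_units), content_units,
                      truncation=True, padding=True, return_tensors="pt")
    with torch.no_grad():
        probs = model(**batch).logits.softmax(dim=-1)
    present = (probs[:, entail_id] > threshold).sum().item()
    return present / len(content_units)

summary = "The company reported higher profits and plans to hire more staff."
units = ["Profits increased.", "The company will hire people.", "The CEO resigned."]
print(content_score(summary, units))              # roughly 2/3 with a reasonable NLI model
```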
escudero-etal-2000-comparison,https://aclanthology.org/W00-0706,0,,,,,,,"A Comparison between Supervised Learning Algorithms for Word Sense Disambiguation. This paper describes a set of comparative experiments, including cross-corpus evaluation, between five alternative algorithms for supervised Word Sense Disambiguation (WSD), namely Naive Bayes, Exemplar-based learning, SNOW, Decision Lists, and Boosting. Two main conclusions can be drawn: 1) The LazyBoosting algorithm outperforms the other four state-of-the-art algorithms in terms of accuracy and ability to tune to new domains; 2) The domain dependence of WSD systems seems very strong and suggests that some kind of adaptation or tuning is required for cross-corpus application.",A Comparison between Supervised Learning Algorithms for Word Sense Disambiguation,"This paper describes a set of comparative experiments, including cross-corpus evaluation, between five alternative algorithms for supervised Word Sense Disambiguation (WSD), namely Naive Bayes, Exemplar-based learning, SNOW, Decision Lists, and Boosting. Two main conclusions can be drawn: 1) The LazyBoosting algorithm outperforms the other four state-of-the-art algorithms in terms of accuracy and ability to tune to new domains; 2) The domain dependence of WSD systems seems very strong and suggests that some kind of adaptation or tuning is required for cross-corpus application.",A Comparison between Supervised Learning Algorithms for Word Sense Disambiguation,"This paper describes a set of comparative experiments, including cross-corpus evaluation, between five alternative algorithms for supervised Word Sense Disambiguation (WSD), namely Naive Bayes, Exemplar-based learning, SNOW, Decision Lists, and Boosting. Two main conclusions can be drawn: 1) The LazyBoosting algorithm outperforms the other four state-of-the-art algorithms in terms of accuracy and ability to tune to new domains; 2) The domain dependence of WSD systems seems very strong and suggests that some kind of adaptation or tuning is required for cross-corpus application.",,"A Comparison between Supervised Learning Algorithms for Word Sense Disambiguation. This paper describes a set of comparative experiments, including cross-corpus evaluation, between five alternative algorithms for supervised Word Sense Disambiguation (WSD), namely Naive Bayes, Exemplar-based learning, SNOW, Decision Lists, and Boosting. Two main conclusions can be drawn: 1) The LazyBoosting algorithm outperforms the other four state-of-the-art algorithms in terms of accuracy and ability to tune to new domains; 2) The domain dependence of WSD systems seems very strong and suggests that some kind of adaptation or tuning is required for cross-corpus application.",2000
federmann-lewis-2016-microsoft,https://aclanthology.org/2016.iwslt-1.12,0,,,,,,,"Microsoft Speech Language Translation (MSLT) Corpus: The IWSLT 2016 release for English, French and German. We describe the Microsoft Speech Language Translation (MSLT) corpus, which was created in order to evaluate endto-end conversational speech translation quality. The corpus was created from actual conversations over Skype, and we provide details on the recording setup and the different layers of associated text data. The corpus release includes Test and Dev sets with reference transcripts for speech recognition. Additionally, cleaned up transcripts and reference translations are available for evaluation of machine translation quality. The IWSLT 2016 release described here includes the source audio, raw transcripts, cleaned up transcripts, and translations to or from English for both French and German.","{M}icrosoft Speech Language Translation ({MSLT}) Corpus: The {IWSLT} 2016 release for {E}nglish, {F}rench and {G}erman","We describe the Microsoft Speech Language Translation (MSLT) corpus, which was created in order to evaluate endto-end conversational speech translation quality. The corpus was created from actual conversations over Skype, and we provide details on the recording setup and the different layers of associated text data. The corpus release includes Test and Dev sets with reference transcripts for speech recognition. Additionally, cleaned up transcripts and reference translations are available for evaluation of machine translation quality. The IWSLT 2016 release described here includes the source audio, raw transcripts, cleaned up transcripts, and translations to or from English for both French and German.","Microsoft Speech Language Translation (MSLT) Corpus: The IWSLT 2016 release for English, French and German","We describe the Microsoft Speech Language Translation (MSLT) corpus, which was created in order to evaluate endto-end conversational speech translation quality. The corpus was created from actual conversations over Skype, and we provide details on the recording setup and the different layers of associated text data. The corpus release includes Test and Dev sets with reference transcripts for speech recognition. Additionally, cleaned up transcripts and reference translations are available for evaluation of machine translation quality. The IWSLT 2016 release described here includes the source audio, raw transcripts, cleaned up transcripts, and translations to or from English for both French and German.",,"Microsoft Speech Language Translation (MSLT) Corpus: The IWSLT 2016 release for English, French and German. We describe the Microsoft Speech Language Translation (MSLT) corpus, which was created in order to evaluate endto-end conversational speech translation quality. The corpus was created from actual conversations over Skype, and we provide details on the recording setup and the different layers of associated text data. The corpus release includes Test and Dev sets with reference transcripts for speech recognition. Additionally, cleaned up transcripts and reference translations are available for evaluation of machine translation quality. The IWSLT 2016 release described here includes the source audio, raw transcripts, cleaned up transcripts, and translations to or from English for both French and German.",2016
zhang-duh-2020-reproducible,https://aclanthology.org/2020.tacl-1.26,0,,,,,,,"Reproducible and Efficient Benchmarks for Hyperparameter Optimization of Neural Machine Translation Systems. Hyperparameter selection is a crucial part of building neural machine translation (NMT) systems across both academia and industry. Fine-grained adjustments to a model's architecture or training recipe can mean the difference between a positive and negative research result or between a state-of-the-art and underperforming system. While recent literature has proposed methods for automatic hyperparameter optimization (HPO), there has been limited work on applying these methods to neural machine translation (NMT), due in part to the high costs associated with experiments that train large numbers of model variants. To facilitate research in this space, we introduce a lookup-based approach that uses a library of pre-trained models for fast, low cost HPO experimentation. Our contributions include (1) the release of a large collection of trained NMT models covering a wide range of hyperparameters, (2) the proposal of targeted metrics for evaluating HPO methods on NMT, and (3) a reproducible benchmark of several HPO methods against our model library, including novel graph-based and multiobjective methods.",Reproducible and Efficient Benchmarks for Hyperparameter Optimization of Neural Machine Translation Systems,"Hyperparameter selection is a crucial part of building neural machine translation (NMT) systems across both academia and industry. Fine-grained adjustments to a model's architecture or training recipe can mean the difference between a positive and negative research result or between a state-of-the-art and underperforming system. While recent literature has proposed methods for automatic hyperparameter optimization (HPO), there has been limited work on applying these methods to neural machine translation (NMT), due in part to the high costs associated with experiments that train large numbers of model variants. To facilitate research in this space, we introduce a lookup-based approach that uses a library of pre-trained models for fast, low cost HPO experimentation. Our contributions include (1) the release of a large collection of trained NMT models covering a wide range of hyperparameters, (2) the proposal of targeted metrics for evaluating HPO methods on NMT, and (3) a reproducible benchmark of several HPO methods against our model library, including novel graph-based and multiobjective methods.",Reproducible and Efficient Benchmarks for Hyperparameter Optimization of Neural Machine Translation Systems,"Hyperparameter selection is a crucial part of building neural machine translation (NMT) systems across both academia and industry. Fine-grained adjustments to a model's architecture or training recipe can mean the difference between a positive and negative research result or between a state-of-the-art and underperforming system. While recent literature has proposed methods for automatic hyperparameter optimization (HPO), there has been limited work on applying these methods to neural machine translation (NMT), due in part to the high costs associated with experiments that train large numbers of model variants. To facilitate research in this space, we introduce a lookup-based approach that uses a library of pre-trained models for fast, low cost HPO experimentation. 
Our contributions include (1) the release of a large collection of trained NMT models covering a wide range of hyperparameters, (2) the proposal of targeted metrics for evaluating HPO methods on NMT, and (3) a reproducible benchmark of several HPO methods against our model library, including novel graph-based and multiobjective methods.",This work is supported in part by an Amazon Research Award and an IARPA MATERIAL grant. We are especially grateful to Michael Denkowski for helpful discussions and feedback throughout the project.,"Reproducible and Efficient Benchmarks for Hyperparameter Optimization of Neural Machine Translation Systems. Hyperparameter selection is a crucial part of building neural machine translation (NMT) systems across both academia and industry. Fine-grained adjustments to a model's architecture or training recipe can mean the difference between a positive and negative research result or between a state-of-the-art and underperforming system. While recent literature has proposed methods for automatic hyperparameter optimization (HPO), there has been limited work on applying these methods to neural machine translation (NMT), due in part to the high costs associated with experiments that train large numbers of model variants. To facilitate research in this space, we introduce a lookup-based approach that uses a library of pre-trained models for fast, low cost HPO experimentation. Our contributions include (1) the release of a large collection of trained NMT models covering a wide range of hyperparameters, (2) the proposal of targeted metrics for evaluating HPO methods on NMT, and (3) a reproducible benchmark of several HPO methods against our model library, including novel graph-based and multiobjective methods.",2020
angrosh-etal-2014-lexico,https://aclanthology.org/C14-1188,0,,,,,,,"Lexico-syntactic text simplification and compression with typed dependencies. We describe two systems for text simplification using typed dependency structures, one that performs lexical and syntactic simplification, and another that performs sentence compression optimised to satisfy global text constraints such as lexical density, the ratio of difficult words, and text length. We report a substantial evaluation that demonstrates the superiority of our systems, individually and in combination, over the state of the art, and also report a comprehension based evaluation of contemporary automatic text simplification systems with target non-native readers. This work is licensed under a Creative Commons Attribution 4.0 International Licence.",Lexico-syntactic text simplification and compression with typed dependencies,"We describe two systems for text simplification using typed dependency structures, one that performs lexical and syntactic simplification, and another that performs sentence compression optimised to satisfy global text constraints such as lexical density, the ratio of difficult words, and text length. We report a substantial evaluation that demonstrates the superiority of our systems, individually and in combination, over the state of the art, and also report a comprehension based evaluation of contemporary automatic text simplification systems with target non-native readers. This work is licensed under a Creative Commons Attribution 4.0 International Licence.",Lexico-syntactic text simplification and compression with typed dependencies,"We describe two systems for text simplification using typed dependency structures, one that performs lexical and syntactic simplification, and another that performs sentence compression optimised to satisfy global text constraints such as lexical density, the ratio of difficult words, and text length. We report a substantial evaluation that demonstrates the superiority of our systems, individually and in combination, over the state of the art, and also report a comprehension based evaluation of contemporary automatic text simplification systems with target non-native readers. This work is licensed under a Creative Commons Attribution 4.0 International Licence.",This research is supported by an award made by the EPSRC; award reference: EP/J018805/1.,"Lexico-syntactic text simplification and compression with typed dependencies. We describe two systems for text simplification using typed dependency structures, one that performs lexical and syntactic simplification, and another that performs sentence compression optimised to satisfy global text constraints such as lexical density, the ratio of difficult words, and text length. We report a substantial evaluation that demonstrates the superiority of our systems, individually and in combination, over the state of the art, and also report a comprehension based evaluation of contemporary automatic text simplification systems with target non-native readers. This work is licensed under a Creative Commons Attribution 4.0 International Licence.",2014
meng-rumshisky-2018-triad,https://aclanthology.org/C18-1004,0,,,,,,,"Triad-based Neural Network for Coreference Resolution. We propose a triad-based neural network system that generates affinity scores between entity mentions for coreference resolution. The system simultaneously accepts three mentions as input, taking mutual dependency and logical constraints of all three mentions into account, and thus makes more accurate predictions than the traditional pairwise approach. Depending on system choices, the affinity scores can be further used in clustering or mention ranking. Our experiments show that a standard hierarchical clustering using the scores produces state-of-art results with MUC and B 3 metrics on the English portion of CoNLL 2012 Shared Task. The model does not rely on many handcrafted features and is easy to train and use. The triads can also be easily extended to polyads of higher orders. To our knowledge, this is the first neural network system to model mutual dependency of more than two members at mention level.",Triad-based Neural Network for Coreference Resolution,"We propose a triad-based neural network system that generates affinity scores between entity mentions for coreference resolution. The system simultaneously accepts three mentions as input, taking mutual dependency and logical constraints of all three mentions into account, and thus makes more accurate predictions than the traditional pairwise approach. Depending on system choices, the affinity scores can be further used in clustering or mention ranking. Our experiments show that a standard hierarchical clustering using the scores produces state-of-art results with MUC and B 3 metrics on the English portion of CoNLL 2012 Shared Task. The model does not rely on many handcrafted features and is easy to train and use. The triads can also be easily extended to polyads of higher orders. To our knowledge, this is the first neural network system to model mutual dependency of more than two members at mention level.",Triad-based Neural Network for Coreference Resolution,"We propose a triad-based neural network system that generates affinity scores between entity mentions for coreference resolution. The system simultaneously accepts three mentions as input, taking mutual dependency and logical constraints of all three mentions into account, and thus makes more accurate predictions than the traditional pairwise approach. Depending on system choices, the affinity scores can be further used in clustering or mention ranking. Our experiments show that a standard hierarchical clustering using the scores produces state-of-art results with MUC and B 3 metrics on the English portion of CoNLL 2012 Shared Task. The model does not rely on many handcrafted features and is easy to train and use. The triads can also be easily extended to polyads of higher orders. To our knowledge, this is the first neural network system to model mutual dependency of more than two members at mention level.",This project is funded in part by an NSF CAREER award to Anna Rumshisky (IIS-1652742).,"Triad-based Neural Network for Coreference Resolution. We propose a triad-based neural network system that generates affinity scores between entity mentions for coreference resolution. The system simultaneously accepts three mentions as input, taking mutual dependency and logical constraints of all three mentions into account, and thus makes more accurate predictions than the traditional pairwise approach. 
Depending on system choices, the affinity scores can be further used in clustering or mention ranking. Our experiments show that a standard hierarchical clustering using the scores produces state-of-art results with MUC and B 3 metrics on the English portion of CoNLL 2012 Shared Task. The model does not rely on many handcrafted features and is easy to train and use. The triads can also be easily extended to polyads of higher orders. To our knowledge, this is the first neural network system to model mutual dependency of more than two members at mention level.",2018
camargo-de-souza-etal-2013-fbk,https://aclanthology.org/W13-2243,0,,,,,,,"FBK-UEdin Participation to the WMT13 Quality Estimation Shared Task. In this paper we present the approach and system setup of the joint participation of Fondazione Bruno Kessler and University of Edinburgh in the WMT 2013 Quality Estimation shared-task. Our submissions were focused on tasks whose aim was predicting sentence-level Human-mediated Translation Edit Rate and sentence-level post-editing time (Task 1.1 and 1.3, respectively). We designed features that are built on resources such as automatic word alignment, n-best candidate translation lists, back-translations and word posterior probabilities. Our models consistently overcome the baselines for both tasks and performed particularly well for Task 1.3, ranking first among seven participants.",{FBK}-{UE}din Participation to the {WMT}13 Quality Estimation Shared Task,"In this paper we present the approach and system setup of the joint participation of Fondazione Bruno Kessler and University of Edinburgh in the WMT 2013 Quality Estimation shared-task. Our submissions were focused on tasks whose aim was predicting sentence-level Human-mediated Translation Edit Rate and sentence-level post-editing time (Task 1.1 and 1.3, respectively). We designed features that are built on resources such as automatic word alignment, n-best candidate translation lists, back-translations and word posterior probabilities. Our models consistently overcome the baselines for both tasks and performed particularly well for Task 1.3, ranking first among seven participants.",FBK-UEdin Participation to the WMT13 Quality Estimation Shared Task,"In this paper we present the approach and system setup of the joint participation of Fondazione Bruno Kessler and University of Edinburgh in the WMT 2013 Quality Estimation shared-task. Our submissions were focused on tasks whose aim was predicting sentence-level Human-mediated Translation Edit Rate and sentence-level post-editing time (Task 1.1 and 1.3, respectively). We designed features that are built on resources such as automatic word alignment, n-best candidate translation lists, back-translations and word posterior probabilities. Our models consistently overcome the baselines for both tasks and performed particularly well for Task 1.3, ranking first among seven participants.","This work was partially funded by the European Commission under the project MateCat, Grant 287688. The authors want to thank Philipp Koehn for training two of the models used in Section 2.2.","FBK-UEdin Participation to the WMT13 Quality Estimation Shared Task. In this paper we present the approach and system setup of the joint participation of Fondazione Bruno Kessler and University of Edinburgh in the WMT 2013 Quality Estimation shared-task. Our submissions were focused on tasks whose aim was predicting sentence-level Human-mediated Translation Edit Rate and sentence-level post-editing time (Task 1.1 and 1.3, respectively). We designed features that are built on resources such as automatic word alignment, n-best candidate translation lists, back-translations and word posterior probabilities. Our models consistently overcome the baselines for both tasks and performed particularly well for Task 1.3, ranking first among seven participants.",2013
rieser-lemon-2008-automatic,http://www.lrec-conf.org/proceedings/lrec2008/pdf/592_paper.pdf,0,,,,,,,"Automatic Learning and Evaluation of User-Centered Objective Functions for Dialogue System Optimisation. The ultimate goal when building dialogue systems is to satisfy the needs of real users, but quality assurance for dialogue strategies is a non-trivial problem. The applied evaluation metrics and resulting design principles are often obscure, emerge by trial-and-error, and are highly context dependent. This paper introduces data-driven methods for obtaining reliable objective functions for system design. In particular, we test whether an objective function obtained from Wizard-of-Oz (WOZ) data is a valid estimate of real users' preferences. We test this in a test-retest comparison between the model obtained from the WOZ study and the models obtained when testing with real users. We can show that, despite a low fit to the initial data, the objective function obtained from WOZ data makes accurate predictions for automatic dialogue evaluation, and, when automatically optimising a policy using these predictions, the improvement over a strategy simply mimicking the data becomes clear from an error analysis.",Automatic Learning and Evaluation of User-Centered Objective Functions for Dialogue System Optimisation,"The ultimate goal when building dialogue systems is to satisfy the needs of real users, but quality assurance for dialogue strategies is a non-trivial problem. The applied evaluation metrics and resulting design principles are often obscure, emerge by trial-and-error, and are highly context dependent. This paper introduces data-driven methods for obtaining reliable objective functions for system design. In particular, we test whether an objective function obtained from Wizard-of-Oz (WOZ) data is a valid estimate of real users' preferences. We test this in a test-retest comparison between the model obtained from the WOZ study and the models obtained when testing with real users. We can show that, despite a low fit to the initial data, the objective function obtained from WOZ data makes accurate predictions for automatic dialogue evaluation, and, when automatically optimising a policy using these predictions, the improvement over a strategy simply mimicking the data becomes clear from an error analysis.",Automatic Learning and Evaluation of User-Centered Objective Functions for Dialogue System Optimisation,"The ultimate goal when building dialogue systems is to satisfy the needs of real users, but quality assurance for dialogue strategies is a non-trivial problem. The applied evaluation metrics and resulting design principles are often obscure, emerge by trial-and-error, and are highly context dependent. This paper introduces data-driven methods for obtaining reliable objective functions for system design. In particular, we test whether an objective function obtained from Wizard-of-Oz (WOZ) data is a valid estimate of real users' preferences. We test this in a test-retest comparison between the model obtained from the WOZ study and the models obtained when testing with real users. 
We can show that, despite a low fit to the initial data, the objective function obtained from WOZ data makes accurate predictions for automatic dialogue evaluation, and, when automatically optimising a policy using these predictions, the improvement over a strategy simply mimicking the data becomes clear from an error analysis.","This work was partially funded by the International Research Training Group Language Technology and Cognitive Systems, Saarland University, and by EPSRC project number EP/E019501/1. The research leading to these results has also received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement number 216594 (CLASSIC project: www.classic-project.org).","Automatic Learning and Evaluation of User-Centered Objective Functions for Dialogue System Optimisation. The ultimate goal when building dialogue systems is to satisfy the needs of real users, but quality assurance for dialogue strategies is a non-trivial problem. The applied evaluation metrics and resulting design principles are often obscure, emerge by trial-and-error, and are highly context dependent. This paper introduces data-driven methods for obtaining reliable objective functions for system design. In particular, we test whether an objective function obtained from Wizard-of-Oz (WOZ) data is a valid estimate of real users' preferences. We test this in a test-retest comparison between the model obtained from the WOZ study and the models obtained when testing with real users. We can show that, despite a low fit to the initial data, the objective function obtained from WOZ data makes accurate predictions for automatic dialogue evaluation, and, when automatically optimising a policy using these predictions, the improvement over a strategy simply mimicking the data becomes clear from an error analysis.",2008
ney-popovic-2004-improving,https://aclanthology.org/C04-1045,0,,,,,,,"Improving Word Alignment Quality using Morpho-syntactic Information. In this paper, we present an approach to include morpho-syntactic dependencies into the training of the statistical alignment models. Existing statistical translation systems usually treat different derivations of the same base form as they were independent of each other. We propose a method which explicitly takes into account such interdependencies during the EM training of the statistical alignment models. The evaluation is done by comparing the obtained Viterbi alignments with a manually annotated reference alignment. The improvements of the alignment quality compared to the, to our knowledge, best system are reported on the German-English Verbmobil corpus.",Improving Word Alignment Quality using Morpho-syntactic Information,"In this paper, we present an approach to include morpho-syntactic dependencies into the training of the statistical alignment models. Existing statistical translation systems usually treat different derivations of the same base form as they were independent of each other. We propose a method which explicitly takes into account such interdependencies during the EM training of the statistical alignment models. The evaluation is done by comparing the obtained Viterbi alignments with a manually annotated reference alignment. The improvements of the alignment quality compared to the, to our knowledge, best system are reported on the German-English Verbmobil corpus.",Improving Word Alignment Quality using Morpho-syntactic Information,"In this paper, we present an approach to include morpho-syntactic dependencies into the training of the statistical alignment models. Existing statistical translation systems usually treat different derivations of the same base form as they were independent of each other. We propose a method which explicitly takes into account such interdependencies during the EM training of the statistical alignment models. The evaluation is done by comparing the obtained Viterbi alignments with a manually annotated reference alignment. The improvements of the alignment quality compared to the, to our knowledge, best system are reported on the German-English Verbmobil corpus.","We assume that the method can be very effective for cases where only a small amount of data is available. We also expect further improvements by performing a special modelling for the rare words. We are planning to investigate possibilities of improving the alignment quality for different language pairs using different types of morphosyntactic information, like for example to use word stems and suffixes for morphologically rich languages where some parts of the words have to be aligned to the whole English words (e.g. Spanish verbs, Finnish in general, etc.) We are also planning to use the refined alignments for the translation process.","Improving Word Alignment Quality using Morpho-syntactic Information. In this paper, we present an approach to include morpho-syntactic dependencies into the training of the statistical alignment models. Existing statistical translation systems usually treat different derivations of the same base form as they were independent of each other. We propose a method which explicitly takes into account such interdependencies during the EM training of the statistical alignment models. The evaluation is done by comparing the obtained Viterbi alignments with a manually annotated reference alignment. 
The improvements of the alignment quality compared to the, to our knowledge, best system are reported on the German-English Verbmobil corpus.",2004
woodley-etal-2006-natural,https://aclanthology.org/U06-1026,0,,,,,,,"Natural Language Processing and XML Retrieval. XML information retrieval (XML-IR) systems respond to user queries with results more specific than documents. XML-IR queries contain both content and structural requirements traditionally expressed in a formal language. However, an intuitive alternative is natural language queries (NLQs). Here, we discuss three approaches for handling NLQs in an XML-IR system that are comparable to, and even outperform formal language queries.",Natural Language Processing and {XML} Retrieval,"XML information retrieval (XML-IR) systems respond to user queries with results more specific than documents. XML-IR queries contain both content and structural requirements traditionally expressed in a formal language. However, an intuitive alternative is natural language queries (NLQs). Here, we discuss three approaches for handling NLQs in an XML-IR system that are comparable to, and even outperform formal language queries.",Natural Language Processing and XML Retrieval,"XML information retrieval (XML-IR) systems respond to user queries with results more specific than documents. XML-IR queries contain both content and structural requirements traditionally expressed in a formal language. However, an intuitive alternative is natural language queries (NLQs). Here, we discuss three approaches for handling NLQs in an XML-IR system that are comparable to, and even outperform formal language queries.",,"Natural Language Processing and XML Retrieval. XML information retrieval (XML-IR) systems respond to user queries with results more specific than documents. XML-IR queries contain both content and structural requirements traditionally expressed in a formal language. However, an intuitive alternative is natural language queries (NLQs). Here, we discuss three approaches for handling NLQs in an XML-IR system that are comparable to, and even outperform formal language queries.",2006
gulati-2015-extracting,https://aclanthology.org/W15-5921,0,,,,,,,"Extracting Information from Indian First Names. First name of a person can tell important demographic and cultural information about that person. This paper proposes statistical models for extracting vital information that is gender, religion and name validity from Indian first names. Statistical models combine some classical features like ngrams and Levenshtein distance along with some self observed features like vowel score and religion belief. Rigorous evaluation of models has been performed through several machine learning algorithms to compare the accuracy, F-Measure, Kappa Static and RMS error. Experimental results give promising and favorable results which indicate that these models proposed can be directly used in other information extraction systems.",Extracting Information from {I}ndian First Names,"First name of a person can tell important demographic and cultural information about that person. This paper proposes statistical models for extracting vital information that is gender, religion and name validity from Indian first names. Statistical models combine some classical features like ngrams and Levenshtein distance along with some self observed features like vowel score and religion belief. Rigorous evaluation of models has been performed through several machine learning algorithms to compare the accuracy, F-Measure, Kappa Static and RMS error. Experimental results give promising and favorable results which indicate that these models proposed can be directly used in other information extraction systems.",Extracting Information from Indian First Names,"First name of a person can tell important demographic and cultural information about that person. This paper proposes statistical models for extracting vital information that is gender, religion and name validity from Indian first names. Statistical models combine some classical features like ngrams and Levenshtein distance along with some self observed features like vowel score and religion belief. Rigorous evaluation of models has been performed through several machine learning algorithms to compare the accuracy, F-Measure, Kappa Static and RMS error. Experimental results give promising and favorable results which indicate that these models proposed can be directly used in other information extraction systems.",,"Extracting Information from Indian First Names. First name of a person can tell important demographic and cultural information about that person. This paper proposes statistical models for extracting vital information that is gender, religion and name validity from Indian first names. Statistical models combine some classical features like ngrams and Levenshtein distance along with some self observed features like vowel score and religion belief. Rigorous evaluation of models has been performed through several machine learning algorithms to compare the accuracy, F-Measure, Kappa Static and RMS error. Experimental results give promising and favorable results which indicate that these models proposed can be directly used in other information extraction systems.",2015
devault-stone-2004-interpreting,https://aclanthology.org/C04-1181,0,,,,,,,"Interpreting Vague Utterances in Context. We use the interpretation of vague scalar predicates like small as an illustration of how systematic semantic models of dialogue context enable the derivation of useful, fine-grained utterance interpretations from radically underspecified semantic forms. Because dialogue context suffices to determine salient alternative scales and relevant distinctions along these scales, we can infer implicit standards of comparison for vague scalar predicates through completely general pragmatics, yet closely constrain the intended meaning to within a natural range.",Interpreting Vague Utterances in Context,"We use the interpretation of vague scalar predicates like small as an illustration of how systematic semantic models of dialogue context enable the derivation of useful, fine-grained utterance interpretations from radically underspecified semantic forms. Because dialogue context suffices to determine salient alternative scales and relevant distinctions along these scales, we can infer implicit standards of comparison for vague scalar predicates through completely general pragmatics, yet closely constrain the intended meaning to within a natural range.",Interpreting Vague Utterances in Context,"We use the interpretation of vague scalar predicates like small as an illustration of how systematic semantic models of dialogue context enable the derivation of useful, fine-grained utterance interpretations from radically underspecified semantic forms. Because dialogue context suffices to determine salient alternative scales and relevant distinctions along these scales, we can infer implicit standards of comparison for vague scalar predicates through completely general pragmatics, yet closely constrain the intended meaning to within a natural range.",We thank Kees van Deemter and our anonymous reviewers for valuable comments. This work was supported by NSF grant HLC 0308121.,"Interpreting Vague Utterances in Context. We use the interpretation of vague scalar predicates like small as an illustration of how systematic semantic models of dialogue context enable the derivation of useful, fine-grained utterance interpretations from radically underspecified semantic forms. Because dialogue context suffices to determine salient alternative scales and relevant distinctions along these scales, we can infer implicit standards of comparison for vague scalar predicates through completely general pragmatics, yet closely constrain the intended meaning to within a natural range.",2004
fell-etal-2020-love,https://aclanthology.org/2020.lrec-1.262,0,,,,,,,"Love Me, Love Me, Say (and Write!) that You Love Me: Enriching the WASABI Song Corpus with Lyrics Annotations. We present the WASABI Song Corpus, a large corpus of songs enriched with metadata extracted from music databases on the Web, and resulting from the processing of song lyrics and from audio analysis. More specifically, given that lyrics encode an important part of the semantics of a song, we focus here on the description of the methods we proposed to extract relevant information from the lyrics, such as their structure segmentation, their topics, the explicitness of the lyrics content, the salient passages of a song and the emotions conveyed. The creation of the resource is still ongoing: so far, the corpus contains 1.73M songs with lyrics (1.41M unique lyrics) annotated at different levels with the output of the above mentioned methods. Such corpus labels and the provided methods can be exploited by music search engines and music professionals (e.g. journalists, radio presenters) to better handle large collections of lyrics, allowing an intelligent browsing, categorization and recommendation of songs. We provide the files of the current version of the WASABI Song Corpus, the models we have built on it as well as updates here: https://github.com/micbuffa/WasabiDataset.","Love Me, Love Me, Say (and Write!) that You Love Me: Enriching the {WASABI} Song Corpus with Lyrics Annotations","We present the WASABI Song Corpus, a large corpus of songs enriched with metadata extracted from music databases on the Web, and resulting from the processing of song lyrics and from audio analysis. More specifically, given that lyrics encode an important part of the semantics of a song, we focus here on the description of the methods we proposed to extract relevant information from the lyrics, such as their structure segmentation, their topics, the explicitness of the lyrics content, the salient passages of a song and the emotions conveyed. The creation of the resource is still ongoing: so far, the corpus contains 1.73M songs with lyrics (1.41M unique lyrics) annotated at different levels with the output of the above mentioned methods. Such corpus labels and the provided methods can be exploited by music search engines and music professionals (e.g. journalists, radio presenters) to better handle large collections of lyrics, allowing an intelligent browsing, categorization and recommendation of songs. We provide the files of the current version of the WASABI Song Corpus, the models we have built on it as well as updates here: https://github.com/micbuffa/WasabiDataset.","Love Me, Love Me, Say (and Write!) that You Love Me: Enriching the WASABI Song Corpus with Lyrics Annotations","We present the WASABI Song Corpus, a large corpus of songs enriched with metadata extracted from music databases on the Web, and resulting from the processing of song lyrics and from audio analysis. More specifically, given that lyrics encode an important part of the semantics of a song, we focus here on the description of the methods we proposed to extract relevant information from the lyrics, such as their structure segmentation, their topics, the explicitness of the lyrics content, the salient passages of a song and the emotions conveyed. The creation of the resource is still ongoing: so far, the corpus contains 1.73M songs with lyrics (1.41M unique lyrics) annotated at different levels with the output of the above mentioned methods. 
Such corpus labels and the provided methods can be exploited by music search engines and music professionals (e.g. journalists, radio presenters) to better handle large collections of lyrics, allowing an intelligent browsing, categorization and recommendation of songs. We provide the files of the current version of the WASABI Song Corpus, the models we have built on it as well as updates here: https://github.com/micbuffa/WasabiDataset.",This work is partly funded by the French Research National Agency (ANR) under the WASABI project (contract ANR-16-CE23-0017-01) and by the EU Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 690974 (MIREL).,"Love Me, Love Me, Say (and Write!) that You Love Me: Enriching the WASABI Song Corpus with Lyrics Annotations. We present the WASABI Song Corpus, a large corpus of songs enriched with metadata extracted from music databases on the Web, and resulting from the processing of song lyrics and from audio analysis. More specifically, given that lyrics encode an important part of the semantics of a song, we focus here on the description of the methods we proposed to extract relevant information from the lyrics, such as their structure segmentation, their topics, the explicitness of the lyrics content, the salient passages of a song and the emotions conveyed. The creation of the resource is still ongoing: so far, the corpus contains 1.73M songs with lyrics (1.41M unique lyrics) annotated at different levels with the output of the above mentioned methods. Such corpus labels and the provided methods can be exploited by music search engines and music professionals (e.g. journalists, radio presenters) to better handle large collections of lyrics, allowing an intelligent browsing, categorization and recommendation of songs. We provide the files of the current version of the WASABI Song Corpus, the models we have built on it as well as updates here: https://github.com/micbuffa/WasabiDataset.",2020
shi-etal-2021-keyword,https://aclanthology.org/2021.ecnlp-1.5,0,,,,,,,"Keyword Augmentation via Generative Methods. Keyword augmentation is a fundamental problem for sponsored search modeling and business. Machine generated keywords can be recommended to advertisers for better campaign discoverability as well as used as features for sourcing and ranking models. Generating high-quality keywords is difficult, especially for cold campaigns with limited or even no historical logs; and the industry trend of including multiple products in a single ad campaign is making the problem more challenging. In this paper, we propose a keyword augmentation method based on generative seq2seq model and trie-based search mechanism, which is able to generate high-quality keywords for any products or product lists. We conduct human annotations, offline analysis, and online experiments to evaluate the performance of our method against benchmarks in terms of augmented keyword quality as well as lifted ad exposure. The experiment results demonstrate that our method is able to generate more valid keywords which can serve as an efficient addition to advertiser selected keywords.",Keyword Augmentation via Generative Methods,"Keyword augmentation is a fundamental problem for sponsored search modeling and business. Machine generated keywords can be recommended to advertisers for better campaign discoverability as well as used as features for sourcing and ranking models. Generating high-quality keywords is difficult, especially for cold campaigns with limited or even no historical logs; and the industry trend of including multiple products in a single ad campaign is making the problem more challenging. In this paper, we propose a keyword augmentation method based on generative seq2seq model and trie-based search mechanism, which is able to generate high-quality keywords for any products or product lists. We conduct human annotations, offline analysis, and online experiments to evaluate the performance of our method against benchmarks in terms of augmented keyword quality as well as lifted ad exposure. The experiment results demonstrate that our method is able to generate more valid keywords which can serve as an efficient addition to advertiser selected keywords.",Keyword Augmentation via Generative Methods,"Keyword augmentation is a fundamental problem for sponsored search modeling and business. Machine generated keywords can be recommended to advertisers for better campaign discoverability as well as used as features for sourcing and ranking models. Generating high-quality keywords is difficult, especially for cold campaigns with limited or even no historical logs; and the industry trend of including multiple products in a single ad campaign is making the problem more challenging. In this paper, we propose a keyword augmentation method based on generative seq2seq model and trie-based search mechanism, which is able to generate high-quality keywords for any products or product lists. We conduct human annotations, offline analysis, and online experiments to evaluate the performance of our method against benchmarks in terms of augmented keyword quality as well as lifted ad exposure. 
The experiment results demonstrate that our method is able to generate more valid keywords which can serve as an efficient addition to advertiser selected keywords.","We would like to thank Hongyu Zhu, Weiming Wu, Barry Bai, and Hirohisa Fujita for their help in setting up the online A/B testing, and all the reviewers for their valuable suggestions.","Keyword Augmentation via Generative Methods. Keyword augmentation is a fundamental problem for sponsored search modeling and business. Machine generated keywords can be recommended to advertisers for better campaign discoverability as well as used as features for sourcing and ranking models. Generating high-quality keywords is difficult, especially for cold campaigns with limited or even no historical logs; and the industry trend of including multiple products in a single ad campaign is making the problem more challenging. In this paper, we propose a keyword augmentation method based on generative seq2seq model and trie-based search mechanism, which is able to generate high-quality keywords for any products or product lists. We conduct human annotations, offline analysis, and online experiments to evaluate the performance of our method against benchmarks in terms of augmented keyword quality as well as lifted ad exposure. The experiment results demonstrate that our method is able to generate more valid keywords which can serve as an efficient addition to advertiser selected keywords.",2021
kanerva-etal-2014-turku,https://aclanthology.org/S14-2121,0,,,,,,,"Turku: Broad-Coverage Semantic Parsing with Rich Features. In this paper we introduce our system capable of producing semantic parses of sentences using three different annotation formats. The system was used to participate in the SemEval-2014 Shared Task on broad-coverage semantic dependency parsing and it was ranked third with an overall F 1-score of 80.49%. The system has a pipeline architecture, consisting of three separate supervised classification steps.",{T}urku: Broad-Coverage Semantic Parsing with Rich Features,"In this paper we introduce our system capable of producing semantic parses of sentences using three different annotation formats. The system was used to participate in the SemEval-2014 Shared Task on broad-coverage semantic dependency parsing and it was ranked third with an overall F 1-score of 80.49%. The system has a pipeline architecture, consisting of three separate supervised classification steps.",Turku: Broad-Coverage Semantic Parsing with Rich Features,"In this paper we introduce our system capable of producing semantic parses of sentences using three different annotation formats. The system was used to participate in the SemEval-2014 Shared Task on broad-coverage semantic dependency parsing and it was ranked third with an overall F 1-score of 80.49%. The system has a pipeline architecture, consisting of three separate supervised classification steps.",This work was supported by the Emil Aaltonen Foundation and the Kone Foundation. Computational resources were provided by CSC -IT Center for Science.,"Turku: Broad-Coverage Semantic Parsing with Rich Features. In this paper we introduce our system capable of producing semantic parses of sentences using three different annotation formats. The system was used to participate in the SemEval-2014 Shared Task on broad-coverage semantic dependency parsing and it was ranked third with an overall F 1-score of 80.49%. The system has a pipeline architecture, consisting of three separate supervised classification steps.",2014
miller-etal-2008-infrastructure,http://www.lrec-conf.org/proceedings/lrec2008/pdf/805_paper.pdf,0,,,,,,,"An Infrastructure, Tools and Methodology for Evaluation of Multicultural Name Matching Systems. This paper describes a Name Matching Evaluation Laboratory that is a joint effort across multiple projects. The lab houses our evaluation infrastructure as well as multiple name matching engines and customized analytical tools. Included is an explanation of the methodology used by the lab to carry out evaluations. This methodology is based on standard information retrieval evaluation, which requires a carefully-constructed test data set. The paper describes how we created that test data set, including the ""ground truth"" used to score the systems' performance. Descriptions and snapshots of the lab's various tools are provided, as well as information on how the different tools are used throughout the evaluation process. By using this evaluation process, the lab has been able to identify strengths and weaknesses of different name matching engines. These findings have led the lab to an ongoing investigation into various techniques for combining results from multiple name matching engines to achieve optimal results, as well as into research on the more general problem of identity management and resolution.","An Infrastructure, Tools and Methodology for Evaluation of Multicultural Name Matching Systems","This paper describes a Name Matching Evaluation Laboratory that is a joint effort across multiple projects. The lab houses our evaluation infrastructure as well as multiple name matching engines and customized analytical tools. Included is an explanation of the methodology used by the lab to carry out evaluations. This methodology is based on standard information retrieval evaluation, which requires a carefully-constructed test data set. The paper describes how we created that test data set, including the ""ground truth"" used to score the systems' performance. Descriptions and snapshots of the lab's various tools are provided, as well as information on how the different tools are used throughout the evaluation process. By using this evaluation process, the lab has been able to identify strengths and weaknesses of different name matching engines. These findings have led the lab to an ongoing investigation into various techniques for combining results from multiple name matching engines to achieve optimal results, as well as into research on the more general problem of identity management and resolution.","An Infrastructure, Tools and Methodology for Evaluation of Multicultural Name Matching Systems","This paper describes a Name Matching Evaluation Laboratory that is a joint effort across multiple projects. The lab houses our evaluation infrastructure as well as multiple name matching engines and customized analytical tools. Included is an explanation of the methodology used by the lab to carry out evaluations. This methodology is based on standard information retrieval evaluation, which requires a carefully-constructed test data set. The paper describes how we created that test data set, including the ""ground truth"" used to score the systems' performance. Descriptions and snapshots of the lab's various tools are provided, as well as information on how the different tools are used throughout the evaluation process. By using this evaluation process, the lab has been able to identify strengths and weaknesses of different name matching engines. 
These findings have led the lab to an ongoing investigation into various techniques for combining results from multiple name matching engines to achieve optimal results, as well as into research on the more general problem of identity management and resolution.",,"An Infrastructure, Tools and Methodology for Evaluation of Multicultural Name Matching Systems. This paper describes a Name Matching Evaluation Laboratory that is a joint effort across multiple projects. The lab houses our evaluation infrastructure as well as multiple name matching engines and customized analytical tools. Included is an explanation of the methodology used by the lab to carry out evaluations. This methodology is based on standard information retrieval evaluation, which requires a carefully-constructed test data set. The paper describes how we created that test data set, including the ""ground truth"" used to score the systems' performance. Descriptions and snapshots of the lab's various tools are provided, as well as information on how the different tools are used throughout the evaluation process. By using this evaluation process, the lab has been able to identify strengths and weaknesses of different name matching engines. These findings have led the lab to an ongoing investigation into various techniques for combining results from multiple name matching engines to achieve optimal results, as well as into research on the more general problem of identity management and resolution.",2008
van-der-meer-2010-thousand,https://aclanthology.org/2010.eamt-1.3,0,,,,,,,"Let a Thousand MT Systems Bloom. Looking into the future, I see a thousand MT systems blooming. I see fortune for the translation industry, and new solutions to overcome failed translations. I see a better world due to improved communications among the world's seven billion citizens. And the reason why I am so optimistic is that the process of data effectiveness is joining hands with the trend towards profit of sharing. The first is somewhat hidden from view in academic circles; the other leads a public life in the media and on the internet. One is simply science at work, steadily proving that numbers count and synergies work. The other is part of the ongoing battle between self-interest and the Zeitgeist. And the Zeitgeist is destined to win."" In his presentation Jaap van der Meer will share a perspective on translation automation, localization business innovation and industry collaboration.",Let a Thousand {MT} Systems Bloom,"Looking into the future, I see a thousand MT systems blooming. I see fortune for the translation industry, and new solutions to overcome failed translations. I see a better world due to improved communications among the world's seven billion citizens. And the reason why I am so optimistic is that the process of data effectiveness is joining hands with the trend towards profit of sharing. The first is somewhat hidden from view in academic circles; the other leads a public life in the media and on the internet. One is simply science at work, steadily proving that numbers count and synergies work. The other is part of the ongoing battle between self-interest and the Zeitgeist. And the Zeitgeist is destined to win."" In his presentation Jaap van der Meer will share a perspective on translation automation, localization business innovation and industry collaboration.",Let a Thousand MT Systems Bloom,"Looking into the future, I see a thousand MT systems blooming. I see fortune for the translation industry, and new solutions to overcome failed translations. I see a better world due to improved communications among the world's seven billion citizens. And the reason why I am so optimistic is that the process of data effectiveness is joining hands with the trend towards profit of sharing. The first is somewhat hidden from view in academic circles; the other leads a public life in the media and on the internet. One is simply science at work, steadily proving that numbers count and synergies work. The other is part of the ongoing battle between self-interest and the Zeitgeist. And the Zeitgeist is destined to win."" In his presentation Jaap van der Meer will share a perspective on translation automation, localization business innovation and industry collaboration.",,"Let a Thousand MT Systems Bloom. Looking into the future, I see a thousand MT systems blooming. I see fortune for the translation industry, and new solutions to overcome failed translations. I see a better world due to improved communications among the world's seven billion citizens. And the reason why I am so optimistic is that the process of data effectiveness is joining hands with the trend towards profit of sharing. The first is somewhat hidden from view in academic circles; the other leads a public life in the media and on the internet. One is simply science at work, steadily proving that numbers count and synergies work. The other is part of the ongoing battle between self-interest and the Zeitgeist. 
And the Zeitgeist is destined to win."" In his presentation Jaap van der Meer will share a perspective on translation automation, localization business innovation and industry collaboration.",2010
carl-etal-2005-reversible,https://aclanthology.org/2005.mtsummit-ebmt.3,0,,,,,,,"Reversible Template-based Shake \& Bake Generation. Corpus-based MT systems that analyse and generalise texts beyond the surface forms of words require generation tools to regenerate the various internal representations into valid target language (TL) sentences. While the generation of word-forms from lemmas is probably the last step in every text generation process at its very bottom end, token-generation cannot be accomplished without structural and morpho-syntactic knowledge of the sentence to be generated. As in many other MT models, this knowledge is composed of a target language model and a bag of information transferred from the source language. In this paper we establish an abstracted, linguistically informed, target language model. We use a tagger, a lemmatiser and a parser to infer a template grammar from the TL corpus. Given a linguistically informed TL model, the aim is to see what need be provided from the transfer module for generation. During computation of the template grammar, we simultaneously build up for each TL sentence the content of the bag such that the sentence can be deterministically reproduced. In this way we control the completeness of the approach and will have an idea of what pieces of information we need to code in the TL bag.",Reversible Template-based Shake {\&} Bake Generation,"Corpus-based MT systems that analyse and generalise texts beyond the surface forms of words require generation tools to regenerate the various internal representations into valid target language (TL) sentences. While the generation of word-forms from lemmas is probably the last step in every text generation process at its very bottom end, token-generation cannot be accomplished without structural and morpho-syntactic knowledge of the sentence to be generated. As in many other MT models, this knowledge is composed of a target language model and a bag of information transferred from the source language. In this paper we establish an abstracted, linguistically informed, target language model. We use a tagger, a lemmatiser and a parser to infer a template grammar from the TL corpus. Given a linguistically informed TL model, the aim is to see what need be provided from the transfer module for generation. During computation of the template grammar, we simultaneously build up for each TL sentence the content of the bag such that the sentence can be deterministically reproduced. In this way we control the completeness of the approach and will have an idea of what pieces of information we need to code in the TL bag.",Reversible Template-based Shake \& Bake Generation,"Corpus-based MT systems that analyse and generalise texts beyond the surface forms of words require generation tools to regenerate the various internal representations into valid target language (TL) sentences. While the generation of word-forms from lemmas is probably the last step in every text generation process at its very bottom end, token-generation cannot be accomplished without structural and morpho-syntactic knowledge of the sentence to be generated. As in many other MT models, this knowledge is composed of a target language model and a bag of information transferred from the source language. In this paper we establish an abstracted, linguistically informed, target language model. We use a tagger, a lemmatiser and a parser to infer a template grammar from the TL corpus. 
Given a linguistically informed TL model, the aim is to see what need be provided from the transfer module for generation. During computation of the template grammar, we simultaneously build up for each TL sentence the content of the bag such that the sentence can be deterministically reproduced. In this way we control the completeness of the approach and will have an idea of what pieces of information we need to code in the TL bag.",,"Reversible Template-based Shake \& Bake Generation. Corpus-based MT systems that analyse and generalise texts beyond the surface forms of words require generation tools to regenerate the various internal representations into valid target language (TL) sentences. While the generation of word-forms from lemmas is probably the last step in every text generation process at its very bottom end, token-generation cannot be accomplished without structural and morpho-syntactic knowledge of the sentence to be generated. As in many other MT models, this knowledge is composed of a target language model and a bag of information transferred from the source language. In this paper we establish an abstracted, linguistically informed, target language model. We use a tagger, a lemmatiser and a parser to infer a template grammar from the TL corpus. Given a linguistically informed TL model, the aim is to see what need be provided from the transfer module for generation. During computation of the template grammar, we simultaneously build up for each TL sentence the content of the bag such that the sentence can be deterministically reproduced. In this way we control the completeness of the approach and will have an idea of what pieces of information we need to code in the TL bag.",2005
jiang-etal-2016-encoding,https://aclanthology.org/D16-1260,0,,,,,,,"Encoding Temporal Information for Time-Aware Link Prediction. Most existing knowledge base (KB) embedding methods solely learn from time-unknown fact triples but neglect the temporal information in the knowledge base. In this paper, we propose a novel time-aware KB embedding approach taking advantage of the happening time of facts. Specifically, we use temporal order constraints to model transformation between time-sensitive relations and enforce the embeddings to be temporally consistent and more accurate. We empirically evaluate our approach in two tasks of link prediction and triple classification. Experimental results show that our method outperforms other baselines on the two tasks consistently.",Encoding Temporal Information for Time-Aware Link Prediction,"Most existing knowledge base (KB) embedding methods solely learn from time-unknown fact triples but neglect the temporal information in the knowledge base. In this paper, we propose a novel time-aware KB embedding approach taking advantage of the happening time of facts. Specifically, we use temporal order constraints to model transformation between time-sensitive relations and enforce the embeddings to be temporally consistent and more accurate. We empirically evaluate our approach in two tasks of link prediction and triple classification. Experimental results show that our method outperforms other baselines on the two tasks consistently.",Encoding Temporal Information for Time-Aware Link Prediction,"Most existing knowledge base (KB) embedding methods solely learn from time-unknown fact triples but neglect the temporal information in the knowledge base. In this paper, we propose a novel time-aware KB embedding approach taking advantage of the happening time of facts. Specifically, we use temporal order constraints to model transformation between time-sensitive relations and enforce the embeddings to be temporally consistent and more accurate. We empirically evaluate our approach in two tasks of link prediction and triple classification. Experimental results show that our method outperforms other baselines on the two tasks consistently.","This research is supported by National Key Basic Research Program of China (No.2014CB340504) and National Natural Science Foundation of China (No.61375074,61273318). The contact author for this paper is Baobao Chang and Zhifang Sui.","Encoding Temporal Information for Time-Aware Link Prediction. Most existing knowledge base (KB) embedding methods solely learn from time-unknown fact triples but neglect the temporal information in the knowledge base. In this paper, we propose a novel time-aware KB embedding approach taking advantage of the happening time of facts. Specifically, we use temporal order constraints to model transformation between time-sensitive relations and enforce the embeddings to be temporally consistent and more accurate. We empirically evaluate our approach in two tasks of link prediction and triple classification. Experimental results show that our method outperforms other baselines on the two tasks consistently.",2016
lee-etal-2017-mit,https://aclanthology.org/S17-2171,0,,,,,,,"MIT at SemEval-2017 Task 10: Relation Extraction with Convolutional Neural Networks. Over 50 million scholarly articles have been published: they constitute a unique repository of knowledge. In particular, one may infer from them relations between scientific concepts. Artificial neural networks have recently been explored for relation extraction. In this work, we continue this line of work and present a system based on a convolutional neural network to extract relations. Our model ranked first in the SemEval-2017 task 10 (ScienceIE) for relation extraction in scientific articles (subtask C).",{MIT} at {S}em{E}val-2017 Task 10: Relation Extraction with Convolutional Neural Networks,"Over 50 million scholarly articles have been published: they constitute a unique repository of knowledge. In particular, one may infer from them relations between scientific concepts. Artificial neural networks have recently been explored for relation extraction. In this work, we continue this line of work and present a system based on a convolutional neural network to extract relations. Our model ranked first in the SemEval-2017 task 10 (ScienceIE) for relation extraction in scientific articles (subtask C).",MIT at SemEval-2017 Task 10: Relation Extraction with Convolutional Neural Networks,"Over 50 million scholarly articles have been published: they constitute a unique repository of knowledge. In particular, one may infer from them relations between scientific concepts. Artificial neural networks have recently been explored for relation extraction. In this work, we continue this line of work and present a system based on a convolutional neural network to extract relations. Our model ranked first in the SemEval-2017 task 10 (ScienceIE) for relation extraction in scientific articles (subtask C).",The authors would like to thank the ScienceIE organizers as well as the anonymous reviewers. The project was supported by Philips Research. The content is solely the responsibility of the authors and does not necessarily represent the official views of Philips Research.,"MIT at SemEval-2017 Task 10: Relation Extraction with Convolutional Neural Networks. Over 50 million scholarly articles have been published: they constitute a unique repository of knowledge. In particular, one may infer from them relations between scientific concepts. Artificial neural networks have recently been explored for relation extraction. In this work, we continue this line of work and present a system based on a convolutional neural network to extract relations. Our model ranked first in the SemEval-2017 task 10 (ScienceIE) for relation extraction in scientific articles (subtask C).",2017
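As a rough illustration of the convolutional relation extractor described in the lee-etal-2017-mit record, the toy PyTorch classifier below embeds tokens, applies one convolution, max-pools and predicts a relation label; the layer sizes and single filter width are assumptions, not the ranked system's architecture.

import torch
import torch.nn as nn

class RelationCNN(nn.Module):
    """Toy convolutional relation classifier: embed tokens, convolve, max-pool, classify."""
    def __init__(self, vocab_size, n_relations, emb_dim=100, n_filters=64, width=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=width, padding=width // 2)
        self.out = nn.Linear(n_filters, n_relations)

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)    # (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x)).max(dim=2).values
        return self.out(x)                         # relation logits

logits = RelationCNN(vocab_size=5000, n_relations=3)(torch.randint(1, 5000, (2, 40)))
print(logits.shape)   # (2 sentences, 3 candidate relation types)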
meza-ruiz-riedel-2009-jointly,https://aclanthology.org/N09-1018,0,,,,,,,"Jointly Identifying Predicates, Arguments and Senses using Markov Logic. In this paper we present a Markov Logic Network for Semantic Role Labelling that jointly performs predicate identification, frame disambiguation, argument identification and argument classification for all predicates in a sentence. Empirically we find that our approach is competitive: our best model would appear on par with the best entry in the CoNLL 2008 shared task open track, and at the 4th place of the closed track-right behind the systems that use significantly better parsers to generate their input features. Moreover, we observe that by fully capturing the complete SRL pipeline in a single probabilistic model we can achieve significant improvements over more isolated systems, in particular for out-of-domain data. Finally, we show that despite the joint approach, our system is still efficient.","Jointly Identifying Predicates, Arguments and Senses using {M}arkov {L}ogic","In this paper we present a Markov Logic Network for Semantic Role Labelling that jointly performs predicate identification, frame disambiguation, argument identification and argument classification for all predicates in a sentence. Empirically we find that our approach is competitive: our best model would appear on par with the best entry in the CoNLL 2008 shared task open track, and at the 4th place of the closed track-right behind the systems that use significantly better parsers to generate their input features. Moreover, we observe that by fully capturing the complete SRL pipeline in a single probabilistic model we can achieve significant improvements over more isolated systems, in particular for out-of-domain data. Finally, we show that despite the joint approach, our system is still efficient.","Jointly Identifying Predicates, Arguments and Senses using Markov Logic","In this paper we present a Markov Logic Network for Semantic Role Labelling that jointly performs predicate identification, frame disambiguation, argument identification and argument classification for all predicates in a sentence. Empirically we find that our approach is competitive: our best model would appear on par with the best entry in the CoNLL 2008 shared task open track, and at the 4th place of the closed track-right behind the systems that use significantly better parsers to generate their input features. Moreover, we observe that by fully capturing the complete SRL pipeline in a single probabilistic model we can achieve significant improvements over more isolated systems, in particular for out-of-domain data. Finally, we show that despite the joint approach, our system is still efficient.",The authors are grateful to Mihai Surdeanu for providing the version of the corpus used in this work.,"Jointly Identifying Predicates, Arguments and Senses using Markov Logic. In this paper we present a Markov Logic Network for Semantic Role Labelling that jointly performs predicate identification, frame disambiguation, argument identification and argument classification for all predicates in a sentence. Empirically we find that our approach is competitive: our best model would appear on par with the best entry in the CoNLL 2008 shared task open track, and at the 4th place of the closed track-right behind the systems that use significantly better parsers to generate their input features. 
Moreover, we observe that by fully capturing the complete SRL pipeline in a single probabilistic model we can achieve significant improvements over more isolated systems, in particular for out-of-domain data. Finally, we show that despite the joint approach, our system is still efficient.",2009
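The meza-ruiz-riedel-2009-jointly record describes joint inference over predicate and argument decisions using weighted logical formulas. The toy below scores joint assignments by summing the weights of satisfied hand-written rules and takes the argmax by brute force; the rules, weights and tiny search space are invented for illustration and are not the paper's Markov Logic Network or its inference engine.

from itertools import product

# Toy joint inference in the spirit of weighted logical rules: score a joint
# assignment of predicate and argument-role decisions, then brute-force the argmax.
tokens = ["Mary", "eats", "apples"]
pos = ["NNP", "VBZ", "NNS"]
ROLES = [None, "A0", "A1"]

def score(is_pred, roles):
    s = 0.0
    for i, p in enumerate(is_pred):
        if p and pos[i].startswith("VB"):
            s += 2.0      # verbs tend to be predicates
        if p and pos[i].startswith("NN"):
            s -= 2.0      # nouns tend not to be
    for (i, j), r in roles.items():
        if r is not None and not is_pred[i]:
            s -= 10.0     # roles must attach to predicates (near-hard constraint)
        if r == "A0" and j < i and pos[j].startswith("NN"):
            s += 1.0      # a noun left of the predicate favours A0
        if r == "A1" and j > i and pos[j].startswith("NN"):
            s += 1.0      # a noun right of the predicate favours A1
    return s

n = len(tokens)
pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
best = max(
    ((is_pred, dict(zip(pairs, roles)))
     for is_pred in product([False, True], repeat=n)
     for roles in product(ROLES, repeat=len(pairs))),
    key=lambda a: score(*a),
)
print(best)   # expected: "eats" as predicate, "Mary" as A0, "apples" as A1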
doukhan-etal-2012-designing,http://www.lrec-conf.org/proceedings/lrec2012/pdf/876_Paper.pdf,0,,,,,,,"Designing French Tale Corpora for Entertaining Text To Speech Synthesis. Text and speech corpora for training a tale telling robot have been designed, recorded and annotated. The aim of these corpora is to study expressive storytelling behaviour, and to help in designing expressive prosodic and co-verbal variations for the artificial storyteller). A set of 89 children tales in French serves as a basis for this work. The tales annotation principles and scheme are described, together with the corpus description in terms of coverage and inter-annotator agreement. Automatic analysis of a new tale with the help of this corpus and machine learning is discussed. Metrics for evaluation of automatic annotation methods are discussed. A speech corpus of about 1 hour, with 12 tales has been recorded and aligned and annotated. This corpus is used for predicting expressive prosody in children tales, above the level of the sentence.",Designing {F}rench Tale Corpora for Entertaining Text To Speech Synthesis,"Text and speech corpora for training a tale telling robot have been designed, recorded and annotated. The aim of these corpora is to study expressive storytelling behaviour, and to help in designing expressive prosodic and co-verbal variations for the artificial storyteller). A set of 89 children tales in French serves as a basis for this work. The tales annotation principles and scheme are described, together with the corpus description in terms of coverage and inter-annotator agreement. Automatic analysis of a new tale with the help of this corpus and machine learning is discussed. Metrics for evaluation of automatic annotation methods are discussed. A speech corpus of about 1 hour, with 12 tales has been recorded and aligned and annotated. This corpus is used for predicting expressive prosody in children tales, above the level of the sentence.",Designing French Tale Corpora for Entertaining Text To Speech Synthesis,"Text and speech corpora for training a tale telling robot have been designed, recorded and annotated. The aim of these corpora is to study expressive storytelling behaviour, and to help in designing expressive prosodic and co-verbal variations for the artificial storyteller). A set of 89 children tales in French serves as a basis for this work. The tales annotation principles and scheme are described, together with the corpus description in terms of coverage and inter-annotator agreement. Automatic analysis of a new tale with the help of this corpus and machine learning is discussed. Metrics for evaluation of automatic annotation methods are discussed. A speech corpus of about 1 hour, with 12 tales has been recorded and aligned and annotated. This corpus is used for predicting expressive prosody in children tales, above the level of the sentence.",This work has been funded by the French project GV-LEx (ANR-08-CORD-024 http://www.gvlex.com).,"Designing French Tale Corpora for Entertaining Text To Speech Synthesis. Text and speech corpora for training a tale telling robot have been designed, recorded and annotated. The aim of these corpora is to study expressive storytelling behaviour, and to help in designing expressive prosodic and co-verbal variations for the artificial storyteller). A set of 89 children tales in French serves as a basis for this work. 
The tales annotation principles and scheme are described, together with the corpus description in terms of coverage and inter-annotator agreement. Automatic analysis of a new tale with the help of this corpus and machine learning is discussed. Metrics for evaluation of automatic annotation methods are discussed. A speech corpus of about 1 hour, with 12 tales has been recorded and aligned and annotated. This corpus is used for predicting expressive prosody in children tales, above the level of the sentence.",2012
tang-etal-2020-syntactic,https://aclanthology.org/2020.findings-emnlp.69,0,,,,,,,"Syntactic and Semantic-driven Learning for Open Information Extraction. One of the biggest bottlenecks in building accurate, high coverage neural open IE systems is the need for large labelled corpora. The diversity of open domain corpora and the variety of natural language expressions further exacerbate this problem. In this paper, we propose a syntactic and semantic-driven learning approach, which can learn neural open IE models without any human-labelled data by leveraging syntactic and semantic knowledge as noisier, higher-level supervisions. Specifically, we first employ syntactic patterns as data labelling functions and pretrain a base model using the generated labels. Then we propose a syntactic and semantic-driven reinforcement learning algorithm, which can effectively generalize the base model to open situations with high accuracy. Experimental results show that our approach significantly outperforms the supervised counterparts, and can even achieve competitive performance to supervised state-of-the-art (SoA) model.",Syntactic and Semantic-driven Learning for Open Information Extraction,"One of the biggest bottlenecks in building accurate, high coverage neural open IE systems is the need for large labelled corpora. The diversity of open domain corpora and the variety of natural language expressions further exacerbate this problem. In this paper, we propose a syntactic and semantic-driven learning approach, which can learn neural open IE models without any human-labelled data by leveraging syntactic and semantic knowledge as noisier, higher-level supervisions. Specifically, we first employ syntactic patterns as data labelling functions and pretrain a base model using the generated labels. Then we propose a syntactic and semantic-driven reinforcement learning algorithm, which can effectively generalize the base model to open situations with high accuracy. Experimental results show that our approach significantly outperforms the supervised counterparts, and can even achieve competitive performance to supervised state-of-the-art (SoA) model.",Syntactic and Semantic-driven Learning for Open Information Extraction,"One of the biggest bottlenecks in building accurate, high coverage neural open IE systems is the need for large labelled corpora. The diversity of open domain corpora and the variety of natural language expressions further exacerbate this problem. In this paper, we propose a syntactic and semantic-driven learning approach, which can learn neural open IE models without any human-labelled data by leveraging syntactic and semantic knowledge as noisier, higher-level supervisions. Specifically, we first employ syntactic patterns as data labelling functions and pretrain a base model using the generated labels. 
Then we propose a syntactic and semantic-driven reinforcement learning algorithm, which can effectively generalize the base model to open situations with high accuracy. Experimental results show that our approach significantly outperforms the supervised counterparts, and can even achieve competitive performance to supervised state-of-the-art (SoA) model.",,"Syntactic and Semantic-driven Learning for Open Information Extraction. One of the biggest bottlenecks in building accurate, high coverage neural open IE systems is the need for large labelled corpora. The diversity of open domain corpora and the variety of natural language expressions further exacerbate this problem. In this paper, we propose a syntactic and semantic-driven learning approach, which can learn neural open IE models without any human-labelled data by leveraging syntactic and semantic knowledge as noisier, higher-level supervisions. Specifically, we first employ syntactic patterns as data labelling functions and pretrain a base model using the generated labels. Then we propose a syntactic and semantic-driven reinforcement learning algorithm, which can effectively generalize the base model to open situations with high accuracy. Experimental results show that our approach significantly outperforms the supervised counterparts, and can even achieve competitive performance to supervised state-of-the-art (SoA) model.",2020
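To make the "syntactic patterns as data labelling functions" step in the tang-etal-2020-syntactic record concrete, here is a deliberately simple labelling function that marks verbs as P and the nearest surrounding nouns as ARG1/ARG2 over POS-tagged input; the example sentence is the one used in the paper's own figure, but the real system's patterns are far richer than this sketch.

def pattern_label(tagged_tokens):
    """Toy syntactic-pattern labelling function: mark verbs as P, the nearest
    preceding noun as ARG1 and the nearest following noun as ARG2."""
    labels = ["O"] * len(tagged_tokens)
    for i, (_, tag) in enumerate(tagged_tokens):
        if tag.startswith("VB"):
            labels[i] = "P"
            for j in range(i - 1, -1, -1):
                if tagged_tokens[j][1].startswith("NN"):
                    labels[j] = "ARG1"
                    break
            for j in range(i + 1, len(tagged_tokens)):
                if tagged_tokens[j][1].startswith("NN"):
                    labels[j] = "ARG2"
                    break
    return labels

sent = [("Parragon", "NNP"), ("operates", "VBZ"), ("more", "JJR"),
        ("than", "IN"), ("35", "CD"), ("markets", "NNS")]
print(list(zip([w for w, _ in sent], pattern_label(sent))))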
shen-etal-2020-blank,https://aclanthology.org/2020.emnlp-main.420,0,,,,,,,"Blank Language Models. We propose Blank Language Model (BLM), a model that generates sequences by dynamically creating and filling in blanks. The blanks control which part of the sequence to expand, making BLM ideal for a variety of text editing and rewriting tasks. The model can start from a single blank or partially completed text with blanks at specified locations. It iteratively determines which word to place in a blank and whether to insert new blanks, and stops generating when no blanks are left to fill. BLM can be efficiently trained using a lower bound of the marginal data likelihood. On the task of filling missing text snippets, BLM significantly outperforms all other baselines in terms of both accuracy and fluency. Experiments on style transfer and damaged ancient text restoration demonstrate the potential of this framework for a wide range of applications. 1",Blank Language Models,"We propose Blank Language Model (BLM), a model that generates sequences by dynamically creating and filling in blanks. The blanks control which part of the sequence to expand, making BLM ideal for a variety of text editing and rewriting tasks. The model can start from a single blank or partially completed text with blanks at specified locations. It iteratively determines which word to place in a blank and whether to insert new blanks, and stops generating when no blanks are left to fill. BLM can be efficiently trained using a lower bound of the marginal data likelihood. On the task of filling missing text snippets, BLM significantly outperforms all other baselines in terms of both accuracy and fluency. Experiments on style transfer and damaged ancient text restoration demonstrate the potential of this framework for a wide range of applications. 1",Blank Language Models,"We propose Blank Language Model (BLM), a model that generates sequences by dynamically creating and filling in blanks. The blanks control which part of the sequence to expand, making BLM ideal for a variety of text editing and rewriting tasks. The model can start from a single blank or partially completed text with blanks at specified locations. It iteratively determines which word to place in a blank and whether to insert new blanks, and stops generating when no blanks are left to fill. BLM can be efficiently trained using a lower bound of the marginal data likelihood. On the task of filling missing text snippets, BLM significantly outperforms all other baselines in terms of both accuracy and fluency. Experiments on style transfer and damaged ancient text restoration demonstrate the potential of this framework for a wide range of applications. 1",We thank all reviewers and the MIT NLP group for their thoughtful feedback.,"Blank Language Models. We propose Blank Language Model (BLM), a model that generates sequences by dynamically creating and filling in blanks. The blanks control which part of the sequence to expand, making BLM ideal for a variety of text editing and rewriting tasks. The model can start from a single blank or partially completed text with blanks at specified locations. It iteratively determines which word to place in a blank and whether to insert new blanks, and stops generating when no blanks are left to fill. BLM can be efficiently trained using a lower bound of the marginal data likelihood. On the task of filling missing text snippets, BLM significantly outperforms all other baselines in terms of both accuracy and fluency. 
Experiments on style transfer and damaged ancient text restoration demonstrate the potential of this framework for a wide range of applications.",2020
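The shen-etal-2020-blank record describes generation as repeatedly choosing a blank, filling it with a word and optionally spawning new blanks. The sketch below reproduces only that decoding loop, with a seeded random chooser standing in for the trained Transformer; the vocabulary, probabilities and step cap are arbitrary assumptions.

import random

random.seed(0)
VOCAB = ["the", "cat", "sat", "on", "mat", "quietly"]

def fill_blanks(tokens, max_steps=20):
    """Toy decoding loop in the spirit of a blank language model: while blanks
    remain, pick one, fill it with a word, and optionally create new blanks on
    either side. A random chooser stands in for a trained model."""
    for _ in range(max_steps):
        blanks = [i for i, t in enumerate(tokens) if t == "__"]
        if not blanks:
            break
        i = random.choice(blanks)                       # which blank to expand
        word = random.choice(VOCAB)                     # which word to place there
        left, right = random.random() < 0.3, random.random() < 0.3
        tokens[i:i + 1] = (["__"] if left else []) + [word] + (["__"] if right else [])
    return [t for t in tokens if t != "__"]             # drop any unfilled blanks

print(fill_blanks(["the", "__", "sat", "on", "the", "__"]))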
kim-riloff-2015-stacked,https://aclanthology.org/W15-3807,1,,,,health,,,"Stacked Generalization for Medical Concept Extraction from Clinical Notes. The goal of our research is to extract medical concepts from clinical notes containing patient information. Our research explores stacked generalization as a metalearning technique to exploit a diverse set of concept extraction models. First, we create multiple models for concept extraction using a variety of information extraction techniques, including knowledgebased, rule-based, and machine learning models. Next, we train a meta-classifier using stacked generalization with a feature set generated from the outputs of the individual classifiers. The meta-classifier learns to predict concepts based on information about the predictions of the component classifiers. Our results show that the stacked generalization learner performs better than the individual models and achieves state-of-the-art performance on the 2010 i2b2 data set.",Stacked Generalization for Medical Concept Extraction from Clinical Notes,"The goal of our research is to extract medical concepts from clinical notes containing patient information. Our research explores stacked generalization as a metalearning technique to exploit a diverse set of concept extraction models. First, we create multiple models for concept extraction using a variety of information extraction techniques, including knowledgebased, rule-based, and machine learning models. Next, we train a meta-classifier using stacked generalization with a feature set generated from the outputs of the individual classifiers. The meta-classifier learns to predict concepts based on information about the predictions of the component classifiers. Our results show that the stacked generalization learner performs better than the individual models and achieves state-of-the-art performance on the 2010 i2b2 data set.",Stacked Generalization for Medical Concept Extraction from Clinical Notes,"The goal of our research is to extract medical concepts from clinical notes containing patient information. Our research explores stacked generalization as a metalearning technique to exploit a diverse set of concept extraction models. First, we create multiple models for concept extraction using a variety of information extraction techniques, including knowledgebased, rule-based, and machine learning models. Next, we train a meta-classifier using stacked generalization with a feature set generated from the outputs of the individual classifiers. The meta-classifier learns to predict concepts based on information about the predictions of the component classifiers. Our results show that the stacked generalization learner performs better than the individual models and achieves state-of-the-art performance on the 2010 i2b2 data set.",This research was supported in part by the National Science Foundation under grant IIS-1018314.,"Stacked Generalization for Medical Concept Extraction from Clinical Notes. The goal of our research is to extract medical concepts from clinical notes containing patient information. Our research explores stacked generalization as a metalearning technique to exploit a diverse set of concept extraction models. First, we create multiple models for concept extraction using a variety of information extraction techniques, including knowledgebased, rule-based, and machine learning models. Next, we train a meta-classifier using stacked generalization with a feature set generated from the outputs of the individual classifiers. 
The meta-classifier learns to predict concepts based on information about the predictions of the component classifiers. Our results show that the stacked generalization learner performs better than the individual models and achieves state-of-the-art performance on the 2010 i2b2 data set.",2015
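For the kim-riloff-2015-stacked record, stacked generalization itself is easy to demonstrate with scikit-learn: a meta-classifier is trained on the cross-validated predictions of several base models. The synthetic features and generic classifiers below are stand-ins for the paper's knowledge-based, rule-based and machine-learning concept extractors and its i2b2 features.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-token feature vectors and concept labels.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base "concept extractors" (placeholders for the paper's diverse models).
base = [("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("nb", GaussianNB()),
        ("lr", LogisticRegression(max_iter=1000))]

# Meta-classifier trained on cross-validated base predictions (stacked generalization).
stack = StackingClassifier(estimators=base, final_estimator=LogisticRegression(), cv=5)
stack.fit(X_tr, y_tr)
print("held-out accuracy:", stack.score(X_te, y_te))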
choi-etal-1994-yanhui,https://aclanthology.org/O94-1002,0,,,,,,,"Yanhui (宴會), a Software Based High Performance Mandarin Text-To-Speech System. ","Yanhui (宴會), a Software Based High Performance {M}andarin Text-To-Speech System",,"Yanhui (宴會), a Software Based High Performance Mandarin Text-To-Speech System",,,"Yanhui (宴會), a Software Based High Performance Mandarin Text-To-Speech System. ",1994
tian-etal-2014-um,http://www.lrec-conf.org/proceedings/lrec2014/pdf/774_Paper.pdf,0,,,,,,,"UM-Corpus: A Large English-Chinese Parallel Corpus for Statistical Machine Translation. Parallel corpus is a valuable resource for cross-language information retrieval and data-driven natural language processing systems, especially for Statistical Machine Translation (SMT). However, most existing parallel corpora to Chinese are subject to in-house use, while others are domain specific and limited in size. To a certain degree, this limits the SMT research. This paper describes the acquisition of a large scale and high quality parallel corpora for English and Chinese. The corpora constructed in this paper contain about 15 million English-Chinese (E-C) parallel sentences, and more than 2 million training data and 5,000 testing sentences are made publicly available. Different from previous work, the corpus is designed to embrace eight different domains. Some of them are further categorized into different topics. The corpus will be released to the research community, which is available at the NLP 2 CT 1 website.",{UM}-Corpus: A Large {E}nglish-{C}hinese Parallel Corpus for Statistical Machine Translation,"Parallel corpus is a valuable resource for cross-language information retrieval and data-driven natural language processing systems, especially for Statistical Machine Translation (SMT). However, most existing parallel corpora to Chinese are subject to in-house use, while others are domain specific and limited in size. To a certain degree, this limits the SMT research. This paper describes the acquisition of a large scale and high quality parallel corpora for English and Chinese. The corpora constructed in this paper contain about 15 million English-Chinese (E-C) parallel sentences, and more than 2 million training data and 5,000 testing sentences are made publicly available. Different from previous work, the corpus is designed to embrace eight different domains. Some of them are further categorized into different topics. The corpus will be released to the research community, which is available at the NLP 2 CT 1 website.",UM-Corpus: A Large English-Chinese Parallel Corpus for Statistical Machine Translation,"Parallel corpus is a valuable resource for cross-language information retrieval and data-driven natural language processing systems, especially for Statistical Machine Translation (SMT). However, most existing parallel corpora to Chinese are subject to in-house use, while others are domain specific and limited in size. To a certain degree, this limits the SMT research. This paper describes the acquisition of a large scale and high quality parallel corpora for English and Chinese. The corpora constructed in this paper contain about 15 million English-Chinese (E-C) parallel sentences, and more than 2 million training data and 5,000 testing sentences are made publicly available. Different from previous work, the corpus is designed to embrace eight different domains. Some of them are further categorized into different topics. The corpus will be released to the research community, which is available at the NLP 2 CT 1 website.","The authors would like to thank all reviewers for the very careful reading and helpful suggestions. The authors are grateful to the Science and Technology Development Fund of Macau and the Research Committee of the University of Macau for the funding support for their research, under the","UM-Corpus: A Large English-Chinese Parallel Corpus for Statistical Machine Translation. 
Parallel corpus is a valuable resource for cross-language information retrieval and data-driven natural language processing systems, especially for Statistical Machine Translation (SMT). However, most existing parallel corpora to Chinese are subject to in-house use, while others are domain specific and limited in size. To a certain degree, this limits the SMT research. This paper describes the acquisition of a large scale and high quality parallel corpora for English and Chinese. The corpora constructed in this paper contain about 15 million English-Chinese (E-C) parallel sentences, and more than 2 million training data and 5,000 testing sentences are made publicly available. Different from previous work, the corpus is designed to embrace eight different domains. Some of them are further categorized into different topics. The corpus will be released to the research community, which is available at the NLP 2 CT 1 website.",2014
fomicheva-etal-2016-cobaltf,https://aclanthology.org/W16-2339,0,,,,,,,"CobaltF: A Fluent Metric for MT Evaluation. The vast majority of Machine Translation (MT) evaluation approaches are based on the idea that the closer the MT output is to a human reference translation, the higher its quality. While translation quality has two important aspects, adequacy and fluency, the existing referencebased metrics are largely focused on the former. In this work we combine our metric UPF-Cobalt, originally presented at the WMT15 Metrics Task, with a number of features intended to capture translation fluency. Experiments show that the integration of fluency-oriented features significantly improves the results, rivalling the best-performing evaluation metrics on the WMT15 data.",{C}obalt{F}: A Fluent Metric for {MT} Evaluation,"The vast majority of Machine Translation (MT) evaluation approaches are based on the idea that the closer the MT output is to a human reference translation, the higher its quality. While translation quality has two important aspects, adequacy and fluency, the existing referencebased metrics are largely focused on the former. In this work we combine our metric UPF-Cobalt, originally presented at the WMT15 Metrics Task, with a number of features intended to capture translation fluency. Experiments show that the integration of fluency-oriented features significantly improves the results, rivalling the best-performing evaluation metrics on the WMT15 data.",CobaltF: A Fluent Metric for MT Evaluation,"The vast majority of Machine Translation (MT) evaluation approaches are based on the idea that the closer the MT output is to a human reference translation, the higher its quality. While translation quality has two important aspects, adequacy and fluency, the existing referencebased metrics are largely focused on the former. In this work we combine our metric UPF-Cobalt, originally presented at the WMT15 Metrics Task, with a number of features intended to capture translation fluency. Experiments show that the integration of fluency-oriented features significantly improves the results, rivalling the best-performing evaluation metrics on the WMT15 data.","This work was partially funded by TUNER (TIN2015-65308-C5-5-R) and MINECO/FEDER, UE. Marina Fomicheva was supported by funding from the FI-DGR grant program of the Generalitat de Catalunya. Iria da Cunha was supported by a Ramón y Cajal contract (RYC-2014-16935). Lucia Specia was supported by the QT21 project (H2020 No. 645452).","CobaltF: A Fluent Metric for MT Evaluation. The vast majority of Machine Translation (MT) evaluation approaches are based on the idea that the closer the MT output is to a human reference translation, the higher its quality. While translation quality has two important aspects, adequacy and fluency, the existing referencebased metrics are largely focused on the former. In this work we combine our metric UPF-Cobalt, originally presented at the WMT15 Metrics Task, with a number of features intended to capture translation fluency. Experiments show that the integration of fluency-oriented features significantly improves the results, rivalling the best-performing evaluation metrics on the WMT15 data.",2016
abzaliev-2019-gap,https://aclanthology.org/W19-3816,0,,,,,,,"On GAP Coreference Resolution Shared Task: Insights from the 3rd Place Solution. This paper presents the 3rd-place-winning solution to the GAP coreference resolution shared task. The approach adopted consists of two key components: fine-tuning the BERT language representation model (Devlin et al., 2018) and the usage of external datasets during the training process. The model uses hidden states from the intermediate BERT layers instead of the last layer. The resulting system almost eliminates the difference in log loss per gender during the cross-validation, while providing high performance.",On {GAP} Coreference Resolution Shared Task: Insights from the 3rd Place Solution,"This paper presents the 3rd-place-winning solution to the GAP coreference resolution shared task. The approach adopted consists of two key components: fine-tuning the BERT language representation model (Devlin et al., 2018) and the usage of external datasets during the training process. The model uses hidden states from the intermediate BERT layers instead of the last layer. The resulting system almost eliminates the difference in log loss per gender during the cross-validation, while providing high performance.",On GAP Coreference Resolution Shared Task: Insights from the 3rd Place Solution,"This paper presents the 3rd-place-winning solution to the GAP coreference resolution shared task. The approach adopted consists of two key components: fine-tuning the BERT language representation model (Devlin et al., 2018) and the usage of external datasets during the training process. The model uses hidden states from the intermediate BERT layers instead of the last layer. The resulting system almost eliminates the difference in log loss per gender during the cross-validation, while providing high performance.",,"On GAP Coreference Resolution Shared Task: Insights from the 3rd Place Solution. This paper presents the 3rd-place-winning solution to the GAP coreference resolution shared task. The approach adopted consists of two key components: fine-tuning the BERT language representation model (Devlin et al., 2018) and the usage of external datasets during the training process. The model uses hidden states from the intermediate BERT layers instead of the last layer. The resulting system almost eliminates the difference in log loss per gender during the cross-validation, while providing high performance.",2019
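The abzaliev-2019-gap record's key trick, using intermediate rather than final BERT layers, can be sketched with the Hugging Face transformers API by requesting output_hidden_states=True and pooling a band of middle layers. The example sentence, the layer band (6-9) and the mean pooling are illustrative choices, not the 3rd-place system's exact configuration.

import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased", output_hidden_states=True)

inputs = tok("Kathleen met Mary and she gave her a book.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# out.hidden_states holds the embedding layer plus one tensor per Transformer layer;
# take a band of intermediate layers instead of only the last one.
mention_repr = torch.stack(out.hidden_states[6:10]).mean(dim=0)   # (1, seq_len, hidden_size)
print(mention_repr.shape)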
slawik-etal-2014-kit,https://aclanthology.org/2014.iwslt-evaluation.17,0,,,,,,,"The KIT translation systems for IWSLT 2014. In this paper, we present the KIT systems participating in the TED translation tasks of the IWSLT 2014 machine translation evaluation. We submitted phrase-based translation systems for all three official directions, namely English→German, German→English, and English→French, as well as for the optional directions English→Chinese and English→Arabic. For the official directions we built systems both for the machine translation as well as the spoken language translation track. This year we improved our systems' performance over last year through n-best list rescoring using neural networkbased translation and language models and novel preordering rules based on tree information of multiple syntactic levels. Furthermore, we could successfully apply a novel phrase extraction algorithm and transliteration of unknown words for Arabic. We also submitted a contrastive system for German→English built with stemmed German adjectives. For the SLT tracks, we used a monolingual translation system to translate the lowercased ASR hypotheses with all punctuation stripped to truecased, punctuated output as a preprocessing step to our usual translation system.",The {KIT} translation systems for {IWSLT} 2014,"In this paper, we present the KIT systems participating in the TED translation tasks of the IWSLT 2014 machine translation evaluation. We submitted phrase-based translation systems for all three official directions, namely English→German, German→English, and English→French, as well as for the optional directions English→Chinese and English→Arabic. For the official directions we built systems both for the machine translation as well as the spoken language translation track. This year we improved our systems' performance over last year through n-best list rescoring using neural networkbased translation and language models and novel preordering rules based on tree information of multiple syntactic levels. Furthermore, we could successfully apply a novel phrase extraction algorithm and transliteration of unknown words for Arabic. We also submitted a contrastive system for German→English built with stemmed German adjectives. For the SLT tracks, we used a monolingual translation system to translate the lowercased ASR hypotheses with all punctuation stripped to truecased, punctuated output as a preprocessing step to our usual translation system.",The KIT translation systems for IWSLT 2014,"In this paper, we present the KIT systems participating in the TED translation tasks of the IWSLT 2014 machine translation evaluation. We submitted phrase-based translation systems for all three official directions, namely English→German, German→English, and English→French, as well as for the optional directions English→Chinese and English→Arabic. For the official directions we built systems both for the machine translation as well as the spoken language translation track. This year we improved our systems' performance over last year through n-best list rescoring using neural networkbased translation and language models and novel preordering rules based on tree information of multiple syntactic levels. Furthermore, we could successfully apply a novel phrase extraction algorithm and transliteration of unknown words for Arabic. We also submitted a contrastive system for German→English built with stemmed German adjectives. 
For the SLT tracks, we used a monolingual translation system to translate the lowercased ASR hypotheses with all punctuation stripped to truecased, punctuated output as a preprocessing step to our usual translation system.",The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n • 287658.,"The KIT translation systems for IWSLT 2014. In this paper, we present the KIT systems participating in the TED translation tasks of the IWSLT 2014 machine translation evaluation. We submitted phrase-based translation systems for all three official directions, namely English→German, German→English, and English→French, as well as for the optional directions English→Chinese and English→Arabic. For the official directions we built systems both for the machine translation as well as the spoken language translation track. This year we improved our systems' performance over last year through n-best list rescoring using neural networkbased translation and language models and novel preordering rules based on tree information of multiple syntactic levels. Furthermore, we could successfully apply a novel phrase extraction algorithm and transliteration of unknown words for Arabic. We also submitted a contrastive system for German→English built with stemmed German adjectives. For the SLT tracks, we used a monolingual translation system to translate the lowercased ASR hypotheses with all punctuation stripped to truecased, punctuated output as a preprocessing step to our usual translation system.",2014
pedersen-etal-2010-merging,http://www.lrec-conf.org/proceedings/lrec2010/pdf/200_Paper.pdf,0,,,,,,,"Merging Specialist Taxonomies and Folk Taxonomies in Wordnets - A case Study of Plants, Animals and Foods in the Danish Wordnet. In this paper we investigate the problem of merging specialist taxonomies with the more intuitive folk taxonomies in lexical-semantic resources like wordnets; and we focus in particular on plants, animals and foods. We show that a traditional dictionary like Den Danske Ordbog (DDO) survives well with several inconsistencies between different taxonomies of the vocabulary and that a restructuring is therefore necessary in order to compile a consistent wordnet resource on its basis. To this end, we apply Cruse's definitions for hyponymies, namely those of natural kinds (such as plants and animals) on the one hand and functional kinds (such as foods) on the other. We pursue this distinction in the development of the Danish wordnet, DanNet, which has recently been built on the basis of DDO and is made open source for all potential users at www.wordnet.dk. Not surprisingly, we conclude that cultural background influences the structure of folk taxonomies quite radically, and that wordnet builders must therefore consider these carefully in order to capture their central characteristics in a systematic way.","Merging Specialist Taxonomies and Folk Taxonomies in Wordnets - A case Study of Plants, Animals and Foods in the {D}anish {W}ordnet","In this paper we investigate the problem of merging specialist taxonomies with the more intuitive folk taxonomies in lexical-semantic resources like wordnets; and we focus in particular on plants, animals and foods. We show that a traditional dictionary like Den Danske Ordbog (DDO) survives well with several inconsistencies between different taxonomies of the vocabulary and that a restructuring is therefore necessary in order to compile a consistent wordnet resource on its basis. To this end, we apply Cruse's definitions for hyponymies, namely those of natural kinds (such as plants and animals) on the one hand and functional kinds (such as foods) on the other. We pursue this distinction in the development of the Danish wordnet, DanNet, which has recently been built on the basis of DDO and is made open source for all potential users at www.wordnet.dk. Not surprisingly, we conclude that cultural background influences the structure of folk taxonomies quite radically, and that wordnet builders must therefore consider these carefully in order to capture their central characteristics in a systematic way.","Merging Specialist Taxonomies and Folk Taxonomies in Wordnets - A case Study of Plants, Animals and Foods in the Danish Wordnet","In this paper we investigate the problem of merging specialist taxonomies with the more intuitive folk taxonomies in lexical-semantic resources like wordnets; and we focus in particular on plants, animals and foods. We show that a traditional dictionary like Den Danske Ordbog (DDO) survives well with several inconsistencies between different taxonomies of the vocabulary and that a restructuring is therefore necessary in order to compile a consistent wordnet resource on its basis. To this end, we apply Cruse's definitions for hyponymies, namely those of natural kinds (such as plants and animals) on the one hand and functional kinds (such as foods) on the other. 
We pursue this distinction in the development of the Danish wordnet, DanNet, which has recently been built on the basis of DDO and is made open source for all potential users at www.wordnet.dk. Not surprisingly, we conclude that cultural background influences the structure of folk taxonomies quite radically, and that wordnet builders must therefore consider these carefully in order to capture their central characteristics in a systematic way.",,"Merging Specialist Taxonomies and Folk Taxonomies in Wordnets - A case Study of Plants, Animals and Foods in the Danish Wordnet. In this paper we investigate the problem of merging specialist taxonomies with the more intuitive folk taxonomies in lexical-semantic resources like wordnets; and we focus in particular on plants, animals and foods. We show that a traditional dictionary like Den Danske Ordbog (DDO) survives well with several inconsistencies between different taxonomies of the vocabulary and that a restructuring is therefore necessary in order to compile a consistent wordnet resource on its basis. To this end, we apply Cruse's definitions for hyponymies, namely those of natural kinds (such as plants and animals) on the one hand and functional kinds (such as foods) on the other. We pursue this distinction in the development of the Danish wordnet, DanNet, which has recently been built on the basis of DDO and is made open source for all potential users at www.wordnet.dk. Not surprisingly, we conclude that cultural background influences the structure of folk taxonomies quite radically, and that wordnet builders must therefore consider these carefully in order to capture their central characteristics in a systematic way.",2010
chambers-jurafsky-2011-template,https://aclanthology.org/P11-1098,0,,,,,,,"Template-Based Information Extraction without the Templates. Standard algorithms for template-based information extraction (IE) require predefined template schemas, and often labeled data, to learn to extract their slot fillers (e.g., an embassy is the Target of a Bombing template). This paper describes an approach to template-based IE that removes this requirement and performs extraction without knowing the template structure in advance. Our algorithm instead learns the template structure automatically from raw text, inducing template schemas as sets of linked events (e.g., bombings include detonate, set off, and destroy events) associated with semantic roles. We also solve the standard IE task, using the induced syntactic patterns to extract role fillers from specific documents. We evaluate on the MUC-4 terrorism dataset and show that we induce template structure very similar to handcreated gold structure, and we extract role fillers with an F1 score of .40, approaching the performance of algorithms that require full knowledge of the templates.",Template-Based Information Extraction without the Templates,"Standard algorithms for template-based information extraction (IE) require predefined template schemas, and often labeled data, to learn to extract their slot fillers (e.g., an embassy is the Target of a Bombing template). This paper describes an approach to template-based IE that removes this requirement and performs extraction without knowing the template structure in advance. Our algorithm instead learns the template structure automatically from raw text, inducing template schemas as sets of linked events (e.g., bombings include detonate, set off, and destroy events) associated with semantic roles. We also solve the standard IE task, using the induced syntactic patterns to extract role fillers from specific documents. We evaluate on the MUC-4 terrorism dataset and show that we induce template structure very similar to handcreated gold structure, and we extract role fillers with an F1 score of .40, approaching the performance of algorithms that require full knowledge of the templates.",Template-Based Information Extraction without the Templates,"Standard algorithms for template-based information extraction (IE) require predefined template schemas, and often labeled data, to learn to extract their slot fillers (e.g., an embassy is the Target of a Bombing template). This paper describes an approach to template-based IE that removes this requirement and performs extraction without knowing the template structure in advance. Our algorithm instead learns the template structure automatically from raw text, inducing template schemas as sets of linked events (e.g., bombings include detonate, set off, and destroy events) associated with semantic roles. We also solve the standard IE task, using the induced syntactic patterns to extract role fillers from specific documents. We evaluate on the MUC-4 terrorism dataset and show that we induce template structure very similar to handcreated gold structure, and we extract role fillers with an F1 score of .40, approaching the performance of algorithms that require full knowledge of the templates.","This work was supported by the National Science Foundation IIS-0811974, and this material is also based upon work supported by the Air Force Research Laboratory (AFRL) under prime contract no. FA8750-09-C-0181. 
Any opinions, findings, and conclusion or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the Air Force Research Laboratory (AFRL). Thanks to the Stanford NLP Group and reviewers for helpful suggestions.","Template-Based Information Extraction without the Templates. Standard algorithms for template-based information extraction (IE) require predefined template schemas, and often labeled data, to learn to extract their slot fillers (e.g., an embassy is the Target of a Bombing template). This paper describes an approach to template-based IE that removes this requirement and performs extraction without knowing the template structure in advance. Our algorithm instead learns the template structure automatically from raw text, inducing template schemas as sets of linked events (e.g., bombings include detonate, set off, and destroy events) associated with semantic roles. We also solve the standard IE task, using the induced syntactic patterns to extract role fillers from specific documents. We evaluate on the MUC-4 terrorism dataset and show that we induce template structure very similar to handcreated gold structure, and we extract role fillers with an F1 score of .40, approaching the performance of algorithms that require full knowledge of the templates.",2011
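One lightweight way to get the flavour of the template induction in the chambers-jurafsky-2011-template record is to describe each event verb by the words it co-occurs with and cluster the verbs, so that verbs sharing arguments (e.g. bombing-related ones) fall into the same induced schema. The tiny hand-made contexts and the KMeans clustering below are stand-ins, not the paper's actual induction procedure.

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# One pseudo-document per verb: argument/context words it co-occurs with
# (a tiny hand-made stand-in for counts gathered from a parsed corpus).
verb_contexts = {
    "detonate": "bomb charge explosion embassy car",
    "destroy":  "bomb building embassy headquarters",
    "set_off":  "bomb device explosion",
    "kidnap":   "hostage rebels businessman ransom",
    "release":  "hostage ransom rebels",
}
verbs = list(verb_contexts)
X = TfidfVectorizer().fit_transform([verb_contexts[v] for v in verbs])

# Verbs with similar argument profiles land in the same cluster, which plays
# the role of an induced template (e.g. a Bombing vs. a Kidnapping schema).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for cluster in set(labels):
    print(cluster, [v for v, c in zip(verbs, labels) if c == cluster])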
wang-etal-2015-learning-domain,https://aclanthology.org/W15-4654,0,,,,,,,"Learning Domain-Independent Dialogue Policies via Ontology Parameterisation. This paper introduces a novel approach to eliminate the domain dependence of dialogue state and action representations, such that dialogue policies trained based on the proposed representation can be transferred across different domains. The experimental results show that the policy optimised in a restaurant search domain using our domain-independent representations can be deployed to a laptop sale domain, achieving a task success rate very close (96.4% relative) to that of the policy optimised on in-domain dialogues.",Learning Domain-Independent Dialogue Policies via Ontology Parameterisation,"This paper introduces a novel approach to eliminate the domain dependence of dialogue state and action representations, such that dialogue policies trained based on the proposed representation can be transferred across different domains. The experimental results show that the policy optimised in a restaurant search domain using our domain-independent representations can be deployed to a laptop sale domain, achieving a task success rate very close (96.4% relative) to that of the policy optimised on in-domain dialogues.",Learning Domain-Independent Dialogue Policies via Ontology Parameterisation,"This paper introduces a novel approach to eliminate the domain dependence of dialogue state and action representations, such that dialogue policies trained based on the proposed representation can be transferred across different domains. The experimental results show that the policy optimised in a restaurant search domain using our domain-independent representations can be deployed to a laptop sale domain, achieving a task success rate very close (96.4% relative) to that of the policy optimised on in-domain dialogues.","The authors would like to thank David Vandyke, Milica Gašić and Steve Young for providing the BUDS system and the simulator, as well as for their help in setting up the crowdsourcing experiments.","Learning Domain-Independent Dialogue Policies via Ontology Parameterisation. This paper introduces a novel approach to eliminate the domain dependence of dialogue state and action representations, such that dialogue policies trained based on the proposed representation can be transferred across different domains. The experimental results show that the policy optimised in a restaurant search domain using our domain-independent representations can be deployed to a laptop sale domain, achieving a task success rate very close (96.4% relative) to that of the policy optimised on in-domain dialogues.",2015
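The wang-etal-2015-learning-domain record hinges on mapping domain-specific dialogue states into a slot-agnostic feature space so one policy can serve several domains. The toy function below summarises a belief state with slot-name-free statistics; the particular features are illustrative assumptions, not the paper's ontology parameterisation.

def domain_independent_state(belief, requested):
    """Toy slot-agnostic state summary: replace slot names with generic
    statistics so a policy trained in one domain can read another domain's state.
    'belief' maps slot -> probability of its top value; 'requested' is a set of
    slots the user asked about."""
    probs = sorted(belief.values(), reverse=True)
    return {
        "n_slots": len(belief),
        "max_belief": probs[0] if probs else 0.0,
        "mean_belief": sum(probs) / len(probs) if probs else 0.0,
        "n_confident_slots": sum(p > 0.8 for p in probs),
        "n_requested": len(requested),
    }

# A restaurant-domain state and a laptop-domain state map to the same feature space.
print(domain_independent_state({"food": 0.9, "area": 0.4}, {"phone"}))
print(domain_independent_state({"brand": 0.95, "ram": 0.7, "price": 0.2}, set()))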
tatsumi-etal-2012-good,https://aclanthology.org/2012.amta-wptp.8,0,,,,,,,"How Good Is Crowd Post-Editing? Its Potential and Limitations. This paper is a partial report of a research effort on evaluating the effect of crowdsourced post-editing. We first discuss the emerging trend of crowd-sourced postediting of machine translation output, along with its benefits and drawbacks. Second, we describe the pilot study we have conducted on a platform that facilitates crowd-sourced post-editing. Finally, we provide our plans for further studies to have more insight on how effective crowdsourced post-editing is.",How Good Is Crowd Post-Editing? Its Potential and Limitations,"This paper is a partial report of a research effort on evaluating the effect of crowdsourced post-editing. We first discuss the emerging trend of crowd-sourced postediting of machine translation output, along with its benefits and drawbacks. Second, we describe the pilot study we have conducted on a platform that facilitates crowd-sourced post-editing. Finally, we provide our plans for further studies to have more insight on how effective crowdsourced post-editing is.",How Good Is Crowd Post-Editing? Its Potential and Limitations,"This paper is a partial report of a research effort on evaluating the effect of crowdsourced post-editing. We first discuss the emerging trend of crowd-sourced postediting of machine translation output, along with its benefits and drawbacks. Second, we describe the pilot study we have conducted on a platform that facilitates crowd-sourced post-editing. Finally, we provide our plans for further studies to have more insight on how effective crowdsourced post-editing is.","This project was funded by International Affairs Division at Toyohashi University of Technology, and we would like to give special thanks to all the members of International Affairs Division for their support during the project. We are also thankful to Dr. Anthony Hartley for his support on conducting the experiment.","How Good Is Crowd Post-Editing? Its Potential and Limitations. This paper is a partial report of a research effort on evaluating the effect of crowdsourced post-editing. We first discuss the emerging trend of crowd-sourced postediting of machine translation output, along with its benefits and drawbacks. Second, we describe the pilot study we have conducted on a platform that facilitates crowd-sourced post-editing. Finally, we provide our plans for further studies to have more insight on how effective crowdsourced post-editing is.",2012
habash-etal-2006-challenges,https://aclanthology.org/2006.amta-papers.7,0,,,,,,,"Challenges in Building an Arabic-English GHMT System with SMT Components. The research context of this paper is developing hybrid machine translation (MT) systems that exploit the advantages of linguistic rule-based and statistical MT systems. Arabic, as a morphologically rich language, is especially challenging even without addressing the hybridization question. In this paper, we describe the challenges in building an Arabic-English generation-heavy machine translation (GHMT) system and boosting it with statistical machine translation (SMT) components. We present an extensive evaluation of multiple system variants and report positive results on the advantages of hybridization.",Challenges in Building an {A}rabic-{E}nglish {GHMT} System with {SMT} Components,"The research context of this paper is developing hybrid machine translation (MT) systems that exploit the advantages of linguistic rule-based and statistical MT systems. Arabic, as a morphologically rich language, is especially challenging even without addressing the hybridization question. In this paper, we describe the challenges in building an Arabic-English generation-heavy machine translation (GHMT) system and boosting it with statistical machine translation (SMT) components. We present an extensive evaluation of multiple system variants and report positive results on the advantages of hybridization.",Challenges in Building an Arabic-English GHMT System with SMT Components,"The research context of this paper is developing hybrid machine translation (MT) systems that exploit the advantages of linguistic rule-based and statistical MT systems. Arabic, as a morphologically rich language, is especially challenging even without addressing the hybridization question. In this paper, we describe the challenges in building an Arabic-English generation-heavy machine translation (GHMT) system and boosting it with statistical machine translation (SMT) components. We present an extensive evaluation of multiple system variants and report positive results on the advantages of hybridization.","This work has been supported, in part, under Army Research ","Challenges in Building an Arabic-English GHMT System with SMT Components. The research context of this paper is developing hybrid machine translation (MT) systems that exploit the advantages of linguistic rule-based and statistical MT systems. Arabic, as a morphologically rich language, is especially challenging even without addressing the hybridization question. In this paper, we describe the challenges in building an Arabic-English generation-heavy machine translation (GHMT) system and boosting it with statistical machine translation (SMT) components. We present an extensive evaluation of multiple system variants and report positive results on the advantages of hybridization.",2006
aralikatte-etal-2021-ellipsis,https://aclanthology.org/2021.eacl-main.68,0,,,,,,,"Ellipsis Resolution as Question Answering: An Evaluation. Most, if not all forms of ellipsis (e.g., 'so does Mary') are similar to reading comprehension questions ('what does Mary do'), in that in order to resolve them, we need to identify an appropriate text span in the preceding discourse. Following this observation, we present an alternative approach for English ellipsis resolution relying on architectures developed for question answering (QA). We present both single-task models, and joint models trained on auxiliary QA and coreference resolution datasets, clearly outperforming the current state of the art for Sluice Ellipsis (from 70.00 to 86.01 F 1) and Verb Phrase Ellipsis (from 72.89 to 78.66 F 1).",Ellipsis Resolution as Question Answering: An Evaluation,"Most, if not all forms of ellipsis (e.g., 'so does Mary') are similar to reading comprehension questions ('what does Mary do'), in that in order to resolve them, we need to identify an appropriate text span in the preceding discourse. Following this observation, we present an alternative approach for English ellipsis resolution relying on architectures developed for question answering (QA). We present both single-task models, and joint models trained on auxiliary QA and coreference resolution datasets, clearly outperforming the current state of the art for Sluice Ellipsis (from 70.00 to 86.01 F 1) and Verb Phrase Ellipsis (from 72.89 to 78.66 F 1).",Ellipsis Resolution as Question Answering: An Evaluation,"Most, if not all forms of ellipsis (e.g., 'so does Mary') are similar to reading comprehension questions ('what does Mary do'), in that in order to resolve them, we need to identify an appropriate text span in the preceding discourse. Following this observation, we present an alternative approach for English ellipsis resolution relying on architectures developed for question answering (QA). We present both single-task models, and joint models trained on auxiliary QA and coreference resolution datasets, clearly outperforming the current state of the art for Sluice Ellipsis (from 70.00 to 86.01 F 1) and Verb Phrase Ellipsis (from 72.89 to 78.66 F 1).",,"Ellipsis Resolution as Question Answering: An Evaluation. Most, if not all forms of ellipsis (e.g., 'so does Mary') are similar to reading comprehension questions ('what does Mary do'), in that in order to resolve them, we need to identify an appropriate text span in the preceding discourse. Following this observation, we present an alternative approach for English ellipsis resolution relying on architectures developed for question answering (QA). We present both single-task models, and joint models trained on auxiliary QA and coreference resolution datasets, clearly outperforming the current state of the art for Sluice Ellipsis (from 70.00 to 86.01 F 1) and Verb Phrase Ellipsis (from 72.89 to 78.66 F 1).",2021
borg-gatt-2017-morphological,https://aclanthology.org/W17-1304,0,,,,,,,"Morphological Analysis for the Maltese Language: The challenges of a hybrid system. Maltese is a morphologically rich language with a hybrid morphological system which features both concatenative and non-concatenative processes. This paper analyses the impact of this hybridity on the performance of machine learning techniques for morphological labelling and clustering. In particular, we analyse a dataset of morphologically related word clusters to evaluate the difference in results for concatenative and nonconcatenative clusters. We also describe research carried out in morphological labelling, with a particular focus on the verb category. Two evaluations were carried out, one using an unseen dataset, and another one using a gold standard dataset which was manually labelled. The gold standard dataset was split into concatenative and non-concatenative to analyse the difference in results between the two morphological systems.",Morphological Analysis for the {M}altese Language: The challenges of a hybrid system,"Maltese is a morphologically rich language with a hybrid morphological system which features both concatenative and non-concatenative processes. This paper analyses the impact of this hybridity on the performance of machine learning techniques for morphological labelling and clustering. In particular, we analyse a dataset of morphologically related word clusters to evaluate the difference in results for concatenative and nonconcatenative clusters. We also describe research carried out in morphological labelling, with a particular focus on the verb category. Two evaluations were carried out, one using an unseen dataset, and another one using a gold standard dataset which was manually labelled. The gold standard dataset was split into concatenative and non-concatenative to analyse the difference in results between the two morphological systems.",Morphological Analysis for the Maltese Language: The challenges of a hybrid system,"Maltese is a morphologically rich language with a hybrid morphological system which features both concatenative and non-concatenative processes. This paper analyses the impact of this hybridity on the performance of machine learning techniques for morphological labelling and clustering. In particular, we analyse a dataset of morphologically related word clusters to evaluate the difference in results for concatenative and nonconcatenative clusters. We also describe research carried out in morphological labelling, with a particular focus on the verb category. Two evaluations were carried out, one using an unseen dataset, and another one using a gold standard dataset which was manually labelled. The gold standard dataset was split into concatenative and non-concatenative to analyse the difference in results between the two morphological systems.",The authors acknowledge the insight and expertise of Prof. Ray Fabri. The research work disclosed in this publication is partially funded by the Malta Government Scholarship Scheme grant.,"Morphological Analysis for the Maltese Language: The challenges of a hybrid system. Maltese is a morphologically rich language with a hybrid morphological system which features both concatenative and non-concatenative processes. This paper analyses the impact of this hybridity on the performance of machine learning techniques for morphological labelling and clustering. 
In particular, we analyse a dataset of morphologically related word clusters to evaluate the difference in results for concatenative and nonconcatenative clusters. We also describe research carried out in morphological labelling, with a particular focus on the verb category. Two evaluations were carried out, one using an unseen dataset, and another one using a gold standard dataset which was manually labelled. The gold standard dataset was split into concatenative and non-concatenative to analyse the difference in results between the two morphological systems.",2017
li-etal-2017-bibi,https://aclanthology.org/W17-5404,0,,,,,,,"BIBI System Description: Building with CNNs and Breaking with Deep Reinforcement Learning. This paper describes our submission to the sentiment analysis sub-task of ""Build It, Break It: The Language Edition (BIBI)"", on both the builder and breaker sides. As a builder, we use convolutional neural nets, trained on both phrase and sentence data. As a breaker, we use Q-learning to learn minimal change pairs, and apply a token substitution method automatically. We analyse the results to gauge the robustness of NLP systems.",{BIBI} System Description: Building with {CNN}s and Breaking with Deep Reinforcement Learning,"This paper describes our submission to the sentiment analysis sub-task of ""Build It, Break It: The Language Edition (BIBI)"", on both the builder and breaker sides. As a builder, we use convolutional neural nets, trained on both phrase and sentence data. As a breaker, we use Q-learning to learn minimal change pairs, and apply a token substitution method automatically. We analyse the results to gauge the robustness of NLP systems.",BIBI System Description: Building with CNNs and Breaking with Deep Reinforcement Learning,"This paper describes our submission to the sentiment analysis sub-task of ""Build It, Break It: The Language Edition (BIBI)"", on both the builder and breaker sides. As a builder, we use convolutional neural nets, trained on both phrase and sentence data. As a breaker, we use Q-learning to learn minimal change pairs, and apply a token substitution method automatically. We analyse the results to gauge the robustness of NLP systems.","We would like to thank the three anonymous reviewers for their helpful feedback and suggestions, and to Meng Fang for assisting with the implementation of the RL system.","BIBI System Description: Building with CNNs and Breaking with Deep Reinforcement Learning. This paper describes our submission to the sentiment analysis sub-task of ""Build It, Break It: The Language Edition (BIBI)"", on both the builder and breaker sides. As a builder, we use convolutional neural nets, trained on both phrase and sentence data. As a breaker, we use Q-learning to learn minimal change pairs, and apply a token substitution method automatically. We analyse the results to gauge the robustness of NLP systems.",2017
nakazawa-2015-promoting,https://aclanthology.org/2015.mtsummit-wpslt.5,1,,,,industry_innovation_infrastructure,,,"Promoting science and technology exchange using machine translation. There are plenty of useful scientific and technical documents which are written in languages other than English, and are referenced domestically. Accessing these domestic documents in other countries is very important in order to know what has been accomplished and what is needed next in the science and technology fields. However, we need to surmount the language barrier to directly access these valuable documents. One obvious way to achieve this is using machine translation systems to translate foreign documents into the users' language. Even after the long history of developing machine translation systems among East Asian languages, there is still no practical system. We have launched a project to develop practical machine translation technology for promoting science and technology exchange. As the starting point, we aim at developping Chinese ↔ Japanese practical machine translation system. In this talk, I will introduce the background, goals and status of the project. Also, I will give you the summary of the 2nd Workshop on Asian Translation (WAT2015) 1 where Chinese ↔ Japanese scientific paper translation subtasks has been carried out. Figure 1 shows the number of scientific papers in the world which are written in ""English"". We can presume that the number of papers written in each language has the similar proportion to this graph. You can see that the number of papers from China is rapidly growing in recent years, which means we have a large number of ""Chinese"" papers.",Promoting science and technology exchange using machine translation,"There are plenty of useful scientific and technical documents which are written in languages other than English, and are referenced domestically. Accessing these domestic documents in other countries is very important in order to know what has been accomplished and what is needed next in the science and technology fields. However, we need to surmount the language barrier to directly access these valuable documents. One obvious way to achieve this is using machine translation systems to translate foreign documents into the users' language. Even after the long history of developing machine translation systems among East Asian languages, there is still no practical system. We have launched a project to develop practical machine translation technology for promoting science and technology exchange. As the starting point, we aim at developping Chinese ↔ Japanese practical machine translation system. In this talk, I will introduce the background, goals and status of the project. Also, I will give you the summary of the 2nd Workshop on Asian Translation (WAT2015) 1 where Chinese ↔ Japanese scientific paper translation subtasks has been carried out. Figure 1 shows the number of scientific papers in the world which are written in ""English"". We can presume that the number of papers written in each language has the similar proportion to this graph. You can see that the number of papers from China is rapidly growing in recent years, which means we have a large number of ""Chinese"" papers.",Promoting science and technology exchange using machine translation,"There are plenty of useful scientific and technical documents which are written in languages other than English, and are referenced domestically. 
Accessing these domestic documents in other countries is very important in order to know what has been accomplished and what is needed next in the science and technology fields. However, we need to surmount the language barrier to directly access these valuable documents. One obvious way to achieve this is using machine translation systems to translate foreign documents into the users' language. Even after the long history of developing machine translation systems among East Asian languages, there is still no practical system. We have launched a project to develop practical machine translation technology for promoting science and technology exchange. As the starting point, we aim at developping Chinese ↔ Japanese practical machine translation system. In this talk, I will introduce the background, goals and status of the project. Also, I will give you the summary of the 2nd Workshop on Asian Translation (WAT2015) 1 where Chinese ↔ Japanese scientific paper translation subtasks has been carried out. Figure 1 shows the number of scientific papers in the world which are written in ""English"". We can presume that the number of papers written in each language has the similar proportion to this graph. You can see that the number of papers from China is rapidly growing in recent years, which means we have a large number of ""Chinese"" papers.",,"Promoting science and technology exchange using machine translation. There are plenty of useful scientific and technical documents which are written in languages other than English, and are referenced domestically. Accessing these domestic documents in other countries is very important in order to know what has been accomplished and what is needed next in the science and technology fields. However, we need to surmount the language barrier to directly access these valuable documents. One obvious way to achieve this is using machine translation systems to translate foreign documents into the users' language. Even after the long history of developing machine translation systems among East Asian languages, there is still no practical system. We have launched a project to develop practical machine translation technology for promoting science and technology exchange. As the starting point, we aim at developping Chinese ↔ Japanese practical machine translation system. In this talk, I will introduce the background, goals and status of the project. Also, I will give you the summary of the 2nd Workshop on Asian Translation (WAT2015) 1 where Chinese ↔ Japanese scientific paper translation subtasks has been carried out. Figure 1 shows the number of scientific papers in the world which are written in ""English"". We can presume that the number of papers written in each language has the similar proportion to this graph. You can see that the number of papers from China is rapidly growing in recent years, which means we have a large number of ""Chinese"" papers.",2015
summers-sawaf-2010-user,https://aclanthology.org/2010.amta-government.8,0,,,,,,,User-generated System for Critical Document Triage and Exploitation--Version 2011. ,User-generated System for Critical Document Triage and Exploitation{--}Version 2011,,User-generated System for Critical Document Triage and Exploitation--Version 2011,,,User-generated System for Critical Document Triage and Exploitation--Version 2011. ,2010
wang-etal-2019-multi-hop,https://aclanthology.org/D19-5813,0,,,,,,,"Do Multi-hop Readers Dream of Reasoning Chains?. General Question Answering (QA) systems over texts require the multi-hop reasoning capability, i.e. the ability to reason with information collected from multiple passages to derive the answer. In this paper we conduct a systematic analysis to assess such an ability of various existing models proposed for multi-hop QA tasks. Specifically, our analysis investigates that whether providing the full reasoning chain of multiple passages, instead of just one final passage where the answer appears, could improve the performance of the existing QA models. Surprisingly, when using the additional evidence passages, the improvements of all the existing multi-hop reading approaches are rather limited, with the highest error reduction of 5.8% on F1 (corresponding to 1.3% absolute improvement) from the BERT model. To better understand whether the reasoning chains could indeed help find correct answers, we further develop a co-matchingbased method that leads to 13.1% error reduction with passage chains when applied to two of our base readers (including BERT). Our results demonstrate the existence of the potential improvement using explicit multi-hop reasoning and the necessity to develop models with better reasoning abilities. 1 * Equal contributions. 1 Code and data released at https://github.com/ helloeve/bert-co-matching.",Do Multi-hop Readers Dream of Reasoning Chains?,"General Question Answering (QA) systems over texts require the multi-hop reasoning capability, i.e. the ability to reason with information collected from multiple passages to derive the answer. In this paper we conduct a systematic analysis to assess such an ability of various existing models proposed for multi-hop QA tasks. Specifically, our analysis investigates that whether providing the full reasoning chain of multiple passages, instead of just one final passage where the answer appears, could improve the performance of the existing QA models. Surprisingly, when using the additional evidence passages, the improvements of all the existing multi-hop reading approaches are rather limited, with the highest error reduction of 5.8% on F1 (corresponding to 1.3% absolute improvement) from the BERT model. To better understand whether the reasoning chains could indeed help find correct answers, we further develop a co-matchingbased method that leads to 13.1% error reduction with passage chains when applied to two of our base readers (including BERT). Our results demonstrate the existence of the potential improvement using explicit multi-hop reasoning and the necessity to develop models with better reasoning abilities. 1 * Equal contributions. 1 Code and data released at https://github.com/ helloeve/bert-co-matching.",Do Multi-hop Readers Dream of Reasoning Chains?,"General Question Answering (QA) systems over texts require the multi-hop reasoning capability, i.e. the ability to reason with information collected from multiple passages to derive the answer. In this paper we conduct a systematic analysis to assess such an ability of various existing models proposed for multi-hop QA tasks. Specifically, our analysis investigates that whether providing the full reasoning chain of multiple passages, instead of just one final passage where the answer appears, could improve the performance of the existing QA models. 
Surprisingly, when using the additional evidence passages, the improvements of all the existing multi-hop reading approaches are rather limited, with the highest error reduction of 5.8% on F1 (corresponding to 1.3% absolute improvement) from the BERT model. To better understand whether the reasoning chains could indeed help find correct answers, we further develop a co-matchingbased method that leads to 13.1% error reduction with passage chains when applied to two of our base readers (including BERT). Our results demonstrate the existence of the potential improvement using explicit multi-hop reasoning and the necessity to develop models with better reasoning abilities. 1 * Equal contributions. 1 Code and data released at https://github.com/ helloeve/bert-co-matching.",We thank the anonymous reviewers for their very valuable comments and suggestions.,"Do Multi-hop Readers Dream of Reasoning Chains?. General Question Answering (QA) systems over texts require the multi-hop reasoning capability, i.e. the ability to reason with information collected from multiple passages to derive the answer. In this paper we conduct a systematic analysis to assess such an ability of various existing models proposed for multi-hop QA tasks. Specifically, our analysis investigates that whether providing the full reasoning chain of multiple passages, instead of just one final passage where the answer appears, could improve the performance of the existing QA models. Surprisingly, when using the additional evidence passages, the improvements of all the existing multi-hop reading approaches are rather limited, with the highest error reduction of 5.8% on F1 (corresponding to 1.3% absolute improvement) from the BERT model. To better understand whether the reasoning chains could indeed help find correct answers, we further develop a co-matchingbased method that leads to 13.1% error reduction with passage chains when applied to two of our base readers (including BERT). Our results demonstrate the existence of the potential improvement using explicit multi-hop reasoning and the necessity to develop models with better reasoning abilities. 1 * Equal contributions. 1 Code and data released at https://github.com/ helloeve/bert-co-matching.",2019
versley-2007-antecedent,https://aclanthology.org/D07-1052,0,,,,,,,"Antecedent Selection Techniques for High-Recall Coreference Resolution. We investigate methods to improve the recall in coreference resolution by also trying to resolve those definite descriptions where no earlier mention of the referent shares the same lexical head (coreferent bridging). The problem, which is notably harder than identifying coreference relations among mentions which have the same lexical head, has been tackled with several rather different approaches, and we attempt to provide a meaningful classification along with a quantitative comparison. Based on the different merits of the methods, we discuss possibilities to improve them and show how they can be effectively combined.",Antecedent Selection Techniques for High-Recall Coreference Resolution,"We investigate methods to improve the recall in coreference resolution by also trying to resolve those definite descriptions where no earlier mention of the referent shares the same lexical head (coreferent bridging). The problem, which is notably harder than identifying coreference relations among mentions which have the same lexical head, has been tackled with several rather different approaches, and we attempt to provide a meaningful classification along with a quantitative comparison. Based on the different merits of the methods, we discuss possibilities to improve them and show how they can be effectively combined.",Antecedent Selection Techniques for High-Recall Coreference Resolution,"We investigate methods to improve the recall in coreference resolution by also trying to resolve those definite descriptions where no earlier mention of the referent shares the same lexical head (coreferent bridging). The problem, which is notably harder than identifying coreference relations among mentions which have the same lexical head, has been tackled with several rather different approaches, and we attempt to provide a meaningful classification along with a quantitative comparison. Based on the different merits of the methods, we discuss possibilities to improve them and show how they can be effectively combined.","Acknowledgements I am very grateful to Sabine Schulte im Walde, Piklu Gupta and Sandra Kübler for useful criticism of an earlier version, and to Simone Ponzetto and Michael Strube for feedback on a talk related to this paper. The research reported in this paper was supported by the Deutsche Forschungsgemeinschaft (DFG) as part of Collaborative Research Centre (Sonderforschungsbereich) 441 ""Linguistic Data Structures"".","Antecedent Selection Techniques for High-Recall Coreference Resolution. We investigate methods to improve the recall in coreference resolution by also trying to resolve those definite descriptions where no earlier mention of the referent shares the same lexical head (coreferent bridging). The problem, which is notably harder than identifying coreference relations among mentions which have the same lexical head, has been tackled with several rather different approaches, and we attempt to provide a meaningful classification along with a quantitative comparison. Based on the different merits of the methods, we discuss possibilities to improve them and show how they can be effectively combined.",2007
xu-etal-2021-probing,https://aclanthology.org/2021.naacl-main.7,0,,,,,,,"Probing Word Translations in the Transformer and Trading Decoder for Encoder Layers. Due to its effectiveness and performance, the Transformer translation model has attracted wide attention, most recently in terms of probing-based approaches. Previous work focuses on using or probing source linguistic features in the encoder. To date, the way word translation evolves in Transformer layers has not yet been investigated. Naively, one might assume that encoder layers capture source information while decoder layers translate. In this work, we show that this is not quite the case: translation already happens progressively in encoder layers and even in the input embeddings. More surprisingly, we find that some of the lower decoder layers do not actually do that much decoding. We show all of this in terms of a probing approach where we project representations of the layer analyzed to the final trained and frozen classifier level of the Transformer decoder to measure word translation accuracy. Our findings motivate and explain a Transformer configuration change: if translation already happens in the encoder layers, perhaps we can increase the number of encoder layers, while decreasing the number of decoder layers, boosting decoding speed, without loss in translation quality? Our experiments show that this is indeed the case: we can increase speed by up to a factor 2.3 with small gains in translation quality, while an 18-4 deep encoder configuration boosts translation quality by +1.42 BLEU (En-De) at a speed-up of 1.4.",Probing Word Translations in the Transformer and Trading Decoder for Encoder Layers,"Due to its effectiveness and performance, the Transformer translation model has attracted wide attention, most recently in terms of probing-based approaches. Previous work focuses on using or probing source linguistic features in the encoder. To date, the way word translation evolves in Transformer layers has not yet been investigated. Naively, one might assume that encoder layers capture source information while decoder layers translate. In this work, we show that this is not quite the case: translation already happens progressively in encoder layers and even in the input embeddings. More surprisingly, we find that some of the lower decoder layers do not actually do that much decoding. We show all of this in terms of a probing approach where we project representations of the layer analyzed to the final trained and frozen classifier level of the Transformer decoder to measure word translation accuracy. Our findings motivate and explain a Transformer configuration change: if translation already happens in the encoder layers, perhaps we can increase the number of encoder layers, while decreasing the number of decoder layers, boosting decoding speed, without loss in translation quality? Our experiments show that this is indeed the case: we can increase speed by up to a factor 2.3 with small gains in translation quality, while an 18-4 deep encoder configuration boosts translation quality by +1.42 BLEU (En-De) at a speed-up of 1.4.",Probing Word Translations in the Transformer and Trading Decoder for Encoder Layers,"Due to its effectiveness and performance, the Transformer translation model has attracted wide attention, most recently in terms of probing-based approaches. Previous work focuses on using or probing source linguistic features in the encoder. 
To date, the way word translation evolves in Transformer layers has not yet been investigated. Naively, one might assume that encoder layers capture source information while decoder layers translate. In this work, we show that this is not quite the case: translation already happens progressively in encoder layers and even in the input embeddings. More surprisingly, we find that some of the lower decoder layers do not actually do that much decoding. We show all of this in terms of a probing approach where we project representations of the layer analyzed to the final trained and frozen classifier level of the Transformer decoder to measure word translation accuracy. Our findings motivate and explain a Transformer configuration change: if translation already happens in the encoder layers, perhaps we can increase the number of encoder layers, while decreasing the number of decoder layers, boosting decoding speed, without loss in translation quality? Our experiments show that this is indeed the case: we can increase speed by up to a factor 2.3 with small gains in translation quality, while an 18-4 deep encoder configuration boosts translation quality by +1.42 BLEU (En-De) at a speed-up of 1.4.",We thank anonymous reviewers for their insightful comments. Hongfei Xu acknowledges the support of China Scholarship Council ([2018 ,"Probing Word Translations in the Transformer and Trading Decoder for Encoder Layers. Due to its effectiveness and performance, the Transformer translation model has attracted wide attention, most recently in terms of probing-based approaches. Previous work focuses on using or probing source linguistic features in the encoder. To date, the way word translation evolves in Transformer layers has not yet been investigated. Naively, one might assume that encoder layers capture source information while decoder layers translate. In this work, we show that this is not quite the case: translation already happens progressively in encoder layers and even in the input embeddings. More surprisingly, we find that some of the lower decoder layers do not actually do that much decoding. We show all of this in terms of a probing approach where we project representations of the layer analyzed to the final trained and frozen classifier level of the Transformer decoder to measure word translation accuracy. Our findings motivate and explain a Transformer configuration change: if translation already happens in the encoder layers, perhaps we can increase the number of encoder layers, while decreasing the number of decoder layers, boosting decoding speed, without loss in translation quality? Our experiments show that this is indeed the case: we can increase speed by up to a factor 2.3 with small gains in translation quality, while an 18-4 deep encoder configuration boosts translation quality by +1.42 BLEU (En-De) at a speed-up of 1.4.",2021
zhao-etal-2017-n,https://aclanthology.org/W17-5907,0,,,,,,,N-gram Model for Chinese Grammatical Error Diagnosis. Detection and correction of Chinese grammatical errors have been two of major challenges for Chinese automatic grammatical error diagnosis. This paper presents an N-gram model for automatic detection and correction of Chinese grammatical errors in NLPTEA 2017 task. The experiment results show that the proposed method is good at correction of Chinese grammatical errors.,N-gram Model for {C}hinese Grammatical Error Diagnosis,Detection and correction of Chinese grammatical errors have been two of major challenges for Chinese automatic grammatical error diagnosis. This paper presents an N-gram model for automatic detection and correction of Chinese grammatical errors in NLPTEA 2017 task. The experiment results show that the proposed method is good at correction of Chinese grammatical errors.,N-gram Model for Chinese Grammatical Error Diagnosis,Detection and correction of Chinese grammatical errors have been two of major challenges for Chinese automatic grammatical error diagnosis. This paper presents an N-gram model for automatic detection and correction of Chinese grammatical errors in NLPTEA 2017 task. The experiment results show that the proposed method is good at correction of Chinese grammatical errors.,,N-gram Model for Chinese Grammatical Error Diagnosis. Detection and correction of Chinese grammatical errors have been two of major challenges for Chinese automatic grammatical error diagnosis. This paper presents an N-gram model for automatic detection and correction of Chinese grammatical errors in NLPTEA 2017 task. The experiment results show that the proposed method is good at correction of Chinese grammatical errors.,2017
li-etal-2020-low,https://aclanthology.org/2020.ccl-1.92,0,,,,,,,"Low-Resource Text Classification via Cross-lingual Language Model Fine-tuning. Text classification tends to be difficult when data are inadequate considering the amount of manually labeled text corpora. For low-resource agglutinative languages including Uyghur, Kazakh, and Kyrgyz (UKK languages), in which words are manufactured via stems concatenated with several suffixes and stems are used as the representation of text content, this feature allows infinite derivatives vocabulary that leads to high uncertainty of writing forms and huge redundant features. There are major challenges of low-resource agglutinative text classification the lack of labeled data in a target domain and morphologic diversity of derivations in language structures. It is an effective solution which fine-tuning a pre-trained language model to provide meaningful and favorable-to-use feature extractors for downstream text classification tasks. To this end, we propose a low-resource agglutinative language model fine-tuning AgglutiF iT , specifically, we build a low-noise fine-tuning dataset by morphological analysis and stem extraction, then finetune the cross-lingual pre-training model on this dataset. Moreover, we propose an attentionbased fine-tuning strategy that better selects relevant semantic and syntactic information from the pre-trained language model and uses those features on downstream text classification tasks. We evaluate our methods on nine Uyghur, Kazakh, and Kyrgyz classification datasets, where they have significantly better performance compared with several strong baselines.",Low-Resource Text Classification via Cross-lingual Language Model Fine-tuning,"Text classification tends to be difficult when data are inadequate considering the amount of manually labeled text corpora. For low-resource agglutinative languages including Uyghur, Kazakh, and Kyrgyz (UKK languages), in which words are manufactured via stems concatenated with several suffixes and stems are used as the representation of text content, this feature allows infinite derivatives vocabulary that leads to high uncertainty of writing forms and huge redundant features. There are major challenges of low-resource agglutinative text classification the lack of labeled data in a target domain and morphologic diversity of derivations in language structures. It is an effective solution which fine-tuning a pre-trained language model to provide meaningful and favorable-to-use feature extractors for downstream text classification tasks. To this end, we propose a low-resource agglutinative language model fine-tuning AgglutiF iT , specifically, we build a low-noise fine-tuning dataset by morphological analysis and stem extraction, then finetune the cross-lingual pre-training model on this dataset. Moreover, we propose an attentionbased fine-tuning strategy that better selects relevant semantic and syntactic information from the pre-trained language model and uses those features on downstream text classification tasks. We evaluate our methods on nine Uyghur, Kazakh, and Kyrgyz classification datasets, where they have significantly better performance compared with several strong baselines.",Low-Resource Text Classification via Cross-lingual Language Model Fine-tuning,"Text classification tends to be difficult when data are inadequate considering the amount of manually labeled text corpora. 
For low-resource agglutinative languages including Uyghur, Kazakh, and Kyrgyz (UKK languages), in which words are manufactured via stems concatenated with several suffixes and stems are used as the representation of text content, this feature allows infinite derivatives vocabulary that leads to high uncertainty of writing forms and huge redundant features. There are major challenges of low-resource agglutinative text classification the lack of labeled data in a target domain and morphologic diversity of derivations in language structures. It is an effective solution which fine-tuning a pre-trained language model to provide meaningful and favorable-to-use feature extractors for downstream text classification tasks. To this end, we propose a low-resource agglutinative language model fine-tuning AgglutiF iT , specifically, we build a low-noise fine-tuning dataset by morphological analysis and stem extraction, then finetune the cross-lingual pre-training model on this dataset. Moreover, we propose an attentionbased fine-tuning strategy that better selects relevant semantic and syntactic information from the pre-trained language model and uses those features on downstream text classification tasks. We evaluate our methods on nine Uyghur, Kazakh, and Kyrgyz classification datasets, where they have significantly better performance compared with several strong baselines.",,"Low-Resource Text Classification via Cross-lingual Language Model Fine-tuning. Text classification tends to be difficult when data are inadequate considering the amount of manually labeled text corpora. For low-resource agglutinative languages including Uyghur, Kazakh, and Kyrgyz (UKK languages), in which words are manufactured via stems concatenated with several suffixes and stems are used as the representation of text content, this feature allows infinite derivatives vocabulary that leads to high uncertainty of writing forms and huge redundant features. There are major challenges of low-resource agglutinative text classification the lack of labeled data in a target domain and morphologic diversity of derivations in language structures. It is an effective solution which fine-tuning a pre-trained language model to provide meaningful and favorable-to-use feature extractors for downstream text classification tasks. To this end, we propose a low-resource agglutinative language model fine-tuning AgglutiF iT , specifically, we build a low-noise fine-tuning dataset by morphological analysis and stem extraction, then finetune the cross-lingual pre-training model on this dataset. Moreover, we propose an attentionbased fine-tuning strategy that better selects relevant semantic and syntactic information from the pre-trained language model and uses those features on downstream text classification tasks. We evaluate our methods on nine Uyghur, Kazakh, and Kyrgyz classification datasets, where they have significantly better performance compared with several strong baselines.",2020
yeh-etal-2016-grammatical,https://aclanthology.org/W16-4918,0,,,,,,,"Grammatical Error Detection Based on Machine Learning for Mandarin as Second Language Learning. Mandarin is not simple language for foreigner. Even using Mandarin as the mother tongue, they have to spend more time to learn when they were child. The following issues are the reason why causes learning problem. First, the word is envolved by Hieroglyphic. So a character can express meanings independently, but become a word has another semantic. Second, the Mandarin's grammars have flexible rule and special usage. Therefore, the common grammatical errors can classify to missing, redundant, selection and disorder. In this paper, we proposed the structure of the Recurrent Neural Networks using Long Short-term memory (RNN-LSTM). It can detect the error type from the foreign learner writing. The features based on the word vector and part-of-speech vector. In the test data found that our method in the detection level of recall better than the others, even as high as 0.9755. That is because we give the possibility of greater choice in detecting errors.",Grammatical Error Detection Based on Machine Learning for {M}andarin as Second Language Learning,"Mandarin is not simple language for foreigner. Even using Mandarin as the mother tongue, they have to spend more time to learn when they were child. The following issues are the reason why causes learning problem. First, the word is envolved by Hieroglyphic. So a character can express meanings independently, but become a word has another semantic. Second, the Mandarin's grammars have flexible rule and special usage. Therefore, the common grammatical errors can classify to missing, redundant, selection and disorder. In this paper, we proposed the structure of the Recurrent Neural Networks using Long Short-term memory (RNN-LSTM). It can detect the error type from the foreign learner writing. The features based on the word vector and part-of-speech vector. In the test data found that our method in the detection level of recall better than the others, even as high as 0.9755. That is because we give the possibility of greater choice in detecting errors.",Grammatical Error Detection Based on Machine Learning for Mandarin as Second Language Learning,"Mandarin is not simple language for foreigner. Even using Mandarin as the mother tongue, they have to spend more time to learn when they were child. The following issues are the reason why causes learning problem. First, the word is envolved by Hieroglyphic. So a character can express meanings independently, but become a word has another semantic. Second, the Mandarin's grammars have flexible rule and special usage. Therefore, the common grammatical errors can classify to missing, redundant, selection and disorder. In this paper, we proposed the structure of the Recurrent Neural Networks using Long Short-term memory (RNN-LSTM). It can detect the error type from the foreign learner writing. The features based on the word vector and part-of-speech vector. In the test data found that our method in the detection level of recall better than the others, even as high as 0.9755. That is because we give the possibility of greater choice in detecting errors.",,"Grammatical Error Detection Based on Machine Learning for Mandarin as Second Language Learning. Mandarin is not simple language for foreigner. Even using Mandarin as the mother tongue, they have to spend more time to learn when they were child. 
The following issues are the reason why causes learning problem. First, the word is envolved by Hieroglyphic. So a character can express meanings independently, but become a word has another semantic. Second, the Mandarin's grammars have flexible rule and special usage. Therefore, the common grammatical errors can classify to missing, redundant, selection and disorder. In this paper, we proposed the structure of the Recurrent Neural Networks using Long Short-term memory (RNN-LSTM). It can detect the error type from the foreign learner writing. The features based on the word vector and part-of-speech vector. In the test data found that our method in the detection level of recall better than the others, even as high as 0.9755. That is because we give the possibility of greater choice in detecting errors.",2016
gildea-etal-2006-factoring,https://aclanthology.org/P06-2036,0,,,,,,,"Factoring Synchronous Grammars by Sorting. Synchronous Context-Free Grammars (SCFGs) have been successfully exploited as translation models in machine translation applications. When parsing with an SCFG, computational complexity grows exponentially with the length of the rules, in the worst case. In this paper we examine the problem of factorizing each rule of an input SCFG to a generatively equivalent set of rules, each having the smallest possible length. Our algorithm works in time O(n log n), for each rule of length n. This improves upon previous results and solves an open problem about recognizing permutations that can be factored.",Factoring Synchronous Grammars by Sorting,"Synchronous Context-Free Grammars (SCFGs) have been successfully exploited as translation models in machine translation applications. When parsing with an SCFG, computational complexity grows exponentially with the length of the rules, in the worst case. In this paper we examine the problem of factorizing each rule of an input SCFG to a generatively equivalent set of rules, each having the smallest possible length. Our algorithm works in time O(n log n), for each rule of length n. This improves upon previous results and solves an open problem about recognizing permutations that can be factored.",Factoring Synchronous Grammars by Sorting,"Synchronous Context-Free Grammars (SCFGs) have been successfully exploited as translation models in machine translation applications. When parsing with an SCFG, computational complexity grows exponentially with the length of the rules, in the worst case. In this paper we examine the problem of factorizing each rule of an input SCFG to a generatively equivalent set of rules, each having the smallest possible length. Our algorithm works in time O(n log n), for each rule of length n. This improves upon previous results and solves an open problem about recognizing permutations that can be factored.",Acknowledgments This work was partially supported by NSF ITR IIS-09325646 and NSF ITR IIS-0428020.,"Factoring Synchronous Grammars by Sorting. Synchronous Context-Free Grammars (SCFGs) have been successfully exploited as translation models in machine translation applications. When parsing with an SCFG, computational complexity grows exponentially with the length of the rules, in the worst case. In this paper we examine the problem of factorizing each rule of an input SCFG to a generatively equivalent set of rules, each having the smallest possible length. Our algorithm works in time O(n log n), for each rule of length n. This improves upon previous results and solves an open problem about recognizing permutations that can be factored.",2006
han-etal-2004-subcategorization,https://aclanthology.org/C04-1104,0,,,,,,,"Subcategorization Acquisition and Evaluation for Chinese Verbs. This paper describes the technology and an experiment of subcategorization acquisition for Chinese verbs. The SCF hypotheses are generated by means of linguistic heuristic information and filtered via statistical methods. Evaluation on the acquisition of 20 multi-pattern verbs shows that our experiment achieved the similar precision and recall with former researches. Besides, simple application of the acquired lexicon to a PCFG parser indicates great potentialities of subcategorization information in the fields of NLP.",Subcategorization Acquisition and Evaluation for {C}hinese Verbs,"This paper describes the technology and an experiment of subcategorization acquisition for Chinese verbs. The SCF hypotheses are generated by means of linguistic heuristic information and filtered via statistical methods. Evaluation on the acquisition of 20 multi-pattern verbs shows that our experiment achieved the similar precision and recall with former researches. Besides, simple application of the acquired lexicon to a PCFG parser indicates great potentialities of subcategorization information in the fields of NLP.",Subcategorization Acquisition and Evaluation for Chinese Verbs,"This paper describes the technology and an experiment of subcategorization acquisition for Chinese verbs. The SCF hypotheses are generated by means of linguistic heuristic information and filtered via statistical methods. Evaluation on the acquisition of 20 multi-pattern verbs shows that our experiment achieved the similar precision and recall with former researches. Besides, simple application of the acquired lexicon to a PCFG parser indicates great potentialities of subcategorization information in the fields of NLP.",,"Subcategorization Acquisition and Evaluation for Chinese Verbs. This paper describes the technology and an experiment of subcategorization acquisition for Chinese verbs. The SCF hypotheses are generated by means of linguistic heuristic information and filtered via statistical methods. Evaluation on the acquisition of 20 multi-pattern verbs shows that our experiment achieved the similar precision and recall with former researches. Besides, simple application of the acquired lexicon to a PCFG parser indicates great potentialities of subcategorization information in the fields of NLP.",2004
papay-etal-2018-addressing,https://aclanthology.org/W18-1204,0,,,,,,,"Addressing Low-Resource Scenarios with Character-aware Embeddings. Most modern approaches to computing word embeddings assume the availability of text corpora with billions of words. In this paper, we explore a setup where only corpora with millions of words are available, and many words in any new text are out of vocabulary. This setup is both of practical interest-modeling the situation for specific domains and low-resource languages-and of psycholinguistic interest, since it corresponds much more closely to the actual experiences and challenges of human language learning and use. We evaluate skip-gram word embeddings and two types of character-based embeddings on word relatedness prediction. On large corpora, performance of both model types is equal for frequent words, but character awareness already helps for infrequent words. Consistently, on small corpora, the characterbased models perform overall better than skipgrams. The concatenation of different embeddings performs best on small corpora and robustly on large corpora.",Addressing Low-Resource Scenarios with Character-aware Embeddings,"Most modern approaches to computing word embeddings assume the availability of text corpora with billions of words. In this paper, we explore a setup where only corpora with millions of words are available, and many words in any new text are out of vocabulary. This setup is both of practical interest-modeling the situation for specific domains and low-resource languages-and of psycholinguistic interest, since it corresponds much more closely to the actual experiences and challenges of human language learning and use. We evaluate skip-gram word embeddings and two types of character-based embeddings on word relatedness prediction. On large corpora, performance of both model types is equal for frequent words, but character awareness already helps for infrequent words. Consistently, on small corpora, the characterbased models perform overall better than skipgrams. The concatenation of different embeddings performs best on small corpora and robustly on large corpora.",Addressing Low-Resource Scenarios with Character-aware Embeddings,"Most modern approaches to computing word embeddings assume the availability of text corpora with billions of words. In this paper, we explore a setup where only corpora with millions of words are available, and many words in any new text are out of vocabulary. This setup is both of practical interest-modeling the situation for specific domains and low-resource languages-and of psycholinguistic interest, since it corresponds much more closely to the actual experiences and challenges of human language learning and use. We evaluate skip-gram word embeddings and two types of character-based embeddings on word relatedness prediction. On large corpora, performance of both model types is equal for frequent words, but character awareness already helps for infrequent words. Consistently, on small corpora, the characterbased models perform overall better than skipgrams. The concatenation of different embeddings performs best on small corpora and robustly on large corpora.",Acknowledgments. Partial funding for this study was provided by Deutsche Forschungsgemeinschaft (project PA 1956/4-1).,"Addressing Low-Resource Scenarios with Character-aware Embeddings. Most modern approaches to computing word embeddings assume the availability of text corpora with billions of words. 
In this paper, we explore a setup where only corpora with millions of words are available, and many words in any new text are out of vocabulary. This setup is both of practical interest-modeling the situation for specific domains and low-resource languages-and of psycholinguistic interest, since it corresponds much more closely to the actual experiences and challenges of human language learning and use. We evaluate skip-gram word embeddings and two types of character-based embeddings on word relatedness prediction. On large corpora, performance of both model types is equal for frequent words, but character awareness already helps for infrequent words. Consistently, on small corpora, the characterbased models perform overall better than skipgrams. The concatenation of different embeddings performs best on small corpora and robustly on large corpora.",2018
georgi-etal-2015-enriching,https://aclanthology.org/W15-3709,0,,,,,,,"Enriching Interlinear Text using Automatically Constructed Annotators. In this paper, we will demonstrate a system that shows great promise for creating Part-of-Speech taggers for languages with little to no curated resources available, and which needs no expert involvement. Interlinear Glossed Text (IGT) is a resource which is available for over 1,000 languages as part of the Online Database of INterlinear text (ODIN) (Lewis and Xia, 2010). Using nothing more than IGT from this database and a classification-based projection approach tailored for IGT, we will show that it is feasible to train reasonably performing annotators of interlinear text using projected annotations for potentially hundreds of world's languages. Doing so can facilitate automatic enrichment of interlinear resources to aid the field of linguistics.",Enriching Interlinear Text using Automatically Constructed Annotators,"In this paper, we will demonstrate a system that shows great promise for creating Part-of-Speech taggers for languages with little to no curated resources available, and which needs no expert involvement. Interlinear Glossed Text (IGT) is a resource which is available for over 1,000 languages as part of the Online Database of INterlinear text (ODIN) (Lewis and Xia, 2010). Using nothing more than IGT from this database and a classification-based projection approach tailored for IGT, we will show that it is feasible to train reasonably performing annotators of interlinear text using projected annotations for potentially hundreds of world's languages. Doing so can facilitate automatic enrichment of interlinear resources to aid the field of linguistics.",Enriching Interlinear Text using Automatically Constructed Annotators,"In this paper, we will demonstrate a system that shows great promise for creating Part-of-Speech taggers for languages with little to no curated resources available, and which needs no expert involvement. Interlinear Glossed Text (IGT) is a resource which is available for over 1,000 languages as part of the Online Database of INterlinear text (ODIN) (Lewis and Xia, 2010). Using nothing more than IGT from this database and a classification-based projection approach tailored for IGT, we will show that it is feasible to train reasonably performing annotators of interlinear text using projected annotations for potentially hundreds of world's languages. Doing so can facilitate automatic enrichment of interlinear resources to aid the field of linguistics.","This work is supported by the National Science Foundation Grant BCS-0748919. We would also like to thank Balthasar Bickel and his team for allowing us to use the Chintang data set in our experiments, and our three anonymous reviewers for the helpful feedback.","Enriching Interlinear Text using Automatically Constructed Annotators. In this paper, we will demonstrate a system that shows great promise for creating Part-of-Speech taggers for languages with little to no curated resources available, and which needs no expert involvement. Interlinear Glossed Text (IGT) is a resource which is available for over 1,000 languages as part of the Online Database of INterlinear text (ODIN) (Lewis and Xia, 2010). 
Using nothing more than IGT from this database and a classification-based projection approach tailored for IGT, we will show that it is feasible to train reasonably performing annotators of interlinear text using projected annotations for potentially hundreds of world's languages. Doing so can facilitate automatic enrichment of interlinear resources to aid the field of linguistics.",2015
wahlster-etal-1978-glancing,https://aclanthology.org/J78-3004,0,,,,,,,"Glancing, Referring and Explaining in the Dialogue System HAM-RPM. Project: 'Simulation of Language Understanding', Germanisches Seminar der Universität Hamburg, von-Melle-Park 6, D-2000 Hamburg 13, West Germany. SUMMARY: This paper focusses on three components of the dialogue system HAM-RPM, which converses in natural language about visible scenes. First, it is demonstrated how the system's communicative competence is enhanced by its imitation of human visual-search processes. The approach taken to noun-phrase resolution is then described, and an algorithm for the generation of noun phrases is illustrated with a series of examples. Finally, the system's ability to explain its own reasoning is discussed, with emphasis on the novel aspects of its implementation.","Glancing, Referring and Explaining in the Dialogue System {HAM-RPM}","Project: 'Simulation of Language Understanding', Germanisches Seminar der Universität Hamburg, von-Melle-Park 6, D-2000 Hamburg 13, West Germany. SUMMARY: This paper focusses on three components of the dialogue system HAM-RPM, which converses in natural language about visible scenes. First, it is demonstrated how the system's communicative competence is enhanced by its imitation of human visual-search processes. The approach taken to noun-phrase resolution is then described, and an algorithm for the generation of noun phrases is illustrated with a series of examples. Finally, the system's ability to explain its own reasoning is discussed, with emphasis on the novel aspects of its implementation.","Glancing, Referring and Explaining in the Dialogue System HAM-RPM","Project: 'Simulation of Language Understanding', Germanisches Seminar der Universität Hamburg, von-Melle-Park 6, D-2000 Hamburg 13, West Germany. SUMMARY: This paper focusses on three components of the dialogue system HAM-RPM, which converses in natural language about visible scenes. First, it is demonstrated how the system's communicative competence is enhanced by its imitation of human visual-search processes. The approach taken to noun-phrase resolution is then described, and an algorithm for the generation of noun phrases is illustrated with a series of examples. Finally, the system's ability to explain its own reasoning is discussed, with emphasis on the novel aspects of its implementation.",,"Glancing, Referring and Explaining in the Dialogue System HAM-RPM. Project: 'Simulation of Language Understanding', Germanisches Seminar der Universität Hamburg, von-Melle-Park 6, D-2000 Hamburg 13, West Germany. SUMMARY: This paper focusses on three components of the dialogue system HAM-RPM, which converses in natural language about visible scenes. First, it is demonstrated how the system's communicative competence is enhanced by its imitation of human visual-search processes. 
The approach taken to noun-phrase resolution is then described, and an algorithm for the generation of noun phrases is illustrated with a series of examples. Finally, the system's ability to explain its own reasoning is discussed, with emphasis on the novel aspects of its implementation.",1978
molins-lapalme-2015-jsrealb,https://aclanthology.org/W15-4719,0,,,,,,,"JSrealB: A Bilingual Text Realizer for Web Programming. JSrealB is an English and French text realizer written in JavaScript to ease its integration in web applications. The realization engine is mainly rule-based. Table driven rules are defined for inflection and algorithmic propagation rules, for agreements. It allows its user to build a variety of French and English expressions and sentences from a single specification to produce dynamic output depending on the content of a web page.",{JS}real{B}: A Bilingual Text Realizer for Web Programming,"JSrealB is an English and French text realizer written in JavaScript to ease its integration in web applications. The realization engine is mainly rule-based. Table driven rules are defined for inflection and algorithmic propagation rules, for agreements. It allows its user to build a variety of French and English expressions and sentences from a single specification to produce dynamic output depending on the content of a web page.",JSrealB: A Bilingual Text Realizer for Web Programming,"JSrealB is an English and French text realizer written in JavaScript to ease its integration in web applications. The realization engine is mainly rule-based. Table driven rules are defined for inflection and algorithmic propagation rules, for agreements. It allows its user to build a variety of French and English expressions and sentences from a single specification to produce dynamic output depending on the content of a web page.",,"JSrealB: A Bilingual Text Realizer for Web Programming. JSrealB is an English and French text realizer written in JavaScript to ease its integration in web applications. The realization engine is mainly rule-based. Table driven rules are defined for inflection and algorithmic propagation rules, for agreements. It allows its user to build a variety of French and English expressions and sentences from a single specification to produce dynamic output depending on the content of a web page.",2015
steedman-2013-robust,https://aclanthology.org/U13-1001,0,,,,,,,"Robust Computational Semantics. Practical tasks like question answering and machine translation ultimately require computing meaning representations that support inference. Standard linguistic accounts of meaning are impracticable for such purposes, both because they assume nonmonotonic operations such as quantifier movement, and because they lack a representation for the meaning of content words that supports efficient computation of entailment. I'll discuss practical solutions to some of these problems within a near-context free grammar formalism for a working wide-coverage parser, in current work with Mike Lewis, and show how these solutions can be usefully applied in NLP tasks.",Robust Computational Semantics,"Practical tasks like question answering and machine translation ultimately require computing meaning representations that support inference. Standard linguistic accounts of meaning are impracticable for such purposes, both because they assume nonmonotonic operations such as quantifier movement, and because they lack a representation for the meaning of content words that supports efficient computation of entailment. I'll discuss practical solutions to some of these problems within a near-context free grammar formalism for a working wide-coverage parser, in current work with Mike Lewis, and show how these solutions can be usefully applied in NLP tasks.",Robust Computational Semantics,"Practical tasks like question answering and machine translation ultimately require computing meaning representations that support inference. Standard linguistic accounts of meaning are impracticable for such purposes, both because they assume nonmonotonic operations such as quantifier movement, and because they lack a representation for the meaning of content words that supports efficient computation of entailment. I'll discuss practical solutions to some of these problems within a near-context free grammar formalism for a working wide-coverage parser, in current work with Mike Lewis, and show how these solutions can be usefully applied in NLP tasks.",,"Robust Computational Semantics. Practical tasks like question answering and machine translation ultimately require computing meaning representations that support inference. Standard linguistic accounts of meaning are impracticable for such purposes, both because they assume nonmonotonic operations such as quantifier movement, and because they lack a representation for the meaning of content words that supports efficient computation of entailment. I'll discuss practical solutions to some of these problems within a near-context free grammar formalism for a working wide-coverage parser, in current work with Mike Lewis, and show how these solutions can be usefully applied in NLP tasks.",2013
coto-solano-etal-2021-towards,https://aclanthology.org/2021.udw-1.2,0,,,,,,,"Towards Universal Dependencies for Bribri. This paper presents a first attempt to apply Universal Dependencies (Nivre et al., 2016; de Marneffe et al., 2021) to Bribri, an Indigenous language from Costa Rica belonging to the Chibchan family. There is limited previous work on Bribri NLP, so we also present a proposal for a dependency parser, as well as a listing of structures that were challenging to parse (e.g. flexible word order, verbal sequences, arguments of intransitive verbs and mismatches between the tense systems of Bribri and UD). We also list some of the challenges in performing NLP with an extremely low-resource Indigenous language, including issues with tokenization, data normalization and the training of tools like POS taggers which are necessary for the parsing. In total we collected 150 sentences (760 words) from publicly available sources like grammar books and corpora. We then used a context-free grammar for the initial parse, and then applied the headfloating algorithm in Xia and Palmer (2001) to automatically generate dependency parses. This work is a first step towards building a UD treebank for Bribri, and we hope to use this tool to improve the documentation of the language and develop language-learning materials and NLP tools like chatbots and question answering-systems. Resumen Este artículo presenta un primer intento de aplicar Dependencias Universales (Nivre et al., 2016; de Marneffe et al., 2021) al bribri, una lengua indígena chibchense de Costa Rica. Dado el limitado trabajo existente en procesamiento de lenguaje natural (PLN) en bribri incluimos también una propuesta para un analizador sintáctico de dependencias, así como una lista de estructuras difíciles de analizar (e.g. palabras con orden flexible, secuencias verbales, argumentos de verbos intransitivos y diferencias entre el sistema verbal del bribri y los rasgos morfológicos de UD). También mencionamos algunos retos del PLN en lenguas indígenas extremadamente bajas en recursos, como la tokenización, la normalización de los datos y el entrenamiento de herramientas como el etiquetado gramatical, necesario para el análisis sintáctico. Se recolectaron 150 oraciones (760 palabras) de fuentes públicas como gramáticas y corpus y se usó una gramática libre de contexto para el análisis inicial. Luego se aplicó el algoritmo de flotación de cabezas de Xia y Palmer (2001) para generar automáticamente los análisis sintácticos de dependencias. Este es el primer paso hacia la construcción de un treebank de dependencias en bribri. Esperamos usar esta herramienta para mejorar la documentación de la lengua y desarrollar materiales de aprendizaje de la lengua y herramientas de PLN como chatbots y sistemas de pregunta-respuesta.",Towards {U}niversal {D}ependencies for {B}ribri,"This paper presents a first attempt to apply Universal Dependencies (Nivre et al., 2016; de Marneffe et al., 2021) to Bribri, an Indigenous language from Costa Rica belonging to the Chibchan family. There is limited previous work on Bribri NLP, so we also present a proposal for a dependency parser, as well as a listing of structures that were challenging to parse (e.g. flexible word order, verbal sequences, arguments of intransitive verbs and mismatches between the tense systems of Bribri and UD). 
We also list some of the challenges in performing NLP with an extremely low-resource Indigenous language, including issues with tokenization, data normalization and the training of tools like POS taggers which are necessary for the parsing. In total we collected 150 sentences (760 words) from publicly available sources like grammar books and corpora. We then used a context-free grammar for the initial parse, and then applied the headfloating algorithm in Xia and Palmer (2001) to automatically generate dependency parses. This work is a first step towards building a UD treebank for Bribri, and we hope to use this tool to improve the documentation of the language and develop language-learning materials and NLP tools like chatbots and question answering-systems. Resumen Este artículo presenta un primer intento de aplicar Dependencias Universales (Nivre et al., 2016; de Marneffe et al., 2021) al bribri, una lengua indígena chibchense de Costa Rica. Dado el limitado trabajo existente en procesamiento de lenguaje natural (PLN) en bribri incluimos también una propuesta para un analizador sintáctico de dependencias, así como una lista de estructuras difíciles de analizar (e.g. palabras con orden flexible, secuencias verbales, argumentos de verbos intransitivos y diferencias entre el sistema verbal del bribri y los rasgos morfológicos de UD). También mencionamos algunos retos del PLN en lenguas indígenas extremadamente bajas en recursos, como la tokenización, la normalización de los datos y el entrenamiento de herramientas como el etiquetado gramatical, necesario para el análisis sintáctico. Se recolectaron 150 oraciones (760 palabras) de fuentes públicas como gramáticas y corpus y se usó una gramática libre de contexto para el análisis inicial. Luego se aplicó el algoritmo de flotación de cabezas de Xia y Palmer (2001) para generar automáticamente los análisis sintácticos de dependencias. Este es el primer paso hacia la construcción de un treebank de dependencias en bribri. Esperamos usar esta herramienta para mejorar la documentación de la lengua y desarrollar materiales de aprendizaje de la lengua y herramientas de PLN como chatbots y sistemas de pregunta-respuesta.",Towards Universal Dependencies for Bribri,"This paper presents a first attempt to apply Universal Dependencies (Nivre et al., 2016; de Marneffe et al., 2021) to Bribri, an Indigenous language from Costa Rica belonging to the Chibchan family. There is limited previous work on Bribri NLP, so we also present a proposal for a dependency parser, as well as a listing of structures that were challenging to parse (e.g. flexible word order, verbal sequences, arguments of intransitive verbs and mismatches between the tense systems of Bribri and UD). We also list some of the challenges in performing NLP with an extremely low-resource Indigenous language, including issues with tokenization, data normalization and the training of tools like POS taggers which are necessary for the parsing. In total we collected 150 sentences (760 words) from publicly available sources like grammar books and corpora. We then used a context-free grammar for the initial parse, and then applied the headfloating algorithm in Xia and Palmer (2001) to automatically generate dependency parses. This work is a first step towards building a UD treebank for Bribri, and we hope to use this tool to improve the documentation of the language and develop language-learning materials and NLP tools like chatbots and question answering-systems. 
Resumen Este artículo presenta un primer intento de aplicar Dependencias Universales (Nivre et al., 2016; de Marneffe et al., 2021) al bribri, una lengua indígena chibchense de Costa Rica. Dado el limitado trabajo existente en procesamiento de lenguaje natural (PLN) en bribri incluimos también una propuesta para un analizador sintáctico de dependencias, así como una lista de estructuras difíciles de analizar (e.g. palabras con orden flexible, secuencias verbales, argumentos de verbos intransitivos y diferencias entre el sistema verbal del bribri y los rasgos morfológicos de UD). También mencionamos algunos retos del PLN en lenguas indígenas extremadamente bajas en recursos, como la tokenización, la normalización de los datos y el entrenamiento de herramientas como el etiquetado gramatical, necesario para el análisis sintáctico. Se recolectaron 150 oraciones (760 palabras) de fuentes públicas como gramáticas y corpus y se usó una gramática libre de contexto para el análisis inicial. Luego se aplicó el algoritmo de flotación de cabezas de Xia y Palmer (2001) para generar automáticamente los análisis sintácticos de dependencias. Este es el primer paso hacia la construcción de un treebank de dependencias en bribri. Esperamos usar esta herramienta para mejorar la documentación de la lengua y desarrollar materiales de aprendizaje de la lengua y herramientas de PLN como chatbots y sistemas de pregunta-respuesta.",,"Towards Universal Dependencies for Bribri. This paper presents a first attempt to apply Universal Dependencies (Nivre et al., 2016; de Marneffe et al., 2021) to Bribri, an Indigenous language from Costa Rica belonging to the Chibchan family. There is limited previous work on Bribri NLP, so we also present a proposal for a dependency parser, as well as a listing of structures that were challenging to parse (e.g. flexible word order, verbal sequences, arguments of intransitive verbs and mismatches between the tense systems of Bribri and UD). We also list some of the challenges in performing NLP with an extremely low-resource Indigenous language, including issues with tokenization, data normalization and the training of tools like POS taggers which are necessary for the parsing. In total we collected 150 sentences (760 words) from publicly available sources like grammar books and corpora. We then used a context-free grammar for the initial parse, and then applied the headfloating algorithm in Xia and Palmer (2001) to automatically generate dependency parses. This work is a first step towards building a UD treebank for Bribri, and we hope to use this tool to improve the documentation of the language and develop language-learning materials and NLP tools like chatbots and question answering-systems. Resumen Este artículo presenta un primer intento de aplicar Dependencias Universales (Nivre et al., 2016; de Marneffe et al., 2021) al bribri, una lengua indígena chibchense de Costa Rica. Dado el limitado trabajo existente en procesamiento de lenguaje natural (PLN) en bribri incluimos también una propuesta para un analizador sintáctico de dependencias, así como una lista de estructuras difíciles de analizar (e.g. palabras con orden flexible, secuencias verbales, argumentos de verbos intransitivos y diferencias entre el sistema verbal del bribri y los rasgos morfológicos de UD). 
También mencionamos algunos retos del PLN en lenguas indígenas extremadamente bajas en recursos, como la tokenización, la normalización de los datos y el entrenamiento de herramientas como el etiquetado gramatical, necesario para el análisis sintáctico. Se recolectaron 150 oraciones (760 palabras) de fuentes públicas como gramáticas y corpus y se usó una gramática libre de contexto para el análisis inicial. Luego se aplicó el algoritmo de flotación de cabezas de Xia y Palmer (2001) para generar automáticamente los análisis sintácticos de dependencias. Este es el primer paso hacia la construcción de un treebank de dependencias en bribri. Esperamos usar esta herramienta para mejorar la documentación de la lengua y desarrollar materiales de aprendizaje de la lengua y herramientas de PLN como chatbots y sistemas de pregunta-respuesta.",2021
rhyne-2020-reconciling,https://aclanthology.org/2020.scil-1.51,0,,,,,,,"Reconciling historical data and modern computational models in corpus creation. We live in a time of unprecedented access to linguistic data, from audio recordings to corpora of billions of words. Linguists have used these resources to advance their research and understanding of language. Historical linguistics, despite being the oldest linguistic subfield, has lagged behind in this regard. However, this is due to several unique challenges that face the subfield. Historical data is plagued by two problems: a lack of overall data due to the ravages of time and a lack of model-ready data that have gone through standard NLP processing. Barring the discovery of more texts, the former issue cannot be solved; the latter can, though it is time-consuming and resourceintensive. These problems have only begun to be addressed for well-documented language families like Indo-European, but even within these progress is slow. There have been numerous advances in synchronic models for basic NLP tasks like POS and morphological tagging. However, modern models are not designed to work with historical data: they depend on large volumes of data and pretagged training sets that are not available for the majority of historical languages. Some have found success with methods that are designed to imitate traditional historical approaches, e.g. (Bouchard-Côté et al., 2013; McMahon and McMahon, 2003; Nakleh et al., 2005), but, if we intend to use stateof-the-art computational tools, they are essentially incompatible. This is an important challenge that computational historical linguists must address if they are going to meet the standards set by both modern corpora and historical analyses. This paper approaches the issue by treating historical data in the same way as a low-resource language (Fang",Reconciling historical data and modern computational models in corpus creation,"We live in a time of unprecedented access to linguistic data, from audio recordings to corpora of billions of words. Linguists have used these resources to advance their research and understanding of language. Historical linguistics, despite being the oldest linguistic subfield, has lagged behind in this regard. However, this is due to several unique challenges that face the subfield. Historical data is plagued by two problems: a lack of overall data due to the ravages of time and a lack of model-ready data that have gone through standard NLP processing. Barring the discovery of more texts, the former issue cannot be solved; the latter can, though it is time-consuming and resourceintensive. These problems have only begun to be addressed for well-documented language families like Indo-European, but even within these progress is slow. There have been numerous advances in synchronic models for basic NLP tasks like POS and morphological tagging. However, modern models are not designed to work with historical data: they depend on large volumes of data and pretagged training sets that are not available for the majority of historical languages. Some have found success with methods that are designed to imitate traditional historical approaches, e.g. (Bouchard-Côté et al., 2013; McMahon and McMahon, 2003; Nakleh et al., 2005), but, if we intend to use stateof-the-art computational tools, they are essentially incompatible. 
This is an important challenge that computational historical linguists must address if they are going to meet the standards set by both modern corpora and historical analyses. This paper approaches the issue by treating historical data in the same way as a low-resource language (Fang",Reconciling historical data and modern computational models in corpus creation,"We live in a time of unprecedented access to linguistic data, from audio recordings to corpora of billions of words. Linguists have used these resources to advance their research and understanding of language. Historical linguistics, despite being the oldest linguistic subfield, has lagged behind in this regard. However, this is due to several unique challenges that face the subfield. Historical data is plagued by two problems: a lack of overall data due to the ravages of time and a lack of model-ready data that have gone through standard NLP processing. Barring the discovery of more texts, the former issue cannot be solved; the latter can, though it is time-consuming and resourceintensive. These problems have only begun to be addressed for well-documented language families like Indo-European, but even within these progress is slow. There have been numerous advances in synchronic models for basic NLP tasks like POS and morphological tagging. However, modern models are not designed to work with historical data: they depend on large volumes of data and pretagged training sets that are not available for the majority of historical languages. Some have found success with methods that are designed to imitate traditional historical approaches, e.g. (Bouchard-Côté et al., 2013; McMahon and McMahon, 2003; Nakleh et al., 2005), but, if we intend to use stateof-the-art computational tools, they are essentially incompatible. This is an important challenge that computational historical linguists must address if they are going to meet the standards set by both modern corpora and historical analyses. This paper approaches the issue by treating historical data in the same way as a low-resource language (Fang",,"Reconciling historical data and modern computational models in corpus creation. We live in a time of unprecedented access to linguistic data, from audio recordings to corpora of billions of words. Linguists have used these resources to advance their research and understanding of language. Historical linguistics, despite being the oldest linguistic subfield, has lagged behind in this regard. However, this is due to several unique challenges that face the subfield. Historical data is plagued by two problems: a lack of overall data due to the ravages of time and a lack of model-ready data that have gone through standard NLP processing. Barring the discovery of more texts, the former issue cannot be solved; the latter can, though it is time-consuming and resourceintensive. These problems have only begun to be addressed for well-documented language families like Indo-European, but even within these progress is slow. There have been numerous advances in synchronic models for basic NLP tasks like POS and morphological tagging. However, modern models are not designed to work with historical data: they depend on large volumes of data and pretagged training sets that are not available for the majority of historical languages. Some have found success with methods that are designed to imitate traditional historical approaches, e.g. 
(Bouchard-Côté et al., 2013; McMahon and McMahon, 2003; Nakleh et al., 2005), but, if we intend to use stateof-the-art computational tools, they are essentially incompatible. This is an important challenge that computational historical linguists must address if they are going to meet the standards set by both modern corpora and historical analyses. This paper approaches the issue by treating historical data in the same way as a low-resource language (Fang",2020
surdeanu-etal-2015-two,https://aclanthology.org/N15-3001,0,,,,,,,"Two Practical Rhetorical Structure Theory Parsers. We describe the design, development, and API for two discourse parsers for Rhetorical Structure Theory. The two parsers use the same underlying framework, but one uses features that rely on dependency syntax, produced by a fast shift-reduce parser, whereas the other uses a richer feature space, including both constituent-and dependency-syntax and coreference information, produced by the Stanford CoreNLP toolkit. Both parsers obtain state-of-the-art performance, and use a very simple API consisting of, minimally, two lines of Scala code. We accompany this code with a visualization library that runs the two parsers in parallel, and displays the two generated discourse trees side by side, which provides an intuitive way of comparing the two parsers.",Two Practical {R}hetorical {S}tructure {T}heory Parsers,"We describe the design, development, and API for two discourse parsers for Rhetorical Structure Theory. The two parsers use the same underlying framework, but one uses features that rely on dependency syntax, produced by a fast shift-reduce parser, whereas the other uses a richer feature space, including both constituent-and dependency-syntax and coreference information, produced by the Stanford CoreNLP toolkit. Both parsers obtain state-of-the-art performance, and use a very simple API consisting of, minimally, two lines of Scala code. We accompany this code with a visualization library that runs the two parsers in parallel, and displays the two generated discourse trees side by side, which provides an intuitive way of comparing the two parsers.",Two Practical Rhetorical Structure Theory Parsers,"We describe the design, development, and API for two discourse parsers for Rhetorical Structure Theory. The two parsers use the same underlying framework, but one uses features that rely on dependency syntax, produced by a fast shift-reduce parser, whereas the other uses a richer feature space, including both constituent-and dependency-syntax and coreference information, produced by the Stanford CoreNLP toolkit. Both parsers obtain state-of-the-art performance, and use a very simple API consisting of, minimally, two lines of Scala code. We accompany this code with a visualization library that runs the two parsers in parallel, and displays the two generated discourse trees side by side, which provides an intuitive way of comparing the two parsers.",This work was funded by the DARPA Big Mechanism program under ARO contract W911NF-14-1-0395.,"Two Practical Rhetorical Structure Theory Parsers. We describe the design, development, and API for two discourse parsers for Rhetorical Structure Theory. The two parsers use the same underlying framework, but one uses features that rely on dependency syntax, produced by a fast shift-reduce parser, whereas the other uses a richer feature space, including both constituent-and dependency-syntax and coreference information, produced by the Stanford CoreNLP toolkit. Both parsers obtain state-of-the-art performance, and use a very simple API consisting of, minimally, two lines of Scala code. We accompany this code with a visualization library that runs the two parsers in parallel, and displays the two generated discourse trees side by side, which provides an intuitive way of comparing the two parsers.",2015
cercas-curry-etal-2021-convabuse,https://aclanthology.org/2021.emnlp-main.587,1,,,,hate_speech,,,"ConvAbuse: Data, Analysis, and Benchmarks for Nuanced Abuse Detection in Conversational AI. We present the first English corpus study on abusive language towards three conversational AI systems gathered 'in the wild': an opendomain social bot, a rule-based chatbot, and a task-based system. To account for the complexity of the task, we take a more 'nuanced' approach where our ConvAI dataset reflects fine-grained notions of abuse, as well as views from multiple expert annotators. We find that the distribution of abuse is vastly different compared to other commonly used datasets, with more sexually tinted aggression towards the virtual persona of these systems. Finally, we report results from bench-marking existing models against this data. Unsurprisingly, we find that there is substantial room for improvement with F1 scores below 90%. Warning: This paper contains examples of language that some people may find offensive or upsetting.","{C}onv{A}buse: Data, Analysis, and Benchmarks for Nuanced Abuse Detection in Conversational {AI}","We present the first English corpus study on abusive language towards three conversational AI systems gathered 'in the wild': an opendomain social bot, a rule-based chatbot, and a task-based system. To account for the complexity of the task, we take a more 'nuanced' approach where our ConvAI dataset reflects fine-grained notions of abuse, as well as views from multiple expert annotators. We find that the distribution of abuse is vastly different compared to other commonly used datasets, with more sexually tinted aggression towards the virtual persona of these systems. Finally, we report results from bench-marking existing models against this data. Unsurprisingly, we find that there is substantial room for improvement with F1 scores below 90%. Warning: This paper contains examples of language that some people may find offensive or upsetting.","ConvAbuse: Data, Analysis, and Benchmarks for Nuanced Abuse Detection in Conversational AI","We present the first English corpus study on abusive language towards three conversational AI systems gathered 'in the wild': an opendomain social bot, a rule-based chatbot, and a task-based system. To account for the complexity of the task, we take a more 'nuanced' approach where our ConvAI dataset reflects fine-grained notions of abuse, as well as views from multiple expert annotators. We find that the distribution of abuse is vastly different compared to other commonly used datasets, with more sexually tinted aggression towards the virtual persona of these systems. Finally, we report results from bench-marking existing models against this data. Unsurprisingly, we find that there is substantial room for improvement with F1 scores below 90%. Warning: This paper contains examples of language that some people may find offensive or upsetting.","This research received funding from the EPSRC project 'Designing Conversational Assistants to Reduce Gender Bias' (EP/T023767/1). The authors would like to thank Juules Bare, Lottie Basil, Susana Demelas, Maina Flintham Hjelde, Lauren Galligan, Lucile Logan, Megan McElhone, MollieMcLean and the reviewers for their helpful comments.","ConvAbuse: Data, Analysis, and Benchmarks for Nuanced Abuse Detection in Conversational AI. 
We present the first English corpus study on abusive language towards three conversational AI systems gathered 'in the wild': an opendomain social bot, a rule-based chatbot, and a task-based system. To account for the complexity of the task, we take a more 'nuanced' approach where our ConvAI dataset reflects fine-grained notions of abuse, as well as views from multiple expert annotators. We find that the distribution of abuse is vastly different compared to other commonly used datasets, with more sexually tinted aggression towards the virtual persona of these systems. Finally, we report results from bench-marking existing models against this data. Unsurprisingly, we find that there is substantial room for improvement with F1 scores below 90%. Warning: This paper contains examples of language that some people may find offensive or upsetting.",2021
hakkani-tur-2015-keynote,https://aclanthology.org/W15-4628,0,,,,,,,"Keynote: Graph-based Approaches for Spoken Language Understanding. Following an upsurge in mobile device usage and improvements in speech recognition performance, multiple virtual personal assistant systems have emerged, and have been widely adopted by users. While these assistants proved to be beneficial, their usage has been limited to certain scenarios and domains, with underlying language understanding models that have been finely tuned by their builders. ",{K}eynote: Graph-based Approaches for Spoken Language Understanding,"Following an upsurge in mobile device usage and improvements in speech recognition performance, multiple virtual personal assistant systems have emerged, and have been widely adopted by users. While these assistants proved to be beneficial, their usage has been limited to certain scenarios and domains, with underlying language understanding models that have been finely tuned by their builders. ",Keynote: Graph-based Approaches for Spoken Language Understanding,"Following an upsurge in mobile device usage and improvements in speech recognition performance, multiple virtual personal assistant systems have emerged, and have been widely adopted by users. While these assistants proved to be beneficial, their usage has been limited to certain scenarios and domains, with underlying language understanding models that have been finely tuned by their builders. ",,"Keynote: Graph-based Approaches for Spoken Language Understanding. Following an upsurge in mobile device usage and improvements in speech recognition performance, multiple virtual personal assistant systems have emerged, and have been widely adopted by users. While these assistants proved to be beneficial, their usage has been limited to certain scenarios and domains, with underlying language understanding models that have been finely tuned by their builders. ",2015
bak-oh-2019-variational,https://aclanthology.org/D19-1202,0,,,,,,,"Variational Hierarchical User-based Conversation Model. Generating appropriate conversation responses requires careful modeling of the utterances and speakers together. Some recent approaches to response generation model both the utterances and the speakers, but these approaches tend to generate responses that are overly tailored to the speakers. To overcome this limitation, we propose a new model with a stochastic variable designed to capture the speaker information and deliver it to the conversational context. An important part of this model is the network of speakers in which each speaker is connected to one or more conversational partner, and this network is then used to model the speakers better. To test whether our model generates more appropriate conversation responses, we build a new conversation corpus containing approximately 27,000 speakers and 770,000 conversations. With this corpus, we run experiments of generating conversational responses and compare our model with other state-of-the-art models. By automatic evaluation metrics and human evaluation, we show that our model outperforms other models in generating appropriate responses. An additional advantage of our model is that it generates better responses for various new user scenarios, for example when one of the speakers is a known user in our corpus but the partner is a new user. For replicability, we make available all our code and data 1 .",Variational Hierarchical User-based Conversation Model,"Generating appropriate conversation responses requires careful modeling of the utterances and speakers together. Some recent approaches to response generation model both the utterances and the speakers, but these approaches tend to generate responses that are overly tailored to the speakers. To overcome this limitation, we propose a new model with a stochastic variable designed to capture the speaker information and deliver it to the conversational context. An important part of this model is the network of speakers in which each speaker is connected to one or more conversational partner, and this network is then used to model the speakers better. To test whether our model generates more appropriate conversation responses, we build a new conversation corpus containing approximately 27,000 speakers and 770,000 conversations. With this corpus, we run experiments of generating conversational responses and compare our model with other state-of-the-art models. By automatic evaluation metrics and human evaluation, we show that our model outperforms other models in generating appropriate responses. An additional advantage of our model is that it generates better responses for various new user scenarios, for example when one of the speakers is a known user in our corpus but the partner is a new user. For replicability, we make available all our code and data 1 .",Variational Hierarchical User-based Conversation Model,"Generating appropriate conversation responses requires careful modeling of the utterances and speakers together. Some recent approaches to response generation model both the utterances and the speakers, but these approaches tend to generate responses that are overly tailored to the speakers. To overcome this limitation, we propose a new model with a stochastic variable designed to capture the speaker information and deliver it to the conversational context. 
An important part of this model is the network of speakers in which each speaker is connected to one or more conversational partner, and this network is then used to model the speakers better. To test whether our model generates more appropriate conversation responses, we build a new conversation corpus containing approximately 27,000 speakers and 770,000 conversations. With this corpus, we run experiments of generating conversational responses and compare our model with other state-of-the-art models. By automatic evaluation metrics and human evaluation, we show that our model outperforms other models in generating appropriate responses. An additional advantage of our model is that it generates better responses for various new user scenarios, for example when one of the speakers is a known user in our corpus but the partner is a new user. For replicability, we make available all our code and data 1 .","We would like to thank the anonymous reviewers for helpful questions and comments. This work was supported by IITP grant funded by the Korea government (MSIT) (No.2017-0-01779, XAI).","Variational Hierarchical User-based Conversation Model. Generating appropriate conversation responses requires careful modeling of the utterances and speakers together. Some recent approaches to response generation model both the utterances and the speakers, but these approaches tend to generate responses that are overly tailored to the speakers. To overcome this limitation, we propose a new model with a stochastic variable designed to capture the speaker information and deliver it to the conversational context. An important part of this model is the network of speakers in which each speaker is connected to one or more conversational partner, and this network is then used to model the speakers better. To test whether our model generates more appropriate conversation responses, we build a new conversation corpus containing approximately 27,000 speakers and 770,000 conversations. With this corpus, we run experiments of generating conversational responses and compare our model with other state-of-the-art models. By automatic evaluation metrics and human evaluation, we show that our model outperforms other models in generating appropriate responses. An additional advantage of our model is that it generates better responses for various new user scenarios, for example when one of the speakers is a known user in our corpus but the partner is a new user. For replicability, we make available all our code and data 1 .",2019
yuan-2006-language,https://aclanthology.org/Y06-1056,0,,,,,,,"Language Model Based on Word Clustering. Category-based statistic language model is an important method to solve the problem of sparse data. But there are two bottlenecks about this model: (1) the problem of word clustering, it is hard to find a suitable clustering method that has good performance and not large amount of computation. (2) class-based method always loses some prediction ability to adapt the text of different domain. The authors try to solve above problems in this paper. This paper presents a definition of word similarity by utilizing mutual information. Based on word similarity, this paper gives the definition of word set similarity. Experiments show that word clustering algorithm based on similarity is better than conventional greedy clustering method in speed and performance. At the same time, this paper presents a new method to create the vari-gram model.",Language Model Based on Word Clustering,"Category-based statistic language model is an important method to solve the problem of sparse data. But there are two bottlenecks about this model: (1) the problem of word clustering, it is hard to find a suitable clustering method that has good performance and not large amount of computation. (2) class-based method always loses some prediction ability to adapt the text of different domain. The authors try to solve above problems in this paper. This paper presents a definition of word similarity by utilizing mutual information. Based on word similarity, this paper gives the definition of word set similarity. Experiments show that word clustering algorithm based on similarity is better than conventional greedy clustering method in speed and performance. At the same time, this paper presents a new method to create the vari-gram model.",Language Model Based on Word Clustering,"Category-based statistic language model is an important method to solve the problem of sparse data. But there are two bottlenecks about this model: (1) the problem of word clustering, it is hard to find a suitable clustering method that has good performance and not large amount of computation. (2) class-based method always loses some prediction ability to adapt the text of different domain. The authors try to solve above problems in this paper. This paper presents a definition of word similarity by utilizing mutual information. Based on word similarity, this paper gives the definition of word set similarity. Experiments show that word clustering algorithm based on similarity is better than conventional greedy clustering method in speed and performance. At the same time, this paper presents a new method to create the vari-gram model.",,"Language Model Based on Word Clustering. Category-based statistic language model is an important method to solve the problem of sparse data. But there are two bottlenecks about this model: (1) the problem of word clustering, it is hard to find a suitable clustering method that has good performance and not large amount of computation. (2) class-based method always loses some prediction ability to adapt the text of different domain. The authors try to solve above problems in this paper. This paper presents a definition of word similarity by utilizing mutual information. Based on word similarity, this paper gives the definition of word set similarity. Experiments show that word clustering algorithm based on similarity is better than conventional greedy clustering method in speed and performance. 
At the same time, this paper presents a new method to create the vari-gram model.",2006
dingli-etal-2003-mining,https://aclanthology.org/E03-1011,0,,,,,,,"Mining Web Sites Using Unsupervised Adaptive Information Extraction. Adaptive Information Extraction systems (IES) are currently used by some Semantic Web (SW) annotation tools as support to annotation (Handschuh et al., 2002; Vargas-Vera et al., 2002) . They are generally based on fully supervised methodologies requiring fairly intense domain-specific annotation. Unfortunately, selecting representative examples may be difficult and annotations can be incorrect and require time. In this paper we present a methodology that drastically reduce (or even remove) the amount of manual annotation required when annotating consistent sets of pages. A very limited number of user-defined examples are used to bootstrap learning. Simple, high precision (and possibly high recall) IE patterns are induced using such examples, these patterns will then discover more examples which will in turn discover more patterns, etc.",Mining Web Sites Using Unsupervised Adaptive Information Extraction,"Adaptive Information Extraction systems (IES) are currently used by some Semantic Web (SW) annotation tools as support to annotation (Handschuh et al., 2002; Vargas-Vera et al., 2002) . They are generally based on fully supervised methodologies requiring fairly intense domain-specific annotation. Unfortunately, selecting representative examples may be difficult and annotations can be incorrect and require time. In this paper we present a methodology that drastically reduce (or even remove) the amount of manual annotation required when annotating consistent sets of pages. A very limited number of user-defined examples are used to bootstrap learning. Simple, high precision (and possibly high recall) IE patterns are induced using such examples, these patterns will then discover more examples which will in turn discover more patterns, etc.",Mining Web Sites Using Unsupervised Adaptive Information Extraction,"Adaptive Information Extraction systems (IES) are currently used by some Semantic Web (SW) annotation tools as support to annotation (Handschuh et al., 2002; Vargas-Vera et al., 2002) . They are generally based on fully supervised methodologies requiring fairly intense domain-specific annotation. Unfortunately, selecting representative examples may be difficult and annotations can be incorrect and require time. In this paper we present a methodology that drastically reduce (or even remove) the amount of manual annotation required when annotating consistent sets of pages. A very limited number of user-defined examples are used to bootstrap learning. Simple, high precision (and possibly high recall) IE patterns are induced using such examples, these patterns will then discover more examples which will in turn discover more patterns, etc.",,"Mining Web Sites Using Unsupervised Adaptive Information Extraction. Adaptive Information Extraction systems (IES) are currently used by some Semantic Web (SW) annotation tools as support to annotation (Handschuh et al., 2002; Vargas-Vera et al., 2002) . They are generally based on fully supervised methodologies requiring fairly intense domain-specific annotation. Unfortunately, selecting representative examples may be difficult and annotations can be incorrect and require time. In this paper we present a methodology that drastically reduce (or even remove) the amount of manual annotation required when annotating consistent sets of pages. A very limited number of user-defined examples are used to bootstrap learning. 
Simple, high precision (and possibly high recall) IE patterns are induced using such examples, these patterns will then discover more examples which will in turn discover more patterns, etc.",2003
palakurthi-etal-2015-classification,https://aclanthology.org/R15-1065,0,,,,,,,"Classification of Attributes in a Natural Language Query into Different SQL Clauses. Attribute information in a natural language query is one of the key features for converting a natural language query into a Structured Query Language 1 (SQL) in Natural Language Interface to Database systems. In this paper, we explore the task of classifying the attributes present in a natural language query into different SQL clauses in a SQL query. In particular, we investigate the effectiveness of various features and Conditional Random Fields for this task. Our system uses a statistical classifier trained on manually prepared data. We report our results on three different domains and also show how our system can be used for generating a complete SQL query.",Classification of Attributes in a Natural Language Query into Different {SQL} Clauses,"Attribute information in a natural language query is one of the key features for converting a natural language query into a Structured Query Language 1 (SQL) in Natural Language Interface to Database systems. In this paper, we explore the task of classifying the attributes present in a natural language query into different SQL clauses in a SQL query. In particular, we investigate the effectiveness of various features and Conditional Random Fields for this task. Our system uses a statistical classifier trained on manually prepared data. We report our results on three different domains and also show how our system can be used for generating a complete SQL query.",Classification of Attributes in a Natural Language Query into Different SQL Clauses,"Attribute information in a natural language query is one of the key features for converting a natural language query into a Structured Query Language 1 (SQL) in Natural Language Interface to Database systems. In this paper, we explore the task of classifying the attributes present in a natural language query into different SQL clauses in a SQL query. In particular, we investigate the effectiveness of various features and Conditional Random Fields for this task. Our system uses a statistical classifier trained on manually prepared data. We report our results on three different domains and also show how our system can be used for generating a complete SQL query.","We would like to thank the anonymous reviewers for the valuable feedback on this work. This research was supported in part by the Information Technology Research Academy (ITRA), Government of India under ITRA-Mobile grant ITRA/15(62)/Mobile/VAMD/01. Any opinions, findings, conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the ITRA.","Classification of Attributes in a Natural Language Query into Different SQL Clauses. Attribute information in a natural language query is one of the key features for converting a natural language query into a Structured Query Language 1 (SQL) in Natural Language Interface to Database systems. In this paper, we explore the task of classifying the attributes present in a natural language query into different SQL clauses in a SQL query. In particular, we investigate the effectiveness of various features and Conditional Random Fields for this task. Our system uses a statistical classifier trained on manually prepared data. We report our results on three different domains and also show how our system can be used for generating a complete SQL query.",2015
ishiwatari-etal-2019-learning,https://aclanthology.org/N19-1350,0,,,,,,,"Learning to Describe Unknown Phrases with Local and Global Contexts. When reading a text, it is common to become stuck on unfamiliar words and phrases, such as polysemous words with novel senses, rarely used idioms, internet slang, or emerging entities. If we humans cannot figure out the meaning of those expressions from the immediate local context, we consult dictionaries for definitions or search documents or the web to find other global context to help in interpretation. Can machines help us do this work? Which type of context is more important for machines to solve the problem? To answer these questions, we undertake a task of describing a given phrase in natural language based on its local and global contexts. To solve this task, we propose a neural description model that consists of two context encoders and a description decoder. In contrast to the existing methods for non-standard English explanation (Ni and Wang, 2017) and definition generation (Noraset et al., 2017; Gadetsky et al., 2018), our model appropriately takes important clues from both local and global contexts. Experimental results on three existing datasets (including WordNet, Oxford and Urban Dictionaries) and a dataset newly created from Wikipedia demonstrate the effectiveness of our method over previous work.",Learning to Describe Unknown Phrases with Local and Global Contexts,"When reading a text, it is common to become stuck on unfamiliar words and phrases, such as polysemous words with novel senses, rarely used idioms, internet slang, or emerging entities. If we humans cannot figure out the meaning of those expressions from the immediate local context, we consult dictionaries for definitions or search documents or the web to find other global context to help in interpretation. Can machines help us do this work? Which type of context is more important for machines to solve the problem? To answer these questions, we undertake a task of describing a given phrase in natural language based on its local and global contexts. To solve this task, we propose a neural description model that consists of two context encoders and a description decoder. In contrast to the existing methods for non-standard English explanation (Ni and Wang, 2017) and definition generation (Noraset et al., 2017; Gadetsky et al., 2018), our model appropriately takes important clues from both local and global contexts. Experimental results on three existing datasets (including WordNet, Oxford and Urban Dictionaries) and a dataset newly created from Wikipedia demonstrate the effectiveness of our method over previous work.",Learning to Describe Unknown Phrases with Local and Global Contexts,"When reading a text, it is common to become stuck on unfamiliar words and phrases, such as polysemous words with novel senses, rarely used idioms, internet slang, or emerging entities. If we humans cannot figure out the meaning of those expressions from the immediate local context, we consult dictionaries for definitions or search documents or the web to find other global context to help in interpretation. Can machines help us do this work? Which type of context is more important for machines to solve the problem? To answer these questions, we undertake a task of describing a given phrase in natural language based on its local and global contexts. To solve this task, we propose a neural description model that consists of two context encoders and a description decoder. 
In contrast to the existing methods for non-standard English explanation (Ni and Wang, 2017) and definition generation (Noraset et al., 2017; Gadetsky et al., 2018), our model appropriately takes important clues from both local and global contexts. Experimental results on three existing datasets (including WordNet, Oxford and Urban Dictionaries) and a dataset newly created from Wikipedia demonstrate the effectiveness of our method over previous work.","The authors are grateful to Thanapon Noraset for sharing the details of his implementation of the previous work. We also thank the anonymous reviewers for their careful reading of our paper and insightful comments, and the members of Kitsuregawa-Toyoda-Nemoto-Yoshinaga-Goda laboratory in the University of Tokyo for proofreading the draft.This work was partially supported by Grant-in-Aid for JSPS Fellows (Grant Number 17J06394) and Commissioned Research (201) of the National Institute of Information and Communications Technology of Japan.","Learning to Describe Unknown Phrases with Local and Global Contexts. When reading a text, it is common to become stuck on unfamiliar words and phrases, such as polysemous words with novel senses, rarely used idioms, internet slang, or emerging entities. If we humans cannot figure out the meaning of those expressions from the immediate local context, we consult dictionaries for definitions or search documents or the web to find other global context to help in interpretation. Can machines help us do this work? Which type of context is more important for machines to solve the problem? To answer these questions, we undertake a task of describing a given phrase in natural language based on its local and global contexts. To solve this task, we propose a neural description model that consists of two context encoders and a description decoder. In contrast to the existing methods for non-standard English explanation (Ni and Wang, 2017) and definition generation (Noraset et al., 2017; Gadetsky et al., 2018), our model appropriately takes important clues from both local and global contexts. Experimental results on three existing datasets (including WordNet, Oxford and Urban Dictionaries) and a dataset newly created from Wikipedia demonstrate the effectiveness of our method over previous work.",2019
chen-styler-2013-anafora,https://aclanthology.org/N13-3004,0,,,,,,,"Anafora: A Web-based General Purpose Annotation Tool. Anafora is a newly-developed open source web-based text annotation tool built to be lightweight, flexible, easy to use and capable of annotating with a variety of schemas, simple and complex. Anafora allows secure web-based annotation of any plaintext file with both spanned (e.g. named entity or markable) and relation annotations, as well as adjudication for both types of annotation. Anafora offers automatic set assignment and progress-tracking, centralized and human-editable XML annotation schemas, and file-based storage and organization of data in a human-readable single-file XML format.",{A}nafora: A Web-based General Purpose Annotation Tool,"Anafora is a newly-developed open source web-based text annotation tool built to be lightweight, flexible, easy to use and capable of annotating with a variety of schemas, simple and complex. Anafora allows secure web-based annotation of any plaintext file with both spanned (e.g. named entity or markable) and relation annotations, as well as adjudication for both types of annotation. Anafora offers automatic set assignment and progress-tracking, centralized and human-editable XML annotation schemas, and file-based storage and organization of data in a human-readable single-file XML format.",Anafora: A Web-based General Purpose Annotation Tool,"Anafora is a newly-developed open source web-based text annotation tool built to be lightweight, flexible, easy to use and capable of annotating with a variety of schemas, simple and complex. Anafora allows secure web-based annotation of any plaintext file with both spanned (e.g. named entity or markable) and relation annotations, as well as adjudication for both types of annotation. Anafora offers automatic set assignment and progress-tracking, centralized and human-editable XML annotation schemas, and file-based storage and organization of data in a human-readable single-file XML format.","The development of this annotation tool was supported by award numbers NLM R0110090 (THYME) and 90TR002 (SHARP), as well as DARPA FA8750-09-C-0179 (via BBN) Machine Reading. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NLM/NIH or DARPA. We would also like to especially thank Jinho Choi for his input on the data format, schemas, and UI/UX.","Anafora: A Web-based General Purpose Annotation Tool. Anafora is a newly-developed open source web-based text annotation tool built to be lightweight, flexible, easy to use and capable of annotating with a variety of schemas, simple and complex. Anafora allows secure web-based annotation of any plaintext file with both spanned (e.g. named entity or markable) and relation annotations, as well as adjudication for both types of annotation. Anafora offers automatic set assignment and progress-tracking, centralized and human-editable XML annotation schemas, and file-based storage and organization of data in a human-readable single-file XML format.",2013
gracia-etal-2014-enabling,http://www.lrec-conf.org/proceedings/lrec2014/pdf/863_Paper.pdf,0,,,,,,,"Enabling Language Resources to Expose Translations as Linked Data on the Web. Language resources, such as multilingual lexica and multilingual electronic dictionaries, contain collections of lexical entries in several languages. Having access to the corresponding explicit or implicit translation relations between such entries might be of great interest for many NLP-based applications. By using Semantic Web-based techniques, translations can be available on the Web to be consumed by other (semantic enabled) resources in a direct manner, not relying on application-specific formats. To that end, in this paper we propose a model for representing translations as linked data, as an extension of the lemon model. Our translation module represents some core information associated to term translations and does not commit to specific views or translation theories. As a proof of concept, we have extracted the translations of the terms contained in Terminesp, a multilingual terminological database, and represented them as linked data. We have made them accessible on the Web both for humans (via a Web interface) and software agents (with a SPARQL endpoint).",Enabling Language Resources to Expose Translations as Linked Data on the Web,"Language resources, such as multilingual lexica and multilingual electronic dictionaries, contain collections of lexical entries in several languages. Having access to the corresponding explicit or implicit translation relations between such entries might be of great interest for many NLP-based applications. By using Semantic Web-based techniques, translations can be available on the Web to be consumed by other (semantic enabled) resources in a direct manner, not relying on application-specific formats. To that end, in this paper we propose a model for representing translations as linked data, as an extension of the lemon model. Our translation module represents some core information associated to term translations and does not commit to specific views or translation theories. As a proof of concept, we have extracted the translations of the terms contained in Terminesp, a multilingual terminological database, and represented them as linked data. We have made them accessible on the Web both for humans (via a Web interface) and software agents (with a SPARQL endpoint).",Enabling Language Resources to Expose Translations as Linked Data on the Web,"Language resources, such as multilingual lexica and multilingual electronic dictionaries, contain collections of lexical entries in several languages. Having access to the corresponding explicit or implicit translation relations between such entries might be of great interest for many NLP-based applications. By using Semantic Web-based techniques, translations can be available on the Web to be consumed by other (semantic enabled) resources in a direct manner, not relying on application-specific formats. To that end, in this paper we propose a model for representing translations as linked data, as an extension of the lemon model. Our translation module represents some core information associated to term translations and does not commit to specific views or translation theories. As a proof of concept, we have extracted the translations of the terms contained in Terminesp, a multilingual terminological database, and represented them as linked data. 
We have made them accessible on the Web both for humans (via a Web interface) and software agents (with a SPARQL endpoint).","Acknowledgements. We are very thankful to AETER and AENOR for making Terminesp data available. We also thank Javier Bezos, from FUNDEU, for his assistance with the data. Some ideas contained in this paper were inspired after fruitful discussions with other members of the W3C Ontology-Lexica community group. This work is supported by the FP7 European","Enabling Language Resources to Expose Translations as Linked Data on the Web. Language resources, such as multilingual lexica and multilingual electronic dictionaries, contain collections of lexical entries in several languages. Having access to the corresponding explicit or implicit translation relations between such entries might be of great interest for many NLP-based applications. By using Semantic Web-based techniques, translations can be available on the Web to be consumed by other (semantic enabled) resources in a direct manner, not relying on application-specific formats. To that end, in this paper we propose a model for representing translations as linked data, as an extension of the lemon model. Our translation module represents some core information associated to term translations and does not commit to specific views or translation theories. As a proof of concept, we have extracted the translations of the terms contained in Terminesp, a multilingual terminological database, and represented them as linked data. We have made them accessible on the Web both for humans (via a Web interface) and software agents (with a SPARQL endpoint).",2014
lakomkin-etal-2017-gradascent,https://aclanthology.org/W17-5222,0,,,,,,,"GradAscent at EmoInt-2017: Character and Word Level Recurrent Neural Network Models for Tweet Emotion Intensity Detection. The WASSA 2017 EmoInt shared task has the goal to predict emotion intensity values of tweet messages. Given the text of a tweet and its emotion category (anger, joy, fear, and sadness), the participants were asked to build a system that assigns emotion intensity values. Emotion intensity estimation is a challenging problem given the short length of the tweets, the noisy structure of the text and the lack of annotated data. To solve this problem, we developed an ensemble of two neural models, processing input on the character- and word-level with a lexicon-driven system. The correlation scores across all four emotions are averaged to determine the bottom-line competition metric, and our system ranks fourth in full intensity range and third in 0.5-1 range of intensity among 23 systems at the time of writing (June 2017).",{G}rad{A}scent at {E}mo{I}nt-2017: Character and Word Level Recurrent Neural Network Models for Tweet Emotion Intensity Detection,"The WASSA 2017 EmoInt shared task has the goal to predict emotion intensity values of tweet messages. Given the text of a tweet and its emotion category (anger, joy, fear, and sadness), the participants were asked to build a system that assigns emotion intensity values. Emotion intensity estimation is a challenging problem given the short length of the tweets, the noisy structure of the text and the lack of annotated data. To solve this problem, we developed an ensemble of two neural models, processing input on the character- and word-level with a lexicon-driven system. The correlation scores across all four emotions are averaged to determine the bottom-line competition metric, and our system ranks fourth in full intensity range and third in 0.5-1 range of intensity among 23 systems at the time of writing (June 2017).",GradAscent at EmoInt-2017: Character and Word Level Recurrent Neural Network Models for Tweet Emotion Intensity Detection,"The WASSA 2017 EmoInt shared task has the goal to predict emotion intensity values of tweet messages. Given the text of a tweet and its emotion category (anger, joy, fear, and sadness), the participants were asked to build a system that assigns emotion intensity values. Emotion intensity estimation is a challenging problem given the short length of the tweets, the noisy structure of the text and the lack of annotated data. To solve this problem, we developed an ensemble of two neural models, processing input on the character- and word-level with a lexicon-driven system. The correlation scores across all four emotions are averaged to determine the bottom-line competition metric, and our system ranks fourth in full intensity range and third in 0.5-1 range of intensity among 23 systems at the time of writing (June 2017).",This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 642667 (SECURE). We would like to thank Dr. Cornelius Weber and Dr. Sven Magg for their helpful comments and suggestions.,"GradAscent at EmoInt-2017: Character and Word Level Recurrent Neural Network Models for Tweet Emotion Intensity Detection. The WASSA 2017 EmoInt shared task has the goal to predict emotion intensity values of tweet messages. 
Given the text of a tweet and its emotion category (anger, joy, fear, and sadness), the participants were asked to build a system that assigns emotion intensity values. Emotion intensity estimation is a challenging problem given the short length of the tweets, the noisy structure of the text and the lack of annotated data. To solve this problem, we developed an ensemble of two neural models, processing input on the character- and word-level with a lexicon-driven system. The correlation scores across all four emotions are averaged to determine the bottom-line competition metric, and our system ranks fourth in full intensity range and third in 0.5-1 range of intensity among 23 systems at the time of writing (June 2017).",2017
stolcke-1995-efficient,https://aclanthology.org/J95-2002,0,,,,,,,"An Efficient Probabilistic Context-Free Parsing Algorithm that Computes Prefix Probabilities. We describe an extension of Earley's parser for stochastic context-free grammars that computes the following quantities given a stochastic context-free grammar and an input string: a) probabilities of successive prefixes being generated by the grammar; b) probabilities of substrings being generated by the nonterminals, including the entire string being generated by the grammar; c) most likely (Viterbi) parse of the string; d) posterior expected number of applications of each grammar production, as required for reestimating rule probabilities. Probabilities (a) and (b) are computed incrementally in a single left-to-right pass over the input. Our algorithm compares favorably to standard bottom-up parsing methods for SCFGs in that it works efficiently on sparse grammars by making use of Earley's top-down control structure. It can process any context-free rule format without conversion to some normal form, and combines computations for (a) through (d) in a single algorithm. Finally, the algorithm has simple extensions for processing partially bracketed inputs, and for finding partial parses and their likelihoods on ungrammatical inputs.",An Efficient Probabilistic Context-Free Parsing Algorithm that Computes Prefix Probabilities,"We describe an extension of Earley's parser for stochastic context-free grammars that computes the following quantities given a stochastic context-free grammar and an input string: a) probabilities of successive prefixes being generated by the grammar; b) probabilities of substrings being generated by the nonterminals, including the entire string being generated by the grammar; c) most likely (Viterbi) parse of the string; d) posterior expected number of applications of each grammar production, as required for reestimating rule probabilities. Probabilities (a) and (b) are computed incrementally in a single left-to-right pass over the input. Our algorithm compares favorably to standard bottom-up parsing methods for SCFGs in that it works efficiently on sparse grammars by making use of Earley's top-down control structure. It can process any context-free rule format without conversion to some normal form, and combines computations for (a) through (d) in a single algorithm. Finally, the algorithm has simple extensions for processing partially bracketed inputs, and for finding partial parses and their likelihoods on ungrammatical inputs.",An Efficient Probabilistic Context-Free Parsing Algorithm that Computes Prefix Probabilities,"We describe an extension of Earley's parser for stochastic context-free grammars that computes the following quantities given a stochastic context-free grammar and an input string: a) probabilities of successive prefixes being generated by the grammar; b) probabilities of substrings being generated by the nonterminals, including the entire string being generated by the grammar; c) most likely (Viterbi) parse of the string; d) posterior expected number of applications of each grammar production, as required for reestimating rule probabilities. Probabilities (a) and (b) are computed incrementally in a single left-to-right pass over the input. Our algorithm compares favorably to standard bottom-up parsing methods for SCFGs in that it works efficiently on sparse grammars by making use of Earley's top-down control structure. 
It can process any context-free rule format without conversion to some normal form, and combines computations for (a) through (d) in a single algorithm. Finally, the algorithm has simple extensions for processing partially bracketed inputs, and for finding partial parses and their likelihoods on ungrammatical inputs.","Thanks are due Dan Jurafsky and Steve Omohundro for extensive discussions on the topics in this paper, and Fernando Pereira for helpful advice and pointers. Jerry Feldman, Terry Regier, Jonathan Segal, Kevin Thompson, and the anonymous reviewers provided valuable comments for improving content and presentation.","An Efficient Probabilistic Context-Free Parsing Algorithm that Computes Prefix Probabilities. We describe an extension of Earley's parser for stochastic context-free grammars that computes the following quantities given a stochastic context-free grammar and an input string: a) probabilities of successive prefixes being generated by the grammar; b) probabilities of substrings being generated by the nonterminals, including the entire string being generated by the grammar; c) most likely (Viterbi) parse of the string; d) posterior expected number of applications of each grammar production, as required for reestimating rule probabilities. Probabilities (a) and (b) are computed incrementally in a single left-to-right pass over the input. Our algorithm compares favorably to standard bottom-up parsing methods for SCFGs in that it works efficiently on sparse grammars by making use of Earley's top-down control structure. It can process any context-free rule format without conversion to some normal form, and combines computations for (a) through (d) in a single algorithm. Finally, the algorithm has simple extensions for processing partially bracketed inputs, and for finding partial parses and their likelihoods on ungrammatical inputs.",1995
radev-etal-2003-evaluation,https://aclanthology.org/P03-1048,0,,,,,,,"Evaluation Challenges in Large-Scale Document Summarization. We present a large-scale meta evaluation of eight evaluation measures for both single-document and multi-document summarizers. To this end we built a corpus consisting of (a) 100 Million automatic summaries using six summarizers and baselines at ten summary lengths in both English and Chinese, (b) more than 10,000 manual abstracts and extracts, and (c) 200 Million automatic document and summary retrievals using 20 queries. We present both qualitative and quantitative results showing the strengths and drawbacks of all evaluation methods and how they rank the different summarizers.",Evaluation Challenges in Large-Scale Document Summarization,"We present a large-scale meta evaluation of eight evaluation measures for both single-document and multi-document summarizers. To this end we built a corpus consisting of (a) 100 Million automatic summaries using six summarizers and baselines at ten summary lengths in both English and Chinese, (b) more than 10,000 manual abstracts and extracts, and (c) 200 Million automatic document and summary retrievals using 20 queries. We present both qualitative and quantitative results showing the strengths and drawbacks of all evaluation methods and how they rank the different summarizers.",Evaluation Challenges in Large-Scale Document Summarization,"We present a large-scale meta evaluation of eight evaluation measures for both single-document and multi-document summarizers. To this end we built a corpus consisting of (a) 100 Million automatic summaries using six summarizers and baselines at ten summary lengths in both English and Chinese, (b) more than 10,000 manual abstracts and extracts, and (c) 200 Million automatic document and summary retrievals using 20 queries. We present both qualitative and quantitative results showing the strengths and drawbacks of all evaluation methods and how they rank the different summarizers.",,"Evaluation Challenges in Large-Scale Document Summarization. We present a large-scale meta evaluation of eight evaluation measures for both single-document and multi-document summarizers. To this end we built a corpus consisting of (a) 100 Million automatic summaries using six summarizers and baselines at ten summary lengths in both English and Chinese, (b) more than 10,000 manual abstracts and extracts, and (c) 200 Million automatic document and summary retrievals using 20 queries. We present both qualitative and quantitative results showing the strengths and drawbacks of all evaluation methods and how they rank the different summarizers.",2003
o-donnaile-2014-tools,https://aclanthology.org/W14-4603,0,,,,,,,"Tools facilitating better use of online dictionaries: Technical aspects of Multidict, Wordlink and Clilstore. The Internet contains a plethora of openly available dictionaries of many kinds, translating between thousands of language pairs. Three tools are described, Multidict, Wordlink and Clilstore, all openly available at multidict.net, which enable these diverse resources to be harnessed, unified, and utilised in ergonomic fashion. They are of particular benefit to intermediate level language learners, but also to researchers and learners of all kinds. Multidict facilitates finding and using online dictionaries in hundreds of languages, and enables easy switching between different dictionaries and target languages. It enables the utilization of page-image dictionaries in the Web Archive. Wordlink can link most webpages word by word to online dictionaries via Multidict. Clilstore is an open store of language teaching materials utilizing the power of Wordlink and Multidict. The programing and database structures and ideas behind Multidict, Wordlink and Clilstore are described.","Tools facilitating better use of online dictionaries: Technical aspects of Multidict, Wordlink and Clilstore","The Internet contains a plethora of openly available dictionaries of many kinds, translating between thousands of language pairs. Three tools are described, Multidict, Wordlink and Clilstore, all openly available at multidict.net, which enable these diverse resources to be harnessed, unified, and utilised in ergonomic fashion. They are of particular benefit to intermediate level language learners, but also to researchers and learners of all kinds. Multidict facilitates finding and using online dictionaries in hundreds of languages, and enables easy switching between different dictionaries and target languages. It enables the utilization of page-image dictionaries in the Web Archive. Wordlink can link most webpages word by word to online dictionaries via Multidict. Clilstore is an open store of language teaching materials utilizing the power of Wordlink and Multidict. The programing and database structures and ideas behind Multidict, Wordlink and Clilstore are described.","Tools facilitating better use of online dictionaries: Technical aspects of Multidict, Wordlink and Clilstore","The Internet contains a plethora of openly available dictionaries of many kinds, translating between thousands of language pairs. Three tools are described, Multidict, Wordlink and Clilstore, all openly available at multidict.net, which enable these diverse resources to be harnessed, unified, and utilised in ergonomic fashion. They are of particular benefit to intermediate level language learners, but also to researchers and learners of all kinds. Multidict facilitates finding and using online dictionaries in hundreds of languages, and enables easy switching between different dictionaries and target languages. It enables the utilization of page-image dictionaries in the Web Archive. Wordlink can link most webpages word by word to online dictionaries via Multidict. Clilstore is an open store of language teaching materials utilizing the power of Wordlink and Multidict. The programing and database structures and ideas behind Multidict, Wordlink and Clilstore are described.","Multidict and Wordlink were first developed under the EC financed 44 POOLS-T 45 project. 
Clilstore was developed, and Multidict and Wordlink further developed under TOOLS 46 project financed by the EC's Lifelong Learning Programme. Much of the credit for their development goes to the suggestions, user testing and feedback by the project teams from 9 different European countries, and in particular to the project leader Kent Andersen. Wordlink was inspired by Kent's Textblender program. We are grateful to Kevin Scannell for the Irish lemmatization table used by Multidict, and to Mìcheal Bauer and Will Robertson for the Scottish Gaelic lemmatization table.","Tools facilitating better use of online dictionaries: Technical aspects of Multidict, Wordlink and Clilstore. The Internet contains a plethora of openly available dictionaries of many kinds, translating between thousands of language pairs. Three tools are described, Multidict, Wordlink and Clilstore, all openly available at multidict.net, which enable these diverse resources to be harnessed, unified, and utilised in ergonomic fashion. They are of particular benefit to intermediate level language learners, but also to researchers and learners of all kinds. Multidict facilitates finding and using online dictionaries in hundreds of languages, and enables easy switching between different dictionaries and target languages. It enables the utilization of page-image dictionaries in the Web Archive. Wordlink can link most webpages word by word to online dictionaries via Multidict. Clilstore is an open store of language teaching materials utilizing the power of Wordlink and Multidict. The programing and database structures and ideas behind Multidict, Wordlink and Clilstore are described.",2014
nothman-etal-2014-command,https://aclanthology.org/W14-5207,0,,,,,,,"Command-line utilities for managing and exploring annotated corpora. Users of annotated corpora frequently perform basic operations such as inspecting the available annotations, filtering documents, formatting data, and aggregating basic statistics over a corpus. While these may be easily performed over flat text files with stream-processing UNIX tools, similar tools for structured annotation require custom design. Dawborn and Curran (2014) have developed a declarative description and storage for structured annotation, on top of which we have built generic command-line utilities. We describe the most useful utilities-some for quick data exploration, others for high-level corpus management-with reference to comparable UNIX utilities. We suggest that such tools are universally valuable for working with structured corpora; in turn, their utility promotes common storage and distribution formats for annotated text.",Command-line utilities for managing and exploring annotated corpora,"Users of annotated corpora frequently perform basic operations such as inspecting the available annotations, filtering documents, formatting data, and aggregating basic statistics over a corpus. While these may be easily performed over flat text files with stream-processing UNIX tools, similar tools for structured annotation require custom design. Dawborn and Curran (2014) have developed a declarative description and storage for structured annotation, on top of which we have built generic command-line utilities. We describe the most useful utilities-some for quick data exploration, others for high-level corpus management-with reference to comparable UNIX utilities. We suggest that such tools are universally valuable for working with structured corpora; in turn, their utility promotes common storage and distribution formats for annotated text.",Command-line utilities for managing and exploring annotated corpora,"Users of annotated corpora frequently perform basic operations such as inspecting the available annotations, filtering documents, formatting data, and aggregating basic statistics over a corpus. While these may be easily performed over flat text files with stream-processing UNIX tools, similar tools for structured annotation require custom design. Dawborn and Curran (2014) have developed a declarative description and storage for structured annotation, on top of which we have built generic command-line utilities. We describe the most useful utilities-some for quick data exploration, others for high-level corpus management-with reference to comparable UNIX utilities. We suggest that such tools are universally valuable for working with structured corpora; in turn, their utility promotes common storage and distribution formats for annotated text.",,"Command-line utilities for managing and exploring annotated corpora. Users of annotated corpora frequently perform basic operations such as inspecting the available annotations, filtering documents, formatting data, and aggregating basic statistics over a corpus. While these may be easily performed over flat text files with stream-processing UNIX tools, similar tools for structured annotation require custom design. Dawborn and Curran (2014) have developed a declarative description and storage for structured annotation, on top of which we have built generic command-line utilities. 
We describe the most useful utilities-some for quick data exploration, others for high-level corpus management-with reference to comparable UNIX utilities. We suggest that such tools are universally valuable for working with structured corpora; in turn, their utility promotes common storage and distribution formats for annotated text.",2014
joshi-schabes-1989-evaluation,https://aclanthology.org/H89-2053,0,,,,,,,"An Evaluation of Lexicalization in Parsing. In this paper, we evaluate a two-pass parsing strategy proposed for the so-called 'lexicalized' grammar. In 'lexicalized' grammars (Schabes, Abeillé and Joshi, 1988), each elementary structure is systematically associated with a lexical item called anchor. These structures specify extended domains of locality (as compared to CFGs) over which constraints can be stated. The 'grammar' consists of a lexicon where each lexical item is associated with a finite number of structures for which that item is the anchor. There are no separate grammar rules. There are, of course, 'rules' which tell us how these structures are combined. A general two-pass parsing strategy for 'lexicalized' grammars follows naturally. In the first stage, the parser selects a set of elementary structures associated with the lexical items in the input sentence, and in the second stage the sentence is parsed with respect to this set. We evaluate this strategy with respect to two characteristics. First, the amount of filtering on the entire grammar is evaluated: once the first pass is performed, the parser uses only a subset of the grammar. Second, we evaluate the use of non-local information: the structures selected during the first pass encode the morphological value (and therefore the position in the string) of their anchor; this enables the parser to use non-local information to guide its search. We take Lexicalized Tree Adjoining Grammars as an instance of lexicalized grammar. We illustrate the organization of the grammar. Then we show how a general Earley-type TAG parser (Schabes and Joshi, 1988) can take advantage of lexicalization. Empirical data show that the filtering of the grammar and the non-local information provided by the two-pass strategy improve the performance of the parser.",An Evaluation of Lexicalization in Parsing,"In this paper, we evaluate a two-pass parsing strategy proposed for the so-called 'lexicalized' grammar. In 'lexicalized' grammars (Schabes, Abeillé and Joshi, 1988), each elementary structure is systematically associated with a lexical item called anchor. These structures specify extended domains of locality (as compared to CFGs) over which constraints can be stated. The 'grammar' consists of a lexicon where each lexical item is associated with a finite number of structures for which that item is the anchor. There are no separate grammar rules. There are, of course, 'rules' which tell us how these structures are combined. A general two-pass parsing strategy for 'lexicalized' grammars follows naturally. In the first stage, the parser selects a set of elementary structures associated with the lexical items in the input sentence, and in the second stage the sentence is parsed with respect to this set. We evaluate this strategy with respect to two characteristics. First, the amount of filtering on the entire grammar is evaluated: once the first pass is performed, the parser uses only a subset of the grammar. Second, we evaluate the use of non-local information: the structures selected during the first pass encode the morphological value (and therefore the position in the string) of their anchor; this enables the parser to use non-local information to guide its search. We take Lexicalized Tree Adjoining Grammars as an instance of lexicalized grammar. We illustrate the organization of the grammar. 
Then we show how a general Earley-type TAG parser (Schabes and Joshi, 1988) can take advantage of lexicalization. Empirical data show that the filtering of the grammar and the non-local information provided by the two-pass strategy improve the performance of the parser.",An Evaluation of Lexicalization in Parsing,"In this paper, we evaluate a two-pass parsing strategy proposed for the so-called 'lexicalized' grammar. In 'lexicalized' grammars (Schabes, Abeillé and Joshi, 1988), each elementary structure is systematically associated with a lexical item called anchor. These structures specify extended domains of locality (as compared to CFGs) over which constraints can be stated. The 'grammar' consists of a lexicon where each lexical item is associated with a finite number of structures for which that item is the anchor. There are no separate grammar rules. There are, of course, 'rules' which tell us how these structures are combined. A general two-pass parsing strategy for 'lexicalized' grammars follows naturally. In the first stage, the parser selects a set of elementary structures associated with the lexical items in the input sentence, and in the second stage the sentence is parsed with respect to this set. We evaluate this strategy with respect to two characteristics. First, the amount of filtering on the entire grammar is evaluated: once the first pass is performed, the parser uses only a subset of the grammar. Second, we evaluate the use of non-local information: the structures selected during the first pass encode the morphological value (and therefore the position in the string) of their anchor; this enables the parser to use non-local information to guide its search. We take Lexicalized Tree Adjoining Grammars as an instance of lexicalized grammar. We illustrate the organization of the grammar. Then we show how a general Earley-type TAG parser (Schabes and Joshi, 1988) can take advantage of lexicalization. Empirical data show that the filtering of the grammar and the non-local information provided by the two-pass strategy improve the performance of the parser.",,"An Evaluation of Lexicalization in Parsing. In this paper, we evaluate a two-pass parsing strategy proposed for the so-called 'lexicalized' grammar. In 'lexicalized' grammars (Schabes, Abeillé and Joshi, 1988), each elementary structure is systematically associated with a lexical item called anchor. These structures specify extended domains of locality (as compared to CFGs) over which constraints can be stated. The 'grammar' consists of a lexicon where each lexical item is associated with a finite number of structures for which that item is the anchor. There are no separate grammar rules. There are, of course, 'rules' which tell us how these structures are combined. A general two-pass parsing strategy for 'lexicalized' grammars follows naturally. In the first stage, the parser selects a set of elementary structures associated with the lexical items in the input sentence, and in the second stage the sentence is parsed with respect to this set. We evaluate this strategy with respect to two characteristics. First, the amount of filtering on the entire grammar is evaluated: once the first pass is performed, the parser uses only a subset of the grammar. Second, we evaluate the use of non-local information: the structures selected during the first pass encode the morphological value (and therefore the position in the string) of their anchor; this enables the parser to use non-local information to guide its search. 
We take Lexicalized Tree Adjoining Grammars as an instance of lexicalized grammar. We illustrate the organization of the grammar. Then we show how a general Earley-type TAG parser (Schabes and Joshi, 1988) can take advantage of lexicalization. Empirical data show that the filtering of the grammar and the non-local information provided by the two-pass strategy improve the performance of the parser.",1989
gonzalez-etal-2019-elirf,https://aclanthology.org/S19-2031,0,,,,,,,ELiRF-UPV at SemEval-2019 Task 3: Snapshot Ensemble of Hierarchical Convolutional Neural Networks for Contextual Emotion Detection. This paper describes the approach developed by the ELiRF-UPV team at SemEval 2019 Task 3: Contextual Emotion Detection in Text. We have developed a Snapshot Ensemble of 1D Hierarchical Convolutional Neural Networks to extract features from 3-turn conversations in order to perform contextual emotion detection in text. This Snapshot Ensemble is obtained by averaging the models selected by a Genetic Algorithm that optimizes the evaluation measure. The proposed ensemble obtains better results than a single model and it obtains competitive and promising results on Contextual Emotion Detection in Text.,{EL}i{RF}-{UPV} at {S}em{E}val-2019 Task 3: Snapshot Ensemble of Hierarchical Convolutional Neural Networks for Contextual Emotion Detection,This paper describes the approach developed by the ELiRF-UPV team at SemEval 2019 Task 3: Contextual Emotion Detection in Text. We have developed a Snapshot Ensemble of 1D Hierarchical Convolutional Neural Networks to extract features from 3-turn conversations in order to perform contextual emotion detection in text. This Snapshot Ensemble is obtained by averaging the models selected by a Genetic Algorithm that optimizes the evaluation measure. The proposed ensemble obtains better results than a single model and it obtains competitive and promising results on Contextual Emotion Detection in Text.,ELiRF-UPV at SemEval-2019 Task 3: Snapshot Ensemble of Hierarchical Convolutional Neural Networks for Contextual Emotion Detection,This paper describes the approach developed by the ELiRF-UPV team at SemEval 2019 Task 3: Contextual Emotion Detection in Text. We have developed a Snapshot Ensemble of 1D Hierarchical Convolutional Neural Networks to extract features from 3-turn conversations in order to perform contextual emotion detection in text. This Snapshot Ensemble is obtained by averaging the models selected by a Genetic Algorithm that optimizes the evaluation measure. The proposed ensemble obtains better results than a single model and it obtains competitive and promising results on Contextual Emotion Detection in Text.,This work has been partially supported by the Spanish MINECO and FEDER founds under project AMIC (TIN2017-85854-C4-2-R) and the GiSPRO project (PROMETEU/2018/176). Work of José-Ángel González is also financed by Universitat Politècnica de València under grant PAID-01-17.,ELiRF-UPV at SemEval-2019 Task 3: Snapshot Ensemble of Hierarchical Convolutional Neural Networks for Contextual Emotion Detection. This paper describes the approach developed by the ELiRF-UPV team at SemEval 2019 Task 3: Contextual Emotion Detection in Text. We have developed a Snapshot Ensemble of 1D Hierarchical Convolutional Neural Networks to extract features from 3-turn conversations in order to perform contextual emotion detection in text. This Snapshot Ensemble is obtained by averaging the models selected by a Genetic Algorithm that optimizes the evaluation measure. The proposed ensemble obtains better results than a single model and it obtains competitive and promising results on Contextual Emotion Detection in Text.,2019
gardent-perez-beltrachini-2012-using,https://aclanthology.org/W12-4614,0,,,,,,,"Using FB-LTAG Derivation Trees to Generate Transformation-Based Grammar Exercises. Using a Feature-Based Lexicalised Tree Adjoining Grammar (FB-LTAG), we present an approach for generating pairs of sentences that are related by a syntactic transformation and we apply this approach to create language learning exercises. We argue that the derivation trees of an FB-LTAG provide a good level of representation for capturing syntactic transformations. We relate our approach to previous work on sentence reformulation, question generation and grammar exercise generation. We evaluate precision and linguistic coverage. And we demonstrate the genericity of the proposal by applying it to a range of transformations including the Passive/Active transformation, the pronominalisation of an NP, the assertion / yes-no question relation and the assertion / wh-question transformation.",Using {FB}-{LTAG} Derivation Trees to Generate Transformation-Based Grammar Exercises,"Using a Feature-Based Lexicalised Tree Adjoining Grammar (FB-LTAG), we present an approach for generating pairs of sentences that are related by a syntactic transformation and we apply this approach to create language learning exercises. We argue that the derivation trees of an FB-LTAG provide a good level of representation for capturing syntactic transformations. We relate our approach to previous work on sentence reformulation, question generation and grammar exercise generation. We evaluate precision and linguistic coverage. And we demonstrate the genericity of the proposal by applying it to a range of transformations including the Passive/Active transformation, the pronominalisation of an NP, the assertion / yes-no question relation and the assertion / wh-question transformation.",Using FB-LTAG Derivation Trees to Generate Transformation-Based Grammar Exercises,"Using a Feature-Based Lexicalised Tree Adjoining Grammar (FB-LTAG), we present an approach for generating pairs of sentences that are related by a syntactic transformation and we apply this approach to create language learning exercises. We argue that the derivation trees of an FB-LTAG provide a good level of representation for capturing syntactic transformations. We relate our approach to previous work on sentence reformulation, question generation and grammar exercise generation. We evaluate precision and linguistic coverage. And we demonstrate the genericity of the proposal by applying it to a range of transformations including the Passive/Active transformation, the pronominalisation of an NP, the assertion / yes-no question relation and the assertion / wh-question transformation.",The research presented in this paper was partially supported by the European Fund for Regional Development within the framework of the INTER-REG IV A Allegro Project. We would also like to thank German Kruszewski and Elise Fetet for their help in developing and annotating the test data.,"Using FB-LTAG Derivation Trees to Generate Transformation-Based Grammar Exercises. Using a Feature-Based Lexicalised Tree Adjoining Grammar (FB-LTAG), we present an approach for generating pairs of sentences that are related by a syntactic transformation and we apply this approach to create language learning exercises. We argue that the derivation trees of an FB-LTAG provide a good level of representation for capturing syntactic transformations. 
We relate our approach to previous work on sentence reformulation, question generation and grammar exercise generation. We evaluate precision and linguistic coverage. And we demonstrate the genericity of the proposal by applying it to a range of transformations including the Passive/Active transformation, the pronominalisation of an NP, the assertion / yes-no question relation and the assertion / wh-question transformation.",2012
kawahara-etal-2002-construction,http://www.lrec-conf.org/proceedings/lrec2002/pdf/302.pdf,0,,,,,,,"Construction of a Japanese Relevance-tagged Corpus. This paper describes our corpus annotation project. The annotated corpus has relevance tags which consist of predicate-argument relations, relations between nouns, and coreferences. To construct this relevance-tagged corpus, we investigated a large corpus and established the specification of the annotation. This paper shows the specification and difficult tagging problems which have emerged through the annotation so far.",Construction of a {J}apanese Relevance-tagged Corpus,"This paper describes our corpus annotation project. The annotated corpus has relevance tags which consist of predicate-argument relations, relations between nouns, and coreferences. To construct this relevance-tagged corpus, we investigated a large corpus and established the specification of the annotation. This paper shows the specification and difficult tagging problems which have emerged through the annotation so far.",Construction of a Japanese Relevance-tagged Corpus,"This paper describes our corpus annotation project. The annotated corpus has relevance tags which consist of predicate-argument relations, relations between nouns, and coreferences. To construct this relevance-tagged corpus, we investigated a large corpus and established the specification of the annotation. This paper shows the specification and difficult tagging problems which have emerged through the annotation so far.",,"Construction of a Japanese Relevance-tagged Corpus. This paper describes our corpus annotation project. The annotated corpus has relevance tags which consist of predicate-argument relations, relations between nouns, and coreferences. To construct this relevance-tagged corpus, we investigated a large corpus and established the specification of the annotation. This paper shows the specification and difficult tagging problems which have emerged through the annotation so far.",2002
oepen-etal-2005-holistic,https://aclanthology.org/2005.eamt-1.27,0,,,,,,,"Holistic regression testing for high-quality MT: some methodological and technological reflections. We review the techniques and tools used for regression testing, the primary quality assurance measure, in a multi-site research project working towards a high-quality Norwegian-English MT demonstrator. A combination of hand-constructed test suites, domain-specific corpora, specialized software tools, and somewhat rigid release procedures is used for semi-automated diagnostic and regression evaluation. Based on project-internal experience so far, we comment on a range of methodological aspects and desiderata for systematic evaluation in MT development and show analogies to evaluation work in other NLP tasks.",Holistic regression testing for high-quality {MT}: some methodological and technological reflections,"We review the techniques and tools used for regression testing, the primary quality assurance measure, in a multi-site research project working towards a high-quality Norwegian-English MT demonstrator. A combination of hand-constructed test suites, domain-specific corpora, specialized software tools, and somewhat rigid release procedures is used for semi-automated diagnostic and regression evaluation. Based on project-internal experience so far, we comment on a range of methodological aspects and desiderata for systematic evaluation in MT development and show analogies to evaluation work in other NLP tasks.",Holistic regression testing for high-quality MT: some methodological and technological reflections,"We review the techniques and tools used for regression testing, the primary quality assurance measure, in a multi-site research project working towards a high-quality Norwegian-English MT demonstrator. A combination of hand-constructed test suites, domain-specific corpora, specialized software tools, and somewhat rigid release procedures is used for semi-automated diagnostic and regression evaluation. Based on project-internal experience so far, we comment on a range of methodological aspects and desiderata for systematic evaluation in MT development and show analogies to evaluation work in other NLP tasks.",,"Holistic regression testing for high-quality MT: some methodological and technological reflections. We review the techniques and tools used for regression testing, the primary quality assurance measure, in a multi-site research project working towards a high-quality Norwegian-English MT demonstrator. A combination of hand-constructed test suites, domain-specific corpora, specialized software tools, and somewhat rigid release procedures is used for semi-automated diagnostic and regression evaluation. Based on project-internal experience so far, we comment on a range of methodological aspects and desiderata for systematic evaluation in MT development and show analogies to evaluation work in other NLP tasks.",2005
dunn-adams-2020-geographically,https://aclanthology.org/2020.lrec-1.308,0,,,,,,,"Geographically-Balanced Gigaword Corpora for 50 Language Varieties. While text corpora have been steadily increasing in overall size, even very large corpora are not designed to represent global population demographics. For example, recent work has shown that existing English gigaword corpora over-represent inner-circle varieties from the US and the UK (Dunn, 2019b). To correct implicit geographic and demographic biases, this paper uses country-level population demographics to guide the construction of gigaword web corpora. The resulting corpora explicitly match the ground-truth geographic distribution of each language, thus equally representing language users from around the world. This is important because it ensures that speakers of under-resourced language varieties (i.e., Indian English or Algerian French) are represented, both in the corpora themselves but also in derivative resources like word embeddings.",Geographically-Balanced {G}igaword Corpora for 50 Language Varieties,"While text corpora have been steadily increasing in overall size, even very large corpora are not designed to represent global population demographics. For example, recent work has shown that existing English gigaword corpora over-represent inner-circle varieties from the US and the UK (Dunn, 2019b). To correct implicit geographic and demographic biases, this paper uses country-level population demographics to guide the construction of gigaword web corpora. The resulting corpora explicitly match the ground-truth geographic distribution of each language, thus equally representing language users from around the world. This is important because it ensures that speakers of under-resourced language varieties (i.e., Indian English or Algerian French) are represented, both in the corpora themselves but also in derivative resources like word embeddings.",Geographically-Balanced Gigaword Corpora for 50 Language Varieties,"While text corpora have been steadily increasing in overall size, even very large corpora are not designed to represent global population demographics. For example, recent work has shown that existing English gigaword corpora over-represent inner-circle varieties from the US and the UK (Dunn, 2019b). To correct implicit geographic and demographic biases, this paper uses country-level population demographics to guide the construction of gigaword web corpora. The resulting corpora explicitly match the ground-truth geographic distribution of each language, thus equally representing language users from around the world. This is important because it ensures that speakers of under-resourced language varieties (i.e., Indian English or Algerian French) are represented, both in the corpora themselves but also in derivative resources like word embeddings.",,"Geographically-Balanced Gigaword Corpora for 50 Language Varieties. While text corpora have been steadily increasing in overall size, even very large corpora are not designed to represent global population demographics. For example, recent work has shown that existing English gigaword corpora over-represent inner-circle varieties from the US and the UK (Dunn, 2019b). To correct implicit geographic and demographic biases, this paper uses country-level population demographics to guide the construction of gigaword web corpora. 
The resulting corpora explicitly match the ground-truth geographic distribution of each language, thus equally representing language users from around the world. This is important because it ensures that speakers of under-resourced language varieties (i.e., Indian English or Algerian French) are represented, both in the corpora themselves but also in derivative resources like word embeddings.",2020
wang-etal-2017-crowd,https://aclanthology.org/D17-1205,0,,,,,,,"CROWD-IN-THE-LOOP: A Hybrid Approach for Annotating Semantic Roles. Crowdsourcing has proven to be an effective method for generating labeled data for a range of NLP tasks. However, multiple recent attempts of using crowdsourcing to generate gold-labeled training data for semantic role labeling (SRL) reported only modest results, indicating that SRL is perhaps too difficult a task to be effectively crowdsourced. In this paper, we postulate that while producing SRL annotation does require expert involvement in general, a large subset of SRL labeling tasks is in fact appropriate for the crowd. We present a novel workflow in which we employ a classifier to identify difficult annotation tasks and route each task either to experts or crowd workers according to their difficulties. Our experimental evaluation shows that the proposed approach reduces the workload for experts by over two-thirds, and thus significantly reduces the cost of producing SRL annotation at little loss in quality.",{CROWD}-{IN}-{THE}-{LOOP}: A Hybrid Approach for Annotating Semantic Roles,"Crowdsourcing has proven to be an effective method for generating labeled data for a range of NLP tasks. However, multiple recent attempts of using crowdsourcing to generate gold-labeled training data for semantic role labeling (SRL) reported only modest results, indicating that SRL is perhaps too difficult a task to be effectively crowdsourced. In this paper, we postulate that while producing SRL annotation does require expert involvement in general, a large subset of SRL labeling tasks is in fact appropriate for the crowd. We present a novel workflow in which we employ a classifier to identify difficult annotation tasks and route each task either to experts or crowd workers according to their difficulties. Our experimental evaluation shows that the proposed approach reduces the workload for experts by over two-thirds, and thus significantly reduces the cost of producing SRL annotation at little loss in quality.",CROWD-IN-THE-LOOP: A Hybrid Approach for Annotating Semantic Roles,"Crowdsourcing has proven to be an effective method for generating labeled data for a range of NLP tasks. However, multiple recent attempts of using crowdsourcing to generate gold-labeled training data for semantic role labeling (SRL) reported only modest results, indicating that SRL is perhaps too difficult a task to be effectively crowdsourced. In this paper, we postulate that while producing SRL annotation does require expert involvement in general, a large subset of SRL labeling tasks is in fact appropriate for the crowd. We present a novel workflow in which we employ a classifier to identify difficult annotation tasks and route each task either to experts or crowd workers according to their difficulties. Our experimental evaluation shows that the proposed approach reduces the workload for experts by over two-thirds, and thus significantly reduces the cost of producing SRL annotation at little loss in quality.",,"CROWD-IN-THE-LOOP: A Hybrid Approach for Annotating Semantic Roles. Crowdsourcing has proven to be an effective method for generating labeled data for a range of NLP tasks. However, multiple recent attempts of using crowdsourcing to generate gold-labeled training data for semantic role labeling (SRL) reported only modest results, indicating that SRL is perhaps too difficult a task to be effectively crowdsourced. 
In this paper, we postulate that while producing SRL annotation does require expert involvement in general, a large subset of SRL labeling tasks is in fact appropriate for the crowd. We present a novel workflow in which we employ a classifier to identify difficult annotation tasks and route each task either to experts or to crowd workers according to its difficulty. Our experimental evaluation shows that the proposed approach reduces the workload for experts by over two-thirds, and thus significantly reduces the cost of producing SRL annotation at little loss in quality.",2017
ashida-etal-2020-building,https://aclanthology.org/2020.aacl-srw.9,0,,,,,,,"Building a Part-of-Speech Tagged Corpus for Drenjongke (Bhutia). This research paper reports on the generation of the first Drenjongke corpus based on texts taken from a phrase book for beginners, written in the Tibetan script. A corpus of sentences was created after correcting errors in the text scanned through optical character reading (OCR). A total of 34 Part-of-Speech (PoS) tags were defined based on manual annotation performed by the three authors, one of whom is a native speaker of Drenjongke. The first corpus of the Drenjongke language comprises 275 sentences and 1379 tokens, which we plan to expand with other materials to promote further studies of this language.",Building a Part-of-Speech Tagged Corpus for Drenjongke (Bhutia),"This research paper reports on the generation of the first Drenjongke corpus based on texts taken from a phrase book for beginners, written in the Tibetan script. A corpus of sentences was created after correcting errors in the text scanned through optical character reading (OCR). A total of 34 Part-of-Speech (PoS) tags were defined based on manual annotation performed by the three authors, one of whom is a native speaker of Drenjongke. The first corpus of the Drenjongke language comprises 275 sentences and 1379 tokens, which we plan to expand with other materials to promote further studies of this language.",Building a Part-of-Speech Tagged Corpus for Drenjongke (Bhutia),"This research paper reports on the generation of the first Drenjongke corpus based on texts taken from a phrase book for beginners, written in the Tibetan script. A corpus of sentences was created after correcting errors in the text scanned through optical character reading (OCR). A total of 34 Part-of-Speech (PoS) tags were defined based on manual annotation performed by the three authors, one of whom is a native speaker of Drenjongke. The first corpus of the Drenjongke language comprises 275 sentences and 1379 tokens, which we plan to expand with other materials to promote further studies of this language.","The authors are grateful to Jin-Dong Kim and three anonymous reviewers for their feedback on the paper, Mamoru Komachi for insightful discussions regarding the annotation process, Arseny Tolmachev for the post-acceptance mentorship, and the AACL-IJCNLP SRW committee members for providing support of various kinds. Of course, all remaining errors are of our own. Thanks also go to Jigmee Wangchuk Bhutia and Lopen Karma Gyaltsen Drenjongpo for allowing us to edit and publish the contents of the phrase book.","Building a Part-of-Speech Tagged Corpus for Drenjongke (Bhutia). This research paper reports on the generation of the first Drenjongke corpus based on texts taken from a phrase book for beginners, written in the Tibetan script. A corpus of sentences was created after correcting errors in the text scanned through optical character reading (OCR). A total of 34 Part-of-Speech (PoS) tags were defined based on manual annotation performed by the three authors, one of whom is a native speaker of Drenjongke. The first corpus of the Drenjongke language comprises 275 sentences and 1379 tokens, which we plan to expand with other materials to promote further studies of this language.",2020
kuribayashi-etal-2020-language,https://aclanthology.org/2020.acl-main.47,0,,,,,,,"Language Models as an Alternative Evaluator of Word Order Hypotheses: A Case Study in Japanese. We examine a methodology using neural language models (LMs) for analyzing the word order of language. This LM-based method has the potential to overcome the difficulties existing methods face, such as the propagation of preprocessor errors in count-based methods. In this study, we explore whether the LMbased method is valid for analyzing the word order. As a case study, this study focuses on Japanese due to its complex and flexible word order. To validate the LM-based method, we test (i) parallels between LMs and human word order preference, and (ii) consistency of the results obtained using the LM-based method with previous linguistic studies. Through our experiments, we tentatively conclude that LMs display sufficient word order knowledge for usage as an analysis tool. Finally, using the LMbased method, we demonstrate the relationship between the canonical word order and topicalization, which had yet to be analyzed by largescale experiments.",Language Models as an Alternative Evaluator of Word Order Hypotheses: A Case Study in {J}apanese,"We examine a methodology using neural language models (LMs) for analyzing the word order of language. This LM-based method has the potential to overcome the difficulties existing methods face, such as the propagation of preprocessor errors in count-based methods. In this study, we explore whether the LMbased method is valid for analyzing the word order. As a case study, this study focuses on Japanese due to its complex and flexible word order. To validate the LM-based method, we test (i) parallels between LMs and human word order preference, and (ii) consistency of the results obtained using the LM-based method with previous linguistic studies. Through our experiments, we tentatively conclude that LMs display sufficient word order knowledge for usage as an analysis tool. Finally, using the LMbased method, we demonstrate the relationship between the canonical word order and topicalization, which had yet to be analyzed by largescale experiments.",Language Models as an Alternative Evaluator of Word Order Hypotheses: A Case Study in Japanese,"We examine a methodology using neural language models (LMs) for analyzing the word order of language. This LM-based method has the potential to overcome the difficulties existing methods face, such as the propagation of preprocessor errors in count-based methods. In this study, we explore whether the LMbased method is valid for analyzing the word order. As a case study, this study focuses on Japanese due to its complex and flexible word order. To validate the LM-based method, we test (i) parallels between LMs and human word order preference, and (ii) consistency of the results obtained using the LM-based method with previous linguistic studies. Through our experiments, we tentatively conclude that LMs display sufficient word order knowledge for usage as an analysis tool. Finally, using the LMbased method, we demonstrate the relationship between the canonical word order and topicalization, which had yet to be analyzed by largescale experiments.",,"Language Models as an Alternative Evaluator of Word Order Hypotheses: A Case Study in Japanese. We examine a methodology using neural language models (LMs) for analyzing the word order of language. 
This LM-based method has the potential to overcome the difficulties existing methods face, such as the propagation of preprocessor errors in count-based methods. In this study, we explore whether the LM-based method is valid for analyzing the word order. As a case study, this study focuses on Japanese due to its complex and flexible word order. To validate the LM-based method, we test (i) parallels between LMs and human word order preference, and (ii) consistency of the results obtained using the LM-based method with previous linguistic studies. Through our experiments, we tentatively conclude that LMs display sufficient word order knowledge for usage as an analysis tool. Finally, using the LM-based method, we demonstrate the relationship between the canonical word order and topicalization, which had yet to be analyzed by large-scale experiments.",2020
pruksachatkun-etal-2020-intermediate,https://aclanthology.org/2020.acl-main.467,0,,,,,,,"Intermediate-Task Transfer Learning with Pretrained Language Models: When and Why Does It Work?. While pretrained models such as BERT have shown large gains across natural language understanding tasks, their performance can be improved by further training the model on a data-rich intermediate task, before fine-tuning it on a target task. However, it is still poorly understood when and why intermediate-task training is beneficial for a given target task. To investigate this, we perform a large-scale study on the pretrained RoBERTa model with 110 intermediate-target task combinations. We further evaluate all trained models with 25 probing tasks meant to reveal the specific skills that drive transfer. We observe that intermediate tasks requiring high-level inference and reasoning abilities tend to work best. We also observe that target task performance is strongly correlated with higher-level abilities such as coreference resolution. However, we fail to observe more granular correlations between probing and target task performance, highlighting the need for further work on broad-coverage probing benchmarks. We also observe evidence that the forgetting of knowledge learned during pretraining may limit our analysis, highlighting the need for further work on transfer learning methods in these settings.",Intermediate-Task Transfer Learning with Pretrained Language Models: When and Why Does It Work?,"While pretrained models such as BERT have shown large gains across natural language understanding tasks, their performance can be improved by further training the model on a data-rich intermediate task, before fine-tuning it on a target task. However, it is still poorly understood when and why intermediate-task training is beneficial for a given target task. To investigate this, we perform a large-scale study on the pretrained RoBERTa model with 110 intermediate-target task combinations. We further evaluate all trained models with 25 probing tasks meant to reveal the specific skills that drive transfer. We observe that intermediate tasks requiring high-level inference and reasoning abilities tend to work best. We also observe that target task performance is strongly correlated with higher-level abilities such as coreference resolution. However, we fail to observe more granular correlations between probing and target task performance, highlighting the need for further work on broad-coverage probing benchmarks. We also observe evidence that the forgetting of knowledge learned during pretraining may limit our analysis, highlighting the need for further work on transfer learning methods in these settings.",Intermediate-Task Transfer Learning with Pretrained Language Models: When and Why Does It Work?,"While pretrained models such as BERT have shown large gains across natural language understanding tasks, their performance can be improved by further training the model on a data-rich intermediate task, before fine-tuning it on a target task. However, it is still poorly understood when and why intermediate-task training is beneficial for a given target task. To investigate this, we perform a large-scale study on the pretrained RoBERTa model with 110 intermediate-target task combinations. We further evaluate all trained models with 25 probing tasks meant to reveal the specific skills that drive transfer. We observe that intermediate tasks requiring high-level inference and reasoning abilities tend to work best. 
We also observe that target task performance is strongly correlated with higher-level abilities such as coreference resolution. However, we fail to observe more granular correlations between probing and target task performance, highlighting the need for further work on broad-coverage probing benchmarks. We also observe evidence that the forgetting of knowledge learned during pretraining may limit our analysis, highlighting the need for further work on transfer learning methods in these settings.","This project has benefited from support to SB by Eric and Wendy Schmidt (made by recommendation of the Schmidt Futures program), by Samsung Research (under the project Improving Deep Learning using Latent Structure), by Intuit, Inc., and by NVIDIA Corporation (with the donation of a Titan V GPU).","Intermediate-Task Transfer Learning with Pretrained Language Models: When and Why Does It Work?. While pretrained models such as BERT have shown large gains across natural language understanding tasks, their performance can be improved by further training the model on a data-rich intermediate task, before fine-tuning it on a target task. However, it is still poorly understood when and why intermediate-task training is beneficial for a given target task. To investigate this, we perform a large-scale study on the pretrained RoBERTa model with 110 intermediate-target task combinations. We further evaluate all trained models with 25 probing tasks meant to reveal the specific skills that drive transfer. We observe that intermediate tasks requiring high-level inference and reasoning abilities tend to work best. We also observe that target task performance is strongly correlated with higher-level abilities such as coreference resolution. However, we fail to observe more granular correlations between probing and target task performance, highlighting the need for further work on broad-coverage probing benchmarks. We also observe evidence that the forgetting of knowledge learned during pretraining may limit our analysis, highlighting the need for further work on transfer learning methods in these settings.",2020
xu-etal-2002-domain,http://www.lrec-conf.org/proceedings/lrec2002/pdf/351.pdf,0,,,,,,,"A Domain Adaptive Approach to Automatic Acquisition of Domain Relevant Terms and their Relations with Bootstrapping. In this paper, we present an unsupervised hybrid text-mining approach to automatic acquisition of domain relevant terms and their relations. We deploy the TFIDF-based term classification method to acquire domain relevant single-word terms. Further, we apply two strategies in order to learn lexico-syntatic patterns which indicate paradigmatic and domain relevant syntagmatic relations between the extracted terms. The first one uses an existing ontology as initial knowledge for learning lexico-syntactic patterns, while the second is based on different collocation acquisition methods to deal with the free-word order languages like German. This domain-adaptive method yields good results even when trained on relatively small training corpora. It can be applied to different real-world applications, which need domain-relevant ontology, for example , information extraction, information retrieval or text classification.",A Domain Adaptive Approach to Automatic Acquisition of Domain Relevant Terms and their Relations with Bootstrapping,"In this paper, we present an unsupervised hybrid text-mining approach to automatic acquisition of domain relevant terms and their relations. We deploy the TFIDF-based term classification method to acquire domain relevant single-word terms. Further, we apply two strategies in order to learn lexico-syntatic patterns which indicate paradigmatic and domain relevant syntagmatic relations between the extracted terms. The first one uses an existing ontology as initial knowledge for learning lexico-syntactic patterns, while the second is based on different collocation acquisition methods to deal with the free-word order languages like German. This domain-adaptive method yields good results even when trained on relatively small training corpora. It can be applied to different real-world applications, which need domain-relevant ontology, for example , information extraction, information retrieval or text classification.",A Domain Adaptive Approach to Automatic Acquisition of Domain Relevant Terms and their Relations with Bootstrapping,"In this paper, we present an unsupervised hybrid text-mining approach to automatic acquisition of domain relevant terms and their relations. We deploy the TFIDF-based term classification method to acquire domain relevant single-word terms. Further, we apply two strategies in order to learn lexico-syntatic patterns which indicate paradigmatic and domain relevant syntagmatic relations between the extracted terms. The first one uses an existing ontology as initial knowledge for learning lexico-syntactic patterns, while the second is based on different collocation acquisition methods to deal with the free-word order languages like German. This domain-adaptive method yields good results even when trained on relatively small training corpora. It can be applied to different real-world applications, which need domain-relevant ontology, for example , information extraction, information retrieval or text classification.",,"A Domain Adaptive Approach to Automatic Acquisition of Domain Relevant Terms and their Relations with Bootstrapping. In this paper, we present an unsupervised hybrid text-mining approach to automatic acquisition of domain relevant terms and their relations. 
We deploy the TFIDF-based term classification method to acquire domain-relevant single-word terms. Further, we apply two strategies in order to learn lexico-syntactic patterns which indicate paradigmatic and domain-relevant syntagmatic relations between the extracted terms. The first one uses an existing ontology as initial knowledge for learning lexico-syntactic patterns, while the second is based on different collocation acquisition methods to deal with free-word-order languages like German. This domain-adaptive method yields good results even when trained on relatively small training corpora. It can be applied to different real-world applications which need a domain-relevant ontology, for example, information extraction, information retrieval, or text classification.",2002
feldman-etal-2006-cross,http://www.lrec-conf.org/proceedings/lrec2006/pdf/554_pdf.pdf,0,,,,,,,"A Cross-language Approach to Rapid Creation of New Morpho-syntactically Annotated Resources. We take a novel approach to rapid, low-cost development of morpho-syntactically annotated resources without using parallel corpora or bilingual lexicons. The overall research question is how to exploit language resources and properties to facilitate and automate the creation of morphologically annotated corpora for new languages. This portability issue is especially relevant to minority languages, for which such resources are likely to remain unavailable in the foreseeable future. We compare the performance of our system on languages that belong to different language families (Romance vs. Slavic), as well as different language pairs within the same language family (Portuguese via Spanish vs. Catalan via Spanish). We show that across language families, the most difficult category is the category of nominals (the noun homonymy is challenging for morphological analysis and the order variation of adjectives within a sentence makes it challenging to create a realiable model), whereas different language families present different challenges with respect to their morpho-syntactic descriptions: for the Slavic languages, case is the most challenging category; for the Romance languages, gender is more challenging than case. In addition, we present an alternative evaluation metric for our system, where we measure how much human labor will be needed to convert the result of our tagging to a high precision annotated resource.",A Cross-language Approach to Rapid Creation of New Morpho-syntactically Annotated Resources,"We take a novel approach to rapid, low-cost development of morpho-syntactically annotated resources without using parallel corpora or bilingual lexicons. The overall research question is how to exploit language resources and properties to facilitate and automate the creation of morphologically annotated corpora for new languages. This portability issue is especially relevant to minority languages, for which such resources are likely to remain unavailable in the foreseeable future. We compare the performance of our system on languages that belong to different language families (Romance vs. Slavic), as well as different language pairs within the same language family (Portuguese via Spanish vs. Catalan via Spanish). We show that across language families, the most difficult category is the category of nominals (the noun homonymy is challenging for morphological analysis and the order variation of adjectives within a sentence makes it challenging to create a realiable model), whereas different language families present different challenges with respect to their morpho-syntactic descriptions: for the Slavic languages, case is the most challenging category; for the Romance languages, gender is more challenging than case. In addition, we present an alternative evaluation metric for our system, where we measure how much human labor will be needed to convert the result of our tagging to a high precision annotated resource.",A Cross-language Approach to Rapid Creation of New Morpho-syntactically Annotated Resources,"We take a novel approach to rapid, low-cost development of morpho-syntactically annotated resources without using parallel corpora or bilingual lexicons. 
The overall research question is how to exploit language resources and properties to facilitate and automate the creation of morphologically annotated corpora for new languages. This portability issue is especially relevant to minority languages, for which such resources are likely to remain unavailable in the foreseeable future. We compare the performance of our system on languages that belong to different language families (Romance vs. Slavic), as well as different language pairs within the same language family (Portuguese via Spanish vs. Catalan via Spanish). We show that across language families, the most difficult category is the category of nominals (the noun homonymy is challenging for morphological analysis and the order variation of adjectives within a sentence makes it challenging to create a reliable model), whereas different language families present different challenges with respect to their morpho-syntactic descriptions: for the Slavic languages, case is the most challenging category; for the Romance languages, gender is more challenging than case. In addition, we present an alternative evaluation metric for our system, where we measure how much human labor will be needed to convert the result of our tagging to a high-precision annotated resource.","We would like to thank Maria das Graças Volpe Nunes, Sandra Maria Aluísio, and Ricardo Hasegawa for giving us access to the NILC corpus annotated with PALAVRAS and to Carlos Rodríguez Penagos for letting us use the CLiC-TALP corpus.","A Cross-language Approach to Rapid Creation of New Morpho-syntactically Annotated Resources. We take a novel approach to rapid, low-cost development of morpho-syntactically annotated resources without using parallel corpora or bilingual lexicons. The overall research question is how to exploit language resources and properties to facilitate and automate the creation of morphologically annotated corpora for new languages. This portability issue is especially relevant to minority languages, for which such resources are likely to remain unavailable in the foreseeable future. We compare the performance of our system on languages that belong to different language families (Romance vs. Slavic), as well as different language pairs within the same language family (Portuguese via Spanish vs. Catalan via Spanish). We show that across language families, the most difficult category is the category of nominals (the noun homonymy is challenging for morphological analysis and the order variation of adjectives within a sentence makes it challenging to create a reliable model), whereas different language families present different challenges with respect to their morpho-syntactic descriptions: for the Slavic languages, case is the most challenging category; for the Romance languages, gender is more challenging than case. In addition, we present an alternative evaluation metric for our system, where we measure how much human labor will be needed to convert the result of our tagging to a high-precision annotated resource.",2006
biemann-etal-2008-asv,http://www.lrec-conf.org/proceedings/lrec2008/pdf/447_paper.pdf,0,,,,,,,"ASV Toolbox: a Modular Collection of Language Exploration Tools. ASV Toolbox is a modular collection of tools for the exploration of written language data both for scientific and educational purposes. It includes modules that operate on word lists or texts and allow to perform various linguistic annotation, classification and clustering tasks, including language detection, POS-tagging, base form reduction, named entity recognition, and terminology extraction. On a more abstract level, the algorithms deal with various kinds of word similarity, using pattern-based and statistical approaches. The collection can be used to work on large real-world data sets as well as for studying the underlying algorithms. Each module of the ASV Toolbox is designed to work either on a plain text files or with a connection to a MySQL database. While it is especially designed to work with corpora of the Leipzig Corpora Collection, it can easily be adapted to other sources.",{ASV} Toolbox: a Modular Collection of Language Exploration Tools,"ASV Toolbox is a modular collection of tools for the exploration of written language data both for scientific and educational purposes. It includes modules that operate on word lists or texts and allow to perform various linguistic annotation, classification and clustering tasks, including language detection, POS-tagging, base form reduction, named entity recognition, and terminology extraction. On a more abstract level, the algorithms deal with various kinds of word similarity, using pattern-based and statistical approaches. The collection can be used to work on large real-world data sets as well as for studying the underlying algorithms. Each module of the ASV Toolbox is designed to work either on a plain text files or with a connection to a MySQL database. While it is especially designed to work with corpora of the Leipzig Corpora Collection, it can easily be adapted to other sources.",ASV Toolbox: a Modular Collection of Language Exploration Tools,"ASV Toolbox is a modular collection of tools for the exploration of written language data both for scientific and educational purposes. It includes modules that operate on word lists or texts and allow to perform various linguistic annotation, classification and clustering tasks, including language detection, POS-tagging, base form reduction, named entity recognition, and terminology extraction. On a more abstract level, the algorithms deal with various kinds of word similarity, using pattern-based and statistical approaches. The collection can be used to work on large real-world data sets as well as for studying the underlying algorithms. Each module of the ASV Toolbox is designed to work either on a plain text files or with a connection to a MySQL database. While it is especially designed to work with corpora of the Leipzig Corpora Collection, it can easily be adapted to other sources.",,"ASV Toolbox: a Modular Collection of Language Exploration Tools. ASV Toolbox is a modular collection of tools for the exploration of written language data both for scientific and educational purposes. It includes modules that operate on word lists or texts and allow to perform various linguistic annotation, classification and clustering tasks, including language detection, POS-tagging, base form reduction, named entity recognition, and terminology extraction. 
On a more abstract level, the algorithms deal with various kinds of word similarity, using pattern-based and statistical approaches. The collection can be used to work on large real-world data sets as well as for studying the underlying algorithms. Each module of the ASV Toolbox is designed to work either on plain text files or with a connection to a MySQL database. While it is especially designed to work with corpora of the Leipzig Corpora Collection, it can easily be adapted to other sources.",2008
ghannay-etal-2016-evaluation,https://aclanthology.org/W16-2511,0,,,,,,,"Evaluation of acoustic word embeddings. Recently, researchers in speech recognition have started to reconsider using whole words as the basic modeling unit, instead of phonetic units. These systems rely on a function that embeds an arbitrary or fixed dimensional speech segments to a vector in a fixed-dimensional space, named acoustic word embedding. Thus, speech segments of words that sound similarly will be projected in a close area in a continuous space. This paper focuses on the evaluation of acoustic word embeddings. We propose two approaches to evaluate the intrinsic performances of acoustic word embeddings in comparison to orthographic representations in order to evaluate whether they capture discriminative phonetic information. Since French language is targeted in experiments, a particular focus is made on homophone words.",Evaluation of acoustic word embeddings,"Recently, researchers in speech recognition have started to reconsider using whole words as the basic modeling unit, instead of phonetic units. These systems rely on a function that embeds an arbitrary or fixed dimensional speech segments to a vector in a fixed-dimensional space, named acoustic word embedding. Thus, speech segments of words that sound similarly will be projected in a close area in a continuous space. This paper focuses on the evaluation of acoustic word embeddings. We propose two approaches to evaluate the intrinsic performances of acoustic word embeddings in comparison to orthographic representations in order to evaluate whether they capture discriminative phonetic information. Since French language is targeted in experiments, a particular focus is made on homophone words.",Evaluation of acoustic word embeddings,"Recently, researchers in speech recognition have started to reconsider using whole words as the basic modeling unit, instead of phonetic units. These systems rely on a function that embeds an arbitrary or fixed dimensional speech segments to a vector in a fixed-dimensional space, named acoustic word embedding. Thus, speech segments of words that sound similarly will be projected in a close area in a continuous space. This paper focuses on the evaluation of acoustic word embeddings. We propose two approaches to evaluate the intrinsic performances of acoustic word embeddings in comparison to orthographic representations in order to evaluate whether they capture discriminative phonetic information. Since French language is targeted in experiments, a particular focus is made on homophone words.","This work was partially funded by the European Commission through the EUMSSI project, under the contract number 611057, in the framework of the FP7-ICT-2013-10 call, by the French National Research Agency (ANR) through the VERA project, under the contract number ANR-12-BS02-006-01, and by the Région Pays de la Loire.","Evaluation of acoustic word embeddings. Recently, researchers in speech recognition have started to reconsider using whole words as the basic modeling unit, instead of phonetic units. These systems rely on a function that embeds an arbitrary or fixed dimensional speech segments to a vector in a fixed-dimensional space, named acoustic word embedding. Thus, speech segments of words that sound similarly will be projected in a close area in a continuous space. This paper focuses on the evaluation of acoustic word embeddings. 
We propose two approaches to evaluate the intrinsic performance of acoustic word embeddings in comparison to orthographic representations in order to evaluate whether they capture discriminative phonetic information. Since the French language is targeted in the experiments, a particular focus is placed on homophone words.",2016
sloetjes-wittenburg-2008-annotation,http://www.lrec-conf.org/proceedings/lrec2008/pdf/208_paper.pdf,0,,,,,,,"Annotation by Category: ELAN and ISO DCR. The Data Category Registry is one of the ISO initiatives towards the establishment of standards for Language Resource management, creation and coding. Successful application of the DCR depends on the availability of tools that can interact with it. This paper describes the first steps that have been taken to provide users of the multimedia annotation tool ELAN, with the means to create references from tiers and annotations to data categories defined in the ISO Data Category Registry. It first gives a brief description of the capabilities of ELAN and the structure of the documents it creates. After a concise overview of the goals and current state of the ISO DCR infrastructure, a description is given of how the preliminary connectivity with the DCR is implemented in ELAN.",Annotation by Category: {ELAN} and {ISO} {DCR},"The Data Category Registry is one of the ISO initiatives towards the establishment of standards for Language Resource management, creation and coding. Successful application of the DCR depends on the availability of tools that can interact with it. This paper describes the first steps that have been taken to provide users of the multimedia annotation tool ELAN, with the means to create references from tiers and annotations to data categories defined in the ISO Data Category Registry. It first gives a brief description of the capabilities of ELAN and the structure of the documents it creates. After a concise overview of the goals and current state of the ISO DCR infrastructure, a description is given of how the preliminary connectivity with the DCR is implemented in ELAN.",Annotation by Category: ELAN and ISO DCR,"The Data Category Registry is one of the ISO initiatives towards the establishment of standards for Language Resource management, creation and coding. Successful application of the DCR depends on the availability of tools that can interact with it. This paper describes the first steps that have been taken to provide users of the multimedia annotation tool ELAN, with the means to create references from tiers and annotations to data categories defined in the ISO Data Category Registry. It first gives a brief description of the capabilities of ELAN and the structure of the documents it creates. After a concise overview of the goals and current state of the ISO DCR infrastructure, a description is given of how the preliminary connectivity with the DCR is implemented in ELAN.",,"Annotation by Category: ELAN and ISO DCR. The Data Category Registry is one of the ISO initiatives towards the establishment of standards for Language Resource management, creation and coding. Successful application of the DCR depends on the availability of tools that can interact with it. This paper describes the first steps that have been taken to provide users of the multimedia annotation tool ELAN, with the means to create references from tiers and annotations to data categories defined in the ISO Data Category Registry. It first gives a brief description of the capabilities of ELAN and the structure of the documents it creates. After a concise overview of the goals and current state of the ISO DCR infrastructure, a description is given of how the preliminary connectivity with the DCR is implemented in ELAN.",2008
wan-etal-2018-ibm,https://aclanthology.org/K18-2009,0,,,,,,,"IBM Research at the CoNLL 2018 Shared Task on Multilingual Parsing. This paper presents the IBM Research AI submission to the CoNLL 2018 Shared Task on Parsing Universal Dependencies. Our system implements a new joint transition-based parser, based on the Stack-LSTM framework and the Arc-Standard algorithm, that handles tokenization, part-of-speech tagging, morphological tagging and dependency parsing in one single model. By leveraging a combination of character-based modeling of words and recursive composition of partially built linguistic structures we qualified 13th overall and 7th in low resource. We also present a new sentence segmentation neural architecture based on Stack-LSTMs that was the 4th best overall.",{IBM} Research at the {C}o{NLL} 2018 Shared Task on Multilingual Parsing,"This paper presents the IBM Research AI submission to the CoNLL 2018 Shared Task on Parsing Universal Dependencies. Our system implements a new joint transition-based parser, based on the Stack-LSTM framework and the Arc-Standard algorithm, that handles tokenization, part-of-speech tagging, morphological tagging and dependency parsing in one single model. By leveraging a combination of character-based modeling of words and recursive composition of partially built linguistic structures we qualified 13th overall and 7th in low resource. We also present a new sentence segmentation neural architecture based on Stack-LSTMs that was the 4th best overall.",IBM Research at the CoNLL 2018 Shared Task on Multilingual Parsing,"This paper presents the IBM Research AI submission to the CoNLL 2018 Shared Task on Parsing Universal Dependencies. Our system implements a new joint transition-based parser, based on the Stack-LSTM framework and the Arc-Standard algorithm, that handles tokenization, part-of-speech tagging, morphological tagging and dependency parsing in one single model. By leveraging a combination of character-based modeling of words and recursive composition of partially built linguistic structures we qualified 13th overall and 7th in low resource. We also present a new sentence segmentation neural architecture based on Stack-LSTMs that was the 4th best overall.","We thank Radu Florian, Todd Ward and Salim Roukos for useful discussions.","IBM Research at the CoNLL 2018 Shared Task on Multilingual Parsing. This paper presents the IBM Research AI submission to the CoNLL 2018 Shared Task on Parsing Universal Dependencies. Our system implements a new joint transition-based parser, based on the Stack-LSTM framework and the Arc-Standard algorithm, that handles tokenization, part-of-speech tagging, morphological tagging and dependency parsing in one single model. By leveraging a combination of character-based modeling of words and recursive composition of partially built linguistic structures we qualified 13th overall and 7th in low resource. We also present a new sentence segmentation neural architecture based on Stack-LSTMs that was the 4th best overall.",2018
vincze-etal-2010-hungarian,http://www.lrec-conf.org/proceedings/lrec2010/pdf/465_Paper.pdf,0,,,,,,,"Hungarian Dependency Treebank. Herein, we present the process of developing the first Hungarian Dependency TreeBank. First, short references are made to dependency grammars we considered important in the development of our Treebank. Second, mention is made of existing dependency corpora for other languages. Third, we present the steps of converting the Szeged Treebank into dependency-tree format: from the originally phrase-structured treebank, we produced dependency trees by automatic conversion, checked and corrected them thereby creating the first manually annotated dependency corpus for Hungarian. We also go into detail about the two major sets of problems, i.e. coordination and predicative nouns and adjectives. Fourth, we give statistics on the treebank: by now, we have completed the annotation of business news, newspaper articles, legal texts and texts in informatics, at the same time, we are planning to convert the entire corpus into dependency tree format. Finally, we give some hints on the applicability of the system: the present database may be utilized-among others-in information extraction and machine translation as well.",{H}ungarian Dependency Treebank,"Herein, we present the process of developing the first Hungarian Dependency TreeBank. First, short references are made to dependency grammars we considered important in the development of our Treebank. Second, mention is made of existing dependency corpora for other languages. Third, we present the steps of converting the Szeged Treebank into dependency-tree format: from the originally phrase-structured treebank, we produced dependency trees by automatic conversion, checked and corrected them thereby creating the first manually annotated dependency corpus for Hungarian. We also go into detail about the two major sets of problems, i.e. coordination and predicative nouns and adjectives. Fourth, we give statistics on the treebank: by now, we have completed the annotation of business news, newspaper articles, legal texts and texts in informatics, at the same time, we are planning to convert the entire corpus into dependency tree format. Finally, we give some hints on the applicability of the system: the present database may be utilized-among others-in information extraction and machine translation as well.",Hungarian Dependency Treebank,"Herein, we present the process of developing the first Hungarian Dependency TreeBank. First, short references are made to dependency grammars we considered important in the development of our Treebank. Second, mention is made of existing dependency corpora for other languages. Third, we present the steps of converting the Szeged Treebank into dependency-tree format: from the originally phrase-structured treebank, we produced dependency trees by automatic conversion, checked and corrected them thereby creating the first manually annotated dependency corpus for Hungarian. We also go into detail about the two major sets of problems, i.e. coordination and predicative nouns and adjectives. Fourth, we give statistics on the treebank: by now, we have completed the annotation of business news, newspaper articles, legal texts and texts in informatics, at the same time, we are planning to convert the entire corpus into dependency tree format. 
Finally, we give some hints on the applicability of the system: the present database may be utilized, among others, in information extraction and machine translation as well.",The research was in part supported by NKTH within the framework of TUDORKA and MASZEKER projects (Ányos Jedlik programs).,"Hungarian Dependency Treebank. Herein, we present the process of developing the first Hungarian Dependency TreeBank. First, short references are made to dependency grammars we considered important in the development of our Treebank. Second, mention is made of existing dependency corpora for other languages. Third, we present the steps of converting the Szeged Treebank into dependency-tree format: from the originally phrase-structured treebank, we produced dependency trees by automatic conversion, checked and corrected them thereby creating the first manually annotated dependency corpus for Hungarian. We also go into detail about the two major sets of problems, i.e. coordination and predicative nouns and adjectives. Fourth, we give statistics on the treebank: by now, we have completed the annotation of business news, newspaper articles, legal texts and texts in informatics, at the same time, we are planning to convert the entire corpus into dependency tree format. Finally, we give some hints on the applicability of the system: the present database may be utilized, among others, in information extraction and machine translation as well.",2010
marasovic-etal-2020-natural,https://aclanthology.org/2020.findings-emnlp.253,0,,,,,,,"Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs. Natural language rationales could provide intuitive, higher-level explanations that are easily understandable by humans, complementing the more broadly studied lower-level explanations based on gradients or attention weights. We present the first study focused on generating natural language rationales across several complex visual reasoning tasks: visual commonsense reasoning, visual-textual entailment, and visual question answering. The key challenge of accurate rationalization is comprehensive image understanding at all levels: not just their explicit content at the pixel level, but their contextual contents at the semantic and pragmatic levels. We present RATIONALE VT TRANSFORMER, an integrated model that learns to generate free-text rationales by combining pretrained language models with object recognition, grounded visual semantic frames, and visual commonsense graphs. Our experiments show that free-text rationalization is a promising research direction to complement model interpretability for complex visual-textual reasoning tasks. In addition, we find that integration of richer semantic and pragmatic visual features improves visual fidelity of rationales.",Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs,"Natural language rationales could provide intuitive, higher-level explanations that are easily understandable by humans, complementing the more broadly studied lower-level explanations based on gradients or attention weights. We present the first study focused on generating natural language rationales across several complex visual reasoning tasks: visual commonsense reasoning, visual-textual entailment, and visual question answering. The key challenge of accurate rationalization is comprehensive image understanding at all levels: not just their explicit content at the pixel level, but their contextual contents at the semantic and pragmatic levels. We present RATIONALE VT TRANSFORMER, an integrated model that learns to generate free-text rationales by combining pretrained language models with object recognition, grounded visual semantic frames, and visual commonsense graphs. Our experiments show that free-text rationalization is a promising research direction to complement model interpretability for complex visual-textual reasoning tasks. In addition, we find that integration of richer semantic and pragmatic visual features improves visual fidelity of rationales.",Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs,"Natural language rationales could provide intuitive, higher-level explanations that are easily understandable by humans, complementing the more broadly studied lower-level explanations based on gradients or attention weights. We present the first study focused on generating natural language rationales across several complex visual reasoning tasks: visual commonsense reasoning, visual-textual entailment, and visual question answering. The key challenge of accurate rationalization is comprehensive image understanding at all levels: not just their explicit content at the pixel level, but their contextual contents at the semantic and pragmatic levels. 
We present RATIONALE VT TRANSFORMER, an integrated model that learns to generate free-text rationales by combining pretrained language models with object recognition, grounded visual semantic frames, and visual commonsense graphs. Our experiments show that free-text rationalization is a promising research direction to complement model interpretability for complex visual-textual reasoning tasks. In addition, we find that integration of richer semantic and pragmatic visual features improves visual fidelity of rationales.","The authors thank Sarah Pratt for her assistance with the grounded situation recognizer, Amandalynne Paullada, members of the AllenNLP team, and anonymous reviewers for helpful feedback.This research was supported in part by NSF (IIS1524371, IIS-1714566), DARPA under the CwC program through the ARO (W911NF-15-1-0543), DARPA under the MCS program through NIWC Pacific (N66001-19-2-4031), and gifts from Allen Institute for Artificial Intelligence.","Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs. Natural language rationales could provide intuitive, higher-level explanations that are easily understandable by humans, complementing the more broadly studied lower-level explanations based on gradients or attention weights. We present the first study focused on generating natural language rationales across several complex visual reasoning tasks: visual commonsense reasoning, visual-textual entailment, and visual question answering. The key challenge of accurate rationalization is comprehensive image understanding at all levels: not just their explicit content at the pixel level, but their contextual contents at the semantic and pragmatic levels. We present RATIONALE VT TRANSFORMER, an integrated model that learns to generate free-text rationales by combining pretrained language models with object recognition, grounded visual semantic frames, and visual commonsense graphs. Our experiments show that free-text rationalization is a promising research direction to complement model interpretability for complex visual-textual reasoning tasks. In addition, we find that integration of richer semantic and pragmatic visual features improves visual fidelity of rationales.",2020
rei-etal-2018-scoring,https://aclanthology.org/P18-2101,0,,,,,,,"Scoring Lexical Entailment with a Supervised Directional Similarity Network. We present the Supervised Directional Similarity Network (SDSN), a novel neural architecture for learning task-specific transformation functions on top of generalpurpose word embeddings. Relying on only a limited amount of supervision from task-specific scores on a subset of the vocabulary, our architecture is able to generalise and transform a general-purpose distributional vector space to model the relation of lexical entailment. Experiments show excellent performance on scoring graded lexical entailment, raising the stateof-the-art on the HyperLex dataset by approximately 25%.",Scoring Lexical Entailment with a Supervised Directional Similarity Network,"We present the Supervised Directional Similarity Network (SDSN), a novel neural architecture for learning task-specific transformation functions on top of generalpurpose word embeddings. Relying on only a limited amount of supervision from task-specific scores on a subset of the vocabulary, our architecture is able to generalise and transform a general-purpose distributional vector space to model the relation of lexical entailment. Experiments show excellent performance on scoring graded lexical entailment, raising the stateof-the-art on the HyperLex dataset by approximately 25%.",Scoring Lexical Entailment with a Supervised Directional Similarity Network,"We present the Supervised Directional Similarity Network (SDSN), a novel neural architecture for learning task-specific transformation functions on top of generalpurpose word embeddings. Relying on only a limited amount of supervision from task-specific scores on a subset of the vocabulary, our architecture is able to generalise and transform a general-purpose distributional vector space to model the relation of lexical entailment. Experiments show excellent performance on scoring graded lexical entailment, raising the stateof-the-art on the HyperLex dataset by approximately 25%.",Daniela Gerz and Ivan Vulić are supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909). We would like to thank the NVIDIA Corporation for the donation of the Titan GPU that was used for this research.,"Scoring Lexical Entailment with a Supervised Directional Similarity Network. We present the Supervised Directional Similarity Network (SDSN), a novel neural architecture for learning task-specific transformation functions on top of generalpurpose word embeddings. Relying on only a limited amount of supervision from task-specific scores on a subset of the vocabulary, our architecture is able to generalise and transform a general-purpose distributional vector space to model the relation of lexical entailment. Experiments show excellent performance on scoring graded lexical entailment, raising the stateof-the-art on the HyperLex dataset by approximately 25%.",2018
lai-pustejovsky-2019-dynamic,https://aclanthology.org/W19-0601,0,,,,,,,"A Dynamic Semantics for Causal Counterfactuals. Under the standard approach to counterfactuals, to determine the meaning of a counterfactual sentence, we consider the ""closest"" possible world(s) where the antecedent is true, and evaluate the consequent. Building on the standard approach, some researchers have found that the set of worlds to be considered is dependent on context; it evolves with the discourse. Others have focused on how to define the ""distance"" between possible worlds, using ideas from causal modeling. This paper integrates the two ideas. We present a semantics for counterfactuals that uses a distance measure based on causal laws, that can also change over time. We show how our semantics can be implemented in the Haskell programming language.",A Dynamic Semantics for Causal Counterfactuals,"Under the standard approach to counterfactuals, to determine the meaning of a counterfactual sentence, we consider the ""closest"" possible world(s) where the antecedent is true, and evaluate the consequent. Building on the standard approach, some researchers have found that the set of worlds to be considered is dependent on context; it evolves with the discourse. Others have focused on how to define the ""distance"" between possible worlds, using ideas from causal modeling. This paper integrates the two ideas. We present a semantics for counterfactuals that uses a distance measure based on causal laws, that can also change over time. We show how our semantics can be implemented in the Haskell programming language.",A Dynamic Semantics for Causal Counterfactuals,"Under the standard approach to counterfactuals, to determine the meaning of a counterfactual sentence, we consider the ""closest"" possible world(s) where the antecedent is true, and evaluate the consequent. Building on the standard approach, some researchers have found that the set of worlds to be considered is dependent on context; it evolves with the discourse. Others have focused on how to define the ""distance"" between possible worlds, using ideas from causal modeling. This paper integrates the two ideas. We present a semantics for counterfactuals that uses a distance measure based on causal laws, that can also change over time. We show how our semantics can be implemented in the Haskell programming language.",We would like to thank the reviewers for their helpful comments.,"A Dynamic Semantics for Causal Counterfactuals. Under the standard approach to counterfactuals, to determine the meaning of a counterfactual sentence, we consider the ""closest"" possible world(s) where the antecedent is true, and evaluate the consequent. Building on the standard approach, some researchers have found that the set of worlds to be considered is dependent on context; it evolves with the discourse. Others have focused on how to define the ""distance"" between possible worlds, using ideas from causal modeling. This paper integrates the two ideas. We present a semantics for counterfactuals that uses a distance measure based on causal laws, that can also change over time. We show how our semantics can be implemented in the Haskell programming language.",2019
zhang-duh-2021-approaching,https://aclanthology.org/2021.mtsummit-at4ssl.7,1,,,,social_equality,,,"Approaching Sign Language Gloss Translation as a Low-Resource Machine Translation Task. A cascaded Sign Language Translation system first maps sign videos to gloss annotations and then translates glosses into a spoken languages. This work focuses on the second-stage gloss translation component, which is challenging due to the scarcity of publicly available parallel data. We approach gloss translation as a low-resource machine translation task and investigate two popular methods for improving translation quality: hyperparameter search and backtranslation. We discuss the potentials and pitfalls of these methods based on experiments on the RWTH-PHOENIX-Weather 2014T dataset.",Approaching Sign Language Gloss Translation as a Low-Resource Machine Translation Task,"A cascaded Sign Language Translation system first maps sign videos to gloss annotations and then translates glosses into a spoken languages. This work focuses on the second-stage gloss translation component, which is challenging due to the scarcity of publicly available parallel data. We approach gloss translation as a low-resource machine translation task and investigate two popular methods for improving translation quality: hyperparameter search and backtranslation. We discuss the potentials and pitfalls of these methods based on experiments on the RWTH-PHOENIX-Weather 2014T dataset.",Approaching Sign Language Gloss Translation as a Low-Resource Machine Translation Task,"A cascaded Sign Language Translation system first maps sign videos to gloss annotations and then translates glosses into a spoken languages. This work focuses on the second-stage gloss translation component, which is challenging due to the scarcity of publicly available parallel data. We approach gloss translation as a low-resource machine translation task and investigate two popular methods for improving translation quality: hyperparameter search and backtranslation. We discuss the potentials and pitfalls of these methods based on experiments on the RWTH-PHOENIX-Weather 2014T dataset.",,"Approaching Sign Language Gloss Translation as a Low-Resource Machine Translation Task. A cascaded Sign Language Translation system first maps sign videos to gloss annotations and then translates glosses into a spoken languages. This work focuses on the second-stage gloss translation component, which is challenging due to the scarcity of publicly available parallel data. We approach gloss translation as a low-resource machine translation task and investigate two popular methods for improving translation quality: hyperparameter search and backtranslation. We discuss the potentials and pitfalls of these methods based on experiments on the RWTH-PHOENIX-Weather 2014T dataset.",2021
sha-2020-gradient,https://aclanthology.org/2020.emnlp-main.701,0,,,,,,,"Gradient-guided Unsupervised Lexically Constrained Text Generation. Lexically-constrained generation requires the target sentence to satisfy some lexical constraints, such as containing some specific words or being the paraphrase to a given sentence, which is very important in many real-world natural language generation applications. Previous works usually apply beam-search-based methods or stochastic searching methods to lexically-constrained generation. However, when the search space is too large, beam-search-based methods always fail to find the constrained optimal solution. At the same time, stochastic search methods always cost too many steps to find the correct optimization direction. In this paper, we propose a novel method G2LC to solve the lexically-constrained generation as an unsupervised gradient-guided optimization problem. We propose a differentiable objective function and use the gradient to help determine which position in the sequence should be changed (deleted or inserted/replaced by another word). The word updating process of the inserted/replaced word also benefits from the guidance of gradient. Besides, our method is free of parallel data training, which is flexible to be used in the inference stage of any pre-trained generation model. We apply G2LC to two generation tasks: keyword-to-sentence generation and unsupervised paraphrase generation. The experiment results show that our method achieves state-of-the-art compared to previous lexically-constrained methods.",Gradient-guided Unsupervised Lexically Constrained Text Generation,"Lexically-constrained generation requires the target sentence to satisfy some lexical constraints, such as containing some specific words or being the paraphrase to a given sentence, which is very important in many real-world natural language generation applications. Previous works usually apply beam-search-based methods or stochastic searching methods to lexically-constrained generation. However, when the search space is too large, beam-search-based methods always fail to find the constrained optimal solution. At the same time, stochastic search methods always cost too many steps to find the correct optimization direction. In this paper, we propose a novel method G2LC to solve the lexically-constrained generation as an unsupervised gradient-guided optimization problem. We propose a differentiable objective function and use the gradient to help determine which position in the sequence should be changed (deleted or inserted/replaced by another word). The word updating process of the inserted/replaced word also benefits from the guidance of gradient. Besides, our method is free of parallel data training, which is flexible to be used in the inference stage of any pre-trained generation model. We apply G2LC to two generation tasks: keyword-to-sentence generation and unsupervised paraphrase generation. The experiment results show that our method achieves state-of-the-art compared to previous lexically-constrained methods.",Gradient-guided Unsupervised Lexically Constrained Text Generation,"Lexically-constrained generation requires the target sentence to satisfy some lexical constraints, such as containing some specific words or being the paraphrase to a given sentence, which is very important in many real-world natural language generation applications. Previous works usually apply beam-search-based methods or stochastic searching methods to lexically-constrained generation. 
However, when the search space is too large, beam-search-based methods always fail to find the constrained optimal solution. At the same time, stochastic search methods always cost too many steps to find the correct optimization direction. In this paper, we propose a novel method G2LC to solve the lexically-constrained generation as an unsupervised gradient-guided optimization problem. We propose a differentiable objective function and use the gradient to help determine which position in the sequence should be changed (deleted or inserted/replaced by another word). The word updating process of the inserted/replaced word also benefits from the guidance of gradient. Besides, our method is free of parallel data training, which is flexible to be used in the inference stage of any pre-trained generation model. We apply G2LC to two generation tasks: keyword-to-sentence generation and unsupervised paraphrase generation. The experiment results show that our method achieves state-of-the-art compared to previous lexically-constrained methods.",We would like to thank the three anonymous reviewers and the anonymous meta-reviewer for so many good suggestions.,"Gradient-guided Unsupervised Lexically Constrained Text Generation. Lexically-constrained generation requires the target sentence to satisfy some lexical constraints, such as containing some specific words or being the paraphrase to a given sentence, which is very important in many real-world natural language generation applications. Previous works usually apply beam-search-based methods or stochastic searching methods to lexically-constrained generation. However, when the search space is too large, beam-search-based methods always fail to find the constrained optimal solution. At the same time, stochastic search methods always cost too many steps to find the correct optimization direction. In this paper, we propose a novel method G2LC to solve the lexically-constrained generation as an unsupervised gradient-guided optimization problem. We propose a differentiable objective function and use the gradient to help determine which position in the sequence should be changed (deleted or inserted/replaced by another word). The word updating process of the inserted/replaced word also benefits from the guidance of gradient. Besides, our method is free of parallel data training, which is flexible to be used in the inference stage of any pre-trained generation model. We apply G2LC to two generation tasks: keyword-to-sentence generation and unsupervised paraphrase generation. The experiment results show that our method achieves state-of-the-art compared to previous lexically-constrained methods.",2020
freitag-2004-toward,https://aclanthology.org/C04-1052,0,,,,,,,Toward Unsupervised Whole-Corpus Tagging. ,Toward Unsupervised Whole-Corpus Tagging,,Toward Unsupervised Whole-Corpus Tagging,,,Toward Unsupervised Whole-Corpus Tagging. ,2004
medlock-2006-introduction,http://www.lrec-conf.org/proceedings/lrec2006/pdf/200_pdf.pdf,0,,,,,,,"An Introduction to NLP-based Textual Anonymisation. We introduce the problem of automatic textual anonymisation and present a new publicly-available, pseudonymised benchmark corpus of personal email text for the task, dubbed ITAC (Informal Text Anonymisation Corpus). We discuss the method by which the corpus was constructed, and consider some important issues related to the evaluation of textual anonymisation systems. We also present some initial baseline results on the new corpus using a state of the art HMM-based tagger.",An Introduction to {NLP}-based Textual Anonymisation,"We introduce the problem of automatic textual anonymisation and present a new publicly-available, pseudonymised benchmark corpus of personal email text for the task, dubbed ITAC (Informal Text Anonymisation Corpus). We discuss the method by which the corpus was constructed, and consider some important issues related to the evaluation of textual anonymisation systems. We also present some initial baseline results on the new corpus using a state of the art HMM-based tagger.",An Introduction to NLP-based Textual Anonymisation,"We introduce the problem of automatic textual anonymisation and present a new publicly-available, pseudonymised benchmark corpus of personal email text for the task, dubbed ITAC (Informal Text Anonymisation Corpus). We discuss the method by which the corpus was constructed, and consider some important issues related to the evaluation of textual anonymisation systems. We also present some initial baseline results on the new corpus using a state of the art HMM-based tagger.",,"An Introduction to NLP-based Textual Anonymisation. We introduce the problem of automatic textual anonymisation and present a new publicly-available, pseudonymised benchmark corpus of personal email text for the task, dubbed ITAC (Informal Text Anonymisation Corpus). We discuss the method by which the corpus was constructed, and consider some important issues related to the evaluation of textual anonymisation systems. We also present some initial baseline results on the new corpus using a state of the art HMM-based tagger.",2006
raybaud-etal-2011-broadcast,https://aclanthology.org/2011.mtsummit-systems.3,0,,,,,,,"Broadcast news speech-to-text translation experiments. We present S2TT, an integrated speech-to-text translation system based on POCKET-SPHINX and MOSES. It is compared to different baselines based on ANTS (the broadcast news transcription system developed at LORIA's Speech group), MOSES and Google's translation tools. A small corpus of reference transcriptions of broadcast news from the evaluation campaign ESTER2 was translated by human experts for evaluation. The Word Error Rate (WER) of the recognition stage of both systems is evaluated, and BLEU is used to score the translations. Furthermore, the reference transcriptions are automatically translated using MOSES and GOOGLE in order to evaluate the impact of recognition errors on translation quality.",Broadcast news speech-to-text translation experiments,"We present S2TT, an integrated speech-to-text translation system based on POCKET-SPHINX and MOSES. It is compared to different baselines based on ANTS (the broadcast news transcription system developed at LORIA's Speech group), MOSES and Google's translation tools. A small corpus of reference transcriptions of broadcast news from the evaluation campaign ESTER2 was translated by human experts for evaluation. The Word Error Rate (WER) of the recognition stage of both systems is evaluated, and BLEU is used to score the translations. Furthermore, the reference transcriptions are automatically translated using MOSES and GOOGLE in order to evaluate the impact of recognition errors on translation quality.",Broadcast news speech-to-text translation experiments,"We present S2TT, an integrated speech-to-text translation system based on POCKET-SPHINX and MOSES. It is compared to different baselines based on ANTS (the broadcast news transcription system developed at LORIA's Speech group), MOSES and Google's translation tools. A small corpus of reference transcriptions of broadcast news from the evaluation campaign ESTER2 was translated by human experts for evaluation. The Word Error Rate (WER) of the recognition stage of both systems is evaluated, and BLEU is used to score the translations. Furthermore, the reference transcriptions are automatically translated using MOSES and GOOGLE in order to evaluate the impact of recognition errors on translation quality.",,"Broadcast news speech-to-text translation experiments. We present S2TT, an integrated speech-to-text translation system based on POCKET-SPHINX and MOSES. It is compared to different baselines based on ANTS (the broadcast news transcription system developed at LORIA's Speech group), MOSES and Google's translation tools. A small corpus of reference transcriptions of broadcast news from the evaluation campaign ESTER2 was translated by human experts for evaluation. The Word Error Rate (WER) of the recognition stage of both systems is evaluated, and BLEU is used to score the translations. Furthermore, the reference transcriptions are automatically translated using MOSES and GOOGLE in order to evaluate the impact of recognition errors on translation quality.",2011
mctait-etal-1999-building,https://aclanthology.org/1999.tc-1.11,0,,,,,,,"A Building Blocks Approach to Translation Memory. Traditional Translation Memory systems that find the best match between a SL input sentence and SL sentences in a database of previously translated sentences are not ideal. Studies in the cognitive processes underlying human translation reveal that translators very rarely process SL text at the level of the sentence. The units with which translators work are usually much smaller i.e. word, syntactic unit, clause or group of meaningful words. A building blocks approach (a term borrowed from the theoretical framework discussed in Lange et al (1997)), is advantageous in that it extracts fragments of text, from a traditional TM database, that more closely represent those with which a human translator works. The text fragments are combined with the intention of producing TL translations that are more accurate, thus requiring less postediting on the part of the translator.",A Building Blocks Approach to Translation Memory,"Traditional Translation Memory systems that find the best match between a SL input sentence and SL sentences in a database of previously translated sentences are not ideal. Studies in the cognitive processes underlying human translation reveal that translators very rarely process SL text at the level of the sentence. The units with which translators work are usually much smaller i.e. word, syntactic unit, clause or group of meaningful words. A building blocks approach (a term borrowed from the theoretical framework discussed in Lange et al (1997)), is advantageous in that it extracts fragments of text, from a traditional TM database, that more closely represent those with which a human translator works. The text fragments are combined with the intention of producing TL translations that are more accurate, thus requiring less postediting on the part of the translator.",A Building Blocks Approach to Translation Memory,"Traditional Translation Memory systems that find the best match between a SL input sentence and SL sentences in a database of previously translated sentences are not ideal. Studies in the cognitive processes underlying human translation reveal that translators very rarely process SL text at the level of the sentence. The units with which translators work are usually much smaller i.e. word, syntactic unit, clause or group of meaningful words. A building blocks approach (a term borrowed from the theoretical framework discussed in Lange et al (1997)), is advantageous in that it extracts fragments of text, from a traditional TM database, that more closely represent those with which a human translator works. The text fragments are combined with the intention of producing TL translations that are more accurate, thus requiring less postediting on the part of the translator.",,"A Building Blocks Approach to Translation Memory. Traditional Translation Memory systems that find the best match between a SL input sentence and SL sentences in a database of previously translated sentences are not ideal. Studies in the cognitive processes underlying human translation reveal that translators very rarely process SL text at the level of the sentence. The units with which translators work are usually much smaller i.e. word, syntactic unit, clause or group of meaningful words. 
A building blocks approach (a term borrowed from the theoretical framework discussed in Lange et al (1997)), is advantageous in that it extracts fragments of text, from a traditional TM database, that more closely represent those with which a human translator works. The text fragments are combined with the intention of producing TL translations that are more accurate, thus requiring less postediting on the part of the translator.",1999
pinnis-etal-2018-tilde,https://aclanthology.org/L18-1214,1,,,,industry_innovation_infrastructure,,,"Tilde MT Platform for Developing Client Specific MT Solutions. In this paper, we present Tilde MT, a custom machine translation (MT) platform that provides linguistic data storage (parallel, monolingual corpora, multilingual term collections), data cleaning and normalisation, statistical and neural machine translation system training and hosting functionality, as well as wide integration capabilities (a machine user API and popular computer-assisted translation tool plugins). We provide details for the most important features of the platform, as well as elaborate typical MT system training workflows for client-specific MT solution development.",Tilde {MT} Platform for Developing Client Specific {MT} Solutions,"In this paper, we present Tilde MT, a custom machine translation (MT) platform that provides linguistic data storage (parallel, monolingual corpora, multilingual term collections), data cleaning and normalisation, statistical and neural machine translation system training and hosting functionality, as well as wide integration capabilities (a machine user API and popular computer-assisted translation tool plugins). We provide details for the most important features of the platform, as well as elaborate typical MT system training workflows for client-specific MT solution development.",Tilde MT Platform for Developing Client Specific MT Solutions,"In this paper, we present Tilde MT, a custom machine translation (MT) platform that provides linguistic data storage (parallel, monolingual corpora, multilingual term collections), data cleaning and normalisation, statistical and neural machine translation system training and hosting functionality, as well as wide integration capabilities (a machine user API and popular computer-assisted translation tool plugins). We provide details for the most important features of the platform, as well as elaborate typical MT system training workflows for client-specific MT solution development.",,"Tilde MT Platform for Developing Client Specific MT Solutions. In this paper, we present Tilde MT, a custom machine translation (MT) platform that provides linguistic data storage (parallel, monolingual corpora, multilingual term collections), data cleaning and normalisation, statistical and neural machine translation system training and hosting functionality, as well as wide integration capabilities (a machine user API and popular computer-assisted translation tool plugins). We provide details for the most important features of the platform, as well as elaborate typical MT system training workflows for client-specific MT solution development.",2018
yang-etal-2014-joint,https://aclanthology.org/D14-1071,0,,,,,,,"Joint Relational Embeddings for Knowledge-based Question Answering. Transforming a natural language (NL) question into a corresponding logical form (LF) is central to the knowledge-based question answering (KB-QA) task. Unlike most previous methods that achieve this goal based on mappings between lexicalized phrases and logical predicates, this paper goes one step further and proposes a novel embedding-based approach that maps NL-questions into LFs for KB-QA by leveraging semantic associations between lexical representations and KB properties in the latent space. Experimental results demonstrate that our proposed method outperforms three KB-QA baseline methods on two publicly released QA data sets.",Joint Relational Embeddings for Knowledge-based Question Answering,"Transforming a natural language (NL) question into a corresponding logical form (LF) is central to the knowledge-based question answering (KB-QA) task. Unlike most previous methods that achieve this goal based on mappings between lexicalized phrases and logical predicates, this paper goes one step further and proposes a novel embedding-based approach that maps NL-questions into LFs for KB-QA by leveraging semantic associations between lexical representations and KB properties in the latent space. Experimental results demonstrate that our proposed method outperforms three KB-QA baseline methods on two publicly released QA data sets.",Joint Relational Embeddings for Knowledge-based Question Answering,"Transforming a natural language (NL) question into a corresponding logical form (LF) is central to the knowledge-based question answering (KB-QA) task. Unlike most previous methods that achieve this goal based on mappings between lexicalized phrases and logical predicates, this paper goes one step further and proposes a novel embedding-based approach that maps NL-questions into LFs for KB-QA by leveraging semantic associations between lexical representations and KB properties in the latent space. Experimental results demonstrate that our proposed method outperforms three KB-QA baseline methods on two publicly released QA data sets.",,"Joint Relational Embeddings for Knowledge-based Question Answering. Transforming a natural language (NL) question into a corresponding logical form (LF) is central to the knowledge-based question answering (KB-QA) task. Unlike most previous methods that achieve this goal based on mappings between lexicalized phrases and logical predicates, this paper goes one step further and proposes a novel embedding-based approach that maps NL-questions into LFs for KB-QA by leveraging semantic associations between lexical representations and KB properties in the latent space. Experimental results demonstrate that our proposed method outperforms three KB-QA baseline methods on two publicly released QA data sets.",2014
dziob-etal-2019-plwordnet,https://aclanthology.org/2019.gwc-1.45,0,,,,,,,"plWordNet 4.1 - a Linguistically Motivated, Corpus-based Bilingual Resource. The paper presents the latest release of the Polish WordNet, namely plWordNet 4.1. The most significant developments since the 3.0 version include new relations for nouns and verbs, mapping semantic role-relations from the valency lexicon Walenty onto the plWordNet structure and sense-level interlingual mapping. Several statistics are presented in order to illustrate the development and contemporary state of the wordnet.","pl{W}ord{N}et 4.1 - a Linguistically Motivated, Corpus-based Bilingual Resource","The paper presents the latest release of the Polish WordNet, namely plWordNet 4.1. The most significant developments since the 3.0 version include new relations for nouns and verbs, mapping semantic role-relations from the valency lexicon Walenty onto the plWordNet structure and sense-level interlingual mapping. Several statistics are presented in order to illustrate the development and contemporary state of the wordnet.","plWordNet 4.1 - a Linguistically Motivated, Corpus-based Bilingual Resource","The paper presents the latest release of the Polish WordNet, namely plWordNet 4.1. The most significant developments since the 3.0 version include new relations for nouns and verbs, mapping semantic role-relations from the valency lexicon Walenty onto the plWordNet structure and sense-level interlingual mapping. Several statistics are presented in order to illustrate the development and contemporary state of the wordnet.","The work was co-financed as part of the investment in the CLARIN-PL research infrastructure funded by the Polish Ministry of Science and Higher Education and the project funded by the National Science Centre, Poland under the grant agreement No UMO-2015/18/M/HS2/00100.","plWordNet 4.1 - a Linguistically Motivated, Corpus-based Bilingual Resource. The paper presents the latest release of the Polish WordNet, namely plWordNet 4.1. The most significant developments since the 3.0 version include new relations for nouns and verbs, mapping semantic role-relations from the valency lexicon Walenty onto the plWordNet structure and sense-level interlingual mapping. Several statistics are presented in order to illustrate the development and contemporary state of the wordnet.",2019
lin-eisner-2018-neural,https://aclanthology.org/N18-1085,0,,,,,,,"Neural Particle Smoothing for Sampling from Conditional Sequence Models. We introduce neural particle smoothing, a sequential Monte Carlo method for sampling annotations of an input string from a given probability model. In contrast to conventional particle filtering algorithms, we train a proposal distribution that looks ahead to the end of the input string by means of a right-to-left LSTM. We demonstrate that this innovation can improve the quality of the sample. To motivate our formal choices, we explain how our neural model and neural sampler can be viewed as low-dimensional but nonlinear approximations to working with HMMs over very large state spaces.",Neural Particle Smoothing for Sampling from Conditional Sequence Models,"We introduce neural particle smoothing, a sequential Monte Carlo method for sampling annotations of an input string from a given probability model. In contrast to conventional particle filtering algorithms, we train a proposal distribution that looks ahead to the end of the input string by means of a right-to-left LSTM. We demonstrate that this innovation can improve the quality of the sample. To motivate our formal choices, we explain how our neural model and neural sampler can be viewed as low-dimensional but nonlinear approximations to working with HMMs over very large state spaces.",Neural Particle Smoothing for Sampling from Conditional Sequence Models,"We introduce neural particle smoothing, a sequential Monte Carlo method for sampling annotations of an input string from a given probability model. In contrast to conventional particle filtering algorithms, we train a proposal distribution that looks ahead to the end of the input string by means of a right-to-left LSTM. We demonstrate that this innovation can improve the quality of the sample. To motivate our formal choices, we explain how our neural model and neural sampler can be viewed as low-dimensional but nonlinear approximations to working with HMMs over very large state spaces.",This work has been generously supported by a Google Faculty Research Award and by Grant No. 1718846 from the National Science Foundation.,"Neural Particle Smoothing for Sampling from Conditional Sequence Models. We introduce neural particle smoothing, a sequential Monte Carlo method for sampling annotations of an input string from a given probability model. In contrast to conventional particle filtering algorithms, we train a proposal distribution that looks ahead to the end of the input string by means of a right-to-left LSTM. We demonstrate that this innovation can improve the quality of the sample. To motivate our formal choices, we explain how our neural model and neural sampler can be viewed as low-dimensional but nonlinear approximations to working with HMMs over very large state spaces.",2018
kupsc-etal-2004-pronominal,http://www.lrec-conf.org/proceedings/lrec2004/pdf/671.pdf,0,,,,,,,"Pronominal Anaphora Resolution for Unrestricted Text. The paper presents an anaphora resolution algorithm for unrestricted text. In particular, we examine portability of a knowledge-based approach of (Mitamura et al., 2002), proposed for a domain-specific task. We obtain up to 70% accuracy on unrestricted text, which is a significant improvement (almost 20%) over a baseline we set for general text. As the overall results leave much room for improvement, we provide a detailed error analysis and investigate possible enhancements.",Pronominal Anaphora Resolution for Unrestricted Text,"The paper presents an anaphora resolution algorithm for unrestricted text. In particular, we examine portability of a knowledge-based approach of (Mitamura et al., 2002), proposed for a domain-specific task. We obtain up to 70% accuracy on unrestricted text, which is a significant improvement (almost 20%) over a baseline we set for general text. As the overall results leave much room for improvement, we provide a detailed error analysis and investigate possible enhancements.",Pronominal Anaphora Resolution for Unrestricted Text,"The paper presents an anaphora resolution algorithm for unrestricted text. In particular, we examine portability of a knowledge-based approach of (Mitamura et al., 2002), proposed for a domain-specific task. We obtain up to 70% accuracy on unrestricted text, which is a significant improvement (almost 20%) over a baseline we set for general text. As the overall results leave much room for improvement, we provide a detailed error analysis and investigate possible enhancements.","This work was supported in part by the Advanced Research and Development Activity (ARDA) under AQUAINT contract MDA904-01-C-0988. We would like to thank Curtis Huttenhower, for his work on integrating the tools we used for text analysis, as well as three anonymous reviewers and Adam Przepiórkowski for useful comments on earlier versions of this paper.","Pronominal Anaphora Resolution for Unrestricted Text. The paper presents an anaphora resolution algorithm for unrestricted text. In particular, we examine portability of a knowledge-based approach of (Mitamura et al., 2002), proposed for a domain-specific task. We obtain up to 70% accuracy on unrestricted text, which is a significant improvement (almost 20%) over a baseline we set for general text. As the overall results leave much room for improvement, we provide a detailed error analysis and investigate possible enhancements.",2004
aizawa-2002-method,https://aclanthology.org/C02-1045,0,,,,,,,"A Method of Cluster-Based Indexing of Textual Data. This paper presents a framework for clustering in text-based information retrieval systems. The prominent feature of the proposed method is that documents, terms, and other related elements of textual information are clustered simultaneously into small overlapping clusters. In the paper, the mathematical formulation and implementation of the clustering method are briefly introduced, together with some experimental results.",A Method of Cluster-Based Indexing of Textual Data,"This paper presents a framework for clustering in text-based information retrieval systems. The prominent feature of the proposed method is that documents, terms, and other related elements of textual information are clustered simultaneously into small overlapping clusters. In the paper, the mathematical formulation and implementation of the clustering method are briefly introduced, together with some experimental results.",A Method of Cluster-Based Indexing of Textual Data,"This paper presents a framework for clustering in text-based information retrieval systems. The prominent feature of the proposed method is that documents, terms, and other related elements of textual information are clustered simultaneously into small overlapping clusters. In the paper, the mathematical formulation and implementation of the clustering method are briefly introduced, together with some experimental results.",,"A Method of Cluster-Based Indexing of Textual Data. This paper presents a framework for clustering in text-based information retrieval systems. The prominent feature of the proposed method is that documents, terms, and other related elements of textual information are clustered simultaneously into small overlapping clusters. In the paper, the mathematical formulation and implementation of the clustering method are briefly introduced, together with some experimental results.",2002
xanthos-etal-2006-exploring,https://aclanthology.org/W06-3205,0,,,,,,,"Exploring variant definitions of pointer length in MDL. Within the information-theoretical framework described by (Rissanen, 1989; de Marcken, 1996; Goldsmith, 2001), pointers are used to avoid repetition of phonological material. Work with which we are familiar has assumed that there is only one way in which items could be pointed to. The purpose of this paper is to describe and compare several different methods, each of which satisfies MDL's basic requirements, but which have different consequences for the treatment of linguistic phenomena. In particular, we assess the conditions under which these different ways of pointing yield more compact descriptions of the data, both from a theoretical and an empirical perspective.",Exploring variant definitions of pointer length in {MDL},"Within the information-theoretical framework described by (Rissanen, 1989; de Marcken, 1996; Goldsmith, 2001), pointers are used to avoid repetition of phonological material. Work with which we are familiar has assumed that there is only one way in which items could be pointed to. The purpose of this paper is to describe and compare several different methods, each of which satisfies MDL's basic requirements, but which have different consequences for the treatment of linguistic phenomena. In particular, we assess the conditions under which these different ways of pointing yield more compact descriptions of the data, both from a theoretical and an empirical perspective.",Exploring variant definitions of pointer length in MDL,"Within the information-theoretical framework described by (Rissanen, 1989; de Marcken, 1996; Goldsmith, 2001), pointers are used to avoid repetition of phonological material. Work with which we are familiar has assumed that there is only one way in which items could be pointed to. The purpose of this paper is to describe and compare several different methods, each of which satisfies MDL's basic requirements, but which have different consequences for the treatment of linguistic phenomena. In particular, we assess the conditions under which these different ways of pointing yield more compact descriptions of the data, both from a theoretical and an empirical perspective.",This research was supported by a grant of the Swiss National Science Foundation to the first author.,"Exploring variant definitions of pointer length in MDL. Within the information-theoretical framework described by (Rissanen, 1989; de Marcken, 1996; Goldsmith, 2001), pointers are used to avoid repetition of phonological material. Work with which we are familiar has assumed that there is only one way in which items could be pointed to. The purpose of this paper is to describe and compare several different methods, each of which satisfies MDL's basic requirements, but which have different consequences for the treatment of linguistic phenomena. In particular, we assess the conditions under which these different ways of pointing yield more compact descriptions of the data, both from a theoretical and an empirical perspective.",2006
biggins-etal-2012-university,https://aclanthology.org/S12-1097,0,,,,,,,University\_Of\_Sheffield: Two Approaches to Semantic Text Similarity. This paper describes the University of Sheffield's submission to SemEval-2012 Task 6: Semantic Text Similarity. Two approaches were developed. The first is an unsupervised technique based on the widely used vector space model and information from WordNet. The second method relies on supervised machine learning and represents each sentence as a set of n-grams. This approach also makes use of information from WordNet. Results from the formal evaluation show that both approaches are useful for determining the similarity in meaning between pairs of sentences with the best performance being obtained by the supervised approach. Incorporating information from WordNet also improves performance for both approaches.,{U}niversity{\_}{O}f{\_}{S}heffield: Two Approaches to Semantic Text Similarity,This paper describes the University of Sheffield's submission to SemEval-2012 Task 6: Semantic Text Similarity. Two approaches were developed. The first is an unsupervised technique based on the widely used vector space model and information from WordNet. The second method relies on supervised machine learning and represents each sentence as a set of n-grams. This approach also makes use of information from WordNet. Results from the formal evaluation show that both approaches are useful for determining the similarity in meaning between pairs of sentences with the best performance being obtained by the supervised approach. Incorporating information from WordNet also improves performance for both approaches.,University\_Of\_Sheffield: Two Approaches to Semantic Text Similarity,This paper describes the University of Sheffield's submission to SemEval-2012 Task 6: Semantic Text Similarity. Two approaches were developed. The first is an unsupervised technique based on the widely used vector space model and information from WordNet. The second method relies on supervised machine learning and represents each sentence as a set of n-grams. This approach also makes use of information from WordNet. Results from the formal evaluation show that both approaches are useful for determining the similarity in meaning between pairs of sentences with the best performance being obtained by the supervised approach. Incorporating information from WordNet also improves performance for both approaches.,This research has been supported by a Google Research Award.,University\_Of\_Sheffield: Two Approaches to Semantic Text Similarity. This paper describes the University of Sheffield's submission to SemEval-2012 Task 6: Semantic Text Similarity. Two approaches were developed. The first is an unsupervised technique based on the widely used vector space model and information from WordNet. The second method relies on supervised machine learning and represents each sentence as a set of n-grams. This approach also makes use of information from WordNet. Results from the formal evaluation show that both approaches are useful for determining the similarity in meaning between pairs of sentences with the best performance being obtained by the supervised approach. Incorporating information from WordNet also improves performance for both approaches.,2012
gavrila-etal-2012-domain,http://www.lrec-conf.org/proceedings/lrec2012/pdf/1003_Paper.pdf,0,,,,,,,"Same domain different discourse style - A case study on Language Resources for data-driven Machine Translation. Data-driven machine translation (MT) approaches have become very popular in recent years, especially for language pairs for which it is difficult to find specialists to develop transfer rules. Statistical (SMT) or example-based (EBMT) systems can provide reasonable translation quality for assimilation purposes, as long as a large amount of training data is available. SMT systems in particular rely on parallel aligned corpora which have to be statistically relevant for the given language pair. The construction of large domain specific parallel corpora is time- and cost-consuming; the current practice relies on one or two big such corpora per language pair. Recently developed strategies ensure certain portability to other domains through specialized lexicons or small domain specific corpora. In this paper we discuss the influence of different discourse styles on statistical machine translation systems. We investigate how a pure SMT performs when training and test data belong to the same domain but the discourse style varies.",Same domain different discourse style - A case study on Language Resources for data-driven Machine Translation,"Data-driven machine translation (MT) approaches have become very popular in recent years, especially for language pairs for which it is difficult to find specialists to develop transfer rules. Statistical (SMT) or example-based (EBMT) systems can provide reasonable translation quality for assimilation purposes, as long as a large amount of training data is available. SMT systems in particular rely on parallel aligned corpora which have to be statistically relevant for the given language pair. The construction of large domain specific parallel corpora is time- and cost-consuming; the current practice relies on one or two big such corpora per language pair. Recently developed strategies ensure certain portability to other domains through specialized lexicons or small domain specific corpora. In this paper we discuss the influence of different discourse styles on statistical machine translation systems. We investigate how a pure SMT performs when training and test data belong to the same domain but the discourse style varies.",Same domain different discourse style - A case study on Language Resources for data-driven Machine Translation,"Data-driven machine translation (MT) approaches have become very popular in recent years, especially for language pairs for which it is difficult to find specialists to develop transfer rules. Statistical (SMT) or example-based (EBMT) systems can provide reasonable translation quality for assimilation purposes, as long as a large amount of training data is available. SMT systems in particular rely on parallel aligned corpora which have to be statistically relevant for the given language pair. The construction of large domain specific parallel corpora is time- and cost-consuming; the current practice relies on one or two big such corpora per language pair. Recently developed strategies ensure certain portability to other domains through specialized lexicons or small domain specific corpora. In this paper we discuss the influence of different discourse styles on statistical machine translation systems. 
We investigate how a pure SMT performs when training and test data belong to the same domain but the discourse style varies.",,"Same domain different discourse style - A case study on Language Resources for data-driven Machine Translation. Data-driven machine translation (MT) approaches have become very popular in recent years, especially for language pairs for which it is difficult to find specialists to develop transfer rules. Statistical (SMT) or example-based (EBMT) systems can provide reasonable translation quality for assimilation purposes, as long as a large amount of training data is available. SMT systems in particular rely on parallel aligned corpora which have to be statistically relevant for the given language pair. The construction of large domain specific parallel corpora is time- and cost-consuming; the current practice relies on one or two big such corpora per language pair. Recently developed strategies ensure certain portability to other domains through specialized lexicons or small domain specific corpora. In this paper we discuss the influence of different discourse styles on statistical machine translation systems. We investigate how a pure SMT performs when training and test data belong to the same domain but the discourse style varies.",2012
veaux-etal-2013-towards,https://aclanthology.org/W13-3917,0,,,,,,,"Towards Personalised Synthesised Voices for Individuals with Vocal Disabilities: Voice Banking and Reconstruction. When individuals lose the ability to produce their own speech, due to degenerative diseases such as motor neurone disease (MND) or Parkinson's, they lose not only a functional means of communication but also a display of their individual and group identity. In order to build personalized synthetic voices, attempts have been made to capture the voice before it is lost, using a process known as voice banking. But, for some patients, the speech deterioration frequently coincides or quickly follows diagnosis. Using HMM-based speech synthesis, it is now possible to build personalized synthetic voices with minimal data recordings and even disordered speech. The power of this approach is that it is possible to use the patient's recordings to adapt existing voice models pre-trained on many speakers. When the speech has begun to deteriorate, the adapted voice model can be further modified in order to compensate for the disordered characteristics found in the patient's speech. The University of Edinburgh has initiated a project for voice banking and reconstruction based on this speech synthesis technology. At the current stage of the project, more than fifteen patients with MND have already been recorded and five of them have been delivered a reconstructed voice. In this paper, we present an overview of the project as well as subjective assessments of the reconstructed voices and feedback from patients and their families.",Towards Personalised Synthesised Voices for Individuals with Vocal Disabilities: Voice Banking and Reconstruction,"When individuals lose the ability to produce their own speech, due to degenerative diseases such as motor neurone disease (MND) or Parkinson's, they lose not only a functional means of communication but also a display of their individual and group identity. In order to build personalized synthetic voices, attempts have been made to capture the voice before it is lost, using a process known as voice banking. But, for some patients, the speech deterioration frequently coincides or quickly follows diagnosis. Using HMM-based speech synthesis, it is now possible to build personalized synthetic voices with minimal data recordings and even disordered speech. The power of this approach is that it is possible to use the patient's recordings to adapt existing voice models pre-trained on many speakers. When the speech has begun to deteriorate, the adapted voice model can be further modified in order to compensate for the disordered characteristics found in the patient's speech. The University of Edinburgh has initiated a project for voice banking and reconstruction based on this speech synthesis technology. At the current stage of the project, more than fifteen patients with MND have already been recorded and five of them have been delivered a reconstructed voice. In this paper, we present an overview of the project as well as subjective assessments of the reconstructed voices and feedback from patients and their families.",Towards Personalised Synthesised Voices for Individuals with Vocal Disabilities: Voice Banking and Reconstruction,"When individuals lose the ability to produce their own speech, due to degenerative diseases such as motor neurone disease (MND) or Parkinson's, they lose not only a functional means of communication but also a display of their individual and group identity. 
In order to build personalized synthetic voices, attempts have been made to capture the voice before it is lost, using a process known as voice banking. But, for some patients, the speech deterioration frequently coincides or quickly follows diagnosis. Using HMM-based speech synthesis, it is now possible to build personalized synthetic voices with minimal data recordings and even disordered speech. The power of this approach is that it is possible to use the patient's recordings to adapt existing voice models pre-trained on many speakers. When the speech has begun to deteriorate, the adapted voice model can be further modified in order to compensate for the disordered characteristics found in the patient's speech. The University of Edinburgh has initiated a project for voice banking and reconstruction based on this speech synthesis technology. At the current stage of the project, more than fifteen patients with MND have already been recorded and five of them have been delivered a reconstructed voice. In this paper, we present an overview of the project as well as subjective assessments of the reconstructed voices and feedback from patients and their families.",,"Towards Personalised Synthesised Voices for Individuals with Vocal Disabilities: Voice Banking and Reconstruction. When individuals lose the ability to produce their own speech, due to degenerative diseases such as motor neurone disease (MND) or Parkinson's, they lose not only a functional means of communication but also a display of their individual and group identity. In order to build personalized synthetic voices, attempts have been made to capture the voice before it is lost, using a process known as voice banking. But, for some patients, the speech deterioration frequently coincides or quickly follows diagnosis. Using HMM-based speech synthesis, it is now possible to build personalized synthetic voices with minimal data recordings and even disordered speech. The power of this approach is that it is possible to use the patient's recordings to adapt existing voice models pre-trained on many speakers. When the speech has begun to deteriorate, the adapted voice model can be further modified in order to compensate for the disordered characteristics found in the patient's speech. The University of Edinburgh has initiated a project for voice banking and reconstruction based on this speech synthesis technology. At the current stage of the project, more than fifteen patients with MND have already been recorded and five of them have been delivered a reconstructed voice. In this paper, we present an overview of the project as well as subjective assessments of the reconstructed voices and feedback from patients and their families.",2013
lange-ljunglof-2018-demonstrating,https://aclanthology.org/W18-7105,1,,,,education,,,Demonstrating the MUSTE Language Learning Environment. We present a language learning application that relies on grammars to model the learning outcome. Based on this concept we can provide a powerful framework for language learning exercises with an intuitive user interface and a high reliability. Currently the application aims to augment existing language classes and support students by improving the learner attitude and the general learning outcome. Extensions beyond that scope are promising and likely to be added in the future.,Demonstrating the {MUSTE} Language Learning Environment,We present a language learning application that relies on grammars to model the learning outcome. Based on this concept we can provide a powerful framework for language learning exercises with an intuitive user interface and a high reliability. Currently the application aims to augment existing language classes and support students by improving the learner attitude and the general learning outcome. Extensions beyond that scope are promising and likely to be added in the future.,Demonstrating the MUSTE Language Learning Environment,We present a language learning application that relies on grammars to model the learning outcome. Based on this concept we can provide a powerful framework for language learning exercises with an intuitive user interface and a high reliability. Currently the application aims to augment existing language classes and support students by improving the learner attitude and the general learning outcome. Extensions beyond that scope are promising and likely to be added in the future.,,Demonstrating the MUSTE Language Learning Environment. We present a language learning application that relies on grammars to model the learning outcome. Based on this concept we can provide a powerful framework for language learning exercises with an intuitive user interface and a high reliability. Currently the application aims to augment existing language classes and support students by improving the learner attitude and the general learning outcome. Extensions beyond that scope are promising and likely to be added in the future.,2018
von-glasersfeld-1974-yerkish,https://aclanthology.org/J74-3007,0,,,,,,,The Yerkish Language for Non-Human Primates. ,The {Y}erkish Language for Non-Human Primates,,The Yerkish Language for Non-Human Primates,,,The Yerkish Language for Non-Human Primates. ,1974
dinu-etal-2017-stylistic,https://doi.org/10.26615/978-954-452-049-6_028,1,,,,peace_justice_and_strong_institutions,,,On the stylistic evolution from communism to democracy: Solomon Marcus study case. ,On the stylistic evolution from communism to democracy: {S}olomon {M}arcus study case,,On the stylistic evolution from communism to democracy: Solomon Marcus study case,,,On the stylistic evolution from communism to democracy: Solomon Marcus study case. ,2017
mckeown-2006-lessons,https://aclanthology.org/W06-1401,0,,,,,,,"Lessons Learned from Large Scale Evaluation of Systems that Produce Text: Nightmares and Pleasant Surprises. As the language generation community explores the possibility of an evaluation program for language generation, it behooves us to examine our experience in evaluation of other systems that produce text as output. Large scale evaluation of summarization systems and of question answering systems has been carried out for several years now. Summarization and question answering systems produce text output given text as input, while language generation produces text from a semantic representation. Given that the output has the same properties, we can learn from the mistakes and the understandings gained in earlier evaluations. In this invited talk, I will discuss what we have learned in the large scale summarization evaluations carried out in the Document Understanding Conferences (DUC) from 2001 to present, and in the large scale question answering evaluations carried out in TREC (e.g., the definition pilot) as well as the new large scale evaluations being carried out in the DARPA GALE (Global Autonomous Language Environment) program.
DUC was developed and run by NIST and provides a forum for regular evaluation of summarization systems. NIST oversees the gathering of data, including both input documents and gold standard summaries, some of which is done by NIST and some of which is done by LDC. Each year, some 30 to 50 document sets were gathered as test data and somewhere between two to nine summaries were written for each of the input sets. NIST has carried out both manual and automatic evaluation by comparing system output against the gold standard summaries written by humans. The results are made public at the annual conference. In the most recent years, the number of participants has grown to 25 or 30 sites from all over the world.",Lessons Learned from Large Scale Evaluation of Systems that Produce Text: Nightmares and Pleasant Surprises,"As the language generation community explores the possibility of an evaluation program for language generation, it behooves us to examine our experience in evaluation of other systems that produce text as output. Large scale evaluation of summarization systems and of question answering systems has been carried out for several years now. Summarization and question answering systems produce text output given text as input, while language generation produces text from a semantic representation. Given that the output has the same properties, we can learn from the mistakes and the understandings gained in earlier evaluations. In this invited talk, I will discuss what we have learned in the large scale summarization evaluations carried out in the Document Understanding Conferences (DUC) from 2001 to present, and in the large scale question answering evaluations carried out in TREC (e.g., the definition pilot) as well as the new large scale evaluations being carried out in the DARPA GALE (Global Autonomous Language Environment) program.
DUC was developed and run by NIST and provides a forum for regular evaluation of summarization systems. NIST oversees the gathering of data, including both input documents and gold standard summaries, some of which is done by NIST and some of which is done by LDC. Each year, some 30 to 50 document sets were gathered as test data and somewhere between two to nine summaries were written for each of the input sets. NIST has carried out both manual and automatic evaluation by comparing system output against the gold standard summaries written by humans. The results are made public at the annual conference. In the most recent years, the number of participants has grown to 25 or 30 sites from all over the world.",Lessons Learned from Large Scale Evaluation of Systems that Produce Text: Nightmares and Pleasant Surprises,"As the language generation community explores the possibility of an evaluation program for language generation, it behooves us to examine our experience in evaluation of other systems that produce text as output. Large scale evaluation of summarization systems and of question answering systems has been carried out for several years now. Summarization and question answering systems produce text output given text as input, while language generation produces text from a semantic representation. Given that the output has the same properties, we can learn from the mistakes and the understandings gained in earlier evaluations. In this invited talk, I will discuss what we have learned in the large scale summarization evaluations carried out in the Document Understanding Conferences (DUC) from 2001 to present, and in the large scale question answering evaluations carried out in TREC (e.g., the definition pilot) as well as the new large scale evaluations being carried out in the DARPA GALE (Global Autonomous Language Environment) program.
DUC was developed and run by NIST and provides a forum for regular evaluation of summarization systems. NIST oversees the gathering of data, including both input documents and gold standard summaries, some of which is done by NIST and some of which is done by LDC. Each year, some 30 to 50 document sets were gathered as test data and somewhere between two to nine summaries were written for each of the input sets. NIST has carried out both manual and automatic evaluation by comparing system output against the gold standard summaries written by humans. The results are made public at the annual conference. In the most recent years, the number of participants has grown to 25 or 30 sites from all over the world.","This material is based upon work supported in part by the ARDA AQUAINT program (Contract No. MDA908-02-C-0008 and Contract No. NBCHC040040) and the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR0011-06-C-0023 and Contract No. N66001-00-1-8919. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the DARPA or ARDA.","Lessons Learned from Large Scale Evaluation of Systems that Produce Text: Nightmares and Pleasant Surprises. As the language generation community explores the possibility of an evaluation program for language generation, it behooves us to examine our experience in evaluation of other systems that produce text as output. Large scale evaluation of summarization systems and of question answering systems has been carried out for several years now. Summarization and question answering systems produce text output given text as input, while language generation produces text from a semantic representation. Given that the output has the same properties, we can learn from the mistakes and the understandings gained in earlier evaluations. In this invited talk, I will discuss what we have learned in the large scale summarization evaluations carried out in the Document Understanding Conferences (DUC) from 2001 to present, and in the large scale question answering evaluations carried out in TREC (e.g., the definition pilot) as well as the new large scale evaluations being carried out in the DARPA GALE (Global Autonomous Language Environment) program.
DUC was developed and run by NIST and provides a forum for regular evaluation of summarization systems. NIST oversees the gathering of data, including both input documents and gold standard summaries, some of which is done by NIST and some of which is done by LDC. Each year, some 30 to 50 document sets were gathered as test data and somewhere between two to nine summaries were written for each of the input sets. NIST has carried out both manual and automatic evaluation by comparing system output against the gold standard summaries written by humans. The results are made public at the annual conference. In the most recent years, the number of participants has grown to 25 or 30 sites from all over the world.",2006
hambardzumyan-etal-2021-warp,https://aclanthology.org/2021.acl-long.381,0,,,,,,,"WARP: Word-level Adversarial ReProgramming. Transfer learning from pretrained language models recently became the dominant approach for solving many NLP tasks. A common approach to transfer learning for multiple tasks that maximize parameter sharing trains one or more task-specific layers on top of the language model. In this paper, we present an alternative approach based on adversarial reprogramming, which extends earlier work on automatic prompt generation. Adversarial reprogramming attempts to learn task-specific word embeddings that, when concatenated to the input text, instruct the language model to solve the specified task. Using up to 25K trainable parameters per task, this approach outperforms all existing methods with up to 25M trainable parameters on the public leaderboard of the GLUE benchmark. Our method, initialized with task-specific human-readable prompts, also works in a few-shot setting, outperforming GPT-3 on two SuperGLUE tasks with just 32 training samples.",{WARP}: {W}ord-level {A}dversarial {R}e{P}rogramming,"Transfer learning from pretrained language models recently became the dominant approach for solving many NLP tasks. A common approach to transfer learning for multiple tasks that maximize parameter sharing trains one or more task-specific layers on top of the language model. In this paper, we present an alternative approach based on adversarial reprogramming, which extends earlier work on automatic prompt generation. Adversarial reprogramming attempts to learn task-specific word embeddings that, when concatenated to the input text, instruct the language model to solve the specified task. Using up to 25K trainable parameters per task, this approach outperforms all existing methods with up to 25M trainable parameters on the public leaderboard of the GLUE benchmark. Our method, initialized with task-specific human-readable prompts, also works in a few-shot setting, outperforming GPT-3 on two SuperGLUE tasks with just 32 training samples.",WARP: Word-level Adversarial ReProgramming,"Transfer learning from pretrained language models recently became the dominant approach for solving many NLP tasks. A common approach to transfer learning for multiple tasks that maximize parameter sharing trains one or more task-specific layers on top of the language model. In this paper, we present an alternative approach based on adversarial reprogramming, which extends earlier work on automatic prompt generation. Adversarial reprogramming attempts to learn task-specific word embeddings that, when concatenated to the input text, instruct the language model to solve the specified task. Using up to 25K trainable parameters per task, this approach outperforms all existing methods with up to 25M trainable parameters on the public leaderboard of the GLUE benchmark. Our method, initialized with task-specific human-readable prompts, also works in a few-shot setting, outperforming GPT-3 on two SuperGLUE tasks with just 32 training samples.",,"WARP: Word-level Adversarial ReProgramming. Transfer learning from pretrained language models recently became the dominant approach for solving many NLP tasks. A common approach to transfer learning for multiple tasks that maximize parameter sharing trains one or more task-specific layers on top of the language model. In this paper, we present an alternative approach based on adversarial reprogramming, which extends earlier work on automatic prompt generation. 
Adversarial reprogramming attempts to learn task-specific word embeddings that, when concatenated to the input text, instruct the language model to solve the specified task. Using up to 25K trainable parameters per task, this approach outperforms all existing methods with up to 25M trainable parameters on the public leaderboard of the GLUE benchmark. Our method, initialized with task-specific human-readable prompts, also works in a few-shot setting, outperforming GPT-3 on two SuperGLUE tasks with just 32 training samples.",2021
funakoshi-etal-2009-probabilistic,https://aclanthology.org/W09-0634,0,,,,,,,"A Probabilistic Model of Referring Expressions for Complex Objects. This paper presents a probabilistic model both for generation and understanding of referring expressions. This model introduces the concept of parts of objects, modelling the necessity to deal with the characteristics of separate parts of an object in the referring process. This was ignored or implicit in previous literature. Integrating this concept into a probabilistic formulation, the model captures human characteristics of visual perception and some type of pragmatic implicature in referring expressions. Developing this kind of model is critical to deal with more complex domains in the future. As a first step in our research, we validate the model with the TUNA corpus to show that it includes conventional domain modeling as a subset.",A Probabilistic Model of Referring Expressions for Complex Objects,"This paper presents a probabilistic model both for generation and understanding of referring expressions. This model introduces the concept of parts of objects, modelling the necessity to deal with the characteristics of separate parts of an object in the referring process. This was ignored or implicit in previous literature. Integrating this concept into a probabilistic formulation, the model captures human characteristics of visual perception and some type of pragmatic implicature in referring expressions. Developing this kind of model is critical to deal with more complex domains in the future. As a first step in our research, we validate the model with the TUNA corpus to show that it includes conventional domain modeling as a subset.",A Probabilistic Model of Referring Expressions for Complex Objects,"This paper presents a probabilistic model both for generation and understanding of referring expressions. This model introduces the concept of parts of objects, modelling the necessity to deal with the characteristics of separate parts of an object in the referring process. This was ignored or implicit in previous literature. Integrating this concept into a probabilistic formulation, the model captures human characteristics of visual perception and some type of pragmatic implicature in referring expressions. Developing this kind of model is critical to deal with more complex domains in the future. As a first step in our research, we validate the model with the TUNA corpus to show that it includes conventional domain modeling as a subset.",,"A Probabilistic Model of Referring Expressions for Complex Objects. This paper presents a probabilistic model both for generation and understanding of referring expressions. This model introduces the concept of parts of objects, modelling the necessity to deal with the characteristics of separate parts of an object in the referring process. This was ignored or implicit in previous literature. Integrating this concept into a probabilistic formulation, the model captures human characteristics of visual perception and some type of pragmatic implicature in referring expressions. Developing this kind of model is critical to deal with more complex domains in the future. As a first step in our research, we validate the model with the TUNA corpus to show that it includes conventional domain modeling as a subset.",2009
edunov-etal-2019-pre,https://aclanthology.org/N19-1409,0,,,,,,,"Pre-trained language model representations for language generation. Pre-trained language model representations have been successful in a wide range of language understanding tasks. In this paper, we examine different strategies to integrate pretrained representations into sequence to sequence models and apply it to neural machine translation and abstractive summarization. We find that pre-trained representations are most effective when added to the encoder network which slows inference by only 14%. Our experiments in machine translation show gains of up to 5.3 BLEU in a simulated resource-poor setup. While returns diminish with more labeled data, we still observe improvements when millions of sentence-pairs are available. Finally, on abstractive summarization we achieve a new state of the art on the full text version of CNN-DailyMail. 1",Pre-trained language model representations for language generation,"Pre-trained language model representations have been successful in a wide range of language understanding tasks. In this paper, we examine different strategies to integrate pretrained representations into sequence to sequence models and apply it to neural machine translation and abstractive summarization. We find that pre-trained representations are most effective when added to the encoder network which slows inference by only 14%. Our experiments in machine translation show gains of up to 5.3 BLEU in a simulated resource-poor setup. While returns diminish with more labeled data, we still observe improvements when millions of sentence-pairs are available. Finally, on abstractive summarization we achieve a new state of the art on the full text version of CNN-DailyMail. 1",Pre-trained language model representations for language generation,"Pre-trained language model representations have been successful in a wide range of language understanding tasks. In this paper, we examine different strategies to integrate pretrained representations into sequence to sequence models and apply it to neural machine translation and abstractive summarization. We find that pre-trained representations are most effective when added to the encoder network which slows inference by only 14%. Our experiments in machine translation show gains of up to 5.3 BLEU in a simulated resource-poor setup. While returns diminish with more labeled data, we still observe improvements when millions of sentence-pairs are available. Finally, on abstractive summarization we achieve a new state of the art on the full text version of CNN-DailyMail. 1",,"Pre-trained language model representations for language generation. Pre-trained language model representations have been successful in a wide range of language understanding tasks. In this paper, we examine different strategies to integrate pretrained representations into sequence to sequence models and apply it to neural machine translation and abstractive summarization. We find that pre-trained representations are most effective when added to the encoder network which slows inference by only 14%. Our experiments in machine translation show gains of up to 5.3 BLEU in a simulated resource-poor setup. While returns diminish with more labeled data, we still observe improvements when millions of sentence-pairs are available. Finally, on abstractive summarization we achieve a new state of the art on the full text version of CNN-DailyMail. 1",2019
li-etal-2015-tree,https://aclanthology.org/D15-1278,0,,,,,,,"When Are Tree Structures Necessary for Deep Learning of Representations?. Recursive neural models, which use syntactic parse trees to recursively generate representations bottom-up, are a popular architecture. However there have not been rigorous evaluations showing for exactly which tasks this syntax-based method is appropriate. In this paper, we benchmark recursive neural models against sequential recurrent neural models, enforcing applesto-apples comparison as much as possible. We investigate 4 tasks: (1) sentiment classification at the sentence level and phrase level; (2) matching questions to answerphrases; (3) discourse parsing; (4) semantic relation extraction. Our goal is to understand better when, and why, recursive models can outperform simpler models. We find that recursive models help mainly on tasks (like semantic relation extraction) that require longdistance connection modeling, particularly on very long sequences. We then introduce a method for allowing recurrent models to achieve similar performance: breaking long sentences into clause-like units at punctuation and processing them separately before combining. Our results thus help understand the limitations of both classes of models, and suggest directions for improving recurrent models.",When Are Tree Structures Necessary for Deep Learning of Representations?,"Recursive neural models, which use syntactic parse trees to recursively generate representations bottom-up, are a popular architecture. However there have not been rigorous evaluations showing for exactly which tasks this syntax-based method is appropriate. In this paper, we benchmark recursive neural models against sequential recurrent neural models, enforcing applesto-apples comparison as much as possible. We investigate 4 tasks: (1) sentiment classification at the sentence level and phrase level; (2) matching questions to answerphrases; (3) discourse parsing; (4) semantic relation extraction. Our goal is to understand better when, and why, recursive models can outperform simpler models. We find that recursive models help mainly on tasks (like semantic relation extraction) that require longdistance connection modeling, particularly on very long sequences. We then introduce a method for allowing recurrent models to achieve similar performance: breaking long sentences into clause-like units at punctuation and processing them separately before combining. Our results thus help understand the limitations of both classes of models, and suggest directions for improving recurrent models.",When Are Tree Structures Necessary for Deep Learning of Representations?,"Recursive neural models, which use syntactic parse trees to recursively generate representations bottom-up, are a popular architecture. However there have not been rigorous evaluations showing for exactly which tasks this syntax-based method is appropriate. In this paper, we benchmark recursive neural models against sequential recurrent neural models, enforcing applesto-apples comparison as much as possible. We investigate 4 tasks: (1) sentiment classification at the sentence level and phrase level; (2) matching questions to answerphrases; (3) discourse parsing; (4) semantic relation extraction. Our goal is to understand better when, and why, recursive models can outperform simpler models. 
We find that recursive models help mainly on tasks (like semantic relation extraction) that require longdistance connection modeling, particularly on very long sequences. We then introduce a method for allowing recurrent models to achieve similar performance: breaking long sentences into clause-like units at punctuation and processing them separately before combining. Our results thus help understand the limitations of both classes of models, and suggest directions for improving recurrent models.","We would especially like to thank Richard Socher and Kai-Sheng Tai for insightful comments, advice, and suggestions. We would also like to thank Sam Bowman, Ignacio Cases, Jon Gauthier, Kevin Gu, Gabor Angeli, Sida Wang, Percy Liang and other members of the Stanford NLP group, as well as the anonymous reviewers for their helpful advice on various aspects of this work. We acknowledge the support of NVIDIA Corporation with the donation of Tesla K40 GPUs We gratefully acknowledge support from an Enlight Foundation Graduate Fellowship, a gift from Bloomberg L.P., the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA8750-13-2-0040, and the NSF via award IIS-1514268. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of Bloomberg L.P., DARPA, AFRL, NSF, or the US government.","When Are Tree Structures Necessary for Deep Learning of Representations?. Recursive neural models, which use syntactic parse trees to recursively generate representations bottom-up, are a popular architecture. However there have not been rigorous evaluations showing for exactly which tasks this syntax-based method is appropriate. In this paper, we benchmark recursive neural models against sequential recurrent neural models, enforcing applesto-apples comparison as much as possible. We investigate 4 tasks: (1) sentiment classification at the sentence level and phrase level; (2) matching questions to answerphrases; (3) discourse parsing; (4) semantic relation extraction. Our goal is to understand better when, and why, recursive models can outperform simpler models. We find that recursive models help mainly on tasks (like semantic relation extraction) that require longdistance connection modeling, particularly on very long sequences. We then introduce a method for allowing recurrent models to achieve similar performance: breaking long sentences into clause-like units at punctuation and processing them separately before combining. Our results thus help understand the limitations of both classes of models, and suggest directions for improving recurrent models.",2015
chatterjee-etal-2017-textual,https://aclanthology.org/W17-7506,0,,,,,,,"Textual Relations and Topic-Projection: Issues in Text Categorization. Categorization of text is done on the basis of its aboutness. Understanding what a text is about often involves a subjective dimension. Developments in linguistics, however, can provide some important insights about what underlies the process of text categorization in general and topic spotting in particular. More specifically, theoretical underpinnings from formal linguistics and systemic functional linguistics may give some important insights about the way challenges can be dealt with. Under this situation, this paper seeks to present a theoretical framework which can take care of the categorization of text in terms of relational hierarchies embodied in the overall organization of the text.",Textual Relations and Topic-Projection: Issues in Text Categorization,"Categorization of text is done on the basis of its aboutness. Understanding what a text is about often involves a subjective dimension. Developments in linguistics, however, can provide some important insights about what underlies the process of text categorization in general and topic spotting in particular. More specifically, theoretical underpinnings from formal linguistics and systemic functional linguistics may give some important insights about the way challenges can be dealt with. Under this situation, this paper seeks to present a theoretical framework which can take care of the categorization of text in terms of relational hierarchies embodied in the overall organization of the text.",Textual Relations and Topic-Projection: Issues in Text Categorization,"Categorization of text is done on the basis of its aboutness. Understanding what a text is about often involves a subjective dimension. Developments in linguistics, however, can provide some important insights about what underlies the process of text categorization in general and topic spotting in particular. More specifically, theoretical underpinnings from formal linguistics and systemic functional linguistics may give some important insights about the way challenges can be dealt with. Under this situation, this paper seeks to present a theoretical framework which can take care of the categorization of text in terms of relational hierarchies embodied in the overall organization of the text.",,"Textual Relations and Topic-Projection: Issues in Text Categorization. Categorization of text is done on the basis of its aboutness. Understanding what a text is about often involves a subjective dimension. Developments in linguistics, however, can provide some important insights about what underlies the process of text categorization in general and topic spotting in particular. More specifically, theoretical underpinnings from formal linguistics and systemic functional linguistics may give some important insights about the way challenges can be dealt with. Under this situation, this paper seeks to present a theoretical framework which can take care of the categorization of text in terms of relational hierarchies embodied in the overall organization of the text.",2017
ostling-etal-2013-automated,https://aclanthology.org/W13-1705,1,,,,education,,,"Automated Essay Scoring for Swedish. We present the first system developed for automated grading of high school essays written in Swedish. The system uses standard text quality indicators and is able to compare vocabulary and grammar to large reference corpora of blog posts and newspaper articles. The system is evaluated on a corpus of 1 702 essays, each graded independently by the student's own teacher and also in a blind re-grading process by another teacher. We show that our system's performance is fair, given the low agreement between the two human graders, and furthermore show how it could improve efficiency in a practical setting where one seeks to identify incorrectly graded essays.",Automated Essay Scoring for {S}wedish,"We present the first system developed for automated grading of high school essays written in Swedish. The system uses standard text quality indicators and is able to compare vocabulary and grammar to large reference corpora of blog posts and newspaper articles. The system is evaluated on a corpus of 1 702 essays, each graded independently by the student's own teacher and also in a blind re-grading process by another teacher. We show that our system's performance is fair, given the low agreement between the two human graders, and furthermore show how it could improve efficiency in a practical setting where one seeks to identify incorrectly graded essays.",Automated Essay Scoring for Swedish,"We present the first system developed for automated grading of high school essays written in Swedish. The system uses standard text quality indicators and is able to compare vocabulary and grammar to large reference corpora of blog posts and newspaper articles. The system is evaluated on a corpus of 1 702 essays, each graded independently by the student's own teacher and also in a blind re-grading process by another teacher. We show that our system's performance is fair, given the low agreement between the two human graders, and furthermore show how it could improve efficiency in a practical setting where one seeks to identify incorrectly graded essays.",We would like to thank the anonymous reviewers for their useful comments.,"Automated Essay Scoring for Swedish. We present the first system developed for automated grading of high school essays written in Swedish. The system uses standard text quality indicators and is able to compare vocabulary and grammar to large reference corpora of blog posts and newspaper articles. The system is evaluated on a corpus of 1 702 essays, each graded independently by the student's own teacher and also in a blind re-grading process by another teacher. We show that our system's performance is fair, given the low agreement between the two human graders, and furthermore show how it could improve efficiency in a practical setting where one seeks to identify incorrectly graded essays.",2013
dwivedi-shrivastava-2017-beyond,https://aclanthology.org/W17-7526,0,,,,,,,"Beyond Word2Vec: Embedding Words and Phrases in Same Vector Space. Word embeddings are being used for several linguistic problems and NLP tasks. Improvements in solutions to such problems are great because of the recent breakthroughs in vector representation of words and research in vector space models. However, vector embeddings of phrases keeping semantics intact with words has been challenging. We propose a novel methodology using Siamese deep neural networks to embed multi-word units and fine-tune the current state-of-the-art word embeddings keeping both in the same vector space. We show several semantic relations between words and phrases using the embeddings generated by our system and evaluate that the similarity of words and their corresponding paraphrases are maximized using the modified embeddings.",Beyond {W}ord2{V}ec: Embedding Words and Phrases in Same Vector Space,"Word embeddings are being used for several linguistic problems and NLP tasks. Improvements in solutions to such problems are great because of the recent breakthroughs in vector representation of words and research in vector space models. However, vector embeddings of phrases keeping semantics intact with words has been challenging. We propose a novel methodology using Siamese deep neural networks to embed multi-word units and fine-tune the current state-of-the-art word embeddings keeping both in the same vector space. We show several semantic relations between words and phrases using the embeddings generated by our system and evaluate that the similarity of words and their corresponding paraphrases are maximized using the modified embeddings.",Beyond Word2Vec: Embedding Words and Phrases in Same Vector Space,"Word embeddings are being used for several linguistic problems and NLP tasks. Improvements in solutions to such problems are great because of the recent breakthroughs in vector representation of words and research in vector space models. However, vector embeddings of phrases keeping semantics intact with words has been challenging. We propose a novel methodology using Siamese deep neural networks to embed multi-word units and fine-tune the current state-of-the-art word embeddings keeping both in the same vector space. We show several semantic relations between words and phrases using the embeddings generated by our system and evaluate that the similarity of words and their corresponding paraphrases are maximized using the modified embeddings.",We would like to thank Naveen Kumar Laskari for discussions during the course of this work and Pruthwik Mishra and Saurav Jha for their valuable suggestions.,"Beyond Word2Vec: Embedding Words and Phrases in Same Vector Space. Word embeddings are being used for several linguistic problems and NLP tasks. Improvements in solutions to such problems are great because of the recent breakthroughs in vector representation of words and research in vector space models. However, vector embeddings of phrases keeping semantics intact with words has been challenging. We propose a novel methodology using Siamese deep neural networks to embed multi-word units and fine-tune the current state-of-the-art word embeddings keeping both in the same vector space. We show several semantic relations between words and phrases using the embeddings generated by our system and evaluate that the similarity of words and their corresponding paraphrases are maximized using the modified embeddings.",2017
sankepally-oard-2018-initial,https://aclanthology.org/L18-1328,0,,,,,,,"An Initial Test Collection for Ranked Retrieval of SMS Conversations. This paper describes a test collection for evaluating systems that search English SMS (Short Message Service) conversations. The collection is built from about 120,000 text messages. Topic development involved identifying typical types of information needs, then generating topics of each type for which relevant content might be found in the collection. Relevance judgments were then made for groups of messages that were most highly ranked by one or more of several ranked retrieval systems. The resulting TREC style test collection can be used to compare some alternative retrieval system designs.",An Initial Test Collection for Ranked Retrieval of {SMS} Conversations,"This paper describes a test collection for evaluating systems that search English SMS (Short Message Service) conversations. The collection is built from about 120,000 text messages. Topic development involved identifying typical types of information needs, then generating topics of each type for which relevant content might be found in the collection. Relevance judgments were then made for groups of messages that were most highly ranked by one or more of several ranked retrieval systems. The resulting TREC style test collection can be used to compare some alternative retrieval system designs.",An Initial Test Collection for Ranked Retrieval of SMS Conversations,"This paper describes a test collection for evaluating systems that search English SMS (Short Message Service) conversations. The collection is built from about 120,000 text messages. Topic development involved identifying typical types of information needs, then generating topics of each type for which relevant content might be found in the collection. Relevance judgments were then made for groups of messages that were most highly ranked by one or more of several ranked retrieval systems. The resulting TREC style test collection can be used to compare some alternative retrieval system designs.",,"An Initial Test Collection for Ranked Retrieval of SMS Conversations. This paper describes a test collection for evaluating systems that search English SMS (Short Message Service) conversations. The collection is built from about 120,000 text messages. Topic development involved identifying typical types of information needs, then generating topics of each type for which relevant content might be found in the collection. Relevance judgments were then made for groups of messages that were most highly ranked by one or more of several ranked retrieval systems. The resulting TREC style test collection can be used to compare some alternative retrieval system designs.",2018
lin-etal-2012-combining,https://aclanthology.org/P12-1106,0,,,,,,,"Combining Coherence Models and Machine Translation Evaluation Metrics for Summarization Evaluation. An ideal summarization system should produce summaries that have high content coverage and linguistic quality. Many state-ofthe-art summarization systems focus on content coverage by extracting content-dense sentences from source articles. A current research focus is to process these sentences so that they read fluently as a whole. The current AE-SOP task encourages research on evaluating summaries on content, readability, and overall responsiveness. In this work, we adapt a machine translation metric to measure content coverage, apply an enhanced discourse coherence model to evaluate summary readability, and combine both in a trained regression model to evaluate overall responsiveness. The results show significantly improved performance over AESOP 2011 submitted metrics.",Combining Coherence Models and Machine Translation Evaluation Metrics for Summarization Evaluation,"An ideal summarization system should produce summaries that have high content coverage and linguistic quality. Many state-ofthe-art summarization systems focus on content coverage by extracting content-dense sentences from source articles. A current research focus is to process these sentences so that they read fluently as a whole. The current AE-SOP task encourages research on evaluating summaries on content, readability, and overall responsiveness. In this work, we adapt a machine translation metric to measure content coverage, apply an enhanced discourse coherence model to evaluate summary readability, and combine both in a trained regression model to evaluate overall responsiveness. The results show significantly improved performance over AESOP 2011 submitted metrics.",Combining Coherence Models and Machine Translation Evaluation Metrics for Summarization Evaluation,"An ideal summarization system should produce summaries that have high content coverage and linguistic quality. Many state-ofthe-art summarization systems focus on content coverage by extracting content-dense sentences from source articles. A current research focus is to process these sentences so that they read fluently as a whole. The current AE-SOP task encourages research on evaluating summaries on content, readability, and overall responsiveness. In this work, we adapt a machine translation metric to measure content coverage, apply an enhanced discourse coherence model to evaluate summary readability, and combine both in a trained regression model to evaluate overall responsiveness. The results show significantly improved performance over AESOP 2011 submitted metrics.",This research is supported by the Singapore National Research Foundation under its International Research Centre @ Singapore Funding Initiative and administered by the IDM Programme Office.3 Our metrics are publicly available at http://wing. comp.nus.edu.sg/˜linzihen/summeval/.,"Combining Coherence Models and Machine Translation Evaluation Metrics for Summarization Evaluation. An ideal summarization system should produce summaries that have high content coverage and linguistic quality. Many state-ofthe-art summarization systems focus on content coverage by extracting content-dense sentences from source articles. A current research focus is to process these sentences so that they read fluently as a whole. The current AE-SOP task encourages research on evaluating summaries on content, readability, and overall responsiveness. 
In this work, we adapt a machine translation metric to measure content coverage, apply an enhanced discourse coherence model to evaluate summary readability, and combine both in a trained regression model to evaluate overall responsiveness. The results show significantly improved performance over AESOP 2011 submitted metrics.",2012
abu-jbara-radev-2011-clairlib,https://aclanthology.org/P11-4021,0,,,,,,,"Clairlib: A Toolkit for Natural Language Processing, Information Retrieval, and Network Analysis. In this paper we present Clairlib, an opensource toolkit for Natural Language Processing, Information Retrieval, and Network Analysis. Clairlib provides an integrated framework intended to simplify a number of generic tasks within and across those three areas. It has a command-line interface, a graphical interface, and a documented API. Clairlib is compatible with all the common platforms and operating systems. In addition to its own functionality, it provides interfaces to external software and corpora. Clairlib comes with a comprehensive documentation and a rich set of tutorials and visual demos.","{C}lairlib: A Toolkit for Natural Language Processing, Information Retrieval, and Network Analysis","In this paper we present Clairlib, an opensource toolkit for Natural Language Processing, Information Retrieval, and Network Analysis. Clairlib provides an integrated framework intended to simplify a number of generic tasks within and across those three areas. It has a command-line interface, a graphical interface, and a documented API. Clairlib is compatible with all the common platforms and operating systems. In addition to its own functionality, it provides interfaces to external software and corpora. Clairlib comes with a comprehensive documentation and a rich set of tutorials and visual demos.","Clairlib: A Toolkit for Natural Language Processing, Information Retrieval, and Network Analysis","In this paper we present Clairlib, an opensource toolkit for Natural Language Processing, Information Retrieval, and Network Analysis. Clairlib provides an integrated framework intended to simplify a number of generic tasks within and across those three areas. It has a command-line interface, a graphical interface, and a documented API. Clairlib is compatible with all the common platforms and operating systems. In addition to its own functionality, it provides interfaces to external software and corpora. Clairlib comes with a comprehensive documentation and a rich set of tutorials and visual demos.","We would like to thank Mark Hodges, Anthony Fader, Mark Joseph, Joshua Gerrish, Mark Schaller, Jonathan dePeri, Bryan Gibson, Chen Huang, Arzucan Ozgur, and Prem Ganeshkumar who contributed to the development of Clairlib.This work was supported in part by grants R01-LM008106 and U54-DA021519 from the US National Institutes of Health, U54 DA021519, IDM 0329043, DHB 0527513, 0534323, and 0527513 from the National Science Foundation, and W911NF-09-C-0141 from IARPA.","Clairlib: A Toolkit for Natural Language Processing, Information Retrieval, and Network Analysis. In this paper we present Clairlib, an opensource toolkit for Natural Language Processing, Information Retrieval, and Network Analysis. Clairlib provides an integrated framework intended to simplify a number of generic tasks within and across those three areas. It has a command-line interface, a graphical interface, and a documented API. Clairlib is compatible with all the common platforms and operating systems. In addition to its own functionality, it provides interfaces to external software and corpora. Clairlib comes with a comprehensive documentation and a rich set of tutorials and visual demos.",2011
yu-etal-2013-candidate,https://aclanthology.org/W13-4420,0,,,,,,,"Candidate Scoring Using Web-Based Measure for Chinese Spelling Error Correction. Chinese character correction involves two major steps: 1) Providing candidate corrections for all or partially identified characters in a sentence, and 2) Scoring all altered sentences and identifying which is the best corrected sentence. In this paper a web-based measure is used to score candidate sentences, in which there exists one continuous error character in a sentence in almost all sentences in the Bakeoff corpora. The approach of using a web-based measure can be applied directly to sentences with multiple error characters, either consecutive or not, and is not optimized for one-character error correction of Chinese sentences. The results show that the approach achieved a fair precision score whereas the recall is low compared to results reported in this Bakeoff.",Candidate Scoring Using Web-Based Measure for {C}hinese Spelling Error Correction,"Chinese character correction involves two major steps: 1) Providing candidate corrections for all or partially identified characters in a sentence, and 2) Scoring all altered sentences and identifying which is the best corrected sentence. In this paper a web-based measure is used to score candidate sentences, in which there exists one continuous error character in a sentence in almost all sentences in the Bakeoff corpora. The approach of using a web-based measure can be applied directly to sentences with multiple error characters, either consecutive or not, and is not optimized for one-character error correction of Chinese sentences. The results show that the approach achieved a fair precision score whereas the recall is low compared to results reported in this Bakeoff.",Candidate Scoring Using Web-Based Measure for Chinese Spelling Error Correction,"Chinese character correction involves two major steps: 1) Providing candidate corrections for all or partially identified characters in a sentence, and 2) Scoring all altered sentences and identifying which is the best corrected sentence. In this paper a web-based measure is used to score candidate sentences, in which there exists one continuous error character in a sentence in almost all sentences in the Bakeoff corpora. The approach of using a web-based measure can be applied directly to sentences with multiple error characters, either consecutive or not, and is not optimized for one-character error correction of Chinese sentences. The results show that the approach achieved a fair precision score whereas the recall is low compared to results reported in this Bakeoff.","This work was supported by National Science Council (NSC), Taiwan, under Contract number: 102-2221-E-155-029-MY3.","Candidate Scoring Using Web-Based Measure for Chinese Spelling Error Correction. Chinese character correction involves two major steps: 1) Providing candidate corrections for all or partially identified characters in a sentence, and 2) Scoring all altered sentences and identifying which is the best corrected sentence. In this paper a web-based measure is used to score candidate sentences, in which there exists one continuous error character in a sentence in almost all sentences in the Bakeoff corpora. The approach of using a web-based measure can be applied directly to sentences with multiple error characters, either consecutive or not, and is not optimized for one-character error correction of Chinese sentences. 
The results show that the approach achieved a fair precision score whereas the recall is low compared to results reported in this Bakeoff.",2013
ishiwatari-etal-2017-chunk,https://aclanthology.org/P17-1174,0,,,,,,,"Chunk-based Decoder for Neural Machine Translation. Chunks (or phrases) once played a pivotal role in machine translation. By using a chunk rather than a word as the basic translation unit, local (intra-chunk) and global (inter-chunk) word orders and dependencies can be easily modeled. The chunk structure, despite its importance, has not been considered in the decoders used for neural machine translation (NMT). In this paper, we propose chunk-based decoders for NMT, each of which consists of a chunk-level decoder and a word-level decoder. The chunk-level decoder models global dependencies while the word-level decoder decides the local word order in a chunk. To output a target sentence, the chunk-level decoder generates a chunk representation containing global information, which the word-level decoder then uses as a basis to predict the words inside the chunk. Experimental results show that our proposed decoders can significantly improve translation performance in a WAT '16 English-to-Japanese translation task.",Chunk-based Decoder for Neural Machine Translation,"Chunks (or phrases) once played a pivotal role in machine translation. By using a chunk rather than a word as the basic translation unit, local (intra-chunk) and global (inter-chunk) word orders and dependencies can be easily modeled. The chunk structure, despite its importance, has not been considered in the decoders used for neural machine translation (NMT). In this paper, we propose chunk-based decoders for NMT, each of which consists of a chunk-level decoder and a word-level decoder. The chunk-level decoder models global dependencies while the word-level decoder decides the local word order in a chunk. To output a target sentence, the chunk-level decoder generates a chunk representation containing global information, which the word-level decoder then uses as a basis to predict the words inside the chunk. Experimental results show that our proposed decoders can significantly improve translation performance in a WAT '16 English-to-Japanese translation task.",Chunk-based Decoder for Neural Machine Translation,"Chunks (or phrases) once played a pivotal role in machine translation. By using a chunk rather than a word as the basic translation unit, local (intra-chunk) and global (inter-chunk) word orders and dependencies can be easily modeled. The chunk structure, despite its importance, has not been considered in the decoders used for neural machine translation (NMT). In this paper, we propose chunk-based decoders for NMT, each of which consists of a chunk-level decoder and a word-level decoder. The chunk-level decoder models global dependencies while the word-level decoder decides the local word order in a chunk. To output a target sentence, the chunk-level decoder generates a chunk representation containing global information, which the word-level decoder then uses as a basis to predict the words inside the chunk. Experimental results show that our proposed decoders can significantly improve translation performance in a WAT '16 English-to-Japanese translation task.","This research was partially supported by the Research and Development on Real World Big Data Integration and Analysis program of the Ministry of Education, Culture, Sports, Science and Technology (MEXT) and RIKEN, Japan, and by the Chinese National Research Fund (NSFC) Key Project No. 61532013 and National China 973 Project No. 
2015CB352401. The authors appreciate Dongdong Zhang, Shuangzhi Wu, and Zhirui Zhang for the fruitful discussions while the first and second authors were interns at Microsoft Research Asia. We also thank Masashi Toyoda and his group for letting us use their computing resources. Finally, we thank the anonymous reviewers for their careful reading of our paper and insightful comments.","Chunk-based Decoder for Neural Machine Translation. Chunks (or phrases) once played a pivotal role in machine translation. By using a chunk rather than a word as the basic translation unit, local (intra-chunk) and global (inter-chunk) word orders and dependencies can be easily modeled. The chunk structure, despite its importance, has not been considered in the decoders used for neural machine translation (NMT). In this paper, we propose chunk-based decoders for NMT, each of which consists of a chunk-level decoder and a word-level decoder. The chunk-level decoder models global dependencies while the word-level decoder decides the local word order in a chunk. To output a target sentence, the chunk-level decoder generates a chunk representation containing global information, which the word-level decoder then uses as a basis to predict the words inside the chunk. Experimental results show that our proposed decoders can significantly improve translation performance in a WAT '16 English-to-Japanese translation task.",2017
wood-doughty-etal-2018-challenges,https://aclanthology.org/D18-1488,0,,,,,,,"Challenges of Using Text Classifiers for Causal Inference. Causal understanding is essential for many kinds of decision-making, but causal inference from observational data has typically only been applied to structured, low-dimensional datasets. While text classifiers produce low-dimensional outputs, their use in causal inference has not previously been studied. To facilitate causal analyses based on language data, we consider the role that text classifiers can play in causal inference through established modeling mechanisms from the causality literature on missing data and measurement error. We demonstrate how to conduct causal analyses using text classifiers on simulated and Yelp data, and discuss the opportunities and challenges of future work that uses text data in causal inference.",Challenges of Using Text Classifiers for Causal Inference,"Causal understanding is essential for many kinds of decision-making, but causal inference from observational data has typically only been applied to structured, low-dimensional datasets. While text classifiers produce low-dimensional outputs, their use in causal inference has not previously been studied. To facilitate causal analyses based on language data, we consider the role that text classifiers can play in causal inference through established modeling mechanisms from the causality literature on missing data and measurement error. We demonstrate how to conduct causal analyses using text classifiers on simulated and Yelp data, and discuss the opportunities and challenges of future work that uses text data in causal inference.",Challenges of Using Text Classifiers for Causal Inference,"Causal understanding is essential for many kinds of decision-making, but causal inference from observational data has typically only been applied to structured, low-dimensional datasets. While text classifiers produce low-dimensional outputs, their use in causal inference has not previously been studied. To facilitate causal analyses based on language data, we consider the role that text classifiers can play in causal inference through established modeling mechanisms from the causality literature on missing data and measurement error. We demonstrate how to conduct causal analyses using text classifiers on simulated and Yelp data, and discuss the opportunities and challenges of future work that uses text data in causal inference.",This work was in part supported by the National Institute of General Medical Sciences under grant number 5R01GM114771 and by the National Institute of Allergy and Infectious Diseases under grant number R01 AI127271-01A1. We thank the anonymous reviewers for their helpful comments.,"Challenges of Using Text Classifiers for Causal Inference. Causal understanding is essential for many kinds of decision-making, but causal inference from observational data has typically only been applied to structured, low-dimensional datasets. While text classifiers produce low-dimensional outputs, their use in causal inference has not previously been studied. To facilitate causal analyses based on language data, we consider the role that text classifiers can play in causal inference through established modeling mechanisms from the causality literature on missing data and measurement error. 
We demonstrate how to conduct causal analyses using text classifiers on simulated and Yelp data, and discuss the opportunities and challenges of future work that uses text data in causal inference.",2018
sawhney-etal-2021-multimodal,https://aclanthology.org/2021.acl-long.526,0,,,,finance,,,"Multimodal Multi-Speaker Merger \& Acquisition Financial Modeling: A New Task, Dataset, and Neural Baselines. Risk prediction is an essential task in financial markets. Merger and Acquisition (M&A) calls provide key insights into the claims made by company executives about the restructuring of the financial firms. Extracting vocal and textual cues from M&A calls can help model the risk associated with such financial activities. To aid the analysis of M&A calls, we curate a dataset of conference call transcripts and their corresponding audio recordings for the time period ranging from 2016 to 2020. We introduce M3ANet, a baseline architecture that takes advantage of the multimodal multispeaker input to forecast the financial risk associated with the M&A calls. Empirical results prove that the task is challenging, with the proposed architecture performing marginally better than strong BERT-based baselines. We release the M3A dataset and benchmark models to motivate future research on this challenging problem domain.","Multimodal Multi-Speaker Merger {\&} Acquisition Financial Modeling: A New Task, Dataset, and Neural Baselines","Risk prediction is an essential task in financial markets. Merger and Acquisition (M&A) calls provide key insights into the claims made by company executives about the restructuring of the financial firms. Extracting vocal and textual cues from M&A calls can help model the risk associated with such financial activities. To aid the analysis of M&A calls, we curate a dataset of conference call transcripts and their corresponding audio recordings for the time period ranging from 2016 to 2020. We introduce M3ANet, a baseline architecture that takes advantage of the multimodal multispeaker input to forecast the financial risk associated with the M&A calls. Empirical results prove that the task is challenging, with the proposed architecture performing marginally better than strong BERT-based baselines. We release the M3A dataset and benchmark models to motivate future research on this challenging problem domain.","Multimodal Multi-Speaker Merger \& Acquisition Financial Modeling: A New Task, Dataset, and Neural Baselines","Risk prediction is an essential task in financial markets. Merger and Acquisition (M&A) calls provide key insights into the claims made by company executives about the restructuring of the financial firms. Extracting vocal and textual cues from M&A calls can help model the risk associated with such financial activities. To aid the analysis of M&A calls, we curate a dataset of conference call transcripts and their corresponding audio recordings for the time period ranging from 2016 to 2020. We introduce M3ANet, a baseline architecture that takes advantage of the multimodal multispeaker input to forecast the financial risk associated with the M&A calls. Empirical results prove that the task is challenging, with the proposed architecture performing marginally better than strong BERT-based baselines. We release the M3A dataset and benchmark models to motivate future research on this challenging problem domain.",,"Multimodal Multi-Speaker Merger \& Acquisition Financial Modeling: A New Task, Dataset, and Neural Baselines. Risk prediction is an essential task in financial markets. Merger and Acquisition (M&A) calls provide key insights into the claims made by company executives about the restructuring of the financial firms. 
Extracting vocal and textual cues from M&A calls can help model the risk associated with such financial activities. To aid the analysis of M&A calls, we curate a dataset of conference call transcripts and their corresponding audio recordings for the time period ranging from 2016 to 2020. We introduce M3ANet, a baseline architecture that takes advantage of the multimodal multispeaker input to forecast the financial risk associated with the M&A calls. Empirical results prove that the task is challenging, with the proposed architecture performing marginally better than strong BERT-based baselines. We release the M3A dataset and benchmark models to motivate future research on this challenging problem domain.",2021
kordjamshidi-etal-2017-spatial,https://aclanthology.org/W17-4306,0,,,,,,,"Spatial Language Understanding with Multimodal Graphs using Declarative Learning based Programming. This work is on a previously formalized semantic evaluation task of spatial role labeling (SpRL) that aims at extraction of formal spatial meaning from text. Here, we report the results of initial efforts towards exploiting visual information in the form of images to help spatial language understanding. We discuss the way of designing new models in the framework of declarative learning-based programming (DeLBP). The DeLBP framework facilitates combining modalities and representing various data in a unified graph. The learning and inference models exploit the structure of the unified graph as well as the global first order domain constraints beyond the data to predict the semantics which forms a structured meaning representation of the spatial context. Continuous representations are used to relate the various elements of the graph originating from different modalities. We improved over the state-of-the-art results on SpRL.",Spatial Language Understanding with Multimodal Graphs using Declarative Learning based Programming,"This work is on a previously formalized semantic evaluation task of spatial role labeling (SpRL) that aims at extraction of formal spatial meaning from text. Here, we report the results of initial efforts towards exploiting visual information in the form of images to help spatial language understanding. We discuss the way of designing new models in the framework of declarative learning-based programming (DeLBP). The DeLBP framework facilitates combining modalities and representing various data in a unified graph. The learning and inference models exploit the structure of the unified graph as well as the global first order domain constraints beyond the data to predict the semantics which forms a structured meaning representation of the spatial context. Continuous representations are used to relate the various elements of the graph originating from different modalities. We improved over the state-of-the-art results on SpRL.",Spatial Language Understanding with Multimodal Graphs using Declarative Learning based Programming,"This work is on a previously formalized semantic evaluation task of spatial role labeling (SpRL) that aims at extraction of formal spatial meaning from text. Here, we report the results of initial efforts towards exploiting visual information in the form of images to help spatial language understanding. We discuss the way of designing new models in the framework of declarative learning-based programming (DeLBP). The DeLBP framework facilitates combining modalities and representing various data in a unified graph. The learning and inference models exploit the structure of the unified graph as well as the global first order domain constraints beyond the data to predict the semantics which forms a structured meaning representation of the spatial context. Continuous representations are used to relate the various elements of the graph originating from different modalities. We improved over the state-of-the-art results on SpRL.",,"Spatial Language Understanding with Multimodal Graphs using Declarative Learning based Programming. This work is on a previously formalized semantic evaluation task of spatial role labeling (SpRL) that aims at extraction of formal spatial meaning from text. 
Here, we report the results of initial efforts towards exploiting visual information in the form of images to help spatial language understanding. We discuss the way of designing new models in the framework of declarative learning-based programming (DeLBP). The DeLBP framework facilitates combining modalities and representing various data in a unified graph. The learning and inference models exploit the structure of the unified graph as well as the global first order domain constraints beyond the data to predict the semantics which forms a structured meaning representation of the spatial context. Continuous representations are used to relate the various elements of the graph originating from different modalities. We improved over the state-of-the-art results on SpRL.",2017
linde-goguen-1980-independence,https://aclanthology.org/P80-1010,0,,,,,,,"On the Independence of Discourse Structure and Semantic Domain. Traditionally, linguistics has been concerned with units at the level of the sentence or below, but recently, a body of research has emerged which demonstrates the existence and organization of linguistic units larger than the sentence. (Chafe, 1974; Goguen, Linde, and Weiner, to appear; Grosz, 1977; Halliday and Hasan, 1976; Labov, 1972; Linde, 1974, 1980a; Linde and Goguen, 1978; Linde and Labov, 1975; Polanyi, 1978; Weiner, 1979.) Each such study raises a question about whether the structure discovered is a property of the organization of Language or whether it is entirely a property of the semantic domain. That is, are we discovering general facts about the structure of language at a level beyond the sentence, or are we discovering particular facts about apartment layouts, water pump repair, Watergate politics, etc?
Such a crude question does not arise with regard to sentences.",On the Independence of Discourse Structure and Semantic Domain,"Traditionally, linguistics has been concerned with units at the level of the sentence or below, but recently, a body of research has emerged which demonstrates the existence and organization of linguistic units larger than the sentence. (Chafe, 1974; Goguen, Linde, and Weiner, to appear; Grosz, 1977; Halliday and Hasan, 1976; Labov, 1972; Linde, 1974, 1980a; Linde and Goguen, 1978; Linde and Labov, 1975; Polanyi, 1978; Weiner, 1979.) Each such study raises a question about whether the structure discovered is a property of the organization of Language or whether it is entirely a property of the semantic domain. That is, are we discovering general facts about the structure of language at a level beyond the sentence, or are we discovering particular facts about apartment layouts, water pump repair, Watergate politics, etc?
Such a crude question does not arise with regard to sentences.",On the Independence of Discourse Structure and Semantic Domain,"Traditionally, linguistics has been concerned with units at the level of the sentence or below, but recently, a body of research has emerged which demonstrates the existence and organization of linguistic units larger than the sentence. (Chafe, 1974; Goguen, Linde, and Weiner, to appear; Grosz, 1977; Halliday and Hasan, 1976; Labov, 1972; Linde, 1974, 1980a; Linde and Goguen, 1978; Linde and Labov, 1975; Polanyi, 1978; Weiner, 1979.) Each such study raises a question about whether the structure discovered is a property of the organization of Language or whether it is entirely a property of the semantic domain. That is, are we discovering general facts about the structure of language at a level beyond the sentence, or are we discovering particular facts about apartment layouts, water pump repair, Watergate politics, etc?
Such a crude question does not arise with regard to sentences.","We would like to thank R. M. Burstall and James Weiner for their help throughout much of the work reported in this paper. We owe our approach to discourse analysis to the work of William Labov, and our basic orientation to Chogyam Trungpa, Rinpoche.","On the Independence of Discourse Structure and Semantic Domain. Traditionally, linguistics has been concerned with units at the level of the sentence or below, but recently, a body of research has emerged which demonstrates the existence and organization of linguistic units larger than the sentence. (Chafe, 1974; Goguen, Linde, and Weiner, to appear; Grosz, 1977; Halliday and Hasan, 1976; Labov, 1972; Linde, 1974, 1980a; Linde and Goguen, 1978; Linde and Labov, 1975; Polanyi, 1978; Weiner, 1979.) Each such study raises a question about whether the structure discovered is a property of the organization of Language or whether it is entirely a property of the semantic domain. That is, are we discovering general facts about the structure of language at a level beyond the sentence, or are we discovering particular facts about apartment layouts, water pump repair, Watergate politics, etc?
Such a crude question does not arise with regard to sentences.",1980
bohan-etal-2000-evaluating,http://www.lrec-conf.org/proceedings/lrec2000/pdf/136.pdf,0,,,,,,,"Evaluating Translation Quality as Input to Product Development. In this paper we present a corpus-based method to evaluate the translation quality of machine translation (MT) systems. We start with a shallow analysis of a large corpus and gradually focus the attention on the translation problems. The method constitutes an efficient way to identify the most important grammatical and lexical weaknesses of an MT system and to guide development towards improved translation quality. The evaluation described in the paper was carried out as a cooperation between an MT technology developer, Sail Labs, and the Computational Linguistics group at the University of Zürich.",Evaluating Translation Quality as Input to Product Development,"In this paper we present a corpus-based method to evaluate the translation quality of machine translation (MT) systems. We start with a shallow analysis of a large corpus and gradually focus the attention on the translation problems. The method constitutes an efficient way to identify the most important grammatical and lexical weaknesses of an MT system and to guide development towards improved translation quality. The evaluation described in the paper was carried out as a cooperation between an MT technology developer, Sail Labs, and the Computational Linguistics group at the University of Zürich.",Evaluating Translation Quality as Input to Product Development,"In this paper we present a corpus-based method to evaluate the translation quality of machine translation (MT) systems. We start with a shallow analysis of a large corpus and gradually focus the attention on the translation problems. The method constitutes an efficient way to identify the most important grammatical and lexical weaknesses of an MT system and to guide development towards improved translation quality. The evaluation described in the paper was carried out as a cooperation between an MT technology developer, Sail Labs, and the Computational Linguistics group at the University of Zürich.",,"Evaluating Translation Quality as Input to Product Development. In this paper we present a corpus-based method to evaluate the translation quality of machine translation (MT) systems. We start with a shallow analysis of a large corpus and gradually focus the attention on the translation problems. The method constitutes an efficient way to identify the most important grammatical and lexical weaknesses of an MT system and to guide development towards improved translation quality. The evaluation described in the paper was carried out as a cooperation between an MT technology developer, Sail Labs, and the Computational Linguistics group at the University of Zürich.",2000
burchardt-etal-2008-formalising,https://aclanthology.org/I08-1051,0,,,,,,,"Formalising Multi-layer Corpora in OWL DL - Lexicon Modelling, Querying and Consistency Control. We present a general approach to formally modelling corpora with multi-layered annotation, thereby inducing a lexicon model in a typed logical representation language, OWL DL. This model can be interpreted as a graph structure that offers flexible querying functionality beyond current XML-based query languages and powerful methods for consistency control. We illustrate our approach by applying it to the syntactically and semantically annotated SALSA/TIGER corpus.","Formalising Multi-layer Corpora in {OWL} {DL} - Lexicon Modelling, Querying and Consistency Control","We present a general approach to formally modelling corpora with multi-layered annotation, thereby inducing a lexicon model in a typed logical representation language, OWL DL. This model can be interpreted as a graph structure that offers flexible querying functionality beyond current XML-based query languages and powerful methods for consistency control. We illustrate our approach by applying it to the syntactically and semantically annotated SALSA/TIGER corpus.","Formalising Multi-layer Corpora in OWL DL - Lexicon Modelling, Querying and Consistency Control","We present a general approach to formally modelling corpora with multi-layered annotation, thereby inducing a lexicon model in a typed logical representation language, OWL DL. This model can be interpreted as a graph structure that offers flexible querying functionality beyond current XML-based query languages and powerful methods for consistency control. We illustrate our approach by applying it to the syntactically and semantically annotated SALSA/TIGER corpus.",This work has been partly funded by the German Research Foundation DFG (grant PI 154/9-2). We also thank the two anonymous reviewers for their valuable comments and suggestions.,"Formalising Multi-layer Corpora in OWL DL - Lexicon Modelling, Querying and Consistency Control. We present a general approach to formally modelling corpora with multi-layered annotation, thereby inducing a lexicon model in a typed logical representation language, OWL DL. This model can be interpreted as a graph structure that offers flexible querying functionality beyond current XML-based query languages and powerful methods for consistency control. We illustrate our approach by applying it to the syntactically and semantically annotated SALSA/TIGER corpus.",2008
meiby-1996-building,https://aclanthology.org/1996.tc-1.17,0,,,,,,,"Building Machine Translation on a firm foundation. Professor Alan K. Melby, Brigham Young University at Provo, USA SYNOPSIS How can we build the next generation of machine translation systems on a firm foundation? We should build on the current generation of systems by incorporating proven technology. That is, we should emphasize indicative-quality translation where appropriate and high-quality controlled-language translation where appropriate, leaving other kinds of translation to humans. This paper will suggest a theoretical framework for discussing text types and make five practical proposals to machine translation vendors for enhancing current machine translation systems. Some of these enhancements will also benefit human translators who are using translation technology.
THEORETICAL FRAMEWORK How can we build the next generation of machine translation systems on a firm foundation? Unless astounding breakthroughs in computational linguistics appear on the horizon, the next generation of machine translation systems is not likely to replace all human translators or even reduce the current level of need for human translators. We should build the next generation of systems on the current generation, looking for ways to further help both human and machine translation benefit from technology that has been shown to work. Currently understood technology has not yet been fully implemented in machine translation and can provide a firm foundation for further development of existing systems and implementation of new systems. Before making five practical proposals and projecting their potential benefits, I will sketch a theoretical framework for the rest of the paper.",Building Machine Translation on a firm foundation,"Professor Alan K. Melby, Brigham Young University at Provo, USA SYNOPSIS How can we build the next generation of machine translation systems on a firm foundation? We should build on the current generation of systems by incorporating proven technology. That is, we should emphasize indicative-quality translation where appropriate and high-quality controlled-language translation where appropriate, leaving other kinds of translation to humans. This paper will suggest a theoretical framework for discussing text types and make five practical proposals to machine translation vendors for enhancing current machine translation systems. Some of these enhancements will also benefit human translators who are using translation technology.
THEORETICAL FRAMEWORK How can we build the next generation of machine translation systems on a firm foundation? Unless astounding breakthroughs in computational linguistics appear on the horizon, the next generation of machine translation systems is not likely to replace all human translators or even reduce the current level of need for human translators. We should build the next generation of systems on the current generation, looking for ways to further help both human and machine translation benefit from technology that has been shown to work. Currently understood technology has not yet been fully implemented in machine translation and can provide a firm foundation for further development of existing systems and implementation of new systems. Before making five practical proposals and projecting their potential benefits, I will sketch a theoretical framework for the rest of the paper.",Building Machine Translation on a firm foundation,"Professor Alan K. Melby, Brigham Young University at Provo, USA SYNOPSIS How can we build the next generation of machine translation systems on a firm foundation? We should build on the current generation of systems by incorporating proven technology. That is, we should emphasize indicative-quality translation where appropriate and high-quality controlled-language translation where appropriate, leaving other kinds of translation to humans. This paper will suggest a theoretical framework for discussing text types and make five practical proposals to machine translation vendors for enhancing current machine translation systems. Some of these enhancements will also benefit human translators who are using translation technology.
THEORETICAL FRAMEWORK How can we build the next generation of machine translation systems on a firm foundation? Unless astounding breakthroughs in computational linguistics appear on the horizon, the next generation of machine translation systems is not likely to replace all human translators or even reduce the current level of need for human translators. We should build the next generation of systems on the current generation, looking for ways to further help both human and machine translation benefit from technology that has been shown to work. Currently understood technology has not yet been fully implemented in machine translation and can provide a firm foundation for further development of existing systems and implementation of new systems. Before making five practical proposals and projecting their potential benefits, I will sketch a theoretical framework for the rest of the paper.",,"Building Machine Translation on a firm foundation. Professor Alan K. Melby, Brigham Young University at Provo, USA SYNOPSIS How can we build the next generation of machine translation systems on a firm foundation? We should build on the current generation of systems by incorporating proven technology. That is, we should emphasize indicative-quality translation where appropriate and high-quality controlled-language translation where appropriate, leaving other kinds of translation to humans. This paper will suggest a theoretical framework for discussing text types and make five practical proposals to machine translation vendors for enhancing current machine translation systems. Some of these enhancements will also benefit human translators who are using translation technology.
THEORETICAL FRAMEWORK How can we build the next generation of machine translation systems on a firm foundation? Unless astounding breakthroughs in computational linguistics appear on the horizon, the next generation of machine translation systems is not likely to replace all human translators or even reduce the current level of need for human translators. We should build the next generation of systems on the current generation, looking for ways to further help both human and machine translation benefit from technology that has been shown to work. Currently understood technology has not yet been fully implemented in machine translation and can provide a firm foundation for further development of existing systems and implementation of new systems. Before making five practical proposals and projecting their potential benefits, I will sketch a theoretical framework for the rest of the paper.",1996
stanovsky-tamari-2019-yall,https://aclanthology.org/D19-5549,0,,,,,,,"Y'all should read this! Identifying Plurality in Second-Person Personal Pronouns in English Texts. Distinguishing between singular and plural ""you"" in English is a challenging task which has potential for downstream applications, such as machine translation or coreference resolution. While formal written English does not distinguish between these cases, other languages (such as Spanish), as well as other dialects of English (via phrases such as ""y'all""), do make this distinction. We make use of this to obtain distantly-supervised labels for the task on a large-scale in two domains. Following, we train a model to distinguish between the single/plural 'you', finding that although in-domain training achieves reasonable accuracy (≥ 77%), there is still a lot of room for improvement, especially in the domain-transfer scenario, which proves extremely challenging. Our code and data are publicly available. 1 * Work done during an internship at the Allen Institute for Artificial Intelligence.",{Y}{'}all should read this! Identifying Plurality in Second-Person Personal Pronouns in {E}nglish Texts,"Distinguishing between singular and plural ""you"" in English is a challenging task which has potential for downstream applications, such as machine translation or coreference resolution. While formal written English does not distinguish between these cases, other languages (such as Spanish), as well as other dialects of English (via phrases such as ""y'all""), do make this distinction. We make use of this to obtain distantly-supervised labels for the task on a large-scale in two domains. Following, we train a model to distinguish between the single/plural 'you', finding that although in-domain training achieves reasonable accuracy (≥ 77%), there is still a lot of room for improvement, especially in the domain-transfer scenario, which proves extremely challenging. Our code and data are publicly available. 1 * Work done during an internship at the Allen Institute for Artificial Intelligence.",Y'all should read this! Identifying Plurality in Second-Person Personal Pronouns in English Texts,"Distinguishing between singular and plural ""you"" in English is a challenging task which has potential for downstream applications, such as machine translation or coreference resolution. While formal written English does not distinguish between these cases, other languages (such as Spanish), as well as other dialects of English (via phrases such as ""y'all""), do make this distinction. We make use of this to obtain distantly-supervised labels for the task on a large-scale in two domains. Following, we train a model to distinguish between the single/plural 'you', finding that although in-domain training achieves reasonable accuracy (≥ 77%), there is still a lot of room for improvement, especially in the domain-transfer scenario, which proves extremely challenging. Our code and data are publicly available. 1 * Work done during an internship at the Allen Institute for Artificial Intelligence.",We thank the anonymous reviewers for their many helpful comments and suggestions.,"Y'all should read this! Identifying Plurality in Second-Person Personal Pronouns in English Texts. Distinguishing between singular and plural ""you"" in English is a challenging task which has potential for downstream applications, such as machine translation or coreference resolution. 
While formal written English does not distinguish between these cases, other languages (such as Spanish), as well as other dialects of English (via phrases such as ""y'all""), do make this distinction. We make use of this to obtain distantly-supervised labels for the task on a large-scale in two domains. Following, we train a model to distinguish between the single/plural 'you', finding that although in-domain training achieves reasonable accuracy (≥ 77%), there is still a lot of room for improvement, especially in the domain-transfer scenario, which proves extremely challenging. Our code and data are publicly available. 1 * Work done during an internship at the Allen Institute for Artificial Intelligence.",2019
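Editor's illustrative sketch for the row above: the abstract describes obtaining distantly-supervised singular/plural labels for "you" by exploiting dialectal markers such as "y'all". The snippet below is a minimal sketch of that idea with invented example sentences and a hypothetical `distant_label` helper; it is not the authors' released pipeline.

```python
import re

# Sentences containing an explicitly plural variant ("y'all", "you guys", "you all")
# are rewritten to plain "you" and labelled PLURAL; everything else with "you" is
# treated (noisily) as SINGULAR purely for illustration.
PLURAL_MARKERS = re.compile(r"\b(y'?all|you guys|you all)\b", flags=re.IGNORECASE)

def distant_label(sentence: str):
    """Return (normalized_sentence, label), or None if no second-person pronoun."""
    if PLURAL_MARKERS.search(sentence):
        return PLURAL_MARKERS.sub("you", sentence), "PLURAL"
    if re.search(r"\byou\b", sentence, flags=re.IGNORECASE):
        return sentence, "SINGULAR"
    return None

if __name__ == "__main__":
    examples = [
        "Y'all should read this paper!",      # hypothetical examples
        "You guys were great tonight.",
        "You should read this paper.",
    ]
    for s in examples:
        print(distant_label(s))
```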
tan-bansal-2019-lxmert,https://aclanthology.org/D19-1514,0,,,,,,,"LXMERT: Learning Cross-Modality Encoder Representations from Transformers. Vision-and-language reasoning requires an understanding of visual concepts, language semantics, and, most importantly, the alignment and relationships between these two modalities. We thus propose the LXMERT (Learning Cross-Modality Encoder Representations from Transformers) framework to learn these vision-and-language connections. In LXMERT, we build a large-scale Transformer model that consists of three encoders: an object relationship encoder, a language encoder, and a cross-modality encoder. Next, to endow our model with the capability of connecting vision and language semantics, we pre-train the model with large amounts of image-and-sentence pairs, via five diverse representative pre-training tasks: masked language modeling, masked object prediction (feature regression and label classification), cross-modality matching, and image question answering. These tasks help in learning both intra-modality and cross-modality relationships. After fine-tuning from our pretrained parameters, our model achieves the state-of-the-art results on two visual question answering datasets (i.e., VQA and GQA). We also show the generalizability of our pretrained cross-modality model by adapting it to a challenging visual-reasoning task, NLVR 2 , and improve the previous best result by 22% absolute (54% to 76%). Lastly, we demonstrate detailed ablation studies to prove that both our novel model components and pretraining strategies significantly contribute to our strong results. 1",{LXMERT}: Learning Cross-Modality Encoder Representations from Transformers,"Vision-and-language reasoning requires an understanding of visual concepts, language semantics, and, most importantly, the alignment and relationships between these two modalities. We thus propose the LXMERT (Learning Cross-Modality Encoder Representations from Transformers) framework to learn these vision-and-language connections. In LXMERT, we build a large-scale Transformer model that consists of three encoders: an object relationship encoder, a language encoder, and a cross-modality encoder. Next, to endow our model with the capability of connecting vision and language semantics, we pre-train the model with large amounts of image-and-sentence pairs, via five diverse representative pre-training tasks: masked language modeling, masked object prediction (feature regression and label classification), cross-modality matching, and image question answering. These tasks help in learning both intra-modality and cross-modality relationships. After fine-tuning from our pretrained parameters, our model achieves the state-of-the-art results on two visual question answering datasets (i.e., VQA and GQA). We also show the generalizability of our pretrained cross-modality model by adapting it to a challenging visual-reasoning task, NLVR 2 , and improve the previous best result by 22% absolute (54% to 76%). Lastly, we demonstrate detailed ablation studies to prove that both our novel model components and pretraining strategies significantly contribute to our strong results. 1",LXMERT: Learning Cross-Modality Encoder Representations from Transformers,"Vision-and-language reasoning requires an understanding of visual concepts, language semantics, and, most importantly, the alignment and relationships between these two modalities. 
We thus propose the LXMERT (Learning Cross-Modality Encoder Representations from Transformers) framework to learn these vision-and-language connections. In LXMERT, we build a large-scale Transformer model that consists of three encoders: an object relationship encoder, a language encoder, and a cross-modality encoder. Next, to endow our model with the capability of connecting vision and language semantics, we pre-train the model with large amounts of image-and-sentence pairs, via five diverse representative pre-training tasks: masked language modeling, masked object prediction (feature regression and label classification), cross-modality matching, and image question answering. These tasks help in learning both intra-modality and cross-modality relationships. After fine-tuning from our pretrained parameters, our model achieves the state-of-the-art results on two visual question answering datasets (i.e., VQA and GQA). We also show the generalizability of our pretrained cross-modality model by adapting it to a challenging visual-reasoning task, NLVR 2 , and improve the previous best result by 22% absolute (54% to 76%). Lastly, we demonstrate detailed ablation studies to prove that both our novel model components and pretraining strategies significantly contribute to our strong results. 1","We thank the reviewers for their helpful comments. This work was supported by ARO-YIP Award #W911NF-18-1-0336, and awards from Google, Facebook, Salesforce, and Adobe. The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency. We also thank Alane Suhr for evaluation on NLVR 2 .","LXMERT: Learning Cross-Modality Encoder Representations from Transformers. Vision-and-language reasoning requires an understanding of visual concepts, language semantics, and, most importantly, the alignment and relationships between these two modalities. We thus propose the LXMERT (Learning Cross-Modality Encoder Representations from Transformers) framework to learn these vision-and-language connections. In LXMERT, we build a large-scale Transformer model that consists of three encoders: an object relationship encoder, a language encoder, and a cross-modality encoder. Next, to endow our model with the capability of connecting vision and language semantics, we pre-train the model with large amounts of image-and-sentence pairs, via five diverse representative pre-training tasks: masked language modeling, masked object prediction (feature regression and label classification), cross-modality matching, and image question answering. These tasks help in learning both intra-modality and cross-modality relationships. After fine-tuning from our pretrained parameters, our model achieves the state-of-the-art results on two visual question answering datasets (i.e., VQA and GQA). We also show the generalizability of our pretrained cross-modality model by adapting it to a challenging visual-reasoning task, NLVR 2 , and improve the previous best result by 22% absolute (54% to 76%). Lastly, we demonstrate detailed ablation studies to prove that both our novel model components and pretraining strategies significantly contribute to our strong results. 1",2019
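Editor's illustrative sketch for the row above: the abstract describes a cross-modality encoder in which language and object-level visual representations attend to each other. The PyTorch snippet below sketches one such cross-attention layer under stated assumptions (hidden size, number of heads, and the residual/LayerNorm arrangement are guesses for illustration); it is not the released LXMERT implementation.

```python
import torch
import torch.nn as nn

class CrossModalityLayer(nn.Module):
    """Toy cross-modality layer: each stream queries the other modality."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.lang_to_vis = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.vis_to_lang = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.lang_norm = nn.LayerNorm(dim)
        self.vis_norm = nn.LayerNorm(dim)

    def forward(self, lang: torch.Tensor, vis: torch.Tensor):
        # Cross-attention in both directions, then residual + layer norm.
        lang_ctx, _ = self.lang_to_vis(lang, vis, vis)
        vis_ctx, _ = self.vis_to_lang(vis, lang, lang)
        return self.lang_norm(lang + lang_ctx), self.vis_norm(vis + vis_ctx)

if __name__ == "__main__":
    layer = CrossModalityLayer()
    words = torch.randn(2, 12, 256)    # (batch, tokens, dim)
    objects = torch.randn(2, 36, 256)  # (batch, detected objects, dim)
    l_out, v_out = layer(words, objects)
    print(l_out.shape, v_out.shape)
```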
riedl-biemann-2013-scaling,https://aclanthology.org/D13-1089,0,,,,,,,"Scaling to Large\mbox$^3$ Data: An Efficient and Effective Method to Compute Distributional Thesauri. We introduce a new highly scalable approach for computing Distributional Thesauri (DTs). By employing pruning techniques and a distributed framework, we make the computation for very large corpora feasible on comparably small computational resources. We demonstrate this by releasing a DT for the whole vocabulary of Google Books syntactic n-grams. Evaluating against lexical resources using two measures, we show that our approach produces higher quality DTs than previous approaches, and is thus preferable in terms of speed and quality for large corpora.",Scaling to Large{\mbox{$^3$}} Data: An Efficient and Effective Method to Compute Distributional Thesauri,"We introduce a new highly scalable approach for computing Distributional Thesauri (DTs). By employing pruning techniques and a distributed framework, we make the computation for very large corpora feasible on comparably small computational resources. We demonstrate this by releasing a DT for the whole vocabulary of Google Books syntactic n-grams. Evaluating against lexical resources using two measures, we show that our approach produces higher quality DTs than previous approaches, and is thus preferable in terms of speed and quality for large corpora.",Scaling to Large\mbox$^3$ Data: An Efficient and Effective Method to Compute Distributional Thesauri,"We introduce a new highly scalable approach for computing Distributional Thesauri (DTs). By employing pruning techniques and a distributed framework, we make the computation for very large corpora feasible on comparably small computational resources. We demonstrate this by releasing a DT for the whole vocabulary of Google Books syntactic n-grams. Evaluating against lexical resources using two measures, we show that our approach produces higher quality DTs than previous approaches, and is thus preferable in terms of speed and quality for large corpora.","This work has been supported by the Hessian research excellence program ""Landes-Offensive zur Entwicklung Wissenschaftlich-konomischer Exzellenz"" (LOEWE) as part of the research center ""Digital Humanities"". We would also thank the anonymous reviewers for their comments, which greatly helped to improve the paper.","Scaling to Large\mbox$^3$ Data: An Efficient and Effective Method to Compute Distributional Thesauri. We introduce a new highly scalable approach for computing Distributional Thesauri (DTs). By employing pruning techniques and a distributed framework, we make the computation for very large corpora feasible on comparably small computational resources. We demonstrate this by releasing a DT for the whole vocabulary of Google Books syntactic n-grams. Evaluating against lexical resources using two measures, we show that our approach produces higher quality DTs than previous approaches, and is thus preferable in terms of speed and quality for large corpora.",2013
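Editor's illustrative sketch for the row above: the abstract describes computing a distributional thesaurus with per-word feature pruning. The snippet below is a single-machine toy version (shared-feature overlap as the similarity, invented toy contexts); the actual system is distributed and uses significance-based pruning, so this only illustrates the general shape of the computation.

```python
from collections import Counter, defaultdict

def distributional_thesaurus(pairs, max_features_per_word=1000, top_n=5):
    """Keep the most frequent context features per word, then rank neighbours
    by the number of shared features (toy sketch, not the released system)."""
    feats = defaultdict(Counter)
    for word, context in pairs:
        feats[word][context] += 1

    kept = {w: set(f for f, _ in c.most_common(max_features_per_word))
            for w, c in feats.items()}

    thesaurus = {}
    for w, fw in kept.items():
        overlap = Counter({other: len(fw & fo)
                           for other, fo in kept.items() if other != w})
        thesaurus[w] = overlap.most_common(top_n)
    return thesaurus

if __name__ == "__main__":
    # (word, syntactic-context) pairs; hypothetical toy input.
    toy = [("cat", "subj_of:sleep"), ("cat", "obj_of:feed"),
           ("dog", "subj_of:sleep"), ("dog", "obj_of:feed"),
           ("car", "obj_of:drive")]
    print(distributional_thesaurus(toy))
```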
blaschke-etal-2020-cyberwalle,https://aclanthology.org/2020.semeval-1.192,1,,,,disinformation_and_fake_news,,,CyberWallE at SemEval-2020 Task 11: An Analysis of Feature Engineering for Ensemble Models for Propaganda Detection. This paper describes our participation in the SemEval-2020 task Detection of Propaganda Techniques in News Articles. We participate in both subtasks: Span Identification (SI) and Technique Classification (TC). We use a bi-LSTM architecture in the SI subtask and train a complex ensemble model for the TC subtask. Our architectures are built using embeddings from BERT in combination with additional lexical features and extensive label post-processing. Our systems achieve a rank of 8 out of 35 teams in the SI subtask (F1-score: 43.86%) and 8 out of 31 teams in the TC subtask (F1-score: 57.37%).,{C}yber{W}all{E} at {S}em{E}val-2020 Task 11: An Analysis of Feature Engineering for Ensemble Models for Propaganda Detection,This paper describes our participation in the SemEval-2020 task Detection of Propaganda Techniques in News Articles. We participate in both subtasks: Span Identification (SI) and Technique Classification (TC). We use a bi-LSTM architecture in the SI subtask and train a complex ensemble model for the TC subtask. Our architectures are built using embeddings from BERT in combination with additional lexical features and extensive label post-processing. Our systems achieve a rank of 8 out of 35 teams in the SI subtask (F1-score: 43.86%) and 8 out of 31 teams in the TC subtask (F1-score: 57.37%).,CyberWallE at SemEval-2020 Task 11: An Analysis of Feature Engineering for Ensemble Models for Propaganda Detection,This paper describes our participation in the SemEval-2020 task Detection of Propaganda Techniques in News Articles. We participate in both subtasks: Span Identification (SI) and Technique Classification (TC). We use a bi-LSTM architecture in the SI subtask and train a complex ensemble model for the TC subtask. Our architectures are built using embeddings from BERT in combination with additional lexical features and extensive label post-processing. Our systems achieve a rank of 8 out of 35 teams in the SI subtask (F1-score: 43.86%) and 8 out of 31 teams in the TC subtask (F1-score: 57.37%).,We thank Dr. Çağrı Çöltekin for useful discussions and his guidance throughout this project.,CyberWallE at SemEval-2020 Task 11: An Analysis of Feature Engineering for Ensemble Models for Propaganda Detection. This paper describes our participation in the SemEval-2020 task Detection of Propaganda Techniques in News Articles. We participate in both subtasks: Span Identification (SI) and Technique Classification (TC). We use a bi-LSTM architecture in the SI subtask and train a complex ensemble model for the TC subtask. Our architectures are built using embeddings from BERT in combination with additional lexical features and extensive label post-processing. Our systems achieve a rank of 8 out of 35 teams in the SI subtask (F1-score: 43.86%) and 8 out of 31 teams in the TC subtask (F1-score: 57.37%).,2020
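Editor's illustrative sketch for the row above: the SI subtask is described as a bi-LSTM over BERT embeddings that marks propaganda spans. The PyTorch snippet below is a minimal token-level tagger of that general form (hidden sizes and the two-label inside/outside scheme are assumptions for illustration); it is not the CyberWallE system, which also adds lexical features and label post-processing.

```python
import torch
import torch.nn as nn

class SpanTagger(nn.Module):
    """Minimal bi-LSTM token tagger over precomputed contextual embeddings."""

    def __init__(self, emb_dim: int = 768, hidden: int = 128, n_labels: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_labels)  # per-token inside/outside span

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        hidden_states, _ = self.lstm(token_embeddings)
        return self.out(hidden_states)  # (batch, seq_len, n_labels) logits

if __name__ == "__main__":
    model = SpanTagger()
    fake_bert_output = torch.randn(4, 50, 768)  # (batch, tokens, emb_dim), random stand-in
    predictions = model(fake_bert_output).argmax(dim=-1)  # 1 = token inside a span
    print(predictions.shape)                              # torch.Size([4, 50])
```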
agirre-etal-2013-ubc,https://aclanthology.org/S13-1018,0,,,,,,,UBC\_UOS-TYPED: Regression for typed-similarity. We approach the typed-similarity task using a range of heuristics that rely on information from the appropriate metadata fields for each type of similarity. In addition we train a linear regressor for each type of similarity. The results indicate that the linear regression is key for good performance. Our best system was ranked third in the task.,{UBC}{\_}{UOS}-{TYPED}: Regression for typed-similarity,We approach the typed-similarity task using a range of heuristics that rely on information from the appropriate metadata fields for each type of similarity. In addition we train a linear regressor for each type of similarity. The results indicate that the linear regression is key for good performance. Our best system was ranked third in the task.,UBC\_UOS-TYPED: Regression for typed-similarity,We approach the typed-similarity task using a range of heuristics that rely on information from the appropriate metadata fields for each type of similarity. In addition we train a linear regressor for each type of similarity. The results indicate that the linear regression is key for good performance. Our best system was ranked third in the task.,"This work is partially funded by the PATHS project (http://paths-project.eu) funded by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 270082. Aitor Gonzalez-Agirre is supported by a PhD grant from the Spanish Ministry of Education, Culture and Sport (grant FPU12/06243).",UBC\_UOS-TYPED: Regression for typed-similarity. We approach the typed-similarity task using a range of heuristics that rely on information from the appropriate metadata fields for each type of similarity. In addition we train a linear regressor for each type of similarity. The results indicate that the linear regression is key for good performance. Our best system was ranked third in the task.,2013
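Editor's illustrative sketch for the row above: the abstract's core idea is training one linear regressor per similarity type on type-specific heuristic features. The snippet below shows that setup with scikit-learn; the type names, feature values, and weights are invented for illustration and do not reproduce the task's actual metadata features.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def train_typed_regressors(features_by_type, gold_by_type):
    """Fit an independent linear regressor for each similarity type."""
    return {t: LinearRegression().fit(features_by_type[t], gold_by_type[t])
            for t in features_by_type}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    types = ["author", "people involved", "time period"]    # hypothetical subset of types
    X = {t: rng.random((20, 3)) for t in types}              # 3 heuristic scores per item pair
    y = {t: X[t] @ np.array([0.5, 0.3, 0.2]) + 0.1 * rng.random(20) for t in types}
    models = train_typed_regressors(X, y)
    print({t: np.round(m.coef_, 2) for t, m in models.items()})
```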
cai-etal-2017-crf,https://aclanthology.org/D17-1171,0,,,,,,,"CRF Autoencoder for Unsupervised Dependency Parsing. Unsupervised dependency parsing, which tries to discover linguistic dependency structures from unannotated data, is a very challenging task. Almost all previous work on this task focuses on learning generative models. In this paper, we develop an unsupervised dependency parsing model based on the CRF autoencoder. The encoder part of our model is discriminative and globally normalized which allows us to use rich features as well as universal linguistic priors. We propose an exact algorithm for parsing as well as a tractable learning algorithm. We evaluated the performance of our model on eight multilingual treebanks and found that our model achieved comparable performance with state-of-the-art approaches.",{CRF} Autoencoder for Unsupervised Dependency Parsing,"Unsupervised dependency parsing, which tries to discover linguistic dependency structures from unannotated data, is a very challenging task. Almost all previous work on this task focuses on learning generative models. In this paper, we develop an unsupervised dependency parsing model based on the CRF autoencoder. The encoder part of our model is discriminative and globally normalized which allows us to use rich features as well as universal linguistic priors. We propose an exact algorithm for parsing as well as a tractable learning algorithm. We evaluated the performance of our model on eight multilingual treebanks and found that our model achieved comparable performance with state-of-the-art approaches.",CRF Autoencoder for Unsupervised Dependency Parsing,"Unsupervised dependency parsing, which tries to discover linguistic dependency structures from unannotated data, is a very challenging task. Almost all previous work on this task focuses on learning generative models. In this paper, we develop an unsupervised dependency parsing model based on the CRF autoencoder. The encoder part of our model is discriminative and globally normalized which allows us to use rich features as well as universal linguistic priors. We propose an exact algorithm for parsing as well as a tractable learning algorithm. We evaluated the performance of our model on eight multilingual treebanks and found that our model achieved comparable performance with state-of-the-art approaches.",,"CRF Autoencoder for Unsupervised Dependency Parsing. Unsupervised dependency parsing, which tries to discover linguistic dependency structures from unannotated data, is a very challenging task. Almost all previous work on this task focuses on learning generative models. In this paper, we develop an unsupervised dependency parsing model based on the CRF autoencoder. The encoder part of our model is discriminative and globally normalized which allows us to use rich features as well as universal linguistic priors. We propose an exact algorithm for parsing as well as a tractable learning algorithm. We evaluated the performance of our model on eight multilingual treebanks and found that our model achieved comparable performance with state-of-the-art approaches.",2017
li-etal-2016-learning,https://aclanthology.org/C16-1136,0,,,,,,,"Learning Event Expressions via Bilingual Structure Projection. Identifying events of a specific type is a challenging task as events in texts are described in numerous and diverse ways. Aiming to resolve high complexities of event descriptions, previous work (Huang and Riloff, 2013) proposes multi-faceted event recognition and a bootstrapping method to automatically acquire both event facet phrases and event expressions from unannotated texts. However, to ensure high quality of learned phrases, this method is constrained to only learn phrases that match certain syntactic structures. In this paper, we propose a bilingual structure projection algorithm that explores linguistic divergences between two languages (Chinese and English) and mines new phrases with new syntactic structures, which have been ignored in the previous work. Experiments show that our approach can successfully find novel event phrases and structures, e.g., phrases headed by nouns. Furthermore, the newly mined phrases are capable of recognizing additional event descriptions and increasing the recall of event recognition.",Learning Event Expressions via Bilingual Structure Projection,"Identifying events of a specific type is a challenging task as events in texts are described in numerous and diverse ways. Aiming to resolve high complexities of event descriptions, previous work (Huang and Riloff, 2013) proposes multi-faceted event recognition and a bootstrapping method to automatically acquire both event facet phrases and event expressions from unannotated texts. However, to ensure high quality of learned phrases, this method is constrained to only learn phrases that match certain syntactic structures. In this paper, we propose a bilingual structure projection algorithm that explores linguistic divergences between two languages (Chinese and English) and mines new phrases with new syntactic structures, which have been ignored in the previous work. Experiments show that our approach can successfully find novel event phrases and structures, e.g., phrases headed by nouns. Furthermore, the newly mined phrases are capable of recognizing additional event descriptions and increasing the recall of event recognition.",Learning Event Expressions via Bilingual Structure Projection,"Identifying events of a specific type is a challenging task as events in texts are described in numerous and diverse ways. Aiming to resolve high complexities of event descriptions, previous work (Huang and Riloff, 2013) proposes multi-faceted event recognition and a bootstrapping method to automatically acquire both event facet phrases and event expressions from unannotated texts. However, to ensure high quality of learned phrases, this method is constrained to only learn phrases that match certain syntactic structures. In this paper, we propose a bilingual structure projection algorithm that explores linguistic divergences between two languages (Chinese and English) and mines new phrases with new syntactic structures, which have been ignored in the previous work. Experiments show that our approach can successfully find novel event phrases and structures, e.g., phrases headed by nouns. Furthermore, the newly mined phrases are capable of recognizing additional event descriptions and increasing the recall of event recognition.","The authors were supported by National Natural Science Foundation of China (Grant Nos. 
61403269, 61432013 and 61525205) and Natural Science Foundation of Jiangsu Province (Grant No. BK20140355). This research was also partially supported by Ruihong Huang's startup funds in Texas A&M University. We also thank the anonymous reviewers for their insightful comments.","Learning Event Expressions via Bilingual Structure Projection. Identifying events of a specific type is a challenging task as events in texts are described in numerous and diverse ways. Aiming to resolve high complexities of event descriptions, previous work (Huang and Riloff, 2013) proposes multi-faceted event recognition and a bootstrapping method to automatically acquire both event facet phrases and event expressions from unannotated texts. However, to ensure high quality of learned phrases, this method is constrained to only learn phrases that match certain syntactic structures. In this paper, we propose a bilingual structure projection algorithm that explores linguistic divergences between two languages (Chinese and English) and mines new phrases with new syntactic structures, which have been ignored in the previous work. Experiments show that our approach can successfully find novel event phrases and structures, e.g., phrases headed by nouns. Furthermore, the newly mined phrases are capable of recognizing additional event descriptions and increasing the recall of event recognition.",2016
liu-chan-2012-role,https://aclanthology.org/Y12-1069,1,,,,education,,,"The Role of Qualia Structure in Mandarin Children Acquiring Noun-modifying Constructions. This paper investigates the types and the developmental trajectory of noun modifying constructions (NMCs), in the form of [Modifier + de + (Noun)], attested in Mandarin-speaking children's speech from a semantic perspective based on the generative lexicon framework (Pustejovsky, 1995). Based on 1034 NMCs (including those traditionally defined as relative clauses (RCs)) produced by 135 children aged 3 to 6 from a cross-sectional naturalistic speech corpus ""Zhou2"" in CHILDES, we analyzed the relation between the modifier and the head noun according to the 4 major roles of qualia structure: formal, constitutive, telic and agentive. Results suggest that (i) NMCs expressing the formal facet of the head noun's meaning are most frequently produced and acquired earliest, followed by those expressing the constitutive quale, and then those expressing the telic or the agentive quale; (ii) RC-type NMCs emerge either alongside the other non-RC type NMCs at the same time, or emerge later than the other non-RC type NMCs for the constitutive quale; and (iii) the majority of NMCs expressing the agentive and telic quales are those that fall within the traditional domain of RCs (called RC-type NMCs here), while the majority of NMCs expressing the formal and the constitutive quales are non-RC type NMCs. These findings are consistent with: (i) the semantic nature and complexity of the four qualia relations: formal and constitutive aspects of an object (called natural type concepts in Pustejovsky 2001, 2006) are more basic attributes, while telic and agentive (called artificial type concepts in Pustejovsky 2001, 2006) are derived and often eventive (hence conceptually more complex); and (ii) the properties of their adult input: NMCs expressing the formal quale are also most frequently encountered in the adult input; followed by the constitutive quale, and then the agentive and telic quales. The findings are also consistent with the idea that in Asian languages such as Japanese, Korean and Chinese, RCs develop from attributive constructions specifying a semantic feature of the head noun in acquisition (Diessel 2007, c.f. also Comrie 1996, 1998, 2002). This study is probably the first of using the generative lexicon framework in the field of child language acquisition.",The Role of Qualia Structure in {M}andarin Children Acquiring Noun-modifying Constructions,"This paper investigates the types and the developmental trajectory of noun modifying constructions (NMCs), in the form of [Modifier + de + (Noun)], attested in Mandarin-speaking children's speech from a semantic perspective based on the generative lexicon framework (Pustejovsky, 1995). Based on 1034 NMCs (including those traditionally defined as relative clauses (RCs)) produced by 135 children aged 3 to 6 from a cross-sectional naturalistic speech corpus ""Zhou2"" in CHILDES, we analyzed the relation between the modifier and the head noun according to the 4 major roles of qualia structure: formal, constitutive, telic and agentive. 
Results suggest that (i) NMCs expressing the formal facet of the head noun's meaning are most frequently produced and acquired earliest, followed by those expressing the constitutive quale, and then those expressing the telic or the agentive quale; (ii) RC-type NMCs emerge either alongside the other non-RC type NMCs at the same time, or emerge later than the other non-RC type NMCs for the constitutive quale; and (iii) the majority of NMCs expressing the agentive and telic quales are those that fall within the traditional domain of RCs (called RC-type NMCs here), while the majority of NMCs expressing the formal and the constitutive quales are non-RC type NMCs. These findings are consistent with: (i) the semantic nature and complexity of the four qualia relations: formal and constitutive aspects of an object (called natural type concepts in Pustejovsky 2001, 2006) are more basic attributes, while telic and agentive (called artificial type concepts in Pustejovsky 2001, 2006) are derived and often eventive (hence conceptually more complex); and (ii) the properties of their adult input: NMCs expressing the formal quale are also most frequently encountered in the adult input; followed by the constitutive quale, and then the agentive and telic quales. The findings are also consistent with the idea that in Asian languages such as Japanese, Korean and Chinese, RCs develop from attributive constructions specifying a semantic feature of the head noun in acquisition (Diessel 2007, c.f. also Comrie 1996, 1998, 2002). This study is probably the first of using the generative lexicon framework in the field of child language acquisition.",The Role of Qualia Structure in Mandarin Children Acquiring Noun-modifying Constructions,"This paper investigates the types and the developmental trajectory of noun modifying constructions (NMCs), in the form of [Modifier + de + (Noun)], attested in Mandarin-speaking children's speech from a semantic perspective based on the generative lexicon framework (Pustejovsky, 1995). Based on 1034 NMCs (including those traditionally defined as relative clauses (RCs)) produced by 135 children aged 3 to 6 from a cross-sectional naturalistic speech corpus ""Zhou2"" in CHILDES, we analyzed the relation between the modifier and the head noun according to the 4 major roles of qualia structure: formal, constitutive, telic and agentive. Results suggest that (i) NMCs expressing the formal facet of the head noun's meaning are most frequently produced and acquired earliest, followed by those expressing the constitutive quale, and then those expressing the telic or the agentive quale; (ii) RC-type NMCs emerge either alongside the other non-RC type NMCs at the same time, or emerge later than the other non-RC type NMCs for the constitutive quale; and (iii) the majority of NMCs expressing the agentive and telic quales are those that fall within the traditional domain of RCs (called RC-type NMCs here), while the majority of NMCs expressing the formal and the constitutive quales are non-RC type NMCs. 
These findings are consistent with: (i) the semantic nature and complexity of the four qualia relations: formal and constitutive aspects of an object (called natural type concepts in Pustejovsky 2001, 2006) are more basic attributes, while telic and agentive (called artificial type concepts in Pustejovsky 2001, 2006) are derived and often eventive (hence conceptually more complex); and (ii) the properties of their adult input: NMCs expressing the formal quale are also most frequently encountered in the adult input; followed by the constitutive quale, and then the agentive and telic quales. The findings are also consistent with the idea that in Asian languages such as Japanese, Korean and Chinese, RCs develop from attributive constructions specifying a semantic feature of the head noun in acquisition (Diessel 2007, c.f. also Comrie 1996, 1998, 2002). This study is probably the first of using the generative lexicon framework in the field of child language acquisition.",,"The Role of Qualia Structure in Mandarin Children Acquiring Noun-modifying Constructions. This paper investigates the types and the developmental trajectory of noun modifying constructions (NMCs), in the form of [Modifier + de + (Noun)], attested in Mandarin-speaking children's speech from a semantic perspective based on the generative lexicon framework (Pustejovsky, 1995). Based on 1034 NMCs (including those traditionally defined as relative clauses (RCs)) produced by 135 children aged 3 to 6 from a cross-sectional naturalistic speech corpus ""Zhou2"" in CHILDES, we analyzed the relation between the modifier and the head noun according to the 4 major roles of qualia structure: formal, constitutive, telic and agentive. Results suggest that (i) NMCs expressing the formal facet of the head noun's meaning are most frequently produced and acquired earliest, followed by those expressing the constitutive quale, and then those expressing the telic or the agentive quale; (ii) RC-type NMCs emerge either alongside the other non-RC type NMCs at the same time, or emerge later than the other non-RC type NMCs for the constitutive quale; and (iii) the majority of NMCs expressing the agentive and telic quales are those that fall within the traditional domain of RCs (called RC-type NMCs here), while the majority of NMCs expressing the formal and the constitutive quales are non-RC type NMCs. These findings are consistent with: (i) the semantic nature and complexity of the four qualia relations: formal and constitutive aspects of an object (called natural type concepts in Pustejovsky 2001, 2006) are more basic attributes, while telic and agentive (called artificial type concepts in Pustejovsky 2001, 2006) are derived and often eventive (hence conceptually more complex); and (ii) the properties of their adult input: NMCs expressing the formal quale are also most frequently encountered in the adult input; followed by the constitutive quale, and then the agentive and telic quales. The findings are also consistent with the idea that in Asian languages such as Japanese, Korean and Chinese, RCs develop from attributive constructions specifying a semantic feature of the head noun in acquisition (Diessel 2007, c.f. also Comrie 1996, 1998, 2002). This study is probably the first of using the generative lexicon framework in the field of child language acquisition.",2012
kim-etal-2018-modeling,https://aclanthology.org/C18-1235,0,,,,,,,"Modeling with Recurrent Neural Networks for Open Vocabulary Slots. Dealing with 'open-vocabulary' slots has been among the challenges in the natural language area. While recent studies on attention-based recurrent neural network (RNN) models have performed well in completing several language related tasks such as spoken language understanding and dialogue systems, there has been a lack of attempts to address filling slots that take on values from a virtually unlimited set. In this paper, we propose a new RNN model that can capture the vital concept: Understanding the role of a word may vary according to how long a reader focuses on a particular part of a sentence. The proposed model utilizes a longterm aware attention structure, positional encoding primarily considering the relative distance between words, and multi-task learning of a character-based language model and an intent detection model. We show that the model outperforms the existing RNN models with respect to discovering 'open-vocabulary' slots without any external information, such as a named entity database or knowledge base. In particular, we confirm that it performs better with a greater number of slots in a dataset, including unknown words, by evaluating the models on a dataset of several domains. In addition, the proposed model also demonstrates superior performance with regard to intent detection.",Modeling with Recurrent Neural Networks for Open Vocabulary Slots,"Dealing with 'open-vocabulary' slots has been among the challenges in the natural language area. While recent studies on attention-based recurrent neural network (RNN) models have performed well in completing several language related tasks such as spoken language understanding and dialogue systems, there has been a lack of attempts to address filling slots that take on values from a virtually unlimited set. In this paper, we propose a new RNN model that can capture the vital concept: Understanding the role of a word may vary according to how long a reader focuses on a particular part of a sentence. The proposed model utilizes a longterm aware attention structure, positional encoding primarily considering the relative distance between words, and multi-task learning of a character-based language model and an intent detection model. We show that the model outperforms the existing RNN models with respect to discovering 'open-vocabulary' slots without any external information, such as a named entity database or knowledge base. In particular, we confirm that it performs better with a greater number of slots in a dataset, including unknown words, by evaluating the models on a dataset of several domains. In addition, the proposed model also demonstrates superior performance with regard to intent detection.",Modeling with Recurrent Neural Networks for Open Vocabulary Slots,"Dealing with 'open-vocabulary' slots has been among the challenges in the natural language area. While recent studies on attention-based recurrent neural network (RNN) models have performed well in completing several language related tasks such as spoken language understanding and dialogue systems, there has been a lack of attempts to address filling slots that take on values from a virtually unlimited set. In this paper, we propose a new RNN model that can capture the vital concept: Understanding the role of a word may vary according to how long a reader focuses on a particular part of a sentence. 
The proposed model utilizes a longterm aware attention structure, positional encoding primarily considering the relative distance between words, and multi-task learning of a character-based language model and an intent detection model. We show that the model outperforms the existing RNN models with respect to discovering 'open-vocabulary' slots without any external information, such as a named entity database or knowledge base. In particular, we confirm that it performs better with a greater number of slots in a dataset, including unknown words, by evaluating the models on a dataset of several domains. In addition, the proposed model also demonstrates superior performance with regard to intent detection.",,"Modeling with Recurrent Neural Networks for Open Vocabulary Slots. Dealing with 'open-vocabulary' slots has been among the challenges in the natural language area. While recent studies on attention-based recurrent neural network (RNN) models have performed well in completing several language related tasks such as spoken language understanding and dialogue systems, there has been a lack of attempts to address filling slots that take on values from a virtually unlimited set. In this paper, we propose a new RNN model that can capture the vital concept: Understanding the role of a word may vary according to how long a reader focuses on a particular part of a sentence. The proposed model utilizes a longterm aware attention structure, positional encoding primarily considering the relative distance between words, and multi-task learning of a character-based language model and an intent detection model. We show that the model outperforms the existing RNN models with respect to discovering 'open-vocabulary' slots without any external information, such as a named entity database or knowledge base. In particular, we confirm that it performs better with a greater number of slots in a dataset, including unknown words, by evaluating the models on a dataset of several domains. In addition, the proposed model also demonstrates superior performance with regard to intent detection.",2018
hogenhout-matsumoto-1997-preliminary,https://aclanthology.org/W97-1003,0,,,,,,,A Preliminary Study of Word Clustering Based on Syntactic Behavior. We show how a treebank can be used to cluster words on the basis of their syntactic behavior. The resulting clusters represent distinct types of behavior with much more precision than parts of speech. As an example we show how prepositions can be automatically subdivided by their syntactic behavior and discuss the appropriateness of such a subdivision. Applications of this work are also discussed.,A Preliminary Study of Word Clustering Based on Syntactic Behavior,We show how a treebank can be used to cluster words on the basis of their syntactic behavior. The resulting clusters represent distinct types of behavior with much more precision than parts of speech. As an example we show how prepositions can be automatically subdivided by their syntactic behavior and discuss the appropriateness of such a subdivision. Applications of this work are also discussed.,A Preliminary Study of Word Clustering Based on Syntactic Behavior,We show how a treebank can be used to cluster words on the basis of their syntactic behavior. The resulting clusters represent distinct types of behavior with much more precision than parts of speech. As an example we show how prepositions can be automatically subdivided by their syntactic behavior and discuss the appropriateness of such a subdivision. Applications of this work are also discussed.,We would like to express our appreciation to the anonymous reviewers who have provided many valuable comments and criticisms.,A Preliminary Study of Word Clustering Based on Syntactic Behavior. We show how a treebank can be used to cluster words on the basis of their syntactic behavior. The resulting clusters represent distinct types of behavior with much more precision than parts of speech. As an example we show how prepositions can be automatically subdivided by their syntactic behavior and discuss the appropriateness of such a subdivision. Applications of this work are also discussed.,1997
shilen-wilson-2022-learning,https://aclanthology.org/2022.scil-1.26,0,,,,,,,"Learning Input Strictly Local Functions: Comparing Approaches with Catalan Adjectives. Input strictly local (ISL) functions are a class of subregular transductions that have well-understood mathematical and computational properties and that are sufficiently expressive to account for a wide variety of attested morphological and phonological patterns (e.g., Chandlee, 2014; Chandlee, 2017; . In this study, we compared several approaches to learning ISL functions: the ISL function learning algorithm (ISLFLA; Chandlee, 2014; and the classic OSTIA learner to which it is related (Oncina et al., 1993) ; the Minimal Generalization Learner (MGL; Hayes, 2002, 2003) ; and a novel deep neural network model presented here (DNN-ISL).",Learning Input Strictly Local Functions: Comparing Approaches with {C}atalan Adjectives,"Input strictly local (ISL) functions are a class of subregular transductions that have well-understood mathematical and computational properties and that are sufficiently expressive to account for a wide variety of attested morphological and phonological patterns (e.g., Chandlee, 2014; Chandlee, 2017; . In this study, we compared several approaches to learning ISL functions: the ISL function learning algorithm (ISLFLA; Chandlee, 2014; and the classic OSTIA learner to which it is related (Oncina et al., 1993) ; the Minimal Generalization Learner (MGL; Hayes, 2002, 2003) ; and a novel deep neural network model presented here (DNN-ISL).",Learning Input Strictly Local Functions: Comparing Approaches with Catalan Adjectives,"Input strictly local (ISL) functions are a class of subregular transductions that have well-understood mathematical and computational properties and that are sufficiently expressive to account for a wide variety of attested morphological and phonological patterns (e.g., Chandlee, 2014; Chandlee, 2017; . In this study, we compared several approaches to learning ISL functions: the ISL function learning algorithm (ISLFLA; Chandlee, 2014; and the classic OSTIA learner to which it is related (Oncina et al., 1993) ; the Minimal Generalization Learner (MGL; Hayes, 2002, 2003) ; and a novel deep neural network model presented here (DNN-ISL).","Thanks to Coleman Haley and Marina Bedny for helpful discussion of this research, which was supported by NSF grant BCS-1941593 to CW.","Learning Input Strictly Local Functions: Comparing Approaches with Catalan Adjectives. Input strictly local (ISL) functions are a class of subregular transductions that have well-understood mathematical and computational properties and that are sufficiently expressive to account for a wide variety of attested morphological and phonological patterns (e.g., Chandlee, 2014; Chandlee, 2017; . In this study, we compared several approaches to learning ISL functions: the ISL function learning algorithm (ISLFLA; Chandlee, 2014; and the classic OSTIA learner to which it is related (Oncina et al., 1993) ; the Minimal Generalization Learner (MGL; Hayes, 2002, 2003) ; and a novel deep neural network model presented here (DNN-ISL).",2022
li-etal-2019-building,https://aclanthology.org/2019.lilt-18.2,0,,,,,,,"Building a Chinese AMR Bank with Concept and Relation Alignments. Meaning Representation (AMR) is a meaning representation framework in which the meaning of a full sentence is represented as a single-rooted, acyclic, directed graph. In this article, we describe an ongoing project to build a Chinese AMR (CAMR) corpus, which currently includes 10,149 sentences from the newsgroup and weblog portion of the Chinese TreeBank (CTB). We describe the annotation specifications for the CAMR corpus, which follow the annotation principles of English AMR but make adaptations where needed to accommodate the linguistic facts of Chinese. The CAMR specifications also include a systematic treatment of sentence-internal discourse relations. One significant change we have made to the AMR annotation methodology is the inclusion of the alignment between word tokens in the sentence and the concepts/relations in the CAMR annotation to make it easier for automatic parsers to model the correspondence between a sentence and its meaning representation. We develop an annotation tool for CAMR, and the inter-agreement as measured by the Smatch score between the two annotators is 0.83, indicating reliable annotation. We also present some quantitative analysis of the CAMR corpus. 46.71% of the AMRs of the sentences are non-tree graphs. Moreover, the AMR of 88.95% of the sentences has concepts inferred from the context of the sentence but do not correspond to a specific word 2 / LiLT volume 18, issue (1) June 2019 or phrase in a sentence, and the average number of such inferred concepts per sentence is 2.88. These statistics will have to be taken into account when developing automatic Chinese AMR parsers.",Building a {C}hinese {AMR} Bank with Concept and Relation Alignments,"Meaning Representation (AMR) is a meaning representation framework in which the meaning of a full sentence is represented as a single-rooted, acyclic, directed graph. In this article, we describe an ongoing project to build a Chinese AMR (CAMR) corpus, which currently includes 10,149 sentences from the newsgroup and weblog portion of the Chinese TreeBank (CTB). We describe the annotation specifications for the CAMR corpus, which follow the annotation principles of English AMR but make adaptations where needed to accommodate the linguistic facts of Chinese. The CAMR specifications also include a systematic treatment of sentence-internal discourse relations. One significant change we have made to the AMR annotation methodology is the inclusion of the alignment between word tokens in the sentence and the concepts/relations in the CAMR annotation to make it easier for automatic parsers to model the correspondence between a sentence and its meaning representation. We develop an annotation tool for CAMR, and the inter-agreement as measured by the Smatch score between the two annotators is 0.83, indicating reliable annotation. We also present some quantitative analysis of the CAMR corpus. 46.71% of the AMRs of the sentences are non-tree graphs. Moreover, the AMR of 88.95% of the sentences has concepts inferred from the context of the sentence but do not correspond to a specific word 2 / LiLT volume 18, issue (1) June 2019 or phrase in a sentence, and the average number of such inferred concepts per sentence is 2.88. 
These statistics will have to be taken into account when developing automatic Chinese AMR parsers.",Building a Chinese AMR Bank with Concept and Relation Alignments,"Meaning Representation (AMR) is a meaning representation framework in which the meaning of a full sentence is represented as a single-rooted, acyclic, directed graph. In this article, we describe an ongoing project to build a Chinese AMR (CAMR) corpus, which currently includes 10,149 sentences from the newsgroup and weblog portion of the Chinese TreeBank (CTB). We describe the annotation specifications for the CAMR corpus, which follow the annotation principles of English AMR but make adaptations where needed to accommodate the linguistic facts of Chinese. The CAMR specifications also include a systematic treatment of sentence-internal discourse relations. One significant change we have made to the AMR annotation methodology is the inclusion of the alignment between word tokens in the sentence and the concepts/relations in the CAMR annotation to make it easier for automatic parsers to model the correspondence between a sentence and its meaning representation. We develop an annotation tool for CAMR, and the inter-agreement as measured by the Smatch score between the two annotators is 0.83, indicating reliable annotation. We also present some quantitative analysis of the CAMR corpus. 46.71% of the AMRs of the sentences are non-tree graphs. Moreover, the AMR of 88.95% of the sentences has concepts inferred from the context of the sentence but do not correspond to a specific word 2 / LiLT volume 18, issue (1) June 2019 or phrase in a sentence, and the average number of such inferred concepts per sentence is 2.88. These statistics will have to be taken into account when developing automatic Chinese AMR parsers.",This work is the staged achievement of the projects supported by National Social Science Foundation of China (18BYY127) and National Science Foundation of China (61772278).,"Building a Chinese AMR Bank with Concept and Relation Alignments. Meaning Representation (AMR) is a meaning representation framework in which the meaning of a full sentence is represented as a single-rooted, acyclic, directed graph. In this article, we describe an ongoing project to build a Chinese AMR (CAMR) corpus, which currently includes 10,149 sentences from the newsgroup and weblog portion of the Chinese TreeBank (CTB). We describe the annotation specifications for the CAMR corpus, which follow the annotation principles of English AMR but make adaptations where needed to accommodate the linguistic facts of Chinese. The CAMR specifications also include a systematic treatment of sentence-internal discourse relations. One significant change we have made to the AMR annotation methodology is the inclusion of the alignment between word tokens in the sentence and the concepts/relations in the CAMR annotation to make it easier for automatic parsers to model the correspondence between a sentence and its meaning representation. We develop an annotation tool for CAMR, and the inter-agreement as measured by the Smatch score between the two annotators is 0.83, indicating reliable annotation. We also present some quantitative analysis of the CAMR corpus. 46.71% of the AMRs of the sentences are non-tree graphs. 
Moreover, the AMR of 88.95% of the sentences has concepts that are inferred from the context of the sentence but do not correspond to a specific word or phrase in a sentence, and the average number of such inferred concepts per sentence is 2.88. These statistics will have to be taken into account when developing automatic Chinese AMR parsers.",2019
proux-etal-2009-natural,https://aclanthology.org/W09-4506,1,,,,health,,,"Natural Language Processing to Detect Risk Patterns Related to Hospital Acquired Infections. Hospital Acquired Infections (HAI) has a major impact on public health and on related healthcare cost. HAI experts are fighting against this issue but they are struggling to access data. Information systems in hospitals are complex, highly heterogeneous, and generally not convenient to perform a real time surveillance. Developing a tool able to parse patient records in order to automatically detect signs of a possible issue would be a tremendous help for these experts and could allow them to react more rapidly and as a consequence to reduce the impact of such infections. Recent advances in Computational Intelligence Techniques such as Information Extraction, Risk Patterns Detection in documents and Decision Support Systems now allow to develop such systems.",Natural Language Processing to Detect Risk Patterns Related to Hospital Acquired Infections,"Hospital Acquired Infections (HAI) has a major impact on public health and on related healthcare cost. HAI experts are fighting against this issue but they are struggling to access data. Information systems in hospitals are complex, highly heterogeneous, and generally not convenient to perform a real time surveillance. Developing a tool able to parse patient records in order to automatically detect signs of a possible issue would be a tremendous help for these experts and could allow them to react more rapidly and as a consequence to reduce the impact of such infections. Recent advances in Computational Intelligence Techniques such as Information Extraction, Risk Patterns Detection in documents and Decision Support Systems now allow to develop such systems.",Natural Language Processing to Detect Risk Patterns Related to Hospital Acquired Infections,"Hospital Acquired Infections (HAI) has a major impact on public health and on related healthcare cost. HAI experts are fighting against this issue but they are struggling to access data. Information systems in hospitals are complex, highly heterogeneous, and generally not convenient to perform a real time surveillance. Developing a tool able to parse patient records in order to automatically detect signs of a possible issue would be a tremendous help for these experts and could allow them to react more rapidly and as a consequence to reduce the impact of such infections. Recent advances in Computational Intelligence Techniques such as Information Extraction, Risk Patterns Detection in documents and Decision Support Systems now allow to develop such systems.",,"Natural Language Processing to Detect Risk Patterns Related to Hospital Acquired Infections. Hospital Acquired Infections (HAI) has a major impact on public health and on related healthcare cost. HAI experts are fighting against this issue but they are struggling to access data. Information systems in hospitals are complex, highly heterogeneous, and generally not convenient to perform a real time surveillance. Developing a tool able to parse patient records in order to automatically detect signs of a possible issue would be a tremendous help for these experts and could allow them to react more rapidly and as a consequence to reduce the impact of such infections. Recent advances in Computational Intelligence Techniques such as Information Extraction, Risk Patterns Detection in documents and Decision Support Systems now allow to develop such systems.",2009
kron-etal-2007-development,https://aclanthology.org/W07-1807,0,,,,,,,"A Development Environment for Building Grammar-Based Speech-Enabled Applications. We present a development environment for Regulus, a toolkit for building unification grammar-based speech-enabled systems, focussing on new functionality added over the last year. In particular, we will show an initial version of a GUI-based top-level for the development environment, a tool that supports graphical debugging of unification grammars by cutting and pasting of derivation trees, and various functionalities that support systematic development of speech translation and spoken dialogue applications built using Regulus.",A Development Environment for Building Grammar-Based Speech-Enabled Applications,"We present a development environment for Regulus, a toolkit for building unification grammar-based speech-enabled systems, focussing on new functionality added over the last year. In particular, we will show an initial version of a GUI-based top-level for the development environment, a tool that supports graphical debugging of unification grammars by cutting and pasting of derivation trees, and various functionalities that support systematic development of speech translation and spoken dialogue applications built using Regulus.",A Development Environment for Building Grammar-Based Speech-Enabled Applications,"We present a development environment for Regulus, a toolkit for building unification grammar-based speech-enabled systems, focussing on new functionality added over the last year. In particular, we will show an initial version of a GUI-based top-level for the development environment, a tool that supports graphical debugging of unification grammars by cutting and pasting of derivation trees, and various functionalities that support systematic development of speech translation and spoken dialogue applications built using Regulus.",,"A Development Environment for Building Grammar-Based Speech-Enabled Applications. We present a development environment for Regulus, a toolkit for building unification grammar-based speech-enabled systems, focussing on new functionality added over the last year. In particular, we will show an initial version of a GUI-based top-level for the development environment, a tool that supports graphical debugging of unification grammars by cutting and pasting of derivation trees, and various functionalities that support systematic development of speech translation and spoken dialogue applications built using Regulus.",2007
jones-thompson-2003-identifying,https://aclanthology.org/W03-0418,0,,,,,,,"Identifying Events using Similarity and Context. As part of our work on automatically building knowledge structures from text, we apply machine learning to determine which clauses from multiple narratives describing similar situations should be grouped together as descriptions of the same type of occurrence. Our approach to the problem uses textual similarity and context from other clauses. Besides training data, our system uses only a partial parser as outside knowledge. We present results evaluating the cohesiveness of the aggregated clauses and a brief overview of how this work fits into our overall system.",Identifying Events using Similarity and Context,"As part of our work on automatically building knowledge structures from text, we apply machine learning to determine which clauses from multiple narratives describing similar situations should be grouped together as descriptions of the same type of occurrence. Our approach to the problem uses textual similarity and context from other clauses. Besides training data, our system uses only a partial parser as outside knowledge. We present results evaluating the cohesiveness of the aggregated clauses and a brief overview of how this work fits into our overall system.",Identifying Events using Similarity and Context,"As part of our work on automatically building knowledge structures from text, we apply machine learning to determine which clauses from multiple narratives describing similar situations should be grouped together as descriptions of the same type of occurrence. Our approach to the problem uses textual similarity and context from other clauses. Besides training data, our system uses only a partial parser as outside knowledge. We present results evaluating the cohesiveness of the aggregated clauses and a brief overview of how this work fits into our overall system.",We would like to thank Robert Cornell and Donald Jones for evaluating our system.,"Identifying Events using Similarity and Context. As part of our work on automatically building knowledge structures from text, we apply machine learning to determine which clauses from multiple narratives describing similar situations should be grouped together as descriptions of the same type of occurrence. Our approach to the problem uses textual similarity and context from other clauses. Besides training data, our system uses only a partial parser as outside knowledge. We present results evaluating the cohesiveness of the aggregated clauses and a brief overview of how this work fits into our overall system.",2003
kwon-etal-2013-bilingual,https://aclanthology.org/W13-2502,0,,,,,,,"Bilingual Lexicon Extraction via Pivot Language and Word Alignment Tool. This paper presents a simple and effective method for automatic bilingual lexicon extraction from less-known language pairs. To do this, we bring in a bridge language named the pivot language and adopt information retrieval techniques combined with natural language processing techniques. Moreover, we use a freely available word aligner: Anymalign (Lardilleux et al., 2011) for constructing context vectors. Unlike the previous works, we obtain context vectors via a pivot language. Therefore, we do not require to translate context vectors by using a seed dictionary and improve the accuracy of low frequency word alignments that is weakness of statistical model by using Anymalign. In this paper, experiments have been conducted on two different language pairs that are bi-directional Korean-Spanish and Korean-French, respectively. The experimental results have demonstrated that our method for high-frequency words shows at least 76.3 and up to 87.2% and for the lowfrequency words at least 43.3% and up to 48.9% within the top 20 ranking candidates, respectively.",Bilingual Lexicon Extraction via Pivot Language and Word Alignment Tool,"This paper presents a simple and effective method for automatic bilingual lexicon extraction from less-known language pairs. To do this, we bring in a bridge language named the pivot language and adopt information retrieval techniques combined with natural language processing techniques. Moreover, we use a freely available word aligner: Anymalign (Lardilleux et al., 2011) for constructing context vectors. Unlike the previous works, we obtain context vectors via a pivot language. Therefore, we do not require to translate context vectors by using a seed dictionary and improve the accuracy of low frequency word alignments that is weakness of statistical model by using Anymalign. In this paper, experiments have been conducted on two different language pairs that are bi-directional Korean-Spanish and Korean-French, respectively. The experimental results have demonstrated that our method for high-frequency words shows at least 76.3 and up to 87.2% and for the lowfrequency words at least 43.3% and up to 48.9% within the top 20 ranking candidates, respectively.",Bilingual Lexicon Extraction via Pivot Language and Word Alignment Tool,"This paper presents a simple and effective method for automatic bilingual lexicon extraction from less-known language pairs. To do this, we bring in a bridge language named the pivot language and adopt information retrieval techniques combined with natural language processing techniques. Moreover, we use a freely available word aligner: Anymalign (Lardilleux et al., 2011) for constructing context vectors. Unlike the previous works, we obtain context vectors via a pivot language. Therefore, we do not require to translate context vectors by using a seed dictionary and improve the accuracy of low frequency word alignments that is weakness of statistical model by using Anymalign. In this paper, experiments have been conducted on two different language pairs that are bi-directional Korean-Spanish and Korean-French, respectively. 
The experimental results have demonstrated that our method for high-frequency words shows at least 76.3% and up to 87.2% and for the low-frequency words at least 43.3% and up to 48.9% within the top 20 ranking candidates, respectively.",This work was supported by the Korea Ministry of Knowledge Economy (MKE) under Grant No. 10041807,"Bilingual Lexicon Extraction via Pivot Language and Word Alignment Tool. This paper presents a simple and effective method for automatic bilingual lexicon extraction from less-known language pairs. To do this, we bring in a bridge language named the pivot language and adopt information retrieval techniques combined with natural language processing techniques. Moreover, we use a freely available word aligner: Anymalign (Lardilleux et al., 2011) for constructing context vectors. Unlike previous works, we obtain context vectors via a pivot language. Therefore, we do not need to translate context vectors by using a seed dictionary, and we improve the accuracy of low-frequency word alignments, which is a weakness of the statistical model, by using Anymalign. In this paper, experiments have been conducted on two different language pairs that are bi-directional Korean-Spanish and Korean-French, respectively. The experimental results have demonstrated that our method for high-frequency words shows at least 76.3% and up to 87.2% and for the low-frequency words at least 43.3% and up to 48.9% within the top 20 ranking candidates, respectively.",2013
yuste-rodrigo-braun-chen-2001-comparative,https://aclanthology.org/2001.mtsummit-eval.12,0,,,,,,,"Comparative evaluation of the linguistic output of MT systems for translation and information purposes. This paper describes a Machine Translation (MT) evaluation experiment where emphasis is placed on the quality of output and the extent to which it is geared to different users' needs. Adopting a very specific scenario, that of a multilingual international organisation, a clear distinction is made between two user classes: translators and administrators. Whereas the first group requires MT output to be accurate and of good post-editable quality in order to produce a polished translation, the second group primarily needs informative data for carrying out other, non-linguistic tasks, and therefore uses MT more as an information-gathering and gisting tool. During the experiment, MT output of three different systems is compared in order to establish which MT system best serves the organisation's multilingual communication and information needs. This is a comparative usability-and adequacy-oriented evaluation in that it attempts to help such organisations decide which system produces the most adequate output for certain well-defined user types. To perform the experiment, criteria relating to both users and MT output are examined with reference to the ISLE taxonomy. The experiment comprises two evaluation phases, the first at sentence level, the second at overall text level. In both phases, evaluators make use of a 1-5 rating scale. Weighted results provide some insight into the systems' usability and adequacy for the purposes described above. As a conclusion, it i s suggested that further research should be devoted to the most critical aspect of this exercise, namely defining meaningful and useful criteria for evaluating the post-editability and informativeness of MT output.",Comparative evaluation of the linguistic output of {MT} systems for translation and information purposes,"This paper describes a Machine Translation (MT) evaluation experiment where emphasis is placed on the quality of output and the extent to which it is geared to different users' needs. Adopting a very specific scenario, that of a multilingual international organisation, a clear distinction is made between two user classes: translators and administrators. Whereas the first group requires MT output to be accurate and of good post-editable quality in order to produce a polished translation, the second group primarily needs informative data for carrying out other, non-linguistic tasks, and therefore uses MT more as an information-gathering and gisting tool. During the experiment, MT output of three different systems is compared in order to establish which MT system best serves the organisation's multilingual communication and information needs. This is a comparative usability-and adequacy-oriented evaluation in that it attempts to help such organisations decide which system produces the most adequate output for certain well-defined user types. To perform the experiment, criteria relating to both users and MT output are examined with reference to the ISLE taxonomy. The experiment comprises two evaluation phases, the first at sentence level, the second at overall text level. In both phases, evaluators make use of a 1-5 rating scale. Weighted results provide some insight into the systems' usability and adequacy for the purposes described above. 
As a conclusion, it i s suggested that further research should be devoted to the most critical aspect of this exercise, namely defining meaningful and useful criteria for evaluating the post-editability and informativeness of MT output.",Comparative evaluation of the linguistic output of MT systems for translation and information purposes,"This paper describes a Machine Translation (MT) evaluation experiment where emphasis is placed on the quality of output and the extent to which it is geared to different users' needs. Adopting a very specific scenario, that of a multilingual international organisation, a clear distinction is made between two user classes: translators and administrators. Whereas the first group requires MT output to be accurate and of good post-editable quality in order to produce a polished translation, the second group primarily needs informative data for carrying out other, non-linguistic tasks, and therefore uses MT more as an information-gathering and gisting tool. During the experiment, MT output of three different systems is compared in order to establish which MT system best serves the organisation's multilingual communication and information needs. This is a comparative usability-and adequacy-oriented evaluation in that it attempts to help such organisations decide which system produces the most adequate output for certain well-defined user types. To perform the experiment, criteria relating to both users and MT output are examined with reference to the ISLE taxonomy. The experiment comprises two evaluation phases, the first at sentence level, the second at overall text level. In both phases, evaluators make use of a 1-5 rating scale. Weighted results provide some insight into the systems' usability and adequacy for the purposes described above. As a conclusion, it i s suggested that further research should be devoted to the most critical aspect of this exercise, namely defining meaningful and useful criteria for evaluating the post-editability and informativeness of MT output.",,"Comparative evaluation of the linguistic output of MT systems for translation and information purposes. This paper describes a Machine Translation (MT) evaluation experiment where emphasis is placed on the quality of output and the extent to which it is geared to different users' needs. Adopting a very specific scenario, that of a multilingual international organisation, a clear distinction is made between two user classes: translators and administrators. Whereas the first group requires MT output to be accurate and of good post-editable quality in order to produce a polished translation, the second group primarily needs informative data for carrying out other, non-linguistic tasks, and therefore uses MT more as an information-gathering and gisting tool. During the experiment, MT output of three different systems is compared in order to establish which MT system best serves the organisation's multilingual communication and information needs. This is a comparative usability-and adequacy-oriented evaluation in that it attempts to help such organisations decide which system produces the most adequate output for certain well-defined user types. To perform the experiment, criteria relating to both users and MT output are examined with reference to the ISLE taxonomy. The experiment comprises two evaluation phases, the first at sentence level, the second at overall text level. In both phases, evaluators make use of a 1-5 rating scale. 
Weighted results provide some insight into the systems' usability and adequacy for the purposes described above. As a conclusion, it is suggested that further research should be devoted to the most critical aspect of this exercise, namely defining meaningful and useful criteria for evaluating the post-editability and informativeness of MT output.",2001
mayfield-etal-1995-concept,https://aclanthology.org/1995.tmi-1.15,0,,,,,,,"Concept-Based Parsing For Speech Translation. As part of the JANUS speech-to-speech translation project[5], we have developed a translation system that successfully parses full utterances and is effective in parsing spontaneous speech, which is often syntactically ill-formed. The system is concept-based, meaning that it has no explicit notion of a sentence but rather views each input utterance as a potential sequence of concepts. Generation is performed by translating each of these concepts in whole phrases into the target language, consulting lookup tables only for low-level concepts such as numbers. Currently, we are working on an appointment scheduling task, parsing English, German, Spanish, and Korean input and producing output in those same languages and also Japanese.",Concept-Based Parsing For Speech Translation,"As part of the JANUS speech-to-speech translation project[5], we have developed a translation system that successfully parses full utterances and is effective in parsing spontaneous speech, which is often syntactically ill-formed. The system is concept-based, meaning that it has no explicit notion of a sentence but rather views each input utterance as a potential sequence of concepts. Generation is performed by translating each of these concepts in whole phrases into the target language, consulting lookup tables only for low-level concepts such as numbers. Currently, we are working on an appointment scheduling task, parsing English, German, Spanish, and Korean input and producing output in those same languages and also Japanese.",Concept-Based Parsing For Speech Translation,"As part of the JANUS speech-to-speech translation project[5], we have developed a translation system that successfully parses full utterances and is effective in parsing spontaneous speech, which is often syntactically ill-formed. The system is concept-based, meaning that it has no explicit notion of a sentence but rather views each input utterance as a potential sequence of concepts. Generation is performed by translating each of these concepts in whole phrases into the target language, consulting lookup tables only for low-level concepts such as numbers. Currently, we are working on an appointment scheduling task, parsing English, German, Spanish, and Korean input and producing output in those same languages and also Japanese.",,"Concept-Based Parsing For Speech Translation. As part of the JANUS speech-to-speech translation project[5], we have developed a translation system that successfully parses full utterances and is effective in parsing spontaneous speech, which is often syntactically ill-formed. The system is concept-based, meaning that it has no explicit notion of a sentence but rather views each input utterance as a potential sequence of concepts. Generation is performed by translating each of these concepts in whole phrases into the target language, consulting lookup tables only for low-level concepts such as numbers. Currently, we are working on an appointment scheduling task, parsing English, German, Spanish, and Korean input and producing output in those same languages and also Japanese.",1995
clark-2021-strong,https://aclanthology.org/2021.scil-1.47,0,,,,,,,"Strong Learning of Probabilistic Tree Adjoining Grammars. In this abstract we outline some theoretical work on the probabilistic learning of a representative mildly context-sensitive grammar formalism from positive examples only. In a recent paper, Clark and Fijalkow (2020) (CF from now on) present a consistent unsupervised learning algorithm for probabilistic context-free grammars (PCFGs) satisfying certain structural conditions: it converges to the correct grammar and parameter values, taking as input only a sample of strings generated by the PCFG. Here we extend this to the problem of learning tree grammars from derived trees, and show that under analogous conditions, we can learn a probabilistic tree grammar, of a type that is equivalent to Tree Adjoining Grammars (TAGs) (Vijay-Shankar and Joshi, 1985) . In this learning model, we have a probabilistic tree grammar which generates a probability distribution over trees; given a sample of these trees, the learner must converge to a grammar that has the same structure as the original grammar and the same parameters.",Strong Learning of Probabilistic {T}ree {A}djoining {G}rammars,"In this abstract we outline some theoretical work on the probabilistic learning of a representative mildly context-sensitive grammar formalism from positive examples only. In a recent paper, Clark and Fijalkow (2020) (CF from now on) present a consistent unsupervised learning algorithm for probabilistic context-free grammars (PCFGs) satisfying certain structural conditions: it converges to the correct grammar and parameter values, taking as input only a sample of strings generated by the PCFG. Here we extend this to the problem of learning tree grammars from derived trees, and show that under analogous conditions, we can learn a probabilistic tree grammar, of a type that is equivalent to Tree Adjoining Grammars (TAGs) (Vijay-Shankar and Joshi, 1985) . In this learning model, we have a probabilistic tree grammar which generates a probability distribution over trees; given a sample of these trees, the learner must converge to a grammar that has the same structure as the original grammar and the same parameters.",Strong Learning of Probabilistic Tree Adjoining Grammars,"In this abstract we outline some theoretical work on the probabilistic learning of a representative mildly context-sensitive grammar formalism from positive examples only. In a recent paper, Clark and Fijalkow (2020) (CF from now on) present a consistent unsupervised learning algorithm for probabilistic context-free grammars (PCFGs) satisfying certain structural conditions: it converges to the correct grammar and parameter values, taking as input only a sample of strings generated by the PCFG. Here we extend this to the problem of learning tree grammars from derived trees, and show that under analogous conditions, we can learn a probabilistic tree grammar, of a type that is equivalent to Tree Adjoining Grammars (TAGs) (Vijay-Shankar and Joshi, 1985) . In this learning model, we have a probabilistic tree grammar which generates a probability distribution over trees; given a sample of these trees, the learner must converge to a grammar that has the same structure as the original grammar and the same parameters.",I would like to thank Ryo Yoshinaka; and the reviewers for comments on the paper of which this is an extended abstract.,"Strong Learning of Probabilistic Tree Adjoining Grammars. 
In this abstract we outline some theoretical work on the probabilistic learning of a representative mildly context-sensitive grammar formalism from positive examples only. In a recent paper, Clark and Fijalkow (2020) (CF from now on) present a consistent unsupervised learning algorithm for probabilistic context-free grammars (PCFGs) satisfying certain structural conditions: it converges to the correct grammar and parameter values, taking as input only a sample of strings generated by the PCFG. Here we extend this to the problem of learning tree grammars from derived trees, and show that under analogous conditions, we can learn a probabilistic tree grammar, of a type that is equivalent to Tree Adjoining Grammars (TAGs) (Vijay-Shankar and Joshi, 1985) . In this learning model, we have a probabilistic tree grammar which generates a probability distribution over trees; given a sample of these trees, the learner must converge to a grammar that has the same structure as the original grammar and the same parameters.",2021
wu-fung-2005-inversion,https://aclanthology.org/I05-1023,0,,,,,,,"Inversion Transduction Grammar Constraints for Mining Parallel Sentences from Quasi-Comparable Corpora. We present a new implication of Wu's (1997) Inversion Transduction Grammar (ITG) Hypothesis, on the problem of retrieving truly parallel sentence translations from large collections of highly non-parallel documents. Our approach leverages a strong language universal constraint posited by the ITG Hypothesis, that can serve as a strong inductive bias for various language learning problems, resulting in both efficiency and accuracy gains. The task we attack is highly practical since non-parallel multilingual data exists in far greater quantities than parallel corpora, but parallel sentences are a much more useful resource. Our aim here is to mine truly parallel sentences, as opposed to comparable sentence pairs or loose translations as in most previous work. The method we introduce exploits Bracketing ITGs to produce the first known results for this problem. Experiments show that it obtains large accuracy gains on this task compared to the expected performance of state-of-the-art models that were developed for the less stringent task of mining comparable sentence pairs.",Inversion Transduction Grammar Constraints for Mining Parallel Sentences from Quasi-Comparable Corpora,"We present a new implication of Wu's (1997) Inversion Transduction Grammar (ITG) Hypothesis, on the problem of retrieving truly parallel sentence translations from large collections of highly non-parallel documents. Our approach leverages a strong language universal constraint posited by the ITG Hypothesis, that can serve as a strong inductive bias for various language learning problems, resulting in both efficiency and accuracy gains. The task we attack is highly practical since non-parallel multilingual data exists in far greater quantities than parallel corpora, but parallel sentences are a much more useful resource. Our aim here is to mine truly parallel sentences, as opposed to comparable sentence pairs or loose translations as in most previous work. The method we introduce exploits Bracketing ITGs to produce the first known results for this problem. Experiments show that it obtains large accuracy gains on this task compared to the expected performance of state-of-the-art models that were developed for the less stringent task of mining comparable sentence pairs.",Inversion Transduction Grammar Constraints for Mining Parallel Sentences from Quasi-Comparable Corpora,"We present a new implication of Wu's (1997) Inversion Transduction Grammar (ITG) Hypothesis, on the problem of retrieving truly parallel sentence translations from large collections of highly non-parallel documents. Our approach leverages a strong language universal constraint posited by the ITG Hypothesis, that can serve as a strong inductive bias for various language learning problems, resulting in both efficiency and accuracy gains. The task we attack is highly practical since non-parallel multilingual data exists in far greater quantities than parallel corpora, but parallel sentences are a much more useful resource. Our aim here is to mine truly parallel sentences, as opposed to comparable sentence pairs or loose translations as in most previous work. The method we introduce exploits Bracketing ITGs to produce the first known results for this problem. 
Experiments show that it obtains large accuracy gains on this task compared to the expected performance of state-of-the-art models that were developed for the less stringent task of mining comparable sentence pairs.",,"Inversion Transduction Grammar Constraints for Mining Parallel Sentences from Quasi-Comparable Corpora. We present a new implication of Wu's (1997) Inversion Transduction Grammar (ITG) Hypothesis, on the problem of retrieving truly parallel sentence translations from large collections of highly non-parallel documents. Our approach leverages a strong language universal constraint posited by the ITG Hypothesis, that can serve as a strong inductive bias for various language learning problems, resulting in both efficiency and accuracy gains. The task we attack is highly practical since non-parallel multilingual data exists in far greater quantities than parallel corpora, but parallel sentences are a much more useful resource. Our aim here is to mine truly parallel sentences, as opposed to comparable sentence pairs or loose translations as in most previous work. The method we introduce exploits Bracketing ITGs to produce the first known results for this problem. Experiments show that it obtains large accuracy gains on this task compared to the expected performance of state-of-the-art models that were developed for the less stringent task of mining comparable sentence pairs.",2005
vijay-shanker-1992-using,https://aclanthology.org/J92-4004,0,,,,,,,"Using Descriptions of Trees in a Tree Adjoining Grammar. This paper describes a new interpretation of Tree Adjoining Grammars (TAG) that allows the embedding of TAG in the unification framework in a manner consistent with the declarative approach taken in this framework. In the new interpretation we present in this paper, the objects manipulated by a TAG are considered to be descriptions of trees. This is in contrast to the traditional view that in a TAG the composition operations of adjoining and substitution combine trees. Borrowing ideas from Description Theory, we propose quasi-trees as a means to represent partial descriptions of trees. Using quasi-trees, we are able to justify the definition of feature structurebased Tree Adjoining Grammars (FTAG) that was first given in Vijay-Shanker (1987) and Vijay-Shanker and Joshi (1988). In the definition of the FTAG formalism given here, we argue that a grammar manipulates descriptions of trees (i.e., quasi-trees); whereas the structures derived by a grammar are trees that are obtained by taking the minimal readings of such descriptions. We then build on and refine the earlier version of FTAG, give examples that illustrate the usefulness of embedding TAG in the unification framework, and present a logical formulation (and its associated semantics) of FTA G that shows the separation between descriptions of well-formed structures and the actual structures that are derived, a theme that is central to this work. Finally, we discuss some questions that are raised by our new interpretation of the TAG formalism: questions dealing with the nature and definition of the adjoining operation (in contrast to substitution), its relation to multi-component adjoining, and the distinctions between auxiliary and initial structures.",Using Descriptions of Trees in a {T}ree {A}djoining {G}rammar,"This paper describes a new interpretation of Tree Adjoining Grammars (TAG) that allows the embedding of TAG in the unification framework in a manner consistent with the declarative approach taken in this framework. In the new interpretation we present in this paper, the objects manipulated by a TAG are considered to be descriptions of trees. This is in contrast to the traditional view that in a TAG the composition operations of adjoining and substitution combine trees. Borrowing ideas from Description Theory, we propose quasi-trees as a means to represent partial descriptions of trees. Using quasi-trees, we are able to justify the definition of feature structurebased Tree Adjoining Grammars (FTAG) that was first given in Vijay-Shanker (1987) and Vijay-Shanker and Joshi (1988). In the definition of the FTAG formalism given here, we argue that a grammar manipulates descriptions of trees (i.e., quasi-trees); whereas the structures derived by a grammar are trees that are obtained by taking the minimal readings of such descriptions. We then build on and refine the earlier version of FTAG, give examples that illustrate the usefulness of embedding TAG in the unification framework, and present a logical formulation (and its associated semantics) of FTA G that shows the separation between descriptions of well-formed structures and the actual structures that are derived, a theme that is central to this work. 
Finally, we discuss some questions that are raised by our new interpretation of the TAG formalism: questions dealing with the nature and definition of the adjoining operation (in contrast to substitution), its relation to multi-component adjoining, and the distinctions between auxiliary and initial structures.",Using Descriptions of Trees in a Tree Adjoining Grammar,"This paper describes a new interpretation of Tree Adjoining Grammars (TAG) that allows the embedding of TAG in the unification framework in a manner consistent with the declarative approach taken in this framework. In the new interpretation we present in this paper, the objects manipulated by a TAG are considered to be descriptions of trees. This is in contrast to the traditional view that in a TAG the composition operations of adjoining and substitution combine trees. Borrowing ideas from Description Theory, we propose quasi-trees as a means to represent partial descriptions of trees. Using quasi-trees, we are able to justify the definition of feature structurebased Tree Adjoining Grammars (FTAG) that was first given in Vijay-Shanker (1987) and Vijay-Shanker and Joshi (1988). In the definition of the FTAG formalism given here, we argue that a grammar manipulates descriptions of trees (i.e., quasi-trees); whereas the structures derived by a grammar are trees that are obtained by taking the minimal readings of such descriptions. We then build on and refine the earlier version of FTAG, give examples that illustrate the usefulness of embedding TAG in the unification framework, and present a logical formulation (and its associated semantics) of FTA G that shows the separation between descriptions of well-formed structures and the actual structures that are derived, a theme that is central to this work. Finally, we discuss some questions that are raised by our new interpretation of the TAG formalism: questions dealing with the nature and definition of the adjoining operation (in contrast to substitution), its relation to multi-component adjoining, and the distinctions between auxiliary and initial structures.","This work was partially supported by NSF grant IRI-9016591. I am extremely grateful to A. AbeiUe, A. K. Joshi, A. Kroch, K. E McCoy, Y. Schabes, S. M. Shieber, and D. J. Weir. Their suggestions and comments at various stages have played a substantial role in the development of this work. I am thankful to the reviewers for many useful suggestions. Many of the figures in this paper have been drawn by XTAG (Schabes and Paroubek 1992) , a workbench for Tree-Adjoining Grammars. I would like to thank Yves Schabes for making this available to me.","Using Descriptions of Trees in a Tree Adjoining Grammar. This paper describes a new interpretation of Tree Adjoining Grammars (TAG) that allows the embedding of TAG in the unification framework in a manner consistent with the declarative approach taken in this framework. In the new interpretation we present in this paper, the objects manipulated by a TAG are considered to be descriptions of trees. This is in contrast to the traditional view that in a TAG the composition operations of adjoining and substitution combine trees. Borrowing ideas from Description Theory, we propose quasi-trees as a means to represent partial descriptions of trees. Using quasi-trees, we are able to justify the definition of feature structurebased Tree Adjoining Grammars (FTAG) that was first given in Vijay-Shanker (1987) and Vijay-Shanker and Joshi (1988). 
In the definition of the FTAG formalism given here, we argue that a grammar manipulates descriptions of trees (i.e., quasi-trees); whereas the structures derived by a grammar are trees that are obtained by taking the minimal readings of such descriptions. We then build on and refine the earlier version of FTAG, give examples that illustrate the usefulness of embedding TAG in the unification framework, and present a logical formulation (and its associated semantics) of FTAG that shows the separation between descriptions of well-formed structures and the actual structures that are derived, a theme that is central to this work. Finally, we discuss some questions that are raised by our new interpretation of the TAG formalism: questions dealing with the nature and definition of the adjoining operation (in contrast to substitution), its relation to multi-component adjoining, and the distinctions between auxiliary and initial structures.",1992
bangalore-etal-2012-real,https://aclanthology.org/N12-1048,0,,,,,,,"Real-time Incremental Speech-to-Speech Translation of Dialogs. In a conventional telephone conversation between two speakers of the same language, the interaction is real-time and the speakers process the information stream incrementally. In this work, we address the problem of incremental speech-to-speech translation (S2S) that enables cross-lingual communication between two remote participants over a telephone. We investigate the problem in a novel real-time Session Initiation Protocol (SIP) based S2S framework. The speech translation is performed incrementally based on generation of partial hypotheses from speech recognition. We describe the statistical models comprising the S2S system and the SIP architecture for enabling real-time two-way cross-lingual dialog. We present dialog experiments performed in this framework and study the tradeoff in accuracy versus latency in incremental speech translation. Experimental results demonstrate that high quality translations can be generated with the incremental approach with approximately half the latency associated with nonincremental approach.",Real-time Incremental Speech-to-Speech Translation of Dialogs,"In a conventional telephone conversation between two speakers of the same language, the interaction is real-time and the speakers process the information stream incrementally. In this work, we address the problem of incremental speech-to-speech translation (S2S) that enables cross-lingual communication between two remote participants over a telephone. We investigate the problem in a novel real-time Session Initiation Protocol (SIP) based S2S framework. The speech translation is performed incrementally based on generation of partial hypotheses from speech recognition. We describe the statistical models comprising the S2S system and the SIP architecture for enabling real-time two-way cross-lingual dialog. We present dialog experiments performed in this framework and study the tradeoff in accuracy versus latency in incremental speech translation. Experimental results demonstrate that high quality translations can be generated with the incremental approach with approximately half the latency associated with nonincremental approach.",Real-time Incremental Speech-to-Speech Translation of Dialogs,"In a conventional telephone conversation between two speakers of the same language, the interaction is real-time and the speakers process the information stream incrementally. In this work, we address the problem of incremental speech-to-speech translation (S2S) that enables cross-lingual communication between two remote participants over a telephone. We investigate the problem in a novel real-time Session Initiation Protocol (SIP) based S2S framework. The speech translation is performed incrementally based on generation of partial hypotheses from speech recognition. We describe the statistical models comprising the S2S system and the SIP architecture for enabling real-time two-way cross-lingual dialog. We present dialog experiments performed in this framework and study the tradeoff in accuracy versus latency in incremental speech translation. Experimental results demonstrate that high quality translations can be generated with the incremental approach with approximately half the latency associated with nonincremental approach.",,"Real-time Incremental Speech-to-Speech Translation of Dialogs. 
In a conventional telephone conversation between two speakers of the same language, the interaction is real-time and the speakers process the information stream incrementally. In this work, we address the problem of incremental speech-to-speech translation (S2S) that enables cross-lingual communication between two remote participants over a telephone. We investigate the problem in a novel real-time Session Initiation Protocol (SIP) based S2S framework. The speech translation is performed incrementally based on generation of partial hypotheses from speech recognition. We describe the statistical models comprising the S2S system and the SIP architecture for enabling real-time two-way cross-lingual dialog. We present dialog experiments performed in this framework and study the tradeoff in accuracy versus latency in incremental speech translation. Experimental results demonstrate that high quality translations can be generated with the incremental approach with approximately half the latency associated with nonincremental approach.",2012
karlsson-1990-constraint,https://aclanthology.org/C90-3030,0,,,,,,,"Constraint Grammar as a Framework for Parsing Running Text. Grammars which are used in parsers are often directly imported from autonomous grammar theory and descriptive practice that were not exercised for the explicit purpose of parsing. Parsers have been designed for English based on e.g. Government and Binding Theory, Generalized Phrase Structure Grammar, and Lexical-Functional Grammar. We present a formalism to be used for parsing where the grammar statements are closer to real text sentences and more directly address some notorious parsing problems, especially ambiguity. The formalism is a linguistic one. It relies on transitional probabilities in an indirect way. The probabilities are not part of the description.",Constraint Grammar as a Framework for Parsing Running Text,"Grammars which are used in parsers are often directly imported from autonomous grammar theory and descriptive practice that were not exercised for the explicit purpose of parsing. Parsers have been designed for English based on e.g. Government and Binding Theory, Generalized Phrase Structure Grammar, and Lexical-Functional Grammar. We present a formalism to be used for parsing where the grammar statements are closer to real text sentences and more directly address some notorious parsing problems, especially ambiguity. The formalism is a linguistic one. It relies on transitional probabilities in an indirect way. The probabilities are not part of the description.",Constraint Grammar as a Framework for Parsing Running Text,"Grammars which are used in parsers are often directly imported from autonomous grammar theory and descriptive practice that were not exercised for the explicit purpose of parsing. Parsers have been designed for English based on e.g. Government and Binding Theory, Generalized Phrase Structure Grammar, and Lexical-Functional Grammar. We present a formalism to be used for parsing where the grammar statements are closer to real text sentences and more directly address some notorious parsing problems, especially ambiguity. The formalism is a linguistic one. It relies on transitional probabilities in an indirect way. The probabilities are not part of the description.","This research was supported by the Academy of Finland in 1985-89, and by the Technology Development Centre of Finland (TEKES) in 1989-90. Part of it belongs to the ESPRIT II project SIMPR (2083). I am indebted to Kimmo Koskenniemi for help in the field of morphological analysis, and to Atro Voutilainen, Juha Heikkilä, and Arto Anttila for help in testing the formalism.","Constraint Grammar as a Framework for Parsing Running Text. Grammars which are used in parsers are often directly imported from autonomous grammar theory and descriptive practice that were not exercised for the explicit purpose of parsing. Parsers have been designed for English based on e.g. Government and Binding Theory, Generalized Phrase Structure Grammar, and Lexical-Functional Grammar. We present a formalism to be used for parsing where the grammar statements are closer to real text sentences and more directly address some notorious parsing problems, especially ambiguity. The formalism is a linguistic one. It relies on transitional probabilities in an indirect way. The probabilities are not part of the description.",1990
schlor-etal-2020-improving,https://aclanthology.org/2020.onion-1.5,0,,,,,,,"Improving Sentiment Analysis with Biofeedback Data. Humans frequently are able to read and interpret emotions of others by directly taking verbal and non-verbal signals in human-to-human communication into account or to infer or even experience emotions from mediated stories. For computers, however, emotion recognition is a complex problem: Thoughts and feelings are the roots of many behavioural responses and they are deeply entangled with neurophysiological changes within humans. As such, emotions are very subjective, often are expressed in a subtle manner, and are highly depending on context. For example, machine learning approaches for text-based sentiment analysis often rely on incorporating sentiment lexicons or language models to capture the contextual meaning. This paper explores if and how we further can enhance sentiment analysis using biofeedback of humans which are experiencing emotions while reading texts. Specifically, we record the heart rate and brain waves of readers that are presented with short texts which have been annotated with the emotions they induce. We use these physiological signals to improve the performance of a lexicon-based sentiment classifier. We find that the combination of several biosignals can improve the ability of a text-based classifier to detect the presence of a sentiment in a text on a per-sentence level.",Improving Sentiment Analysis with Biofeedback Data,"Humans frequently are able to read and interpret emotions of others by directly taking verbal and non-verbal signals in human-to-human communication into account or to infer or even experience emotions from mediated stories. For computers, however, emotion recognition is a complex problem: Thoughts and feelings are the roots of many behavioural responses and they are deeply entangled with neurophysiological changes within humans. As such, emotions are very subjective, often are expressed in a subtle manner, and are highly depending on context. For example, machine learning approaches for text-based sentiment analysis often rely on incorporating sentiment lexicons or language models to capture the contextual meaning. This paper explores if and how we further can enhance sentiment analysis using biofeedback of humans which are experiencing emotions while reading texts. Specifically, we record the heart rate and brain waves of readers that are presented with short texts which have been annotated with the emotions they induce. We use these physiological signals to improve the performance of a lexicon-based sentiment classifier. We find that the combination of several biosignals can improve the ability of a text-based classifier to detect the presence of a sentiment in a text on a per-sentence level.",Improving Sentiment Analysis with Biofeedback Data,"Humans frequently are able to read and interpret emotions of others by directly taking verbal and non-verbal signals in human-to-human communication into account or to infer or even experience emotions from mediated stories. For computers, however, emotion recognition is a complex problem: Thoughts and feelings are the roots of many behavioural responses and they are deeply entangled with neurophysiological changes within humans. As such, emotions are very subjective, often are expressed in a subtle manner, and are highly depending on context. 
For example, machine learning approaches for text-based sentiment analysis often rely on incorporating sentiment lexicons or language models to capture the contextual meaning. This paper explores if and how we further can enhance sentiment analysis using biofeedback of humans which are experiencing emotions while reading texts. Specifically, we record the heart rate and brain waves of readers that are presented with short texts which have been annotated with the emotions they induce. We use these physiological signals to improve the performance of a lexicon-based sentiment classifier. We find that the combination of several biosignals can improve the ability of a text-based classifier to detect the presence of a sentiment in a text on a per-sentence level.",,"Improving Sentiment Analysis with Biofeedback Data. Humans frequently are able to read and interpret emotions of others by directly taking verbal and non-verbal signals in human-to-human communication into account or to infer or even experience emotions from mediated stories. For computers, however, emotion recognition is a complex problem: Thoughts and feelings are the roots of many behavioural responses and they are deeply entangled with neurophysiological changes within humans. As such, emotions are very subjective, often are expressed in a subtle manner, and are highly depending on context. For example, machine learning approaches for text-based sentiment analysis often rely on incorporating sentiment lexicons or language models to capture the contextual meaning. This paper explores if and how we further can enhance sentiment analysis using biofeedback of humans which are experiencing emotions while reading texts. Specifically, we record the heart rate and brain waves of readers that are presented with short texts which have been annotated with the emotions they induce. We use these physiological signals to improve the performance of a lexicon-based sentiment classifier. We find that the combination of several biosignals can improve the ability of a text-based classifier to detect the presence of a sentiment in a text on a per-sentence level.",2020
zhao-ng-2007-identification,https://aclanthology.org/D07-1057,0,,,,,,,"Identification and Resolution of Chinese Zero Pronouns: A Machine Learning Approach. In this paper, we present a machine learning approach to the identification and resolution of Chinese anaphoric zero pronouns. We perform both identification and resolution automatically, with two sets of easily computable features. Experimental results show that our proposed learning approach achieves anaphoric zero pronoun resolution accuracy comparable to a previous state-of-the-art, heuristic rule-based approach. To our knowledge, our work is the first to perform both identification and resolution of Chinese anaphoric zero pronouns using a machine learning approach.",Identification and Resolution of {C}hinese Zero Pronouns: A Machine Learning Approach,"In this paper, we present a machine learning approach to the identification and resolution of Chinese anaphoric zero pronouns. We perform both identification and resolution automatically, with two sets of easily computable features. Experimental results show that our proposed learning approach achieves anaphoric zero pronoun resolution accuracy comparable to a previous state-of-the-art, heuristic rule-based approach. To our knowledge, our work is the first to perform both identification and resolution of Chinese anaphoric zero pronouns using a machine learning approach.",Identification and Resolution of Chinese Zero Pronouns: A Machine Learning Approach,"In this paper, we present a machine learning approach to the identification and resolution of Chinese anaphoric zero pronouns. We perform both identification and resolution automatically, with two sets of easily computable features. Experimental results show that our proposed learning approach achieves anaphoric zero pronoun resolution accuracy comparable to a previous state-of-the-art, heuristic rule-based approach. To our knowledge, our work is the first to perform both identification and resolution of Chinese anaphoric zero pronouns using a machine learning approach.",We thank Susan Converse and Martha Palmer for sharing their Chinese third-person pronoun and zero pronoun coreference corpus.,"Identification and Resolution of Chinese Zero Pronouns: A Machine Learning Approach. In this paper, we present a machine learning approach to the identification and resolution of Chinese anaphoric zero pronouns. We perform both identification and resolution automatically, with two sets of easily computable features. Experimental results show that our proposed learning approach achieves anaphoric zero pronoun resolution accuracy comparable to a previous state-of-the-art, heuristic rule-based approach. To our knowledge, our work is the first to perform both identification and resolution of Chinese anaphoric zero pronouns using a machine learning approach.",2007
shen-etal-2004-discriminative,https://aclanthology.org/N04-1023,0,,,,,,,"Discriminative Reranking for Machine Translation. This paper describes the application of discriminative reranking techniques to the problem of machine translation. For each sentence in the source language, we obtain from a baseline statistical machine translation system, a rankedbest list of candidate translations in the target language. We introduce two novel perceptroninspired reranking algorithms that improve on the quality of machine translation over the baseline system based on evaluation using the BLEU metric. We provide experimental results on the NIST 2003 Chinese-English large data track evaluation. We also provide theoretical analysis of our algorithms and experiments that verify that our algorithms provide state-of-theart performance in machine translation.",Discriminative Reranking for Machine Translation,"This paper describes the application of discriminative reranking techniques to the problem of machine translation. For each sentence in the source language, we obtain from a baseline statistical machine translation system, a rankedbest list of candidate translations in the target language. We introduce two novel perceptroninspired reranking algorithms that improve on the quality of machine translation over the baseline system based on evaluation using the BLEU metric. We provide experimental results on the NIST 2003 Chinese-English large data track evaluation. We also provide theoretical analysis of our algorithms and experiments that verify that our algorithms provide state-of-theart performance in machine translation.",Discriminative Reranking for Machine Translation,"This paper describes the application of discriminative reranking techniques to the problem of machine translation. For each sentence in the source language, we obtain from a baseline statistical machine translation system, a rankedbest list of candidate translations in the target language. We introduce two novel perceptroninspired reranking algorithms that improve on the quality of machine translation over the baseline system based on evaluation using the BLEU metric. We provide experimental results on the NIST 2003 Chinese-English large data track evaluation. We also provide theoretical analysis of our algorithms and experiments that verify that our algorithms provide state-of-theart performance in machine translation.","This material is based upon work supported by the National Science Foundation under Grant No. 0121285. The first author was partially supported by JHU postworkshop fellowship and NSF Grant ITR-0205456. The second author is partially supported by NSERC, Canada (RGPIN: 264905). We thank the members of the SMT team of JHU Workshop 2003 for help on the dataset and three anonymous reviewers for useful comments.","Discriminative Reranking for Machine Translation. This paper describes the application of discriminative reranking techniques to the problem of machine translation. For each sentence in the source language, we obtain from a baseline statistical machine translation system, a rankedbest list of candidate translations in the target language. We introduce two novel perceptroninspired reranking algorithms that improve on the quality of machine translation over the baseline system based on evaluation using the BLEU metric. We provide experimental results on the NIST 2003 Chinese-English large data track evaluation. 
We also provide theoretical analysis of our algorithms and experiments that verify that our algorithms provide state-of-the-art performance in machine translation.",2004
sharifi-atashgah-bijankhan-2009-corpus,https://aclanthology.org/2009.mtsummit-caasl.11,0,,,,,,,"Corpus-based Analysis for Multi-token Units in Persian. Morphological and syntactic annotation of multi-token units confront several problems due to the concatenating nature of Persian script and so its orthographic variation. In the present paper, by the analysis of the different collocation types of the tokens, the compositional, non-compositional and semicompositional constructions are described and then, in order to explain these constructions, the static and dynamic multi-token units will be introduced for the non-generative and generative structures of the verbs, infinitives, prepositions, conjunctions, adverbs, adjectives and nouns. Defining the multi-token unit templates for these categories is one of the important results of this research. The findings can be input to the Persian Treebank generator systems. Also, the machine translation systems using the rule-based methods to parse the texts can utilize the results in text segmentation and parsing.",Corpus-based Analysis for Multi-token Units in {P}ersian,"Morphological and syntactic annotation of multi-token units confront several problems due to the concatenating nature of Persian script and so its orthographic variation. In the present paper, by the analysis of the different collocation types of the tokens, the compositional, non-compositional and semicompositional constructions are described and then, in order to explain these constructions, the static and dynamic multi-token units will be introduced for the non-generative and generative structures of the verbs, infinitives, prepositions, conjunctions, adverbs, adjectives and nouns. Defining the multi-token unit templates for these categories is one of the important results of this research. The findings can be input to the Persian Treebank generator systems. Also, the machine translation systems using the rule-based methods to parse the texts can utilize the results in text segmentation and parsing.",Corpus-based Analysis for Multi-token Units in Persian,"Morphological and syntactic annotation of multi-token units confront several problems due to the concatenating nature of Persian script and so its orthographic variation. In the present paper, by the analysis of the different collocation types of the tokens, the compositional, non-compositional and semicompositional constructions are described and then, in order to explain these constructions, the static and dynamic multi-token units will be introduced for the non-generative and generative structures of the verbs, infinitives, prepositions, conjunctions, adverbs, adjectives and nouns. Defining the multi-token unit templates for these categories is one of the important results of this research. The findings can be input to the Persian Treebank generator systems. Also, the machine translation systems using the rule-based methods to parse the texts can utilize the results in text segmentation and parsing.",,"Corpus-based Analysis for Multi-token Units in Persian. Morphological and syntactic annotation of multi-token units confront several problems due to the concatenating nature of Persian script and so its orthographic variation. 
In the present paper, by the analysis of the different collocation types of the tokens, the compositional, non-compositional and semicompositional constructions are described and then, in order to explain these constructions, the static and dynamic multi-token units will be introduced for the non-generative and generative structures of the verbs, infinitives, prepositions, conjunctions, adverbs, adjectives and nouns. Defining the multi-token unit templates for these categories is one of the important results of this research. The findings can be input to the Persian Treebank generator systems. Also, the machine translation systems using the rule-based methods to parse the texts can utilize the results in text segmentation and parsing.",2009
mowery-etal-2012-medical,https://aclanthology.org/W12-2407,1,,,,health,,,"Medical diagnosis lost in translation -- Analysis of uncertainty and negation expressions in English and Swedish clinical texts. In the English clinical and biomedical text domains, negation and certainty usage are two well-studied phenomena. However, few studies have made an in-depth characterization of uncertainties expressed in a clinical setting, and compared this between different annotation efforts. This preliminary, qualitative study attempts to 1) create a clinical uncertainty and negation taxonomy, 2) develop a translation map to convert annotation labels from an English schema into a Swedish schema, and 3) characterize and compare two data sets using this taxonomy. We define a clinical uncertainty and negation taxonomy and a translation map for converting annotation labels between two schemas and report observed similarities and differences between the two data sets.",Medical diagnosis lost in translation {--} Analysis of uncertainty and negation expressions in {E}nglish and {S}wedish clinical texts,"In the English clinical and biomedical text domains, negation and certainty usage are two well-studied phenomena. However, few studies have made an in-depth characterization of uncertainties expressed in a clinical setting, and compared this between different annotation efforts. This preliminary, qualitative study attempts to 1) create a clinical uncertainty and negation taxonomy, 2) develop a translation map to convert annotation labels from an English schema into a Swedish schema, and 3) characterize and compare two data sets using this taxonomy. We define a clinical uncertainty and negation taxonomy and a translation map for converting annotation labels between two schemas and report observed similarities and differences between the two data sets.",Medical diagnosis lost in translation -- Analysis of uncertainty and negation expressions in English and Swedish clinical texts,"In the English clinical and biomedical text domains, negation and certainty usage are two well-studied phenomena. However, few studies have made an in-depth characterization of uncertainties expressed in a clinical setting, and compared this between different annotation efforts. This preliminary, qualitative study attempts to 1) create a clinical uncertainty and negation taxonomy, 2) develop a translation map to convert annotation labels from an English schema into a Swedish schema, and 3) characterize and compare two data sets using this taxonomy. We define a clinical uncertainty and negation taxonomy and a translation map for converting annotation labels between two schemas and report observed similarities and differences between the two data sets.","For the English and Swedish data sets, we obtained approval from the University of Pittsburgh IRB and the Regional Ethical Review Board in Stockholm (Etikprövningsnämnden i Stockholm). The study is part of the Interlock project, funded by the Stockholm University Academic Initiative and partially funded by NLM Fellowship 5T15LM007059. Lexicons and probabilities will be made available and updated on the iDASH NLP ecosystem under Resources: http://idash.ucsd.edu/nlp/natural-languageprocessing-nlp-ecosystem.","Medical diagnosis lost in translation -- Analysis of uncertainty and negation expressions in English and Swedish clinical texts. In the English clinical and biomedical text domains, negation and certainty usage are two well-studied phenomena. 
However, few studies have made an in-depth characterization of uncertainties expressed in a clinical setting, and compared this between different annotation efforts. This preliminary, qualitative study attempts to 1) create a clinical uncertainty and negation taxonomy, 2) develop a translation map to convert annotation labels from an English schema into a Swedish schema, and 3) characterize and compare two data sets using this taxonomy. We define a clinical uncertainty and negation taxonomy and a translation map for converting annotation labels between two schemas and report observed similarities and differences between the two data sets.",2012
baldwin-etal-2003-alias,https://aclanthology.org/N03-4002,1,,,,peace_justice_and_strong_institutions,,,"Alias-i Threat Trackers. Alias-i ThreatTrackers are an advanced information access application designed around the needs of analysts working through a large daily data feed. ThreatTrackers help analysts decompose an information gathering topic like the unfolding political situation in Iraq into specifications including people, places, organizations and relationships. These specifications are then used to collect and browse information on a daily basis. The nearest related technologies are information retrieval (search engines), document categorization, information extraction and named entity detection.ThreatTrackers are currently being used in the Total Information Awareness program.",Alias-i Threat Trackers,"Alias-i ThreatTrackers are an advanced information access application designed around the needs of analysts working through a large daily data feed. ThreatTrackers help analysts decompose an information gathering topic like the unfolding political situation in Iraq into specifications including people, places, organizations and relationships. These specifications are then used to collect and browse information on a daily basis. The nearest related technologies are information retrieval (search engines), document categorization, information extraction and named entity detection.ThreatTrackers are currently being used in the Total Information Awareness program.",Alias-i Threat Trackers,"Alias-i ThreatTrackers are an advanced information access application designed around the needs of analysts working through a large daily data feed. ThreatTrackers help analysts decompose an information gathering topic like the unfolding political situation in Iraq into specifications including people, places, organizations and relationships. These specifications are then used to collect and browse information on a daily basis. The nearest related technologies are information retrieval (search engines), document categorization, information extraction and named entity detection.ThreatTrackers are currently being used in the Total Information Awareness program.",,"Alias-i Threat Trackers. Alias-i ThreatTrackers are an advanced information access application designed around the needs of analysts working through a large daily data feed. ThreatTrackers help analysts decompose an information gathering topic like the unfolding political situation in Iraq into specifications including people, places, organizations and relationships. These specifications are then used to collect and browse information on a daily basis. The nearest related technologies are information retrieval (search engines), document categorization, information extraction and named entity detection.ThreatTrackers are currently being used in the Total Information Awareness program.",2003
bernier-colborne-etal-2021-n,https://aclanthology.org/2021.vardial-1.15,0,,,,,,,"N-gram and Neural Models for Uralic Language Identification: NRC at VarDial 2021. We describe the systems developed by the National Research Council Canada for the Uralic language identification shared task at the 2021 VarDial evaluation campaign. We evaluated two different approaches to this task: a probabilistic classifier exploiting only character 5grams as features, and a character-based neural network pre-trained through self-supervision, then fine-tuned on the language identification task. The former method turned out to perform better, which casts doubt on the usefulness of deep learning methods for language identification, where they have yet to convincingly and consistently outperform simpler and less costly classification algorithms exploiting n-gram features.",N-gram and Neural Models for Uralic Language Identification: {NRC} at {V}ar{D}ial 2021,"We describe the systems developed by the National Research Council Canada for the Uralic language identification shared task at the 2021 VarDial evaluation campaign. We evaluated two different approaches to this task: a probabilistic classifier exploiting only character 5grams as features, and a character-based neural network pre-trained through self-supervision, then fine-tuned on the language identification task. The former method turned out to perform better, which casts doubt on the usefulness of deep learning methods for language identification, where they have yet to convincingly and consistently outperform simpler and less costly classification algorithms exploiting n-gram features.",N-gram and Neural Models for Uralic Language Identification: NRC at VarDial 2021,"We describe the systems developed by the National Research Council Canada for the Uralic language identification shared task at the 2021 VarDial evaluation campaign. We evaluated two different approaches to this task: a probabilistic classifier exploiting only character 5grams as features, and a character-based neural network pre-trained through self-supervision, then fine-tuned on the language identification task. The former method turned out to perform better, which casts doubt on the usefulness of deep learning methods for language identification, where they have yet to convincingly and consistently outperform simpler and less costly classification algorithms exploiting n-gram features.",We thank the organizers for their work developing and running this shared task.,"N-gram and Neural Models for Uralic Language Identification: NRC at VarDial 2021. We describe the systems developed by the National Research Council Canada for the Uralic language identification shared task at the 2021 VarDial evaluation campaign. We evaluated two different approaches to this task: a probabilistic classifier exploiting only character 5grams as features, and a character-based neural network pre-trained through self-supervision, then fine-tuned on the language identification task. The former method turned out to perform better, which casts doubt on the usefulness of deep learning methods for language identification, where they have yet to convincingly and consistently outperform simpler and less costly classification algorithms exploiting n-gram features.",2021
soundrarajan-etal-2011-interface,https://aclanthology.org/P11-4024,0,,,,,,,"An Interface for Rapid Natural Language Processing Development in UIMA. This demonstration presents the Annotation Librarian, an application programming interface that supports rapid development of natural language processing (NLP) projects built in Apache Unstructured Information Management Architecture (UIMA). The flexibility of UIMA to support all types of unstructured data-images, audio, and text-increases the complexity of some of the most common NLP development tasks. The Annotation Librarian interface handles these common functions and allows the creation and management of annotations by mirroring Java methods used to manipulate Strings. The familiar syntax and NLP-centric design allows developers to adopt and rapidly develop NLP algorithms in UIMA. The general functionality of the interface is described in relation to the use cases that necessitated its creation.",An Interface for Rapid Natural Language Processing Development in {UIMA},"This demonstration presents the Annotation Librarian, an application programming interface that supports rapid development of natural language processing (NLP) projects built in Apache Unstructured Information Management Architecture (UIMA). The flexibility of UIMA to support all types of unstructured data-images, audio, and text-increases the complexity of some of the most common NLP development tasks. The Annotation Librarian interface handles these common functions and allows the creation and management of annotations by mirroring Java methods used to manipulate Strings. The familiar syntax and NLP-centric design allows developers to adopt and rapidly develop NLP algorithms in UIMA. The general functionality of the interface is described in relation to the use cases that necessitated its creation.",An Interface for Rapid Natural Language Processing Development in UIMA,"This demonstration presents the Annotation Librarian, an application programming interface that supports rapid development of natural language processing (NLP) projects built in Apache Unstructured Information Management Architecture (UIMA). The flexibility of UIMA to support all types of unstructured data-images, audio, and text-increases the complexity of some of the most common NLP development tasks. The Annotation Librarian interface handles these common functions and allows the creation and management of annotations by mirroring Java methods used to manipulate Strings. The familiar syntax and NLP-centric design allows developers to adopt and rapidly develop NLP algorithms in UIMA. The general functionality of the interface is described in relation to the use cases that necessitated its creation.","This work was supported using resources and facilities at the VA Salt Lake City Health Care System with funding support from the VA Informatics and Computing Infrastructure (VINCI), VA HSR HIR 08-204 and the Consortium for Healthcare Informatics Research (CHIR), VA HSR HIR 08-374. Views expressed are those of the authors and not necessarily those of the Department of Veterans Affairs.","An Interface for Rapid Natural Language Processing Development in UIMA. This demonstration presents the Annotation Librarian, an application programming interface that supports rapid development of natural language processing (NLP) projects built in Apache Unstructured Information Management Architecture (UIMA). 
The flexibility of UIMA to support all types of unstructured data (images, audio, and text) increases the complexity of some of the most common NLP development tasks. The Annotation Librarian interface handles these common functions and allows the creation and management of annotations by mirroring Java methods used to manipulate Strings. The familiar syntax and NLP-centric design allows developers to adopt and rapidly develop NLP algorithms in UIMA. The general functionality of the interface is described in relation to the use cases that necessitated its creation.",2011
buechel-etal-2016-enterprises,https://aclanthology.org/W16-0423,1,,,,industry_innovation_infrastructure,decent_work_and_economy,,"Do Enterprises Have Emotions?. Emotional language of human individuals has been studied for quite a while dealing with opinions and value judgments people have and share with others. In our work, we take a different stance and investigate whether large organizations, such as major industrial players, have and communicate emotions, as well. Such an anthropomorphic perspective has recently been advocated in management and organization studies which consider organizations as social actors. We studied this assumption by analyzing 1,676 annual business and sustainability reports from 90 top-performing enterprises in the United States, Great Britain and Germany. We compared the measurements of emotions in this homogeneous corporate text corpus with those from RCV1, a heterogeneous Reuters newswire corpus. From this, we gathered empirical evidence that business reports compare well with typical emotion-neutral economic news, whereas sustainability reports are much more emotionally loaded, similar to emotion-heavy sports and fashion news from Reuters. Furthermore, our data suggest that these emotions are distinctive and relatively stable over time per organization, thus constituting an emotional profile for enterprises.",Do Enterprises Have Emotions?,"Emotional language of human individuals has been studied for quite a while dealing with opinions and value judgments people have and share with others. In our work, we take a different stance and investigate whether large organizations, such as major industrial players, have and communicate emotions, as well. Such an anthropomorphic perspective has recently been advocated in management and organization studies which consider organizations as social actors. We studied this assumption by analyzing 1,676 annual business and sustainability reports from 90 top-performing enterprises in the United States, Great Britain and Germany. We compared the measurements of emotions in this homogeneous corporate text corpus with those from RCV1, a heterogeneous Reuters newswire corpus. From this, we gathered empirical evidence that business reports compare well with typical emotion-neutral economic news, whereas sustainability reports are much more emotionally loaded, similar to emotion-heavy sports and fashion news from Reuters. Furthermore, our data suggest that these emotions are distinctive and relatively stable over time per organization, thus constituting an emotional profile for enterprises.",Do Enterprises Have Emotions?,"Emotional language of human individuals has been studied for quite a while dealing with opinions and value judgments people have and share with others. In our work, we take a different stance and investigate whether large organizations, such as major industrial players, have and communicate emotions, as well. Such an anthropomorphic perspective has recently been advocated in management and organization studies which consider organizations as social actors. We studied this assumption by analyzing 1,676 annual business and sustainability reports from 90 top-performing enterprises in the United States, Great Britain and Germany. We compared the measurements of emotions in this homogeneous corporate text corpus with those from RCV1, a heterogeneous Reuters newswire corpus. 
From this, we gathered empirical evidence that business reports compare well with typical emotion-neutral economic news, whereas sustainability reports are much more emotionally loaded, similar to emotion-heavy sports and fashion news from Reuters. Furthermore, our data suggest that these emotions are distinctive and relatively stable over time per organization, thus constituting an emotional profile for enterprises.",,"Do Enterprises Have Emotions?. Emotional language of human individuals has been studied for quite a while dealing with opinions and value judgments people have and share with others. In our work, we take a different stance and investigate whether large organizations, such as major industrial players, have and communicate emotions, as well. Such an anthropomorphic perspective has recently been advocated in management and organization studies which consider organizations as social actors. We studied this assumption by analyzing 1,676 annual business and sustainability reports from 90 top-performing enterprises in the United States, Great Britain and Germany. We compared the measurements of emotions in this homogeneous corporate text corpus with those from RCV1, a heterogeneous Reuters newswire corpus. From this, we gathered empirical evidence that business reports compare well with typical emotion-neutral economic news, whereas sustainability reports are much more emotionally loaded, similar to emotion-heavy sports and fashion news from Reuters. Furthermore, our data suggest that these emotions are distinctive and relatively stable over time per organization, thus constituting an emotional profile for enterprises.",2016
ws-2007-biological,https://aclanthology.org/W07-1000,1,,,,health,,,"Biological, translational, and clinical language processing. Biological, translational, and clinical language processing K. BRETONNEL COHEN, DINA DEMNER-FUSHMAN, CAROL FRIEDMAN, LYNETTE HIRSCHMAN, AND JOHN P. PESTIAN
Natural language processing has a long history in the medical domain, with research in the field dating back to at least the early 1960s. In the late 1990s, a separate thread of research involving natural language processing in the genomic domain began to gather steam. It has become a major focus of research in the bioinformatics, computational biology, and computational linguistics communities. A number of successful workshops and conference sessions have resulted, with significant progress in the areas of named entity recognition for a wide range of key biomedical classes, concept normalization, and system evaluation. A variety of publicly available resources have contributed to this progress, as well.","Biological, translational, and clinical language processing","Biological, translational, and clinical language processing K. BRETONNEL COHEN, DINA DEMNER-FUSHMAN, CAROL FRIEDMAN, LYNETTE HIRSCHMAN, AND JOHN P. PESTIAN
Natural language processing has a long history in the medical domain, with research in the field dating back to at least the early 1960s. In the late 1990s, a separate thread of research involving natural language processing in the genomic domain began to gather steam. It has become a major focus of research in the bioinformatics, computational biology, and computational linguistics communities. A number of successful workshops and conference sessions have resulted, with significant progress in the areas of named entity recognition for a wide range of key biomedical classes, concept normalization, and system evaluation. A variety of publicly available resources have contributed to this progress, as well.","Biological, translational, and clinical language processing","Biological, translational, and clinical language processing K. BRETONNEL COHEN, DINA DEMNER-FUSHMAN, CAROL FRIEDMAN, LYNETTE HIRSCHMAN, AND JOHN P. PESTIAN
Natural language processing has a long history in the medical domain, with research in the field dating back to at least the early 1960s. In the late 1990s, a separate thread of research involving natural language processing in the genomic domain began to gather steam. It has become a major focus of research in the bioinformatics, computational biology, and computational linguistics communities. A number of successful workshops and conference sessions have resulted, with significant progress in the areas of named entity recognition for a wide range of key biomedical classes, concept normalization, and system evaluation. A variety of publicly available resources have contributed to this progress, as well.",,"Biological, translational, and clinical language processing. Biological, translational, and clinical language processing K. BRETONNEL COHEN, DINA DEMNER-FUSHMAN, CAROL FRIEDMAN, LYNETTE HIRSCHMAN, AND JOHN P. PESTIAN
Natural language processing has a long history in the medical domain, with research in the field dating back to at least the early 1960s. In the late 1990s, a separate thread of research involving natural language processing in the genomic domain began to gather steam. It has become a major focus of research in the bioinformatics, computational biology, and computational linguistics communities. A number of successful workshops and conference sessions have resulted, with significant progress in the areas of named entity recognition for a wide range of key biomedical classes, concept normalization, and system evaluation. A variety of publicly available resources have contributed to this progress, as well.",2007
erjavec-etal-2004-making,http://www.lrec-conf.org/proceedings/lrec2004/pdf/107.pdf,1,,,,education,,,"Making an XML-based Japanese-Slovene Learners' Dictionary. In this paper we present a hypertext dictionary of Japanese lexical units for Slovene students of Japanese at the Faculty of Arts of Ljubljana University. The dictionary is planned as a long-term project in which a simple dictionary is to be gradually enlarged and enhanced, taking into account the needs of the students. Initially, the dictionary was encoded in a tabular format, in a mixture of encodings, and subsequently rendered in HTML. The paper first discusses the conversion of the dictionary into XML, into an encoding that complies with the Text Encoding Initiative (TEI) Guidelines. The conversion into such an encoding validates, enriches, explicates and standardises the structure of the dictionary, thus making it more usable for further development and linguistically oriented research. We also present the current Web implementation of the dictionary, which offers full text search and a tool for practising inflected parts of speech. The paper gives an overview of related research, i.e. other XML oriented Web dictionaries of Slovene and East Asian languages and presents planned developments, i.e. the inclusion of the dictionary into the Reading Tutor program.",Making an {XML}-based {J}apanese-{S}lovene Learners{'} Dictionary,"In this paper we present a hypertext dictionary of Japanese lexical units for Slovene students of Japanese at the Faculty of Arts of Ljubljana University. The dictionary is planned as a long-term project in which a simple dictionary is to be gradually enlarged and enhanced, taking into account the needs of the students. Initially, the dictionary was encoded in a tabular format, in a mixture of encodings, and subsequently rendered in HTML. The paper first discusses the conversion of the dictionary into XML, into an encoding that complies with the Text Encoding Initiative (TEI) Guidelines. The conversion into such an encoding validates, enriches, explicates and standardises the structure of the dictionary, thus making it more usable for further development and linguistically oriented research. We also present the current Web implementation of the dictionary, which offers full text search and a tool for practising inflected parts of speech. The paper gives an overview of related research, i.e. other XML oriented Web dictionaries of Slovene and East Asian languages and presents planned developments, i.e. the inclusion of the dictionary into the Reading Tutor program.",Making an XML-based Japanese-Slovene Learners' Dictionary,"In this paper we present a hypertext dictionary of Japanese lexical units for Slovene students of Japanese at the Faculty of Arts of Ljubljana University. The dictionary is planned as a long-term project in which a simple dictionary is to be gradually enlarged and enhanced, taking into account the needs of the students. Initially, the dictionary was encoded in a tabular format, in a mixture of encodings, and subsequently rendered in HTML. The paper first discusses the conversion of the dictionary into XML, into an encoding that complies with the Text Encoding Initiative (TEI) Guidelines. The conversion into such an encoding validates, enriches, explicates and standardises the structure of the dictionary, thus making it more usable for further development and linguistically oriented research. 
We also present the current Web implementation of the dictionary, which offers full text search and a tool for practising inflected parts of speech. The paper gives an overview of related research, i.e. other XML oriented Web dictionaries of Slovene and East Asian languages and presents planned developments, i.e. the inclusion of the dictionary into the Reading Tutor program.",,"Making an XML-based Japanese-Slovene Learners' Dictionary. In this paper we present a hypertext dictionary of Japanese lexical units for Slovene students of Japanese at the Faculty of Arts of Ljubljana University. The dictionary is planned as a long-term project in which a simple dictionary is to be gradually enlarged and enhanced, taking into account the needs of the students. Initially, the dictionary was encoded in a tabular format, in a mixture of encodings, and subsequently rendered in HTML. The paper first discusses the conversion of the dictionary into XML, into an encoding that complies with the Text Encoding Initiative (TEI) Guidelines. The conversion into such an encoding validates, enriches, explicates and standardises the structure of the dictionary, thus making it more usable for further development and linguistically oriented research. We also present the current Web implementation of the dictionary, which offers full text search and a tool for practising inflected parts of speech. The paper gives an overview of related research, i.e. other XML oriented Web dictionaries of Slovene and East Asian languages and presents planned developments, i.e. the inclusion of the dictionary into the Reading Tutor program.",2004
hintz-2016-data,https://aclanthology.org/N16-2006,0,,,,,,,"Data-driven Paraphrasing and Stylistic Harmonization. This thesis proposal outlines the use of unsupervised data-driven methods for paraphrasing tasks. We motivate the development of knowledge-free methods at the guiding use case of multi-document summarization, which requires a domain-adaptable system for both the detection and generation of sentential paraphrases. First, we define a number of guiding research questions that will be addressed in the scope of this thesis. We continue to present ongoing work in unsupervised lexical substitution. An existing supervised approach is first adapted to a new language and dataset. We observe that supervised lexical substitution relies heavily on lexical semantic resources, and present an approach to overcome this dependency. We describe a method for unsupervised relation extraction, which we aim to leverage in lexical substitution as a replacement for knowledge-based resources.",Data-driven Paraphrasing and Stylistic Harmonization,"This thesis proposal outlines the use of unsupervised data-driven methods for paraphrasing tasks. We motivate the development of knowledge-free methods at the guiding use case of multi-document summarization, which requires a domain-adaptable system for both the detection and generation of sentential paraphrases. First, we define a number of guiding research questions that will be addressed in the scope of this thesis. We continue to present ongoing work in unsupervised lexical substitution. An existing supervised approach is first adapted to a new language and dataset. We observe that supervised lexical substitution relies heavily on lexical semantic resources, and present an approach to overcome this dependency. We describe a method for unsupervised relation extraction, which we aim to leverage in lexical substitution as a replacement for knowledge-based resources.",Data-driven Paraphrasing and Stylistic Harmonization,"This thesis proposal outlines the use of unsupervised data-driven methods for paraphrasing tasks. We motivate the development of knowledge-free methods at the guiding use case of multi-document summarization, which requires a domain-adaptable system for both the detection and generation of sentential paraphrases. First, we define a number of guiding research questions that will be addressed in the scope of this thesis. We continue to present ongoing work in unsupervised lexical substitution. An existing supervised approach is first adapted to a new language and dataset. We observe that supervised lexical substitution relies heavily on lexical semantic resources, and present an approach to overcome this dependency. We describe a method for unsupervised relation extraction, which we aim to leverage in lexical substitution as a replacement for knowledge-based resources.","This work has been supported by the German Research Foundation as part of the Research Training Group ""Adaptive Preparation of Information from Heterogeneous Sources"" (AIPHES) under grant No. GRK 1994/1.","Data-driven Paraphrasing and Stylistic Harmonization. This thesis proposal outlines the use of unsupervised data-driven methods for paraphrasing tasks. We motivate the development of knowledge-free methods at the guiding use case of multi-document summarization, which requires a domain-adaptable system for both the detection and generation of sentential paraphrases. First, we define a number of guiding research questions that will be addressed in the scope of this thesis. 
We continue to present ongoing work in unsupervised lexical substitution. An existing supervised approach is first adapted to a new language and dataset. We observe that supervised lexical substitution relies heavily on lexical semantic resources, and present an approach to overcome this dependency. We describe a method for unsupervised relation extraction, which we aim to leverage in lexical substitution as a replacement for knowledge-based resources.",2016
lenci-etal-1999-fame,https://aclanthology.org/W99-0407,0,,,,,,,"FAME: a Functional Annotation Meta-scheme for multi-modal and multi-lingual Parsing Evaluation. The paper describes FAME, a functional annotation meta-scheme for comparison and evaluation of existing syntactic annotation schemes, intended to be used as a flexible yardstick in multilingual and multi-modal parser evaluation campaigns. We show that FAME complies with a variety of non-trivial methodological requirements, and has the potential for being effectively used as an ""interlingua"" between different syntactic representation formats.",{FAME}: a Functional Annotation Meta-scheme for multi-modal and multi-lingual Parsing Evaluation,"The paper describes FAME, a functional annotation meta-scheme for comparison and evaluation of existing syntactic annotation schemes, intended to be used as a flexible yardstick in multilingual and multi-modal parser evaluation campaigns. We show that FAME complies with a variety of non-trivial methodological requirements, and has the potential for being effectively used as an ""interlingua"" between different syntactic representation formats.",FAME: a Functional Annotation Meta-scheme for multi-modal and multi-lingual Parsing Evaluation,"The paper describes FAME, a functional annotation meta-scheme for comparison and evaluation of existing syntactic annotation schemes, intended to be used as a flexible yardstick in multilingual and multi-modal parser evaluation campaigns. We show that FAME complies with a variety of non-trivial methodological requirements, and has the potential for being effectively used as an ""interlingua"" between different syntactic representation formats.",,"FAME: a Functional Annotation Meta-scheme for multi-modal and multi-lingual Parsing Evaluation. The paper describes FAME, a functional annotation meta-scheme for comparison and evaluation of existing syntactic annotation schemes, intended to be used as a flexible yardstick in multilingual and multi-modal parser evaluation campaigns. We show that FAME complies with a variety of non-trivial methodological requirements, and has the potential for being effectively used as an ""interlingua"" between different syntactic representation formats.",1999
qi-etal-2018-universal,https://aclanthology.org/K18-2016,0,,,,,,,"Universal Dependency Parsing from Scratch. This paper describes Stanford's system at the CoNLL 2018 UD Shared Task. We introduce a complete neural pipeline system that takes raw text as input, and performs all tasks required by the shared task, ranging from tokenization and sentence segmentation, to POS tagging and dependency parsing. Our single system submission achieved very competitive performance on big treebanks. Moreover, after fixing an unfortunate bug, our corrected system would have placed the 2nd, 1st, and 3rd on the official evaluation metrics LAS, MLAS, and BLEX, and would have outperformed all submission systems on low-resource treebank categories on all metrics by a large margin. We further show the effectiveness of different model components through extensive ablation studies.",{U}niversal {D}ependency Parsing from Scratch,"This paper describes Stanford's system at the CoNLL 2018 UD Shared Task. We introduce a complete neural pipeline system that takes raw text as input, and performs all tasks required by the shared task, ranging from tokenization and sentence segmentation, to POS tagging and dependency parsing. Our single system submission achieved very competitive performance on big treebanks. Moreover, after fixing an unfortunate bug, our corrected system would have placed the 2nd, 1st, and 3rd on the official evaluation metrics LAS, MLAS, and BLEX, and would have outperformed all submission systems on low-resource treebank categories on all metrics by a large margin. We further show the effectiveness of different model components through extensive ablation studies.",Universal Dependency Parsing from Scratch,"This paper describes Stanford's system at the CoNLL 2018 UD Shared Task. We introduce a complete neural pipeline system that takes raw text as input, and performs all tasks required by the shared task, ranging from tokenization and sentence segmentation, to POS tagging and dependency parsing. Our single system submission achieved very competitive performance on big treebanks. Moreover, after fixing an unfortunate bug, our corrected system would have placed the 2nd, 1st, and 3rd on the official evaluation metrics LAS, MLAS, and BLEX, and would have outperformed all submission systems on low-resource treebank categories on all metrics by a large margin. We further show the effectiveness of different model components through extensive ablation studies.",,"Universal Dependency Parsing from Scratch. This paper describes Stanford's system at the CoNLL 2018 UD Shared Task. We introduce a complete neural pipeline system that takes raw text as input, and performs all tasks required by the shared task, ranging from tokenization and sentence segmentation, to POS tagging and dependency parsing. Our single system submission achieved very competitive performance on big treebanks. Moreover, after fixing an unfortunate bug, our corrected system would have placed the 2nd, 1st, and 3rd on the official evaluation metrics LAS, MLAS, and BLEX, and would have outperformed all submission systems on low-resource treebank categories on all metrics by a large margin. We further show the effectiveness of different model components through extensive ablation studies.",2018
sirai-1992-syntactic,https://aclanthology.org/C92-4172,0,,,,,,,"Syntactic Constraints on Relativization in Japanese. This paper discusses the formalization of relative clauses in Japanese based on JPSG framework. We characterize them as adjuncts to nouns, and formalize them in terms of constraints among grammatical features. Furthermore, we claim that there is a constraint on the number of slash elements and show the supporting facts.",Syntactic Constraints on Relativization in {J}apanese,"This paper discusses the formalization of relative clauses in Japanese based on JPSG framework. We characterize them as adjuncts to nouns, and formalize them in terms of constraints among grammatical features. Furthermore, we claim that there is a constraint on the number of slash elements and show the supporting facts.",Syntactic Constraints on Relativization in Japanese,"This paper discusses the formalization of relative clauses in Japanese based on JPSG framework. We characterize them as adjuncts to nouns, and formalize them in terms of constraints among grammatical features. Furthermore, we claim that there is a constraint on the number of slash elements and show the supporting facts.","Acknowledgments. We are grateful to Dr. Takao Gunji, Dr. Kôiti Hasida and other members of the JPSG working group at ICOT for discussion. And we thank Dr. Phillip Morrow for proofreading.","Syntactic Constraints on Relativization in Japanese. This paper discusses the formalization of relative clauses in Japanese based on JPSG framework. We characterize them as adjuncts to nouns, and formalize them in terms of constraints among grammatical features. Furthermore, we claim that there is a constraint on the number of slash elements and show the supporting facts.",1992
yang-1999-towards,https://aclanthology.org/1999.mtsummit-1.58,0,,,,,,,"Towards the automatic acquisition of lexical selection rules. This paper is a study of a certain type of collocations and implication and application to acquisition of lexical selection rules in transfer-approach MT systems. Collocations reveal the co-occurrence possibilities of linguistic units in one language, which often require lexical selection rules to enhance the natural flow and clarity of MT output. The study presents an automatic acquisition and human verification process to acquire collocations and suggest possible candidates for lexical selection rules. The mechanism has been used in the development and enhancement of the Chinese-English and Japanese-English MT systems, and can be easily adapted to other language pairs. Future work includes expanding its usage to more language pairs and furthering its application to MT customers.",Towards the automatic acquisition of lexical selection rules,"This paper is a study of a certain type of collocations and implication and application to acquisition of lexical selection rules in transfer-approach MT systems. Collocations reveal the co-occurrence possibilities of linguistic units in one language, which often require lexical selection rules to enhance the natural flow and clarity of MT output. The study presents an automatic acquisition and human verification process to acquire collocations and suggest possible candidates for lexical selection rules. The mechanism has been used in the development and enhancement of the Chinese-English and Japanese-English MT systems, and can be easily adapted to other language pairs. Future work includes expanding its usage to more language pairs and furthering its application to MT customers.",Towards the automatic acquisition of lexical selection rules,"This paper is a study of a certain type of collocations and implication and application to acquisition of lexical selection rules in transfer-approach MT systems. Collocations reveal the co-occurrence possibilities of linguistic units in one language, which often require lexical selection rules to enhance the natural flow and clarity of MT output. The study presents an automatic acquisition and human verification process to acquire collocations and suggest possible candidates for lexical selection rules. The mechanism has been used in the development and enhancement of the Chinese-English and Japanese-English MT systems, and can be easily adapted to other language pairs. Future work includes expanding its usage to more language pairs and furthering its application to MT customers.","The work was initiated for the Chinese-English MT system development, which has been supported in part by NAIC (National Air Force Intelligence Center). We thank Dale Bostad of NAIC for his continuous support. The SYSTRAN Chinese and Japanese development groups have contributed to the experiment and evaluation of the process. Many thanks to my colleagues Elke Lange and Dan Roffee for reviewing the paper and anonymous MT Summit VII reviewers for their helpful comments.","Towards the automatic acquisition of lexical selection rules. This paper is a study of a certain type of collocations and implication and application to acquisition of lexical selection rules in transfer-approach MT systems. Collocations reveal the co-occurrence possibilities of linguistic units in one language, which often require lexical selection rules to enhance the natural flow and clarity of MT output. 
The study presents an automatic acquisition and human verification process to acquire collocations and suggest possible candidates for lexical selection rules. The mechanism has been used in the development and enhancement of the Chinese-English and Japanese-English MT systems, and can be easily adapted to other language pairs. Future work includes expanding its usage to more language pairs and furthering its application to MT customers.",1999
ruckle-etal-2021-adapterdrop,https://aclanthology.org/2021.emnlp-main.626,0,,,,,,,"AdapterDrop: On the Efficiency of Adapters in Transformers. Transformer models are expensive to fine-tune, slow for inference, and have large storage requirements. Recent approaches tackle these shortcomings by training smaller models, dynamically reducing the model size, and by training lightweight adapters. In this paper, we propose AdapterDrop, removing adapters from lower transformer layers during training and inference, which incorporates concepts from all three directions. We show that AdapterDrop can dynamically reduce the computational overhead when performing inference over multiple tasks simultaneously, with minimal decrease in task performances. We further prune adapters from AdapterFusion, which improves the inference efficiency while maintaining the task performances entirely.",{AdapterDrop}: {O}n the Efficiency of Adapters in Transformers,"Transformer models are expensive to fine-tune, slow for inference, and have large storage requirements. Recent approaches tackle these shortcomings by training smaller models, dynamically reducing the model size, and by training lightweight adapters. In this paper, we propose AdapterDrop, removing adapters from lower transformer layers during training and inference, which incorporates concepts from all three directions. We show that AdapterDrop can dynamically reduce the computational overhead when performing inference over multiple tasks simultaneously, with minimal decrease in task performances. We further prune adapters from AdapterFusion, which improves the inference efficiency while maintaining the task performances entirely.",AdapterDrop: On the Efficiency of Adapters in Transformers,"Transformer models are expensive to fine-tune, slow for inference, and have large storage requirements. Recent approaches tackle these shortcomings by training smaller models, dynamically reducing the model size, and by training lightweight adapters. In this paper, we propose AdapterDrop, removing adapters from lower transformer layers during training and inference, which incorporates concepts from all three directions. We show that AdapterDrop can dynamically reduce the computational overhead when performing inference over multiple tasks simultaneously, with minimal decrease in task performances. We further prune adapters from AdapterFusion, which improves the inference efficiency while maintaining the task performances entirely.",This work has received financial support from multiple sources. (1) The German Federal Ministry of,"AdapterDrop: On the Efficiency of Adapters in Transformers. Transformer models are expensive to fine-tune, slow for inference, and have large storage requirements. Recent approaches tackle these shortcomings by training smaller models, dynamically reducing the model size, and by training lightweight adapters. In this paper, we propose AdapterDrop, removing adapters from lower transformer layers during training and inference, which incorporates concepts from all three directions. We show that AdapterDrop can dynamically reduce the computational overhead when performing inference over multiple tasks simultaneously, with minimal decrease in task performances. We further prune adapters from AdapterFusion, which improves the inference efficiency while maintaining the task performances entirely.",2021
hutchins-2012-obituary,https://aclanthology.org/J12-3001,0,,,,,,,"Obituary: Victor H. Yngve. (MIT), as editor of its first journal, as designer and developer of the first non-numerical programming language (COMIT), and as an influential contributor to linguistic theory. While still completing his Ph.D. on cosmic ray physics at the University of Chicago during 1950-1953, Yngve had an idea for using the newly invented computers to translate languages. He contemplated building a translation machine based on simple dictionary lookup. At this time he knew nothing of the earlier speculations of Warren Weaver and others (Hutchins 1997). Then during a visit to Claude Shannon at Bell Telephone Laboratories in early 1952 he heard about a conference on machine translation to be held at MIT in June of that year. He attended the opening public meeting and participated in conference discussions, and then, after Bar-Hillel's departure from MIT, he was appointed in July 1953 by Jerome Wiesner at the Research Laboratory for Electronics (RLE) to lead the MT research effort there. (For a retrospective survey of his MT research activities see Yngve [2000].) Yngve, along with many others at the time, deprecated the premature publicity around the Georgetown-IBM system demonstrated in January 1954. Yngve was appalled to see research of such a limited nature reported in newspapers; his background in physics required experiments to be carefully planned, with their assumptions made plain, and properly tested and reviewed by other researchers. He was determined to set the new field of MT on a proper scientific course. The first step was a journal for the field, to be named Mechanical Translation-the field became ""machine translation"" in later years. He found a collaborator for the journal in William N. Locke of the MIT Modern Languages department. The aim was to provide a forum for information about what research was going on in the form of abstracts, and then for peer-reviewed articles. The first issue appeared in March 1954. Yngve's first experiments at MIT in October 1953 were an implementation of his earlier ideas on word-for-word translation. The results of translating from German were published in the collection edited by Locke and Booth (Yngve 1955b). One example of output began: Die CONVINCINGe CRITIQUE des CLASSICALen IDEA-OF-PROBABILITY IS eine der REMARKABLEen WORKS des AUTHORs. Er HAS BOTHen LAWe der GREATen NUMBERen ein DOUBLEes TO SHOWen: (1) wie sie IN seinem SYSTEM TO INTERPRETen ARE, (2) THAT sie THROUGH THISe INTERPRETATION NOT den CHARACTER von NOT-TRIVIALen DEMONSTRABLE PROPOSITIONen LOSen. . .",{O}bituary: Victor {H}. Yngve,"(MIT), as editor of its first journal, as designer and developer of the first non-numerical programming language (COMIT), and as an influential contributor to linguistic theory. While still completing his Ph.D. on cosmic ray physics at the University of Chicago during 1950-1953, Yngve had an idea for using the newly invented computers to translate languages. He contemplated building a translation machine based on simple dictionary lookup. At this time he knew nothing of the earlier speculations of Warren Weaver and others (Hutchins 1997). Then during a visit to Claude Shannon at Bell Telephone Laboratories in early 1952 he heard about a conference on machine translation to be held at MIT in June of that year. 
He attended the opening public meeting and participated in conference discussions, and then, after Bar-Hillel's departure from MIT, he was appointed in July 1953 by Jerome Wiesner at the Research Laboratory for Electronics (RLE) to lead the MT research effort there. (For a retrospective survey of his MT research activities see Yngve [2000].) Yngve, along with many others at the time, deprecated the premature publicity around the Georgetown-IBM system demonstrated in January 1954. Yngve was appalled to see research of such a limited nature reported in newspapers; his background in physics required experiments to be carefully planned, with their assumptions made plain, and properly tested and reviewed by other researchers. He was determined to set the new field of MT on a proper scientific course. The first step was a journal for the field, to be named Mechanical Translation-the field became ""machine translation"" in later years. He found a collaborator for the journal in William N. Locke of the MIT Modern Languages department. The aim was to provide a forum for information about what research was going on in the form of abstracts, and then for peer-reviewed articles. The first issue appeared in March 1954. Yngve's first experiments at MIT in October 1953 were an implementation of his earlier ideas on word-for-word translation. The results of translating from German were published in the collection edited by Locke and Booth (Yngve 1955b). One example of output began: Die CONVINCINGe CRITIQUE des CLASSICALen IDEA-OF-PROBABILITY IS eine der REMARKABLEen WORKS des AUTHORs. Er HAS BOTHen LAWe der GREATen NUMBERen ein DOUBLEes TO SHOWen: (1) wie sie IN seinem SYSTEM TO INTERPRETen ARE, (2) THAT sie THROUGH THISe INTERPRETATION NOT den CHARACTER von NOT-TRIVIALen DEMONSTRABLE PROPOSITIONen LOSen. . .",Obituary: Victor H. Yngve,"(MIT), as editor of its first journal, as designer and developer of the first non-numerical programming language (COMIT), and as an influential contributor to linguistic theory. While still completing his Ph.D. on cosmic ray physics at the University of Chicago during 1950-1953, Yngve had an idea for using the newly invented computers to translate languages. He contemplated building a translation machine based on simple dictionary lookup. At this time he knew nothing of the earlier speculations of Warren Weaver and others (Hutchins 1997). Then during a visit to Claude Shannon at Bell Telephone Laboratories in early 1952 he heard about a conference on machine translation to be held at MIT in June of that year. He attended the opening public meeting and participated in conference discussions, and then, after Bar-Hillel's departure from MIT, he was appointed in July 1953 by Jerome Wiesner at the Research Laboratory for Electronics (RLE) to lead the MT research effort there. (For a retrospective survey of his MT research activities see Yngve [2000].) Yngve, along with many others at the time, deprecated the premature publicity around the Georgetown-IBM system demonstrated in January 1954. Yngve was appalled to see research of such a limited nature reported in newspapers; his background in physics required experiments to be carefully planned, with their assumptions made plain, and properly tested and reviewed by other researchers. He was determined to set the new field of MT on a proper scientific course. The first step was a journal for the field, to be named Mechanical Translation-the field became ""machine translation"" in later years. 
He found a collaborator for the journal in William N. Locke of the MIT Modern Languages department. The aim was to provide a forum for information about what research was going on in the form of abstracts, and then for peer-reviewed articles. The first issue appeared in March 1954. Yngve's first experiments at MIT in October 1953 were an implementation of his earlier ideas on word-for-word translation. The results of translating from German were published in the collection edited by Locke and Booth (Yngve 1955b). One example of output began: Die CONVINCINGe CRITIQUE des CLASSICALen IDEA-OF-PROBABILITY IS eine der REMARKABLEen WORKS des AUTHORs. Er HAS BOTHen LAWe der GREATen NUMBERen ein DOUBLEes TO SHOWen: (1) wie sie IN seinem SYSTEM TO INTERPRETen ARE, (2) THAT sie THROUGH THISe INTERPRETATION NOT den CHARACTER von NOT-TRIVIALen DEMONSTRABLE PROPOSITIONen LOSen. . .",,"Obituary: Victor H. Yngve. (MIT), as editor of its first journal, as designer and developer of the first non-numerical programming language (COMIT), and as an influential contributor to linguistic theory. While still completing his Ph.D. on cosmic ray physics at the University of Chicago during 1950-1953, Yngve had an idea for using the newly invented computers to translate languages. He contemplated building a translation machine based on simple dictionary lookup. At this time he knew nothing of the earlier speculations of Warren Weaver and others (Hutchins 1997). Then during a visit to Claude Shannon at Bell Telephone Laboratories in early 1952 he heard about a conference on machine translation to be held at MIT in June of that year. He attended the opening public meeting and participated in conference discussions, and then, after Bar-Hillel's departure from MIT, he was appointed in July 1953 by Jerome Wiesner at the Research Laboratory for Electronics (RLE) to lead the MT research effort there. (For a retrospective survey of his MT research activities see Yngve [2000].) Yngve, along with many others at the time, deprecated the premature publicity around the Georgetown-IBM system demonstrated in January 1954. Yngve was appalled to see research of such a limited nature reported in newspapers; his background in physics required experiments to be carefully planned, with their assumptions made plain, and properly tested and reviewed by other researchers. He was determined to set the new field of MT on a proper scientific course. The first step was a journal for the field, to be named Mechanical Translation-the field became ""machine translation"" in later years. He found a collaborator for the journal in William N. Locke of the MIT Modern Languages department. The aim was to provide a forum for information about what research was going on in the form of abstracts, and then for peer-reviewed articles. The first issue appeared in March 1954. Yngve's first experiments at MIT in October 1953 were an implementation of his earlier ideas on word-for-word translation. The results of translating from German were published in the collection edited by Locke and Booth (Yngve 1955b). One example of output began: Die CONVINCINGe CRITIQUE des CLASSICALen IDEA-OF-PROBABILITY IS eine der REMARKABLEen WORKS des AUTHORs. Er HAS BOTHen LAWe der GREATen NUMBERen ein DOUBLEes TO SHOWen: (1) wie sie IN seinem SYSTEM TO INTERPRETen ARE, (2) THAT sie THROUGH THISe INTERPRETATION NOT den CHARACTER von NOT-TRIVIALen DEMONSTRABLE PROPOSITIONen LOSen. . .",2012
zue-1989-acoustic,https://aclanthology.org/H89-1025,0,,,,,,,"Acoustic-Phonetics Based Speech Recognition. The objective of this project is to develop a robust and high-performance speech recognition system using a segment-based approach to phonetic recognition. The recognition system will eventually be integrated with natural language processing to achieve spoken language understanding.
Developed a phonetic recognition front-end and achieved 77% and 71% classification accuracy under speaker-dependent and -independent conditions, respectively, using a set of 38 context-independent models.",Acoustic-Phonetics Based Speech Recognition,"The objective of this project is to develop a robust and high-performance speech recognition system using a segment-based approach to phonetic recognition. The recognition system will eventually be integrated with natural language processing to achieve spoken language understanding.
Developed a phonetic recognition front-end and achieved 77% and 71% classification accuracy under speaker-dependent and -independent conditions, respectively, using a set of 38 context-independent models.",Acoustic-Phonetics Based Speech Recognition,"The objective of this project is to develop a robust and high-performance speech recognition system using a segment-based approach to phonetic recognition. The recognition system will eventually be integrated with natural language processing to achieve spoken language understanding.
Developed a phonetic recognition front-end and achieved 77% and 71% classification accuracy under speaker-dependent and -independent conditions, respectively, using a set of 38 context-independent models.",,"Acoustic-Phonetics Based Speech Recognition. The objective of this project is to develop a robust and high-performance speech recognition system using a segment-based approach to phonetic recognition. The recognition system will eventually be integrated with natural language processing to achieve spoken language understanding.
Developed a phonetic recognition front-end and achieved 77% and 71% classification accuracy under speaker-dependent and -independent conditions, respectively, using a set of 38 context-independent models.",1989
riedl-biemann-2012-sweeping,https://aclanthology.org/W12-0703,0,,,,,,,"Sweeping through the Topic Space: Bad luck? Roll again!. Topic Models (TM) such as Latent Dirichlet Allocation (LDA) are increasingly used in Natural Language Processing applications. At this, the model parameters and the influence of randomized sampling and inference are rarely examined-usually, the recommendations from the original papers are adopted. In this paper, we examine the parameter space of LDA topic models with respect to the application of Text Segmentation (TS), specifically targeting error rates and their variance across different runs. We find that the recommended settings result in error rates far from optimal for our application. We show substantial variance in the results for different runs of model estimation and inference, and give recommendations for increasing the robustness and stability of topic models. Running the inference step several times and selecting the last topic ID assigned per token, shows considerable improvements. Similar improvements are achieved with the mode method: We store all assigned topic IDs during each inference iteration step and select the most frequent topic ID assigned to each word. These recommendations do not only apply to TS, but are generic enough to transfer to other applications.",Sweeping through the Topic Space: Bad luck? Roll again!,"Topic Models (TM) such as Latent Dirichlet Allocation (LDA) are increasingly used in Natural Language Processing applications. At this, the model parameters and the influence of randomized sampling and inference are rarely examined-usually, the recommendations from the original papers are adopted. In this paper, we examine the parameter space of LDA topic models with respect to the application of Text Segmentation (TS), specifically targeting error rates and their variance across different runs. We find that the recommended settings result in error rates far from optimal for our application. We show substantial variance in the results for different runs of model estimation and inference, and give recommendations for increasing the robustness and stability of topic models. Running the inference step several times and selecting the last topic ID assigned per token, shows considerable improvements. Similar improvements are achieved with the mode method: We store all assigned topic IDs during each inference iteration step and select the most frequent topic ID assigned to each word. These recommendations do not only apply to TS, but are generic enough to transfer to other applications.",Sweeping through the Topic Space: Bad luck? Roll again!,"Topic Models (TM) such as Latent Dirichlet Allocation (LDA) are increasingly used in Natural Language Processing applications. At this, the model parameters and the influence of randomized sampling and inference are rarely examined-usually, the recommendations from the original papers are adopted. In this paper, we examine the parameter space of LDA topic models with respect to the application of Text Segmentation (TS), specifically targeting error rates and their variance across different runs. We find that the recommended settings result in error rates far from optimal for our application. We show substantial variance in the results for different runs of model estimation and inference, and give recommendations for increasing the robustness and stability of topic models. Running the inference step several times and selecting the last topic ID assigned per token, shows considerable improvements. 
Similar improvements are achieved with the mode method: We store all assigned topic IDs during each inference iteration step and select the most frequent topic ID assigned to each word. These recommendations do not only apply to TS, but are generic enough to transfer to other applications.","This work has been supported by the Hessian research excellence program ""Landes-Offensive zur Entwicklung Wissenschaftlich-ökonomischer Exzellenz"" (LOEWE) as part of the research center ""Digital Humanities"". We would also thank the anonymous reviewers for their comments, which greatly helped to improve the paper.","Sweeping through the Topic Space: Bad luck? Roll again!. Topic Models (TM) such as Latent Dirichlet Allocation (LDA) are increasingly used in Natural Language Processing applications. At this, the model parameters and the influence of randomized sampling and inference are rarely examined-usually, the recommendations from the original papers are adopted. In this paper, we examine the parameter space of LDA topic models with respect to the application of Text Segmentation (TS), specifically targeting error rates and their variance across different runs. We find that the recommended settings result in error rates far from optimal for our application. We show substantial variance in the results for different runs of model estimation and inference, and give recommendations for increasing the robustness and stability of topic models. Running the inference step several times and selecting the last topic ID assigned per token, shows considerable improvements. Similar improvements are achieved with the mode method: We store all assigned topic IDs during each inference iteration step and select the most frequent topic ID assigned to each word. These recommendations do not only apply to TS, but are generic enough to transfer to other applications.",2012
liao-grishman-2011-using,https://aclanthology.org/I11-1080,0,,,,,,,"Using Prediction from Sentential Scope to Build a Pseudo Co-Testing Learner for Event Extraction. Event extraction involves the identification of instances of a type of event, along with their attributes and participants. Developing a training corpus by annotating events in text is very labor intensive, and so selecting informative instances to annotate can save a great deal of manual work. We present an active learning (AL) strategy, pseudo co-testing, based on one view from a classifier aiming to solve the original problem of event extraction, and another view from a classifier aiming to solve a coarser granularity task. As the second classifier can provide more graded matching from a wider scope, we can build a set of pseudocontention-points which are very informative, and can speed up the AL process. Moreover, we incorporate multiple selection criteria into the pseudo cotesting, seeking training examples that are informative, representative, and varied. Experiments show that pseudo co-testing can reduce annotation labor by 81%; incorporating multiple selection criteria reduces the labor by a further 7%.",Using Prediction from Sentential Scope to Build a Pseudo Co-Testing Learner for Event Extraction,"Event extraction involves the identification of instances of a type of event, along with their attributes and participants. Developing a training corpus by annotating events in text is very labor intensive, and so selecting informative instances to annotate can save a great deal of manual work. We present an active learning (AL) strategy, pseudo co-testing, based on one view from a classifier aiming to solve the original problem of event extraction, and another view from a classifier aiming to solve a coarser granularity task. As the second classifier can provide more graded matching from a wider scope, we can build a set of pseudocontention-points which are very informative, and can speed up the AL process. Moreover, we incorporate multiple selection criteria into the pseudo cotesting, seeking training examples that are informative, representative, and varied. Experiments show that pseudo co-testing can reduce annotation labor by 81%; incorporating multiple selection criteria reduces the labor by a further 7%.",Using Prediction from Sentential Scope to Build a Pseudo Co-Testing Learner for Event Extraction,"Event extraction involves the identification of instances of a type of event, along with their attributes and participants. Developing a training corpus by annotating events in text is very labor intensive, and so selecting informative instances to annotate can save a great deal of manual work. We present an active learning (AL) strategy, pseudo co-testing, based on one view from a classifier aiming to solve the original problem of event extraction, and another view from a classifier aiming to solve a coarser granularity task. As the second classifier can provide more graded matching from a wider scope, we can build a set of pseudocontention-points which are very informative, and can speed up the AL process. Moreover, we incorporate multiple selection criteria into the pseudo cotesting, seeking training examples that are informative, representative, and varied. 
Experiments show that pseudo co-testing can reduce annotation labor by 81%; incorporating multiple selection criteria reduces the labor by a further 7%.","Supported by the Intelligence Advanced Research Projects Activity (IARPA) via Air Force Research Laboratory (AFRL) contract number FA8650-10-C-7058. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, AFRL, or the U.S. Government.","Using Prediction from Sentential Scope to Build a Pseudo Co-Testing Learner for Event Extraction. Event extraction involves the identification of instances of a type of event, along with their attributes and participants. Developing a training corpus by annotating events in text is very labor intensive, and so selecting informative instances to annotate can save a great deal of manual work. We present an active learning (AL) strategy, pseudo co-testing, based on one view from a classifier aiming to solve the original problem of event extraction, and another view from a classifier aiming to solve a coarser granularity task. As the second classifier can provide more graded matching from a wider scope, we can build a set of pseudocontention-points which are very informative, and can speed up the AL process. Moreover, we incorporate multiple selection criteria into the pseudo cotesting, seeking training examples that are informative, representative, and varied. Experiments show that pseudo co-testing can reduce annotation labor by 81%; incorporating multiple selection criteria reduces the labor by a further 7%.",2011
chiang-etal-2008-decomposability,https://aclanthology.org/D08-1064,0,,,,,,,"Decomposability of Translation Metrics for Improved Evaluation and Efficient Algorithms. BLEU is the de facto standard for evaluation and development of statistical machine translation systems. We describe three real-world situations involving comparisons between different versions of the same systems where one can obtain improvements in BLEU scores that are questionable or even absurd. These situations arise because BLEU lacks the property of decomposability, a property which is also computationally convenient for various applications. We propose a very conservative modification to BLEU and a cross between BLEU and word error rate that address these issues while improving correlation with human judgments.",Decomposability of Translation Metrics for Improved Evaluation and Efficient Algorithms,"BLEU is the de facto standard for evaluation and development of statistical machine translation systems. We describe three real-world situations involving comparisons between different versions of the same systems where one can obtain improvements in BLEU scores that are questionable or even absurd. These situations arise because BLEU lacks the property of decomposability, a property which is also computationally convenient for various applications. We propose a very conservative modification to BLEU and a cross between BLEU and word error rate that address these issues while improving correlation with human judgments.",Decomposability of Translation Metrics for Improved Evaluation and Efficient Algorithms,"BLEU is the de facto standard for evaluation and development of statistical machine translation systems. We describe three real-world situations involving comparisons between different versions of the same systems where one can obtain improvements in BLEU scores that are questionable or even absurd. These situations arise because BLEU lacks the property of decomposability, a property which is also computationally convenient for various applications. We propose a very conservative modification to BLEU and a cross between BLEU and word error rate that address these issues while improving correlation with human judgments.","Our thanks go to Daniel Marcu for suggesting modifying the BLEU brevity penalty, and to Jonathan May and Kevin Knight for their insightful comments. This research was supported in part by DARPA grant HR0011-06-C-0022 under BBN Technologies subcontract 9500008412.","Decomposability of Translation Metrics for Improved Evaluation and Efficient Algorithms. BLEU is the de facto standard for evaluation and development of statistical machine translation systems. We describe three real-world situations involving comparisons between different versions of the same systems where one can obtain improvements in BLEU scores that are questionable or even absurd. These situations arise because BLEU lacks the property of decomposability, a property which is also computationally convenient for various applications. We propose a very conservative modification to BLEU and a cross between BLEU and word error rate that address these issues while improving correlation with human judgments.",2008
vacher-etal-2014-sweet,http://www.lrec-conf.org/proceedings/lrec2014/pdf/118_Paper.pdf,0,,,,,,,"The Sweet-Home speech and multimodal corpus for home automation interaction. Ambient Assisted Living aims at enhancing the quality of life of older and disabled people at home thanks to Smart Homes and Home Automation. However, many studies do not include tests in real settings, because data collection in this domain is very expensive and challenging and because of the few available data sets. The SWEET-HOME multimodal corpus is a dataset recorded in realistic conditions in DOMUS, a fully equipped Smart Home with microphones and home automation sensors, in which participants performed Activities of Daily living (ADL). This corpus is made of a multimodal subset, a French home automation speech subset recorded in Distant Speech conditions, and two interaction subsets, the first one being recorded by 16 persons without disabilities and the second one by 6 seniors and 5 visually impaired people. This corpus was used in studies related to ADL recognition, context aware interaction and distant speech recognition applied to home automation controlled through voice.",The Sweet-Home speech and multimodal corpus for home automation interaction,"Ambient Assisted Living aims at enhancing the quality of life of older and disabled people at home thanks to Smart Homes and Home Automation. However, many studies do not include tests in real settings, because data collection in this domain is very expensive and challenging and because of the few available data sets. The SWEET-HOME multimodal corpus is a dataset recorded in realistic conditions in DOMUS, a fully equipped Smart Home with microphones and home automation sensors, in which participants performed Activities of Daily living (ADL). This corpus is made of a multimodal subset, a French home automation speech subset recorded in Distant Speech conditions, and two interaction subsets, the first one being recorded by 16 persons without disabilities and the second one by 6 seniors and 5 visually impaired people. This corpus was used in studies related to ADL recognition, context aware interaction and distant speech recognition applied to home automation controlled through voice.",The Sweet-Home speech and multimodal corpus for home automation interaction,"Ambient Assisted Living aims at enhancing the quality of life of older and disabled people at home thanks to Smart Homes and Home Automation. However, many studies do not include tests in real settings, because data collection in this domain is very expensive and challenging and because of the few available data sets. The SWEET-HOME multimodal corpus is a dataset recorded in realistic conditions in DOMUS, a fully equipped Smart Home with microphones and home automation sensors, in which participants performed Activities of Daily living (ADL). This corpus is made of a multimodal subset, a French home automation speech subset recorded in Distant Speech conditions, and two interaction subsets, the first one being recorded by 16 persons without disabilities and the second one by 6 seniors and 5 visually impaired people. This corpus was used in studies related to ADL recognition, context aware interaction and distant speech recognition applied to home automation controlled through voice.","This work is part of the SWEET-HOME project funded by the French National Research Agency (Agence Nationale de la Recherche / ANR-09-VERS-011). 
The authors would like to thank the participants who accepted to perform the experiments. Thanks are extended to S. Humblot, S. Meignard, D. Guerin, C. Fontaine, D. Istrate, C. Roux and E. Elias for their support.","The Sweet-Home speech and multimodal corpus for home automation interaction. Ambient Assisted Living aims at enhancing the quality of life of older and disabled people at home thanks to Smart Homes and Home Automation. However, many studies do not include tests in real settings, because data collection in this domain is very expensive and challenging and because of the few available data sets. The SWEET-HOME multimodal corpus is a dataset recorded in realistic conditions in DOMUS, a fully equipped Smart Home with microphones and home automation sensors, in which participants performed Activities of Daily living (ADL). This corpus is made of a multimodal subset, a French home automation speech subset recorded in Distant Speech conditions, and two interaction subsets, the first one being recorded by 16 persons without disabilities and the second one by 6 seniors and 5 visually impaired people. This corpus was used in studies related to ADL recognition, context aware interaction and distant speech recognition applied to home automation controlled through voice.",2014
pallotta-etal-2007-user,https://aclanthology.org/P07-1127,0,,,,,,,"User Requirements Analysis for Meeting Information Retrieval Based on Query Elicitation. We present a user requirements study for Question Answering on meeting records that assesses the difficulty of users questions in terms of what type of knowledge is required in order to provide the correct answer. We grounded our work on the empirical analysis of elicited user queries. We found that the majority of elicited queries (around 60%) pertain to argumentative processes and outcomes. Our analysis also suggests that standard keyword-based Information Retrieval can only deal successfully with less than 20% of the queries, and that it must be complemented with other types of metadata and inference.",User Requirements Analysis for Meeting Information Retrieval Based on Query Elicitation,"We present a user requirements study for Question Answering on meeting records that assesses the difficulty of users questions in terms of what type of knowledge is required in order to provide the correct answer. We grounded our work on the empirical analysis of elicited user queries. We found that the majority of elicited queries (around 60%) pertain to argumentative processes and outcomes. Our analysis also suggests that standard keyword-based Information Retrieval can only deal successfully with less than 20% of the queries, and that it must be complemented with other types of metadata and inference.",User Requirements Analysis for Meeting Information Retrieval Based on Query Elicitation,"We present a user requirements study for Question Answering on meeting records that assesses the difficulty of users questions in terms of what type of knowledge is required in order to provide the correct answer. We grounded our work on the empirical analysis of elicited user queries. We found that the majority of elicited queries (around 60%) pertain to argumentative processes and outcomes. Our analysis also suggests that standard keyword-based Information Retrieval can only deal successfully with less than 20% of the queries, and that it must be complemented with other types of metadata and inference.",We wish to thank Martin Rajman and Hatem Ghorbel for their constant and valuable feedback. This work has been partially supported by the Swiss National Science Foundation NCCR IM2 and by the SNSF grant no. 200021-116235.,"User Requirements Analysis for Meeting Information Retrieval Based on Query Elicitation. We present a user requirements study for Question Answering on meeting records that assesses the difficulty of users questions in terms of what type of knowledge is required in order to provide the correct answer. We grounded our work on the empirical analysis of elicited user queries. We found that the majority of elicited queries (around 60%) pertain to argumentative processes and outcomes. Our analysis also suggests that standard keyword-based Information Retrieval can only deal successfully with less than 20% of the queries, and that it must be complemented with other types of metadata and inference.",2007
hasan-etal-2006-reranking,https://aclanthology.org/W06-2606,0,,,,,,,"Reranking Translation Hypotheses Using Structural Properties. We investigate methods that add syntactically motivated features to a statistical machine translation system in a reranking framework. The goal is to analyze whether shallow parsing techniques help in identifying ungrammatical hypotheses. We show that improvements are possible by utilizing supertagging, lightweight dependency analysis, a link grammar parser and a maximum-entropy based chunk parser. Adding features to n-best lists and discriminatively training the system on a development set increases the BLEU score up to 0.7% on the test set.",Reranking Translation Hypotheses Using Structural Properties,"We investigate methods that add syntactically motivated features to a statistical machine translation system in a reranking framework. The goal is to analyze whether shallow parsing techniques help in identifying ungrammatical hypotheses. We show that improvements are possible by utilizing supertagging, lightweight dependency analysis, a link grammar parser and a maximum-entropy based chunk parser. Adding features to n-best lists and discriminatively training the system on a development set increases the BLEU score up to 0.7% on the test set.",Reranking Translation Hypotheses Using Structural Properties,"We investigate methods that add syntactically motivated features to a statistical machine translation system in a reranking framework. The goal is to analyze whether shallow parsing techniques help in identifying ungrammatical hypotheses. We show that improvements are possible by utilizing supertagging, lightweight dependency analysis, a link grammar parser and a maximum-entropy based chunk parser. Adding features to n-best lists and discriminatively training the system on a development set increases the BLEU score up to 0.7% on the test set.","This work has been partly funded by the European Union under the integrated project TC-Star (Technology and Corpora for Speech to Speech Translation, IST-2002-FP6-506738, http://www.tc-star.org), and by the R&D project TRAMES managed by Bertin Technologies as prime contractor and operated by the french DGA (Délégation Générale pour l'Armement).","Reranking Translation Hypotheses Using Structural Properties. We investigate methods that add syntactically motivated features to a statistical machine translation system in a reranking framework. The goal is to analyze whether shallow parsing techniques help in identifying ungrammatical hypotheses. We show that improvements are possible by utilizing supertagging, lightweight dependency analysis, a link grammar parser and a maximum-entropy based chunk parser. Adding features to n-best lists and discriminatively training the system on a development set increases the BLEU score up to 0.7% on the test set.",2006
gonzalez-martinez-2019-graphemic,https://aclanthology.org/W19-9001,0,,,,,,,"Graphemic ambiguous queries on Arabic-scripted historical corpora. Arabic script is a multi-layered orthographic system that consists of a base of archigraphemes, roughly equivalent to the traditional so-called rasm, with several layers of diacritics. The archigrapheme represents the smallest logical unit of Arabic script; it consists of the shared features between two or more graphemes, i.e., eliminating diacritics. Archigraphemes are to orthography what archiphonemes are to phonology. An archiphoneme is the abstract representation of two or more phonemes without their distinctive phonological features. For example, in Spanish, occlusive consonants loose their distinctive feature of sonority in syllabic coda position; the words adjetivo 'adjective' [aDxe'tiβo] and atleta 'athlete' [aD'leta] both shared an archiphoneme [D] (in careful speech) in their first syllable, corresponding to the phonemes /d/ and /t/ respectively. In some cases, the neutralisation of two phonemes may cause two words to be homophones. For example, vid 'vine' and bit 'bit' are both pronounced as [biD] . In paleo-orthographic Arabic script, consonant diacritics were not written down in all positions as it happens in modern Arabic script, where they are mandatory. Consequently, homographic letter blocks were quite common. An additional characteristic of early Arabic script is that graphemic or logical spaces between words did not exist: Arabic orthography preserved the ancient practice of scriptio continua, in which script tries to represent connected speech. Diacritics are signs placed in relation with the archigraphemic skeleton. From a functional point of view, there are two basic types of diacritics: a layer of consonant diacritics for differentiating graphemes and a second layer for vowels. In early script, diacritics are marked in a different colour from the one of the skeleton. Strokes were used for consonant diacritics, whereas dots were used for indicating vowels. In modern Arabic script, dots are instead used for consonant diacritics and they are mandatory. On the other hand, vowels are marked by different types of symbols and are usually optional. Unicode, the standard for digital encoding of language information, evolved from a typographic approach to language and its main concern is modern script. Typography is a technique to reproduce written language based on surface shape. As a consequence, it represents an obstacle for dealing with script from a linguistic point of view, since the same logical grapheme may be rendered using different glyphs. The main problems that arise are the following: 1. Only contemporary everyday use is covered, and that with a typographical approach: Unicode encodes multiple Arabic letters (archigraphemes + consonant diacritics) as single printing units. 2. Some calligraphic variants for the same letter were allowed to have separate Unicode characters. In practice, this means that a search for an Arabic word may yield nothing when typed in a Persian or an Urdu keyboard. This is also why you may find only a fraction of all the results when searching in an Arabic text. 3. There are currently no specialised tools that allow scholars to perform searches on Arabic historical orthography: archigraphemes. Additionally, in order to study early documents written in Arabic script, we need to have search tools that can handle continuous archigraphemic representation, i.e., Arabic script as a scripto continua. 
In collaboration with Thomas Milo from the Dutch company DecoType, we have developed a search utility that disambiguates and normalises Arabic text in real time and also allows the user to perform archigraphemic search on any Arabic-scripted text. The system is called Yakabikaj (traditional invocation protecting texts against bugs), and show the new perspectives it opens for research in the field of historical digital humanities for Arabic-scripted texts.",Graphemic ambiguous queries on {A}rabic-scripted historical corpora,"Arabic script is a multi-layered orthographic system that consists of a base of archigraphemes, roughly equivalent to the traditional so-called rasm, with several layers of diacritics. The archigrapheme represents the smallest logical unit of Arabic script; it consists of the shared features between two or more graphemes, i.e., eliminating diacritics. Archigraphemes are to orthography what archiphonemes are to phonology. An archiphoneme is the abstract representation of two or more phonemes without their distinctive phonological features. For example, in Spanish, occlusive consonants loose their distinctive feature of sonority in syllabic coda position; the words adjetivo 'adjective' [aDxe'tiβo] and atleta 'athlete' [aD'leta] both shared an archiphoneme [D] (in careful speech) in their first syllable, corresponding to the phonemes /d/ and /t/ respectively. In some cases, the neutralisation of two phonemes may cause two words to be homophones. For example, vid 'vine' and bit 'bit' are both pronounced as [biD] . In paleo-orthographic Arabic script, consonant diacritics were not written down in all positions as it happens in modern Arabic script, where they are mandatory. Consequently, homographic letter blocks were quite common. An additional characteristic of early Arabic script is that graphemic or logical spaces between words did not exist: Arabic orthography preserved the ancient practice of scriptio continua, in which script tries to represent connected speech. Diacritics are signs placed in relation with the archigraphemic skeleton. From a functional point of view, there are two basic types of diacritics: a layer of consonant diacritics for differentiating graphemes and a second layer for vowels. In early script, diacritics are marked in a different colour from the one of the skeleton. Strokes were used for consonant diacritics, whereas dots were used for indicating vowels. In modern Arabic script, dots are instead used for consonant diacritics and they are mandatory. On the other hand, vowels are marked by different types of symbols and are usually optional. Unicode, the standard for digital encoding of language information, evolved from a typographic approach to language and its main concern is modern script. Typography is a technique to reproduce written language based on surface shape. As a consequence, it represents an obstacle for dealing with script from a linguistic point of view, since the same logical grapheme may be rendered using different glyphs. The main problems that arise are the following: 1. Only contemporary everyday use is covered, and that with a typographical approach: Unicode encodes multiple Arabic letters (archigraphemes + consonant diacritics) as single printing units. 2. Some calligraphic variants for the same letter were allowed to have separate Unicode characters. In practice, this means that a search for an Arabic word may yield nothing when typed in a Persian or an Urdu keyboard. 
This is also why you may find only a fraction of all the results when searching in an Arabic text. 3. There are currently no specialised tools that allow scholars to perform searches on Arabic historical orthography: archigraphemes. Additionally, in order to study early documents written in Arabic script, we need to have search tools that can handle continuous archigraphemic representation, i.e., Arabic script as a scripto continua. In collaboration with Thomas Milo from the Dutch company DecoType, we have developed a search utility that disambiguates and normalises Arabic text in real time and also allows the user to perform archigraphemic search on any Arabic-scripted text. The system is called Yakabikaj (traditional invocation protecting texts against bugs), and show the new perspectives it opens for research in the field of historical digital humanities for Arabic-scripted texts.",Graphemic ambiguous queries on Arabic-scripted historical corpora,"Arabic script is a multi-layered orthographic system that consists of a base of archigraphemes, roughly equivalent to the traditional so-called rasm, with several layers of diacritics. The archigrapheme represents the smallest logical unit of Arabic script; it consists of the shared features between two or more graphemes, i.e., eliminating diacritics. Archigraphemes are to orthography what archiphonemes are to phonology. An archiphoneme is the abstract representation of two or more phonemes without their distinctive phonological features. For example, in Spanish, occlusive consonants loose their distinctive feature of sonority in syllabic coda position; the words adjetivo 'adjective' [aDxe'tiβo] and atleta 'athlete' [aD'leta] both shared an archiphoneme [D] (in careful speech) in their first syllable, corresponding to the phonemes /d/ and /t/ respectively. In some cases, the neutralisation of two phonemes may cause two words to be homophones. For example, vid 'vine' and bit 'bit' are both pronounced as [biD] . In paleo-orthographic Arabic script, consonant diacritics were not written down in all positions as it happens in modern Arabic script, where they are mandatory. Consequently, homographic letter blocks were quite common. An additional characteristic of early Arabic script is that graphemic or logical spaces between words did not exist: Arabic orthography preserved the ancient practice of scriptio continua, in which script tries to represent connected speech. Diacritics are signs placed in relation with the archigraphemic skeleton. From a functional point of view, there are two basic types of diacritics: a layer of consonant diacritics for differentiating graphemes and a second layer for vowels. In early script, diacritics are marked in a different colour from the one of the skeleton. Strokes were used for consonant diacritics, whereas dots were used for indicating vowels. In modern Arabic script, dots are instead used for consonant diacritics and they are mandatory. On the other hand, vowels are marked by different types of symbols and are usually optional. Unicode, the standard for digital encoding of language information, evolved from a typographic approach to language and its main concern is modern script. Typography is a technique to reproduce written language based on surface shape. As a consequence, it represents an obstacle for dealing with script from a linguistic point of view, since the same logical grapheme may be rendered using different glyphs. The main problems that arise are the following: 1. 
Only contemporary everyday use is covered, and that with a typographical approach: Unicode encodes multiple Arabic letters (archigraphemes + consonant diacritics) as single printing units. 2. Some calligraphic variants for the same letter were allowed to have separate Unicode characters. In practice, this means that a search for an Arabic word may yield nothing when typed in a Persian or an Urdu keyboard. This is also why you may find only a fraction of all the results when searching in an Arabic text. 3. There are currently no specialised tools that allow scholars to perform searches on Arabic historical orthography: archigraphemes. Additionally, in order to study early documents written in Arabic script, we need to have search tools that can handle continuous archigraphemic representation, i.e., Arabic script as a scripto continua. In collaboration with Thomas Milo from the Dutch company DecoType, we have developed a search utility that disambiguates and normalises Arabic text in real time and also allows the user to perform archigraphemic search on any Arabic-scripted text. The system is called Yakabikaj (traditional invocation protecting texts against bugs), and show the new perspectives it opens for research in the field of historical digital humanities for Arabic-scripted texts.",,"Graphemic ambiguous queries on Arabic-scripted historical corpora. Arabic script is a multi-layered orthographic system that consists of a base of archigraphemes, roughly equivalent to the traditional so-called rasm, with several layers of diacritics. The archigrapheme represents the smallest logical unit of Arabic script; it consists of the shared features between two or more graphemes, i.e., eliminating diacritics. Archigraphemes are to orthography what archiphonemes are to phonology. An archiphoneme is the abstract representation of two or more phonemes without their distinctive phonological features. For example, in Spanish, occlusive consonants loose their distinctive feature of sonority in syllabic coda position; the words adjetivo 'adjective' [aDxe'tiβo] and atleta 'athlete' [aD'leta] both shared an archiphoneme [D] (in careful speech) in their first syllable, corresponding to the phonemes /d/ and /t/ respectively. In some cases, the neutralisation of two phonemes may cause two words to be homophones. For example, vid 'vine' and bit 'bit' are both pronounced as [biD] . In paleo-orthographic Arabic script, consonant diacritics were not written down in all positions as it happens in modern Arabic script, where they are mandatory. Consequently, homographic letter blocks were quite common. An additional characteristic of early Arabic script is that graphemic or logical spaces between words did not exist: Arabic orthography preserved the ancient practice of scriptio continua, in which script tries to represent connected speech. Diacritics are signs placed in relation with the archigraphemic skeleton. From a functional point of view, there are two basic types of diacritics: a layer of consonant diacritics for differentiating graphemes and a second layer for vowels. In early script, diacritics are marked in a different colour from the one of the skeleton. Strokes were used for consonant diacritics, whereas dots were used for indicating vowels. In modern Arabic script, dots are instead used for consonant diacritics and they are mandatory. On the other hand, vowels are marked by different types of symbols and are usually optional. 
Unicode, the standard for digital encoding of language information, evolved from a typographic approach to language and its main concern is modern script. Typography is a technique to reproduce written language based on surface shape. As a consequence, it represents an obstacle for dealing with script from a linguistic point of view, since the same logical grapheme may be rendered using different glyphs. The main problems that arise are the following: 1. Only contemporary everyday use is covered, and that with a typographical approach: Unicode encodes multiple Arabic letters (archigraphemes + consonant diacritics) as single printing units. 2. Some calligraphic variants for the same letter were allowed to have separate Unicode characters. In practice, this means that a search for an Arabic word may yield nothing when typed on a Persian or an Urdu keyboard. This is also why you may find only a fraction of all the results when searching in an Arabic text. 3. There are currently no specialised tools that allow scholars to perform searches on Arabic historical orthography: archigraphemes. Additionally, in order to study early documents written in Arabic script, we need to have search tools that can handle continuous archigraphemic representation, i.e., Arabic script as a scriptio continua. In collaboration with Thomas Milo from the Dutch company DecoType, we have developed a search utility that disambiguates and normalises Arabic text in real time and also allows the user to perform archigraphemic search on any Arabic-scripted text. The system is called Yakabikaj (a traditional invocation protecting texts against bugs), and we show the new perspectives it opens for research in the field of historical digital humanities for Arabic-scripted texts.",2019
hu-etal-2022-deep,https://aclanthology.org/2022.acl-long.123,0,,,,,,,"DEEP: DEnoising Entity Pre-training for Neural Machine Translation. It has been shown that machine translation models usually generate poor translations for named entities that are infrequent in the training corpus. Earlier named entity translation methods mainly focus on phonetic transliteration, which ignores the sentence context for translation and is limited in domain and language coverage. To address this limitation, we propose DEEP, a DEnoising Entity Pretraining method that leverages large amounts of monolingual data and a knowledge base to improve named entity translation accuracy within sentences. Besides, we investigate a multi-task learning strategy that finetunes a pre-trained neural machine translation model on both entity-augmented monolingual data and parallel data to further improve entity translation. Experimental results on three language pairs demonstrate that DEEP results in significant improvements over strong denoising autoencoding baselines, with a gain of up to 1.3 BLEU and up to 9.2 entity accuracy points for English-Russian translation.",{DEEP}: {DE}noising Entity Pre-training for Neural Machine Translation,"It has been shown that machine translation models usually generate poor translations for named entities that are infrequent in the training corpus. Earlier named entity translation methods mainly focus on phonetic transliteration, which ignores the sentence context for translation and is limited in domain and language coverage. To address this limitation, we propose DEEP, a DEnoising Entity Pretraining method that leverages large amounts of monolingual data and a knowledge base to improve named entity translation accuracy within sentences. Besides, we investigate a multi-task learning strategy that finetunes a pre-trained neural machine translation model on both entity-augmented monolingual data and parallel data to further improve entity translation. Experimental results on three language pairs demonstrate that DEEP results in significant improvements over strong denoising autoencoding baselines, with a gain of up to 1.3 BLEU and up to 9.2 entity accuracy points for English-Russian translation.",DEEP: DEnoising Entity Pre-training for Neural Machine Translation,"It has been shown that machine translation models usually generate poor translations for named entities that are infrequent in the training corpus. Earlier named entity translation methods mainly focus on phonetic transliteration, which ignores the sentence context for translation and is limited in domain and language coverage. To address this limitation, we propose DEEP, a DEnoising Entity Pretraining method that leverages large amounts of monolingual data and a knowledge base to improve named entity translation accuracy within sentences. Besides, we investigate a multi-task learning strategy that finetunes a pre-trained neural machine translation model on both entity-augmented monolingual data and parallel data to further improve entity translation. 
Experimental results on three language pairs demonstrate that DEEP results in significant improvements over strong denoising autoencoding baselines, with a gain of up to 1.3 BLEU and up to 9.2 entity accuracy points for English-Russian translation.",This work was supported in part by a grant from the Singapore Defence Science and Technology Agency.,"DEEP: DEnoising Entity Pre-training for Neural Machine Translation. It has been shown that machine translation models usually generate poor translations for named entities that are infrequent in the training corpus. Earlier named entity translation methods mainly focus on phonetic transliteration, which ignores the sentence context for translation and is limited in domain and language coverage. To address this limitation, we propose DEEP, a DEnoising Entity Pretraining method that leverages large amounts of monolingual data and a knowledge base to improve named entity translation accuracy within sentences. Besides, we investigate a multi-task learning strategy that finetunes a pre-trained neural machine translation model on both entity-augmented monolingual data and parallel data to further improve entity translation. Experimental results on three language pairs demonstrate that DEEP results in significant improvements over strong denoising autoencoding baselines, with a gain of up to 1.3 BLEU and up to 9.2 entity accuracy points for English-Russian translation.",2022
stewart-etal-2018-si,https://aclanthology.org/N18-2022,0,,,,,,,"Si O No, Que Penses? Catalonian Independence and Linguistic Identity on Social Media. Political identity is often manifested in language variation, but the relationship between the two is still relatively unexplored from a quantitative perspective. This study examines the use of Catalan, a language local to the semi-autonomous region of Catalonia in Spain, on Twitter in discourse related to the 2017 independence referendum. We corroborate prior findings that pro-independence tweets are more likely to include the local language than anti-independence tweets. We also find that Catalan is used more often in referendum-related discourse than in other contexts, contrary to prior findings on language variation. This suggests a strong role for the Catalan language in the expression of Catalonian political identity.","Si {O} No, Que Penses? {C}atalonian Independence and Linguistic Identity on Social Media","Political identity is often manifested in language variation, but the relationship between the two is still relatively unexplored from a quantitative perspective. This study examines the use of Catalan, a language local to the semi-autonomous region of Catalonia in Spain, on Twitter in discourse related to the 2017 independence referendum. We corroborate prior findings that pro-independence tweets are more likely to include the local language than anti-independence tweets. We also find that Catalan is used more often in referendum-related discourse than in other contexts, contrary to prior findings on language variation. This suggests a strong role for the Catalan language in the expression of Catalonian political identity.","Si O No, Que Penses? Catalonian Independence and Linguistic Identity on Social Media","Political identity is often manifested in language variation, but the relationship between the two is still relatively unexplored from a quantitative perspective. This study examines the use of Catalan, a language local to the semi-autonomous region of Catalonia in Spain, on Twitter in discourse related to the 2017 independence referendum. We corroborate prior findings that pro-independence tweets are more likely to include the local language than anti-independence tweets. We also find that Catalan is used more often in referendum-related discourse than in other contexts, contrary to prior findings on language variation. This suggests a strong role for the Catalan language in the expression of Catalonian political identity.","We thank Sandeep Soni, Umashanthi Pavalanathan, our anonymous reviewers, and members of Georgia Tech's Computational Social Science class for their feedback. This research was supported by NSF award IIS-1452443 and NIH award R01-GM112697-03.","Si O No, Que Penses? Catalonian Independence and Linguistic Identity on Social Media. Political identity is often manifested in language variation, but the relationship between the two is still relatively unexplored from a quantitative perspective. This study examines the use of Catalan, a language local to the semi-autonomous region of Catalonia in Spain, on Twitter in discourse related to the 2017 independence referendum. We corroborate prior findings that pro-independence tweets are more likely to include the local language than anti-independence tweets. We also find that Catalan is used more often in referendum-related discourse than in other contexts, contrary to prior findings on language variation. 
This suggests a strong role for the Catalan language in the expression of Catalonian political identity.",2018
dulceanu-etal-2018-photoshopquia,https://aclanthology.org/L18-1438,0,,,,,,,"PhotoshopQuiA: A Corpus of Non-Factoid Questions and Answers for Why-Question Answering. Recent years have witnessed a high interest in non-factoid question answering using Community Question Answering (CQA) web sites. Despite ongoing research using state-of-the-art methods, there is a scarcity of available datasets for this task. Why-questions, which play an important role in open-domain and domain-specific applications, are difficult to answer automatically since the answers need to be constructed based on different information extracted from multiple knowledge sources. We introduce the PhotoshopQuiA dataset, a new publicly available set of 2,854 why-question and answer(s) (WhyQ, A) pairs related to Adobe Photoshop usage collected from five CQA web sites. We chose Adobe Photoshop because it is a popular and well-known product, with a lively, knowledgeable and sizable community. To the best of our knowledge, this is the first English dataset for Why-QA that focuses on a product, as opposed to previous open-domain datasets. The corpus is stored in JSON format and contains detailed data about questions and questioners as well as answers and answerers. The dataset can be used to build Why-QA systems, to evaluate current approaches for answering why-questions, and to develop new models for future QA systems research.",{P}hotoshop{Q}ui{A}: A Corpus of Non-Factoid Questions and Answers for Why-Question Answering,"Recent years have witnessed a high interest in non-factoid question answering using Community Question Answering (CQA) web sites. Despite ongoing research using state-of-the-art methods, there is a scarcity of available datasets for this task. Why-questions, which play an important role in open-domain and domain-specific applications, are difficult to answer automatically since the answers need to be constructed based on different information extracted from multiple knowledge sources. We introduce the PhotoshopQuiA dataset, a new publicly available set of 2,854 why-question and answer(s) (WhyQ, A) pairs related to Adobe Photoshop usage collected from five CQA web sites. We chose Adobe Photoshop because it is a popular and well-known product, with a lively, knowledgeable and sizable community. To the best of our knowledge, this is the first English dataset for Why-QA that focuses on a product, as opposed to previous open-domain datasets. The corpus is stored in JSON format and contains detailed data about questions and questioners as well as answers and answerers. The dataset can be used to build Why-QA systems, to evaluate current approaches for answering why-questions, and to develop new models for future QA systems research.",PhotoshopQuiA: A Corpus of Non-Factoid Questions and Answers for Why-Question Answering,"Recent years have witnessed a high interest in non-factoid question answering using Community Question Answering (CQA) web sites. Despite ongoing research using state-of-the-art methods, there is a scarcity of available datasets for this task. Why-questions, which play an important role in open-domain and domain-specific applications, are difficult to answer automatically since the answers need to be constructed based on different information extracted from multiple knowledge sources. We introduce the PhotoshopQuiA dataset, a new publicly available set of 2,854 why-question and answer(s) (WhyQ, A) pairs related to Adobe Photoshop usage collected from five CQA web sites. 
We chose Adobe Photoshop because it is a popular and well-known product, with a lively, knowledgeable and sizable community. To the best of our knowledge, this is the first English dataset for Why-QA that focuses on a product, as opposed to previous open-domain datasets. The corpus is stored in JSON format and contains detailed data about questions and questioners as well as answers and answerers. The dataset can be used to build Why-QA systems, to evaluate current approaches for answering why-questions, and to develop new models for future QA systems research.",The authors express their sincere thanks to the University Gift Funding of Adobe Systems Incorporated for the partial financial support for this research.,"PhotoshopQuiA: A Corpus of Non-Factoid Questions and Answers for Why-Question Answering. Recent years have witnessed a high interest in non-factoid question answering using Community Question Answering (CQA) web sites. Despite ongoing research using state-of-the-art methods, there is a scarcity of available datasets for this task. Why-questions, which play an important role in open-domain and domain-specific applications, are difficult to answer automatically since the answers need to be constructed based on different information extracted from multiple knowledge sources. We introduce the PhotoshopQuiA dataset, a new publicly available set of 2,854 why-question and answer(s) (WhyQ, A) pairs related to Adobe Photoshop usage collected from five CQA web sites. We chose Adobe Photoshop because it is a popular and well-known product, with a lively, knowledgeable and sizable community. To the best of our knowledge, this is the first English dataset for Why-QA that focuses on a product, as opposed to previous open-domain datasets. The corpus is stored in JSON format and contains detailed data about questions and questioners as well as answers and answerers. The dataset can be used to build Why-QA systems, to evaluate current approaches for answering why-questions, and to develop new models for future QA systems research.",2018
fahmi-bouma-2006-learning,https://aclanthology.org/W06-2609,0,,,,,,,"Learning to Identify Definitions using Syntactic Features. This paper describes an approach to learning concept definitions which operates on fully parsed text. A subcorpus of the Dutch version of Wikipedia was searched for sentences which have the syntactic properties of definitions. Next, we experimented with various text classification techniques to distinguish actual definitions from other sentences. A maximum entropy classifier which incorporates features referring to the position of the sentence in the document as well as various syntactic features, gives the best results.",Learning to Identify Definitions using Syntactic Features,"This paper describes an approach to learning concept definitions which operates on fully parsed text. A subcorpus of the Dutch version of Wikipedia was searched for sentences which have the syntactic properties of definitions. Next, we experimented with various text classification techniques to distinguish actual definitions from other sentences. A maximum entropy classifier which incorporates features referring to the position of the sentence in the document as well as various syntactic features, gives the best results.",Learning to Identify Definitions using Syntactic Features,"This paper describes an approach to learning concept definitions which operates on fully parsed text. A subcorpus of the Dutch version of Wikipedia was searched for sentences which have the syntactic properties of definitions. Next, we experimented with various text classification techniques to distinguish actual definitions from other sentences. A maximum entropy classifier which incorporates features referring to the position of the sentence in the document as well as various syntactic features, gives the best results.",,"Learning to Identify Definitions using Syntactic Features. This paper describes an approach to learning concept definitions which operates on fully parsed text. A subcorpus of the Dutch version of Wikipedia was searched for sentences which have the syntactic properties of definitions. Next, we experimented with various text classification techniques to distinguish actual definitions from other sentences. A maximum entropy classifier which incorporates features referring to the position of the sentence in the document as well as various syntactic features, gives the best results.",2006
rei-etal-2017-artificial,https://aclanthology.org/W17-5032,0,,,,,,,"Artificial Error Generation with Machine Translation and Syntactic Patterns. Shortage of available training data is holding back progress in the area of automated error detection. This paper investigates two alternative methods for artificially generating writing errors, in order to create additional resources. We propose treating error generation as a machine translation task, where grammatically correct text is translated to contain errors. In addition, we explore a system for extracting textual patterns from an annotated corpus, which can then be used to insert errors into grammatically correct sentences. Our experiments show that the inclusion of artificially generated errors significantly improves error detection accuracy on both FCE and CoNLL 2014 datasets.",Artificial Error Generation with Machine Translation and Syntactic Patterns,"Shortage of available training data is holding back progress in the area of automated error detection. This paper investigates two alternative methods for artificially generating writing errors, in order to create additional resources. We propose treating error generation as a machine translation task, where grammatically correct text is translated to contain errors. In addition, we explore a system for extracting textual patterns from an annotated corpus, which can then be used to insert errors into grammatically correct sentences. Our experiments show that the inclusion of artificially generated errors significantly improves error detection accuracy on both FCE and CoNLL 2014 datasets.",Artificial Error Generation with Machine Translation and Syntactic Patterns,"Shortage of available training data is holding back progress in the area of automated error detection. This paper investigates two alternative methods for artificially generating writing errors, in order to create additional resources. We propose treating error generation as a machine translation task, where grammatically correct text is translated to contain errors. In addition, we explore a system for extracting textual patterns from an annotated corpus, which can then be used to insert errors into grammatically correct sentences. Our experiments show that the inclusion of artificially generated errors significantly improves error detection accuracy on both FCE and CoNLL 2014 datasets.",,"Artificial Error Generation with Machine Translation and Syntactic Patterns. Shortage of available training data is holding back progress in the area of automated error detection. This paper investigates two alternative methods for artificially generating writing errors, in order to create additional resources. We propose treating error generation as a machine translation task, where grammatically correct text is translated to contain errors. In addition, we explore a system for extracting textual patterns from an annotated corpus, which can then be used to insert errors into grammatically correct sentences. Our experiments show that the inclusion of artificially generated errors significantly improves error detection accuracy on both FCE and CoNLL 2014 datasets.",2017
feng-etal-2006-learning,https://aclanthology.org/N06-1027,0,,,,,,,"Learning to Detect Conversation Focus of Threaded Discussions. In this paper we present a novel featureenriched approach that learns to detect the conversation focus of threaded discussions by combining NLP analysis and IR techniques. Using the graph-based algorithm HITS, we integrate different features such as lexical similarity, poster trustworthiness, and speech act analysis of human conversations with featureoriented link generation functions. It is the first quantitative study to analyze human conversation focus in the context of online discussions that takes into account heterogeneous sources of evidence. Experimental results using a threaded discussion corpus from an undergraduate class show that it achieves significant performance improvements compared with the baseline system.",Learning to Detect Conversation Focus of Threaded Discussions,"In this paper we present a novel featureenriched approach that learns to detect the conversation focus of threaded discussions by combining NLP analysis and IR techniques. Using the graph-based algorithm HITS, we integrate different features such as lexical similarity, poster trustworthiness, and speech act analysis of human conversations with featureoriented link generation functions. It is the first quantitative study to analyze human conversation focus in the context of online discussions that takes into account heterogeneous sources of evidence. Experimental results using a threaded discussion corpus from an undergraduate class show that it achieves significant performance improvements compared with the baseline system.",Learning to Detect Conversation Focus of Threaded Discussions,"In this paper we present a novel featureenriched approach that learns to detect the conversation focus of threaded discussions by combining NLP analysis and IR techniques. Using the graph-based algorithm HITS, we integrate different features such as lexical similarity, poster trustworthiness, and speech act analysis of human conversations with featureoriented link generation functions. It is the first quantitative study to analyze human conversation focus in the context of online discussions that takes into account heterogeneous sources of evidence. Experimental results using a threaded discussion corpus from an undergraduate class show that it achieves significant performance improvements compared with the baseline system.","The work was supported in part by DARPA grant DOI-NBC Contract No. NBCHC050051, Learning by Reading, and in part by a grant from the Lord Corporation Foundation to the USC Distance Education Network. The authors want to thank Deepak Ravichandran, Feng Pan, and Rahul Bhagat for their helpful suggestions with the manuscript. We would also like to thank the HLT-NAACL reviewers for their valuable comments.","Learning to Detect Conversation Focus of Threaded Discussions. In this paper we present a novel featureenriched approach that learns to detect the conversation focus of threaded discussions by combining NLP analysis and IR techniques. Using the graph-based algorithm HITS, we integrate different features such as lexical similarity, poster trustworthiness, and speech act analysis of human conversations with featureoriented link generation functions. It is the first quantitative study to analyze human conversation focus in the context of online discussions that takes into account heterogeneous sources of evidence. 
Experimental results using a threaded discussion corpus from an undergraduate class show that it achieves significant performance improvements compared with the baseline system.",2006
hana-etal-2006-tagging,https://aclanthology.org/W06-2005,0,,,,,,,"Tagging Portuguese with a Spanish Tagger. We describe a knowledge and resource light system for an automatic morphological analysis and tagging of Brazilian Portuguese. 1 We avoid the use of labor intensive resources; particularly, large annotated corpora and lexicons. Instead, we use (i) an annotated corpus of Peninsular Spanish, a language related to Portuguese, (ii) an unannotated corpus of Portuguese, (iii) a description of Portuguese morphology on the level of a basic grammar book. We extend the similar work that we have done (Hana et al., 2004; Feldman et al., 2006) by proposing an alternative algorithm for cognate transfer that effectively projects the Spanish emission probabilities into Portuguese. Our experiments use minimal new human effort and show 21% error reduction over even emissions on a fine-grained tagset.",Tagging {P}ortuguese with a {S}panish Tagger,"We describe a knowledge and resource light system for an automatic morphological analysis and tagging of Brazilian Portuguese. 1 We avoid the use of labor intensive resources; particularly, large annotated corpora and lexicons. Instead, we use (i) an annotated corpus of Peninsular Spanish, a language related to Portuguese, (ii) an unannotated corpus of Portuguese, (iii) a description of Portuguese morphology on the level of a basic grammar book. We extend the similar work that we have done (Hana et al., 2004; Feldman et al., 2006) by proposing an alternative algorithm for cognate transfer that effectively projects the Spanish emission probabilities into Portuguese. Our experiments use minimal new human effort and show 21% error reduction over even emissions on a fine-grained tagset.",Tagging Portuguese with a Spanish Tagger,"We describe a knowledge and resource light system for an automatic morphological analysis and tagging of Brazilian Portuguese. 1 We avoid the use of labor intensive resources; particularly, large annotated corpora and lexicons. Instead, we use (i) an annotated corpus of Peninsular Spanish, a language related to Portuguese, (ii) an unannotated corpus of Portuguese, (iii) a description of Portuguese morphology on the level of a basic grammar book. We extend the similar work that we have done (Hana et al., 2004; Feldman et al., 2006) by proposing an alternative algorithm for cognate transfer that effectively projects the Spanish emission probabilities into Portuguese. Our experiments use minimal new human effort and show 21% error reduction over even emissions on a fine-grained tagset.","We would like to thank Maria das Graças Volpe Nunes, Sandra Maria Aluísio, and Ricardo Hasegawa for giving us access to the NILC corpus annotated with PALAVRAS and to Carlos Rodríguez Penagos for letting us use the Spanish part of the CLiC-TALP corpus.","Tagging Portuguese with a Spanish Tagger. We describe a knowledge and resource light system for an automatic morphological analysis and tagging of Brazilian Portuguese. 1 We avoid the use of labor intensive resources; particularly, large annotated corpora and lexicons. Instead, we use (i) an annotated corpus of Peninsular Spanish, a language related to Portuguese, (ii) an unannotated corpus of Portuguese, (iii) a description of Portuguese morphology on the level of a basic grammar book. We extend the similar work that we have done (Hana et al., 2004; Feldman et al., 2006) by proposing an alternative algorithm for cognate transfer that effectively projects the Spanish emission probabilities into Portuguese. 
Our experiments use minimal new human effort and show 21% error reduction over even emissions on a fine-grained tagset.",2006
emerson-2005-second,https://aclanthology.org/I05-3017,0,,,,,,,"The Second International Chinese Word Segmentation Bakeoff. The second international Chinese word segmentation bakeoff was held in the summer of 2005 to evaluate the current state of the art in word segmentation. Twenty-three groups submitted 130 result sets over two tracks and four different corpora. We found that the technology has improved over the intervening two years, though the out-of-vocabulary problem is still of paramount importance.",The Second International {C}hinese Word Segmentation Bakeoff,"The second international Chinese word segmentation bakeoff was held in the summer of 2005 to evaluate the current state of the art in word segmentation. Twenty-three groups submitted 130 result sets over two tracks and four different corpora. We found that the technology has improved over the intervening two years, though the out-of-vocabulary problem is still of paramount importance.",The Second International Chinese Word Segmentation Bakeoff,"The second international Chinese word segmentation bakeoff was held in the summer of 2005 to evaluate the current state of the art in word segmentation. Twenty-three groups submitted 130 result sets over two tracks and four different corpora. We found that the technology has improved over the intervening two years, though the out-of-vocabulary problem is still of paramount importance.",,"The Second International Chinese Word Segmentation Bakeoff. The second international Chinese word segmentation bakeoff was held in the summer of 2005 to evaluate the current state of the art in word segmentation. Twenty-three groups submitted 130 result sets over two tracks and four different corpora. We found that the technology has improved over the intervening two years, though the out-of-vocabulary problem is still of paramount importance.",2005
vulic-etal-2017-automatic,https://aclanthology.org/K17-1013,0,,,,,,,"Automatic Selection of Context Configurations for Improved Class-Specific Word Representations. This paper is concerned with identifying contexts useful for training word representation models for different word classes such as adjectives (A), verbs (V), and nouns (N). We introduce a simple yet effective framework for an automatic selection of class-specific context configurations. We construct a context configuration space based on universal dependency relations between words, and efficiently search this space with an adapted beam search algorithm. In word similarity tasks for each word class, we show that our framework is both effective and efficient. Particularly, it improves the Spearman's ρ correlation with human scores on SimLex-999 over the best previously proposed class-specific contexts by 6 (A), 6 (V) and 5 (N) ρ points. With our selected context configurations, we train on only 14% (A), 26.2% (V), and 33.6% (N) of all dependency-based contexts, resulting in a reduced training time. Our results generalise: we show that the configurations our algorithm learns for one English training setup outperform previously proposed context types in another training setup for English. Moreover, basing the configuration space on universal dependencies, it is possible to transfer the learned configurations to German and Italian. We also demonstrate improved per-class results over other context types in these two languages.",Automatic Selection of Context Configurations for Improved Class-Specific Word Representations,"This paper is concerned with identifying contexts useful for training word representation models for different word classes such as adjectives (A), verbs (V), and nouns (N). We introduce a simple yet effective framework for an automatic selection of class-specific context configurations. We construct a context configuration space based on universal dependency relations between words, and efficiently search this space with an adapted beam search algorithm. In word similarity tasks for each word class, we show that our framework is both effective and efficient. Particularly, it improves the Spearman's ρ correlation with human scores on SimLex-999 over the best previously proposed class-specific contexts by 6 (A), 6 (V) and 5 (N) ρ points. With our selected context configurations, we train on only 14% (A), 26.2% (V), and 33.6% (N) of all dependency-based contexts, resulting in a reduced training time. Our results generalise: we show that the configurations our algorithm learns for one English training setup outperform previously proposed context types in another training setup for English. Moreover, basing the configuration space on universal dependencies, it is possible to transfer the learned configurations to German and Italian. We also demonstrate improved per-class results over other context types in these two languages.",Automatic Selection of Context Configurations for Improved Class-Specific Word Representations,"This paper is concerned with identifying contexts useful for training word representation models for different word classes such as adjectives (A), verbs (V), and nouns (N). We introduce a simple yet effective framework for an automatic selection of class-specific context configurations. We construct a context configuration space based on universal dependency relations between words, and efficiently search this space with an adapted beam search algorithm. 
In word similarity tasks for each word class, we show that our framework is both effective and efficient. Particularly, it improves the Spearman's ρ correlation with human scores on SimLex-999 over the best previously proposed class-specific contexts by 6 (A), 6 (V) and 5 (N) ρ points. With our selected context configurations, we train on only 14% (A), 26.2% (V), and 33.6% (N) of all dependency-based contexts, resulting in a reduced training time. Our results generalise: we show that the configurations our algorithm learns for one English training setup outperform previously proposed context types in another training setup for English. Moreover, basing the configuration space on universal dependencies, it is possible to transfer the learned configurations to German and Italian. We also demonstrate improved per-class results over other context types in these two languages.",This work is supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909). Roy Schwartz was supported by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI). The authors are grateful to the anonymous reviewers for their helpful and constructive suggestions.,"Automatic Selection of Context Configurations for Improved Class-Specific Word Representations. This paper is concerned with identifying contexts useful for training word representation models for different word classes such as adjectives (A), verbs (V), and nouns (N). We introduce a simple yet effective framework for an automatic selection of class-specific context configurations. We construct a context configuration space based on universal dependency relations between words, and efficiently search this space with an adapted beam search algorithm. In word similarity tasks for each word class, we show that our framework is both effective and efficient. Particularly, it improves the Spearman's ρ correlation with human scores on SimLex-999 over the best previously proposed class-specific contexts by 6 (A), 6 (V) and 5 (N) ρ points. With our selected context configurations, we train on only 14% (A), 26.2% (V), and 33.6% (N) of all dependency-based contexts, resulting in a reduced training time. Our results generalise: we show that the configurations our algorithm learns for one English training setup outperform previously proposed context types in another training setup for English. Moreover, basing the configuration space on universal dependencies, it is possible to transfer the learned configurations to German and Italian. We also demonstrate improved per-class results over other context types in these two languages.",2017
mehdad-etal-2013-abstractive,https://aclanthology.org/W13-2117,0,,,,,,,"Abstractive Meeting Summarization with Entailment and Fusion. We propose a novel end-to-end framework for abstractive meeting summarization. We cluster sentences in the input into communities and build an entailment graph over the sentence communities to identify and select the most relevant sentences. We then aggregate those selected sentences by means of a word graph model. We exploit a ranking strategy to select the best path in the word graph as an abstract sentence. Despite not relying on the syntactic structure, our approach significantly outperforms previous models for meeting summarization in terms of informativeness. Moreover, the longer sentences generated by our method are competitive with shorter sentences generated by the previous word graph model in terms of grammaticality.",Abstractive Meeting Summarization with Entailment and Fusion,"We propose a novel end-to-end framework for abstractive meeting summarization. We cluster sentences in the input into communities and build an entailment graph over the sentence communities to identify and select the most relevant sentences. We then aggregate those selected sentences by means of a word graph model. We exploit a ranking strategy to select the best path in the word graph as an abstract sentence. Despite not relying on the syntactic structure, our approach significantly outperforms previous models for meeting summarization in terms of informativeness. Moreover, the longer sentences generated by our method are competitive with shorter sentences generated by the previous word graph model in terms of grammaticality.",Abstractive Meeting Summarization with Entailment and Fusion,"We propose a novel end-to-end framework for abstractive meeting summarization. We cluster sentences in the input into communities and build an entailment graph over the sentence communities to identify and select the most relevant sentences. We then aggregate those selected sentences by means of a word graph model. We exploit a ranking strategy to select the best path in the word graph as an abstract sentence. Despite not relying on the syntactic structure, our approach significantly outperforms previous models for meeting summarization in terms of informativeness. Moreover, the longer sentences generated by our method are competitive with shorter sentences generated by the previous word graph model in terms of grammaticality.","We would like to thank the anonymous reviewers for their valuable comments and suggestions to improve the paper, our annotators for their valuable work, and the NSERC Business Intelligence Network for financial support.","Abstractive Meeting Summarization with Entailment and Fusion. We propose a novel end-to-end framework for abstractive meeting summarization. We cluster sentences in the input into communities and build an entailment graph over the sentence communities to identify and select the most relevant sentences. We then aggregate those selected sentences by means of a word graph model. We exploit a ranking strategy to select the best path in the word graph as an abstract sentence. Despite not relying on the syntactic structure, our approach significantly outperforms previous models for meeting summarization in terms of informativeness. Moreover, the longer sentences generated by our method are competitive with shorter sentences generated by the previous word graph model in terms of grammaticality.",2013
hsiao-etal-2007-word,https://aclanthology.org/O07-1011,0,,,,,,,"Word Translation Disambiguation via Dependency (利用依存關係之辭彙翻譯). We introduce a new method for automatically disambiguation of word translations by using dependency relationships. In our approach, we learn the relationships between translations and dependency relationships from a parallel corpus. The method consists of a training stage and a runtime stage. During the training stage, the system automatically learns a translation decision list based on source sentences and its dependency relationships. At runtime, for each content word in the given sentence, we give a most appropriate Chinese translation relevant to the context of the given sentence according to the decision list. We also describe the implementation of the proposed method using bilingual Hong Kong news and Hong Kong Hansard corpus. In the experiment, we use five different ways to translate content words in the test data and evaluate the results based an automatic BLEU-like evaluation methodology. Experimental results indicate that dependency relations can obviously help us to disambiguate word translations and some kinds of dependency are more effective than others.",Word Translation Disambiguation via Dependency (利用依存關係之辭彙翻譯),"We introduce a new method for automatically disambiguation of word translations by using dependency relationships. In our approach, we learn the relationships between translations and dependency relationships from a parallel corpus. The method consists of a training stage and a runtime stage. During the training stage, the system automatically learns a translation decision list based on source sentences and its dependency relationships. At runtime, for each content word in the given sentence, we give a most appropriate Chinese translation relevant to the context of the given sentence according to the decision list. We also describe the implementation of the proposed method using bilingual Hong Kong news and Hong Kong Hansard corpus. In the experiment, we use five different ways to translate content words in the test data and evaluate the results based an automatic BLEU-like evaluation methodology. Experimental results indicate that dependency relations can obviously help us to disambiguate word translations and some kinds of dependency are more effective than others.",Word Translation Disambiguation via Dependency (利用依存關係之辭彙翻譯),"We introduce a new method for automatically disambiguation of word translations by using dependency relationships. In our approach, we learn the relationships between translations and dependency relationships from a parallel corpus. The method consists of a training stage and a runtime stage. During the training stage, the system automatically learns a translation decision list based on source sentences and its dependency relationships. At runtime, for each content word in the given sentence, we give a most appropriate Chinese translation relevant to the context of the given sentence according to the decision list. We also describe the implementation of the proposed method using bilingual Hong Kong news and Hong Kong Hansard corpus. In the experiment, we use five different ways to translate content words in the test data and evaluate the results based an automatic BLEU-like evaluation methodology. 
Experimental results indicate that dependency relations can obviously help us to disambiguate word translations and some kinds of dependency are more effective than others.",,"Word Translation Disambiguation via Dependency (利用依存關係之辭彙翻譯). We introduce a new method for automatically disambiguation of word translations by using dependency relationships. In our approach, we learn the relationships between translations and dependency relationships from a parallel corpus. The method consists of a training stage and a runtime stage. During the training stage, the system automatically learns a translation decision list based on source sentences and its dependency relationships. At runtime, for each content word in the given sentence, we give a most appropriate Chinese translation relevant to the context of the given sentence according to the decision list. We also describe the implementation of the proposed method using bilingual Hong Kong news and Hong Kong Hansard corpus. In the experiment, we use five different ways to translate content words in the test data and evaluate the results based an automatic BLEU-like evaluation methodology. Experimental results indicate that dependency relations can obviously help us to disambiguate word translations and some kinds of dependency are more effective than others.",2007
hamoui-etal-2020-flodusta,https://aclanthology.org/2020.lrec-1.174,1,,,,peace_justice_and_strong_institutions,climate,,"FloDusTA: Saudi Tweets Dataset for Flood, Dust Storm, and Traffic Accident Events. The rise of social media platforms makes it a valuable information source of recent events and users' perspective towards them. Twitter has been one of the most important communication platforms in recent years. Event detection, one of the information extraction aspects, involves identifying specified types of events in the text. Detecting events from tweets can help to predict real-world events precisely. A serious challenge that faces Arabic event detection is the lack of Arabic datasets that can be exploited in detecting events. This paper will describe FloDusTA, which is a dataset of tweets that we have built for the purpose of developing an event detection system. The dataset contains tweets written in both Modern Standard Arabic and Saudi dialect. The process of building the dataset starting from tweets collection to annotation by human annotators will be present. The tweets are labeled with four labels: flood, dust storm, traffic accident, and non-event. The dataset was tested for classification and the result was strongly encouraging.","{F}lo{D}us{TA}: Saudi Tweets Dataset for Flood, Dust Storm, and Traffic Accident Events","The rise of social media platforms makes it a valuable information source of recent events and users' perspective towards them. Twitter has been one of the most important communication platforms in recent years. Event detection, one of the information extraction aspects, involves identifying specified types of events in the text. Detecting events from tweets can help to predict real-world events precisely. A serious challenge that faces Arabic event detection is the lack of Arabic datasets that can be exploited in detecting events. This paper will describe FloDusTA, which is a dataset of tweets that we have built for the purpose of developing an event detection system. The dataset contains tweets written in both Modern Standard Arabic and Saudi dialect. The process of building the dataset starting from tweets collection to annotation by human annotators will be present. The tweets are labeled with four labels: flood, dust storm, traffic accident, and non-event. The dataset was tested for classification and the result was strongly encouraging.","FloDusTA: Saudi Tweets Dataset for Flood, Dust Storm, and Traffic Accident Events","The rise of social media platforms makes it a valuable information source of recent events and users' perspective towards them. Twitter has been one of the most important communication platforms in recent years. Event detection, one of the information extraction aspects, involves identifying specified types of events in the text. Detecting events from tweets can help to predict real-world events precisely. A serious challenge that faces Arabic event detection is the lack of Arabic datasets that can be exploited in detecting events. This paper will describe FloDusTA, which is a dataset of tweets that we have built for the purpose of developing an event detection system. The dataset contains tweets written in both Modern Standard Arabic and Saudi dialect. The process of building the dataset starting from tweets collection to annotation by human annotators will be present. The tweets are labeled with four labels: flood, dust storm, traffic accident, and non-event. 
The dataset was tested for classification and the result was strongly encouraging."," Alsaedi, N., & Burnap, P. (2015). Arabic event detection in social media. In International Conference on Intelligent Text . Springer, Cham. Al-Twairesh, N., Al-Khalifa, H., Al-Salman, A., and Al-Ohali, Y. (2017). AraSenTi-Tweet: A Corpus for Arabic Sentiment Analysis of Saudi Tweets. Procedia Computer Science, 117, 63-72. Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5), 378-382. Gu, Y., Qian, Z. (S., and Chen, F. (2016). From Twitter to detector: Real-time traffic incident detection using social media data. -7) . Sakaki, T., Okazaki, M., and Matsuo, Y. (2010, April).Earthquake shakes Twitter users: real-time event detection by social sensors. In Proceedings of the 19th international conference on World wide web (pp. 851-860). ACM. Schulz, A., Hadjakos, A., Paulheim, H., Nachtwey, J., and Mühlhäuser, M. (2013, June). A multi-indicator approach for geolocalization of tweets. In Seventh international AAAI conference on weblogs and social media. Youssef, A. M., Sefry, S. A., Pradhan, B., and Alfadail, E. A. (2015). Analysis on causes of flash flood in Jeddah city (Kingdom of Saudi Arabia) of 2009 and 2011 using multi-sensor remote sensing data and GIS. Geomatics, Natural Hazards and Risk, 7(3), 1018-1042.","FloDusTA: Saudi Tweets Dataset for Flood, Dust Storm, and Traffic Accident Events. The rise of social media platforms makes it a valuable information source of recent events and users' perspective towards them. Twitter has been one of the most important communication platforms in recent years. Event detection, one of the information extraction aspects, involves identifying specified types of events in the text. Detecting events from tweets can help to predict real-world events precisely. A serious challenge that faces Arabic event detection is the lack of Arabic datasets that can be exploited in detecting events. This paper will describe FloDusTA, which is a dataset of tweets that we have built for the purpose of developing an event detection system. The dataset contains tweets written in both Modern Standard Arabic and Saudi dialect. The process of building the dataset starting from tweets collection to annotation by human annotators will be present. The tweets are labeled with four labels: flood, dust storm, traffic accident, and non-event. The dataset was tested for classification and the result was strongly encouraging.",2020
ek-knuutinen-2017-mainstreaming,https://aclanthology.org/W17-0236,0,,,,,,,"Mainstreaming August Strindberg with Text Normalization. This article explores the application of text normalization methods based on Levenshtein distance and Statistical Machine Translation to the literary genre, specifically on the collected works of August Strindberg. The goal is to normalize archaic spellings to modern-day spelling. The study finds evidence of success in text normalization, and explores some problems and improvements to the process of analysing mid-19th to early 20th century Swedish texts. This article is part of an ongoing project at Stockholm University which aims to create a corpus and web-friendly texts from Strindberg's collected works.",Mainstreaming {A}ugust Strindberg with Text Normalization,"This article explores the application of text normalization methods based on Levenshtein distance and Statistical Machine Translation to the literary genre, specifically on the collected works of August Strindberg. The goal is to normalize archaic spellings to modern-day spelling. The study finds evidence of success in text normalization, and explores some problems and improvements to the process of analysing mid-19th to early 20th century Swedish texts. This article is part of an ongoing project at Stockholm University which aims to create a corpus and web-friendly texts from Strindberg's collected works.",Mainstreaming August Strindberg with Text Normalization,"This article explores the application of text normalization methods based on Levenshtein distance and Statistical Machine Translation to the literary genre, specifically on the collected works of August Strindberg. The goal is to normalize archaic spellings to modern-day spelling. The study finds evidence of success in text normalization, and explores some problems and improvements to the process of analysing mid-19th to early 20th century Swedish texts. This article is part of an ongoing project at Stockholm University which aims to create a corpus and web-friendly texts from Strindberg's collected works.",,"Mainstreaming August Strindberg with Text Normalization. This article explores the application of text normalization methods based on Levenshtein distance and Statistical Machine Translation to the literary genre, specifically on the collected works of August Strindberg. The goal is to normalize archaic spellings to modern-day spelling. The study finds evidence of success in text normalization, and explores some problems and improvements to the process of analysing mid-19th to early 20th century Swedish texts. This article is part of an ongoing project at Stockholm University which aims to create a corpus and web-friendly texts from Strindberg's collected works.",2017
damani-ghonge-2013-appropriately,https://aclanthology.org/D13-1017,0,,,,,,,"Appropriately Incorporating Statistical Significance in PMI. Two recent measures incorporate the notion of statistical significance in basic PMI formulation. In some tasks, we find that the new measures perform worse than the PMI. Our analysis shows that while the basic ideas in incorporating statistical significance in PMI are reasonable, they have been applied slightly inappropriately. By fixing this, we get new measures that improve performance over not just PMI but on other popular co-occurrence measures as well. In fact, the revised measures perform reasonably well compared with more resource intensive non co-occurrence based methods also.",Appropriately Incorporating Statistical Significance in {PMI},"Two recent measures incorporate the notion of statistical significance in basic PMI formulation. In some tasks, we find that the new measures perform worse than the PMI. Our analysis shows that while the basic ideas in incorporating statistical significance in PMI are reasonable, they have been applied slightly inappropriately. By fixing this, we get new measures that improve performance over not just PMI but on other popular co-occurrence measures as well. In fact, the revised measures perform reasonably well compared with more resource intensive non co-occurrence based methods also.",Appropriately Incorporating Statistical Significance in PMI,"Two recent measures incorporate the notion of statistical significance in basic PMI formulation. In some tasks, we find that the new measures perform worse than the PMI. Our analysis shows that while the basic ideas in incorporating statistical significance in PMI are reasonable, they have been applied slightly inappropriately. By fixing this, we get new measures that improve performance over not just PMI but on other popular co-occurrence measures as well. In fact, the revised measures perform reasonably well compared with more resource intensive non co-occurrence based methods also.",We thank Dipak Chaudhari for his help with the implementation.,"Appropriately Incorporating Statistical Significance in PMI. Two recent measures incorporate the notion of statistical significance in basic PMI formulation. In some tasks, we find that the new measures perform worse than the PMI. Our analysis shows that while the basic ideas in incorporating statistical significance in PMI are reasonable, they have been applied slightly inappropriately. By fixing this, we get new measures that improve performance over not just PMI but on other popular co-occurrence measures as well. In fact, the revised measures perform reasonably well compared with more resource intensive non co-occurrence based methods also.",2013
sikdar-gamback-2016-language,https://aclanthology.org/W16-5817,0,,,,,,,"Language Identification in Code-Switched Text Using Conditional Random Fields and Babelnet. The paper outlines a supervised approach to language identification in code-switched data, framing this as a sequence labeling task where the label of each token is identified using a classifier based on Conditional Random Fields and trained on a range of different features, extracted both from the training data and by using information from Babelnet and Babelfy. The method was tested on the development dataset provided by organizers of the shared task on language identification in codeswitched data, obtaining tweet level monolingual, code-switched and weighted F1-scores of 94%, 85% and 91%, respectively, with a token level accuracy of 95.8%. When evaluated on the unseen test data, the system achieved 90%, 85% and 87.4% monolingual, code-switched and weighted tweet level F1scores, and a token level accuracy of 95.7%.",Language Identification in Code-Switched Text Using Conditional Random Fields and Babelnet,"The paper outlines a supervised approach to language identification in code-switched data, framing this as a sequence labeling task where the label of each token is identified using a classifier based on Conditional Random Fields and trained on a range of different features, extracted both from the training data and by using information from Babelnet and Babelfy. The method was tested on the development dataset provided by organizers of the shared task on language identification in codeswitched data, obtaining tweet level monolingual, code-switched and weighted F1-scores of 94%, 85% and 91%, respectively, with a token level accuracy of 95.8%. When evaluated on the unseen test data, the system achieved 90%, 85% and 87.4% monolingual, code-switched and weighted tweet level F1scores, and a token level accuracy of 95.7%.",Language Identification in Code-Switched Text Using Conditional Random Fields and Babelnet,"The paper outlines a supervised approach to language identification in code-switched data, framing this as a sequence labeling task where the label of each token is identified using a classifier based on Conditional Random Fields and trained on a range of different features, extracted both from the training data and by using information from Babelnet and Babelfy. The method was tested on the development dataset provided by organizers of the shared task on language identification in codeswitched data, obtaining tweet level monolingual, code-switched and weighted F1-scores of 94%, 85% and 91%, respectively, with a token level accuracy of 95.8%. When evaluated on the unseen test data, the system achieved 90%, 85% and 87.4% monolingual, code-switched and weighted tweet level F1scores, and a token level accuracy of 95.7%.",,"Language Identification in Code-Switched Text Using Conditional Random Fields and Babelnet. The paper outlines a supervised approach to language identification in code-switched data, framing this as a sequence labeling task where the label of each token is identified using a classifier based on Conditional Random Fields and trained on a range of different features, extracted both from the training data and by using information from Babelnet and Babelfy. 
The method was tested on the development dataset provided by organizers of the shared task on language identification in codeswitched data, obtaining tweet level monolingual, code-switched and weighted F1-scores of 94%, 85% and 91%, respectively, with a token level accuracy of 95.8%. When evaluated on the unseen test data, the system achieved 90%, 85% and 87.4% monolingual, code-switched and weighted tweet level F1scores, and a token level accuracy of 95.7%.",2016
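For the code-switched language-identification entry above, a small sketch of framing token-level language ID as CRF sequence labeling, assuming the sklearn-crfsuite package and simple orthographic features rather than the BabelNet/Babelfy features the paper uses; the training sentences and labels below are hypothetical.

```python
# Sketch of CRF-based token-level language identification for code-switched
# text, with simple orthographic features instead of the paper's
# BabelNet/Babelfy features (data and labels below are hypothetical).
import sklearn_crfsuite  # pip install sklearn-crfsuite

def token_features(tokens, i):
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "suffix3": tok[-3:],
        "is_upper": tok.isupper(),
        "is_title": tok.istitle(),
        "has_digit": any(ch.isdigit() for ch in tok),
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

def featurize(sentence):
    return [token_features(sentence, i) for i in range(len(sentence))]

# Tiny hypothetical training set: English/Spanish code-switched tweets.
train_sents = [["I", "love", "tacos", "al", "pastor"],
               ["vamos", "to", "the", "party"]]
train_labels = [["en", "en", "en", "es", "es"],
                ["es", "en", "en", "en"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit([featurize(s) for s in train_sents], train_labels)
print(crf.predict([featurize(["me", "gusta", "this", "song"])]))
```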
monson-etal-2004-data,http://www.lrec-conf.org/proceedings/lrec2004/pdf/747.pdf,0,,,,,,,"Data Collection and Analysis of Mapudungun Morphology for Spelling Correction. This paper describes part of a three year collaboration between Carnegie Mellon University's Language Technologies Institute, the",Data Collection and Analysis of {M}apudungun Morphology for Spelling Correction,"This paper describes part of a three year collaboration between Carnegie Mellon University's Language Technologies Institute, the",Data Collection and Analysis of Mapudungun Morphology for Spelling Correction,"This paper describes part of a three year collaboration between Carnegie Mellon University's Language Technologies Institute, the","This research was funded in part by NSF grant number IIS-0121-631. We would also like to thank the Chilean Ministry of Education funding the team at the Instituto de Estudios Indígenas, especially Carolina Huenchullán, the National Coordinator of the Chilean Ministry of Education's Programa de Educación Intercultural Bilingüe for her continuing support, and the team in Temuco-Flor Caniupil, Cristián Carrillán, Luis Canuipil, and Marcella Collío for their hard work in collecting, transcribing and translating the data. And Pascual Masullo for his expert linguistic advice.","Data Collection and Analysis of Mapudungun Morphology for Spelling Correction. This paper describes part of a three year collaboration between Carnegie Mellon University's Language Technologies Institute, the",2004
hohensee-bender-2012-getting,https://aclanthology.org/N12-1032,0,,,,,,,"Getting More from Morphology in Multilingual Dependency Parsing. We propose a linguistically motivated set of features to capture morphological agreement and add them to the MSTParser dependency parser. Compared to the built-in morphological feature set, ours is both much smaller and more accurate across a sample of 20 morphologically annotated treebanks. We find increases in accuracy of up to 5.3% absolute. While some of this results from the feature set capturing information unrelated to morphology, there is still significant improvement, up to 4.6% absolute, due to the agreement model.",Getting More from Morphology in Multilingual Dependency Parsing,"We propose a linguistically motivated set of features to capture morphological agreement and add them to the MSTParser dependency parser. Compared to the built-in morphological feature set, ours is both much smaller and more accurate across a sample of 20 morphologically annotated treebanks. We find increases in accuracy of up to 5.3% absolute. While some of this results from the feature set capturing information unrelated to morphology, there is still significant improvement, up to 4.6% absolute, due to the agreement model.",Getting More from Morphology in Multilingual Dependency Parsing,"We propose a linguistically motivated set of features to capture morphological agreement and add them to the MSTParser dependency parser. Compared to the built-in morphological feature set, ours is both much smaller and more accurate across a sample of 20 morphologically annotated treebanks. We find increases in accuracy of up to 5.3% absolute. While some of this results from the feature set capturing information unrelated to morphology, there is still significant improvement, up to 4.6% absolute, due to the agreement model.","We would like to thank everyone who assisted us in gathering treebanks, particularly Maite Oronoz and her colleagues at the University of the Basque Country and Yoav Goldberg, as well as three anonymous reviewers for their comments.","Getting More from Morphology in Multilingual Dependency Parsing. We propose a linguistically motivated set of features to capture morphological agreement and add them to the MSTParser dependency parser. Compared to the built-in morphological feature set, ours is both much smaller and more accurate across a sample of 20 morphologically annotated treebanks. We find increases in accuracy of up to 5.3% absolute. While some of this results from the feature set capturing information unrelated to morphology, there is still significant improvement, up to 4.6% absolute, due to the agreement model.",2012
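For the morphology-in-dependency-parsing entry above, a hypothetical sketch of a head-dependent agreement check of the general kind the abstract describes; the exact feature set added to MSTParser is not reproduced, and the attribute dictionaries below are made up.

```python
# Hypothetical sketch of a morphological agreement feature: compare the
# attributes that a head and a dependent both mark (feature dicts made up).
def agreement_feature(head_feats, dep_feats):
    shared = set(head_feats) & set(dep_feats)
    if not shared:
        return "no_shared_attributes"
    agrees = all(head_feats[a] == dep_feats[a] for a in shared)
    return "agree" if agrees else "disagree"

head = {"Case": "Nom", "Number": "Sing", "Gender": "Fem"}
dep = {"Case": "Nom", "Number": "Sing"}
print(agreement_feature(head, dep))  # -> 'agree'
```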
doval-etal-2020-robustness,https://aclanthology.org/2020.lrec-1.495,0,,,,,,,"On the Robustness of Unsupervised and Semi-supervised Cross-lingual Word Embedding Learning. Cross-lingual word embeddings are vector representations of words in different languages where words with similar meaning are represented by similar vectors, regardless of the language. Recent developments which construct these embeddings by aligning monolingual spaces have shown that accurate alignments can be obtained with little or no supervision, which usually comes in the form of bilingual dictionaries. However, the focus has been on a particular controlled scenario for evaluation, and there is no strong evidence on how current state-of-the-art systems would fare with noisy text or for language pairs with major linguistic differences. In this paper we present an extensive evaluation over multiple cross-lingual embedding models, analyzing their strengths and limitations with respect to different variables such as target language, training corpora and amount of supervision. Our conclusions put in doubt the view that high-quality cross-lingual embeddings can always be learned without much supervision.",On the Robustness of Unsupervised and Semi-supervised Cross-lingual Word Embedding Learning,"Cross-lingual word embeddings are vector representations of words in different languages where words with similar meaning are represented by similar vectors, regardless of the language. Recent developments which construct these embeddings by aligning monolingual spaces have shown that accurate alignments can be obtained with little or no supervision, which usually comes in the form of bilingual dictionaries. However, the focus has been on a particular controlled scenario for evaluation, and there is no strong evidence on how current state-of-the-art systems would fare with noisy text or for language pairs with major linguistic differences. In this paper we present an extensive evaluation over multiple cross-lingual embedding models, analyzing their strengths and limitations with respect to different variables such as target language, training corpora and amount of supervision. Our conclusions put in doubt the view that high-quality cross-lingual embeddings can always be learned without much supervision.",On the Robustness of Unsupervised and Semi-supervised Cross-lingual Word Embedding Learning,"Cross-lingual word embeddings are vector representations of words in different languages where words with similar meaning are represented by similar vectors, regardless of the language. Recent developments which construct these embeddings by aligning monolingual spaces have shown that accurate alignments can be obtained with little or no supervision, which usually comes in the form of bilingual dictionaries. However, the focus has been on a particular controlled scenario for evaluation, and there is no strong evidence on how current state-of-the-art systems would fare with noisy text or for language pairs with major linguistic differences. In this paper we present an extensive evaluation over multiple cross-lingual embedding models, analyzing their strengths and limitations with respect to different variables such as target language, training corpora and amount of supervision. Our conclusions put in doubt the view that high-quality cross-lingual embeddings can always be learned without much supervision.",,"On the Robustness of Unsupervised and Semi-supervised Cross-lingual Word Embedding Learning. 
Cross-lingual word embeddings are vector representations of words in different languages where words with similar meaning are represented by similar vectors, regardless of the language. Recent developments which construct these embeddings by aligning monolingual spaces have shown that accurate alignments can be obtained with little or no supervision, which usually comes in the form of bilingual dictionaries. However, the focus has been on a particular controlled scenario for evaluation, and there is no strong evidence on how current state-of-the-art systems would fare with noisy text or for language pairs with major linguistic differences. In this paper we present an extensive evaluation over multiple cross-lingual embedding models, analyzing their strengths and limitations with respect to different variables such as target language, training corpora and amount of supervision. Our conclusions put in doubt the view that high-quality cross-lingual embeddings can always be learned without much supervision.",2020
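For the cross-lingual word-embedding entry above, a minimal numpy sketch of dictionary-based Procrustes alignment, the orthogonal-mapping step that many semi-supervised mapping methods of the kind evaluated in such studies rely on; it is not any specific system from the paper, and the matrices below are random stand-ins rather than real embeddings.

```python
# Minimal sketch of dictionary-based Procrustes alignment, the orthogonal-
# mapping step many cross-lingual embedding methods build on (the matrices
# here are random stand-ins, not real embeddings).
import numpy as np

def procrustes(X, Y):
    """Return the orthogonal W minimizing ||X W - Y||_F, where row i of X and
    of Y are the embeddings of a seed-dictionary translation pair."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
src = rng.normal(size=(1000, 300))                   # source-language vectors
W_true = np.linalg.qr(rng.normal(size=(300, 300)))[0]  # hidden rotation
tgt = src @ W_true                                   # synthetic "target" space

seed = slice(0, 500)            # pretend the first 500 rows form the seed dictionary
W = procrustes(src[seed], tgt[seed])
print(np.allclose(src @ W, tgt, atol=1e-6))          # mapping recovers the rotation
```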
naz-etal-2021-fjwu,https://aclanthology.org/2021.wmt-1.86,1,,,,health,,,"FJWU Participation for the WMT21 Biomedical Translation Task. In this paper we present FJWU's system submitted to the biomedical shared task at WMT21. We prepared state-of-the-art multilingual neural machine translation systems for three languages (i.e. German, Spanish and French) with English as target language. Our NMT systems, based on the Transformer architecture, were trained on a combination of in-domain and out-domain parallel corpora developed using Information Retrieval (IR) and domain adaptation techniques.",{FJWU} Participation for the {WMT}21 Biomedical Translation Task,"In this paper we present FJWU's system submitted to the biomedical shared task at WMT21. We prepared state-of-the-art multilingual neural machine translation systems for three languages (i.e. German, Spanish and French) with English as target language. Our NMT systems, based on the Transformer architecture, were trained on a combination of in-domain and out-domain parallel corpora developed using Information Retrieval (IR) and domain adaptation techniques.",FJWU Participation for the WMT21 Biomedical Translation Task,"In this paper we present FJWU's system submitted to the biomedical shared task at WMT21. We prepared state-of-the-art multilingual neural machine translation systems for three languages (i.e. German, Spanish and French) with English as target language. Our NMT systems, based on the Transformer architecture, were trained on a combination of in-domain and out-domain parallel corpora developed using Information Retrieval (IR) and domain adaptation techniques.",This study is funded by the National Research Program for Universities (NRPU) by Higher Education Commission of Pakistan (5469/Punjab/NRPU/R&D/HEC/2016).,"FJWU Participation for the WMT21 Biomedical Translation Task. In this paper we present FJWU's system submitted to the biomedical shared task at WMT21. We prepared state-of-the-art multilingual neural machine translation systems for three languages (i.e. German, Spanish and French) with English as target language. Our NMT systems, based on the Transformer architecture, were trained on a combination of in-domain and out-domain parallel corpora developed using Information Retrieval (IR) and domain adaptation techniques.",2021
reisert-etal-2014-corpus,https://aclanthology.org/W14-4910,1,,,,disinformation_and_fake_news,,,"A Corpus Study for Identifying Evidence on Microblogs. Microblogs are a popular way for users to communicate and have recently caught the attention of researchers in the natural language processing (NLP) field. However, regardless of their rising popularity, little attention has been given towards determining the properties of discourse relations for the rapid, large-scale microblog data. Therefore, given their importance for various NLP tasks, we begin a study of discourse relations on microblogs by focusing on evidence relations. As no annotated corpora for evidence relations on microblogs exist, we conduct a corpus study to identify such relations on Twitter, a popular microblogging service. We create annotation guidelines, conduct a large-scale annotation phase, and develop a corpus of annotated evidence relations. Finally, we report our observations, annotation difficulties, and data statistics.",A Corpus Study for Identifying Evidence on Microblogs,"Microblogs are a popular way for users to communicate and have recently caught the attention of researchers in the natural language processing (NLP) field. However, regardless of their rising popularity, little attention has been given towards determining the properties of discourse relations for the rapid, large-scale microblog data. Therefore, given their importance for various NLP tasks, we begin a study of discourse relations on microblogs by focusing on evidence relations. As no annotated corpora for evidence relations on microblogs exist, we conduct a corpus study to identify such relations on Twitter, a popular microblogging service. We create annotation guidelines, conduct a large-scale annotation phase, and develop a corpus of annotated evidence relations. Finally, we report our observations, annotation difficulties, and data statistics.",A Corpus Study for Identifying Evidence on Microblogs,"Microblogs are a popular way for users to communicate and have recently caught the attention of researchers in the natural language processing (NLP) field. However, regardless of their rising popularity, little attention has been given towards determining the properties of discourse relations for the rapid, large-scale microblog data. Therefore, given their importance for various NLP tasks, we begin a study of discourse relations on microblogs by focusing on evidence relations. As no annotated corpora for evidence relations on microblogs exist, we conduct a corpus study to identify such relations on Twitter, a popular microblogging service. We create annotation guidelines, conduct a large-scale annotation phase, and develop a corpus of annotated evidence relations. Finally, we report our observations, annotation difficulties, and data statistics.","We would like to acknowledge MEXT (Ministry of Education, Culture, Sports, Science and Technology) for their generous financial support via the Research Student Scholarship. This study was partly supported by Japan Society for the Promotion of Science (JSPS) KAKENHI Grant No. 23240018 and Japan Science and Technology Agency (JST). Furthermore, we would like to also thank Eric Nichols (Honda Research Institute Japan Co., Ltd.) for his discussions on the topic of evidence relations.","A Corpus Study for Identifying Evidence on Microblogs. Microblogs are a popular way for users to communicate and have recently caught the attention of researchers in the natural language processing (NLP) field. 
However, regardless of their rising popularity, little attention has been given towards determining the properties of discourse relations for the rapid, large-scale microblog data. Therefore, given their importance for various NLP tasks, we begin a study of discourse relations on microblogs by focusing on evidence relations. As no annotated corpora for evidence relations on microblogs exist, we conduct a corpus study to identify such relations on Twitter, a popular microblogging service. We create annotation guidelines, conduct a large-scale annotation phase, and develop a corpus of annotated evidence relations. Finally, we report our observations, annotation difficulties, and data statistics.",2014
kwong-2009-phonological,https://aclanthology.org/W09-3516,0,,,,,,,Phonological Context Approximation and Homophone Treatment for NEWS 2009 English-Chinese Transliteration Shared Task. This paper describes our systems participating in the NEWS 2009 Machine Transliteration Shared Task. Two runs were submitted for the English-Chinese track. The system for the standard run is based on graphemic approximation of local phonological context. The one for the non-standard run is based on parallel modelling of sound and tone patterns for treating homophones in Chinese. Official results show that both systems stand in the mid range amongst all participating systems.,Phonological Context Approximation and Homophone Treatment for {NEWS} 2009 {E}nglish-{C}hinese Transliteration Shared Task,This paper describes our systems participating in the NEWS 2009 Machine Transliteration Shared Task. Two runs were submitted for the English-Chinese track. The system for the standard run is based on graphemic approximation of local phonological context. The one for the non-standard run is based on parallel modelling of sound and tone patterns for treating homophones in Chinese. Official results show that both systems stand in the mid range amongst all participating systems.,Phonological Context Approximation and Homophone Treatment for NEWS 2009 English-Chinese Transliteration Shared Task,This paper describes our systems participating in the NEWS 2009 Machine Transliteration Shared Task. Two runs were submitted for the English-Chinese track. The system for the standard run is based on graphemic approximation of local phonological context. The one for the non-standard run is based on parallel modelling of sound and tone patterns for treating homophones in Chinese. Official results show that both systems stand in the mid range amongst all participating systems.,The work described in this paper was substantially supported by a grant from City University of Hong Kong (Project No. 7002203).,Phonological Context Approximation and Homophone Treatment for NEWS 2009 English-Chinese Transliteration Shared Task. This paper describes our systems participating in the NEWS 2009 Machine Transliteration Shared Task. Two runs were submitted for the English-Chinese track. The system for the standard run is based on graphemic approximation of local phonological context. The one for the non-standard run is based on parallel modelling of sound and tone patterns for treating homophones in Chinese. Official results show that both systems stand in the mid range amongst all participating systems.,2009
faili-2009-partial,https://aclanthology.org/R09-1014,0,,,,,,,"From Partial toward Full Parsing. Full-Parsing systems able to analyze sentences robustly and completely at an appropriate accuracy can be useful in many computer applications like information retrieval and machine translation systems. Increasing the domain of locality by using tree-adjoining-grammars (TAG) caused some researchers to use it as a modeling formalism in their language application. But parsing with a rich grammar like TAG faces two main obstacles: low parsing speed and a lot of ambiguous syntactical parses. In order to decrease the parse time and these ambiguities, we use an idea of combining statistical chunker based on TAG formalism, with a heuristically rule-based search method to achieve the full parses. The partial parses induced from statistical chunker are basically resulted from a system named supertagger, and are followed by two different phases: error detection and error correction, which in each phase, different completion heuristics apply on the partial parses. The experiments on Penn Treebank show that by using a trained probability model considerable improvement in full-parsing rate is achieved.",From Partial toward Full Parsing,"Full-Parsing systems able to analyze sentences robustly and completely at an appropriate accuracy can be useful in many computer applications like information retrieval and machine translation systems. Increasing the domain of locality by using tree-adjoining-grammars (TAG) caused some researchers to use it as a modeling formalism in their language application. But parsing with a rich grammar like TAG faces two main obstacles: low parsing speed and a lot of ambiguous syntactical parses. In order to decrease the parse time and these ambiguities, we use an idea of combining statistical chunker based on TAG formalism, with a heuristically rule-based search method to achieve the full parses. The partial parses induced from statistical chunker are basically resulted from a system named supertagger, and are followed by two different phases: error detection and error correction, which in each phase, different completion heuristics apply on the partial parses. The experiments on Penn Treebank show that by using a trained probability model considerable improvement in full-parsing rate is achieved.",From Partial toward Full Parsing,"Full-Parsing systems able to analyze sentences robustly and completely at an appropriate accuracy can be useful in many computer applications like information retrieval and machine translation systems. Increasing the domain of locality by using tree-adjoining-grammars (TAG) caused some researchers to use it as a modeling formalism in their language application. But parsing with a rich grammar like TAG faces two main obstacles: low parsing speed and a lot of ambiguous syntactical parses. In order to decrease the parse time and these ambiguities, we use an idea of combining statistical chunker based on TAG formalism, with a heuristically rule-based search method to achieve the full parses. The partial parses induced from statistical chunker are basically resulted from a system named supertagger, and are followed by two different phases: error detection and error correction, which in each phase, different completion heuristics apply on the partial parses. The experiments on Penn Treebank show that by using a trained probability model considerable improvement in full-parsing rate is achieved.",,"From Partial toward Full Parsing. 
Full-Parsing systems able to analyze sentences robustly and completely at an appropriate accuracy can be useful in many computer applications like information retrieval and machine translation systems. Increasing the domain of locality by using tree-adjoining-grammars (TAG) caused some researchers to use it as a modeling formalism in their language application. But parsing with a rich grammar like TAG faces two main obstacles: low parsing speed and a lot of ambiguous syntactical parses. In order to decrease the parse time and these ambiguities, we use an idea of combining statistical chunker based on TAG formalism, with a heuristically rule-based search method to achieve the full parses. The partial parses induced from statistical chunker are basically resulted from a system named supertagger, and are followed by two different phases: error detection and error correction, which in each phase, different completion heuristics apply on the partial parses. The experiments on Penn Treebank show that by using a trained probability model considerable improvement in full-parsing rate is achieved.",2009
gao-vogel-2008-parallel,https://aclanthology.org/W08-0509,0,,,,,,,"Parallel Implementations of Word Alignment Tool. Training word alignment models on large corpora is a very time-consuming process. This paper describes two parallel implementations of GIZA++ that accelerate this word alignment process. One of the implementations runs on computer clusters; the other runs on a multi-processor system using multi-threading technology. Results show a near-linear speedup according to the number of CPUs used, and alignment quality is preserved.",Parallel Implementations of Word Alignment Tool,"Training word alignment models on large corpora is a very time-consuming process. This paper describes two parallel implementations of GIZA++ that accelerate this word alignment process. One of the implementations runs on computer clusters; the other runs on a multi-processor system using multi-threading technology. Results show a near-linear speedup according to the number of CPUs used, and alignment quality is preserved.",Parallel Implementations of Word Alignment Tool,"Training word alignment models on large corpora is a very time-consuming process. This paper describes two parallel implementations of GIZA++ that accelerate this word alignment process. One of the implementations runs on computer clusters; the other runs on a multi-processor system using multi-threading technology. Results show a near-linear speedup according to the number of CPUs used, and alignment quality is preserved.",,"Parallel Implementations of Word Alignment Tool. Training word alignment models on large corpora is a very time-consuming process. This paper describes two parallel implementations of GIZA++ that accelerate this word alignment process. One of the implementations runs on computer clusters; the other runs on a multi-processor system using multi-threading technology. Results show a near-linear speedup according to the number of CPUs used, and alignment quality is preserved.",2008
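For the parallel word-alignment entry above, a toy illustration of the data-parallel idea only: split the corpus into chunks, accumulate IBM-Model-1-style expected counts per chunk in separate processes, then merge. This is not GIZA++ or the paper's implementation, and the corpus below is hypothetical.

```python
# Toy illustration of data-parallel word-alignment counting: chunk the
# corpus, collect IBM-Model-1-style expected counts per chunk in separate
# processes, and merge (not GIZA++; the corpus is hypothetical).
from collections import defaultdict
from multiprocessing import Pool

def chunk_counts(sentence_pairs):
    """One E-step over a corpus chunk with uniform translation probabilities."""
    counts = defaultdict(float)
    for src_sent, tgt_sent in sentence_pairs:
        for tgt in tgt_sent:
            # Uniform t(tgt|src) => each source word gets weight 1/len(src_sent).
            for src in src_sent:
                counts[(src, tgt)] += 1.0 / len(src_sent)
    return counts

def merge(dicts):
    total = defaultdict(float)
    for d in dicts:
        for k, v in d.items():
            total[k] += v
    return total

if __name__ == "__main__":
    corpus = [("das haus".split(), "the house".split()),
              ("das buch".split(), "the book".split())] * 1000
    n_workers = 4
    chunks = [corpus[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        partial = pool.map(chunk_counts, chunks)
    counts = merge(partial)
    print(round(counts[("das", "the")], 1))  # 2000 pairs * 0.5 = 1000.0
```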
theologitis-1997-euramis,https://aclanthology.org/1997.eamt-1.3,0,,,,,,,"EURAMIS, the platform of the EC Translator. Linguistic technology brings new tools to the desktop of the translator: full-text retrieval systems, terminological systems, translation memories and machine translation. These systems are now being integrated into a single, seamless workflow in a large organisation, the Translation Service of the European Commission. SdTVista is used for full-text search of reference documents; Euramis powerful aligner creates translation memories which are stored in a central Linguistic Resources Database; not-found sentences are automatically sent to EC-Systran machine translation; pertinent terminology is retrieved in batch mode from Eurodicautom. The resulting resources are brought together on the translator's workbench. Screen-shots and a demonstration complete the presentation. Dimitri Theologitis Born in Athens, civil engineer, specialised in integrated transportation systems and computers. Opted for a major change of career in 1984 when he joined the Translation Service of the European Commission. Responsible for the Rationalisation of Working Methods from 1990. In 1994 became head of unit Development of Multilingual Computer Aids, a multilingual team active in the technological modernisation of the Translation Service.","{EURAMIS}, the platform of the {EC} Translator","Linguistic technology brings new tools to the desktop of the translator: full-text retrieval systems, terminological systems, translation memories and machine translation. These systems are now being integrated into a single, seamless workflow in a large organisation, the Translation Service of the European Commission. SdTVista is used for full-text search of reference documents; Euramis powerful aligner creates translation memories which are stored in a central Linguistic Resources Database; not-found sentences are automatically sent to EC-Systran machine translation; pertinent terminology is retrieved in batch mode from Eurodicautom. The resulting resources are brought together on the translator's workbench. Screen-shots and a demonstration complete the presentation. Dimitri Theologitis Born in Athens, civil engineer, specialised in integrated transportation systems and computers. Opted for a major change of career in 1984 when he joined the Translation Service of the European Commission. Responsible for the Rationalisation of Working Methods from 1990. In 1994 became head of unit Development of Multilingual Computer Aids, a multilingual team active in the technological modernisation of the Translation Service.","EURAMIS, the platform of the EC Translator","Linguistic technology brings new tools to the desktop of the translator: full-text retrieval systems, terminological systems, translation memories and machine translation. These systems are now being integrated into a single, seamless workflow in a large organisation, the Translation Service of the European Commission. SdTVista is used for full-text search of reference documents; Euramis powerful aligner creates translation memories which are stored in a central Linguistic Resources Database; not-found sentences are automatically sent to EC-Systran machine translation; pertinent terminology is retrieved in batch mode from Eurodicautom. The resulting resources are brought together on the translator's workbench. Screen-shots and a demonstration complete the presentation. 
Dimitri Theologitis Born in Athens, civil engineer, specialised in integrated transportation systems and computers. Opted for a major change of career in 1984 when he joined the Translation Service of the European Commission. Responsible for the Rationalisation of Working Methods from 1990. In 1994 became head of unit Development of Multilingual Computer Aids, a multilingual team active in the technological modernisation of the Translation Service.",,"EURAMIS, the platform of the EC Translator. Linguistic technology brings new tools to the desktop of the translator: full-text retrieval systems, terminological systems, translation memories and machine translation. These systems are now being integrated into a single, seamless workflow in a large organisation, the Translation Service of the European Commission. SdTVista is used for full-text search of reference documents; Euramis powerful aligner creates translation memories which are stored in a central Linguistic Resources Database; not-found sentences are automatically sent to EC-Systran machine translation; pertinent terminology is retrieved in batch mode from Eurodicautom. The resulting resources are brought together on the translator's workbench. Screen-shots and a demonstration complete the presentation. Dimitri Theologitis Born in Athens, civil engineer, specialised in integrated transportation systems and computers. Opted for a major change of career in 1984 when he joined the Translation Service of the European Commission. Responsible for the Rationalisation of Working Methods from 1990. In 1994 became head of unit Development of Multilingual Computer Aids, a multilingual team active in the technological modernisation of the Translation Service.",1997
popescu-belis-etal-2012-discourse,http://www.lrec-conf.org/proceedings/lrec2012/pdf/255_Paper.pdf,0,,,,,,,"Discourse-level Annotation over Europarl for Machine Translation: Connectives and Pronouns. This paper describes methods and results for the annotation of two discourse-level phenomena, connectives and pronouns, over a multilingual parallel corpus. Excerpts from Europarl in English and French have been annotated with disambiguation information for connectives and pronouns, for about 3600 tokens. This data is then used in several ways: for cross-linguistic studies, for training automatic disambiguation software, and ultimately for training and testing discourse-aware statistical machine translation systems. The paper presents the annotation procedures and their results in detail, and overviews the first systems trained on the annotated resources and their use for machine translation.",Discourse-level Annotation over {E}uroparl for Machine Translation: Connectives and Pronouns,"This paper describes methods and results for the annotation of two discourse-level phenomena, connectives and pronouns, over a multilingual parallel corpus. Excerpts from Europarl in English and French have been annotated with disambiguation information for connectives and pronouns, for about 3600 tokens. This data is then used in several ways: for cross-linguistic studies, for training automatic disambiguation software, and ultimately for training and testing discourse-aware statistical machine translation systems. The paper presents the annotation procedures and their results in detail, and overviews the first systems trained on the annotated resources and their use for machine translation.",Discourse-level Annotation over Europarl for Machine Translation: Connectives and Pronouns,"This paper describes methods and results for the annotation of two discourse-level phenomena, connectives and pronouns, over a multilingual parallel corpus. Excerpts from Europarl in English and French have been annotated with disambiguation information for connectives and pronouns, for about 3600 tokens. This data is then used in several ways: for cross-linguistic studies, for training automatic disambiguation software, and ultimately for training and testing discourse-aware statistical machine translation systems. The paper presents the annotation procedures and their results in detail, and overviews the first systems trained on the annotated resources and their use for machine translation.","We are grateful for the funding of this work to the Swiss National Science Foundation (SNSF), under its Sinergia program, grant n. CRSI22 127510. The resources described in this article will be made available through the project's website (www.idiap.ch/comtis) in the near future.","Discourse-level Annotation over Europarl for Machine Translation: Connectives and Pronouns. This paper describes methods and results for the annotation of two discourse-level phenomena, connectives and pronouns, over a multilingual parallel corpus. Excerpts from Europarl in English and French have been annotated with disambiguation information for connectives and pronouns, for about 3600 tokens. This data is then used in several ways: for cross-linguistic studies, for training automatic disambiguation software, and ultimately for training and testing discourse-aware statistical machine translation systems. 
The paper presents the annotation procedures and their results in detail, and overviews the first systems trained on the annotated resources and their use for machine translation.",2012
seddah-etal-2012-ubiquitous,http://www.lrec-conf.org/proceedings/lrec2012/pdf/1130_Paper.pdf,0,,,,,,,"Ubiquitous Usage of a Broad Coverage French Corpus: Processing the Est Republicain corpus. In this paper, we introduce a set of resources that we have derived from the EST RÉPUBLICAIN CORPUS, a large, freely-available collection of regional newspaper articles in French, totaling 150 million words. Our resources are the result of a full NLP treatment of the EST RÉPUBLICAIN CORPUS: handling of multi-word expressions, lemmatization, part-of-speech tagging, and syntactic parsing. Processing of the corpus is carried out using statistical machine-learning approaches-joint model of data driven lemmatization and partof-speech tagging, PCFG-LA and dependency based models for parsing-that have been shown to achieve state-of-the-art performance when evaluated on the French Treebank. Our derived resources are made freely available, and released according to the original Creative Common license for the EST RÉPUBLICAIN CORPUS. We additionally provide an overview of the use of these resources in various applications, in particular the use of generated word clusters from the corpus to alleviate lexical data sparseness for statistical parsing.",Ubiquitous Usage of a Broad Coverage {F}rench Corpus: Processing the {E}st {R}epublicain corpus,"In this paper, we introduce a set of resources that we have derived from the EST RÉPUBLICAIN CORPUS, a large, freely-available collection of regional newspaper articles in French, totaling 150 million words. Our resources are the result of a full NLP treatment of the EST RÉPUBLICAIN CORPUS: handling of multi-word expressions, lemmatization, part-of-speech tagging, and syntactic parsing. Processing of the corpus is carried out using statistical machine-learning approaches-joint model of data driven lemmatization and partof-speech tagging, PCFG-LA and dependency based models for parsing-that have been shown to achieve state-of-the-art performance when evaluated on the French Treebank. Our derived resources are made freely available, and released according to the original Creative Common license for the EST RÉPUBLICAIN CORPUS. We additionally provide an overview of the use of these resources in various applications, in particular the use of generated word clusters from the corpus to alleviate lexical data sparseness for statistical parsing.",Ubiquitous Usage of a Broad Coverage French Corpus: Processing the Est Republicain corpus,"In this paper, we introduce a set of resources that we have derived from the EST RÉPUBLICAIN CORPUS, a large, freely-available collection of regional newspaper articles in French, totaling 150 million words. Our resources are the result of a full NLP treatment of the EST RÉPUBLICAIN CORPUS: handling of multi-word expressions, lemmatization, part-of-speech tagging, and syntactic parsing. Processing of the corpus is carried out using statistical machine-learning approaches-joint model of data driven lemmatization and partof-speech tagging, PCFG-LA and dependency based models for parsing-that have been shown to achieve state-of-the-art performance when evaluated on the French Treebank. Our derived resources are made freely available, and released according to the original Creative Common license for the EST RÉPUBLICAIN CORPUS. 
We additionally provide an overview of the use of these resources in various applications, in particular the use of generated word clusters from the corpus to alleviate lexical data sparseness for statistical parsing.",We are very grateful to Bertrand Gaiffe and Kamel Nehbi from the CNRTL for making this corpus available. Many thanks to Grzegorz Chrupala for making MORFETTE available to us and for providing unlimited support on this work. This work was partly supported by the ANR Sequoia (ANR-08-EMER-013).,"Ubiquitous Usage of a Broad Coverage French Corpus: Processing the Est Republicain corpus. In this paper, we introduce a set of resources that we have derived from the EST RÉPUBLICAIN CORPUS, a large, freely-available collection of regional newspaper articles in French, totaling 150 million words. Our resources are the result of a full NLP treatment of the EST RÉPUBLICAIN CORPUS: handling of multi-word expressions, lemmatization, part-of-speech tagging, and syntactic parsing. Processing of the corpus is carried out using statistical machine-learning approaches-joint model of data driven lemmatization and partof-speech tagging, PCFG-LA and dependency based models for parsing-that have been shown to achieve state-of-the-art performance when evaluated on the French Treebank. Our derived resources are made freely available, and released according to the original Creative Common license for the EST RÉPUBLICAIN CORPUS. We additionally provide an overview of the use of these resources in various applications, in particular the use of generated word clusters from the corpus to alleviate lexical data sparseness for statistical parsing.",2012
berend-vincze-2012-evaluate,https://aclanthology.org/W12-3715,0,,,,,,,How to Evaluate Opinionated Keyphrase Extraction?. Evaluation often denotes a key issue in semantics-or subjectivity-related tasks. Here we discuss the difficulties of evaluating opinionated keyphrase extraction. We present our method to reduce the subjectivity of the task and to alleviate the evaluation process and we also compare the results of human and machine-based evaluation.,How to Evaluate Opinionated Keyphrase Extraction?,Evaluation often denotes a key issue in semantics-or subjectivity-related tasks. Here we discuss the difficulties of evaluating opinionated keyphrase extraction. We present our method to reduce the subjectivity of the task and to alleviate the evaluation process and we also compare the results of human and machine-based evaluation.,How to Evaluate Opinionated Keyphrase Extraction?,Evaluation often denotes a key issue in semantics-or subjectivity-related tasks. Here we discuss the difficulties of evaluating opinionated keyphrase extraction. We present our method to reduce the subjectivity of the task and to alleviate the evaluation process and we also compare the results of human and machine-based evaluation.,This work was supported in part by the NIH grant (project codename MASZEKER) of the Hungarian government.,How to Evaluate Opinionated Keyphrase Extraction?. Evaluation often denotes a key issue in semantics-or subjectivity-related tasks. Here we discuss the difficulties of evaluating opinionated keyphrase extraction. We present our method to reduce the subjectivity of the task and to alleviate the evaluation process and we also compare the results of human and machine-based evaluation.,2012
borin-etal-2014-linguistic,http://www.lrec-conf.org/proceedings/lrec2014/pdf/159_Paper.pdf,0,,,,,,,"Linguistic landscaping of South Asia using digital language resources: Genetic vs. areal linguistics. Like many other research fields, linguistics is entering the age of big data. We are now at a point where it is possible to see how new research questions can be formulated-and old research questions addressed from a new angle or established results verified-on the basis of exhaustive collections of data, rather than small, carefully selected samples. For example, South Asia is often mentioned in the literature as a classic example of a linguistic area, but there is no systematic, empirical study substantiating this claim. Examination of genealogical and areal relationships among South Asian languages requires a large-scale quantitative and qualitative comparative study, encompassing more than one language family. Further, such a study cannot be conducted manually, but needs to draw on extensive digitized language resources and state-of-the-art computational tools. We present some preliminary results of our large-scale investigation of the genealogical and areal relationships among the languages of this region, based on the linguistic descriptions available in the 19 tomes of Grierson's monumental Linguistic Survey of India (1903-1927), which is currently being digitized with the aim of turning the linguistic information in the LSI into a digital language resource suitable for a broad array of linguistic investigations.",Linguistic landscaping of {S}outh {A}sia using digital language resources: Genetic vs. areal linguistics,"Like many other research fields, linguistics is entering the age of big data. We are now at a point where it is possible to see how new research questions can be formulated-and old research questions addressed from a new angle or established results verified-on the basis of exhaustive collections of data, rather than small, carefully selected samples. For example, South Asia is often mentioned in the literature as a classic example of a linguistic area, but there is no systematic, empirical study substantiating this claim. Examination of genealogical and areal relationships among South Asian languages requires a large-scale quantitative and qualitative comparative study, encompassing more than one language family. Further, such a study cannot be conducted manually, but needs to draw on extensive digitized language resources and state-of-the-art computational tools. We present some preliminary results of our large-scale investigation of the genealogical and areal relationships among the languages of this region, based on the linguistic descriptions available in the 19 tomes of Grierson's monumental Linguistic Survey of India (1903-1927), which is currently being digitized with the aim of turning the linguistic information in the LSI into a digital language resource suitable for a broad array of linguistic investigations.",Linguistic landscaping of South Asia using digital language resources: Genetic vs. areal linguistics,"Like many other research fields, linguistics is entering the age of big data. We are now at a point where it is possible to see how new research questions can be formulated-and old research questions addressed from a new angle or established results verified-on the basis of exhaustive collections of data, rather than small, carefully selected samples. 
For example, South Asia is often mentioned in the literature as a classic example of a linguistic area, but there is no systematic, empirical study substantiating this claim. Examination of genealogical and areal relationships among South Asian languages requires a large-scale quantitative and qualitative comparative study, encompassing more than one language family. Further, such a study cannot be conducted manually, but needs to draw on extensive digitized language resources and state-of-the-art computational tools. We present some preliminary results of our large-scale investigation of the genealogical and areal relationships among the languages of this region, based on the linguistic descriptions available in the 19 tomes of Grierson's monumental Linguistic Survey of India (1903-1927), which is currently being digitized with the aim of turning the linguistic information in the LSI into a digital language resource suitable for a broad array of linguistic investigations.",,"Linguistic landscaping of South Asia using digital language resources: Genetic vs. areal linguistics. Like many other research fields, linguistics is entering the age of big data. We are now at a point where it is possible to see how new research questions can be formulated-and old research questions addressed from a new angle or established results verified-on the basis of exhaustive collections of data, rather than small, carefully selected samples. For example, South Asia is often mentioned in the literature as a classic example of a linguistic area, but there is no systematic, empirical study substantiating this claim. Examination of genealogical and areal relationships among South Asian languages requires a large-scale quantitative and qualitative comparative study, encompassing more than one language family. Further, such a study cannot be conducted manually, but needs to draw on extensive digitized language resources and state-of-the-art computational tools. We present some preliminary results of our large-scale investigation of the genealogical and areal relationships among the languages of this region, based on the linguistic descriptions available in the 19 tomes of Grierson's monumental Linguistic Survey of India (1903-1927), which is currently being digitized with the aim of turning the linguistic information in the LSI into a digital language resource suitable for a broad array of linguistic investigations.",2014
ling-etal-2011-reordering,https://aclanthology.org/P11-2079,0,,,,,,,"Reordering Modeling using Weighted Alignment Matrices. In most statistical machine translation systems, the phrase/rule extraction algorithm uses alignments in the 1-best form, which might contain spurious alignment points. The usage of weighted alignment matrices that encode all possible alignments has been shown to generate better phrase tables for phrase-based systems. We propose two algorithms to generate the well known MSD reordering model using weighted alignment matrices. Experiments on the IWSLT 2010 evaluation datasets for two language pairs with different alignment algorithms show that our methods produce more accurate reordering models, as can be shown by an increase over the regular MSD models of 0.4 BLEU points in the BTEC French to English test set, and of 1.5 BLEU points in the DIALOG Chinese to English test set.",Reordering Modeling using Weighted Alignment Matrices,"In most statistical machine translation systems, the phrase/rule extraction algorithm uses alignments in the 1-best form, which might contain spurious alignment points. The usage of weighted alignment matrices that encode all possible alignments has been shown to generate better phrase tables for phrase-based systems. We propose two algorithms to generate the well known MSD reordering model using weighted alignment matrices. Experiments on the IWSLT 2010 evaluation datasets for two language pairs with different alignment algorithms show that our methods produce more accurate reordering models, as can be shown by an increase over the regular MSD models of 0.4 BLEU points in the BTEC French to English test set, and of 1.5 BLEU points in the DIALOG Chinese to English test set.",Reordering Modeling using Weighted Alignment Matrices,"In most statistical machine translation systems, the phrase/rule extraction algorithm uses alignments in the 1-best form, which might contain spurious alignment points. The usage of weighted alignment matrices that encode all possible alignments has been shown to generate better phrase tables for phrase-based systems. We propose two algorithms to generate the well known MSD reordering model using weighted alignment matrices. Experiments on the IWSLT 2010 evaluation datasets for two language pairs with different alignment algorithms show that our methods produce more accurate reordering models, as can be shown by an increase over the regular MSD models of 0.4 BLEU points in the BTEC French to English test set, and of 1.5 BLEU points in the DIALOG Chinese to English test set.","This work was partially supported by FCT (INESC-ID multiannual funding) through the PIDDAC Program funds, and also through projects CMU-PT/HuMach/0039/2008 and CMU-PT/0005/2007. The PhD thesis of Tiago Luís is supported by FCT grant SFRH/BD/62151/2009. The PhD thesis of Wang Ling is supported by FCT grant SFRH/BD/51157/2010. The authors also wish to thank the anonymous reviewers for many helpful comments.","Reordering Modeling using Weighted Alignment Matrices. In most statistical machine translation systems, the phrase/rule extraction algorithm uses alignments in the 1-best form, which might contain spurious alignment points. The usage of weighted alignment matrices that encode all possible alignments has been shown to generate better phrase tables for phrase-based systems. We propose two algorithms to generate the well known MSD reordering model using weighted alignment matrices. 
Experiments on the IWSLT 2010 evaluation datasets for two language pairs with different alignment algorithms show that our methods produce more accurate reordering models, as can be shown by an increase over the regular MSD models of 0.4 BLEU points in the BTEC French to English test set, and of 1.5 BLEU points in the DIALOG Chinese to English test set.",2011
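For the reordering-model entry above, a sketch of the standard word-based MSD (monotone/swap/discontinuous) orientation heuristic computed from a single 1-best alignment, that is, the baseline that weighted-alignment-matrix models generalize; the weighted variants themselves are not reproduced, and the alignment below is hypothetical.

```python
# Sketch of the word-based MSD orientation heuristic over a 1-best alignment,
# the baseline generalized by weighted-alignment-matrix reordering models
# (the weighted variants are not reproduced; the alignment is hypothetical).
def msd_orientation(alignment, src_span, tgt_span):
    """Classify a phrase pair's orientation with respect to the previous
    target phrase. `alignment` is a set of (src_idx, tgt_idx) points and the
    spans are inclusive (start, end) index pairs."""
    src_start, src_end = src_span
    tgt_start, _ = tgt_span
    if (src_start - 1, tgt_start - 1) in alignment:
        return "monotone"
    if (src_end + 1, tgt_start - 1) in alignment:
        return "swap"
    return "discontinuous"

# Hypothetical 1-best alignment for a 4-word sentence pair.
alignment = {(0, 0), (1, 2), (2, 1), (3, 3)}
print(msd_orientation(alignment, src_span=(1, 1), tgt_span=(2, 2)))  # -> 'swap'
```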
kato-etal-2006-woz,https://aclanthology.org/W06-3002,0,,,,,,,"WoZ Simulation of Interactive Question Answering. QACIAD (Question Answering Challenge for Information Access Dialogue) is an evaluation framework for measuring interactive question answering (QA) technologies. It assumes that users interactively collect information using a QA system for writing a report on a given topic and evaluates, among other things, the capabilities needed under such circumstances. This paper reports an experiment for examining the assumptions made by QACIAD. In this experiment, dialogues under the situation that QACIAD assumes are collected using WoZ (Wizard of Oz) simulating, which is frequently used for collecting dialogue data for designing speech dialogue systems, and then analyzed. The results indicate that the setting of QACIAD is real and appropriate and that one of the important capabilities for future interactive QA systems is providing cooperative and helpful responses.",{W}o{Z} Simulation of Interactive Question Answering,"QACIAD (Question Answering Challenge for Information Access Dialogue) is an evaluation framework for measuring interactive question answering (QA) technologies. It assumes that users interactively collect information using a QA system for writing a report on a given topic and evaluates, among other things, the capabilities needed under such circumstances. This paper reports an experiment for examining the assumptions made by QACIAD. In this experiment, dialogues under the situation that QACIAD assumes are collected using WoZ (Wizard of Oz) simulating, which is frequently used for collecting dialogue data for designing speech dialogue systems, and then analyzed. The results indicate that the setting of QACIAD is real and appropriate and that one of the important capabilities for future interactive QA systems is providing cooperative and helpful responses.",WoZ Simulation of Interactive Question Answering,"QACIAD (Question Answering Challenge for Information Access Dialogue) is an evaluation framework for measuring interactive question answering (QA) technologies. It assumes that users interactively collect information using a QA system for writing a report on a given topic and evaluates, among other things, the capabilities needed under such circumstances. This paper reports an experiment for examining the assumptions made by QACIAD. In this experiment, dialogues under the situation that QACIAD assumes are collected using WoZ (Wizard of Oz) simulating, which is frequently used for collecting dialogue data for designing speech dialogue systems, and then analyzed. The results indicate that the setting of QACIAD is real and appropriate and that one of the important capabilities for future interactive QA systems is providing cooperative and helpful responses.",,"WoZ Simulation of Interactive Question Answering. QACIAD (Question Answering Challenge for Information Access Dialogue) is an evaluation framework for measuring interactive question answering (QA) technologies. It assumes that users interactively collect information using a QA system for writing a report on a given topic and evaluates, among other things, the capabilities needed under such circumstances. This paper reports an experiment for examining the assumptions made by QACIAD. In this experiment, dialogues under the situation that QACIAD assumes are collected using WoZ (Wizard of Oz) simulating, which is frequently used for collecting dialogue data for designing speech dialogue systems, and then analyzed. 
The results indicate that the setting of QACIAD is real and appropriate and that one of the important capabilities for future interactive QA systems is providing cooperative and helpful responses.",2006
guo-etal-2016-unified,https://aclanthology.org/C16-1120,0,,,,,,,"A Unified Architecture for Semantic Role Labeling and Relation Classification. This paper describes a unified neural architecture for identifying and classifying multi-typed semantic relations between words in a sentence. We investigate two typical and well-studied tasks: semantic role labeling (SRL) which identifies the relations between predicates and arguments, and relation classification (RC) which focuses on the relation between two entities or nominals. While mostly studied separately in prior work, we show that the two tasks can be effectively connected and modeled using a general architecture. Experiments on CoNLL-2009 benchmark datasets show that our SRL models significantly outperform state-of-the-art approaches. Our RC models also yield competitive performance with the best published records. Furthermore, we show that the two tasks can be trained jointly with multi-task learning, resulting in additive significant improvements for SRL.",A Unified Architecture for Semantic Role Labeling and Relation Classification,"This paper describes a unified neural architecture for identifying and classifying multi-typed semantic relations between words in a sentence. We investigate two typical and well-studied tasks: semantic role labeling (SRL) which identifies the relations between predicates and arguments, and relation classification (RC) which focuses on the relation between two entities or nominals. While mostly studied separately in prior work, we show that the two tasks can be effectively connected and modeled using a general architecture. Experiments on CoNLL-2009 benchmark datasets show that our SRL models significantly outperform state-of-the-art approaches. Our RC models also yield competitive performance with the best published records. Furthermore, we show that the two tasks can be trained jointly with multi-task learning, resulting in additive significant improvements for SRL.",A Unified Architecture for Semantic Role Labeling and Relation Classification,"This paper describes a unified neural architecture for identifying and classifying multi-typed semantic relations between words in a sentence. We investigate two typical and well-studied tasks: semantic role labeling (SRL) which identifies the relations between predicates and arguments, and relation classification (RC) which focuses on the relation between two entities or nominals. While mostly studied separately in prior work, we show that the two tasks can be effectively connected and modeled using a general architecture. Experiments on CoNLL-2009 benchmark datasets show that our SRL models significantly outperform state-of-the-art approaches. Our RC models also yield competitive performance with the best published records. Furthermore, we show that the two tasks can be trained jointly with multi-task learning, resulting in additive significant improvements for SRL.",We are grateful to Tao Lei for providing the outputs of their systems. We thank the anonymous reviewers for their insightful comments and suggestions. This work was supported by the National Key Basic Research Program of China via grant 2014CB340503 and the National Natural Science Foundation of China (NSFC) via grant 61300113 and 61370164.,"A Unified Architecture for Semantic Role Labeling and Relation Classification. This paper describes a unified neural architecture for identifying and classifying multi-typed semantic relations between words in a sentence. 
We investigate two typical and well-studied tasks: semantic role labeling (SRL) which identifies the relations between predicates and arguments, and relation classification (RC) which focuses on the relation between two entities or nominals. While mostly studied separately in prior work, we show that the two tasks can be effectively connected and modeled using a general architecture. Experiments on CoNLL-2009 benchmark datasets show that our SRL models significantly outperform state-of-the-art approaches. Our RC models also yield competitive performance with the best published records. Furthermore, we show that the two tasks can be trained jointly with multi-task learning, resulting in additive significant improvements for SRL.",2016
nurani-venkitasubramanian-etal-2017-learning,https://aclanthology.org/W17-2003,0,,,,,,,"Learning to Recognize Animals by Watching Documentaries: Using Subtitles as Weak Supervision. We investigate animal recognition models learned from wildlife video documentaries by using the weak supervision of the textual subtitles. This is a challenging setting, since i) the animals occur in their natural habitat and are often largely occluded and ii) subtitles are to a great degree complementary to the visual content, providing a very weak supervisory signal. This is in contrast to most work on integrated vision and language in the literature, where textual descriptions are tightly linked to the image content, and often generated in a curated fashion for the task at hand. We investigate different image representations and models, in particular a support vector machine on top of activations of a pretrained convolutional neural network, as well as a Naive Bayes framework on a 'bag-of-activations' image representation, where each element of the bag is considered separately. This representation allows key components in the image to be isolated, in spite of vastly varying backgrounds and image clutter, without an object detection or image segmentation step. The methods are evaluated based on how well they transfer to unseen camera-trap images captured across diverse topographical regions under different environmental conditions and illumination settings, involving a large domain shift.",Learning to Recognize Animals by Watching Documentaries: Using Subtitles as Weak Supervision,"We investigate animal recognition models learned from wildlife video documentaries by using the weak supervision of the textual subtitles. This is a challenging setting, since i) the animals occur in their natural habitat and are often largely occluded and ii) subtitles are to a great degree complementary to the visual content, providing a very weak supervisory signal. This is in contrast to most work on integrated vision and language in the literature, where textual descriptions are tightly linked to the image content, and often generated in a curated fashion for the task at hand. We investigate different image representations and models, in particular a support vector machine on top of activations of a pretrained convolutional neural network, as well as a Naive Bayes framework on a 'bag-of-activations' image representation, where each element of the bag is considered separately. This representation allows key components in the image to be isolated, in spite of vastly varying backgrounds and image clutter, without an object detection or image segmentation step. The methods are evaluated based on how well they transfer to unseen camera-trap images captured across diverse topographical regions under different environmental conditions and illumination settings, involving a large domain shift.",Learning to Recognize Animals by Watching Documentaries: Using Subtitles as Weak Supervision,"We investigate animal recognition models learned from wildlife video documentaries by using the weak supervision of the textual subtitles. This is a challenging setting, since i) the animals occur in their natural habitat and are often largely occluded and ii) subtitles are to a great degree complementary to the visual content, providing a very weak supervisory signal. 
This is in contrast to most work on integrated vision and language in the literature, where textual descriptions are tightly linked to the image content, and often generated in a curated fashion for the task at hand. We investigate different image representations and models, in particular a support vector machine on top of activations of a pretrained convolutional neural network, as well as a Naive Bayes framework on a 'bag-of-activations' image representation, where each element of the bag is considered separately. This representation allows key components in the image to be isolated, in spite of vastly varying backgrounds and image clutter, without an object detection or image segmentation step. The methods are evaluated based on how well they transfer to unseen camera-trap images captured across diverse topographical regions under different environmental conditions and illumination settings, involving a large domain shift.",,"Learning to Recognize Animals by Watching Documentaries: Using Subtitles as Weak Supervision. We investigate animal recognition models learned from wildlife video documentaries by using the weak supervision of the textual subtitles. This is a challenging setting, since i) the animals occur in their natural habitat and are often largely occluded and ii) subtitles are to a great degree complementary to the visual content, providing a very weak supervisory signal. This is in contrast to most work on integrated vision and language in the literature, where textual descriptions are tightly linked to the image content, and often generated in a curated fashion for the task at hand. We investigate different image representations and models, in particular a support vector machine on top of activations of a pretrained convolutional neural network, as well as a Naive Bayes framework on a 'bag-of-activations' image representation, where each element of the bag is considered separately. This representation allows key components in the image to be isolated, in spite of vastly varying backgrounds and image clutter, without an object detection or image segmentation step. The methods are evaluated based on how well they transfer to unseen camera-trap images captured across diverse topographical regions under different environmental conditions and illumination settings, involving a large domain shift.",2017
pethe-etal-2020-chapter,https://aclanthology.org/2020.emnlp-main.672,0,,,,,,,"Chapter Captor: Text Segmentation in Novels. Books are typically segmented into chapters and sections, representing coherent subnarratives and topics. We investigate the task of predicting chapter boundaries, as a proxy for the general task of segmenting long texts. We build a Project Gutenberg chapter segmentation data set of 9,126 English novels, using a hybrid approach combining neural inference and rule matching to recognize chapter title headers in books, achieving an F1-score of 0.77 on this task. Using this annotated data as ground truth after removing structural cues, we present cut-based and neural methods for chapter segmentation, achieving an F1-score of 0.453 on the challenging task of exact break prediction over book-length documents. Finally, we reveal interesting historical trends in the chapter structure of novels.",{C}hapter {C}aptor: {T}ext {S}egmentation in {N}ovels,"Books are typically segmented into chapters and sections, representing coherent subnarratives and topics. We investigate the task of predicting chapter boundaries, as a proxy for the general task of segmenting long texts. We build a Project Gutenberg chapter segmentation data set of 9,126 English novels, using a hybrid approach combining neural inference and rule matching to recognize chapter title headers in books, achieving an F1-score of 0.77 on this task. Using this annotated data as ground truth after removing structural cues, we present cut-based and neural methods for chapter segmentation, achieving an F1-score of 0.453 on the challenging task of exact break prediction over book-length documents. Finally, we reveal interesting historical trends in the chapter structure of novels.",Chapter Captor: Text Segmentation in Novels,"Books are typically segmented into chapters and sections, representing coherent subnarratives and topics. We investigate the task of predicting chapter boundaries, as a proxy for the general task of segmenting long texts. We build a Project Gutenberg chapter segmentation data set of 9,126 English novels, using a hybrid approach combining neural inference and rule matching to recognize chapter title headers in books, achieving an F1-score of 0.77 on this task. Using this annotated data as ground truth after removing structural cues, we present cut-based and neural methods for chapter segmentation, achieving an F1-score of 0.453 on the challenging task of exact break prediction over book-length documents. Finally, we reveal interesting historical trends in the chapter structure of novels.","We thank the anonymous reviewers for their helpful feedback. This work was partially supported by NSF grants IIS-1926751, IIS-1927227, and IIS-1546113. ","Chapter Captor: Text Segmentation in Novels. Books are typically segmented into chapters and sections, representing coherent subnarratives and topics. We investigate the task of predicting chapter boundaries, as a proxy for the general task of segmenting long texts. We build a Project Gutenberg chapter segmentation data set of 9,126 English novels, using a hybrid approach combining neural inference and rule matching to recognize chapter title headers in books, achieving an F1-score of 0.77 on this task. Using this annotated data as ground truth after removing structural cues, we present cut-based and neural methods for chapter segmentation, achieving an F1-score of 0.453 on the challenging task of exact break prediction over book-length documents. 
Finally, we reveal interesting historical trends in the chapter structure of novels.",2020
piasecki-etal-2012-recognition,http://www.lrec-conf.org/proceedings/lrec2012/pdf/926_Paper.pdf,0,,,,,,,"Recognition of Polish Derivational Relations Based on Supervised Learning Scheme. The paper presents construction of Derywator-a language tool for the recognition of Polish derivational relations. It was built on the basis of machine learning in a way following the bootstrapping approach: a limited set of derivational pairs described manually by linguists in plWordNet is used to train Derivator. The tool is intended to be applied in semi-automated expansion of plWordNet with new instances of derivational relations. The training process is based on the construction of two transducers working in the opposite directions: one for prefixes and one for suffixes. Internal stem alternations are recognised, recorded in a form of mapping sequences and stored together with transducers. Raw results produced by Derivator undergo next corpus-based and morphological filtering. A set of derivational relations defined in plWordNet is presented. Results of tests for different derivational relations are discussed. A problem of the necessary corpus-based semantic filtering is analysed. The presented tool depends to a very little extent on the hand-crafted knowledge for a particular language, namely only a table of possible alternations and morphological filtering rules must be exchanged and it should not take longer than a couple of working days.",Recognition of {P}olish Derivational Relations Based on Supervised Learning Scheme,"The paper presents construction of Derywator-a language tool for the recognition of Polish derivational relations. It was built on the basis of machine learning in a way following the bootstrapping approach: a limited set of derivational pairs described manually by linguists in plWordNet is used to train Derivator. The tool is intended to be applied in semi-automated expansion of plWordNet with new instances of derivational relations. The training process is based on the construction of two transducers working in the opposite directions: one for prefixes and one for suffixes. Internal stem alternations are recognised, recorded in a form of mapping sequences and stored together with transducers. Raw results produced by Derivator undergo next corpus-based and morphological filtering. A set of derivational relations defined in plWordNet is presented. Results of tests for different derivational relations are discussed. A problem of the necessary corpus-based semantic filtering is analysed. The presented tool depends to a very little extent on the hand-crafted knowledge for a particular language, namely only a table of possible alternations and morphological filtering rules must be exchanged and it should not take longer than a couple of working days.",Recognition of Polish Derivational Relations Based on Supervised Learning Scheme,"The paper presents construction of Derywator-a language tool for the recognition of Polish derivational relations. It was built on the basis of machine learning in a way following the bootstrapping approach: a limited set of derivational pairs described manually by linguists in plWordNet is used to train Derivator. The tool is intended to be applied in semi-automated expansion of plWordNet with new instances of derivational relations. The training process is based on the construction of two transducers working in the opposite directions: one for prefixes and one for suffixes. 
Internal stem alternations are recognised, recorded in a form of mapping sequences and stored together with transducers. Raw results produced by Derivator undergo next corpus-based and morphological filtering. A set of derivational relations defined in plWordNet is presented. Results of tests for different derivational relations are discussed. A problem of the necessary corpus-based semantic filtering is analysed. The presented tool depends to a very little extent on the hand-crafted knowledge for a particular language, namely only a table of possible alternations and morphological filtering rules must be exchanged and it should not take longer than a couple of working days.","Work financed by the Polish Ministry of Education and Science, Project N N516 068637.","Recognition of Polish Derivational Relations Based on Supervised Learning Scheme. The paper presents construction of Derywator-a language tool for the recognition of Polish derivational relations. It was built on the basis of machine learning in a way following the bootstrapping approach: a limited set of derivational pairs described manually by linguists in plWordNet is used to train Derivator. The tool is intended to be applied in semi-automated expansion of plWordNet with new instances of derivational relations. The training process is based on the construction of two transducers working in the opposite directions: one for prefixes and one for suffixes. Internal stem alternations are recognised, recorded in a form of mapping sequences and stored together with transducers. Raw results produced by Derivator undergo next corpus-based and morphological filtering. A set of derivational relations defined in plWordNet is presented. Results of tests for different derivational relations are discussed. A problem of the necessary corpus-based semantic filtering is analysed. The presented tool depends to a very little extent on the hand-crafted knowledge for a particular language, namely only a table of possible alternations and morphological filtering rules must be exchanged and it should not take longer than a couple of working days.",2012
boudin-etal-2010-clinical,https://aclanthology.org/N10-1124,1,,,,health,,,"Clinical Information Retrieval using Document and PICO Structure. In evidence-based medicine, clinical questions involve four aspects: Patient/Problem (P), Intervention (I), Comparison (C) and Outcome (O), known as PICO elements. In this paper we present a method that extends the language modeling approach to incorporate both document structure and PICO query formulation. We present an analysis of the distribution of PICO elements in medical abstracts that motivates the use of a location-based weighting strategy. In experiments carried out on a collection of 1.5 million abstracts, the method was found to lead to an improvement of roughly 60% in MAP and 70% in P@10 as compared to state-of-the-art methods.",Clinical Information Retrieval using Document and {PICO} Structure,"In evidence-based medicine, clinical questions involve four aspects: Patient/Problem (P), Intervention (I), Comparison (C) and Outcome (O), known as PICO elements. In this paper we present a method that extends the language modeling approach to incorporate both document structure and PICO query formulation. We present an analysis of the distribution of PICO elements in medical abstracts that motivates the use of a location-based weighting strategy. In experiments carried out on a collection of 1.5 million abstracts, the method was found to lead to an improvement of roughly 60% in MAP and 70% in P@10 as compared to state-of-the-art methods.",Clinical Information Retrieval using Document and PICO Structure,"In evidence-based medicine, clinical questions involve four aspects: Patient/Problem (P), Intervention (I), Comparison (C) and Outcome (O), known as PICO elements. In this paper we present a method that extends the language modeling approach to incorporate both document structure and PICO query formulation. We present an analysis of the distribution of PICO elements in medical abstracts that motivates the use of a location-based weighting strategy. In experiments carried out on a collection of 1.5 million abstracts, the method was found to lead to an improvement of roughly 60% in MAP and 70% in P@10 as compared to state-of-the-art methods.","The work described in this paper was funded by the Social Sciences and Humanities Research Council (SSHRC). The authors would like to thank Dr. Ann McKibbon, Dr. Dina Demner-Fushman, Lorie Kloda, Laura Shea, Lucas Baire and Lixin Shi for their contribution in the project.","Clinical Information Retrieval using Document and PICO Structure. In evidence-based medicine, clinical questions involve four aspects: Patient/Problem (P), Intervention (I), Comparison (C) and Outcome (O), known as PICO elements. In this paper we present a method that extends the language modeling approach to incorporate both document structure and PICO query formulation. We present an analysis of the distribution of PICO elements in medical abstracts that motivates the use of a location-based weighting strategy. In experiments carried out on a collection of 1.5 million abstracts, the method was found to lead to an improvement of roughly 60% in MAP and 70% in P@10 as compared to state-of-the-art methods.",2010
safi-samghabadi-etal-2018-ritual,https://aclanthology.org/W18-4402,1,,,,hate_speech,,,"RiTUAL-UH at TRAC 2018 Shared Task: Aggression Identification. This paper presents our system for ""TRAC 2018 Shared Task on Aggression Identification"". Our best systems for the English dataset use a combination of lexical and semantic features. However, for Hindi data, using only lexical features gave us the best results. We obtained weighted F1-measures of 0.5921 for the English Facebook task (ranked 12th), 0.5663 for the English Social Media task (ranked 6th), 0.6292 for the Hindi Facebook task (ranked 1st), and 0.4853 for the Hindi Social Media task (ranked 2nd).",{R}i{TUAL}-{UH} at {TRAC} 2018 Shared Task: Aggression Identification,"This paper presents our system for ""TRAC 2018 Shared Task on Aggression Identification"". Our best systems for the English dataset use a combination of lexical and semantic features. However, for Hindi data, using only lexical features gave us the best results. We obtained weighted F1-measures of 0.5921 for the English Facebook task (ranked 12th), 0.5663 for the English Social Media task (ranked 6th), 0.6292 for the Hindi Facebook task (ranked 1st), and 0.4853 for the Hindi Social Media task (ranked 2nd).",RiTUAL-UH at TRAC 2018 Shared Task: Aggression Identification,"This paper presents our system for ""TRAC 2018 Shared Task on Aggression Identification"". Our best systems for the English dataset use a combination of lexical and semantic features. However, for Hindi data, using only lexical features gave us the best results. We obtained weighted F1-measures of 0.5921 for the English Facebook task (ranked 12th), 0.5663 for the English Social Media task (ranked 6th), 0.6292 for the Hindi Facebook task (ranked 1st), and 0.4853 for the Hindi Social Media task (ranked 2nd).",,"RiTUAL-UH at TRAC 2018 Shared Task: Aggression Identification. This paper presents our system for ""TRAC 2018 Shared Task on Aggression Identification"". Our best systems for the English dataset use a combination of lexical and semantic features. However, for Hindi data, using only lexical features gave us the best results. We obtained weighted F1-measures of 0.5921 for the English Facebook task (ranked 12th), 0.5663 for the English Social Media task (ranked 6th), 0.6292 for the Hindi Facebook task (ranked 1st), and 0.4853 for the Hindi Social Media task (ranked 2nd).",2018
mueller-waibel-2015-using,https://aclanthology.org/2015.iwslt-papers.7,0,,,,,,,Using language adaptive deep neural networks for improved multilingual speech recognition. ,Using language adaptive deep neural networks for improved multilingual speech recognition,,Using language adaptive deep neural networks for improved multilingual speech recognition,,,Using language adaptive deep neural networks for improved multilingual speech recognition. ,2015
silvestre-baquero-mitkov-2017-translation,https://doi.org/10.26615/978-954-452-042-7_006,0,,,,,,,Translation Memory Systems Have a Long Way to Go. ,Translation Memory Systems Have a Long Way to Go,,Translation Memory Systems Have a Long Way to Go,,,Translation Memory Systems Have a Long Way to Go. ,2017
steinberger-etal-2011-jrc,https://aclanthology.org/R11-1015,0,,,,,,,"JRC-NAMES: A Freely Available, Highly Multilingual Named Entity Resource. This paper describes a new, freely available, highly multilingual named entity resource for person and organisation names that has been compiled over seven years of large-scale multilingual news analysis combined with Wikipedia mining, resulting in 205,000 person and organisation names plus about the same number of spelling variants written in over 20 different scripts and in many more languages. This resource, produced as part of the Europe Media Monitor activity (EMM, http://emm.newsbrief.eu/overview.html), can be used for a number of purposes. These include improving name search in databases or on the internet, seeding machine learning systems to learn named entity recognition rules, improve machine translation results, and more. We describe here how this resource was created; we give statistics on its current size; we address the issue of morphological inflection; and we give details regarding its functionality. Updates to this resource will be made available daily.","{JRC}-{NAMES}: A Freely Available, Highly Multilingual Named Entity Resource","This paper describes a new, freely available, highly multilingual named entity resource for person and organisation names that has been compiled over seven years of large-scale multilingual news analysis combined with Wikipedia mining, resulting in 205,000 person and organisation names plus about the same number of spelling variants written in over 20 different scripts and in many more languages. This resource, produced as part of the Europe Media Monitor activity (EMM, http://emm.newsbrief.eu/overview.html), can be used for a number of purposes. These include improving name search in databases or on the internet, seeding machine learning systems to learn named entity recognition rules, improve machine translation results, and more. We describe here how this resource was created; we give statistics on its current size; we address the issue of morphological inflection; and we give details regarding its functionality. Updates to this resource will be made available daily.","JRC-NAMES: A Freely Available, Highly Multilingual Named Entity Resource","This paper describes a new, freely available, highly multilingual named entity resource for person and organisation names that has been compiled over seven years of large-scale multilingual news analysis combined with Wikipedia mining, resulting in 205,000 person and organisation names plus about the same number of spelling variants written in over 20 different scripts and in many more languages. This resource, produced as part of the Europe Media Monitor activity (EMM, http://emm.newsbrief.eu/overview.html), can be used for a number of purposes. These include improving name search in databases or on the internet, seeding machine learning systems to learn named entity recognition rules, improve machine translation results, and more. We describe here how this resource was created; we give statistics on its current size; we address the issue of morphological inflection; and we give details regarding its functionality. Updates to this resource will be made available daily.","The Europe Media Monitor EMM is a multiannual group effort involving many tasks, of which some are much less visible to the outside world. We would thus like to thank all past and present OPTIMA team members for their help and dedication. 
We would also like to thank our Unit Head Delilah Al Khudhairy for her support.","JRC-NAMES: A Freely Available, Highly Multilingual Named Entity Resource. This paper describes a new, freely available, highly multilingual named entity resource for person and organisation names that has been compiled over seven years of large-scale multilingual news analysis combined with Wikipedia mining, resulting in 205,000 person and organisation names plus about the same number of spelling variants written in over 20 different scripts and in many more languages. This resource, produced as part of the Europe Media Monitor activity (EMM, http://emm.newsbrief.eu/overview.html), can be used for a number of purposes. These include improving name search in databases or on the internet, seeding machine learning systems to learn named entity recognition rules, improve machine translation results, and more. We describe here how this resource was created; we give statistics on its current size; we address the issue of morphological inflection; and we give details regarding its functionality. Updates to this resource will be made available daily.",2011
li-etal-2022-seeking,https://aclanthology.org/2022.findings-acl.195,0,,,,,,,"Seeking Patterns, Not just Memorizing Procedures: Contrastive Learning for Solving Math Word Problems. Math Word Problem (MWP) solving needs to discover the quantitative relationships over natural language narratives. Recent work shows that existing models memorize procedures from context and rely on shallow heuristics to solve MWPs. In this paper, we look at this issue and argue that the cause is a lack of overall understanding of MWP patterns. We first investigate how a neural network understands patterns only from semantics, and observe that, if the prototype equations like n_1 + n_2 are the same, most problems get closer representations and those representations apart from them or close to other prototypes tend to produce wrong solutions. Inspired by it, we propose a contrastive learning approach, where the neural network perceives the divergence of patterns. We collect contrastive examples by converting the prototype equation into a tree and seeking similar tree structures. The solving model is trained with an auxiliary objective on the collected examples, resulting in the representations of problems with similar prototypes being pulled closer. We conduct experiments on the Chinese dataset Math23k and the English dataset MathQA. Our method greatly improves the performance in monolingual and multilingual settings.","Seeking Patterns, Not just Memorizing Procedures: Contrastive Learning for Solving Math Word Problems","Math Word Problem (MWP) solving needs to discover the quantitative relationships over natural language narratives. Recent work shows that existing models memorize procedures from context and rely on shallow heuristics to solve MWPs. In this paper, we look at this issue and argue that the cause is a lack of overall understanding of MWP patterns. We first investigate how a neural network understands patterns only from semantics, and observe that, if the prototype equations like n_1 + n_2 are the same, most problems get closer representations and those representations apart from them or close to other prototypes tend to produce wrong solutions. Inspired by it, we propose a contrastive learning approach, where the neural network perceives the divergence of patterns. We collect contrastive examples by converting the prototype equation into a tree and seeking similar tree structures. The solving model is trained with an auxiliary objective on the collected examples, resulting in the representations of problems with similar prototypes being pulled closer. We conduct experiments on the Chinese dataset Math23k and the English dataset MathQA. Our method greatly improves the performance in monolingual and multilingual settings.","Seeking Patterns, Not just Memorizing Procedures: Contrastive Learning for Solving Math Word Problems","Math Word Problem (MWP) solving needs to discover the quantitative relationships over natural language narratives. Recent work shows that existing models memorize procedures from context and rely on shallow heuristics to solve MWPs. In this paper, we look at this issue and argue that the cause is a lack of overall understanding of MWP patterns. We first investigate how a neural network understands patterns only from semantics, and observe that, if the prototype equations like n_1 + n_2 are the same, most problems get closer representations and those representations apart from them or close to other prototypes tend to produce wrong solutions. 
Inspired by it, we propose a contrastive learning approach, where the neural network perceives the divergence of patterns. We collect contrastive examples by converting the prototype equation into a tree and seeking similar tree structures. The solving model is trained with an auxiliary objective on the collected examples, resulting in the representations of problems with similar prototypes being pulled closer. We conduct experiments on the Chinese dataset Math23k and the English dataset MathQA. Our method greatly improves the performance in monolingual and multilingual settings.",,"Seeking Patterns, Not just Memorizing Procedures: Contrastive Learning for Solving Math Word Problems. Math Word Problem (MWP) solving needs to discover the quantitative relationships over natural language narratives. Recent work shows that existing models memorize procedures from context and rely on shallow heuristics to solve MWPs. In this paper, we look at this issue and argue that the cause is a lack of overall understanding of MWP patterns. We first investigate how a neural network understands patterns only from semantics, and observe that, if the prototype equations like n_1 + n_2 are the same, most problems get closer representations and those representations apart from them or close to other prototypes tend to produce wrong solutions. Inspired by it, we propose a contrastive learning approach, where the neural network perceives the divergence of patterns. We collect contrastive examples by converting the prototype equation into a tree and seeking similar tree structures. The solving model is trained with an auxiliary objective on the collected examples, resulting in the representations of problems with similar prototypes being pulled closer. We conduct experiments on the Chinese dataset Math23k and the English dataset MathQA. Our method greatly improves the performance in monolingual and multilingual settings.",2022
sakaguchi-etal-2018-comprehensive,https://aclanthology.org/L18-1050,0,,,,,,,"Comprehensive Annotation of Various Types of Temporal Information on the Time Axis. In order to make the temporal interpretation of text, there have been many studies linking event and temporal information, such as temporal ordering of events and timeline generation. To train and evaluate models in these studies, many corpora that associate event information with time information have been developed. In this paper, we propose an annotation scheme that anchors expressions in text to the time axis comprehensively, extending the previous studies in the following two points. One of the points is to annotate not only expressions with strong temporality but also expressions with weak temporality, such as states and habits. The other point is that various types of temporal information, such as frequency and duration, can be anchored to the time axis. Using this annotation scheme, we annotated a subset of Kyoto University Text Corpus. Since the corpus has already been annotated predicate-argument structures and coreference relations, it can be utilized for integrated information analysis of events, entities and time.",Comprehensive Annotation of Various Types of Temporal Information on the Time Axis,"In order to make the temporal interpretation of text, there have been many studies linking event and temporal information, such as temporal ordering of events and timeline generation. To train and evaluate models in these studies, many corpora that associate event information with time information have been developed. In this paper, we propose an annotation scheme that anchors expressions in text to the time axis comprehensively, extending the previous studies in the following two points. One of the points is to annotate not only expressions with strong temporality but also expressions with weak temporality, such as states and habits. The other point is that various types of temporal information, such as frequency and duration, can be anchored to the time axis. Using this annotation scheme, we annotated a subset of Kyoto University Text Corpus. Since the corpus has already been annotated predicate-argument structures and coreference relations, it can be utilized for integrated information analysis of events, entities and time.",Comprehensive Annotation of Various Types of Temporal Information on the Time Axis,"In order to make the temporal interpretation of text, there have been many studies linking event and temporal information, such as temporal ordering of events and timeline generation. To train and evaluate models in these studies, many corpora that associate event information with time information have been developed. In this paper, we propose an annotation scheme that anchors expressions in text to the time axis comprehensively, extending the previous studies in the following two points. One of the points is to annotate not only expressions with strong temporality but also expressions with weak temporality, such as states and habits. The other point is that various types of temporal information, such as frequency and duration, can be anchored to the time axis. Using this annotation scheme, we annotated a subset of Kyoto University Text Corpus. 
Since the corpus has already been annotated predicate-argument structures and coreference relations, it can be utilized for integrated information analysis of events, entities and time.","This work was partially supported by JST CREST Grant Number JPMJCR1301 including AIP challenge program, Japan. We also thank Manami Ishikawa, Marika Horiuchi and Natsuki Nikaido for their careful annotation.","Comprehensive Annotation of Various Types of Temporal Information on the Time Axis. In order to make the temporal interpretation of text, there have been many studies linking event and temporal information, such as temporal ordering of events and timeline generation. To train and evaluate models in these studies, many corpora that associate event information with time information have been developed. In this paper, we propose an annotation scheme that anchors expressions in text to the time axis comprehensively, extending the previous studies in the following two points. One of the points is to annotate not only expressions with strong temporality but also expressions with weak temporality, such as states and habits. The other point is that various types of temporal information, such as frequency and duration, can be anchored to the time axis. Using this annotation scheme, we annotated a subset of Kyoto University Text Corpus. Since the corpus has already been annotated predicate-argument structures and coreference relations, it can be utilized for integrated information analysis of events, entities and time.",2018
yu-2007-chinese,https://aclanthology.org/N07-2050,0,,,,,,,"Chinese Named Entity Recognition with Cascaded Hybrid Model. We propose a high-performance cascaded hybrid model for Chinese NER. Firstly, we use Boosting, a standard and theoretically well-founded machine learning method to combine a set of weak classifiers together into a base system. Secondly, we introduce various types of heuristic human knowledge into Markov Logic Networks (MLNs), an effective combination of first-order logic and probabilistic graphical models to validate Boosting NER hypotheses. Experimental results show that the cascaded hybrid model significantly outperforms the state-of-the-art Boosting model.",{C}hinese Named Entity Recognition with Cascaded Hybrid Model,"We propose a high-performance cascaded hybrid model for Chinese NER. Firstly, we use Boosting, a standard and theoretically well-founded machine learning method to combine a set of weak classifiers together into a base system. Secondly, we introduce various types of heuristic human knowledge into Markov Logic Networks (MLNs), an effective combination of first-order logic and probabilistic graphical models to validate Boosting NER hypotheses. Experimental results show that the cascaded hybrid model significantly outperforms the state-of-the-art Boosting model.",Chinese Named Entity Recognition with Cascaded Hybrid Model,"We propose a high-performance cascaded hybrid model for Chinese NER. Firstly, we use Boosting, a standard and theoretically well-founded machine learning method to combine a set of weak classifiers together into a base system. Secondly, we introduce various types of heuristic human knowledge into Markov Logic Networks (MLNs), an effective combination of first-order logic and probabilistic graphical models to validate Boosting NER hypotheses. Experimental results show that the cascaded hybrid model significantly outperforms the state-of-the-art Boosting model.",,"Chinese Named Entity Recognition with Cascaded Hybrid Model. We propose a high-performance cascaded hybrid model for Chinese NER. Firstly, we use Boosting, a standard and theoretically well-founded machine learning method to combine a set of weak classifiers together into a base system. Secondly, we introduce various types of heuristic human knowledge into Markov Logic Networks (MLNs), an effective combination of first-order logic and probabilistic graphical models to validate Boosting NER hypotheses. Experimental results show that the cascaded hybrid model significantly outperforms the state-of-the-art Boosting model.",2007
freedman-etal-2011-language,https://aclanthology.org/P11-2059,0,,,,,,,"Language Use: What can it tell us?. For 20 years, information extraction has focused on facts expressed in text. In contrast, this paper is a snapshot of research in progress on inferring properties and relationships among participants in dialogs, even though these properties/relationships need not be expressed as facts. For instance, can a machine detect that someone is attempting to persuade another to action or to change beliefs or is asserting their credibility? We report results on both English and Arabic discussion forums.",Language Use: What can it tell us?,"For 20 years, information extraction has focused on facts expressed in text. In contrast, this paper is a snapshot of research in progress on inferring properties and relationships among participants in dialogs, even though these properties/relationships need not be expressed as facts. For instance, can a machine detect that someone is attempting to persuade another to action or to change beliefs or is asserting their credibility? We report results on both English and Arabic discussion forums.",Language Use: What can it tell us?,"For 20 years, information extraction has focused on facts expressed in text. In contrast, this paper is a snapshot of research in progress on inferring properties and relationships among participants in dialogs, even though these properties/relationships need not be expressed as facts. For instance, can a machine detect that someone is attempting to persuade another to action or to change beliefs or is asserting their credibility? We report results on both English and Arabic discussion forums.","This research was funded by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the _____. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of IARPA, the ODNI or the U.S. Government.","Language Use: What can it tell us?. For 20 years, information extraction has focused on facts expressed in text. In contrast, this paper is a snapshot of research in progress on inferring properties and relationships among participants in dialogs, even though these properties/relationships need not be expressed as facts. For instance, can a machine detect that someone is attempting to persuade another to action or to change beliefs or is asserting their credibility? We report results on both English and Arabic discussion forums.",2011
guinaudeau-strube-2013-graph,https://aclanthology.org/P13-1010,0,,,,,,,"Graph-based Local Coherence Modeling. We propose a computationally efficient graph-based approach for local coherence modeling. We evaluate our system on three tasks: sentence ordering, summary coherence rating and readability assessment. The performance is comparable to entity grid based approaches though these rely on a computationally expensive training phase and face data sparsity problems.",Graph-based Local Coherence Modeling,"We propose a computationally efficient graph-based approach for local coherence modeling. We evaluate our system on three tasks: sentence ordering, summary coherence rating and readability assessment. The performance is comparable to entity grid based approaches though these rely on a computationally expensive training phase and face data sparsity problems.",Graph-based Local Coherence Modeling,"We propose a computationally efficient graph-based approach for local coherence modeling. We evaluate our system on three tasks: sentence ordering, summary coherence rating and readability assessment. The performance is comparable to entity grid based approaches though these rely on a computationally expensive training phase and face data sparsity problems.","Acknowledgments. This work has been funded by the Klaus Tschira Foundation, Heidelberg, Germany. The first author has been supported by a HITS postdoctoral scholarship. We would like to thank Mirella Lapata and Regina Barzilay for making their data available and Micha Elsner for providing his toolkit.","Graph-based Local Coherence Modeling. We propose a computationally efficient graph-based approach for local coherence modeling. We evaluate our system on three tasks: sentence ordering, summary coherence rating and readability assessment. The performance is comparable to entity grid based approaches though these rely on a computationally expensive training phase and face data sparsity problems.",2013
schuster-hegelich-2022-berts,https://aclanthology.org/2022.findings-acl.89,0,,,,,,,"From BERT`s Point of View: Revealing the Prevailing Contextual Differences. Though successfully applied in research and industry large pretrained language models of the BERT family are not yet fully understood. While much research in the field of BERTology has tested whether specific knowledge can be extracted from layer activations, we invert the popular probing design to analyze the prevailing differences and clusters in BERT's high dimensional space. By extracting coarse features from masked token representations and predicting them by probing models with access to only partial information we can apprehend the variation from 'BERT's point of view'. By applying our new methodology to different datasets we show how much the differences can be described by syntax but further how they are to a great extent shaped by the most simple positional information.",From {BERT}{`}s {P}oint of {V}iew: {R}evealing the {P}revailing {C}ontextual {D}ifferences,"Though successfully applied in research and industry large pretrained language models of the BERT family are not yet fully understood. While much research in the field of BERTology has tested whether specific knowledge can be extracted from layer activations, we invert the popular probing design to analyze the prevailing differences and clusters in BERT's high dimensional space. By extracting coarse features from masked token representations and predicting them by probing models with access to only partial information we can apprehend the variation from 'BERT's point of view'. By applying our new methodology to different datasets we show how much the differences can be described by syntax but further how they are to a great extent shaped by the most simple positional information.",From BERT`s Point of View: Revealing the Prevailing Contextual Differences,"Though successfully applied in research and industry large pretrained language models of the BERT family are not yet fully understood. While much research in the field of BERTology has tested whether specific knowledge can be extracted from layer activations, we invert the popular probing design to analyze the prevailing differences and clusters in BERT's high dimensional space. By extracting coarse features from masked token representations and predicting them by probing models with access to only partial information we can apprehend the variation from 'BERT's point of view'. By applying our new methodology to different datasets we show how much the differences can be described by syntax but further how they are to a great extent shaped by the most simple positional information.",This work was supported by the Heinrich Böll Foundation through a doctoral scholarship. We would like to thank the anonymous reviewers for their valuable feedback.,"From BERT`s Point of View: Revealing the Prevailing Contextual Differences. Though successfully applied in research and industry large pretrained language models of the BERT family are not yet fully understood. While much research in the field of BERTology has tested whether specific knowledge can be extracted from layer activations, we invert the popular probing design to analyze the prevailing differences and clusters in BERT's high dimensional space. By extracting coarse features from masked token representations and predicting them by probing models with access to only partial information we can apprehend the variation from 'BERT's point of view'. 
By applying our new methodology to different datasets we show how much the differences can be described by syntax but further how they are to a great extent shaped by the most simple positional information.",2022
hossain-schwitter-2018-specifying,https://aclanthology.org/U18-1005,0,,,,,,,Specifying Conceptual Models Using Restricted Natural Language. The key activity to design an information system is conceptual modelling which brings out and describes the general knowledge that is required to build a system. In this paper we propose a novel approach to conceptual modelling where the domain experts will be able to specify and construct a model using a restricted form of natural language. A restricted natural language is a subset of a natural language that has well-defined computational properties and therefore can be translated unambiguously into a formal notation. We will argue that a restricted natural language is suitable for writing precise and consistent specifications that lead to executable conceptual models. Using a restricted natural language will allow the domain experts to describe a scenario in the terminology of the application domain without the need to formally encode this scenario. The resulting textual specification can then be automatically translated into the language of the desired conceptual modelling framework.,Specifying Conceptual Models Using Restricted Natural Language,The key activity to design an information system is conceptual modelling which brings out and describes the general knowledge that is required to build a system. In this paper we propose a novel approach to conceptual modelling where the domain experts will be able to specify and construct a model using a restricted form of natural language. A restricted natural language is a subset of a natural language that has well-defined computational properties and therefore can be translated unambiguously into a formal notation. We will argue that a restricted natural language is suitable for writing precise and consistent specifications that lead to executable conceptual models. Using a restricted natural language will allow the domain experts to describe a scenario in the terminology of the application domain without the need to formally encode this scenario. The resulting textual specification can then be automatically translated into the language of the desired conceptual modelling framework.,Specifying Conceptual Models Using Restricted Natural Language,The key activity to design an information system is conceptual modelling which brings out and describes the general knowledge that is required to build a system. In this paper we propose a novel approach to conceptual modelling where the domain experts will be able to specify and construct a model using a restricted form of natural language. A restricted natural language is a subset of a natural language that has well-defined computational properties and therefore can be translated unambiguously into a formal notation. We will argue that a restricted natural language is suitable for writing precise and consistent specifications that lead to executable conceptual models. Using a restricted natural language will allow the domain experts to describe a scenario in the terminology of the application domain without the need to formally encode this scenario. The resulting textual specification can then be automatically translated into the language of the desired conceptual modelling framework.,,Specifying Conceptual Models Using Restricted Natural Language. The key activity to design an information system is conceptual modelling which brings out and describes the general knowledge that is required to build a system. 
In this paper we propose a novel approach to conceptual modelling where the domain experts will be able to specify and construct a model using a restricted form of natural language. A restricted natural language is a subset of a natural language that has well-defined computational properties and therefore can be translated unambiguously into a formal notation. We will argue that a restricted natural language is suitable for writing precise and consistent specifications that lead to executable conceptual models. Using a restricted natural language will allow the domain experts to describe a scenario in the terminology of the application domain without the need to formally encode this scenario. The resulting textual specification can then be automatically translated into the language of the desired conceptual modelling framework.,2018
madaan-sadat-2020-multilingual,https://aclanthology.org/2020.wildre-1.6,0,,,,,,,"Multilingual Neural Machine Translation involving Indian Languages. Neural Machine Translation (NMT) models are capable of translating a single bilingual pair and require a new model for each new language pair. Multilingual Neural Machine Translation models are capable of translating multiple language pairs, even pairs they haven't seen before in training. Availability of parallel sentences is a known problem in machine translation. A multilingual NMT model leverages information from all the languages to improve itself and performs better. We propose a data augmentation technique that further improves this model profoundly. The technique helps achieve a jump of more than 15 points in BLEU score from the Multilingual NMT Model. A BLEU score of 36.2 was achieved for Sindhi-English translation, which is higher than any score on the leaderboard of the LoResMT Shared Task at MT Summit 2019, which provided the data for the experiments.",Multilingual Neural Machine Translation involving {I}ndian Languages,"Neural Machine Translation (NMT) models are capable of translating a single bilingual pair and require a new model for each new language pair. Multilingual Neural Machine Translation models are capable of translating multiple language pairs, even pairs they haven't seen before in training. Availability of parallel sentences is a known problem in machine translation. A multilingual NMT model leverages information from all the languages to improve itself and performs better. We propose a data augmentation technique that further improves this model profoundly. The technique helps achieve a jump of more than 15 points in BLEU score from the Multilingual NMT Model. A BLEU score of 36.2 was achieved for Sindhi-English translation, which is higher than any score on the leaderboard of the LoResMT Shared Task at MT Summit 2019, which provided the data for the experiments.",Multilingual Neural Machine Translation involving Indian Languages,"Neural Machine Translation (NMT) models are capable of translating a single bilingual pair and require a new model for each new language pair. Multilingual Neural Machine Translation models are capable of translating multiple language pairs, even pairs they haven't seen before in training. Availability of parallel sentences is a known problem in machine translation. A multilingual NMT model leverages information from all the languages to improve itself and performs better. We propose a data augmentation technique that further improves this model profoundly. The technique helps achieve a jump of more than 15 points in BLEU score from the Multilingual NMT Model. A BLEU score of 36.2 was achieved for Sindhi-English translation, which is higher than any score on the leaderboard of the LoResMT Shared Task at MT Summit 2019, which provided the data for the experiments.",,"Multilingual Neural Machine Translation involving Indian Languages. Neural Machine Translation (NMT) models are capable of translating a single bilingual pair and require a new model for each new language pair. Multilingual Neural Machine Translation models are capable of translating multiple language pairs, even pairs they haven't seen before in training. Availability of parallel sentences is a known problem in machine translation. A multilingual NMT model leverages information from all the languages to improve itself and performs better. 
We propose a data augmentation technique that further improves this model profoundly. The technique helps achieve a jump of more than 15 points in BLEU score from the Multilingual NMT Model. A BLEU score of 36.2 was achieved for Sindhi-English translation, which is higher than any score on the leaderboard of the LoResMT SharedTask at MT Summit 2019, which provided the data for the experiments.",2020
ueffing-etal-2002-generation,https://aclanthology.org/W02-1021,0,,,,,,,Generation of Word Graphs in Statistical Machine Translation. ,Generation of Word Graphs in Statistical Machine Translation,,Generation of Word Graphs in Statistical Machine Translation,,,Generation of Word Graphs in Statistical Machine Translation. ,2002
kuo-chen-2004-event,https://aclanthology.org/W04-0703,0,,,,,,,"Event Clustering on Streaming News Using Co-Reference Chains and Event Words. Event clustering on streaming news aims to group documents by events automatically. This paper employs co-reference chains to extract the most representative sentences, and then uses them to select the most informative features for clustering. Due to the long span of events, a fixed threshold approach prohibits the latter documents to be clustered and thus decreases the performance. A dynamic threshold using time decay function and spanning window is proposed. Besides the noun phrases in co-reference chains, event words in each sentence are also introduced to improve the related performance. Two models are proposed. The experimental results show that both event words and co-reference chains are useful on event clustering.",Event Clustering on Streaming News Using Co-Reference Chains and Event Words,"Event clustering on streaming news aims to group documents by events automatically. This paper employs co-reference chains to extract the most representative sentences, and then uses them to select the most informative features for clustering. Due to the long span of events, a fixed threshold approach prohibits the latter documents to be clustered and thus decreases the performance. A dynamic threshold using time decay function and spanning window is proposed. Besides the noun phrases in co-reference chains, event words in each sentence are also introduced to improve the related performance. Two models are proposed. The experimental results show that both event words and co-reference chains are useful on event clustering.",Event Clustering on Streaming News Using Co-Reference Chains and Event Words,"Event clustering on streaming news aims to group documents by events automatically. This paper employs co-reference chains to extract the most representative sentences, and then uses them to select the most informative features for clustering. Due to the long span of events, a fixed threshold approach prohibits the latter documents to be clustered and thus decreases the performance. A dynamic threshold using time decay function and spanning window is proposed. Besides the noun phrases in co-reference chains, event words in each sentence are also introduced to improve the related performance. Two models are proposed. The experimental results show that both event words and co-reference chains are useful on event clustering.",,"Event Clustering on Streaming News Using Co-Reference Chains and Event Words. Event clustering on streaming news aims to group documents by events automatically. This paper employs co-reference chains to extract the most representative sentences, and then uses them to select the most informative features for clustering. Due to the long span of events, a fixed threshold approach prohibits the latter documents to be clustered and thus decreases the performance. A dynamic threshold using time decay function and spanning window is proposed. Besides the noun phrases in co-reference chains, event words in each sentence are also introduced to improve the related performance. Two models are proposed. The experimental results show that both event words and co-reference chains are useful on event clustering.",2004
chert-etal-1998-ntu,https://aclanthology.org/X98-1022,0,,,,,,,"An NTU-Approach to Automatic Sentence Extraction for Summary Generation. Automatic summarization and information extraction are two important Internet services. MUC and SUMMAC play their appropriate roles in the next generation Internet. This paper focuses on the automatic summarization and proposes two different models to extract sentences for summary generation under two tasks initiated by SUMMAC-1. For categorization task, positive feature vectors and negative feature vectors are used cooperatively to construct generic, indicative summaries. For adhoc task, a text model based on relationship between nouns and verbs is used to filter out irrelevant discourse segment, to rank relevant sentences, and to generate the user-directed summaries. The result shows that the NormF of the best summary and that of the fixed summary for adhoc tasks are 0.456 and 0.447. The NormF of the best summary and that of the fixed summary for categorization task are 0.4090 and 0.4023. Our system outperforms the average system in categorization task but does a common job in adhoc task.",An {NTU}-Approach to Automatic Sentence Extraction for Summary Generation,"Automatic summarization and information extraction are two important Internet services. MUC and SUMMAC play their appropriate roles in the next generation Internet. This paper focuses on the automatic summarization and proposes two different models to extract sentences for summary generation under two tasks initiated by SUMMAC-1. For categorization task, positive feature vectors and negative feature vectors are used cooperatively to construct generic, indicative summaries. For adhoc task, a text model based on relationship between nouns and verbs is used to filter out irrelevant discourse segment, to rank relevant sentences, and to generate the user-directed summaries. The result shows that the NormF of the best summary and that of the fixed summary for adhoc tasks are 0.456 and 0.447. The NormF of the best summary and that of the fixed summary for categorization task are 0.4090 and 0.4023. Our system outperforms the average system in categorization task but does a common job in adhoc task.",An NTU-Approach to Automatic Sentence Extraction for Summary Generation,"Automatic summarization and information extraction are two important Internet services. MUC and SUMMAC play their appropriate roles in the next generation Internet. This paper focuses on the automatic summarization and proposes two different models to extract sentences for summary generation under two tasks initiated by SUMMAC-1. For categorization task, positive feature vectors and negative feature vectors are used cooperatively to construct generic, indicative summaries. For adhoc task, a text model based on relationship between nouns and verbs is used to filter out irrelevant discourse segment, to rank relevant sentences, and to generate the user-directed summaries. The result shows that the NormF of the best summary and that of the fixed summary for adhoc tasks are 0.456 and 0.447. The NormF of the best summary and that of the fixed summary for categorization task are 0.4090 and 0.4023. Our system outperforms the average system in categorization task but does a common job in adhoc task.",,"An NTU-Approach to Automatic Sentence Extraction for Summary Generation. Automatic summarization and information extraction are two important Internet services. MUC and SUMMAC play their appropriate roles in the next generation Internet. 
This paper focuses on the automatic summarization and proposes two different models to extract sentences for summary generation under two tasks initiated by SUMMAC-1. For categorization task, positive feature vectors and negative feature vectors are used cooperatively to construct generic, indicative summaries. For adhoc task, a text model based on relationship between nouns and verbs is used to filter out irrelevant discourse segment, to rank relevant sentences, and to generate the user-directed summaries. The result shows that the NormF of the best summary and that of the fixed summary for adhoc tasks are 0.456 and 0.447. The NormF of the best summary and that of the fixed summary for categorization task are 0.4090 and 0.4023. Our system outperforms the average system in categorization task but does a common job in adhoc task.",1998
kao-chen-2011-diagnosing,https://aclanthology.org/O11-2010,1,,,,education,,,"Diagnosing Discoursal Organization in Learner Writing via Conjunctive Adverbials (診斷學習者英語寫作篇章結構:以篇章連接副詞為例). The present study aims to investigate genre influence on the use and misuse of conjunctive adverbials (hereafter CAs) by compiling a learner corpus annotated with discoursal information on CAs. To do so, an online interface is constructed to collect and annotate data, and an annotating system for identifying the use and misuse of CAs is developed. The results show that genre difference has no impact on the use and misuse of CAs, but that there does exist a norm distribution of textual relations performed by CAs, indicating a preference preset in human cognition. Statistical analysis also shows that the proposed misuse patterns do significantly differ from one another in terms of appropriateness and necessity, ratifying the need to differentiate these misuse patterns. The results in the present study have three possible applications. First, the annotated data can serve as training data for developing technology that automatically diagnoses learner writing on the discoursal level. Second, the finding that textual relations performed by CAs form a distribution norm can be used as a principle to evaluate discoursal organization in learner writing. Lastly, the misuse framework not only identifies the location of misuse of CAs but also indicates direction for correction.",Diagnosing Discoursal Organization in Learner Writing via Conjunctive Adverbials (診斷學習者英語寫作篇章結構:以篇章連接副詞為例),"The present study aims to investigate genre influence on the use and misuse of conjunctive adverbials (hereafter CAs) by compiling a learner corpus annotated with discoursal information on CAs. To do so, an online interface is constructed to collect and annotate data, and an annotating system for identifying the use and misuse of CAs is developed. The results show that genre difference has no impact on the use and misuse of CAs, but that there does exist a norm distribution of textual relations performed by CAs, indicating a preference preset in human cognition. Statistical analysis also shows that the proposed misuse patterns do significantly differ from one another in terms of appropriateness and necessity, ratifying the need to differentiate these misuse patterns. The results in the present study have three possible applications. First, the annotated data can serve as training data for developing technology that automatically diagnoses learner writing on the discoursal level. Second, the finding that textual relations performed by CAs form a distribution norm can be used as a principle to evaluate discoursal organization in learner writing. Lastly, the misuse framework not only identifies the location of misuse of CAs but also indicates direction for correction.",Diagnosing Discoursal Organization in Learner Writing via Conjunctive Adverbials (診斷學習者英語寫作篇章結構:以篇章連接副詞為例),"The present study aims to investigate genre influence on the use and misuse of conjunctive adverbials (hereafter CAs) by compiling a learner corpus annotated with discoursal information on CAs. To do so, an online interface is constructed to collect and annotate data, and an annotating system for identifying the use and misuse of CAs is developed. The results show that genre difference has no impact on the use and misuse of CAs, but that there does exist a norm distribution of textual relations performed by CAs, indicating a preference preset in human cognition. 
Statistical analysis also shows that the proposed misuse patterns do significantly differ from one another in terms of appropriateness and necessity, ratifying the need to differentiate these misuse patterns. The results in the present study have three possible applications. First, the annotated data can serve as training data for developing technology that automatically diagnoses learner writing on the discoursal level. Second, the finding that textual relations performed by CAs form a distribution norm can be used as a principle to evaluate discoursal organization in learner writing. Lastly, the misuse framework not only identifies the location of misuse of CAs but also indicates direction for correction.",,"Diagnosing Discoursal Organization in Learner Writing via Conjunctive Adverbials (診斷學習者英語寫作篇章結構:以篇章連接副詞為例). The present study aims to investigate genre influence on the use and misuse of conjunctive adverbials (hereafter CAs) by compiling a learner corpus annotated with discoursal information on CAs. To do so, an online interface is constructed to collect and annotate data, and an annotating system for identifying the use and misuse of CAs is developed. The results show that genre difference has no impact on the use and misuse of CAs, but that there does exist a norm distribution of textual relations performed by CAs, indicating a preference preset in human cognition. Statistical analysis also shows that the proposed misuse patterns do significantly differ from one another in terms of appropriateness and necessity, ratifying the need to differentiate these misuse patterns. The results in the present study have three possible applications. First, the annotated data can serve as training data for developing technology that automatically diagnoses learner writing on the discoursal level. Second, the finding that textual relations performed by CAs form a distribution norm can be used as a principle to evaluate discoursal organization in learner writing. Lastly, the misuse framework not only identifies the location of misuse of CAs but also indicates direction for correction.",2011
macherey-och-2007-empirical,https://aclanthology.org/D07-1105,0,,,,,,,An Empirical Study on Computing Consensus Translations from Multiple Machine Translation Systems. This paper presents an empirical study on how different selections of input translation systems affect translation quality in system combination. We give empirical evidence that the systems to be combined should be of similar quality and need to be almost uncorrelated in order to be beneficial for system combination. Experimental results are presented for composite translations computed from large numbers of different research systems as well as a set of translation systems derived from one of the best-ranked machine translation engines in the 2006 NIST machine translation evaluation.,An Empirical Study on Computing Consensus Translations from Multiple Machine Translation Systems,This paper presents an empirical study on how different selections of input translation systems affect translation quality in system combination. We give empirical evidence that the systems to be combined should be of similar quality and need to be almost uncorrelated in order to be beneficial for system combination. Experimental results are presented for composite translations computed from large numbers of different research systems as well as a set of translation systems derived from one of the best-ranked machine translation engines in the 2006 NIST machine translation evaluation.,An Empirical Study on Computing Consensus Translations from Multiple Machine Translation Systems,This paper presents an empirical study on how different selections of input translation systems affect translation quality in system combination. We give empirical evidence that the systems to be combined should be of similar quality and need to be almost uncorrelated in order to be beneficial for system combination. Experimental results are presented for composite translations computed from large numbers of different research systems as well as a set of translation systems derived from one of the best-ranked machine translation engines in the 2006 NIST machine translation evaluation.,,An Empirical Study on Computing Consensus Translations from Multiple Machine Translation Systems. This paper presents an empirical study on how different selections of input translation systems affect translation quality in system combination. We give empirical evidence that the systems to be combined should be of similar quality and need to be almost uncorrelated in order to be beneficial for system combination. Experimental results are presented for composite translations computed from large numbers of different research systems as well as a set of translation systems derived from one of the best-ranked machine translation engines in the 2006 NIST machine translation evaluation.,2007
van-noord-bouma-1997-hdrug,https://aclanthology.org/W97-1513,0,,,,,,,"Hdrug. A Flexible and Extendible Development Environment for Natural Language Processing.. Alfa-informatica & BCN, University of Groningen vannoord, gosse@let.rug.nl Hdrug is an environment to develop grammars, parsers and generators for natural languages. The package is written in Sicstus Prolog and Tcl/Tk. The system provides a graphical user interface with a command interpreter, and a number of visualisation tools, including visualisation of feature structures, syntax trees, type hierarchies, lexical hierarchies, feature structure trees, definite clause definitions, grammar rules, lexical entries, and graphs of statistical information of various kinds. Hdrug is designed to be as flexible and extendible as possible. This is illustrated by the fact that Hdrug has been used both for the development of practical realtime systems, but also as a tool to experiment with new theoretical notions and alternative processing strategies. Grammatical formalisms that have been used range from context-free grammars to concatenative feature-based grammars (such as the grammars written for ALE) and nonconcatenative grammars such as Tree Adjoining Grammars.",Hdrug. A Flexible and Extendible Development Environment for Natural Language Processing.,"Alfa-informatica & BCN, University of Groningen {vannoord, gosse}@let.rug.nl Hdrug is an environment to develop grammars, parsers and generators for natural languages. The package is written in Sicstus Prolog and Tcl/Tk. The system provides a graphical user interface with a command interpreter, and a number of visualisation tools, including visualisation of feature structures, syntax trees, type hierarchies, lexical hierarchies, feature structure trees, definite clause definitions, grammar rules, lexical entries, and graphs of statistical information of various kinds. Hdrug is designed to be as flexible and extendible as possible. This is illustrated by the fact that Hdrug has been used both for the development of practical realtime systems, but also as a tool to experiment with new theoretical notions and alternative processing strategies. Grammatical formalisms that have been used range from context-free grammars to concatenative feature-based grammars (such as the grammars written for ALE) and nonconcatenative grammars such as Tree Adjoining Grammars.",Hdrug. A Flexible and Extendible Development Environment for Natural Language Processing.,"Alfa-informatica & BCN, University of Groningen vannoord, gosse@let.rug.nl Hdrug is an environment to develop grammars, parsers and generators for natural languages. The package is written in Sicstus Prolog and Tcl/Tk. The system provides a graphical user interface with a command interpreter, and a number of visualisation tools, including visualisation of feature structures, syntax trees, type hierarchies, lexical hierarchies, feature structure trees, definite clause definitions, grammar rules, lexical entries, and graphs of statistical information of various kinds. Hdrug is designed to be as flexible and extendible as possible. This is illustrated by the fact that Hdrug has been used both for the development of practical realtime systems, but also as a tool to experiment with new theoretical notions and alternative processing strategies. 
Grammatical formalisms that have been used range from context-free grammars to concatenative feature-based grammars (such as the grammars written for ALE) and nonconcatenative grammars such as Tree Adjoining Grammars.",Part of this research is being carried out within the framework of the Priority Programme Language and Speech Technology (TST). The TST-Programme is sponsored by NWO (Dutch Organisation for Scientific Research).,"Hdrug. A Flexible and Extendible Development Environment for Natural Language Processing.. Alfa-informatica & BCN, University of Groningen vannoord, gosse@let.rug.nl Hdrug is an environment to develop grammars, parsers and generators for natural languages. The package is written in Sicstus Prolog and Tcl/Tk. The system provides a graphical user interface with a command interpreter, and a number of visualisation tools, including visualisation of feature structures, syntax trees, type hierarchies, lexical hierarchies, feature structure trees, definite clause definitions, grammar rules, lexical entries, and graphs of statistical information of various kinds. Hdrug is designed to be as flexible and extendible as possible. This is illustrated by the fact that Hdrug has been used both for the development of practical realtime systems, but also as a tool to experiment with new theoretical notions and alternative processing strategies. Grammatical formalisms that have been used range from context-free grammars to concatenative feature-based grammars (such as the grammars written for ALE) and nonconcatenative grammars such as Tree Adjoining Grammars.",1997
langedijk-etal-2022-meta,https://aclanthology.org/2022.acl-long.582,0,,,,,,,"Meta-Learning for Fast Cross-Lingual Adaptation in Dependency Parsing. Meta-learning, or learning to learn, is a technique that can help to overcome resource scarcity in cross-lingual NLP problems, by enabling fast adaptation to new tasks. We apply model-agnostic meta-learning (MAML) to the task of cross-lingual dependency parsing. We train our model on a diverse set of languages to learn a parameter initialization that can adapt quickly to new languages. We find that meta-learning with pre-training can significantly improve upon the performance of language transfer and standard supervised learning baselines for a variety of unseen, typologically diverse, and low-resource languages, in a few-shot learning setup.",Meta-Learning for Fast Cross-Lingual Adaptation in Dependency Parsing,"Meta-learning, or learning to learn, is a technique that can help to overcome resource scarcity in cross-lingual NLP problems, by enabling fast adaptation to new tasks. We apply model-agnostic meta-learning (MAML) to the task of cross-lingual dependency parsing. We train our model on a diverse set of languages to learn a parameter initialization that can adapt quickly to new languages. We find that meta-learning with pre-training can significantly improve upon the performance of language transfer and standard supervised learning baselines for a variety of unseen, typologically diverse, and low-resource languages, in a few-shot learning setup.",Meta-Learning for Fast Cross-Lingual Adaptation in Dependency Parsing,"Meta-learning, or learning to learn, is a technique that can help to overcome resource scarcity in cross-lingual NLP problems, by enabling fast adaptation to new tasks. We apply model-agnostic meta-learning (MAML) to the task of cross-lingual dependency parsing. We train our model on a diverse set of languages to learn a parameter initialization that can adapt quickly to new languages. We find that meta-learning with pre-training can significantly improve upon the performance of language transfer and standard supervised learning baselines for a variety of unseen, typologically diverse, and low-resource languages, in a few-shot learning setup.",,"Meta-Learning for Fast Cross-Lingual Adaptation in Dependency Parsing. Meta-learning, or learning to learn, is a technique that can help to overcome resource scarcity in cross-lingual NLP problems, by enabling fast adaptation to new tasks. We apply model-agnostic meta-learning (MAML) to the task of cross-lingual dependency parsing. We train our model on a diverse set of languages to learn a parameter initialization that can adapt quickly to new languages. We find that meta-learning with pre-training can significantly improve upon the performance of language transfer and standard supervised learning baselines for a variety of unseen, typologically diverse, and low-resource languages, in a few-shot learning setup.",2022
kuhn-etal-2010-phrase,https://aclanthology.org/C10-1069,0,,,,,,,"Phrase Clustering for Smoothing TM Probabilities - or, How to Extract Paraphrases from Phrase Tables. This paper describes how to cluster together the phrases of a phrase-based statistical machine translation (SMT) system, using information in the phrase table itself. The clustering is symmetric and recursive: it is applied both to sourcelanguage and target-language phrases, and the clustering in one language helps determine the clustering in the other. The phrase clusters have many possible uses. This paper looks at one of these uses: smoothing the conditional translation model (TM) probabilities employed by the SMT system. We incorporated phrase-cluster-derived probability estimates into a baseline loglinear feature combination that included relative frequency and lexically-weighted conditional probability estimates. In Chinese-English (C-E) and French-English (F-E) learning curve experiments, we obtained a gain over the baseline in 29 of 30 tests, with a maximum gain of 0.55 BLEU points (though most gains were fairly small). The largest gains came with medium (200-400K sentence pairs) rather than with small (less than 100K sentence pairs) amounts of training data, contrary to what one would expect from the paraphrasing literature. We have only begun to explore the original smoothing approach described here.","Phrase Clustering for Smoothing {TM} Probabilities - or, How to Extract Paraphrases from Phrase Tables","This paper describes how to cluster together the phrases of a phrase-based statistical machine translation (SMT) system, using information in the phrase table itself. The clustering is symmetric and recursive: it is applied both to sourcelanguage and target-language phrases, and the clustering in one language helps determine the clustering in the other. The phrase clusters have many possible uses. This paper looks at one of these uses: smoothing the conditional translation model (TM) probabilities employed by the SMT system. We incorporated phrase-cluster-derived probability estimates into a baseline loglinear feature combination that included relative frequency and lexically-weighted conditional probability estimates. In Chinese-English (C-E) and French-English (F-E) learning curve experiments, we obtained a gain over the baseline in 29 of 30 tests, with a maximum gain of 0.55 BLEU points (though most gains were fairly small). The largest gains came with medium (200-400K sentence pairs) rather than with small (less than 100K sentence pairs) amounts of training data, contrary to what one would expect from the paraphrasing literature. We have only begun to explore the original smoothing approach described here.","Phrase Clustering for Smoothing TM Probabilities - or, How to Extract Paraphrases from Phrase Tables","This paper describes how to cluster together the phrases of a phrase-based statistical machine translation (SMT) system, using information in the phrase table itself. The clustering is symmetric and recursive: it is applied both to sourcelanguage and target-language phrases, and the clustering in one language helps determine the clustering in the other. The phrase clusters have many possible uses. This paper looks at one of these uses: smoothing the conditional translation model (TM) probabilities employed by the SMT system. 
We incorporated phrase-cluster-derived probability estimates into a baseline loglinear feature combination that included relative frequency and lexically-weighted conditional probability estimates. In Chinese-English (C-E) and French-English (F-E) learning curve experiments, we obtained a gain over the baseline in 29 of 30 tests, with a maximum gain of 0.55 BLEU points (though most gains were fairly small). The largest gains came with medium (200-400K sentence pairs) rather than with small (less than 100K sentence pairs) amounts of training data, contrary to what one would expect from the paraphrasing literature. We have only begun to explore the original smoothing approach described here.",,"Phrase Clustering for Smoothing TM Probabilities - or, How to Extract Paraphrases from Phrase Tables. This paper describes how to cluster together the phrases of a phrase-based statistical machine translation (SMT) system, using information in the phrase table itself. The clustering is symmetric and recursive: it is applied both to sourcelanguage and target-language phrases, and the clustering in one language helps determine the clustering in the other. The phrase clusters have many possible uses. This paper looks at one of these uses: smoothing the conditional translation model (TM) probabilities employed by the SMT system. We incorporated phrase-cluster-derived probability estimates into a baseline loglinear feature combination that included relative frequency and lexically-weighted conditional probability estimates. In Chinese-English (C-E) and French-English (F-E) learning curve experiments, we obtained a gain over the baseline in 29 of 30 tests, with a maximum gain of 0.55 BLEU points (though most gains were fairly small). The largest gains came with medium (200-400K sentence pairs) rather than with small (less than 100K sentence pairs) amounts of training data, contrary to what one would expect from the paraphrasing literature. We have only begun to explore the original smoothing approach described here.",2010
pasquier-2010-single,https://aclanthology.org/S10-1032,0,,,,,,,"Single Document Keyphrase Extraction Using Sentence Clustering and Latent Dirichlet Allocation. This paper describes the design of a system for extracting keyphrases from a single document. The principle of the algorithm is to cluster sentences of the documents in order to highlight parts of text that are semantically related. The clusters of sentences, that reflect the themes of the document, are then analyzed to find the main topics of the text. Finally, the most important words, or groups of words, from these topics are proposed as keyphrases.",Single Document Keyphrase Extraction Using Sentence Clustering and {L}atent {D}irichlet {A}llocation,"This paper describes the design of a system for extracting keyphrases from a single document. The principle of the algorithm is to cluster sentences of the documents in order to highlight parts of text that are semantically related. The clusters of sentences, that reflect the themes of the document, are then analyzed to find the main topics of the text. Finally, the most important words, or groups of words, from these topics are proposed as keyphrases.",Single Document Keyphrase Extraction Using Sentence Clustering and Latent Dirichlet Allocation,"This paper describes the design of a system for extracting keyphrases from a single document. The principle of the algorithm is to cluster sentences of the documents in order to highlight parts of text that are semantically related. The clusters of sentences, that reflect the themes of the document, are then analyzed to find the main topics of the text. Finally, the most important words, or groups of words, from these topics are proposed as keyphrases.",,"Single Document Keyphrase Extraction Using Sentence Clustering and Latent Dirichlet Allocation. This paper describes the design of a system for extracting keyphrases from a single document. The principle of the algorithm is to cluster sentences of the documents in order to highlight parts of text that are semantically related. The clusters of sentences, that reflect the themes of the document, are then analyzed to find the main topics of the text. Finally, the most important words, or groups of words, from these topics are proposed as keyphrases.",2010
von-essen-hesslow-2020-building,https://aclanthology.org/2020.pam-1.16,0,,,,,,,"Building a Swedish Question-Answering Model. High quality datasets for question answering exist in a few languages, but far from all. Producing such datasets for new languages requires extensive manual labour. In this work we look at different methods for using existing datasets to train question-answering models in languages lacking such datasets. We show that machine translation followed by cross-lingual projection is a viable way to create a full question-answering dataset in a new language. We introduce new methods both for bitext alignment, using optimal transport, and for direct cross-lingual projection, utilizing multilingual BERT. We show that our methods produce good Swedish question-answering models without any manual work. Finally, we apply our proposed methods on Spanish and evaluate it on the XQuAD and MLQA benchmarks where we achieve new state-of-the-art values of 80.4 F1 and 62.9 Exact Match (EM) points on the Spanish XQuAD corpus and 70.8 F1 and 53.0 EM on the Spanish MLQA corpus, showing that the technique is readily applicable to other languages.",Building a {S}wedish Question-Answering Model,"High quality datasets for question answering exist in a few languages, but far from all. Producing such datasets for new languages requires extensive manual labour. In this work we look at different methods for using existing datasets to train question-answering models in languages lacking such datasets. We show that machine translation followed by cross-lingual projection is a viable way to create a full question-answering dataset in a new language. We introduce new methods both for bitext alignment, using optimal transport, and for direct cross-lingual projection, utilizing multilingual BERT. We show that our methods produce good Swedish question-answering models without any manual work. Finally, we apply our proposed methods on Spanish and evaluate it on the XQuAD and MLQA benchmarks where we achieve new state-of-the-art values of 80.4 F1 and 62.9 Exact Match (EM) points on the Spanish XQuAD corpus and 70.8 F1 and 53.0 EM on the Spanish MLQA corpus, showing that the technique is readily applicable to other languages.",Building a Swedish Question-Answering Model,"High quality datasets for question answering exist in a few languages, but far from all. Producing such datasets for new languages requires extensive manual labour. In this work we look at different methods for using existing datasets to train question-answering models in languages lacking such datasets. We show that machine translation followed by cross-lingual projection is a viable way to create a full question-answering dataset in a new language. We introduce new methods both for bitext alignment, using optimal transport, and for direct cross-lingual projection, utilizing multilingual BERT. We show that our methods produce good Swedish question-answering models without any manual work. Finally, we apply our proposed methods on Spanish and evaluate it on the XQuAD and MLQA benchmarks where we achieve new state-of-the-art values of 80.4 F1 and 62.9 Exact Match (EM) points on the Spanish XQuAD corpus and 70.8 F1 and 53.0 EM on the Spanish MLQA corpus, showing that the technique is readily applicable to other languages.",,"Building a Swedish Question-Answering Model. High quality datasets for question answering exist in a few languages, but far from all. Producing such datasets for new languages requires extensive manual labour. 
In this work we look at different methods for using existing datasets to train question-answering models in languages lacking such datasets. We show that machine translation followed by cross-lingual projection is a viable way to create a full question-answering dataset in a new language. We introduce new methods both for bitext alignment, using optimal transport, and for direct cross-lingual projection, utilizing multilingual BERT. We show that our methods produce good Swedish question-answering models without any manual work. Finally, we apply our proposed methods on Spanish and evaluate it on the XQuAD and MLQA benchmarks where we achieve new state-of-the-art values of 80.4 F1 and 62.9 Exact Match (EM) points on the Spanish XQuAD corpus and 70.8 F1 and 53.0 EM on the Spanish MLQA corpus, showing that the technique is readily applicable to other languages.",2020
kelly-etal-2009-investigating,https://aclanthology.org/W09-0623,0,,,,,,,"Investigating Content Selection for Language Generation using Machine Learning. The content selection component of a natural language generation system decides which information should be communicated in its output. We use information from reports on the game of cricket. We first describe a simple factoid-to-text alignment algorithm then treat content selection as a collective classification problem and demonstrate that simple 'grouping' of statistics at various levels of granularity yields substantially improved results over a probabilistic baseline. We additionally show that holding back of specific types of input data, and linking database structures with commonality further increase performance.",Investigating Content Selection for Language Generation using Machine Learning,"The content selection component of a natural language generation system decides which information should be communicated in its output. We use information from reports on the game of cricket. We first describe a simple factoid-to-text alignment algorithm then treat content selection as a collective classification problem and demonstrate that simple 'grouping' of statistics at various levels of granularity yields substantially improved results over a probabilistic baseline. We additionally show that holding back of specific types of input data, and linking database structures with commonality further increase performance.",Investigating Content Selection for Language Generation using Machine Learning,"The content selection component of a natural language generation system decides which information should be communicated in its output. We use information from reports on the game of cricket. We first describe a simple factoid-to-text alignment algorithm then treat content selection as a collective classification problem and demonstrate that simple 'grouping' of statistics at various levels of granularity yields substantially improved results over a probabilistic baseline. We additionally show that holding back of specific types of input data, and linking database structures with commonality further increase performance.","This paper is based on Colin Kelly's M.Phil. thesis, written towards his completion of the University of Cambridge Computer Laboratory's Computer Speech, Text and Internet Technology course. Grateful thanks go to the EPSRC for funding.","Investigating Content Selection for Language Generation using Machine Learning. The content selection component of a natural language generation system decides which information should be communicated in its output. We use information from reports on the game of cricket. We first describe a simple factoid-to-text alignment algorithm then treat content selection as a collective classification problem and demonstrate that simple 'grouping' of statistics at various levels of granularity yields substantially improved results over a probabilistic baseline. We additionally show that holding back of specific types of input data, and linking database structures with commonality further increase performance.",2009
van-den-bogaert-etal-2020-mice,https://aclanthology.org/2020.eamt-1.59,0,,,,,,,"MICE: a middleware layer for MT. The MICE project (2018-2020) will deliver a middleware layer for improving the output quality of the eTranslation system of EC's Connecting Europe Facility through additional services, such as domain adaptation and named-entity recognition. It will also deliver a user portal, allowing for human post-editing.",{MICE}: a middleware layer for {MT},"The MICE project (2018-2020) will deliver a middleware layer for improving the output quality of the eTranslation system of EC's Connecting Europe Facility through additional services, such as domain adaptation and named-entity recognition. It will also deliver a user portal, allowing for human post-editing.",MICE: a middleware layer for MT,"The MICE project (2018-2020) will deliver a middleware layer for improving the output quality of the eTranslation system of EC's Connecting Europe Facility through additional services, such as domain adaptation and named-entity recognition. It will also deliver a user portal, allowing for human post-editing.",MICE is funded by the EC's CEF Telecom programme (project 2017-EU-IA-0169).,"MICE: a middleware layer for MT. The MICE project (2018-2020) will deliver a middleware layer for improving the output quality of the eTranslation system of EC's Connecting Europe Facility through additional services, such as domain adaptation and named-entity recognition. It will also deliver a user portal, allowing for human post-editing.",2020
deleger-zweigenbaum-2010-identifying,http://www.lrec-conf.org/proceedings/lrec2010/pdf/472_Paper.pdf,1,,,,education,,,"Identifying Paraphrases between Technical and Lay Corpora. In previous work, we presented a preliminary study to identify paraphrases between technical and lay discourse types from medical corpora dedicated to the French language. In this paper, we test the hypothesis that the same kinds of paraphrases as for French can be detected between English technical and lay discourse types and report the adaptation of our method from French to English. Starting from the constitution of monolingual comparable corpora, we extract two kinds of paraphrases: paraphrases between nominalizations and verbal constructions and paraphrases between neo-classical compounds and modern-language phrases. We do this relying on morphological resources and a set of extraction rules we adapt from the original approach for French. Results show that paraphrases could be identified with a rather good precision, and that these types of paraphrase are relevant in the context of the opposition between technical and lay discourse types. These observations are consistent with the results obtained for French, which demonstrates the portability of the approach as well as the similarity of the two languages as regards the use of those kinds of expressions in technical and lay discourse types.",Identifying Paraphrases between Technical and Lay Corpora,"In previous work, we presented a preliminary study to identify paraphrases between technical and lay discourse types from medical corpora dedicated to the French language. In this paper, we test the hypothesis that the same kinds of paraphrases as for French can be detected between English technical and lay discourse types and report the adaptation of our method from French to English. Starting from the constitution of monolingual comparable corpora, we extract two kinds of paraphrases: paraphrases between nominalizations and verbal constructions and paraphrases between neo-classical compounds and modern-language phrases. We do this relying on morphological resources and a set of extraction rules we adapt from the original approach for French. Results show that paraphrases could be identified with a rather good precision, and that these types of paraphrase are relevant in the context of the opposition between technical and lay discourse types. These observations are consistent with the results obtained for French, which demonstrates the portability of the approach as well as the similarity of the two languages as regards the use of those kinds of expressions in technical and lay discourse types.",Identifying Paraphrases between Technical and Lay Corpora,"In previous work, we presented a preliminary study to identify paraphrases between technical and lay discourse types from medical corpora dedicated to the French language. In this paper, we test the hypothesis that the same kinds of paraphrases as for French can be detected between English technical and lay discourse types and report the adaptation of our method from French to English. Starting from the constitution of monolingual comparable corpora, we extract two kinds of paraphrases: paraphrases between nominalizations and verbal constructions and paraphrases between neo-classical compounds and modern-language phrases. We do this relying on morphological resources and a set of extraction rules we adapt from the original approach for French. 
Results show that paraphrases could be identified with a rather good precision, and that these types of paraphrase are relevant in the context of the opposition between technical and lay discourse types. These observations are consistent with the results obtained for French, which demonstrates the portability of the approach as well as the similarity of the two languages as regards the use of those kinds of expressions in technical and lay discourse types.",,"Identifying Paraphrases between Technical and Lay Corpora. In previous work, we presented a preliminary study to identify paraphrases between technical and lay discourse types from medical corpora dedicated to the French language. In this paper, we test the hypothesis that the same kinds of paraphrases as for French can be detected between English technical and lay discourse types and report the adaptation of our method from French to English. Starting from the constitution of monolingual comparable corpora, we extract two kinds of paraphrases: paraphrases between nominalizations and verbal constructions and paraphrases between neo-classical compounds and modern-language phrases. We do this relying on morphological resources and a set of extraction rules we adapt from the original approach for French. Results show that paraphrases could be identified with a rather good precision, and that these types of paraphrase are relevant in the context of the opposition between technical and lay discourse types. These observations are consistent with the results obtained for French, which demonstrates the portability of the approach as well as the similarity of the two languages as regards the use of those kinds of expressions in technical and lay discourse types.",2010
rosenthal-mckeown-2013-columbia,https://aclanthology.org/S13-2079,0,,,,,,,"Columbia NLP: Sentiment Detection of Subjective Phrases in Social Media. We present a supervised sentiment detection system that classifies the polarity of subjective phrases as positive, negative, or neutral. It is tailored towards online genres, specifically Twitter, through the inclusion of dictionaries developed to capture vocabulary used in online conversations (e.g., slang and emoticons) as well as stylistic features common to social media. We show how to incorporate these new features within a state of the art system and evaluate it on subtask A in SemEval-2013 Task 2: Sentiment Analysis in Twitter.",{C}olumbia {NLP}: Sentiment Detection of Subjective Phrases in Social Media,"We present a supervised sentiment detection system that classifies the polarity of subjective phrases as positive, negative, or neutral. It is tailored towards online genres, specifically Twitter, through the inclusion of dictionaries developed to capture vocabulary used in online conversations (e.g., slang and emoticons) as well as stylistic features common to social media. We show how to incorporate these new features within a state of the art system and evaluate it on subtask A in SemEval-2013 Task 2: Sentiment Analysis in Twitter.",Columbia NLP: Sentiment Detection of Subjective Phrases in Social Media,"We present a supervised sentiment detection system that classifies the polarity of subjective phrases as positive, negative, or neutral. It is tailored towards online genres, specifically Twitter, through the inclusion of dictionaries developed to capture vocabulary used in online conversations (e.g., slang and emoticons) as well as stylistic features common to social media. We show how to incorporate these new features within a state of the art system and evaluate it on subtask A in SemEval-2013 Task 2: Sentiment Analysis in Twitter.","This research was partially funded by (a) the ODNI, IARPA, through the U.S. Army Research Lab and (b) the DARPA DEFT Program. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views, policies, or positions of IARPA, the ODNI, the Department of Defense, or the U.S. Government.","Columbia NLP: Sentiment Detection of Subjective Phrases in Social Media. We present a supervised sentiment detection system that classifies the polarity of subjective phrases as positive, negative, or neutral. It is tailored towards online genres, specifically Twitter, through the inclusion of dictionaries developed to capture vocabulary used in online conversations (e.g., slang and emoticons) as well as stylistic features common to social media. We show how to incorporate these new features within a state of the art system and evaluate it on subtask A in SemEval-2013 Task 2: Sentiment Analysis in Twitter.",2013
gupta-2020-finlp,https://aclanthology.org/2020.fnp-1.12,0,,,,finance,,,"FiNLP at FinCausal 2020 Task 1: Mixture of BERTs for Causal Sentence Identification in Financial Texts. This paper describes our system developed for the sub-task 1 of the FinCausal shared task in the FNP-FNS workshop held in conjunction with COLING-2020. The system classifies whether a financial news text segment contains causality or not. To address this task, we fine-tune and ensemble the generic and domain-specific BERT language models pre-trained on financial text corpora. The task data is highly imbalanced with the majority non-causal class; therefore, we train the models using strategies such as under-sampling, cost-sensitive learning, and data augmentation. Our best system achieves a weighted F1-score of 96.98 securing 4 th position on the evaluation leaderboard. The code is available at https:",{F}i{NLP} at {F}in{C}ausal 2020 Task 1: Mixture of {BERT}s for Causal Sentence Identification in Financial Texts,"This paper describes our system developed for the sub-task 1 of the FinCausal shared task in the FNP-FNS workshop held in conjunction with COLING-2020. The system classifies whether a financial news text segment contains causality or not. To address this task, we fine-tune and ensemble the generic and domain-specific BERT language models pre-trained on financial text corpora. The task data is highly imbalanced with the majority non-causal class; therefore, we train the models using strategies such as under-sampling, cost-sensitive learning, and data augmentation. Our best system achieves a weighted F1-score of 96.98 securing 4 th position on the evaluation leaderboard. The code is available at https:",FiNLP at FinCausal 2020 Task 1: Mixture of BERTs for Causal Sentence Identification in Financial Texts,"This paper describes our system developed for the sub-task 1 of the FinCausal shared task in the FNP-FNS workshop held in conjunction with COLING-2020. The system classifies whether a financial news text segment contains causality or not. To address this task, we fine-tune and ensemble the generic and domain-specific BERT language models pre-trained on financial text corpora. The task data is highly imbalanced with the majority non-causal class; therefore, we train the models using strategies such as under-sampling, cost-sensitive learning, and data augmentation. Our best system achieves a weighted F1-score of 96.98 securing 4 th position on the evaluation leaderboard. The code is available at https:",,"FiNLP at FinCausal 2020 Task 1: Mixture of BERTs for Causal Sentence Identification in Financial Texts. This paper describes our system developed for the sub-task 1 of the FinCausal shared task in the FNP-FNS workshop held in conjunction with COLING-2020. The system classifies whether a financial news text segment contains causality or not. To address this task, we fine-tune and ensemble the generic and domain-specific BERT language models pre-trained on financial text corpora. The task data is highly imbalanced with the majority non-causal class; therefore, we train the models using strategies such as under-sampling, cost-sensitive learning, and data augmentation. Our best system achieves a weighted F1-score of 96.98 securing 4 th position on the evaluation leaderboard. The code is available at https:",2020
bisk-etal-2016-natural,https://aclanthology.org/N16-1089,0,,,,,,,"Natural Language Communication with Robots. We propose a framework for devising empirically testable algorithms for bridging the communication gap between humans and robots. We instantiate our framework in the context of a problem setting in which humans give instructions to robots using unrestricted natural language commands, with instruction sequences being subservient to building complex goal configurations in a blocks world. We show how one can collect meaningful training data and we propose three neural architectures for interpreting contextually grounded natural language commands. The proposed architectures allow us to correctly understand/ground the blocks that the robot should move when instructed by a human who uses unrestricted language. The architectures have more difficulty in correctly understanding/grounding the spatial relations required to place blocks correctly, especially when the blocks are not easily identifiable.",Natural Language Communication with Robots,"We propose a framework for devising empirically testable algorithms for bridging the communication gap between humans and robots. We instantiate our framework in the context of a problem setting in which humans give instructions to robots using unrestricted natural language commands, with instruction sequences being subservient to building complex goal configurations in a blocks world. We show how one can collect meaningful training data and we propose three neural architectures for interpreting contextually grounded natural language commands. The proposed architectures allow us to correctly understand/ground the blocks that the robot should move when instructed by a human who uses unrestricted language. The architectures have more difficulty in correctly understanding/grounding the spatial relations required to place blocks correctly, especially when the blocks are not easily identifiable.",Natural Language Communication with Robots,"We propose a framework for devising empirically testable algorithms for bridging the communication gap between humans and robots. We instantiate our framework in the context of a problem setting in which humans give instructions to robots using unrestricted natural language commands, with instruction sequences being subservient to building complex goal configurations in a blocks world. We show how one can collect meaningful training data and we propose three neural architectures for interpreting contextually grounded natural language commands. The proposed architectures allow us to correctly understand/ground the blocks that the robot should move when instructed by a human who uses unrestricted language. The architectures have more difficulty in correctly understanding/grounding the spatial relations required to place blocks correctly, especially when the blocks are not easily identifiable.",This work was supported by Contract W911NF-15-1-0543 with the US Defense Advanced Research Projects Agency (DARPA) and the Army Research Office (ARO).,"Natural Language Communication with Robots. We propose a framework for devising empirically testable algorithms for bridging the communication gap between humans and robots. We instantiate our framework in the context of a problem setting in which humans give instructions to robots using unrestricted natural language commands, with instruction sequences being subservient to building complex goal configurations in a blocks world. 
We show how one can collect meaningful training data and we propose three neural architectures for interpreting contextually grounded natural language commands. The proposed architectures allow us to correctly understand/ground the blocks that the robot should move when instructed by a human who uses unrestricted language. The architectures have more difficulty in correctly understanding/grounding the spatial relations required to place blocks correctly, especially when the blocks are not easily identifiable.",2016
patchala-bhatnagar-2018-authorship,https://aclanthology.org/C18-1234,0,,,,,,,"Authorship Attribution By Consensus Among Multiple Features. Most existing research on authorship attribution uses various lexical, syntactic and semantic features. In this paper we demonstrate an effective template-based approach for combining various syntactic features of a document for authorship analysis. The parse-tree based features that we propose are independent of the topic of a document and reflect the innate writing styles of authors. We show that the use of templates including sub-trees of parse trees in conjunction with other syntactic features result in improved author attribution rates. Another contribution is the demonstration that Dempster's rule based combination of evidence from syntactic features performs better than other evidence-combination methods. We also demonstrate that our methodology works well for the case where actual author is not included in the candidate author set.",Authorship Attribution By Consensus Among Multiple Features,"Most existing research on authorship attribution uses various lexical, syntactic and semantic features. In this paper we demonstrate an effective template-based approach for combining various syntactic features of a document for authorship analysis. The parse-tree based features that we propose are independent of the topic of a document and reflect the innate writing styles of authors. We show that the use of templates including sub-trees of parse trees in conjunction with other syntactic features result in improved author attribution rates. Another contribution is the demonstration that Dempster's rule based combination of evidence from syntactic features performs better than other evidence-combination methods. We also demonstrate that our methodology works well for the case where actual author is not included in the candidate author set.",Authorship Attribution By Consensus Among Multiple Features,"Most existing research on authorship attribution uses various lexical, syntactic and semantic features. In this paper we demonstrate an effective template-based approach for combining various syntactic features of a document for authorship analysis. The parse-tree based features that we propose are independent of the topic of a document and reflect the innate writing styles of authors. We show that the use of templates including sub-trees of parse trees in conjunction with other syntactic features result in improved author attribution rates. Another contribution is the demonstration that Dempster's rule based combination of evidence from syntactic features performs better than other evidence-combination methods. We also demonstrate that our methodology works well for the case where actual author is not included in the candidate author set.",,"Authorship Attribution By Consensus Among Multiple Features. Most existing research on authorship attribution uses various lexical, syntactic and semantic features. In this paper we demonstrate an effective template-based approach for combining various syntactic features of a document for authorship analysis. The parse-tree based features that we propose are independent of the topic of a document and reflect the innate writing styles of authors. We show that the use of templates including sub-trees of parse trees in conjunction with other syntactic features result in improved author attribution rates. 
Another contribution is the demonstration that Dempster's rule based combination of evidence from syntactic features performs better than other evidence-combination methods. We also demonstrate that our methodology works well for the case where actual author is not included in the candidate author set.",2018
peng-etal-2016-news,https://aclanthology.org/P16-1037,0,,,,,,,"News Citation Recommendation with Implicit and Explicit Semantics. In this work, we focus on the problem of news citation recommendation. The task aims to recommend news citations for both authors and readers to create and search news references. Due to the sparsity issue of news citations and the engineering difficulty in obtaining information on authors, we focus on content similarity-based methods instead of collaborative filtering-based approaches. In this paper, we explore word embedding (i.e., implicit semantics) and grounded entities (i.e., explicit semantics) to address the variety and ambiguity issues of language. We formulate the problem as a reranking task and integrate different similarity measures under the learning to rank framework. We evaluate our approach on a real-world dataset. The experimental results show the efficacy of our method.",News Citation Recommendation with Implicit and Explicit Semantics,"In this work, we focus on the problem of news citation recommendation. The task aims to recommend news citations for both authors and readers to create and search news references. Due to the sparsity issue of news citations and the engineering difficulty in obtaining information on authors, we focus on content similarity-based methods instead of collaborative filtering-based approaches. In this paper, we explore word embedding (i.e., implicit semantics) and grounded entities (i.e., explicit semantics) to address the variety and ambiguity issues of language. We formulate the problem as a reranking task and integrate different similarity measures under the learning to rank framework. We evaluate our approach on a real-world dataset. The experimental results show the efficacy of our method.",News Citation Recommendation with Implicit and Explicit Semantics,"In this work, we focus on the problem of news citation recommendation. The task aims to recommend news citations for both authors and readers to create and search news references. Due to the sparsity issue of news citations and the engineering difficulty in obtaining information on authors, we focus on content similarity-based methods instead of collaborative filtering-based approaches. In this paper, we explore word embedding (i.e., implicit semantics) and grounded entities (i.e., explicit semantics) to address the variety and ambiguity issues of language. We formulate the problem as a reranking task and integrate different similarity measures under the learning to rank framework. We evaluate our approach on a real-world dataset. The experimental results show the efficacy of our method.",,"News Citation Recommendation with Implicit and Explicit Semantics. In this work, we focus on the problem of news citation recommendation. The task aims to recommend news citations for both authors and readers to create and search news references. Due to the sparsity issue of news citations and the engineering difficulty in obtaining information on authors, we focus on content similarity-based methods instead of collaborative filtering-based approaches. In this paper, we explore word embedding (i.e., implicit semantics) and grounded entities (i.e., explicit semantics) to address the variety and ambiguity issues of language. We formulate the problem as a reranking task and integrate different similarity measures under the learning to rank framework. We evaluate our approach on a real-world dataset. The experimental results show the efficacy of our method.",2016
zheng-etal-2022-fewnlu,https://aclanthology.org/2022.acl-long.38,0,,,,,,,"FewNLU: Benchmarking State-of-the-Art Methods for Few-Shot Natural Language Understanding. The few-shot natural language understanding (NLU) task has attracted much recent attention. However, prior methods have been evaluated under a disparate set of protocols, which hinders fair comparison and measuring progress of the field. To address this issue, we introduce an evaluation framework that improves previous evaluation procedures in three key aspects, i.e., test performance, dev-test correlation, and stability. Under this new evaluation framework, we re-evaluate several state-of-the-art few-shot methods for NLU tasks. Our framework reveals new insights: (1) both the absolute performance and relative gap of the methods were not accurately estimated in prior literature; (2) no single method dominates most tasks with consistent performance; (3) improvements of some methods diminish with a larger pretrained model; and (4) gains from different methods are often complementary and the best combined model performs close to a strong fully-supervised baseline. We open-source our toolkit, FewNLU, that implements our evaluation framework along with a number of state-of-the-art methods.",{F}ew{NLU}: Benchmarking State-of-the-Art Methods for Few-Shot Natural Language Understanding,"The few-shot natural language understanding (NLU) task has attracted much recent attention. However, prior methods have been evaluated under a disparate set of protocols, which hinders fair comparison and measuring progress of the field. To address this issue, we introduce an evaluation framework that improves previous evaluation procedures in three key aspects, i.e., test performance, dev-test correlation, and stability. Under this new evaluation framework, we re-evaluate several state-of-the-art few-shot methods for NLU tasks. Our framework reveals new insights: (1) both the absolute performance and relative gap of the methods were not accurately estimated in prior literature; (2) no single method dominates most tasks with consistent performance; (3) improvements of some methods diminish with a larger pretrained model; and (4) gains from different methods are often complementary and the best combined model performs close to a strong fully-supervised baseline. We open-source our toolkit, FewNLU, that implements our evaluation framework along with a number of state-of-the-art methods.",FewNLU: Benchmarking State-of-the-Art Methods for Few-Shot Natural Language Understanding,"The few-shot natural language understanding (NLU) task has attracted much recent attention. However, prior methods have been evaluated under a disparate set of protocols, which hinders fair comparison and measuring progress of the field. To address this issue, we introduce an evaluation framework that improves previous evaluation procedures in three key aspects, i.e., test performance, dev-test correlation, and stability. Under this new evaluation framework, we re-evaluate several state-of-the-art few-shot methods for NLU tasks. Our framework reveals new insights: (1) both the absolute performance and relative gap of the methods were not accurately estimated in prior literature; (2) no single method dominates most tasks with consistent performance; (3) improvements of some methods diminish with a larger pretrained model; and (4) gains from different methods are often complementary and the best combined model performs close to a strong fully-supervised baseline. 
We open-source our toolkit, FewNLU, that implements our evaluation framework along with a number of state-of-the-art methods.",We thank Dani Yogatama for valuable feedback on a draft of this paper. ,"FewNLU: Benchmarking State-of-the-Art Methods for Few-Shot Natural Language Understanding. The few-shot natural language understanding (NLU) task has attracted much recent attention. However, prior methods have been evaluated under a disparate set of protocols, which hinders fair comparison and measuring progress of the field. To address this issue, we introduce an evaluation framework that improves previous evaluation procedures in three key aspects, i.e., test performance, dev-test correlation, and stability. Under this new evaluation framework, we re-evaluate several state-of-the-art few-shot methods for NLU tasks. Our framework reveals new insights: (1) both the absolute performance and relative gap of the methods were not accurately estimated in prior literature; (2) no single method dominates most tasks with consistent performance; (3) improvements of some methods diminish with a larger pretrained model; and (4) gains from different methods are often complementary and the best combined model performs close to a strong fully-supervised baseline. We open-source our toolkit, FewNLU, that implements our evaluation framework along with a number of state-of-the-art methods.",2022
liu-etal-2020-zero,https://aclanthology.org/2020.repl4nlp-1.1,0,,,,,,,"Zero-Resource Cross-Domain Named Entity Recognition. Existing models for cross-domain named entity recognition (NER) rely on numerous unlabeled corpus or labeled NER training data in target domains. However, collecting data for low-resource target domains is not only expensive but also time-consuming. Hence, we propose a cross-domain NER model that does not use any external resources. We first introduce a Multi-Task Learning (MTL) by adding a new objective function to detect whether tokens are named entities or not. We then introduce a framework called Mixture of Entity Experts (MoEE) to improve the robustness for zero-resource domain adaptation. Finally, experimental results show that our model outperforms strong unsupervised cross-domain sequence labeling models, and the performance of our model is close to that of the state-of-theart model which leverages extensive resources.",Zero-Resource Cross-Domain Named Entity Recognition,"Existing models for cross-domain named entity recognition (NER) rely on numerous unlabeled corpus or labeled NER training data in target domains. However, collecting data for low-resource target domains is not only expensive but also time-consuming. Hence, we propose a cross-domain NER model that does not use any external resources. We first introduce a Multi-Task Learning (MTL) by adding a new objective function to detect whether tokens are named entities or not. We then introduce a framework called Mixture of Entity Experts (MoEE) to improve the robustness for zero-resource domain adaptation. Finally, experimental results show that our model outperforms strong unsupervised cross-domain sequence labeling models, and the performance of our model is close to that of the state-of-theart model which leverages extensive resources.",Zero-Resource Cross-Domain Named Entity Recognition,"Existing models for cross-domain named entity recognition (NER) rely on numerous unlabeled corpus or labeled NER training data in target domains. However, collecting data for low-resource target domains is not only expensive but also time-consuming. Hence, we propose a cross-domain NER model that does not use any external resources. We first introduce a Multi-Task Learning (MTL) by adding a new objective function to detect whether tokens are named entities or not. We then introduce a framework called Mixture of Entity Experts (MoEE) to improve the robustness for zero-resource domain adaptation. Finally, experimental results show that our model outperforms strong unsupervised cross-domain sequence labeling models, and the performance of our model is close to that of the state-of-theart model which leverages extensive resources.","This work is partially funded by ITF/319/16FP and MRP/055/18 of the Innovation Technology Commission, the Hong Kong SAR Government.","Zero-Resource Cross-Domain Named Entity Recognition. Existing models for cross-domain named entity recognition (NER) rely on numerous unlabeled corpus or labeled NER training data in target domains. However, collecting data for low-resource target domains is not only expensive but also time-consuming. Hence, we propose a cross-domain NER model that does not use any external resources. We first introduce a Multi-Task Learning (MTL) by adding a new objective function to detect whether tokens are named entities or not. We then introduce a framework called Mixture of Entity Experts (MoEE) to improve the robustness for zero-resource domain adaptation. 
Finally, experimental results show that our model outperforms strong unsupervised cross-domain sequence labeling models, and the performance of our model is close to that of the state-of-theart model which leverages extensive resources.",2020
dufter-etal-2021-static,https://aclanthology.org/2021.naacl-main.186,0,,,,,,,"Static Embeddings as Efficient Knowledge Bases?. Recent research investigates factual knowledge stored in large pretrained language models (PLMs). Instead of structural knowledge base (KB) queries, masked sentences such as ""Paris is the capital of [MASK]"" are used as probes. The good performance on this analysis task has been interpreted as PLMs becoming potential repositories of factual knowledge. In experiments across ten linguistically diverse languages, we study knowledge contained in static embeddings. We show that, when restricting the output space to a candidate set, simple nearest neighbor matching using static embeddings performs better than PLMs. E.g., static embeddings perform 1.6% points better than BERT while just using 0.3% of energy for training. One important factor in their good comparative performance is that static embeddings are standardly learned for a large vocabulary. In contrast, BERT exploits its more sophisticated, but expensive ability to compose meaningful representations from a much smaller subword vocabulary.",Static Embeddings as Efficient Knowledge Bases?,"Recent research investigates factual knowledge stored in large pretrained language models (PLMs). Instead of structural knowledge base (KB) queries, masked sentences such as ""Paris is the capital of [MASK]"" are used as probes. The good performance on this analysis task has been interpreted as PLMs becoming potential repositories of factual knowledge. In experiments across ten linguistically diverse languages, we study knowledge contained in static embeddings. We show that, when restricting the output space to a candidate set, simple nearest neighbor matching using static embeddings performs better than PLMs. E.g., static embeddings perform 1.6% points better than BERT while just using 0.3% of energy for training. One important factor in their good comparative performance is that static embeddings are standardly learned for a large vocabulary. In contrast, BERT exploits its more sophisticated, but expensive ability to compose meaningful representations from a much smaller subword vocabulary.",Static Embeddings as Efficient Knowledge Bases?,"Recent research investigates factual knowledge stored in large pretrained language models (PLMs). Instead of structural knowledge base (KB) queries, masked sentences such as ""Paris is the capital of [MASK]"" are used as probes. The good performance on this analysis task has been interpreted as PLMs becoming potential repositories of factual knowledge. In experiments across ten linguistically diverse languages, we study knowledge contained in static embeddings. We show that, when restricting the output space to a candidate set, simple nearest neighbor matching using static embeddings performs better than PLMs. E.g., static embeddings perform 1.6% points better than BERT while just using 0.3% of energy for training. One important factor in their good comparative performance is that static embeddings are standardly learned for a large vocabulary. In contrast, BERT exploits its more sophisticated, but expensive ability to compose meaningful representations from a much smaller subword vocabulary.",
This work was supported by the European Research Council (# 740516) and the German Federal Ministry of Education and Research (BMBF) under Grant No. 01IS18036A. The authors of this work take full responsibility for its content. The first author was supported by the Bavarian research institute for digital transformation (bidt) through their fellowship program. We thank Yanai Elazar and the anonymous reviewers for valuable comments.,"Static Embeddings as Efficient Knowledge Bases?. Recent research investigates factual knowledge stored in large pretrained language models (PLMs). Instead of structural knowledge base (KB) queries, masked sentences such as ""Paris is the capital of [MASK]"" are used as probes. The good performance on this analysis task has been interpreted as PLMs becoming potential repositories of factual knowledge. In experiments across ten linguistically diverse languages, we study knowledge contained in static embeddings. We show that, when restricting the output space to a candidate set, simple nearest neighbor matching using static embeddings performs better than PLMs. E.g., static embeddings perform 1.6% points better than BERT while just using 0.3% of energy for training. One important factor in their good comparative performance is that static embeddings are standardly learned for a large vocabulary. In contrast, BERT exploits its more sophisticated, but expensive ability to compose meaningful representations from a much smaller subword vocabulary.",2021
chen-etal-2021-improving,https://aclanthology.org/2021.naacl-main.475,0,,,,,,,"Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection. Despite significant progress in neural abstractive summarization, recent studies have shown that the current models are prone to generating summaries that are unfaithful to the original context. To address the issue, we study contrast candidate generation and selection as a model-agnostic post-processing technique to correct the extrinsic hallucinations (i.e. information not present in the source text) in unfaithful summaries. We learn a discriminative correction model by generating alternative candidate summaries where named entities and quantities in the generated summary are replaced with ones with compatible semantic types from the source document. This model is then used to select the best candidate as the final output summary. Our experiments and analysis across a number of neural summarization systems show that our proposed method is effective in identifying and correcting extrinsic hallucinations. We analyze the typical hallucination phenomenon by different types of neural summarization systems, in hope to provide insights for future work on the direction.",Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection,"Despite significant progress in neural abstractive summarization, recent studies have shown that the current models are prone to generating summaries that are unfaithful to the original context. To address the issue, we study contrast candidate generation and selection as a model-agnostic post-processing technique to correct the extrinsic hallucinations (i.e. information not present in the source text) in unfaithful summaries. We learn a discriminative correction model by generating alternative candidate summaries where named entities and quantities in the generated summary are replaced with ones with compatible semantic types from the source document. This model is then used to select the best candidate as the final output summary. Our experiments and analysis across a number of neural summarization systems show that our proposed method is effective in identifying and correcting extrinsic hallucinations. We analyze the typical hallucination phenomenon by different types of neural summarization systems, in hope to provide insights for future work on the direction.",Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection,"Despite significant progress in neural abstractive summarization, recent studies have shown that the current models are prone to generating summaries that are unfaithful to the original context. To address the issue, we study contrast candidate generation and selection as a model-agnostic post-processing technique to correct the extrinsic hallucinations (i.e. information not present in the source text) in unfaithful summaries. We learn a discriminative correction model by generating alternative candidate summaries where named entities and quantities in the generated summary are replaced with ones with compatible semantic types from the source document. This model is then used to select the best candidate as the final output summary. Our experiments and analysis across a number of neural summarization systems show that our proposed method is effective in identifying and correcting extrinsic hallucinations. 
We analyze the typical hallucination phenomenon by different types of neural summarization systems, in hope to provide insights for future work on the direction.","We thank Sunita Verma and Sugato Basu for valuable input and feedback on drafts of the paper. This work was supported in part by a Focused Award from Google, a gift from Tencent, and by Contract FA8750-19-2-1004 with the US Defense Advanced Research Projects Agency (DARPA). The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.","Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection. Despite significant progress in neural abstractive summarization, recent studies have shown that the current models are prone to generating summaries that are unfaithful to the original context. To address the issue, we study contrast candidate generation and selection as a model-agnostic post-processing technique to correct the extrinsic hallucinations (i.e. information not present in the source text) in unfaithful summaries. We learn a discriminative correction model by generating alternative candidate summaries where named entities and quantities in the generated summary are replaced with ones with compatible semantic types from the source document. This model is then used to select the best candidate as the final output summary. Our experiments and analysis across a number of neural summarization systems show that our proposed method is effective in identifying and correcting extrinsic hallucinations. We analyze the typical hallucination phenomenon by different types of neural summarization systems, in hope to provide insights for future work on the direction.",2021
tsai-etal-2016-cross,https://aclanthology.org/K16-1022,0,,,,,,,"Cross-Lingual Named Entity Recognition via Wikification. Named Entity Recognition (NER) models for language L are typically trained using annotated data in that language. We study cross-lingual NER, where a model for NER in L is trained on another, source, language (or multiple source languages). We introduce a language independent method for NER, building on cross-lingual wikification, a technique that grounds words and phrases in non-English text into English Wikipedia entries. Thus, mentions in any language can be described using a set of categories and FreeBase types, yielding, as we show, strong language-independent features. With this insight, we propose an NER model that can be applied to all languages in Wikipedia. When trained on English, our model outperforms comparable approaches on the standard CoNLL datasets (Spanish, German, and Dutch) and also performs very well on lowresource languages (e.g., Turkish, Tagalog, Yoruba, Bengali, and Tamil) that have significantly smaller Wikipedia. Moreover, our method allows us to train on multiple source languages, typically improving NER results on the target languages. Finally, we show that our languageindependent features can be used also to enhance monolingual NER systems, yielding improved results for all 9 languages.",Cross-Lingual Named Entity Recognition via Wikification,"Named Entity Recognition (NER) models for language L are typically trained using annotated data in that language. We study cross-lingual NER, where a model for NER in L is trained on another, source, language (or multiple source languages). We introduce a language independent method for NER, building on cross-lingual wikification, a technique that grounds words and phrases in non-English text into English Wikipedia entries. Thus, mentions in any language can be described using a set of categories and FreeBase types, yielding, as we show, strong language-independent features. With this insight, we propose an NER model that can be applied to all languages in Wikipedia. When trained on English, our model outperforms comparable approaches on the standard CoNLL datasets (Spanish, German, and Dutch) and also performs very well on lowresource languages (e.g., Turkish, Tagalog, Yoruba, Bengali, and Tamil) that have significantly smaller Wikipedia. Moreover, our method allows us to train on multiple source languages, typically improving NER results on the target languages. Finally, we show that our languageindependent features can be used also to enhance monolingual NER systems, yielding improved results for all 9 languages.",Cross-Lingual Named Entity Recognition via Wikification,"Named Entity Recognition (NER) models for language L are typically trained using annotated data in that language. We study cross-lingual NER, where a model for NER in L is trained on another, source, language (or multiple source languages). We introduce a language independent method for NER, building on cross-lingual wikification, a technique that grounds words and phrases in non-English text into English Wikipedia entries. Thus, mentions in any language can be described using a set of categories and FreeBase types, yielding, as we show, strong language-independent features. With this insight, we propose an NER model that can be applied to all languages in Wikipedia. 
When trained on English, our model outperforms comparable approaches on the standard CoNLL datasets (Spanish, German, and Dutch) and also performs very well on lowresource languages (e.g., Turkish, Tagalog, Yoruba, Bengali, and Tamil) that have significantly smaller Wikipedia. Moreover, our method allows us to train on multiple source languages, typically improving NER results on the target languages. Finally, we show that our languageindependent features can be used also to enhance monolingual NER systems, yielding improved results for all 9 languages.","This research is supported by NIH grant U54-GM114838, a grant from the Allen Institute for Artificial Intelligence (allenai.org), and Contract HR0011-15-2-0025 with the US Defense Advanced Research Projects Agency (DARPA). Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.","Cross-Lingual Named Entity Recognition via Wikification. Named Entity Recognition (NER) models for language L are typically trained using annotated data in that language. We study cross-lingual NER, where a model for NER in L is trained on another, source, language (or multiple source languages). We introduce a language independent method for NER, building on cross-lingual wikification, a technique that grounds words and phrases in non-English text into English Wikipedia entries. Thus, mentions in any language can be described using a set of categories and FreeBase types, yielding, as we show, strong language-independent features. With this insight, we propose an NER model that can be applied to all languages in Wikipedia. When trained on English, our model outperforms comparable approaches on the standard CoNLL datasets (Spanish, German, and Dutch) and also performs very well on lowresource languages (e.g., Turkish, Tagalog, Yoruba, Bengali, and Tamil) that have significantly smaller Wikipedia. Moreover, our method allows us to train on multiple source languages, typically improving NER results on the target languages. Finally, we show that our languageindependent features can be used also to enhance monolingual NER systems, yielding improved results for all 9 languages.",2016
arapov-herz-1973-frequency,https://aclanthology.org/C73-2001,0,,,,,,,"Frequency and Age as Characteristics of a Word. 1. The problem of relation between the frequency and age of a word is only a small part of the general problem of opposition of the synchronic and diachronic aspects of language. The frequency is obviously a purely synchronic characteristic of a word whereas the age (the time interval t between the appearance of the word and the present moment) is a purely diachronic one. However there is a simple dependence between both characteristics: the old age of a word corresponds to a high frequency ranking and vice-versa: among the words with low frequency the proportion of ancient words is small. The existence of this dependency was first discovered by G. K. Zipf (1947).
2. To obtain this dependency in analytical form let us split the whole frequency dictionary into a number of groups of equal size (n words in each of the groups). Each group consists of words of equal or nearly equal values of frequency. The most frequently used n words belong to the group with rank 1, the following n words constitute the group with rank 2 and the words having in the dictionary numbers from (i-1)·n + 1 till i·n constitute a group with rank i.",Frequency and Age as Characteristics of a Word,"1. The problem of relation between the frequency and age of a word is only a small part of the general problem of opposition of the synchronic and diachronic aspects of language. The frequency is obviously a purely synchronic characteristic of a word whereas the age (the time interval t between the appearance of the word and the present moment) is a purely diachronic one. However there is a simple dependence between both characteristics: the old age of a word corresponds to a high frequency ranking and vice-versa: among the words with low frequency the proportion of ancient words is small. The existence of this dependency was first discovered by G. K. Zipf (1947).
2. To obtain this dependency in analytical form let us split the whole frequency dictionary into a number of groups of equal size (n words in each of the groups). Each group consists of words of equal or nearly equal values of frequency. The most frequently used n words belong to the group with rank 1, the following n words constitute the group with rank 2 and the words having in the dictionary numbers from (i-1)·n + 1 till i·n constitute a group with rank i.",Frequency and Age as Characteristics of a Word,"1. The problem of relation between the frequency and age of a word is only a small part of the general problem of opposition of the synchronic and diachronic aspects of language. The frequency is obviously a purely synchronic characteristic of a word whereas the age (the time interval t between the appearance of the word and the present moment) is a purely diachronic one. However there is a simple dependence between both characteristics: the old age of a word corresponds to a high frequency ranking and vice-versa: among the words with low frequency the proportion of ancient words is small. The existence of this dependency was first discovered by G. K. Zipf (1947).
2. To obtain this dependency in analytical form let us split the whole frequency dictionary into a number of groups of equal size (n words in each of the groups). Each group consists of words of equal or nearly equal values of frequency. The most frequently used n words belong to the group with rank 1, the following n words constitute the group with rank 2 and the words having in the dictionary numbers from (i-1)·n + 1 till i·n constitute a group with rank i.",,"Frequency and Age as Characteristics of a Word. 1. The problem of relation between the frequency and age of a word is only a small part of the general problem of opposition of the synchronic and diachronic aspects of language. The frequency is obviously a purely synchronic characteristic of a word whereas the age (the time interval t between the appearance of the word and the present moment) is a purely diachronic one. However there is a simple dependence between both characteristics: the old age of a word corresponds to a high frequency ranking and vice-versa: among the words with low frequency the proportion of ancient words is small. The existence of this dependency was first discovered by G. K. Zipf (1947).
2. To obtain this dependency in analytical form let us split the whole frequency dictionary into a number of groups of equal size (n words in each of the groups). Each group consists of words of equal or nearly equal values of frequency. The most frequently used n words belong to the group with rank 1, the following n words constitute the group with rank 2 and the words having in the dictionary numbers from (i-1)·n + 1 till i·n constitute a group with rank i.",1973
zhang-liu-2015-corpus,https://aclanthology.org/Y15-2029,0,,,,,,,"A Corpus-based Comparatively Study on the Semantic Features and Syntactic patterns of Y\`ou/H\'ai in Mandarin Chinese. This study points out that Yòu (又) and Hái (还) have their own prominent semantic features and syntactic patterns compared with each other. The differences reflect in the combination with verbs 1. Hái (还) has absolute superiority in collocation with V+Bu (不)+V, which tends to express [durative]. Yòu (又) has advantages in collocations with V+Le (了)+V and derogatory verbs. Yòu (又)+V+Le (了)+V tends to express [repetition], and Yòu (又)+derogatory verbs tends to express [repetition, derogatory]. We also find that the two words represent different semantic features when they match with grammatical aspect markers Le (了), Zhe (着) and Guo (过). Different distributions have a close relation with their semantic features. This study is based on the investigation of the large-scale corpus and data statistics, applying methods of corpus linguistics, computational linguistics and semantic background model, etc. We also described and explained the language facts.",A Corpus-based Comparatively Study on the Semantic Features and Syntactic patterns of Y{\`o}u/H{\'a}i in {M}andarin {C}hinese,"This study points out that Yòu (又) and Hái (还) have their own prominent semantic features and syntactic patterns compared with each other. The differences reflect in the combination with verbs 1. Hái (还) has absolute superiority in collocation with V+Bu (不)+V, which tends to express [durative]. Yòu (又) has advantages in collocations with V+Le (了)+V and derogatory verbs. Yòu (又)+V+Le (了)+V tends to express [repetition], and Yòu (又)+derogatory verbs tends to express [repetition, derogatory]. We also find that the two words represent different semantic features when they match with grammatical aspect markers Le (了), Zhe (着) and Guo (过). Different distributions have a close relation with their semantic features. This study is based on the investigation of the large-scale corpus and data statistics, applying methods of corpus linguistics, computational linguistics and semantic background model, etc. We also described and explained the language facts.",A Corpus-based Comparatively Study on the Semantic Features and Syntactic patterns of Y\`ou/H\'ai in Mandarin Chinese,"This study points out that Yòu (又) and Hái (还) have their own prominent semantic features and syntactic patterns compared with each other. The differences reflect in the combination with verbs 1. Hái (还) has absolute superiority in collocation with V+Bu (不)+V, which tends to express [durative]. Yòu (又) has advantages in collocations with V+Le (了)+V and derogatory verbs. Yòu (又)+V+Le (了)+V tends to express [repetition], and Yòu (又)+derogatory verbs tends to express [repetition, derogatory]. We also find that the two words represent different semantic features when they match with grammatical aspect markers Le (了), Zhe (着) and Guo (过). Different distributions have a close relation with their semantic features. This study is based on the investigation of the large-scale corpus and data statistics, applying methods of corpus linguistics, computational linguistics and semantic background model, etc. We also described and explained the language facts.","The study is supported by 1) National Language Committee Research Project .2) The Fundamental Research Funds for the Central Universities, and the Research Funds of Beijing Language and Culture University (No.15YCX101). 
3) Science Foundation of Beijing Language and Culture University (supported by ""the Fundamental Research Funds for the Central Universities"") (13ZDY03)","A Corpus-based Comparatively Study on the Semantic Features and Syntactic patterns of Y\`ou/H\'ai in Mandarin Chinese. This study points out that Yòu (又) and Hái (还) have their own prominent semantic features and syntactic patterns compared with each other. The differences reflect in the combination with verbs 1. Hái (还) has absolute superiority in collocation with V+Bu (不)+V, which tends to express [durative]. Yòu (又) has advantages in collocations with V+Le (了)+V and derogatory verbs. Yòu (又)+V+Le (了)+V tends to express [repetition], and Yòu (又)+derogatory verbs tends to express [repetition, derogatory]. We also find that the two words represent different semantic features when they match with grammatical aspect markers Le (了), Zhe (着) and Guo (过). Different distributions have a close relation with their semantic features. This study is based on the investigation of the large-scale corpus and data statistics, applying methods of corpus linguistics, computational linguistics and semantic background model, etc. We also described and explained the language facts.",2015
emele-1991-unification,https://aclanthology.org/P91-1042,0,,,,,,,Unification With Lazy Non-Redundant Copying. This paper presents a unification procedure which eliminates the redundant copying of structures by using a lazy incremental copying approach to achieve structure sharing. Copying of structures accounts for a considerable amount of the total processing time. Several methods have been proposed to minimize the amount of necessary copying. Lazy Incremental Copying (LIC) is presented as a new solution to the copying problem. It synthesizes ideas of lazy copying with the notion of chronological dereferencing for achieving a high amount of structure sharing.,Unification With Lazy Non-Redundant Copying,This paper presents a unification procedure which eliminates the redundant copying of structures by using a lazy incremental copying approach to achieve structure sharing. Copying of structures accounts for a considerable amount of the total processing time. Several methods have been proposed to minimize the amount of necessary copying. Lazy Incremental Copying (LIC) is presented as a new solution to the copying problem. It synthesizes ideas of lazy copying with the notion of chronological dereferencing for achieving a high amount of structure sharing.,Unification With Lazy Non-Redundant Copying,This paper presents a unification procedure which eliminates the redundant copying of structures by using a lazy incremental copying approach to achieve structure sharing. Copying of structures accounts for a considerable amount of the total processing time. Several methods have been proposed to minimize the amount of necessary copying. Lazy Incremental Copying (LIC) is presented as a new solution to the copying problem. It synthesizes ideas of lazy copying with the notion of chronological dereferencing for achieving a high amount of structure sharing.,,Unification With Lazy Non-Redundant Copying. This paper presents a unification procedure which eliminates the redundant copying of structures by using a lazy incremental copying approach to achieve structure sharing. Copying of structures accounts for a considerable amount of the total processing time. Several methods have been proposed to minimize the amount of necessary copying. Lazy Incremental Copying (LIC) is presented as a new solution to the copying problem. It synthesizes ideas of lazy copying with the notion of chronological dereferencing for achieving a high amount of structure sharing.,1991
tu-etal-2016-modeling,https://aclanthology.org/P16-1008,0,,,,,,,"Modeling Coverage for Neural Machine Translation. Attention mechanism has enhanced state-of-the-art Neural Machine Translation (NMT) by jointly learning to align and translate. It tends to ignore past alignment information, however, which often leads to over-translation and under-translation. To address this problem, we propose coverage-based NMT in this paper. We maintain a coverage vector to keep track of the attention history. The coverage vector is fed to the attention model to help adjust future attention, which lets NMT system to consider more about untranslated source words. Experiments show that the proposed approach significantly improves both translation quality and alignment quality over standard attention-based NMT.",Modeling Coverage for Neural Machine Translation,"Attention mechanism has enhanced state-of-the-art Neural Machine Translation (NMT) by jointly learning to align and translate. It tends to ignore past alignment information, however, which often leads to over-translation and under-translation. To address this problem, we propose coverage-based NMT in this paper. We maintain a coverage vector to keep track of the attention history. The coverage vector is fed to the attention model to help adjust future attention, which lets NMT system to consider more about untranslated source words. Experiments show that the proposed approach significantly improves both translation quality and alignment quality over standard attention-based NMT.",Modeling Coverage for Neural Machine Translation,"Attention mechanism has enhanced state-of-the-art Neural Machine Translation (NMT) by jointly learning to align and translate. It tends to ignore past alignment information, however, which often leads to over-translation and under-translation. To address this problem, we propose coverage-based NMT in this paper. We maintain a coverage vector to keep track of the attention history. The coverage vector is fed to the attention model to help adjust future attention, which lets NMT system to consider more about untranslated source words. Experiments show that the proposed approach significantly improves both translation quality and alignment quality over standard attention-based NMT.",This work is supported by China National 973 project 2014CB340301. Yang Liu is supported by the National Natural Science Foundation of China (No. 61522204) and the 863 Program (2015AA011808). We thank the anonymous reviewers for their insightful comments.,"Modeling Coverage for Neural Machine Translation. Attention mechanism has enhanced state-of-the-art Neural Machine Translation (NMT) by jointly learning to align and translate. It tends to ignore past alignment information, however, which often leads to over-translation and under-translation. To address this problem, we propose coverage-based NMT in this paper. We maintain a coverage vector to keep track of the attention history. The coverage vector is fed to the attention model to help adjust future attention, which lets NMT system to consider more about untranslated source words. Experiments show that the proposed approach significantly improves both translation quality and alignment quality over standard attention-based NMT.",2016
munteanu-etal-2004-improved,https://aclanthology.org/N04-1034,0,,,,,,,Improved Machine Translation Performance via Parallel Sentence Extraction from Comparable Corpora. ,Improved Machine Translation Performance via Parallel Sentence Extraction from Comparable Corpora,,Improved Machine Translation Performance via Parallel Sentence Extraction from Comparable Corpora,,,Improved Machine Translation Performance via Parallel Sentence Extraction from Comparable Corpora. ,2004
le-hoi-2020-video,https://aclanthology.org/2020.acl-main.518,0,,,,,,,"Video-Grounded Dialogues with Pretrained Generation Language Models. Pre-trained language models have shown remarkable success in improving various downstream NLP tasks due to their ability to capture dependencies in textual data and generate natural responses. In this paper, we leverage the power of pre-trained language models for improving video-grounded dialogue, which is very challenging and involves complex features of different dynamics: (1) Video features which can extend across both spatial and temporal dimensions; and (2) Dialogue features which involve semantic dependencies over multiple dialogue turns. We propose a framework by extending GPT-2 models to tackle these challenges by formulating videogrounded dialogue tasks as a sequence-tosequence task, combining both visual and textual representation into a structured sequence, and fine-tuning a large pre-trained GPT-2 network. Our framework allows fine-tuning language models to capture dependencies across multiple modalities over different levels of information: spatio-temporal level in video and token-sentence level in dialogue context. We achieve promising improvement on the AudioVisual Scene-Aware Dialogues (AVSD) benchmark from DSTC7, which supports a potential direction in this line of research.",Video-Grounded Dialogues with Pretrained Generation Language Models,"Pre-trained language models have shown remarkable success in improving various downstream NLP tasks due to their ability to capture dependencies in textual data and generate natural responses. In this paper, we leverage the power of pre-trained language models for improving video-grounded dialogue, which is very challenging and involves complex features of different dynamics: (1) Video features which can extend across both spatial and temporal dimensions; and (2) Dialogue features which involve semantic dependencies over multiple dialogue turns. We propose a framework by extending GPT-2 models to tackle these challenges by formulating videogrounded dialogue tasks as a sequence-tosequence task, combining both visual and textual representation into a structured sequence, and fine-tuning a large pre-trained GPT-2 network. Our framework allows fine-tuning language models to capture dependencies across multiple modalities over different levels of information: spatio-temporal level in video and token-sentence level in dialogue context. We achieve promising improvement on the AudioVisual Scene-Aware Dialogues (AVSD) benchmark from DSTC7, which supports a potential direction in this line of research.",Video-Grounded Dialogues with Pretrained Generation Language Models,"Pre-trained language models have shown remarkable success in improving various downstream NLP tasks due to their ability to capture dependencies in textual data and generate natural responses. In this paper, we leverage the power of pre-trained language models for improving video-grounded dialogue, which is very challenging and involves complex features of different dynamics: (1) Video features which can extend across both spatial and temporal dimensions; and (2) Dialogue features which involve semantic dependencies over multiple dialogue turns. We propose a framework by extending GPT-2 models to tackle these challenges by formulating videogrounded dialogue tasks as a sequence-tosequence task, combining both visual and textual representation into a structured sequence, and fine-tuning a large pre-trained GPT-2 network. 
Our framework allows fine-tuning language models to capture dependencies across multiple modalities over different levels of information: spatio-temporal level in video and token-sentence level in dialogue context. We achieve promising improvement on the AudioVisual Scene-Aware Dialogues (AVSD) benchmark from DSTC7, which supports a potential direction in this line of research.",,"Video-Grounded Dialogues with Pretrained Generation Language Models. Pre-trained language models have shown remarkable success in improving various downstream NLP tasks due to their ability to capture dependencies in textual data and generate natural responses. In this paper, we leverage the power of pre-trained language models for improving video-grounded dialogue, which is very challenging and involves complex features of different dynamics: (1) Video features which can extend across both spatial and temporal dimensions; and (2) Dialogue features which involve semantic dependencies over multiple dialogue turns. We propose a framework by extending GPT-2 models to tackle these challenges by formulating videogrounded dialogue tasks as a sequence-tosequence task, combining both visual and textual representation into a structured sequence, and fine-tuning a large pre-trained GPT-2 network. Our framework allows fine-tuning language models to capture dependencies across multiple modalities over different levels of information: spatio-temporal level in video and token-sentence level in dialogue context. We achieve promising improvement on the AudioVisual Scene-Aware Dialogues (AVSD) benchmark from DSTC7, which supports a potential direction in this line of research.",2020
pacak-1963-slavic,https://aclanthology.org/1963.earlymt-1.27,0,,,,,,,Slavic languages---comparative morphosyntactic research. ,{S}lavic languages{---}comparative morphosyntactic research,,Slavic languages---comparative morphosyntactic research,,,Slavic languages---comparative morphosyntactic research. ,1963
liu-etal-2021-continual,https://aclanthology.org/2021.findings-acl.239,0,,,,,,,"Continual Mixed-Language Pre-Training for Extremely Low-Resource Neural Machine Translation. The data scarcity in low-resource languages has become a bottleneck to building robust neural machine translation systems. Finetuning a multilingual pre-trained model (e.g., mBART (Liu et al., 2020a)) on the translation task is a good approach for low-resource languages; however, its performance will be greatly limited when there are unseen languages in the translation pairs. In this paper, we present a continual pre-training (CPT) framework on mBART to effectively adapt it to unseen languages. We first construct noisy mixed-language text from the monolingual corpus of the target language in the translation pair to cover both the source and target languages, and then, we continue pretraining mBART to reconstruct the original monolingual text. Results show that our method can consistently improve the finetuning performance upon the mBART baseline, as well as other strong baselines, across all tested low-resource translation pairs containing unseen languages. Furthermore, our approach also boosts the performance on translation pairs where both languages are seen in the original mBART's pre-training. The code is available at https://github.com/ zliucr/cpt-nmt.",Continual Mixed-Language Pre-Training for Extremely Low-Resource Neural Machine Translation,"The data scarcity in low-resource languages has become a bottleneck to building robust neural machine translation systems. Finetuning a multilingual pre-trained model (e.g., mBART (Liu et al., 2020a)) on the translation task is a good approach for low-resource languages; however, its performance will be greatly limited when there are unseen languages in the translation pairs. In this paper, we present a continual pre-training (CPT) framework on mBART to effectively adapt it to unseen languages. We first construct noisy mixed-language text from the monolingual corpus of the target language in the translation pair to cover both the source and target languages, and then, we continue pretraining mBART to reconstruct the original monolingual text. Results show that our method can consistently improve the finetuning performance upon the mBART baseline, as well as other strong baselines, across all tested low-resource translation pairs containing unseen languages. Furthermore, our approach also boosts the performance on translation pairs where both languages are seen in the original mBART's pre-training. The code is available at https://github.com/ zliucr/cpt-nmt.",Continual Mixed-Language Pre-Training for Extremely Low-Resource Neural Machine Translation,"The data scarcity in low-resource languages has become a bottleneck to building robust neural machine translation systems. Finetuning a multilingual pre-trained model (e.g., mBART (Liu et al., 2020a)) on the translation task is a good approach for low-resource languages; however, its performance will be greatly limited when there are unseen languages in the translation pairs. In this paper, we present a continual pre-training (CPT) framework on mBART to effectively adapt it to unseen languages. We first construct noisy mixed-language text from the monolingual corpus of the target language in the translation pair to cover both the source and target languages, and then, we continue pretraining mBART to reconstruct the original monolingual text. 
Results show that our method can consistently improve the finetuning performance upon the mBART baseline, as well as other strong baselines, across all tested low-resource translation pairs containing unseen languages. Furthermore, our approach also boosts the performance on translation pairs where both languages are seen in the original mBART's pre-training. The code is available at https://github.com/ zliucr/cpt-nmt.","We want to say thanks to the anonymous reviewers for the insightful reviews and constructive feedback. This work is partially funded by ITF/319/16FP and MRP/055/18 of the Innovation Technology Commission, the Hong Kong SAR Government.","Continual Mixed-Language Pre-Training for Extremely Low-Resource Neural Machine Translation. The data scarcity in low-resource languages has become a bottleneck to building robust neural machine translation systems. Finetuning a multilingual pre-trained model (e.g., mBART (Liu et al., 2020a)) on the translation task is a good approach for low-resource languages; however, its performance will be greatly limited when there are unseen languages in the translation pairs. In this paper, we present a continual pre-training (CPT) framework on mBART to effectively adapt it to unseen languages. We first construct noisy mixed-language text from the monolingual corpus of the target language in the translation pair to cover both the source and target languages, and then, we continue pretraining mBART to reconstruct the original monolingual text. Results show that our method can consistently improve the finetuning performance upon the mBART baseline, as well as other strong baselines, across all tested low-resource translation pairs containing unseen languages. Furthermore, our approach also boosts the performance on translation pairs where both languages are seen in the original mBART's pre-training. The code is available at https://github.com/ zliucr/cpt-nmt.",2021
austin-etal-1992-bbn,https://aclanthology.org/H92-1049,0,,,,,,,"BBN Real-Time Speech Recognition Demonstrations. Typically, real-time speech recognition -if achieved at all -is accomplished either by greatly simplifying the processing to be done, or by the use of special-purpose hardware. Each of these approaches has obvious problems. The former results in a substantial loss in accuracy, while the latter often results in obsolete hardware being developed at great expense and delay.
Starting in 1990 [1] [2] we have taken a different approach based on modifying the algorithms to provide increased speed without loss in accuracy. Our goal has been to use commercially available off-the-shelf (COTS) hardware to perform speech recognition. Initially, this meant using workstations with powerful but standard signal processing boards acting as accelerators. However, even these signal processing boards have two significant disadvantages:",{BBN} Real-Time Speech Recognition Demonstrations,"Typically, real-time speech recognition -if achieved at all -is accomplished either by greatly simplifying the processing to be done, or by the use of special-purpose hardware. Each of these approaches has obvious problems. The former results in a substantial loss in accuracy, while the latter often results in obsolete hardware being developed at great expense and delay.
Starting in 1990 [1] [2] we have taken a different approach based on modifying the algorithms to provide increased speed without loss in accuracy. Our goal has been to use commercially available off-the-shelf (COTS) hardware to perform speech recognition. Initially, this meant using workstations with powerful but standard signal processing boards acting as accelerators. However, even these signal processing boards have two significant disadvantages:",BBN Real-Time Speech Recognition Demonstrations,"Typically, real-time speech recognition -if achieved at all -is accomplished either by greatly simplifying the processing to be done, or by the use of special-purpose hardware. Each of these approaches has obvious problems. The former results in a substantial loss in accuracy, while the latter often results in obsolete hardware being developed at great expense and delay.
Starting in 1990 [1] [2] we have taken a different approach based on modifying the algorithms to provide increased speed without loss in accuracy. Our goal has been to use commercially available off-the-shelf (COTS) hardware to perform speech recognition. Initially, this meant using workstations with powerful but standard signal processing boards acting as accelerators. However, even these signal processing boards have two significant disadvantages:",,"BBN Real-Time Speech Recognition Demonstrations. Typically, real-time speech recognition -if achieved at all -is accomplished either by greatly simplifying the processing to be done, or by the use of special-purpose hardware. Each of these approaches has obvious problems. The former results in a substantial loss in accuracy, while the latter often results in obsolete hardware being developed at great expense and delay.
Starting in 1990 [1] [2] we have taken a different approach based on modifying the algorithms to provide increased speed without loss in accuracy. Our goal has been to use commercially available off-the-shelf (COTS) hardware to perform speech recognition. Initially, this meant using workstations with powerful but standard signal processing boards acting as accelerators. However, even these signal processing boards have two significant disadvantages:",1992
aggarwal-etal-2021-efficient,https://aclanthology.org/2021.ranlp-1.3,0,,,,,,,"Efficient Multilingual Text Classification for Indian Languages. India is one of the richest language hubs on the earth and is very diverse and multilingual. But apart from a few Indian languages, most of them are still considered to be resource poor. Since most of the NLP techniques either require linguistic knowledge that can only be developed by experts and native speakers of that language or they require a lot of labelled data which is again expensive to generate, the task of text classification becomes challenging for most of the Indian languages. The main objective of this paper is to see how one can benefit from the lexical similarity found in Indian languages in a multilingual scenario. Can a classification model trained on one Indian language be reused for other Indian languages? So, we performed zero-shot text classification via exploiting lexical similarity and we observed that our model performs best in those cases where the vocabulary overlap between the language datasets is maximum. Our experiments also confirm that a single multilingual model trained via exploiting language relatedness outperforms the baselines by significant margins.",Efficient Multilingual Text Classification for {I}ndian Languages,"India is one of the richest language hubs on the earth and is very diverse and multilingual. But apart from a few Indian languages, most of them are still considered to be resource poor. Since most of the NLP techniques either require linguistic knowledge that can only be developed by experts and native speakers of that language or they require a lot of labelled data which is again expensive to generate, the task of text classification becomes challenging for most of the Indian languages. The main objective of this paper is to see how one can benefit from the lexical similarity found in Indian languages in a multilingual scenario. Can a classification model trained on one Indian language be reused for other Indian languages? So, we performed zero-shot text classification via exploiting lexical similarity and we observed that our model performs best in those cases where the vocabulary overlap between the language datasets is maximum. Our experiments also confirm that a single multilingual model trained via exploiting language relatedness outperforms the baselines by significant margins.",Efficient Multilingual Text Classification for Indian Languages,"India is one of the richest language hubs on the earth and is very diverse and multilingual. But apart from a few Indian languages, most of them are still considered to be resource poor. Since most of the NLP techniques either require linguistic knowledge that can only be developed by experts and native speakers of that language or they require a lot of labelled data which is again expensive to generate, the task of text classification becomes challenging for most of the Indian languages. The main objective of this paper is to see how one can benefit from the lexical similarity found in Indian languages in a multilingual scenario. Can a classification model trained on one Indian language be reused for other Indian languages? So, we performed zero-shot text classification via exploiting lexical similarity and we observed that our model performs best in those cases where the vocabulary overlap between the language datasets is maximum. 
Our experiments also confirm that a single multilingual model trained via exploiting language relatedness outperforms the baselines by significant margins.",,"Efficient Multilingual Text Classification for Indian Languages. India is one of the richest language hubs on the earth and is very diverse and multilingual. But apart from a few Indian languages, most of them are still considered to be resource poor. Since most of the NLP techniques either require linguistic knowledge that can only be developed by experts and native speakers of that language or they require a lot of labelled data which is again expensive to generate, the task of text classification becomes challenging for most of the Indian languages. The main objective of this paper is to see how one can benefit from the lexical similarity found in Indian languages in a multilingual scenario. Can a classification model trained on one Indian language be reused for other Indian languages? So, we performed zero-shot text classification via exploiting lexical similarity and we observed that our model performs best in those cases where the vocabulary overlap between the language datasets is maximum. Our experiments also confirm that a single multilingual model trained via exploiting language relatedness outperforms the baselines by significant margins.",2021
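The observation that zero-shot transfer works best where vocabulary overlap is highest suggests a simple diagnostic: measure the overlap between two datasets' vocabularies before reusing a classifier. The Jaccard-style measure and toy transliterated data below are an illustrative sketch, not necessarily the metric used in the paper.

```python
def vocab_overlap(corpus_a, corpus_b):
    """Jaccard overlap between the vocabularies of two tokenised corpora."""
    vocab_a = {tok for sent in corpus_a for tok in sent.split()}
    vocab_b = {tok for sent in corpus_b for tok in sent.split()}
    if not vocab_a or not vocab_b:
        return 0.0
    return len(vocab_a & vocab_b) / len(vocab_a | vocab_b)

# Toy usage with made-up romanised sentences (illustrative only).
corpus_a = ["mera naam raj hai", "yah kitab acchi hai"]
corpus_b = ["maza naav raj aahe", "he pustak changle aahe"]
print(f"overlap = {vocab_overlap(corpus_a, corpus_b):.2f}")
```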
uehara-harada-2020-unsupervised,https://aclanthology.org/2020.nlpbt-1.6,0,,,,,,,"Unsupervised Keyword Extraction for Full-Sentence VQA. In the majority of the existing Visual Question Answering (VQA) research, the answers consist of short, often single words, as per instructions given to the annotators during dataset construction. This study envisions a VQA task for natural situations, where the answers are more likely to be sentences rather than single words. To bridge the gap between this natural VQA and existing VQA approaches, a novel unsupervised keyword extraction method is proposed. The method is based on the principle that the full-sentence answers can be decomposed into two parts: one that contains new information answering the question (i.e., keywords), and one that contains information already included in the question. Discriminative decoders were designed to achieve such decomposition, and the method was experimentally implemented on VQA datasets containing full-sentence answers. The results show that the proposed model can accurately extract the keywords without being given explicit annotations describing them.",Unsupervised Keyword Extraction for Full-Sentence {VQA},"In the majority of the existing Visual Question Answering (VQA) research, the answers consist of short, often single words, as per instructions given to the annotators during dataset construction. This study envisions a VQA task for natural situations, where the answers are more likely to be sentences rather than single words. To bridge the gap between this natural VQA and existing VQA approaches, a novel unsupervised keyword extraction method is proposed. The method is based on the principle that the full-sentence answers can be decomposed into two parts: one that contains new information answering the question (i.e., keywords), and one that contains information already included in the question. Discriminative decoders were designed to achieve such decomposition, and the method was experimentally implemented on VQA datasets containing full-sentence answers. The results show that the proposed model can accurately extract the keywords without being given explicit annotations describing them.",Unsupervised Keyword Extraction for Full-Sentence VQA,"In the majority of the existing Visual Question Answering (VQA) research, the answers consist of short, often single words, as per instructions given to the annotators during dataset construction. This study envisions a VQA task for natural situations, where the answers are more likely to be sentences rather than single words. To bridge the gap between this natural VQA and existing VQA approaches, a novel unsupervised keyword extraction method is proposed. The method is based on the principle that the full-sentence answers can be decomposed into two parts: one that contains new information answering the question (i.e., keywords), and one that contains information already included in the question. Discriminative decoders were designed to achieve such decomposition, and the method was experimentally implemented on VQA datasets containing full-sentence answers. The results show that the proposed model can accurately extract the keywords without being given explicit annotations describing them.","Acknowledgement This work was partially supported by JST CREST Grant Number JP-MJCR1403, and partially supported by JSPS KAKENHI Grant Number JP19H01115 and JP20H05556. 
We would like to thank Yang Li, Sho Maeoki, Sho Inayoshi, and Antonio Tejero-de-Pablos for helpful discussions.","Unsupervised Keyword Extraction for Full-Sentence VQA. In the majority of the existing Visual Question Answering (VQA) research, the answers consist of short, often single words, as per instructions given to the annotators during dataset construction. This study envisions a VQA task for natural situations, where the answers are more likely to be sentences rather than single words. To bridge the gap between this natural VQA and existing VQA approaches, a novel unsupervised keyword extraction method is proposed. The method is based on the principle that the full-sentence answers can be decomposed into two parts: one that contains new information answering the question (i.e., keywords), and one that contains information already included in the question. Discriminative decoders were designed to achieve such decomposition, and the method was experimentally implemented on VQA datasets containing full-sentence answers. The results show that the proposed model can accurately extract the keywords without being given explicit annotations describing them.",2020
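The decomposition principle in this abstract (a full-sentence answer splits into question-redundant material plus new-information keywords) can be illustrated with a far simpler heuristic than the paper's discriminative decoders: drop answer tokens that already occur in the question. This is only a stand-in for intuition; the stopword list and tokenisation below are ad hoc assumptions.

```python
def extract_keywords(question, answer,
                     stopwords=frozenset({"a", "an", "the", "is", "are"})):
    """Heuristic stand-in for the decomposition idea: keep answer tokens
    that do not already appear in the question and are not stopwords."""
    q_tokens = {t.lower().strip(".,?") for t in question.split()}
    keywords = []
    for tok in answer.split():
        norm = tok.lower().strip(".,?")
        if norm not in q_tokens and norm not in stopwords:
            keywords.append(norm)
    return keywords

print(extract_keywords("What is the man holding?",
                       "The man is holding a red umbrella."))
# -> ['red', 'umbrella']  (roughly the new information in the answer)
```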
dinu-moldovan-2021-automatic,https://aclanthology.org/2021.ranlp-1.41,1,,,,health,,,"Automatic Detection and Classification of Mental Illnesses from General Social Media Texts. Mental health is getting more and more attention recently, depression being a very common illness nowadays, but also other disorders like anxiety, obsessive-compulsive disorders, feeding disorders, autism, or attention-deficit/hyperactivity disorders. The huge amount of data from social media and the recent advances of deep learning models provide valuable means to automatically detecting mental disorders from plain text. In this article, we experiment with state-of-the-art methods on the SMHD mental health conditions dataset from Reddit (Cohan et al., 2018). Our contribution is threefold: using a dataset consisting of more illnesses than most studies, focusing on general text rather than mental health support groups and classification by posts rather than individuals or groups. For the automatic classification of the diseases, we employ three deep learning models: BERT, RoBERTa and XLNET. We double the baseline established by Cohan et al. (2018), on just a sample of their dataset. We improve the results obtained by Jiang et al. (2020) on post-level classification. The accuracy obtained by the eating disorder classifier is the highest due to the pregnant presence of discussions related to calories, diets, recipes etc., whereas depression had the lowest F1 score, probably because depression is more difficult to identify in linguistic acts.",Automatic Detection and Classification of Mental Illnesses from General Social Media Texts,"Mental health is getting more and more attention recently, depression being a very common illness nowadays, but also other disorders like anxiety, obsessive-compulsive disorders, feeding disorders, autism, or attention-deficit/hyperactivity disorders. The huge amount of data from social media and the recent advances of deep learning models provide valuable means to automatically detecting mental disorders from plain text. In this article, we experiment with state-of-the-art methods on the SMHD mental health conditions dataset from Reddit (Cohan et al., 2018). Our contribution is threefold: using a dataset consisting of more illnesses than most studies, focusing on general text rather than mental health support groups and classification by posts rather than individuals or groups. For the automatic classification of the diseases, we employ three deep learning models: BERT, RoBERTa and XLNET. We double the baseline established by Cohan et al. (2018), on just a sample of their dataset. We improve the results obtained by Jiang et al. (2020) on post-level classification. The accuracy obtained by the eating disorder classifier is the highest due to the pregnant presence of discussions related to calories, diets, recipes etc., whereas depression had the lowest F1 score, probably because depression is more difficult to identify in linguistic acts.",Automatic Detection and Classification of Mental Illnesses from General Social Media Texts,"Mental health is getting more and more attention recently, depression being a very common illness nowadays, but also other disorders like anxiety, obsessive-compulsive disorders, feeding disorders, autism, or attention-deficit/hyperactivity disorders. The huge amount of data from social media and the recent advances of deep learning models provide valuable means to automatically detecting mental disorders from plain text. 
In this article, we experiment with state-of-the-art methods on the SMHD mental health conditions dataset from Reddit (Cohan et al., 2018). Our contribution is threefold: using a dataset consisting of more illnesses than most studies, focusing on general text rather than mental health support groups and classification by posts rather than individuals or groups. For the automatic classification of the diseases, we employ three deep learning models: BERT, RoBERTa and XLNET. We double the baseline established by Cohan et al. (2018), on just a sample of their dataset. We improve the results obtained by Jiang et al. (2020) on post-level classification. The accuracy obtained by the eating disorder classifier is the highest due to the pregnant presence of discussions related to calories, diets, recipes etc., whereas depression had the lowest F1 score, probably because depression is more difficult to identify in linguistic acts.","This research is supported by a grant of the Ministry of Research, Innovation and Digitization, CNCS/CCCDI.","Automatic Detection and Classification of Mental Illnesses from General Social Media Texts. Mental health is getting more and more attention recently, depression being a very common illness nowadays, but also other disorders like anxiety, obsessive-compulsive disorders, feeding disorders, autism, or attention-deficit/hyperactivity disorders. The huge amount of data from social media and the recent advances of deep learning models provide valuable means to automatically detecting mental disorders from plain text. In this article, we experiment with state-of-the-art methods on the SMHD mental health conditions dataset from Reddit (Cohan et al., 2018). Our contribution is threefold: using a dataset consisting of more illnesses than most studies, focusing on general text rather than mental health support groups and classification by posts rather than individuals or groups. For the automatic classification of the diseases, we employ three deep learning models: BERT, RoBERTa and XLNET. We double the baseline established by Cohan et al. (2018), on just a sample of their dataset. We improve the results obtained by Jiang et al. (2020) on post-level classification. The accuracy obtained by the eating disorder classifier is the highest due to the pregnant presence of discussions related to calories, diets, recipes etc., whereas depression had the lowest F1 score, probably because depression is more difficult to identify in linguistic acts.",2021
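The post-level setup described above (BERT/RoBERTa/XLNet over individual Reddit posts) is, at its core, standard sequence classification with a pretrained encoder. A minimal sketch with the Hugging Face transformers API follows; the 9-way label space, the example post, and the label index are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of post-level classification with a pretrained encoder;
# all data, labels, and the number of classes below are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"          # RoBERTa or XLNet can be swapped in
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=9)

posts = ["I could not get out of bed all week."]   # hypothetical example post
labels = torch.tensor([2])                         # hypothetical class index

batch = tokenizer(posts, padding=True, truncation=True,
                  max_length=128, return_tensors="pt")
outputs = model(**batch, labels=labels)            # returns loss and logits
print(outputs.loss.item(), outputs.logits.argmax(dim=-1))
```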
wang-etal-2021-secoco-self,https://aclanthology.org/2021.findings-emnlp.396,0,,,,,,,"Secoco: Self-Correcting Encoding for Neural Machine Translation. This paper presents Self-correcting Encoding (Secoco), a framework that effectively deals with input noise for robust neural machine translation by introducing self-correcting predictors. Different from previous robust approaches, Secoco enables NMT to explicitly correct noisy inputs and delete specific errors simultaneously with the translation decoding process. Secoco is able to achieve significant improvements of 1.6 BLEU points over strong baselines on two real-world test sets and a benchmark WMT dataset with good interpretability. The code and dataset are publicly available at https://github.com/rgwt123/Secoco.",Secoco: Self-Correcting Encoding for Neural Machine Translation,"This paper presents Self-correcting Encoding (Secoco), a framework that effectively deals with input noise for robust neural machine translation by introducing self-correcting predictors. Different from previous robust approaches, Secoco enables NMT to explicitly correct noisy inputs and delete specific errors simultaneously with the translation decoding process. Secoco is able to achieve significant improvements of 1.6 BLEU points over strong baselines on two real-world test sets and a benchmark WMT dataset with good interpretability. The code and dataset are publicly available at https://github.com/rgwt123/Secoco.",Secoco: Self-Correcting Encoding for Neural Machine Translation,"This paper presents Self-correcting Encoding (Secoco), a framework that effectively deals with input noise for robust neural machine translation by introducing self-correcting predictors. Different from previous robust approaches, Secoco enables NMT to explicitly correct noisy inputs and delete specific errors simultaneously with the translation decoding process. Secoco is able to achieve significant improvements of 1.6 BLEU points over strong baselines on two real-world test sets and a benchmark WMT dataset with good interpretability. The code and dataset are publicly available at https://github.com/rgwt123/Secoco.",Deyi Xiong was partially supported by the National Key Research and Development Program of China (Grant No.2019QY1802) and Natural Science Foundation of Tianjin (Grant No.19JCZDJC31400). We would like to thank the three anonymous reviewers for their insightful comments.,"Secoco: Self-Correcting Encoding for Neural Machine Translation. This paper presents Self-correcting Encoding (Secoco), a framework that effectively deals with input noise for robust neural machine translation by introducing self-correcting predictors. Different from previous robust approaches, Secoco enables NMT to explicitly correct noisy inputs and delete specific errors simultaneously with the translation decoding process. Secoco is able to achieve significant improvements of 1.6 BLEU points over strong baselines on two real-world test sets and a benchmark WMT dataset with good interpretability. The code and dataset are publicly available at https://github.com/rgwt123/Secoco.",2021
bustamante-diaz-2006-spelling,http://www.lrec-conf.org/proceedings/lrec2006/pdf/119_pdf.pdf,0,,,,,,,"Spelling Error Patterns in Spanish for Word Processing Applications. This paper reports findings from the elaboration of a typology of spelling errors for Spanish. It also discusses previous generalizations about spelling error patterns found in other studies and offers new insights on them. The typology is based on the analysis of around 76K misspellings found in real-life texts produced by humans. The main goal of the elaboration of the typology was to help in the implementation of a spell checker that detects context-independent misspellings in general unrestricted texts with the most common confusion pairs (i.e. error/correction pairs) to improve the set of ranked correction candidates for misspellings. We found that spelling errors are language dependent and are closely related to the orthographic rules of each language. The statistical data we provide on spelling error patterns in Spanish and their comparison with other data in other related works are the novel contribution of this paper. In this line, this paper shows that some of the general statements found in the literature about spelling error patterns apply mainly to English and cannot be extrapolated to other languages.",Spelling Error Patterns in {S}panish for Word Processing Applications,"This paper reports findings from the elaboration of a typology of spelling errors for Spanish. It also discusses previous generalizations about spelling error patterns found in other studies and offers new insights on them. The typology is based on the analysis of around 76K misspellings found in real-life texts produced by humans. The main goal of the elaboration of the typology was to help in the implementation of a spell checker that detects context-independent misspellings in general unrestricted texts with the most common confusion pairs (i.e. error/correction pairs) to improve the set of ranked correction candidates for misspellings. We found that spelling errors are language dependent and are closely related to the orthographic rules of each language. The statistical data we provide on spelling error patterns in Spanish and their comparison with other data in other related works are the novel contribution of this paper. In this line, this paper shows that some of the general statements found in the literature about spelling error patterns apply mainly to English and cannot be extrapolated to other languages.",Spelling Error Patterns in Spanish for Word Processing Applications,"This paper reports findings from the elaboration of a typology of spelling errors for Spanish. It also discusses previous generalizations about spelling error patterns found in other studies and offers new insights on them. The typology is based on the analysis of around 76K misspellings found in real-life texts produced by humans. The main goal of the elaboration of the typology was to help in the implementation of a spell checker that detects context-independent misspellings in general unrestricted texts with the most common confusion pairs (i.e. error/correction pairs) to improve the set of ranked correction candidates for misspellings. We found that spelling errors are language dependent and are closely related to the orthographic rules of each language. The statistical data we provide on spelling error patterns in Spanish and their comparison with other data in other related works are the novel contribution of this paper. 
In this line, this paper shows that some of the general statements found in the literature about spelling error patterns apply mainly to English and cannot be extrapolated to other languages.",,"Spelling Error Patterns in Spanish for Word Processing Applications. This paper reports findings from the elaboration of a typology of spelling errors for Spanish. It also discusses previous generalizations about spelling error patterns found in other studies and offers new insights on them. The typology is based on the analysis of around 76K misspellings found in real-life texts produced by humans. The main goal of the elaboration of the typology was to help in the implementation of a spell checker that detects context-independent misspellings in general unrestricted texts with the most common confusion pairs (i.e. error/correction pairs) to improve the set of ranked correction candidates for misspellings. We found that spelling errors are language dependent and are closely related to the orthographic rules of each language. The statistical data we provide on spelling error patterns in Spanish and their comparison with other data in other related works are the novel contribution of this paper. In this line, this paper shows that some of the general statements found in the literature about spelling error patterns apply mainly to English and cannot be extrapolated to other languages.",2006
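The idea of using the most common confusion (error/correction) pairs to improve the ranking of correction candidates can be sketched as: score candidates by string similarity, then break ties with how often that pair has been observed. The similarity measure and the invented counts below are assumptions for illustration, not the paper's actual spell checker.

```python
from difflib import SequenceMatcher

def rank_candidates(misspelling, candidates, confusion_counts=None):
    """Rank correction candidates by string similarity, boosting pairs
    observed often in a (hypothetical) confusion-pair table."""
    confusion_counts = confusion_counts or {}

    def score(cand):
        sim = SequenceMatcher(None, misspelling, cand).ratio()
        boost = confusion_counts.get((misspelling, cand), 0)
        return (sim, boost)

    return sorted(candidates, key=score, reverse=True)

# Toy Spanish example; the count is invented purely for illustration.
print(rank_candidates("vijilar", ["vacilar", "vigilar"],
                      {("vijilar", "vigilar"): 37}))
```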
koshorek-etal-2018-text,https://aclanthology.org/N18-2075,0,,,,,,,"Text Segmentation as a Supervised Learning Task. Text segmentation, the task of dividing a document into contiguous segments based on its semantic structure, is a longstanding challenge in language understanding. Previous work on text segmentation focused on unsupervised methods such as clustering or graph search, due to the paucity in labeled data. In this work, we formulate text segmentation as a supervised learning problem, and present a large new dataset for text segmentation that is automatically extracted and labeled from Wikipedia. Moreover, we develop a segmentation model based on this dataset and show that it generalizes well to unseen natural text.",Text Segmentation as a Supervised Learning Task,"Text segmentation, the task of dividing a document into contiguous segments based on its semantic structure, is a longstanding challenge in language understanding. Previous work on text segmentation focused on unsupervised methods such as clustering or graph search, due to the paucity in labeled data. In this work, we formulate text segmentation as a supervised learning problem, and present a large new dataset for text segmentation that is automatically extracted and labeled from Wikipedia. Moreover, we develop a segmentation model based on this dataset and show that it generalizes well to unseen natural text.",Text Segmentation as a Supervised Learning Task,"Text segmentation, the task of dividing a document into contiguous segments based on its semantic structure, is a longstanding challenge in language understanding. Previous work on text segmentation focused on unsupervised methods such as clustering or graph search, due to the paucity in labeled data. In this work, we formulate text segmentation as a supervised learning problem, and present a large new dataset for text segmentation that is automatically extracted and labeled from Wikipedia. Moreover, we develop a segmentation model based on this dataset and show that it generalizes well to unseen natural text.","We thank the anonymous reviewers for their constructive feedback. This work was supported by the Israel Science Foundation, grant 942/16.","Text Segmentation as a Supervised Learning Task. Text segmentation, the task of dividing a document into contiguous segments based on its semantic structure, is a longstanding challenge in language understanding. Previous work on text segmentation focused on unsupervised methods such as clustering or graph search, due to the paucity in labeled data. In this work, we formulate text segmentation as a supervised learning problem, and present a large new dataset for text segmentation that is automatically extracted and labeled from Wikipedia. Moreover, we develop a segmentation model based on this dataset and show that it generalizes well to unseen natural text.",2018
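Formulating text segmentation as supervised learning, as this abstract describes, amounts to labelling each sentence with whether it closes a segment, using section boundaries (e.g., from Wikipedia) as free supervision. Below is a minimal sketch of that data-preparation step with toy sections; the exact labelling scheme of the released dataset may differ.

```python
def sentences_with_boundary_labels(sections):
    """Turn a document given as a list of sections (each a list of sentences)
    into (sentence, is_segment_end) pairs: label 1 marks the last sentence
    of a segment, everything else gets label 0."""
    examples = []
    for section in sections:
        for i, sentence in enumerate(section):
            examples.append((sentence, int(i == len(section) - 1)))
    return examples

# Toy document with two sections.
doc = [["Cats are small mammals.", "They are popular pets."],
       ["Dogs were domesticated earlier.", "They vary widely in size."]]
for sent, label in sentences_with_boundary_labels(doc):
    print(label, sent)
```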
kaewphan-etal-2014-utu,https://aclanthology.org/S14-2143,1,,,,health,,,"UTU: Disease Mention Recognition and Normalization with CRFs and Vector Space Representations. In this paper we present our system participating in the SemEval-2014 Task 7 in both subtasks A and B, aiming at recognizing and normalizing disease and symptom mentions from electronic medical records respectively. In subtask A, we used an existing NER system, NERsuite, with our own feature set tailored for this task. For subtask B, we combined word vector representations and supervised machine learning to map the recognized mentions to the corresponding UMLS concepts. Our system was placed 2nd and 5th out of 21 participants on subtasks A and B respectively showing competitive performance.",{UTU}: Disease Mention Recognition and Normalization with {CRF}s and Vector Space Representations,"In this paper we present our system participating in the SemEval-2014 Task 7 in both subtasks A and B, aiming at recognizing and normalizing disease and symptom mentions from electronic medical records respectively. In subtask A, we used an existing NER system, NERsuite, with our own feature set tailored for this task. For subtask B, we combined word vector representations and supervised machine learning to map the recognized mentions to the corresponding UMLS concepts. Our system was placed 2nd and 5th out of 21 participants on subtasks A and B respectively showing competitive performance.",UTU: Disease Mention Recognition and Normalization with CRFs and Vector Space Representations,"In this paper we present our system participating in the SemEval-2014 Task 7 in both subtasks A and B, aiming at recognizing and normalizing disease and symptom mentions from electronic medical records respectively. In subtask A, we used an existing NER system, NERsuite, with our own feature set tailored for this task. For subtask B, we combined word vector representations and supervised machine learning to map the recognized mentions to the corresponding UMLS concepts. Our system was placed 2nd and 5th out of 21 participants on subtasks A and B respectively showing competitive performance.","Computational resources were provided by CSC -IT Center for Science Ltd, Espoo, Finland. This work was supported by the Academy of Finland.","UTU: Disease Mention Recognition and Normalization with CRFs and Vector Space Representations. In this paper we present our system participating in the SemEval-2014 Task 7 in both subtasks A and B, aiming at recognizing and normalizing disease and symptom mentions from electronic medical records respectively. In subtask A, we used an existing NER system, NERsuite, with our own feature set tailored for this task. For subtask B, we combined word vector representations and supervised machine learning to map the recognized mentions to the corresponding UMLS concepts. Our system was placed 2nd and 5th out of 21 participants on subtasks A and B respectively showing competitive performance.",2014
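For the normalization step (subtask B), combining word vectors with a similarity search can be pictured as: embed the recognised mention and each candidate concept name, then pick the nearest concept. The tiny hand-made vectors and concept list below are purely illustrative; the actual system also layered supervised learning on top of such representations.

```python
import numpy as np

def embed(phrase, word_vectors):
    """Average (hypothetical) word vectors for a phrase; zeros if unseen."""
    vecs = [word_vectors[w] for w in phrase.lower().split() if w in word_vectors]
    if not vecs:
        return np.zeros(len(next(iter(word_vectors.values()))))
    return np.mean(vecs, axis=0)

def normalize_mention(mention, concept_names, word_vectors):
    """Map a recognised mention to the concept name closest in vector space."""
    m = embed(mention, word_vectors)

    def cos(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0

    return max(concept_names, key=lambda name: cos(m, embed(name, word_vectors)))

# Toy 3-d vectors standing in for real embeddings (illustrative only).
wv = {"heart": np.array([1.0, 0.0, 0.0]), "attack": np.array([0.0, 1.0, 0.0]),
      "myocardial": np.array([1.0, 0.1, 0.0]), "infarction": np.array([0.0, 1.0, 0.1]),
      "migraine": np.array([0.0, 0.0, 1.0])}
print(normalize_mention("heart attack", ["myocardial infarction", "migraine"], wv))
```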
niv-1992-right,https://aclanthology.org/P92-1039,0,,,,,,,Right Association Revisited. Consideration of when Right Association works and when it fails lead to a restatement of this parsing principle in terms of the notion of heaviness. A computational investigation of a syntactically annotated corpus provides evidence for this proposal and suggest circumstances when RA is likely to make correct attachment predictions.,Right Association Revisited,Consideration of when Right Association works and when it fails lead to a restatement of this parsing principle in terms of the notion of heaviness. A computational investigation of a syntactically annotated corpus provides evidence for this proposal and suggest circumstances when RA is likely to make correct attachment predictions.,Right Association Revisited,Consideration of when Right Association works and when it fails lead to a restatement of this parsing principle in terms of the notion of heaviness. A computational investigation of a syntactically annotated corpus provides evidence for this proposal and suggest circumstances when RA is likely to make correct attachment predictions.,,Right Association Revisited. Consideration of when Right Association works and when it fails lead to a restatement of this parsing principle in terms of the notion of heaviness. A computational investigation of a syntactically annotated corpus provides evidence for this proposal and suggest circumstances when RA is likely to make correct attachment predictions.,1992
li-etal-2020-shallow,https://aclanthology.org/2020.emnlp-main.72,0,,,,,,,"Shallow-to-Deep Training for Neural Machine Translation. Deep encoders have been proven to be effective in improving neural machine translation (NMT) systems, but training an extremely deep encoder is time consuming. Moreover, why deep models help NMT is an open question. In this paper, we investigate the behavior of a well-tuned deep Transformer system. We find that stacking layers is helpful in improving the representation ability of NMT models and adjacent layers perform similarly. This inspires us to develop a shallow-to-deep training method that learns deep models by stacking shallow models. In this way, we successfully train a Transformer system with a 54-layer encoder. Experimental results on WMT'16 English-German and WMT'14 English-French translation tasks show that it is 1.4× faster than training from scratch, and achieves a BLEU score of 30.33 and 43.29 on two tasks. The code is publicly available at https://github.com/libeineu/SDT-Training.",Shallow-to-Deep Training for Neural Machine Translation,"Deep encoders have been proven to be effective in improving neural machine translation (NMT) systems, but training an extremely deep encoder is time consuming. Moreover, why deep models help NMT is an open question. In this paper, we investigate the behavior of a well-tuned deep Transformer system. We find that stacking layers is helpful in improving the representation ability of NMT models and adjacent layers perform similarly. This inspires us to develop a shallow-to-deep training method that learns deep models by stacking shallow models. In this way, we successfully train a Transformer system with a 54-layer encoder. Experimental results on WMT'16 English-German and WMT'14 English-French translation tasks show that it is 1.4× faster than training from scratch, and achieves a BLEU score of 30.33 and 43.29 on two tasks. The code is publicly available at https://github.com/libeineu/SDT-Training.",Shallow-to-Deep Training for Neural Machine Translation,"Deep encoders have been proven to be effective in improving neural machine translation (NMT) systems, but training an extremely deep encoder is time consuming. Moreover, why deep models help NMT is an open question. In this paper, we investigate the behavior of a well-tuned deep Transformer system. We find that stacking layers is helpful in improving the representation ability of NMT models and adjacent layers perform similarly. This inspires us to develop a shallow-to-deep training method that learns deep models by stacking shallow models. In this way, we successfully train a Transformer system with a 54-layer encoder. Experimental results on WMT'16 English-German and WMT'14 English-French translation tasks show that it is 1.4× faster than training from scratch, and achieves a BLEU score of 30.33 and 43.29 on two tasks. The code is publicly available at https://github.com/libeineu/SDT-Training.","This work was supported in part by the National Science Foundation of China (Nos. 61876035 and 61732005), the National Key R&D Program of China (No. 2019QY1801). The authors would like to thank anonymous reviewers for their valuable comments. And thank Qiang Wang for the helpful advice to improve the paper.","Shallow-to-Deep Training for Neural Machine Translation. Deep encoders have been proven to be effective in improving neural machine translation (NMT) systems, but training an extremely deep encoder is time consuming. 
Moreover, why deep models help NMT is an open question. In this paper, we investigate the behavior of a well-tuned deep Transformer system. We find that stacking layers is helpful in improving the representation ability of NMT models and adjacent layers perform similarly. This inspires us to develop a shallow-to-deep training method that learns deep models by stacking shallow models. In this way, we successfully train a Transformer system with a 54-layer encoder. Experimental results on WMT'16 English-German and WMT'14 English-French translation tasks show that it is 1.4× faster than training from scratch, and achieves a BLEU score of 30.33 and 43.29 on two tasks. The code is publicly available at https://github.com/libeineu/SDT-Training.",2020
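The shallow-to-deep idea, learning a deep encoder by stacking trained shallow models, can be sketched as copying the layers of a trained shallow stack to initialise a deeper one before continuing training. The PyTorch snippet below is an illustrative sketch under that reading, with arbitrary layer sizes; it is not the authors' released SDT-Training code.

```python
import copy
import torch.nn as nn

def deepen_encoder(layers: nn.ModuleList, growth_factor: int = 2) -> nn.ModuleList:
    """Return an encoder stack `growth_factor` times deeper by repeating
    (deep-copying) the already trained layers as initialisation."""
    new_layers = []
    for _ in range(growth_factor):
        for layer in layers:
            new_layers.append(copy.deepcopy(layer))   # reuse learned weights
    return nn.ModuleList(new_layers)

# Toy shallow encoder: 6 Transformer layers with arbitrary sizes.
shallow = nn.ModuleList(
    nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
    for _ in range(6)
)
deep = deepen_encoder(shallow, growth_factor=2)
print(len(shallow), "->", len(deep))   # 6 -> 12
```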
arthur-etal-2021-multilingual,https://aclanthology.org/2021.findings-acl.420,0,,,,,,,"Multilingual Simultaneous Neural Machine Translation. Simultaneous machine translation (SIMT) involves translating source utterances to the target language in real-time before the speaker utterance completes. This paper proposes the multilingual approach to SIMT, where a single model simultaneously translates between multiple language-pairs. This not only results in more efficiency in terms of the number of models and parameters (hence simpler deployment), but may also lead to higher performing models by capturing commonalities among the languages. We further explore simple and effective multilingual architectures based on two strong recently proposed SIMT models. Our results on translating from two Germanic languages (German, Dutch) and three Romance languages (French, Italian, Romanian) into English show (i) the single multilingual model is on-par or better than individual models, and (ii) multilingual SIMT models trained based on language families are on-par or better than the universal model trained for all languages. 1",Multilingual Simultaneous Neural Machine Translation,"Simultaneous machine translation (SIMT) involves translating source utterances to the target language in real-time before the speaker utterance completes. This paper proposes the multilingual approach to SIMT, where a single model simultaneously translates between multiple language-pairs. This not only results in more efficiency in terms of the number of models and parameters (hence simpler deployment), but may also lead to higher performing models by capturing commonalities among the languages. We further explore simple and effective multilingual architectures based on two strong recently proposed SIMT models. Our results on translating from two Germanic languages (German, Dutch) and three Romance languages (French, Italian, Romanian) into English show (i) the single multilingual model is on-par or better than individual models, and (ii) multilingual SIMT models trained based on language families are on-par or better than the universal model trained for all languages. 1",Multilingual Simultaneous Neural Machine Translation,"Simultaneous machine translation (SIMT) involves translating source utterances to the target language in real-time before the speaker utterance completes. This paper proposes the multilingual approach to SIMT, where a single model simultaneously translates between multiple language-pairs. This not only results in more efficiency in terms of the number of models and parameters (hence simpler deployment), but may also lead to higher performing models by capturing commonalities among the languages. We further explore simple and effective multilingual architectures based on two strong recently proposed SIMT models. Our results on translating from two Germanic languages (German, Dutch) and three Romance languages (French, Italian, Romanian) into English show (i) the single multilingual model is on-par or better than individual models, and (ii) multilingual SIMT models trained based on language families are on-par or better than the universal model trained for all languages. 1",,"Multilingual Simultaneous Neural Machine Translation. Simultaneous machine translation (SIMT) involves translating source utterances to the target language in real-time before the speaker utterance completes. This paper proposes the multilingual approach to SIMT, where a single model simultaneously translates between multiple language-pairs. 
This not only results in more efficiency in terms of the number of models and parameters (hence simpler deployment), but may also lead to higher performing models by capturing commonalities among the languages. We further explore simple and effective multilingual architectures based on two strong recently proposed SIMT models. Our results on translating from two Germanic languages (German, Dutch) and three Romance languages (French, Italian, Romanian) into English show (i) the single multilingual model is on-par or better than individual models, and (ii) multilingual SIMT models trained based on language families are on-par or better than the universal model trained for all languages. 1",2021
gundapu-mamidi-2020-gundapusunil-semeval,https://aclanthology.org/2020.semeval-1.166,0,,,,,,,"Gundapusunil at SemEval-2020 Task 9: Syntactic Semantic LSTM Architecture for SENTIment Analysis of Code-MIXed Data. The phenomenon of mixing the vocabulary and syntax of multiple languages within the same utterance is called Code-Mixing. This is more evident in multilingual societies. In this paper, we have developed a system for SemEval 2020: Task 9 on Sentiment Analysis for Code-Mixed Social Media Text. Our system first generates two types of embeddings for the social media text. In those, the first one is character level embeddings to encode the character level information and to handle the out-of-vocabulary entries and the second one is FastText word embeddings for capturing morphology and semantics. These two embeddings were passed to the LSTM network and the system outperformed the baseline model.",Gundapusunil at {S}em{E}val-2020 Task 9: Syntactic Semantic {LSTM} Architecture for {SENTI}ment Analysis of Code-{MIX}ed Data,"The phenomenon of mixing the vocabulary and syntax of multiple languages within the same utterance is called Code-Mixing. This is more evident in multilingual societies. In this paper, we have developed a system for SemEval 2020: Task 9 on Sentiment Analysis for Code-Mixed Social Media Text. Our system first generates two types of embeddings for the social media text. In those, the first one is character level embeddings to encode the character level information and to handle the out-of-vocabulary entries and the second one is FastText word embeddings for capturing morphology and semantics. These two embeddings were passed to the LSTM network and the system outperformed the baseline model.",Gundapusunil at SemEval-2020 Task 9: Syntactic Semantic LSTM Architecture for SENTIment Analysis of Code-MIXed Data,"The phenomenon of mixing the vocabulary and syntax of multiple languages within the same utterance is called Code-Mixing. This is more evident in multilingual societies. In this paper, we have developed a system for SemEval 2020: Task 9 on Sentiment Analysis for Code-Mixed Social Media Text. Our system first generates two types of embeddings for the social media text. In those, the first one is character level embeddings to encode the character level information and to handle the out-of-vocabulary entries and the second one is FastText word embeddings for capturing morphology and semantics. These two embeddings were passed to the LSTM network and the system outperformed the baseline model.",,"Gundapusunil at SemEval-2020 Task 9: Syntactic Semantic LSTM Architecture for SENTIment Analysis of Code-MIXed Data. The phenomenon of mixing the vocabulary and syntax of multiple languages within the same utterance is called Code-Mixing. This is more evident in multilingual societies. In this paper, we have developed a system for SemEval 2020: Task 9 on Sentiment Analysis for Code-Mixed Social Media Text. Our system first generates two types of embeddings for the social media text. In those, the first one is character level embeddings to encode the character level information and to handle the out-of-vocabulary entries and the second one is FastText word embeddings for capturing morphology and semantics. These two embeddings were passed to the LSTM network and the system outperformed the baseline model.",2020
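The architecture described above, character-level embeddings concatenated with FastText word embeddings and fed to an LSTM classifier, can be sketched roughly as follows. All dimensions, the 3-way sentiment head, and the random inputs are illustrative assumptions; in practice the word embedding matrix would be initialised from pretrained FastText vectors.

```python
import torch
import torch.nn as nn

class CharWordLSTM(nn.Module):
    """Sentence classifier combining a character-level LSTM representation
    per word with (FastText-style) word embeddings, then a word-level LSTM."""
    def __init__(self, n_chars, n_words, char_dim=32, word_dim=300,
                 hidden=128, n_classes=3):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.char_lstm = nn.LSTM(char_dim, char_dim, batch_first=True)
        # In practice, load pretrained FastText vectors into this matrix.
        self.word_emb = nn.Embedding(n_words, word_dim, padding_idx=0)
        self.lstm = nn.LSTM(char_dim + word_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, char_ids, word_ids):
        # char_ids: (batch, seq, chars_per_word); word_ids: (batch, seq)
        b, s, c = char_ids.shape
        char_vecs = self.char_emb(char_ids).view(b * s, c, -1)
        _, (h, _) = self.char_lstm(char_vecs)        # last hidden state per word
        char_repr = h[-1].view(b, s, -1)
        word_repr = self.word_emb(word_ids)
        _, (h, _) = self.lstm(torch.cat([char_repr, word_repr], dim=-1))
        return self.out(h[-1])

model = CharWordLSTM(n_chars=100, n_words=5000)
logits = model(torch.randint(1, 100, (2, 7, 10)), torch.randint(1, 5000, (2, 7)))
print(logits.shape)   # torch.Size([2, 3])
```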
polak-polakova-1982-operation,https://aclanthology.org/C82-2058,0,,,,,,,"Operation Logic - A Database Management Operation System of Human-Like Information Processing. The paper contains the description of a database management computer operation system called operation logic. This system is a formal logic with well-defined formulas as semantic language clauses and with reasoning by means of modus ponens rules. There are four frames-CLAUSE, QUESTION,",Operation Logic - A Database Management Operation System of Human-Like Information Processing,"The paper contains the description of a database management computer operation system called operation logic. This system is a formal logic with well-defined formulas as semantic language clauses and with reasoning by means of modus ponens rules. There are four frames-CLAUSE, QUESTION,",Operation Logic - A Database Management Operation System of Human-Like Information Processing,"The paper contains the description of a database management computer operation system called operation logic. This system is a formal logic with well-defined formulas as semantic language clauses and with reasoning by means of modus ponens rules. There are four frames-CLAUSE, QUESTION,",,"Operation Logic - A Database Management Operation System of Human-Like Information Processing. The paper contains the description of a database management computer operation system called operation logic. This system is a formal logic with well-defined formulas as semantic language clauses and with reasoning by means of modus ponens rules. There are four frames-CLAUSE, QUESTION,",1982
jin-etal-2021-neural,https://aclanthology.org/2021.emnlp-main.80,0,,,,,,,"Neural Attention-Aware Hierarchical Topic Model. Neural topic models (NTMs) apply deep neural networks to topic modelling. Despite their success, NTMs generally ignore two important aspects: (1) only document-level word count information is utilized for the training, while more fine-grained sentence-level information is ignored, and (2) external semantic knowledge regarding documents, sentences and words are not exploited for the training. To address these issues, we propose a variational autoencoder (VAE) NTM model that jointly reconstructs the sentence and document word counts using combinations of bag-of-words (BoW) topical embeddings and pre-trained semantic embeddings. The pre-trained embeddings are first transformed into a common latent topical space to align their semantics with the BoW embeddings. Our model also features hierarchical KL divergence to leverage embeddings of each document to regularize those of their sentences, thereby paying more attention to semantically relevant sentences. Both quantitative and qualitative experiments have shown the efficacy of our model in 1) lowering the reconstruction errors at both the sentence and document levels, and 2) discovering more coherent topics from real-world datasets.",Neural Attention-Aware Hierarchical Topic Model,"Neural topic models (NTMs) apply deep neural networks to topic modelling. Despite their success, NTMs generally ignore two important aspects: (1) only document-level word count information is utilized for the training, while more fine-grained sentence-level information is ignored, and (2) external semantic knowledge regarding documents, sentences and words are not exploited for the training. To address these issues, we propose a variational autoencoder (VAE) NTM model that jointly reconstructs the sentence and document word counts using combinations of bag-of-words (BoW) topical embeddings and pre-trained semantic embeddings. The pre-trained embeddings are first transformed into a common latent topical space to align their semantics with the BoW embeddings. Our model also features hierarchical KL divergence to leverage embeddings of each document to regularize those of their sentences, thereby paying more attention to semantically relevant sentences. Both quantitative and qualitative experiments have shown the efficacy of our model in 1) lowering the reconstruction errors at both the sentence and document levels, and 2) discovering more coherent topics from real-world datasets.",Neural Attention-Aware Hierarchical Topic Model,"Neural topic models (NTMs) apply deep neural networks to topic modelling. Despite their success, NTMs generally ignore two important aspects: (1) only document-level word count information is utilized for the training, while more fine-grained sentence-level information is ignored, and (2) external semantic knowledge regarding documents, sentences and words are not exploited for the training. To address these issues, we propose a variational autoencoder (VAE) NTM model that jointly reconstructs the sentence and document word counts using combinations of bag-of-words (BoW) topical embeddings and pre-trained semantic embeddings. The pre-trained embeddings are first transformed into a common latent topical space to align their semantics with the BoW embeddings. 
Our model also features hierarchical KL divergence to leverage embeddings of each document to regularize those of their sentences, thereby paying more attention to semantically relevant sentences. Both quantitative and qualitative experiments have shown the efficacy of our model in 1) lowering the reconstruction errors at both the sentence and document levels, and 2) discovering more coherent topics from real-world datasets.",Yuan Jin and Wray Buntine were supported by the Australian Research Council under awards DE170100037. Wray Buntine was also sponsored by DARPA under agreement number FA8750-19-2-0501.,"Neural Attention-Aware Hierarchical Topic Model. Neural topic models (NTMs) apply deep neural networks to topic modelling. Despite their success, NTMs generally ignore two important aspects: (1) only document-level word count information is utilized for the training, while more fine-grained sentence-level information is ignored, and (2) external semantic knowledge regarding documents, sentences and words are not exploited for the training. To address these issues, we propose a variational autoencoder (VAE) NTM model that jointly reconstructs the sentence and document word counts using combinations of bag-of-words (BoW) topical embeddings and pre-trained semantic embeddings. The pre-trained embeddings are first transformed into a common latent topical space to align their semantics with the BoW embeddings. Our model also features hierarchical KL divergence to leverage embeddings of each document to regularize those of their sentences, thereby paying more attention to semantically relevant sentences. Both quantitative and qualitative experiments have shown the efficacy of our model in 1) lowering the reconstruction errors at both the sentence and document levels, and 2) discovering more coherent topics from real-world datasets.",2021
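The hierarchical KL divergence mentioned in this abstract, using each document's posterior to regularize the posteriors of its sentences, can be written in closed form for diagonal Gaussians. The sketch below shows that term with random placeholder statistics; it is one plausible reading of the regularizer, not the authors' exact formulation.

```python
import torch

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians,
    summed over latent dimensions."""
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    kl = 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
    return kl.sum(dim=-1)

# Toy tensors: 4 sentences belonging to 2 documents (2 each), latent dim 10.
doc_mu, doc_logvar = torch.randn(2, 10), torch.randn(2, 10)
sent_mu, sent_logvar = torch.randn(4, 10), torch.randn(4, 10)
doc_of_sentence = torch.tensor([0, 0, 1, 1])

# Each sentence posterior is pulled toward its parent document's posterior.
hier_kl = kl_diag_gaussians(sent_mu, sent_logvar,
                            doc_mu[doc_of_sentence], doc_logvar[doc_of_sentence])
print(hier_kl.shape, hier_kl.mean().item())
```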
gupta-etal-2021-sumpubmed,https://aclanthology.org/2021.acl-srw.30,1,,,,health,industry_innovation_infrastructure,,"SumPubMed: Summarization Dataset of PubMed Scientific Articles. Most earlier work on text summarization is carried out on news article datasets. The summary in these datasets is naturally located at the beginning of the text. Hence, a model can spuriously utilize this correlation for summary generation instead of truly learning to summarize. To address this issue, we constructed a new dataset, SUMPUBMED, using scientific articles from the PubMed archive. We conducted a human analysis of summary coverage, redundancy, readability, coherence, and informativeness on SUMPUBMED. SUMPUBMED is challenging because (a) the summary is distributed throughout the text (not-localized on top), and (b) it contains rare domain-specific scientific terms. We observe that seq2seq models that adequately summarize news articles struggle to summarize SUMPUBMED. Thus, SUMPUBMED opens new avenues for the future improvement of models as well as the development of new evaluation metrics.",{SumPubMed}: Summarization Dataset of {P}ub{M}ed Scientific Articles,"Most earlier work on text summarization is carried out on news article datasets. The summary in these datasets is naturally located at the beginning of the text. Hence, a model can spuriously utilize this correlation for summary generation instead of truly learning to summarize. To address this issue, we constructed a new dataset, SUMPUBMED, using scientific articles from the PubMed archive. We conducted a human analysis of summary coverage, redundancy, readability, coherence, and informativeness on SUMPUBMED. SUMPUBMED is challenging because (a) the summary is distributed throughout the text (not-localized on top), and (b) it contains rare domain-specific scientific terms. We observe that seq2seq models that adequately summarize news articles struggle to summarize SUMPUBMED. Thus, SUMPUBMED opens new avenues for the future improvement of models as well as the development of new evaluation metrics.",SumPubMed: Summarization Dataset of PubMed Scientific Articles,"Most earlier work on text summarization is carried out on news article datasets. The summary in these datasets is naturally located at the beginning of the text. Hence, a model can spuriously utilize this correlation for summary generation instead of truly learning to summarize. To address this issue, we constructed a new dataset, SUMPUBMED, using scientific articles from the PubMed archive. We conducted a human analysis of summary coverage, redundancy, readability, coherence, and informativeness on SUMPUBMED. SUMPUBMED is challenging because (a) the summary is distributed throughout the text (not-localized on top), and (b) it contains rare domain-specific scientific terms. We observe that seq2seq models that adequately summarize news articles struggle to summarize SUMPUBMED. Thus, SUMPUBMED opens new avenues for the future improvement of models as well as the development of new evaluation metrics.","We would like to thank the ACL SRW anonymous reviewers for their useful feedback, comments, and suggestions.","SumPubMed: Summarization Dataset of PubMed Scientific Articles. Most earlier work on text summarization is carried out on news article datasets. The summary in these datasets is naturally located at the beginning of the text. Hence, a model can spuriously utilize this correlation for summary generation instead of truly learning to summarize. 
To address this issue, we constructed a new dataset, SUMPUBMED, using scientific articles from the PubMed archive. We conducted a human analysis of summary coverage, redundancy, readability, coherence, and informativeness on SUMPUBMED. SUMPUBMED is challenging because (a) the summary is distributed throughout the text (not-localized on top), and (b) it contains rare domain-specific scientific terms. We observe that seq2seq models that adequately summarize news articles struggle to summarize SUMPUBMED. Thus, SUMPUBMED opens new avenues for the future improvement of models as well as the development of new evaluation metrics.",2021
schiffman-mckeown-2005-context,https://aclanthology.org/H05-1090,1,,,,industry_innovation_infrastructure,,,"Context and Learning in Novelty Detection. We demonstrate the value of using context in a new-information detection system that achieved the highest precision scores at the Text Retrieval Conference's Novelty Track in 2004. In order to determine whether information within a sentence has been seen in material read previously, our system integrates information about the context of the sentence with novel words and named entities within the sentence, and uses a specialized learning algorithm to tune the system parameters.",Context and Learning in Novelty Detection,"We demonstrate the value of using context in a new-information detection system that achieved the highest precision scores at the Text Retrieval Conference's Novelty Track in 2004. In order to determine whether information within a sentence has been seen in material read previously, our system integrates information about the context of the sentence with novel words and named entities within the sentence, and uses a specialized learning algorithm to tune the system parameters.",Context and Learning in Novelty Detection,"We demonstrate the value of using context in a new-information detection system that achieved the highest precision scores at the Text Retrieval Conference's Novelty Track in 2004. In order to determine whether information within a sentence has been seen in material read previously, our system integrates information about the context of the sentence with novel words and named entities within the sentence, and uses a specialized learning algorithm to tune the system parameters.",,"Context and Learning in Novelty Detection. We demonstrate the value of using context in a new-information detection system that achieved the highest precision scores at the Text Retrieval Conference's Novelty Track in 2004. In order to determine whether information within a sentence has been seen in material read previously, our system integrates information about the context of the sentence with novel words and named entities within the sentence, and uses a specialized learning algorithm to tune the system parameters.",2005
pinnis-etal-2014-real,https://aclanthology.org/2014.amta-users.7,0,,,,,,,Real-world challenges in application of MT for localization: the Baltic case. ,Real-world challenges in application of {MT} for localization: the Baltic case,,Real-world challenges in application of MT for localization: the Baltic case,,,Real-world challenges in application of MT for localization: the Baltic case. ,2014
chung-etal-2021-splat,https://aclanthology.org/2021.naacl-main.152,0,,,,,,,"SPLAT: Speech-Language Joint Pre-Training for Spoken Language Understanding. Spoken language understanding (SLU) requires a model to analyze input acoustic signal to understand its linguistic content and make predictions. To boost the models' performance, various pre-training methods have been proposed to learn rich representations from large-scale unannotated speech and text. However, the inherent disparities between the two modalities necessitate a mutual analysis. In this paper, we propose a novel semi-supervised learning framework, SPLAT, to jointly pre-train the speech and language modules. Besides conducting a self-supervised masked language modeling task on the two individual modules using unpaired speech and text, SPLAT aligns representations from the two modules in a shared latent space using a small amount of paired speech and text. Thus, during fine-tuning, the speech module alone can produce representations carrying both acoustic information and contextual semantic knowledge of an input acoustic signal. Experimental results verify the effectiveness of our approach on various SLU tasks. For example, SPLAT improves the previous state-of-the-art performance on the Spoken SQuAD dataset by more than 10%.",{SPLAT}: Speech-Language Joint Pre-Training for Spoken Language Understanding,"Spoken language understanding (SLU) requires a model to analyze input acoustic signal to understand its linguistic content and make predictions. To boost the models' performance, various pre-training methods have been proposed to learn rich representations from large-scale unannotated speech and text. However, the inherent disparities between the two modalities necessitate a mutual analysis. In this paper, we propose a novel semi-supervised learning framework, SPLAT, to jointly pre-train the speech and language modules. 
Besides conducting a self-supervised masked language modeling task on the two individual modules using unpaired speech and text, SPLAT aligns representations from the two modules in a shared latent space using a small amount of paired speech and text. Thus, during fine-tuning, the speech module alone can produce representations carrying both acoustic information and contextual semantic knowledge of an input acoustic signal. Experimental results verify the effectiveness of our approach on various SLU tasks. For example, SPLAT improves the previous state-of-the-art performance on the Spoken SQuAD dataset by more than 10%.",SPLAT: Speech-Language Joint Pre-Training for Spoken Language Understanding,"Spoken language understanding (SLU) requires a model to analyze input acoustic signal to understand its linguistic content and make predictions. To boost the models' performance, various pre-training methods have been proposed to learn rich representations from large-scale unannotated speech and text. However, the inherent disparities between the two modalities necessitate a mutual analysis. In this paper, we propose a novel semi-supervised learning framework, SPLAT, to jointly pre-train the speech and language modules. Besides conducting a self-supervised masked language modeling task on the two individual modules using unpaired speech and text, SPLAT aligns representations from the two modules in a shared latent space using a small amount of paired speech and text. Thus, during fine-tuning, the speech module alone can produce representations carrying both acoustic information and contextual semantic knowledge of an input acoustic signal. Experimental results verify the effectiveness of our approach on various SLU tasks. For example, SPLAT improves the previous state-of-the-art performance on the Spoken SQuAD dataset by more than 10%.",,"SPLAT: Speech-Language Joint Pre-Training for Spoken Language Understanding. Spoken language understanding (SLU) requires a model to analyze input acoustic signal to understand its linguistic content and make predictions. To boost the models' performance, various pre-training methods have been proposed to learn rich representations from large-scale unannotated speech and text. However, the inherent disparities between the two modalities necessitate a mutual analysis. In this paper, we propose a novel semi-supervised learning framework, SPLAT, to jointly pre-train the speech and language modules. Besides conducting a self-supervised masked language modeling task on the two individual modules using unpaired speech and text, SPLAT aligns representations from the two modules in a shared latent space using a small amount of paired speech and text. Thus, during fine-tuning, the speech module alone can produce representations carrying both acoustic information and contextual semantic knowledge of an input acoustic signal. Experimental results verify the effectiveness of our approach on various SLU tasks. For example, SPLAT improves the previous state-of-the-art performance on the Spoken SQuAD dataset by more than 10%.",2021
ruseti-etal-2016-using,https://aclanthology.org/W16-1623,0,,,,,,,"Using Embedding Masks for Word Categorization. Word embeddings are widely used nowadays for many NLP tasks. They reduce the dimensionality of the vocabulary space, but most importantly they should capture (part of) the meaning of words. The new vector space used by the embeddings allows computation of semantic distances between words, while some word embeddings also permit simple vector operations (e.g. summation, difference) resembling analogical reasoning. This paper proposes a new operation on word embeddings aimed to capturing categorical information by first learning and then applying an embedding mask for each analyzed category. Thus, we conducted a series of experiments related to categorization of words based on their embeddings. Several classical approaches were compared together with the one introduced in the paper which uses different embedding masks learnt for each category.",Using Embedding Masks for Word Categorization,"Word embeddings are widely used nowadays for many NLP tasks. They reduce the dimensionality of the vocabulary space, but most importantly they should capture (part of) the meaning of words. The new vector space used by the embeddings allows computation of semantic distances between words, while some word embeddings also permit simple vector operations (e.g. summation, difference) resembling analogical reasoning. This paper proposes a new operation on word embeddings aimed to capturing categorical information by first learning and then applying an embedding mask for each analyzed category. Thus, we conducted a series of experiments related to categorization of words based on their embeddings. Several classical approaches were compared together with the one introduced in the paper which uses different embedding masks learnt for each category.",Using Embedding Masks for Word Categorization,"Word embeddings are widely used nowadays for many NLP tasks. They reduce the dimensionality of the vocabulary space, but most importantly they should capture (part of) the meaning of words. The new vector space used by the embeddings allows computation of semantic distances between words, while some word embeddings also permit simple vector operations (e.g. summation, difference) resembling analogical reasoning. This paper proposes a new operation on word embeddings aimed to capturing categorical information by first learning and then applying an embedding mask for each analyzed category. Thus, we conducted a series of experiments related to categorization of words based on their embeddings. Several classical approaches were compared together with the one introduced in the paper which uses different embedding masks learnt for each category.",,"Using Embedding Masks for Word Categorization. Word embeddings are widely used nowadays for many NLP tasks. They reduce the dimensionality of the vocabulary space, but most importantly they should capture (part of) the meaning of words. The new vector space used by the embeddings allows computation of semantic distances between words, while some word embeddings also permit simple vector operations (e.g. summation, difference) resembling analogical reasoning. This paper proposes a new operation on word embeddings aimed to capturing categorical information by first learning and then applying an embedding mask for each analyzed category. Thus, we conducted a series of experiments related to categorization of words based on their embeddings. 
Several classical approaches were compared together with the one introduced in the paper which uses different embedding masks learnt for each category.",2016
ma-etal-2010-multimodal,https://aclanthology.org/W10-1308,0,,,,,,,"A Multimodal Vocabulary for Augmentative and Alternative Communication from Sound/Image Label Datasets. Existing Augmentative and Alternative Communication vocabularies assign multimodal stimuli to words with multiple meanings. The ambiguity hampers the vocabulary effectiveness when used by people with language disabilities. For example, the noun ""a missing letter"" may refer to a character or a written message, and each corresponds to a different picture. A vocabulary with images and sounds unambiguously linked to words can better eliminate misunderstanding and assist communication for people with language disorders. We explore a new approach of creating such a vocabulary via automatically assigning semantically unambiguous groups of synonyms to sound and image labels. We propose an unsupervised word sense disambiguation (WSD) voting algorithm, which combines different semantic relatedness measures. Our voting algorithm achieved over 80% accuracy with a sound label dataset, which significantly outperforms WSD with individual measures. We also explore the use of human judgments of evocation between members of concept pairs, in the label disambiguation task. Results show that evocation achieves similar performance to most of the existing relatedness measures.",A Multimodal Vocabulary for Augmentative and Alternative Communication from Sound/Image Label Datasets,"Existing Augmentative and Alternative Communication vocabularies assign multimodal stimuli to words with multiple meanings. The ambiguity hampers the vocabulary effectiveness when used by people with language disabilities. For example, the noun ""a missing letter"" may refer to a character or a written message, and each corresponds to a different picture. A vocabulary with images and sounds unambiguously linked to words can better eliminate misunderstanding and assist communication for people with language disorders. We explore a new approach of creating such a vocabulary via automatically assigning semantically unambiguous groups of synonyms to sound and image labels. We propose an unsupervised word sense disambiguation (WSD) voting algorithm, which combines different semantic relatedness measures. Our voting algorithm achieved over 80% accuracy with a sound label dataset, which significantly outperforms WSD with individual measures. We also explore the use of human judgments of evocation between members of concept pairs, in the label disambiguation task. Results show that evocation achieves similar performance to most of the existing relatedness measures.",A Multimodal Vocabulary for Augmentative and Alternative Communication from Sound/Image Label Datasets,"Existing Augmentative and Alternative Communication vocabularies assign multimodal stimuli to words with multiple meanings. The ambiguity hampers the vocabulary effectiveness when used by people with language disabilities. For example, the noun ""a missing letter"" may refer to a character or a written message, and each corresponds to a different picture. A vocabulary with images and sounds unambiguously linked to words can better eliminate misunderstanding and assist communication for people with language disorders. We explore a new approach of creating such a vocabulary via automatically assigning semantically unambiguous groups of synonyms to sound and image labels. We propose an unsupervised word sense disambiguation (WSD) voting algorithm, which combines different semantic relatedness measures. 
Our voting algorithm achieved over 80% accuracy with a sound label dataset, which significantly outperforms WSD with individual measures. We also explore the use of human judgments of evocation between members of concept pairs, in the label disambiguation task. Results show that evocation achieves similar performance to most of the existing relatedness measures.",We thank the Kimberley and Frank H. Moss '71 Princeton SEAS Research Fund for supporting our project.,"A Multimodal Vocabulary for Augmentative and Alternative Communication from Sound/Image Label Datasets. Existing Augmentative and Alternative Communication vocabularies assign multimodal stimuli to words with multiple meanings. The ambiguity hampers the vocabulary effectiveness when used by people with language disabilities. For example, the noun ""a missing letter"" may refer to a character or a written message, and each corresponds to a different picture. A vocabulary with images and sounds unambiguously linked to words can better eliminate misunderstanding and assist communication for people with language disorders. We explore a new approach of creating such a vocabulary via automatically assigning semantically unambiguous groups of synonyms to sound and image labels. We propose an unsupervised word sense disambiguation (WSD) voting algorithm, which combines different semantic relatedness measures. Our voting algorithm achieved over 80% accuracy with a sound label dataset, which significantly outperforms WSD with individual measures. We also explore the use of human judgments of evocation between members of concept pairs, in the label disambiguation task. Results show that evocation achieves similar performance to most of the existing relatedness measures.",2010
selfridge-etal-2012-integrating,https://aclanthology.org/W12-1638,0,,,,,,,Integrating Incremental Speech Recognition and POMDP-Based Dialogue Systems. The goal of this paper is to present a first step toward integrating Incremental Speech Recognition (ISR) and Partially-Observable Markov Decision Process (POMDP) based dialogue systems. The former provides support for advanced turn-taking behavior while the other increases the semantic accuracy of speech recognition results. We present an Incremental Interaction Manager that supports the use of ISR with strictly turn-based dialogue managers. We then show that using a POMDP-based dialogue manager with ISR substantially improves the semantic accuracy of the incremental results.,Integrating Incremental Speech Recognition and {POMDP}-Based Dialogue Systems,The goal of this paper is to present a first step toward integrating Incremental Speech Recognition (ISR) and Partially-Observable Markov Decision Process (POMDP) based dialogue systems. The former provides support for advanced turn-taking behavior while the other increases the semantic accuracy of speech recognition results. We present an Incremental Interaction Manager that supports the use of ISR with strictly turn-based dialogue managers. We then show that using a POMDP-based dialogue manager with ISR substantially improves the semantic accuracy of the incremental results.,Integrating Incremental Speech Recognition and POMDP-Based Dialogue Systems,The goal of this paper is to present a first step toward integrating Incremental Speech Recognition (ISR) and Partially-Observable Markov Decision Process (POMDP) based dialogue systems. The former provides support for advanced turn-taking behavior while the other increases the semantic accuracy of speech recognition results. We present an Incremental Interaction Manager that supports the use of ISR with strictly turn-based dialogue managers. We then show that using a POMDP-based dialogue manager with ISR substantially improves the semantic accuracy of the incremental results.,"Thanks to Vincent Goffin for help with this work, and to the anonymous reviewers for their comments and critique. We acknowledge funding from the NSF under grant IIS-0713698.",Integrating Incremental Speech Recognition and POMDP-Based Dialogue Systems. The goal of this paper is to present a first step toward integrating Incremental Speech Recognition (ISR) and Partially-Observable Markov Decision Process (POMDP) based dialogue systems. The former provides support for advanced turn-taking behavior while the other increases the semantic accuracy of speech recognition results. We present an Incremental Interaction Manager that supports the use of ISR with strictly turn-based dialogue managers. We then show that using a POMDP-based dialogue manager with ISR substantially improves the semantic accuracy of the incremental results.,2012
ravenscroft-etal-2018-harrigt,https://aclanthology.org/P18-4004,1,,,,industry_innovation_infrastructure,,,"HarriGT: A Tool for Linking News to Science. Being able to reliably link scientific works to the newspaper articles that discuss them could provide a breakthrough in the way we rationalise and measure the impact of science on our society. Linking these articles is challenging because the language used in the two domains is very different, and the gathering of online resources to align the two is a substantial information retrieval endeavour. We present HarriGT, a semi-automated tool for building corpora of news articles linked to the scientific papers that they discuss. Our aim is to facilitate future development of information-retrieval tools for newspaper/scientific work citation linking. HarriGT retrieves newspaper articles from an archive containing 17 years of UK web content. It also integrates with 3 large external citation networks, leveraging named entity extraction, and document classification to surface relevant examples of scientific literature to the user. We also provide a tuned candidate ranking algorithm to highlight potential links between scientific papers and newspaper articles to the user, in order of likelihood. HarriGT is provided as an open source tool (http://harrigt.xyz).",{H}arri{GT}: A Tool for Linking News to Science,"Being able to reliably link scientific works to the newspaper articles that discuss them could provide a breakthrough in the way we rationalise and measure the impact of science on our society. Linking these articles is challenging because the language used in the two domains is very different, and the gathering of online resources to align the two is a substantial information retrieval endeavour. We present HarriGT, a semi-automated tool for building corpora of news articles linked to the scientific papers that they discuss. Our aim is to facilitate future development of information-retrieval tools for newspaper/scientific work citation linking. HarriGT retrieves newspaper articles from an archive containing 17 years of UK web content. It also integrates with 3 large external citation networks, leveraging named entity extraction, and document classification to surface relevant examples of scientific literature to the user. We also provide a tuned candidate ranking algorithm to highlight potential links between scientific papers and newspaper articles to the user, in order of likelihood. HarriGT is provided as an open source tool (http://harrigt.xyz).",HarriGT: A Tool for Linking News to Science,"Being able to reliably link scientific works to the newspaper articles that discuss them could provide a breakthrough in the way we rationalise and measure the impact of science on our society. Linking these articles is challenging because the language used in the two domains is very different, and the gathering of online resources to align the two is a substantial information retrieval endeavour. We present HarriGT, a semi-automated tool for building corpora of news articles linked to the scientific papers that they discuss. Our aim is to facilitate future development of information-retrieval tools for newspaper/scientific work citation linking. HarriGT retrieves newspaper articles from an archive containing 17 years of UK web content. It also integrates with 3 large external citation networks, leveraging named entity extraction, and document classification to surface relevant examples of scientific literature to the user. 
We also provide a tuned candidate ranking algorithm to highlight potential links between scientific papers and newspaper articles to the user, in order of likelihood. HarriGT is provided as an open source tool (http://harrigt.xyz).","We thank the EPSRC (grant EP/L016400/1) for funding us through the University of Warwick's CDT in Urban Science, the Alan Turing Institute and British Library for providing resources.","HarriGT: A Tool for Linking News to Science. Being able to reliably link scientific works to the newspaper articles that discuss them could provide a breakthrough in the way we rationalise and measure the impact of science on our society. Linking these articles is challenging because the language used in the two domains is very different, and the gathering of online resources to align the two is a substantial information retrieval endeavour. We present HarriGT, a semi-automated tool for building corpora of news articles linked to the scientific papers that they discuss. Our aim is to facilitate future development of information-retrieval tools for newspaper/scientific work citation linking. HarriGT retrieves newspaper articles from an archive containing 17 years of UK web content. It also integrates with 3 large external citation networks, leveraging named entity extraction, and document classification to surface relevant examples of scientific literature to the user. We also provide a tuned candidate ranking algorithm to highlight potential links between scientific papers and newspaper articles to the user, in order of likelihood. HarriGT is provided as an open source tool (http://harrigt.xyz).",2018
nayek-etal-2015-catalog,https://aclanthology.org/W15-5206,0,,,,,,,CATaLog: New Approaches to TM and Post Editing Interfaces. This paper explores a new TM-based CAT tool entitled CATaLog. New features have been integrated into the tool which aim to improve post-editing both in terms of performance and productivity. One of the new features of CATaLog is a color coding scheme that is based on the similarity between a particular input sentence and the segments retrieved from the TM. This color coding scheme will help translators to identify which part of the sentence is most likely to require post-editing thus demanding minimal effort and increasing productivity. We demonstrate the tool's functionalities using an English-Bengali dataset.,{CAT}a{L}og: New Approaches to {TM} and Post Editing Interfaces,This paper explores a new TM-based CAT tool entitled CATaLog. New features have been integrated into the tool which aim to improve post-editing both in terms of performance and productivity. One of the new features of CATaLog is a color coding scheme that is based on the similarity between a particular input sentence and the segments retrieved from the TM. This color coding scheme will help translators to identify which part of the sentence is most likely to require post-editing thus demanding minimal effort and increasing productivity. We demonstrate the tool's functionalities using an English-Bengali dataset.,CATaLog: New Approaches to TM and Post Editing Interfaces,This paper explores a new TM-based CAT tool entitled CATaLog. New features have been integrated into the tool which aim to improve post-editing both in terms of performance and productivity. One of the new features of CATaLog is a color coding scheme that is based on the similarity between a particular input sentence and the segments retrieved from the TM. This color coding scheme will help translators to identify which part of the sentence is most likely to require post-editing thus demanding minimal effort and increasing productivity. We demonstrate the tool's functionalities using an English-Bengali dataset.,"We would like to thank the anonymous NLP4TM reviewers who provided us valuable feedback to improve this paper as well as new ideas for future work.English to Indian language Machine Translation (EILMT) is a project funded by the Department of Information and Technology (DIT), Government of India.Santanu Pal is supported by the People Programme (Marie Curie Actions) of the European Union's Framework Programme (FP7/2007-2013) under REA grant agreement no 317471.",CATaLog: New Approaches to TM and Post Editing Interfaces. This paper explores a new TM-based CAT tool entitled CATaLog. New features have been integrated into the tool which aim to improve post-editing both in terms of performance and productivity. One of the new features of CATaLog is a color coding scheme that is based on the similarity between a particular input sentence and the segments retrieved from the TM. This color coding scheme will help translators to identify which part of the sentence is most likely to require post-editing thus demanding minimal effort and increasing productivity. We demonstrate the tool's functionalities using an English-Bengali dataset.,2015
arnold-etal-2016-tasty,https://aclanthology.org/C16-2024,0,,,,,,,"TASTY: Interactive Entity Linking As-You-Type. We introduce TASTY (Tag-as-you-type), a novel text editor for interactive entity linking as part of the writing process. Tasty supports the author of a text with complementary information about the mentioned entities shown in a 'live' exploration view. The system is automatically triggered by keystrokes, recognizes mention boundaries and disambiguates the mentioned entities to Wikipedia articles. The author can use seven operators to interact with the editor and refine the results according to his specific intention while writing. Our implementation captures syntactic and semantic context using a robust end-to-end LSTM sequence learner and word embeddings. We demonstrate the applicability of our system in English and German language for encyclopedic or medical text. Tasty is currently being tested in interactive applications for text production, such as scientific research, news editorial, medical anamnesis, help desks and product reviews.",{TASTY}: Interactive Entity Linking As-You-Type,"We introduce TASTY (Tag-as-you-type), a novel text editor for interactive entity linking as part of the writing process. Tasty supports the author of a text with complementary information about the mentioned entities shown in a 'live' exploration view. The system is automatically triggered by keystrokes, recognizes mention boundaries and disambiguates the mentioned entities to Wikipedia articles. The author can use seven operators to interact with the editor and refine the results according to his specific intention while writing. Our implementation captures syntactic and semantic context using a robust end-to-end LSTM sequence learner and word embeddings. We demonstrate the applicability of our system in English and German language for encyclopedic or medical text. Tasty is currently being tested in interactive applications for text production, such as scientific research, news editorial, medical anamnesis, help desks and product reviews.",TASTY: Interactive Entity Linking As-You-Type,"We introduce TASTY (Tag-as-you-type), a novel text editor for interactive entity linking as part of the writing process. Tasty supports the author of a text with complementary information about the mentioned entities shown in a 'live' exploration view. The system is automatically triggered by keystrokes, recognizes mention boundaries and disambiguates the mentioned entities to Wikipedia articles. The author can use seven operators to interact with the editor and refine the results according to his specific intention while writing. Our implementation captures syntactic and semantic context using a robust end-to-end LSTM sequence learner and word embeddings. We demonstrate the applicability of our system in English and German language for encyclopedic or medical text. Tasty is currently being tested in interactive applications for text production, such as scientific research, news editorial, medical anamnesis, help desks and product reviews.",Acknowledgements Our work is funded by the Federal Ministry of Economic Affairs and Energy (BMWi) under grant agreement 01MD15010B (Project: Smart Data Web).,"TASTY: Interactive Entity Linking As-You-Type. We introduce TASTY (Tag-as-you-type), a novel text editor for interactive entity linking as part of the writing process. Tasty supports the author of a text with complementary information about the mentioned entities shown in a 'live' exploration view. 
The system is automatically triggered by keystrokes, recognizes mention boundaries and disambiguates the mentioned entities to Wikipedia articles. The author can use seven operators to interact with the editor and refine the results according to his specific intention while writing. Our implementation captures syntactic and semantic context using a robust end-to-end LSTM sequence learner and word embeddings. We demonstrate the applicability of our system in English and German language for encyclopedic or medical text. Tasty is currently being tested in interactive applications for text production, such as scientific research, news editorial, medical anamnesis, help desks and product reviews.",2016
kim-sohn-2020-positive,https://aclanthology.org/2020.coling-main.191,0,,,,,,,"How Positive Are You: Text Style Transfer using Adaptive Style Embedding. The prevalent approach for unsupervised text style transfer is disentanglement between content and style. However, it is difficult to completely separate style information from the content. Other approaches allow the latent text representation to contain style and the target style to affect the generated output more than the latent representation does. In both approaches, however, it is impossible to adjust the strength of the style in the generated output. Moreover, those previous approaches typically perform both the sentence reconstruction and style control tasks in a single model, which complicates the overall architecture. In this paper, we address these issues by separating the model into a sentence reconstruction module and a style module. We use the Transformer-based autoencoder model for sentence reconstruction and the adaptive style embedding is learned directly in the style module. Because of this separation, each module can better focus on its own task. Moreover, we can vary the style strength of the generated sentence by changing the style of the embedding expression. Therefore, our approach not only controls the strength of the style, but also simplifies the model architecture. Experimental results show that our approach achieves better style transfer performance and content preservation than previous approaches. 1",How Positive Are You: Text Style Transfer using Adaptive Style Embedding,"The prevalent approach for unsupervised text style transfer is disentanglement between content and style. However, it is difficult to completely separate style information from the content. Other approaches allow the latent text representation to contain style and the target style to affect the generated output more than the latent representation does. In both approaches, however, it is impossible to adjust the strength of the style in the generated output. Moreover, those previous approaches typically perform both the sentence reconstruction and style control tasks in a single model, which complicates the overall architecture. In this paper, we address these issues by separating the model into a sentence reconstruction module and a style module. We use the Transformer-based autoencoder model for sentence reconstruction and the adaptive style embedding is learned directly in the style module. Because of this separation, each module can better focus on its own task. Moreover, we can vary the style strength of the generated sentence by changing the style of the embedding expression. Therefore, our approach not only controls the strength of the style, but also simplifies the model architecture. Experimental results show that our approach achieves better style transfer performance and content preservation than previous approaches. 1",How Positive Are You: Text Style Transfer using Adaptive Style Embedding,"The prevalent approach for unsupervised text style transfer is disentanglement between content and style. However, it is difficult to completely separate style information from the content. Other approaches allow the latent text representation to contain style and the target style to affect the generated output more than the latent representation does. In both approaches, however, it is impossible to adjust the strength of the style in the generated output. 
Moreover, those previous approaches typically perform both the sentence reconstruction and style control tasks in a single model, which complicates the overall architecture. In this paper, we address these issues by separating the model into a sentence reconstruction module and a style module. We use the Transformer-based autoencoder model for sentence reconstruction and the adaptive style embedding is learned directly in the style module. Because of this separation, each module can better focus on its own task. Moreover, we can vary the style strength of the generated sentence by changing the style of the embedding expression. Therefore, our approach not only controls the strength of the style, but also simplifies the model architecture. Experimental results show that our approach achieves better style transfer performance and content preservation than previous approaches. 1",This research was supported by the National Research Foundation of Korea grant funded by the Korea government (MSIT) (No. NRF-2019R1A2C1006608).,"How Positive Are You: Text Style Transfer using Adaptive Style Embedding. The prevalent approach for unsupervised text style transfer is disentanglement between content and style. However, it is difficult to completely separate style information from the content. Other approaches allow the latent text representation to contain style and the target style to affect the generated output more than the latent representation does. In both approaches, however, it is impossible to adjust the strength of the style in the generated output. Moreover, those previous approaches typically perform both the sentence reconstruction and style control tasks in a single model, which complicates the overall architecture. In this paper, we address these issues by separating the model into a sentence reconstruction module and a style module. We use the Transformer-based autoencoder model for sentence reconstruction and the adaptive style embedding is learned directly in the style module. Because of this separation, each module can better focus on its own task. Moreover, we can vary the style strength of the generated sentence by changing the style of the embedding expression. Therefore, our approach not only controls the strength of the style, but also simplifies the model architecture. Experimental results show that our approach achieves better style transfer performance and content preservation than previous approaches. 1",2020
dagan-etal-2021-co,https://aclanthology.org/2021.eacl-main.260,0,,,,,,,"Co-evolution of language and agents in referential games. Referential games offer a grounded learning environment for neural agents which accounts for the fact that language is functionally used to communicate. However, they do not take into account a second constraint considered to be fundamental for the shape of human language: that it must be learnable by new language learners. Cogswell et al. (2019) introduced cultural transmission within referential games through a changing population of agents to constrain the emerging language to be learnable. However, the resulting languages remain inherently biased by the agents' underlying capabilities. In this work, we introduce Language Transmission Simulator to model both cultural and architectural evolution in a population of agents. As our core contribution, we empirically show that the optimal situation is to take into account also the learning biases of the language learners and thus let language and agents coevolve. When we allow the agent population to evolve through architectural evolution, we achieve across the board improvements on all considered metrics and surpass the gains made with cultural transmission. These results stress the importance of studying the underlying agent architecture and pave the way to investigate the co-evolution of language and agent in language emergence studies.",Co-evolution of language and agents in referential games,"Referential games offer a grounded learning environment for neural agents which accounts for the fact that language is functionally used to communicate. However, they do not take into account a second constraint considered to be fundamental for the shape of human language: that it must be learnable by new language learners. Cogswell et al. (2019) introduced cultural transmission within referential games through a changing population of agents to constrain the emerging language to be learnable. However, the resulting languages remain inherently biased by the agents' underlying capabilities. In this work, we introduce Language Transmission Simulator to model both cultural and architectural evolution in a population of agents. As our core contribution, we empirically show that the optimal situation is to take into account also the learning biases of the language learners and thus let language and agents coevolve. When we allow the agent population to evolve through architectural evolution, we achieve across the board improvements on all considered metrics and surpass the gains made with cultural transmission. These results stress the importance of studying the underlying agent architecture and pave the way to investigate the co-evolution of language and agent in language emergence studies.",Co-evolution of language and agents in referential games,"Referential games offer a grounded learning environment for neural agents which accounts for the fact that language is functionally used to communicate. However, they do not take into account a second constraint considered to be fundamental for the shape of human language: that it must be learnable by new language learners. Cogswell et al. (2019) introduced cultural transmission within referential games through a changing population of agents to constrain the emerging language to be learnable. However, the resulting languages remain inherently biased by the agents' underlying capabilities. 
In this work, we introduce Language Transmission Simulator to model both cultural and architectural evolution in a population of agents. As our core contribution, we empirically show that the optimal situation is to take into account also the learning biases of the language learners and thus let language and agents coevolve. When we allow the agent population to evolve through architectural evolution, we achieve across the board improvements on all considered metrics and surpass the gains made with cultural transmission. These results stress the importance of studying the underlying agent architecture and pave the way to investigate the co-evolution of language and agent in language emergence studies.",We would like to thank Angeliki Lazaridou for her helpful discussions and feedback on previous iterations of this work.,"Co-evolution of language and agents in referential games. Referential games offer a grounded learning environment for neural agents which accounts for the fact that language is functionally used to communicate. However, they do not take into account a second constraint considered to be fundamental for the shape of human language: that it must be learnable by new language learners. Cogswell et al. (2019) introduced cultural transmission within referential games through a changing population of agents to constrain the emerging language to be learnable. However, the resulting languages remain inherently biased by the agents' underlying capabilities. In this work, we introduce Language Transmission Simulator to model both cultural and architectural evolution in a population of agents. As our core contribution, we empirically show that the optimal situation is to take into account also the learning biases of the language learners and thus let language and agents coevolve. When we allow the agent population to evolve through architectural evolution, we achieve across the board improvements on all considered metrics and surpass the gains made with cultural transmission. These results stress the importance of studying the underlying agent architecture and pave the way to investigate the co-evolution of language and agent in language emergence studies.",2021
schneider-1987-metal,https://aclanthology.org/1987.mtsummit-1.7,0,,,,,,,"The METAL System. Status 1987. 1. History 2. Hardware 3. Grammar 4. Lexicon 5. Development Tools 6. Current Applications and Quality 7. Research, Future Applications
In the late seventies, when there was a noticeable shortage of qualified technical translators versus the volume of required in-house translations, Siemens began to look for an operative machine translation system. It was intended to increase the productivity of the translators available, and to reduce the time required for the translation process. This is extremely critical if voluminous product documentation needs to be delivered on time.",The {METAL} System. Status 1987,"1. History 2. Hardware 3. Grammar 4. Lexicon 5. Development Tools 6. Current Applications and Quality 7. Research, Future Applications
In the late seventies, when there was a noticeable shortage of qualified technical translators versus the volume of required in-house translations, Siemens began to look for an operative machine translation system. It was intended to increase the productivity of the translators available, and to reduce the time required for the translation process. This is extremely critical if voluminous product documentation needs to be delivered on time.",The METAL System. Status 1987,"1. History 2. Hardware 3. Grammar 4. Lexicon 5. Development Tools 6. Current Applications and Quality 7. Research, Future Applications
In the late seventies, when there was a noticeable shortage of qualified technical translators versus the volume of required in-house translations, Siemens began to look for an operative machine translation system. It was intended to increase the productivity of the translators available, and to reduce the time required for the translation process. This is extremely critical if voluminous product documentation needs to be delivered on time.",,"The METAL System. Status 1987. 1. History 2. Hardware 3. Grammar 4. Lexicon 5. Development Tools 6. Current Applications and Quality 7. Research, Future Applications
In the late seventies, when there was a noticeable shortage of qualified technical translators versus the volume of required in-house translations, Siemens began to look for an operative machine translation system. It was intended to increase the productivity of the translators available, and to reduce the time required for the translation process. This is extremely critical if voluminous product documentation needs to be delivered on time.",1987
kuzman-etal-2019-neural,https://aclanthology.org/W19-7301,0,,,,,,,"Neural Machine Translation of Literary Texts from English to Slovene. Neural Machine Translation has shown promising performance in literary texts. Since literary machine translation has not yet been researched for the English-to-Slovene translation direction, this paper aims to fulfill this gap by presenting a comparison among bespoke NMT models, tailored to novels, and Google Neural Machine Translation. The translation models were evaluated by the BLEU and METEOR metrics, assessment of fluency and adequacy, and measurement of the postediting effort. The findings show that all evaluated approaches resulted in an increase in translation productivity. The translation model tailored to a specific author outperformed the model trained on a more diverse literary corpus, based on all metrics except the scores for fluency. However, the translation model by Google still outperforms all bespoke models. The evaluation reveals a very low inter-rater agreement on fluency and adequacy, based on the kappa coefficient values, and significant discrepancies between posteditors. This suggests that these methods might not be reliable, which should be addressed in future studies. Recent years have seen the advent of Neural Machine Translation (NMT), which has shown promising performance in literary texts",Neural Machine Translation of Literary Texts from {E}nglish to {S}lovene,"Neural Machine Translation has shown promising performance in literary texts. Since literary machine translation has not yet been researched for the English-to-Slovene translation direction, this paper aims to fulfill this gap by presenting a comparison among bespoke NMT models, tailored to novels, and Google Neural Machine Translation. The translation models were evaluated by the BLEU and METEOR metrics, assessment of fluency and adequacy, and measurement of the postediting effort. The findings show that all evaluated approaches resulted in an increase in translation productivity. The translation model tailored to a specific author outperformed the model trained on a more diverse literary corpus, based on all metrics except the scores for fluency. However, the translation model by Google still outperforms all bespoke models. The evaluation reveals a very low inter-rater agreement on fluency and adequacy, based on the kappa coefficient values, and significant discrepancies between posteditors. This suggests that these methods might not be reliable, which should be addressed in future studies. Recent years have seen the advent of Neural Machine Translation (NMT), which has shown promising performance in literary texts",Neural Machine Translation of Literary Texts from English to Slovene,"Neural Machine Translation has shown promising performance in literary texts. Since literary machine translation has not yet been researched for the English-to-Slovene translation direction, this paper aims to fulfill this gap by presenting a comparison among bespoke NMT models, tailored to novels, and Google Neural Machine Translation. The translation models were evaluated by the BLEU and METEOR metrics, assessment of fluency and adequacy, and measurement of the postediting effort. The findings show that all evaluated approaches resulted in an increase in translation productivity. The translation model tailored to a specific author outperformed the model trained on a more diverse literary corpus, based on all metrics except the scores for fluency. 
However, the translation model by Google still outperforms all bespoke models. The evaluation reveals a very low inter-rater agreement on fluency and adequacy, based on the kappa coefficient values, and significant discrepancies between posteditors. This suggests that these methods might not be reliable, which should be addressed in future studies. Recent years have seen the advent of Neural Machine Translation (NMT), which has shown promising performance in literary texts","This publication has emanated from research supported in part by a research grant from Science Foundation Ireland (SFI) under Grant Number SFI/12/RC/2289 (Insight), co-funded by the European Regional Development Fund.","Neural Machine Translation of Literary Texts from English to Slovene. Neural Machine Translation has shown promising performance in literary texts. Since literary machine translation has not yet been researched for the English-to-Slovene translation direction, this paper aims to fulfill this gap by presenting a comparison among bespoke NMT models, tailored to novels, and Google Neural Machine Translation. The translation models were evaluated by the BLEU and METEOR metrics, assessment of fluency and adequacy, and measurement of the postediting effort. The findings show that all evaluated approaches resulted in an increase in translation productivity. The translation model tailored to a specific author outperformed the model trained on a more diverse literary corpus, based on all metrics except the scores for fluency. However, the translation model by Google still outperforms all bespoke models. The evaluation reveals a very low inter-rater agreement on fluency and adequacy, based on the kappa coefficient values, and significant discrepancies between posteditors. This suggests that these methods might not be reliable, which should be addressed in future studies. Recent years have seen the advent of Neural Machine Translation (NMT), which has shown promising performance in literary texts",2019
vasconcellos-1989-place,https://aclanthology.org/1989.mtsummit-1.9,0,,,,,,,"The place of MT in an in-house translation service. At the Pan American Health Organization (PAHO), MT service is approaching its tenth anniversary. A special combination of characteristics have placed this operation in a class by itself. One of these characteristics is that the MT software (SPANAM and ENGSPAN and supporting programs) has been developed in-house by an international organization. PAHO was motivated by the dual need to: (1) meet the translation needs of its secretariat, and (2) disseminate information in its member countries. Thus MT at PAHO was conceived from the start as a public service.",The place of {MT} in an in-house translation service,"At the Pan American Health Organization (PAHO), MT service is approaching its tenth anniversary. A special combination of characteristics have placed this operation in a class by itself. One of these characteristics is that the MT software (SPANAM and ENGSPAN and supporting programs) has been developed in-house by an international organization. PAHO was motivated by the dual need to: (1) meet the translation needs of its secretariat, and (2) disseminate information in its member countries. Thus MT at PAHO was conceived from the start as a public service.",The place of MT in an in-house translation service,"At the Pan American Health Organization (PAHO), MT service is approaching its tenth anniversary. A special combination of characteristics have placed this operation in a class by itself. One of these characteristics is that the MT software (SPANAM and ENGSPAN and supporting programs) has been developed in-house by an international organization. PAHO was motivated by the dual need to: (1) meet the translation needs of its secretariat, and (2) disseminate information in its member countries. Thus MT at PAHO was conceived from the start as a public service.",,"The place of MT in an in-house translation service. At the Pan American Health Organization (PAHO), MT service is approaching its tenth anniversary. A special combination of characteristics have placed this operation in a class by itself. One of these characteristics is that the MT software (SPANAM and ENGSPAN and supporting programs) has been developed in-house by an international organization. PAHO was motivated by the dual need to: (1) meet the translation needs of its secretariat, and (2) disseminate information in its member countries. Thus MT at PAHO was conceived from the start as a public service.",1989
vanzo-etal-2014-context,https://aclanthology.org/C14-1221,0,,,,,,,"A context-based model for Sentiment Analysis in Twitter. Most of the recent literature on Sentiment Analysis over Twitter is tied to the idea that the sentiment is a function of an incoming tweet. However, tweets are filtered through streams of posts, so that a wider context, e.g. a topic, is always available. In this work, the contribution of this contextual information is investigated. We modeled the polarity detection problem as a sequential classification task over streams of tweets. A Markovian formulation of the Support Vector Machine discriminative model as embodied by the SVM hmm algorithm has been here employed to assign the sentiment polarity to entire sequences. The experimental evaluation proves that sequential tagging effectively embodies evidence about the contexts and is able to reach a relative increment in detection accuracy of around 20% in F1 measure. These results are particularly interesting as the approach is flexible and does not require manually coded resources. ColMustard : Amazing match yesterday!!#Bayern vs. #Freiburg 4-0 #easyvictory SergGray : @ColMustard Surely, but #Freiburg wasted lot of chances to score.. wrong substitutions by #Guardiola during the 2nd half!! ColMustard : @SergGray Yes, I totally agree with you about the substitutions! #Bayern #Freiburg This work is licenced under a Creative Commons Attribution 4.0 International License.",A context-based model for Sentiment Analysis in {T}witter,"Most of the recent literature on Sentiment Analysis over Twitter is tied to the idea that the sentiment is a function of an incoming tweet. However, tweets are filtered through streams of posts, so that a wider context, e.g. a topic, is always available. In this work, the contribution of this contextual information is investigated. We modeled the polarity detection problem as a sequential classification task over streams of tweets. A Markovian formulation of the Support Vector Machine discriminative model as embodied by the SVM hmm algorithm has been here employed to assign the sentiment polarity to entire sequences. The experimental evaluation proves that sequential tagging effectively embodies evidence about the contexts and is able to reach a relative increment in detection accuracy of around 20% in F1 measure. These results are particularly interesting as the approach is flexible and does not require manually coded resources. ColMustard : Amazing match yesterday!!#Bayern vs. #Freiburg 4-0 #easyvictory SergGray : @ColMustard Surely, but #Freiburg wasted lot of chances to score.. wrong substitutions by #Guardiola during the 2nd half!! ColMustard : @SergGray Yes, I totally agree with you about the substitutions! #Bayern #Freiburg This work is licenced under a Creative Commons Attribution 4.0 International License.",A context-based model for Sentiment Analysis in Twitter,"Most of the recent literature on Sentiment Analysis over Twitter is tied to the idea that the sentiment is a function of an incoming tweet. However, tweets are filtered through streams of posts, so that a wider context, e.g. a topic, is always available. In this work, the contribution of this contextual information is investigated. We modeled the polarity detection problem as a sequential classification task over streams of tweets. A Markovian formulation of the Support Vector Machine discriminative model as embodied by the SVM hmm algorithm has been here employed to assign the sentiment polarity to entire sequences. 
The experimental evaluation proves that sequential tagging effectively embodies evidence about the contexts and is able to reach a relative increment in detection accuracy of around 20% in F1 measure. These results are particularly interesting as the approach is flexible and does not require manually coded resources. ColMustard : Amazing match yesterday!!#Bayern vs. #Freiburg 4-0 #easyvictory SergGray : @ColMustard Surely, but #Freiburg wasted lot of chances to score.. wrong substitutions by #Guardiola during the 2nd half!! ColMustard : @SergGray Yes, I totally agree with you about the substitutions! #Bayern #Freiburg This work is licenced under a Creative Commons Attribution 4.0 International License.",,"A context-based model for Sentiment Analysis in Twitter. Most of the recent literature on Sentiment Analysis over Twitter is tied to the idea that the sentiment is a function of an incoming tweet. However, tweets are filtered through streams of posts, so that a wider context, e.g. a topic, is always available. In this work, the contribution of this contextual information is investigated. We modeled the polarity detection problem as a sequential classification task over streams of tweets. A Markovian formulation of the Support Vector Machine discriminative model as embodied by the SVM hmm algorithm has been here employed to assign the sentiment polarity to entire sequences. The experimental evaluation proves that sequential tagging effectively embodies evidence about the contexts and is able to reach a relative increment in detection accuracy of around 20% in F1 measure. These results are particularly interesting as the approach is flexible and does not require manually coded resources. ColMustard : Amazing match yesterday!!#Bayern vs. #Freiburg 4-0 #easyvictory SergGray : @ColMustard Surely, but #Freiburg wasted lot of chances to score.. wrong substitutions by #Guardiola during the 2nd half!! ColMustard : @SergGray Yes, I totally agree with you about the substitutions! #Bayern #Freiburg This work is licenced under a Creative Commons Attribution 4.0 International License.",2014
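The vanzo-etal-2014-context entry above casts tweet polarity detection as sequential classification over a conversation stream. As a rough illustration of the sequential-decoding idea only (not the authors' SVM-hmm system), the sketch below runs Viterbi decoding over per-tweet polarity scores combined with transition scores between adjacent tweets; the labels, scores, and transition probabilities are invented for the example.

```python
# Minimal Viterbi decoding over a stream of tweets: per-tweet emission scores
# (e.g., from any classifier) are combined with transition scores between
# adjacent tweets, so the predicted polarity of each tweet depends on context.
# All numbers below are made up for illustration.

import math

LABELS = ["positive", "negative", "neutral"]

# log P(label_t | label_{t-1}): mild preference for keeping the same polarity.
TRANSITIONS = {
    (prev, cur): math.log(0.5 if prev == cur else 0.25)
    for prev in LABELS for cur in LABELS
}

def viterbi(emission_logprobs):
    """emission_logprobs: list of dicts, one per tweet, label -> log score."""
    best = [{lab: (emission_logprobs[0][lab], [lab]) for lab in LABELS}]
    for scores in emission_logprobs[1:]:
        frame = {}
        for cur in LABELS:
            prev_lab, (prev_score, prev_path) = max(
                best[-1].items(),
                key=lambda kv: kv[1][0] + TRANSITIONS[(kv[0], cur)],
            )
            frame[cur] = (
                prev_score + TRANSITIONS[(prev_lab, cur)] + scores[cur],
                prev_path + [cur],
            )
        best.append(frame)
    return max(best[-1].values(), key=lambda v: v[0])[1]

# Three tweets in one conversation stream; the middle one is ambiguous on its
# own, and the sequential model lets its neighbours pull it towards "positive".
stream = [
    {"positive": -0.2, "negative": -2.5, "neutral": -1.8},
    {"positive": -1.1, "negative": -1.2, "neutral": -1.1},
    {"positive": -0.4, "negative": -2.0, "neutral": -1.5},
]
print(viterbi(stream))  # e.g. ['positive', 'positive', 'positive']
```

With context-independent decoding the middle tweet would be a toss-up; the transition scores let its neighbours resolve it, which is the effect the sequential formulation is after.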
gupta-lehal-2011-punjabi,https://aclanthology.org/W11-3006,0,,,,,,,"Punjabi Language Stemmer for nouns and proper names. This paper concentrates on Punjabi language noun and proper name stemming. The purpose of stemming is to obtain the stem or radix of those words which are not found in the dictionary. If the stemmed word is present in the dictionary, then it is a genuine word; otherwise it may be a proper name or some invalid word. In Punjabi language stemming for nouns and proper names, an attempt is made to obtain the stem or radix of a Punjabi word, and the stem or radix is then checked against a Punjabi noun and proper name dictionary. An in-depth analysis of a Punjabi news corpus was made, and various possible noun suffixes were identified, like ੀ ਆਂ īāṃ, ਿੀਆਂ iāṃ, ੀ ਆਂ ūāṃ, ੀ ੀਂ āṃ, ੀ ਏ īē etc., and various rules for noun and proper name stemming have been generated. The Punjabi language stemmer for nouns and proper names is applied for Punjabi Text Summarization. The efficiency of the Punjabi language noun and proper name stemmer is 87.37%.",{P}unjabi Language Stemmer for nouns and proper names,"This paper concentrates on Punjabi language noun and proper name stemming. The purpose of stemming is to obtain the stem or radix of those words which are not found in the dictionary. If the stemmed word is present in the dictionary, then it is a genuine word; otherwise it may be a proper name or some invalid word. In Punjabi language stemming for nouns and proper names, an attempt is made to obtain the stem or radix of a Punjabi word, and the stem or radix is then checked against a Punjabi noun and proper name dictionary. An in-depth analysis of a Punjabi news corpus was made, and various possible noun suffixes were identified, like ੀ ਆਂ īāṃ, ਿੀਆਂ iāṃ, ੀ ਆਂ ūāṃ, ੀ ੀਂ āṃ, ੀ ਏ īē etc., and various rules for noun and proper name stemming have been generated. The Punjabi language stemmer for nouns and proper names is applied for Punjabi Text Summarization. The efficiency of the Punjabi language noun and proper name stemmer is 87.37%.",Punjabi Language Stemmer for nouns and proper names,"This paper concentrates on Punjabi language noun and proper name stemming. The purpose of stemming is to obtain the stem or radix of those words which are not found in the dictionary. If the stemmed word is present in the dictionary, then it is a genuine word; otherwise it may be a proper name or some invalid word. In Punjabi language stemming for nouns and proper names, an attempt is made to obtain the stem or radix of a Punjabi word, and the stem or radix is then checked against a Punjabi noun and proper name dictionary. An in-depth analysis of a Punjabi news corpus was made, and various possible noun suffixes were identified, like ੀ ਆਂ īāṃ, ਿੀਆਂ iāṃ, ੀ ਆਂ ūāṃ, ੀ ੀਂ āṃ, ੀ ਏ īē etc., and various rules for noun and proper name stemming have been generated. The Punjabi language stemmer for nouns and proper names is applied for Punjabi Text Summarization. The efficiency of the Punjabi language noun and proper name stemmer is 87.37%.",,"Punjabi Language Stemmer for nouns and proper names. This paper concentrates on Punjabi language noun and proper name stemming. The purpose of stemming is to obtain the stem or radix of those words which are not found in the dictionary. If the stemmed word is present in the dictionary, then it is a genuine word; otherwise it may be a proper name or some invalid word. In Punjabi language stemming for nouns and proper names, an attempt is made to obtain the stem or radix of a Punjabi word, and the stem or radix is then checked against a Punjabi noun and proper name dictionary. 
An in-depth analysis of a Punjabi news corpus was made, and various possible noun suffixes were identified, like ੀ ਆਂ īāṃ, ਿੀਆਂ iāṃ, ੀ ਆਂ ūāṃ, ੀ ੀਂ āṃ, ੀ ਏ īē etc., and various rules for noun and proper name stemming have been generated. The Punjabi language stemmer for nouns and proper names is applied for Punjabi Text Summarization. The efficiency of the Punjabi language noun and proper name stemmer is 87.37%.",2011
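The gupta-lehal-2011-punjabi entry above describes a suffix-stripping stemmer backed by noun and proper-name dictionaries. The sketch below shows only that general pattern; the suffix list and the tiny dictionaries are invented romanized placeholders, not the paper's actual Punjabi suffixes, rules, or resources.

```python
# Illustrative suffix-stripping stemmer in the spirit described above:
# strip a known suffix, then check the candidate stem against a noun /
# proper-name dictionary. The suffix list and the tiny dictionaries below are
# invented romanized placeholders, not the paper's actual Punjabi resources.

SUFFIXES = ["iam", "uam", "am", "ie", "e"]      # hypothetical, longest first
NOUN_DICT = {"kursi", "kitab"}                   # hypothetical noun stems
PROPER_NAME_DICT = {"amrit", "harpreet"}         # hypothetical proper names

def stem(word):
    """Return (stem, category) where category is 'noun', 'proper-name',
    or 'unknown' when neither dictionary contains a candidate stem."""
    candidates = [word] + [
        word[: -len(suf)] for suf in SUFFIXES if word.endswith(suf)
    ]
    for cand in candidates:
        if cand in NOUN_DICT:
            return cand, "noun"
        if cand in PROPER_NAME_DICT:
            return cand, "proper-name"
    return word, "unknown"

print(stem("kursiam"))    # ('kursi', 'noun')
print(stem("harpreete"))  # ('harpreet', 'proper-name')
print(stem("xyz"))        # ('xyz', 'unknown')
```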
zhao-kawahara-2018-unified,https://aclanthology.org/W18-5021,0,,,,,,,"A Unified Neural Architecture for Joint Dialog Act Segmentation and Recognition in Spoken Dialog System. In spoken dialog systems (SDSs), dialog act (DA) segmentation and recognition provide essential information for response generation. A majority of previous works assumed ground-truth segmentation of DA units, which is not available from automatic speech recognition (ASR) in SDS. We propose a unified architecture based on neural networks, which consists of a sequence tagger for segmentation and a classifier for recognition. The DA recognition model is based on hierarchical neural networks to incorporate the context of preceding sentences. We investigate sharing some layers of the two components so that they can be trained jointly and learn generalized features from both tasks. An evaluation on the Switchboard Dialog Act (SwDA) corpus shows that the jointly-trained models outperform independently-trained models, single-step models, and other reported results in DA segmentation, recognition, and joint tasks.",A Unified Neural Architecture for Joint Dialog Act Segmentation and Recognition in Spoken Dialog System,"In spoken dialog systems (SDSs), dialog act (DA) segmentation and recognition provide essential information for response generation. A majority of previous works assumed ground-truth segmentation of DA units, which is not available from automatic speech recognition (ASR) in SDS. We propose a unified architecture based on neural networks, which consists of a sequence tagger for segmentation and a classifier for recognition. The DA recognition model is based on hierarchical neural networks to incorporate the context of preceding sentences. We investigate sharing some layers of the two components so that they can be trained jointly and learn generalized features from both tasks. An evaluation on the Switchboard Dialog Act (SwDA) corpus shows that the jointly-trained models outperform independently-trained models, single-step models, and other reported results in DA segmentation, recognition, and joint tasks.",A Unified Neural Architecture for Joint Dialog Act Segmentation and Recognition in Spoken Dialog System,"In spoken dialog systems (SDSs), dialog act (DA) segmentation and recognition provide essential information for response generation. A majority of previous works assumed ground-truth segmentation of DA units, which is not available from automatic speech recognition (ASR) in SDS. We propose a unified architecture based on neural networks, which consists of a sequence tagger for segmentation and a classifier for recognition. The DA recognition model is based on hierarchical neural networks to incorporate the context of preceding sentences. We investigate sharing some layers of the two components so that they can be trained jointly and learn generalized features from both tasks. An evaluation on the Switchboard Dialog Act (SwDA) corpus shows that the jointly-trained models outperform independently-trained models, single-step models, and other reported results in DA segmentation, recognition, and joint tasks.","This work was supported by JST ERATO Ishiguro Symbiotic Human-Robot Interaction program (Grant Number JPMJER1401), Japan.","A Unified Neural Architecture for Joint Dialog Act Segmentation and Recognition in Spoken Dialog System. In spoken dialog systems (SDSs), dialog act (DA) segmentation and recognition provide essential information for response generation. 
A majority of previous works assumed ground-truth segmentation of DA units, which is not available from automatic speech recognition (ASR) in SDS. We propose a unified architecture based on neural networks, which consists of a sequence tagger for segmentation and a classifier for recognition. The DA recognition model is based on hierarchical neural networks to incorporate the context of preceding sentences. We investigate sharing some layers of the two components so that they can be trained jointly and learn generalized features from both tasks. An evaluation on the Switchboard Dialog Act (SwDA) corpus shows that the jointly-trained models outperform independently-trained models, single-step models, and other reported results in DA segmentation, recognition, and joint tasks.",2018
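The zhao-kawahara-2018-unified entry above proposes sharing layers between a dialog-act segmentation tagger and a dialog-act classifier. The PyTorch sketch below shows only the shared-encoder, two-head structure; layer sizes, label sets, and the pooling step are assumptions, and the training procedure is omitted.

```python
# Sketch of the shared-layer idea: one word-level encoder feeds both a
# per-token segmentation tagger (e.g., boundary vs. non-boundary tags) and a
# dialog-act classifier over the resulting segment. Sizes and label sets are
# invented; this is not the paper's exact architecture or training setup.

import torch
import torch.nn as nn

class JointSegmenterRecognizer(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=64, hidden=128,
                 n_seg_tags=2, n_dialog_acts=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared BiLSTM encoder used by both tasks (the "shared layers").
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True,
                               bidirectional=True)
        self.seg_head = nn.Linear(2 * hidden, n_seg_tags)    # per-token tags
        self.da_head = nn.Linear(2 * hidden, n_dialog_acts)  # per-segment act

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))
        seg_logits = self.seg_head(states)            # (batch, seq, n_seg_tags)
        da_logits = self.da_head(states.mean(dim=1))  # (batch, n_dialog_acts)
        return seg_logits, da_logits

model = JointSegmenterRecognizer()
tokens = torch.randint(0, 5000, (2, 10))  # a toy batch of two word sequences
seg_logits, da_logits = model(tokens)
print(seg_logits.shape, da_logits.shape)  # torch.Size([2, 10, 2]) torch.Size([2, 4])
```

Joint training would typically sum a token-level cross-entropy over seg_logits and an utterance-level cross-entropy over da_logits, so gradients from both tasks update the shared encoder.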
crego-etal-2005-ngram,https://aclanthology.org/2005.iwslt-1.23,0,,,,,,,"Ngram-based versus Phrase-based Statistical Machine Translation. This work summarizes a comparison between two approaches to Statistical Machine Translation (SMT), namely Ngram-based and Phrase-based SMT. In both approaches, the translation process is based on bilingual units related by word-to-word alignments (pairs of source and target words), while the main differences are based on the extraction process of these units and the statistical modeling of the translation context. The study has been carried out on two different translation tasks (in terms of translation difficulty and amount of available training data), and allowing for distortion (reordering) in the decoding process. Thus it extends a previous work where both approaches were compared under monotone conditions. We finally report comparative results in terms of translation accuracy, computation time and memory size. Results show how the ngram-based approach outperforms the phrase-based approach by achieving similar accuracy scores in less computational time and with less memory needs.",Ngram-based versus Phrase-based Statistical Machine Translation,"This work summarizes a comparison between two approaches to Statistical Machine Translation (SMT), namely Ngram-based and Phrase-based SMT. In both approaches, the translation process is based on bilingual units related by word-to-word alignments (pairs of source and target words), while the main differences are based on the extraction process of these units and the statistical modeling of the translation context. The study has been carried out on two different translation tasks (in terms of translation difficulty and amount of available training data), and allowing for distortion (reordering) in the decoding process. Thus it extends a previous work where both approaches were compared under monotone conditions. We finally report comparative results in terms of translation accuracy, computation time and memory size. Results show how the ngram-based approach outperforms the phrase-based approach by achieving similar accuracy scores in less computational time and with less memory needs.",Ngram-based versus Phrase-based Statistical Machine Translation,"This work summarizes a comparison between two approaches to Statistical Machine Translation (SMT), namely Ngram-based and Phrase-based SMT. In both approaches, the translation process is based on bilingual units related by word-to-word alignments (pairs of source and target words), while the main differences are based on the extraction process of these units and the statistical modeling of the translation context. The study has been carried out on two different translation tasks (in terms of translation difficulty and amount of available training data), and allowing for distortion (reordering) in the decoding process. Thus it extends a previous work where both approaches were compared under monotone conditions. We finally report comparative results in terms of translation accuracy, computation time and memory size. Results show how the ngram-based approach outperforms the phrase-based approach by achieving similar accuracy scores in less computational time and with less memory needs.",,"Ngram-based versus Phrase-based Statistical Machine Translation. This work summarizes a comparison between two approaches to Statistical Machine Translation (SMT), namely Ngram-based and Phrase-based SMT. 
In both approaches, the translation process is based on bilingual units related by word-to-word alignments (pairs of source and target words), while the main differences are based on the extraction process of these units and the statistical modeling of the translation context. The study has been carried out on two different translation tasks (in terms of translation difficulty and amount of available training data), and allowing for distortion (reordering) in the decoding process. Thus it extends a previous work where both approaches were compared under monotone conditions. We finally report comparative results in terms of translation accuracy, computation time and memory size. Results show how the ngram-based approach outperforms the phrase-based approach by achieving similar accuracy scores in less computational time and with less memory needs.",2005
luo-etal-2018-auto,https://aclanthology.org/D18-1075,0,,,,,,,"An Auto-Encoder Matching Model for Learning Utterance-Level Semantic Dependency in Dialogue Generation. Generating semantically coherent responses is still a major challenge in dialogue generation. Different from conventional text generation tasks, the mapping between inputs and responses in conversations is more complicated, which highly demands the understanding of utterance-level semantic dependency, a relation between the whole meanings of inputs and outputs. To address this problem, we propose an Auto-Encoder Matching (AEM) model to learn such dependency. The model contains two auto-encoders and one mapping module. The auto-encoders learn the semantic representations of inputs and responses, and the mapping module learns to connect the utterance-level representations. Experimental results from automatic and human evaluations demonstrate that our model is capable of generating responses of high coherence and fluency compared to baseline models. 1",An Auto-Encoder Matching Model for Learning Utterance-Level Semantic Dependency in Dialogue Generation,"Generating semantically coherent responses is still a major challenge in dialogue generation. Different from conventional text generation tasks, the mapping between inputs and responses in conversations is more complicated, which highly demands the understanding of utterance-level semantic dependency, a relation between the whole meanings of inputs and outputs. To address this problem, we propose an Auto-Encoder Matching (AEM) model to learn such dependency. The model contains two auto-encoders and one mapping module. The auto-encoders learn the semantic representations of inputs and responses, and the mapping module learns to connect the utterance-level representations. Experimental results from automatic and human evaluations demonstrate that our model is capable of generating responses of high coherence and fluency compared to baseline models. 1",An Auto-Encoder Matching Model for Learning Utterance-Level Semantic Dependency in Dialogue Generation,"Generating semantically coherent responses is still a major challenge in dialogue generation. Different from conventional text generation tasks, the mapping between inputs and responses in conversations is more complicated, which highly demands the understanding of utterance-level semantic dependency, a relation between the whole meanings of inputs and outputs. To address this problem, we propose an Auto-Encoder Matching (AEM) model to learn such dependency. The model contains two auto-encoders and one mapping module. The auto-encoders learn the semantic representations of inputs and responses, and the mapping module learns to connect the utterance-level representations. Experimental results from automatic and human evaluations demonstrate that our model is capable of generating responses of high coherence and fluency compared to baseline models. 1",This work was supported in part by National Natural Science Foundation of China (No. 61673028). We thank all reviewers for providing the construc-tive suggestions. Xu Sun is the corresponding author of this paper.,"An Auto-Encoder Matching Model for Learning Utterance-Level Semantic Dependency in Dialogue Generation. Generating semantically coherent responses is still a major challenge in dialogue generation. 
Different from conventional text generation tasks, the mapping between inputs and responses in conversations is more complicated, which highly demands the understanding of utterance-level semantic dependency, a relation between the whole meanings of inputs and outputs. To address this problem, we propose an Auto-Encoder Matching (AEM) model to learn such dependency. The model contains two auto-encoders and one mapping module. The auto-encoders learn the semantic representations of inputs and responses, and the mapping module learns to connect the utterance-level representations. Experimental results from automatic and human evaluations demonstrate that our model is capable of generating responses of high coherence and fluency compared to baseline models. 1",2018
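The luo-etal-2018-auto entry above outlines two auto-encoders plus a mapping module between their latent spaces. The sketch below mirrors that structure in miniature, with dense layers over fixed-size utterance vectors standing in for the sequence encoders and decoders a real dialogue model would use; all dimensions and the loss combination are invented.

```python
# Structural sketch of the auto-encoder matching idea: two autoencoders learn
# utterance-level representations of queries and responses, and a mapping
# module connects the two latent spaces. Real systems use sequence encoders /
# decoders; dense layers over fixed-size utterance vectors stand in here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AutoEncoder(nn.Module):
    def __init__(self, dim=256, latent=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, latent), nn.Tanh())
        self.dec = nn.Linear(latent, dim)

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

class AEMatching(nn.Module):
    """Two utterance-level autoencoders joined by a latent-space mapping."""
    def __init__(self, dim=256, latent=64):
        super().__init__()
        self.query_ae = AutoEncoder(dim, latent)
        self.response_ae = AutoEncoder(dim, latent)
        self.mapping = nn.Linear(latent, latent)

    def forward(self, query_vec, response_vec):
        zq, query_recon = self.query_ae(query_vec)
        zr, response_recon = self.response_ae(response_vec)
        zr_pred = self.mapping(zq)  # predict the response latent from the query latent
        return query_recon, response_recon, zr, zr_pred

model = AEMatching()
q, r = torch.randn(8, 256), torch.randn(8, 256)   # toy utterance vectors
q_rec, r_rec, zr, zr_pred = model(q, r)
loss = (F.mse_loss(q_rec, q)        # query reconstruction
        + F.mse_loss(r_rec, r)      # response reconstruction
        + F.mse_loss(zr_pred, zr))  # utterance-level matching in latent space
print(float(loss))
```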
okumura-etal-2003-text,https://aclanthology.org/W03-0507,0,,,,,,,"Text Summarization Challenge 2 - Text summarization evaluation at NTCIR Workshop 3. We describe the outline of Text Summarization Challenge 2 (TSC2 hereafter), a sequel text summarization evaluation conducted as one of the tasks at the NTCIR Workshop 3. First, we describe briefly the previous evaluation, Text Summarization Challenge (TSC1) as introduction to TSC2. Then we explain TSC2 including the participants, the two tasks in TSC2, data used, evaluation methods for each task, and brief report on the results.",Text Summarization Challenge 2 - Text summarization evaluation at {NTCIR} Workshop 3,"We describe the outline of Text Summarization Challenge 2 (TSC2 hereafter), a sequel text summarization evaluation conducted as one of the tasks at the NTCIR Workshop 3. First, we describe briefly the previous evaluation, Text Summarization Challenge (TSC1) as introduction to TSC2. Then we explain TSC2 including the participants, the two tasks in TSC2, data used, evaluation methods for each task, and brief report on the results.",Text Summarization Challenge 2 - Text summarization evaluation at NTCIR Workshop 3,"We describe the outline of Text Summarization Challenge 2 (TSC2 hereafter), a sequel text summarization evaluation conducted as one of the tasks at the NTCIR Workshop 3. First, we describe briefly the previous evaluation, Text Summarization Challenge (TSC1) as introduction to TSC2. Then we explain TSC2 including the participants, the two tasks in TSC2, data used, evaluation methods for each task, and brief report on the results.",,"Text Summarization Challenge 2 - Text summarization evaluation at NTCIR Workshop 3. We describe the outline of Text Summarization Challenge 2 (TSC2 hereafter), a sequel text summarization evaluation conducted as one of the tasks at the NTCIR Workshop 3. First, we describe briefly the previous evaluation, Text Summarization Challenge (TSC1) as introduction to TSC2. Then we explain TSC2 including the participants, the two tasks in TSC2, data used, evaluation methods for each task, and brief report on the results.",2003
soboroff-harman-2005-novelty,https://aclanthology.org/H05-1014,1,,,,industry_innovation_infrastructure,,,"Novelty Detection: The TREC Experience. A challenge for search systems is to detect not only when an item is relevant to the user's information need, but also when it contains something new which the user has not seen before. In the TREC novelty track, the task was to highlight sentences containing relevant and new information in a short, topical document stream. This is analogous to highlighting key parts of a document for another person to read, and this kind of output can be useful as input to a summarization system. Search topics involved both news events and reported opinions on hot-button subjects. When people performed this task, they tended to select small blocks of consecutive sentences, whereas current systems identified many relevant and novel passages. We also found that opinions are much harder to track than events.",Novelty Detection: The {TREC} Experience,"A challenge for search systems is to detect not only when an item is relevant to the user's information need, but also when it contains something new which the user has not seen before. In the TREC novelty track, the task was to highlight sentences containing relevant and new information in a short, topical document stream. This is analogous to highlighting key parts of a document for another person to read, and this kind of output can be useful as input to a summarization system. Search topics involved both news events and reported opinions on hot-button subjects. When people performed this task, they tended to select small blocks of consecutive sentences, whereas current systems identified many relevant and novel passages. We also found that opinions are much harder to track than events.",Novelty Detection: The TREC Experience,"A challenge for search systems is to detect not only when an item is relevant to the user's information need, but also when it contains something new which the user has not seen before. In the TREC novelty track, the task was to highlight sentences containing relevant and new information in a short, topical document stream. This is analogous to highlighting key parts of a document for another person to read, and this kind of output can be useful as input to a summarization system. Search topics involved both news events and reported opinions on hot-button subjects. When people performed this task, they tended to select small blocks of consecutive sentences, whereas current systems identified many relevant and novel passages. We also found that opinions are much harder to track than events.",,"Novelty Detection: The TREC Experience. A challenge for search systems is to detect not only when an item is relevant to the user's information need, but also when it contains something new which the user has not seen before. In the TREC novelty track, the task was to highlight sentences containing relevant and new information in a short, topical document stream. This is analogous to highlighting key parts of a document for another person to read, and this kind of output can be useful as input to a summarization system. Search topics involved both news events and reported opinions on hot-button subjects. When people performed this task, they tended to select small blocks of consecutive sentences, whereas current systems identified many relevant and novel passages. We also found that opinions are much harder to track than events.",2005
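The soboroff-harman-2005-novelty entry above describes flagging sentences that carry new information in a document stream. A common, simple baseline for this kind of task (not a TREC participant system) is to mark a sentence as novel when its maximum cosine similarity to earlier sentences stays below a threshold; the threshold and sentences below are made up.

```python
# A simple new-information filter in the spirit of the novelty task: a sentence
# is flagged as novel if its maximum TF-IDF cosine similarity to the sentences
# already seen falls below a threshold.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def novel_sentence_indices(sentences, threshold=0.5):
    vectors = TfidfVectorizer().fit_transform(sentences)
    novel = [0]                      # the first sentence is trivially new
    for i in range(1, len(sentences)):
        sims = cosine_similarity(vectors[i], vectors[:i])
        if sims.max() < threshold:
            novel.append(i)
    return novel

stream = [
    "The storm hit the coast on Monday.",
    "A storm struck the coast Monday.",          # largely redundant
    "Thousands of residents were evacuated.",    # new information
]
print(novel_sentence_indices(stream))  # e.g. [0, 2]
```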
eshghi-etal-2013-probabilistic,https://aclanthology.org/W13-0110,0,,,,,,,"Probabilistic induction for an incremental semantic grammar. We describe a method for learning an incremental semantic grammar from a corpus in which sentences are paired with logical forms as predicate-argument structure trees. Working in the framework of Dynamic Syntax, and assuming a set of generally available compositional mechanisms, we show how lexical entries can be learned as probabilistic procedures for the incremental projection of semantic structure, providing a grammar suitable for use in an incremental probabilistic parser. By inducing these from a corpus generated using an existing grammar, we demonstrate that this results in both good coverage and compatibility with the original entries, without requiring annotation at the word level. We show that this semantic approach to grammar induction has the novel ability to learn the syntactic and semantic constraints on pronouns. * We would like to thank Ruth Kempson and Yo Sato for helpful comments and discussion.",Probabilistic induction for an incremental semantic grammar,"We describe a method for learning an incremental semantic grammar from a corpus in which sentences are paired with logical forms as predicate-argument structure trees. Working in the framework of Dynamic Syntax, and assuming a set of generally available compositional mechanisms, we show how lexical entries can be learned as probabilistic procedures for the incremental projection of semantic structure, providing a grammar suitable for use in an incremental probabilistic parser. By inducing these from a corpus generated using an existing grammar, we demonstrate that this results in both good coverage and compatibility with the original entries, without requiring annotation at the word level. We show that this semantic approach to grammar induction has the novel ability to learn the syntactic and semantic constraints on pronouns. * We would like to thank Ruth Kempson and Yo Sato for helpful comments and discussion.",Probabilistic induction for an incremental semantic grammar,"We describe a method for learning an incremental semantic grammar from a corpus in which sentences are paired with logical forms as predicate-argument structure trees. Working in the framework of Dynamic Syntax, and assuming a set of generally available compositional mechanisms, we show how lexical entries can be learned as probabilistic procedures for the incremental projection of semantic structure, providing a grammar suitable for use in an incremental probabilistic parser. By inducing these from a corpus generated using an existing grammar, we demonstrate that this results in both good coverage and compatibility with the original entries, without requiring annotation at the word level. We show that this semantic approach to grammar induction has the novel ability to learn the syntactic and semantic constraints on pronouns. * We would like to thank Ruth Kempson and Yo Sato for helpful comments and discussion.",,"Probabilistic induction for an incremental semantic grammar. We describe a method for learning an incremental semantic grammar from a corpus in which sentences are paired with logical forms as predicate-argument structure trees. 
Working in the framework of Dynamic Syntax, and assuming a set of generally available compositional mechanisms, we show how lexical entries can be learned as probabilistic procedures for the incremental projection of semantic structure, providing a grammar suitable for use in an incremental probabilistic parser. By inducing these from a corpus generated using an existing grammar, we demonstrate that this results in both good coverage and compatibility with the original entries, without requiring annotation at the word level. We show that this semantic approach to grammar induction has the novel ability to learn the syntactic and semantic constraints on pronouns. * We would like to thank Ruth Kempson and Yo Sato for helpful comments and discussion.",2013
liu-etal-2021-universal,https://aclanthology.org/2021.cl-2.15,0,,,,,,,"Universal Discourse Representation Structure Parsing. We consider the task of crosslingual semantic parsing in the style of Discourse Representation Theory (DRT) where knowledge from annotated corpora in a resource-rich language is transferred via bitext to guide learning in other languages. We introduce Universal Discourse Representation Theory (UDRT), a variant of DRT that explicitly anchors semantic representations to tokens in the linguistic input. We develop a semantic parsing framework based on the Transformer architecture and utilize it to obtain semantic resources in multiple languages following two learning schemes. The many-to-one approach translates non-English text to English, and then runs a relatively accurate English parser on the translated text, while the one-to-many approach translates gold standard English to non-English text and trains multiple parsers (one per language) on the translations. Experimental results on the Parallel Meaning Bank show that our proposal outperforms strong baselines by a wide margin and can be used to construct (silver-standard) meaning banks for 99 languages.",Universal Discourse Representation Structure Parsing,"We consider the task of crosslingual semantic parsing in the style of Discourse Representation Theory (DRT) where knowledge from annotated corpora in a resource-rich language is transferred via bitext to guide learning in other languages. We introduce Universal Discourse Representation Theory (UDRT), a variant of DRT that explicitly anchors semantic representations to tokens in the linguistic input. We develop a semantic parsing framework based on the Transformer architecture and utilize it to obtain semantic resources in multiple languages following two learning schemes. The many-to-one approach translates non-English text to English, and then runs a relatively accurate English parser on the translated text, while the one-to-many approach translates gold standard English to non-English text and trains multiple parsers (one per language) on the translations. Experimental results on the Parallel Meaning Bank show that our proposal outperforms strong baselines by a wide margin and can be used to construct (silver-standard) meaning banks for 99 languages.",Universal Discourse Representation Structure Parsing,"We consider the task of crosslingual semantic parsing in the style of Discourse Representation Theory (DRT) where knowledge from annotated corpora in a resource-rich language is transferred via bitext to guide learning in other languages. We introduce Universal Discourse Representation Theory (UDRT), a variant of DRT that explicitly anchors semantic representations to tokens in the linguistic input. We develop a semantic parsing framework based on the Transformer architecture and utilize it to obtain semantic resources in multiple languages following two learning schemes. The many-to-one approach translates non-English text to English, and then runs a relatively accurate English parser on the translated text, while the one-to-many approach translates gold standard English to non-English text and trains multiple parsers (one per language) on the translations. Experimental results on the Parallel Meaning Bank show that our proposal outperforms strong baselines by a wide margin and can be used to construct (silver-standard) meaning banks for 99 languages.","We thank the anonymous reviewers for their feedback. We thank Alex Lascarides for her comments. 
We gratefully acknowledge the support of the European Research Council (Lapata, Liu; award number 681760), the EU H2020 project SUMMA (Cohen, Liu; grant agreement 688139) and Bloomberg (Cohen, Liu). This work was partly funded by the NWO-VICI grant ""Lost in Translation -Found in Meaning"" (288-89-003).","Universal Discourse Representation Structure Parsing. We consider the task of crosslingual semantic parsing in the style of Discourse Representation Theory (DRT) where knowledge from annotated corpora in a resource-rich language is transferred via bitext to guide learning in other languages. We introduce Universal Discourse Representation Theory (UDRT), a variant of DRT that explicitly anchors semantic representations to tokens in the linguistic input. We develop a semantic parsing framework based on the Transformer architecture and utilize it to obtain semantic resources in multiple languages following two learning schemes. The many-to-one approach translates non-English text to English, and then runs a relatively accurate English parser on the translated text, while the one-to-many approach translates gold standard English to non-English text and trains multiple parsers (one per language) on the translations. Experimental results on the Parallel Meaning Bank show that our proposal outperforms strong baselines by a wide margin and can be used to construct (silver-standard) meaning banks for 99 languages.",2021
ahlberg-etal-2015-paradigm,https://aclanthology.org/N15-1107,0,,,,,,,"Paradigm classification in supervised learning of morphology. Supervised morphological paradigm learning by identifying and aligning the longest common subsequence found in inflection tables has recently been proposed as a simple yet competitive way to induce morphological patterns. We combine this non-probabilistic strategy of inflection table generalization with a discriminative classifier to permit the reconstruction of complete inflection tables of unseen words. Our system learns morphological paradigms from labeled examples of inflection patterns (inflection tables) and then produces inflection tables from unseen lemmas or base forms. We evaluate the approach on datasets covering 11 different languages and show that this approach results in consistently higher accuracies vis-à-vis other methods on the same task, thus indicating that the general method is a viable approach to quickly creating high-accuracy morphological resources.",Paradigm classification in supervised learning of morphology,"Supervised morphological paradigm learning by identifying and aligning the longest common subsequence found in inflection tables has recently been proposed as a simple yet competitive way to induce morphological patterns. We combine this non-probabilistic strategy of inflection table generalization with a discriminative classifier to permit the reconstruction of complete inflection tables of unseen words. Our system learns morphological paradigms from labeled examples of inflection patterns (inflection tables) and then produces inflection tables from unseen lemmas or base forms. We evaluate the approach on datasets covering 11 different languages and show that this approach results in consistently higher accuracies vis-à-vis other methods on the same task, thus indicating that the general method is a viable approach to quickly creating high-accuracy morphological resources.",Paradigm classification in supervised learning of morphology,"Supervised morphological paradigm learning by identifying and aligning the longest common subsequence found in inflection tables has recently been proposed as a simple yet competitive way to induce morphological patterns. We combine this non-probabilistic strategy of inflection table generalization with a discriminative classifier to permit the reconstruction of complete inflection tables of unseen words. Our system learns morphological paradigms from labeled examples of inflection patterns (inflection tables) and then produces inflection tables from unseen lemmas or base forms. We evaluate the approach on datasets covering 11 different languages and show that this approach results in consistently higher accuracies vis-à-vis other methods on the same task, thus indicating that the general method is a viable approach to quickly creating high-accuracy morphological resources.",,"Paradigm classification in supervised learning of morphology. Supervised morphological paradigm learning by identifying and aligning the longest common subsequence found in inflection tables has recently been proposed as a simple yet competitive way to induce morphological patterns. We combine this non-probabilistic strategy of inflection table generalization with a discriminative classifier to permit the reconstruction of complete inflection tables of unseen words. 
Our system learns morphological paradigms from labeled examples of inflection patterns (inflection tables) and then produces inflection tables from unseen lemmas or base forms. We evaluate the approach on datasets covering 11 different languages and show that this approach results in consistently higher accuracies vis-à-vis other methods on the same task, thus indicating that the general method is a viable approach to quickly creating high-accuracy morphological resources.",2015
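The ahlberg-etal-2015-paradigm entry above builds on generalizing inflection tables via their longest common subsequence. The sketch below shows that generalization step in a toy form: it folds a pairwise LCS over the forms (a simplification of the paper's alignment) and abstracts the shared material into a single variable; the inflection table is invented and the downstream classifier is not shown.

```python
# Sketch of the longest-common-subsequence generalisation step: find the
# material shared by all forms of an inflection table and abstract it into a
# variable, leaving the inflection-specific parts visible. Folding a pairwise
# LCS over the forms and using a single variable, as done here, simplifies the
# paper's alignment; the toy table is invented.

def lcs(a, b):
    """Longest common subsequence of two strings (standard DP)."""
    table = [[""] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            if ca == cb:
                table[i][j] = table[i - 1][j - 1] + ca
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1], key=len)
    return table[-1][-1]

def paradigm_patterns(forms):
    shared = forms[0]
    for form in forms[1:]:
        shared = lcs(shared, form)
    patterns = []
    for form in forms:
        # Greedily mark characters belonging to the shared subsequence.
        out, k = [], 0
        for ch in form:
            if k < len(shared) and ch == shared[k]:
                if not out or out[-1] != "x1":
                    out.append("x1")
                k += 1
            else:
                out.append(ch)
        patterns.append("+".join(out))
    return shared, patterns

table = ["ringa", "ringer", "ringde", "ringt"]   # a toy inflection table
print(paradigm_patterns(table))
# ('ring', ['x1+a', 'x1+e+r', 'x1+d+e', 'x1+t'])
```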
wu-etal-2020-improving-knowledge,https://aclanthology.org/2020.findings-emnlp.126,0,,,,,,,"Improving Knowledge-Aware Dialogue Response Generation by Using Human-Written Prototype Dialogues. Incorporating commonsense knowledge can alleviate the issue of generating generic responses in open-domain generative dialogue systems. However, selecting knowledge facts for the dialogue context is still a challenge. The widely used approach Entity Name Matching always retrieves irrelevant facts from the view of local entity words. This paper proposes a novel knowledge selection approach, Prototype-KR, and a knowledge-aware generative model, Prototype-KRG. Given a query, our approach first retrieves a set of prototype dialogues that are relevant to the query. We find knowledge facts used in prototype dialogues usually are highly relevant to the current query; thus, Prototype-KR ranks such knowledge facts based on the semantic similarity and then selects the most appropriate facts. Subsequently, Prototype-KRG can generate an informative response using the selected knowledge facts. Experiments demonstrate that our approach has achieved notable improvements on the most metrics, compared to generative baselines. Meanwhile, compared to IR(Retrieval)-based baselines, responses generated by our approach are more relevant to the context and have comparable informativeness.",Improving Knowledge-Aware Dialogue Response Generation by Using Human-Written Prototype Dialogues,"Incorporating commonsense knowledge can alleviate the issue of generating generic responses in open-domain generative dialogue systems. However, selecting knowledge facts for the dialogue context is still a challenge. The widely used approach Entity Name Matching always retrieves irrelevant facts from the view of local entity words. This paper proposes a novel knowledge selection approach, Prototype-KR, and a knowledge-aware generative model, Prototype-KRG. Given a query, our approach first retrieves a set of prototype dialogues that are relevant to the query. We find knowledge facts used in prototype dialogues usually are highly relevant to the current query; thus, Prototype-KR ranks such knowledge facts based on the semantic similarity and then selects the most appropriate facts. Subsequently, Prototype-KRG can generate an informative response using the selected knowledge facts. Experiments demonstrate that our approach has achieved notable improvements on the most metrics, compared to generative baselines. Meanwhile, compared to IR(Retrieval)-based baselines, responses generated by our approach are more relevant to the context and have comparable informativeness.",Improving Knowledge-Aware Dialogue Response Generation by Using Human-Written Prototype Dialogues,"Incorporating commonsense knowledge can alleviate the issue of generating generic responses in open-domain generative dialogue systems. However, selecting knowledge facts for the dialogue context is still a challenge. The widely used approach Entity Name Matching always retrieves irrelevant facts from the view of local entity words. This paper proposes a novel knowledge selection approach, Prototype-KR, and a knowledge-aware generative model, Prototype-KRG. Given a query, our approach first retrieves a set of prototype dialogues that are relevant to the query. We find knowledge facts used in prototype dialogues usually are highly relevant to the current query; thus, Prototype-KR ranks such knowledge facts based on the semantic similarity and then selects the most appropriate facts. 
Subsequently, Prototype-KRG can generate an informative response using the selected knowledge facts. Experiments demonstrate that our approach has achieved notable improvements on the most metrics, compared to generative baselines. Meanwhile, compared to IR(Retrieval)-based baselines, responses generated by our approach are more relevant to the context and have comparable informativeness.",This work is supported by the National Key R&D Program of China (Grant No. 2017YFB1002000).,"Improving Knowledge-Aware Dialogue Response Generation by Using Human-Written Prototype Dialogues. Incorporating commonsense knowledge can alleviate the issue of generating generic responses in open-domain generative dialogue systems. However, selecting knowledge facts for the dialogue context is still a challenge. The widely used approach Entity Name Matching always retrieves irrelevant facts from the view of local entity words. This paper proposes a novel knowledge selection approach, Prototype-KR, and a knowledge-aware generative model, Prototype-KRG. Given a query, our approach first retrieves a set of prototype dialogues that are relevant to the query. We find knowledge facts used in prototype dialogues usually are highly relevant to the current query; thus, Prototype-KR ranks such knowledge facts based on the semantic similarity and then selects the most appropriate facts. Subsequently, Prototype-KRG can generate an informative response using the selected knowledge facts. Experiments demonstrate that our approach has achieved notable improvements on the most metrics, compared to generative baselines. Meanwhile, compared to IR(Retrieval)-based baselines, responses generated by our approach are more relevant to the context and have comparable informativeness.",2020
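The wu-etal-2020-improving-knowledge entry above retrieves prototype dialogues for a query and ranks the knowledge facts they used. The sketch below shows only that retrieve-then-rank flow, with TF-IDF cosine similarity standing in for the semantic similarity the paper uses; the dialogues, facts, and query are invented.

```python
# Sketch of the retrieve-then-rank idea: find the prototype dialogue most
# similar to the query, pool the knowledge facts it used, and rank those facts
# by similarity to the query. TF-IDF cosine stands in for a learned semantic
# similarity; the toy data below is invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

prototype_dialogues = [
    ("what breed is good for apartments?", ["cats are independent pets",
                                            "small dogs need less space"]),
    ("how do i water a cactus?", ["cacti store water in their stems"]),
]
query = "which small pets suit apartments?"

vectorizer = TfidfVectorizer().fit(
    [q for q, _ in prototype_dialogues]
    + [f for _, facts in prototype_dialogues for f in facts]
    + [query]
)
qv = vectorizer.transform([query])

# 1) Retrieve the most similar prototype dialogue.
proto_vecs = vectorizer.transform([q for q, _ in prototype_dialogues])
best_proto = cosine_similarity(qv, proto_vecs).argmax()

# 2) Rank the facts used by that prototype against the current query.
facts = prototype_dialogues[best_proto][1]
scores = cosine_similarity(qv, vectorizer.transform(facts))[0]
ranked = sorted(zip(facts, scores), key=lambda p: -p[1])
print(ranked)  # facts ordered by similarity to the query
```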
feng-etal-2013-connotation,https://aclanthology.org/P13-1174,0,,,,,,,"Connotation Lexicon: A Dash of Sentiment Beneath the Surface Meaning. Understanding the connotation of words plays an important role in interpreting subtle shades of sentiment beyond denotative or surface meaning of text, as seemingly objective statements often allude nuanced sentiment of the writer, and even purposefully conjure emotion from the readers' minds. The focus of this paper is drawing nuanced, connotative sentiments from even those words that are objective on the surface, such as ""intelligence"", ""human"", and ""cheesecake"". We propose induction algorithms encoding a diverse set of linguistic insights (semantic prosody, distributional similarity, semantic parallelism of coordination) and prior knowledge drawn from lexical resources, resulting in the first broad-coverage connotation lexicon.",Connotation Lexicon: A Dash of Sentiment Beneath the Surface Meaning,"Understanding the connotation of words plays an important role in interpreting subtle shades of sentiment beyond denotative or surface meaning of text, as seemingly objective statements often allude nuanced sentiment of the writer, and even purposefully conjure emotion from the readers' minds. The focus of this paper is drawing nuanced, connotative sentiments from even those words that are objective on the surface, such as ""intelligence"", ""human"", and ""cheesecake"". We propose induction algorithms encoding a diverse set of linguistic insights (semantic prosody, distributional similarity, semantic parallelism of coordination) and prior knowledge drawn from lexical resources, resulting in the first broad-coverage connotation lexicon.",Connotation Lexicon: A Dash of Sentiment Beneath the Surface Meaning,"Understanding the connotation of words plays an important role in interpreting subtle shades of sentiment beyond denotative or surface meaning of text, as seemingly objective statements often allude nuanced sentiment of the writer, and even purposefully conjure emotion from the readers' minds. The focus of this paper is drawing nuanced, connotative sentiments from even those words that are objective on the surface, such as ""intelligence"", ""human"", and ""cheesecake"". We propose induction algorithms encoding a diverse set of linguistic insights (semantic prosody, distributional similarity, semantic parallelism of coordination) and prior knowledge drawn from lexical resources, resulting in the first broad-coverage connotation lexicon.","This research was supported in part by the Stony Brook University Office of the Vice President for Research. We thank reviewers for many insightful comments and suggestions, and for providing us with several very inspiring examples to work with.","Connotation Lexicon: A Dash of Sentiment Beneath the Surface Meaning. Understanding the connotation of words plays an important role in interpreting subtle shades of sentiment beyond denotative or surface meaning of text, as seemingly objective statements often allude nuanced sentiment of the writer, and even purposefully conjure emotion from the readers' minds. The focus of this paper is drawing nuanced, connotative sentiments from even those words that are objective on the surface, such as ""intelligence"", ""human"", and ""cheesecake"". 
We propose induction algorithms encoding a diverse set of linguistic insights (semantic prosody, distributional similarity, semantic parallelism of coordination) and prior knowledge drawn from lexical resources, resulting in the first broad-coverage connotation lexicon.",2013
cooper-stickland-etal-2021-recipes,https://aclanthology.org/2021.eacl-main.301,0,,,,,,,"Recipes for Adapting Pre-trained Monolingual and Multilingual Models to Machine Translation. There has been recent success in pre-training on monolingual data and fine-tuning on Machine Translation (MT), but it remains unclear how to best leverage a pre-trained model for a given MT task. This paper investigates the benefits and drawbacks of freezing parameters, and adding new ones, when fine-tuning a pre-trained model on MT. We focus on 1) Fine-tuning a model trained only on English monolingual data, BART. 2) Fine-tuning a model trained on monolingual data from 25 languages, mBART. For BART we get the best performance by freezing most of the model parameters, and adding extra positional embeddings. For mBART we match or outperform the performance of naive fine-tuning for most language pairs with the encoder, and most of the decoder, frozen. The encoder-decoder attention parameters are most important to finetune. When constraining ourselves to an outof-domain training set for Vietnamese to English we see the largest improvements over the fine-tuning baseline.",Recipes for Adapting Pre-trained Monolingual and Multilingual Models to Machine Translation,"There has been recent success in pre-training on monolingual data and fine-tuning on Machine Translation (MT), but it remains unclear how to best leverage a pre-trained model for a given MT task. This paper investigates the benefits and drawbacks of freezing parameters, and adding new ones, when fine-tuning a pre-trained model on MT. We focus on 1) Fine-tuning a model trained only on English monolingual data, BART. 2) Fine-tuning a model trained on monolingual data from 25 languages, mBART. For BART we get the best performance by freezing most of the model parameters, and adding extra positional embeddings. For mBART we match or outperform the performance of naive fine-tuning for most language pairs with the encoder, and most of the decoder, frozen. The encoder-decoder attention parameters are most important to finetune. When constraining ourselves to an outof-domain training set for Vietnamese to English we see the largest improvements over the fine-tuning baseline.",Recipes for Adapting Pre-trained Monolingual and Multilingual Models to Machine Translation,"There has been recent success in pre-training on monolingual data and fine-tuning on Machine Translation (MT), but it remains unclear how to best leverage a pre-trained model for a given MT task. This paper investigates the benefits and drawbacks of freezing parameters, and adding new ones, when fine-tuning a pre-trained model on MT. We focus on 1) Fine-tuning a model trained only on English monolingual data, BART. 2) Fine-tuning a model trained on monolingual data from 25 languages, mBART. For BART we get the best performance by freezing most of the model parameters, and adding extra positional embeddings. For mBART we match or outperform the performance of naive fine-tuning for most language pairs with the encoder, and most of the decoder, frozen. The encoder-decoder attention parameters are most important to finetune. When constraining ourselves to an outof-domain training set for Vietnamese to English we see the largest improvements over the fine-tuning baseline.","We'd like to thank James Cross, Mike Lewis, Naman Goyal, Jiatao Gu, Iain Murray, Yuqing Tang and Luke Zettlemoyer for useful discussion. 
We also thank our colleagues at FAIR and FAIAR for valuable feedback.","Recipes for Adapting Pre-trained Monolingual and Multilingual Models to Machine Translation. There has been recent success in pre-training on monolingual data and fine-tuning on Machine Translation (MT), but it remains unclear how to best leverage a pre-trained model for a given MT task. This paper investigates the benefits and drawbacks of freezing parameters, and adding new ones, when fine-tuning a pre-trained model on MT. We focus on 1) Fine-tuning a model trained only on English monolingual data, BART. 2) Fine-tuning a model trained on monolingual data from 25 languages, mBART. For BART we get the best performance by freezing most of the model parameters, and adding extra positional embeddings. For mBART we match or outperform the performance of naive fine-tuning for most language pairs with the encoder, and most of the decoder, frozen. The encoder-decoder attention parameters are most important to finetune. When constraining ourselves to an outof-domain training set for Vietnamese to English we see the largest improvements over the fine-tuning baseline.",2021
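The cooper-stickland-etal-2021-recipes entry above reports that freezing most parameters, while leaving pieces such as the encoder-decoder attention trainable, works well when fine-tuning pre-trained models for MT. The generic PyTorch recipe below shows one way to express that kind of freezing by parameter name; the name substrings and the toy model are assumptions and would need to match the naming of an actual pre-trained checkpoint.

```python
# Generic recipe for the freezing strategy: freeze every parameter except
# those in the modules you want to keep tuning (here, anything whose name
# contains one of a few substrings), then hand only the trainable parameters
# to the optimizer. The substrings below are assumptions about module naming.

import torch

def freeze_except(model, trainable_substrings=("encoder_attn", "layer_norm")):
    for name, param in model.named_parameters():
        param.requires_grad = any(s in name for s in trainable_substrings)
    return [p for p in model.parameters() if p.requires_grad]

# Toy stand-in for a pre-trained seq2seq model, just to keep the sketch runnable.
model = torch.nn.ModuleDict({
    "self_attn": torch.nn.Linear(8, 8),
    "encoder_attn": torch.nn.Linear(8, 8),   # cross-attention: keep tuning
    "layer_norm": torch.nn.LayerNorm(8),     # cheap to tune, often left open
    "ffn": torch.nn.Linear(8, 8),
})
trainable = freeze_except(model)
optimizer = torch.optim.Adam(trainable, lr=3e-5)
print(sum(p.numel() for p in trainable), "trainable parameters")
```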
hasler-2004-ignore,http://www.lrec-conf.org/proceedings/lrec2004/pdf/338.pdf,0,,,,,,,"``Why do you Ignore me?'' - Proof that not all Direct Speech is Bad. In the automatic summarisation of written texts, direct speech is usually deemed unsuitable for inclusion in important sentences. This is due to the fact that humans do not usually include such quotations when they create summaries. In this paper, we argue that despite generally negative attitudes, direct speech can be useful for summarisation and ignoring it can result in the omission of important and relevant information. We present an analysis of a corpus of annotated newswire texts in which a substantial amount of speech is marked by different annotators, and describe when and why direct speech can be included in summaries. In an attempt to make direct speech more appropriate for summaries, we also describe rules currently being developed to transform it into a more summary-acceptable format.",{``}Why do you Ignore me?{''} - Proof that not all Direct Speech is Bad,"In the automatic summarisation of written texts, direct speech is usually deemed unsuitable for inclusion in important sentences. This is due to the fact that humans do not usually include such quotations when they create summaries. In this paper, we argue that despite generally negative attitudes, direct speech can be useful for summarisation and ignoring it can result in the omission of important and relevant information. We present an analysis of a corpus of annotated newswire texts in which a substantial amount of speech is marked by different annotators, and describe when and why direct speech can be included in summaries. In an attempt to make direct speech more appropriate for summaries, we also describe rules currently being developed to transform it into a more summary-acceptable format.",``Why do you Ignore me?'' - Proof that not all Direct Speech is Bad,"In the automatic summarisation of written texts, direct speech is usually deemed unsuitable for inclusion in important sentences. This is due to the fact that humans do not usually include such quotations when they create summaries. In this paper, we argue that despite generally negative attitudes, direct speech can be useful for summarisation and ignoring it can result in the omission of important and relevant information. We present an analysis of a corpus of annotated newswire texts in which a substantial amount of speech is marked by different annotators, and describe when and why direct speech can be included in summaries. In an attempt to make direct speech more appropriate for summaries, we also describe rules currently being developed to transform it into a more summary-acceptable format.",,"``Why do you Ignore me?'' - Proof that not all Direct Speech is Bad. In the automatic summarisation of written texts, direct speech is usually deemed unsuitable for inclusion in important sentences. This is due to the fact that humans do not usually include such quotations when they create summaries. In this paper, we argue that despite generally negative attitudes, direct speech can be useful for summarisation and ignoring it can result in the omission of important and relevant information. We present an analysis of a corpus of annotated newswire texts in which a substantial amount of speech is marked by different annotators, and describe when and why direct speech can be included in summaries. 
In an attempt to make direct speech more appropriate for summaries, we also describe rules currently being developed to transform it into a more summary-acceptable format.",2004
patwardhan-riloff-2009-unified,https://aclanthology.org/D09-1016,0,,,,,,,"A Unified Model of Phrasal and Sentential Evidence for Information Extraction. Information Extraction (IE) systems that extract role fillers for events typically look at the local context surrounding a phrase when deciding whether to extract it. Often, however, role fillers occur in clauses that are not directly linked to an event word. We present a new model for event extraction that jointly considers both the local context around a phrase along with the wider sentential context in a probabilistic framework. Our approach uses a sentential event recognizer and a plausible role-filler recognizer that is conditioned on event sentences. We evaluate our system on two IE data sets and show that our model performs well in comparison to existing IE systems that rely on local phrasal context.",A Unified Model of Phrasal and Sentential Evidence for Information Extraction,"Information Extraction (IE) systems that extract role fillers for events typically look at the local context surrounding a phrase when deciding whether to extract it. Often, however, role fillers occur in clauses that are not directly linked to an event word. We present a new model for event extraction that jointly considers both the local context around a phrase along with the wider sentential context in a probabilistic framework. Our approach uses a sentential event recognizer and a plausible role-filler recognizer that is conditioned on event sentences. We evaluate our system on two IE data sets and show that our model performs well in comparison to existing IE systems that rely on local phrasal context.",A Unified Model of Phrasal and Sentential Evidence for Information Extraction,"Information Extraction (IE) systems that extract role fillers for events typically look at the local context surrounding a phrase when deciding whether to extract it. Often, however, role fillers occur in clauses that are not directly linked to an event word. We present a new model for event extraction that jointly considers both the local context around a phrase along with the wider sentential context in a probabilistic framework. Our approach uses a sentential event recognizer and a plausible role-filler recognizer that is conditioned on event sentences. We evaluate our system on two IE data sets and show that our model performs well in comparison to existing IE systems that rely on local phrasal context.",This work has been supported in part by the Department of Homeland Security Grant N0014-07-1-0152. We are grateful to Nathan Gilbert and Adam Teichert for their help with the annotation of event sentences.,"A Unified Model of Phrasal and Sentential Evidence for Information Extraction. Information Extraction (IE) systems that extract role fillers for events typically look at the local context surrounding a phrase when deciding whether to extract it. Often, however, role fillers occur in clauses that are not directly linked to an event word. We present a new model for event extraction that jointly considers both the local context around a phrase along with the wider sentential context in a probabilistic framework. Our approach uses a sentential event recognizer and a plausible role-filler recognizer that is conditioned on event sentences. We evaluate our system on two IE data sets and show that our model performs well in comparison to existing IE systems that rely on local phrasal context.",2009
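The patwardhan-riloff-2009-unified entry above combines sentential and phrasal evidence in a probabilistic framework. At decision time that combination can be pictured as multiplying two probabilities, as in the toy sketch below; the numbers are stand-ins, not outputs of trained recognizers, and the paper's actual model is richer than this bare product.

```python
# The unified-model idea reduces, at decision time, to combining two scores:
#   P(extract phrase) = P(sentence is an event sentence)
#                       * P(phrase fills the role | event sentence).
# The probabilities below are stand-in numbers, not outputs of trained models.

def extraction_probability(p_event_sentence, p_role_filler_given_event):
    return p_event_sentence * p_role_filler_given_event

candidates = [
    # (phrase, P(event sentence), P(role filler | event sentence))
    ("the downtown office", 0.92, 0.80),   # strong sentential and local evidence
    ("the downtown office", 0.15, 0.80),   # good local cue, weak sentence -> drop
    ("last Tuesday",        0.92, 0.10),   # event sentence, but wrong role
]
THRESHOLD = 0.5
for phrase, p_sent, p_fill in candidates:
    p = extraction_probability(p_sent, p_fill)
    decision = "extract" if p >= THRESHOLD else "skip"
    print(f"{phrase!r}: {p:.2f} -> {decision}")
```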
mcconachy-etal-1998-bayesian,https://aclanthology.org/W98-1212,0,,,,,,,"A Bayesian Approach to Automating Argumentation. Our argumentation system NAG uses Bayesian networks in a user model and in a normative model to assemble and assess nice arguments, that is arguments which balance persuasiveness with normative correctness. Attentional focus is simulated in both models to select relevant subnetworks for Bayesian propagation. Bayesian propagation in the user model is modified to represent some human cognitive weaknesses. The subnetworks are expanded in an iterative abductive process until argumentative goals are achieved in both models, when the argument is presented to the user.",A {B}ayesian Approach to Automating Argumentation,"Our argumentation system NAG uses Bayesian networks in a user model and in a normative model to assemble and assess nice arguments, that is arguments which balance persuasiveness with normative correctness. Attentional focus is simulated in both models to select relevant subnetworks for Bayesian propagation. Bayesian propagation in the user model is modified to represent some human cognitive weaknesses. The subnetworks are expanded in an iterative abductive process until argumentative goals are achieved in both models, when the argument is presented to the user.",A Bayesian Approach to Automating Argumentation,"Our argumentation system NAG uses Bayesian networks in a user model and in a normative model to assemble and assess nice arguments, that is arguments which balance persuasiveness with normative correctness. Attentional focus is simulated in both models to select relevant subnetworks for Bayesian propagation. Bayesian propagation in the user model is modified to represent some human cognitive weaknesses. The subnetworks are expanded in an iterative abductive process until argumentative goals are achieved in both models, when the argument is presented to the user.",This work was supported in part by Australian Research Council grant A49531227.,"A Bayesian Approach to Automating Argumentation. Our argumentation system NAG uses Bayesian networks in a user model and in a normative model to assemble and assess nice arguments, that is arguments which balance persuasiveness with normative correctness. Attentional focus is simulated in both models to select relevant subnetworks for Bayesian propagation. Bayesian propagation in the user model is modified to represent some human cognitive weaknesses. The subnetworks are expanded in an iterative abductive process until argumentative goals are achieved in both models, when the argument is presented to the user.",1998
rommel-1984-language,https://aclanthology.org/1984.bcs-1.36,0,,,,,,,"Language or information: a new role for the translator. After three days of hearing about machine translation, machine-aided translation, terminology, lexicography, in fact about the methodology and techniques that are making it possible to escalate the information flow to unprecedented proportions, I am astounded at my temerity in agreeing to speak about so pedestrian a subject as the role of the human translator. In vindication, may I say that I have spent a considerable number of years in training future generations of translators, so that this is perhaps an act of self-justification. I might add that what I have heard this week has not led me to believe that the translator's skills, unlike the compositor's, have become obsolete. I am convinced, however, that these skills must be adapted and expanded so that the translator can continue to play his vital role in the dissemination of information and as a ""keeper of the language"".",Language or information: a new role for the translator,"After three days of hearing about machine translation, machine-aided translation, terminology, lexicography, in fact about the methodology and techniques that are making it possible to escalate the information flow to unprecedented proportions, I am astounded at my temerity in agreeing to speak about so pedestrian a subject as the role of the human translator. In vindication, may I say that I have spent a considerable number of years in training future generations of translators, so that this is perhaps an act of self-justification. I might add that what I have heard this week has not led me to believe that the translator's skills, unlike the compositor's, have become obsolete. I am convinced, however, that these skills must be adapted and expanded so that the translator can continue to play his vital role in the dissemination of information and as a ""keeper of the language"".",Language or information: a new role for the translator,"After three days of hearing about machine translation, machine-aided translation, terminology, lexicography, in fact about the methodology and techniques that are making it possible to escalate the information flow to unprecedented proportions, I am astounded at my temerity in agreeing to speak about so pedestrian a subject as the role of the human translator. In vindication, may I say that I have spent a considerable number of years in training future generations of translators, so that this is perhaps an act of self-justification. I might add that what I have heard this week has not led me to believe that the translator's skills, unlike the compositor's, have become obsolete. I am convinced, however, that these skills must be adapted and expanded so that the translator can continue to play his vital role in the dissemination of information and as a ""keeper of the language"".",,"Language or information: a new role for the translator. After three days of hearing about machine translation, machine-aided translation, terminology, lexicography, in fact about the methodology and techniques that are making it possible to escalate the information flow to unprecedented proportions, I am astounded at my temerity in agreeing to speak about so pedestrian a subject as the role of the human translator. In vindication, may I say that I have spent a considerable number of years in training future generations of translators, so that this is perhaps an act of self-justification. 
I might add that what I have heard this week has not led me to believe that the translator's skills, unlike the compositor's, have become obsolete. I am convinced, however, that these skills must be adapted and expanded so that the translator can continue to play his vital role in the dissemination of information and as a ""keeper of the language"".",1984
min-etal-2000-typographical,http://www.lrec-conf.org/proceedings/lrec2000/pdf/221.pdf,0,,,,,,,"Typographical and Orthographical Spelling Error Correction. This paper focuses on selection techniques for best correction of misspelt words at the lexical level. Spelling errors are introduced by either cognitive or typographical mistakes. A robust spelling correction algorithm is needed to cover both cognitive and typographical errors. For the most effective spelling correction system, various strategies are considered in this paper: ranking heuristics, correction algorithms, and correction priority strategies for the best selection. The strategies also take account of error types, syntactic information, word frequency statistics, and character distance. The findings show that it is very hard to generalise the spelling correction strategy for various types of data sets such as typographical, orthographical, and scanning errors.",Typographical and Orthographical Spelling Error Correction,"This paper focuses on selection techniques for best correction of misspelt words at the lexical level. Spelling errors are introduced by either cognitive or typographical mistakes. A robust spelling correction algorithm is needed to cover both cognitive and typographical errors. For the most effective spelling correction system, various strategies are considered in this paper: ranking heuristics, correction algorithms, and correction priority strategies for the best selection. The strategies also take account of error types, syntactic information, word frequency statistics, and character distance. The findings show that it is very hard to generalise the spelling correction strategy for various types of data sets such as typographical, orthographical, and scanning errors.",Typographical and Orthographical Spelling Error Correction,"This paper focuses on selection techniques for best correction of misspelt words at the lexical level. Spelling errors are introduced by either cognitive or typographical mistakes. A robust spelling correction algorithm is needed to cover both cognitive and typographical errors. For the most effective spelling correction system, various strategies are considered in this paper: ranking heuristics, correction algorithms, and correction priority strategies for the best selection. The strategies also take account of error types, syntactic information, word frequency statistics, and character distance. The findings show that it is very hard to generalise the spelling correction strategy for various types of data sets such as typographical, orthographical, and scanning errors.",,"Typographical and Orthographical Spelling Error Correction. This paper focuses on selection techniques for best correction of misspelt words at the lexical level. Spelling errors are introduced by either cognitive or typographical mistakes. A robust spelling correction algorithm is needed to cover both cognitive and typographical errors. For the most effective spelling correction system, various strategies are considered in this paper: ranking heuristics, correction algorithms, and correction priority strategies for the best selection. The strategies also take account of error types, syntactic information, word frequency statistics, and character distance. The findings show that it is very hard to generalise the spelling correction strategy for various types of data sets such as typographical, orthographical, and scanning errors.",2000
schluter-2018-word,https://aclanthology.org/N18-2039,0,,,,,,,"The Word Analogy Testing Caveat. There are some important problems in the evaluation of word embeddings using standard word analogy tests. In particular, in virtue of the assumptions made by systems generating the embeddings, these remain tests over randomness. We show that even supposing there were such word analogy regularities that should be detected in the word embeddings obtained via unsupervised means, standard word analogy test implementation practices provide distorted or contrived results. We raise concerns regarding the use of Principal Component Analysis to 2 or 3 dimensions as a provision of visual evidence for the existence of word analogy relations in embeddings. Finally, we propose some solutions to these problems.",The Word Analogy Testing Caveat,"There are some important problems in the evaluation of word embeddings using standard word analogy tests. In particular, in virtue of the assumptions made by systems generating the embeddings, these remain tests over randomness. We show that even supposing there were such word analogy regularities that should be detected in the word embeddings obtained via unsupervised means, standard word analogy test implementation practices provide distorted or contrived results. We raise concerns regarding the use of Principal Component Analysis to 2 or 3 dimensions as a provision of visual evidence for the existence of word analogy relations in embeddings. Finally, we propose some solutions to these problems.",The Word Analogy Testing Caveat,"There are some important problems in the evaluation of word embeddings using standard word analogy tests. In particular, in virtue of the assumptions made by systems generating the embeddings, these remain tests over randomness. We show that even supposing there were such word analogy regularities that should be detected in the word embeddings obtained via unsupervised means, standard word analogy test implementation practices provide distorted or contrived results. We raise concerns regarding the use of Principal Component Analysis to 2 or 3 dimensions as a provision of visual evidence for the existence of word analogy relations in embeddings. Finally, we propose some solutions to these problems.",,"The Word Analogy Testing Caveat. There are some important problems in the evaluation of word embeddings using standard word analogy tests. In particular, in virtue of the assumptions made by systems generating the embeddings, these remain tests over randomness. We show that even supposing there were such word analogy regularities that should be detected in the word embeddings obtained via unsupervised means, standard word analogy test implementation practices provide distorted or contrived results. We raise concerns regarding the use of Principal Component Analysis to 2 or 3 dimensions as a provision of visual evidence for the existence of word analogy relations in embeddings. Finally, we propose some solutions to these problems.",2018
keller-1995-towards,https://aclanthology.org/E95-1045,0,,,,,,,"Towards an Account of Extraposition in HPSG. This paper investigates the syntax of extraposition in the HPSG framework. We present English and German data (partly taken from corpora), and provide an analysis using a nonlocal dependency and lexical rules. The condition for binding the dependency is formulated relative to the antecedent of the extraposed phrase, which entails that no fixed site for extraposition exists. Our account allows us to explain the interaction of extraposition with fronting and coordination, and predicts constraints on multiple extraposition.",Towards an Account of Extraposition in {HPSG},"This paper investigates the syntax of extraposition in the HPSG framework. We present English and German data (partly taken from corpora), and provide an analysis using a nonlocal dependency and lexical rules. The condition for binding the dependency is formulated relative to the antecedent of the extraposed phrase, which entails that no fixed site for extraposition exists. Our account allows us to explain the interaction of extraposition with fronting and coordination, and predicts constraints on multiple extraposition.",Towards an Account of Extraposition in HPSG,"This paper investigates the syntax of extraposition in the HPSG framework. We present English and German data (partly taken from corpora), and provide an analysis using a nonlocal dependency and lexical rules. The condition for binding the dependency is formulated relative to the antecedent of the extraposed phrase, which entails that no fixed site for extraposition exists. Our account allows us to explain the interaction of extraposition with fronting and coordination, and predicts constraints on multiple extraposition.",,"Towards an Account of Extraposition in HPSG. This paper investigates the syntax of extraposition in the HPSG framework. We present English and German data (partly taken from corpora), and provide an analysis using a nonlocal dependency and lexical rules. The condition for binding the dependency is formulated relative to the antecedent of the extraposed phrase, which entails that no fixed site for extraposition exists. Our account allows us to explain the interaction of extraposition with fronting and coordination, and predicts constraints on multiple extraposition.",1995
kiefer-etal-2002-novel,https://aclanthology.org/C02-1075,0,,,,,,,"A Novel Disambiguation Method for Unification-Based Grammars Using Probabilistic Context-Free Approximations. We present a novel disambiguation method for unification-based grammars (UBGs). In contrast to other methods, our approach obviates the need for probability models on the UBG side in that it shifts the responsibility to simpler context-free models, indirectly obtained from the UBG. Our approach has three advantages: (i) training can be effectively done in practice, (ii) parsing and disambiguation of context-free readings requires only cubic time, and (iii) involved probability distributions are mathematically clean. In an experiment for a mid-size UBG, we show that our novel approach is feasible. Using unsupervised training, we achieve 88% accuracy on an exact-match task.",A Novel Disambiguation Method for Unification-Based Grammars Using Probabilistic Context-Free Approximations,"We present a novel disambiguation method for unification-based grammars (UBGs). In contrast to other methods, our approach obviates the need for probability models on the UBG side in that it shifts the responsibility to simpler context-free models, indirectly obtained from the UBG. Our approach has three advantages: (i) training can be effectively done in practice, (ii) parsing and disambiguation of context-free readings requires only cubic time, and (iii) involved probability distributions are mathematically clean. In an experiment for a mid-size UBG, we show that our novel approach is feasible. Using unsupervised training, we achieve 88% accuracy on an exact-match task.",A Novel Disambiguation Method for Unification-Based Grammars Using Probabilistic Context-Free Approximations,"We present a novel disambiguation method for unification-based grammars (UBGs). In contrast to other methods, our approach obviates the need for probability models on the UBG side in that it shifts the responsibility to simpler context-free models, indirectly obtained from the UBG. Our approach has three advantages: (i) training can be effectively done in practice, (ii) parsing and disambiguation of context-free readings requires only cubic time, and (iii) involved probability distributions are mathematically clean. In an experiment for a mid-size UBG, we show that our novel approach is feasible. Using unsupervised training, we achieve 88% accuracy on an exact-match task.","This research was supported by the German Federal Ministry for Education, Science, Research, and Technology under grant no. 01 IW 002 and EU grant no. IST-1999-11438. ","A Novel Disambiguation Method for Unification-Based Grammars Using Probabilistic Context-Free Approximations. We present a novel disambiguation method for unification-based grammars (UBGs). In contrast to other methods, our approach obviates the need for probability models on the UBG side in that it shifts the responsibility to simpler context-free models, indirectly obtained from the UBG. Our approach has three advantages: (i) training can be effectively done in practice, (ii) parsing and disambiguation of context-free readings requires only cubic time, and (iii) involved probability distributions are mathematically clean. In an experiment for a mid-size UBG, we show that our novel approach is feasible. Using unsupervised training, we achieve 88% accuracy on an exact-match task.",2002
porzel-baudis-2004-tao,https://aclanthology.org/N04-1027,0,,,,,,,"The Tao of CHI: Towards Effective Human-Computer Interaction. End-to-end evaluations of conversational dialogue systems with naive users are currently uncovering severe usability problems that result in low task completion rates. Preliminary analyses suggest that these problems are related to the system's dialogue management and turntaking behavior. We present the results of experiments designed to take a detailed look at the effects of that behavior. Based on the resulting findings, we spell out a set of criteria which lie orthogonal to dialogue quality, but nevertheless constitute an integral part of a more comprehensive view on dialogue felicity as a function of dialogue quality and efficiency.",The Tao of {CHI}: Towards Effective Human-Computer Interaction,"End-to-end evaluations of conversational dialogue systems with naive users are currently uncovering severe usability problems that result in low task completion rates. Preliminary analyses suggest that these problems are related to the system's dialogue management and turntaking behavior. We present the results of experiments designed to take a detailed look at the effects of that behavior. Based on the resulting findings, we spell out a set of criteria which lie orthogonal to dialogue quality, but nevertheless constitute an integral part of a more comprehensive view on dialogue felicity as a function of dialogue quality and efficiency.",The Tao of CHI: Towards Effective Human-Computer Interaction,"End-to-end evaluations of conversational dialogue systems with naive users are currently uncovering severe usability problems that result in low task completion rates. Preliminary analyses suggest that these problems are related to the system's dialogue management and turntaking behavior. We present the results of experiments designed to take a detailed look at the effects of that behavior. Based on the resulting findings, we spell out a set of criteria which lie orthogonal to dialogue quality, but nevertheless constitute an integral part of a more comprehensive view on dialogue felicity as a function of dialogue quality and efficiency.","This work has been partially funded by the German Federal Ministry of Research and Technology (BMBF) and by the Klaus Tschira Foundation as part of the SMARTKOM, SMARTWEB, and EDU projects. We would like to thank the International Computer Science Institute in Berkeley for their help in collecting the data especially, Lila Finhill, Thilo Pfau, Adam Janin and Fey Parrill.","The Tao of CHI: Towards Effective Human-Computer Interaction. End-to-end evaluations of conversational dialogue systems with naive users are currently uncovering severe usability problems that result in low task completion rates. Preliminary analyses suggest that these problems are related to the system's dialogue management and turntaking behavior. We present the results of experiments designed to take a detailed look at the effects of that behavior. Based on the resulting findings, we spell out a set of criteria which lie orthogonal to dialogue quality, but nevertheless constitute an integral part of a more comprehensive view on dialogue felicity as a function of dialogue quality and efficiency.",2004
ashok-etal-2014-dialogue,https://aclanthology.org/W14-4317,0,,,,,,,"Dialogue Act Modeling for Non-Visual Web Access. Speech-enabled dialogue systems have the potential to enhance the ease with which blind individuals can interact with the Web beyond what is possible with screen readers-the currently available assistive technology which narrates the textual content on the screen and provides shortcuts to navigate the content. In this paper, we present a dialogue act model towards developing a speech enabled browsing system. The model is based on the corpus data that was collected in a wizard-of-oz study with 24 blind individuals who were assigned a gamut of browsing tasks. The development of the model included extensive experiments with assorted feature sets and classifiers; the outcomes of the experiments and the analysis of the results are presented.",Dialogue Act Modeling for Non-Visual Web Access,"Speech-enabled dialogue systems have the potential to enhance the ease with which blind individuals can interact with the Web beyond what is possible with screen readers-the currently available assistive technology which narrates the textual content on the screen and provides shortcuts to navigate the content. In this paper, we present a dialogue act model towards developing a speech enabled browsing system. The model is based on the corpus data that was collected in a wizard-of-oz study with 24 blind individuals who were assigned a gamut of browsing tasks. The development of the model included extensive experiments with assorted feature sets and classifiers; the outcomes of the experiments and the analysis of the results are presented.",Dialogue Act Modeling for Non-Visual Web Access,"Speech-enabled dialogue systems have the potential to enhance the ease with which blind individuals can interact with the Web beyond what is possible with screen readers-the currently available assistive technology which narrates the textual content on the screen and provides shortcuts to navigate the content. In this paper, we present a dialogue act model towards developing a speech enabled browsing system. The model is based on the corpus data that was collected in a wizard-of-oz study with 24 blind individuals who were assigned a gamut of browsing tasks. The development of the model included extensive experiments with assorted feature sets and classifiers; the outcomes of the experiments and the analysis of the results are presented.",Research reported in this publication was supported by the National Eye Institute of the National Institutes of Health under award number 1R43EY21962-1A1. We would like to thank Lighthouse Guild International and Dr. William Seiple in particular for helping conduct user studies.,"Dialogue Act Modeling for Non-Visual Web Access. Speech-enabled dialogue systems have the potential to enhance the ease with which blind individuals can interact with the Web beyond what is possible with screen readers-the currently available assistive technology which narrates the textual content on the screen and provides shortcuts to navigate the content. In this paper, we present a dialogue act model towards developing a speech enabled browsing system. The model is based on the corpus data that was collected in a wizard-of-oz study with 24 blind individuals who were assigned a gamut of browsing tasks. The development of the model included extensive experiments with assorted feature sets and classifiers; the outcomes of the experiments and the analysis of the results are presented.",2014
kuroda-2010-arguments,https://aclanthology.org/Y10-1052,0,,,,,,,"Arguments for Parallel Distributed Parsing: Toward the Integration of Lexical and Sublexical (Semantic) Parsings. This paper illustrates the idea of parallel distributed parsing (PDP), which allows us to integrate lexical and sublexical analyses. PDP is proposed for providing a new model of efficient, information-rich parses that can remedy the data sparseness problem. 1) The example and explanation were taken from http://nlp.stanford.edu/projects/shallow-parsing.shtml.",Arguments for Parallel Distributed Parsing: Toward the Integration of Lexical and Sublexical (Semantic) Parsings,"This paper illustrates the idea of parallel distributed parsing (PDP), which allows us to integrate lexical and sublexical analyses. PDP is proposed for providing a new model of efficient, information-rich parses that can remedy the data sparseness problem. 1) The example and explanation were taken from http://nlp.stanford.edu/projects/shallow-parsing.shtml.",Arguments for Parallel Distributed Parsing: Toward the Integration of Lexical and Sublexical (Semantic) Parsings,"This paper illustrates the idea of parallel distributed parsing (PDP), which allows us to integrate lexical and sublexical analyses. PDP is proposed for providing a new model of efficient, information-rich parses that can remedy the data sparseness problem. 1) The example and explanation were taken from http://nlp.stanford.edu/projects/shallow-parsing.shtml.",,"Arguments for Parallel Distributed Parsing: Toward the Integration of Lexical and Sublexical (Semantic) Parsings. This paper illustrates the idea of parallel distributed parsing (PDP), which allows us to integrate lexical and sublexical analyses. PDP is proposed for providing a new model of efficient, information-rich parses that can remedy the data sparseness problem. 1) The example and explanation were taken from http://nlp.stanford.edu/projects/shallow-parsing.shtml.",2010
liu-etal-2021-morphological,https://aclanthology.org/2021.americasnlp-1.10,0,,,,,,,"Morphological Segmentation for Seneca. This study takes up the task of low-resource morphological segmentation for Seneca, a critically endangered and morphologically complex Native American language primarily spoken in what is now New York State and Ontario. The labeled data in our experiments comes from two sources: one digitized from a publicly available grammar book and the other collected from informal sources. We treat these two sources as distinct domains and investigate different evaluation designs for model selection. The first design abides by standard practices and evaluates models with the in-domain development set, while the second one carries out evaluation using a development domain, or the out-of-domain development set. Across a series of monolingual and cross-linguistic training settings, our results demonstrate the utility of neural encoder-decoder architecture when coupled with multitask learning.",Morphological Segmentation for {S}eneca,"This study takes up the task of low-resource morphological segmentation for Seneca, a critically endangered and morphologically complex Native American language primarily spoken in what is now New York State and Ontario. The labeled data in our experiments comes from two sources: one digitized from a publicly available grammar book and the other collected from informal sources. We treat these two sources as distinct domains and investigate different evaluation designs for model selection. The first design abides by standard practices and evaluates models with the in-domain development set, while the second one carries out evaluation using a development domain, or the out-of-domain development set. Across a series of monolingual and cross-linguistic training settings, our results demonstrate the utility of neural encoder-decoder architecture when coupled with multitask learning.",Morphological Segmentation for Seneca,"This study takes up the task of low-resource morphological segmentation for Seneca, a critically endangered and morphologically complex Native American language primarily spoken in what is now New York State and Ontario. The labeled data in our experiments comes from two sources: one digitized from a publicly available grammar book and the other collected from informal sources. We treat these two sources as distinct domains and investigate different evaluation designs for model selection. The first design abides by standard practices and evaluates models with the in-domain development set, while the second one carries out evaluation using a development domain, or the out-of-domain development set. Across a series of monolingual and cross-linguistic training settings, our results demonstrate the utility of neural encoder-decoder architecture when coupled with multitask learning.","We are grateful for the cooperation and support of the Seneca Nation of Indians. This material is based upon work supported by the National Science Foundation under Grant No. 1761562. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.","Morphological Segmentation for Seneca. This study takes up the task of low-resource morphological segmentation for Seneca, a critically endangered and morphologically complex Native American language primarily spoken in what is now New York State and Ontario. 
The labeled data in our experiments comes from two sources: one digitized from a publicly available grammar book and the other collected from informal sources. We treat these two sources as distinct domains and investigate different evaluation designs for model selection. The first design abides by standard practices and evaluates models with the in-domain development set, while the second one carries out evaluation using a development domain, or the out-of-domain development set. Across a series of monolingual and cross-linguistic training settings, our results demonstrate the utility of neural encoder-decoder architecture when coupled with multitask learning.",2021
rei-etal-2021-mt,https://aclanthology.org/2021.acl-demo.9,0,,,,,,,"MT-Telescope: An interactive platform for contrastive evaluation of MT systems. We present MT-TELESCOPE, a visualization platform designed to facilitate comparative analysis of the output quality of two Machine Translation (MT) systems. While automated MT evaluation metrics are commonly used to evaluate MT systems at a corpus-level, our platform supports fine-grained segment-level analysis and interactive visualisations that expose the fundamental differences in the performance of the compared systems. MT-TELESCOPE also supports dynamic corpus filtering to enable focused analysis on specific phenomena such as; translation of named entities, handling of terminology, and the impact of input segment length on translation quality. Furthermore, the platform provides a bootstrapped t-test for statistical significance as a means of evaluating the rigor of the resulting system ranking. MT-TELESCOPE is open source 1 , written in Python, and is built around a user friendly and dynamic web interface. Complementing other existing tools, our platform is designed to facilitate and promote the broader adoption of more rigorous analysis practices in the evaluation of MT quality.",{MT}-{T}elescope: {A}n interactive platform for contrastive evaluation of {MT} systems,"We present MT-TELESCOPE, a visualization platform designed to facilitate comparative analysis of the output quality of two Machine Translation (MT) systems. While automated MT evaluation metrics are commonly used to evaluate MT systems at a corpus-level, our platform supports fine-grained segment-level analysis and interactive visualisations that expose the fundamental differences in the performance of the compared systems. MT-TELESCOPE also supports dynamic corpus filtering to enable focused analysis on specific phenomena such as; translation of named entities, handling of terminology, and the impact of input segment length on translation quality. Furthermore, the platform provides a bootstrapped t-test for statistical significance as a means of evaluating the rigor of the resulting system ranking. MT-TELESCOPE is open source 1 , written in Python, and is built around a user friendly and dynamic web interface. Complementing other existing tools, our platform is designed to facilitate and promote the broader adoption of more rigorous analysis practices in the evaluation of MT quality.",MT-Telescope: An interactive platform for contrastive evaluation of MT systems,"We present MT-TELESCOPE, a visualization platform designed to facilitate comparative analysis of the output quality of two Machine Translation (MT) systems. While automated MT evaluation metrics are commonly used to evaluate MT systems at a corpus-level, our platform supports fine-grained segment-level analysis and interactive visualisations that expose the fundamental differences in the performance of the compared systems. MT-TELESCOPE also supports dynamic corpus filtering to enable focused analysis on specific phenomena such as; translation of named entities, handling of terminology, and the impact of input segment length on translation quality. Furthermore, the platform provides a bootstrapped t-test for statistical significance as a means of evaluating the rigor of the resulting system ranking. MT-TELESCOPE is open source 1 , written in Python, and is built around a user friendly and dynamic web interface. 
Complementing other existing tools, our platform is designed to facilitate and promote the broader adoption of more rigorous analysis practices in the evaluation of MT quality.","We are grateful to the Unbabel MT team, specially Austin Matthews and João Alves, for their valuable feedback. This work was supported in part by the P2020 Program through projects MAIA and Unbabel4EU, supervised by ANI under contract numbers 045909 and 042671, respectively.","MT-Telescope: An interactive platform for contrastive evaluation of MT systems. We present MT-TELESCOPE, a visualization platform designed to facilitate comparative analysis of the output quality of two Machine Translation (MT) systems. While automated MT evaluation metrics are commonly used to evaluate MT systems at a corpus-level, our platform supports fine-grained segment-level analysis and interactive visualisations that expose the fundamental differences in the performance of the compared systems. MT-TELESCOPE also supports dynamic corpus filtering to enable focused analysis on specific phenomena such as; translation of named entities, handling of terminology, and the impact of input segment length on translation quality. Furthermore, the platform provides a bootstrapped t-test for statistical significance as a means of evaluating the rigor of the resulting system ranking. MT-TELESCOPE is open source 1 , written in Python, and is built around a user friendly and dynamic web interface. Complementing other existing tools, our platform is designed to facilitate and promote the broader adoption of more rigorous analysis practices in the evaluation of MT quality.",2021
bartsch-2004-annotating,http://www.lrec-conf.org/proceedings/lrec2004/pdf/361.pdf,0,,,,,,,Annotating a Corpus for Building a Domain-specific Knowledge Base. The project described in this paper seeks to develop a knowledge base for the domain of data processing in construction-a sub-domain of mechanical engineering-based on a corpus of authentic natural language text. Central in this undertaking is the annotation of the relevant linguistic and conceptual units and structures which are to form the basis of the knowledge base. This paper describes the levels of annotation and the ontology on which the knowledge base is going to be modelled and sketches some of the linguistic relations which are used in building the knowledge base.,Annotating a Corpus for Building a Domain-specific Knowledge Base,The project described in this paper seeks to develop a knowledge base for the domain of data processing in construction-a sub-domain of mechanical engineering-based on a corpus of authentic natural language text. Central in this undertaking is the annotation of the relevant linguistic and conceptual units and structures which are to form the basis of the knowledge base. This paper describes the levels of annotation and the ontology on which the knowledge base is going to be modelled and sketches some of the linguistic relations which are used in building the knowledge base.,Annotating a Corpus for Building a Domain-specific Knowledge Base,The project described in this paper seeks to develop a knowledge base for the domain of data processing in construction-a sub-domain of mechanical engineering-based on a corpus of authentic natural language text. Central in this undertaking is the annotation of the relevant linguistic and conceptual units and structures which are to form the basis of the knowledge base. This paper describes the levels of annotation and the ontology on which the knowledge base is going to be modelled and sketches some of the linguistic relations which are used in building the knowledge base.,,Annotating a Corpus for Building a Domain-specific Knowledge Base. The project described in this paper seeks to develop a knowledge base for the domain of data processing in construction-a sub-domain of mechanical engineering-based on a corpus of authentic natural language text. Central in this undertaking is the annotation of the relevant linguistic and conceptual units and structures which are to form the basis of the knowledge base. This paper describes the levels of annotation and the ontology on which the knowledge base is going to be modelled and sketches some of the linguistic relations which are used in building the knowledge base.,2004
martins-etal-2012-structured,https://aclanthology.org/N12-4002,0,,,,,,,"Structured Sparsity in Natural Language Processing: Models, Algorithms and Applications. This tutorial will cover recent advances in sparse modeling with diverse applications in natural language processing (NLP). A sparse model is one that uses a relatively small number of features to map an input to an output, such as a label sequence or parse tree. The advantages of sparsity are, among others, compactness and interpretability; in fact, sparsity is currently a major theme in statistics, machine learning, and signal processing. The goal of sparsity can be seen in terms of earlier goals of feature selection and therefore model selection (Della","Structured Sparsity in Natural Language Processing: Models, Algorithms and Applications","This tutorial will cover recent advances in sparse modeling with diverse applications in natural language processing (NLP). A sparse model is one that uses a relatively small number of features to map an input to an output, such as a label sequence or parse tree. The advantages of sparsity are, among others, compactness and interpretability; in fact, sparsity is currently a major theme in statistics, machine learning, and signal processing. The goal of sparsity can be seen in terms of earlier goals of feature selection and therefore model selection (Della","Structured Sparsity in Natural Language Processing: Models, Algorithms and Applications","This tutorial will cover recent advances in sparse modeling with diverse applications in natural language processing (NLP). A sparse model is one that uses a relatively small number of features to map an input to an output, such as a label sequence or parse tree. The advantages of sparsity are, among others, compactness and interpretability; in fact, sparsity is currently a major theme in statistics, machine learning, and signal processing. The goal of sparsity can be seen in terms of earlier goals of feature selection and therefore model selection (Della",This tutorial was enabled by support from the following organizations: ,"Structured Sparsity in Natural Language Processing: Models, Algorithms and Applications. This tutorial will cover recent advances in sparse modeling with diverse applications in natural language processing (NLP). A sparse model is one that uses a relatively small number of features to map an input to an output, such as a label sequence or parse tree. The advantages of sparsity are, among others, compactness and interpretability; in fact, sparsity is currently a major theme in statistics, machine learning, and signal processing. The goal of sparsity can be seen in terms of earlier goals of feature selection and therefore model selection (Della",2012
malmasi-dras-2014-chinese,https://aclanthology.org/E14-4019,0,,,,,,,"Chinese Native Language Identification. We present the first application of Native Language Identification (NLI) to non-English data. Motivated by theories of language transfer, NLI is the task of identifying a writer's native language (L1) based on their writings in a second language (the L2). An NLI system was applied to Chinese learner texts using topic-independent syntactic models to assess their accuracy. We find that models using part-of-speech tags, context-free grammar production rules and function words are highly effective, achieving a maximum accuracy of 71%. Interestingly, we also find that when applied to equivalent English data, the model performance is almost identical. This finding suggests a systematic pattern of cross-linguistic transfer may exist, where the degree of transfer is independent of the L1 and L2.",{C}hinese Native Language Identification,"We present the first application of Native Language Identification (NLI) to non-English data. Motivated by theories of language transfer, NLI is the task of identifying a writer's native language (L1) based on their writings in a second language (the L2). An NLI system was applied to Chinese learner texts using topic-independent syntactic models to assess their accuracy. We find that models using part-of-speech tags, context-free grammar production rules and function words are highly effective, achieving a maximum accuracy of 71%. Interestingly, we also find that when applied to equivalent English data, the model performance is almost identical. This finding suggests a systematic pattern of cross-linguistic transfer may exist, where the degree of transfer is independent of the L1 and L2.",Chinese Native Language Identification,"We present the first application of Native Language Identification (NLI) to non-English data. Motivated by theories of language transfer, NLI is the task of identifying a writer's native language (L1) based on their writings in a second language (the L2). An NLI system was applied to Chinese learner texts using topic-independent syntactic models to assess their accuracy. We find that models using part-of-speech tags, context-free grammar production rules and function words are highly effective, achieving a maximum accuracy of 71%. Interestingly, we also find that when applied to equivalent English data, the model performance is almost identical. This finding suggests a systematic pattern of cross-linguistic transfer may exist, where the degree of transfer is independent of the L1 and L2.","We wish to thank Associate Professor Maolin Wang for providing access to the CLC corpus, and Zhendong Zhao for his assistance. We also thank the reviewers for their constructive feedback.","Chinese Native Language Identification. We present the first application of Native Language Identification (NLI) to non-English data. Motivated by theories of language transfer, NLI is the task of identifying a writer's native language (L1) based on their writings in a second language (the L2). An NLI system was applied to Chinese learner texts using topic-independent syntactic models to assess their accuracy. We find that models using part-of-speech tags, context-free grammar production rules and function words are highly effective, achieving a maximum accuracy of 71%. Interestingly, we also find that when applied to equivalent English data, the model performance is almost identical. 
This finding suggests a systematic pattern of cross-linguistic transfer may exist, where the degree of transfer is independent of the L1 and L2.",2014
obeidat-etal-2019-description,https://aclanthology.org/N19-1087,0,,,,,,,"Description-Based Zero-shot Fine-Grained Entity Typing. Fine-grained Entity typing (FGET) is the task of assigning a fine-grained type from a hierarchy to entity mentions in the text. As the taxonomy of types evolves continuously, it is desirable for an entity typing system to be able to recognize novel types without additional training. This work proposes a zero-shot entity typing approach that utilizes the type description available from Wikipedia to build a distributed semantic representation of the types. During training, our system learns to align the entity mentions and their corresponding type representations on the known types. At test time, any new type can be incorporated into the system given its Wikipedia descriptions. We evaluate our approach on FIGER, a public benchmark entity typing dataset. Because the existing test set of FIGER covers only a small portion of the fine-grained types, we create a new test set by manually annotating a portion of the noisy training data. Our experiments demonstrate the effectiveness of the proposed method in recognizing novel types that are not present in the training data.",Description-Based Zero-shot Fine-Grained Entity Typing,"Fine-grained Entity typing (FGET) is the task of assigning a fine-grained type from a hierarchy to entity mentions in the text. As the taxonomy of types evolves continuously, it is desirable for an entity typing system to be able to recognize novel types without additional training. This work proposes a zero-shot entity typing approach that utilizes the type description available from Wikipedia to build a distributed semantic representation of the types. During training, our system learns to align the entity mentions and their corresponding type representations on the known types. At test time, any new type can be incorporated into the system given its Wikipedia descriptions. We evaluate our approach on FIGER, a public benchmark entity typing dataset. Because the existing test set of FIGER covers only a small portion of the fine-grained types, we create a new test set by manually annotating a portion of the noisy training data. Our experiments demonstrate the effectiveness of the proposed method in recognizing novel types that are not present in the training data.",Description-Based Zero-shot Fine-Grained Entity Typing,"Fine-grained Entity typing (FGET) is the task of assigning a fine-grained type from a hierarchy to entity mentions in the text. As the taxonomy of types evolves continuously, it is desirable for an entity typing system to be able to recognize novel types without additional training. This work proposes a zero-shot entity typing approach that utilizes the type description available from Wikipedia to build a distributed semantic representation of the types. During training, our system learns to align the entity mentions and their corresponding type representations on the known types. At test time, any new type can be incorporated into the system given its Wikipedia descriptions. We evaluate our approach on FIGER, a public benchmark entity typing dataset. Because the existing test set of FIGER covers only a small portion of the fine-grained types, we create a new test set by manually annotating a portion of the noisy training data. Our experiments demonstrate the effectiveness of the proposed method in recognizing novel types that are not present in the training data.",We thank Jordan University of Science and Technology for Ph.D. 
fellowship (to R. O.).,"Description-Based Zero-shot Fine-Grained Entity Typing. Fine-grained Entity typing (FGET) is the task of assigning a fine-grained type from a hierarchy to entity mentions in the text. As the taxonomy of types evolves continuously, it is desirable for an entity typing system to be able to recognize novel types without additional training. This work proposes a zero-shot entity typing approach that utilizes the type description available from Wikipedia to build a distributed semantic representation of the types. During training, our system learns to align the entity mentions and their corresponding type representations on the known types. At test time, any new type can be incorporated into the system given its Wikipedia descriptions. We evaluate our approach on FIGER, a public benchmark entity typing dataset. Because the existing test set of FIGER covers only a small portion of the fine-grained types, we create a new test set by manually annotating a portion of the noisy training data. Our experiments demonstrate the effectiveness of the proposed method in recognizing novel types that are not present in the training data.",2019
shirai-etal-1980-trial,https://aclanthology.org/C80-1070,0,,,,,,,"A Trial of Japanese Text Input System Using Speech Recognition. Since written Japanese texts are expressed by many kinds of characters, input technique is most difficult when Japanese information is processed by computers. Therefore a task-independent Japanese text input system which has a speech analyzer as a main input device and a keyboard as an auxiliary device was designed and it has been implemented. The outline and experience of this system is described in this paper.
The system consists of the phoneme discrimination part and the word discrimination part.",A Trial of {J}apanese Text Input System Using Speech Recognition,"Since written Japanese texts are expressed by many kinds of characters, input technique is most difficult when Japanese information is processed by computers. Therefore a task-independent Japanese text input system which has a speech analyzer as a main input device and a keyboard as an auxiliary device was designed and it has been implemented. The outline and experience of this system is described in this paper.
The system consists of the phoneme discrimination part and the word discrimination part.",A Trial of Japanese Text Input System Using Speech Recognition,"Since written Japanese texts are expressed by many kinds of characters, input technique is most difficult when Japanese information is processed by computers. Therefore a task-independent Japanese text input system which has a speech analyzer as a main input device and a keyboard as an auxiliary device was designed and it has been implemented. The outline and experience of this system is described in this paper.
The system consists of the phoneme discrimination part and the word discrimination part.","The authors wish to thank J.Kubota, T. Kobayashi and M.Ohashi for their contributions to designing and developing this system.","A Trial of Japanese Text Input System Using Speech Recognition. Since written Japanese texts are expressed by many kinds of characters, input technique is most difficult when Japanese information is processed by computers. Therefore a task-independent Japanese text input system which has a speech analyzer as a main input device and a keyboard as an auxiliary device was designed and it has been implemented. The outline and experience of this system is described in this paper.
The system consists of the phoneme discrimination part and the word discrimination part.",1980
cubel-etal-2003-adapting,https://aclanthology.org/2003.eamt-1.6,0,,,,,,,"Adapting finite-state translation to the TransType2 project. Machine translation can play an important role nowadays, helping communication between people. One of the projects in this field is TransType2 1. Its purpose is to develop an innovative, interactive machine translation system. TransType2 aims at facilitating the task of producing high-quality translations, and making the translation task more cost-effective for human translators. To achieve this goal, stochastic finite-state transducers are being used. Stochastic finite-state transducers are generated by means of hybrid finite-state and statistical alignment techniques. The Viterbi parsing procedure with stochastic finite-state transducers has been adapted to take into account the source sentence to be translated and the target prefix given by the human translator. Experiments have been carried out with a corpus of printer manuals. The first results showed that with this preliminary prototype, users only need to type 15% of the words instead of the whole translated text.",Adapting finite-state translation to the {T}rans{T}ype2 project,"Machine translation can play an important role nowadays, helping communication between people. One of the projects in this field is TransType2 1. Its purpose is to develop an innovative, interactive machine translation system. TransType2 aims at facilitating the task of producing high-quality translations, and making the translation task more cost-effective for human translators. To achieve this goal, stochastic finite-state transducers are being used. Stochastic finite-state transducers are generated by means of hybrid finite-state and statistical alignment techniques. The Viterbi parsing procedure with stochastic finite-state transducers has been adapted to take into account the source sentence to be translated and the target prefix given by the human translator. Experiments have been carried out with a corpus of printer manuals. The first results showed that with this preliminary prototype, users only need to type 15% of the words instead of the whole translated text.",Adapting finite-state translation to the TransType2 project,"Machine translation can play an important role nowadays, helping communication between people. One of the projects in this field is TransType2 1. Its purpose is to develop an innovative, interactive machine translation system. TransType2 aims at facilitating the task of producing high-quality translations, and making the translation task more cost-effective for human translators. To achieve this goal, stochastic finite-state transducers are being used. Stochastic finite-state transducers are generated by means of hybrid finite-state and statistical alignment techniques. The Viterbi parsing procedure with stochastic finite-state transducers has been adapted to take into account the source sentence to be translated and the target prefix given by the human translator. Experiments have been carried out with a corpus of printer manuals. The first results showed that with this preliminary prototype, users only need to type 15% of the words instead of the whole translated text.",The authors would like to thank the researchers involved in the TT2 project who have developed the methodologies that are presented in this paper. This work has been supported by the European Union under the IST Programme (IST-2001-32091).,"Adapting finite-state translation to the TransType2 project. 
Machine translation can play an important role nowadays, helping communication between people. One of the projects in this field is TransType2 1. Its purpose is to develop an innovative, interactive machine translation system. TransType2 aims at facilitating the task of producing high-quality translations, and making the translation task more cost-effective for human translators. To achieve this goal, stochastic finite-state transducers are being used. Stochastic finite-state transducers are generated by means of hybrid finite-state and statistical alignment techniques. The Viterbi parsing procedure with stochastic finite-state transducers has been adapted to take into account the source sentence to be translated and the target prefix given by the human translator. Experiments have been carried out with a corpus of printer manuals. The first results showed that with this preliminary prototype, users only need to type 15% of the words instead of the whole translated text.",2003
libovicky-pecina-2014-tolerant,https://aclanthology.org/W14-3353,0,,,,,,,"Tolerant BLEU: a Submission to the WMT14 Metrics Task. This paper describes a machine translation metric submitted to the WMT14 Metrics Task. It is a simple modification of the standard BLEU metric using a monolingual alignment of reference and test sentences. The alignment is computed as a minimum weighted maximum bipartite matching of the translated and the reference sentence words with respect to the relative edit distance of the word prefixes and suffixes. The aligned words are included in the n-gram precision computation with a penalty proportional to the matching distance. The proposed tBLEU metric is designed to be more tolerant to errors in inflection, which usually does not effect the understandability of a sentence, and therefore be more suitable for measuring quality of translation into morphologically richer languages.",Tolerant {BLEU}: a Submission to the {WMT}14 Metrics Task,"This paper describes a machine translation metric submitted to the WMT14 Metrics Task. It is a simple modification of the standard BLEU metric using a monolingual alignment of reference and test sentences. The alignment is computed as a minimum weighted maximum bipartite matching of the translated and the reference sentence words with respect to the relative edit distance of the word prefixes and suffixes. The aligned words are included in the n-gram precision computation with a penalty proportional to the matching distance. The proposed tBLEU metric is designed to be more tolerant to errors in inflection, which usually does not effect the understandability of a sentence, and therefore be more suitable for measuring quality of translation into morphologically richer languages.",Tolerant BLEU: a Submission to the WMT14 Metrics Task,"This paper describes a machine translation metric submitted to the WMT14 Metrics Task. It is a simple modification of the standard BLEU metric using a monolingual alignment of reference and test sentences. The alignment is computed as a minimum weighted maximum bipartite matching of the translated and the reference sentence words with respect to the relative edit distance of the word prefixes and suffixes. The aligned words are included in the n-gram precision computation with a penalty proportional to the matching distance. The proposed tBLEU metric is designed to be more tolerant to errors in inflection, which usually does not effect the understandability of a sentence, and therefore be more suitable for measuring quality of translation into morphologically richer languages.",This research has been funded by the Czech Science Foundation (grant n. P103/12/G084) and the EU FP7 project Khresmoi (contract no. 257528).,"Tolerant BLEU: a Submission to the WMT14 Metrics Task. This paper describes a machine translation metric submitted to the WMT14 Metrics Task. It is a simple modification of the standard BLEU metric using a monolingual alignment of reference and test sentences. The alignment is computed as a minimum weighted maximum bipartite matching of the translated and the reference sentence words with respect to the relative edit distance of the word prefixes and suffixes. The aligned words are included in the n-gram precision computation with a penalty proportional to the matching distance. 
The proposed tBLEU metric is designed to be more tolerant to errors in inflection, which usually does not effect the understandability of a sentence, and therefore be more suitable for measuring quality of translation into morphologically richer languages.",2014
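The alignment step in the tBLEU record above maps directly onto a standard assignment problem. The sketch below is not the authors' implementation and simplifies the cost to a whole-word relative edit distance rather than the prefix/suffix variant the abstract mentions; it only shows how a minimum weighted bipartite matching between translated and reference words can be computed, here with SciPy's assignment solver. All function names are invented for the example.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance via dynamic programming."""
    dp = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    dp[:, 0] = np.arange(len(a) + 1)
    dp[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i, j] = min(dp[i - 1, j] + 1, dp[i, j - 1] + 1, dp[i - 1, j - 1] + cost)
    return int(dp[len(a), len(b)])


def relative_distance(w1: str, w2: str) -> float:
    """Edit distance normalised by the longer word, so 0.0 means identical."""
    return edit_distance(w1, w2) / max(len(w1), len(w2), 1)


def align_words(hypothesis, reference):
    """Minimum-weight matching between hypothesis and reference tokens."""
    cost = np.array([[relative_distance(h, r) for r in reference] for h in hypothesis])
    rows, cols = linear_sum_assignment(cost)  # Hungarian-style assignment
    return [(i, j, cost[i, j]) for i, j in zip(rows, cols)]


if __name__ == "__main__":
    hyp = "the cats sits on mats".split()
    ref = "the cat sat on the mat".split()
    for i, j, d in align_words(hyp, ref):
        print(f"{hyp[i]:>6} -> {ref[j]:<6} distance={d:.2f}")
```

In a metric like the one described, aligned pairs with non-zero distance would then enter the n-gram precision computation with a penalty proportional to that distance.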
mao-etal-2008-chinese,https://aclanthology.org/I08-4013,0,,,,,,,"Chinese Word Segmentation and Named Entity Recognition Based on Conditional Random Fields. Chinese word segmentation (CWS), named entity recognition (NER) and part-ofspeech tagging is the lexical processing in Chinese language. This paper describes the work on these tasks done by France Telecom Team (Beijing) at the fourth International Chinese Language Processing Bakeoff. In particular, we employ Conditional Random Fields with different features for these tasks. In order to improve NER relatively low recall; we exploit non-local features and alleviate class imbalanced distribution on NER dataset to enhance the recall and keep its relatively high precision. Some other post-processing measures such as consistency checking and transformation-based error-driven learning are used to improve word segmentation performance. Our systems participated in most CWS and POS tagging evaluations and all the NER tracks. As a result, our NER system achieves the first ranks on MSRA open track and MSRA/CityU closed track. Our CWS system achieves the first rank on CityU open track, which means that our systems achieve state-of-the-art performance on Chinese lexical processing.",{C}hinese Word Segmentation and Named Entity Recognition Based on Conditional Random Fields,"Chinese word segmentation (CWS), named entity recognition (NER) and part-ofspeech tagging is the lexical processing in Chinese language. This paper describes the work on these tasks done by France Telecom Team (Beijing) at the fourth International Chinese Language Processing Bakeoff. In particular, we employ Conditional Random Fields with different features for these tasks. In order to improve NER relatively low recall; we exploit non-local features and alleviate class imbalanced distribution on NER dataset to enhance the recall and keep its relatively high precision. Some other post-processing measures such as consistency checking and transformation-based error-driven learning are used to improve word segmentation performance. Our systems participated in most CWS and POS tagging evaluations and all the NER tracks. As a result, our NER system achieves the first ranks on MSRA open track and MSRA/CityU closed track. Our CWS system achieves the first rank on CityU open track, which means that our systems achieve state-of-the-art performance on Chinese lexical processing.",Chinese Word Segmentation and Named Entity Recognition Based on Conditional Random Fields,"Chinese word segmentation (CWS), named entity recognition (NER) and part-ofspeech tagging is the lexical processing in Chinese language. This paper describes the work on these tasks done by France Telecom Team (Beijing) at the fourth International Chinese Language Processing Bakeoff. In particular, we employ Conditional Random Fields with different features for these tasks. In order to improve NER relatively low recall; we exploit non-local features and alleviate class imbalanced distribution on NER dataset to enhance the recall and keep its relatively high precision. Some other post-processing measures such as consistency checking and transformation-based error-driven learning are used to improve word segmentation performance. Our systems participated in most CWS and POS tagging evaluations and all the NER tracks. As a result, our NER system achieves the first ranks on MSRA open track and MSRA/CityU closed track. 
Our CWS system achieves the first rank on CityU open track, which means that our systems achieve state-of-the-art performance on Chinese lexical processing.",,"Chinese Word Segmentation and Named Entity Recognition Based on Conditional Random Fields. Chinese word segmentation (CWS), named entity recognition (NER) and part-ofspeech tagging is the lexical processing in Chinese language. This paper describes the work on these tasks done by France Telecom Team (Beijing) at the fourth International Chinese Language Processing Bakeoff. In particular, we employ Conditional Random Fields with different features for these tasks. In order to improve NER relatively low recall; we exploit non-local features and alleviate class imbalanced distribution on NER dataset to enhance the recall and keep its relatively high precision. Some other post-processing measures such as consistency checking and transformation-based error-driven learning are used to improve word segmentation performance. Our systems participated in most CWS and POS tagging evaluations and all the NER tracks. As a result, our NER system achieves the first ranks on MSRA open track and MSRA/CityU closed track. Our CWS system achieves the first rank on CityU open track, which means that our systems achieve state-of-the-art performance on Chinese lexical processing.",2008
egan-2010-cross,https://aclanthology.org/2010.amta-government.5,0,,,,,,,Cross Lingual Arabic Blog Alerting (COLABA). ,Cross Lingual {A}rabic Blog Alerting ({COLABA}),,Cross Lingual Arabic Blog Alerting (COLABA),,,Cross Lingual Arabic Blog Alerting (COLABA). ,2010
kaplan-1997-lexical,https://aclanthology.org/W97-1508,0,,,,,,,"Lexical Resource Reconciliation in the Xerox Linguistic Environment. This paper motivates and describes those aspects of the Xerox Linguistic Environment (XLE) that facilitate the construction of broad-coverage Lexical Functional grammars by incorporating morphological and lexical material from external resources. Because that material can be incorrect, incomplete, or otherwise incompatible with the grammar, mechanisms are provided to correct and augment the external material to suit the needs of the grammar developer. This can be accomplished without direct modification of the incorporated material, which is often infeasible or undesirable. Externally-developed finite-state morphological analyzers are reconciled with grammar requirements by run-time simulation of finite-state calculus operations for combining transducers. Lexical entries derived by automatic extraction from on-line dictionaries or via corpus-analysis tools are incorporated and reconciled by extending the LFG lexicon formalism to allow fine-tuned integration of information from difference sources.",Lexical Resource Reconciliation in the Xerox Linguistic Environment,"This paper motivates and describes those aspects of the Xerox Linguistic Environment (XLE) that facilitate the construction of broad-coverage Lexical Functional grammars by incorporating morphological and lexical material from external resources. Because that material can be incorrect, incomplete, or otherwise incompatible with the grammar, mechanisms are provided to correct and augment the external material to suit the needs of the grammar developer. This can be accomplished without direct modification of the incorporated material, which is often infeasible or undesirable. Externally-developed finite-state morphological analyzers are reconciled with grammar requirements by run-time simulation of finite-state calculus operations for combining transducers. Lexical entries derived by automatic extraction from on-line dictionaries or via corpus-analysis tools are incorporated and reconciled by extending the LFG lexicon formalism to allow fine-tuned integration of information from difference sources.",Lexical Resource Reconciliation in the Xerox Linguistic Environment,"This paper motivates and describes those aspects of the Xerox Linguistic Environment (XLE) that facilitate the construction of broad-coverage Lexical Functional grammars by incorporating morphological and lexical material from external resources. Because that material can be incorrect, incomplete, or otherwise incompatible with the grammar, mechanisms are provided to correct and augment the external material to suit the needs of the grammar developer. This can be accomplished without direct modification of the incorporated material, which is often infeasible or undesirable. Externally-developed finite-state morphological analyzers are reconciled with grammar requirements by run-time simulation of finite-state calculus operations for combining transducers. 
Lexical entries derived by automatic extraction from on-line dictionaries or via corpus-analysis tools are incorporated and reconciled by extending the LFG lexicon formalism to allow fine-tuned integration of information from difference sources.","We would like to thank the participants of the Pargram Parallel Grammar project for raising the issues motivating the work described in this paper, in particular Miriam Butt and Christian Rohrer for identifying the lexicon-related problems, and Tracy Holloway King and Marfa-Eugenia Nifio for bringing morphological problems to our attention. We also thank John Maxwell for his contribution towards formulating one of the approaches described, and Max Copperman for his help in implementing the facilities. And we thank Max Copperman, Mary Dalrymple, and John Maxwell for their editorial assistance.","Lexical Resource Reconciliation in the Xerox Linguistic Environment. This paper motivates and describes those aspects of the Xerox Linguistic Environment (XLE) that facilitate the construction of broad-coverage Lexical Functional grammars by incorporating morphological and lexical material from external resources. Because that material can be incorrect, incomplete, or otherwise incompatible with the grammar, mechanisms are provided to correct and augment the external material to suit the needs of the grammar developer. This can be accomplished without direct modification of the incorporated material, which is often infeasible or undesirable. Externally-developed finite-state morphological analyzers are reconciled with grammar requirements by run-time simulation of finite-state calculus operations for combining transducers. Lexical entries derived by automatic extraction from on-line dictionaries or via corpus-analysis tools are incorporated and reconciled by extending the LFG lexicon formalism to allow fine-tuned integration of information from difference sources.",1997
tiedemann-nygaard-2004-opus,http://www.lrec-conf.org/proceedings/lrec2004/pdf/320.pdf,0,,,,,,,The OPUS Corpus - Parallel and Free: http://logos.uio.no/opus. The OPUS corpus is a growing collection of translated documents collected from the internet. The current version contains about 30 million words in 60 languages. The entire corpus is sentence aligned and it also contains linguistic markup for certain languages.,The {OPUS} Corpus - Parallel and Free: \url{http://logos.uio.no/opus},The OPUS corpus is a growing collection of translated documents collected from the internet. The current version contains about 30 million words in 60 languages. The entire corpus is sentence aligned and it also contains linguistic markup for certain languages.,The OPUS Corpus - Parallel and Free: http://logos.uio.no/opus,The OPUS corpus is a growing collection of translated documents collected from the internet. The current version contains about 30 million words in 60 languages. The entire corpus is sentence aligned and it also contains linguistic markup for certain languages.,,The OPUS Corpus - Parallel and Free: http://logos.uio.no/opus. The OPUS corpus is a growing collection of translated documents collected from the internet. The current version contains about 30 million words in 60 languages. The entire corpus is sentence aligned and it also contains linguistic markup for certain languages.,2004
junczys-dowmunt-grundkiewicz-2016-log,https://aclanthology.org/W16-2378,0,,,,,,,"Log-linear Combinations of Monolingual and Bilingual Neural Machine Translation Models for Automatic Post-Editing. This paper describes the submission of the AMU (Adam Mickiewicz University) team to the Automatic Post-Editing (APE) task of WMT 2016. We explore the application of neural translation models to the APE problem and achieve good results by treating different models as components in a log-linear model, allowing for multiple inputs (the MT-output and the source) that are decoded to the same target language (post-edited translations). A simple string-matching penalty integrated within the log-linear model is used to control for higher faithfulness with regard to the raw machine translation output. To overcome the problem of too little training data, we generate large amounts of artificial data. Our submission improves over the uncorrected baseline on the unseen test set by-3.2% TER and +5.5% BLEU and outperforms any other system submitted to the shared-task by a large margin.",Log-linear Combinations of Monolingual and Bilingual Neural Machine Translation Models for Automatic Post-Editing,"This paper describes the submission of the AMU (Adam Mickiewicz University) team to the Automatic Post-Editing (APE) task of WMT 2016. We explore the application of neural translation models to the APE problem and achieve good results by treating different models as components in a log-linear model, allowing for multiple inputs (the MT-output and the source) that are decoded to the same target language (post-edited translations). A simple string-matching penalty integrated within the log-linear model is used to control for higher faithfulness with regard to the raw machine translation output. To overcome the problem of too little training data, we generate large amounts of artificial data. Our submission improves over the uncorrected baseline on the unseen test set by-3.2% TER and +5.5% BLEU and outperforms any other system submitted to the shared-task by a large margin.",Log-linear Combinations of Monolingual and Bilingual Neural Machine Translation Models for Automatic Post-Editing,"This paper describes the submission of the AMU (Adam Mickiewicz University) team to the Automatic Post-Editing (APE) task of WMT 2016. We explore the application of neural translation models to the APE problem and achieve good results by treating different models as components in a log-linear model, allowing for multiple inputs (the MT-output and the source) that are decoded to the same target language (post-edited translations). A simple string-matching penalty integrated within the log-linear model is used to control for higher faithfulness with regard to the raw machine translation output. To overcome the problem of too little training data, we generate large amounts of artificial data. Our submission improves over the uncorrected baseline on the unseen test set by-3.2% TER and +5.5% BLEU and outperforms any other system submitted to the shared-task by a large margin.","This work is partially funded by the National Science Centre, Poland (Grant No. 2014/15/N/ST6/02330).","Log-linear Combinations of Monolingual and Bilingual Neural Machine Translation Models for Automatic Post-Editing. This paper describes the submission of the AMU (Adam Mickiewicz University) team to the Automatic Post-Editing (APE) task of WMT 2016. 
We explore the application of neural translation models to the APE problem and achieve good results by treating different models as components in a log-linear model, allowing for multiple inputs (the MT-output and the source) that are decoded to the same target language (post-edited translations). A simple string-matching penalty integrated within the log-linear model is used to control for higher faithfulness with regard to the raw machine translation output. To overcome the problem of too little training data, we generate large amounts of artificial data. Our submission improves over the uncorrected baseline on the unseen test set by-3.2% TER and +5.5% BLEU and outperforms any other system submitted to the shared-task by a large margin.",2016
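The APE record above combines several neural models and a string-matching penalty in a single log-linear score. The following sketch shows only that scoring and reranking shell, under the assumption that each component model exposes a log-probability for a candidate post-edit; it is not the AMU system, and the penalty here is a toy token-overlap term rather than the paper's actual feature. All identifiers are invented.

```python
import math


def loglinear_score(candidate, mt_output, component_logprobs, weights, penalty_weight):
    """Weighted sum of component log-probabilities plus a simple penalty that
    discourages drifting too far from the raw MT output.

    component_logprobs: dict name -> log P(candidate | inputs) from one model.
    weights: dict name -> tuned interpolation weight for that component.
    """
    score = sum(weights[name] * lp for name, lp in component_logprobs.items())
    # Toy faithfulness penalty: fraction of candidate tokens absent from the MT output.
    cand_tokens, mt_tokens = candidate.split(), set(mt_output.split())
    mismatch = sum(1 for tok in cand_tokens if tok not in mt_tokens) / max(len(cand_tokens), 1)
    return score - penalty_weight * mismatch


def rerank(candidates, mt_output, component_models, weights, penalty_weight=1.0):
    """Return the candidate with the highest log-linear score.

    component_models: dict name -> callable mapping a candidate string to a log-probability.
    """
    best, best_score = None, -math.inf
    for cand in candidates:
        logprobs = {name: model(cand) for name, model in component_models.items()}
        s = loglinear_score(cand, mt_output, logprobs, weights, penalty_weight)
        if s > best_score:
            best, best_score = cand, s
    return best


if __name__ == "__main__":
    # Stand-ins for an mt->pe model and a src->pe model.
    models = {"mt_pe": lambda c: -0.1 * len(c.split()), "src_pe": lambda c: -0.2 * len(c.split())}
    weights = {"mt_pe": 1.0, "src_pe": 0.5}
    print(rerank(["the file was saved", "the file is saved now"],
                 "the file was save", models, weights))
```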
scicluna-strapparava-2020-vroav,https://aclanthology.org/2020.lrec-1.742,0,,,,,,,"VROAV: Using Iconicity to Visually Represent Abstract Verbs. Abstractness is a feature of semantics that limits our ability to visualise every conceivable concept represented by a word. By tapping into the visual representation of words, we explore the common semantic elements that link words to each other. Visual languages like sign languages have been found to reveal enlightening patterns across signs of similar meanings, pointing towards the possibility of identifying clusters of iconic meanings in words. Thanks to this insight, along with an understanding of verb predicates achieved from VerbNet, this study produced VROAV (Visual Representation of Abstract Verbs): a novel verb classification system based on the shape and movement of verbs. The outcome includes 20 classes of abstract verbs and their visual representations, which were tested for validity in an online survey. Considerable agreement between participants, who judged graphic animations based on representativeness, suggests a positive way forward for this proposal, which may be developed as a language learning aid in educational contexts or as a multimodal language comprehension tool for digital text.",{VROAV}: Using Iconicity to Visually Represent Abstract Verbs,"Abstractness is a feature of semantics that limits our ability to visualise every conceivable concept represented by a word. By tapping into the visual representation of words, we explore the common semantic elements that link words to each other. Visual languages like sign languages have been found to reveal enlightening patterns across signs of similar meanings, pointing towards the possibility of identifying clusters of iconic meanings in words. Thanks to this insight, along with an understanding of verb predicates achieved from VerbNet, this study produced VROAV (Visual Representation of Abstract Verbs): a novel verb classification system based on the shape and movement of verbs. The outcome includes 20 classes of abstract verbs and their visual representations, which were tested for validity in an online survey. Considerable agreement between participants, who judged graphic animations based on representativeness, suggests a positive way forward for this proposal, which may be developed as a language learning aid in educational contexts or as a multimodal language comprehension tool for digital text.",VROAV: Using Iconicity to Visually Represent Abstract Verbs,"Abstractness is a feature of semantics that limits our ability to visualise every conceivable concept represented by a word. By tapping into the visual representation of words, we explore the common semantic elements that link words to each other. Visual languages like sign languages have been found to reveal enlightening patterns across signs of similar meanings, pointing towards the possibility of identifying clusters of iconic meanings in words. Thanks to this insight, along with an understanding of verb predicates achieved from VerbNet, this study produced VROAV (Visual Representation of Abstract Verbs): a novel verb classification system based on the shape and movement of verbs. The outcome includes 20 classes of abstract verbs and their visual representations, which were tested for validity in an online survey. 
Considerable agreement between participants, who judged graphic animations based on representativeness, suggests a positive way forward for this proposal, which may be developed as a language learning aid in educational contexts or as a multimodal language comprehension tool for digital text.",,"VROAV: Using Iconicity to Visually Represent Abstract Verbs. Abstractness is a feature of semantics that limits our ability to visualise every conceivable concept represented by a word. By tapping into the visual representation of words, we explore the common semantic elements that link words to each other. Visual languages like sign languages have been found to reveal enlightening patterns across signs of similar meanings, pointing towards the possibility of identifying clusters of iconic meanings in words. Thanks to this insight, along with an understanding of verb predicates achieved from VerbNet, this study produced VROAV (Visual Representation of Abstract Verbs): a novel verb classification system based on the shape and movement of verbs. The outcome includes 20 classes of abstract verbs and their visual representations, which were tested for validity in an online survey. Considerable agreement between participants, who judged graphic animations based on representativeness, suggests a positive way forward for this proposal, which may be developed as a language learning aid in educational contexts or as a multimodal language comprehension tool for digital text.",2020
beck-etal-2013-shef,https://aclanthology.org/W13-2241,0,,,,,,,"SHEF-Lite: When Less is More for Translation Quality Estimation. We describe the results of our submissions to the WMT13 Shared Task on Quality Estimation (subtasks 1.1 and 1.3). Our submissions use the framework of Gaussian Processes to investigate lightweight approaches for this problem. We focus on two approaches, one based on feature selection and another based on active learning. Using only 25 (out of 160) features, our model resulting from feature selection ranked 1st place in the scoring variant of subtask 1.1 and 3rd place in the ranking variant of the subtask, while the active learning model reached 2nd place in the scoring variant using only ∼25% of the available instances for training. These results give evidence that Gaussian Processes achieve the state of the art performance as a modelling approach for translation quality estimation, and that carefully selecting features and instances for the problem can further improve or at least maintain the same performance levels while making the problem less resource-intensive.",{SHEF}-{L}ite: When Less is More for Translation Quality Estimation,"We describe the results of our submissions to the WMT13 Shared Task on Quality Estimation (subtasks 1.1 and 1.3). Our submissions use the framework of Gaussian Processes to investigate lightweight approaches for this problem. We focus on two approaches, one based on feature selection and another based on active learning. Using only 25 (out of 160) features, our model resulting from feature selection ranked 1st place in the scoring variant of subtask 1.1 and 3rd place in the ranking variant of the subtask, while the active learning model reached 2nd place in the scoring variant using only ∼25% of the available instances for training. These results give evidence that Gaussian Processes achieve the state of the art performance as a modelling approach for translation quality estimation, and that carefully selecting features and instances for the problem can further improve or at least maintain the same performance levels while making the problem less resource-intensive.",SHEF-Lite: When Less is More for Translation Quality Estimation,"We describe the results of our submissions to the WMT13 Shared Task on Quality Estimation (subtasks 1.1 and 1.3). Our submissions use the framework of Gaussian Processes to investigate lightweight approaches for this problem. We focus on two approaches, one based on feature selection and another based on active learning. Using only 25 (out of 160) features, our model resulting from feature selection ranked 1st place in the scoring variant of subtask 1.1 and 3rd place in the ranking variant of the subtask, while the active learning model reached 2nd place in the scoring variant using only ∼25% of the available instances for training. These results give evidence that Gaussian Processes achieve the state of the art performance as a modelling approach for translation quality estimation, and that carefully selecting features and instances for the problem can further improve or at least maintain the same performance levels while making the problem less resource-intensive.","This work was supported by funding from CNPq/Brazil (No. 237999/2012-9, Daniel Beck) and from the EU FP7- ICT QTLaunchPad project (No. 296347, Kashif Shah and Lucia Specia).","SHEF-Lite: When Less is More for Translation Quality Estimation. 
We describe the results of our submissions to the WMT13 Shared Task on Quality Estimation (subtasks 1.1 and 1.3). Our submissions use the framework of Gaussian Processes to investigate lightweight approaches for this problem. We focus on two approaches, one based on feature selection and another based on active learning. Using only 25 (out of 160) features, our model resulting from feature selection ranked 1st place in the scoring variant of subtask 1.1 and 3rd place in the ranking variant of the subtask, while the active learning model reached 2nd place in the scoring variant using only ∼25% of the available instances for training. These results give evidence that Gaussian Processes achieve the state of the art performance as a modelling approach for translation quality estimation, and that carefully selecting features and instances for the problem can further improve or at least maintain the same performance levels while making the problem less resource-intensive.",2013
aarts-1992-uniform,https://aclanthology.org/C92-4183,0,,,,,,,Uniform Recognition for Acyclic Context-Sensitive Grammars is NP-complete. Context-sensitive grammars in which each rule is of the form αZβ → αγβ are acyclic if the associated context-free grammar with the rules Z → γ is acyclic. The problem whether an input string is in the language generated by an acyclic context-sensitive grammar is NP-complete.,Uniform Recognition for Acyclic Context-Sensitive Grammars is {NP}-complete,Context-sensitive grammars in which each rule is of the form αZβ → αγβ are acyclic if the associated context-free grammar with the rules Z → γ is acyclic. The problem whether an input string is in the language generated by an acyclic context-sensitive grammar is NP-complete.,Uniform Recognition for Acyclic Context-Sensitive Grammars is NP-complete,Context-sensitive grammars in which each rule is of the form αZβ → αγβ are acyclic if the associated context-free grammar with the rules Z → γ is acyclic. The problem whether an input string is in the language generated by an acyclic context-sensitive grammar is NP-complete.,"Acknowledgements I want to thank Peter van Emde Boas, Reinhard Muskens, Mart Trautwein and Theo Jansen for their comments on earlier versions of this paper.",Uniform Recognition for Acyclic Context-Sensitive Grammars is NP-complete. Context-sensitive grammars in which each rule is of the form αZβ → αγβ are acyclic if the associated context-free grammar with the rules Z → γ is acyclic. The problem whether an input string is in the language generated by an acyclic context-sensitive grammar is NP-complete.,1992
oraby-etal-2017-serious,https://aclanthology.org/W17-5537,0,,,,,,,"Are you serious?: Rhetorical Questions and Sarcasm in Social Media Dialog. Effective models of social dialog must understand a broad range of rhetorical and figurative devices. Rhetorical questions (RQs) are a type of figurative language whose aim is to achieve a pragmatic goal, such as structuring an argument, being persuasive, emphasizing a point, or being ironic. While there are computational models for other forms of figurative language, rhetorical questions have received little attention to date. We expand a small dataset from previous work, presenting a corpus of 10,270 RQs from debate forums and Twitter that represent different discourse functions. We show that we can clearly distinguish between RQs and sincere questions (0.76 F1). We then show that RQs can be used both sarcastically and non-sarcastically, observing that non-sarcastic (other) uses of RQs are frequently argumentative in forums, and persuasive in tweets. We present experiments to distinguish between these uses of RQs using SVM and LSTM models that represent linguistic features and post-level context, achieving results as high as 0.76 F1 for SARCASTIC and 0.77 F1 for OTHER in forums, and 0.83 F1 for both SARCASTIC and OTHER in tweets. We supplement our quantitative experiments with an in-depth characterization of the linguistic variation in RQs. 1 Subjects could provide multiple discourse functions for RQs, thus the frequencies do not add to 1.",Are you serious?: Rhetorical Questions and Sarcasm in Social Media Dialog,"Effective models of social dialog must understand a broad range of rhetorical and figurative devices. Rhetorical questions (RQs) are a type of figurative language whose aim is to achieve a pragmatic goal, such as structuring an argument, being persuasive, emphasizing a point, or being ironic. While there are computational models for other forms of figurative language, rhetorical questions have received little attention to date. We expand a small dataset from previous work, presenting a corpus of 10,270 RQs from debate forums and Twitter that represent different discourse functions. We show that we can clearly distinguish between RQs and sincere questions (0.76 F1). We then show that RQs can be used both sarcastically and non-sarcastically, observing that non-sarcastic (other) uses of RQs are frequently argumentative in forums, and persuasive in tweets. We present experiments to distinguish between these uses of RQs using SVM and LSTM models that represent linguistic features and post-level context, achieving results as high as 0.76 F1 for SARCASTIC and 0.77 F1 for OTHER in forums, and 0.83 F1 for both SARCASTIC and OTHER in tweets. We supplement our quantitative experiments with an in-depth characterization of the linguistic variation in RQs. 1 Subjects could provide multiple discourse functions for RQs, thus the frequencies do not add to 1.",Are you serious?: Rhetorical Questions and Sarcasm in Social Media Dialog,"Effective models of social dialog must understand a broad range of rhetorical and figurative devices. Rhetorical questions (RQs) are a type of figurative language whose aim is to achieve a pragmatic goal, such as structuring an argument, being persuasive, emphasizing a point, or being ironic. While there are computational models for other forms of figurative language, rhetorical questions have received little attention to date. 
We expand a small dataset from previous work, presenting a corpus of 10,270 RQs from debate forums and Twitter that represent different discourse functions. We show that we can clearly distinguish between RQs and sincere questions (0.76 F1). We then show that RQs can be used both sarcastically and non-sarcastically, observing that non-sarcastic (other) uses of RQs are frequently argumentative in forums, and persuasive in tweets. We present experiments to distinguish between these uses of RQs using SVM and LSTM models that represent linguistic features and post-level context, achieving results as high as 0.76 F1 for SARCASTIC and 0.77 F1 for OTHER in forums, and 0.83 F1 for both SARCASTIC and OTHER in tweets. We supplement our quantitative experiments with an in-depth characterization of the linguistic variation in RQs. 1 Subjects could provide multiple discourse functions for RQs, thus the frequencies do not add to 1.","This work was funded by NSF CISE RI 1302668, under the Robust Intelligence Program.","Are you serious?: Rhetorical Questions and Sarcasm in Social Media Dialog. Effective models of social dialog must understand a broad range of rhetorical and figurative devices. Rhetorical questions (RQs) are a type of figurative language whose aim is to achieve a pragmatic goal, such as structuring an argument, being persuasive, emphasizing a point, or being ironic. While there are computational models for other forms of figurative language, rhetorical questions have received little attention to date. We expand a small dataset from previous work, presenting a corpus of 10,270 RQs from debate forums and Twitter that represent different discourse functions. We show that we can clearly distinguish between RQs and sincere questions (0.76 F1). We then show that RQs can be used both sarcastically and non-sarcastically, observing that non-sarcastic (other) uses of RQs are frequently argumentative in forums, and persuasive in tweets. We present experiments to distinguish between these uses of RQs using SVM and LSTM models that represent linguistic features and post-level context, achieving results as high as 0.76 F1 for SARCASTIC and 0.77 F1 for OTHER in forums, and 0.83 F1 for both SARCASTIC and OTHER in tweets. We supplement our quantitative experiments with an in-depth characterization of the linguistic variation in RQs. 1 Subjects could provide multiple discourse functions for RQs, thus the frequencies do not add to 1.",2017
puduppully-etal-2019-data,https://aclanthology.org/P19-1195,0,,,,,,,"Data-to-text Generation with Entity Modeling. Recent approaches to data-to-text generation have shown great promise thanks to the use of large-scale datasets and the application of neural network architectures which are trained end-to-end. These models rely on representation learning to select content appropriately, structure it coherently, and verbalize it grammatically, treating entities as nothing more than vocabulary tokens. In this work we propose an entity-centric neural architecture for data-to-text generation. Our model creates entity-specific representations which are dynamically updated. Text is generated conditioned on the data input and entity memory representations using hierarchical attention at each time step. We present experiments on the ROTOWIRE benchmark and a (five times larger) new dataset on the baseball domain which we create. Our results show that the proposed model outperforms competitive baselines in automatic and human evaluation. 1",Data-to-text Generation with Entity Modeling,"Recent approaches to data-to-text generation have shown great promise thanks to the use of large-scale datasets and the application of neural network architectures which are trained end-to-end. These models rely on representation learning to select content appropriately, structure it coherently, and verbalize it grammatically, treating entities as nothing more than vocabulary tokens. In this work we propose an entity-centric neural architecture for data-to-text generation. Our model creates entity-specific representations which are dynamically updated. Text is generated conditioned on the data input and entity memory representations using hierarchical attention at each time step. We present experiments on the ROTOWIRE benchmark and a (five times larger) new dataset on the baseball domain which we create. Our results show that the proposed model outperforms competitive baselines in automatic and human evaluation. 1",Data-to-text Generation with Entity Modeling,"Recent approaches to data-to-text generation have shown great promise thanks to the use of large-scale datasets and the application of neural network architectures which are trained end-to-end. These models rely on representation learning to select content appropriately, structure it coherently, and verbalize it grammatically, treating entities as nothing more than vocabulary tokens. In this work we propose an entity-centric neural architecture for data-to-text generation. Our model creates entity-specific representations which are dynamically updated. Text is generated conditioned on the data input and entity memory representations using hierarchical attention at each time step. We present experiments on the ROTOWIRE benchmark and a (five times larger) new dataset on the baseball domain which we create. Our results show that the proposed model outperforms competitive baselines in automatic and human evaluation. 1",We would like to thank Adam Lopez for helpful discussions. We acknowledge the financial support of the European Research Council (Lapata; award number 681760).,"Data-to-text Generation with Entity Modeling. Recent approaches to data-to-text generation have shown great promise thanks to the use of large-scale datasets and the application of neural network architectures which are trained end-to-end. 
These models rely on representation learning to select content appropriately, structure it coherently, and verbalize it grammatically, treating entities as nothing more than vocabulary tokens. In this work we propose an entity-centric neural architecture for data-to-text generation. Our model creates entity-specific representations which are dynamically updated. Text is generated conditioned on the data input and entity memory representations using hierarchical attention at each time step. We present experiments on the ROTOWIRE benchmark and a (five times larger) new dataset on the baseball domain which we create. Our results show that the proposed model outperforms competitive baselines in automatic and human evaluation. 1",2019
p-r-etal-2017-hitachi,https://aclanthology.org/S17-2176,1,,,,health,,,"Hitachi at SemEval-2017 Task 12: System for temporal information extraction from clinical notes. This paper describes the system developed for the task of temporal information extraction from clinical narratives in the context of the 2017 Clinical TempEval challenge. Clinical TempEval 2017 addressed the problem of temporal reasoning in the clinical domain by providing annotated clinical notes, pathology and radiology reports in line with Clinical Tem-pEval challenges 2015/16, across two different evaluation phases focusing on cross domain adaptation. Our team focused on subtasks involving extractions of temporal spans and relations for which the developed systems showed average F-score of 0.45 and 0.47 across the two phases of evaluations.",Hitachi at {S}em{E}val-2017 Task 12: System for temporal information extraction from clinical notes,"This paper describes the system developed for the task of temporal information extraction from clinical narratives in the context of the 2017 Clinical TempEval challenge. Clinical TempEval 2017 addressed the problem of temporal reasoning in the clinical domain by providing annotated clinical notes, pathology and radiology reports in line with Clinical Tem-pEval challenges 2015/16, across two different evaluation phases focusing on cross domain adaptation. Our team focused on subtasks involving extractions of temporal spans and relations for which the developed systems showed average F-score of 0.45 and 0.47 across the two phases of evaluations.",Hitachi at SemEval-2017 Task 12: System for temporal information extraction from clinical notes,"This paper describes the system developed for the task of temporal information extraction from clinical narratives in the context of the 2017 Clinical TempEval challenge. Clinical TempEval 2017 addressed the problem of temporal reasoning in the clinical domain by providing annotated clinical notes, pathology and radiology reports in line with Clinical Tem-pEval challenges 2015/16, across two different evaluation phases focusing on cross domain adaptation. Our team focused on subtasks involving extractions of temporal spans and relations for which the developed systems showed average F-score of 0.45 and 0.47 across the two phases of evaluations.",We thank Mayo clinic and Clinical TempEval organizers for providing access to THYME corpus and other helps provided for our participation in the competition.,"Hitachi at SemEval-2017 Task 12: System for temporal information extraction from clinical notes. This paper describes the system developed for the task of temporal information extraction from clinical narratives in the context of the 2017 Clinical TempEval challenge. Clinical TempEval 2017 addressed the problem of temporal reasoning in the clinical domain by providing annotated clinical notes, pathology and radiology reports in line with Clinical Tem-pEval challenges 2015/16, across two different evaluation phases focusing on cross domain adaptation. Our team focused on subtasks involving extractions of temporal spans and relations for which the developed systems showed average F-score of 0.45 and 0.47 across the two phases of evaluations.",2017
slocum-1988-morphological,https://aclanthology.org/A88-1031,0,,,,,,,"Morphological Processing in the Nabu System. Processing system under development in the Human Interface Laboratory at MCC, for shareholder companies. Its morphological component is designed to perform a number of different functions. This has been used to produce a complete analyzer for Arabic; very substantial analyzers for English, French, German, and Spanish; and small collections of rules for Russian and Japanese. In addition, other functions have been implemented for several of these languages.
In this paper we discuss our philosophy, which constrained our design decisions; elaborate some specific functions a morphological component should support; survey some competing approaches; describe our technique, which provides the necessary functionality while meeting the other design constraints; and support our approach by characterizing our success in developing/testing processors for various combinations of language and function.",Morphological Processing in the {N}abu System,"Processing system under development in the Human Interface Laboratory at MCC, for shareholder companies. Its morphological component is designed to perform a number of different functions. This has been used to produce a complete analyzer for Arabic; very substantial analyzers for English, French, German, and Spanish; and small collections of rules for Russian and Japanese. In addition, other functions have been implemented for several of these languages.
In this paper we discuss our philosophy, which constrained our design decisions; elaborate some specific functions a morphological component should support; survey some competing approaches; describe our technique, which provides the necessary functionality while meeting the other design constraints; and support our approach by characterizing our success in developing/testing processors for various combinations of language and function.",Morphological Processing in the Nabu System,"Processing system under development in the Human Interface Laboratory at MCC, for shareholder companies. Its morphological component is designed to perform a number of different functions. This has been used to produce a complete analyzer for Arabic; very substantial analyzers for English, French, German, and Spanish; and small collections of rules for Russian and Japanese. In addition, other functions have been implemented for several of these languages.
In this paper we discuss our philosophy, which constrained our design decisions; elaborate some specific functions a morphological component should support; survey some competing approaches; describe our technique, which provides the necessary functionality while meeting the other design constraints; and support our approach by characterizing our success in developing/testing processors for various combinations of language and function.",,"Morphological Processing in the Nabu System. Processing system under development in the Human Interface Laboratory at MCC, for shareholder companies. Its morphological component is designed to perform a number of different functions. This has been used to produce a complete analyzer for Arabic; very substantial analyzers for English, French, German, and Spanish; and small collections of rules for Russian and Japanese. In addition, other functions have been implemented for several of these languages.
In this paper we discuss our philosophy, which constrained our design decisions; elaborate some specific functions a morphological component should support; survey some competing approaches; describe our technique, which provides the necessary functionality while meeting the other design constraints; and support our approach by characterizing our success in developing/testing processors for various combinations of language and function.",1988
more-tsarfaty-2016-data,https://aclanthology.org/C16-1033,0,,,,,,,"Data-Driven Morphological Analysis and Disambiguation for Morphologically Rich Languages and Universal Dependencies. Parsing texts into universal dependencies (UD) in realistic scenarios requires infrastructure for morphological analysis and disambiguation (MA&D) of typologically different languages as a first tier. MA&D is particularly challenging in morphologically rich languages (MRLs), where the ambiguous space-delimited tokens ought to be disambiguated with respect to their constituent morphemes. Here we present a novel, language-agnostic, framework for MA&D, based on a transition system with two variants, word-based and morpheme-based, and a dedicated transition to mitigate the biases of variable-length morpheme sequences. Our experiments on a Modern Hebrew case study outperform the state of the art, and we show that the morpheme-based MD consistently outperforms our word-based variant. We further illustrate the utility and multilingual coverage of our framework by morphologically analyzing and disambiguating the large set of languages in the UD treebanks.",Data-Driven Morphological Analysis and Disambiguation for Morphologically Rich Languages and {U}niversal {D}ependencies,"Parsing texts into universal dependencies (UD) in realistic scenarios requires infrastructure for morphological analysis and disambiguation (MA&D) of typologically different languages as a first tier. MA&D is particularly challenging in morphologically rich languages (MRLs), where the ambiguous space-delimited tokens ought to be disambiguated with respect to their constituent morphemes. Here we present a novel, language-agnostic, framework for MA&D, based on a transition system with two variants, word-based and morpheme-based, and a dedicated transition to mitigate the biases of variable-length morpheme sequences. Our experiments on a Modern Hebrew case study outperform the state of the art, and we show that the morpheme-based MD consistently outperforms our word-based variant. We further illustrate the utility and multilingual coverage of our framework by morphologically analyzing and disambiguating the large set of languages in the UD treebanks.",Data-Driven Morphological Analysis and Disambiguation for Morphologically Rich Languages and Universal Dependencies,"Parsing texts into universal dependencies (UD) in realistic scenarios requires infrastructure for morphological analysis and disambiguation (MA&D) of typologically different languages as a first tier. MA&D is particularly challenging in morphologically rich languages (MRLs), where the ambiguous space-delimited tokens ought to be disambiguated with respect to their constituent morphemes. Here we present a novel, language-agnostic, framework for MA&D, based on a transition system with two variants, word-based and morpheme-based, and a dedicated transition to mitigate the biases of variable-length morpheme sequences. Our experiments on a Modern Hebrew case study outperform the state of the art, and we show that the morpheme-based MD consistently outperforms our word-based variant. We further illustrate the utility and multilingual coverage of our framework by morphologically analyzing and disambiguating the large set of languages in the UD treebanks.",,"Data-Driven Morphological Analysis and Disambiguation for Morphologically Rich Languages and Universal Dependencies. 
Parsing texts into universal dependencies (UD) in realistic scenarios requires infrastructure for morphological analysis and disambiguation (MA&D) of typologically different languages as a first tier. MA&D is particularly challenging in morphologically rich languages (MRLs), where the ambiguous space-delimited tokens ought to be disambiguated with respect to their constituent morphemes. Here we present a novel, language-agnostic, framework for MA&D, based on a transition system with two variants, word-based and morpheme-based, and a dedicated transition to mitigate the biases of variable-length morpheme sequences. Our experiments on a Modern Hebrew case study outperform the state of the art, and we show that the morpheme-based MD consistently outperforms our word-based variant. We further illustrate the utility and multilingual coverage of our framework by morphologically analyzing and disambiguating the large set of languages in the UD treebanks.",2016
moss-1990-growing,https://aclanthology.org/1990.tc-1.3,0,,,,,,,"The growing range of document preparation systems. What I intend to do today is to look at the complete document preparation scene from the simplest systems to the most complex. I will examine how each level is relevant to the translator and how each level relates to each other.
I will finish by examining why there have been major leaps forward in document preparation systems for translators in the last couple of years and the likely way forward.",The growing range of document preparation systems,"What I intend to do today is to look at the complete document preparation scene from the simplest systems to the most complex. I will examine how each level is relevant to the translator and how each level relates to each other.
I will finish by examining why there have been major leaps forward in document preparation systems for translators in the last couple of years and the likely way forward.",The growing range of document preparation systems,"What I intend to do today is to look at the complete document preparation scene from the simplest systems to the most complex. I will examine how each level is relevant to the translator and how each level relates to each other.
I will finish by examining why there have been major leaps forward in document preparation systems for translators in the last couple of years and the likely way forward.",,"The growing range of document preparation systems. What I intend to do today is to look at the complete document preparation scene from the simplest systems to the most complex. I will examine how each level is relevant to the translator and how each level relates to each other.
I will finish by examining why there have been major leaps forward in document preparation systems for translators in the last couple of years and the likely way forward.",1990
gu-cercone-2006-segment,https://aclanthology.org/P06-1061,0,,,,,,,"Segment-Based Hidden Markov Models for Information Extraction. Hidden Markov models (HMMs) are powerful statistical models that have found successful applications in Information Extraction (IE). In current approaches to applying HMMs to IE, an HMM is used to model text at the document level. This modelling might cause undesired redundancy in extraction in the sense that more than one filler is identified and extracted. We propose to use HMMs to model text at the segment level, in which the extraction process consists of two steps: a segment retrieval step followed by an extraction step. In order to retrieve extraction-relevant segments from documents, we introduce a method to use HMMs to model and retrieve segments. Our experimental results show that the resulting segment HMM IE system not only achieves near zero extraction redundancy, but also has better overall extraction performance than traditional document HMM IE systems.",Segment-Based Hidden {M}arkov Models for Information Extraction,"Hidden Markov models (HMMs) are powerful statistical models that have found successful applications in Information Extraction (IE). In current approaches to applying HMMs to IE, an HMM is used to model text at the document level. This modelling might cause undesired redundancy in extraction in the sense that more than one filler is identified and extracted. We propose to use HMMs to model text at the segment level, in which the extraction process consists of two steps: a segment retrieval step followed by an extraction step. In order to retrieve extraction-relevant segments from documents, we introduce a method to use HMMs to model and retrieve segments. Our experimental results show that the resulting segment HMM IE system not only achieves near zero extraction redundancy, but also has better overall extraction performance than traditional document HMM IE systems.",Segment-Based Hidden Markov Models for Information Extraction,"Hidden Markov models (HMMs) are powerful statistical models that have found successful applications in Information Extraction (IE). In current approaches to applying HMMs to IE, an HMM is used to model text at the document level. This modelling might cause undesired redundancy in extraction in the sense that more than one filler is identified and extracted. We propose to use HMMs to model text at the segment level, in which the extraction process consists of two steps: a segment retrieval step followed by an extraction step. In order to retrieve extraction-relevant segments from documents, we introduce a method to use HMMs to model and retrieve segments. Our experimental results show that the resulting segment HMM IE system not only achieves near zero extraction redundancy, but also has better overall extraction performance than traditional document HMM IE systems.",,"Segment-Based Hidden Markov Models for Information Extraction. Hidden Markov models (HMMs) are powerful statistical models that have found successful applications in Information Extraction (IE). In current approaches to applying HMMs to IE, an HMM is used to model text at the document level. This modelling might cause undesired redundancy in extraction in the sense that more than one filler is identified and extracted. We propose to use HMMs to model text at the segment level, in which the extraction process consists of two steps: a segment retrieval step followed by an extraction step. 
In order to retrieve extraction-relevant segments from documents, we introduce a method to use HMMs to model and retrieve segments. Our experimental results show that the resulting segment HMM IE system not only achieves near zero extraction redundancy, but also has better overall extraction performance than traditional document HMM IE systems.",2006
schmidtke-groves-2019-automatic,https://aclanthology.org/W19-6729,0,,,,,,,"Automatic Translation for Software with Safe Velocity. We report on a model for machine translation (MT) of software, without review, for the Microsoft Office product range. We have deployed an automated localisation workflow, known as Automated Translation (AT) for software, which identifies resource strings as suitable and safe for MT without post-editing. The model makes use of string profiling, user impact assessment, MT quality estimation, and customer feedback mechanisms. This allows us to introduce automatic translation at a safe velocity, with a minimal risk to customer satisfaction. Quality constraints limit the volume of MT in relation to human translation, with published low-quality MT limited to not exceed 10% of total word count. The AT for software model has been deployed into production for most of the Office product range, for 37 languages. It allows us to MT and publish without review over 20% of the word count for some languages and products. To date, we have processed more than 1 million words with this model, and so far have not seen any measurable negative impact on customer satisfaction.",Automatic Translation for Software with Safe Velocity,"We report on a model for machine translation (MT) of software, without review, for the Microsoft Office product range. We have deployed an automated localisation workflow, known as Automated Translation (AT) for software, which identifies resource strings as suitable and safe for MT without post-editing. The model makes use of string profiling, user impact assessment, MT quality estimation, and customer feedback mechanisms. This allows us to introduce automatic translation at a safe velocity, with a minimal risk to customer satisfaction. Quality constraints limit the volume of MT in relation to human translation, with published low-quality MT limited to not exceed 10% of total word count. The AT for software model has been deployed into production for most of the Office product range, for 37 languages. It allows us to MT and publish without review over 20% of the word count for some languages and products. To date, we have processed more than 1 million words with this model, and so far have not seen any measurable negative impact on customer satisfaction.",Automatic Translation for Software with Safe Velocity,"We report on a model for machine translation (MT) of software, without review, for the Microsoft Office product range. We have deployed an automated localisation workflow, known as Automated Translation (AT) for software, which identifies resource strings as suitable and safe for MT without post-editing. The model makes use of string profiling, user impact assessment, MT quality estimation, and customer feedback mechanisms. This allows us to introduce automatic translation at a safe velocity, with a minimal risk to customer satisfaction. Quality constraints limit the volume of MT in relation to human translation, with published low-quality MT limited to not exceed 10% of total word count. The AT for software model has been deployed into production for most of the Office product range, for 37 languages. It allows us to MT and publish without review over 20% of the word count for some languages and products. 
To date, we have processed more than 1 million words with this model, and so far have not seen any measurable negative impact on customer satisfaction.","The AT for software model was developed by the Office GSX (Global Service Experience) team in the Microsoft European Development Centre, from 2017 to 2018. The following people were involved; Siobhan Ashton, Antonio Benítez Lopez, Brian Comerford, Gemma Devine, Vincent Gadani, Craig Jeffares, Sankar Kumar Indraganti, Anton Masalovich, David Moran, Glen Poor and Simone Van Bruggen, in addition to the authors.","Automatic Translation for Software with Safe Velocity. We report on a model for machine translation (MT) of software, without review, for the Microsoft Office product range. We have deployed an automated localisation workflow, known as Automated Translation (AT) for software, which identifies resource strings as suitable and safe for MT without post-editing. The model makes use of string profiling, user impact assessment, MT quality estimation, and customer feedback mechanisms. This allows us to introduce automatic translation at a safe velocity, with a minimal risk to customer satisfaction. Quality constraints limit the volume of MT in relation to human translation, with published low-quality MT limited to not exceed 10% of total word count. The AT for software model has been deployed into production for most of the Office product range, for 37 languages. It allows us to MT and publish without review over 20% of the word count for some languages and products. To date, we have processed more than 1 million words with this model, and so far have not seen any measurable negative impact on customer satisfaction.",2019
chu-qian-2001-locating,https://aclanthology.org/O01-2003,0,,,,,,,"Locating Boundaries for Prosodic Constituents in Unrestricted Mandarin Texts. This paper proposes a three-tier prosodic hierarchy, including prosodic word, intermediate phrase and intonational phrase tiers, for Mandarin that emphasizes the use of the prosodic word instead of the lexical word as the basic prosodic unit. Both the surface difference and perceptual difference show that this is helpful for achieving high naturalness in text-to-speech conversion. Three approaches, the basic CART approach, the bottom-up hierarchical approach and the modified hierarchical approach, are presented for locating the boundaries of three prosodic constituents in unrestricted Mandarin texts. Two sets of features are used in the basic CART method: one contains syntactic phrasal information and the other does not. The one with syntactic phrasal information results in about a 1% increase in accuracy and an 11% decrease in error-cost. The performance of the modified hierarchical method produces the highest accuracy, 83%, and lowest error cost when no syntactic phrasal information is provided. It shows advantages in detecting the boundaries of intonational phrases at locations without breaking punctuation. 71.1% precision and 52.4% recall are achieved. Experiments on acceptability reveal that only 26% of the mis-assigned break indices are real infelicitous errors, and that the perceptual difference between the automatically assigned break indices and the manually annotated break indices are small.",Locating Boundaries for Prosodic Constituents in Unrestricted {M}andarin Texts,"This paper proposes a three-tier prosodic hierarchy, including prosodic word, intermediate phrase and intonational phrase tiers, for Mandarin that emphasizes the use of the prosodic word instead of the lexical word as the basic prosodic unit. Both the surface difference and perceptual difference show that this is helpful for achieving high naturalness in text-to-speech conversion. Three approaches, the basic CART approach, the bottom-up hierarchical approach and the modified hierarchical approach, are presented for locating the boundaries of three prosodic constituents in unrestricted Mandarin texts. Two sets of features are used in the basic CART method: one contains syntactic phrasal information and the other does not. The one with syntactic phrasal information results in about a 1% increase in accuracy and an 11% decrease in error-cost. The performance of the modified hierarchical method produces the highest accuracy, 83%, and lowest error cost when no syntactic phrasal information is provided. It shows advantages in detecting the boundaries of intonational phrases at locations without breaking punctuation. 71.1% precision and 52.4% recall are achieved. Experiments on acceptability reveal that only 26% of the mis-assigned break indices are real infelicitous errors, and that the perceptual difference between the automatically assigned break indices and the manually annotated break indices are small.",Locating Boundaries for Prosodic Constituents in Unrestricted Mandarin Texts,"This paper proposes a three-tier prosodic hierarchy, including prosodic word, intermediate phrase and intonational phrase tiers, for Mandarin that emphasizes the use of the prosodic word instead of the lexical word as the basic prosodic unit. Both the surface difference and perceptual difference show that this is helpful for achieving high naturalness in text-to-speech conversion. 
Three approaches, the basic CART approach, the bottom-up hierarchical approach and the modified hierarchical approach, are presented for locating the boundaries of three prosodic constituents in unrestricted Mandarin texts. Two sets of features are used in the basic CART method: one contains syntactic phrasal information and the other does not. The one with syntactic phrasal information results in about a 1% increase in accuracy and an 11% decrease in error-cost. The performance of the modified hierarchical method produces the highest accuracy, 83%, and lowest error cost when no syntactic phrasal information is provided. It shows advantages in detecting the boundaries of intonational phrases at locations without breaking punctuation. 71.1% precision and 52.4% recall are achieved. Experiments on acceptability reveal that only 26% of the mis-assigned break indices are real infelicitous errors, and that the perceptual difference between the automatically assigned break indices and the manually annotated break indices are small.",The authors thank Dr. Ming Zhou for providing the block-based robust dependency parser as a toolkit for use in this study. Thanks go to everybody who took part in the perceptual test. The authors are especially grateful to all the reviewers for their valuable remarks and suggestions.,"Locating Boundaries for Prosodic Constituents in Unrestricted Mandarin Texts. This paper proposes a three-tier prosodic hierarchy, including prosodic word, intermediate phrase and intonational phrase tiers, for Mandarin that emphasizes the use of the prosodic word instead of the lexical word as the basic prosodic unit. Both the surface difference and perceptual difference show that this is helpful for achieving high naturalness in text-to-speech conversion. Three approaches, the basic CART approach, the bottom-up hierarchical approach and the modified hierarchical approach, are presented for locating the boundaries of three prosodic constituents in unrestricted Mandarin texts. Two sets of features are used in the basic CART method: one contains syntactic phrasal information and the other does not. The one with syntactic phrasal information results in about a 1% increase in accuracy and an 11% decrease in error-cost. The performance of the modified hierarchical method produces the highest accuracy, 83%, and lowest error cost when no syntactic phrasal information is provided. It shows advantages in detecting the boundaries of intonational phrases at locations without breaking punctuation. 71.1% precision and 52.4% recall are achieved. Experiments on acceptability reveal that only 26% of the mis-assigned break indices are real infelicitous errors, and that the perceptual difference between the automatically assigned break indices and the manually annotated break indices are small.",2001
chakrabarty-etal-2020-r,https://aclanthology.org/2020.acl-main.711,0,,,,,,,"R\^3: Reverse, Retrieve, and Rank for Sarcasm Generation with Commonsense Knowledge. We propose an unsupervised approach for sarcasm generation based on a non-sarcastic input sentence. Our method employs a retrieve-and-edit framework to instantiate two major characteristics of sarcasm: reversal of valence and semantic incongruity with the context, which could include shared commonsense or world knowledge between the speaker and the listener. While prior works on sarcasm generation predominantly focus on context incongruity, we show that combining valence reversal and semantic incongruity based on commonsense knowledge generates sarcastic messages of higher quality based on several criteria. Human evaluation shows that our system generates sarcasm better than human judges 34% of the time, and better than a reinforced hybrid baseline 90% of the time.","{R}{\^{}}3: Reverse, Retrieve, and Rank for Sarcasm Generation with Commonsense Knowledge","We propose an unsupervised approach for sarcasm generation based on a non-sarcastic input sentence. Our method employs a retrieve-and-edit framework to instantiate two major characteristics of sarcasm: reversal of valence and semantic incongruity with the context, which could include shared commonsense or world knowledge between the speaker and the listener. While prior works on sarcasm generation predominantly focus on context incongruity, we show that combining valence reversal and semantic incongruity based on commonsense knowledge generates sarcastic messages of higher quality based on several criteria. Human evaluation shows that our system generates sarcasm better than human judges 34% of the time, and better than a reinforced hybrid baseline 90% of the time.","R\^3: Reverse, Retrieve, and Rank for Sarcasm Generation with Commonsense Knowledge","We propose an unsupervised approach for sarcasm generation based on a non-sarcastic input sentence. Our method employs a retrieve-and-edit framework to instantiate two major characteristics of sarcasm: reversal of valence and semantic incongruity with the context, which could include shared commonsense or world knowledge between the speaker and the listener. While prior works on sarcasm generation predominantly focus on context incongruity, we show that combining valence reversal and semantic incongruity based on commonsense knowledge generates sarcastic messages of higher quality based on several criteria. Human evaluation shows that our system generates sarcasm better than human judges 34% of the time, and better than a reinforced hybrid baseline 90% of the time.","This work was supported in part by the MCS program under Cooperative Agreement N66001-19-2-4032 and the CwC program under Contract W911NF-15-1-0543 with the US Defense Advanced Research Projects Agency (DARPA). The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. The authors would like to thank Christopher Hidey, John Kropf, Anusha Bala and Christopher Robert Kedzie for useful discussions. The authors also thank members of PLUSLab at the University of Southern California and the anonymous reviewers for helpful comments.","R\^3: Reverse, Retrieve, and Rank for Sarcasm Generation with Commonsense Knowledge. We propose an unsupervised approach for sarcasm generation based on a non-sarcastic input sentence. 
Our method employs a retrieve-and-edit framework to instantiate two major characteristics of sarcasm: reversal of valence and semantic incongruity with the context, which could include shared commonsense or world knowledge between the speaker and the listener. While prior works on sarcasm generation predominantly focus on context incongruity, we show that combining valence reversal and semantic incongruity based on commonsense knowledge generates sarcastic messages of higher quality based on several criteria. Human evaluation shows that our system generates sarcasm better than human judges 34% of the time, and better than a reinforced hybrid baseline 90% of the time.",2020
wan-xiao-2008-collabrank,https://aclanthology.org/C08-1122,0,,,,,,,"CollabRank: Towards a Collaborative Approach to Single-Document Keyphrase Extraction. Previous methods usually conduct the keyphrase extraction task for single documents separately without interactions for each document, under the assumption that the documents are considered independent of each other. This paper proposes a novel approach named CollabRank to collaborative single-document keyphrase extraction by making use of mutual influences of multiple documents within a cluster context. CollabRank is implemented by first employing the clustering algorithm to obtain appropriate document clusters, and then using the graph-based ranking algorithm for collaborative single-document keyphrase extraction within each cluster. Experimental results demonstrate the encouraging performance of the proposed approach. Different clustering algorithms have been investigated and we find that the system performance relies positively on the quality of document clusters.",{C}ollab{R}ank: Towards a Collaborative Approach to Single-Document Keyphrase Extraction,"Previous methods usually conduct the keyphrase extraction task for single documents separately without interactions for each document, under the assumption that the documents are considered independent of each other. This paper proposes a novel approach named CollabRank to collaborative single-document keyphrase extraction by making use of mutual influences of multiple documents within a cluster context. CollabRank is implemented by first employing the clustering algorithm to obtain appropriate document clusters, and then using the graph-based ranking algorithm for collaborative single-document keyphrase extraction within each cluster. Experimental results demonstrate the encouraging performance of the proposed approach. Different clustering algorithms have been investigated and we find that the system performance relies positively on the quality of document clusters.",CollabRank: Towards a Collaborative Approach to Single-Document Keyphrase Extraction,"Previous methods usually conduct the keyphrase extraction task for single documents separately without interactions for each document, under the assumption that the documents are considered independent of each other. This paper proposes a novel approach named CollabRank to collaborative single-document keyphrase extraction by making use of mutual influences of multiple documents within a cluster context. CollabRank is implemented by first employing the clustering algorithm to obtain appropriate document clusters, and then using the graph-based ranking algorithm for collaborative single-document keyphrase extraction within each cluster. Experimental results demonstrate the encouraging performance of the proposed approach. Different clustering algorithms have been investigated and we find that the system performance relies positively on the quality of document clusters.","This work was supported by the National Science Foundation of China (No.60703064), the Research Fund for the Doctoral Program of Higher Education of China (No.20070001059) ","CollabRank: Towards a Collaborative Approach to Single-Document Keyphrase Extraction. Previous methods usually conduct the keyphrase extraction task for single documents separately without interactions for each document, under the assumption that the documents are considered independent of each other. 
This paper proposes a novel approach named CollabRank to collaborative single-document keyphrase extraction by making use of mutual influences of multiple documents within a cluster context. CollabRank is implemented by first employing the clustering algorithm to obtain appropriate document clusters, and then using the graph-based ranking algorithm for collaborative single-document keyphrase extraction within each cluster. Experimental results demonstrate the encouraging performance of the proposed approach. Different clustering algorithms have been investigated and we find that the system performance relies positively on the quality of document clusters.",2008
stallard-1989-unification,https://aclanthology.org/H89-2006,0,,,,,,,"Unification-Based Semantic Interpretation in the BBN Spoken Language System. This paper describes the current state of work on unification-based semantic interpretation in HARC (for Hear and Recognize Continuous speech), the BBN Spoken Language System. It presents the implementation of an integrated syntax/semantics grammar written in a unification formalism similar to Definite Clause Grammar. This formalism is described, and its use in solving a number of semantic interpretation problems is shown. These include, among others, the encoding of semantic selectional restrictions and the representation of relational nouns and their modifiers.",Unification-Based Semantic Interpretation in the {BBN} Spoken Language System,"This paper describes the current state of work on unification-based semantic interpretation in HARC (for Hear and Recognize Continuous speech), the BBN Spoken Language System. It presents the implementation of an integrated syntax/semantics grammar written in a unification formalism similar to Definite Clause Grammar. This formalism is described, and its use in solving a number of semantic interpretation problems is shown. These include, among others, the encoding of semantic selectional restrictions and the representation of relational nouns and their modifiers.",Unification-Based Semantic Interpretation in the BBN Spoken Language System,"This paper describes the current state of work on unification-based semantic interpretation in HARC (for Hear and Recognize Continuous speech), the BBN Spoken Language System. It presents the implementation of an integrated syntax/semantics grammar written in a unification formalism similar to Definite Clause Grammar. This formalism is described, and its use in solving a number of semantic interpretation problems is shown. These include, among others, the encoding of semantic selectional restrictions and the representation of relational nouns and their modifiers.","The work reported here was supported by the Advanced Research Projects Agency and was monitored by the Office of Naval Research under Contract No. 00014-89-C-0008. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the United States Government. The author would like to thank Andy Haas, who was the original impetus behind the change to a unification-","Unification-Based Semantic Interpretation in the BBN Spoken Language System. This paper describes the current state of work on unification-based semantic interpretation in HARC (for Hear and Recognize Continuous speech), the BBN Spoken Language System. It presents the implementation of an integrated syntax/semantics grammar written in a unification formalism similar to Definite Clause Grammar. This formalism is described, and its use in solving a number of semantic interpretation problems is shown. These include, among others, the encoding of semantic selectional restrictions and the representation of relational nouns and their modifiers.",1989
frank-petty-2020-sequence,https://aclanthology.org/2020.crac-1.16,0,,,,,,,"Sequence-to-Sequence Networks Learn the Meaning of Reflexive Anaphora. Reflexive anaphora present a challenge for semantic interpretation: their meaning varies depending on context in a way that appears to require abstract variables. Past work has raised doubts about the ability of recurrent networks to meet this challenge. In this paper, we explore this question in the context of a fragment of English that incorporates the relevant sort of contextual variability. We consider sequence-to-sequence architectures with recurrent units and show that such networks are capable of learning semantic interpretations for reflexive anaphora which generalize to novel antecedents. We explore the effect of attention mechanisms and different recurrent unit types on the type of training data that is needed for success as measured in two ways: how much lexical support is needed to induce an abstract reflexive meaning (i.e., how many distinct reflexive antecedents must occur during training) and what contexts must a noun phrase occur in to support generalization of reflexive interpretation to this noun phrase?",Sequence-to-Sequence Networks Learn the Meaning of Reflexive Anaphora,"Reflexive anaphora present a challenge for semantic interpretation: their meaning varies depending on context in a way that appears to require abstract variables. Past work has raised doubts about the ability of recurrent networks to meet this challenge. In this paper, we explore this question in the context of a fragment of English that incorporates the relevant sort of contextual variability. We consider sequence-to-sequence architectures with recurrent units and show that such networks are capable of learning semantic interpretations for reflexive anaphora which generalize to novel antecedents. We explore the effect of attention mechanisms and different recurrent unit types on the type of training data that is needed for success as measured in two ways: how much lexical support is needed to induce an abstract reflexive meaning (i.e., how many distinct reflexive antecedents must occur during training) and what contexts must a noun phrase occur in to support generalization of reflexive interpretation to this noun phrase?",Sequence-to-Sequence Networks Learn the Meaning of Reflexive Anaphora,"Reflexive anaphora present a challenge for semantic interpretation: their meaning varies depending on context in a way that appears to require abstract variables. Past work has raised doubts about the ability of recurrent networks to meet this challenge. In this paper, we explore this question in the context of a fragment of English that incorporates the relevant sort of contextual variability. We consider sequence-to-sequence architectures with recurrent units and show that such networks are capable of learning semantic interpretations for reflexive anaphora which generalize to novel antecedents. 
We explore the effect of attention mechanisms and different recurrent unit types on the type of training data that is needed for success as measured in two ways: how much lexical support is needed to induce an abstract reflexive meaning (i.e., how many distinct reflexive antecedents must occur during training) and what contexts must a noun phrase occur in to support generalization of reflexive interpretation to this noun phrase?","For helpful comments and discussion of this work, we are grateful to Shayna Sragovicz, Noah Amsel, Tal Linzen and the members of the Computational Linguistics at Yale (CLAY) and the JHU Computation and Psycholinguistics labs. This work has been supported in part by NSF grant BCS-1919321 and a Yale College Summer Experience Award. Code for replicating these experiments can be found on the Computational Linguistics at the CLAY Lab GitHub transductions and logos repositories.","Sequence-to-Sequence Networks Learn the Meaning of Reflexive Anaphora. Reflexive anaphora present a challenge for semantic interpretation: their meaning varies depending on context in a way that appears to require abstract variables. Past work has raised doubts about the ability of recurrent networks to meet this challenge. In this paper, we explore this question in the context of a fragment of English that incorporates the relevant sort of contextual variability. We consider sequence-to-sequence architectures with recurrent units and show that such networks are capable of learning semantic interpretations for reflexive anaphora which generalize to novel antecedents. We explore the effect of attention mechanisms and different recurrent unit types on the type of training data that is needed for success as measured in two ways: how much lexical support is needed to induce an abstract reflexive meaning (i.e., how many distinct reflexive antecedents must occur during training) and what contexts must a noun phrase occur in to support generalization of reflexive interpretation to this noun phrase?",2020
hoang-koehn-2009-improving,https://aclanthology.org/E09-1043,0,,,,,,,"Improving Mid-Range Re-Ordering Using Templates of Factors. We extend the factored translation model (Koehn and Hoang, 2007) to allow translations of longer phrases composed of factors such as POS and morphological tags to act as templates for the selection and reordering of surface phrase translation. We also reintroduce the use of alignment information within the decoder, which forms an integral part of decoding in the Alignment Template System (Och, 2002), into phrase-based decoding. Results show an increase in translation performance of up to 1.0% BLEU for out-of-domain French-English translation. We also show how this method compares and relates to lexicalized reordering.",Improving Mid-Range Re-Ordering Using Templates of Factors,"We extend the factored translation model (Koehn and Hoang, 2007) to allow translations of longer phrases composed of factors such as POS and morphological tags to act as templates for the selection and reordering of surface phrase translation. We also reintroduce the use of alignment information within the decoder, which forms an integral part of decoding in the Alignment Template System (Och, 2002), into phrase-based decoding. Results show an increase in translation performance of up to 1.0% BLEU for out-of-domain French-English translation. We also show how this method compares and relates to lexicalized reordering.",Improving Mid-Range Re-Ordering Using Templates of Factors,"We extend the factored translation model (Koehn and Hoang, 2007) to allow translations of longer phrases composed of factors such as POS and morphological tags to act as templates for the selection and reordering of surface phrase translation. We also reintroduce the use of alignment information within the decoder, which forms an integral part of decoding in the Alignment Template System (Och, 2002), into phrase-based decoding. Results show an increase in translation performance of up to 1.0% BLEU for out-of-domain French-English translation. We also show how this method compares and relates to lexicalized reordering.",This work was supported by the EuroMatrix project funded by the European Commission (6th Framework Programme) and made use of the resources provided by the Edinburgh Compute and Data Facility (http://www.ecdf.ed.ac.uk/). The ECDF is partially supported by the eDIKT initiative (http://www.edikt.org.uk/).,"Improving Mid-Range Re-Ordering Using Templates of Factors. We extend the factored translation model (Koehn and Hoang, 2007) to allow translations of longer phrases composed of factors such as POS and morphological tags to act as templates for the selection and reordering of surface phrase translation. We also reintroduce the use of alignment information within the decoder, which forms an integral part of decoding in the Alignment Template System (Och, 2002), into phrase-based decoding. Results show an increase in translation performance of up to 1.0% BLEU for out-of-domain French-English translation. We also show how this method compares and relates to lexicalized reordering.",2009
volodina-etal-2021-dalaj,https://aclanthology.org/2021.nlp4call-1.3,0,,,,,,,"DaLAJ -- a dataset for linguistic acceptability judgments for Swedish. We present DaLAJ 1.0, a Dataset for Linguistic Acceptability Judgments for Swedish, comprising 9 596 sentences in its first version. DaLAJ is based on the SweLL second language learner data (Volodina et al., 2019), consisting of essays at different levels of proficiency. To make sure the dataset can be freely available despite the GDPR regulations, we have sentence-scrambled learner essays and removed part of the metadata about learners, keeping for each sentence only information about the mother tongue and the level of the course where the essay has been written. We use the normalized version of learner language as the basis for DaLAJ sentences, and keep only one error per sentence. We repeat the same sentence for each individual correction tag used in the sentence. For DaLAJ 1.0 four error categories of 35 available in SweLL are used, all connected to lexical or wordbuilding choices. The dataset is included in the SwedishGlue benchmark. 1 Below, we describe the format of the dataset, our insights and motivation for the chosen approach to data sharing.",{D}a{LAJ} {--} a dataset for linguistic acceptability judgments for {S}wedish,"We present DaLAJ 1.0, a Dataset for Linguistic Acceptability Judgments for Swedish, comprising 9 596 sentences in its first version. DaLAJ is based on the SweLL second language learner data (Volodina et al., 2019), consisting of essays at different levels of proficiency. To make sure the dataset can be freely available despite the GDPR regulations, we have sentence-scrambled learner essays and removed part of the metadata about learners, keeping for each sentence only information about the mother tongue and the level of the course where the essay has been written. We use the normalized version of learner language as the basis for DaLAJ sentences, and keep only one error per sentence. We repeat the same sentence for each individual correction tag used in the sentence. For DaLAJ 1.0 four error categories of 35 available in SweLL are used, all connected to lexical or wordbuilding choices. The dataset is included in the SwedishGlue benchmark. 1 Below, we describe the format of the dataset, our insights and motivation for the chosen approach to data sharing.",DaLAJ -- a dataset for linguistic acceptability judgments for Swedish,"We present DaLAJ 1.0, a Dataset for Linguistic Acceptability Judgments for Swedish, comprising 9 596 sentences in its first version. DaLAJ is based on the SweLL second language learner data (Volodina et al., 2019), consisting of essays at different levels of proficiency. To make sure the dataset can be freely available despite the GDPR regulations, we have sentence-scrambled learner essays and removed part of the metadata about learners, keeping for each sentence only information about the mother tongue and the level of the course where the essay has been written. We use the normalized version of learner language as the basis for DaLAJ sentences, and keep only one error per sentence. We repeat the same sentence for each individual correction tag used in the sentence. For DaLAJ 1.0 four error categories of 35 available in SweLL are used, all connected to lexical or wordbuilding choices. The dataset is included in the SwedishGlue benchmark. 
1 Below, we describe the format of the dataset, our insights and motivation for the chosen approach to data sharing.","This work has been supported by Nationella Språkbanken -jointly funded by its 10 partner institutions and the Swedish Research Council (dnr 2017-00626), as well as partly supported by a grant from the Swedish Riksbankens Jubileumsfond (SweLL -research infrastructure for Swedish as a second language, dnr IN16-0464:1).","DaLAJ -- a dataset for linguistic acceptability judgments for Swedish. We present DaLAJ 1.0, a Dataset for Linguistic Acceptability Judgments for Swedish, comprising 9 596 sentences in its first version. DaLAJ is based on the SweLL second language learner data (Volodina et al., 2019), consisting of essays at different levels of proficiency. To make sure the dataset can be freely available despite the GDPR regulations, we have sentence-scrambled learner essays and removed part of the metadata about learners, keeping for each sentence only information about the mother tongue and the level of the course where the essay has been written. We use the normalized version of learner language as the basis for DaLAJ sentences, and keep only one error per sentence. We repeat the same sentence for each individual correction tag used in the sentence. For DaLAJ 1.0 four error categories of 35 available in SweLL are used, all connected to lexical or wordbuilding choices. The dataset is included in the SwedishGlue benchmark. 1 Below, we describe the format of the dataset, our insights and motivation for the chosen approach to data sharing.",2021
hassan-etal-2011-identifying,https://aclanthology.org/P11-2104,0,,,,,,,"Identifying the Semantic Orientation of Foreign Words. We present a method for identifying the positive or negative semantic orientation of foreign words. Identifying the semantic orientation of words has numerous applications in the areas of text classification, analysis of product review, analysis of responses to surveys, and mining online discussions. Identifying the semantic orientation of English words has been extensively studied in literature. Most of this work assumes the existence of resources (e.g. Wordnet, seeds, etc) that do not exist in foreign languages. In this work, we describe a method based on constructing a multilingual network connecting English and foreign words. We use this network to identify the semantic orientation of foreign words based on connection between words in the same language as well as multilingual connections. The method is experimentally tested using a manually labeled set of positive and negative words and has shown very promising results.",Identifying the Semantic Orientation of Foreign Words,"We present a method for identifying the positive or negative semantic orientation of foreign words. Identifying the semantic orientation of words has numerous applications in the areas of text classification, analysis of product review, analysis of responses to surveys, and mining online discussions. Identifying the semantic orientation of English words has been extensively studied in literature. Most of this work assumes the existence of resources (e.g. Wordnet, seeds, etc) that do not exist in foreign languages. In this work, we describe a method based on constructing a multilingual network connecting English and foreign words. We use this network to identify the semantic orientation of foreign words based on connection between words in the same language as well as multilingual connections. The method is experimentally tested using a manually labeled set of positive and negative words and has shown very promising results.",Identifying the Semantic Orientation of Foreign Words,"We present a method for identifying the positive or negative semantic orientation of foreign words. Identifying the semantic orientation of words has numerous applications in the areas of text classification, analysis of product review, analysis of responses to surveys, and mining online discussions. Identifying the semantic orientation of English words has been extensively studied in literature. Most of this work assumes the existence of resources (e.g. Wordnet, seeds, etc) that do not exist in foreign languages. In this work, we describe a method based on constructing a multilingual network connecting English and foreign words. We use this network to identify the semantic orientation of foreign words based on connection between words in the same language as well as multilingual connections. The method is experimentally tested using a manually labeled set of positive and negative words and has shown very promising results.","This research was funded in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the U.S. Army Research Lab. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of IARPA, the ODNI or the U.S. Government.","Identifying the Semantic Orientation of Foreign Words. 
We present a method for identifying the positive or negative semantic orientation of foreign words. Identifying the semantic orientation of words has numerous applications in the areas of text classification, analysis of product review, analysis of responses to surveys, and mining online discussions. Identifying the semantic orientation of English words has been extensively studied in literature. Most of this work assumes the existence of resources (e.g. Wordnet, seeds, etc) that do not exist in foreign languages. In this work, we describe a method based on constructing a multilingual network connecting English and foreign words. We use this network to identify the semantic orientation of foreign words based on connection between words in the same language as well as multilingual connections. The method is experimentally tested using a manually labeled set of positive and negative words and has shown very promising results.",2011
zhang-chai-2021-hierarchical,https://aclanthology.org/2021.findings-acl.368,0,,,,,,,"Hierarchical Task Learning from Language Instructions with Unified Transformers and Self-Monitoring. Despite recent progress, learning new tasks through language instructions remains an extremely challenging problem. On the ALFRED benchmark for task learning, the published state-of-the-art system only achieves a task success rate of less than 10% in an unseen environment, compared to the human performance of over 90%. To address this issue, this paper takes a closer look at task learning. In a departure from a widely applied end-to-end architecture, we decomposed task learning into three sub-problems: sub-goal planning, scene navigation, and object manipulation; and developed a model HiTUT (short for Hierarchical Tasks via Unified Transformers) that addresses each sub-problem in a unified manner to learn a hierarchical task structure. On the ALFRED benchmark, HiTUT has achieved the best performance with a remarkably higher generalization ability. In the unseen environment, HiTUT achieves over 160% performance gain in success rate compared to the previous state of the art. The explicit representation of task structures also enables an in-depth understanding of the nature of the problem and the ability of the agent, which provides insight for future benchmark development and evaluation.",Hierarchical Task Learning from Language Instructions with Unified Transformers and Self-Monitoring,"Despite recent progress, learning new tasks through language instructions remains an extremely challenging problem. On the ALFRED benchmark for task learning, the published state-of-the-art system only achieves a task success rate of less than 10% in an unseen environment, compared to the human performance of over 90%. To address this issue, this paper takes a closer look at task learning. In a departure from a widely applied end-to-end architecture, we decomposed task learning into three sub-problems: sub-goal planning, scene navigation, and object manipulation; and developed a model HiTUT (short for Hierarchical Tasks via Unified Transformers) that addresses each sub-problem in a unified manner to learn a hierarchical task structure. On the ALFRED benchmark, HiTUT has achieved the best performance with a remarkably higher generalization ability. In the unseen environment, HiTUT achieves over 160% performance gain in success rate compared to the previous state of the art. The explicit representation of task structures also enables an in-depth understanding of the nature of the problem and the ability of the agent, which provides insight for future benchmark development and evaluation.",Hierarchical Task Learning from Language Instructions with Unified Transformers and Self-Monitoring,"Despite recent progress, learning new tasks through language instructions remains an extremely challenging problem. On the ALFRED benchmark for task learning, the published state-of-the-art system only achieves a task success rate of less than 10% in an unseen environment, compared to the human performance of over 90%. To address this issue, this paper takes a closer look at task learning. In a departure from a widely applied end-to-end architecture, we decomposed task learning into three sub-problems: sub-goal planning, scene navigation, and object manipulation; and developed a model HiTUT (short for Hierarchical Tasks via Unified Transformers) that addresses each sub-problem in a unified manner to learn a hierarchical task structure. 
On the ALFRED benchmark, HiTUT has achieved the best performance with a remarkably higher generalization ability. In the unseen environment, HiTUT achieves over 160% performance gain in success rate compared to the previous state of the art. The explicit representation of task structures also enables an in-depth understanding of the nature of the problem and the ability of the agent, which provides insight for future benchmark development and evaluation.",This work is supported by the National Science Foundation (IIS-1949634). The authors would like to thank the anonymous reviewers for their valuable comments and suggestions.,"Hierarchical Task Learning from Language Instructions with Unified Transformers and Self-Monitoring. Despite recent progress, learning new tasks through language instructions remains an extremely challenging problem. On the ALFRED benchmark for task learning, the published state-of-the-art system only achieves a task success rate of less than 10% in an unseen environment, compared to the human performance of over 90%. To address this issue, this paper takes a closer look at task learning. In a departure from a widely applied end-to-end architecture, we decomposed task learning into three sub-problems: sub-goal planning, scene navigation, and object manipulation; and developed a model HiTUT (short for Hierarchical Tasks via Unified Transformers) that addresses each sub-problem in a unified manner to learn a hierarchical task structure. On the ALFRED benchmark, HiTUT has achieved the best performance with a remarkably higher generalization ability. In the unseen environment, HiTUT achieves over 160% performance gain in success rate compared to the previous state of the art. The explicit representation of task structures also enables an in-depth understanding of the nature of the problem and the ability of the agent, which provides insight for future benchmark development and evaluation.",2021
tambouratzis-etal-2012-evaluating,https://aclanthology.org/C12-1157,0,,,,,,,"Evaluating the Translation Accuracy of a Novel Language-Independent MT Methodology. The current paper evaluates the performance of the PRESEMT methodology, which facilitates the creation of machine translation (MT) systems for different language pairs. This methodology aims to develop a hybrid MT system that extracts translation information from large, predominantly monolingual corpora, using pattern recognition techniques. PRESEMT has been designed to have the lowest possible requirements on specialised resources and tools, given that for many languages (especially less widely used ones) only limited linguistic resources are available. In PRESEMT, the main translation process is divided into two phases, the first determining the overall structure of a target language (TL) sentence, and the second disambiguating between alternative translations for words or phrases and establishing local word order. This paper describes the latest version of the system and evaluates its translation accuracy, while also benchmarking the PRESEMT performance by comparing it with other established MT systems using objective measures.",Evaluating the Translation Accuracy of a Novel Language-Independent {MT} Methodology,"The current paper evaluates the performance of the PRESEMT methodology, which facilitates the creation of machine translation (MT) systems for different language pairs. This methodology aims to develop a hybrid MT system that extracts translation information from large, predominantly monolingual corpora, using pattern recognition techniques. PRESEMT has been designed to have the lowest possible requirements on specialised resources and tools, given that for many languages (especially less widely used ones) only limited linguistic resources are available. In PRESEMT, the main translation process is divided into two phases, the first determining the overall structure of a target language (TL) sentence, and the second disambiguating between alternative translations for words or phrases and establishing local word order. This paper describes the latest version of the system and evaluates its translation accuracy, while also benchmarking the PRESEMT performance by comparing it with other established MT systems using objective measures.",Evaluating the Translation Accuracy of a Novel Language-Independent MT Methodology,"The current paper evaluates the performance of the PRESEMT methodology, which facilitates the creation of machine translation (MT) systems for different language pairs. This methodology aims to develop a hybrid MT system that extracts translation information from large, predominantly monolingual corpora, using pattern recognition techniques. PRESEMT has been designed to have the lowest possible requirements on specialised resources and tools, given that for many languages (especially less widely used ones) only limited linguistic resources are available. In PRESEMT, the main translation process is divided into two phases, the first determining the overall structure of a target language (TL) sentence, and the second disambiguating between alternative translations for words or phrases and establishing local word order. 
This paper describes the latest version of the system and evaluates its translation accuracy, while also benchmarking the PRESEMT performance by comparing it with other established MT systems using objective measures.",,"Evaluating the Translation Accuracy of a Novel Language-Independent MT Methodology. The current paper evaluates the performance of the PRESEMT methodology, which facilitates the creation of machine translation (MT) systems for different language pairs. This methodology aims to develop a hybrid MT system that extracts translation information from large, predominantly monolingual corpora, using pattern recognition techniques. PRESEMT has been designed to have the lowest possible requirements on specialised resources and tools, given that for many languages (especially less widely used ones) only limited linguistic resources are available. In PRESEMT, the main translation process is divided into two phases, the first determining the overall structure of a target language (TL) sentence, and the second disambiguating between alternative translations for words or phrases and establishing local word order. This paper describes the latest version of the system and evaluates its translation accuracy, while also benchmarking the PRESEMT performance by comparing it with other established MT systems using objective measures.",2012
rohrer-1986-linguistic,https://aclanthology.org/C86-1084,0,,,,,,,"Linguistic Bases For Machine Translation. My aim in organizing this panel is to stimulate the discussion between researchers working on MT and linguists interested in formal syntax and semantics. I am convinced that a closer cooperation will be fruitful for both sides. I will be talking about experimental MT or MT as a research project and not as a development project.[1] A. The relation between MT and theoretical linguistics. Researchers in MT do not work with linguistic theories which are 'en vogue' today. The two special issues on MT of the journal Computational Linguistics (CL 1985) contain eight contributions of the leading teams. In the bibliography of these articles you don't find names like Chomsky, Montague, Bresnan, Gazdar, Kamp, Barwise, Perry etc.[2] Syntactic theories like GB, GPSG, LFG are not mentioned (with one exception: R. Johnson et al. (1985, p. 165) praise LFG for its 'perspicuous notation', but do not (or not yet) incorporate ideas from LFG into their theory of MT). There are no references whatsoever to recent semantic theories.",Linguistic Bases For Machine Translation,"My aim in organizing this panel is to stimulate the discussion between researchers working on MT and linguists interested in formal syntax and semantics. I am convinced that a closer cooperation will be fruitful for both sides. I will be talking about experimental MT or MT as a research project and not as a development project.[1] A. The relation between MT and theoretical linguistics. Researchers in MT do not work with linguistic theories which are 'en vogue' today. The two special issues on MT of the journal Computational Linguistics (CL 1985) contain eight contributions of the leading teams. In the bibliography of these articles you don't find names like Chomsky, Montague, Bresnan, Gazdar, Kamp, Barwise, Perry etc.[2] Syntactic theories like GB, GPSG, LFG are not mentioned (with one exception: R. Johnson et al. (1985, p. 165) praise LFG for its 'perspicuous notation', but do not (or not yet) incorporate ideas from LFG into their theory of MT). There are no references whatsoever to recent semantic theories.",Linguistic Bases For Machine Translation,"My aim in organizing this panel is to stimulate the discussion between researchers working on MT and linguists interested in formal syntax and semantics. I am convinced that a closer cooperation will be fruitful for both sides. I will be talking about experimental MT or MT as a research project and not as a development project.[1] A. The relation between MT and theoretical linguistics. Researchers in MT do not work with linguistic theories which are 'en vogue' today. The two special issues on MT of the journal Computational Linguistics (CL 1985) contain eight contributions of the leading teams. In the bibliography of these articles you don't find names like Chomsky, Montague, Bresnan, Gazdar, Kamp, Barwise, Perry etc.[2] Syntactic theories like GB, GPSG, LFG are not mentioned (with one exception: R. Johnson et al. (1985, p. 165) praise LFG for its 'perspicuous notation', but do not (or not yet) incorporate ideas from LFG into their theory of MT). There are no references whatsoever to recent semantic theories.",,"Linguistic Bases For Machine Translation. My aim in organizing this panel is to stimulate the discussion between researchers working on MT and linguists interested in formal syntax and semantics. I am convinced that a closer cooperation will be fruitful for both sides. 
I will be talking about experimental MT or MT as a research project and not as a development project.[1] A. The relation between MT and theoretical linguistics. Researchers in MT do not work with linguistic theories which are 'en vogue' today. The two special issues on MT of the journal Computational Linguistics (CL 1985) contain eight contributions of the leading teams. In the bibliography of these articles you don't find names like Chomsky, Montague, Bresnan, Gazdar, Kamp, Barwise, Perry etc.[2] Syntactic theories like GB, GPSG, LFG are not mentioned (with one exception: R. Johnson et al. (1985, p. 165) praise LFG for its 'perspicuous notation', but do not (or not yet) incorporate ideas from LFG into their theory of MT). There are no references whatsoever to recent semantic theories.",1986
gabbard-kulick-2008-construct,https://aclanthology.org/P08-2053,0,,,,,,,"Construct State Modification in the Arabic Treebank. Earlier work in parsing Arabic has speculated that attachment to construct state constructions decreases parsing performance. We make this speculation precise and define the problem of attachment to construct state constructions in the Arabic Treebank. We present the first statistics that quantify the problem. We provide a baseline and the results from a first attempt at a discriminative learning procedure for this task, achieving 80% accuracy.",Construct State Modification in the {A}rabic Treebank,"Earlier work in parsing Arabic has speculated that attachment to construct state constructions decreases parsing performance. We make this speculation precise and define the problem of attachment to construct state constructions in the Arabic Treebank. We present the first statistics that quantify the problem. We provide a baseline and the results from a first attempt at a discriminative learning procedure for this task, achieving 80% accuracy.",Construct State Modification in the Arabic Treebank,"Earlier work in parsing Arabic has speculated that attachment to construct state constructions decreases parsing performance. We make this speculation precise and define the problem of attachment to construct state constructions in the Arabic Treebank. We present the first statistics that quantify the problem. We provide a baseline and the results from a first attempt at a discriminative learning procedure for this task, achieving 80% accuracy.","We thank Mitch Marcus, Ann Bies, Mohamed Maamouri, and the members of the Arabic Treebank project for helpful discussions. This work was supported in part under the GALE program of the Defense Advanced Research Projects Agency, Contract Nos. HR0011-06-C-0022 and HR0011-06-1-0003. The content of this paper does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.","Construct State Modification in the Arabic Treebank. Earlier work in parsing Arabic has speculated that attachment to construct state constructions decreases parsing performance. We make this speculation precise and define the problem of attachment to construct state constructions in the Arabic Treebank. We present the first statistics that quantify the problem. We provide a baseline and the results from a first attempt at a discriminative learning procedure for this task, achieving 80% accuracy.",2008
hedeland-etal-2018-introducing,https://aclanthology.org/L18-1370,0,,,,,,,"Introducing the CLARIN Knowledge Centre for Linguistic Diversity and Language Documentation. The European digital research infrastructure CLARIN (Common Language Resources and Technology Infrastructure) is building a Knowledge Sharing Infrastructure (KSI) to ensure that existing knowledge and expertise is easily available both for the CLARIN community and for the humanities research communities for which CLARIN is being developed. Within the Knowledge Sharing Infrastructure, so called Knowledge Centres comprise one or more physical institutions with particular expertise in certain areas and are committed to providing their expertise in the form of reliable knowledge-sharing services. In this paper, we present the ninth K Centre-the CLARIN Knowledge Centre for Linguistic Diversity and Language Documentation (CKLD)-and the expertise and services provided by the member institutions at the Universities of London (ELAR/SWLI), Cologne (DCH/IfDH/IfL) and Hamburg (HZSK/INEL). The centre offers information on current best practices, available resources and tools, and gives advice on technological and methodological matters for researchers working within relevant fields.",Introducing the {CLARIN} Knowledge Centre for Linguistic Diversity and Language Documentation,"The European digital research infrastructure CLARIN (Common Language Resources and Technology Infrastructure) is building a Knowledge Sharing Infrastructure (KSI) to ensure that existing knowledge and expertise is easily available both for the CLARIN community and for the humanities research communities for which CLARIN is being developed. Within the Knowledge Sharing Infrastructure, so called Knowledge Centres comprise one or more physical institutions with particular expertise in certain areas and are committed to providing their expertise in the form of reliable knowledge-sharing services. In this paper, we present the ninth K Centre-the CLARIN Knowledge Centre for Linguistic Diversity and Language Documentation (CKLD)-and the expertise and services provided by the member institutions at the Universities of London (ELAR/SWLI), Cologne (DCH/IfDH/IfL) and Hamburg (HZSK/INEL). The centre offers information on current best practices, available resources and tools, and gives advice on technological and methodological matters for researchers working within relevant fields.",Introducing the CLARIN Knowledge Centre for Linguistic Diversity and Language Documentation,"The European digital research infrastructure CLARIN (Common Language Resources and Technology Infrastructure) is building a Knowledge Sharing Infrastructure (KSI) to ensure that existing knowledge and expertise is easily available both for the CLARIN community and for the humanities research communities for which CLARIN is being developed. Within the Knowledge Sharing Infrastructure, so called Knowledge Centres comprise one or more physical institutions with particular expertise in certain areas and are committed to providing their expertise in the form of reliable knowledge-sharing services. In this paper, we present the ninth K Centre-the CLARIN Knowledge Centre for Linguistic Diversity and Language Documentation (CKLD)-and the expertise and services provided by the member institutions at the Universities of London (ELAR/SWLI), Cologne (DCH/IfDH/IfL) and Hamburg (HZSK/INEL). 
The centre offers information on current best practices, available resources and tools, and gives advice on technological and methodological matters for researchers working within relevant fields.",,"Introducing the CLARIN Knowledge Centre for Linguistic Diversity and Language Documentation. The European digital research infrastructure CLARIN (Common Language Resources and Technology Infrastructure) is building a Knowledge Sharing Infrastructure (KSI) to ensure that existing knowledge and expertise is easily available both for the CLARIN community and for the humanities research communities for which CLARIN is being developed. Within the Knowledge Sharing Infrastructure, so called Knowledge Centres comprise one or more physical institutions with particular expertise in certain areas and are committed to providing their expertise in the form of reliable knowledge-sharing services. In this paper, we present the ninth K Centre-the CLARIN Knowledge Centre for Linguistic Diversity and Language Documentation (CKLD)-and the expertise and services provided by the member institutions at the Universities of London (ELAR/SWLI), Cologne (DCH/IfDH/IfL) and Hamburg (HZSK/INEL). The centre offers information on current best practices, available resources and tools, and gives advice on technological and methodological matters for researchers working within relevant fields.",2018
wong-etal-2021-cross,https://aclanthology.org/2021.acl-long.548,0,,,,,,,"Cross-replication Reliability - An Empirical Approach to Interpreting Inter-rater Reliability. When collecting annotations and labeled data from humans, a standard practice is to use inter-rater reliability (IRR) as a measure of data goodness (Hallgren, 2012). Metrics such as Krippendorff's alpha or Cohen's kappa are typically required to be above a threshold of 0.6 (Landis and Koch, 1977). These absolute thresholds are unreasonable for crowdsourced data from annotators with high cultural and training variances, especially on subjective topics. We present a new alternative to interpreting IRR that is more empirical and contextualized. It is based upon benchmarking IRR against baseline measures in a replication, one of which is a novel cross-replication reliability (xRR) measure based on Cohen's (1960) kappa. We call this approach the xRR framework. We opensource a replication dataset of 4 million human judgements of facial expressions and analyze it with the proposed framework. We argue this framework can be used to measure the quality of crowdsourced datasets.",Cross-replication Reliability - An Empirical Approach to Interpreting Inter-rater Reliability,"When collecting annotations and labeled data from humans, a standard practice is to use inter-rater reliability (IRR) as a measure of data goodness (Hallgren, 2012). Metrics such as Krippendorff's alpha or Cohen's kappa are typically required to be above a threshold of 0.6 (Landis and Koch, 1977). These absolute thresholds are unreasonable for crowdsourced data from annotators with high cultural and training variances, especially on subjective topics. We present a new alternative to interpreting IRR that is more empirical and contextualized. It is based upon benchmarking IRR against baseline measures in a replication, one of which is a novel cross-replication reliability (xRR) measure based on Cohen's (1960) kappa. We call this approach the xRR framework. We opensource a replication dataset of 4 million human judgements of facial expressions and analyze it with the proposed framework. We argue this framework can be used to measure the quality of crowdsourced datasets.",Cross-replication Reliability - An Empirical Approach to Interpreting Inter-rater Reliability,"When collecting annotations and labeled data from humans, a standard practice is to use inter-rater reliability (IRR) as a measure of data goodness (Hallgren, 2012). Metrics such as Krippendorff's alpha or Cohen's kappa are typically required to be above a threshold of 0.6 (Landis and Koch, 1977). These absolute thresholds are unreasonable for crowdsourced data from annotators with high cultural and training variances, especially on subjective topics. We present a new alternative to interpreting IRR that is more empirical and contextualized. It is based upon benchmarking IRR against baseline measures in a replication, one of which is a novel cross-replication reliability (xRR) measure based on Cohen's (1960) kappa. We call this approach the xRR framework. We opensource a replication dataset of 4 million human judgements of facial expressions and analyze it with the proposed framework. We argue this framework can be used to measure the quality of crowdsourced datasets.",We like to thank Gautam Prasad and Alan Cowen for their work on collecting and sharing the IRep dataset and opensourcing it.,"Cross-replication Reliability - An Empirical Approach to Interpreting Inter-rater Reliability. 
When collecting annotations and labeled data from humans, a standard practice is to use inter-rater reliability (IRR) as a measure of data goodness (Hallgren, 2012). Metrics such as Krippendorff's alpha or Cohen's kappa are typically required to be above a threshold of 0.6 (Landis and Koch, 1977). These absolute thresholds are unreasonable for crowdsourced data from annotators with high cultural and training variances, especially on subjective topics. We present a new alternative to interpreting IRR that is more empirical and contextualized. It is based upon benchmarking IRR against baseline measures in a replication, one of which is a novel cross-replication reliability (xRR) measure based on Cohen's (1960) kappa. We call this approach the xRR framework. We opensource a replication dataset of 4 million human judgements of facial expressions and analyze it with the proposed framework. We argue this framework can be used to measure the quality of crowdsourced datasets.",2021
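The entry above turns on inter-rater reliability metrics such as Cohen's kappa. As an illustration only (not the paper's xRR computation, which benchmarks kappa across full replications of an annotation), here is a minimal sketch of plain Cohen's kappa for two raters; the rater labels and items are invented.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters labelling the same items with nominal categories."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    categories = set(freq_a) | set(freq_b)
    # chance agreement from each rater's marginal label distribution
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Toy usage: two raters, five items (invented facial-expression labels)
rater1 = ["joy", "joy", "anger", "neutral", "joy"]
rater2 = ["joy", "anger", "anger", "neutral", "joy"]
print(cohens_kappa(rater1, rater2))  # about 0.69 on this toy example
```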
gandhe-traum-2010-ive,https://aclanthology.org/W10-4345,0,,,,,,,"I've said it before, and I'll say it again: An empirical investigation of the upper bound of the selection approach to dialogue. We perform a study of existing dialogue corpora to establish the theoretical maximum performance of the selection approach to simulating human dialogue behavior in unseen dialogues. This maximum is the proportion of test utterances for which an exact or approximate match exists in the corresponding training corpus. The results indicate that some domains seem quite suitable for a corpus-based selection approach, with over half of the test utterances having been seen before in the corpus, while other domains show much more novelty compared to previous dialogues.","{I}{'}ve said it before, and {I}{'}ll say it again: An empirical investigation of the upper bound of the selection approach to dialogue","We perform a study of existing dialogue corpora to establish the theoretical maximum performance of the selection approach to simulating human dialogue behavior in unseen dialogues. This maximum is the proportion of test utterances for which an exact or approximate match exists in the corresponding training corpus. The results indicate that some domains seem quite suitable for a corpus-based selection approach, with over half of the test utterances having been seen before in the corpus, while other domains show much more novelty compared to previous dialogues.","I've said it before, and I'll say it again: An empirical investigation of the upper bound of the selection approach to dialogue","We perform a study of existing dialogue corpora to establish the theoretical maximum performance of the selection approach to simulating human dialogue behavior in unseen dialogues. This maximum is the proportion of test utterances for which an exact or approximate match exists in the corresponding training corpus. The results indicate that some domains seem quite suitable for a corpus-based selection approach, with over half of the test utterances having been seen before in the corpus, while other domains show much more novelty compared to previous dialogues.","This work has been sponsored by the U.S. Army Research, Development, and Engineering Command (RDECOM). Statements and opinions expressed do not necessarily reflect the position or the policy of the United States Government, and no official endorsement should be inferred. We would like to thank Ron Artstein and others at ICT for compiling the ICT Corpora used in this study.","I've said it before, and I'll say it again: An empirical investigation of the upper bound of the selection approach to dialogue. We perform a study of existing dialogue corpora to establish the theoretical maximum performance of the selection approach to simulating human dialogue behavior in unseen dialogues. This maximum is the proportion of test utterances for which an exact or approximate match exists in the corresponding training corpus. The results indicate that some domains seem quite suitable for a corpus-based selection approach, with over half of the test utterances having been seen before in the corpus, while other domains show much more novelty compared to previous dialogues.",2010
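The upper bound described in this entry is, in essence, the share of test utterances that already appear (exactly or approximately) in the training corpus. Below is a rough sketch under that reading, using difflib's string similarity as a stand-in for whatever approximate-match criterion the authors applied; the toy corpora and the 0.8 threshold are invented.

```python
import difflib

def selection_upper_bound(train_utts, test_utts, threshold=0.9):
    """Fraction of test utterances with an exact or near match in the training corpus."""
    train_set = set(train_utts)
    hits = 0
    for utt in test_utts:
        if utt in train_set:
            hits += 1
            continue
        # approximate match: best training utterance above the similarity cutoff
        if difflib.get_close_matches(utt, train_utts, n=1, cutoff=threshold):
            hits += 1
    return hits / len(test_utts)

train = ["hello there", "what is your name", "goodbye"]
test = ["hello there", "what's your name", "see you later"]
print(selection_upper_bound(train, test, threshold=0.8))  # 2 of 3 test utterances matched
```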
celano-2020-gradient,https://aclanthology.org/2020.lt4hala-1.19,0,,,,,,,"A Gradient Boosting-Seq2Seq System for Latin POS Tagging and Lemmatization. The paper presents the system used in the EvaLatin shared task to POS tag and lemmatize Latin. It consists of two components. A gradient boosting machine (LightGBM) is used for POS tagging, mainly fed with pre-computed word embeddings of a window of seven contiguous tokens-the token at hand plus the three preceding and following ones-per target feature value. Word embeddings are trained on the texts of the Perseus Digital Library, Patrologia Latina, and Biblioteca Digitale di Testi Tardo Antichi, which together comprise a high number of texts of different genres from the Classical Age to Late Antiquity. Word forms plus the outputted POS labels are used to feed a Seq2Seq algorithm implemented in Keras to predict lemmas. The final shared-task accuracies measured for Classical Latin texts are in line with state-of-the-art POS taggers (∼96%) and lemmatizers (∼95%).",A Gradient Boosting-{S}eq2{S}eq System for {L}atin {POS} Tagging and Lemmatization,"The paper presents the system used in the EvaLatin shared task to POS tag and lemmatize Latin. It consists of two components. A gradient boosting machine (LightGBM) is used for POS tagging, mainly fed with pre-computed word embeddings of a window of seven contiguous tokens-the token at hand plus the three preceding and following ones-per target feature value. Word embeddings are trained on the texts of the Perseus Digital Library, Patrologia Latina, and Biblioteca Digitale di Testi Tardo Antichi, which together comprise a high number of texts of different genres from the Classical Age to Late Antiquity. Word forms plus the outputted POS labels are used to feed a Seq2Seq algorithm implemented in Keras to predict lemmas. The final shared-task accuracies measured for Classical Latin texts are in line with state-of-the-art POS taggers (∼96%) and lemmatizers (∼95%).",A Gradient Boosting-Seq2Seq System for Latin POS Tagging and Lemmatization,"The paper presents the system used in the EvaLatin shared task to POS tag and lemmatize Latin. It consists of two components. A gradient boosting machine (LightGBM) is used for POS tagging, mainly fed with pre-computed word embeddings of a window of seven contiguous tokens-the token at hand plus the three preceding and following ones-per target feature value. Word embeddings are trained on the texts of the Perseus Digital Library, Patrologia Latina, and Biblioteca Digitale di Testi Tardo Antichi, which together comprise a high number of texts of different genres from the Classical Age to Late Antiquity. Word forms plus the outputted POS labels are used to feed a Seq2Seq algorithm implemented in Keras to predict lemmas. The final shared-task accuracies measured for Classical Latin texts are in line with state-of-the-art POS taggers (∼96%) and lemmatizers (∼95%).",,"A Gradient Boosting-Seq2Seq System for Latin POS Tagging and Lemmatization. The paper presents the system used in the EvaLatin shared task to POS tag and lemmatize Latin. It consists of two components. A gradient boosting machine (LightGBM) is used for POS tagging, mainly fed with pre-computed word embeddings of a window of seven contiguous tokens-the token at hand plus the three preceding and following ones-per target feature value. 
Word embeddings are trained on the texts of the Perseus Digital Library, Patrologia Latina, and Biblioteca Digitale di Testi Tardo Antichi, which together comprise a high number of texts of different genres from the Classical Age to Late Antiquity. Word forms plus the outputted POS labels are used to feed a Seq2Seq algorithm implemented in Keras to predict lemmas. The final shared-task accuracies measured for Classical Latin texts are in line with state-of-the-art POS taggers (∼96%) and lemmatizers (∼95%).",2020
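The POS-tagging component of this entry is described as a gradient boosting machine fed with the embeddings of a seven-token window around each target token. Here is a hedged sketch of that feature construction; the embedding table, the 100-dimensional vectors, the Latin toy sentence, and the commented-out LightGBM call are placeholders, not the authors' actual configuration.

```python
import numpy as np

def window_features(tokens, embeddings, dim=100, window=3):
    """Concatenate the embeddings of each token and its `window` neighbours on both
    sides (7 tokens total for window=3), zero-padding at sentence boundaries."""
    pad = np.zeros(dim)
    feats = []
    for i in range(len(tokens)):
        row = [embeddings.get(tokens[j], pad) if 0 <= j < len(tokens) else pad
               for j in range(i - window, i + window + 1)]
        feats.append(np.concatenate(row))
    return np.vstack(feats)

# Hypothetical tiny embedding table and training data
emb = {"arma": np.ones(100), "virumque": np.ones(100) * 0.5, "cano": np.ones(100) * 0.2}
X = window_features(["arma", "virumque", "cano"], emb)
y = ["NOUN", "NOUN", "VERB"]
print(X.shape)  # (3, 700): 7 tokens x 100 dimensions per row

# A LightGBM classifier could then be fit on these features, roughly as:
# import lightgbm as lgb
# clf = lgb.LGBMClassifier(n_estimators=200)
# clf.fit(X, y); clf.predict(X)
```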
pluss-piwek-2016-measuring,https://aclanthology.org/C16-1181,0,,,,,,,"Measuring Non-cooperation in Dialogue. This paper introduces a novel method for measuring non-cooperation in dialogue. The key idea is that linguistic non-cooperation can be measured in terms of the extent to which dialogue participants deviate from conventions regarding the proper introduction and discharging of conversational obligations (e.g., the obligation to respond to a question). Previous work on non-cooperation has focused mainly on non-linguistic task-related non-cooperation or modelled non-cooperation in terms of special rules describing non-cooperative behaviours. In contrast, we start from rules for normal/correct dialogue behaviour-i.e., a dialogue game-which in principle can be derived from a corpus of cooperative dialogues, and provide a quantitative measure for the degree to which participants comply with these rules. We evaluated the model on a corpus of political interviews, with encouraging results. The model predicts accurately the degree of cooperation for one of the two dialogue game roles (interviewer) and also the relative cooperation for both roles (i.e., which interlocutor in the conversation was most cooperative). Being able to measure cooperation has applications in many areas from the analysis-manual, semi and fully automatic-of natural language interactions to human-like virtual personal assistants, tutoring agents, sophisticated dialogue systems, and role-playing virtual humans.",Measuring Non-cooperation in Dialogue,"This paper introduces a novel method for measuring non-cooperation in dialogue. The key idea is that linguistic non-cooperation can be measured in terms of the extent to which dialogue participants deviate from conventions regarding the proper introduction and discharging of conversational obligations (e.g., the obligation to respond to a question). Previous work on non-cooperation has focused mainly on non-linguistic task-related non-cooperation or modelled non-cooperation in terms of special rules describing non-cooperative behaviours. In contrast, we start from rules for normal/correct dialogue behaviour-i.e., a dialogue game-which in principle can be derived from a corpus of cooperative dialogues, and provide a quantitative measure for the degree to which participants comply with these rules. We evaluated the model on a corpus of political interviews, with encouraging results. The model predicts accurately the degree of cooperation for one of the two dialogue game roles (interviewer) and also the relative cooperation for both roles (i.e., which interlocutor in the conversation was most cooperative). Being able to measure cooperation has applications in many areas from the analysis-manual, semi and fully automatic-of natural language interactions to human-like virtual personal assistants, tutoring agents, sophisticated dialogue systems, and role-playing virtual humans.",Measuring Non-cooperation in Dialogue,"This paper introduces a novel method for measuring non-cooperation in dialogue. The key idea is that linguistic non-cooperation can be measured in terms of the extent to which dialogue participants deviate from conventions regarding the proper introduction and discharging of conversational obligations (e.g., the obligation to respond to a question). Previous work on non-cooperation has focused mainly on non-linguistic task-related non-cooperation or modelled non-cooperation in terms of special rules describing non-cooperative behaviours. 
In contrast, we start from rules for normal/correct dialogue behaviour-i.e., a dialogue game-which in principle can be derived from a corpus of cooperative dialogues, and provide a quantitative measure for the degree to which participants comply with these rules. We evaluated the model on a corpus of political interviews, with encouraging results. The model predicts accurately the degree of cooperation for one of the two dialogue game roles (interviewer) and also the relative cooperation for both roles (i.e., which interlocutor in the conversation was most cooperative). Being able to measure cooperation has applications in many areas from the analysis-manual, semi and fully automatic-of natural language interactions to human-like virtual personal assistants, tutoring agents, sophisticated dialogue systems, and role-playing virtual humans.",,"Measuring Non-cooperation in Dialogue. This paper introduces a novel method for measuring non-cooperation in dialogue. The key idea is that linguistic non-cooperation can be measured in terms of the extent to which dialogue participants deviate from conventions regarding the proper introduction and discharging of conversational obligations (e.g., the obligation to respond to a question). Previous work on non-cooperation has focused mainly on non-linguistic task-related non-cooperation or modelled non-cooperation in terms of special rules describing non-cooperative behaviours. In contrast, we start from rules for normal/correct dialogue behaviour-i.e., a dialogue game-which in principle can be derived from a corpus of cooperative dialogues, and provide a quantitative measure for the degree to which participants comply with these rules. We evaluated the model on a corpus of political interviews, with encouraging results. The model predicts accurately the degree of cooperation for one of the two dialogue game roles (interviewer) and also the relative cooperation for both roles (i.e., which interlocutor in the conversation was most cooperative). Being able to measure cooperation has applications in many areas from the analysis-manual, semi and fully automatic-of natural language interactions to human-like virtual personal assistants, tutoring agents, sophisticated dialogue systems, and role-playing virtual humans.",2016
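This entry quantifies non-cooperation as deviation from conventions for introducing and discharging conversational obligations. The toy score below captures only that bare idea (the fraction of introduced obligations that get discharged); the turn encoding, the obligation ids, and the interview example are invented and far simpler than the paper's dialogue-game model.

```python
def cooperation_score(turns):
    """Toy score: share of introduced obligations that are eventually discharged.
    Each turn is (speaker, introduces, discharges), with introduces/discharges as
    sets of obligation ids; a dialogue game would derive these from the utterances."""
    introduced, discharged = set(), set()
    for _speaker, intro, disc in turns:
        introduced |= intro
        discharged |= disc & introduced
    return len(discharged) / len(introduced) if introduced else 1.0

interview = [
    ("IR", {"q1"}, set()),   # interviewer asks a question -> obligation q1
    ("IE", set(), set()),    # interviewee dodges it
    ("IR", {"q2"}, set()),
    ("IE", set(), {"q2"}),   # q2 is answered
]
print(cooperation_score(interview))  # 0.5: half of the obligations were discharged
```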
sun-etal-2020-colake,https://aclanthology.org/2020.coling-main.327,0,,,,,,,"CoLAKE: Contextualized Language and Knowledge Embedding. With the emerging branch of incorporating factual knowledge into pre-trained language models such as BERT, most existing models consider shallow, static, and separately pre-trained entity embeddings, which limits the performance gains of these models. Few works explore the potential of deep contextualized knowledge representation when injecting knowledge. In this paper, we propose the Contextualized Language and Knowledge Embedding (CoLAKE), which jointly learns contextualized representation for both language and knowledge with the extended MLM objective. Instead of injecting only entity embeddings, CoLAKE extracts the knowledge context of an entity from large-scale knowledge bases. To handle the heterogeneity of knowledge context and language context, we integrate them in a unified data structure, word-knowledge graph (WK graph). CoLAKE is pre-trained on large-scale WK graphs with the modified Transformer encoder. We conduct experiments on knowledge-driven tasks, knowledge probing tasks, and language understanding tasks. Experimental results show that CoLAKE outperforms previous counterparts on most of the tasks. Besides, CoLAKE achieves surprisingly high performance on our synthetic task called word-knowledge graph completion, which shows the superiority of simultaneously contextualizing language and knowledge representation. 1 * Work done during internship at Amazon Shanghai AI Lab.",{C}o{LAKE}: Contextualized Language and Knowledge Embedding,"With the emerging branch of incorporating factual knowledge into pre-trained language models such as BERT, most existing models consider shallow, static, and separately pre-trained entity embeddings, which limits the performance gains of these models. Few works explore the potential of deep contextualized knowledge representation when injecting knowledge. In this paper, we propose the Contextualized Language and Knowledge Embedding (CoLAKE), which jointly learns contextualized representation for both language and knowledge with the extended MLM objective. Instead of injecting only entity embeddings, CoLAKE extracts the knowledge context of an entity from large-scale knowledge bases. To handle the heterogeneity of knowledge context and language context, we integrate them in a unified data structure, word-knowledge graph (WK graph). CoLAKE is pre-trained on large-scale WK graphs with the modified Transformer encoder. We conduct experiments on knowledge-driven tasks, knowledge probing tasks, and language understanding tasks. Experimental results show that CoLAKE outperforms previous counterparts on most of the tasks. Besides, CoLAKE achieves surprisingly high performance on our synthetic task called word-knowledge graph completion, which shows the superiority of simultaneously contextualizing language and knowledge representation. 1 * Work done during internship at Amazon Shanghai AI Lab.",CoLAKE: Contextualized Language and Knowledge Embedding,"With the emerging branch of incorporating factual knowledge into pre-trained language models such as BERT, most existing models consider shallow, static, and separately pre-trained entity embeddings, which limits the performance gains of these models. Few works explore the potential of deep contextualized knowledge representation when injecting knowledge. 
In this paper, we propose the Contextualized Language and Knowledge Embedding (CoLAKE), which jointly learns contextualized representation for both language and knowledge with the extended MLM objective. Instead of injecting only entity embeddings, CoLAKE extracts the knowledge context of an entity from large-scale knowledge bases. To handle the heterogeneity of knowledge context and language context, we integrate them in a unified data structure, word-knowledge graph (WK graph). CoLAKE is pre-trained on large-scale WK graphs with the modified Transformer encoder. We conduct experiments on knowledge-driven tasks, knowledge probing tasks, and language understanding tasks. Experimental results show that CoLAKE outperforms previous counterparts on most of the tasks. Besides, CoLAKE achieves surprisingly high performance on our synthetic task called word-knowledge graph completion, which shows the superiority of simultaneously contextualizing language and knowledge representation. 1 * Work done during internship at Amazon Shanghai AI Lab.",,"CoLAKE: Contextualized Language and Knowledge Embedding. With the emerging branch of incorporating factual knowledge into pre-trained language models such as BERT, most existing models consider shallow, static, and separately pre-trained entity embeddings, which limits the performance gains of these models. Few works explore the potential of deep contextualized knowledge representation when injecting knowledge. In this paper, we propose the Contextualized Language and Knowledge Embedding (CoLAKE), which jointly learns contextualized representation for both language and knowledge with the extended MLM objective. Instead of injecting only entity embeddings, CoLAKE extracts the knowledge context of an entity from large-scale knowledge bases. To handle the heterogeneity of knowledge context and language context, we integrate them in a unified data structure, word-knowledge graph (WK graph). CoLAKE is pre-trained on large-scale WK graphs with the modified Transformer encoder. We conduct experiments on knowledge-driven tasks, knowledge probing tasks, and language understanding tasks. Experimental results show that CoLAKE outperforms previous counterparts on most of the tasks. Besides, CoLAKE achieves surprisingly high performance on our synthetic task called word-knowledge graph completion, which shows the superiority of simultaneously contextualizing language and knowledge representation. 1 * Work done during internship at Amazon Shanghai AI Lab.",2020
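CoLAKE's central data structure, as described above, is a word-knowledge graph that merges the token sequence with the knowledge context of its entity mentions. The sketch below only illustrates the general shape of such a graph; the node naming, the purely sequential word-word edges, and the toy Wikidata-style ids are assumptions, not the paper's exact construction.

```python
def build_wk_graph(tokens, mention_spans, kb_triples):
    """Rough word-knowledge graph: word nodes linked in sequence, mention tokens
    linked to their entity node, and each entity linked to the relation/entity
    nodes of its KB triples."""
    edges = set()
    # word-word edges (sequential here; the paper works with a richer context graph)
    for i in range(len(tokens) - 1):
        edges.add((("word", i), ("word", i + 1)))
    # anchor each mention to its entity, then attach that entity's KB neighbourhood
    for start, end, entity in mention_spans:
        for i in range(start, end):
            edges.add((("word", i), ("entity", entity)))
        for head, rel, tail in kb_triples.get(entity, []):
            edges.add((("entity", head), ("relation", rel)))
            edges.add((("relation", rel), ("entity", tail)))
    return edges

tokens = ["Mozart", "was", "born", "in", "Salzburg"]
mentions = [(0, 1, "Q254"), (4, 5, "Q34713")]          # invented mention spans and ids
kb = {"Q254": [("Q254", "place_of_birth", "Q34713")]}  # invented knowledge context
print(len(build_wk_graph(tokens, mentions, kb)))       # 8 edges in this toy graph
```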
harkema-etal-2004-large-scale,https://aclanthology.org/W04-3110,1,,,,health,,,"A Large Scale Terminology Resource for Biomedical Text Processing. In this paper we discuss the design, implementation, and use of Termino, a large scale terminological resource for text processing. Dealing with terminology is a difficult but unavoidable task for language processing applications, such as Information Extraction in technical domains. Complex, heterogeneous information must be stored about large numbers of terms. At the same time term recognition must be performed in realistic times. Termino attempts to reconcile this tension by maintaining a flexible, extensible relational database for storing terminological information and compiling finite state machines from this database to do term lookup. While Termino has been developed for biomedical applications, its general design allows it to be used for term processing in any domain.",A Large Scale Terminology Resource for Biomedical Text Processing,"In this paper we discuss the design, implementation, and use of Termino, a large scale terminological resource for text processing. Dealing with terminology is a difficult but unavoidable task for language processing applications, such as Information Extraction in technical domains. Complex, heterogeneous information must be stored about large numbers of terms. At the same time term recognition must be performed in realistic times. Termino attempts to reconcile this tension by maintaining a flexible, extensible relational database for storing terminological information and compiling finite state machines from this database to do term lookup. While Termino has been developed for biomedical applications, its general design allows it to be used for term processing in any domain.",A Large Scale Terminology Resource for Biomedical Text Processing,"In this paper we discuss the design, implementation, and use of Termino, a large scale terminological resource for text processing. Dealing with terminology is a difficult but unavoidable task for language processing applications, such as Information Extraction in technical domains. Complex, heterogeneous information must be stored about large numbers of terms. At the same time term recognition must be performed in realistic times. Termino attempts to reconcile this tension by maintaining a flexible, extensible relational database for storing terminological information and compiling finite state machines from this database to do term lookup. While Termino has been developed for biomedical applications, its general design allows it to be used for term processing in any domain.",,"A Large Scale Terminology Resource for Biomedical Text Processing. In this paper we discuss the design, implementation, and use of Termino, a large scale terminological resource for text processing. Dealing with terminology is a difficult but unavoidable task for language processing applications, such as Information Extraction in technical domains. Complex, heterogeneous information must be stored about large numbers of terms. At the same time term recognition must be performed in realistic times. Termino attempts to reconcile this tension by maintaining a flexible, extensible relational database for storing terminological information and compiling finite state machines from this database to do term lookup. While Termino has been developed for biomedical applications, its general design allows it to be used for term processing in any domain.",2004
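Termino, per this entry, stores terminological information in a relational database and compiles finite state machines from it for fast term lookup. As a rough stand-in for that compile-then-match pipeline, here is a token-level trie built from a small term table plus a longest-match lookup; the term ids and the example sentence are invented, and a trie is only an approximation of the compiled FSMs.

```python
def compile_terms(term_table):
    """Compile (term_id, term_string) rows into a token-level trie."""
    trie = {}
    for term_id, term in term_table:
        node = trie
        for tok in term.lower().split():
            node = node.setdefault(tok, {})
        node["__id__"] = term_id
    return trie

def lookup(tokens, trie):
    """Longest-match lookup of compiled terms in a token sequence."""
    hits = []
    for i in range(len(tokens)):
        node, j, last = trie, i, None
        while j < len(tokens) and tokens[j].lower() in node:
            node = node[tokens[j].lower()]
            j += 1
            if "__id__" in node:
                last = (i, j, node["__id__"])
        if last:
            hits.append(last)
    return hits

terms = [(1, "heat shock protein"), (2, "protein")]
print(lookup("The heat shock protein binds".split(), compile_terms(terms)))
# [(1, 4, 1), (3, 4, 2)]: spans with their matched term ids
```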
abercrombie-batista-navarro-2020-parlvote,https://aclanthology.org/2020.lrec-1.624,1,,,,peace_justice_and_strong_institutions,,,"ParlVote: A Corpus for Sentiment Analysis of Political Debates. Debate transcripts from the UK Parliament contain information about the positions taken by politicians towards important topics, but are difficult for people to process manually. While sentiment analysis of debate speeches could facilitate understanding of the speakers' stated opinions, datasets currently available for this task are small when compared to the benchmark corpora in other domains. We present ParlVote, a new, larger corpus of parliamentary debate speeches for use in the evaluation of sentiment analysis systems for the political domain. We also perform a number of initial experiments on this dataset, testing a variety of approaches to the classification of sentiment polarity in debate speeches. These include a linear classifier as well as a neural network trained using a transformer word embedding model (BERT), and fine-tuned on the parliamentary speeches. We find that in many scenarios, a linear classifier trained on a bag-of-words text representation achieves the best results. However, with the largest dataset, the transformer-based model combined with a neural classifier provides the best performance. We suggest that further experimentation with classification models and observations of the debate content and structure are required, and that there remains much room for improvement in parliamentary sentiment analysis.",{P}arl{V}ote: A Corpus for Sentiment Analysis of Political Debates,"Debate transcripts from the UK Parliament contain information about the positions taken by politicians towards important topics, but are difficult for people to process manually. While sentiment analysis of debate speeches could facilitate understanding of the speakers' stated opinions, datasets currently available for this task are small when compared to the benchmark corpora in other domains. We present ParlVote, a new, larger corpus of parliamentary debate speeches for use in the evaluation of sentiment analysis systems for the political domain. We also perform a number of initial experiments on this dataset, testing a variety of approaches to the classification of sentiment polarity in debate speeches. These include a linear classifier as well as a neural network trained using a transformer word embedding model (BERT), and fine-tuned on the parliamentary speeches. We find that in many scenarios, a linear classifier trained on a bag-of-words text representation achieves the best results. However, with the largest dataset, the transformer-based model combined with a neural classifier provides the best performance. We suggest that further experimentation with classification models and observations of the debate content and structure are required, and that there remains much room for improvement in parliamentary sentiment analysis.",ParlVote: A Corpus for Sentiment Analysis of Political Debates,"Debate transcripts from the UK Parliament contain information about the positions taken by politicians towards important topics, but are difficult for people to process manually. While sentiment analysis of debate speeches could facilitate understanding of the speakers' stated opinions, datasets currently available for this task are small when compared to the benchmark corpora in other domains. 
We present ParlVote, a new, larger corpus of parliamentary debate speeches for use in the evaluation of sentiment analysis systems for the political domain. We also perform a number of initial experiments on this dataset, testing a variety of approaches to the classification of sentiment polarity in debate speeches. These include a linear classifier as well as a neural network trained using a transformer word embedding model (BERT), and fine-tuned on the parliamentary speeches. We find that in many scenarios, a linear classifier trained on a bag-of-words text representation achieves the best results. However, with the largest dataset, the transformer-based model combined with a neural classifier provides the best performance. We suggest that further experimentation with classification models and observations of the debate content and structure are required, and that there remains much room for improvement in parliamentary sentiment analysis.",The authors would like to thank the anonymous reviewers for their helpful comments. ,"ParlVote: A Corpus for Sentiment Analysis of Political Debates. Debate transcripts from the UK Parliament contain information about the positions taken by politicians towards important topics, but are difficult for people to process manually. While sentiment analysis of debate speeches could facilitate understanding of the speakers' stated opinions, datasets currently available for this task are small when compared to the benchmark corpora in other domains. We present ParlVote, a new, larger corpus of parliamentary debate speeches for use in the evaluation of sentiment analysis systems for the political domain. We also perform a number of initial experiments on this dataset, testing a variety of approaches to the classification of sentiment polarity in debate speeches. These include a linear classifier as well as a neural network trained using a transformer word embedding model (BERT), and fine-tuned on the parliamentary speeches. We find that in many scenarios, a linear classifier trained on a bag-of-words text representation achieves the best results. However, with the largest dataset, the transformer-based model combined with a neural classifier provides the best performance. We suggest that further experimentation with classification models and observations of the debate content and structure are required, and that there remains much room for improvement in parliamentary sentiment analysis.",2020
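The strongest small-data baseline reported in this entry is a linear classifier over a bag-of-words representation. A minimal scikit-learn sketch of such a baseline follows; the toy speeches, vote labels, and hyperparameters are invented and are not drawn from the ParlVote corpus.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented stand-ins for debate speeches labelled by the speaker's vote (1 = aye, 0 = no)
speeches = [
    "I support this motion wholeheartedly",
    "This bill is deeply flawed and I oppose it",
    "The house should back this sensible proposal",
    "I cannot vote for such a damaging measure",
]
votes = [1, 0, 1, 0]

# Bag-of-words (unigrams and bigrams) feeding a linear classifier
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(speeches, votes)
print(model.predict(["I oppose this flawed bill"]))  # likely [0], given the lexical overlap
```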
kibble-van-deemter-2000-coreference,http://www.lrec-conf.org/proceedings/lrec2000/pdf/100.pdf,0,,,,,,,"Coreference Annotation: Whither?. The terms coreference and anaphora tend to be used inconsistently and interchangeably in much empirically-oriented work in NLP, and this threatens to lead to incoherent analyses of texts and arbitrary loss of information. This paper discusses the role of coreference annotation in Information Extraction, focussing on the coreference scheme defined for the MUC-7 evaluation exercise. We point out deficiencies in that scheme and make some suggestions towards a new annotation philosophy.",Coreference Annotation: Whither?,"The terms coreference and anaphora tend to be used inconsistently and interchangeably in much empirically-oriented work in NLP, and this threatens to lead to incoherent analyses of texts and arbitrary loss of information. This paper discusses the role of coreference annotation in Information Extraction, focussing on the coreference scheme defined for the MUC-7 evaluation exercise. We point out deficiencies in that scheme and make some suggestions towards a new annotation philosophy.",Coreference Annotation: Whither?,"The terms coreference and anaphora tend to be used inconsistently and interchangeably in much empirically-oriented work in NLP, and this threatens to lead to incoherent analyses of texts and arbitrary loss of information. This paper discusses the role of coreference annotation in Information Extraction, focussing on the coreference scheme defined for the MUC-7 evaluation exercise. We point out deficiencies in that scheme and make some suggestions towards a new annotation philosophy.","We are grateful to Lynette Hirschman and Breck Baldwin for their very constructive responses to a presentation on the topic of this paper (van Deemter and Kibble, 1999). Rodger Kibble's participation in this research was funded by the UK EPSRC as part of the GNOME (GR/L51126) and RAGS (GR/L77102) projects.","Coreference Annotation: Whither?. The terms coreference and anaphora tend to be used inconsistently and interchangeably in much empirically-oriented work in NLP, and this threatens to lead to incoherent analyses of texts and arbitrary loss of information. This paper discusses the role of coreference annotation in Information Extraction, focussing on the coreference scheme defined for the MUC-7 evaluation exercise. We point out deficiencies in that scheme and make some suggestions towards a new annotation philosophy.",2000
sikdar-gamback-2016-feature,https://aclanthology.org/W16-3922,0,,,,,,,"Feature-Rich Twitter Named Entity Recognition and Classification. Twitter named entity recognition is the process of identifying proper names and classifying them into some predefined labels/categories. The paper introduces a Twitter named entity system using a supervised machine learning approach, namely Conditional Random Fields. A large set of different features was developed and the system was trained using these. The Twitter named entity task can be divided into two parts: i) Named entity extraction from tweets and ii) Twitter name classification into ten different types. For Twitter named entity recognition on unseen test data, our system obtained the second highest F1 score in the shared task: 63.22%. The system performance on the classification task was worse, with an F1 measure of 40.06% on unseen test data, which was the fourth best of the ten systems participating in the shared task.",Feature-Rich {T}witter Named Entity Recognition and Classification,"Twitter named entity recognition is the process of identifying proper names and classifying them into some predefined labels/categories. The paper introduces a Twitter named entity system using a supervised machine learning approach, namely Conditional Random Fields. A large set of different features was developed and the system was trained using these. The Twitter named entity task can be divided into two parts: i) Named entity extraction from tweets and ii) Twitter name classification into ten different types. For Twitter named entity recognition on unseen test data, our system obtained the second highest F1 score in the shared task: 63.22%. The system performance on the classification task was worse, with an F1 measure of 40.06% on unseen test data, which was the fourth best of the ten systems participating in the shared task.",Feature-Rich Twitter Named Entity Recognition and Classification,"Twitter named entity recognition is the process of identifying proper names and classifying them into some predefined labels/categories. The paper introduces a Twitter named entity system using a supervised machine learning approach, namely Conditional Random Fields. A large set of different features was developed and the system was trained using these. The Twitter named entity task can be divided into two parts: i) Named entity extraction from tweets and ii) Twitter name classification into ten different types. For Twitter named entity recognition on unseen test data, our system obtained the second highest F1 score in the shared task: 63.22%. The system performance on the classification task was worse, with an F1 measure of 40.06% on unseen test data, which was the fourth best of the ten systems participating in the shared task.",,"Feature-Rich Twitter Named Entity Recognition and Classification. Twitter named entity recognition is the process of identifying proper names and classifying them into some predefined labels/categories. The paper introduces a Twitter named entity system using a supervised machine learning approach, namely Conditional Random Fields. A large set of different features was developed and the system was trained using these. The Twitter named entity task can be divided into two parts: i) Named entity extraction from tweets and ii) Twitter name classification into ten different types. For Twitter named entity recognition on unseen test data, our system obtained the second highest F1 score in the shared task: 63.22%. 
The system performance on the classification task was worse, with an F1 measure of 40.06% on unseen test data, which was the fourth best of the ten systems participating in the shared task.",2016
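This entry describes a feature-rich CRF for Twitter NER. The sketch below uses sklearn-crfsuite as a stand-in CRF toolkit (the entry does not say which implementation the authors used) and shows only a small slice of the kind of orthographic and contextual features involved; the example tweet and tag names are invented.

```python
import sklearn_crfsuite  # stand-in CRF toolkit, assumed to be installed

def token_features(sent, i):
    """A few orthographic and contextual features for the i-th token of a tweet."""
    word = sent[i]
    feats = {
        "lower": word.lower(),
        "is_title": word.istitle(),
        "is_upper": word.isupper(),
        "has_digit": any(ch.isdigit() for ch in word),
        "prefix3": word[:3],
        "suffix3": word[-3:],
    }
    if i > 0:
        feats["prev_lower"] = sent[i - 1].lower()
    if i < len(sent) - 1:
        feats["next_lower"] = sent[i + 1].lower()
    return feats

train_sents = [["Watching", "the", "Giants", "game", "in", "NYC"]]
train_tags = [["O", "O", "B-sportsteam", "O", "O", "B-geo-loc"]]

X = [[token_features(s, i) for i in range(len(s))] for s in train_sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, train_tags)
print(crf.predict(X))  # tags predicted for the training sentence
```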
barthelemy-2009-karamel,https://aclanthology.org/W09-0802,0,,,,,,,The Karamel System and Semitic Languages: Structured Multi-Tiered Morphology. Karamel is a system for finite-state morphology which is multi-tape and uses a typed Cartesian product to relate tapes in a structured way. It implements statically compiled feature structures. Its language allows the use of regular expressions and Generalized Restriction rules to define multi-tape transducers. Both simultaneous and successive application of local constraints are possible. This system is interesting for describing rich and structured morphologies such as the morphology of Semitic languages.,The {K}aramel System and {S}emitic Languages: Structured Multi-Tiered Morphology,Karamel is a system for finite-state morphology which is multi-tape and uses a typed Cartesian product to relate tapes in a structured way. It implements statically compiled feature structures. Its language allows the use of regular expressions and Generalized Restriction rules to define multi-tape transducers. Both simultaneous and successive application of local constraints are possible. This system is interesting for describing rich and structured morphologies such as the morphology of Semitic languages.,The Karamel System and Semitic Languages: Structured Multi-Tiered Morphology,Karamel is a system for finite-state morphology which is multi-tape and uses a typed Cartesian product to relate tapes in a structured way. It implements statically compiled feature structures. Its language allows the use of regular expressions and Generalized Restriction rules to define multi-tape transducers. Both simultaneous and successive application of local constraints are possible. This system is interesting for describing rich and structured morphologies such as the morphology of Semitic languages.,,The Karamel System and Semitic Languages: Structured Multi-Tiered Morphology. Karamel is a system for finite-state morphology which is multi-tape and uses a typed Cartesian product to relate tapes in a structured way. It implements statically compiled feature structures. Its language allows the use of regular expressions and Generalized Restriction rules to define multi-tape transducers. Both simultaneous and successive application of local constraints are possible. This system is interesting for describing rich and structured morphologies such as the morphology of Semitic languages.,2009
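Karamel relates several morphological tiers (for example a consonantal root and a vocalic pattern) through multi-tape transducers. The toy function below merely interdigitates a triconsonantal root with a CV pattern to convey the multi-tier idea; it is not a transducer and not Karamel's formalism, and the root/pattern strings are illustrative.

```python
def interdigitate(root, pattern):
    """Slot root consonants into the C positions of a vocalic pattern
    (e.g. k-t-b + CaCaC -> katab), a toy view of multi-tier Semitic morphology."""
    consonants = list(root)
    return "".join(consonants.pop(0) if ch == "C" else ch for ch in pattern)

print(interdigitate("ktb", "CaCaC"))  # katab
print(interdigitate("ktb", "CuCiC"))  # kutib
```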
sanchez-badeka-2014-linguistic,https://aclanthology.org/2014.amta-users.1,0,,,,business_use,,,Linguistic QA for MT of user-generated content at eBay. ,Linguistic {QA} for {MT} of user-generated content at e{B}ay,,Linguistic QA for MT of user-generated content at eBay,,,Linguistic QA for MT of user-generated content at eBay. ,2014
skadina-pinnis-2017-nmt,https://aclanthology.org/I17-1038,0,,,,,,,"NMT or SMT: Case Study of a Narrow-domain English-Latvian Post-editing Project. The recent technological shift in machine translation from statistical machine translation (SMT) to neural machine translation (NMT) raises the question of the strengths and weaknesses of NMT. In this paper, we present an analysis of NMT and SMT systems' outputs from narrow domain English-Latvian MT systems that were trained on a rather small amount of data. We analyze post-edits produced by professional translators and manually annotated errors in these outputs. Analysis of post-edits allowed us to conclude that both approaches are comparably successful, allowing for an increase in translators' productivity, with the NMT system showing slightly worse results. Through the analysis of annotated errors, we found that NMT translations are more fluent than SMT translations. However, errors related to accuracy, especially, mistranslation and omission errors, occur more often in NMT outputs. The word form errors, that characterize the morphological richness of Latvian, are frequent for both systems, but slightly fewer in NMT outputs.",{NMT} or {SMT}: Case Study of a Narrow-domain {E}nglish-{L}atvian Post-editing Project,"The recent technological shift in machine translation from statistical machine translation (SMT) to neural machine translation (NMT) raises the question of the strengths and weaknesses of NMT. In this paper, we present an analysis of NMT and SMT systems' outputs from narrow domain English-Latvian MT systems that were trained on a rather small amount of data. We analyze post-edits produced by professional translators and manually annotated errors in these outputs. Analysis of post-edits allowed us to conclude that both approaches are comparably successful, allowing for an increase in translators' productivity, with the NMT system showing slightly worse results. Through the analysis of annotated errors, we found that NMT translations are more fluent than SMT translations. However, errors related to accuracy, especially, mistranslation and omission errors, occur more often in NMT outputs. The word form errors, that characterize the morphological richness of Latvian, are frequent for both systems, but slightly fewer in NMT outputs.",NMT or SMT: Case Study of a Narrow-domain English-Latvian Post-editing Project,"The recent technological shift in machine translation from statistical machine translation (SMT) to neural machine translation (NMT) raises the question of the strengths and weaknesses of NMT. In this paper, we present an analysis of NMT and SMT systems' outputs from narrow domain English-Latvian MT systems that were trained on a rather small amount of data. We analyze post-edits produced by professional translators and manually annotated errors in these outputs. Analysis of post-edits allowed us to conclude that both approaches are comparably successful, allowing for an increase in translators' productivity, with the NMT system showing slightly worse results. Through the analysis of annotated errors, we found that NMT translations are more fluent than SMT translations. However, errors related to accuracy, especially, mistranslation and omission errors, occur more often in NMT outputs. 
The word form errors, that characterize the morphological richness of Latvian, are frequent for both systems, but slightly fewer in NMT outputs.","We would like to thank Tilde's Localization Department for the hard work they did to prepare material for the analysis presented in this paper. The work within the QT21 project has received funding from the European Union under grant agreement n° 645452. The research has been supported by the ICT Competence Centre (www.itkc.lv) within the project ""2.2. Prototype of a Software and Hardware Platform for Integration of Machine Translation in Corporate Infrastructure"" of EU Structural funds, ID n° 1.2.1.1/16/A/007.","NMT or SMT: Case Study of a Narrow-domain English-Latvian Post-editing Project. The recent technological shift in machine translation from statistical machine translation (SMT) to neural machine translation (NMT) raises the question of the strengths and weaknesses of NMT. In this paper, we present an analysis of NMT and SMT systems' outputs from narrow domain English-Latvian MT systems that were trained on a rather small amount of data. We analyze post-edits produced by professional translators and manually annotated errors in these outputs. Analysis of post-edits allowed us to conclude that both approaches are comparably successful, allowing for an increase in translators' productivity, with the NMT system showing slightly worse results. Through the analysis of annotated errors, we found that NMT translations are more fluent than SMT translations. However, errors related to accuracy, especially, mistranslation and omission errors, occur more often in NMT outputs. The word form errors, that characterize the morphological richness of Latvian, are frequent for both systems, but slightly fewer in NMT outputs.",2017
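The entry above rests on analysing post-edits of MT output. A common rough proxy for post-editing effort is the word-level edit distance between the MT hypothesis and its post-edited version, normalised by length (HTER-like); the sketch below shows only that proxy on an invented sentence pair and is not the paper's own productivity or error measurement.

```python
def word_edit_distance(hyp, ref):
    """Word-level Levenshtein distance between an MT hypothesis and its post-edit."""
    h, r = hyp.split(), ref.split()
    d = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        d[i][0] = i
    for j in range(len(r) + 1):
        d[0][j] = j
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            cost = 0 if h[i - 1] == r[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(h)][len(r)]

mt_output = "the patient must takes the medicine daily"   # invented MT output
post_edit = "the patient must take the medicine daily"    # invented post-edit
edits = word_edit_distance(mt_output, post_edit)
print(edits / len(post_edit.split()))  # 1 edit over 7 reference words, about 0.14
```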
blache-2014-challenging,https://aclanthology.org/W14-0501,0,,,,,,,"Challenging incrementality in human language processing: two operations for a cognitive architecture. The description of language complexity and the cognitive load related to the different linguistic phenomena is a key issue for the understanding of language processing. Many studies have focused on the identification of specific parameters that can lead to a simplification or on the contrary to a complexification of the processing (e.g. the different difficulty models proposed in (Gibson, 2000) , (Warren and Gibson, 2002) , (Hawkins, 2001) ). Similarly, different simplification factors can be identified, such as the notion of activation, relying on syntactic priming effects making it possible to predict (or activate) a word (Vasishth, 2003) . Several studies have shown that complexity factors are cumulative (Keller, 2005) , but can be offset by simplification (Blache et al., 2006) . It is therefore necessary to adopt a global point of view of language processing, explaining the interplay between positive and negative cumulativity, in other words compensation effects.
From the computational point of view, some models can account more or less explicitly for these phenomena. This is the case of the Surprisal index (Hale, 2001) , offering for each word an assessment of its integration costs into the syntactic structure. This evaluation is done starting from the probability of the possible solutions. On their side, symbolic approaches also provide an estimation of the activation degree, depending on the number and weight of syntactic relations to the current word (Blache et al., 2006) ; (Blache, 2013) .",Challenging incrementality in human language processing: two operations for a cognitive architecture,"The description of language complexity and the cognitive load related to the different linguistic phenomena is a key issue for the understanding of language processing. Many studies have focused on the identification of specific parameters that can lead to a simplification or on the contrary to a complexification of the processing (e.g. the different difficulty models proposed in (Gibson, 2000) , (Warren and Gibson, 2002) , (Hawkins, 2001) ). Similarly, different simplification factors can be identified, such as the notion of activation, relying on syntactic priming effects making it possible to predict (or activate) a word (Vasishth, 2003) . Several studies have shown that complexity factors are cumulative (Keller, 2005) , but can be offset by simplification (Blache et al., 2006) . It is therefore necessary to adopt a global point of view of language processing, explaining the interplay between positive and negative cumulativity, in other words compensation effects.
From the computational point of view, some models can account more or less explicitly for these phenomena. This is the case of the Surprisal index (Hale, 2001) , offering for each word an assessment of its integration costs into the syntactic structure. This evaluation is done starting from the probability of the possible solutions. On their side, symbolic approaches also provide an estimation of the activation degree, depending on the number and weight of syntactic relations to the current word (Blache et al., 2006) ; (Blache, 2013) .",Challenging incrementality in human language processing: two operations for a cognitive architecture,"The description of language complexity and the cognitive load related to the different linguistic phenomena is a key issue for the understanding of language processing. Many studies have focused on the identification of specific parameters that can lead to a simplification or on the contrary to a complexification of the processing (e.g. the different difficulty models proposed in (Gibson, 2000) , (Warren and Gibson, 2002) , (Hawkins, 2001) ). Similarly, different simplification factors can be identified, such as the notion of activation, relying on syntactic priming effects making it possible to predict (or activate) a word (Vasishth, 2003) . Several studies have shown that complexity factors are cumulative (Keller, 2005) , but can be offset by simplification (Blache et al., 2006) . It is therefore necessary to adopt a global point of view of language processing, explaining the interplay between positive and negative cumulativity, in other words compensation effects.
From the computational point of view, some models can account more or less explicitly for these phenomena. This is the case of the Surprisal index (Hale, 2001) , offering for each word an assessment of its integration costs into the syntactic structure. This evaluation is done starting from the probability of the possible solutions. On their side, symbolic approaches also provide an estimation of the activation degree, depending on the number and weight of syntactic relations to the current word (Blache et al., 2006) ; (Blache, 2013) .","This work, carried out within the Labex BLRI (ANR-11-LABX-0036), has benefited from support from the French government, managed by the French National Agency for Research (ANR), under the project title Investments of the Future A*MIDEX (ANR-11-IDEX-0001-02).","Challenging incrementality in human language processing: two operations for a cognitive architecture. The description of language complexity and the cognitive load related to the different linguistic phenomena is a key issue for the understanding of language processing. Many studies have focused on the identification of specific parameters that can lead to a simplification or on the contrary to a complexification of the processing (e.g. the different difficulty models proposed in (Gibson, 2000) , (Warren and Gibson, 2002) , (Hawkins, 2001) ). Similarly, different simplification factors can be identified, such as the notion of activation, relying on syntactic priming effects making it possible to predict (or activate) a word (Vasishth, 2003) . Several studies have shown that complexity factors are cumulative (Keller, 2005) , but can be offset by simplification (Blache et al., 2006) . It is therefore necessary to adopt a global point of view of language processing, explaining the interplay between positive and negative cumulativity, in other words compensation effects.
From the computational point of view, some models can account more or less explicitly for these phenomena. This is the case of the Surprisal index (Hale, 2001) , offering for each word an assessment of its integration costs into the syntactic structure. This evaluation is done starting from the probability of the possible solutions. On their side, symbolic approaches also provide an estimation of the activation degree, depending on the number and weight of syntactic relations to the current word (Blache et al., 2006) ; (Blache, 2013) .",2014
trojahn-etal-2008-framework,http://www.lrec-conf.org/proceedings/lrec2008/pdf/270_paper.pdf,0,,,,,,,"A Framework for Multilingual Ontology Mapping. In the field of ontology mapping, multilingual ontology mapping is an issue that is not well explored. This paper proposes a framework for mapping of multilingual Description Logics (DL) ontologies. First, the DL source ontology is translated to the target ontology language, using a lexical database or a dictionary, generating a DL translated ontology. The target and the translated ontologies are then used as input for the mapping process. The mappings are computed by specialized agents using different mapping approaches. Next, these agents use argumentation to exchange their local results, in order to agree on the obtained mappings. Based on their preferences and confidence of the arguments, the agents compute their preferred mapping sets. The arguments in such preferred sets are viewed as the set of globally acceptable arguments. A DL mapping ontology is generated as result of the mapping process. In this paper we focus on the process of generating the DL translated ontology.",A Framework for Multilingual Ontology Mapping,"In the field of ontology mapping, multilingual ontology mapping is an issue that is not well explored. This paper proposes a framework for mapping of multilingual Description Logics (DL) ontologies. First, the DL source ontology is translated to the target ontology language, using a lexical database or a dictionary, generating a DL translated ontology. The target and the translated ontologies are then used as input for the mapping process. The mappings are computed by specialized agents using different mapping approaches. Next, these agents use argumentation to exchange their local results, in order to agree on the obtained mappings. Based on their preferences and confidence of the arguments, the agents compute their preferred mapping sets. The arguments in such preferred sets are viewed as the set of globally acceptable arguments. A DL mapping ontology is generated as result of the mapping process. In this paper we focus on the process of generating the DL translated ontology.",A Framework for Multilingual Ontology Mapping,"In the field of ontology mapping, multilingual ontology mapping is an issue that is not well explored. This paper proposes a framework for mapping of multilingual Description Logics (DL) ontologies. First, the DL source ontology is translated to the target ontology language, using a lexical database or a dictionary, generating a DL translated ontology. The target and the translated ontologies are then used as input for the mapping process. The mappings are computed by specialized agents using different mapping approaches. Next, these agents use argumentation to exchange their local results, in order to agree on the obtained mappings. Based on their preferences and confidence of the arguments, the agents compute their preferred mapping sets. The arguments in such preferred sets are viewed as the set of globally acceptable arguments. A DL mapping ontology is generated as result of the mapping process. In this paper we focus on the process of generating the DL translated ontology.",,"A Framework for Multilingual Ontology Mapping. In the field of ontology mapping, multilingual ontology mapping is an issue that is not well explored. This paper proposes a framework for mapping of multilingual Description Logics (DL) ontologies. 
First, the DL source ontology is translated to the target ontology language, using a lexical database or a dictionary, generating a DL translated ontology. The target and the translated ontologies are then used as input for the mapping process. The mappings are computed by specialized agents using different mapping approaches. Next, these agents use argumentation to exchange their local results, in order to agree on the obtained mappings. Based on their preferences and confidence of the arguments, the agents compute their preferred mapping sets. The arguments in such preferred sets are viewed as the set of globally acceptable arguments. A DL mapping ontology is generated as result of the mapping process. In this paper we focus on the process of generating the DL translated ontology.",2008
litman-1986-linguistic,https://aclanthology.org/P86-1033,0,,,,,,,"Linguistic Coherence: A Plan-Based Alternative. To fully understand a sequence of utterances, one must be able to infer implicit relationships between the utterances. Although the identification of sets of utterance relationships forms the basis for many theories of discourse, the formalization and recognition of such relationships has proven to be an extremely difficult computational task. This paper presents a plan-based approach to the representation and recognition of implicit relationships between utterances. Relationships are formulated as discourse plans, which allows their representation in terms of planning operators and their computation via a plan recognition process. By incorporating complex inferential processes relating utterances into a plan-based framework, a formalization and computability not available in the earlier works is provided.",Linguistic Coherence: A Plan-Based Alternative,"To fully understand a sequence of utterances, one must be able to infer implicit relationships between the utterances. Although the identification of sets of utterance relationships forms the basis for many theories of discourse, the formalization and recognition of such relationships has proven to be an extremely difficult computational task. This paper presents a plan-based approach to the representation and recognition of implicit relationships between utterances. Relationships are formulated as discourse plans, which allows their representation in terms of planning operators and their computation via a plan recognition process. By incorporating complex inferential processes relating utterances into a plan-based framework, a formalization and computability not available in the earlier works is provided.",Linguistic Coherence: A Plan-Based Alternative,"To fully understand a sequence of utterances, one must be able to infer implicit relationships between the utterances. Although the identification of sets of utterance relationships forms the basis for many theories of discourse, the formalization and recognition of such relationships has proven to be an extremely difficult computational task. This paper presents a plan-based approach to the representation and recognition of implicit relationships between utterances. Relationships are formulated as discourse plans, which allows their representation in terms of planning operators and their computation via a plan recognition process. By incorporating complex inferential processes relating utterances into a plan-based framework, a formalization and computability not available in the earlier works is provided.","I would like to thank Julia Hirschberg, Marcia Derr, Mark Jones, Mark Kahrs, and Henry Kautz for their helpful comments on drafts of this paper.","Linguistic Coherence: A Plan-Based Alternative. To fully understand a sequence of utterances, one must be able to infer implicit relationships between the utterances. Although the identification of sets of utterance relationships forms the basis for many theories of discourse, the formalization and recognition of such relationships has proven to be an extremely difficult computational task. This paper presents a plan-based approach to the representation and recognition of implicit relationships between utterances. Relationships are formulated as discourse plans, which allows their representation in terms of planning operators and their computation via a plan recognition process. 
By incorporating complex inferential processes relating utterances into a plan-based framework, a formalization and computability not available in the earlier works is provided.",1986
dragoni-2018-neurosent,https://aclanthology.org/S18-1013,1,,,,peace_justice_and_strong_institutions,,,"NEUROSENT-PDI at SemEval-2018 Task 1: Leveraging a Multi-Domain Sentiment Model for Inferring Polarity in Micro-blog Text. This paper describes the NeuroSent system that participated in SemEval 2018 Task 1. Our system takes a supervised approach that builds on neural networks and word embeddings. Word embeddings were built by starting from a repository of user generated reviews. Thus, they are specific for sentiment analysis tasks. Then, tweets are converted in the corresponding vector representation and given as input to the neural network with the aim of learning the different semantics contained in each emotion taken into account by the SemEval task. The output layer has been adapted based on the characteristics of each subtask. Preliminary results obtained on the provided training set are encouraging for pursuing the investigation into this direction.",{NEUROSENT}-{PDI} at {S}em{E}val-2018 Task 1: Leveraging a Multi-Domain Sentiment Model for Inferring Polarity in Micro-blog Text,"This paper describes the NeuroSent system that participated in SemEval 2018 Task 1. Our system takes a supervised approach that builds on neural networks and word embeddings. Word embeddings were built by starting from a repository of user generated reviews. Thus, they are specific for sentiment analysis tasks. Then, tweets are converted in the corresponding vector representation and given as input to the neural network with the aim of learning the different semantics contained in each emotion taken into account by the SemEval task. The output layer has been adapted based on the characteristics of each subtask. Preliminary results obtained on the provided training set are encouraging for pursuing the investigation into this direction.",NEUROSENT-PDI at SemEval-2018 Task 1: Leveraging a Multi-Domain Sentiment Model for Inferring Polarity in Micro-blog Text,"This paper describes the NeuroSent system that participated in SemEval 2018 Task 1. Our system takes a supervised approach that builds on neural networks and word embeddings. Word embeddings were built by starting from a repository of user generated reviews. Thus, they are specific for sentiment analysis tasks. Then, tweets are converted in the corresponding vector representation and given as input to the neural network with the aim of learning the different semantics contained in each emotion taken into account by the SemEval task. The output layer has been adapted based on the characteristics of each subtask. Preliminary results obtained on the provided training set are encouraging for pursuing the investigation into this direction.",,"NEUROSENT-PDI at SemEval-2018 Task 1: Leveraging a Multi-Domain Sentiment Model for Inferring Polarity in Micro-blog Text. This paper describes the NeuroSent system that participated in SemEval 2018 Task 1. Our system takes a supervised approach that builds on neural networks and word embeddings. Word embeddings were built by starting from a repository of user generated reviews. Thus, they are specific for sentiment analysis tasks. Then, tweets are converted in the corresponding vector representation and given as input to the neural network with the aim of learning the different semantics contained in each emotion taken into account by the SemEval task. The output layer has been adapted based on the characteristics of each subtask. 
Preliminary results obtained on the provided training set are encouraging for pursuing the investigation into this direction.",2018
brixey-etal-2017-shihbot,https://aclanthology.org/W17-5544,1,,,,health,,,"SHIHbot: A Facebook chatbot for Sexual Health Information on HIV/AIDS. We present the implementation of an autonomous chatbot, SHIHbot, deployed on Facebook, which answers a wide variety of sexual health questions on HIV/AIDS. The chatbot's response database is compiled from professional medical and public health resources in order to provide reliable information to users. The system's backend is NPCEditor, a response selection platform trained on linked questions and answers; to our knowledge this is the first retrieval-based chatbot deployed on a large public social network.",{SHIH}bot: A {F}acebook chatbot for Sexual Health Information on {HIV}/{AIDS},"We present the implementation of an autonomous chatbot, SHIHbot, deployed on Facebook, which answers a wide variety of sexual health questions on HIV/AIDS. The chatbot's response database is compiled from professional medical and public health resources in order to provide reliable information to users. The system's backend is NPCEditor, a response selection platform trained on linked questions and answers; to our knowledge this is the first retrieval-based chatbot deployed on a large public social network.",SHIHbot: A Facebook chatbot for Sexual Health Information on HIV/AIDS,"We present the implementation of an autonomous chatbot, SHIHbot, deployed on Facebook, which answers a wide variety of sexual health questions on HIV/AIDS. The chatbot's response database is compiled from professional medical and public health resources in order to provide reliable information to users. The system's backend is NPCEditor, a response selection platform trained on linked questions and answers; to our knowledge this is the first retrieval-based chatbot deployed on a large public social network.","Many thanks to Professors Milind Tambe and Eric Rice for helping to develop this work and for promoting artificial intelligence for social good. The first, seventh and eighth authors were supported in part by the U.S. Army; statements and opinions expressed do not necessarily reflect the position or the policy of the United States Government, and no official endorsement should be inferred.","SHIHbot: A Facebook chatbot for Sexual Health Information on HIV/AIDS. We present the implementation of an autonomous chatbot, SHIHbot, deployed on Facebook, which answers a wide variety of sexual health questions on HIV/AIDS. The chatbot's response database is compiled from professional medical and public health resources in order to provide reliable information to users. The system's backend is NPCEditor, a response selection platform trained on linked questions and answers; to our knowledge this is the first retrieval-based chatbot deployed on a large public social network.",2017
do-carmo-2019-edit,https://aclanthology.org/W19-7001,0,,,,,,,"Edit distances do not describe editing, but they can be useful for translation process research. Translation process research (TPR) aims at describing what translators do, and one of the technical dimensions of translators' work is editing (applying detailed changes to text). In this presentation, we will analyze how different methods for process data collection describe editing. We will review keyloggers used in typical TPR applications, track changes used by word processors, and edit rates based on estimation of edit distances. The purpose of this presentation is to discuss the limitations of these methods when describing editing behavior, and to incentivize researchers in looking for ways to present process data in simplified formats, closer to those that describe product data.","Edit distances do not describe editing, but they can be useful for translation process research","Translation process research (TPR) aims at describing what translators do, and one of the technical dimensions of translators' work is editing (applying detailed changes to text). In this presentation, we will analyze how different methods for process data collection describe editing. We will review keyloggers used in typical TPR applications, track changes used by word processors, and edit rates based on estimation of edit distances. The purpose of this presentation is to discuss the limitations of these methods when describing editing behavior, and to incentivize researchers in looking for ways to present process data in simplified formats, closer to those that describe product data.","Edit distances do not describe editing, but they can be useful for translation process research","Translation process research (TPR) aims at describing what translators do, and one of the technical dimensions of translators' work is editing (applying detailed changes to text). In this presentation, we will analyze how different methods for process data collection describe editing. We will review keyloggers used in typical TPR applications, track changes used by word processors, and edit rates based on estimation of edit distances. The purpose of this presentation is to discuss the limitations of these methods when describing editing behavior, and to incentivize researchers in looking for ways to present process data in simplified formats, closer to those that describe product data.",This Project has received funding from the European Union's Horizon 2020 research and innovation programme under the EDGE COFUND Marie Skłodowska-Curie Grant Agreement no. 713567. This publication has emanated from research supported in part by a research grant from Science Foundation Ireland (SFI) under Grant Number 13/RC/2077.,"Edit distances do not describe editing, but they can be useful for translation process research. Translation process research (TPR) aims at describing what translators do, and one of the technical dimensions of translators' work is editing (applying detailed changes to text). In this presentation, we will analyze how different methods for process data collection describe editing. We will review keyloggers used in typical TPR applications, track changes used by word processors, and edit rates based on estimation of edit distances. 
The purpose of this presentation is to discuss the limitations of these methods when describing editing behavior, and to incentivize researchers in looking for ways to present process data in simplified formats, closer to those that describe product data.",2019
huang-etal-2021-seq2emo,https://aclanthology.org/2021.naacl-main.375,0,,,,,,,"Seq2Emo: A Sequence to Multi-Label Emotion Classification Model. Multi-label emotion classification is an important task in NLP and is essential to many applications. In this work, we propose a sequence-to-emotion (Seq2Emo) approach, which implicitly models emotion correlations in a bi-directional decoder. Experiments on SemEval'18 and GoEmotions datasets show that our approach outperforms state-of-the-art methods (without using external data). In particular, Seq2Emo outperforms the binary relevance (BR) and classifier chain (CC) approaches in a fair setting. 1",{S}eq2{E}mo: A Sequence to Multi-Label Emotion Classification Model,"Multi-label emotion classification is an important task in NLP and is essential to many applications. In this work, we propose a sequence-to-emotion (Seq2Emo) approach, which implicitly models emotion correlations in a bi-directional decoder. Experiments on SemEval'18 and GoEmotions datasets show that our approach outperforms state-of-the-art methods (without using external data). In particular, Seq2Emo outperforms the binary relevance (BR) and classifier chain (CC) approaches in a fair setting. 1",Seq2Emo: A Sequence to Multi-Label Emotion Classification Model,"Multi-label emotion classification is an important task in NLP and is essential to many applications. In this work, we propose a sequence-to-emotion (Seq2Emo) approach, which implicitly models emotion correlations in a bi-directional decoder. Experiments on SemEval'18 and GoEmotions datasets show that our approach outperforms state-of-the-art methods (without using external data). In particular, Seq2Emo outperforms the binary relevance (BR) and classifier chain (CC) approaches in a fair setting. 1",We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) under grant Nos. RGPIN-2020-04465 and RGPIN-2020-04440. Chenyang Huang is supported by the Borealis AI Graduate Fellowship Program. Lili Mou and Osmar Zaïane are supported by the Amii Fellow Program and the Canada CIFAR AI Chair Program. This research is also supported in part by Compute Canada (www.computecanada.ca).,"Seq2Emo: A Sequence to Multi-Label Emotion Classification Model. Multi-label emotion classification is an important task in NLP and is essential to many applications. In this work, we propose a sequence-to-emotion (Seq2Emo) approach, which implicitly models emotion correlations in a bi-directional decoder. Experiments on SemEval'18 and GoEmotions datasets show that our approach outperforms state-of-the-art methods (without using external data). In particular, Seq2Emo outperforms the binary relevance (BR) and classifier chain (CC) approaches in a fair setting. 1",2021
tay-etal-2018-attentive,https://aclanthology.org/D18-1381,0,,,,,,,"Attentive Gated Lexicon Reader with Contrastive Contextual Co-Attention for Sentiment Classification. This paper proposes a new neural architecture that exploits readily available sentiment lexicon resources. The key idea is that that incorporating a word-level prior can aid in the representation learning process, eventually improving model performance. To this end, our model employs two distinctly unique components, i.e., (1) we introduce a lexicon-driven contextual attention mechanism to imbue lexicon words with long-range contextual information and (2), we introduce a contrastive co-attention mechanism that models contrasting polarities between all positive and negative words in a sentence. Via extensive experiments, we show that our approach outperforms many other neural baselines on sentiment classification tasks on multiple benchmark datasets. * Denotes equal contribution.",Attentive Gated Lexicon Reader with Contrastive Contextual Co-Attention for Sentiment Classification,"This paper proposes a new neural architecture that exploits readily available sentiment lexicon resources. The key idea is that that incorporating a word-level prior can aid in the representation learning process, eventually improving model performance. To this end, our model employs two distinctly unique components, i.e., (1) we introduce a lexicon-driven contextual attention mechanism to imbue lexicon words with long-range contextual information and (2), we introduce a contrastive co-attention mechanism that models contrasting polarities between all positive and negative words in a sentence. Via extensive experiments, we show that our approach outperforms many other neural baselines on sentiment classification tasks on multiple benchmark datasets. * Denotes equal contribution.",Attentive Gated Lexicon Reader with Contrastive Contextual Co-Attention for Sentiment Classification,"This paper proposes a new neural architecture that exploits readily available sentiment lexicon resources. The key idea is that that incorporating a word-level prior can aid in the representation learning process, eventually improving model performance. To this end, our model employs two distinctly unique components, i.e., (1) we introduce a lexicon-driven contextual attention mechanism to imbue lexicon words with long-range contextual information and (2), we introduce a contrastive co-attention mechanism that models contrasting polarities between all positive and negative words in a sentence. Via extensive experiments, we show that our approach outperforms many other neural baselines on sentiment classification tasks on multiple benchmark datasets. * Denotes equal contribution.",,"Attentive Gated Lexicon Reader with Contrastive Contextual Co-Attention for Sentiment Classification. This paper proposes a new neural architecture that exploits readily available sentiment lexicon resources. The key idea is that that incorporating a word-level prior can aid in the representation learning process, eventually improving model performance. To this end, our model employs two distinctly unique components, i.e., (1) we introduce a lexicon-driven contextual attention mechanism to imbue lexicon words with long-range contextual information and (2), we introduce a contrastive co-attention mechanism that models contrasting polarities between all positive and negative words in a sentence. 
Via extensive experiments, we show that our approach outperforms many other neural baselines on sentiment classification tasks on multiple benchmark datasets. * Denotes equal contribution.",2018
rikters-2015-multi,https://aclanthology.org/W15-4102,0,,,,,,,Multi-system machine translation using online APIs for English-Latvian. This paper describes a hybrid machine translation (HMT) system that employs several online MT system application program interfaces (APIs) forming a Multi-System Machine Translation (MSMT) approach. The goal is to improve the automated translation of English-Latvian texts over each of the individual MT APIs. The selection of the best hypothesis translation is done by calculating the perplexity for each hypothesis. Experiment results show a slight improvement of BLEU score and WER (word error rate).,Multi-system machine translation using online {API}s for {E}nglish-{L}atvian,This paper describes a hybrid machine translation (HMT) system that employs several online MT system application program interfaces (APIs) forming a Multi-System Machine Translation (MSMT) approach. The goal is to improve the automated translation of English-Latvian texts over each of the individual MT APIs. The selection of the best hypothesis translation is done by calculating the perplexity for each hypothesis. Experiment results show a slight improvement of BLEU score and WER (word error rate).,Multi-system machine translation using online APIs for English-Latvian,This paper describes a hybrid machine translation (HMT) system that employs several online MT system application program interfaces (APIs) forming a Multi-System Machine Translation (MSMT) approach. The goal is to improve the automated translation of English-Latvian texts over each of the individual MT APIs. The selection of the best hypothesis translation is done by calculating the perplexity for each hypothesis. Experiment results show a slight improvement of BLEU score and WER (word error rate).,"This research work was supported by the research project ""Optimization methods of large scale statistical models for innovative machine translation technologies"", project financed by The State Education Development Agency (Latvia) and European Regional Development Fund, contract No. 2013/0038/2DP/2.1.1.1.0/13/APIA/ VI-AA/029. The author would also like to thank Inguna Skadiņa for advices and contributions, and the anonymous reviewers for their comments and suggestions.",Multi-system machine translation using online APIs for English-Latvian. This paper describes a hybrid machine translation (HMT) system that employs several online MT system application program interfaces (APIs) forming a Multi-System Machine Translation (MSMT) approach. The goal is to improve the automated translation of English-Latvian texts over each of the individual MT APIs. The selection of the best hypothesis translation is done by calculating the perplexity for each hypothesis. Experiment results show a slight improvement of BLEU score and WER (word error rate).,2015
chandu-etal-2018-code,https://aclanthology.org/W18-3204,0,,,,,,,"Code-Mixed Question Answering Challenge: Crowd-sourcing Data and Techniques. Code-Mixing (CM) is the phenomenon of alternating between two or more languages which is prevalent in bi-and multilingual communities. Most NLP applications today are still designed with the assumption of a single interaction language and are most likely to break given a CM utterance with multiple languages mixed at a morphological, phrase or sentence level. For example, popular commercial search engines do not yet fully understand the intents expressed in CM queries. As a first step towards fostering research which supports CM in NLP applications, we systematically crowd-sourced and curated an evaluation dataset for factoid question answering in three CM languages-Hinglish (Hindi+English), Tenglish (Telugu+English) and Tamlish (Tamil+English) which belong to two language families. We share the details of our data collection process, techniques which were used to avoid inducing lexical bias amongst the crowd workers and other CM specific linguistic properties of the dataset. Our final dataset, which is available freely for research purposes, has 1,694 Hinglish, 2,848 Tamlish and 1,391 Tenglish factoid questions and their answers. We discuss the techniques used by the participants for the first edition of this ongoing challenge.",Code-Mixed Question Answering Challenge: Crowd-sourcing Data and Techniques,"Code-Mixing (CM) is the phenomenon of alternating between two or more languages which is prevalent in bi-and multilingual communities. Most NLP applications today are still designed with the assumption of a single interaction language and are most likely to break given a CM utterance with multiple languages mixed at a morphological, phrase or sentence level. For example, popular commercial search engines do not yet fully understand the intents expressed in CM queries. As a first step towards fostering research which supports CM in NLP applications, we systematically crowd-sourced and curated an evaluation dataset for factoid question answering in three CM languages-Hinglish (Hindi+English), Tenglish (Telugu+English) and Tamlish (Tamil+English) which belong to two language families. We share the details of our data collection process, techniques which were used to avoid inducing lexical bias amongst the crowd workers and other CM specific linguistic properties of the dataset. Our final dataset, which is available freely for research purposes, has 1,694 Hinglish, 2,848 Tamlish and 1,391 Tenglish factoid questions and their answers. We discuss the techniques used by the participants for the first edition of this ongoing challenge.",Code-Mixed Question Answering Challenge: Crowd-sourcing Data and Techniques,"Code-Mixing (CM) is the phenomenon of alternating between two or more languages which is prevalent in bi-and multilingual communities. Most NLP applications today are still designed with the assumption of a single interaction language and are most likely to break given a CM utterance with multiple languages mixed at a morphological, phrase or sentence level. For example, popular commercial search engines do not yet fully understand the intents expressed in CM queries. 
As a first step towards fostering research which supports CM in NLP applications, we systematically crowd-sourced and curated an evaluation dataset for factoid question answering in three CM languages-Hinglish (Hindi+English), Tenglish (Telugu+English) and Tamlish (Tamil+English) which belong to two language families. We share the details of our data collection process, techniques which were used to avoid inducing lexical bias amongst the crowd workers and other CM specific linguistic properties of the dataset. Our final dataset, which is available freely for research purposes, has 1,694 Hinglish, 2,848 Tamlish and 1,391 Tenglish factoid questions and their answers. We discuss the techniques used by the participants for the first edition of this ongoing challenge.",,"Code-Mixed Question Answering Challenge: Crowd-sourcing Data and Techniques. Code-Mixing (CM) is the phenomenon of alternating between two or more languages which is prevalent in bi-and multilingual communities. Most NLP applications today are still designed with the assumption of a single interaction language and are most likely to break given a CM utterance with multiple languages mixed at a morphological, phrase or sentence level. For example, popular commercial search engines do not yet fully understand the intents expressed in CM queries. As a first step towards fostering research which supports CM in NLP applications, we systematically crowd-sourced and curated an evaluation dataset for factoid question answering in three CM languages-Hinglish (Hindi+English), Tenglish (Telugu+English) and Tamlish (Tamil+English) which belong to two language families. We share the details of our data collection process, techniques which were used to avoid inducing lexical bias amongst the crowd workers and other CM specific linguistic properties of the dataset. Our final dataset, which is available freely for research purposes, has 1,694 Hinglish, 2,848 Tamlish and 1,391 Tenglish factoid questions and their answers. We discuss the techniques used by the participants for the first edition of this ongoing challenge.",2018
bourdon-etal-1998-case,https://aclanthology.org/W98-0510,0,,,,,,,"A Case Study in Implementing Dependency-Based Grammars. In creating an English grammar checking software product, we implemented a large-coverage grammar based on the dependency grammar formalism. This implementation required some adaptation of current linguistic description to prevent serious overgeneration of parse trees. Here, • we present one particular example, that of preposition stranding and dangling prepositions, where implementing an alternative to existing linguistic analyses is warranted to limit such overgeneration.",A Case Study in Implementing Dependency-Based Grammars,"In creating an English grammar checking software product, we implemented a large-coverage grammar based on the dependency grammar formalism. This implementation required some adaptation of current linguistic description to prevent serious overgeneration of parse trees. Here, • we present one particular example, that of preposition stranding and dangling prepositions, where implementing an alternative to existing linguistic analyses is warranted to limit such overgeneration.",A Case Study in Implementing Dependency-Based Grammars,"In creating an English grammar checking software product, we implemented a large-coverage grammar based on the dependency grammar formalism. This implementation required some adaptation of current linguistic description to prevent serious overgeneration of parse trees. Here, • we present one particular example, that of preposition stranding and dangling prepositions, where implementing an alternative to existing linguistic analyses is warranted to limit such overgeneration.","We would like to thank Les Logiciels Machina Sapiens inc. for supporting us in writing this paper. We are endebted to all the people, past and present, who have contributed to the development of the grammar checkers.We thank Mary Howatt for editing advice and anonymous reviewers for their useful comments. All errors remain those of the authors.","A Case Study in Implementing Dependency-Based Grammars. In creating an English grammar checking software product, we implemented a large-coverage grammar based on the dependency grammar formalism. This implementation required some adaptation of current linguistic description to prevent serious overgeneration of parse trees. Here, • we present one particular example, that of preposition stranding and dangling prepositions, where implementing an alternative to existing linguistic analyses is warranted to limit such overgeneration.",1998
nn-1981-technical-correspondence,https://aclanthology.org/J81-1005,0,,,,,,,"Technical Correspondence: On the Utility of Computing Inferences in Data Base Query Systems. On the Utility of Computing Inferences in Data Base Query Systems these implementations were significantly more efficient, but checked a somewhat narrower class of presumptions than COOP. 6. Damerau mentions that queries with non-empty responses can also make presumptions. This is certainly true, even in more subtle ways than noted. (For example, ""What is the youngest assistant professors salary?"" presumes that there is more than one assistant professor.) Issues such as these are indeed currently under investigation. Overall, we are pleased to see that Damerau has raised some very important issues and we hope that this exchange will be helpful to the natural language processing community.",Technical Correspondence: On the Utility of Computing Inferences in Data Base Query Systems,"On the Utility of Computing Inferences in Data Base Query Systems these implementations were significantly more efficient, but checked a somewhat narrower class of presumptions than COOP. 6. Damerau mentions that queries with non-empty responses can also make presumptions. This is certainly true, even in more subtle ways than noted. (For example, ""What is the youngest assistant professors salary?"" presumes that there is more than one assistant professor.) Issues such as these are indeed currently under investigation. Overall, we are pleased to see that Damerau has raised some very important issues and we hope that this exchange will be helpful to the natural language processing community.",Technical Correspondence: On the Utility of Computing Inferences in Data Base Query Systems,"On the Utility of Computing Inferences in Data Base Query Systems these implementations were significantly more efficient, but checked a somewhat narrower class of presumptions than COOP. 6. Damerau mentions that queries with non-empty responses can also make presumptions. This is certainly true, even in more subtle ways than noted. (For example, ""What is the youngest assistant professors salary?"" presumes that there is more than one assistant professor.) Issues such as these are indeed currently under investigation. Overall, we are pleased to see that Damerau has raised some very important issues and we hope that this exchange will be helpful to the natural language processing community.",,"Technical Correspondence: On the Utility of Computing Inferences in Data Base Query Systems. On the Utility of Computing Inferences in Data Base Query Systems these implementations were significantly more efficient, but checked a somewhat narrower class of presumptions than COOP. 6. Damerau mentions that queries with non-empty responses can also make presumptions. This is certainly true, even in more subtle ways than noted. (For example, ""What is the youngest assistant professors salary?"" presumes that there is more than one assistant professor.) Issues such as these are indeed currently under investigation. Overall, we are pleased to see that Damerau has raised some very important issues and we hope that this exchange will be helpful to the natural language processing community.",1981
hershcovich-etal-2019-syntactic,https://aclanthology.org/W19-2009,0,,,,,,,"Syntactic Interchangeability in Word Embedding Models. Nearest neighbors in word embedding models are commonly observed to be semantically similar, but the relations between them can vary greatly. We investigate the extent to which word embedding models preserve syntactic interchangeability, as reflected by distances between word vectors, and the effect of hyper-parameters-context window size in particular. We use part of speech (POS) as a proxy for syntactic interchangeability, as generally speaking, words with the same POS are syntactically valid in the same contexts. We also investigate the relationship between interchangeability and similarity as judged by commonly-used word similarity benchmarks, and correlate the result with the performance of word embedding models on these benchmarks. Our results will inform future research and applications in the selection of word embedding model, suggesting a principle for an appropriate selection of the context window size parameter depending on the use-case.",Syntactic Interchangeability in Word Embedding Models,"Nearest neighbors in word embedding models are commonly observed to be semantically similar, but the relations between them can vary greatly. We investigate the extent to which word embedding models preserve syntactic interchangeability, as reflected by distances between word vectors, and the effect of hyper-parameters-context window size in particular. We use part of speech (POS) as a proxy for syntactic interchangeability, as generally speaking, words with the same POS are syntactically valid in the same contexts. We also investigate the relationship between interchangeability and similarity as judged by commonly-used word similarity benchmarks, and correlate the result with the performance of word embedding models on these benchmarks. Our results will inform future research and applications in the selection of word embedding model, suggesting a principle for an appropriate selection of the context window size parameter depending on the use-case.",Syntactic Interchangeability in Word Embedding Models,"Nearest neighbors in word embedding models are commonly observed to be semantically similar, but the relations between them can vary greatly. We investigate the extent to which word embedding models preserve syntactic interchangeability, as reflected by distances between word vectors, and the effect of hyper-parameters-context window size in particular. We use part of speech (POS) as a proxy for syntactic interchangeability, as generally speaking, words with the same POS are syntactically valid in the same contexts. We also investigate the relationship between interchangeability and similarity as judged by commonly-used word similarity benchmarks, and correlate the result with the performance of word embedding models on these benchmarks. Our results will inform future research and applications in the selection of word embedding model, suggesting a principle for an appropriate selection of the context window size parameter depending on the use-case.",We thank the anonymous reviewers for their helpful comments.,"Syntactic Interchangeability in Word Embedding Models. Nearest neighbors in word embedding models are commonly observed to be semantically similar, but the relations between them can vary greatly. 
We investigate the extent to which word embedding models preserve syntactic interchangeability, as reflected by distances between word vectors, and the effect of hyper-parameters-context window size in particular. We use part of speech (POS) as a proxy for syntactic interchangeability, as generally speaking, words with the same POS are syntactically valid in the same contexts. We also investigate the relationship between interchangeability and similarity as judged by commonly-used word similarity benchmarks, and correlate the result with the performance of word embedding models on these benchmarks. Our results will inform future research and applications in the selection of word embedding model, suggesting a principle for an appropriate selection of the context window size parameter depending on the use-case.",2019
treharne-etal-2006-towards,https://aclanthology.org/U06-1025,1,,,,industry_innovation_infrastructure,,,"Towards Cognitive Optimisation of a Search Engine Interface. Search engine interfaces come in a range of variations from the familiar text-based approach to the more experimental graphical systems. It is rare however that psychological or human factors research is undertaken to properly evaluate or optimize the systems, and to the extent this has been done the results have tended to contradict some of the assumptions that have driven search engine design. Our research is focussed on a model in which at least 100 hits are selected from a corpus of documents based on a set of query words and displayed graphically. Matrix manipulation techniques in the SVD/LSA family are used to identify significant dimensions and display documents according to a subset of these dimensions. The research questions we are investigating in this context relate to the computational methods (how to rescale the data), the linguistic information (how to characterize a document), and the visual attributes (which linguistic dimensions to display using which attributes).",Towards Cognitive Optimisation of a Search Engine Interface,"Search engine interfaces come in a range of variations from the familiar text-based approach to the more experimental graphical systems. It is rare however that psychological or human factors research is undertaken to properly evaluate or optimize the systems, and to the extent this has been done the results have tended to contradict some of the assumptions that have driven search engine design. Our research is focussed on a model in which at least 100 hits are selected from a corpus of documents based on a set of query words and displayed graphically. Matrix manipulation techniques in the SVD/LSA family are used to identify significant dimensions and display documents according to a subset of these dimensions. The research questions we are investigating in this context relate to the computational methods (how to rescale the data), the linguistic information (how to characterize a document), and the visual attributes (which linguistic dimensions to display using which attributes).",Towards Cognitive Optimisation of a Search Engine Interface,"Search engine interfaces come in a range of variations from the familiar text-based approach to the more experimental graphical systems. It is rare however that psychological or human factors research is undertaken to properly evaluate or optimize the systems, and to the extent this has been done the results have tended to contradict some of the assumptions that have driven search engine design. Our research is focussed on a model in which at least 100 hits are selected from a corpus of documents based on a set of query words and displayed graphically. Matrix manipulation techniques in the SVD/LSA family are used to identify significant dimensions and display documents according to a subset of these dimensions. The research questions we are investigating in this context relate to the computational methods (how to rescale the data), the linguistic information (how to characterize a document), and the visual attributes (which linguistic dimensions to display using which attributes).",,"Towards Cognitive Optimisation of a Search Engine Interface. Search engine interfaces come in a range of variations from the familiar text-based approach to the more experimental graphical systems. 
It is rare however that psychological or human factors research is undertaken to properly evaluate or optimize the systems, and to the extent this has been done the results have tended to contradict some of the assumptions that have driven search engine design. Our research is focussed on a model in which at least 100 hits are selected from a corpus of documents based on a set of query words and displayed graphically. Matrix manipulation techniques in the SVD/LSA family are used to identify significant dimensions and display documents according to a subset of these dimensions. The research questions we are investigating in this context relate to the computational methods (how to rescale the data), the linguistic information (how to characterize a document), and the visual attributes (which linguistic dimensions to display using which attributes).",2006
nielsen-2019-danish,https://aclanthology.org/2019.gwc-1.5,0,,,,,,,Danish in Wikidata lexemes. Wikidata introduced support for lexicographic data in 2018. Here we describe the lexicographic part of Wikidata as well as experiences with setting up lexemes for the Danish language. We note various possible annotations for lexemes as well as discuss various choices made.,{D}anish in {W}ikidata lexemes,Wikidata introduced support for lexicographic data in 2018. Here we describe the lexicographic part of Wikidata as well as experiences with setting up lexemes for the Danish language. We note various possible annotations for lexemes as well as discuss various choices made.,Danish in Wikidata lexemes,Wikidata introduced support for lexicographic data in 2018. Here we describe the lexicographic part of Wikidata as well as experiences with setting up lexemes for the Danish language. We note various possible annotations for lexemes as well as discuss various choices made.,"We thank Bolette Sandford Pedersen, Sanni Nimb, Sabine Kirchmeier, Nicolai Hartvig Sørensen and Lars Kai Hansen for discussions and answering questions, and the reviewers for suggestions for improvement of the manuscript. This work is funded by the Innovation Fund Denmark through the projects DAnish Center for Big Data Analytics driven Innovation (DABAI) and Teaching platform for developing and automatically tracking early stage literacy skills (ATEL).",Danish in Wikidata lexemes. Wikidata introduced support for lexicographic data in 2018. Here we describe the lexicographic part of Wikidata as well as experiences with setting up lexemes for the Danish language. We note various possible annotations for lexemes as well as discuss various choices made.,2019
hur-etal-2020-domain,https://aclanthology.org/2020.bionlp-1.17,1,,,,health,,,"Domain Adaptation and Instance Selection for Disease Syndrome Classification over Veterinary Clinical Notes. Identifying the reasons for antibiotic administration in veterinary records is a critical component of understanding antimicrobial usage patterns. This informs antimicrobial stewardship programs designed to fight antimicrobial resistance, a major health crisis affecting both humans and animals in which veterinarians have an important role to play. We propose a document classification approach to determine the reason for administration of a given drug, with particular focus on domain adaptation from one drug to another, and instance selection to minimize annotation effort.",Domain Adaptation and Instance Selection for Disease Syndrome Classification over Veterinary Clinical Notes,"Identifying the reasons for antibiotic administration in veterinary records is a critical component of understanding antimicrobial usage patterns. This informs antimicrobial stewardship programs designed to fight antimicrobial resistance, a major health crisis affecting both humans and animals in which veterinarians have an important role to play. We propose a document classification approach to determine the reason for administration of a given drug, with particular focus on domain adaptation from one drug to another, and instance selection to minimize annotation effort.",Domain Adaptation and Instance Selection for Disease Syndrome Classification over Veterinary Clinical Notes,"Identifying the reasons for antibiotic administration in veterinary records is a critical component of understanding antimicrobial usage patterns. This informs antimicrobial stewardship programs designed to fight antimicrobial resistance, a major health crisis affecting both humans and animals in which veterinarians have an important role to play. We propose a document classification approach to determine the reason for administration of a given drug, with particular focus on domain adaptation from one drug to another, and instance selection to minimize annotation effort.","We thank Simon Sȗster, Afshin Rahimi, and the anonymous reviewers for their insightful comments and valuable suggestions.This research was undertaken with the assistance of information and other resources from the VetCompass Australia consortium under the project ""VetCompass Australia: Big Data and Realtime Surveillance for Veterinary Science"", which is supported by the Australian Government through the Australian Research Council LIEF scheme (LE160100026).","Domain Adaptation and Instance Selection for Disease Syndrome Classification over Veterinary Clinical Notes. Identifying the reasons for antibiotic administration in veterinary records is a critical component of understanding antimicrobial usage patterns. This informs antimicrobial stewardship programs designed to fight antimicrobial resistance, a major health crisis affecting both humans and animals in which veterinarians have an important role to play. We propose a document classification approach to determine the reason for administration of a given drug, with particular focus on domain adaptation from one drug to another, and instance selection to minimize annotation effort.",2020
korkontzelos-etal-2009-graph,https://aclanthology.org/W09-1705,0,,,,,,,"Graph Connectivity Measures for Unsupervised Parameter Tuning of Graph-Based Sense Induction Systems.. Word Sense Induction (WSI) is the task of identifying the different senses (uses) of a target word in a given text. This paper focuses on the unsupervised estimation of the free parameters of a graph-based WSI method, and explores the use of eight Graph Connectivity Measures (GCM) that assess the degree of connectivity in a graph. Given a target word and a set of parameters, GCM evaluate the connectivity of the produced clusters, which correspond to subgraphs of the initial (unclustered) graph. Each parameter setting is assigned a score according to one of the GCM and the highest scoring setting is then selected. Our evaluation on the nouns of SemEval-2007 WSI task (SWSI) shows that: (1) all GCM estimate a set of parameters which significantly outperform the worst performing parameter setting in both SWSI evaluation schemes, (2) all GCM estimate a set of parameters which outperform the Most Frequent Sense (MFS) baseline by a statistically significant amount in the supervised evaluation scheme, and (3) two of the measures estimate a set of parameters that performs closely to a set of parameters estimated in supervised manner.",Graph Connectivity Measures for Unsupervised Parameter Tuning of Graph-Based Sense Induction Systems.,"Word Sense Induction (WSI) is the task of identifying the different senses (uses) of a target word in a given text. This paper focuses on the unsupervised estimation of the free parameters of a graph-based WSI method, and explores the use of eight Graph Connectivity Measures (GCM) that assess the degree of connectivity in a graph. Given a target word and a set of parameters, GCM evaluate the connectivity of the produced clusters, which correspond to subgraphs of the initial (unclustered) graph. Each parameter setting is assigned a score according to one of the GCM and the highest scoring setting is then selected. Our evaluation on the nouns of SemEval-2007 WSI task (SWSI) shows that: (1) all GCM estimate a set of parameters which significantly outperform the worst performing parameter setting in both SWSI evaluation schemes, (2) all GCM estimate a set of parameters which outperform the Most Frequent Sense (MFS) baseline by a statistically significant amount in the supervised evaluation scheme, and (3) two of the measures estimate a set of parameters that performs closely to a set of parameters estimated in supervised manner.",Graph Connectivity Measures for Unsupervised Parameter Tuning of Graph-Based Sense Induction Systems.,"Word Sense Induction (WSI) is the task of identifying the different senses (uses) of a target word in a given text. This paper focuses on the unsupervised estimation of the free parameters of a graph-based WSI method, and explores the use of eight Graph Connectivity Measures (GCM) that assess the degree of connectivity in a graph. Given a target word and a set of parameters, GCM evaluate the connectivity of the produced clusters, which correspond to subgraphs of the initial (unclustered) graph. Each parameter setting is assigned a score according to one of the GCM and the highest scoring setting is then selected. 
Our evaluation on the nouns of SemEval-2007 WSI task (SWSI) shows that: (1) all GCM estimate a set of parameters which significantly outperform the worst performing parameter setting in both SWSI evaluation schemes, (2) all GCM estimate a set of parameters which outperform the Most Frequent Sense (MFS) baseline by a statistically significant amount in the supervised evaluation scheme, and (3) two of the measures estimate a set of parameters that performs closely to a set of parameters estimated in supervised manner.",,"Graph Connectivity Measures for Unsupervised Parameter Tuning of Graph-Based Sense Induction Systems.. Word Sense Induction (WSI) is the task of identifying the different senses (uses) of a target word in a given text. This paper focuses on the unsupervised estimation of the free parameters of a graph-based WSI method, and explores the use of eight Graph Connectivity Measures (GCM) that assess the degree of connectivity in a graph. Given a target word and a set of parameters, GCM evaluate the connectivity of the produced clusters, which correspond to subgraphs of the initial (unclustered) graph. Each parameter setting is assigned a score according to one of the GCM and the highest scoring setting is then selected. Our evaluation on the nouns of SemEval-2007 WSI task (SWSI) shows that: (1) all GCM estimate a set of parameters which significantly outperform the worst performing parameter setting in both SWSI evaluation schemes, (2) all GCM estimate a set of parameters which outperform the Most Frequent Sense (MFS) baseline by a statistically significant amount in the supervised evaluation scheme, and (3) two of the measures estimate a set of parameters that performs closely to a set of parameters estimated in supervised manner.",2009
yan-nakashole-2021-grounded,https://aclanthology.org/2021.nlp4posimpact-1.16,0,,,,,,,"A Grounded Well-being Conversational Agent with Multiple Interaction Modes: Preliminary Results. Technologies for enhancing well-being, healthcare vigilance and monitoring are on the rise. However, despite patient interest, such technologies suffer from low adoption. One hypothesis for this limited adoption is loss of human interaction that is central to doctorpatient encounters. In this paper we seek to address this limitation via a conversational agent that adopts one aspect of in-person doctor-patient interactions: A human avatar to facilitate medical grounded question answering. This is akin to the in-person scenario where the doctor may point to the human body or the patient may point to their own body to express their conditions. Additionally, our agent has multiple interaction modes, that may give more options for the patient to use the agent, not just for medical question answering, but also to engage in conversations about general topics and current events. Both the avatar, and the multiple interaction modes could help improve adherence. We present a high level overview of the design of our agent, Marie Bot Wellbeing. We also report implementation details of our early prototype , and present preliminary results.",A Grounded Well-being Conversational Agent with Multiple Interaction Modes: Preliminary Results,"Technologies for enhancing well-being, healthcare vigilance and monitoring are on the rise. However, despite patient interest, such technologies suffer from low adoption. One hypothesis for this limited adoption is loss of human interaction that is central to doctorpatient encounters. In this paper we seek to address this limitation via a conversational agent that adopts one aspect of in-person doctor-patient interactions: A human avatar to facilitate medical grounded question answering. This is akin to the in-person scenario where the doctor may point to the human body or the patient may point to their own body to express their conditions. Additionally, our agent has multiple interaction modes, that may give more options for the patient to use the agent, not just for medical question answering, but also to engage in conversations about general topics and current events. Both the avatar, and the multiple interaction modes could help improve adherence. We present a high level overview of the design of our agent, Marie Bot Wellbeing. We also report implementation details of our early prototype , and present preliminary results.",A Grounded Well-being Conversational Agent with Multiple Interaction Modes: Preliminary Results,"Technologies for enhancing well-being, healthcare vigilance and monitoring are on the rise. However, despite patient interest, such technologies suffer from low adoption. One hypothesis for this limited adoption is loss of human interaction that is central to doctorpatient encounters. In this paper we seek to address this limitation via a conversational agent that adopts one aspect of in-person doctor-patient interactions: A human avatar to facilitate medical grounded question answering. This is akin to the in-person scenario where the doctor may point to the human body or the patient may point to their own body to express their conditions. Additionally, our agent has multiple interaction modes, that may give more options for the patient to use the agent, not just for medical question answering, but also to engage in conversations about general topics and current events. 
Both the avatar and the multiple interaction modes could help improve adherence. We present a high-level overview of the design of our agent, Marie Bot Wellbeing. We also report implementation details of our early prototype, and present preliminary results.",,"A Grounded Well-being Conversational Agent with Multiple Interaction Modes: Preliminary Results. Technologies for enhancing well-being, healthcare vigilance and monitoring are on the rise. However, despite patient interest, such technologies suffer from low adoption. One hypothesis for this limited adoption is loss of human interaction that is central to doctor-patient encounters. In this paper we seek to address this limitation via a conversational agent that adopts one aspect of in-person doctor-patient interactions: A human avatar to facilitate medical grounded question answering. This is akin to the in-person scenario where the doctor may point to the human body or the patient may point to their own body to express their conditions. Additionally, our agent has multiple interaction modes that may give more options for the patient to use the agent, not just for medical question answering, but also to engage in conversations about general topics and current events. Both the avatar and the multiple interaction modes could help improve adherence. We present a high-level overview of the design of our agent, Marie Bot Wellbeing. We also report implementation details of our early prototype, and present preliminary results.",2021
qian-etal-2019-comparative,https://aclanthology.org/W19-6714,1,,,,peace_justice_and_strong_institutions,,,"A Comparative Study of English-Chinese Translations of Court Texts by Machine and Human Translators and the Word2Vec Based Similarity Measure's Ability To Gauge Human Evaluation Biases. In this comparative study, a jury instruction scenario was used to test the translating capabilities of multiple machine translation tools and a human translator with extensive court experience. Three certified translators/interpreters subjectively evaluated the target texts generated using adequacy and fluency as the evaluation metrics. This subjective evaluation found that the machine generated results had much poorer adequacy and fluency compared with results produced by their human counterpart. Human translators can use strategic omission and explicitation strategies such as addition, paraphrasing, substitution, and repetition to remove ambiguity, and achieve a natural flow in the target language. We also investigate instances where human evaluators have major disagreements and found that human experts could have very biased views. On the other hand, a word2vec based algorithm, if given a good reference translation, can serve as a robust and reliable similarity reference to quantify human evalutors' biases beacuse it was trained on a large corpus using neural network models. Even though the machine generated versions had better fluency performance compared to their adequacy",A Comparative Study of {E}nglish-{C}hinese Translations of Court Texts by Machine and Human Translators and the {W}ord2{V}ec Based Similarity Measure{'}s Ability To Gauge Human Evaluation Biases,"In this comparative study, a jury instruction scenario was used to test the translating capabilities of multiple machine translation tools and a human translator with extensive court experience. Three certified translators/interpreters subjectively evaluated the target texts generated using adequacy and fluency as the evaluation metrics. This subjective evaluation found that the machine generated results had much poorer adequacy and fluency compared with results produced by their human counterpart. Human translators can use strategic omission and explicitation strategies such as addition, paraphrasing, substitution, and repetition to remove ambiguity, and achieve a natural flow in the target language. We also investigate instances where human evaluators have major disagreements and found that human experts could have very biased views. On the other hand, a word2vec based algorithm, if given a good reference translation, can serve as a robust and reliable similarity reference to quantify human evalutors' biases beacuse it was trained on a large corpus using neural network models. Even though the machine generated versions had better fluency performance compared to their adequacy",A Comparative Study of English-Chinese Translations of Court Texts by Machine and Human Translators and the Word2Vec Based Similarity Measure's Ability To Gauge Human Evaluation Biases,"In this comparative study, a jury instruction scenario was used to test the translating capabilities of multiple machine translation tools and a human translator with extensive court experience. Three certified translators/interpreters subjectively evaluated the target texts generated using adequacy and fluency as the evaluation metrics. 
This subjective evaluation found that the machine generated results had much poorer adequacy and fluency compared with results produced by their human counterpart. Human translators can use strategic omission and explicitation strategies such as addition, paraphrasing, substitution, and repetition to remove ambiguity, and achieve a natural flow in the target language. We also investigate instances where human evaluators have major disagreements and found that human experts could have very biased views. On the other hand, a word2vec based algorithm, if given a good reference translation, can serve as a robust and reliable similarity reference to quantify human evaluators' biases because it was trained on a large corpus using neural network models. Even though the machine generated versions had better fluency performance compared to their adequacy",,"A Comparative Study of English-Chinese Translations of Court Texts by Machine and Human Translators and the Word2Vec Based Similarity Measure's Ability To Gauge Human Evaluation Biases. In this comparative study, a jury instruction scenario was used to test the translating capabilities of multiple machine translation tools and a human translator with extensive court experience. Three certified translators/interpreters subjectively evaluated the target texts generated using adequacy and fluency as the evaluation metrics. This subjective evaluation found that the machine generated results had much poorer adequacy and fluency compared with results produced by their human counterpart. Human translators can use strategic omission and explicitation strategies such as addition, paraphrasing, substitution, and repetition to remove ambiguity, and achieve a natural flow in the target language. We also investigate instances where human evaluators have major disagreements and found that human experts could have very biased views. On the other hand, a word2vec based algorithm, if given a good reference translation, can serve as a robust and reliable similarity reference to quantify human evaluators' biases because it was trained on a large corpus using neural network models. Even though the machine generated versions had better fluency performance compared to their adequacy",2019
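A word2vec-based similarity reference of the kind mentioned in this row is commonly realized as the cosine between averaged word vectors of a candidate and a reference translation. The sketch below assumes an in-memory `embeddings` dict (word to vector) standing in for a pretrained word2vec model; it is not the authors' implementation or data.

```python
# Sketch of a word2vec-style similarity reference between a candidate translation
# and a reference translation: cosine similarity of the averaged word vectors.
# `embeddings` is a stand-in for a pretrained word2vec lookup (word -> vector).
import numpy as np

def mean_vector(tokens, embeddings, dim):
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def similarity(candidate_tokens, reference_tokens, embeddings, dim=300):
    a = mean_vector(candidate_tokens, embeddings, dim)
    b = mean_vector(reference_tokens, embeddings, dim)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```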
liu-etal-2022-end,https://aclanthology.org/2022.findings-acl.46,0,,,,,,,"End-to-End Segmentation-based News Summarization. In this paper, we bring a new way of digesting news content by introducing the task of segmenting a news article into multiple sections and generating the corresponding summary to each section. We make two contributions towards this new task. First, we create and make available a dataset, SEGNEWS, consisting of 27k news articles with sections and aligned heading-style section summaries. Second, we propose a novel segmentation-based language generation model adapted from pretrained language models that can jointly segment a document and produce the summary for each section. Experimental results on SEG-NEWS demonstrate that our model can outperform several state-of-the-art sequence-tosequence generation models for this new task.",End-to-End Segmentation-based News Summarization,"In this paper, we bring a new way of digesting news content by introducing the task of segmenting a news article into multiple sections and generating the corresponding summary to each section. We make two contributions towards this new task. First, we create and make available a dataset, SEGNEWS, consisting of 27k news articles with sections and aligned heading-style section summaries. Second, we propose a novel segmentation-based language generation model adapted from pretrained language models that can jointly segment a document and produce the summary for each section. Experimental results on SEG-NEWS demonstrate that our model can outperform several state-of-the-art sequence-tosequence generation models for this new task.",End-to-End Segmentation-based News Summarization,"In this paper, we bring a new way of digesting news content by introducing the task of segmenting a news article into multiple sections and generating the corresponding summary to each section. We make two contributions towards this new task. First, we create and make available a dataset, SEGNEWS, consisting of 27k news articles with sections and aligned heading-style section summaries. Second, we propose a novel segmentation-based language generation model adapted from pretrained language models that can jointly segment a document and produce the summary for each section. Experimental results on SEG-NEWS demonstrate that our model can outperform several state-of-the-art sequence-tosequence generation models for this new task.",,"End-to-End Segmentation-based News Summarization. In this paper, we bring a new way of digesting news content by introducing the task of segmenting a news article into multiple sections and generating the corresponding summary to each section. We make two contributions towards this new task. First, we create and make available a dataset, SEGNEWS, consisting of 27k news articles with sections and aligned heading-style section summaries. Second, we propose a novel segmentation-based language generation model adapted from pretrained language models that can jointly segment a document and produce the summary for each section. Experimental results on SEG-NEWS demonstrate that our model can outperform several state-of-the-art sequence-tosequence generation models for this new task.",2022
schulze-wettendorf-etal-2014-snap,https://aclanthology.org/S14-2101,0,,,,,,,"SNAP: A Multi-Stage XML-Pipeline for Aspect Based Sentiment Analysis. This paper describes the SNAP system, which participated in Task 4 of SemEval-2014: Aspect Based Sentiment Analysis. We use an XML-based pipeline that combines several independent components to perform each subtask. Key resources used by the system are Bing Liu's sentiment lexicon, Stanford CoreNLP, RFTagger, several machine learning algorithms and WordNet. SNAP achieved satisfactory results in the evaluation, placing in the top half of the field for most subtasks.",{SNAP}: A Multi-Stage {XML}-Pipeline for Aspect Based Sentiment Analysis,"This paper describes the SNAP system, which participated in Task 4 of SemEval-2014: Aspect Based Sentiment Analysis. We use an XML-based pipeline that combines several independent components to perform each subtask. Key resources used by the system are Bing Liu's sentiment lexicon, Stanford CoreNLP, RFTagger, several machine learning algorithms and WordNet. SNAP achieved satisfactory results in the evaluation, placing in the top half of the field for most subtasks.",SNAP: A Multi-Stage XML-Pipeline for Aspect Based Sentiment Analysis,"This paper describes the SNAP system, which participated in Task 4 of SemEval-2014: Aspect Based Sentiment Analysis. We use an XML-based pipeline that combines several independent components to perform each subtask. Key resources used by the system are Bing Liu's sentiment lexicon, Stanford CoreNLP, RFTagger, several machine learning algorithms and WordNet. SNAP achieved satisfactory results in the evaluation, placing in the top half of the field for most subtasks.",,"SNAP: A Multi-Stage XML-Pipeline for Aspect Based Sentiment Analysis. This paper describes the SNAP system, which participated in Task 4 of SemEval-2014: Aspect Based Sentiment Analysis. We use an XML-based pipeline that combines several independent components to perform each subtask. Key resources used by the system are Bing Liu's sentiment lexicon, Stanford CoreNLP, RFTagger, several machine learning algorithms and WordNet. SNAP achieved satisfactory results in the evaluation, placing in the top half of the field for most subtasks.",2014
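The lexicon-driven portion of an aspect-based sentiment pipeline like the one in this row can be approximated with a window count over a polarity lexicon. The word sets below are tiny stand-ins for a real lexicon such as Bing Liu's, and the window-based aggregation is an assumption rather than the SNAP system's actual XML-pipeline logic.

```python
# Toy aspect-polarity step: count positive/negative lexicon hits in a small
# window around the aspect term and label the aspect accordingly.
POSITIVE = {"great", "tasty", "friendly"}   # tiny stand-ins for a sentiment lexicon
NEGATIVE = {"slow", "bland", "rude"}

def aspect_polarity(tokens, aspect_index, window=2):
    lo, hi = max(0, aspect_index - window), aspect_index + window + 1
    context = [t.lower() for t in tokens[lo:hi]]
    score = sum(t in POSITIVE for t in context) - sum(t in NEGATIVE for t in context)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

tokens = "the waiter was friendly but the soup was bland".split()
print(aspect_polarity(tokens, tokens.index("waiter")))  # positive
print(aspect_polarity(tokens, tokens.index("soup")))    # negative
```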
jindal-2018-generating,https://aclanthology.org/N18-4020,0,,,,,,,"Generating Image Captions in Arabic using Root-Word Based Recurrent Neural Networks and Deep Neural Networks. Image caption generation has gathered widespread interest in the artificial intelligence community. Automatic generation of an image description requires both computer vision and natural language processing techniques. While, there has been advanced research in English caption generation, research on generating Arabic descriptions of an image is extremely limited. Semitic languages like Arabic are heavily influenced by root-words. We leverage this critical dependency of Arabic to generate captions of an image directly in Arabic using root-word based Recurrent Neural Network and Deep Neural Networks. Experimental results on datasets from various Middle Eastern newspaper websites allow us to report the first BLEU score for direct Arabic caption generation. We also compare the results of our approach with BLEU score captions generated in English and translated into Arabic. Experimental results confirm that generating image captions using root-words directly in Arabic significantly outperforms the English-Arabic translated captions using state-of-the-art methods.",Generating Image Captions in {A}rabic using Root-Word Based Recurrent Neural Networks and Deep Neural Networks,"Image caption generation has gathered widespread interest in the artificial intelligence community. Automatic generation of an image description requires both computer vision and natural language processing techniques. While, there has been advanced research in English caption generation, research on generating Arabic descriptions of an image is extremely limited. Semitic languages like Arabic are heavily influenced by root-words. We leverage this critical dependency of Arabic to generate captions of an image directly in Arabic using root-word based Recurrent Neural Network and Deep Neural Networks. Experimental results on datasets from various Middle Eastern newspaper websites allow us to report the first BLEU score for direct Arabic caption generation. We also compare the results of our approach with BLEU score captions generated in English and translated into Arabic. Experimental results confirm that generating image captions using root-words directly in Arabic significantly outperforms the English-Arabic translated captions using state-of-the-art methods.",Generating Image Captions in Arabic using Root-Word Based Recurrent Neural Networks and Deep Neural Networks,"Image caption generation has gathered widespread interest in the artificial intelligence community. Automatic generation of an image description requires both computer vision and natural language processing techniques. While, there has been advanced research in English caption generation, research on generating Arabic descriptions of an image is extremely limited. Semitic languages like Arabic are heavily influenced by root-words. We leverage this critical dependency of Arabic to generate captions of an image directly in Arabic using root-word based Recurrent Neural Network and Deep Neural Networks. Experimental results on datasets from various Middle Eastern newspaper websites allow us to report the first BLEU score for direct Arabic caption generation. We also compare the results of our approach with BLEU score captions generated in English and translated into Arabic. 
Experimental results confirm that generating image captions using root-words directly in Arabic significantly outperforms the English-Arabic translated captions using state-of-the-art methods.",,"Generating Image Captions in Arabic using Root-Word Based Recurrent Neural Networks and Deep Neural Networks. Image caption generation has gathered widespread interest in the artificial intelligence community. Automatic generation of an image description requires both computer vision and natural language processing techniques. While, there has been advanced research in English caption generation, research on generating Arabic descriptions of an image is extremely limited. Semitic languages like Arabic are heavily influenced by root-words. We leverage this critical dependency of Arabic to generate captions of an image directly in Arabic using root-word based Recurrent Neural Network and Deep Neural Networks. Experimental results on datasets from various Middle Eastern newspaper websites allow us to report the first BLEU score for direct Arabic caption generation. We also compare the results of our approach with BLEU score captions generated in English and translated into Arabic. Experimental results confirm that generating image captions using root-words directly in Arabic significantly outperforms the English-Arabic translated captions using state-of-the-art methods.",2018
hale-etal-2018-finding,https://aclanthology.org/P18-1254,0,,,,,,,"Finding syntax in human encephalography with beam search. Recurrent neural network grammars (RNNGs) are generative models of (tree, string) pairs that rely on neural networks to evaluate derivational choices. Parsing with them using beam search yields a variety of incremental complexity metrics such as word surprisal and parser action count. When used as regressors against human electrophysiological responses to naturalistic text, they derive two amplitude effects: an early peak and a P600-like later peak. By contrast, a non-syntactic neural language model yields no reliable effects. Model comparisons attribute the early peak to syntactic composition within the RNNG. This pattern of results recommends the RNNG+beam search combination as a mechanistic model of the syntactic processing that occurs during normal human language comprehension.",Finding syntax in human encephalography with beam search,"Recurrent neural network grammars (RNNGs) are generative models of (tree, string) pairs that rely on neural networks to evaluate derivational choices. Parsing with them using beam search yields a variety of incremental complexity metrics such as word surprisal and parser action count. When used as regressors against human electrophysiological responses to naturalistic text, they derive two amplitude effects: an early peak and a P600-like later peak. By contrast, a non-syntactic neural language model yields no reliable effects. Model comparisons attribute the early peak to syntactic composition within the RNNG. This pattern of results recommends the RNNG+beam search combination as a mechanistic model of the syntactic processing that occurs during normal human language comprehension.",Finding syntax in human encephalography with beam search,"Recurrent neural network grammars (RNNGs) are generative models of (tree, string) pairs that rely on neural networks to evaluate derivational choices. Parsing with them using beam search yields a variety of incremental complexity metrics such as word surprisal and parser action count. When used as regressors against human electrophysiological responses to naturalistic text, they derive two amplitude effects: an early peak and a P600-like later peak. By contrast, a non-syntactic neural language model yields no reliable effects. Model comparisons attribute the early peak to syntactic composition within the RNNG. This pattern of results recommends the RNNG+beam search combination as a mechanistic model of the syntactic processing that occurs during normal human language comprehension.",This material is based upon work supported by the National Science Foundation under Grants No. 1607441 and No. 1607251. We thank Max Cantor and Rachel Eby for helping with data collection.,"Finding syntax in human encephalography with beam search. Recurrent neural network grammars (RNNGs) are generative models of (tree, string) pairs that rely on neural networks to evaluate derivational choices. Parsing with them using beam search yields a variety of incremental complexity metrics such as word surprisal and parser action count. When used as regressors against human electrophysiological responses to naturalistic text, they derive two amplitude effects: an early peak and a P600-like later peak. By contrast, a non-syntactic neural language model yields no reliable effects. Model comparisons attribute the early peak to syntactic composition within the RNNG. 
This pattern of results recommends the RNNG+beam search combination as a mechanistic model of the syntactic processing that occurs during normal human language comprehension.",2018
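Word surprisal, the incremental complexity metric this row relies on, is the negative log probability of each word given its prefix. The sketch below computes it from any callable returning that conditional probability; the RNNG and beam-search machinery themselves are not reproduced, and the uniform model in the example is purely hypothetical.

```python
# Surprisal of each word under an arbitrary incremental language model:
# surprisal(w_t) = -log2 P(w_t | w_1..w_{t-1}).  Values like these can then be
# entered as regressors against electrophysiological responses.
import math

def surprisals(tokens, prob):
    """prob(prefix, word) -> P(word | prefix); a placeholder for a real LM."""
    out = []
    for i, word in enumerate(tokens):
        p = prob(tokens[:i], word)
        out.append(-math.log2(p) if p > 0 else float("inf"))
    return out

# Example with a hypothetical uniform model over a 10,000-word vocabulary:
uniform = lambda prefix, word: 1.0 / 10_000
print(surprisals(["the", "dog", "barked"], uniform))  # ~13.29 bits per word
```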
lee-etal-2016-feature,https://aclanthology.org/W16-4204,1,,,,health,privacy_protection,,"Feature-Augmented Neural Networks for Patient Note De-identification. Patient notes contain a wealth of information of potentially great interest to medical investigators. However, to protect patients' privacy, Protected Health Information (PHI) must be removed from the patient notes before they can be legally released, a process known as patient note de-identification. The main objective for a de-identification system is to have the highest possible recall. Recently, the first neural-network-based de-identification system has been proposed, yielding state-of-the-art results. Unlike other systems, it does not rely on human-engineered features, which allows it to be quickly deployed, but does not leverage knowledge from human experts or from electronic health records (EHRs). In this work, we explore a method to incorporate human-engineered features as well as features derived from EHRs to a neural-network-based de-identification system. Our results show that the addition of features, especially the EHR-derived features, further improves the state-of-the-art in patient note de-identification, including for some of the most sensitive PHI types such as patient names. Since in a real-life setting patient notes typically come with EHRs, we recommend developers of de-identification systems to leverage the information EHRs contain.",Feature-Augmented Neural Networks for Patient Note De-identification,"Patient notes contain a wealth of information of potentially great interest to medical investigators. However, to protect patients' privacy, Protected Health Information (PHI) must be removed from the patient notes before they can be legally released, a process known as patient note de-identification. The main objective for a de-identification system is to have the highest possible recall. Recently, the first neural-network-based de-identification system has been proposed, yielding state-of-the-art results. Unlike other systems, it does not rely on human-engineered features, which allows it to be quickly deployed, but does not leverage knowledge from human experts or from electronic health records (EHRs). In this work, we explore a method to incorporate human-engineered features as well as features derived from EHRs to a neural-network-based de-identification system. Our results show that the addition of features, especially the EHR-derived features, further improves the state-of-the-art in patient note de-identification, including for some of the most sensitive PHI types such as patient names. Since in a real-life setting patient notes typically come with EHRs, we recommend developers of de-identification systems to leverage the information EHRs contain.",Feature-Augmented Neural Networks for Patient Note De-identification,"Patient notes contain a wealth of information of potentially great interest to medical investigators. However, to protect patients' privacy, Protected Health Information (PHI) must be removed from the patient notes before they can be legally released, a process known as patient note de-identification. The main objective for a de-identification system is to have the highest possible recall. Recently, the first neural-network-based de-identification system has been proposed, yielding state-of-the-art results. 
Unlike other systems, it does not rely on human-engineered features, which allows it to be quickly deployed, but does not leverage knowledge from human experts or from electronic health records (EHRs). In this work, we explore a method to incorporate human-engineered features as well as features derived from EHRs to a neural-network-based de-identification system. Our results show that the addition of features, especially the EHR-derived features, further improves the state-of-the-art in patient note de-identification, including for some of the most sensitive PHI types such as patient names. Since in a real-life setting patient notes typically come with EHRs, we recommend developers of de-identification systems to leverage the information EHRs contain.","The project was supported by Philips Research. The content is solely the responsibility of the authors and does not necessarily represent the official views of Philips Research. We warmly thank Michele Filannino, Alistair Johnson, Li-wei Lehman, Roger Mark, and Tom Pollard for their helpful suggestions and technical assistance.","Feature-Augmented Neural Networks for Patient Note De-identification. Patient notes contain a wealth of information of potentially great interest to medical investigators. However, to protect patients' privacy, Protected Health Information (PHI) must be removed from the patient notes before they can be legally released, a process known as patient note de-identification. The main objective for a de-identification system is to have the highest possible recall. Recently, the first neural-network-based de-identification system has been proposed, yielding state-of-the-art results. Unlike other systems, it does not rely on human-engineered features, which allows it to be quickly deployed, but does not leverage knowledge from human experts or from electronic health records (EHRs). In this work, we explore a method to incorporate human-engineered features as well as features derived from EHRs to a neural-network-based de-identification system. Our results show that the addition of features, especially the EHR-derived features, further improves the state-of-the-art in patient note de-identification, including for some of the most sensitive PHI types such as patient names. Since in a real-life setting patient notes typically come with EHRs, we recommend developers of de-identification systems to leverage the information EHRs contain.",2016
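One simple way to combine human-engineered and EHR-derived features with learned token representations, in the spirit of this row (though not the authors' exact architecture), is to concatenate a small handcrafted feature vector onto each token embedding before classification. The `in_patient_name_list` lookup below is a hypothetical stand-in for an EHR-derived gazetteer.

```python
# Sketch: concatenate handcrafted token features with a token embedding.
# `in_patient_name_list(token)` stands in for an EHR-derived gazetteer lookup.
import numpy as np

def handcrafted_features(token, in_patient_name_list):
    return np.array([
        float(token[:1].isupper()),              # capitalized
        float(any(c.isdigit() for c in token)),  # contains a digit
        float(len(token) > 10),                  # unusually long token
        float(in_patient_name_list(token)),      # matches an EHR name entry
    ])

def augmented_representation(token, embedding, in_patient_name_list):
    # embedding: 1-D numpy array from any pretrained model (assumed given)
    return np.concatenate([embedding,
                           handcrafted_features(token, in_patient_name_list)])
```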
sahlgren-coster-2004-using,https://aclanthology.org/C04-1070,0,,,,,,,"Using Bag-of-Concepts to Improve the Performance of Support Vector Machines in Text Categorization. This paper investigates the use of conceptbased representations for text categorization. We introduce a new approach to create concept-based text representations, and apply it to a standard text categorization collection. The representations are used as input to a Support Vector Machine classifier, and the results show that there are certain categories for which concept-based representations constitute a viable supplement to word-based ones. We also demonstrate how the performance of the Support Vector Machine can be improved by combining representations.",Using Bag-of-Concepts to Improve the Performance of Support Vector Machines in Text Categorization,"This paper investigates the use of conceptbased representations for text categorization. We introduce a new approach to create concept-based text representations, and apply it to a standard text categorization collection. The representations are used as input to a Support Vector Machine classifier, and the results show that there are certain categories for which concept-based representations constitute a viable supplement to word-based ones. We also demonstrate how the performance of the Support Vector Machine can be improved by combining representations.",Using Bag-of-Concepts to Improve the Performance of Support Vector Machines in Text Categorization,"This paper investigates the use of conceptbased representations for text categorization. We introduce a new approach to create concept-based text representations, and apply it to a standard text categorization collection. The representations are used as input to a Support Vector Machine classifier, and the results show that there are certain categories for which concept-based representations constitute a viable supplement to word-based ones. We also demonstrate how the performance of the Support Vector Machine can be improved by combining representations.","We have introduced a new method for producing concept-based (BoC) text representations, and we have compared the performance of an SVM classifier on the Reuters-21578 collection using both traditional word-based (BoW), and concept-based representations. The results show that BoC representations outperform BoW when only counting the ten largest categories, and that a combination of BoW and BoC representations improve the performance of the SVM over all categories.We conclude that concept-based representations constitute a viable supplement to wordbased ones, and that there are categories in the Reuters-21578 collection that benefit from using concept-based representations.","Using Bag-of-Concepts to Improve the Performance of Support Vector Machines in Text Categorization. This paper investigates the use of conceptbased representations for text categorization. We introduce a new approach to create concept-based text representations, and apply it to a standard text categorization collection. The representations are used as input to a Support Vector Machine classifier, and the results show that there are certain categories for which concept-based representations constitute a viable supplement to word-based ones. We also demonstrate how the performance of the Support Vector Machine can be improved by combining representations.",2004
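A bag-of-concepts document vector, as opposed to a bag-of-words one, can be sketched as the sum of per-word concept vectors fed to a linear SVM. The random vectors below stand in for whatever concept space is actually used (the paper's construction is not reproduced), and the two-document corpus is only there to make the snippet runnable.

```python
# Sketch: bag-of-concepts (BoC) document vectors + a linear SVM classifier.
# `concept_vectors` maps each word to a dense "concept" vector; here they are
# random stand-ins for an actual concept space (e.g., one built by random indexing).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
vocab = ["wheat", "grain", "export", "bank", "loan", "rate"]
concept_vectors = {w: rng.normal(size=50) for w in vocab}

def boc_vector(tokens, dim=50):
    vec = np.zeros(dim)
    for t in tokens:
        if t in concept_vectors:
            vec += concept_vectors[t]   # could additionally be tf-idf weighted
    return vec

docs = [["wheat", "grain", "export"], ["bank", "loan", "rate"]]
labels = [0, 1]
X = np.stack([boc_vector(d) for d in docs])
clf = LinearSVC().fit(X, labels)
print(clf.predict(X))
```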
xiao-etal-2005-principles,https://aclanthology.org/I05-1072,0,,,,,,,"Principles of Non-stationary Hidden Markov Model and Its Applications to Sequence Labeling Task. Hidden Markov Model (Hmm) is one of the most popular language models. To improve its predictive power, one of Hmm hypotheses, named limited history hypothesis, is usually relaxed. Then Higher-order Hmm is built up. But there are several severe problems hampering the applications of highorder Hmm, such as the problem of parameter space explosion, data sparseness problem and system resource exhaustion problem. From another point of view, this paper relaxes the other Hmm hypothesis, named stationary (time invariant) hypothesis, makes use of time information and proposes a non-stationary Hmm (NSHmm). This paper describes NSHmm in detail, including its definition, the representation of time information, the algorithms and the parameter space and so on. Moreover, to further reduce the parameter space for mobile applications, this paper proposes a variant form of NSHmm (VNSHmm). Then NSHmm and VNSHmm are applied to two sequence labeling tasks: pos tagging and pinyin-tocharacter conversion. Experiment results show that compared with Hmm, NSHmm and VNSHmm can greatly reduce the error rate in both of the two tasks, which proves that they have much more predictive power than Hmm does.",Principles of Non-stationary Hidden {M}arkov Model and Its Applications to Sequence Labeling Task,"Hidden Markov Model (Hmm) is one of the most popular language models. To improve its predictive power, one of Hmm hypotheses, named limited history hypothesis, is usually relaxed. Then Higher-order Hmm is built up. But there are several severe problems hampering the applications of highorder Hmm, such as the problem of parameter space explosion, data sparseness problem and system resource exhaustion problem. From another point of view, this paper relaxes the other Hmm hypothesis, named stationary (time invariant) hypothesis, makes use of time information and proposes a non-stationary Hmm (NSHmm). This paper describes NSHmm in detail, including its definition, the representation of time information, the algorithms and the parameter space and so on. Moreover, to further reduce the parameter space for mobile applications, this paper proposes a variant form of NSHmm (VNSHmm). Then NSHmm and VNSHmm are applied to two sequence labeling tasks: pos tagging and pinyin-tocharacter conversion. Experiment results show that compared with Hmm, NSHmm and VNSHmm can greatly reduce the error rate in both of the two tasks, which proves that they have much more predictive power than Hmm does.",Principles of Non-stationary Hidden Markov Model and Its Applications to Sequence Labeling Task,"Hidden Markov Model (Hmm) is one of the most popular language models. To improve its predictive power, one of Hmm hypotheses, named limited history hypothesis, is usually relaxed. Then Higher-order Hmm is built up. But there are several severe problems hampering the applications of highorder Hmm, such as the problem of parameter space explosion, data sparseness problem and system resource exhaustion problem. From another point of view, this paper relaxes the other Hmm hypothesis, named stationary (time invariant) hypothesis, makes use of time information and proposes a non-stationary Hmm (NSHmm). This paper describes NSHmm in detail, including its definition, the representation of time information, the algorithms and the parameter space and so on. 
Moreover, to further reduce the parameter space for mobile applications, this paper proposes a variant form of NSHmm (VNSHmm). Then NSHmm and VNSHmm are applied to two sequence labeling tasks: POS tagging and pinyin-to-character conversion. Experiment results show that compared with Hmm, NSHmm and VNSHmm can greatly reduce the error rate in both of the two tasks, which proves that they have much more predictive power than Hmm does.",This investigation was supported emphatically by the National Natural Science Foundation of China (No.60435020) and the High Technology Research and Development Programme of China (2002AA117010-09). We especially thank the three anonymous reviewers for their valuable suggestions and comments.,"Principles of Non-stationary Hidden Markov Model and Its Applications to Sequence Labeling Task. Hidden Markov Model (Hmm) is one of the most popular language models. To improve its predictive power, one of Hmm hypotheses, named limited history hypothesis, is usually relaxed. Then Higher-order Hmm is built up. But there are several severe problems hampering the applications of high-order Hmm, such as the problem of parameter space explosion, data sparseness problem and system resource exhaustion problem. From another point of view, this paper relaxes the other Hmm hypothesis, named stationary (time invariant) hypothesis, makes use of time information and proposes a non-stationary Hmm (NSHmm). This paper describes NSHmm in detail, including its definition, the representation of time information, the algorithms and the parameter space and so on. Moreover, to further reduce the parameter space for mobile applications, this paper proposes a variant form of NSHmm (VNSHmm). Then NSHmm and VNSHmm are applied to two sequence labeling tasks: POS tagging and pinyin-to-character conversion. Experiment results show that compared with Hmm, NSHmm and VNSHmm can greatly reduce the error rate in both of the two tasks, which proves that they have much more predictive power than Hmm does.",2005
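The defining idea of a non-stationary HMM, transition probabilities that depend on the position in the sequence, can be illustrated with a Viterbi decoder that indexes the transition matrix by time step. This is a generic sketch under that assumption, not the paper's parameterization of time information or its VNSHmm variant; probabilities are assumed strictly positive.

```python
# Viterbi decoding for a toy non-stationary HMM: the transition matrix A[t]
# depends on the position t in the sequence instead of being constant.
import numpy as np

def viterbi_nonstationary(obs, pi, A, B):
    """
    obs: list of observation indices, length T
    pi : (S,) initial state probabilities
    A  : (T-1, S, S) position-dependent transitions, A[t, i, j] = P(s_{t+1}=j | s_t=i, t)
    B  : (S, V) emission probabilities
    """
    T, S = len(obs), len(pi)
    logp = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = logp[:, None] + np.log(A[t - 1])  # (S, S): prev state x next state
        back[t] = scores.argmax(axis=0)
        logp = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```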
amin-etal-2020-data,https://aclanthology.org/2020.bionlp-1.20,1,,,,health,,,"A Data-driven Approach for Noise Reduction in Distantly Supervised Biomedical Relation Extraction. Fact triples are a common form of structured knowledge used within the biomedical domain. As the amount of unstructured scientific texts continues to grow, manual annotation of these texts for the task of relation extraction becomes increasingly expensive. Distant supervision offers a viable approach to combat this by quickly producing large amounts of labeled, but considerably noisy, data. We aim to reduce such noise by extending an entity-enriched relation classification BERT model to the problem of multiple instance learning, and defining a simple data encoding scheme that significantly reduces noise, reaching state-of-the-art performance for distantly-supervised biomedical relation extraction. Our approach further encodes knowledge about the direction of relation triples, allowing for increased focus on relation learning by reducing noise and alleviating the need for joint learning with knowledge graph completion.",A Data-driven Approach for Noise Reduction in Distantly Supervised Biomedical Relation Extraction,"Fact triples are a common form of structured knowledge used within the biomedical domain. As the amount of unstructured scientific texts continues to grow, manual annotation of these texts for the task of relation extraction becomes increasingly expensive. Distant supervision offers a viable approach to combat this by quickly producing large amounts of labeled, but considerably noisy, data. We aim to reduce such noise by extending an entity-enriched relation classification BERT model to the problem of multiple instance learning, and defining a simple data encoding scheme that significantly reduces noise, reaching state-of-the-art performance for distantly-supervised biomedical relation extraction. Our approach further encodes knowledge about the direction of relation triples, allowing for increased focus on relation learning by reducing noise and alleviating the need for joint learning with knowledge graph completion.",A Data-driven Approach for Noise Reduction in Distantly Supervised Biomedical Relation Extraction,"Fact triples are a common form of structured knowledge used within the biomedical domain. As the amount of unstructured scientific texts continues to grow, manual annotation of these texts for the task of relation extraction becomes increasingly expensive. Distant supervision offers a viable approach to combat this by quickly producing large amounts of labeled, but considerably noisy, data. We aim to reduce such noise by extending an entity-enriched relation classification BERT model to the problem of multiple instance learning, and defining a simple data encoding scheme that significantly reduces noise, reaching state-of-the-art performance for distantly-supervised biomedical relation extraction. Our approach further encodes knowledge about the direction of relation triples, allowing for increased focus on relation learning by reducing noise and alleviating the need for joint learning with knowledge graph completion.",The authors would like to thank the anonymous reviewers for helpful feedback. The work was partially funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 
777107 through the project Precise4Q and by the German Federal Ministry of Education and Research (BMBF) through the project DEEPLEE (01IW17001).,"A Data-driven Approach for Noise Reduction in Distantly Supervised Biomedical Relation Extraction. Fact triples are a common form of structured knowledge used within the biomedical domain. As the amount of unstructured scientific texts continues to grow, manual annotation of these texts for the task of relation extraction becomes increasingly expensive. Distant supervision offers a viable approach to combat this by quickly producing large amounts of labeled, but considerably noisy, data. We aim to reduce such noise by extending an entity-enriched relation classification BERT model to the problem of multiple instance learning, and defining a simple data encoding scheme that significantly reduces noise, reaching state-of-the-art performance for distantly-supervised biomedical relation extraction. Our approach further encodes knowledge about the direction of relation triples, allowing for increased focus on relation learning by reducing noise and alleviating the need for joint learning with knowledge graph completion.",2020
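The multiple-instance learning step in this row can be shown independently of BERT: per-sentence relation scores for one entity pair (a bag) are pooled into a single bag-level prediction. Max-pooling, used below, is a common MIL baseline and only an assumed stand-in for the paper's aggregation; the logits and relation names are hypothetical.

```python
# Sketch of bag-level aggregation for multiple-instance relation extraction:
# pool per-sentence class scores for one entity pair into one bag prediction.
import numpy as np

def bag_prediction(instance_logits, relation_names):
    """instance_logits: (num_sentences, num_relations) scores from any encoder."""
    bag_logits = instance_logits.max(axis=0)   # max-pool over the bag
    return relation_names[int(bag_logits.argmax())], bag_logits

relations = ["no_relation", "treats", "causes"]
logits = np.array([[0.9, 0.2, 0.1],    # sentence 1 scores (hypothetical)
                   [0.1, 1.4, 0.3]])   # sentence 2 scores (hypothetical)
print(bag_prediction(logits, relations))  # ('treats', array([0.9, 1.4, 0.3]))
```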
daille-2003-conceptual,https://aclanthology.org/W03-1802,0,,,,,,,"Conceptual Structuring through Term Variations. Term extraction systems are now an integral part of the compiling of specialized dictionaries and updating of term banks. In this paper, we present a term detection approach that discovers, structures, and infers conceptual relationships between terms for French. Conceptual relationships are deduced from specific types of term variations, morphological and syntagmatic, and are expressed through lexical functions. The linguistic precision of the conceptual structuring through morphological variations is of 95 %. 2 Conceptual systems Terms are generally classified using partitive and generic relationships to be presented in a thesaural structure. But other relationships exist, the so-called complex relationships (Sager, 1990, pages 34-35) which are domain and application dependent. Examples of such complex relationships are: FALLOUT is caused by NUCLEAR EXPLOSION COALMINE is a place for COAL-MINING",Conceptual Structuring through Term Variations,"Term extraction systems are now an integral part of the compiling of specialized dictionaries and updating of term banks. In this paper, we present a term detection approach that discovers, structures, and infers conceptual relationships between terms for French. Conceptual relationships are deduced from specific types of term variations, morphological and syntagmatic, and are expressed through lexical functions. The linguistic precision of the conceptual structuring through morphological variations is of 95 %. 2 Conceptual systems Terms are generally classified using partitive and generic relationships to be presented in a thesaural structure. But other relationships exist, the so-called complex relationships (Sager, 1990, pages 34-35) which are domain and application dependent. Examples of such complex relationships are: FALLOUT is caused by NUCLEAR EXPLOSION COALMINE is a place for COAL-MINING",Conceptual Structuring through Term Variations,"Term extraction systems are now an integral part of the compiling of specialized dictionaries and updating of term banks. In this paper, we present a term detection approach that discovers, structures, and infers conceptual relationships between terms for French. Conceptual relationships are deduced from specific types of term variations, morphological and syntagmatic, and are expressed through lexical functions. The linguistic precision of the conceptual structuring through morphological variations is of 95 %. 2 Conceptual systems Terms are generally classified using partitive and generic relationships to be presented in a thesaural structure. But other relationships exist, the so-called complex relationships (Sager, 1990, pages 34-35) which are domain and application dependent. Examples of such complex relationships are: FALLOUT is caused by NUCLEAR EXPLOSION COALMINE is a place for COAL-MINING",,"Conceptual Structuring through Term Variations. Term extraction systems are now an integral part of the compiling of specialized dictionaries and updating of term banks. In this paper, we present a term detection approach that discovers, structures, and infers conceptual relationships between terms for French. Conceptual relationships are deduced from specific types of term variations, morphological and syntagmatic, and are expressed through lexical functions. The linguistic precision of the conceptual structuring through morphological variations is of 95 %. 
2 Conceptual systems: Terms are generally classified using partitive and generic relationships to be presented in a thesaural structure. But other relationships exist, the so-called complex relationships (Sager, 1990, pages 34-35), which are domain and application dependent. Examples of such complex relationships are: FALLOUT is caused by NUCLEAR EXPLOSION; COALMINE is a place for COAL-MINING",2003
tokunaga-etal-2005-automatic,https://aclanthology.org/I05-1010,0,,,,,,,"Automatic Discovery of Attribute Words from Web Documents. We propose a method of acquiring attribute words for a wide range of objects from Japanese Web documents. The method is a simple unsupervised method that utilizes the statistics of words, lexico-syntactic patterns, and HTML tags. To evaluate the attribute words, we also establish criteria and a procedure based on question-answerability about the candidate word. 1 We use C to denote both the class and its class label (the word representing the class). We also use A to denote both the attribute and the word representing it.",Automatic Discovery of Attribute Words from Web Documents,"We propose a method of acquiring attribute words for a wide range of objects from Japanese Web documents. The method is a simple unsupervised method that utilizes the statistics of words, lexico-syntactic patterns, and HTML tags. To evaluate the attribute words, we also establish criteria and a procedure based on question-answerability about the candidate word. 1 We use C to denote both the class and its class label (the word representing the class). We also use A to denote both the attribute and the word representing it.",Automatic Discovery of Attribute Words from Web Documents,"We propose a method of acquiring attribute words for a wide range of objects from Japanese Web documents. The method is a simple unsupervised method that utilizes the statistics of words, lexico-syntactic patterns, and HTML tags. To evaluate the attribute words, we also establish criteria and a procedure based on question-answerability about the candidate word. 1 We use C to denote both the class and its class label (the word representing the class). We also use A to denote both the attribute and the word representing it.",,"Automatic Discovery of Attribute Words from Web Documents. We propose a method of acquiring attribute words for a wide range of objects from Japanese Web documents. The method is a simple unsupervised method that utilizes the statistics of words, lexico-syntactic patterns, and HTML tags. To evaluate the attribute words, we also establish criteria and a procedure based on question-answerability about the candidate word. 1 We use C to denote both the class and its class label (the word representing the class). We also use A to denote both the attribute and the word representing it.",2005
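The lexico-syntactic side of attribute harvesting can be illustrated with a single pattern over plain text. The paper works on Japanese Web documents and also uses word statistics and HTML tags; the English "the A of the C" regex below is only an analogous toy pattern with hypothetical example text.

```python
# Toy lexico-syntactic extraction of candidate attribute words for a class label:
# matches "the <attribute> of the <class>" in plain English text.
import re
from collections import Counter

def attribute_candidates(text, class_label):
    pattern = re.compile(r"\bthe (\w+) of the %s\b" % re.escape(class_label), re.I)
    return Counter(m.lower() for m in pattern.findall(text))

text = ("The price of the car was high. The colour of the car surprised us. "
        "We checked the price of the car again.")
print(attribute_candidates(text, "car"))  # Counter({'price': 2, 'colour': 1})
```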
gomez-1982-towards,https://aclanthology.org/P82-1006,0,,,,,,,"Towards a Theory of Comprehension of Declarative Contexts. An outline of a theory of comprehension of declarative contexts is presented. The main aspect of the theory being developed is based on Kant's distinction between concepts as rules (we have called them conceptual specialists) and concepts as an abstract representation (schemata, frames). Comprehension is viewed as a process dependent on the conceptual specialists (they contain the inferential knowledge), the schemata or frames (they contain the declarative knowledge), and a parser. The function of the parser is to produce a segmentation of the sentences in a case frame structure, thus determininig the meaning of prepositions, polysemous verbs, noun group etc. The function of this parser is not to produce an output to be interpreted by semantic routines or an interpreter~ but to start the parsing process and proceed until a concept relevant to the theme of the text is recognized. Then the concept takes control of the comprehension process overriding the lower level linguistic process. Hence comprehension is viewed as a process in which high level sources of knowledge (concepts) override lower level linguistic processes.",Towards a Theory of Comprehension of Declarative Contexts,"An outline of a theory of comprehension of declarative contexts is presented. The main aspect of the theory being developed is based on Kant's distinction between concepts as rules (we have called them conceptual specialists) and concepts as an abstract representation (schemata, frames). Comprehension is viewed as a process dependent on the conceptual specialists (they contain the inferential knowledge), the schemata or frames (they contain the declarative knowledge), and a parser. The function of the parser is to produce a segmentation of the sentences in a case frame structure, thus determininig the meaning of prepositions, polysemous verbs, noun group etc. The function of this parser is not to produce an output to be interpreted by semantic routines or an interpreter~ but to start the parsing process and proceed until a concept relevant to the theme of the text is recognized. Then the concept takes control of the comprehension process overriding the lower level linguistic process. Hence comprehension is viewed as a process in which high level sources of knowledge (concepts) override lower level linguistic processes.",Towards a Theory of Comprehension of Declarative Contexts,"An outline of a theory of comprehension of declarative contexts is presented. The main aspect of the theory being developed is based on Kant's distinction between concepts as rules (we have called them conceptual specialists) and concepts as an abstract representation (schemata, frames). Comprehension is viewed as a process dependent on the conceptual specialists (they contain the inferential knowledge), the schemata or frames (they contain the declarative knowledge), and a parser. The function of the parser is to produce a segmentation of the sentences in a case frame structure, thus determininig the meaning of prepositions, polysemous verbs, noun group etc. The function of this parser is not to produce an output to be interpreted by semantic routines or an interpreter~ but to start the parsing process and proceed until a concept relevant to the theme of the text is recognized. Then the concept takes control of the comprehension process overriding the lower level linguistic process. 
Hence comprehension is viewed as a process in which high level sources of knowledge (concepts) override lower level linguistic processes.","This research was supported by the Air Force Office of Scientific Research under contract F49620-79-0152, and was done in part while the author was a member of the AI group at the Ohio State University.I would llke to thank Amar Mukhopadhyay for reading and providing constructive comments on drafts of this paper, and Mrs. Robin Cone for her wonderful work in typing it.","Towards a Theory of Comprehension of Declarative Contexts. An outline of a theory of comprehension of declarative contexts is presented. The main aspect of the theory being developed is based on Kant's distinction between concepts as rules (we have called them conceptual specialists) and concepts as an abstract representation (schemata, frames). Comprehension is viewed as a process dependent on the conceptual specialists (they contain the inferential knowledge), the schemata or frames (they contain the declarative knowledge), and a parser. The function of the parser is to produce a segmentation of the sentences in a case frame structure, thus determininig the meaning of prepositions, polysemous verbs, noun group etc. The function of this parser is not to produce an output to be interpreted by semantic routines or an interpreter~ but to start the parsing process and proceed until a concept relevant to the theme of the text is recognized. Then the concept takes control of the comprehension process overriding the lower level linguistic process. Hence comprehension is viewed as a process in which high level sources of knowledge (concepts) override lower level linguistic processes.",1982
cassani-etal-2015-distributional,https://aclanthology.org/W15-2406,0,,,,,,,"Which distributional cues help the most? Unsupervised contexts selection for lexical category acquisition. Starting from the distributional bootstrapping hypothesis, we propose an unsupervised model that selects the most useful distributional information according to its salience in the input, incorporating psycholinguistic evidence. With a supervised Parts-of-Speech tagging experiment, we provide preliminary results suggesting that the distributional contexts extracted by our model yield similar performances as compared to current approaches from the literature, with a gain in psychological plausibility. We also introduce a more principled way to evaluate the effectiveness of distributional contexts in helping learners to group words in syntactic categories.",Which distributional cues help the most? Unsupervised contexts selection for lexical category acquisition,"Starting from the distributional bootstrapping hypothesis, we propose an unsupervised model that selects the most useful distributional information according to its salience in the input, incorporating psycholinguistic evidence. With a supervised Parts-of-Speech tagging experiment, we provide preliminary results suggesting that the distributional contexts extracted by our model yield similar performances as compared to current approaches from the literature, with a gain in psychological plausibility. We also introduce a more principled way to evaluate the effectiveness of distributional contexts in helping learners to group words in syntactic categories.",Which distributional cues help the most? Unsupervised contexts selection for lexical category acquisition,"Starting from the distributional bootstrapping hypothesis, we propose an unsupervised model that selects the most useful distributional information according to its salience in the input, incorporating psycholinguistic evidence. With a supervised Parts-of-Speech tagging experiment, we provide preliminary results suggesting that the distributional contexts extracted by our model yield similar performances as compared to current approaches from the literature, with a gain in psychological plausibility. We also introduce a more principled way to evaluate the effectiveness of distributional contexts in helping learners to group words in syntactic categories.",The presented research was supported by a BOF/TOP grant (ID 29072) of the Research Council of the University of Antwerp.,"Which distributional cues help the most? Unsupervised contexts selection for lexical category acquisition. Starting from the distributional bootstrapping hypothesis, we propose an unsupervised model that selects the most useful distributional information according to its salience in the input, incorporating psycholinguistic evidence. With a supervised Parts-of-Speech tagging experiment, we provide preliminary results suggesting that the distributional contexts extracted by our model yield similar performances as compared to current approaches from the literature, with a gain in psychological plausibility. We also introduce a more principled way to evaluate the effectiveness of distributional contexts in helping learners to group words in syntactic categories.",2015
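A frequency-based notion of context salience, roughly in the spirit of the distributional-bootstrapping setup in this row (the paper's actual selection criterion differs), can be sketched by counting the left/right word frames around each token and keeping the most frequent ones.

```python
# Sketch: collect left_right word frames around each token position and rank
# frames by corpus frequency, a crude proxy for distributional salience.
from collections import Counter

def frequent_frames(sentences, top_k=10):
    frames = Counter()
    for sent in sentences:
        padded = ["<s>"] + sent + ["</s>"]
        for i in range(1, len(padded) - 1):
            frames[(padded[i - 1], padded[i + 1])] += 1
    return frames.most_common(top_k)

corpus = [["the", "dog", "barks"], ["the", "cat", "sleeps"], ["a", "dog", "sleeps"]]
print(frequent_frames(corpus, top_k=3))  # (<s>, dog) and (dog, </s>) lead with count 2
```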
grois-2005-learning,https://aclanthology.org/P05-2015,0,,,,,,,"Learning Strategies for Open-Domain Natural Language Question Answering. This work presents a model for learning inference procedures for story comprehension through inductive generalization and reinforcement learning, based on classified examples. The learned inference procedures (or strategies) are represented as sequences of transformation rules. The approach is compared to three prior systems, and experimental results are presented demonstrating the efficacy of the model.",Learning Strategies for Open-Domain Natural Language Question Answering,"This work presents a model for learning inference procedures for story comprehension through inductive generalization and reinforcement learning, based on classified examples. The learned inference procedures (or strategies) are represented as sequences of transformation rules. The approach is compared to three prior systems, and experimental results are presented demonstrating the efficacy of the model.",Learning Strategies for Open-Domain Natural Language Question Answering,"This work presents a model for learning inference procedures for story comprehension through inductive generalization and reinforcement learning, based on classified examples. The learned inference procedures (or strategies) are represented as sequences of transformation rules. The approach is compared to three prior systems, and experimental results are presented demonstrating the efficacy of the model.",,"Learning Strategies for Open-Domain Natural Language Question Answering. This work presents a model for learning inference procedures for story comprehension through inductive generalization and reinforcement learning, based on classified examples. The learned inference procedures (or strategies) are represented as sequences of transformation rules. The approach is compared to three prior systems, and experimental results are presented demonstrating the efficacy of the model.",2005
moreno-etal-2002-speechdat,http://www.lrec-conf.org/proceedings/lrec2002/pdf/269.pdf,0,,,,,,,"SpeechDat across all America: SALA II. SALA II is a project co-sponsored by several companies that focuses on collecting linguistic data dedicated for training speaker independent speech recognizers for mobile/cellular network telephone applications. The goal of the project is to produce SpeechDat-like databases in all the significant languages and dialects spoken across Latin America, US and Canada. Utterances will be recorded directly from calls made from cellular telephones and are composed of read text and answers to specific questions. The goal of the project should be reached within the year 2003.",{S}peech{D}at across all {A}merica: {SALA} {II},"SALA II is a project co-sponsored by several companies that focuses on collecting linguistic data dedicated for training speaker independent speech recognizers for mobile/cellular network telephone applications. The goal of the project is to produce SpeechDat-like databases in all the significant languages and dialects spoken across Latin America, US and Canada. Utterances will be recorded directly from calls made from cellular telephones and are composed of read text and answers to specific questions. The goal of the project should be reached within the year 2003.",SpeechDat across all America: SALA II,"SALA II is a project co-sponsored by several companies that focuses on collecting linguistic data dedicated for training speaker independent speech recognizers for mobile/cellular network telephone applications. The goal of the project is to produce SpeechDat-like databases in all the significant languages and dialects spoken across Latin America, US and Canada. Utterances will be recorded directly from calls made from cellular telephones and are composed of read text and answers to specific questions. The goal of the project should be reached within the year 2003.",,"SpeechDat across all America: SALA II. SALA II is a project co-sponsored by several companies that focuses on collecting linguistic data dedicated for training speaker independent speech recognizers for mobile/cellular network telephone applications. The goal of the project is to produce SpeechDat-like databases in all the significant languages and dialects spoken across Latin America, US and Canada. Utterances will be recorded directly from calls made from cellular telephones and are composed of read text and answers to specific questions. The goal of the project should be reached within the year 2003.",2002
pham-etal-2016-convolutional,https://aclanthology.org/D16-1123,0,,,,,,,"Convolutional Neural Network Language Models. Convolutional Neural Networks (CNNs) have shown to yield very strong results in several Computer Vision tasks. Their application to language has received much less attention, and it has mainly focused on static classification tasks, such as sentence classification for Sentiment Analysis or relation extraction. In this work, we study the application of CNNs to language modeling, a dynamic, sequential prediction task that needs models to capture local as well as long-range dependency information. Our contribution is twofold. First, we show that CNNs achieve 11-26% better absolute performance than feed-forward neural language models, demonstrating their potential for language representation even in sequential tasks. As for recurrent models, our model outperforms RNNs but is below state of the art LSTM models. Second, we gain some understanding of the behavior of the model, showing that CNNs in language act as feature detectors at a high level of abstraction, like in Computer Vision, and that the model can profitably use information from as far as 16 words before the target.",Convolutional Neural Network Language Models,"Convolutional Neural Networks (CNNs) have shown to yield very strong results in several Computer Vision tasks. Their application to language has received much less attention, and it has mainly focused on static classification tasks, such as sentence classification for Sentiment Analysis or relation extraction. In this work, we study the application of CNNs to language modeling, a dynamic, sequential prediction task that needs models to capture local as well as long-range dependency information. Our contribution is twofold. First, we show that CNNs achieve 11-26% better absolute performance than feed-forward neural language models, demonstrating their potential for language representation even in sequential tasks. As for recurrent models, our model outperforms RNNs but is below state of the art LSTM models. Second, we gain some understanding of the behavior of the model, showing that CNNs in language act as feature detectors at a high level of abstraction, like in Computer Vision, and that the model can profitably use information from as far as 16 words before the target.",Convolutional Neural Network Language Models,"Convolutional Neural Networks (CNNs) have shown to yield very strong results in several Computer Vision tasks. Their application to language has received much less attention, and it has mainly focused on static classification tasks, such as sentence classification for Sentiment Analysis or relation extraction. In this work, we study the application of CNNs to language modeling, a dynamic, sequential prediction task that needs models to capture local as well as long-range dependency information. Our contribution is twofold. First, we show that CNNs achieve 11-26% better absolute performance than feed-forward neural language models, demonstrating their potential for language representation even in sequential tasks. As for recurrent models, our model outperforms RNNs but is below state of the art LSTM models. Second, we gain some understanding of the behavior of the model, showing that CNNs in language act as feature detectors at a high level of abstraction, like in Computer Vision, and that the model can profitably use information from as far as 16 words before the target.",We thank Marco Baroni and three anonymous reviewers for fruitful feedback. 
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 655577 (LOVe); ERC 2011 Starting Independent Research Grant n. 283554 (COMPOSES) and the Erasmus Mundus Scholarship for Joint Master Programs. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the GPUs used in our research.,"Convolutional Neural Network Language Models. Convolutional Neural Networks (CNNs) have shown to yield very strong results in several Computer Vision tasks. Their application to language has received much less attention, and it has mainly focused on static classification tasks, such as sentence classification for Sentiment Analysis or relation extraction. In this work, we study the application of CNNs to language modeling, a dynamic, sequential prediction task that needs models to capture local as well as long-range dependency information. Our contribution is twofold. First, we show that CNNs achieve 11-26% better absolute performance than feed-forward neural language models, demonstrating their potential for language representation even in sequential tasks. As for recurrent models, our model outperforms RNNs but is below state of the art LSTM models. Second, we gain some understanding of the behavior of the model, showing that CNNs in language act as feature detectors at a high level of abstraction, like in Computer Vision, and that the model can profitably use information from as far as 16 words before the target.",2016
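The abstract above describes convolutional language modeling only at a high level. As a rough illustration of how a convolution can be made autoregressive (position t never sees tokens after t), here is a minimal PyTorch sketch; it is not the architecture of the paper, and the layer count, residual connections, and ReLU nonlinearity are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CausalConvLM(nn.Module):
    """A minimal convolutional language model: stacked 1-D convolutions with
    left-only (causal) padding so position t never sees tokens beyond t."""
    def __init__(self, vocab, dim=128, kernel=3, layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.convs = nn.ModuleList([nn.Conv1d(dim, dim, kernel) for _ in range(layers)])
        self.pad = kernel - 1
        self.out = nn.Linear(dim, vocab)

    def forward(self, tokens):                     # tokens: (batch, time)
        x = self.embed(tokens).transpose(1, 2)     # -> (batch, dim, time)
        for conv in self.convs:
            h = conv(nn.functional.pad(x, (self.pad, 0)))  # pad on the left only
            x = x + torch.relu(h)                  # residual connection
        return self.out(x.transpose(1, 2))         # (batch, time, vocab) logits

logits = CausalConvLM(vocab=1000)(torch.randint(0, 1000, (2, 16)))
print(logits.shape)  # torch.Size([2, 16, 1000])
```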
padmakumar-he-2021-unsupervised,https://aclanthology.org/2021.eacl-main.213,0,,,,,,,"Unsupervised Extractive Summarization using Pointwise Mutual Information. Unsupervised approaches to extractive summarization usually rely on a notion of sentence importance defined by the semantic similarity between a sentence and the document. We propose new metrics of relevance and redundancy using pointwise mutual information (PMI) between sentences, which can be easily computed by a pre-trained language model. Intuitively, a relevant sentence allows readers to infer the document content (high PMI with the document), and a redundant sentence can be inferred from the summary (high PMI with the summary). We then develop a greedy sentence selection algorithm to maximize relevance and minimize redundancy of extracted sentences. We show that our method outperforms similarity-based methods on datasets in a range of domains including news, medical journal articles, and personal anecdotes.",Unsupervised Extractive Summarization using Pointwise Mutual Information,"Unsupervised approaches to extractive summarization usually rely on a notion of sentence importance defined by the semantic similarity between a sentence and the document. We propose new metrics of relevance and redundancy using pointwise mutual information (PMI) between sentences, which can be easily computed by a pre-trained language model. Intuitively, a relevant sentence allows readers to infer the document content (high PMI with the document), and a redundant sentence can be inferred from the summary (high PMI with the summary). We then develop a greedy sentence selection algorithm to maximize relevance and minimize redundancy of extracted sentences. We show that our method outperforms similarity-based methods on datasets in a range of domains including news, medical journal articles, and personal anecdotes.",Unsupervised Extractive Summarization using Pointwise Mutual Information,"Unsupervised approaches to extractive summarization usually rely on a notion of sentence importance defined by the semantic similarity between a sentence and the document. We propose new metrics of relevance and redundancy using pointwise mutual information (PMI) between sentences, which can be easily computed by a pre-trained language model. Intuitively, a relevant sentence allows readers to infer the document content (high PMI with the document), and a redundant sentence can be inferred from the summary (high PMI with the summary). We then develop a greedy sentence selection algorithm to maximize relevance and minimize redundancy of extracted sentences. We show that our method outperforms similarity-based methods on datasets in a range of domains including news, medical journal articles, and personal anecdotes.",,"Unsupervised Extractive Summarization using Pointwise Mutual Information. Unsupervised approaches to extractive summarization usually rely on a notion of sentence importance defined by the semantic similarity between a sentence and the document. We propose new metrics of relevance and redundancy using pointwise mutual information (PMI) between sentences, which can be easily computed by a pre-trained language model. Intuitively, a relevant sentence allows readers to infer the document content (high PMI with the document), and a redundant sentence can be inferred from the summary (high PMI with the summary). We then develop a greedy sentence selection algorithm to maximize relevance and minimize redundancy of extracted sentences. 
We show that our method outperforms similarity-based methods on datasets in a range of domains including news, medical journal articles, and personal anecdotes.",2021
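As an illustration of the greedy relevance/redundancy trade-off the abstract describes, here is a minimal Python sketch. It assumes the PMI scores between each sentence and the document, and between sentence pairs, are precomputed (e.g., with a pretrained language model); the max-redundancy penalty and the lambda weight are illustrative choices, not the authors' exact objective.

```python
# Greedy selection that trades off relevance (PMI with the document) against
# redundancy (PMI with sentences already chosen for the summary).

def greedy_select(pmi_doc, pmi_sent, k, lam=1.0):
    """pmi_doc[i]    : PMI between sentence i and the whole document (relevance)
    pmi_sent[i][j]: PMI between sentences i and j (used for redundancy)
    k             : number of sentences to extract
    lam           : weight of the redundancy penalty (illustrative)"""
    selected, candidates = [], set(range(len(pmi_doc)))
    while candidates and len(selected) < k:
        def score(i):
            redundancy = max((pmi_sent[i][j] for j in selected), default=0.0)
            return pmi_doc[i] - lam * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return sorted(selected)  # restore document order

if __name__ == "__main__":
    # Toy example: 4 sentences; sentence 1 is highly redundant with sentence 0.
    pmi_doc = [2.0, 1.9, 1.2, 0.5]
    pmi_sent = [[0.0, 1.8, 0.2, 0.1],
                [1.8, 0.0, 0.3, 0.1],
                [0.2, 0.3, 0.0, 0.2],
                [0.1, 0.1, 0.2, 0.0]]
    print(greedy_select(pmi_doc, pmi_sent, k=2))  # -> [0, 2], skips the redundant 1
```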
swanson-etal-2020-rationalizing,https://aclanthology.org/2020.acl-main.496,0,,,,,,,"Rationalizing Text Matching: Learning Sparse Alignments via Optimal Transport. Selecting input features of top relevance has become a popular method for building self-explaining models. In this work, we extend this selective rationalization approach to text matching, where the goal is to jointly select and align text pieces, such as tokens or sentences, as a justification for the downstream prediction. Our approach employs optimal transport (OT) to find a minimal cost alignment between the inputs. However, directly applying OT often produces dense and therefore uninterpretable alignments. To overcome this limitation, we introduce novel constrained variants of the OT problem that result in highly sparse alignments with controllable sparsity. Our model is end-to-end differentiable using the Sinkhorn algorithm for OT and can be trained without any alignment annotations. We evaluate our model on the Stack-Exchange, MultiNews, e-SNLI, and MultiRC datasets. Our model achieves very sparse rationale selections with high fidelity while preserving prediction accuracy compared to strong attention baseline models.",Rationalizing Text Matching: {L}earning Sparse Alignments via Optimal Transport,"Selecting input features of top relevance has become a popular method for building self-explaining models. In this work, we extend this selective rationalization approach to text matching, where the goal is to jointly select and align text pieces, such as tokens or sentences, as a justification for the downstream prediction. Our approach employs optimal transport (OT) to find a minimal cost alignment between the inputs. However, directly applying OT often produces dense and therefore uninterpretable alignments. To overcome this limitation, we introduce novel constrained variants of the OT problem that result in highly sparse alignments with controllable sparsity. Our model is end-to-end differentiable using the Sinkhorn algorithm for OT and can be trained without any alignment annotations. We evaluate our model on the Stack-Exchange, MultiNews, e-SNLI, and MultiRC datasets. Our model achieves very sparse rationale selections with high fidelity while preserving prediction accuracy compared to strong attention baseline models.",Rationalizing Text Matching: Learning Sparse Alignments via Optimal Transport,"Selecting input features of top relevance has become a popular method for building self-explaining models. In this work, we extend this selective rationalization approach to text matching, where the goal is to jointly select and align text pieces, such as tokens or sentences, as a justification for the downstream prediction. Our approach employs optimal transport (OT) to find a minimal cost alignment between the inputs. However, directly applying OT often produces dense and therefore uninterpretable alignments. To overcome this limitation, we introduce novel constrained variants of the OT problem that result in highly sparse alignments with controllable sparsity. Our model is end-to-end differentiable using the Sinkhorn algorithm for OT and can be trained without any alignment annotations. We evaluate our model on the Stack-Exchange, MultiNews, e-SNLI, and MultiRC datasets. Our model achieves very sparse rationale selections with high fidelity while preserving prediction accuracy compared to strong attention baseline models.","We thank Jesse Michel, Derek Chen, Yi Yang, and the anonymous reviewers for their valuable discussions. We thank Sam Altschul, Derek Chen, Amit Ganatra, Alex Lin, James Mullenbach, Jen Seale, Siddharth Varia, and Lei Xu for providing the human evaluation.","Rationalizing Text Matching: Learning Sparse Alignments via Optimal Transport. Selecting input features of top relevance has become a popular method for building self-explaining models. In this work, we extend this selective rationalization approach to text matching, where the goal is to jointly select and align text pieces, such as tokens or sentences, as a justification for the downstream prediction. Our approach employs optimal transport (OT) to find a minimal cost alignment between the inputs. However, directly applying OT often produces dense and therefore uninterpretable alignments. To overcome this limitation, we introduce novel constrained variants of the OT problem that result in highly sparse alignments with controllable sparsity. Our model is end-to-end differentiable using the Sinkhorn algorithm for OT and can be trained without any alignment annotations. We evaluate our model on the Stack-Exchange, MultiNews, e-SNLI, and MultiRC datasets. Our model achieves very sparse rationale selections with high fidelity while preserving prediction accuracy compared to strong attention baseline models.",2020
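The abstract builds on entropy-regularized optimal transport solved with Sinkhorn iterations; the constrained, sparsity-inducing variants are the paper's contribution and are not reproduced here. The NumPy sketch below shows only the vanilla Sinkhorn routine that such a model builds on, with toy cost and marginal values.

```python
import numpy as np

def sinkhorn(cost, a, b, eps=0.1, n_iters=200):
    """Entropy-regularized optimal transport via Sinkhorn iterations.
    cost: (n, m) alignment cost matrix between text pieces
    a, b: marginal weights over the two texts (each sums to 1)
    Returns a transport plan P whose row sums ~ a and column sums ~ b."""
    K = np.exp(-cost / eps)          # Gibbs kernel; smaller eps -> sparser plan
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Toy example: 3 source pieces aligned to 2 target pieces.
cost = np.array([[0.1, 1.0],
                 [1.0, 0.1],
                 [0.5, 0.5]])
a = np.full(3, 1 / 3)
b = np.full(2, 1 / 2)
P = sinkhorn(cost, a, b)
print(P.round(3), P.sum(axis=1), P.sum(axis=0))
```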
negi-buitelaar-2015-curse,https://aclanthology.org/W15-0115,0,,,,,,,"Curse or Boon? Presence of Subjunctive Mood in Opinionated Text. In addition to the expression of positive and negative sentiments in the reviews, customers often tend to express wishes and suggestions regarding improvements in a product/service, which could be worth extracting. Subjunctive mood is often present in sentences which speak about a possibility or action that has not yet occurred. While this phenomena poses challenges to the identification of positive and negative sentiments hidden in a text, it can be helpful to identify wishes and suggestions. In this paper, we extract features from a small dataset of subjunctive mood, and use those features to identify wishes and suggestions in opinionated text. Our study validates that subjunctive features can be good features for the detection of wishes. However, with the given dataset, such features did not perform well for suggestion detection.",Curse or Boon? Presence of Subjunctive Mood in Opinionated Text,"In addition to the expression of positive and negative sentiments in the reviews, customers often tend to express wishes and suggestions regarding improvements in a product/service, which could be worth extracting. Subjunctive mood is often present in sentences which speak about a possibility or action that has not yet occurred. While this phenomena poses challenges to the identification of positive and negative sentiments hidden in a text, it can be helpful to identify wishes and suggestions. In this paper, we extract features from a small dataset of subjunctive mood, and use those features to identify wishes and suggestions in opinionated text. Our study validates that subjunctive features can be good features for the detection of wishes. However, with the given dataset, such features did not perform well for suggestion detection.",Curse or Boon? Presence of Subjunctive Mood in Opinionated Text,"In addition to the expression of positive and negative sentiments in the reviews, customers often tend to express wishes and suggestions regarding improvements in a product/service, which could be worth extracting. Subjunctive mood is often present in sentences which speak about a possibility or action that has not yet occurred. While this phenomena poses challenges to the identification of positive and negative sentiments hidden in a text, it can be helpful to identify wishes and suggestions. In this paper, we extract features from a small dataset of subjunctive mood, and use those features to identify wishes and suggestions in opinionated text. Our study validates that subjunctive features can be good features for the detection of wishes. However, with the given dataset, such features did not perform well for suggestion detection.","This work has been funded by the the European Union's Horizon 2020 programme under grant agreement No 644632 MixedEmotions, and Science Foundation Ireland under Grant Number SFI/12/RC/2289.","Curse or Boon? Presence of Subjunctive Mood in Opinionated Text. In addition to the expression of positive and negative sentiments in the reviews, customers often tend to express wishes and suggestions regarding improvements in a product/service, which could be worth extracting. Subjunctive mood is often present in sentences which speak about a possibility or action that has not yet occurred. 
While this phenomena poses challenges to the identification of positive and negative sentiments hidden in a text, it can be helpful to identify wishes and suggestions. In this paper, we extract features from a small dataset of subjunctive mood, and use those features to identify wishes and suggestions in opinionated text. Our study validates that subjunctive features can be good features for the detection of wishes. However, with the given dataset, such features did not perform well for suggestion detection.",2015
sarkar-bandyopadhyay-2008-design,https://aclanthology.org/I08-3012,0,,,,,,,"Design of a Rule-based Stemmer for Natural Language Text in Bengali. This paper presents a rule-based approach for finding out the stems from text in Bengali, a resource-poor language. It starts by introducing the concept of orthographic syllable, the basic orthographic unit of Bengali. Then it discusses the morphological structure of the tokens for different parts of speech, formalizes the inflection rule constructs and formulates a quantitative ranking measure for potential candidate stems of a token. These concepts are applied in the design and implementation of an extensible architecture of a stemmer system for Bengali text. The accuracy of the system is calculated to be ~89% and above.",Design of a Rule-based Stemmer for Natural Language Text in {B}engali,"This paper presents a rule-based approach for finding out the stems from text in Bengali, a resource-poor language. It starts by introducing the concept of orthographic syllable, the basic orthographic unit of Bengali. Then it discusses the morphological structure of the tokens for different parts of speech, formalizes the inflection rule constructs and formulates a quantitative ranking measure for potential candidate stems of a token. These concepts are applied in the design and implementation of an extensible architecture of a stemmer system for Bengali text. The accuracy of the system is calculated to be ~89% and above.",Design of a Rule-based Stemmer for Natural Language Text in Bengali,"This paper presents a rule-based approach for finding out the stems from text in Bengali, a resource-poor language. It starts by introducing the concept of orthographic syllable, the basic orthographic unit of Bengali. Then it discusses the morphological structure of the tokens for different parts of speech, formalizes the inflection rule constructs and formulates a quantitative ranking measure for potential candidate stems of a token. These concepts are applied in the design and implementation of an extensible architecture of a stemmer system for Bengali text. The accuracy of the system is calculated to be ~89% and above.",,"Design of a Rule-based Stemmer for Natural Language Text in Bengali. This paper presents a rule-based approach for finding out the stems from text in Bengali, a resource-poor language. It starts by introducing the concept of orthographic syllable, the basic orthographic unit of Bengali. Then it discusses the morphological structure of the tokens for different parts of speech, formalizes the inflection rule constructs and formulates a quantitative ranking measure for potential candidate stems of a token. These concepts are applied in the design and implementation of an extensible architecture of a stemmer system for Bengali text. The accuracy of the system is calculated to be ~89% and above.",2008
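To make the rule-based idea concrete, here is a toy longest-suffix-first stemmer in Python. The romanized suffix list and the ranking rule (prefer the candidate obtained by stripping the longest valid suffix while keeping a minimum stem length) are invented stand-ins; the paper's system works on orthographic syllables and a quantitative ranking measure that this sketch does not reproduce.

```python
# Illustrative longest-suffix-first stripping with a crude ranking of candidate stems.
SUFFIXES = ["der", "er", "ke", "te", "ra", "e", "r"]  # toy, romanized examples

def candidate_stems(token, min_len=2):
    cands = [(token, 0)]                        # the token itself is a candidate
    for suf in SUFFIXES:
        if token.endswith(suf) and len(token) - len(suf) >= min_len:
            cands.append((token[: -len(suf)], len(suf)))
    return cands

def stem(token):
    # Rank: prefer the candidate produced by stripping the longest valid suffix.
    return max(candidate_stems(token), key=lambda c: c[1])[0]

for w in ["cheleder", "boite", "bondhura"]:
    print(w, "->", stem(w))
```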
dobrowolski-etal-2021-samsung,https://aclanthology.org/2021.wat-1.27,0,,,,,,,"Samsung R\&D Institute Poland submission to WAT 2021 Indic Language Multilingual Task. This paper describes the submission to the WAT 2021 Indic Language Multilingual Task by Samsung R&D Institute Poland. The task covered translation between 10 Indic Languages (Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil and Telugu) and English. We combined a variety of techniques: transliteration, filtering, backtranslation, domain adaptation, knowledge-distillation and finally ensembling of NMT models. We applied an effective approach to low-resource training that consists of pretraining on backtranslations and tuning on parallel corpora. We experimented with two different domain-adaptation techniques which significantly improved translation quality when applied to monolingual corpora. We researched and applied a novel approach for finding the best hyperparameters for ensembling a number of translation models. All techniques combined gave significant improvement of up to +8 BLEU over baseline results. The quality of the models has been confirmed by the human evaluation where SRPOL models scored best for all 5 manually evaluated languages.",{S}amsung {R}{\&}{D} Institute {P}oland submission to {WAT} 2021 Indic Language Multilingual Task,"This paper describes the submission to the WAT 2021 Indic Language Multilingual Task by Samsung R&D Institute Poland. The task covered translation between 10 Indic Languages (Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil and Telugu) and English. We combined a variety of techniques: transliteration, filtering, backtranslation, domain adaptation, knowledge-distillation and finally ensembling of NMT models. We applied an effective approach to low-resource training that consists of pretraining on backtranslations and tuning on parallel corpora. We experimented with two different domain-adaptation techniques which significantly improved translation quality when applied to monolingual corpora. We researched and applied a novel approach for finding the best hyperparameters for ensembling a number of translation models. All techniques combined gave significant improvement of up to +8 BLEU over baseline results. The quality of the models has been confirmed by the human evaluation where SRPOL models scored best for all 5 manually evaluated languages.",Samsung R\&D Institute Poland submission to WAT 2021 Indic Language Multilingual Task,"This paper describes the submission to the WAT 2021 Indic Language Multilingual Task by Samsung R&D Institute Poland. The task covered translation between 10 Indic Languages (Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil and Telugu) and English. We combined a variety of techniques: transliteration, filtering, backtranslation, domain adaptation, knowledge-distillation and finally ensembling of NMT models. We applied an effective approach to low-resource training that consists of pretraining on backtranslations and tuning on parallel corpora. We experimented with two different domain-adaptation techniques which significantly improved translation quality when applied to monolingual corpora. We researched and applied a novel approach for finding the best hyperparameters for ensembling a number of translation models. All techniques combined gave significant improvement of up to +8 BLEU over baseline results. The quality of the models has been confirmed by the human evaluation where SRPOL models scored best for all 5 manually evaluated languages.",,"Samsung R\&D Institute Poland submission to WAT 2021 Indic Language Multilingual Task. This paper describes the submission to the WAT 2021 Indic Language Multilingual Task by Samsung R&D Institute Poland. The task covered translation between 10 Indic Languages (Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil and Telugu) and English. We combined a variety of techniques: transliteration, filtering, backtranslation, domain adaptation, knowledge-distillation and finally ensembling of NMT models. We applied an effective approach to low-resource training that consists of pretraining on backtranslations and tuning on parallel corpora. We experimented with two different domain-adaptation techniques which significantly improved translation quality when applied to monolingual corpora. We researched and applied a novel approach for finding the best hyperparameters for ensembling a number of translation models. All techniques combined gave significant improvement of up to +8 BLEU over baseline results. The quality of the models has been confirmed by the human evaluation where SRPOL models scored best for all 5 manually evaluated languages.",2021
boggild-andersen-1990-valence,https://aclanthology.org/W89-0113,0,,,,,,,"Valence Frames Used for Syntactic Disambiguation in the EUROTRA-DK Model. The EEC Machine Translation Programme EUROTRA is a multilingual, transfer-based, module-structured machine translation project. The result of the analysis, the interface structure, is based on a dependency grammar combined with a frame theory. The valency frames, specified in the lexicon, enable the grammar to analyse or generate the sentences. If information about the syntactical structure of the slot fillers is added to the lexicon, certain erroneous analyses may be discarded exclusively on a syntactical basis, and complex transfer may in some cases be avoided. Where semantic and syntactical differences are related, problems of ambiguity may be solved as well. This will be exemplified, and the frame theory will be explained. The paper concentrates on the valency of verbs; according to the EUROTRA theory the verb is the governor of a sentence.",Valence Frames Used for Syntactic Disambiguation in the {EUROTRA}-{DK} Model,"The EEC Machine Translation Programme EUROTRA is a multilingual, transfer-based, module-structured machine translation project. The result of the analysis, the interface structure, is based on a dependency grammar combined with a frame theory. The valency frames, specified in the lexicon, enable the grammar to analyse or generate the sentences. If information about the syntactical structure of the slot fillers is added to the lexicon, certain erroneous analyses may be discarded exclusively on a syntactical basis, and complex transfer may in some cases be avoided. Where semantic and syntactical differences are related, problems of ambiguity may be solved as well. This will be exemplified, and the frame theory will be explained. The paper concentrates on the valency of verbs; according to the EUROTRA theory the verb is the governor of a sentence.",Valence Frames Used for Syntactic Disambiguation in the EUROTRA-DK Model,"The EEC Machine Translation Programme EUROTRA is a multilingual, transfer-based, module-structured machine translation project. The result of the analysis, the interface structure, is based on a dependency grammar combined with a frame theory. The valency frames, specified in the lexicon, enable the grammar to analyse or generate the sentences. If information about the syntactical structure of the slot fillers is added to the lexicon, certain erroneous analyses may be discarded exclusively on a syntactical basis, and complex transfer may in some cases be avoided. Where semantic and syntactical differences are related, problems of ambiguity may be solved as well. This will be exemplified, and the frame theory will be explained. The paper concentrates on the valency of verbs; according to the EUROTRA theory the verb is the governor of a sentence.",,"Valence Frames Used for Syntactic Disambiguation in the EUROTRA-DK Model. The EEC Machine Translation Programme EUROTRA is a multilingual, transfer-based, module-structured machine translation project. The result of the analysis, the interface structure, is based on a dependency grammar combined with a frame theory. The valency frames, specified in the lexicon, enable the grammar to analyse or generate the sentences. If information about the syntactical structure of the slot fillers is added to the lexicon, certain erroneous analyses may be discarded exclusively on a syntactical basis, and complex transfer may in some cases be avoided. Where semantic and syntactical differences are related, problems of ambiguity may be solved as well. This will be exemplified, and the frame theory will be explained. The paper concentrates on the valency of verbs; according to the EUROTRA theory the verb is the governor of a sentence.",1990
zuber-1982-explicit,https://aclanthology.org/C82-2073,0,,,,,,,"Explicit Sentences and Syntactic Complexity. (2) Leslie is a student (3) Leslie is a woman and Leslie is a student It is clear however that neither (2) nor (3) can be considered as an ""exact"" translation of (1). Sentence (2) does not carry the information that Leslie is a woman and sentence (3) does not carry this information in the same way as (1); the fact that Leslie is a woman is presupposed by (1) whereas it is asserted by (3). In other words sentence (3) is more explicit than sentence (1). Following Keenan (1973) we will say that a sentence S is more explicit than a sentence T iff S and T have the same consequences but some presupposition of T is an assertion of S. Not only translations can be more explicit. For instance (5) is more explicit than (4) since (4) presupposes (6) whereas (5) asserts (6):",Explicit Sentences and Syntactic Complexity,"(2) Leslie is a student (3) Leslie is a woman and Leslie is a student It is clear however that neither (2) nor (3) can be considered as an ""exact"" translation of (1). Sentence (2) does not carry the information that Leslie is a woman and sentence (3) does not carry this information in the same way as (1); the fact that Leslie is a woman is presupposed by (1) whereas it is asserted by (3). In other words sentence (3) is more explicit than sentence (1). Following Keenan (1973) we will say that a sentence S is more explicit than a sentence T iff S and T have the same consequences but some presupposition of T is an assertion of S. Not only translations can be more explicit. For instance (5) is more explicit than (4) since (4) presupposes (6) whereas (5) asserts (6):",Explicit Sentences and Syntactic Complexity,"(2) Leslie is a student (3) Leslie is a woman and Leslie is a student It is clear however that neither (2) nor (3) can be considered as an ""exact"" translation of (1). Sentence (2) does not carry the information that Leslie is a woman and sentence (3) does not carry this information in the same way as (1); the fact that Leslie is a woman is presupposed by (1) whereas it is asserted by (3). In other words sentence (3) is more explicit than sentence (1). Following Keenan (1973) we will say that a sentence S is more explicit than a sentence T iff S and T have the same consequences but some presupposition of T is an assertion of S. Not only translations can be more explicit. For instance (5) is more explicit than (4) since (4) presupposes (6) whereas (5) asserts (6):",,"Explicit Sentences and Syntactic Complexity. (2) Leslie is a student (3) Leslie is a woman and Leslie is a student It is clear however that neither (2) nor (3) can be considered as an ""exact"" translation of (1). Sentence (2) does not carry the information that Leslie is a woman and sentence (3) does not carry this information in the same way as (1); the fact that Leslie is a woman is presupposed by (1) whereas it is asserted by (3). In other words sentence (3) is more explicit than sentence (1). Following Keenan (1973) we will say that a sentence S is more explicit than a sentence T iff S and T have the same consequences but some presupposition of T is an assertion of S. Not only translations can be more explicit. For instance (5) is more explicit than (4) since (4) presupposes (6) whereas (5) asserts (6):",1982
nagaraju-etal-2017-rule,https://aclanthology.org/W17-7550,0,,,,,,,"Rule Based Approch of Clause Boundary Identification in Telugu. One of the major challenges in Natural Language Processing is identifying Clauses and their Boundaries in Computational Linguistics. This paper attempts to develop an Automatic Clause Boundary Identifier (CBI) for Telugu language. The language Telugu belongs to South-Central Dravidian language family with features of head-final, left-branching and morphologically agglutinative in nature (Bh. Krishnamurti, 2003). A huge amount of corpus is studied to frame the rules for identifying clause boundaries and these rules are trained to a computational algorithm and also discussed some of the issues in identifying clause boundaries. A clause boundary annotated corpus can be developed from raw text which can be used to train a machine learning algorithm which in turn helps in development of a Hybrid Clause Boundary Identification Tool for Telugu. Its implementation and evaluation are discussed in this paper.",Rule Based Approch of Clause Boundary Identification in {T}elugu,"One of the major challenges in Natural Language Processing is identifying Clauses and their Boundaries in Computational Linguistics. This paper attempts to develop an Automatic Clause Boundary Identifier (CBI) for Telugu language. The language Telugu belongs to South-Central Dravidian language family with features of head-final, left-branching and morphologically agglutinative in nature (Bh. Krishnamurti, 2003). A huge amount of corpus is studied to frame the rules for identifying clause boundaries and these rules are trained to a computational algorithm and also discussed some of the issues in identifying clause boundaries. A clause boundary annotated corpus can be developed from raw text which can be used to train a machine learning algorithm which in turn helps in development of a Hybrid Clause Boundary Identification Tool for Telugu. Its implementation and evaluation are discussed in this paper.",Rule Based Approch of Clause Boundary Identification in Telugu,"One of the major challenges in Natural Language Processing is identifying Clauses and their Boundaries in Computational Linguistics. This paper attempts to develop an Automatic Clause Boundary Identifier (CBI) for Telugu language. The language Telugu belongs to South-Central Dravidian language family with features of head-final, left-branching and morphologically agglutinative in nature (Bh. Krishnamurti, 2003). A huge amount of corpus is studied to frame the rules for identifying clause boundaries and these rules are trained to a computational algorithm and also discussed some of the issues in identifying clause boundaries. A clause boundary annotated corpus can be developed from raw text which can be used to train a machine learning algorithm which in turn helps in development of a Hybrid Clause Boundary Identification Tool for Telugu. Its implementation and evaluation are discussed in this paper.",,"Rule Based Approch of Clause Boundary Identification in Telugu. One of the major challenges in Natural Language Processing is identifying Clauses and their Boundaries in Computational Linguistics. This paper attempts to develop an Automatic Clause Boundary Identifier (CBI) for Telugu language. The language Telugu belongs to South-Central Dravidian language family with features of head-final, left-branching and morphologically agglutinative in nature (Bh. Krishnamurti, 2003). A huge amount of corpus is studied to frame the rules for identifying clause boundaries and these rules are trained to a computational algorithm and also discussed some of the issues in identifying clause boundaries. A clause boundary annotated corpus can be developed from raw text which can be used to train a machine learning algorithm which in turn helps in development of a Hybrid Clause Boundary Identification Tool for Telugu. Its implementation and evaluation are discussed in this paper.",2017
ye-ling-2019-distant,https://aclanthology.org/N19-1288,0,,,,,,,"Distant Supervision Relation Extraction with Intra-Bag and Inter-Bag Attentions. This paper presents a neural relation extraction method to deal with the noisy training data generated by distant supervision. Previous studies mainly focus on sentence-level de-noising by designing neural networks with intra-bag attentions. In this paper, both intra-bag and inter-bag attentions are considered in order to deal with the noise at sentence-level and bag-level respectively. First, relation-aware bag representations are calculated by weighting sentence embeddings using intra-bag attentions. Here, each possible relation is utilized as the query for attention calculation instead of only using the target relation in conventional methods. Furthermore, the representation of a group of bags in the training set which share the same relation label is calculated by weighting bag representations using a similarity-based inter-bag attention module. Finally, a bag group is utilized as a training sample when building our relation extractor. Experimental results on the New York Times dataset demonstrate the effectiveness of our proposed intra-bag and inter-bag attention modules. Our method also achieves better relation extraction accuracy than state-of-the-art methods on this dataset.",Distant Supervision Relation Extraction with Intra-Bag and Inter-Bag Attentions,"This paper presents a neural relation extraction method to deal with the noisy training data generated by distant supervision. Previous studies mainly focus on sentence-level de-noising by designing neural networks with intra-bag attentions. In this paper, both intra-bag and inter-bag attentions are considered in order to deal with the noise at sentence-level and bag-level respectively. First, relation-aware bag representations are calculated by weighting sentence embeddings using intra-bag attentions. Here, each possible relation is utilized as the query for attention calculation instead of only using the target relation in conventional methods. Furthermore, the representation of a group of bags in the training set which share the same relation label is calculated by weighting bag representations using a similarity-based inter-bag attention module. Finally, a bag group is utilized as a training sample when building our relation extractor. Experimental results on the New York Times dataset demonstrate the effectiveness of our proposed intra-bag and inter-bag attention modules. Our method also achieves better relation extraction accuracy than state-of-the-art methods on this dataset.",Distant Supervision Relation Extraction with Intra-Bag and Inter-Bag Attentions,"This paper presents a neural relation extraction method to deal with the noisy training data generated by distant supervision. Previous studies mainly focus on sentence-level de-noising by designing neural networks with intra-bag attentions. In this paper, both intra-bag and inter-bag attentions are considered in order to deal with the noise at sentence-level and bag-level respectively. First, relation-aware bag representations are calculated by weighting sentence embeddings using intra-bag attentions. Here, each possible relation is utilized as the query for attention calculation instead of only using the target relation in conventional methods. Furthermore, the representation of a group of bags in the training set which share the same relation label is calculated by weighting bag representations using a similarity-based inter-bag attention module. Finally, a bag group is utilized as a training sample when building our relation extractor. Experimental results on the New York Times dataset demonstrate the effectiveness of our proposed intra-bag and inter-bag attention modules. Our method also achieves better relation extraction accuracy than state-of-the-art methods on this dataset.",We thank the anonymous reviewers for their valuable comments.,"Distant Supervision Relation Extraction with Intra-Bag and Inter-Bag Attentions. This paper presents a neural relation extraction method to deal with the noisy training data generated by distant supervision. Previous studies mainly focus on sentence-level de-noising by designing neural networks with intra-bag attentions. In this paper, both intra-bag and inter-bag attentions are considered in order to deal with the noise at sentence-level and bag-level respectively. First, relation-aware bag representations are calculated by weighting sentence embeddings using intra-bag attentions. Here, each possible relation is utilized as the query for attention calculation instead of only using the target relation in conventional methods. Furthermore, the representation of a group of bags in the training set which share the same relation label is calculated by weighting bag representations using a similarity-based inter-bag attention module. Finally, a bag group is utilized as a training sample when building our relation extractor. Experimental results on the New York Times dataset demonstrate the effectiveness of our proposed intra-bag and inter-bag attention modules. Our method also achieves better relation extraction accuracy than state-of-the-art methods on this dataset.",2019
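A minimal NumPy sketch of the two-level weighting idea described above: intra-bag attention turns the sentences of one bag into a relation-aware bag vector, and inter-bag attention down-weights noisier bags within a group that shares the same label. The similarity functions and normalizations here are simplified assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def intra_bag_attention(S, r):
    """S: (n_sent, d) sentence embeddings in one bag; r: (d,) relation query.
    Returns a relation-aware bag representation as an attention-weighted sum."""
    alpha = softmax(S @ r)            # (n_sent,)
    return alpha @ S                  # (d,)

def inter_bag_attention(B):
    """B: (n_bag, d) bag representations sharing the same relation label.
    Weight each bag by its similarity to the other bags in the group."""
    sim = B @ B.T                     # (n_bag, n_bag)
    np.fill_diagonal(sim, 0.0)        # a bag should not vote for itself
    beta = softmax(sim.sum(axis=1))   # noisier (dissimilar) bags get lower weight
    return beta @ B                   # (d,) group representation

rng = np.random.default_rng(0)
bags = [rng.normal(size=(k, 8)) for k in (3, 5, 2)]   # three bags of sentences
r = rng.normal(size=8)                                # query for one relation
B = np.stack([intra_bag_attention(S, r) for S in bags])
print(inter_bag_attention(B).shape)  # (8,)
```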
fan-etal-2015-hpsg,https://aclanthology.org/W15-3303,0,,,,,,,"An HPSG-based Shared-Grammar for the Chinese Languages: ZHONG [|]. This paper introduces our attempts to model the Chinese language using HPSG and MRS. Chinese refers to a family of various languages including Mandarin Chinese, Cantonese, Min, etc. These languages share a large amount of structure, though they may differ in orthography, lexicon, and syntax. To model these, we are building a family of grammars: ZHONG [ ]. This grammar contains instantiations of various Chinese languages, sharing descriptions where possible. Currently we have prototype grammars for Cantonese and Mandarin in both simplified and traditional script, all based on a common core. The grammars also have facilities for robust parsing, sentence generation, and unknown word handling.",An {HPSG}-based Shared-Grammar for the {C}hinese Languages: {ZHONG} [|],"This paper introduces our attempts to model the Chinese language using HPSG and MRS. Chinese refers to a family of various languages including Mandarin Chinese, Cantonese, Min, etc. These languages share a large amount of structure, though they may differ in orthography, lexicon, and syntax. To model these, we are building a family of grammars: ZHONG [ ]. This grammar contains instantiations of various Chinese languages, sharing descriptions where possible. Currently we have prototype grammars for Cantonese and Mandarin in both simplified and traditional script, all based on a common core. The grammars also have facilities for robust parsing, sentence generation, and unknown word handling.",An HPSG-based Shared-Grammar for the Chinese Languages: ZHONG [|],"This paper introduces our attempts to model the Chinese language using HPSG and MRS. Chinese refers to a family of various languages including Mandarin Chinese, Cantonese, Min, etc. These languages share a large amount of structure, though they may differ in orthography, lexicon, and syntax. To model these, we are building a family of grammars: ZHONG [ ]. This grammar contains instantiations of various Chinese languages, sharing descriptions where possible. Currently we have prototype grammars for Cantonese and Mandarin in both simplified and traditional script, all based on a common core. The grammars also have facilities for robust parsing, sentence generation, and unknown word handling.","We would like to express special thanks to Justin Chunlei Yang and Dan Flickinger for their enormous work on ManGO, which our current grammar is based on. In addition, we received much inspiration from Yi Zhang and Rui Wang and their Mandarin Chinese Grammar. We are grateful to Michael Wayne Goodman, Luis Mortado da Costa, Bo Chen, Joanna Sio Ut Seong, Shan Wang, František Kratochvíl, Huizhen Wang, Wenjie Wang, Giulia Bonansinga, David Moeljadi, Tuấn Anh Lê, Woodley Packard, Leslie Lee, and Jong-Bok Kim for their help and comments. Valuable comments from four anonymous reviewers are also much appreciated. Of course, we are solely responsible for all the remaining errors and infelicities. This research was supported in part by","An HPSG-based Shared-Grammar for the Chinese Languages: ZHONG [|]. This paper introduces our attempts to model the Chinese language using HPSG and MRS. Chinese refers to a family of various languages including Mandarin Chinese, Cantonese, Min, etc. These languages share a large amount of structure, though they may differ in orthography, lexicon, and syntax. To model these, we are building a family of grammars: ZHONG [ ]. 
This grammar contains instantiations of various Chinese languages, sharing descriptions where possible. Currently we have prototype grammars for Cantonese and Mandarin in both simplified and traditional script, all based on a common core. The grammars also have facilities for robust parsing, sentence generation, and unknown word handling.",2015
he-etal-2019-pointer,https://aclanthology.org/U19-1013,0,,,,,,,"A Pointer Network Architecture for Context-Dependent Semantic Parsing. Semantic parsing targets at mapping human utterances into structured meaning representations, such as logical forms, programming snippets, SQL queries etc. In this work, we focus on logical form generation, which is extracted from an automated email assistant system. Since this task is dialogue-oriented, information across utterances must be well handled. Furthermore, certain inputs from users are used as arguments for the logical form, which requires a parser to distinguish the functional words and content words. Hence, an intelligent parser should be able to switch between generation mode and copy mode. In order to address the aforementioned issues, we equip the vanilla seq2seq model with a pointer network and a context-dependent architecture to generate more accurate logical forms. Our model achieves state-of-the-art performance on the email assistant task.",A Pointer Network Architecture for Context-Dependent Semantic Parsing,"Semantic parsing targets at mapping human utterances into structured meaning representations, such as logical forms, programming snippets, SQL queries etc. In this work, we focus on logical form generation, which is extracted from an automated email assistant system. Since this task is dialogue-oriented, information across utterances must be well handled. Furthermore, certain inputs from users are used as arguments for the logical form, which requires a parser to distinguish the functional words and content words. Hence, an intelligent parser should be able to switch between generation mode and copy mode. In order to address the aforementioned issues, we equip the vanilla seq2seq model with a pointer network and a context-dependent architecture to generate more accurate logical forms. Our model achieves state-of-the-art performance on the email assistant task.",A Pointer Network Architecture for Context-Dependent Semantic Parsing,"Semantic parsing targets at mapping human utterances into structured meaning representations, such as logical forms, programming snippets, SQL queries etc. In this work, we focus on logical form generation, which is extracted from an automated email assistant system. Since this task is dialogue-oriented, information across utterances must be well handled. Furthermore, certain inputs from users are used as arguments for the logical form, which requires a parser to distinguish the functional words and content words. Hence, an intelligent parser should be able to switch between generation mode and copy mode. In order to address the aforementioned issues, we equip the vanilla seq2seq model with a pointer network and a context-dependent architecture to generate more accurate logical forms. Our model achieves state-of-the-art performance on the email assistant task.",,"A Pointer Network Architecture for Context-Dependent Semantic Parsing. Semantic parsing targets at mapping human utterances into structured meaning representations, such as logical forms, programming snippets, SQL queries etc. In this work, we focus on logical form generation, which is extracted from an automated email assistant system. Since this task is dialogue-oriented, information across utterances must be well handled. Furthermore, certain inputs from users are used as arguments for the logical form, which requires a parser to distinguish the functional words and content words. 
Hence, an intelligent parser should be able to switch between generation mode and copy mode. In order to address the aforementioned issues, we equip the vanilla seq2seq model with a pointer network and a context-dependent architecture to generate more accurate logical forms. Our model achieves state-of-the-art performance on the email assistant task.",2019
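As a sketch of the generation/copy switch described above, the following Python snippet mixes a vocabulary distribution with an attention-derived copy distribution over source tokens. The gate value, the toy vocabulary, and the example user argument (an email address) are hypothetical; the paper's seq2seq encoder and context handling are not shown.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mix_copy_generate(gen_logits, attn_scores, src_tokens, vocab, p_copy):
    """Combine a generation distribution over the vocabulary with a copy
    distribution over source tokens, gated by p_copy (the pointer switch)."""
    p_gen = softmax(gen_logits)            # over the output vocabulary
    p_ptr = softmax(attn_scores)           # over source positions
    mixed = (1.0 - p_copy) * p_gen
    for pos, tok in enumerate(src_tokens):
        mixed[vocab[tok]] += p_copy * p_ptr[pos]   # route copy mass to the token id
    return mixed

vocab = {"<select>": 0, "(": 1, ")": 2, "alice@example.com": 3, "subject": 4}
src = ["alice@example.com", "subject"]     # hypothetical user-provided arguments
dist = mix_copy_generate(np.array([2.0, 0.5, 0.5, -1.0, -1.0]),
                         np.array([3.0, 0.1]), src, vocab, p_copy=0.7)
print(dist.round(3), dist.sum())  # probability mass shifts toward the copied address
```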
munkhdalai-yu-2017-neural-semantic,https://aclanthology.org/E17-1038,0,,,,,,,"Neural Semantic Encoders. We present a memory augmented neural network for natural language understanding: Neural Semantic Encoders. NSE is equipped with a novel memory update rule and has a variable sized encoding memory that evolves over time and maintains the understanding of input sequences through read, compose and write operations. NSE can also access 1 multiple and shared memories. In this paper, we demonstrated the effectiveness and the flexibility of NSE on five different natural language tasks: natural language inference, question answering, sentence classification, document sentiment analysis and machine translation where NSE achieved state-of-the-art performance when evaluated on publically available benchmarks. For example, our shared-memory model showed an encouraging result on neural machine translation, improving an attention-based baseline by approximately 1.0 BLEU.",Neural Semantic Encoders,"We present a memory augmented neural network for natural language understanding: Neural Semantic Encoders. NSE is equipped with a novel memory update rule and has a variable sized encoding memory that evolves over time and maintains the understanding of input sequences through read, compose and write operations. NSE can also access 1 multiple and shared memories. In this paper, we demonstrated the effectiveness and the flexibility of NSE on five different natural language tasks: natural language inference, question answering, sentence classification, document sentiment analysis and machine translation where NSE achieved state-of-the-art performance when evaluated on publically available benchmarks. For example, our shared-memory model showed an encouraging result on neural machine translation, improving an attention-based baseline by approximately 1.0 BLEU.",Neural Semantic Encoders,"We present a memory augmented neural network for natural language understanding: Neural Semantic Encoders. NSE is equipped with a novel memory update rule and has a variable sized encoding memory that evolves over time and maintains the understanding of input sequences through read, compose and write operations. NSE can also access 1 multiple and shared memories. In this paper, we demonstrated the effectiveness and the flexibility of NSE on five different natural language tasks: natural language inference, question answering, sentence classification, document sentiment analysis and machine translation where NSE achieved state-of-the-art performance when evaluated on publically available benchmarks. For example, our shared-memory model showed an encouraging result on neural machine translation, improving an attention-based baseline by approximately 1.0 BLEU.","We would like to thank Abhyuday Jagannatha and the anonymous reviewers for their insightful comments and suggestions. This work was supported in part by the grant HL125089 from the National Institutes of Health (NIH). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.","Neural Semantic Encoders. We present a memory augmented neural network for natural language understanding: Neural Semantic Encoders. NSE is equipped with a novel memory update rule and has a variable sized encoding memory that evolves over time and maintains the understanding of input sequences through read, compose and write operations. NSE can also access 1 multiple and shared memories. 
In this paper, we demonstrated the effectiveness and the flexibility of NSE on five different natural language tasks: natural language inference, question answering, sentence classification, document sentiment analysis and machine translation where NSE achieved state-of-the-art performance when evaluated on publicly available benchmarks. For example, our shared-memory model showed an encouraging result on neural machine translation, improving an attention-based baseline by approximately 1.0 BLEU.",2017
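The record above describes a memory that evolves through read, compose and write operations. As a rough, simplified illustration (not the paper's actual equations or trained model), one step of such a read-compose-write update might look like the following; the composition weights, dimensions, and inputs are all assumptions for this sketch.

```python
# Illustrative sketch (not from the record above): a simplified
# read-compose-write update over a variable-sized encoding memory.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_step(memory, x, W_compose):
    """One simplified encoding step.

    memory:    (num_slots, dim) slot vectors
    x:         (dim,) current input token representation
    W_compose: (2*dim, dim) composition weights (stand-in for a learned MLP)
    """
    # Read: attend over memory slots with dot-product similarity.
    scores = memory @ x                      # (num_slots,)
    attn = softmax(scores)
    read_vec = attn @ memory                 # (dim,)

    # Compose: combine the input with what was read from memory.
    composed = np.tanh(np.concatenate([x, read_vec]) @ W_compose)  # (dim,)

    # Write: softly overwrite slots in proportion to their attention weight.
    memory = (1.0 - attn[:, None]) * memory + attn[:, None] * composed
    return memory, composed

# Tiny usage example with random data.
rng = np.random.default_rng(0)
dim, slots = 8, 5
mem = rng.normal(size=(slots, dim))
W = rng.normal(size=(2 * dim, dim)) * 0.1
for token in rng.normal(size=(3, dim)):      # a 3-token "sentence"
    mem, h = memory_step(mem, token, W)
print(mem.shape, h.shape)                    # (5, 8) (8,)
```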
han-etal-2015-chinese,https://aclanthology.org/W15-3103,0,,,,,,,"Chinese Named Entity Recognition with Graph-based Semi-supervised Learning Model. Named entity recognition (NER) plays an important role in the NLP literature. The traditional methods tend to employ large annotated corpus to achieve a high performance. Different with many semi-supervised learning models for NER task, in this paper, we employ the graph-based semi-supervised learning (GBSSL) method to utilize the freely available unlabeled data. The experiment shows that the unlabeled corpus can enhance the state-of-theart conditional random field (CRF) learning model and has potential to improve the tagging accuracy even though the margin is a little weak and not satisfying in current experiments.",{C}hinese Named Entity Recognition with Graph-based Semi-supervised Learning Model,"Named entity recognition (NER) plays an important role in the NLP literature. The traditional methods tend to employ large annotated corpus to achieve a high performance. Different with many semi-supervised learning models for NER task, in this paper, we employ the graph-based semi-supervised learning (GBSSL) method to utilize the freely available unlabeled data. The experiment shows that the unlabeled corpus can enhance the state-of-theart conditional random field (CRF) learning model and has potential to improve the tagging accuracy even though the margin is a little weak and not satisfying in current experiments.",Chinese Named Entity Recognition with Graph-based Semi-supervised Learning Model,"Named entity recognition (NER) plays an important role in the NLP literature. The traditional methods tend to employ large annotated corpus to achieve a high performance. Different with many semi-supervised learning models for NER task, in this paper, we employ the graph-based semi-supervised learning (GBSSL) method to utilize the freely available unlabeled data. The experiment shows that the unlabeled corpus can enhance the state-of-theart conditional random field (CRF) learning model and has potential to improve the tagging accuracy even though the margin is a little weak and not satisfying in current experiments.",This work was supported by the Research Committee of the University of Macau (Grant No. MYRG2015-00175-FST and MYRG2015-00188-FST) and the Science and Technology Development Fund of Macau (Grant No. 057/2014/A). The first author was supported by ,"Chinese Named Entity Recognition with Graph-based Semi-supervised Learning Model. Named entity recognition (NER) plays an important role in the NLP literature. The traditional methods tend to employ large annotated corpus to achieve a high performance. Different with many semi-supervised learning models for NER task, in this paper, we employ the graph-based semi-supervised learning (GBSSL) method to utilize the freely available unlabeled data. The experiment shows that the unlabeled corpus can enhance the state-of-theart conditional random field (CRF) learning model and has potential to improve the tagging accuracy even though the margin is a little weak and not satisfying in current experiments.",2015
leone-etal-2020-building,https://aclanthology.org/2020.lrec-1.366,0,,,,,,,"Building Semantic Grams of Human Knowledge. Word senses are typically defined with textual definitions for human consumption and, in computational lexicons, put in context via lexical-semantic relations such as synonymy, antonymy, hypernymy, etc. In this paper we embrace a radically different paradigm that provides a slot-filler structure, called ""semagram"", to define the meaning of words in terms of their prototypical semantic information. We propose a semagram-based knowledge model composed of 26 semantic relationships which integrates features from a range of different sources, such as computational lexicons and property norms. We describe an annotation exercise regarding 50 concepts over 10 different categories and put forward different automated approaches for extending the semagram base to thousands of concepts. We finally evaluate the impact of the proposed resource on a semantic similarity task, showing significant improvements over state-of-the-art word embeddings. We release the complete semagram base and other data at http://nlp.uniroma1.it/semagrams.",Building Semantic Grams of Human Knowledge,"Word senses are typically defined with textual definitions for human consumption and, in computational lexicons, put in context via lexical-semantic relations such as synonymy, antonymy, hypernymy, etc. In this paper we embrace a radically different paradigm that provides a slot-filler structure, called ""semagram"", to define the meaning of words in terms of their prototypical semantic information. We propose a semagram-based knowledge model composed of 26 semantic relationships which integrates features from a range of different sources, such as computational lexicons and property norms. We describe an annotation exercise regarding 50 concepts over 10 different categories and put forward different automated approaches for extending the semagram base to thousands of concepts. We finally evaluate the impact of the proposed resource on a semantic similarity task, showing significant improvements over state-of-the-art word embeddings. We release the complete semagram base and other data at http://nlp.uniroma1.it/semagrams.",Building Semantic Grams of Human Knowledge,"Word senses are typically defined with textual definitions for human consumption and, in computational lexicons, put in context via lexical-semantic relations such as synonymy, antonymy, hypernymy, etc. In this paper we embrace a radically different paradigm that provides a slot-filler structure, called ""semagram"", to define the meaning of words in terms of their prototypical semantic information. We propose a semagram-based knowledge model composed of 26 semantic relationships which integrates features from a range of different sources, such as computational lexicons and property norms. We describe an annotation exercise regarding 50 concepts over 10 different categories and put forward different automated approaches for extending the semagram base to thousands of concepts. We finally evaluate the impact of the proposed resource on a semantic similarity task, showing significant improvements over state-of-the-art word embeddings. We release the complete semagram base and other data at http://nlp.uniroma1.it/semagrams.",The last author gratefully acknowledges the support of the ERC Consolidator Grant MOUSSE No. 726487 under the European Union's Horizon 2020 research and innovation programme.,"Building Semantic Grams of Human Knowledge. 
Word senses are typically defined with textual definitions for human consumption and, in computational lexicons, put in context via lexical-semantic relations such as synonymy, antonymy, hypernymy, etc. In this paper we embrace a radically different paradigm that provides a slot-filler structure, called ""semagram"", to define the meaning of words in terms of their prototypical semantic information. We propose a semagram-based knowledge model composed of 26 semantic relationships which integrates features from a range of different sources, such as computational lexicons and property norms. We describe an annotation exercise regarding 50 concepts over 10 different categories and put forward different automated approaches for extending the semagram base to thousands of concepts. We finally evaluate the impact of the proposed resource on a semantic similarity task, showing significant improvements over state-of-the-art word embeddings. We release the complete semagram base and other data at http://nlp.uniroma1.it/semagrams.",2020
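The record above defines a semagram as a slot-filler structure describing a concept's prototypical semantic information. Purely as an illustration (not the released resource's 26-relation schema), a minimal slot-filler record with a toy overlap-based similarity could be represented as below; the slot names and fillers are invented for this example.

```python
# Illustrative sketch (not the actual semagram base): a minimal slot-filler
# record for a concept, with a toy Jaccard-overlap similarity.
from dataclasses import dataclass, field

@dataclass
class Semagram:
    concept: str
    slots: dict[str, set[str]] = field(default_factory=dict)

    def similarity(self, other: "Semagram") -> float:
        """Jaccard overlap of slot fillers, averaged over shared slots."""
        shared = self.slots.keys() & other.slots.keys()
        if not shared:
            return 0.0
        scores = []
        for slot in shared:
            a, b = self.slots[slot], other.slots[slot]
            scores.append(len(a & b) / len(a | b))
        return sum(scores) / len(scores)

# Toy usage example with invented slots and fillers.
cat = Semagram("cat", {"part": {"tail", "whiskers", "paw"},
                       "behavior": {"purr", "hunt"}})
dog = Semagram("dog", {"part": {"tail", "paw", "snout"},
                       "behavior": {"bark", "hunt"}})
print(round(cat.similarity(dog), 3))  # 0.417
```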
kotonya-etal-2021-graph,https://aclanthology.org/2021.fever-1.3,1,,,,disinformation_and_fake_news,,,"Graph Reasoning with Context-Aware Linearization for Interpretable Fact Extraction and Verification. This paper presents an end-to-end system for fact extraction and verification using textual and tabular evidence, the performance of which we demonstrate on the FEVEROUS dataset. We experiment with both a multi-task learning paradigm to jointly train a graph attention network for both the task of evidence extraction and veracity prediction, as well as a single objective graph model for solely learning veracity prediction and separate evidence extraction. In both instances, we employ a framework for per-cell linearization of tabular evidence, thus allowing us to treat evidence from tables as sequences. The templates we employ for linearizing tables capture the context as well as the content of table data. We furthermore provide a case study to show the interpretability our approach. Our best performing system achieves a FEVEROUS score of 0.23 and 53% label accuracy on the blind test data. 1 * Work done while the author was an intern at J.P. Morgan AI Research.",Graph Reasoning with Context-Aware Linearization for Interpretable Fact Extraction and Verification,"This paper presents an end-to-end system for fact extraction and verification using textual and tabular evidence, the performance of which we demonstrate on the FEVEROUS dataset. We experiment with both a multi-task learning paradigm to jointly train a graph attention network for both the task of evidence extraction and veracity prediction, as well as a single objective graph model for solely learning veracity prediction and separate evidence extraction. In both instances, we employ a framework for per-cell linearization of tabular evidence, thus allowing us to treat evidence from tables as sequences. The templates we employ for linearizing tables capture the context as well as the content of table data. We furthermore provide a case study to show the interpretability our approach. Our best performing system achieves a FEVEROUS score of 0.23 and 53% label accuracy on the blind test data. 1 * Work done while the author was an intern at J.P. Morgan AI Research.",Graph Reasoning with Context-Aware Linearization for Interpretable Fact Extraction and Verification,"This paper presents an end-to-end system for fact extraction and verification using textual and tabular evidence, the performance of which we demonstrate on the FEVEROUS dataset. We experiment with both a multi-task learning paradigm to jointly train a graph attention network for both the task of evidence extraction and veracity prediction, as well as a single objective graph model for solely learning veracity prediction and separate evidence extraction. In both instances, we employ a framework for per-cell linearization of tabular evidence, thus allowing us to treat evidence from tables as sequences. The templates we employ for linearizing tables capture the context as well as the content of table data. We furthermore provide a case study to show the interpretability our approach. Our best performing system achieves a FEVEROUS score of 0.23 and 53% label accuracy on the blind test data. 1 * Work done while the author was an intern at J.P. Morgan AI Research.",,"Graph Reasoning with Context-Aware Linearization for Interpretable Fact Extraction and Verification. 
This paper presents an end-to-end system for fact extraction and verification using textual and tabular evidence, the performance of which we demonstrate on the FEVEROUS dataset. We experiment with both a multi-task learning paradigm to jointly train a graph attention network for both the task of evidence extraction and veracity prediction, as well as a single objective graph model for solely learning veracity prediction and separate evidence extraction. In both instances, we employ a framework for per-cell linearization of tabular evidence, thus allowing us to treat evidence from tables as sequences. The templates we employ for linearizing tables capture the context as well as the content of table data. We furthermore provide a case study to show the interpretability of our approach. Our best performing system achieves a FEVEROUS score of 0.23 and 53% label accuracy on the blind test data. * Work done while the author was an intern at J.P. Morgan AI Research.",2021
molla-van-zaanen-2005-learning,https://aclanthology.org/U05-1005,0,,,,,,,"Learning of Graph Rules for Question Answering. AnswerFinder is a framework for the development of question-answering systems. An-swerFinder is currently being used to test the applicability of graph representations for the detection and extraction of answers. In this paper we briefly describe AnswerFinder and introduce our method to learn graph patterns that link questions with their corresponding answers in arbitrary sentences. The method is based on the translation of the logical forms of questions and answer sentences into graphs, and the application of operations based on graph overlaps and the construction of paths within graphs. The method is general and can be applied to any graph-based representation of the contents of questions and answers.",Learning of Graph Rules for Question Answering,"AnswerFinder is a framework for the development of question-answering systems. An-swerFinder is currently being used to test the applicability of graph representations for the detection and extraction of answers. In this paper we briefly describe AnswerFinder and introduce our method to learn graph patterns that link questions with their corresponding answers in arbitrary sentences. The method is based on the translation of the logical forms of questions and answer sentences into graphs, and the application of operations based on graph overlaps and the construction of paths within graphs. The method is general and can be applied to any graph-based representation of the contents of questions and answers.",Learning of Graph Rules for Question Answering,"AnswerFinder is a framework for the development of question-answering systems. An-swerFinder is currently being used to test the applicability of graph representations for the detection and extraction of answers. In this paper we briefly describe AnswerFinder and introduce our method to learn graph patterns that link questions with their corresponding answers in arbitrary sentences. The method is based on the translation of the logical forms of questions and answer sentences into graphs, and the application of operations based on graph overlaps and the construction of paths within graphs. The method is general and can be applied to any graph-based representation of the contents of questions and answers.","This research is funded by the Australian Research Council, ARC Discovery Grant no DP0450750.","Learning of Graph Rules for Question Answering. AnswerFinder is a framework for the development of question-answering systems. An-swerFinder is currently being used to test the applicability of graph representations for the detection and extraction of answers. In this paper we briefly describe AnswerFinder and introduce our method to learn graph patterns that link questions with their corresponding answers in arbitrary sentences. The method is based on the translation of the logical forms of questions and answer sentences into graphs, and the application of operations based on graph overlaps and the construction of paths within graphs. The method is general and can be applied to any graph-based representation of the contents of questions and answers.",2005
shao-etal-2018-greedy,https://aclanthology.org/D18-1510,0,,,,,,,"Greedy Search with Probabilistic N-gram Matching for Neural Machine Translation. Neural machine translation (NMT) models are usually trained with the word-level loss using the teacher forcing algorithm, which not only evaluates the translation improperly but also suffers from exposure bias. Sequence-level training under the reinforcement framework can mitigate the problems of the word-level loss, but its performance is unstable due to the high variance of the gradient estimation. On these grounds, we present a method with a differentiable sequence-level training objective based on probabilistic n-gram matching which can avoid the reinforcement framework. In addition, this method performs greedy search in the training which uses the predicted words as context just as at inference to alleviate the problem of exposure bias. Experiment results on the NIST Chinese-to-English translation tasks show that our method significantly outperforms the reinforcement-based algorithms and achieves an improvement of 1.5 BLEU points on average over a strong baseline system.",Greedy Search with Probabilistic N-gram Matching for Neural Machine Translation,"Neural machine translation (NMT) models are usually trained with the word-level loss using the teacher forcing algorithm, which not only evaluates the translation improperly but also suffers from exposure bias. Sequence-level training under the reinforcement framework can mitigate the problems of the word-level loss, but its performance is unstable due to the high variance of the gradient estimation. On these grounds, we present a method with a differentiable sequence-level training objective based on probabilistic n-gram matching which can avoid the reinforcement framework. In addition, this method performs greedy search in the training which uses the predicted words as context just as at inference to alleviate the problem of exposure bias. Experiment results on the NIST Chinese-to-English translation tasks show that our method significantly outperforms the reinforcement-based algorithms and achieves an improvement of 1.5 BLEU points on average over a strong baseline system.",Greedy Search with Probabilistic N-gram Matching for Neural Machine Translation,"Neural machine translation (NMT) models are usually trained with the word-level loss using the teacher forcing algorithm, which not only evaluates the translation improperly but also suffers from exposure bias. Sequence-level training under the reinforcement framework can mitigate the problems of the word-level loss, but its performance is unstable due to the high variance of the gradient estimation. On these grounds, we present a method with a differentiable sequence-level training objective based on probabilistic n-gram matching which can avoid the reinforcement framework. In addition, this method performs greedy search in the training which uses the predicted words as context just as at inference to alleviate the problem of exposure bias. Experiment results on the NIST Chinese-to-English translation tasks show that our method significantly outperforms the reinforcement-based algorithms and achieves an improvement of 1.5 BLEU points on average over a strong baseline system.",We thank the anonymous reviewers for their insightful comments. This work was supported by the National Natural Science Foundation of China (NSFC) under the project NO.61472428 and the project NO. 
61662077.,"Greedy Search with Probabilistic N-gram Matching for Neural Machine Translation. Neural machine translation (NMT) models are usually trained with the word-level loss using the teacher forcing algorithm, which not only evaluates the translation improperly but also suffers from exposure bias. Sequence-level training under the reinforcement framework can mitigate the problems of the word-level loss, but its performance is unstable due to the high variance of the gradient estimation. On these grounds, we present a method with a differentiable sequence-level training objective based on probabilistic n-gram matching which can avoid the reinforcement framework. In addition, this method performs greedy search in the training which uses the predicted words as context just as at inference to alleviate the problem of exposure bias. Experiment results on the NIST Chinese-to-English translation tasks show that our method significantly outperforms the reinforcement-based algorithms and achieves an improvement of 1.5 BLEU points on average over a strong baseline system.",2018
hall-nivre-2006-generic,https://aclanthology.org/W05-1708,0,,,,,,,"A generic architecture for data-driven dependency parsing. We present a software architecture for data-driven dependency parsing of unrestricted natural language text, which achieves a strict modularization of parsing algorithm, feature model and learning method such that these parameters can be varied independently. The design has been realized in MaltParser, which supports several parsing algorithms and learning methods, for which complex feature models can be defined in a special description language. 2 Inductive dependency parsing Given a set R of dependency types, we define a dependency graph for a sentence x = (w 1 ,. .. , w n) to be a labeled directed graph G =",A generic architecture for data-driven dependency parsing,"We present a software architecture for data-driven dependency parsing of unrestricted natural language text, which achieves a strict modularization of parsing algorithm, feature model and learning method such that these parameters can be varied independently. The design has been realized in MaltParser, which supports several parsing algorithms and learning methods, for which complex feature models can be defined in a special description language. 2 Inductive dependency parsing Given a set R of dependency types, we define a dependency graph for a sentence x = (w 1 ,. .. , w n) to be a labeled directed graph G =",A generic architecture for data-driven dependency parsing,"We present a software architecture for data-driven dependency parsing of unrestricted natural language text, which achieves a strict modularization of parsing algorithm, feature model and learning method such that these parameters can be varied independently. The design has been realized in MaltParser, which supports several parsing algorithms and learning methods, for which complex feature models can be defined in a special description language. 2 Inductive dependency parsing Given a set R of dependency types, we define a dependency graph for a sentence x = (w 1 ,. .. , w n) to be a labeled directed graph G =",,"A generic architecture for data-driven dependency parsing. We present a software architecture for data-driven dependency parsing of unrestricted natural language text, which achieves a strict modularization of parsing algorithm, feature model and learning method such that these parameters can be varied independently. The design has been realized in MaltParser, which supports several parsing algorithms and learning methods, for which complex feature models can be defined in a special description language. 2 Inductive dependency parsing Given a set R of dependency types, we define a dependency graph for a sentence x = (w 1 ,. .. , w n) to be a labeled directed graph G =",2006
jiang-etal-2018-supervised,https://aclanthology.org/P18-1252,0,,,,,,,"Supervised Treebank Conversion: Data and Approaches. Xinzhou Jiang 2 * , Bo Zhang 2 , Zhenghua Li 1,2 , Min Zhang 1,2 , Sheng Li 3 , Luo Si 3 1. Institute of Artificial Intelligence, Soochow University, Suzhou, China 2. School of Computer Science and Technology, Soochow University, Suzhou, China xzjiang, bzhang17@stu.suda.edu.cn, zhli13,minzhang@suda.edu.cn 3. Alibaba Inc., Hangzhou, China lisheng.ls,luo.si@alibaba-inc.com
Treebank conversion is a straightforward and effective way to exploit various heterogeneous treebanks for boosting parsing accuracy. However, previous work mainly focuses on unsupervised treebank conversion and makes little progress due to the lack of manually labeled data where each sentence has two syntactic trees complying with two different guidelines at the same time, referred to as bi-tree aligned data.",Supervised Treebank Conversion: Data and Approaches,"Xinzhou Jiang 2 * , Bo Zhang 2 , Zhenghua Li 1,2 , Min Zhang 1,2 , Sheng Li 3 , Luo Si 3 1. Institute of Artificial Intelligence, Soochow University, Suzhou, China 2. School of Computer Science and Technology, Soochow University, Suzhou, China {xzjiang, bzhang17}@stu.suda.edu.cn, {zhli13,minzhang}@suda.edu.cn 3. Alibaba Inc., Hangzhou, China {lisheng.ls,luo.si}@alibaba-inc.com
Treebank conversion is a straightforward and effective way to exploit various heterogeneous treebanks for boosting parsing accuracy. However, previous work mainly focuses on unsupervised treebank conversion and makes little progress due to the lack of manually labeled data where each sentence has two syntactic trees complying with two different guidelines at the same time, referred to as bi-tree aligned data.",Supervised Treebank Conversion: Data and Approaches,"Xinzhou Jiang 2 * , Bo Zhang 2 , Zhenghua Li 1,2 , Min Zhang 1,2 , Sheng Li 3 , Luo Si 3 1. Institute of Artificial Intelligence, Soochow University, Suzhou, China 2. School of Computer Science and Technology, Soochow University, Suzhou, China xzjiang, bzhang17@stu.suda.edu.cn, zhli13,minzhang@suda.edu.cn 3. Alibaba Inc., Hangzhou, China lisheng.ls,luo.si@alibaba-inc.com
Treebank conversion is a straightforward and effective way to exploit various heterogeneous treebanks for boosting parsing accuracy. However, previous work mainly focuses on unsupervised treebank conversion and makes little progress due to the lack of manually labeled data where each sentence has two syntactic trees complying with two different guidelines at the same time, referred to as bi-tree aligned data.","The authors would like to thank the anonymous reviewers for the helpful comments. We are greatly grateful to all participants in data annotation for their hard work. We also thank Guodong Zhou and Wenliang Chen for the helpful discussions, and Meishan Zhang for his help on the re-implementation of the Biaffine Parser. This work was supported by National Natural Science Foundation of China (Grant No. 61525205, 61502325, 61432013), and was also partially supported by the joint research project of Alibaba and Soochow University.","Supervised Treebank Conversion: Data and Approaches. Xinzhou Jiang 2 * , Bo Zhang 2 , Zhenghua Li 1,2 , Min Zhang 1,2 , Sheng Li 3 , Luo Si 3 1. Institute of Artificial Intelligence, Soochow University, Suzhou, China 2. School of Computer Science and Technology, Soochow University, Suzhou, China xzjiang, bzhang17@stu.suda.edu.cn, zhli13,minzhang@suda.edu.cn 3. Alibaba Inc., Hangzhou, China lisheng.ls,luo.si@alibaba-inc.com
Treebank conversion is a straightforward and effective way to exploit various heterogeneous treebanks for boosting parsing accuracy. However, previous work mainly focuses on unsupervised treebank conversion and makes little progress due to the lack of manually labeled data where each sentence has two syntactic trees complying with two different guidelines at the same time, referred to as bi-tree aligned data.",2018
sulem-etal-2015-conceptual,https://aclanthology.org/W15-3502,0,,,,,,,"Conceptual Annotations Preserve Structure Across Translations: A French-English Case Study. Divergence of syntactic structures between languages constitutes a major challenge in using linguistic structure in Machine Translation (MT) systems. Here, we examine the potential of semantic structures. While semantic annotation is appealing as a source of cross-linguistically stable structures, little has been accomplished in demonstrating this stability through a detailed corpus study. In this paper, we experiment with the UCCA conceptual-cognitive annotation scheme in an English-French case study. First, we show that UCCA can be used to annotate French, through a systematic type-level analysis of the major French grammatical phenomena. Second, we annotate a parallel English-French corpus with UCCA, and quantify the similarity of the structures on both sides. Results show a high degree of stability across translations, supporting the usage of semantic annotations over syntactic ones in structure-aware MT systems.",Conceptual Annotations Preserve Structure Across Translations: A {F}rench-{E}nglish Case Study,"Divergence of syntactic structures between languages constitutes a major challenge in using linguistic structure in Machine Translation (MT) systems. Here, we examine the potential of semantic structures. While semantic annotation is appealing as a source of cross-linguistically stable structures, little has been accomplished in demonstrating this stability through a detailed corpus study. In this paper, we experiment with the UCCA conceptual-cognitive annotation scheme in an English-French case study. First, we show that UCCA can be used to annotate French, through a systematic type-level analysis of the major French grammatical phenomena. Second, we annotate a parallel English-French corpus with UCCA, and quantify the similarity of the structures on both sides. Results show a high degree of stability across translations, supporting the usage of semantic annotations over syntactic ones in structure-aware MT systems.",Conceptual Annotations Preserve Structure Across Translations: A French-English Case Study,"Divergence of syntactic structures between languages constitutes a major challenge in using linguistic structure in Machine Translation (MT) systems. Here, we examine the potential of semantic structures. While semantic annotation is appealing as a source of cross-linguistically stable structures, little has been accomplished in demonstrating this stability through a detailed corpus study. In this paper, we experiment with the UCCA conceptual-cognitive annotation scheme in an English-French case study. First, we show that UCCA can be used to annotate French, through a systematic type-level analysis of the major French grammatical phenomena. Second, we annotate a parallel English-French corpus with UCCA, and quantify the similarity of the structures on both sides. Results show a high degree of stability across translations, supporting the usage of semantic annotations over syntactic ones in structure-aware MT systems.","We would like to thank Roy Schwartz for helpful comments. This research was supported by the Language, Logic and Cognition Center (LLCC) at the Hebrew University of Jerusalem (for the first author) and by the ERC Advanced Fellowship 249520 GRAMPLUS (for the second author).","Conceptual Annotations Preserve Structure Across Translations: A French-English Case Study. 
Divergence of syntactic structures between languages constitutes a major challenge in using linguistic structure in Machine Translation (MT) systems. Here, we examine the potential of semantic structures. While semantic annotation is appealing as a source of cross-linguistically stable structures, little has been accomplished in demonstrating this stability through a detailed corpus study. In this paper, we experiment with the UCCA conceptual-cognitive annotation scheme in an English-French case study. First, we show that UCCA can be used to annotate French, through a systematic type-level analysis of the major French grammatical phenomena. Second, we annotate a parallel English-French corpus with UCCA, and quantify the similarity of the structures on both sides. Results show a high degree of stability across translations, supporting the usage of semantic annotations over syntactic ones in structure-aware MT systems.",2015
kalouli-etal-2021-really,https://aclanthology.org/2021.iwcs-1.13,0,,,,,,,"Is that really a question? Going beyond factoid questions in NLP. Research in NLP has mainly focused on factoid questions, with the goal of finding quick and reliable ways of matching a query to an answer. However, human discourse involves more than that: it contains non-canonical questions deployed to achieve specific communicative goals. In this paper, we investigate this under-studied aspect of NLP by introducing a targeted task, creating an appropriate corpus for the task and providing baseline models of diverse nature. With this, we are also able to generate useful insights on the task and open the way for future research in this direction.",Is that really a question? Going beyond factoid questions in {NLP},"Research in NLP has mainly focused on factoid questions, with the goal of finding quick and reliable ways of matching a query to an answer. However, human discourse involves more than that: it contains non-canonical questions deployed to achieve specific communicative goals. In this paper, we investigate this under-studied aspect of NLP by introducing a targeted task, creating an appropriate corpus for the task and providing baseline models of diverse nature. With this, we are also able to generate useful insights on the task and open the way for future research in this direction.",Is that really a question? Going beyond factoid questions in NLP,"Research in NLP has mainly focused on factoid questions, with the goal of finding quick and reliable ways of matching a query to an answer. However, human discourse involves more than that: it contains non-canonical questions deployed to achieve specific communicative goals. In this paper, we investigate this under-studied aspect of NLP by introducing a targeted task, creating an appropriate corpus for the task and providing baseline models of diverse nature. With this, we are also able to generate useful insights on the task and open the way for future research in this direction.","We thank the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) for funding within project BU 1806/10-2 ""Questions Visualized"" of the FOR2111 ""Questions at the Interfaces"". We also thank our annotators, as well as the anonymous reviewers for their helpful comments.","Is that really a question? Going beyond factoid questions in NLP. Research in NLP has mainly focused on factoid questions, with the goal of finding quick and reliable ways of matching a query to an answer. However, human discourse involves more than that: it contains non-canonical questions deployed to achieve specific communicative goals. In this paper, we investigate this under-studied aspect of NLP by introducing a targeted task, creating an appropriate corpus for the task and providing baseline models of diverse nature. With this, we are also able to generate useful insights on the task and open the way for future research in this direction.",2021
malmasi-dras-2015-large,https://aclanthology.org/N15-1160,0,,,,,,,"Large-Scale Native Language Identification with Cross-Corpus Evaluation. We present a large-scale Native Language Identification (NLI) experiment on new data, with a focus on cross-corpus evaluation to identify corpus-and genre-independent language transfer features. We test a new corpus and show it is comparable to other NLI corpora and suitable for this task. Cross-corpus evaluation on two large corpora achieves good accuracy and evidences the existence of reliable language transfer features, but lower performance also suggests that NLI models are not completely portable across corpora. Finally, we present a brief case study of features distinguishing Japanese learners' English writing, demonstrating the presence of cross-corpus and cross-genre language transfer features that are highly applicable to SLA and ESL research.",Large-Scale Native Language Identification with Cross-Corpus Evaluation,"We present a large-scale Native Language Identification (NLI) experiment on new data, with a focus on cross-corpus evaluation to identify corpus-and genre-independent language transfer features. We test a new corpus and show it is comparable to other NLI corpora and suitable for this task. Cross-corpus evaluation on two large corpora achieves good accuracy and evidences the existence of reliable language transfer features, but lower performance also suggests that NLI models are not completely portable across corpora. Finally, we present a brief case study of features distinguishing Japanese learners' English writing, demonstrating the presence of cross-corpus and cross-genre language transfer features that are highly applicable to SLA and ESL research.",Large-Scale Native Language Identification with Cross-Corpus Evaluation,"We present a large-scale Native Language Identification (NLI) experiment on new data, with a focus on cross-corpus evaluation to identify corpus-and genre-independent language transfer features. We test a new corpus and show it is comparable to other NLI corpora and suitable for this task. Cross-corpus evaluation on two large corpora achieves good accuracy and evidences the existence of reliable language transfer features, but lower performance also suggests that NLI models are not completely portable across corpora. Finally, we present a brief case study of features distinguishing Japanese learners' English writing, demonstrating the presence of cross-corpus and cross-genre language transfer features that are highly applicable to SLA and ESL research.",,"Large-Scale Native Language Identification with Cross-Corpus Evaluation. We present a large-scale Native Language Identification (NLI) experiment on new data, with a focus on cross-corpus evaluation to identify corpus-and genre-independent language transfer features. We test a new corpus and show it is comparable to other NLI corpora and suitable for this task. Cross-corpus evaluation on two large corpora achieves good accuracy and evidences the existence of reliable language transfer features, but lower performance also suggests that NLI models are not completely portable across corpora. Finally, we present a brief case study of features distinguishing Japanese learners' English writing, demonstrating the presence of cross-corpus and cross-genre language transfer features that are highly applicable to SLA and ESL research.",2015
sultan-etal-2014-dls,https://aclanthology.org/S14-2039,0,,,,,,,"DLS@CU: Sentence Similarity from Word Alignment. We present an algorithm for computing the semantic similarity between two sentences. It adopts the hypothesis that semantic similarity is a monotonically increasing function of the degree to which (1) the two sentences contain similar semantic units, and (2) such units occur in similar semantic contexts. With a simplistic operationalization of the notion of semantic units with individual words, we experimentally show that this hypothesis can lead to state-of-the-art results for sentencelevel semantic similarity. At the Sem-Eval 2014 STS task (task 10), our system demonstrated the best performance (measured by correlation with human annotations) among 38 system runs.",{DLS}@{CU}: Sentence Similarity from Word Alignment,"We present an algorithm for computing the semantic similarity between two sentences. It adopts the hypothesis that semantic similarity is a monotonically increasing function of the degree to which (1) the two sentences contain similar semantic units, and (2) such units occur in similar semantic contexts. With a simplistic operationalization of the notion of semantic units with individual words, we experimentally show that this hypothesis can lead to state-of-the-art results for sentencelevel semantic similarity. At the Sem-Eval 2014 STS task (task 10), our system demonstrated the best performance (measured by correlation with human annotations) among 38 system runs.",DLS@CU: Sentence Similarity from Word Alignment,"We present an algorithm for computing the semantic similarity between two sentences. It adopts the hypothesis that semantic similarity is a monotonically increasing function of the degree to which (1) the two sentences contain similar semantic units, and (2) such units occur in similar semantic contexts. With a simplistic operationalization of the notion of semantic units with individual words, we experimentally show that this hypothesis can lead to state-of-the-art results for sentencelevel semantic similarity. At the Sem-Eval 2014 STS task (task 10), our system demonstrated the best performance (measured by correlation with human annotations) among 38 system runs.",This material is based in part upon work supported by the National Science Foundation under Grant,"DLS@CU: Sentence Similarity from Word Alignment. We present an algorithm for computing the semantic similarity between two sentences. It adopts the hypothesis that semantic similarity is a monotonically increasing function of the degree to which (1) the two sentences contain similar semantic units, and (2) such units occur in similar semantic contexts. With a simplistic operationalization of the notion of semantic units with individual words, we experimentally show that this hypothesis can lead to state-of-the-art results for sentencelevel semantic similarity. At the Sem-Eval 2014 STS task (task 10), our system demonstrated the best performance (measured by correlation with human annotations) among 38 system runs.",2014
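The record above frames sentence similarity as a monotonically increasing function of how many semantic units the two sentences share. As a very rough illustration only (the actual system uses a trained monolingual word aligner and contextual evidence, not shown here), the sketch below scores two sentences by the proportion of words a toy exact-match aligner can pair up; the aligner and scoring formula are assumptions for this example.

```python
# Illustrative sketch (not the DLS@CU system): alignment-proportion similarity
# with a toy exact-match word aligner standing in for a real aligner.
def align(words1, words2):
    """Greedy one-to-one alignment by exact (lowercased) match."""
    used = set()
    pairs = []
    for i, w in enumerate(words1):
        for j, v in enumerate(words2):
            if j not in used and w.lower() == v.lower():
                pairs.append((i, j))
                used.add(j)
                break
    return pairs

def sentence_similarity(s1: str, s2: str) -> float:
    w1, w2 = s1.split(), s2.split()
    if not w1 or not w2:
        return 0.0
    pairs = align(w1, w2)
    # Harmonic mean of the aligned proportion in each sentence.
    p1 = len(pairs) / len(w1)
    p2 = len(pairs) / len(w2)
    return 0.0 if p1 + p2 == 0 else 2 * p1 * p2 / (p1 + p2)

# Toy usage example.
print(round(sentence_similarity("a cat sat on the mat", "the cat sat on a rug"), 3))  # 0.833
```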
surcin-etal-2005-evaluation,https://aclanthology.org/2005.mtsummit-papers.16,0,,,,,,,"Evaluation of Machine Translation with Predictive Metrics beyond BLEU/NIST: CESTA Evaluation Campaign \# 1. In this paper, we report on the results of a full-size evaluation campaign of various MT systems. This campaign is novel compared to the classical DARPA/NIST MT evaluation campaigns in the sense that French is the target language, and that it includes an experiment of meta-evaluation of various metrics claiming to better predict different attributes of translation quality. We first describe the campaign, its context, its protocol and the data we used. Then we summarise the results obtained by the participating systems and discuss the meta-evaluation of the metrics used.",Evaluation of Machine Translation with Predictive Metrics beyond {BLEU}/{NIST}: {CESTA} Evaluation Campaign {\#} 1,"In this paper, we report on the results of a full-size evaluation campaign of various MT systems. This campaign is novel compared to the classical DARPA/NIST MT evaluation campaigns in the sense that French is the target language, and that it includes an experiment of meta-evaluation of various metrics claiming to better predict different attributes of translation quality. We first describe the campaign, its context, its protocol and the data we used. Then we summarise the results obtained by the participating systems and discuss the meta-evaluation of the metrics used.",Evaluation of Machine Translation with Predictive Metrics beyond BLEU/NIST: CESTA Evaluation Campaign \# 1,"In this paper, we report on the results of a full-size evaluation campaign of various MT systems. This campaign is novel compared to the classical DARPA/NIST MT evaluation campaigns in the sense that French is the target language, and that it includes an experiment of meta-evaluation of various metrics claiming to better predict different attributes of translation quality. We first describe the campaign, its context, its protocol and the data we used. Then we summarise the results obtained by the participating systems and discuss the meta-evaluation of the metrics used.",,"Evaluation of Machine Translation with Predictive Metrics beyond BLEU/NIST: CESTA Evaluation Campaign \# 1. In this paper, we report on the results of a full-size evaluation campaign of various MT systems. This campaign is novel compared to the classical DARPA/NIST MT evaluation campaigns in the sense that French is the target language, and that it includes an experiment of meta-evaluation of various metrics claiming to better predict different attributes of translation quality. We first describe the campaign, its context, its protocol and the data we used. Then we summarise the results obtained by the participating systems and discuss the meta-evaluation of the metrics used.",2005
choi-2016-sketch,https://aclanthology.org/W16-6607,0,,,,,,,"Sketch-to-Text Generation: Toward Contextual, Creative, and Coherent Composition. The need for natural language generation (NLG) arises in diverse, multimodal contexts: ranging from describing stories captured in a photograph, to instructing how to prepare a dish using a given set of ingredients, and to composing a sonnet for a given topic phrase. One common challenge among these types of NLG tasks is that the generation model often needs to work with relatively loose semantic correspondence between the input prompt and the desired output text. For example, an image caption that appeals to readers may require pragmatic interpretation of the scene beyond the literal content of the image. Similarly, composing a new recipe requires working out detailed how-to instructions that are not directly specified by the given set of ingredient names. In this talk, I will discuss our recent approaches to generating contextual, creative, and coherent text given a relatively lean and noisy input prompt with respect to three NLG tasks: (1) creative image captioning, (2) recipe composition, and (3) sonnet composition. A recurring theme is that our models learn most of the end-to-end mappings between the input and the output directly from data without requiring manual annotations for intermediate meaning representations. I will conclude the talk by discussing the strengths and the limitations of these types of data-driven approaches and point to avenues for future research.","Sketch-to-Text Generation: Toward Contextual, Creative, and Coherent Composition","The need for natural language generation (NLG) arises in diverse, multimodal contexts: ranging from describing stories captured in a photograph, to instructing how to prepare a dish using a given set of ingredients, and to composing a sonnet for a given topic phrase. One common challenge among these types of NLG tasks is that the generation model often needs to work with relatively loose semantic correspondence between the input prompt and the desired output text. For example, an image caption that appeals to readers may require pragmatic interpretation of the scene beyond the literal content of the image. Similarly, composing a new recipe requires working out detailed how-to instructions that are not directly specified by the given set of ingredient names. In this talk, I will discuss our recent approaches to generating contextual, creative, and coherent text given a relatively lean and noisy input prompt with respect to three NLG tasks: (1) creative image captioning, (2) recipe composition, and (3) sonnet composition. A recurring theme is that our models learn most of the end-to-end mappings between the input and the output directly from data without requiring manual annotations for intermediate meaning representations. I will conclude the talk by discussing the strengths and the limitations of these types of data-driven approaches and point to avenues for future research.","Sketch-to-Text Generation: Toward Contextual, Creative, and Coherent Composition","The need for natural language generation (NLG) arises in diverse, multimodal contexts: ranging from describing stories captured in a photograph, to instructing how to prepare a dish using a given set of ingredients, and to composing a sonnet for a given topic phrase. 
One common challenge among these types of NLG tasks is that the generation model often needs to work with relatively loose semantic correspondence between the input prompt and the desired output text. For example, an image caption that appeals to readers may require pragmatic interpretation of the scene beyond the literal content of the image. Similarly, composing a new recipe requires working out detailed how-to instructions that are not directly specified by the given set of ingredient names. In this talk, I will discuss our recent approaches to generating contextual, creative, and coherent text given a relatively lean and noisy input prompt with respect to three NLG tasks: (1) creative image captioning, (2) recipe composition, and (3) sonnet composition. A recurring theme is that our models learn most of the end-to-end mappings between the input and the output directly from data without requiring manual annotations for intermediate meaning representations. I will conclude the talk by discussing the strengths and the limitations of these types of data-driven approaches and point to avenues for future research.",,"Sketch-to-Text Generation: Toward Contextual, Creative, and Coherent Composition. The need for natural language generation (NLG) arises in diverse, multimodal contexts: ranging from describing stories captured in a photograph, to instructing how to prepare a dish using a given set of ingredients, and to composing a sonnet for a given topic phrase. One common challenge among these types of NLG tasks is that the generation model often needs to work with relatively loose semantic correspondence between the input prompt and the desired output text. For example, an image caption that appeals to readers may require pragmatic interpretation of the scene beyond the literal content of the image. Similarly, composing a new recipe requires working out detailed how-to instructions that are not directly specified by the given set of ingredient names. In this talk, I will discuss our recent approaches to generating contextual, creative, and coherent text given a relatively lean and noisy input prompt with respect to three NLG tasks: (1) creative image captioning, (2) recipe composition, and (3) sonnet composition. A recurring theme is that our models learn most of the end-to-end mappings between the input and the output directly from data without requiring manual annotations for intermediate meaning representations. I will conclude the talk by discussing the strengths and the limitations of these types of data-driven approaches and point to avenues for future research.",2016
gardent-2011-generation,https://aclanthology.org/2011.jeptalnrecital-invite.3,0,,,,,,,"Génération de phrase : entrée, algorithmes et applications (Sentence Generation: Input, Algorithms and Applications). Sentence Generation maps abstract linguistic representations into sentences. A necessary part of any natural language generation system, sentence generation has also recently received increasing attention in applications such as transfer based machine translation (cf. the LOGON project) and natural language interfaces to knowledge bases (e.g., to verbalise, to author and/or to query ontologies).","G{\'e}n{\'e}ration de phrase : entr{\'e}e, algorithmes et applications (Sentence Generation: Input, Algorithms and Applications)","Sentence Generation maps abstract linguistic representations into sentences. A necessary part of any natural language generation system, sentence generation has also recently received increasing attention in applications such as transfer based machine translation (cf. the LOGON project) and natural language interfaces to knowledge bases (e.g., to verbalise, to author and/or to query ontologies).","Génération de phrase : entrée, algorithmes et applications (Sentence Generation: Input, Algorithms and Applications)","Sentence Generation maps abstract linguistic representations into sentences. A necessary part of any natural language generation system, sentence generation has also recently received increasing attention in applications such as transfer based machine translation (cf. the LOGON project) and natural language interfaces to knowledge bases (e.g., to verbalise, to author and/or to query ontologies).",,"Génération de phrase : entrée, algorithmes et applications (Sentence Generation: Input, Algorithms and Applications). Sentence Generation maps abstract linguistic representations into sentences. A necessary part of any natural language generation system, sentence generation has also recently received increasing attention in applications such as transfer based machine translation (cf. the LOGON project) and natural language interfaces to knowledge bases (e.g., to verbalise, to author and/or to query ontologies).",2011
zhu-etal-2020-language,https://aclanthology.org/2020.acl-main.150,0,,,,,,,"Language-aware Interlingua for Multilingual Neural Machine Translation. Multilingual neural machine translation (NMT) has led to impressive accuracy improvements in low-resource scenarios by sharing common linguistic information across languages. However, the traditional multilingual model fails to capture the diversity and specificity of different languages, resulting in inferior performance compared with individual models that are sufficiently trained. In this paper, we incorporate a language-aware interlingua into the Encoder-Decoder architecture. The interlingual network enables the model to learn a language-independent representation from the semantic spaces of different languages, while still allowing for language-specific specialization of a particular language-pair. Experiments show that our proposed method achieves remarkable improvements over state-of-the-art multilingual NMT baselines and produces comparable performance with strong individual models.",Language-aware Interlingua for Multilingual Neural Machine Translation,"Multilingual neural machine translation (NMT) has led to impressive accuracy improvements in low-resource scenarios by sharing common linguistic information across languages. However, the traditional multilingual model fails to capture the diversity and specificity of different languages, resulting in inferior performance compared with individual models that are sufficiently trained. In this paper, we incorporate a language-aware interlingua into the Encoder-Decoder architecture. The interlingual network enables the model to learn a language-independent representation from the semantic spaces of different languages, while still allowing for language-specific specialization of a particular language-pair. Experiments show that our proposed method achieves remarkable improvements over state-of-the-art multilingual NMT baselines and produces comparable performance with strong individual models.",Language-aware Interlingua for Multilingual Neural Machine Translation,"Multilingual neural machine translation (NMT) has led to impressive accuracy improvements in low-resource scenarios by sharing common linguistic information across languages. However, the traditional multilingual model fails to capture the diversity and specificity of different languages, resulting in inferior performance compared with individual models that are sufficiently trained. In this paper, we incorporate a language-aware interlingua into the Encoder-Decoder architecture. The interlingual network enables the model to learn a language-independent representation from the semantic spaces of different languages, while still allowing for language-specific specialization of a particular language-pair. Experiments show that our proposed method achieves remarkable improvements over state-of-the-art multilingual NMT baselines and produces comparable performance with strong individual models.",,"Language-aware Interlingua for Multilingual Neural Machine Translation. Multilingual neural machine translation (NMT) has led to impressive accuracy improvements in low-resource scenarios by sharing common linguistic information across languages. However, the traditional multilingual model fails to capture the diversity and specificity of different languages, resulting in inferior performance compared with individual models that are sufficiently trained. 
In this paper, we incorporate a language-aware interlingua into the Encoder-Decoder architecture. The interlingual network enables the model to learn a language-independent representation from the semantic spaces of different languages, while still allowing for language-specific specialization of a particular language-pair. Experiments show that our proposed method achieves remarkable improvements over state-of-the-art multilingual NMT baselines and produces comparable performance with strong individual models.",2020
sakaguchi-etal-2016-reassessing,https://aclanthology.org/Q16-1013,0,,,,,,,"Reassessing the Goals of Grammatical Error Correction: Fluency Instead of Grammaticality. The field of grammatical error correction (GEC) has grown substantially in recent years, with research directed at both evaluation metrics and improved system performance against those metrics. One unvisited assumption, however, is the reliance of GEC evaluation on error-coded corpora, which contain specific labeled corrections. We examine current practices and show that GEC's reliance on such corpora unnaturally constrains annotation and automatic evaluation, resulting in (a) sentences that do not sound acceptable to native speakers and (b) system rankings that do not correlate with human judgments. In light of this, we propose an alternate approach that jettisons costly error coding in favor of unannotated, whole-sentence rewrites. We compare the performance of existing metrics over different gold-standard annotations, and show that automatic evaluation with our new annotation scheme has very strong correlation with expert rankings (ρ = 0.82). As a result, we advocate for a fundamental and necessary shift in the goal of GEC, from correcting small, labeled error types, to producing text that has native fluency.",Reassessing the Goals of Grammatical Error Correction: Fluency Instead of Grammaticality,"The field of grammatical error correction (GEC) has grown substantially in recent years, with research directed at both evaluation metrics and improved system performance against those metrics. One unvisited assumption, however, is the reliance of GEC evaluation on error-coded corpora, which contain specific labeled corrections. We examine current practices and show that GEC's reliance on such corpora unnaturally constrains annotation and automatic evaluation, resulting in (a) sentences that do not sound acceptable to native speakers and (b) system rankings that do not correlate with human judgments. In light of this, we propose an alternate approach that jettisons costly error coding in favor of unannotated, whole-sentence rewrites. We compare the performance of existing metrics over different gold-standard annotations, and show that automatic evaluation with our new annotation scheme has very strong correlation with expert rankings (ρ = 0.82). As a result, we advocate for a fundamental and necessary shift in the goal of GEC, from correcting small, labeled error types, to producing text that has native fluency.",Reassessing the Goals of Grammatical Error Correction: Fluency Instead of Grammaticality,"The field of grammatical error correction (GEC) has grown substantially in recent years, with research directed at both evaluation metrics and improved system performance against those metrics. One unvisited assumption, however, is the reliance of GEC evaluation on error-coded corpora, which contain specific labeled corrections. We examine current practices and show that GEC's reliance on such corpora unnaturally constrains annotation and automatic evaluation, resulting in (a) sentences that do not sound acceptable to native speakers and (b) system rankings that do not correlate with human judgments. In light of this, we propose an alternate approach that jettisons costly error coding in favor of unannotated, whole-sentence rewrites. 
We compare the performance of existing metrics over different gold-standard annotations, and show that automatic evaluation with our new annotation scheme has very strong correlation with expert rankings (ρ = 0.82). As a result, we advocate for a fundamental and necessary shift in the goal of GEC, from correcting small, labeled error types, to producing text that has native fluency.","We would like to thank Christopher Bryant, Mariano Felice, Roman Grundkiewicz and Marcin Junczys-Dowmunt for providing data and code. We would also like to thank the TACL editor, Chris Quirk, and the three anonymous reviewers for their comments and feedback. This material is based upon work partially supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1232825.","Reassessing the Goals of Grammatical Error Correction: Fluency Instead of Grammaticality. The field of grammatical error correction (GEC) has grown substantially in recent years, with research directed at both evaluation metrics and improved system performance against those metrics. One unvisited assumption, however, is the reliance of GEC evaluation on error-coded corpora, which contain specific labeled corrections. We examine current practices and show that GEC's reliance on such corpora unnaturally constrains annotation and automatic evaluation, resulting in (a) sentences that do not sound acceptable to native speakers and (b) system rankings that do not correlate with human judgments. In light of this, we propose an alternate approach that jettisons costly error coding in favor of unannotated, whole-sentence rewrites. We compare the performance of existing metrics over different gold-standard annotations, and show that automatic evaluation with our new annotation scheme has very strong correlation with expert rankings (ρ = 0.82). As a result, we advocate for a fundamental and necessary shift in the goal of GEC, from correcting small, labeled error types, to producing text that has native fluency.",2016
kirchner-2020-insights,https://aclanthology.org/2020.eamt-1.38,0,,,,general_purpose_productivity,,,"Insights from Gathering MT Productivity Metrics at Scale. In this paper, we describe Dell EMC's framework to automatically collect MTrelated productivity metrics from a large translation supply chain over an extended period of time, the characteristics and volume of the gathered data, and the insights from analyzing the data to guide our MT strategy. Aligning tools, processes and people required decisions, concessions and contributions from Dell management, technology providers, tool implementors, LSPs and linguists to harvest data at scale over 2+ years while Dell EMC migrated from customized SMT to generic NMT and then customized NMT systems.",Insights from Gathering {MT} Productivity Metrics at Scale,"In this paper, we describe Dell EMC's framework to automatically collect MTrelated productivity metrics from a large translation supply chain over an extended period of time, the characteristics and volume of the gathered data, and the insights from analyzing the data to guide our MT strategy. Aligning tools, processes and people required decisions, concessions and contributions from Dell management, technology providers, tool implementors, LSPs and linguists to harvest data at scale over 2+ years while Dell EMC migrated from customized SMT to generic NMT and then customized NMT systems.",Insights from Gathering MT Productivity Metrics at Scale,"In this paper, we describe Dell EMC's framework to automatically collect MTrelated productivity metrics from a large translation supply chain over an extended period of time, the characteristics and volume of the gathered data, and the insights from analyzing the data to guide our MT strategy. Aligning tools, processes and people required decisions, concessions and contributions from Dell management, technology providers, tool implementors, LSPs and linguists to harvest data at scale over 2+ years while Dell EMC migrated from customized SMT to generic NMT and then customized NMT systems.","The following individuals and organizations were instrumental in creating an environment to harvest MT metrics automatically for Dell EMC: Nancy Anderson, head of the EMC translation team at the time supported the proposal to take translations ""online"". She negotiated with our LSPs the necessary process and tools concessions. Keith Brazil and his team at Translations.com optimized GlobalLink as a collaborative platform for a multi-vendor supply chain. Jaap van der Meer proposed an integration with the TAUS DQF Dashboard. TAUS and","Insights from Gathering MT Productivity Metrics at Scale. In this paper, we describe Dell EMC's framework to automatically collect MTrelated productivity metrics from a large translation supply chain over an extended period of time, the characteristics and volume of the gathered data, and the insights from analyzing the data to guide our MT strategy. Aligning tools, processes and people required decisions, concessions and contributions from Dell management, technology providers, tool implementors, LSPs and linguists to harvest data at scale over 2+ years while Dell EMC migrated from customized SMT to generic NMT and then customized NMT systems.",2020
yoo-2001-floating,https://aclanthology.org/Y01-1020,0,,,,,,,"Floating Quantifiers and Lexical Specification of Quantifier Retrieval. Floating quantifiers (FQs) in English exhibit both universal and language-specific properties, and this paper shows that such syntactic and semantic characteristics can be explained in terms of a constraint-based, lexical approach to the construction within the framework of Head-Driven Phrase Structure Grammar (HPSG). Based on the assumption that FQs are base-generated VP modifiers, this paper proposes an account in which the semantic contribution of FQs consists of a ""lexically retrieved"" universal quantifier taking scope over the VP meaning.",Floating Quantifiers and Lexical Specification of Quantifier Retrieval,"Floating quantifiers (FQs) in English exhibit both universal and language-specific properties, and this paper shows that such syntactic and semantic characteristics can be explained in terms of a constraint-based, lexical approach to the construction within the framework of Head-Driven Phrase Structure Grammar (HPSG). Based on the assumption that FQs are base-generated VP modifiers, this paper proposes an account in which the semantic contribution of FQs consists of a ""lexically retrieved"" universal quantifier taking scope over the VP meaning.",Floating Quantifiers and Lexical Specification of Quantifier Retrieval,"Floating quantifiers (FQs) in English exhibit both universal and language-specific properties, and this paper shows that such syntactic and semantic characteristics can be explained in terms of a constraint-based, lexical approach to the construction within the framework of Head-Driven Phrase Structure Grammar (HPSG). Based on the assumption that FQs are base-generated VP modifiers, this paper proposes an account in which the semantic contribution of FQs consists of a ""lexically retrieved"" universal quantifier taking scope over the VP meaning.",,"Floating Quantifiers and Lexical Specification of Quantifier Retrieval. Floating quantifiers (FQs) in English exhibit both universal and language-specific properties, and this paper shows that such syntactic and semantic characteristics can be explained in terms of a constraint-based, lexical approach to the construction within the framework of Head-Driven Phrase Structure Grammar (HPSG). Based on the assumption that FQs are base-generated VP modifiers, this paper proposes an account in which the semantic contribution of FQs consists of a ""lexically retrieved"" universal quantifier taking scope over the VP meaning.",2001
rogers-1996-model,https://aclanthology.org/P96-1002,0,,,,,,,"A Model-Theoretic Framework for Theories of Syntax. A natural next step in the evolution of constraint-based grammar formalisms from rewriting formalisms is to abstract fully away from the details of the grammar mechanism-to express syntactic theories purely in terms of the properties of the class of structures they license. By focusing on the structural properties of languages rather than on mechanisms for generating or checking structures that exhibit those properties, this model-theoretic approach can offer simpler and significantly clearer expression of theories and can potentially provide a uniform formalization, allowing disparate theories to be compared on the basis of those properties. We discuss L2,p, a monadic second-order logical framework for such an approach to syntax that has the distinctive virtue of being superficially expressive-supporting direct statement of most linguistically significant syntactic properties-but having well-defined strong generative capacity-languages are definable in L2K,p iff they are strongly context-free. We draw examples from the realms of GPSG and GB.",A Model-Theoretic Framework for Theories of Syntax,"A natural next step in the evolution of constraint-based grammar formalisms from rewriting formalisms is to abstract fully away from the details of the grammar mechanism-to express syntactic theories purely in terms of the properties of the class of structures they license. By focusing on the structural properties of languages rather than on mechanisms for generating or checking structures that exhibit those properties, this model-theoretic approach can offer simpler and significantly clearer expression of theories and can potentially provide a uniform formalization, allowing disparate theories to be compared on the basis of those properties. We discuss L2,p, a monadic second-order logical framework for such an approach to syntax that has the distinctive virtue of being superficially expressive-supporting direct statement of most linguistically significant syntactic properties-but having well-defined strong generative capacity-languages are definable in L2K,p iff they are strongly context-free. We draw examples from the realms of GPSG and GB.",A Model-Theoretic Framework for Theories of Syntax,"A natural next step in the evolution of constraint-based grammar formalisms from rewriting formalisms is to abstract fully away from the details of the grammar mechanism-to express syntactic theories purely in terms of the properties of the class of structures they license. By focusing on the structural properties of languages rather than on mechanisms for generating or checking structures that exhibit those properties, this model-theoretic approach can offer simpler and significantly clearer expression of theories and can potentially provide a uniform formalization, allowing disparate theories to be compared on the basis of those properties. We discuss L2,p, a monadic second-order logical framework for such an approach to syntax that has the distinctive virtue of being superficially expressive-supporting direct statement of most linguistically significant syntactic properties-but having well-defined strong generative capacity-languages are definable in L2K,p iff they are strongly context-free. We draw examples from the realms of GPSG and GB.",,"A Model-Theoretic Framework for Theories of Syntax. 
A natural next step in the evolution of constraint-based grammar formalisms from rewriting formalisms is to abstract fully away from the details of the grammar mechanism - to express syntactic theories purely in terms of the properties of the class of structures they license. By focusing on the structural properties of languages rather than on mechanisms for generating or checking structures that exhibit those properties, this model-theoretic approach can offer simpler and significantly clearer expression of theories and can potentially provide a uniform formalization, allowing disparate theories to be compared on the basis of those properties. We discuss L2K,P, a monadic second-order logical framework for such an approach to syntax that has the distinctive virtue of being superficially expressive - supporting direct statement of most linguistically significant syntactic properties - but having well-defined strong generative capacity: languages are definable in L2K,P iff they are strongly context-free. We draw examples from the realms of GPSG and GB.",1996
wolfe-etal-2015-predicate,https://aclanthology.org/N15-1002,0,,,,,,,"Predicate Argument Alignment using a Global Coherence Model. We present a joint model for predicate argument alignment. We leverage multiple sources of semantic information, including temporal ordering constraints between events. These are combined in a max-margin framework to find a globally consistent view of entities and events across multiple documents, which leads to improvements over a very strong local baseline.",Predicate Argument Alignment using a Global Coherence Model,"We present a joint model for predicate argument alignment. We leverage multiple sources of semantic information, including temporal ordering constraints between events. These are combined in a max-margin framework to find a globally consistent view of entities and events across multiple documents, which leads to improvements over a very strong local baseline.",Predicate Argument Alignment using a Global Coherence Model,"We present a joint model for predicate argument alignment. We leverage multiple sources of semantic information, including temporal ordering constraints between events. These are combined in a max-margin framework to find a globally consistent view of entities and events across multiple documents, which leads to improvements over a very strong local baseline.",,"Predicate Argument Alignment using a Global Coherence Model. We present a joint model for predicate argument alignment. We leverage multiple sources of semantic information, including temporal ordering constraints between events. These are combined in a max-margin framework to find a globally consistent view of entities and events across multiple documents, which leads to improvements over a very strong local baseline.",2015
hashimoto-etal-2019-high,https://aclanthology.org/W19-5212,0,,,,,,,"A High-Quality Multilingual Dataset for Structured Documentation Translation. This paper presents a high-quality multilingual dataset for the documentation domain to advance research on localization of structured text. Unlike widely-used datasets for translation of plain text, we collect XML-structured parallel text segments from the online documentation for an enterprise software platform. These Web pages have been professionally translated from English into 16 languages and maintained by domain experts, and around 100,000 text segments are available for each language pair. We build and evaluate translation models for seven target languages from English, with several different copy mechanisms and an XML-constrained beam search. We also experiment with a non-English pair to show that our dataset has the potential to explicitly enable 17 × 16 translation settings. Our experiments show that learning to translate with the XML tags improves translation accuracy, and the beam search accurately generates XML structures. We also discuss tradeoffs of using the copy mechanisms by focusing on translation of numerical words and named entities. We further provide a detailed human analysis of gaps between the model output and human translations for real-world applications, including suitability for post-editing. * Now at Google Brain. 1 https://www.ldc.upenn.edu/-Example (a) English: You can use this report on your Community Management Home dashboard or in Community Workspaces under DashboardsHome . Japanese: このレポートは、 [コミュニティ管理 ] のホームのダッシュボード、または コミュニティワークスペース の [ダッシュボード ] [ホーム] で使用できます。-Example (b) English: Results with bothbeach and house in the searchable fields of the record. Japanese: レコードの検索可能な項目に beach と house の 両方が含まれている結果。-Example (c) English: You can only predefine this field to an email address. You can predefine it using either T (used to define email addresses) or To Recipients (used to define contact, lead, and user IDs).",A High-Quality Multilingual Dataset for Structured Documentation Translation,"This paper presents a high-quality multilingual dataset for the documentation domain to advance research on localization of structured text. Unlike widely-used datasets for translation of plain text, we collect XML-structured parallel text segments from the online documentation for an enterprise software platform. These Web pages have been professionally translated from English into 16 languages and maintained by domain experts, and around 100,000 text segments are available for each language pair. We build and evaluate translation models for seven target languages from English, with several different copy mechanisms and an XML-constrained beam search. We also experiment with a non-English pair to show that our dataset has the potential to explicitly enable 17 × 16 translation settings. Our experiments show that learning to translate with the XML tags improves translation accuracy, and the beam search accurately generates XML structures. We also discuss tradeoffs of using the copy mechanisms by focusing on translation of numerical words and named entities. We further provide a detailed human analysis of gaps between the model output and human translations for real-world applications, including suitability for post-editing. * Now at Google Brain. 1 https://www.ldc.upenn.edu/-Example (a) English: You can use this report on your Community Management Home dashboard or in Community Workspaces under DashboardsHome . 
Japanese: このレポートは、 [コミュニティ管理 ] のホームのダッシュボード、または コミュニティワークスペース の [ダッシュボード ] [ホーム] で使用できます。-Example (b) English: Results with bothbeach and house in the searchable fields of the record. Japanese: レコードの検索可能な項目に beach と house の 両方が含まれている結果。-Example (c) English: You can only predefine this field to an email address. You can predefine it using either T (used to define email addresses) or To Recipients (used to define contact, lead, and user IDs).",A High-Quality Multilingual Dataset for Structured Documentation Translation,"This paper presents a high-quality multilingual dataset for the documentation domain to advance research on localization of structured text. Unlike widely-used datasets for translation of plain text, we collect XML-structured parallel text segments from the online documentation for an enterprise software platform. These Web pages have been professionally translated from English into 16 languages and maintained by domain experts, and around 100,000 text segments are available for each language pair. We build and evaluate translation models for seven target languages from English, with several different copy mechanisms and an XML-constrained beam search. We also experiment with a non-English pair to show that our dataset has the potential to explicitly enable 17 × 16 translation settings. Our experiments show that learning to translate with the XML tags improves translation accuracy, and the beam search accurately generates XML structures. We also discuss tradeoffs of using the copy mechanisms by focusing on translation of numerical words and named entities. We further provide a detailed human analysis of gaps between the model output and human translations for real-world applications, including suitability for post-editing. * Now at Google Brain. 1 https://www.ldc.upenn.edu/-Example (a) English: You can use this report on your Community Management Home dashboard or in Community Workspaces under DashboardsHome . Japanese: このレポートは、 [コミュニティ管理 ] のホームのダッシュボード、または コミュニティワークスペース の [ダッシュボード ] [ホーム] で使用できます。-Example (b) English: Results with bothbeach and house in the searchable fields of the record. Japanese: レコードの検索可能な項目に beach と house の 両方が含まれている結果。-Example (c) English: You can only predefine this field to an email address. You can predefine it using either T (used to define email addresses) or To Recipients (used to define contact, lead, and user IDs).",We thank anonymous reviewers and Xi Victoria Lin for their helpful feedbacks.,"A High-Quality Multilingual Dataset for Structured Documentation Translation. This paper presents a high-quality multilingual dataset for the documentation domain to advance research on localization of structured text. Unlike widely-used datasets for translation of plain text, we collect XML-structured parallel text segments from the online documentation for an enterprise software platform. These Web pages have been professionally translated from English into 16 languages and maintained by domain experts, and around 100,000 text segments are available for each language pair. We build and evaluate translation models for seven target languages from English, with several different copy mechanisms and an XML-constrained beam search. We also experiment with a non-English pair to show that our dataset has the potential to explicitly enable 17 × 16 translation settings. Our experiments show that learning to translate with the XML tags improves translation accuracy, and the beam search accurately generates XML structures. 
We also discuss tradeoffs of using the copy mechanisms by focusing on translation of numerical words and named entities. We further provide a detailed human analysis of gaps between the model output and human translations for real-world applications, including suitability for post-editing. * Now at Google Brain. 1 https://www.ldc.upenn.edu/-Example (a) English: You can use this report on your Community Management Home dashboard or in Community Workspaces under DashboardsHome . Japanese: このレポートは、 [コミュニティ管理 ] のホームのダッシュボード、または コミュニティワークスペース の [ダッシュボード ] [ホーム] で使用できます。-Example (b) English: Results with bothbeach and house in the searchable fields of the record. Japanese: レコードの検索可能な項目に beach と house の 両方が含まれている結果。-Example (c) English: You can only predefine this field to an email address. You can predefine it using either T (used to define email addresses) or To Recipients (used to define contact, lead, and user IDs).",2019
ahrenberg-2019-towards,https://aclanthology.org/W19-8011,0,,,,,,,"Towards an adequate account of parataxis in Universal Dependencies. The parataxis relation as defined for Universal Dependencies 2.0 is general and, for this reason, sometimes hard to distinguish from competing analyses, such as coordination, conj, or apposition, appos. The specific subtypes that are listed for parataxis are also quite different in character. In this study we first show that the actual practice by UD-annotators is varied, using the parallel UD (PUD-) treebanks as data. We then review the current definitions and guidelines and suggest improvements.",Towards an adequate account of parataxis in {U}niversal {D}ependencies,"The parataxis relation as defined for Universal Dependencies 2.0 is general and, for this reason, sometimes hard to distinguish from competing analyses, such as coordination, conj, or apposition, appos. The specific subtypes that are listed for parataxis are also quite different in character. In this study we first show that the actual practice by UD-annotators is varied, using the parallel UD (PUD-) treebanks as data. We then review the current definitions and guidelines and suggest improvements.",Towards an adequate account of parataxis in Universal Dependencies,"The parataxis relation as defined for Universal Dependencies 2.0 is general and, for this reason, sometimes hard to distinguish from competing analyses, such as coordination, conj, or apposition, appos. The specific subtypes that are listed for parataxis are also quite different in character. In this study we first show that the actual practice by UD-annotators is varied, using the parallel UD (PUD-) treebanks as data. We then review the current definitions and guidelines and suggest improvements.",,"Towards an adequate account of parataxis in Universal Dependencies. The parataxis relation as defined for Universal Dependencies 2.0 is general and, for this reason, sometimes hard to distinguish from competing analyses, such as coordination, conj, or apposition, appos. The specific subtypes that are listed for parataxis are also quite different in character. In this study we first show that the actual practice by UD-annotators is varied, using the parallel UD (PUD-) treebanks as data. We then review the current definitions and guidelines and suggest improvements.",2019
atterer-schlangen-2009-rubisc,https://aclanthology.org/W09-0509,0,,,,,,,"RUBISC - a Robust Unification-Based Incremental Semantic Chunker. We present RUBISC, a new incremental chunker that can perform incremental slot filling and revising as it receives a stream of words. Slot values can influence each other via a unification mechanism. Chunks correspond to sense units, and end-of-sentence detection is done incrementally based on a notion of semantic/pragmatic completeness. One of RU-BISC's main fields of application is in dialogue systems where it can contribute to responsiveness and hence naturalness, because it can provide a partial or complete semantics of an utterance while the speaker is still speaking. The chunker is evaluated on a German transcribed speech corpus and achieves a concept error rate of 43.3% and an F-Score of 81.5.",{RUBISC} - a Robust Unification-Based Incremental Semantic Chunker,"We present RUBISC, a new incremental chunker that can perform incremental slot filling and revising as it receives a stream of words. Slot values can influence each other via a unification mechanism. Chunks correspond to sense units, and end-of-sentence detection is done incrementally based on a notion of semantic/pragmatic completeness. One of RU-BISC's main fields of application is in dialogue systems where it can contribute to responsiveness and hence naturalness, because it can provide a partial or complete semantics of an utterance while the speaker is still speaking. The chunker is evaluated on a German transcribed speech corpus and achieves a concept error rate of 43.3% and an F-Score of 81.5.",RUBISC - a Robust Unification-Based Incremental Semantic Chunker,"We present RUBISC, a new incremental chunker that can perform incremental slot filling and revising as it receives a stream of words. Slot values can influence each other via a unification mechanism. Chunks correspond to sense units, and end-of-sentence detection is done incrementally based on a notion of semantic/pragmatic completeness. One of RU-BISC's main fields of application is in dialogue systems where it can contribute to responsiveness and hence naturalness, because it can provide a partial or complete semantics of an utterance while the speaker is still speaking. The chunker is evaluated on a German transcribed speech corpus and achieves a concept error rate of 43.3% and an F-Score of 81.5.",This work was funded by the DFG Emmy-Noether grant SCHL845/3-1. Many thanks to Ewan Klein for valuable comments. All errors are of course ours.,"RUBISC - a Robust Unification-Based Incremental Semantic Chunker. We present RUBISC, a new incremental chunker that can perform incremental slot filling and revising as it receives a stream of words. Slot values can influence each other via a unification mechanism. Chunks correspond to sense units, and end-of-sentence detection is done incrementally based on a notion of semantic/pragmatic completeness. One of RU-BISC's main fields of application is in dialogue systems where it can contribute to responsiveness and hence naturalness, because it can provide a partial or complete semantics of an utterance while the speaker is still speaking. The chunker is evaluated on a German transcribed speech corpus and achieves a concept error rate of 43.3% and an F-Score of 81.5.",2009
cook-etal-2016-dictionary,https://aclanthology.org/W16-3006,1,,,,health,,,"A dictionary- and rule-based system for identification of bacteria and habitats in text. The number of scientific papers published each year is growing exponentially and given the rate of this growth, automated information extraction is needed to efficiently extract information from this corpus. A critical first step in this process is to accurately recognize the names of entities in text. Previous efforts, such as SPECIES, have identified bacteria strain names, among other taxonomic groups, but have been limited to those names present in NCBI taxonomy. We have implemented a dictionary-based named entity tagger, TagIt, that is followed by a rule based expansion system to identify bacteria strain names and habitats and resolve them to the closest match possible in the NCBI taxonomy and the OntoBiotope ontology respectively. The rule based post processing steps expand acronyms, and extend strain names according to a set of rules, which captures additional aliases and strains that are not present in the dictionary. TagIt has the best performance out of three entries to BioNLP-ST BB3 cat+ner, with an overall SER of 0.628 on the independent test set.",A dictionary- and rule-based system for identification of bacteria and habitats in text,"The number of scientific papers published each year is growing exponentially and given the rate of this growth, automated information extraction is needed to efficiently extract information from this corpus. A critical first step in this process is to accurately recognize the names of entities in text. Previous efforts, such as SPECIES, have identified bacteria strain names, among other taxonomic groups, but have been limited to those names present in NCBI taxonomy. We have implemented a dictionary-based named entity tagger, TagIt, that is followed by a rule based expansion system to identify bacteria strain names and habitats and resolve them to the closest match possible in the NCBI taxonomy and the OntoBiotope ontology respectively. The rule based post processing steps expand acronyms, and extend strain names according to a set of rules, which captures additional aliases and strains that are not present in the dictionary. TagIt has the best performance out of three entries to BioNLP-ST BB3 cat+ner, with an overall SER of 0.628 on the independent test set.",A dictionary- and rule-based system for identification of bacteria and habitats in text,"The number of scientific papers published each year is growing exponentially and given the rate of this growth, automated information extraction is needed to efficiently extract information from this corpus. A critical first step in this process is to accurately recognize the names of entities in text. Previous efforts, such as SPECIES, have identified bacteria strain names, among other taxonomic groups, but have been limited to those names present in NCBI taxonomy. We have implemented a dictionary-based named entity tagger, TagIt, that is followed by a rule based expansion system to identify bacteria strain names and habitats and resolve them to the closest match possible in the NCBI taxonomy and the OntoBiotope ontology respectively. The rule based post processing steps expand acronyms, and extend strain names according to a set of rules, which captures additional aliases and strains that are not present in the dictionary. 
TagIt has the best performance out of three entries to BioNLP-ST BB3 cat+ner, with an overall SER of 0.628 on the independent test set.","EU BON (EU FP7 Contract No. 308454 program), the Micro B3 Project (287589), the Earth System Science and Environmental Management COST Action (ES1103) and the Novo Nordisk Foundation (NNF14CC0001).","A dictionary- and rule-based system for identification of bacteria and habitats in text. The number of scientific papers published each year is growing exponentially and given the rate of this growth, automated information extraction is needed to efficiently extract information from this corpus. A critical first step in this process is to accurately recognize the names of entities in text. Previous efforts, such as SPECIES, have identified bacteria strain names, among other taxonomic groups, but have been limited to those names present in NCBI taxonomy. We have implemented a dictionary-based named entity tagger, TagIt, that is followed by a rule based expansion system to identify bacteria strain names and habitats and resolve them to the closest match possible in the NCBI taxonomy and the OntoBiotope ontology respectively. The rule based post processing steps expand acronyms, and extend strain names according to a set of rules, which captures additional aliases and strains that are not present in the dictionary. TagIt has the best performance out of three entries to BioNLP-ST BB3 cat+ner, with an overall SER of 0.628 on the independent test set.",2016
chen-etal-2021-advpicker,https://aclanthology.org/2021.acl-long.61,0,,,,,,,"AdvPicker: Effectively Leveraging Unlabeled Data via Adversarial Discriminator for Cross-Lingual NER. Neural methods have been shown to achieve high performance in Named Entity Recognition (NER), but rely on costly high-quality labeled data for training, which is not always available across languages. While previous works have shown that unlabeled data in a target language can be used to improve crosslingual model performance, we propose a novel adversarial approach (AdvPicker) to better leverage such data and further improve results. We design an adversarial learning framework in which an encoder learns entity domain knowledge from labeled source-language data and better shared features are captured via adversarial training-where a discriminator selects less language-dependent target-language data via similarity to the source language. Experimental results on standard benchmark datasets well demonstrate that the proposed method benefits strongly from this data selection process and outperforms existing state-ofthe-art methods; without requiring any additional external resources (e.g., gazetteers or via machine translation). 1",{A}dv{P}icker: {E}ffectively {L}everaging {U}nlabeled {D}ata via {A}dversarial {D}iscriminator for {C}ross-{L}ingual {NER},"Neural methods have been shown to achieve high performance in Named Entity Recognition (NER), but rely on costly high-quality labeled data for training, which is not always available across languages. While previous works have shown that unlabeled data in a target language can be used to improve crosslingual model performance, we propose a novel adversarial approach (AdvPicker) to better leverage such data and further improve results. We design an adversarial learning framework in which an encoder learns entity domain knowledge from labeled source-language data and better shared features are captured via adversarial training-where a discriminator selects less language-dependent target-language data via similarity to the source language. Experimental results on standard benchmark datasets well demonstrate that the proposed method benefits strongly from this data selection process and outperforms existing state-ofthe-art methods; without requiring any additional external resources (e.g., gazetteers or via machine translation). 1",AdvPicker: Effectively Leveraging Unlabeled Data via Adversarial Discriminator for Cross-Lingual NER,"Neural methods have been shown to achieve high performance in Named Entity Recognition (NER), but rely on costly high-quality labeled data for training, which is not always available across languages. While previous works have shown that unlabeled data in a target language can be used to improve crosslingual model performance, we propose a novel adversarial approach (AdvPicker) to better leverage such data and further improve results. We design an adversarial learning framework in which an encoder learns entity domain knowledge from labeled source-language data and better shared features are captured via adversarial training-where a discriminator selects less language-dependent target-language data via similarity to the source language. Experimental results on standard benchmark datasets well demonstrate that the proposed method benefits strongly from this data selection process and outperforms existing state-ofthe-art methods; without requiring any additional external resources (e.g., gazetteers or via machine translation). 
1",,"AdvPicker: Effectively Leveraging Unlabeled Data via Adversarial Discriminator for Cross-Lingual NER. Neural methods have been shown to achieve high performance in Named Entity Recognition (NER), but rely on costly high-quality labeled data for training, which is not always available across languages. While previous works have shown that unlabeled data in a target language can be used to improve crosslingual model performance, we propose a novel adversarial approach (AdvPicker) to better leverage such data and further improve results. We design an adversarial learning framework in which an encoder learns entity domain knowledge from labeled source-language data and better shared features are captured via adversarial training-where a discriminator selects less language-dependent target-language data via similarity to the source language. Experimental results on standard benchmark datasets well demonstrate that the proposed method benefits strongly from this data selection process and outperforms existing state-ofthe-art methods; without requiring any additional external resources (e.g., gazetteers or via machine translation). 1",2021
meister-cotterell-2021-language,https://aclanthology.org/2021.acl-long.414,0,,,,,,,"Language Model Evaluation Beyond Perplexity. We propose an alternate approach to quantifying how well language models learn natural language: we ask how well they match the statistical tendencies of natural language. To answer this question, we analyze whether text generated from language models exhibits the statistical tendencies present in the humangenerated text on which they were trained. We provide a framework-paired with significance tests-for evaluating the fit of language models to these trends. We find that neural language models appear to learn only a subset of the tendencies considered, but align much more closely with empirical trends than proposed theoretical distributions (when present). Further, the fit to different distributions is highly-dependent on both model architecture and generation strategy. As concrete examples, text generated under the nucleus sampling scheme adheres more closely to the typetoken relationship of natural language than text produced using standard ancestral sampling; text from LSTMs reflects the natural language distributions over length, stopwords, and symbols surprisingly well.",Language Model Evaluation Beyond Perplexity,"We propose an alternate approach to quantifying how well language models learn natural language: we ask how well they match the statistical tendencies of natural language. To answer this question, we analyze whether text generated from language models exhibits the statistical tendencies present in the humangenerated text on which they were trained. We provide a framework-paired with significance tests-for evaluating the fit of language models to these trends. We find that neural language models appear to learn only a subset of the tendencies considered, but align much more closely with empirical trends than proposed theoretical distributions (when present). Further, the fit to different distributions is highly-dependent on both model architecture and generation strategy. As concrete examples, text generated under the nucleus sampling scheme adheres more closely to the typetoken relationship of natural language than text produced using standard ancestral sampling; text from LSTMs reflects the natural language distributions over length, stopwords, and symbols surprisingly well.",Language Model Evaluation Beyond Perplexity,"We propose an alternate approach to quantifying how well language models learn natural language: we ask how well they match the statistical tendencies of natural language. To answer this question, we analyze whether text generated from language models exhibits the statistical tendencies present in the humangenerated text on which they were trained. We provide a framework-paired with significance tests-for evaluating the fit of language models to these trends. We find that neural language models appear to learn only a subset of the tendencies considered, but align much more closely with empirical trends than proposed theoretical distributions (when present). Further, the fit to different distributions is highly-dependent on both model architecture and generation strategy. 
As concrete examples, text generated under the nucleus sampling scheme adheres more closely to the type-token relationship of natural language than text produced using standard ancestral sampling; text from LSTMs reflects the natural language distributions over length, stopwords, and symbols surprisingly well.","We thank Adhi Kuncoro for helpful discussion and feedback in the middle stages of our work and Tiago Pimentel, Jason Wei, and our anonymous reviewers for insightful feedback on the manuscript. We additionally thank B. Bou for his concern.","Language Model Evaluation Beyond Perplexity. We propose an alternate approach to quantifying how well language models learn natural language: we ask how well they match the statistical tendencies of natural language. To answer this question, we analyze whether text generated from language models exhibits the statistical tendencies present in the human-generated text on which they were trained. We provide a framework - paired with significance tests - for evaluating the fit of language models to these trends. We find that neural language models appear to learn only a subset of the tendencies considered, but align much more closely with empirical trends than proposed theoretical distributions (when present). Further, the fit to different distributions is highly dependent on both model architecture and generation strategy. As concrete examples, text generated under the nucleus sampling scheme adheres more closely to the type-token relationship of natural language than text produced using standard ancestral sampling; text from LSTMs reflects the natural language distributions over length, stopwords, and symbols surprisingly well.",2021
cui-bollegala-2019-self,https://aclanthology.org/R19-1025,0,,,,,,,"Self-Adaptation for Unsupervised Domain Adaptation. Lack of labelled data in the target domain for training is a common problem in domain adaptation. To overcome this problem, we propose a novel unsupervised domain adaptation method that combines projection and self-training based approaches. Using the labelled data from the source domain, we first learn a projection that maximises the distance among the nearest neighbours with opposite labels in the source domain. Next, we project the source domain labelled data using the learnt projection and train a classifier for the target class prediction. We then use the trained classifier to predict pseudo labels for the target domain unlabelled data. Finally, we learn a projection for the target domain as we did for the source domain using the pseudo-labelled target domain data, where we maximise the distance between nearest neighbours having opposite pseudo labels. Experiments on a standard benchmark dataset for domain adaptation show that the proposed method consistently outperforms numerous baselines and returns competitive results comparable to that of SOTA including self-training, tri-training, and neural adaptations.",Self-Adaptation for Unsupervised Domain Adaptation,"Lack of labelled data in the target domain for training is a common problem in domain adaptation. To overcome this problem, we propose a novel unsupervised domain adaptation method that combines projection and self-training based approaches. Using the labelled data from the source domain, we first learn a projection that maximises the distance among the nearest neighbours with opposite labels in the source domain. Next, we project the source domain labelled data using the learnt projection and train a classifier for the target class prediction. We then use the trained classifier to predict pseudo labels for the target domain unlabelled data. Finally, we learn a projection for the target domain as we did for the source domain using the pseudo-labelled target domain data, where we maximise the distance between nearest neighbours having opposite pseudo labels. Experiments on a standard benchmark dataset for domain adaptation show that the proposed method consistently outperforms numerous baselines and returns competitive results comparable to that of SOTA including self-training, tri-training, and neural adaptations.",Self-Adaptation for Unsupervised Domain Adaptation,"Lack of labelled data in the target domain for training is a common problem in domain adaptation. To overcome this problem, we propose a novel unsupervised domain adaptation method that combines projection and self-training based approaches. Using the labelled data from the source domain, we first learn a projection that maximises the distance among the nearest neighbours with opposite labels in the source domain. Next, we project the source domain labelled data using the learnt projection and train a classifier for the target class prediction. We then use the trained classifier to predict pseudo labels for the target domain unlabelled data. Finally, we learn a projection for the target domain as we did for the source domain using the pseudo-labelled target domain data, where we maximise the distance between nearest neighbours having opposite pseudo labels. 
Experiments on a standard benchmark dataset for domain adaptation show that the proposed method consistently outperforms numerous baselines and returns competitive results comparable to that of SOTA including self-training, tri-training, and neural adaptations.",,"Self-Adaptation for Unsupervised Domain Adaptation. Lack of labelled data in the target domain for training is a common problem in domain adaptation. To overcome this problem, we propose a novel unsupervised domain adaptation method that combines projection and self-training based approaches. Using the labelled data from the source domain, we first learn a projection that maximises the distance among the nearest neighbours with opposite labels in the source domain. Next, we project the source domain labelled data using the learnt projection and train a classifier for the target class prediction. We then use the trained classifier to predict pseudo labels for the target domain unlabelled data. Finally, we learn a projection for the target domain as we did for the source domain using the pseudo-labelled target domain data, where we maximise the distance between nearest neighbours having opposite pseudo labels. Experiments on a standard benchmark dataset for domain adaptation show that the proposed method consistently outperforms numerous baselines and returns competitive results comparable to that of SOTA including self-training, tri-training, and neural adaptations.",2019
graham-etal-2015-accurate,https://aclanthology.org/N15-1124,0,,,,,,,"Accurate Evaluation of Segment-level Machine Translation Metrics. Evaluation of segment-level machine translation metrics is currently hampered by: (1) low inter-annotator agreement levels in human assessments; (2) lack of an effective mechanism for evaluation of translations of equal quality; and (3) lack of methods of significance testing improvements over a baseline. In this paper, we provide solutions to each of these challenges and outline a new human evaluation methodology aimed specifically at assessment of segment-level metrics. We replicate the human evaluation component of WMT-13 and reveal that the current state-of-the-art performance of segment-level metrics is better than previously believed. Three segment-level metrics-METEOR, NLEPOR and SENTBLEU-MOSES-are found to correlate with human assessment at a level not significantly outperformed by any other metric in both the individual language pair assessment for Spanish-to-English and the aggregated set of 9 language pairs.",Accurate Evaluation of Segment-level Machine Translation Metrics,"Evaluation of segment-level machine translation metrics is currently hampered by: (1) low inter-annotator agreement levels in human assessments; (2) lack of an effective mechanism for evaluation of translations of equal quality; and (3) lack of methods of significance testing improvements over a baseline. In this paper, we provide solutions to each of these challenges and outline a new human evaluation methodology aimed specifically at assessment of segment-level metrics. We replicate the human evaluation component of WMT-13 and reveal that the current state-of-the-art performance of segment-level metrics is better than previously believed. Three segment-level metrics-METEOR, NLEPOR and SENTBLEU-MOSES-are found to correlate with human assessment at a level not significantly outperformed by any other metric in both the individual language pair assessment for Spanish-to-English and the aggregated set of 9 language pairs.",Accurate Evaluation of Segment-level Machine Translation Metrics,"Evaluation of segment-level machine translation metrics is currently hampered by: (1) low inter-annotator agreement levels in human assessments; (2) lack of an effective mechanism for evaluation of translations of equal quality; and (3) lack of methods of significance testing improvements over a baseline. In this paper, we provide solutions to each of these challenges and outline a new human evaluation methodology aimed specifically at assessment of segment-level metrics. We replicate the human evaluation component of WMT-13 and reveal that the current state-of-the-art performance of segment-level metrics is better than previously believed. Three segment-level metrics-METEOR, NLEPOR and SENTBLEU-MOSES-are found to correlate with human assessment at a level not significantly outperformed by any other metric in both the individual language pair assessment for Spanish-to-English and the aggregated set of 9 language pairs.",We wish to thank the anonymous reviewers for their valuable comments. This research was supported by funding from the Australian Research Council and Science Foundation Ireland (Grant 12/CE/12267).,"Accurate Evaluation of Segment-level Machine Translation Metrics. 
Evaluation of segment-level machine translation metrics is currently hampered by: (1) low inter-annotator agreement levels in human assessments; (2) lack of an effective mechanism for evaluation of translations of equal quality; and (3) lack of methods of significance testing improvements over a baseline. In this paper, we provide solutions to each of these challenges and outline a new human evaluation methodology aimed specifically at assessment of segment-level metrics. We replicate the human evaluation component of WMT-13 and reveal that the current state-of-the-art performance of segment-level metrics is better than previously believed. Three segment-level metrics-METEOR, NLEPOR and SENTBLEU-MOSES-are found to correlate with human assessment at a level not significantly outperformed by any other metric in both the individual language pair assessment for Spanish-to-English and the aggregated set of 9 language pairs.",2015
augenstein-etal-2017-semeval,https://aclanthology.org/S17-2091,1,,,,industry_innovation_infrastructure,,,"SemEval 2017 Task 10: ScienceIE - Extracting Keyphrases and Relations from Scientific Publications. We describe the SemEval task of extracting keyphrases and relations between them from scientific documents, which is crucial for understanding which publications describe which processes, tasks and materials. Although this was a new task, we had a total of 26 submissions across 3 evaluation scenarios. We expect the task and the findings reported in this paper to be relevant for researchers working on understanding scientific content, as well as the broader knowledge base population and information extraction communities. Keyphrase Extraction (TASK), as well as extracting semantic relations between keywords, e.g. Keyphrase Extraction HYPONYM-OF Information Extraction. These tasks are related to the tasks of named entity recognition, named entity",{S}em{E}val 2017 Task 10: {S}cience{IE} - Extracting Keyphrases and Relations from Scientific Publications,"We describe the SemEval task of extracting keyphrases and relations between them from scientific documents, which is crucial for understanding which publications describe which processes, tasks and materials. Although this was a new task, we had a total of 26 submissions across 3 evaluation scenarios. We expect the task and the findings reported in this paper to be relevant for researchers working on understanding scientific content, as well as the broader knowledge base population and information extraction communities. Keyphrase Extraction (TASK), as well as extracting semantic relations between keywords, e.g. Keyphrase Extraction HYPONYM-OF Information Extraction. These tasks are related to the tasks of named entity recognition, named entity",SemEval 2017 Task 10: ScienceIE - Extracting Keyphrases and Relations from Scientific Publications,"We describe the SemEval task of extracting keyphrases and relations between them from scientific documents, which is crucial for understanding which publications describe which processes, tasks and materials. Although this was a new task, we had a total of 26 submissions across 3 evaluation scenarios. We expect the task and the findings reported in this paper to be relevant for researchers working on understanding scientific content, as well as the broader knowledge base population and information extraction communities. Keyphrase Extraction (TASK), as well as extracting semantic relations between keywords, e.g. Keyphrase Extraction HYPONYM-OF Information Extraction. These tasks are related to the tasks of named entity recognition, named entity",We would like to thank Elsevier for supporting this shared task. Special thanks go to Ronald Daniel Jr. for his feedback on the task setup and Pontus Stenetorp for his advice on brat and shared task organisation.,"SemEval 2017 Task 10: ScienceIE - Extracting Keyphrases and Relations from Scientific Publications. We describe the SemEval task of extracting keyphrases and relations between them from scientific documents, which is crucial for understanding which publications describe which processes, tasks and materials. Although this was a new task, we had a total of 26 submissions across 3 evaluation scenarios. We expect the task and the findings reported in this paper to be relevant for researchers working on understanding scientific content, as well as the broader knowledge base population and information extraction communities. 
Keyphrase Extraction (TASK), as well as extracting semantic relations between keywords, e.g. Keyphrase Extraction HYPONYM-OF Information Extraction. These tasks are related to the tasks of named entity recognition, named entity",2017
ramsay-field-2009-using,https://aclanthology.org/W09-3717,0,,,,,,,"Using English for commonsense knowledge. The work reported here arises from an attempt to provide a body of simple information about diet and its effect on various common medical conditions. Expressing this knowledge in natural language has a number of advantages. It also raises a number of difficult issues. We will consider solutions, and partial solutions, to these issues below. 1 Commonsense knowledge Suppose you wanted to have a system that could provide advice about what you should and should not eat if you suffer from various common medical conditions. You might expect, at the very least, to be able to have dialogues like (1). (1) a. User: I am allergic to eggs. Computer: OK User: Should I eat pancakes Computer: No, because pancakes contain eggs, and eating things which contain eggs will make you ill if you are allergic to eggs. b. User: My son is very fat. Computer: OK User: Should he go swimming. Computer: Yes, because swimming is a form of exercise, and exercise is good for people who are overweight.",Using {E}nglish for commonsense knowledge,"The work reported here arises from an attempt to provide a body of simple information about diet and its effect on various common medical conditions. Expressing this knowledge in natural language has a number of advantages. It also raises a number of difficult issues. We will consider solutions, and partial solutions, to these issues below. 1 Commonsense knowledge Suppose you wanted to have a system that could provide advice about what you should and should not eat if you suffer from various common medical conditions. You might expect, at the very least, to be able to have dialogues like (1). (1) a. User: I am allergic to eggs. Computer: OK User: Should I eat pancakes Computer: No, because pancakes contain eggs, and eating things which contain eggs will make you ill if you are allergic to eggs. b. User: My son is very fat. Computer: OK User: Should he go swimming. Computer: Yes, because swimming is a form of exercise, and exercise is good for people who are overweight.",Using English for commonsense knowledge,"The work reported here arises from an attempt to provide a body of simple information about diet and its effect on various common medical conditions. Expressing this knowledge in natural language has a number of advantages. It also raises a number of difficult issues. We will consider solutions, and partial solutions, to these issues below. 1 Commonsense knowledge Suppose you wanted to have a system that could provide advice about what you should and should not eat if you suffer from various common medical conditions. You might expect, at the very least, to be able to have dialogues like (1). (1) a. User: I am allergic to eggs. Computer: OK User: Should I eat pancakes Computer: No, because pancakes contain eggs, and eating things which contain eggs will make you ill if you are allergic to eggs. b. User: My son is very fat. Computer: OK User: Should he go swimming. Computer: Yes, because swimming is a form of exercise, and exercise is good for people who are overweight.",,"Using English for commonsense knowledge. The work reported here arises from an attempt to provide a body of simple information about diet and its effect on various common medical conditions. Expressing this knowledge in natural language has a number of advantages. It also raises a number of difficult issues. We will consider solutions, and partial solutions, to these issues below. 
1 Commonsense knowledge Suppose you wanted to have a system that could provide advice about what you should and should not eat if you suffer from various common medical conditions. You might expect, at the very least, to be able to have dialogues like (1). (1) a. User: I am allergic to eggs. Computer: OK User: Should I eat pancakes Computer: No, because pancakes contain eggs, and eating things which contain eggs will make you ill if you are allergic to eggs. b. User: My son is very fat. Computer: OK User: Should he go swimming. Computer: Yes, because swimming is a form of exercise, and exercise is good for people who are overweight.",2009
hirao-etal-2017-enumeration,https://aclanthology.org/E17-1037,0,,,,,,,"Enumeration of Extractive Oracle Summaries. To analyze the limitations and the future directions of the extractive summarization paradigm, this paper proposes an Integer Linear Programming (ILP) formulation to obtain extractive oracle summaries in terms of ROUGE n. We also propose an algorithm that enumerates all of the oracle summaries for a set of reference summaries to exploit F-measures that evaluate which system summaries contain how many sentences that are extracted as an oracle summary. Our experimental results obtained from Document Understanding Conference (DUC) corpora demonstrated the following: (1) room still exists to improve the performance of extractive summarization; (2) the F-measures derived from the enumerated oracle summaries have significantly stronger correlations with human judgment than those derived from single oracle summaries.",Enumeration of Extractive Oracle Summaries,"To analyze the limitations and the future directions of the extractive summarization paradigm, this paper proposes an Integer Linear Programming (ILP) formulation to obtain extractive oracle summaries in terms of ROUGE n. We also propose an algorithm that enumerates all of the oracle summaries for a set of reference summaries to exploit F-measures that evaluate which system summaries contain how many sentences that are extracted as an oracle summary. Our experimental results obtained from Document Understanding Conference (DUC) corpora demonstrated the following: (1) room still exists to improve the performance of extractive summarization; (2) the F-measures derived from the enumerated oracle summaries have significantly stronger correlations with human judgment than those derived from single oracle summaries.",Enumeration of Extractive Oracle Summaries,"To analyze the limitations and the future directions of the extractive summarization paradigm, this paper proposes an Integer Linear Programming (ILP) formulation to obtain extractive oracle summaries in terms of ROUGE n. We also propose an algorithm that enumerates all of the oracle summaries for a set of reference summaries to exploit F-measures that evaluate which system summaries contain how many sentences that are extracted as an oracle summary. Our experimental results obtained from Document Understanding Conference (DUC) corpora demonstrated the following: (1) room still exists to improve the performance of extractive summarization; (2) the F-measures derived from the enumerated oracle summaries have significantly stronger correlations with human judgment than those derived from single oracle summaries.",The authors thank three anonymous reviewers for their valuable comments and suggestions to improve the quality of the paper.,"Enumeration of Extractive Oracle Summaries. To analyze the limitations and the future directions of the extractive summarization paradigm, this paper proposes an Integer Linear Programming (ILP) formulation to obtain extractive oracle summaries in terms of ROUGE n. We also propose an algorithm that enumerates all of the oracle summaries for a set of reference summaries to exploit F-measures that evaluate which system summaries contain how many sentences that are extracted as an oracle summary. 
Our experimental results obtained from Document Understanding Conference (DUC) corpora demonstrated the following: (1) room still exists to improve the performance of extractive summarization; (2) the F-measures derived from the enumerated oracle summaries have significantly stronger correlations with human judgment than those derived from single oracle summaries.",2017
okumura-hovy-1994-lexicon,https://aclanthology.org/1994.amta-1.23,0,,,,,,,"Lexicon-to-Ontology Concept Association Using a Bilingual Dictionary. This paper describes a semi-automatic method for associating a Japanese lexicon with a semantic concept taxonomy called an ontology, using a Japanese-English bilingual dictionary as a ""bridge"". The ontology supports semantic processing in a knowledge-based machine translation system by providing a set of language-neutral symbols with semantic information. To put the ontology to use, lexical items of each language of interest must be linked to appropriate ontology items. The association of ontology items with lexical items of various languages is a process fraught with difficulty: since much of this work depends on the subjective decisions of human workers, large MT dictionaries tend to be subject to some dispersion and inconsistency. The problem we focus on here is how to associate concepts in the ontology with Japanese lexical entities by automatic methods, since it is too difficult to define adequately many concepts manually. We have designed three algorithms to associate a Japanese lexicon with the concepts of the ontology: the equivalent-word match, the argument match, and the example match.",Lexicon-to-Ontology Concept Association Using a Bilingual Dictionary,"This paper describes a semi-automatic method for associating a Japanese lexicon with a semantic concept taxonomy called an ontology, using a Japanese-English bilingual dictionary as a ""bridge"". The ontology supports semantic processing in a knowledge-based machine translation system by providing a set of language-neutral symbols with semantic information. To put the ontology to use, lexical items of each language of interest must be linked to appropriate ontology items. The association of ontology items with lexical items of various languages is a process fraught with difficulty: since much of this work depends on the subjective decisions of human workers, large MT dictionaries tend to be subject to some dispersion and inconsistency. The problem we focus on here is how to associate concepts in the ontology with Japanese lexical entities by automatic methods, since it is too difficult to define adequately many concepts manually. We have designed three algorithms to associate a Japanese lexicon with the concepts of the ontology: the equivalent-word match, the argument match, and the example match.",Lexicon-to-Ontology Concept Association Using a Bilingual Dictionary,"This paper describes a semi-automatic method for associating a Japanese lexicon with a semantic concept taxonomy called an ontology, using a Japanese-English bilingual dictionary as a ""bridge"". The ontology supports semantic processing in a knowledge-based machine translation system by providing a set of language-neutral symbols with semantic information. To put the ontology to use, lexical items of each language of interest must be linked to appropriate ontology items. The association of ontology items with lexical items of various languages is a process fraught with difficulty: since much of this work depends on the subjective decisions of human workers, large MT dictionaries tend to be subject to some dispersion and inconsistency. The problem we focus on here is how to associate concepts in the ontology with Japanese lexical entities by automatic methods, since it is too difficult to define adequately many concepts manually. 
We have designed three algorithms to associate a Japanese lexicon with the concepts of the ontology: the equivalent-word match, the argument match, and the example match.",We would like to thank Kevin Knight and Matthew Haines for their significant assistance with this work. We also appreciate Kazunori Muraki of NEC Labs. for his support.,"Lexicon-to-Ontology Concept Association Using a Bilingual Dictionary. This paper describes a semi-automatic method for associating a Japanese lexicon with a semantic concept taxonomy called an ontology, using a Japanese-English bilingual dictionary as a ""bridge"". The ontology supports semantic processing in a knowledge-based machine translation system by providing a set of language-neutral symbols with semantic information. To put the ontology to use, lexical items of each language of interest must be linked to appropriate ontology items. The association of ontology items with lexical items of various languages is a process fraught with difficulty: since much of this work depends on the subjective decisions of human workers, large MT dictionaries tend to be subject to some dispersion and inconsistency. The problem we focus on here is how to associate concepts in the ontology with Japanese lexical entities by automatic methods, since it is too difficult to define adequately many concepts manually. We have designed three algorithms to associate a Japanese lexicon with the concepts of the ontology: the equivalent-word match, the argument match, and the example match.",1994
stanojevic-steedman-2021-formal,https://aclanthology.org/2021.cl-1.2,0,,,,,,,"Formal Basis of a Language Universal. Steedman (2020) proposes as a formal universal of natural language grammar that grammatical permutations of the kind that have given rise to transformational rules are limited to a class known to mathematicians and computer scientists as the ""separable"" permutations. This class of permutations is exactly the class that can be expressed in combinatory categorial grammars (CCGs). The excluded non-separable permutations do in fact seem to be absent in a number of studies of crosslinguistic variation in word order in nominal and verbal constructions. The number of permutations that are separable grows in the number n of lexical elements in the construction as the Large Schröder Number S n−1. Because that number grows much more slowly than the n! number of all permutations, this generalization is also of considerable practical interest for computational applications such as parsing and machine translation. The present article examines the mathematical and computational origins of this restriction, and the reason it is exactly captured in CCG without the imposition of any further constraints.",Formal Basis of a Language Universal,"Steedman (2020) proposes as a formal universal of natural language grammar that grammatical permutations of the kind that have given rise to transformational rules are limited to a class known to mathematicians and computer scientists as the ""separable"" permutations. This class of permutations is exactly the class that can be expressed in combinatory categorial grammars (CCGs). The excluded non-separable permutations do in fact seem to be absent in a number of studies of crosslinguistic variation in word order in nominal and verbal constructions. The number of permutations that are separable grows in the number n of lexical elements in the construction as the Large Schröder Number S n−1. Because that number grows much more slowly than the n! number of all permutations, this generalization is also of considerable practical interest for computational applications such as parsing and machine translation. The present article examines the mathematical and computational origins of this restriction, and the reason it is exactly captured in CCG without the imposition of any further constraints.",Formal Basis of a Language Universal,"Steedman (2020) proposes as a formal universal of natural language grammar that grammatical permutations of the kind that have given rise to transformational rules are limited to a class known to mathematicians and computer scientists as the ""separable"" permutations. This class of permutations is exactly the class that can be expressed in combinatory categorial grammars (CCGs). The excluded non-separable permutations do in fact seem to be absent in a number of studies of crosslinguistic variation in word order in nominal and verbal constructions. The number of permutations that are separable grows in the number n of lexical elements in the construction as the Large Schröder Number S n−1. Because that number grows much more slowly than the n! number of all permutations, this generalization is also of considerable practical interest for computational applications such as parsing and machine translation. 
The present article examines the mathematical and computational origins of this restriction, and the reason it is exactly captured in CCG without the imposition of any further constraints.","We are grateful to Peter Buneman, Shay Cohen, Paula Merlo, Chris Stone, Bonnie Webber, and the Referees for Computational Linguistics for helpful comments and advice. The work was supported by ERC Advanced Fellowship 742137 SEMANTAX.","Formal Basis of a Language Universal. Steedman (2020) proposes as a formal universal of natural language grammar that grammatical permutations of the kind that have given rise to transformational rules are limited to a class known to mathematicians and computer scientists as the ""separable"" permutations. This class of permutations is exactly the class that can be expressed in combinatory categorial grammars (CCGs). The excluded non-separable permutations do in fact seem to be absent in a number of studies of crosslinguistic variation in word order in nominal and verbal constructions. The number of permutations that are separable grows in the number n of lexical elements in the construction as the Large Schröder Number S n−1. Because that number grows much more slowly than the n! number of all permutations, this generalization is also of considerable practical interest for computational applications such as parsing and machine translation. The present article examines the mathematical and computational origins of this restriction, and the reason it is exactly captured in CCG without the imposition of any further constraints.",2021
horacek-1997-generating,https://aclanthology.org/W97-1408,0,,,,,,,"Generating Referential Descriptions in Multimedia Environments. All known algorithms dedicated to the generation of referential descriptions use natural language alone to accomplish this communicative goal. Motivated by some limitations underlying these algorithms and the resulting restrictions in their scope, we attempt to extend the basic schema of these procedures to multimedia environments, that is, to descriptions consisting of images and text. We discuss several issues in this enterprise, including the transfer of basic ingredients to images and the hereby reinterpretation of language-specific concepts, matters of choice in the generation process, and the extended application potential in some typical scenarios. Moreover, we sketch our intended area of application, the identification of a particular object in the large visualization of mathematical proofs, which has some characteristic properties of each of these scenarios. Our achievement lies in extending the scope of techniques for generating referential descriptions through the incorporation of multimedia components and in enhancing the application areas for these techniques.",Generating Referential Descriptions in Multimedia Environments,"All known algorithms dedicated to the generation of referential descriptions use natural language alone to accomplish this communicative goal. Motivated by some limitations underlying these algorithms and the resulting restrictions in their scope, we attempt to extend the basic schema of these procedures to multimedia environments, that is, to descriptions consisting of images and text. We discuss several issues in this enterprise, including the transfer of basic ingredients to images and the hereby reinterpretation of language-specific concepts, matters of choice in the generation process, and the extended application potential in some typical scenarios. Moreover, we sketch our intended area of application, the identification of a particular object in the large visualization of mathematical proofs, which has some characteristic properties of each of these scenarios. Our achievement lies in extending the scope of techniques for generating referential descriptions through the incorporation of multimedia components and in enhancing the application areas for these techniques.",Generating Referential Descriptions in Multimedia Environments,"All known algorithms dedicated to the generation of referential descriptions use natural language alone to accomplish this communicative goal. Motivated by some limitations underlying these algorithms and the resulting restrictions in their scope, we attempt to extend the basic schema of these procedures to multimedia environments, that is, to descriptions consisting of images and text. We discuss several issues in this enterprise, including the transfer of basic ingredients to images and the hereby reinterpretation of language-specific concepts, matters of choice in the generation process, and the extended application potential in some typical scenarios. Moreover, we sketch our intended area of application, the identification of a particular object in the large visualization of mathematical proofs, which has some characteristic properties of each of these scenarios. 
Our achievement lies in extending the scope of techniques for generating referential descriptions through the incorporation of multimedia components and in enhancing the application areas for these techniques.",The graphical proof visualization component by which the proof tree representations depicted in this paper are produced has been designed and implemented by Stephan Hess. Work on this component is going on.,"Generating Referential Descriptions in Multimedia Environments. All known algorithms dedicated to the generation of referential descriptions use natural language alone to accomplish this communicative goal. Motivated by some limitations underlying these algorithms and the resulting restrictions in their scope, we attempt to extend the basic schema of these procedures to multimedia environments, that is, to descriptions consisting of images and text. We discuss several issues in this enterprise, including the transfer of basic ingredients to images and the hereby reinterpretation of language-specific concepts, matters of choice in the generation process, and the extended application potential in some typical scenarios. Moreover, we sketch our intended area of application, the identification of a particular object in the large visualization of mathematical proofs, which has some characteristic properties of each of these scenarios. Our achievement lies in extending the scope of techniques for generating referential descriptions through the incorporation of multimedia components and in enhancing the application areas for these techniques.",1997
lopatkova-kettnerova-2016-alternations,https://aclanthology.org/W16-3804,0,,,,,,,"Alternations: From Lexicon to Grammar And Back Again. An excellent example of a phenomenon bridging a lexicon and a grammar is provided by grammaticalized alternations (e.g., passivization, reflexivity, and reciprocity): these alternations represent productive grammatical processes which are, however, lexically determined. While grammaticalized alternations keep lexical meaning of verbs unchanged, they are usually characterized by various changes in their morphosyntactic structure. In this contribution, we demonstrate on the example of reciprocity and its representation in the valency lexicon of Czech verbs, VALLEX how a linguistic description of complex (and still systemic) changes characteristic of grammaticalized alternations can benefit from an integration of grammatical rules into a valency lexicon. In contrast to other types of grammaticalized alternations, reciprocity in Czech has received relatively little attention although it closely interacts with various linguistic phenomena (e.g., with light verbs, diatheses, and reflexivity).",{A}lternations: From Lexicon to Grammar And Back Again,"An excellent example of a phenomenon bridging a lexicon and a grammar is provided by grammaticalized alternations (e.g., passivization, reflexivity, and reciprocity): these alternations represent productive grammatical processes which are, however, lexically determined. While grammaticalized alternations keep lexical meaning of verbs unchanged, they are usually characterized by various changes in their morphosyntactic structure. In this contribution, we demonstrate on the example of reciprocity and its representation in the valency lexicon of Czech verbs, VALLEX how a linguistic description of complex (and still systemic) changes characteristic of grammaticalized alternations can benefit from an integration of grammatical rules into a valency lexicon. In contrast to other types of grammaticalized alternations, reciprocity in Czech has received relatively little attention although it closely interacts with various linguistic phenomena (e.g., with light verbs, diatheses, and reflexivity).",Alternations: From Lexicon to Grammar And Back Again,"An excellent example of a phenomenon bridging a lexicon and a grammar is provided by grammaticalized alternations (e.g., passivization, reflexivity, and reciprocity): these alternations represent productive grammatical processes which are, however, lexically determined. While grammaticalized alternations keep lexical meaning of verbs unchanged, they are usually characterized by various changes in their morphosyntactic structure. In this contribution, we demonstrate on the example of reciprocity and its representation in the valency lexicon of Czech verbs, VALLEX how a linguistic description of complex (and still systemic) changes characteristic of grammaticalized alternations can benefit from an integration of grammatical rules into a valency lexicon. In contrast to other types of grammaticalized alternations, reciprocity in Czech has received relatively little attention although it closely interacts with various linguistic phenomena (e.g., with light verbs, diatheses, and reflexivity).","The work on this project was partially supported by the grant GA 15-09979S of the Grant Agency of the Czech Republic. 
This work has been using language resources developed, stored, and distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2015071).","Alternations: From Lexicon to Grammar And Back Again. An excellent example of a phenomenon bridging a lexicon and a grammar is provided by grammaticalized alternations (e.g., passivization, reflexivity, and reciprocity): these alternations represent productive grammatical processes which are, however, lexically determined. While grammaticalized alternations keep lexical meaning of verbs unchanged, they are usually characterized by various changes in their morphosyntactic structure. In this contribution, we demonstrate on the example of reciprocity and its representation in the valency lexicon of Czech verbs, VALLEX how a linguistic description of complex (and still systemic) changes characteristic of grammaticalized alternations can benefit from an integration of grammatical rules into a valency lexicon. In contrast to other types of grammaticalized alternations, reciprocity in Czech has received relatively little attention although it closely interacts with various linguistic phenomena (e.g., with light verbs, diatheses, and reflexivity).",2016
loaiciga-wehrli-2015-rule,https://aclanthology.org/W15-2512,0,,,,,,,"Rule-Based Pronominal Anaphora Treatment for Machine Translation. In this paper we describe the rule-based MT system Its-2 developed at the University of Geneva and submitted for the shared task on pronoun translation organized within the Second DiscoMT Workshop. For improving pronoun translation, an Anaphora Resolution (AR) step based on Chomsky's Binding Theory and Hobbs' algorithm has been implemented. Since this strategy is currently restricted to 3rd person personal pronouns (i.e. they, it translated as elle, elles, il, ils only), absolute performance is affected. However, qualitative differences between the submitted system and a baseline without the AR procedure can be observed.",Rule-Based Pronominal Anaphora Treatment for Machine Translation,"In this paper we describe the rule-based MT system Its-2 developed at the University of Geneva and submitted for the shared task on pronoun translation organized within the Second DiscoMT Workshop. For improving pronoun translation, an Anaphora Resolution (AR) step based on Chomsky's Binding Theory and Hobbs' algorithm has been implemented. Since this strategy is currently restricted to 3rd person personal pronouns (i.e. they, it translated as elle, elles, il, ils only), absolute performance is affected. However, qualitative differences between the submitted system and a baseline without the AR procedure can be observed.",Rule-Based Pronominal Anaphora Treatment for Machine Translation,"In this paper we describe the rule-based MT system Its-2 developed at the University of Geneva and submitted for the shared task on pronoun translation organized within the Second DiscoMT Workshop. For improving pronoun translation, an Anaphora Resolution (AR) step based on Chomsky's Binding Theory and Hobbs' algorithm has been implemented. Since this strategy is currently restricted to 3rd person personal pronouns (i.e. they, it translated as elle, elles, il, ils only), absolute performance is affected. However, qualitative differences between the submitted system and a baseline without the AR procedure can be observed.",,"Rule-Based Pronominal Anaphora Treatment for Machine Translation. In this paper we describe the rule-based MT system Its-2 developed at the University of Geneva and submitted for the shared task on pronoun translation organized within the Second DiscoMT Workshop. For improving pronoun translation, an Anaphora Resolution (AR) step based on Chomsky's Binding Theory and Hobbs' algorithm has been implemented. Since this strategy is currently restricted to 3rd person personal pronouns (i.e. they, it translated as elle, elles, il, ils only), absolute performance is affected. However, qualitative differences between the submitted system and a baseline without the AR procedure can be observed.",2015
harriett-1983-tools,https://aclanthology.org/1983.tc-1.2,0,,,,,,,"The tools for the job: an overview. Today, 10 November 1983, is just fifty-two days away from 1984. If you have read George Orwell's famous novel, then you will know that he predicted the invasion of video screens which would monitor everything you do and say, in order to ensure that everyone was loyal to the State. If he walked round offices and homes today he could be forgiven for believing that his prediction, made back in 1949, had already come true. But he wasn't far from the truth, was he? If a video screen is not attached to every product, then a microchip is certain to be incorporated. Even filing systems use microprocessors today, so that at the touch of a button the document you require, one out of thousands, appears in front of you without you having to search for it.
Technological development is marvellous, if used for everyone's benefit. But I wonder how many of you will believe that the developments in speech recognition and speech synthesis are beneficial to you. At the Telecoms 83 exhibition held in Geneva two weeks ago, the Japanese company, NEC, showed off its world leadership in speech technology by demonstrating a research model of an automatic interpreting system. A conversation was held in Japanese and English, and another in English and Spanish; both were taking place as if the language barrier just didn't exist. At the moment only around 150 words are utilised, but it is not simply word recognition: it is continuous speech recognition with sentences being composed which are almost grammatically correct. NEC is also researching a speaker-independent system which can recognise words spoken by a variety of people, with the aim of producing an operational automatic interpreting system by the turn of the century.",The tools for the job: an overview,"Today, 10 November 1983, is just fifty-two days away from 1984. If you have read George Orwell's famous novel, then you will know that he predicted the invasion of video screens which would monitor everything you do and say, in order to ensure that everyone was loyal to the State. If he walked round offices and homes today he could be forgiven for believing that his prediction, made back in 1949, had already come true. But he wasn't far from the truth, was he? If a video screen is not attached to every product, then a microchip is certain to be incorporated. Even filing systems use microprocessors today, so that at the touch of a button the document you require, one out of thousands, appears in front of you without you having to search for it.
Technological development is marvellous, if used for everyone's benefit. But I wonder how many of you will believe that the developments in speech recognition and speech synthesis are beneficial to you. At the Telecoms 83 exhibition held in Geneva two weeks ago, the Japanese company, NEC, showed off its world leadership in speech technology by demonstrating a research model of an automatic interpreting system. A conversation was held in Japanese and English, and another in English and Spanish; both were taking place as if the language barrier just didn't exist. At the moment only around 150 words are utilised, but it is not simply word recognition: it is continuous speech recognition with sentences being composed which are almost grammatically correct. NEC is also researching a speaker-independent system which can recognise words spoken by a variety of people, with the aim of producing an operational automatic interpreting system by the turn of the century.",The tools for the job: an overview,"Today, 10 November 1983, is just fifty-two days away from 1984. If you have read George Orwell's famous novel, then you will know that he predicted the invasion of video screens which would monitor everything you do and say, in order to ensure that everyone was loyal to the State. If he walked round offices and homes today he could be forgiven for believing that his prediction, made back in 1949, had already come true. But he wasn't far from the truth, was he? If a video screen is not attached to every product, then a microchip is certain to be incorporated. Even filing systems use microprocessors today, so that at the touch of a button the document you require, one out of thousands, appears in front of you without you having to search for it.
Technological development is marvellous, if used for everyone's benefit. But I wonder how many of you will believe that the developments in speech recognition and speech synthesis are beneficial to you. At the Telecoms 83 exhibition held in Geneva two weeks ago, the Japanese company, NEC, showed off its world leadership in speech technology by demonstrating a research model of an automatic interpreting system. A conversation was held in Japanese and English, and another in English and Spanish; both were taking place as if the language barrier just didn't exist. At the moment only around 150 words are utilised, but it is not simply word recognition: it is continuous speech recognition with sentences being composed which are almost grammatically correct. NEC is also researching a speaker-independent system which can recognise words spoken by a variety of people, with the aim of producing an operational automatic interpreting system by the turn of the century.",,"The tools for the job: an overview. Today, 10 November 1983, is just fifty-two days away from 1984. If you have read George Orwell's famous novel, then you will know that he predicted the invasion of video screens which would monitor everything you do and say, in order to ensure that everyone was loyal to the State. If he walked round offices and homes today he could be forgiven for believing that his prediction, made back in 1949, had already come true. But he wasn't far from the truth, was he? If a video screen is not attached to every product, then a microchip is certain to be incorporated. Even filing systems use microprocessors today, so that at the touch of a button the document you require, one out of thousands, appears in front of you without you having to search for it.
Technological development is marvellous, if used for everyone's benefit. But I wonder how many of you will believe that the developments in speech recognition and speech synthesis are beneficial to you. At the Telecoms 83 exhibition held in Geneva two weeks ago, the Japanese company, NEC, showed off its world leadership in speech technology by demonstrating a research model of an automatic interpreting system. A conversation was held in Japanese and English, and another in English and Spanish; both were taking place as if the language barrier just didn't exist. At the moment only around 150 words are utilised, but it is not simply word recognition: it is continuous speech recognition with sentences being composed which are almost grammatically correct. NEC is also researching a speaker-independent system which can recognise words spoken by a variety of people, with the aim of producing an operational automatic interpreting system by the turn of the century.",1983
munot-nenkova-2019-emotion,https://aclanthology.org/N19-3003,0,,,,,,,"Emotion Impacts Speech Recognition Performance. It has been established that the performance of speech recognition systems depends on multiple factors including the lexical content, speaker identity and dialect. Here we use three English datasets of acted emotion to demonstrate that emotional content also impacts the performance of commercial systems. On two of the corpora, emotion is a bigger contributor to recognition errors than speaker identity and on two, neutral speech is recognized considerably better than emotional speech. We further evaluate the commercial systems on spontaneous interactions that contain portions of emotional speech. We propose and validate on the acted datasets, a method that allows us to evaluate the overall impact of emotion on recognition even when manual transcripts are not available. Using this method, we show that emotion in natural spontaneous dialogue is a less prominent but still significant factor in recognition accuracy.",Emotion Impacts Speech Recognition Performance,"It has been established that the performance of speech recognition systems depends on multiple factors including the lexical content, speaker identity and dialect. Here we use three English datasets of acted emotion to demonstrate that emotional content also impacts the performance of commercial systems. On two of the corpora, emotion is a bigger contributor to recognition errors than speaker identity and on two, neutral speech is recognized considerably better than emotional speech. We further evaluate the commercial systems on spontaneous interactions that contain portions of emotional speech. We propose and validate on the acted datasets, a method that allows us to evaluate the overall impact of emotion on recognition even when manual transcripts are not available. Using this method, we show that emotion in natural spontaneous dialogue is a less prominent but still significant factor in recognition accuracy.",Emotion Impacts Speech Recognition Performance,"It has been established that the performance of speech recognition systems depends on multiple factors including the lexical content, speaker identity and dialect. Here we use three English datasets of acted emotion to demonstrate that emotional content also impacts the performance of commercial systems. On two of the corpora, emotion is a bigger contributor to recognition errors than speaker identity and on two, neutral speech is recognized considerably better than emotional speech. We further evaluate the commercial systems on spontaneous interactions that contain portions of emotional speech. We propose and validate on the acted datasets, a method that allows us to evaluate the overall impact of emotion on recognition even when manual transcripts are not available. Using this method, we show that emotion in natural spontaneous dialogue is a less prominent but still significant factor in recognition accuracy.",,"Emotion Impacts Speech Recognition Performance. It has been established that the performance of speech recognition systems depends on multiple factors including the lexical content, speaker identity and dialect. Here we use three English datasets of acted emotion to demonstrate that emotional content also impacts the performance of commercial systems. On two of the corpora, emotion is a bigger contributor to recognition errors than speaker identity and on two, neutral speech is recognized considerably better than emotional speech. 
We further evaluate the commercial systems on spontaneous interactions that contain portions of emotional speech. We propose and validate on the acted datasets, a method that allows us to evaluate the overall impact of emotion on recognition even when manual transcripts are not available. Using this method, we show that emotion in natural spontaneous dialogue is a less prominent but still significant factor in recognition accuracy.",2019
johnson-zhang-2017-deep,https://aclanthology.org/P17-1052,0,,,,,,,"Deep Pyramid Convolutional Neural Networks for Text Categorization. This paper proposes a low-complexity word-level deep convolutional neural network (CNN) architecture for text categorization that can efficiently represent long-range associations in text. In the literature, several deep and complex neural networks have been proposed for this task, assuming availability of relatively large amounts of training data. However, the associated computational complexity increases as the networks go deeper, which poses serious challenges in practical applications. Moreover, it was shown recently that shallow word-level CNNs are more accurate and much faster than the state-of-the-art very deep nets such as character-level CNNs even in the setting of large training data. Motivated by these findings, we carefully studied deepening of word-level CNNs to capture global representations of text, and found a simple network architecture with which the best accuracy can be obtained by increasing the network depth without increasing computational cost by much. We call it deep pyramid CNN. The proposed model with 15 weight layers outperforms the previous best models on six benchmark datasets for sentiment classification and topic categorization.",Deep Pyramid Convolutional Neural Networks for Text Categorization,"This paper proposes a low-complexity word-level deep convolutional neural network (CNN) architecture for text categorization that can efficiently represent long-range associations in text. In the literature, several deep and complex neural networks have been proposed for this task, assuming availability of relatively large amounts of training data. However, the associated computational complexity increases as the networks go deeper, which poses serious challenges in practical applications. Moreover, it was shown recently that shallow word-level CNNs are more accurate and much faster than the state-of-the-art very deep nets such as character-level CNNs even in the setting of large training data. Motivated by these findings, we carefully studied deepening of word-level CNNs to capture global representations of text, and found a simple network architecture with which the best accuracy can be obtained by increasing the network depth without increasing computational cost by much. We call it deep pyramid CNN. The proposed model with 15 weight layers outperforms the previous best models on six benchmark datasets for sentiment classification and topic categorization.",Deep Pyramid Convolutional Neural Networks for Text Categorization,"This paper proposes a low-complexity word-level deep convolutional neural network (CNN) architecture for text categorization that can efficiently represent long-range associations in text. In the literature, several deep and complex neural networks have been proposed for this task, assuming availability of relatively large amounts of training data. However, the associated computational complexity increases as the networks go deeper, which poses serious challenges in practical applications. Moreover, it was shown recently that shallow word-level CNNs are more accurate and much faster than the state-of-the-art very deep nets such as character-level CNNs even in the setting of large training data. 
Motivated by these findings, we carefully studied deepening of word-level CNNs to capture global representations of text, and found a simple network architecture with which the best accuracy can be obtained by increasing the network depth without increasing computational cost by much. We call it deep pyramid CNN. The proposed model with 15 weight layers outperforms the previous best models on six benchmark datasets for sentiment classification and topic categorization.",,"Deep Pyramid Convolutional Neural Networks for Text Categorization. This paper proposes a low-complexity word-level deep convolutional neural network (CNN) architecture for text categorization that can efficiently represent long-range associations in text. In the literature, several deep and complex neural networks have been proposed for this task, assuming availability of relatively large amounts of training data. However, the associated computational complexity increases as the networks go deeper, which poses serious challenges in practical applications. Moreover, it was shown recently that shallow word-level CNNs are more accurate and much faster than the state-of-the-art very deep nets such as character-level CNNs even in the setting of large training data. Motivated by these findings, we carefully studied deepening of word-level CNNs to capture global representations of text, and found a simple network architecture with which the best accuracy can be obtained by increasing the network depth without increasing computational cost by much. We call it deep pyramid CNN. The proposed model with 15 weight layers outperforms the previous best models on six benchmark datasets for sentiment classification and topic categorization.",2017
mayfield-finin-2012-evaluating,https://aclanthology.org/W12-3013,0,,,,,,,"Evaluating the Quality of a Knowledge Base Populated from Text. The steady progress of information extraction systems has been helped by sound methodologies for evaluating their performance in controlled experiments. Annual events like MUC, ACE and TAC have developed evaluation approaches enabling researchers to score and rank their systems relative to reference results. Yet these evaluations have only assessed component technologies needed by a knowledge base population system; none has required the construction of a knowledge base that is then evaluated directly. We describe an approach to the direct evaluation of a knowledge base and an instantiation that will be used in a 2012 TAC Knowledge Base Population track.",Evaluating the Quality of a Knowledge Base Populated from Text,"The steady progress of information extraction systems has been helped by sound methodologies for evaluating their performance in controlled experiments. Annual events like MUC, ACE and TAC have developed evaluation approaches enabling researchers to score and rank their systems relative to reference results. Yet these evaluations have only assessed component technologies needed by a knowledge base population system; none has required the construction of a knowledge base that is then evaluated directly. We describe an approach to the direct evaluation of a knowledge base and an instantiation that will be used in a 2012 TAC Knowledge Base Population track.",Evaluating the Quality of a Knowledge Base Populated from Text,"The steady progress of information extraction systems has been helped by sound methodologies for evaluating their performance in controlled experiments. Annual events like MUC, ACE and TAC have developed evaluation approaches enabling researchers to score and rank their systems relative to reference results. Yet these evaluations have only assessed component technologies needed by a knowledge base population system; none has required the construction of a knowledge base that is then evaluated directly. We describe an approach to the direct evaluation of a knowledge base and an instantiation that will be used in a 2012 TAC Knowledge Base Population track.",,"Evaluating the Quality of a Knowledge Base Populated from Text. The steady progress of information extraction systems has been helped by sound methodologies for evaluating their performance in controlled experiments. Annual events like MUC, ACE and TAC have developed evaluation approaches enabling researchers to score and rank their systems relative to reference results. Yet these evaluations have only assessed component technologies needed by a knowledge base population system; none has required the construction of a knowledge base that is then evaluated directly. We describe an approach to the direct evaluation of a knowledge base and an instantiation that will be used in a 2012 TAC Knowledge Base Population track.",2012
hajicova-kucerova-2002-argument,http://www.lrec-conf.org/proceedings/lrec2002/pdf/63.pdf,0,,,,,,,"Argument/Valency Structure in PropBank, LCS Database and Prague Dependency Treebank: A Comparative Pilot Study. ","Argument/Valency Structure in {P}rop{B}ank, {LCS} Database and {P}rague Dependency Treebank: A Comparative Pilot Study",,"Argument/Valency Structure in PropBank, LCS Database and Prague Dependency Treebank: A Comparative Pilot Study",,,"Argument/Valency Structure in PropBank, LCS Database and Prague Dependency Treebank: A Comparative Pilot Study. ",2002
de-amaral-2013-rule,https://aclanthology.org/R13-2009,0,,,,,,,"Rule-based Named Entity Extraction For Ontology Population. Currently, text analysis techniques such as named entity recognition rely mainly on ontologies which represent the semantics of an application domain. To build such an ontology from specialized texts, this article presents a tool which detects proper names, locations and dates from texts by using manually written linguistic rules. The most challenging task is to extract not only entities but also interpret the information and adapt in a specific corpus in French.",Rule-based Named Entity Extraction For Ontology Population,"Currently, text analysis techniques such as named entity recognition rely mainly on ontologies which represent the semantics of an application domain. To build such an ontology from specialized texts, this article presents a tool which detects proper names, locations and dates from texts by using manually written linguistic rules. The most challenging task is to extract not only entities but also interpret the information and adapt in a specific corpus in French.",Rule-based Named Entity Extraction For Ontology Population,"Currently, text analysis techniques such as named entity recognition rely mainly on ontologies which represent the semantics of an application domain. To build such an ontology from specialized texts, this article presents a tool which detects proper names, locations and dates from texts by using manually written linguistic rules. The most challenging task is to extract not only entities but also interpret the information and adapt in a specific corpus in French.",,"Rule-based Named Entity Extraction For Ontology Population. Currently, text analysis techniques such as named entity recognition rely mainly on ontologies which represent the semantics of an application domain. To build such an ontology from specialized texts, this article presents a tool which detects proper names, locations and dates from texts by using manually written linguistic rules. The most challenging task is to extract not only entities but also interpret the information and adapt in a specific corpus in French.",2013
li-etal-2018-joint-learning,https://aclanthology.org/K18-2006,0,,,,,,,"Joint Learning of POS and Dependencies for Multilingual Universal Dependency Parsing. This paper describes the system of team LeisureX in the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. Our system predicts the part-of-speech tag and dependency tree jointly. For the basic tasks, including tokenization, lemmatization and morphology prediction, we employ the official baseline model (UDPipe). To train the low-resource languages, we adopt a sampling method based on other rich-resource languages. Our system achieves a macro-average of 68.31% LAS F1 score, with an improvement of 2.51% compared with the UDPipe.",Joint Learning of {POS} and Dependencies for Multilingual {U}niversal {D}ependency Parsing,"This paper describes the system of team LeisureX in the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. Our system predicts the part-of-speech tag and dependency tree jointly. For the basic tasks, including tokenization, lemmatization and morphology prediction, we employ the official baseline model (UDPipe). To train the low-resource languages, we adopt a sampling method based on other rich-resource languages. Our system achieves a macro-average of 68.31% LAS F1 score, with an improvement of 2.51% compared with the UDPipe.",Joint Learning of POS and Dependencies for Multilingual Universal Dependency Parsing,"This paper describes the system of team LeisureX in the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. Our system predicts the part-of-speech tag and dependency tree jointly. For the basic tasks, including tokenization, lemmatization and morphology prediction, we employ the official baseline model (UDPipe). To train the low-resource languages, we adopt a sampling method based on other rich-resource languages. Our system achieves a macro-average of 68.31% LAS F1 score, with an improvement of 2.51% compared with the UDPipe.",,"Joint Learning of POS and Dependencies for Multilingual Universal Dependency Parsing. This paper describes the system of team LeisureX in the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. Our system predicts the part-of-speech tag and dependency tree jointly. For the basic tasks, including tokenization, lemmatization and morphology prediction, we employ the official baseline model (UDPipe). To train the low-resource languages, we adopt a sampling method based on other rich-resource languages. Our system achieves a macro-average of 68.31% LAS F1 score, with an improvement of 2.51% compared with the UDPipe.",2018
tam-etal-2007-bilingual,https://aclanthology.org/P07-1066,0,,,,,,,"Bilingual-LSA Based LM Adaptation for Spoken Language Translation. We propose a novel approach to crosslingual language model (LM) adaptation based on bilingual Latent Semantic Analysis (bLSA). A bLSA model is introduced which enables latent topic distributions to be efficiently transferred across languages by enforcing a one-to-one topic correspondence during training. Using the proposed bLSA framework crosslingual LM adaptation can be performed by, first, inferring the topic posterior distribution of the source text and then applying the inferred distribution to the target language N-gram LM via marginal adaptation. The proposed framework also enables rapid bootstrapping of LSA models for new languages based on a source LSA model from another language. On Chinese to English speech and text translation the proposed bLSA framework successfully reduced word perplexity of the English LM by over 27% for a unigram LM and up to 13.6% for a 4-gram LM. Furthermore, the proposed approach consistently improved machine translation quality on both speech and text based adaptation.",Bilingual-{LSA} Based {LM} Adaptation for Spoken Language Translation,"We propose a novel approach to crosslingual language model (LM) adaptation based on bilingual Latent Semantic Analysis (bLSA). A bLSA model is introduced which enables latent topic distributions to be efficiently transferred across languages by enforcing a one-to-one topic correspondence during training. Using the proposed bLSA framework crosslingual LM adaptation can be performed by, first, inferring the topic posterior distribution of the source text and then applying the inferred distribution to the target language N-gram LM via marginal adaptation. The proposed framework also enables rapid bootstrapping of LSA models for new languages based on a source LSA model from another language. On Chinese to English speech and text translation the proposed bLSA framework successfully reduced word perplexity of the English LM by over 27% for a unigram LM and up to 13.6% for a 4-gram LM. Furthermore, the proposed approach consistently improved machine translation quality on both speech and text based adaptation.",Bilingual-LSA Based LM Adaptation for Spoken Language Translation,"We propose a novel approach to crosslingual language model (LM) adaptation based on bilingual Latent Semantic Analysis (bLSA). A bLSA model is introduced which enables latent topic distributions to be efficiently transferred across languages by enforcing a one-to-one topic correspondence during training. Using the proposed bLSA framework crosslingual LM adaptation can be performed by, first, inferring the topic posterior distribution of the source text and then applying the inferred distribution to the target language N-gram LM via marginal adaptation. The proposed framework also enables rapid bootstrapping of LSA models for new languages based on a source LSA model from another language. On Chinese to English speech and text translation the proposed bLSA framework successfully reduced word perplexity of the English LM by over 27% for a unigram LM and up to 13.6% for a 4-gram LM. Furthermore, the proposed approach consistently improved machine translation quality on both speech and text based adaptation.","This work is partly supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR0011-06-2-0001. 
Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.","Bilingual-LSA Based LM Adaptation for Spoken Language Translation. We propose a novel approach to crosslingual language model (LM) adaptation based on bilingual Latent Semantic Analysis (bLSA). A bLSA model is introduced which enables latent topic distributions to be efficiently transferred across languages by enforcing a one-to-one topic correspondence during training. Using the proposed bLSA framework crosslingual LM adaptation can be performed by, first, inferring the topic posterior distribution of the source text and then applying the inferred distribution to the target language N-gram LM via marginal adaptation. The proposed framework also enables rapid bootstrapping of LSA models for new languages based on a source LSA model from another language. On Chinese to English speech and text translation the proposed bLSA framework successfully reduced word perplexity of the English LM by over 27% for a unigram LM and up to 13.6% for a 4-gram LM. Furthermore, the proposed approach consistently improved machine translation quality on both speech and text based adaptation.",2007
liu-etal-2017-using-context,https://aclanthology.org/D17-1231,0,,,,,,,"Using Context Information for Dialog Act Classification in DNN Framework. Previous work on dialog act (DA) classification has investigated different methods, such as hidden Markov models, maximum entropy, conditional random fields, graphical models, and support vector machines. A few recent studies explored using deep learning neural networks for DA classification, however, it is not clear yet what is the best method for using dialog context or DA sequential information, and how much gain it brings. This paper proposes several ways of using context information for DA classification, all in the deep learning framework. The baseline system classifies each utterance using the convolutional neural networks (CNN). Our proposed methods include using hierarchical models (recurrent neural networks (RNN) or CNN) for DA sequence tagging where the bottom layer takes the sentence CNN representation as input, concatenating predictions from the previous utterances with the CNN vector for classification, and performing sequence decoding based on the predictions from the sentence CNN model. We conduct thorough experiments and comparisons on the Switchboard corpus, demonstrate that incorporating context information significantly improves DA classification, and show that we achieve new state-of-the-art performance for this task.",Using Context Information for Dialog Act Classification in {DNN} Framework,"Previous work on dialog act (DA) classification has investigated different methods, such as hidden Markov models, maximum entropy, conditional random fields, graphical models, and support vector machines. A few recent studies explored using deep learning neural networks for DA classification, however, it is not clear yet what is the best method for using dialog context or DA sequential information, and how much gain it brings. This paper proposes several ways of using context information for DA classification, all in the deep learning framework. The baseline system classifies each utterance using the convolutional neural networks (CNN). Our proposed methods include using hierarchical models (recurrent neural networks (RNN) or CNN) for DA sequence tagging where the bottom layer takes the sentence CNN representation as input, concatenating predictions from the previous utterances with the CNN vector for classification, and performing sequence decoding based on the predictions from the sentence CNN model. We conduct thorough experiments and comparisons on the Switchboard corpus, demonstrate that incorporating context information significantly improves DA classification, and show that we achieve new state-of-the-art performance for this task.",Using Context Information for Dialog Act Classification in DNN Framework,"Previous work on dialog act (DA) classification has investigated different methods, such as hidden Markov models, maximum entropy, conditional random fields, graphical models, and support vector machines. A few recent studies explored using deep learning neural networks for DA classification, however, it is not clear yet what is the best method for using dialog context or DA sequential information, and how much gain it brings. This paper proposes several ways of using context information for DA classification, all in the deep learning framework. The baseline system classifies each utterance using the convolutional neural networks (CNN). 
Our proposed methods include using hierarchical models (recurrent neural networks (RNN) or CNN) for DA sequence tagging where the bottom layer takes the sentence CNN representation as input, concatenating predictions from the previous utterances with the CNN vector for classification, and performing sequence decoding based on the predictions from the sentence CNN model. We conduct thorough experiments and comparisons on the Switchboard corpus, demonstrate that incorporating context information significantly improves DA classification, and show that we achieve new state-of-the-art performance for this task.","The authors thank Yandi Xia for preparing the Switchboard data, Xian Qian, Antoine Raux and Benoit Dumoulin for various discussions.","Using Context Information for Dialog Act Classification in DNN Framework. Previous work on dialog act (DA) classification has investigated different methods, such as hidden Markov models, maximum entropy, conditional random fields, graphical models, and support vector machines. A few recent studies explored using deep learning neural networks for DA classification, however, it is not clear yet what is the best method for using dialog context or DA sequential information, and how much gain it brings. This paper proposes several ways of using context information for DA classification, all in the deep learning framework. The baseline system classifies each utterance using the convolutional neural networks (CNN). Our proposed methods include using hierarchical models (recurrent neural networks (RNN) or CNN) for DA sequence tagging where the bottom layer takes the sentence CNN representation as input, concatenating predictions from the previous utterances with the CNN vector for classification, and performing sequence decoding based on the predictions from the sentence CNN model. We conduct thorough experiments and comparisons on the Switchboard corpus, demonstrate that incorporating context information significantly improves DA classification, and show that we achieve new state-of-the-art performance for this task.",2017
stilo-velardi-2017-hashtag,https://aclanthology.org/J17-1005,0,,,,,,,"Hashtag Sense Clustering Based on Temporal Similarity. Hashtags are creative labels used in micro-blogs to characterize the topic of a message/discussion. Regardless of the use for which they were originally intended, hashtags cannot be used as a means to cluster messages with similar content. First, because hashtags are created in a spontaneous and highly dynamic way by users in multiple languages, the same topic can be associated with different hashtags, and conversely, the same hashtag may refer to different topics in different time periods. Second, contrary to common words, hashtag disambiguation is complicated by the fact that no sense catalogs (e.g., Wikipedia or WordNet) are available; and, furthermore, hashtag labels are difficult to analyze, as they often consist of acronyms, concatenated words, and so forth. A common way to determine the meaning of hashtags has been to analyze their context, but, as we have just pointed out, hashtags can have multiple and variable meanings. In this article, we propose a temporal sense clustering algorithm based on the idea that semantically related hashtags have similar and synchronous usage patterns.",Hashtag Sense Clustering Based on Temporal Similarity,"Hashtags are creative labels used in micro-blogs to characterize the topic of a message/discussion. Regardless of the use for which they were originally intended, hashtags cannot be used as a means to cluster messages with similar content. First, because hashtags are created in a spontaneous and highly dynamic way by users in multiple languages, the same topic can be associated with different hashtags, and conversely, the same hashtag may refer to different topics in different time periods. Second, contrary to common words, hashtag disambiguation is complicated by the fact that no sense catalogs (e.g., Wikipedia or WordNet) are available; and, furthermore, hashtag labels are difficult to analyze, as they often consist of acronyms, concatenated words, and so forth. A common way to determine the meaning of hashtags has been to analyze their context, but, as we have just pointed out, hashtags can have multiple and variable meanings. In this article, we propose a temporal sense clustering algorithm based on the idea that semantically related hashtags have similar and synchronous usage patterns.",Hashtag Sense Clustering Based on Temporal Similarity,"Hashtags are creative labels used in micro-blogs to characterize the topic of a message/discussion. Regardless of the use for which they were originally intended, hashtags cannot be used as a means to cluster messages with similar content. First, because hashtags are created in a spontaneous and highly dynamic way by users in multiple languages, the same topic can be associated with different hashtags, and conversely, the same hashtag may refer to different topics in different time periods. Second, contrary to common words, hashtag disambiguation is complicated by the fact that no sense catalogs (e.g., Wikipedia or WordNet) are available; and, furthermore, hashtag labels are difficult to analyze, as they often consist of acronyms, concatenated words, and so forth. A common way to determine the meaning of hashtags has been to analyze their context, but, as we have just pointed out, hashtags can have multiple and variable meanings. 
In this article, we propose a temporal sense clustering algorithm based on the idea that semantically related hashtags have similar and synchronous usage patterns.",,"Hashtag Sense Clustering Based on Temporal Similarity. Hashtags are creative labels used in micro-blogs to characterize the topic of a message/discussion. Regardless of the use for which they were originally intended, hashtags cannot be used as a means to cluster messages with similar content. First, because hashtags are created in a spontaneous and highly dynamic way by users in multiple languages, the same topic can be associated with different hashtags, and conversely, the same hashtag may refer to different topics in different time periods. Second, contrary to common words, hashtag disambiguation is complicated by the fact that no sense catalogs (e.g., Wikipedia or WordNet) are available; and, furthermore, hashtag labels are difficult to analyze, as they often consist of acronyms, concatenated words, and so forth. A common way to determine the meaning of hashtags has been to analyze their context, but, as we have just pointed out, hashtags can have multiple and variable meanings. In this article, we propose a temporal sense clustering algorithm based on the idea that semantically related hashtags have similar and synchronous usage patterns.",2017
doan-etal-2021-phomt,https://aclanthology.org/2021.emnlp-main.369,0,,,,,,,"PhoMT: A High-Quality and Large-Scale Benchmark Dataset for Vietnamese-English Machine Translation. We introduce a high-quality and large-scale Vietnamese-English parallel dataset of 3.02M sentence pairs, which is 2.9M pairs larger than the benchmark Vietnamese-English machine translation corpus IWSLT15. We conduct experiments comparing strong neural baselines and well-known automatic translation engines on our dataset and find that in both automatic and human evaluations: the best performance is obtained by fine-tuning the pretrained sequence-to-sequence denoising autoencoder mBART. To our best knowledge, this is the first large-scale Vietnamese-English machine translation study. We hope our publicly available dataset and study can serve as a starting point for future research and applications on Vietnamese-English machine translation.",{P}ho{MT}: A High-Quality and Large-Scale Benchmark Dataset for {V}ietnamese-{E}nglish Machine Translation,"We introduce a high-quality and large-scale Vietnamese-English parallel dataset of 3.02M sentence pairs, which is 2.9M pairs larger than the benchmark Vietnamese-English machine translation corpus IWSLT15. We conduct experiments comparing strong neural baselines and well-known automatic translation engines on our dataset and find that in both automatic and human evaluations: the best performance is obtained by fine-tuning the pretrained sequence-to-sequence denoising autoencoder mBART. To our best knowledge, this is the first large-scale Vietnamese-English machine translation study. We hope our publicly available dataset and study can serve as a starting point for future research and applications on Vietnamese-English machine translation.",PhoMT: A High-Quality and Large-Scale Benchmark Dataset for Vietnamese-English Machine Translation,"We introduce a high-quality and large-scale Vietnamese-English parallel dataset of 3.02M sentence pairs, which is 2.9M pairs larger than the benchmark Vietnamese-English machine translation corpus IWSLT15. We conduct experiments comparing strong neural baselines and well-known automatic translation engines on our dataset and find that in both automatic and human evaluations: the best performance is obtained by fine-tuning the pretrained sequence-to-sequence denoising autoencoder mBART. To our best knowledge, this is the first large-scale Vietnamese-English machine translation study. We hope our publicly available dataset and study can serve as a starting point for future research and applications on Vietnamese-English machine translation.",The authors would like to thank the anonymous reviewers for their helpful feedback.,"PhoMT: A High-Quality and Large-Scale Benchmark Dataset for Vietnamese-English Machine Translation. We introduce a high-quality and large-scale Vietnamese-English parallel dataset of 3.02M sentence pairs, which is 2.9M pairs larger than the benchmark Vietnamese-English machine translation corpus IWSLT15. We conduct experiments comparing strong neural baselines and well-known automatic translation engines on our dataset and find that in both automatic and human evaluations: the best performance is obtained by fine-tuning the pretrained sequence-to-sequence denoising autoencoder mBART. To our best knowledge, this is the first large-scale Vietnamese-English machine translation study. 
We hope our publicly available dataset and study can serve as a starting point for future research and applications on Vietnamese-English machine translation.",2021
lopopolo-etal-2019-dependency,https://aclanthology.org/W19-2909,0,,,,,,,"Dependency Parsing with your Eyes: Dependency Structure Predicts Eye Regressions During Reading. Backward saccades during reading have been hypothesized to be involved in structural reanalysis, or to be related to the level of text difficulty. We test the hypothesis that backward saccades are involved in online syntactic analysis. If this is the case we expect that saccades will coincide, at least partially, with the edges of the relations computed by a dependency parser. In order to test this, we analyzed a large eye-tracking dataset collected while 102 participants read three short narrative texts. Our results show a relation between backward saccades and the syntactic structure of sentences.",Dependency Parsing with your Eyes: Dependency Structure Predicts Eye Regressions During Reading,"Backward saccades during reading have been hypothesized to be involved in structural reanalysis, or to be related to the level of text difficulty. We test the hypothesis that backward saccades are involved in online syntactic analysis. If this is the case we expect that saccades will coincide, at least partially, with the edges of the relations computed by a dependency parser. In order to test this, we analyzed a large eye-tracking dataset collected while 102 participants read three short narrative texts. Our results show a relation between backward saccades and the syntactic structure of sentences.",Dependency Parsing with your Eyes: Dependency Structure Predicts Eye Regressions During Reading,"Backward saccades during reading have been hypothesized to be involved in structural reanalysis, or to be related to the level of text difficulty. We test the hypothesis that backward saccades are involved in online syntactic analysis. If this is the case we expect that saccades will coincide, at least partially, with the edges of the relations computed by a dependency parser. In order to test this, we analyzed a large eye-tracking dataset collected while 102 participants read three short narrative texts. Our results show a relation between backward saccades and the syntactic structure of sentences.",The work presented here was funded by the Netherlands Organisation for Scientific Research (NWO) Gravitation Grant 024.001.006 to the Language in Interaction Consortium. The authors thank Marloes Mak for providing the eye-tracker data and help in the analyses.,"Dependency Parsing with your Eyes: Dependency Structure Predicts Eye Regressions During Reading. Backward saccades during reading have been hypothesized to be involved in structural reanalysis, or to be related to the level of text difficulty. We test the hypothesis that backward saccades are involved in online syntactic analysis. If this is the case we expect that saccades will coincide, at least partially, with the edges of the relations computed by a dependency parser. In order to test this, we analyzed a large eye-tracking dataset collected while 102 participants read three short narrative texts. Our results show a relation between backward saccades and the syntactic structure of sentences.",2019
forcada-2003-45,https://aclanthology.org/2003.mtsummit-tttt.2,0,,,,,,,A 45-hour computers in translation course. This paper describes how a 45-hour Computers in Translation course is actually taught to 3rd-year translation students at the University of Alacant; the course described started in year 1995-1996 and has undergone substantial redesign until its present form. It is hoped that this description may be of use to instructors who are forced to teach a similar subject in such a small slot of time and need some design guidelines.,A 45-hour computers in translation course,This paper describes how a 45-hour Computers in Translation course is actually taught to 3rd-year translation students at the University of Alacant; the course described started in year 1995-1996 and has undergone substantial redesign until its present form. It is hoped that this description may be of use to instructors who are forced to teach a similar subject in such a small slot of time and need some design guidelines.,A 45-hour computers in translation course,This paper describes how a 45-hour Computers in Translation course is actually taught to 3rd-year translation students at the University of Alacant; the course described started in year 1995-1996 and has undergone substantial redesign until its present form. It is hoped that this description may be of use to instructors who are forced to teach a similar subject in such a small slot of time and need some design guidelines.,Acknowledgements: I thank Andy Way for comments and suggestions on the manuscript.,A 45-hour computers in translation course. This paper describes how a 45-hour Computers in Translation course is actually taught to 3rd-year translation students at the University of Alacant; the course described started in year 1995-1996 and has undergone substantial redesign until its present form. It is hoped that this description may be of use to instructors who are forced to teach a similar subject in such a small slot of time and need some design guidelines.,2003
savoldi-etal-2021-gender,https://aclanthology.org/2021.tacl-1.51,1,,,,gender_equality,,,"Gender Bias in Machine Translation. Machine translation (MT) technology has facilitated our daily tasks by providing accessible shortcuts for gathering, processing, and communicating information. However, it can suffer from biases that harm users and society at large. As a relatively new field of inquiry, studies of gender bias in MT still lack cohesion. This advocates for a unified framework to ease future research. To this end, we: i) critically review current conceptualizations of bias in light of theoretical insights from related disciplines, ii) summarize previous analyses aimed at assessing gender bias in MT, iii) discuss the mitigating strategies proposed so far, and iv) point toward potential directions for future work.",Gender Bias in Machine Translation,"Machine translation (MT) technology has facilitated our daily tasks by providing accessible shortcuts for gathering, processing, and communicating information. However, it can suffer from biases that harm users and society at large. As a relatively new field of inquiry, studies of gender bias in MT still lack cohesion. This advocates for a unified framework to ease future research. To this end, we: i) critically review current conceptualizations of bias in light of theoretical insights from related disciplines, ii) summarize previous analyses aimed at assessing gender bias in MT, iii) discuss the mitigating strategies proposed so far, and iv) point toward potential directions for future work.",Gender Bias in Machine Translation,"Machine translation (MT) technology has facilitated our daily tasks by providing accessible shortcuts for gathering, processing, and communicating information. However, it can suffer from biases that harm users and society at large. As a relatively new field of inquiry, studies of gender bias in MT still lack cohesion. This advocates for a unified framework to ease future research. To this end, we: i) critically review current conceptualizations of bias in light of theoretical insights from related disciplines, ii) summarize previous analyses aimed at assessing gender bias in MT, iii) discuss the mitigating strategies proposed so far, and iv) point toward potential directions for future work.",We would like to thank the anonymous reviewers and the TACL Action Editors. Their insightful comments helped us improve on the current version of the paper.,"Gender Bias in Machine Translation. Machine translation (MT) technology has facilitated our daily tasks by providing accessible shortcuts for gathering, processing, and communicating information. However, it can suffer from biases that harm users and society at large. As a relatively new field of inquiry, studies of gender bias in MT still lack cohesion. This advocates for a unified framework to ease future research. To this end, we: i) critically review current conceptualizations of bias in light of theoretical insights from related disciplines, ii) summarize previous analyses aimed at assessing gender bias in MT, iii) discuss the mitigating strategies proposed so far, and iv) point toward potential directions for future work.",2021
masala-etal-2020-robert,https://aclanthology.org/2020.coling-main.581,0,,,,,,,"RoBERT -- A Romanian BERT Model. Deep pre-trained language models tend to become ubiquitous in the field of Natural Language Processing (NLP). These models learn contextualized representations by using a huge amount of unlabeled text data and obtain state of the art results on a multitude of NLP tasks, by enabling efficient transfer learning. For other languages besides English, there are limited options of such models, most of which are trained only on multilingual corpora. In this paper we introduce a Romanian-only pre-trained BERT model-RoBERT-and compare it with different multilingual models on seven Romanian specific NLP tasks grouped into three categories, namely: sentiment analysis, dialect and cross-dialect topic identification, and diacritics restoration. Our model surpasses the multilingual models, as well as another mono-lingual implementation of BERT, on all tasks.",{R}o{BERT} {--} A {R}omanian {BERT} Model,"Deep pre-trained language models tend to become ubiquitous in the field of Natural Language Processing (NLP). These models learn contextualized representations by using a huge amount of unlabeled text data and obtain state of the art results on a multitude of NLP tasks, by enabling efficient transfer learning. For other languages besides English, there are limited options of such models, most of which are trained only on multilingual corpora. In this paper we introduce a Romanian-only pre-trained BERT model-RoBERT-and compare it with different multilingual models on seven Romanian specific NLP tasks grouped into three categories, namely: sentiment analysis, dialect and cross-dialect topic identification, and diacritics restoration. Our model surpasses the multilingual models, as well as another mono-lingual implementation of BERT, on all tasks.",RoBERT -- A Romanian BERT Model,"Deep pre-trained language models tend to become ubiquitous in the field of Natural Language Processing (NLP). These models learn contextualized representations by using a huge amount of unlabeled text data and obtain state of the art results on a multitude of NLP tasks, by enabling efficient transfer learning. For other languages besides English, there are limited options of such models, most of which are trained only on multilingual corpora. In this paper we introduce a Romanian-only pre-trained BERT model-RoBERT-and compare it with different multilingual models on seven Romanian specific NLP tasks grouped into three categories, namely: sentiment analysis, dialect and cross-dialect topic identification, and diacritics restoration. Our model surpasses the multilingual models, as well as another mono-lingual implementation of BERT, on all tasks.","This research was supported by a grant of the Romanian National Authority for Scientific Research and Innovation, CNCS-UEFISCDI, project number PN-III 54PCCDI/2018, INTELLIT -""Prezervarea și valorificarea patrimoniului literar românesc folosind soluții digitale inteligente pentru extragerea și sistematizarea de cunoștințe"", by the ""Semantic Media Analytics -SeMAntic"" subsidiary contract no. 20176/30.10.2019, from the NETIO project ID: P 40 270, MySMIS Code: 105976, as well as by ""Spacetime Vision -Towards Unsupervised Learning in the 4D World"", project Code: EEA-RO-NO-2018-0496.","RoBERT -- A Romanian BERT Model. Deep pre-trained language models tend to become ubiquitous in the field of Natural Language Processing (NLP). 
These models learn contextualized representations by using a huge amount of unlabeled text data and obtain state of the art results on a multitude of NLP tasks, by enabling efficient transfer learning. For other languages besides English, there are limited options of such models, most of which are trained only on multilingual corpora. In this paper we introduce a Romanian-only pre-trained BERT model-RoBERT-and compare it with different multilingual models on seven Romanian specific NLP tasks grouped into three categories, namely: sentiment analysis, dialect and cross-dialect topic identification, and diacritics restoration. Our model surpasses the multilingual models, as well as another mono-lingual implementation of BERT, on all tasks.",2020
ji-smith-2017-neural,https://aclanthology.org/P17-1092,0,,,,,,,"Neural Discourse Structure for Text Categorization. We show that discourse structure, as defined by Rhetorical Structure Theory and provided by an existing discourse parser, benefits text categorization. Our approach uses a recursive neural network and a newly proposed attention mechanism to compute a representation of the text that focuses on salient content, from the perspective of both RST and the task. Experiments consider variants of the approach and illustrate its strengths and weaknesses.",Neural Discourse Structure for Text Categorization,"We show that discourse structure, as defined by Rhetorical Structure Theory and provided by an existing discourse parser, benefits text categorization. Our approach uses a recursive neural network and a newly proposed attention mechanism to compute a representation of the text that focuses on salient content, from the perspective of both RST and the task. Experiments consider variants of the approach and illustrate its strengths and weaknesses.",Neural Discourse Structure for Text Categorization,"We show that discourse structure, as defined by Rhetorical Structure Theory and provided by an existing discourse parser, benefits text categorization. Our approach uses a recursive neural network and a newly proposed attention mechanism to compute a representation of the text that focuses on salient content, from the perspective of both RST and the task. Experiments consider variants of the approach and illustrate its strengths and weaknesses.",We thank anonymous reviewers and members of Noah's ARK for helpful feedback on this work. We thank Dallas Card and Jesse Dodge for helping prepare the Media Frames Corpus and the Congressional bill corpus. This work was made possible by a University of Washington Innovation Award.,"Neural Discourse Structure for Text Categorization. We show that discourse structure, as defined by Rhetorical Structure Theory and provided by an existing discourse parser, benefits text categorization. Our approach uses a recursive neural network and a newly proposed attention mechanism to compute a representation of the text that focuses on salient content, from the perspective of both RST and the task. Experiments consider variants of the approach and illustrate its strengths and weaknesses.",2017
sidner-etal-2013-demonstration,https://aclanthology.org/W13-4024,1,,,,health,,,"Demonstration of an Always-On Companion for Isolated Older Adults. We summarize the status of an ongoing project to develop and evaluate a companion for isolated older adults. Four key scientific issues in the project are: embodiment, interaction paradigm, engagement and relationship. The system architecture is extensible and handles realtime behaviors. The system supports multiple activities, including discussing the weather, playing cards, telling stories, exercise coaching and video conferencing. A live, working demo system will be presented at the meeting.",Demonstration of an Always-On Companion for Isolated Older Adults,"We summarize the status of an ongoing project to develop and evaluate a companion for isolated older adults. Four key scientific issues in the project are: embodiment, interaction paradigm, engagement and relationship. The system architecture is extensible and handles realtime behaviors. The system supports multiple activities, including discussing the weather, playing cards, telling stories, exercise coaching and video conferencing. A live, working demo system will be presented at the meeting.",Demonstration of an Always-On Companion for Isolated Older Adults,"We summarize the status of an ongoing project to develop and evaluate a companion for isolated older adults. Four key scientific issues in the project are: embodiment, interaction paradigm, engagement and relationship. The system architecture is extensible and handles realtime behaviors. The system supports multiple activities, including discussing the weather, playing cards, telling stories, exercise coaching and video conferencing. A live, working demo system will be presented at the meeting.","This work is supported in part by the National Science Foundation under award IIS-1012083. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.","Demonstration of an Always-On Companion for Isolated Older Adults. We summarize the status of an ongoing project to develop and evaluate a companion for isolated older adults. Four key scientific issues in the project are: embodiment, interaction paradigm, engagement and relationship. The system architecture is extensible and handles realtime behaviors. The system supports multiple activities, including discussing the weather, playing cards, telling stories, exercise coaching and video conferencing. A live, working demo system will be presented at the meeting.",2013
nie-bansal-2017-shortcut,https://aclanthology.org/W17-5308,0,,,,,,,"Shortcut-Stacked Sentence Encoders for Multi-Domain Inference. We present a simple sequential sentence encoder for multi-domain natural language inference. Our encoder is based on stacked bidirectional LSTM-RNNs with shortcut connections and fine-tuning of word embeddings. The overall supervised model uses the above encoder to encode two input sentences into two vectors, and then uses a classifier over the vector combination to label the relationship between these two sentences as that of entailment, contradiction, or neutral. Our Shortcut-Stacked sentence encoders achieve strong improvements over existing encoders on matched and mismatched multi-domain natural language inference (top single-model result in the EMNLP RepEval 2017 Shared Task (Nangia et al., 2017)). Moreover, they achieve the new state-of-the-art encoding result on the original SNLI dataset (Bowman et al., 2015).",Shortcut-Stacked Sentence Encoders for Multi-Domain Inference,"We present a simple sequential sentence encoder for multi-domain natural language inference. Our encoder is based on stacked bidirectional LSTM-RNNs with shortcut connections and fine-tuning of word embeddings. The overall supervised model uses the above encoder to encode two input sentences into two vectors, and then uses a classifier over the vector combination to label the relationship between these two sentences as that of entailment, contradiction, or neutral. Our Shortcut-Stacked sentence encoders achieve strong improvements over existing encoders on matched and mismatched multi-domain natural language inference (top single-model result in the EMNLP RepEval 2017 Shared Task (Nangia et al., 2017)). Moreover, they achieve the new state-of-the-art encoding result on the original SNLI dataset (Bowman et al., 2015).",Shortcut-Stacked Sentence Encoders for Multi-Domain Inference,"We present a simple sequential sentence encoder for multi-domain natural language inference. Our encoder is based on stacked bidirectional LSTM-RNNs with shortcut connections and fine-tuning of word embeddings. The overall supervised model uses the above encoder to encode two input sentences into two vectors, and then uses a classifier over the vector combination to label the relationship between these two sentences as that of entailment, contradiction, or neutral. Our Shortcut-Stacked sentence encoders achieve strong improvements over existing encoders on matched and mismatched multi-domain natural language inference (top single-model result in the EMNLP RepEval 2017 Shared Task (Nangia et al., 2017)). Moreover, they achieve the new state-of-the-art encoding result on the original SNLI dataset (Bowman et al., 2015).","We thank the shared task organizers and the anonymous reviewers. This work was partially supported by a Google Faculty Research Award, an IBM Faculty Award, a Bloomberg Data Science Research Grant, and NVidia GPU awards.","Shortcut-Stacked Sentence Encoders for Multi-Domain Inference. We present a simple sequential sentence encoder for multi-domain natural language inference. Our encoder is based on stacked bidirectional LSTM-RNNs with shortcut connections and fine-tuning of word embeddings. The overall supervised model uses the above encoder to encode two input sentences into two vectors, and then uses a classifier over the vector combination to label the relationship between these two sentences as that of entailment, contradiction, or neutral. 
Our Shortcut-Stacked sentence encoders achieve strong improvements over existing encoders on matched and mismatched multi-domain natural language inference (top single-model result in the EMNLP RepEval 2017 Shared Task (Nangia et al., 2017)). Moreover, they achieve the new state-of-the-art encoding result on the original SNLI dataset (Bowman et al., 2015).",2017
ohara-wiebe-2003-preposition,https://aclanthology.org/W03-0411,0,,,,,,,"Preposition Semantic Classification via Treebank and FrameNet. This paper reports on experiments in classifying the semantic role annotations assigned to prepositional phrases in both the PENN TREEBANK and FRAMENET. In both cases, experiments are done to see how the prepositions can be classified given the dataset's role inventory, using standard word-sense disambiguation features. In addition to using traditional word collocations, the experiments incorporate class-based collocations in the form of WordNet hypernyms. For Treebank, the word collocations achieve slightly better performance: 78.5% versus 77.4% when separate classifiers are used per preposition. When using a single classifier for all of the prepositions together, the combined approach yields a significant gain at 85.8% accuracy versus 81.3% for wordonly collocations. For FrameNet, the combined use of both collocation types achieves better performance for the individual classifiers: 70.3% versus 68.5%. However, classification using a single classifier is not effective due to confusion among the fine-grained roles.",Preposition Semantic Classification via Treebank and {F}rame{N}et,"This paper reports on experiments in classifying the semantic role annotations assigned to prepositional phrases in both the PENN TREEBANK and FRAMENET. In both cases, experiments are done to see how the prepositions can be classified given the dataset's role inventory, using standard word-sense disambiguation features. In addition to using traditional word collocations, the experiments incorporate class-based collocations in the form of WordNet hypernyms. For Treebank, the word collocations achieve slightly better performance: 78.5% versus 77.4% when separate classifiers are used per preposition. When using a single classifier for all of the prepositions together, the combined approach yields a significant gain at 85.8% accuracy versus 81.3% for wordonly collocations. For FrameNet, the combined use of both collocation types achieves better performance for the individual classifiers: 70.3% versus 68.5%. However, classification using a single classifier is not effective due to confusion among the fine-grained roles.",Preposition Semantic Classification via Treebank and FrameNet,"This paper reports on experiments in classifying the semantic role annotations assigned to prepositional phrases in both the PENN TREEBANK and FRAMENET. In both cases, experiments are done to see how the prepositions can be classified given the dataset's role inventory, using standard word-sense disambiguation features. In addition to using traditional word collocations, the experiments incorporate class-based collocations in the form of WordNet hypernyms. For Treebank, the word collocations achieve slightly better performance: 78.5% versus 77.4% when separate classifiers are used per preposition. When using a single classifier for all of the prepositions together, the combined approach yields a significant gain at 85.8% accuracy versus 81.3% for wordonly collocations. For FrameNet, the combined use of both collocation types achieves better performance for the individual classifiers: 70.3% versus 68.5%. However, classification using a single classifier is not effective due to confusion among the fine-grained roles.",The first author is supported by a generous GAANN fellowship from the Department of Education. 
Some of the work used computing resources at NMSU made possible through MII Grants EIA-9810732 and EIA-0220590.,"Preposition Semantic Classification via Treebank and FrameNet. This paper reports on experiments in classifying the semantic role annotations assigned to prepositional phrases in both the PENN TREEBANK and FRAMENET. In both cases, experiments are done to see how the prepositions can be classified given the dataset's role inventory, using standard word-sense disambiguation features. In addition to using traditional word collocations, the experiments incorporate class-based collocations in the form of WordNet hypernyms. For Treebank, the word collocations achieve slightly better performance: 78.5% versus 77.4% when separate classifiers are used per preposition. When using a single classifier for all of the prepositions together, the combined approach yields a significant gain at 85.8% accuracy versus 81.3% for wordonly collocations. For FrameNet, the combined use of both collocation types achieves better performance for the individual classifiers: 70.3% versus 68.5%. However, classification using a single classifier is not effective due to confusion among the fine-grained roles.",2003
boufaden-2003-ontology,https://aclanthology.org/P03-2002,0,,,,,,,"An Ontology-based Semantic Tagger for IE system. In this paper, we present a method for the semantic tagging of word chunks extracted from a written transcription of conversations. This work is part of an ongoing project for an information extraction system in the field of maritime Search And Rescue (SAR). Our purpose is to automatically annotate parts of texts with concepts from a SAR ontology. Our approach combines two knowledge sources: a SAR ontology and the Wordsmyth dictionary-thesaurus, and it uses a similarity measure for the classification. Evaluation is carried out by comparing the output of the system with key answers of predefined extraction templates.",An Ontology-based Semantic Tagger for {IE} system,"In this paper, we present a method for the semantic tagging of word chunks extracted from a written transcription of conversations. This work is part of an ongoing project for an information extraction system in the field of maritime Search And Rescue (SAR). Our purpose is to automatically annotate parts of texts with concepts from a SAR ontology. Our approach combines two knowledge sources: a SAR ontology and the Wordsmyth dictionary-thesaurus, and it uses a similarity measure for the classification. Evaluation is carried out by comparing the output of the system with key answers of predefined extraction templates.",An Ontology-based Semantic Tagger for IE system,"In this paper, we present a method for the semantic tagging of word chunks extracted from a written transcription of conversations. This work is part of an ongoing project for an information extraction system in the field of maritime Search And Rescue (SAR). Our purpose is to automatically annotate parts of texts with concepts from a SAR ontology. Our approach combines two knowledge sources: a SAR ontology and the Wordsmyth dictionary-thesaurus, and it uses a similarity measure for the classification. Evaluation is carried out by comparing the output of the system with key answers of predefined extraction templates.",We are grateful to Robert Parks at Wordsmyth organization for giving us the electronic Wordsmyth version. Thanks to the Defense Research Establishment Valcartier for providing us with the dialog transcriptions and to National Search and rescue Secretariat for the valuable SAR manuals.,"An Ontology-based Semantic Tagger for IE system. In this paper, we present a method for the semantic tagging of word chunks extracted from a written transcription of conversations. This work is part of an ongoing project for an information extraction system in the field of maritime Search And Rescue (SAR). Our purpose is to automatically annotate parts of texts with concepts from a SAR ontology. Our approach combines two knowledge sources: a SAR ontology and the Wordsmyth dictionary-thesaurus, and it uses a similarity measure for the classification. Evaluation is carried out by comparing the output of the system with key answers of predefined extraction templates.",2003
bauer-etal-2012-dependency,http://www.lrec-conf.org/proceedings/lrec2012/pdf/1037_Paper.pdf,0,,,,,,,"The Dependency-Parsed FrameNet Corpus. When training semantic role labeling systems, the syntax of example sentences is of particular importance. Unfortunately, for the FrameNet annotated sentences, there is no standard parsed version. The integration of the automatic parse of an annotated sentence with its semantic annotation, while conceptually straightforward, is complex in practice. We present a standard dataset that is publicly available and that can be used in future research. This dataset contains parser-generated dependency structures (with POS tags and lemmas) for all FrameNet 1.5 sentences, with nodes automatically associated with FrameNet annotations.",The Dependency-Parsed {F}rame{N}et Corpus,"When training semantic role labeling systems, the syntax of example sentences is of particular importance. Unfortunately, for the FrameNet annotated sentences, there is no standard parsed version. The integration of the automatic parse of an annotated sentence with its semantic annotation, while conceptually straightforward, is complex in practice. We present a standard dataset that is publicly available and that can be used in future research. This dataset contains parser-generated dependency structures (with POS tags and lemmas) for all FrameNet 1.5 sentences, with nodes automatically associated with FrameNet annotations.",The Dependency-Parsed FrameNet Corpus,"When training semantic role labeling systems, the syntax of example sentences is of particular importance. Unfortunately, for the FrameNet annotated sentences, there is no standard parsed version. The integration of the automatic parse of an annotated sentence with its semantic annotation, while conceptually straightforward, is complex in practice. We present a standard dataset that is publicly available and that can be used in future research. This dataset contains parser-generated dependency structures (with POS tags and lemmas) for all FrameNet 1.5 sentences, with nodes automatically associated with FrameNet annotations.",,"The Dependency-Parsed FrameNet Corpus. When training semantic role labeling systems, the syntax of example sentences is of particular importance. Unfortunately, for the FrameNet annotated sentences, there is no standard parsed version. The integration of the automatic parse of an annotated sentence with its semantic annotation, while conceptually straightforward, is complex in practice. We present a standard dataset that is publicly available and that can be used in future research. This dataset contains parser-generated dependency structures (with POS tags and lemmas) for all FrameNet 1.5 sentences, with nodes automatically associated with FrameNet annotations.",2012
sai-sharma-2021-towards,https://aclanthology.org/2021.dravidianlangtech-1.3,1,,,,hate_speech,,,"Towards Offensive Language Identification for Dravidian Languages. Offensive speech identification in countries like India poses several challenges due to the usage of code-mixed and romanized variants of multiple languages by the users in their posts on social media. The challenge of offensive language identification on social media for Dravidian languages is harder, considering the low resources available for the same. In this paper, we explored the zero-shot learning and few-shot learning paradigms based on multilingual language models for offensive speech detection in code-mixed and romanized variants of three Dravidian languages-Malayalam, Tamil, and Kannada. We propose a novel and flexible approach of selective translation and transliteration to reap better results from fine-tuning and ensembling multilingual transformer networks like XLM-RoBERTa and mBERT. We implemented pretrained, fine-tuned, and ensembled versions of XLM-RoBERTa for offensive speech classification. Further, we experimented with interlanguage, inter-task, and multi-task transfer learning techniques to leverage the rich resources available for offensive speech identification in the English language and to enrich the models with knowledge transfer from related tasks. The proposed models yielded good results and are promising for effective offensive speech identification in low resource settings.",Towards Offensive Language Identification for {D}ravidian Languages,"Offensive speech identification in countries like India poses several challenges due to the usage of code-mixed and romanized variants of multiple languages by the users in their posts on social media. The challenge of offensive language identification on social media for Dravidian languages is harder, considering the low resources available for the same. In this paper, we explored the zero-shot learning and few-shot learning paradigms based on multilingual language models for offensive speech detection in code-mixed and romanized variants of three Dravidian languages-Malayalam, Tamil, and Kannada. We propose a novel and flexible approach of selective translation and transliteration to reap better results from fine-tuning and ensembling multilingual transformer networks like XLM-RoBERTa and mBERT. We implemented pretrained, fine-tuned, and ensembled versions of XLM-RoBERTa for offensive speech classification. Further, we experimented with interlanguage, inter-task, and multi-task transfer learning techniques to leverage the rich resources available for offensive speech identification in the English language and to enrich the models with knowledge transfer from related tasks. The proposed models yielded good results and are promising for effective offensive speech identification in low resource settings.",Towards Offensive Language Identification for Dravidian Languages,"Offensive speech identification in countries like India poses several challenges due to the usage of code-mixed and romanized variants of multiple languages by the users in their posts on social media. The challenge of offensive language identification on social media for Dravidian languages is harder, considering the low resources available for the same. 
In this paper, we explored the zero-shot learning and few-shot learning paradigms based on multilingual language models for offensive speech detection in code-mixed and romanized variants of three Dravidian languages-Malayalam, Tamil, and Kannada. We propose a novel and flexible approach of selective translation and transliteration to reap better results from fine-tuning and ensembling multilingual transformer networks like XLM-RoBERTa and mBERT. We implemented pretrained, fine-tuned, and ensembled versions of XLM-RoBERTa for offensive speech classification. Further, we experimented with interlanguage, inter-task, and multi-task transfer learning techniques to leverage the rich resources available for offensive speech identification in the English language and to enrich the models with knowledge transfer from related tasks. The proposed models yielded good results and are promising for effective offensive speech identification in low resource settings.","The authors would like to convey their sincere thanks to the Department of Science and Technology (ICPS Division), New Delhi, India, for providing financial assistance under the Data Science (DS) Research of Interdisciplinary Cyber Physical Systems (ICPS) Programme [DST /ICPS /CLUSTER /Data Science/2018/Proposal-16:(T-856)] at the department of computer science, Birla Institute of Technology and Science, Pilani, India.","Towards Offensive Language Identification for Dravidian Languages. Offensive speech identification in countries like India poses several challenges due to the usage of code-mixed and romanized variants of multiple languages by the users in their posts on social media. The challenge of offensive language identification on social media for Dravidian languages is harder, considering the low resources available for the same. In this paper, we explored the zero-shot learning and few-shot learning paradigms based on multilingual language models for offensive speech detection in code-mixed and romanized variants of three Dravidian languages-Malayalam, Tamil, and Kannada. We propose a novel and flexible approach of selective translation and transliteration to reap better results from fine-tuning and ensembling multilingual transformer networks like XLM-RoBERTa and mBERT. We implemented pretrained, fine-tuned, and ensembled versions of XLM-RoBERTa for offensive speech classification. Further, we experimented with interlanguage, inter-task, and multi-task transfer learning techniques to leverage the rich resources available for offensive speech identification in the English language and to enrich the models with knowledge transfer from related tasks. The proposed models yielded good results and are promising for effective offensive speech identification in low resource settings.",2021
nanba-etal-2009-automatic,https://aclanthology.org/P09-2052,0,,,,,,,"Automatic Compilation of Travel Information from Automatically Identified Travel Blogs. In this paper, we propose a method for compiling travel information automatically. For the compilation, we focus on travel blogs, which are defined as travel journals written by bloggers in diary form. We consider that travel blogs are a useful information source for obtaining travel information, because many bloggers' travel experiences are written in this form. Therefore, we identified travel blogs in a blog database and extracted travel information from them. We have confirmed the effectiveness of our method by experiment. For the identification of travel blogs, we obtained scores of 38.1% for Recall and 86.7% for Precision. In the extraction of travel information from travel blogs, we obtained 74.0% for Precision at the top 100 extracted local products, thereby confirming that travel blogs are a useful source of travel information.",Automatic Compilation of Travel Information from Automatically Identified Travel Blogs,"In this paper, we propose a method for compiling travel information automatically. For the compilation, we focus on travel blogs, which are defined as travel journals written by bloggers in diary form. We consider that travel blogs are a useful information source for obtaining travel information, because many bloggers' travel experiences are written in this form. Therefore, we identified travel blogs in a blog database and extracted travel information from them. We have confirmed the effectiveness of our method by experiment. For the identification of travel blogs, we obtained scores of 38.1% for Recall and 86.7% for Precision. In the extraction of travel information from travel blogs, we obtained 74.0% for Precision at the top 100 extracted local products, thereby confirming that travel blogs are a useful source of travel information.",Automatic Compilation of Travel Information from Automatically Identified Travel Blogs,"In this paper, we propose a method for compiling travel information automatically. For the compilation, we focus on travel blogs, which are defined as travel journals written by bloggers in diary form. We consider that travel blogs are a useful information source for obtaining travel information, because many bloggers' travel experiences are written in this form. Therefore, we identified travel blogs in a blog database and extracted travel information from them. We have confirmed the effectiveness of our method by experiment. For the identification of travel blogs, we obtained scores of 38.1% for Recall and 86.7% for Precision. In the extraction of travel information from travel blogs, we obtained 74.0% for Precision at the top 100 extracted local products, thereby confirming that travel blogs are a useful source of travel information.",,"Automatic Compilation of Travel Information from Automatically Identified Travel Blogs. In this paper, we propose a method for compiling travel information automatically. For the compilation, we focus on travel blogs, which are defined as travel journals written by bloggers in diary form. We consider that travel blogs are a useful information source for obtaining travel information, because many bloggers' travel experiences are written in this form. Therefore, we identified travel blogs in a blog database and extracted travel information from them. We have confirmed the effectiveness of our method by experiment. 
For the identification of travel blogs, we obtained scores of 38.1% for Recall and 86.7% for Precision. In the extraction of travel information from travel blogs, we obtained 74.0% for Precision at the top 100 extracted local products, thereby confirming that travel blogs are a useful source of travel information.",2009
zhu-etal-2020-multitask,https://aclanthology.org/2020.coling-main.430,0,,,,,,,"A Multitask Active Learning Framework for Natural Language Understanding. Natural language understanding (NLU) aims at identifying user intent and extracting semantic slots. This requires sufficient annotating data to get considerable performance in real-world situations. Active learning (AL) has been well-studied to decrease the needed amount of the annotating data and successfully applied to NLU. However, no research has been done on investigating how the relation information between intents and slots can improve the efficiency of AL algorithms. In this paper, we propose a multitask AL framework for NLU. Our framework enables poolbased AL algorithms to make use of the relation information between sub-tasks provided by a joint model, and we propose an efficient computation for the entropy of a joint model. Experimental results show our framework can achieve competitive performance with less training data than baseline methods on all datasets. We also demonstrate that when using the entropy as the query strategy, the model with complete relation information can perform better than those with partial information. Additionally, we demonstrate that the efficiency of these active learning algorithms in our framework is still effective when incorporate with the Bidirectional Encoder Representations from Transformers (BERT).",A Multitask Active Learning Framework for Natural Language Understanding,"Natural language understanding (NLU) aims at identifying user intent and extracting semantic slots. This requires sufficient annotating data to get considerable performance in real-world situations. Active learning (AL) has been well-studied to decrease the needed amount of the annotating data and successfully applied to NLU. However, no research has been done on investigating how the relation information between intents and slots can improve the efficiency of AL algorithms. In this paper, we propose a multitask AL framework for NLU. Our framework enables poolbased AL algorithms to make use of the relation information between sub-tasks provided by a joint model, and we propose an efficient computation for the entropy of a joint model. Experimental results show our framework can achieve competitive performance with less training data than baseline methods on all datasets. We also demonstrate that when using the entropy as the query strategy, the model with complete relation information can perform better than those with partial information. Additionally, we demonstrate that the efficiency of these active learning algorithms in our framework is still effective when incorporate with the Bidirectional Encoder Representations from Transformers (BERT).",A Multitask Active Learning Framework for Natural Language Understanding,"Natural language understanding (NLU) aims at identifying user intent and extracting semantic slots. This requires sufficient annotating data to get considerable performance in real-world situations. Active learning (AL) has been well-studied to decrease the needed amount of the annotating data and successfully applied to NLU. However, no research has been done on investigating how the relation information between intents and slots can improve the efficiency of AL algorithms. In this paper, we propose a multitask AL framework for NLU. 
Our framework enables poolbased AL algorithms to make use of the relation information between sub-tasks provided by a joint model, and we propose an efficient computation for the entropy of a joint model. Experimental results show our framework can achieve competitive performance with less training data than baseline methods on all datasets. We also demonstrate that when using the entropy as the query strategy, the model with complete relation information can perform better than those with partial information. Additionally, we demonstrate that the efficiency of these active learning algorithms in our framework is still effective when incorporate with the Bidirectional Encoder Representations from Transformers (BERT).",,"A Multitask Active Learning Framework for Natural Language Understanding. Natural language understanding (NLU) aims at identifying user intent and extracting semantic slots. This requires sufficient annotating data to get considerable performance in real-world situations. Active learning (AL) has been well-studied to decrease the needed amount of the annotating data and successfully applied to NLU. However, no research has been done on investigating how the relation information between intents and slots can improve the efficiency of AL algorithms. In this paper, we propose a multitask AL framework for NLU. Our framework enables poolbased AL algorithms to make use of the relation information between sub-tasks provided by a joint model, and we propose an efficient computation for the entropy of a joint model. Experimental results show our framework can achieve competitive performance with less training data than baseline methods on all datasets. We also demonstrate that when using the entropy as the query strategy, the model with complete relation information can perform better than those with partial information. Additionally, we demonstrate that the efficiency of these active learning algorithms in our framework is still effective when incorporate with the Bidirectional Encoder Representations from Transformers (BERT).",2020
ekbal-etal-2008-named,https://aclanthology.org/I08-2077,0,,,,,,,"Named Entity Recognition in Bengali: A Conditional Random Field Approach. This paper reports about the development of a Named Entity Recognition (NER) system for Bengali using the statistical Conditional Random Fields (CRFs). The system makes use of the different contextual information of the words along with the variety of features that are helpful in predicting the various named entity (NE) classes. A portion of the partially NE tagged Bengali news corpus, developed from the archive of a leading Bengali newspaper available in the web, has been used to develop the system. The training set consists of 150K words and has been manually annotated with a NE tagset of seventeen tags. Experimental results of the 10-fold cross validation test show the effectiveness of the proposed CRF based NER system with an overall average Recall, Precision and F-Score values of 93.8%, 87.8% and 90.7%, respectively.",Named Entity Recognition in {B}engali: A Conditional Random Field Approach,"This paper reports about the development of a Named Entity Recognition (NER) system for Bengali using the statistical Conditional Random Fields (CRFs). The system makes use of the different contextual information of the words along with the variety of features that are helpful in predicting the various named entity (NE) classes. A portion of the partially NE tagged Bengali news corpus, developed from the archive of a leading Bengali newspaper available in the web, has been used to develop the system. The training set consists of 150K words and has been manually annotated with a NE tagset of seventeen tags. Experimental results of the 10-fold cross validation test show the effectiveness of the proposed CRF based NER system with an overall average Recall, Precision and F-Score values of 93.8%, 87.8% and 90.7%, respectively.",Named Entity Recognition in Bengali: A Conditional Random Field Approach,"This paper reports about the development of a Named Entity Recognition (NER) system for Bengali using the statistical Conditional Random Fields (CRFs). The system makes use of the different contextual information of the words along with the variety of features that are helpful in predicting the various named entity (NE) classes. A portion of the partially NE tagged Bengali news corpus, developed from the archive of a leading Bengali newspaper available in the web, has been used to develop the system. The training set consists of 150K words and has been manually annotated with a NE tagset of seventeen tags. Experimental results of the 10-fold cross validation test show the effectiveness of the proposed CRF based NER system with an overall average Recall, Precision and F-Score values of 93.8%, 87.8% and 90.7%, respectively.",,"Named Entity Recognition in Bengali: A Conditional Random Field Approach. This paper reports about the development of a Named Entity Recognition (NER) system for Bengali using the statistical Conditional Random Fields (CRFs). The system makes use of the different contextual information of the words along with the variety of features that are helpful in predicting the various named entity (NE) classes. A portion of the partially NE tagged Bengali news corpus, developed from the archive of a leading Bengali newspaper available in the web, has been used to develop the system. The training set consists of 150K words and has been manually annotated with a NE tagset of seventeen tags. 
Experimental results of the 10-fold cross-validation test show the effectiveness of the proposed CRF-based NER system with overall average Recall, Precision, and F-Score values of 93.8%, 87.8% and 90.7%, respectively.",2008
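Contextual features of the kind described in the record above are usually expressed as one feature dictionary per token before being handed to a linear-chain CRF toolkit. The sketch below is a generic illustration; the window size and feature names are assumptions, not the actual templates of the Bengali system.

```python
# Illustrative sketch only: context-window features of the kind commonly fed to
# a linear-chain CRF for NER. The window size and feature names are assumptions,
# not the feature templates of the Bengali system described above.
def token_features(sent, i, window=2):
    """Feature dictionary for token i of sent (a list of word strings)."""
    word = sent[i]
    feats = {
        "bias": 1.0,
        "word": word,
        "prefix3": word[:3],
        "suffix3": word[-3:],
        "is_digit": word.isdigit(),
        "length": len(word),
    }
    # Contextual information: surrounding words inside a fixed window.
    for d in range(-window, window + 1):
        if d == 0:
            continue
        j = i + d
        feats[f"word[{d:+d}]"] = sent[j] if 0 <= j < len(sent) else "<PAD>"
    return feats

def sent2features(sent):
    return [token_features(sent, i) for i in range(len(sent))]

# Example: these dictionaries can be passed to a CRF toolkit such as
# sklearn-crfsuite, e.g. CRF().fit(list_of_sent_features, list_of_label_sequences).
print(sent2features(["Rabindranath", "Tagore", "was", "born", "in", "Kolkata"])[0])
```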
goodman-1997-global,https://aclanthology.org/W97-0302,0,,,,,,,"Global Thresholding and Multiple-Pass Parsing. We present a variation on classic beam thresholding techniques that is up to an order of magnitude faster than the traditional method, at the same performance level. We also present a new thresholding technique, global thresholding, which, combined with the new beam thresholding, gives an additional factor of two improvement, and a novel technique, multiple pass parsing, that can be combined with the others to yield yet another 50% improvement. We use a new search algorithm to simultaneously optimize the thresholding parameters of the various algorithms.",Global Thresholding and Multiple-Pass Parsing,"We present a variation on classic beam thresholding techniques that is up to an order of magnitude faster than the traditional method, at the same performance level. We also present a new thresholding technique, global thresholding, which, combined with the new beam thresholding, gives an additional factor of two improvement, and a novel technique, multiple pass parsing, that can be combined with the others to yield yet another 50% improvement. We use a new search algorithm to simultaneously optimize the thresholding parameters of the various algorithms.",Global Thresholding and Multiple-Pass Parsing,"We present a variation on classic beam thresholding techniques that is up to an order of magnitude faster than the traditional method, at the same performance level. We also present a new thresholding technique, global thresholding, which, combined with the new beam thresholding, gives an additional factor of two improvement, and a novel technique, multiple pass parsing, that can be combined with the others to yield yet another 50% improvement. We use a new search algorithm to simultaneously optimize the thresholding parameters of the various algorithms.",,"Global Thresholding and Multiple-Pass Parsing. We present a variation on classic beam thresholding techniques that is up to an order of magnitude faster than the traditional method, at the same performance level. We also present a new thresholding technique, global thresholding, which, combined with the new beam thresholding, gives an additional factor of two improvement, and a novel technique, multiple pass parsing, that can be combined with the others to yield yet another 50% improvement. We use a new search algorithm to simultaneously optimize the thresholding parameters of the various algorithms.",1997
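Classic per-cell beam thresholding, the starting point that the record above improves on, can be sketched in a few lines. The beam width below is an arbitrary example value; the paper's global thresholding, multiple-pass parsing, and parameter search are not reproduced here.

```python
# A minimal sketch of classic per-cell beam thresholding in a chart parser:
# keep only constituents whose inside probability is within a multiplicative
# beam of the best item in the same cell. The beam value is an arbitrary example.
def beam_prune(cell, beam=1e-4):
    """cell: dict mapping nonterminal label -> inside probability."""
    if not cell:
        return cell
    threshold = max(cell.values()) * beam
    return {label: p for label, p in cell.items() if p >= threshold}

# Example: NP and VP survive, the very unlikely X is pruned away.
print(beam_prune({"NP": 1e-3, "VP": 5e-4, "X": 1e-9}))
```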
bird-2020-sparse,https://aclanthology.org/2020.cl-4.1,0,,,,,,,"Sparse Transcription. The transcription bottleneck is often cited as a major obstacle for efforts to document the world's endangered languages and supply them with language technologies. One solution is to extend methods from automatic speech recognition and machine translation, and recruit linguists to provide narrow phonetic transcriptions and sentence-aligned translations. However, I believe that these approaches are not a good fit with the available data and skills, or with long-established practices that are essentially word-based. In seeking a more effective approach, I consider a century of transcription practice and a wide range of computational approaches, before proposing a computational model based on spoken term detection that I call ""sparse transcription."" This represents a shift away from current assumptions that we transcribe phones, transcribe fully, and transcribe first. Instead, sparse transcription combines the older practice of word-level transcription with interpretive, iterative, and interactive processes that are amenable to wider participation and that open the way to new methods for processing oral languages.",Sparse Transcription,"The transcription bottleneck is often cited as a major obstacle for efforts to document the world's endangered languages and supply them with language technologies. One solution is to extend methods from automatic speech recognition and machine translation, and recruit linguists to provide narrow phonetic transcriptions and sentence-aligned translations. However, I believe that these approaches are not a good fit with the available data and skills, or with long-established practices that are essentially word-based. In seeking a more effective approach, I consider a century of transcription practice and a wide range of computational approaches, before proposing a computational model based on spoken term detection that I call ""sparse transcription."" This represents a shift away from current assumptions that we transcribe phones, transcribe fully, and transcribe first. Instead, sparse transcription combines the older practice of word-level transcription with interpretive, iterative, and interactive processes that are amenable to wider participation and that open the way to new methods for processing oral languages.",Sparse Transcription,"The transcription bottleneck is often cited as a major obstacle for efforts to document the world's endangered languages and supply them with language technologies. One solution is to extend methods from automatic speech recognition and machine translation, and recruit linguists to provide narrow phonetic transcriptions and sentence-aligned translations. However, I believe that these approaches are not a good fit with the available data and skills, or with long-established practices that are essentially word-based. In seeking a more effective approach, I consider a century of transcription practice and a wide range of computational approaches, before proposing a computational model based on spoken term detection that I call ""sparse transcription."" This represents a shift away from current assumptions that we transcribe phones, transcribe fully, and transcribe first. 
Instead, sparse transcription combines the older practice of word-level transcription with interpretive, iterative, and interactive processes that are amenable to wider participation and that open the way to new methods for processing oral languages.","I am indebted to the Bininj people of the Kuwarddewardde ""Stone Country"" in Northern Australia for the opportunity to live and work in their community, where I gained many insights in the course of learning to transcribe Kunwinjku. Thanks to Steve Abney, Laurent Besacier, Mark Liberman, Maïa Ponsonnet, to my colleagues and students in the Top End Language Lab at Charles Darwin University, and to several anonymous reviewers for thoughtful feedback. This research has been supported by a grant from the Australian Research","Sparse Transcription. The transcription bottleneck is often cited as a major obstacle for efforts to document the world's endangered languages and supply them with language technologies. One solution is to extend methods from automatic speech recognition and machine translation, and recruit linguists to provide narrow phonetic transcriptions and sentence-aligned translations. However, I believe that these approaches are not a good fit with the available data and skills, or with long-established practices that are essentially word-based. In seeking a more effective approach, I consider a century of transcription practice and a wide range of computational approaches, before proposing a computational model based on spoken term detection that I call ""sparse transcription."" This represents a shift away from current assumptions that we transcribe phones, transcribe fully, and transcribe first. Instead, sparse transcription combines the older practice of word-level transcription with interpretive, iterative, and interactive processes that are amenable to wider participation and that open the way to new methods for processing oral languages.",2020
schaler-2004-certified,https://aclanthology.org/2004.tc-1.15,0,,,,,,,"The Certified Localisation Professional (CLP). The Institute of Localisation Professionals (TILP) was established in 2002 as a non-profit organisation and in 2003 merged with the US-based Professional Association for Localization (PAL). TILP's objective is to develop professional practices in localisation globally. TILP is owned by its individual members. It coordinates a number of regional chapters in Europe, North America, Latin America and Asia. The Certified Localisation Professional Programme (CLP) was launched by TILP in September 2004 and provides professional certification to individuals working in a variety of professions in localisation, among them project managers, engineers, testers, internationalisation specialists, and linguists. This article will outline the CLP programme and is aimed at course providers interested in offering TILP accredited courses, employers planning to make CLP certification a requirement for future employees, and individual professionals planning to develop their professional career.",The Certified Localisation Professional ({CLP}),"The Institute of Localisation Professionals (TILP) was established in 2002 as a non-profit organisation and in 2003 merged with the US-based Professional Association for Localization (PAL). TILP's objective is to develop professional practices in localisation globally. TILP is owned by its individual members. It coordinates a number of regional chapters in Europe, North America, Latin America and Asia. The Certified Localisation Professional Programme (CLP) was launched by TILP in September 2004 and provides professional certification to individuals working in a variety of professions in localisation, among them project managers, engineers, testers, internationalisation specialists, and linguists. This article will outline the CLP programme and is aimed at course providers interested in offering TILP accredited courses, employers planning to make CLP certification a requirement for future employees, and individual professionals planning to develop their professional career.",The Certified Localisation Professional (CLP),"The Institute of Localisation Professionals (TILP) was established in 2002 as a non-profit organisation and in 2003 merged with the US-based Professional Association for Localization (PAL). TILP's objective is to develop professional practices in localisation globally. TILP is owned by its individual members. It coordinates a number of regional chapters in Europe, North America, Latin America and Asia. The Certified Localisation Professional Programme (CLP) was launched by TILP in September 2004 and provides professional certification to individuals working in a variety of professions in localisation, among them project managers, engineers, testers, internationalisation specialists, and linguists. This article will outline the CLP programme and is aimed at course providers interested in offering TILP accredited courses, employers planning to make CLP certification a requirement for future employees, and individual professionals planning to develop their professional career.","The support received by the European Union's ADAPT Programme for the development of the initial CLP project (A-1997-Irl-551) is acknowledged. This project was coordinated by the LRC. 
The project partners were: FÁS (Irish National Training Agency), CATT (Siemens Nixdorf Training Centre) and TELSI Ireland, supported by a large number of stakeholders. The author would also like to acknowledge the support of Siobhan King-Hughes in the preparation of the first CLP certification outline, partially reproduced in this article.","The Certified Localisation Professional (CLP). The Institute of Localisation Professionals (TILP) was established in 2002 as a non-profit organisation and in 2003 merged with the US-based Professional Association for Localization (PAL). TILP's objective is to develop professional practices in localisation globally. TILP is owned by its individual members. It coordinates a number of regional chapters in Europe, North America, Latin America and Asia. The Certified Localisation Professional Programme (CLP) was launched by TILP in September 2004 and provides professional certification to individuals working in a variety of professions in localisation, among them project managers, engineers, testers, internationalisation specialists, and linguists. This article will outline the CLP programme and is aimed at course providers interested in offering TILP accredited courses, employers planning to make CLP certification a requirement for future employees, and individual professionals planning to develop their professional career.",2004
lalor-etal-2019-learning,https://aclanthology.org/D19-1434,0,,,,,,,"Learning Latent Parameters without Human Response Patterns: Item Response Theory with Artificial Crowds. Incorporating Item Response Theory (IRT) into NLP tasks can provide valuable information about model performance and behavior. Traditionally, IRT models are learned using human response pattern (RP) data, presenting a significant bottleneck for large data sets like those required for training deep neural networks (DNNs). In this work we propose learning IRT models using RPs generated from artificial crowds of DNN models. We demonstrate the effectiveness of learning IRT models using DNN-generated data through quantitative and qualitative analyses for two NLP tasks. Parameters learned from human and machine RPs for natural language inference and sentiment analysis exhibit medium to large positive correlations. We demonstrate a use-case for latent difficulty item parameters, namely training set filtering, and show that using difficulty to sample training data outperforms baseline methods. Finally, we highlight cases where human expectation about item difficulty does not match difficulty as estimated from the machine RPs.",Learning Latent Parameters without Human Response Patterns: Item Response Theory with Artificial Crowds,"Incorporating Item Response Theory (IRT) into NLP tasks can provide valuable information about model performance and behavior. Traditionally, IRT models are learned using human response pattern (RP) data, presenting a significant bottleneck for large data sets like those required for training deep neural networks (DNNs). In this work we propose learning IRT models using RPs generated from artificial crowds of DNN models. We demonstrate the effectiveness of learning IRT models using DNN-generated data through quantitative and qualitative analyses for two NLP tasks. Parameters learned from human and machine RPs for natural language inference and sentiment analysis exhibit medium to large positive correlations. We demonstrate a use-case for latent difficulty item parameters, namely training set filtering, and show that using difficulty to sample training data outperforms baseline methods. Finally, we highlight cases where human expectation about item difficulty does not match difficulty as estimated from the machine RPs.",Learning Latent Parameters without Human Response Patterns: Item Response Theory with Artificial Crowds,"Incorporating Item Response Theory (IRT) into NLP tasks can provide valuable information about model performance and behavior. Traditionally, IRT models are learned using human response pattern (RP) data, presenting a significant bottleneck for large data sets like those required for training deep neural networks (DNNs). In this work we propose learning IRT models using RPs generated from artificial crowds of DNN models. We demonstrate the effectiveness of learning IRT models using DNN-generated data through quantitative and qualitative analyses for two NLP tasks. Parameters learned from human and machine RPs for natural language inference and sentiment analysis exhibit medium to large positive correlations. We demonstrate a use-case for latent difficulty item parameters, namely training set filtering, and show that using difficulty to sample training data outperforms baseline methods. 
Finally, we highlight cases where human expectation about item difficulty does not match difficulty as estimated from the machine RPs.","We thank the anonymous reviewers for their comments and suggestions. This work was supported in part by the HSR&D award IIR 1I01HX001457 from the United States Department of Veterans Affairs (VA). We also acknowledge the support of LM012817 from the National Institutes of Health. This work was also supported in part by the Center for Intelligent Information Retrieval. The contents of this paper do not represent the views of CIIR, NIH, VA, or the United States Government.","Learning Latent Parameters without Human Response Patterns: Item Response Theory with Artificial Crowds. Incorporating Item Response Theory (IRT) into NLP tasks can provide valuable information about model performance and behavior. Traditionally, IRT models are learned using human response pattern (RP) data, presenting a significant bottleneck for large data sets like those required for training deep neural networks (DNNs). In this work we propose learning IRT models using RPs generated from artificial crowds of DNN models. We demonstrate the effectiveness of learning IRT models using DNN-generated data through quantitative and qualitative analyses for two NLP tasks. Parameters learned from human and machine RPs for natural language inference and sentiment analysis exhibit medium to large positive correlations. We demonstrate a use-case for latent difficulty item parameters, namely training set filtering, and show that using difficulty to sample training data outperforms baseline methods. Finally, we highlight cases where human expectation about item difficulty does not match difficulty as estimated from the machine RPs.",2019
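For readers unfamiliar with Item Response Theory, the sketch below fits a two-parameter logistic (2PL) model, p(correct) = sigmoid(a_j * (theta_i - b_j)), to a binary response matrix by plain gradient descent. It is a minimal illustration under assumed hyperparameters and random toy data, not the estimation procedure used in the paper.

```python
# A minimal sketch (not the authors' code) of fitting a 2PL IRT model to a
# binary response matrix R (rows = artificial "subjects", e.g. ensemble members;
# columns = items) by plain gradient descent on the negative log-likelihood.
import numpy as np

def fit_2pl(R, lr=0.05, steps=2000):
    n_subj, n_items = R.shape
    rng = np.random.default_rng(0)
    theta = rng.normal(0.0, 0.1, n_subj)   # subject ability
    b = rng.normal(0.0, 0.1, n_items)      # item difficulty
    a = np.ones(n_items)                   # item discrimination
    for _ in range(steps):
        z = a[None, :] * (theta[:, None] - b[None, :])
        p = 1.0 / (1.0 + np.exp(-z))
        err = p - R                        # d(NLL)/dz for each response
        theta -= lr * (err * a[None, :]).mean(axis=1)
        b -= lr * (-err * a[None, :]).mean(axis=0)
        a -= lr * (err * (theta[:, None] - b[None, :])).mean(axis=0)
    return theta, a, b

# Toy usage: 20 artificial annotators answering 10 items at random.
if __name__ == "__main__":
    R = (np.random.default_rng(1).random((20, 10)) > 0.4).astype(float)
    theta, a, b = fit_2pl(R)
    print("estimated item difficulties:", np.round(b, 2))
```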
dunning-1993-accurate,https://aclanthology.org/J93-1003,0,,,,,,,"Accurate Methods for the Statistics of Surprise and Coincidence. Much work has been done on the statistical analysis of text. In some cases reported in the literature, inappropriate statistical methods have been used, and statistical significance of results have not been addressed. In particular, asymptotic normality assumptions have often been used unjustifiably, leading to flawed results. This assumption of normal distribution limits the ability to analyze rare events. Unfortunately rare events do make up a large fraction of real text. However, more applicable methods based on likelihood ratio tests are available that yield good results with relatively small samples. These tests can be implemented efficiently, and have been used for the detection of composite terms and for the determination of domain-specific terms. In some cases, these measures perform much better than the methods previously used. In cases where traditional contingency table methods work well, the likelihood ratio tests described here are nearly identical. This paper describes the basis of a measure based on likelihood ratios that can be applied to the analysis of text.",Accurate Methods for the Statistics of Surprise and Coincidence,"Much work has been done on the statistical analysis of text. In some cases reported in the literature, inappropriate statistical methods have been used, and statistical significance of results have not been addressed. In particular, asymptotic normality assumptions have often been used unjustifiably, leading to flawed results. This assumption of normal distribution limits the ability to analyze rare events. Unfortunately rare events do make up a large fraction of real text. However, more applicable methods based on likelihood ratio tests are available that yield good results with relatively small samples. These tests can be implemented efficiently, and have been used for the detection of composite terms and for the determination of domain-specific terms. In some cases, these measures perform much better than the methods previously used. In cases where traditional contingency table methods work well, the likelihood ratio tests described here are nearly identical. This paper describes the basis of a measure based on likelihood ratios that can be applied to the analysis of text.",Accurate Methods for the Statistics of Surprise and Coincidence,"Much work has been done on the statistical analysis of text. In some cases reported in the literature, inappropriate statistical methods have been used, and statistical significance of results have not been addressed. In particular, asymptotic normality assumptions have often been used unjustifiably, leading to flawed results. This assumption of normal distribution limits the ability to analyze rare events. Unfortunately rare events do make up a large fraction of real text. However, more applicable methods based on likelihood ratio tests are available that yield good results with relatively small samples. These tests can be implemented efficiently, and have been used for the detection of composite terms and for the determination of domain-specific terms. In some cases, these measures perform much better than the methods previously used. In cases where traditional contingency table methods work well, the likelihood ratio tests described here are nearly identical. 
This paper describes the basis of a measure based on likelihood ratios that can be applied to the analysis of text.",,"Accurate Methods for the Statistics of Surprise and Coincidence. Much work has been done on the statistical analysis of text. In some cases reported in the literature, inappropriate statistical methods have been used, and statistical significance of results have not been addressed. In particular, asymptotic normality assumptions have often been used unjustifiably, leading to flawed results. This assumption of normal distribution limits the ability to analyze rare events. Unfortunately rare events do make up a large fraction of real text. However, more applicable methods based on likelihood ratio tests are available that yield good results with relatively small samples. These tests can be implemented efficiently, and have been used for the detection of composite terms and for the determination of domain-specific terms. In some cases, these measures perform much better than the methods previously used. In cases where traditional contingency table methods work well, the likelihood ratio tests described here are nearly identical. This paper describes the basis of a measure based on likelihood ratios that can be applied to the analysis of text.",1993
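For a 2x2 contingency table, the likelihood-ratio statistic the record above advocates reduces to G^2 = 2 * sum(O * ln(O / E)) over the four cells. The counts in the example below are invented purely for illustration.

```python
# Worked example of the G^2 statistic for a 2x2 table of bigram counts.
# k11 = count(w1 w2), k12 = count(w1, not w2), k21 = count(not w1, w2),
# k22 = everything else; the numbers below are invented.
import math

def g2(k11, k12, k21, k22):
    total = k11 + k12 + k21 + k22
    row1, row2 = k11 + k12, k21 + k22
    col1, col2 = k11 + k21, k12 + k22
    observed = (k11, k12, k21, k22)
    expected = (row1 * col1 / total, row1 * col2 / total,
                row2 * col1 / total, row2 * col2 / total)
    # Zero cells contribute nothing in the limit O * ln(O/E) -> 0.
    return 2.0 * sum(o * math.log(o / e) for o, e in zip(observed, expected) if o > 0)

print(round(g2(110, 2442, 111, 29114), 2))   # large values flag surprising pairs
```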
li-jurafsky-2015-multi,https://aclanthology.org/D15-1200,0,,,,,,,"Do Multi-Sense Embeddings Improve Natural Language Understanding?. Learning a distinct representation for each sense of an ambiguous word could lead to more powerful and fine-grained models of vector-space representations. Yet while 'multi-sense' methods have been proposed and tested on artificial wordsimilarity tasks, we don't know if they improve real natural language understanding tasks. In this paper we introduce a multisense embedding model based on Chinese Restaurant Processes that achieves state of the art performance on matching human word similarity judgments, and propose a pipelined architecture for incorporating multi-sense embeddings into language understanding. We then test the performance of our model on part-of-speech tagging, named entity recognition, sentiment analysis, semantic relation identification and semantic relatedness, controlling for embedding dimensionality. We find that multi-sense embeddings do improve performance on some tasks (part-of-speech tagging, semantic relation identification, semantic relatedness) but not on others (named entity recognition, various forms of sentiment analysis). We discuss how these differences may be caused by the different role of word sense information in each of the tasks. The results highlight the importance of testing embedding models in real applications.",Do Multi-Sense Embeddings Improve Natural Language Understanding?,"Learning a distinct representation for each sense of an ambiguous word could lead to more powerful and fine-grained models of vector-space representations. Yet while 'multi-sense' methods have been proposed and tested on artificial wordsimilarity tasks, we don't know if they improve real natural language understanding tasks. In this paper we introduce a multisense embedding model based on Chinese Restaurant Processes that achieves state of the art performance on matching human word similarity judgments, and propose a pipelined architecture for incorporating multi-sense embeddings into language understanding. We then test the performance of our model on part-of-speech tagging, named entity recognition, sentiment analysis, semantic relation identification and semantic relatedness, controlling for embedding dimensionality. We find that multi-sense embeddings do improve performance on some tasks (part-of-speech tagging, semantic relation identification, semantic relatedness) but not on others (named entity recognition, various forms of sentiment analysis). We discuss how these differences may be caused by the different role of word sense information in each of the tasks. The results highlight the importance of testing embedding models in real applications.",Do Multi-Sense Embeddings Improve Natural Language Understanding?,"Learning a distinct representation for each sense of an ambiguous word could lead to more powerful and fine-grained models of vector-space representations. Yet while 'multi-sense' methods have been proposed and tested on artificial wordsimilarity tasks, we don't know if they improve real natural language understanding tasks. In this paper we introduce a multisense embedding model based on Chinese Restaurant Processes that achieves state of the art performance on matching human word similarity judgments, and propose a pipelined architecture for incorporating multi-sense embeddings into language understanding. 
We then test the performance of our model on part-of-speech tagging, named entity recognition, sentiment analysis, semantic relation identification and semantic relatedness, controlling for embedding dimensionality. We find that multi-sense embeddings do improve performance on some tasks (part-of-speech tagging, semantic relation identification, semantic relatedness) but not on others (named entity recognition, various forms of sentiment analysis). We discuss how these differences may be caused by the different role of word sense information in each of the tasks. The results highlight the importance of testing embedding models in real applications.","We would like to thank Sam Bowman, Ignacio Cases, Kevin Gu, Gabor Angeli, Sida Wang, Percy Liang and other members of the Stanford NLP group, as well as anonymous reviewers for their helpful advice on various aspects of this work. We gratefully acknowledge the support of the NSF via award IIS-1514268, the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA8750-13-2-0040. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NSF, DARPA, AFRL, or the US government.","Do Multi-Sense Embeddings Improve Natural Language Understanding?. Learning a distinct representation for each sense of an ambiguous word could lead to more powerful and fine-grained models of vector-space representations. Yet while 'multi-sense' methods have been proposed and tested on artificial wordsimilarity tasks, we don't know if they improve real natural language understanding tasks. In this paper we introduce a multisense embedding model based on Chinese Restaurant Processes that achieves state of the art performance on matching human word similarity judgments, and propose a pipelined architecture for incorporating multi-sense embeddings into language understanding. We then test the performance of our model on part-of-speech tagging, named entity recognition, sentiment analysis, semantic relation identification and semantic relatedness, controlling for embedding dimensionality. We find that multi-sense embeddings do improve performance on some tasks (part-of-speech tagging, semantic relation identification, semantic relatedness) but not on others (named entity recognition, various forms of sentiment analysis). We discuss how these differences may be caused by the different role of word sense information in each of the tasks. The results highlight the importance of testing embedding models in real applications.",2015
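As a rough, heavily simplified illustration of Chinese-Restaurant-Process style sense induction (not the model proposed in the paper), the sketch below assigns each occurrence's context vector to an existing sense cluster or opens a new one; the concentration parameter gamma, the similarity weighting, and the centroid update are all assumptions.

```python
# Toy sketch, not the paper's model: Chinese-Restaurant-Process style online
# sense induction. Each occurrence (a context vector) joins an existing sense
# with weight proportional to (sense count * cosine to the sense centroid) or
# opens a new sense with weight gamma. All hyperparameters are illustrative.
import numpy as np

def crp_assign_senses(context_vectors, gamma=1.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    centroids, counts, assignments = [], [], []
    for v in context_vectors:
        v = v / (np.linalg.norm(v) + 1e-12)
        weights = [c * max(float(v @ m), 0.0) for c, m in zip(counts, centroids)]
        weights.append(gamma)                   # weight of opening a new sense
        probs = np.array(weights) / sum(weights)
        k = int(rng.choice(len(probs), p=probs))
        if k == len(centroids):                 # new sense
            centroids.append(v.copy())
            counts.append(1)
        else:                                   # running-mean centroid update
            counts[k] += 1
            centroids[k] = centroids[k] + (v - centroids[k]) / counts[k]
        assignments.append(k)
    return assignments

# Usage: vectors from two well-separated clusters mostly land in two senses.
rng = np.random.default_rng(1)
vecs = np.vstack([rng.normal(+1.0, 0.1, (20, 50)), rng.normal(-1.0, 0.1, (20, 50))])
print(crp_assign_senses(vecs))
```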
walinska-potoniec-2020-urszula,https://aclanthology.org/2020.semeval-1.161,0,,,,,,,"Urszula Wali\'nska at SemEval-2020 Task 8: Fusion of Text and Image Features Using LSTM and VGG16 for Memotion Analysis. In the paper, we describe the Urszula Walińska's entry to the SemEval-2020 Task 8: Memotion Analysis. The sentiment analysis of memes task, is motivated by a pervasive problem of offensive content spread in social media up to the present time. In fact, memes are an important medium of expressing opinion and emotions, therefore they can be hateful at many times. In order to identify emotions expressed by memes we construct a tool based on neural networks and deep learning methods. It takes an advantage of a multi-modal nature of the task and performs fusion of image and text features extracted by models dedicated to this task. Our solution achieved 0.346 macro F1-score in Task A-Sentiment Classification, which brought us to the 7th place in the official rank of the competition.",Urszula Wali{\'n}ska at {S}em{E}val-2020 Task 8: Fusion of Text and Image Features Using {LSTM} and {VGG}16 for Memotion Analysis,"In the paper, we describe the Urszula Walińska's entry to the SemEval-2020 Task 8: Memotion Analysis. The sentiment analysis of memes task, is motivated by a pervasive problem of offensive content spread in social media up to the present time. In fact, memes are an important medium of expressing opinion and emotions, therefore they can be hateful at many times. In order to identify emotions expressed by memes we construct a tool based on neural networks and deep learning methods. It takes an advantage of a multi-modal nature of the task and performs fusion of image and text features extracted by models dedicated to this task. Our solution achieved 0.346 macro F1-score in Task A-Sentiment Classification, which brought us to the 7th place in the official rank of the competition.",Urszula Wali\'nska at SemEval-2020 Task 8: Fusion of Text and Image Features Using LSTM and VGG16 for Memotion Analysis,"In the paper, we describe the Urszula Walińska's entry to the SemEval-2020 Task 8: Memotion Analysis. The sentiment analysis of memes task, is motivated by a pervasive problem of offensive content spread in social media up to the present time. In fact, memes are an important medium of expressing opinion and emotions, therefore they can be hateful at many times. In order to identify emotions expressed by memes we construct a tool based on neural networks and deep learning methods. It takes an advantage of a multi-modal nature of the task and performs fusion of image and text features extracted by models dedicated to this task. Our solution achieved 0.346 macro F1-score in Task A-Sentiment Classification, which brought us to the 7th place in the official rank of the competition.",Urszula Walińska executed the research as a part of master thesis project under the supervision of Jedrzej Potoniec. This work was partially funded by project 0311/SBAD/0678.,"Urszula Wali\'nska at SemEval-2020 Task 8: Fusion of Text and Image Features Using LSTM and VGG16 for Memotion Analysis. In the paper, we describe the Urszula Walińska's entry to the SemEval-2020 Task 8: Memotion Analysis. The sentiment analysis of memes task, is motivated by a pervasive problem of offensive content spread in social media up to the present time. In fact, memes are an important medium of expressing opinion and emotions, therefore they can be hateful at many times. 
In order to identify emotions expressed by memes we construct a tool based on neural networks and deep learning methods. It takes advantage of the multi-modal nature of the task and performs fusion of image and text features extracted by models dedicated to this task. Our solution achieved 0.346 macro F1-score in Task A-Sentiment Classification, which brought us to the 7th place in the official ranking of the competition.",2020
rudinger-etal-2018-neural,https://aclanthology.org/D18-1114,0,,,,,,,"Neural-Davidsonian Semantic Proto-role Labeling. We present a model for semantic proto-role labeling (SPRL) using an adapted bidirectional LSTM encoding strategy that we call Neural-Davidsonian: predicate-argument structure is represented as pairs of hidden states corresponding to predicate and argument head tokens of the input sequence. We demonstrate: (1) state-of-the-art results in SPRL, and (2) that our network naturally shares parameters between attributes, allowing for learning new attribute types with limited added supervision.",Neural-{D}avidsonian Semantic Proto-role Labeling,"We present a model for semantic proto-role labeling (SPRL) using an adapted bidirectional LSTM encoding strategy that we call Neural-Davidsonian: predicate-argument structure is represented as pairs of hidden states corresponding to predicate and argument head tokens of the input sequence. We demonstrate: (1) state-of-the-art results in SPRL, and (2) that our network naturally shares parameters between attributes, allowing for learning new attribute types with limited added supervision.",Neural-Davidsonian Semantic Proto-role Labeling,"We present a model for semantic proto-role labeling (SPRL) using an adapted bidirectional LSTM encoding strategy that we call Neural-Davidsonian: predicate-argument structure is represented as pairs of hidden states corresponding to predicate and argument head tokens of the input sequence. We demonstrate: (1) state-of-the-art results in SPRL, and (2) that our network naturally shares parameters between attributes, allowing for learning new attribute types with limited added supervision.","This research was supported by the JHU HLT-COE, DARPA AIDA, and NSF GRFP (Grant No. DGE-1232825). The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA, NSF, or the U.S. Government.","Neural-Davidsonian Semantic Proto-role Labeling. We present a model for semantic proto-role labeling (SPRL) using an adapted bidirectional LSTM encoding strategy that we call Neural-Davidsonian: predicate-argument structure is represented as pairs of hidden states corresponding to predicate and argument head tokens of the input sequence. We demonstrate: (1) state-of-the-art results in SPRL, and (2) that our network naturally shares parameters between attributes, allowing for learning new attribute types with limited added supervision.",2018
sheikh-etal-2016-diachronic,https://aclanthology.org/L16-1609,0,,,,,,,"How Diachronic Text Corpora Affect Context based Retrieval of OOV Proper Names for Audio News. Out-Of-Vocabulary (OOV) words missed by Large Vocabulary Continuous Speech Recognition (LVCSR) systems can be recovered with the help of topic and semantic context of the OOV words captured from a diachronic text corpus. In this paper we investigate how the choice of documents for the diachronic text corpora affects the retrieval of OOV Proper Names (PNs) relevant to an audio document. We first present our diachronic French broadcast news datasets, which highlight the motivation of our study on OOV PNs. Then the effect of using diachronic text data from different sources and a different time span is analysed. With OOV PN retrieval experiments on French broadcast news videos, we conclude that a diachronic corpus with text from different sources leads to better retrieval performance than one relying on text from single source or from a longer time span.",How Diachronic Text Corpora Affect Context based Retrieval of {OOV} Proper Names for Audio News,"Out-Of-Vocabulary (OOV) words missed by Large Vocabulary Continuous Speech Recognition (LVCSR) systems can be recovered with the help of topic and semantic context of the OOV words captured from a diachronic text corpus. In this paper we investigate how the choice of documents for the diachronic text corpora affects the retrieval of OOV Proper Names (PNs) relevant to an audio document. We first present our diachronic French broadcast news datasets, which highlight the motivation of our study on OOV PNs. Then the effect of using diachronic text data from different sources and a different time span is analysed. With OOV PN retrieval experiments on French broadcast news videos, we conclude that a diachronic corpus with text from different sources leads to better retrieval performance than one relying on text from single source or from a longer time span.",How Diachronic Text Corpora Affect Context based Retrieval of OOV Proper Names for Audio News,"Out-Of-Vocabulary (OOV) words missed by Large Vocabulary Continuous Speech Recognition (LVCSR) systems can be recovered with the help of topic and semantic context of the OOV words captured from a diachronic text corpus. In this paper we investigate how the choice of documents for the diachronic text corpora affects the retrieval of OOV Proper Names (PNs) relevant to an audio document. We first present our diachronic French broadcast news datasets, which highlight the motivation of our study on OOV PNs. Then the effect of using diachronic text data from different sources and a different time span is analysed. With OOV PN retrieval experiments on French broadcast news videos, we conclude that a diachronic corpus with text from different sources leads to better retrieval performance than one relying on text from single source or from a longer time span.",This work is funded by the ContNomina project supported by the French National Research Agency (ANR) under the contract ANR-12-BS02-0009.,"How Diachronic Text Corpora Affect Context based Retrieval of OOV Proper Names for Audio News. Out-Of-Vocabulary (OOV) words missed by Large Vocabulary Continuous Speech Recognition (LVCSR) systems can be recovered with the help of topic and semantic context of the OOV words captured from a diachronic text corpus. 
In this paper we investigate how the choice of documents for the diachronic text corpora affects the retrieval of OOV Proper Names (PNs) relevant to an audio document. We first present our diachronic French broadcast news datasets, which highlight the motivation of our study on OOV PNs. Then the effect of using diachronic text data from different sources and a different time span is analysed. With OOV PN retrieval experiments on French broadcast news videos, we conclude that a diachronic corpus with text from different sources leads to better retrieval performance than one relying on text from single source or from a longer time span.",2016
tripodi-etal-2019-tracing,https://aclanthology.org/W19-4715,0,,,,,,,"Tracing Antisemitic Language Through Diachronic Embedding Projections: France 1789-1914. We investigate some aspects of the history of antisemitism in France, one of the cradles of modern antisemitism, using diachronic word embeddings. We constructed a large corpus of French books and periodicals issues that contain a keyword related to Jews and performed a diachronic word embedding over the 1789-1914 period. We studied the changes over time in the semantic spaces of 4 target words and performed embedding projections over 6 streams of antisemitic discourse. This allowed us to track the evolution of antisemitic bias in the religious, economic, socio-politic, racial, ethic and conspiratorial domains. Projections show a trend of growing antisemitism, especially in the years starting in the mid-80s and culminating in the Dreyfus affair. Our analysis also allows us to highlight the peculiar adverse bias towards Judaism in the broader context of other religions.",Tracing Antisemitic Language Through Diachronic Embedding Projections: {F}rance 1789-1914,"We investigate some aspects of the history of antisemitism in France, one of the cradles of modern antisemitism, using diachronic word embeddings. We constructed a large corpus of French books and periodicals issues that contain a keyword related to Jews and performed a diachronic word embedding over the 1789-1914 period. We studied the changes over time in the semantic spaces of 4 target words and performed embedding projections over 6 streams of antisemitic discourse. This allowed us to track the evolution of antisemitic bias in the religious, economic, socio-politic, racial, ethic and conspiratorial domains. Projections show a trend of growing antisemitism, especially in the years starting in the mid-80s and culminating in the Dreyfus affair. Our analysis also allows us to highlight the peculiar adverse bias towards Judaism in the broader context of other religions.",Tracing Antisemitic Language Through Diachronic Embedding Projections: France 1789-1914,"We investigate some aspects of the history of antisemitism in France, one of the cradles of modern antisemitism, using diachronic word embeddings. We constructed a large corpus of French books and periodicals issues that contain a keyword related to Jews and performed a diachronic word embedding over the 1789-1914 period. We studied the changes over time in the semantic spaces of 4 target words and performed embedding projections over 6 streams of antisemitic discourse. This allowed us to track the evolution of antisemitic bias in the religious, economic, socio-politic, racial, ethic and conspiratorial domains. Projections show a trend of growing antisemitism, especially in the years starting in the mid-80s and culminating in the Dreyfus affair. Our analysis also allows us to highlight the peculiar adverse bias towards Judaism in the broader context of other religions.",The authors of this work have received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 732942. The experiments have been run on the SCSCF cluster of Ca' Foscari University.,"Tracing Antisemitic Language Through Diachronic Embedding Projections: France 1789-1914. We investigate some aspects of the history of antisemitism in France, one of the cradles of modern antisemitism, using diachronic word embeddings. 
We constructed a large corpus of French books and periodical issues that contain a keyword related to Jews and performed a diachronic word embedding over the 1789-1914 period. We studied the changes over time in the semantic spaces of 4 target words and performed embedding projections over 6 streams of antisemitic discourse. This allowed us to track the evolution of antisemitic bias in the religious, economic, socio-political, racial, ethical and conspiratorial domains. Projections show a trend of growing antisemitism, especially in the years starting in the mid-1880s and culminating in the Dreyfus affair. Our analysis also allows us to highlight the peculiar adverse bias towards Judaism in the broader context of other religions.",2019
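An embedding projection of the kind used in the study above can be approximated by scoring a target word against a small seed lexicon within each time slice and tracking the score across slices. The data structures and placeholder seed words below are assumptions for illustration, not the lexicons or corpus used in the paper.

```python
# A minimal sketch of an embedding projection for diachronic analysis: score a
# target word by its mean cosine similarity to a seed lexicon, separately for
# each time slice. Seed words and the data layout are placeholders only.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def projection_score(embeddings, target, seeds):
    """embeddings: dict word -> vector, for a single time slice."""
    seed_vecs = [embeddings[w] for w in seeds if w in embeddings]
    if target not in embeddings or not seed_vecs:
        return None
    return float(np.mean([cosine(embeddings[target], s) for s in seed_vecs]))

def trace(embeddings_by_slice, target, seeds):
    """Return {time slice -> score}, i.e. the trend to plot over time."""
    return {t: projection_score(emb, target, seeds)
            for t, emb in sorted(embeddings_by_slice.items())}

# Toy usage with random vectors standing in for one time slice's embeddings.
rng = np.random.default_rng(0)
toy = {w: rng.normal(size=100) for w in ["target_word", "seed_a", "seed_b"]}
print(projection_score(toy, "target_word", seeds=["seed_a", "seed_b"]))
```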
popat-etal-2013-haves,https://aclanthology.org/P13-1041,0,,,,,,,"The Haves and the Have-Nots: Leveraging Unlabelled Corpora for Sentiment Analysis. Expensive feature engineering based on WordNet senses has been shown to be useful for document level sentiment classification. A plausible reason for such a performance improvement is the reduction in data sparsity. However, such a reduction could be achieved with a lesser effort through the means of syntagma based word clustering. In this paper, the problem of data sparsity in sentiment analysis, both monolingual and cross-lingual, is addressed through the means of clustering. Experiments show that cluster based data sparsity reduction leads to performance better than sense based classification for sentiment analysis at document level. Similar idea is applied to Cross Lingual Sentiment Analysis (CLSA), and it is shown that reduction in data sparsity (after translation or bilingual-mapping) produces accuracy higher than Machine Translation based CLSA and sense based CLSA.",The Haves and the Have-Nots: Leveraging Unlabelled Corpora for Sentiment Analysis,"Expensive feature engineering based on WordNet senses has been shown to be useful for document level sentiment classification. A plausible reason for such a performance improvement is the reduction in data sparsity. However, such a reduction could be achieved with a lesser effort through the means of syntagma based word clustering. In this paper, the problem of data sparsity in sentiment analysis, both monolingual and cross-lingual, is addressed through the means of clustering. Experiments show that cluster based data sparsity reduction leads to performance better than sense based classification for sentiment analysis at document level. Similar idea is applied to Cross Lingual Sentiment Analysis (CLSA), and it is shown that reduction in data sparsity (after translation or bilingual-mapping) produces accuracy higher than Machine Translation based CLSA and sense based CLSA.",The Haves and the Have-Nots: Leveraging Unlabelled Corpora for Sentiment Analysis,"Expensive feature engineering based on WordNet senses has been shown to be useful for document level sentiment classification. A plausible reason for such a performance improvement is the reduction in data sparsity. However, such a reduction could be achieved with a lesser effort through the means of syntagma based word clustering. In this paper, the problem of data sparsity in sentiment analysis, both monolingual and cross-lingual, is addressed through the means of clustering. Experiments show that cluster based data sparsity reduction leads to performance better than sense based classification for sentiment analysis at document level. Similar idea is applied to Cross Lingual Sentiment Analysis (CLSA), and it is shown that reduction in data sparsity (after translation or bilingual-mapping) produces accuracy higher than Machine Translation based CLSA and sense based CLSA.",,"The Haves and the Have-Nots: Leveraging Unlabelled Corpora for Sentiment Analysis. Expensive feature engineering based on WordNet senses has been shown to be useful for document level sentiment classification. A plausible reason for such a performance improvement is the reduction in data sparsity. However, such a reduction could be achieved with a lesser effort through the means of syntagma based word clustering. In this paper, the problem of data sparsity in sentiment analysis, both monolingual and cross-lingual, is addressed through the means of clustering. 
Experiments show that cluster-based data sparsity reduction leads to better performance than sense-based classification for sentiment analysis at the document level. A similar idea is applied to Cross Lingual Sentiment Analysis (CLSA), and it is shown that reduction in data sparsity (after translation or bilingual-mapping) produces accuracy higher than Machine Translation-based CLSA and sense-based CLSA.",2013
aji-etal-2020-neural,https://aclanthology.org/2020.acl-main.688,0,,,,,,,"In Neural Machine Translation, What Does Transfer Learning Transfer?. Transfer learning improves quality for lowresource machine translation, but it is unclear what exactly it transfers. We perform several ablation studies that limit information transfer, then measure the quality impact across three language pairs to gain a black-box understanding of transfer learning. Word embeddings play an important role in transfer learning, particularly if they are properly aligned. Although transfer learning can be performed without embeddings, results are sub-optimal. In contrast, transferring only the embeddings but nothing else yields catastrophic results. We then investigate diagonal alignments with auto-encoders over real languages and randomly generated sequences, finding even randomly generated sequences as parents yield noticeable but smaller gains. Finally, transfer learning can eliminate the need for a warmup phase when training transformer models in high resource language pairs.","In Neural Machine Translation, What Does Transfer Learning Transfer?","Transfer learning improves quality for lowresource machine translation, but it is unclear what exactly it transfers. We perform several ablation studies that limit information transfer, then measure the quality impact across three language pairs to gain a black-box understanding of transfer learning. Word embeddings play an important role in transfer learning, particularly if they are properly aligned. Although transfer learning can be performed without embeddings, results are sub-optimal. In contrast, transferring only the embeddings but nothing else yields catastrophic results. We then investigate diagonal alignments with auto-encoders over real languages and randomly generated sequences, finding even randomly generated sequences as parents yield noticeable but smaller gains. Finally, transfer learning can eliminate the need for a warmup phase when training transformer models in high resource language pairs.","In Neural Machine Translation, What Does Transfer Learning Transfer?","Transfer learning improves quality for lowresource machine translation, but it is unclear what exactly it transfers. We perform several ablation studies that limit information transfer, then measure the quality impact across three language pairs to gain a black-box understanding of transfer learning. Word embeddings play an important role in transfer learning, particularly if they are properly aligned. Although transfer learning can be performed without embeddings, results are sub-optimal. In contrast, transferring only the embeddings but nothing else yields catastrophic results. We then investigate diagonal alignments with auto-encoders over real languages and randomly generated sequences, finding even randomly generated sequences as parents yield noticeable but smaller gains. Finally, transfer learning can eliminate the need for a warmup phase when training transformer models in high resource language pairs.","This work was performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3) operated by the University of Cambridge Research Computing Service (http: //www.csd3.cam.ac.uk/), provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/P020259/1), and DiRAC funding from the Science and Technology Facilities Council (www.dirac.ac.uk). 
Alham Fikri Aji is funded by the Indonesia Endowment Fund for Education scholarship scheme. Rico Sennrich acknowledges support of the Swiss National Science Foundation (MUTAMUR; no. 176727).","In Neural Machine Translation, What Does Transfer Learning Transfer?. Transfer learning improves quality for lowresource machine translation, but it is unclear what exactly it transfers. We perform several ablation studies that limit information transfer, then measure the quality impact across three language pairs to gain a black-box understanding of transfer learning. Word embeddings play an important role in transfer learning, particularly if they are properly aligned. Although transfer learning can be performed without embeddings, results are sub-optimal. In contrast, transferring only the embeddings but nothing else yields catastrophic results. We then investigate diagonal alignments with auto-encoders over real languages and randomly generated sequences, finding even randomly generated sequences as parents yield noticeable but smaller gains. Finally, transfer learning can eliminate the need for a warmup phase when training transformer models in high resource language pairs.",2020
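The role of shared, aligned embeddings discussed in the record above can be illustrated with a simple initialisation routine that copies the parent model's embedding rows for every (sub)word type the two vocabularies share and randomly initialises the rest. This is a sketch of one common recipe under assumed toy vocabularies and dimensions, not the paper's ablation code.

```python
# A sketch of one common transfer recipe (an assumption, not the paper's code):
# build the child NMT model's embedding matrix by copying the trained parent's
# rows for shared (sub)word types and randomly initialising everything else.
import numpy as np

def init_child_embeddings(parent_emb, parent_vocab, child_vocab, scale=0.01, seed=0):
    """parent_emb: (|V_parent|, d) array; *_vocab: dict token -> row index."""
    rng = np.random.default_rng(seed)
    child_emb = rng.normal(0.0, scale, size=(len(child_vocab), parent_emb.shape[1]))
    shared = 0
    for tok, j in child_vocab.items():
        i = parent_vocab.get(tok)
        if i is not None:
            child_emb[j] = parent_emb[i]   # transfer the trained row
            shared += 1
    return child_emb, shared

# Toy usage: subword vocabularies usually overlap, so many rows carry over.
parent_vocab = {"_the": 0, "_house": 1, "ing": 2, "</s>": 3}
child_vocab = {"_the": 0, "_haus": 1, "ing": 2, "</s>": 3}
parent_emb = np.random.default_rng(1).normal(size=(4, 8))
emb, n = init_child_embeddings(parent_emb, parent_vocab, child_vocab)
print(f"copied {n} of {len(child_vocab)} embedding rows from the parent model")
```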
abed-reiter-2020-arabic,https://aclanthology.org/2020.inlg-1.2,0,,,,,,,"Arabic NLG Language Functions. The Arabic language has very limited support from NLG researchers. In this paper, we explain the challenges of the core grammar, provide a lexical resource, and implement the first language functions for the Arabic language. We did a human evaluation to evaluate our functions in generating sentences from the NADA Corpus.",{A}rabic {NLG} Language Functions,"The Arabic language has very limited support from NLG researchers. In this paper, we explain the challenges of the core grammar, provide a lexical resource, and implement the first language functions for the Arabic language. We did a human evaluation to evaluate our functions in generating sentences from the NADA Corpus.",Arabic NLG Language Functions,"The Arabic language has very limited support from NLG researchers. In this paper, we explain the challenges of the core grammar, provide a lexical resource, and implement the first language functions for the Arabic language. We did a human evaluation to evaluate our functions in generating sentences from the NADA Corpus.",,"Arabic NLG Language Functions. The Arabic language has very limited support from NLG researchers. In this paper, we explain the challenges of the core grammar, provide a lexical resource, and implement the first language functions for the Arabic language. We did a human evaluation to evaluate our functions in generating sentences from the NADA Corpus.",2020
musha-1986-new,https://aclanthology.org/C86-1111,0,,,,,,,"A New Predictive Analyzer of English. Aspects of syntactic predictions made during the recognition of English sentences are investigated. We reinforce Kuno's original predictive analyzer[i] by introducing five types of predictions. For each type of prediction, we discuss and present its necessity, its description method, and recognition mechanism. We make use of three kinds of stacks whose behavior is specified by grammar rules in an extended version of Greibach normal form. We also investigate other factors that affect the predictive recognition process, i.e., preferences among syntactic ambiguities and necessary amount of lookahead. These factors as well as the proposed handling mechanisms of predictions are tested by analyzing two kinds of articles. In our experiment, more than seventy percent of sentences are recognized and looking two words ahead seems to be the critical length for the predictive recognition.",A New Predictive Analyzer of {E}nglish,"Aspects of syntactic predictions made during the recognition of English sentences are investigated. We reinforce Kuno's original predictive analyzer[i] by introducing five types of predictions. For each type of prediction, we discuss and present its necessity, its description method, and recognition mechanism. We make use of three kinds of stacks whose behavior is specified by grammar rules in an extended version of Greibach normal form. We also investigate other factors that affect the predictive recognition process, i.e., preferences among syntactic ambiguities and necessary amount of lookahead. These factors as well as the proposed handling mechanisms of predictions are tested by analyzing two kinds of articles. In our experiment, more than seventy percent of sentences are recognized and looking two words ahead seems to be the critical length for the predictive recognition.",A New Predictive Analyzer of English,"Aspects of syntactic predictions made during the recognition of English sentences are investigated. We reinforce Kuno's original predictive analyzer[i] by introducing five types of predictions. For each type of prediction, we discuss and present its necessity, its description method, and recognition mechanism. We make use of three kinds of stacks whose behavior is specified by grammar rules in an extended version of Greibach normal form. We also investigate other factors that affect the predictive recognition process, i.e., preferences among syntactic ambiguities and necessary amount of lookahead. These factors as well as the proposed handling mechanisms of predictions are tested by analyzing two kinds of articles. In our experiment, more than seventy percent of sentences are recognized and looking two words ahead seems to be the critical length for the predictive recognition.","I would especially like to thank my adviser, Prof. A. Yonezawa of Tokyo Institute of Technology, for his valuable comments on this researdl and encouragement. I also thank the members of Yonezawa Lab. for their comments on my research. I also give my special thanks to the managers of Resource Sharing Company who allowed me to use their valuable dictionary for my research.","A New Predictive Analyzer of English. Aspects of syntactic predictions made during the recognition of English sentences are investigated. We reinforce Kuno's original predictive analyzer[i] by introducing five types of predictions. 
For each type of prediction, we discuss and present its necessity, its description method, and recognition mechanism. We make use of three kinds of stacks whose behavior is specified by grammar rules in an extended version of Greibach normal form. We also investigate other factors that affect the predictive recognition process, i.e., preferences among syntactic ambiguities and necessary amount of lookahead. These factors as well as the proposed handling mechanisms of predictions are tested by analyzing two kinds of articles. In our experiment, more than seventy percent of sentences are recognized and looking two words ahead seems to be the critical length for the predictive recognition.",1986
holderness-etal-2018-analysis,https://aclanthology.org/W18-5615,1,,,,health,,,"Analysis of Risk Factor Domains in Psychosis Patient Health Records. Readmission after discharge from a hospital is disruptive and costly, regardless of the reason. However, it can be particularly problematic for psychiatric patients, so predicting which patients may be readmitted is critically important but also very difficult. Clinical narratives in psychiatric electronic health records (EHRs) span a wide range of topics and vocabulary; therefore, a psychiatric readmission prediction model must begin with a robust and interpretable topic extraction component. We created a data pipeline for using document vector similarity metrics to perform topic extraction on psychiatric EHR data in service of our long-term goal of creating a readmission risk classifier. We show initial results for our topic extraction model and identify additional features we will be incorporating in the future.",Analysis of Risk Factor Domains in Psychosis Patient Health Records,"Readmission after discharge from a hospital is disruptive and costly, regardless of the reason. However, it can be particularly problematic for psychiatric patients, so predicting which patients may be readmitted is critically important but also very difficult. Clinical narratives in psychiatric electronic health records (EHRs) span a wide range of topics and vocabulary; therefore, a psychiatric readmission prediction model must begin with a robust and interpretable topic extraction component. We created a data pipeline for using document vector similarity metrics to perform topic extraction on psychiatric EHR data in service of our long-term goal of creating a readmission risk classifier. We show initial results for our topic extraction model and identify additional features we will be incorporating in the future.",Analysis of Risk Factor Domains in Psychosis Patient Health Records,"Readmission after discharge from a hospital is disruptive and costly, regardless of the reason. However, it can be particularly problematic for psychiatric patients, so predicting which patients may be readmitted is critically important but also very difficult. Clinical narratives in psychiatric electronic health records (EHRs) span a wide range of topics and vocabulary; therefore, a psychiatric readmission prediction model must begin with a robust and interpretable topic extraction component. We created a data pipeline for using document vector similarity metrics to perform topic extraction on psychiatric EHR data in service of our long-term goal of creating a readmission risk classifier. We show initial results for our topic extraction model and identify additional features we will be incorporating in the future.",This work was supported by a grant from the National Institute of Mental Health (grant no. 5R01MH109687 to Mei-Hua Hall). We would also like to thank the LOUHI 2018 Workshop reviewers for their constructive and helpful comments.,"Analysis of Risk Factor Domains in Psychosis Patient Health Records. Readmission after discharge from a hospital is disruptive and costly, regardless of the reason. However, it can be particularly problematic for psychiatric patients, so predicting which patients may be readmitted is critically important but also very difficult. 
Clinical narratives in psychiatric electronic health records (EHRs) span a wide range of topics and vocabulary; therefore, a psychiatric readmission prediction model must begin with a robust and interpretable topic extraction component. We created a data pipeline for using document vector similarity metrics to perform topic extraction on psychiatric EHR data in service of our long-term goal of creating a readmission risk classifier. We show initial results for our topic extraction model and identify additional features we will be incorporating in the future.",2018
mieskes-2017-quantitative,https://aclanthology.org/W17-1603,0,,,,,,,"A Quantitative Study of Data in the NLP community. We present results on a quantitative analysis of publications in the NLP domain on collecting, publishing and availability of research data. We find that a wide range of publications rely on data crawled from the web, but few give details on how potentially sensitive data was treated. Additionally, we find that while links to repositories of data are given, they often do not work even a short time after publication. We put together several suggestions on how to improve this situation based on publications from the NLP domain, but also other research areas.",A Quantitative Study of Data in the {NLP} community,"We present results on a quantitative analysis of publications in the NLP domain on collecting, publishing and availability of research data. We find that a wide range of publications rely on data crawled from the web, but few give details on how potentially sensitive data was treated. Additionally, we find that while links to repositories of data are given, they often do not work even a short time after publication. We put together several suggestions on how to improve this situation based on publications from the NLP domain, but also other research areas.",A Quantitative Study of Data in the NLP community,"We present results on a quantitative analysis of publications in the NLP domain on collecting, publishing and availability of research data. We find that a wide range of publications rely on data crawled from the web, but few give details on how potentially sensitive data was treated. Additionally, we find that while links to repositories of data are given, they often do not work even a short time after publication. We put together several suggestions on how to improve this situation based on publications from the NLP domain, but also other research areas.","This work was partially supported by the DFGfunded research training group ""Adaptive Preparation of Information from Heterogeneous Sources"" (AIPHES, GRK 1994/1). We would like to thank the reviewers for their valuable comments that helped to considerably improve the paper.","A Quantitative Study of Data in the NLP community. We present results on a quantitative analysis of publications in the NLP domain on collecting, publishing and availability of research data. We find that a wide range of publications rely on data crawled from the web, but few give details on how potentially sensitive data was treated. Additionally, we find that while links to repositories of data are given, they often do not work even a short time after publication. We put together several suggestions on how to improve this situation based on publications from the NLP domain, but also other research areas.",2017
oepen-etal-2016-opt,https://aclanthology.org/K16-2002,0,,,,,,,"OPT: Oslo--Potsdam--Teesside. Pipelining Rules, Rankers, and Classifier Ensembles for Shallow Discourse Parsing. The OPT submission to the Shared Task of the 2016 Conference on Natural Language Learning (CoNLL) implements a 'classic' pipeline architecture, combining binary classification of (candidate) explicit connectives, heuristic rules for non-explicit discourse relations, ranking and 'editing' of syntactic constituents for argument identification, and an ensemble of classifiers to assign discourse senses. With an end-toend performance of 27.77 F 1 on the English 'blind' test data, our system advances the previous state of the art (Wang & Lan, 2015) by close to four F 1 points, with particularly good results for the argument identification sub-tasks.","{OPT}: {O}slo{--}{P}otsdam{--}{T}eesside. Pipelining Rules, Rankers, and Classifier Ensembles for Shallow Discourse Parsing","The OPT submission to the Shared Task of the 2016 Conference on Natural Language Learning (CoNLL) implements a 'classic' pipeline architecture, combining binary classification of (candidate) explicit connectives, heuristic rules for non-explicit discourse relations, ranking and 'editing' of syntactic constituents for argument identification, and an ensemble of classifiers to assign discourse senses. With an end-toend performance of 27.77 F 1 on the English 'blind' test data, our system advances the previous state of the art (Wang & Lan, 2015) by close to four F 1 points, with particularly good results for the argument identification sub-tasks.","OPT: Oslo--Potsdam--Teesside. Pipelining Rules, Rankers, and Classifier Ensembles for Shallow Discourse Parsing","The OPT submission to the Shared Task of the 2016 Conference on Natural Language Learning (CoNLL) implements a 'classic' pipeline architecture, combining binary classification of (candidate) explicit connectives, heuristic rules for non-explicit discourse relations, ranking and 'editing' of syntactic constituents for argument identification, and an ensemble of classifiers to assign discourse senses. With an end-toend performance of 27.77 F 1 on the English 'blind' test data, our system advances the previous state of the art (Wang & Lan, 2015) by close to four F 1 points, with particularly good results for the argument identification sub-tasks.","We are indebted to Te Rutherford of Brandeis University for his effort in preparing data and infrastructure for the Task, as well as for shepherding our team and everyone else through its various stages. We are grateful to two anonymous reviewers for comments on an earlier version of this manuscript.","OPT: Oslo--Potsdam--Teesside. Pipelining Rules, Rankers, and Classifier Ensembles for Shallow Discourse Parsing. The OPT submission to the Shared Task of the 2016 Conference on Natural Language Learning (CoNLL) implements a 'classic' pipeline architecture, combining binary classification of (candidate) explicit connectives, heuristic rules for non-explicit discourse relations, ranking and 'editing' of syntactic constituents for argument identification, and an ensemble of classifiers to assign discourse senses. With an end-toend performance of 27.77 F 1 on the English 'blind' test data, our system advances the previous state of the art (Wang & Lan, 2015) by close to four F 1 points, with particularly good results for the argument identification sub-tasks.",2016
sikos-pado-2018-using,https://aclanthology.org/W18-3813,0,,,,,,,"Using Embeddings to Compare FrameNet Frames Across Languages. Much of the recent interest in Frame Semantics is fueled by the substantial extent of its applicability across languages. At the same time, lexicographic studies have found that the applicability of individual frames can be diminished by cross-lingual divergences regarding polysemy, syntactic valency, and lexicalization. Due to the large effort involved in manual investigations, there are so far no broad-coverage resources with ""problematic"" frames for any language pair. Our study investigates to what extent multilingual vector representations of frames learned from manually annotated corpora can address this need by serving as a wide coverage source for such divergences. We present a case study for the language pair English-German using the FrameNet and SALSA corpora and find that inferences can be made about cross-lingual frame applicability using a vector space model.",Using Embeddings to Compare {F}rame{N}et Frames Across Languages,"Much of the recent interest in Frame Semantics is fueled by the substantial extent of its applicability across languages. At the same time, lexicographic studies have found that the applicability of individual frames can be diminished by cross-lingual divergences regarding polysemy, syntactic valency, and lexicalization. Due to the large effort involved in manual investigations, there are so far no broad-coverage resources with ""problematic"" frames for any language pair. Our study investigates to what extent multilingual vector representations of frames learned from manually annotated corpora can address this need by serving as a wide coverage source for such divergences. We present a case study for the language pair English-German using the FrameNet and SALSA corpora and find that inferences can be made about cross-lingual frame applicability using a vector space model.",Using Embeddings to Compare FrameNet Frames Across Languages,"Much of the recent interest in Frame Semantics is fueled by the substantial extent of its applicability across languages. At the same time, lexicographic studies have found that the applicability of individual frames can be diminished by cross-lingual divergences regarding polysemy, syntactic valency, and lexicalization. Due to the large effort involved in manual investigations, there are so far no broad-coverage resources with ""problematic"" frames for any language pair. Our study investigates to what extent multilingual vector representations of frames learned from manually annotated corpora can address this need by serving as a wide coverage source for such divergences. We present a case study for the language pair English-German using the FrameNet and SALSA corpora and find that inferences can be made about cross-lingual frame applicability using a vector space model.",,"Using Embeddings to Compare FrameNet Frames Across Languages. Much of the recent interest in Frame Semantics is fueled by the substantial extent of its applicability across languages. At the same time, lexicographic studies have found that the applicability of individual frames can be diminished by cross-lingual divergences regarding polysemy, syntactic valency, and lexicalization. Due to the large effort involved in manual investigations, there are so far no broad-coverage resources with ""problematic"" frames for any language pair. 
Our study investigates to what extent multilingual vector representations of frames learned from manually annotated corpora can address this need by serving as a wide coverage source for such divergences. We present a case study for the language pair English-German using the FrameNet and SALSA corpora and find that inferences can be made about cross-lingual frame applicability using a vector space model.",2018
lee-etal-2005-zero,https://aclanthology.org/I05-1052,0,,,,,,,"Why Is Zero Marking Important in Korean?. This paper argues for the necessity of zero pronoun annotations in Korean treebanks and provides an annotation scheme that can be used to develop a gold standard for testing different anaphor resolution algorithms. Relevant issues of pronoun annotation will be discussed by comparing the Penn Korean Treebank with zero pronoun markup and the newly developing Sejong Teebank without zero pronoun markup. In addition to supportive evidence for zero marking, necessary morphosyntactic and semantic features will be suggested for zero annotation in Korean treebanks.",Why Is Zero Marking Important in {K}orean?,"This paper argues for the necessity of zero pronoun annotations in Korean treebanks and provides an annotation scheme that can be used to develop a gold standard for testing different anaphor resolution algorithms. Relevant issues of pronoun annotation will be discussed by comparing the Penn Korean Treebank with zero pronoun markup and the newly developing Sejong Teebank without zero pronoun markup. In addition to supportive evidence for zero marking, necessary morphosyntactic and semantic features will be suggested for zero annotation in Korean treebanks.",Why Is Zero Marking Important in Korean?,"This paper argues for the necessity of zero pronoun annotations in Korean treebanks and provides an annotation scheme that can be used to develop a gold standard for testing different anaphor resolution algorithms. Relevant issues of pronoun annotation will be discussed by comparing the Penn Korean Treebank with zero pronoun markup and the newly developing Sejong Teebank without zero pronoun markup. In addition to supportive evidence for zero marking, necessary morphosyntactic and semantic features will be suggested for zero annotation in Korean treebanks.",,"Why Is Zero Marking Important in Korean?. This paper argues for the necessity of zero pronoun annotations in Korean treebanks and provides an annotation scheme that can be used to develop a gold standard for testing different anaphor resolution algorithms. Relevant issues of pronoun annotation will be discussed by comparing the Penn Korean Treebank with zero pronoun markup and the newly developing Sejong Teebank without zero pronoun markup. In addition to supportive evidence for zero marking, necessary morphosyntactic and semantic features will be suggested for zero annotation in Korean treebanks.",2005
johnson-watanabe-1990-relational,https://aclanthology.org/W90-0123,0,,,,,,,"Relational-Grammar-Based Generation in the JETS Japanese-English Machine Translation System. This paper describes the design and functioning of the English generation phase in JETS, a limited transfer, Japanese-English machine translation system that is loosely based on the linguistic framework of relational grammar. To facilitate the development of relational-grammar-based generators, we have built an NL-and-application-independent generator shell and relational grammar rulewriting language. The implemented generator, GENIE, maps abstract canonical structures, representing the basic predicate-argument structures of sentences, into well-formed English sentences via a two-stage plan-and-execute design. This modularity permits the independent development of a very general, deterministic execution grammar that is driven by a set of planning rules sensitive to lexical, syntactic and stylistic constraints. Processing in GENIE is category-driven, i.e., grammatical rules are distributed over a part-of-speech hierarchy and, using an inheritance mechanism, are invoked only ff appropriate for the category being processed.",Relational-Grammar-Based Generation in the {JETS} {J}apanese-{E}nglish Machine Translation System,"This paper describes the design and functioning of the English generation phase in JETS, a limited transfer, Japanese-English machine translation system that is loosely based on the linguistic framework of relational grammar. To facilitate the development of relational-grammar-based generators, we have built an NL-and-application-independent generator shell and relational grammar rulewriting language. The implemented generator, GENIE, maps abstract canonical structures, representing the basic predicate-argument structures of sentences, into well-formed English sentences via a two-stage plan-and-execute design. This modularity permits the independent development of a very general, deterministic execution grammar that is driven by a set of planning rules sensitive to lexical, syntactic and stylistic constraints. Processing in GENIE is category-driven, i.e., grammatical rules are distributed over a part-of-speech hierarchy and, using an inheritance mechanism, are invoked only ff appropriate for the category being processed.",Relational-Grammar-Based Generation in the JETS Japanese-English Machine Translation System,"This paper describes the design and functioning of the English generation phase in JETS, a limited transfer, Japanese-English machine translation system that is loosely based on the linguistic framework of relational grammar. To facilitate the development of relational-grammar-based generators, we have built an NL-and-application-independent generator shell and relational grammar rulewriting language. The implemented generator, GENIE, maps abstract canonical structures, representing the basic predicate-argument structures of sentences, into well-formed English sentences via a two-stage plan-and-execute design. This modularity permits the independent development of a very general, deterministic execution grammar that is driven by a set of planning rules sensitive to lexical, syntactic and stylistic constraints. 
Processing in GENIE is category-driven, i.e., grammatical rules are distributed over a part-of-speech hierarchy and, using an inheritance mechanism, are invoked only if appropriate for the category being processed.",,"Relational-Grammar-Based Generation in the JETS Japanese-English Machine Translation System. This paper describes the design and functioning of the English generation phase in JETS, a limited transfer, Japanese-English machine translation system that is loosely based on the linguistic framework of relational grammar. To facilitate the development of relational-grammar-based generators, we have built an NL-and-application-independent generator shell and relational grammar rule-writing language. The implemented generator, GENIE, maps abstract canonical structures, representing the basic predicate-argument structures of sentences, into well-formed English sentences via a two-stage plan-and-execute design. This modularity permits the independent development of a very general, deterministic execution grammar that is driven by a set of planning rules sensitive to lexical, syntactic and stylistic constraints. Processing in GENIE is category-driven, i.e., grammatical rules are distributed over a part-of-speech hierarchy and, using an inheritance mechanism, are invoked only if appropriate for the category being processed.",1990
wise-2014-keynote,https://aclanthology.org/W14-4101,0,,,,,,,"Keynote: Data Archeology: A theory informed approach to analyzing data traces of social interaction in large scale learning environments. Data archeology is a theoreticallyinformed approach to make sense of the digital artifacts left behind by a prior learning ""civilization."" Critical elements include use of theoretical learning models to construct analytic metrics, attention to temporality as a means to reconstruct individual and collective trajectories, and consideration of the pedagogical and technological structures framing activity. Examples of the approach using discussion forum trace data will be presented.",{K}eynote: Data Archeology: A theory informed approach to analyzing data traces of social interaction in large scale learning environments,"Data archeology is a theoreticallyinformed approach to make sense of the digital artifacts left behind by a prior learning ""civilization."" Critical elements include use of theoretical learning models to construct analytic metrics, attention to temporality as a means to reconstruct individual and collective trajectories, and consideration of the pedagogical and technological structures framing activity. Examples of the approach using discussion forum trace data will be presented.",Keynote: Data Archeology: A theory informed approach to analyzing data traces of social interaction in large scale learning environments,"Data archeology is a theoreticallyinformed approach to make sense of the digital artifacts left behind by a prior learning ""civilization."" Critical elements include use of theoretical learning models to construct analytic metrics, attention to temporality as a means to reconstruct individual and collective trajectories, and consideration of the pedagogical and technological structures framing activity. Examples of the approach using discussion forum trace data will be presented.",,"Keynote: Data Archeology: A theory informed approach to analyzing data traces of social interaction in large scale learning environments. Data archeology is a theoreticallyinformed approach to make sense of the digital artifacts left behind by a prior learning ""civilization."" Critical elements include use of theoretical learning models to construct analytic metrics, attention to temporality as a means to reconstruct individual and collective trajectories, and consideration of the pedagogical and technological structures framing activity. Examples of the approach using discussion forum trace data will be presented.",2014
brants-xu-2009-distributed,https://aclanthology.org/N09-4002,0,,,,,,,"Distributed Language Models. Language models are used in a wide variety of natural language applications, including machine translation, speech recognition, spelling correction, optical character recognition, etc. Recent studies have shown that more data is better data, and bigger language models are better language models: the authors found nearly constant machine translation improvements with each doubling of the training data size even at 2 trillion tokens (resulting in 400 billion n-grams). Training and using such large models is a challenge. This tutorial shows efficient methods for distributed training of large language models based on the MapReduce computing model. We also show efficient ways of using distributed models in which requesting individual n-grams is expensive because they require communication between different machines.",Distributed Language Models,"Language models are used in a wide variety of natural language applications, including machine translation, speech recognition, spelling correction, optical character recognition, etc. Recent studies have shown that more data is better data, and bigger language models are better language models: the authors found nearly constant machine translation improvements with each doubling of the training data size even at 2 trillion tokens (resulting in 400 billion n-grams). Training and using such large models is a challenge. This tutorial shows efficient methods for distributed training of large language models based on the MapReduce computing model. We also show efficient ways of using distributed models in which requesting individual n-grams is expensive because they require communication between different machines.",Distributed Language Models,"Language models are used in a wide variety of natural language applications, including machine translation, speech recognition, spelling correction, optical character recognition, etc. Recent studies have shown that more data is better data, and bigger language models are better language models: the authors found nearly constant machine translation improvements with each doubling of the training data size even at 2 trillion tokens (resulting in 400 billion n-grams). Training and using such large models is a challenge. This tutorial shows efficient methods for distributed training of large language models based on the MapReduce computing model. We also show efficient ways of using distributed models in which requesting individual n-grams is expensive because they require communication between different machines.",,"Distributed Language Models. Language models are used in a wide variety of natural language applications, including machine translation, speech recognition, spelling correction, optical character recognition, etc. Recent studies have shown that more data is better data, and bigger language models are better language models: the authors found nearly constant machine translation improvements with each doubling of the training data size even at 2 trillion tokens (resulting in 400 billion n-grams). Training and using such large models is a challenge. This tutorial shows efficient methods for distributed training of large language models based on the MapReduce computing model. We also show efficient ways of using distributed models in which requesting individual n-grams is expensive because they require communication between different machines.",2009
larson-etal-2019-outlier,https://aclanthology.org/N19-1051,0,,,,,,,"Outlier Detection for Improved Data Quality and Diversity in Dialog Systems. In a corpus of data, outliers are either errors: mistakes in the data that are counterproductive, or are unique: informative samples that improve model robustness. Identifying outliers can lead to better datasets by (1) removing noise in datasets and (2) guiding collection of additional data to fill gaps. However, the problem of detecting both outlier types has received relatively little attention in NLP, particularly for dialog systems. We introduce a simple and effective technique for detecting both erroneous and unique samples in a corpus of short texts using neural sentence embeddings combined with distance-based outlier detection. We also present a novel data collection pipeline built atop our detection technique to automatically and iteratively mine unique data samples while discarding erroneous samples. Experiments show that our outlier detection technique is effective at finding errors while our data collection pipeline yields highly diverse corpora that in turn produce more robust intent classification and slot-filling models.",Outlier Detection for Improved Data Quality and Diversity in Dialog Systems,"In a corpus of data, outliers are either errors: mistakes in the data that are counterproductive, or are unique: informative samples that improve model robustness. Identifying outliers can lead to better datasets by (1) removing noise in datasets and (2) guiding collection of additional data to fill gaps. However, the problem of detecting both outlier types has received relatively little attention in NLP, particularly for dialog systems. We introduce a simple and effective technique for detecting both erroneous and unique samples in a corpus of short texts using neural sentence embeddings combined with distance-based outlier detection. We also present a novel data collection pipeline built atop our detection technique to automatically and iteratively mine unique data samples while discarding erroneous samples. Experiments show that our outlier detection technique is effective at finding errors while our data collection pipeline yields highly diverse corpora that in turn produce more robust intent classification and slot-filling models.",Outlier Detection for Improved Data Quality and Diversity in Dialog Systems,"In a corpus of data, outliers are either errors: mistakes in the data that are counterproductive, or are unique: informative samples that improve model robustness. Identifying outliers can lead to better datasets by (1) removing noise in datasets and (2) guiding collection of additional data to fill gaps. However, the problem of detecting both outlier types has received relatively little attention in NLP, particularly for dialog systems. We introduce a simple and effective technique for detecting both erroneous and unique samples in a corpus of short texts using neural sentence embeddings combined with distance-based outlier detection. We also present a novel data collection pipeline built atop our detection technique to automatically and iteratively mine unique data samples while discarding erroneous samples. 
Experiments show that our outlier detection technique is effective at finding errors while our data collection pipeline yields highly diverse corpora that in turn produce more robust intent classification and slot-filling models.","The authors thank Yiping Kang, Yunqi Zhang, Joseph Peper, and the anonymous reviewers for their helpful comments and feedback.","Outlier Detection for Improved Data Quality and Diversity in Dialog Systems. In a corpus of data, outliers are either errors: mistakes in the data that are counterproductive, or are unique: informative samples that improve model robustness. Identifying outliers can lead to better datasets by (1) removing noise in datasets and (2) guiding collection of additional data to fill gaps. However, the problem of detecting both outlier types has received relatively little attention in NLP, particularly for dialog systems. We introduce a simple and effective technique for detecting both erroneous and unique samples in a corpus of short texts using neural sentence embeddings combined with distance-based outlier detection. We also present a novel data collection pipeline built atop our detection technique to automatically and iteratively mine unique data samples while discarding erroneous samples. Experiments show that our outlier detection technique is effective at finding errors while our data collection pipeline yields highly diverse corpora that in turn produce more robust intent classification and slot-filling models.",2019
zhai-etal-2021-script,https://aclanthology.org/2021.starsem-1.18,0,,,,,,,"Script Parsing with Hierarchical Sequence Modelling. Scripts (Schank and Abelson, 1977) capture commonsense knowledge about everyday activities and their participants. Script knowledge has been shown to be useful in a number of NLP tasks, such as referent prediction, discourse classification, and story generation. A crucial step for the exploitation of script knowledge is script parsing, the task of tagging a text with the events and participants from a certain activity. This task is challenging: it requires information both about the ways events and participants are usually realized in surface language as well as the order in which they occur in the world. We show how to do accurate script parsing with a hierarchical sequence model. Our model improves the state of the art of event parsing by over 16 points F-score and, for the first time, accurately tags script participants.",Script Parsing with Hierarchical Sequence Modelling,"Scripts (Schank and Abelson, 1977) capture commonsense knowledge about everyday activities and their participants. Script knowledge has been shown to be useful in a number of NLP tasks, such as referent prediction, discourse classification, and story generation. A crucial step for the exploitation of script knowledge is script parsing, the task of tagging a text with the events and participants from a certain activity. This task is challenging: it requires information both about the ways events and participants are usually realized in surface language as well as the order in which they occur in the world. We show how to do accurate script parsing with a hierarchical sequence model. Our model improves the state of the art of event parsing by over 16 points F-score and, for the first time, accurately tags script participants.",Script Parsing with Hierarchical Sequence Modelling,"Scripts (Schank and Abelson, 1977) capture commonsense knowledge about everyday activities and their participants. Script knowledge has been shown to be useful in a number of NLP tasks, such as referent prediction, discourse classification, and story generation. A crucial step for the exploitation of script knowledge is script parsing, the task of tagging a text with the events and participants from a certain activity. This task is challenging: it requires information both about the ways events and participants are usually realized in surface language as well as the order in which they occur in the world. We show how to do accurate script parsing with a hierarchical sequence model. Our model improves the state of the art of event parsing by over 16 points F-score and, for the first time, accurately tags script participants.","We thank Simon Ostermann for providing the data from his experiments and his help all along the way of re-implementing his model. We thank Vera Demberg for the useful comments and suggestions. We also thank the anonymous reviewers for the valuable comments. This research was funded by the German Research Foundation (DFG) as part of SFB 1102 (Project-ID 232722074) ""Information Density and Linguistic Encoding"".","Script Parsing with Hierarchical Sequence Modelling. Scripts (Schank and Abelson, 1977) capture commonsense knowledge about everyday activities and their participants. Script knowledge has been shown to be useful in a number of NLP tasks, such as referent prediction, discourse classification, and story generation. 
A crucial step for the exploitation of script knowledge is script parsing, the task of tagging a text with the events and participants from a certain activity. This task is challenging: it requires information both about the ways events and participants are usually realized in surface language as well as the order in which they occur in the world. We show how to do accurate script parsing with a hierarchical sequence model. Our model improves the state of the art of event parsing by over 16 points F-score and, for the first time, accurately tags script participants.",2021
ruiter-etal-2019-self,https://aclanthology.org/P19-1178,0,,,,,,,"Self-Supervised Neural Machine Translation. We present a simple new method where an emergent NMT system is used for simultaneously selecting training data and learning internal NMT representations. This is done in a self-supervised way without parallel data, in such a way that both tasks enhance each other during training. The method is language independent, introduces no additional hyper-parameters, and achieves BLEU scores of 29.21 (en2f r) and 27.36 (f r2en) on new-stest2014 using English and French Wikipedia data for training.",Self-Supervised Neural Machine Translation,"We present a simple new method where an emergent NMT system is used for simultaneously selecting training data and learning internal NMT representations. This is done in a self-supervised way without parallel data, in such a way that both tasks enhance each other during training. The method is language independent, introduces no additional hyper-parameters, and achieves BLEU scores of 29.21 (en2f r) and 27.36 (f r2en) on new-stest2014 using English and French Wikipedia data for training.",Self-Supervised Neural Machine Translation,"We present a simple new method where an emergent NMT system is used for simultaneously selecting training data and learning internal NMT representations. This is done in a self-supervised way without parallel data, in such a way that both tasks enhance each other during training. The method is language independent, introduces no additional hyper-parameters, and achieves BLEU scores of 29.21 (en2f r) and 27.36 (f r2en) on new-stest2014 using English and French Wikipedia data for training.",The project on which this paper is based was partially funded by the German Federal Ministry of Education and Research under the funding code 01IW17001 (Deeplee) and by the Leibniz Gemeinschaft via the SAW-2016-ZPID-2 project (CLuBS). Responsibility for the content of this publication is with the authors.,"Self-Supervised Neural Machine Translation. We present a simple new method where an emergent NMT system is used for simultaneously selecting training data and learning internal NMT representations. This is done in a self-supervised way without parallel data, in such a way that both tasks enhance each other during training. The method is language independent, introduces no additional hyper-parameters, and achieves BLEU scores of 29.21 (en2f r) and 27.36 (f r2en) on new-stest2014 using English and French Wikipedia data for training.",2019
nie-etal-2020-adversarial,https://aclanthology.org/2020.acl-main.441,0,,,,,,,"Adversarial NLI: A New Benchmark for Natural Language Understanding. We introduce a new large-scale NLI benchmark dataset, collected via an iterative, adversarial human-and-model-in-the-loop procedure. We show that training models on this new dataset leads to state-of-the-art performance on a variety of popular NLI benchmarks, while posing a more difficult challenge with its new test set. Our analysis sheds light on the shortcomings of current state-of-theart models, and shows that non-expert annotators are successful at finding their weaknesses. The data collection method can be applied in a never-ending learning scenario, becoming a moving target for NLU, rather than a static benchmark that will quickly saturate.",Adversarial {NLI}: A New Benchmark for Natural Language Understanding,"We introduce a new large-scale NLI benchmark dataset, collected via an iterative, adversarial human-and-model-in-the-loop procedure. We show that training models on this new dataset leads to state-of-the-art performance on a variety of popular NLI benchmarks, while posing a more difficult challenge with its new test set. Our analysis sheds light on the shortcomings of current state-of-theart models, and shows that non-expert annotators are successful at finding their weaknesses. The data collection method can be applied in a never-ending learning scenario, becoming a moving target for NLU, rather than a static benchmark that will quickly saturate.",Adversarial NLI: A New Benchmark for Natural Language Understanding,"We introduce a new large-scale NLI benchmark dataset, collected via an iterative, adversarial human-and-model-in-the-loop procedure. We show that training models on this new dataset leads to state-of-the-art performance on a variety of popular NLI benchmarks, while posing a more difficult challenge with its new test set. Our analysis sheds light on the shortcomings of current state-of-theart models, and shows that non-expert annotators are successful at finding their weaknesses. The data collection method can be applied in a never-ending learning scenario, becoming a moving target for NLU, rather than a static benchmark that will quickly saturate.","YN interned at Facebook. YN and MB were sponsored by DARPA MCS Grant #N66001-19-2-4031, ONR Grant #N00014-18-1-2871, and DARPA YFA17-D17AP00022. Special thanks to Sam Bowman for comments on an earlier draft.","Adversarial NLI: A New Benchmark for Natural Language Understanding. We introduce a new large-scale NLI benchmark dataset, collected via an iterative, adversarial human-and-model-in-the-loop procedure. We show that training models on this new dataset leads to state-of-the-art performance on a variety of popular NLI benchmarks, while posing a more difficult challenge with its new test set. Our analysis sheds light on the shortcomings of current state-of-theart models, and shows that non-expert annotators are successful at finding their weaknesses. The data collection method can be applied in a never-ending learning scenario, becoming a moving target for NLU, rather than a static benchmark that will quickly saturate.",2020
nn-1997-neocortech,https://aclanthology.org/1997.mtsummit-exhibits.11,0,,,,,,,"NeocorTech LLC. NEOCORTECH is the premier provider of Japanese communication solutions for computer users outside of Japan, using Microsoft Windows 32-bit operating systems. In fact, only NeocorTech's machine translation (MT) products have been awarded the coveted Microsoft Windows 95 compatibility logo. Long heralded as the final solution in international communications, MT is now available between Japanese and English on a Windows 95 or NT computer. With Neocor's Tsunami MT, Typhoon MT, and KanjiScan programs, you don't need any additional support software to recognize, display, edit, translate, print, and e-mail. TSUNAMI MT is NeocorTech's flagship software, and has been providing communication solutions for organizations and individuals that need to translate their English documents efficiently and effectively into Japanese. Tsunami's key features are high translation speed, unmatched accuracy, and the capability to operate on English Windows with an English interface, and an English manual (a huge benefit for American companies that do not use Japanese operating systems). In addition to superior translation speed and accuracy, Tsunami MT conies with Japanese TrueType fonts, allows users to type Kanji & Kana with an English keyboard, and offers flexible Japanese formality and grammar settings.",{N}eocor{T}ech {LLC},"NEOCORTECH is the premier provider of Japanese communication solutions for computer users outside of Japan, using Microsoft Windows 32-bit operating systems. In fact, only NeocorTech's machine translation (MT) products have been awarded the coveted Microsoft Windows 95 compatibility logo. Long heralded as the final solution in international communications, MT is now available between Japanese and English on a Windows 95 or NT computer. With Neocor's Tsunami MT, Typhoon MT, and KanjiScan programs, you don't need any additional support software to recognize, display, edit, translate, print, and e-mail. TSUNAMI MT is NeocorTech's flagship software, and has been providing communication solutions for organizations and individuals that need to translate their English documents efficiently and effectively into Japanese. Tsunami's key features are high translation speed, unmatched accuracy, and the capability to operate on English Windows with an English interface, and an English manual (a huge benefit for American companies that do not use Japanese operating systems). In addition to superior translation speed and accuracy, Tsunami MT conies with Japanese TrueType fonts, allows users to type Kanji & Kana with an English keyboard, and offers flexible Japanese formality and grammar settings.",NeocorTech LLC,"NEOCORTECH is the premier provider of Japanese communication solutions for computer users outside of Japan, using Microsoft Windows 32-bit operating systems. In fact, only NeocorTech's machine translation (MT) products have been awarded the coveted Microsoft Windows 95 compatibility logo. Long heralded as the final solution in international communications, MT is now available between Japanese and English on a Windows 95 or NT computer. With Neocor's Tsunami MT, Typhoon MT, and KanjiScan programs, you don't need any additional support software to recognize, display, edit, translate, print, and e-mail. 
TSUNAMI MT is NeocorTech's flagship software, and has been providing communication solutions for organizations and individuals that need to translate their English documents efficiently and effectively into Japanese. Tsunami's key features are high translation speed, unmatched accuracy, and the capability to operate on English Windows with an English interface, and an English manual (a huge benefit for American companies that do not use Japanese operating systems). In addition to superior translation speed and accuracy, Tsunami MT comes with Japanese TrueType fonts, allows users to type Kanji & Kana with an English keyboard, and offers flexible Japanese formality and grammar settings.",,"NeocorTech LLC. NEOCORTECH is the premier provider of Japanese communication solutions for computer users outside of Japan, using Microsoft Windows 32-bit operating systems. In fact, only NeocorTech's machine translation (MT) products have been awarded the coveted Microsoft Windows 95 compatibility logo. Long heralded as the final solution in international communications, MT is now available between Japanese and English on a Windows 95 or NT computer. With Neocor's Tsunami MT, Typhoon MT, and KanjiScan programs, you don't need any additional support software to recognize, display, edit, translate, print, and e-mail. TSUNAMI MT is NeocorTech's flagship software, and has been providing communication solutions for organizations and individuals that need to translate their English documents efficiently and effectively into Japanese. Tsunami's key features are high translation speed, unmatched accuracy, and the capability to operate on English Windows with an English interface, and an English manual (a huge benefit for American companies that do not use Japanese operating systems). In addition to superior translation speed and accuracy, Tsunami MT comes with Japanese TrueType fonts, allows users to type Kanji & Kana with an English keyboard, and offers flexible Japanese formality and grammar settings.",1997
grouin-2008-certification,http://www.lrec-conf.org/proceedings/lrec2008/pdf/280_paper.pdf,0,,,,,,,"Certification and Cleaning up of a Text Corpus: Towards an Evaluation of the ``Grammatical'' Quality of a Corpus. We present in this article the methods we used for obtaining measures to ensure the quality and well-formedness of a text corpus. These measures allow us to determine the compatibility of a corpus with the treatments we want to apply on it. We called this method ""certification of corpus"". These measures are based upon the characteristics required by the linguistic treatments we have to apply on the corpus we want to certify. Since the certification of corpus allows us to highlight the errors present in a text, we developed modules to carry out an automatic correction. By applying these modules, we reduced the number of errors. In consequence, it increases the quality of the corpus making it possible to use a corpus that a first certification would not have admitted.",Certification and Cleaning up of a Text Corpus: Towards an Evaluation of the {``}Grammatical{''} Quality of a Corpus,"We present in this article the methods we used for obtaining measures to ensure the quality and well-formedness of a text corpus. These measures allow us to determine the compatibility of a corpus with the treatments we want to apply on it. We called this method ""certification of corpus"". These measures are based upon the characteristics required by the linguistic treatments we have to apply on the corpus we want to certify. Since the certification of corpus allows us to highlight the errors present in a text, we developed modules to carry out an automatic correction. By applying these modules, we reduced the number of errors. In consequence, it increases the quality of the corpus making it possible to use a corpus that a first certification would not have admitted.",Certification and Cleaning up of a Text Corpus: Towards an Evaluation of the ``Grammatical'' Quality of a Corpus,"We present in this article the methods we used for obtaining measures to ensure the quality and well-formedness of a text corpus. These measures allow us to determine the compatibility of a corpus with the treatments we want to apply on it. We called this method ""certification of corpus"". These measures are based upon the characteristics required by the linguistic treatments we have to apply on the corpus we want to certify. Since the certification of corpus allows us to highlight the errors present in a text, we developed modules to carry out an automatic correction. By applying these modules, we reduced the number of errors. In consequence, it increases the quality of the corpus making it possible to use a corpus that a first certification would not have admitted.","This work has been done within the framework of the SEVEN 8 project, held by the ANR (project number: ANR-05-RNTL-02204 (S0604149W)).","Certification and Cleaning up of a Text Corpus: Towards an Evaluation of the ``Grammatical'' Quality of a Corpus. We present in this article the methods we used for obtaining measures to ensure the quality and well-formedness of a text corpus. These measures allow us to determine the compatibility of a corpus with the treatments we want to apply on it. We called this method ""certification of corpus"". These measures are based upon the characteristics required by the linguistic treatments we have to apply on the corpus we want to certify. 
Since the certification of corpus allows us to highlight the errors present in a text, we developed modules to carry out an automatic correction. By applying these modules, we reduced the number of errors. As a consequence, the quality of the corpus increases, making it possible to use a corpus that a first certification would not have admitted.",2008
grangier-auli-2018-quickedit,https://aclanthology.org/N18-1025,0,,,,,,,QuickEdit: Editing Text \& Translations by Crossing Words Out. We propose a framework for computerassisted text editing. It applies to translation post-editing and to paraphrasing. Our proposal relies on very simple interactions: a human editor modifies a sentence by marking tokens they would like the system to change. Our model then generates a new sentence which reformulates the initial sentence by avoiding marked words. The approach builds upon neural sequence-to-sequence modeling and introduces a neural network which takes as input a sentence along with change markers. Our model is trained on translation bitext by simulating post-edits. We demonstrate the advantage of our approach for translation postediting through simulated post-edits. We also evaluate our model for paraphrasing through a user study.,{Q}uick{E}dit: Editing Text {\&} Translations by Crossing Words Out,We propose a framework for computerassisted text editing. It applies to translation post-editing and to paraphrasing. Our proposal relies on very simple interactions: a human editor modifies a sentence by marking tokens they would like the system to change. Our model then generates a new sentence which reformulates the initial sentence by avoiding marked words. The approach builds upon neural sequence-to-sequence modeling and introduces a neural network which takes as input a sentence along with change markers. Our model is trained on translation bitext by simulating post-edits. We demonstrate the advantage of our approach for translation postediting through simulated post-edits. We also evaluate our model for paraphrasing through a user study.,QuickEdit: Editing Text \& Translations by Crossing Words Out,We propose a framework for computerassisted text editing. It applies to translation post-editing and to paraphrasing. Our proposal relies on very simple interactions: a human editor modifies a sentence by marking tokens they would like the system to change. Our model then generates a new sentence which reformulates the initial sentence by avoiding marked words. The approach builds upon neural sequence-to-sequence modeling and introduces a neural network which takes as input a sentence along with change markers. Our model is trained on translation bitext by simulating post-edits. We demonstrate the advantage of our approach for translation postediting through simulated post-edits. We also evaluate our model for paraphrasing through a user study.,"We thank Marc'Aurelio Ranzato, Sumit Chopra, Roman Novak for helpful discussions. We thank Sergey Edunov, Sam Gross, Myle Ott for writing the fairseq-py toolkit used in our experiments. We thank Jonathan Mallinson, Rico Sennrich, Mirella Lapata, for sharing ParaNet data.",QuickEdit: Editing Text \& Translations by Crossing Words Out. We propose a framework for computerassisted text editing. It applies to translation post-editing and to paraphrasing. Our proposal relies on very simple interactions: a human editor modifies a sentence by marking tokens they would like the system to change. Our model then generates a new sentence which reformulates the initial sentence by avoiding marked words. The approach builds upon neural sequence-to-sequence modeling and introduces a neural network which takes as input a sentence along with change markers. Our model is trained on translation bitext by simulating post-edits. We demonstrate the advantage of our approach for translation postediting through simulated post-edits. 
We also evaluate our model for paraphrasing through a user study.,2018
yakhnenko-rosario-2008-mining,https://aclanthology.org/I08-1036,0,,,,,,,"Mining the Web for Relations between Digital Devices using a Probabilistic Maximum Margin Model. Searching and reading the Web is one of the principal methods used to seek out information to resolve problems about technology in general and digital devices in particular. This paper addresses the problem of text mining in the digital devices domain. In particular, we address the task of detecting semantic relations between digital devices in the text of Web pages. We use a Naïve Bayes model trained to maximize the margin and compare its performance with several other comparable methods. We construct a novel dataset which consists of segments of text extracted from the Web, where each segment contains pairs of devices. We also propose a novel, inexpensive and very effective way of getting people to label text data using a Web service, the Mechanical Turk. Our results show that the maximum margin model consistently outperforms the other methods.",Mining the Web for Relations between Digital Devices using a Probabilistic Maximum Margin Model,"Searching and reading the Web is one of the principal methods used to seek out information to resolve problems about technology in general and digital devices in particular. This paper addresses the problem of text mining in the digital devices domain. In particular, we address the task of detecting semantic relations between digital devices in the text of Web pages. We use a Naïve Bayes model trained to maximize the margin and compare its performance with several other comparable methods. We construct a novel dataset which consists of segments of text extracted from the Web, where each segment contains pairs of devices. We also propose a novel, inexpensive and very effective way of getting people to label text data using a Web service, the Mechanical Turk. Our results show that the maximum margin model consistently outperforms the other methods.",Mining the Web for Relations between Digital Devices using a Probabilistic Maximum Margin Model,"Searching and reading the Web is one of the principal methods used to seek out information to resolve problems about technology in general and digital devices in particular. This paper addresses the problem of text mining in the digital devices domain. In particular, we address the task of detecting semantic relations between digital devices in the text of Web pages. We use a Naïve Bayes model trained to maximize the margin and compare its performance with several other comparable methods. We construct a novel dataset which consists of segments of text extracted from the Web, where each segment contains pairs of devices. We also propose a novel, inexpensive and very effective way of getting people to label text data using a Web service, the Mechanical Turk. Our results show that the maximum margin model consistently outperforms the other methods.",The authors would like to thank the reviewers for their feedback and comments; William Schilit for invaluable insight and help and for first suggesting using the MTurk to gather labeled data; David McDonald for help with developing survey instructions; and numerous MT workers for providing the labels.,"Mining the Web for Relations between Digital Devices using a Probabilistic Maximum Margin Model. Searching and reading the Web is one of the principal methods used to seek out information to resolve problems about technology in general and digital devices in particular. 
This paper addresses the problem of text mining in the digital devices domain. In particular, we address the task of detecting semantic relations between digital devices in the text of Web pages. We use a Naïve Bayes model trained to maximize the margin and compare its performance with several other comparable methods. We construct a novel dataset which consists of segments of text extracted from the Web, where each segment contains pairs of devices. We also propose a novel, inexpensive and very effective way of getting people to label text data using a Web service, the Mechanical Turk. Our results show that the maximum margin model consistently outperforms the other methods.",2008
von-etter-etal-2010-assessment,https://aclanthology.org/W10-1105,1,,,,health,,,"Assessment of Utility in Web Mining for the Domain of Public Health. This paper presents ongoing work on application of Information Extraction (IE) technology to domain of Public Health, in a real-world scenario. A central issue in IE is the quality of the results. We present two novel points. First, we distinguish the criteria for quality: the objective criteria that measure correctness of the system's analysis in traditional terms (F-measure, recall and precision), and, on the other hand, subjective criteria that measure the utility of the results to the end-user. Second, to obtain measures of utility, we build an environment that allows users to interact with the system by rating the analyzed content. We then build and compare several classifiers that learn from the user's responses to predict the relevance scores for new events. We conduct experiments with learning to predict relevance, and discuss the results and their implications for text mining in the domain of Public Health.",Assessment of Utility in Web Mining for the Domain of Public Health,"This paper presents ongoing work on application of Information Extraction (IE) technology to domain of Public Health, in a real-world scenario. A central issue in IE is the quality of the results. We present two novel points. First, we distinguish the criteria for quality: the objective criteria that measure correctness of the system's analysis in traditional terms (F-measure, recall and precision), and, on the other hand, subjective criteria that measure the utility of the results to the end-user. Second, to obtain measures of utility, we build an environment that allows users to interact with the system by rating the analyzed content. We then build and compare several classifiers that learn from the user's responses to predict the relevance scores for new events. We conduct experiments with learning to predict relevance, and discuss the results and their implications for text mining in the domain of Public Health.",Assessment of Utility in Web Mining for the Domain of Public Health,"This paper presents ongoing work on application of Information Extraction (IE) technology to domain of Public Health, in a real-world scenario. A central issue in IE is the quality of the results. We present two novel points. First, we distinguish the criteria for quality: the objective criteria that measure correctness of the system's analysis in traditional terms (F-measure, recall and precision), and, on the other hand, subjective criteria that measure the utility of the results to the end-user. Second, to obtain measures of utility, we build an environment that allows users to interact with the system by rating the analyzed content. We then build and compare several classifiers that learn from the user's responses to predict the relevance scores for new events. We conduct experiments with learning to predict relevance, and discuss the results and their implications for text mining in the domain of Public Health.","This research was supported in part by: the Technology Development Agency of Finland (TEKES), through the ContentFactory Project, and by the Academy of Finland's National Centre of Excellence ""Algorithmic Data Analysis (ALGODAN).""","Assessment of Utility in Web Mining for the Domain of Public Health. This paper presents ongoing work on application of Information Extraction (IE) technology to domain of Public Health, in a real-world scenario. 
A central issue in IE is the quality of the results. We present two novel points. First, we distinguish the criteria for quality: the objective criteria that measure correctness of the system's analysis in traditional terms (F-measure, recall and precision), and, on the other hand, subjective criteria that measure the utility of the results to the end-user. Second, to obtain measures of utility, we build an environment that allows users to interact with the system by rating the analyzed content. We then build and compare several classifiers that learn from the user's responses to predict the relevance scores for new events. We conduct experiments with learning to predict relevance, and discuss the results and their implications for text mining in the domain of Public Health.",2010
vydiswaran-etal-2019-towards,https://aclanthology.org/W19-3217,1,,,,health,,,"Towards Text Processing Pipelines to Identify Adverse Drug Events-related Tweets: University of Michigan @ SMM4H 2019 Task 1. We participated in Task 1 of the Social Media Mining for Health Applications (SMM4H) 2019 Shared Tasks on detecting mentions of adverse drug events (ADEs) in tweets. Our approach relied on a text processing pipeline for tweets, and training traditional machine learning and deep learning models. Our submitted runs performed above average for the task.",Towards Text Processing Pipelines to Identify Adverse Drug Events-related Tweets: {U}niversity of {M}ichigan @ {SMM}4{H} 2019 Task 1,"We participated in Task 1 of the Social Media Mining for Health Applications (SMM4H) 2019 Shared Tasks on detecting mentions of adverse drug events (ADEs) in tweets. Our approach relied on a text processing pipeline for tweets, and training traditional machine learning and deep learning models. Our submitted runs performed above average for the task.",Towards Text Processing Pipelines to Identify Adverse Drug Events-related Tweets: University of Michigan @ SMM4H 2019 Task 1,"We participated in Task 1 of the Social Media Mining for Health Applications (SMM4H) 2019 Shared Tasks on detecting mentions of adverse drug events (ADEs) in tweets. Our approach relied on a text processing pipeline for tweets, and training traditional machine learning and deep learning models. Our submitted runs performed above average for the task.",,"Towards Text Processing Pipelines to Identify Adverse Drug Events-related Tweets: University of Michigan @ SMM4H 2019 Task 1. We participated in Task 1 of the Social Media Mining for Health Applications (SMM4H) 2019 Shared Tasks on detecting mentions of adverse drug events (ADEs) in tweets. Our approach relied on a text processing pipeline for tweets, and training traditional machine learning and deep learning models. Our submitted runs performed above average for the task.",2019
koehn-2016-computer,https://aclanthology.org/P16-5003,0,,,,,,,Computer Aided Translation. ,Computer Aided Translation,,Computer Aided Translation,,,Computer Aided Translation. ,2016
matusov-etal-2004-symmetric,https://aclanthology.org/C04-1032,0,,,,,,,"Symmetric Word Alignments for Statistical Machine Translation. In this paper, we address the word alignment problem for statistical machine translation. We aim at creating a symmetric word alignment allowing for reliable one-to-many and many-to-one word relationships. We perform the iterative alignment training in the source-to-target and the target-to-source direction with the well-known IBM and HMM alignment models. Using these models, we robustly estimate the local costs of aligning a source word and a target word in each sentence pair. Then, we use efficient graph algorithms to determine the symmetric alignment with minimal total costs (i. e. maximal alignment probability). We evaluate the automatic alignments created in this way on the German-English Verbmobil task and the French-English Canadian Hansards task. We show statistically significant improvements of the alignment quality compared to the best results reported so far. On the Verbmobil task, we achieve an improvement of more than 1% absolute over the baseline error rate of 4.7%.",Symmetric Word Alignments for Statistical Machine Translation,"In this paper, we address the word alignment problem for statistical machine translation. We aim at creating a symmetric word alignment allowing for reliable one-to-many and many-to-one word relationships. We perform the iterative alignment training in the source-to-target and the target-to-source direction with the well-known IBM and HMM alignment models. Using these models, we robustly estimate the local costs of aligning a source word and a target word in each sentence pair. Then, we use efficient graph algorithms to determine the symmetric alignment with minimal total costs (i. e. maximal alignment probability). We evaluate the automatic alignments created in this way on the German-English Verbmobil task and the French-English Canadian Hansards task. We show statistically significant improvements of the alignment quality compared to the best results reported so far. On the Verbmobil task, we achieve an improvement of more than 1% absolute over the baseline error rate of 4.7%.",Symmetric Word Alignments for Statistical Machine Translation,"In this paper, we address the word alignment problem for statistical machine translation. We aim at creating a symmetric word alignment allowing for reliable one-to-many and many-to-one word relationships. We perform the iterative alignment training in the source-to-target and the target-to-source direction with the well-known IBM and HMM alignment models. Using these models, we robustly estimate the local costs of aligning a source word and a target word in each sentence pair. Then, we use efficient graph algorithms to determine the symmetric alignment with minimal total costs (i. e. maximal alignment probability). We evaluate the automatic alignments created in this way on the German-English Verbmobil task and the French-English Canadian Hansards task. We show statistically significant improvements of the alignment quality compared to the best results reported so far. On the Verbmobil task, we achieve an improvement of more than 1% absolute over the baseline error rate of 4.7%.","This work has been partially funded by the EU project TransType 2, IST-2001-32091. ","Symmetric Word Alignments for Statistical Machine Translation. In this paper, we address the word alignment problem for statistical machine translation. 
We aim at creating a symmetric word alignment allowing for reliable one-to-many and many-to-one word relationships. We perform the iterative alignment training in the source-to-target and the target-to-source direction with the well-known IBM and HMM alignment models. Using these models, we robustly estimate the local costs of aligning a source word and a target word in each sentence pair. Then, we use efficient graph algorithms to determine the symmetric alignment with minimal total costs (i. e. maximal alignment probability). We evaluate the automatic alignments created in this way on the German-English Verbmobil task and the French-English Canadian Hansards task. We show statistically significant improvements of the alignment quality compared to the best results reported so far. On the Verbmobil task, we achieve an improvement of more than 1% absolute over the baseline error rate of 4.7%.",2004
brixey-etal-2018-chahta,https://aclanthology.org/L18-1532,0,,,,,,,"Chahta Anumpa: A multimodal corpus of the Choctaw Language. This paper presents a general use corpus for the Native American indigenous language Choctaw. The corpus contains audio, video, and text resources, with many texts also translated in English. The Oklahoma Choctaw and the Mississippi Choctaw variants of the language are represented in the corpus. The data set provides documentation support for the threatened language, and allows researchers and language teachers access to a diverse collection of resources.",Chahta Anumpa: A multimodal corpus of the {C}hoctaw Language,"This paper presents a general use corpus for the Native American indigenous language Choctaw. The corpus contains audio, video, and text resources, with many texts also translated in English. The Oklahoma Choctaw and the Mississippi Choctaw variants of the language are represented in the corpus. The data set provides documentation support for the threatened language, and allows researchers and language teachers access to a diverse collection of resources.",Chahta Anumpa: A multimodal corpus of the Choctaw Language,"This paper presents a general use corpus for the Native American indigenous language Choctaw. The corpus contains audio, video, and text resources, with many texts also translated in English. The Oklahoma Choctaw and the Mississippi Choctaw variants of the language are represented in the corpus. The data set provides documentation support for the threatened language, and allows researchers and language teachers access to a diverse collection of resources.",,"Chahta Anumpa: A multimodal corpus of the Choctaw Language. This paper presents a general use corpus for the Native American indigenous language Choctaw. The corpus contains audio, video, and text resources, with many texts also translated in English. The Oklahoma Choctaw and the Mississippi Choctaw variants of the language are represented in the corpus. The data set provides documentation support for the threatened language, and allows researchers and language teachers access to a diverse collection of resources.",2018
schubert-pelletier-1982-english,https://aclanthology.org/J82-1003,0,,,,,,,"From English to Logic: Context-Free Computation of `Conventional' Logical Translation. We describe an approach to parsing and logical translation that was inspired by Gazdar's work on context-free grammar for English. Each grammar rule consists of a syntactic part that specifies an acceptable fragment of a parse tree, and a semantic part that specifies how the logical formulas corresponding to the constituents of the fragment are to be combined to yield the formula for the fragment. However, we have sought to reformulate Gazdar's semantic rules so as to obtain more or less 'conventional' logical translations of English sentences, avoiding the interpretation of NPs as property sets and the use of intensional functors other than certain propositional operators. The reformulated semantic rules often turn out to be slightly simpler than Gazdar's. Moreover, by using a semantically ambiguous logical syntax for the preliminary translations, we can account for quantifier and coordinator scope ambiguities in syntactically unambiguous sentences without recourse to multiple semantic rules, and are able to separate the disambiguation process from the operation of the parser-translator. We have implemented simple recursive descent and left-corner parsers to demonstrate the practicality of our approach.",From {E}nglish to Logic: Context-Free Computation of {`}Conventional{'} Logical Translation,"We describe an approach to parsing and logical translation that was inspired by Gazdar's work on context-free grammar for English. Each grammar rule consists of a syntactic part that specifies an acceptable fragment of a parse tree, and a semantic part that specifies how the logical formulas corresponding to the constituents of the fragment are to be combined to yield the formula for the fragment. However, we have sought to reformulate Gazdar's semantic rules so as to obtain more or less 'conventional' logical translations of English sentences, avoiding the interpretation of NPs as property sets and the use of intensional functors other than certain propositional operators. The reformulated semantic rules often turn out to be slightly simpler than Gazdar's. Moreover, by using a semantically ambiguous logical syntax for the preliminary translations, we can account for quantifier and coordinator scope ambiguities in syntactically unambiguous sentences without recourse to multiple semantic rules, and are able to separate the disambiguation process from the operation of the parser-translator. We have implemented simple recursive descent and left-corner parsers to demonstrate the practicality of our approach.",From English to Logic: Context-Free Computation of `Conventional' Logical Translation,"We describe an approach to parsing and logical translation that was inspired by Gazdar's work on context-free grammar for English. Each grammar rule consists of a syntactic part that specifies an acceptable fragment of a parse tree, and a semantic part that specifies how the logical formulas corresponding to the constituents of the fragment are to be combined to yield the formula for the fragment. However, we have sought to reformulate Gazdar's semantic rules so as to obtain more or less 'conventional' logical translations of English sentences, avoiding the interpretation of NPs as property sets and the use of intensional functors other than certain propositional operators. 
The reformulated semantic rules often turn out to be slightly simpler than Gazdar's. Moreover, by using a semantically ambiguous logical syntax for the preliminary translations, we can account for quantifier and coordinator scope ambiguities in syntactically unambiguous sentences without recourse to multiple semantic rules, and are able to separate the disambiguation process from the operation of the parser-translator. We have implemented simple recursive descent and left-corner parsers to demonstrate the practicality of our approach.","The authors are indebted to Ivan Sag for a series of very stimulating seminars held by him at the University of Alberta on his linguistic research, and valuable follow-up discussions.The helpful comments of the referees and of Lotfi Zadeh are also appreciated.The research was supported in part by NSERC Operating Grants A8818 and A2252; preliminary work on the left-corner parser was carried out by one of the authors (LKS) under an Alexander von Humboldt fellowship in 1978-79.","From English to Logic: Context-Free Computation of `Conventional' Logical Translation. We describe an approach to parsing and logical translation that was inspired by Gazdar's work on context-free grammar for English. Each grammar rule consists of a syntactic part that specifies an acceptable fragment of a parse tree, and a semantic part that specifies how the logical formulas corresponding to the constituents of the fragment are to be combined to yield the formula for the fragment. However, we have sought to reformulate Gazdar's semantic rules so as to obtain more or less 'conventional' logical translations of English sentences, avoiding the interpretation of NPs as property sets and the use of intensional functors other than certain propositional operators. The reformulated semantic rules often turn out to be slightly simpler than Gazdar's. Moreover, by using a semantically ambiguous logical syntax for the preliminary translations, we can account for quantifier and coordinator scope ambiguities in syntactically unambiguous sentences without recourse to multiple semantic rules, and are able to separate the disambiguation process from the operation of the parser-translator. We have implemented simple recursive descent and left-corner parsers to demonstrate the practicality of our approach.",1982
kahn-etal-2004-parsing,https://aclanthology.org/N04-4032,0,,,,,,,"Parsing Conversational Speech Using Enhanced Segmentation. The lack of sentence boundaries and presence of disfluencies pose difficulties for parsing conversational speech. This work investigates the effects of automatically detecting these phenomena on a probabilistic parser's performance. We demonstrate that a state-of-the-art segmenter, relative to a pause-based segmenter, gives more than 45% of the possible error reduction in parser performance, and that presentation of interruption points to the parser improves performance over using sentence boundaries alone.",Parsing Conversational Speech Using Enhanced Segmentation,"The lack of sentence boundaries and presence of disfluencies pose difficulties for parsing conversational speech. This work investigates the effects of automatically detecting these phenomena on a probabilistic parser's performance. We demonstrate that a state-of-the-art segmenter, relative to a pause-based segmenter, gives more than 45% of the possible error reduction in parser performance, and that presentation of interruption points to the parser improves performance over using sentence boundaries alone.",Parsing Conversational Speech Using Enhanced Segmentation,"The lack of sentence boundaries and presence of disfluencies pose difficulties for parsing conversational speech. This work investigates the effects of automatically detecting these phenomena on a probabilistic parser's performance. We demonstrate that a state-of-the-art segmenter, relative to a pause-based segmenter, gives more than 45% of the possible error reduction in parser performance, and that presentation of interruption points to the parser improves performance over using sentence boundaries alone.","We thank J. Kim for providing the SU-IP detection results, using tools developed under DARPA grant MDA904-02-C-0437. This work is supported by NSF grant no. IIS085940. Any opinions or conclusions expressed in this paper are those of the authors and do not necessarily reflect the views of these agencies.","Parsing Conversational Speech Using Enhanced Segmentation. The lack of sentence boundaries and presence of disfluencies pose difficulties for parsing conversational speech. This work investigates the effects of automatically detecting these phenomena on a probabilistic parser's performance. We demonstrate that a state-of-the-art segmenter, relative to a pause-based segmenter, gives more than 45% of the possible error reduction in parser performance, and that presentation of interruption points to the parser improves performance over using sentence boundaries alone.",2004
feng-etal-2004-new,https://aclanthology.org/W04-3248,0,,,,,,,"A New Approach for English-Chinese Named Entity Alignment. Traditional word alignment approaches cannot come up with satisfactory results for Named Entities. In this paper, we propose a novel approach using a maximum entropy model for named entity alignment. To ease the training of the maximum entropy model, bootstrapping is used to help supervised learning. Unlike previous work reported in the literature, our work conducts bilingual Named Entity alignment without word segmentation for Chinese and its performance is much better than that with word segmentation. When compared with IBM and HMM alignment models, experimental results show that our approach outperforms IBM Model 4 and HMM significantly.",A New Approach for {E}nglish-{C}hinese Named Entity Alignment,"Traditional word alignment approaches cannot come up with satisfactory results for Named Entities. In this paper, we propose a novel approach using a maximum entropy model for named entity alignment. To ease the training of the maximum entropy model, bootstrapping is used to help supervised learning. Unlike previous work reported in the literature, our work conducts bilingual Named Entity alignment without word segmentation for Chinese and its performance is much better than that with word segmentation. When compared with IBM and HMM alignment models, experimental results show that our approach outperforms IBM Model 4 and HMM significantly.",A New Approach for English-Chinese Named Entity Alignment,"Traditional word alignment approaches cannot come up with satisfactory results for Named Entities. In this paper, we propose a novel approach using a maximum entropy model for named entity alignment. To ease the training of the maximum entropy model, bootstrapping is used to help supervised learning. Unlike previous work reported in the literature, our work conducts bilingual Named Entity alignment without word segmentation for Chinese and its performance is much better than that with word segmentation. When compared with IBM and HMM alignment models, experimental results show that our approach outperforms IBM Model 4 and HMM significantly.","Thanks to Hang Li, Changning Huang, Yunbo Cao, and John Chen for their valuable comments on this work. Also thank Kevin Knight for his checking of the English of this paper. Special thanks go to Eduard Hovy for his continuous support and encouragement while the first author was visiting MSRA.","A New Approach for English-Chinese Named Entity Alignment. Traditional word alignment approaches cannot come up with satisfactory results for Named Entities. In this paper, we propose a novel approach using a maximum entropy model for named entity alignment. To ease the training of the maximum entropy model, bootstrapping is used to help supervised learning. Unlike previous work reported in the literature, our work conducts bilingual Named Entity alignment without word segmentation for Chinese and its performance is much better than that with word segmentation. When compared with IBM and HMM alignment models, experimental results show that our approach outperforms IBM Model 4 and HMM significantly.",2004
liu-hulden-2022-detecting,https://aclanthology.org/2022.acl-short.19,0,,,,,,,"Detecting Annotation Errors in Morphological Data with the Transformer. Annotation errors that stem from various sources are usually unavoidable when performing large-scale annotation of linguistic data. In this paper, we evaluate the feasibility of using the Transformer model to detect various types of annotator errors in type-based morphological datasets that contain inflected word forms. We evaluate our error detection model on four languages by injecting three different types of artificial errors into the data: (1) typographic errors, where single characters in the data are inserted, replaced, or deleted; (2) linguistic confusion errors where two inflected forms are systematically swapped; and (3) self-adversarial errors where the Transformer model itself is used to generate plausible-looking, but erroneous forms by retrieving high-scoring predictions from a Transformer search beam. Results show that the model can with perfect, or near-perfect recall detect errors in all three scenarios, even when significant amounts of the annotated data (5%-30%) are corrupted on all languages tested. Precision varies across the languages and types of errors, but is high enough that the model can reliably be used to flag suspicious entries in large datasets for further scrutiny by human annotators.",Detecting Annotation Errors in Morphological Data with the Transformer,"Annotation errors that stem from various sources are usually unavoidable when performing large-scale annotation of linguistic data. In this paper, we evaluate the feasibility of using the Transformer model to detect various types of annotator errors in type-based morphological datasets that contain inflected word forms. We evaluate our error detection model on four languages by injecting three different types of artificial errors into the data: (1) typographic errors, where single characters in the data are inserted, replaced, or deleted; (2) linguistic confusion errors where two inflected forms are systematically swapped; and (3) self-adversarial errors where the Transformer model itself is used to generate plausible-looking, but erroneous forms by retrieving high-scoring predictions from a Transformer search beam. Results show that the model can with perfect, or near-perfect recall detect errors in all three scenarios, even when significant amounts of the annotated data (5%-30%) are corrupted on all languages tested. Precision varies across the languages and types of errors, but is high enough that the model can reliably be used to flag suspicious entries in large datasets for further scrutiny by human annotators.",Detecting Annotation Errors in Morphological Data with the Transformer,"Annotation errors that stem from various sources are usually unavoidable when performing large-scale annotation of linguistic data. In this paper, we evaluate the feasibility of using the Transformer model to detect various types of annotator errors in type-based morphological datasets that contain inflected word forms. 
We evaluate our error detection model on four languages by injecting three different types of artificial errors into the data: (1) typographic errors, where single characters in the data are inserted, replaced, or deleted; (2) linguistic confusion errors where two inflected forms are systematically swapped; and (3) self-adversarial errors where the Transformer model itself is used to generate plausible-looking, but erroneous forms by retrieving high-scoring predictions from a Transformer search beam. Results show that the model can with perfect, or near-perfect recall detect errors in all three scenarios, even when significant amounts of the annotated data (5%-30%) are corrupted on all languages tested. Precision varies across the languages and types of errors, but is high enough that the model can reliably be used to flag suspicious entries in large datasets for further scrutiny by human annotators.",,"Detecting Annotation Errors in Morphological Data with the Transformer. Annotation errors that stem from various sources are usually unavoidable when performing large-scale annotation of linguistic data. In this paper, we evaluate the feasibility of using the Transformer model to detect various types of annotator errors in type-based morphological datasets that contain inflected word forms. We evaluate our error detection model on four languages by injecting three different types of artificial errors into the data: (1) typographic errors, where single characters in the data are inserted, replaced, or deleted; (2) linguistic confusion errors where two inflected forms are systematically swapped; and (3) self-adversarial errors where the Transformer model itself is used to generate plausible-looking, but erroneous forms by retrieving high-scoring predictions from a Transformer search beam. Results show that the model can with perfect, or near-perfect recall detect errors in all three scenarios, even when significant amounts of the annotated data (5%-30%) are corrupted on all languages tested. Precision varies across the languages and types of errors, but is high enough that the model can reliably be used to flag suspicious entries in large datasets for further scrutiny by human annotators.",2022
kim-etal-2017-adversarial,https://aclanthology.org/P17-1119,0,,,,,,,"Adversarial Adaptation of Synthetic or Stale Data. Two types of data shift common in practice are 1. transferring from synthetic data to live user data (a deployment shift), and 2. transferring from stale data to current data (a temporal shift). Both cause a distribution mismatch between training and evaluation, leading to a model that overfits the flawed training data and performs poorly on the test data. We propose a solution to this mismatch problem by framing it as domain adaptation, treating the flawed training dataset as a source domain and the evaluation dataset as a target domain. To this end, we use and build on several recent advances in neural domain adaptation such as adversarial training (Ganin et al., 2016) and domain separation network (Bousmalis et al., 2016), proposing a new effective adversarial training scheme. In both supervised and unsupervised adaptation scenarios, our approach yields clear improvement over strong baselines.",Adversarial Adaptation of Synthetic or Stale Data,"Two types of data shift common in practice are 1. transferring from synthetic data to live user data (a deployment shift), and 2. transferring from stale data to current data (a temporal shift). Both cause a distribution mismatch between training and evaluation, leading to a model that overfits the flawed training data and performs poorly on the test data. We propose a solution to this mismatch problem by framing it as domain adaptation, treating the flawed training dataset as a source domain and the evaluation dataset as a target domain. To this end, we use and build on several recent advances in neural domain adaptation such as adversarial training (Ganin et al., 2016) and domain separation network (Bousmalis et al., 2016), proposing a new effective adversarial training scheme. In both supervised and unsupervised adaptation scenarios, our approach yields clear improvement over strong baselines.",Adversarial Adaptation of Synthetic or Stale Data,"Two types of data shift common in practice are 1. transferring from synthetic data to live user data (a deployment shift), and 2. transferring from stale data to current data (a temporal shift). Both cause a distribution mismatch between training and evaluation, leading to a model that overfits the flawed training data and performs poorly on the test data. We propose a solution to this mismatch problem by framing it as domain adaptation, treating the flawed training dataset as a source domain and the evaluation dataset as a target domain. To this end, we use and build on several recent advances in neural domain adaptation such as adversarial training (Ganin et al., 2016) and domain separation network (Bousmalis et al., 2016), proposing a new effective adversarial training scheme. In both supervised and unsupervised adaptation scenarios, our approach yields clear improvement over strong baselines.",,"Adversarial Adaptation of Synthetic or Stale Data. Two types of data shift common in practice are 1. transferring from synthetic data to live user data (a deployment shift), and 2. transferring from stale data to current data (a temporal shift). Both cause a distribution mismatch between training and evaluation, leading to a model that overfits the flawed training data and performs poorly on the test data. We propose a solution to this mismatch problem by framing it as domain adaptation, treating the flawed training dataset as a source domain and the evaluation dataset as a target domain. 
To this end, we use and build on several recent advances in neural domain adaptation such as adversarial training (Ganin et al., 2016) and domain separation network (Bousmalis et al., 2016), proposing a new effective adversarial training scheme. In both supervised and unsupervised adaptation scenarios, our approach yields clear improvement over strong baselines.",2017
orr-etal-2014-semi,http://www.lrec-conf.org/proceedings/lrec2014/pdf/511_Paper.pdf,0,,,,,,,"Semi-automatic annotation of the UCU accents speech corpus. The annotation and labeling of speech tasks in large multitask speech corpora is a necessary part of preparing a corpus for distribution. This paper addresses three approaches to annotation and labeling, namely manual, semi automatic and automatic procedures for labeling the UCU Accent Project speech data, a multilingual multitask longitudinal speech corpus. Accuracy and minimal time investment are the priorities in assessing the efficacy of each procedure. While manual labeling based on aural and visual input should produce the most accurate results, this approach is prone to error because of its repetitive nature. A semi automatic event detection system requiring manual rejection of false alarms and location and labeling of misses provided the best results. A fully automatic system could not be applied to entire speech recordings because of the variety of tasks and genres. However, it could be used to annotate separate sentences within a specific task. Acoustic confidence measures can correctly detect sentences that do not match the text with an equal error rate of 3.3%",Semi-automatic annotation of the {UCU} accents speech corpus,"The annotation and labeling of speech tasks in large multitask speech corpora is a necessary part of preparing a corpus for distribution. This paper addresses three approaches to annotation and labeling, namely manual, semi automatic and automatic procedures for labeling the UCU Accent Project speech data, a multilingual multitask longitudinal speech corpus. Accuracy and minimal time investment are the priorities in assessing the efficacy of each procedure. While manual labeling based on aural and visual input should produce the most accurate results, this approach is prone to error because of its repetitive nature. A semi automatic event detection system requiring manual rejection of false alarms and location and labeling of misses provided the best results. A fully automatic system could not be applied to entire speech recordings because of the variety of tasks and genres. However, it could be used to annotate separate sentences within a specific task. Acoustic confidence measures can correctly detect sentences that do not match the text with an equal error rate of 3.3%",Semi-automatic annotation of the UCU accents speech corpus,"The annotation and labeling of speech tasks in large multitask speech corpora is a necessary part of preparing a corpus for distribution. This paper addresses three approaches to annotation and labeling, namely manual, semi automatic and automatic procedures for labeling the UCU Accent Project speech data, a multilingual multitask longitudinal speech corpus. Accuracy and minimal time investment are the priorities in assessing the efficacy of each procedure. While manual labeling based on aural and visual input should produce the most accurate results, this approach is prone to error because of its repetitive nature. A semi automatic event detection system requiring manual rejection of false alarms and location and labeling of misses provided the best results. A fully automatic system could not be applied to entire speech recordings because of the variety of tasks and genres. However, it could be used to annotate separate sentences within a specific task. 
Acoustic confidence measures can correctly detect sentences that do not match the text with an equal error rate of 3.3%",,"Semi-automatic annotation of the UCU accents speech corpus. The annotation and labeling of speech tasks in large multitask speech corpora is a necessary part of preparing a corpus for distribution. This paper addresses three approaches to annotation and labeling, namely manual, semi automatic and automatic procedures for labeling the UCU Accent Project speech data, a multilingual multitask longitudinal speech corpus. Accuracy and minimal time investment are the priorities in assessing the efficacy of each procedure. While manual labeling based on aural and visual input should produce the most accurate results, this approach is prone to error because of its repetitive nature. A semi automatic event detection system requiring manual rejection of false alarms and location and labeling of misses provided the best results. A fully automatic system could not be applied to entire speech recordings because of the variety of tasks and genres. However, it could be used to annotate separate sentences within a specific task. Acoustic confidence measures can correctly detect sentences that do not match the text with an equal error rate of 3.3%",2014
poncelas-etal-2019-combining,https://aclanthology.org/R19-1107,0,,,,,,,"Combining PBSMT and NMT Back-translated Data for Efficient NMT. Neural Machine Translation (NMT) models achieve their best performance when large sets of parallel data are used for training. Consequently, techniques for augmenting the training set have become popular recently. One of these methods is back-translation (Sennrich et al., 2016a), which consists of generating synthetic sentences by translating a set of monolingual, target-language sentences using a Machine Translation (MT) model. Generally, NMT models are used for back-translation. In this work, we analyze the performance of models when the training data is extended with synthetic data using different MT approaches. In particular we investigate back-translated data generated not only by NMT but also by Statistical Machine Translation (SMT) models and combinations of both. The results reveal that the models achieve the best performances when the training set is augmented with back-translated data created by merging different MT approaches.",Combining {PBSMT} and {NMT} Back-translated Data for Efficient {NMT},"Neural Machine Translation (NMT) models achieve their best performance when large sets of parallel data are used for training. Consequently, techniques for augmenting the training set have become popular recently. One of these methods is back-translation (Sennrich et al., 2016a), which consists of generating synthetic sentences by translating a set of monolingual, target-language sentences using a Machine Translation (MT) model. Generally, NMT models are used for back-translation. In this work, we analyze the performance of models when the training data is extended with synthetic data using different MT approaches. In particular we investigate back-translated data generated not only by NMT but also by Statistical Machine Translation (SMT) models and combinations of both. The results reveal that the models achieve the best performances when the training set is augmented with back-translated data created by merging different MT approaches.",Combining PBSMT and NMT Back-translated Data for Efficient NMT,"Neural Machine Translation (NMT) models achieve their best performance when large sets of parallel data are used for training. Consequently, techniques for augmenting the training set have become popular recently. One of these methods is back-translation (Sennrich et al., 2016a), which consists of generating synthetic sentences by translating a set of monolingual, target-language sentences using a Machine Translation (MT) model. Generally, NMT models are used for back-translation. In this work, we analyze the performance of models when the training data is extended with synthetic data using different MT approaches. In particular we investigate back-translated data generated not only by NMT but also by Statistical Machine Translation (SMT) models and combinations of both. 
The results reveal that the models achieve the best performances when the training set is augmented with back-translated data created by merging different MT approaches.",This research has been supported by the ADAPT Centre for Digital Content Technology which is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund. This work has also received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 713567.,"Combining PBSMT and NMT Back-translated Data for Efficient NMT. Neural Machine Translation (NMT) models achieve their best performance when large sets of parallel data are used for training. Consequently, techniques for augmenting the training set have become popular recently. One of these methods is back-translation (Sennrich et al., 2016a), which consists of generating synthetic sentences by translating a set of monolingual, target-language sentences using a Machine Translation (MT) model. Generally, NMT models are used for back-translation. In this work, we analyze the performance of models when the training data is extended with synthetic data using different MT approaches. In particular we investigate back-translated data generated not only by NMT but also by Statistical Machine Translation (SMT) models and combinations of both. The results reveal that the models achieve the best performances when the training set is augmented with back-translated data created by merging different MT approaches.",2019
patra-etal-2016-multimodal,https://aclanthology.org/C16-1186,0,,,,,,,"Multimodal Mood Classification - A Case Study of Differences in Hindi and Western Songs. Music information retrieval has emerged as a mainstream research area in the past two decades. Experiments on music mood classification have been performed mainly on Western music based on audio, lyrics and a combination of both. Unfortunately, due to the scarcity of digitalized resources, Indian music fares poorly in music mood retrieval research. In this paper, we identified the mood taxonomy and prepared multimodal mood annotated datasets for Hindi and Western songs. We identified important audio and lyric features using correlation based feature selection technique. Finally, we developed mood classification systems using Support Vector Machines and Feed Forward Neural Networks based on the features collected from audio, lyrics, and a combination of both. The best performing multimodal systems achieved F-measures of 75.1 and 83.5 for classifying the moods of the Hindi and Western songs respectively using Feed Forward Neural Networks. A comparative analysis indicates that the selected features work well for mood classification of the Western songs and produces better results as compared to the mood classification systems for Hindi songs.",Multimodal Mood Classification - A Case Study of Differences in {H}indi and Western Songs,"Music information retrieval has emerged as a mainstream research area in the past two decades. Experiments on music mood classification have been performed mainly on Western music based on audio, lyrics and a combination of both. Unfortunately, due to the scarcity of digitalized resources, Indian music fares poorly in music mood retrieval research. In this paper, we identified the mood taxonomy and prepared multimodal mood annotated datasets for Hindi and Western songs. We identified important audio and lyric features using correlation based feature selection technique. Finally, we developed mood classification systems using Support Vector Machines and Feed Forward Neural Networks based on the features collected from audio, lyrics, and a combination of both. The best performing multimodal systems achieved F-measures of 75.1 and 83.5 for classifying the moods of the Hindi and Western songs respectively using Feed Forward Neural Networks. A comparative analysis indicates that the selected features work well for mood classification of the Western songs and produces better results as compared to the mood classification systems for Hindi songs.",Multimodal Mood Classification - A Case Study of Differences in Hindi and Western Songs,"Music information retrieval has emerged as a mainstream research area in the past two decades. Experiments on music mood classification have been performed mainly on Western music based on audio, lyrics and a combination of both. Unfortunately, due to the scarcity of digitalized resources, Indian music fares poorly in music mood retrieval research. In this paper, we identified the mood taxonomy and prepared multimodal mood annotated datasets for Hindi and Western songs. We identified important audio and lyric features using correlation based feature selection technique. Finally, we developed mood classification systems using Support Vector Machines and Feed Forward Neural Networks based on the features collected from audio, lyrics, and a combination of both. 
The best performing multimodal systems achieved F-measures of 75.1 and 83.5 for classifying the moods of the Hindi and Western songs respectively using Feed Forward Neural Networks. A comparative analysis indicates that the selected features work well for mood classification of the Western songs and produces better results as compared to the mood classification systems for Hindi songs.","The work reported in this paper is supported by a grant from the ""Visvesvaraya Ph.D. Scheme for Electronics and IT"" funded by Media Lab Asia of Ministry of Electronics and Information Technology (Me-itY), Government of India.","Multimodal Mood Classification - A Case Study of Differences in Hindi and Western Songs. Music information retrieval has emerged as a mainstream research area in the past two decades. Experiments on music mood classification have been performed mainly on Western music based on audio, lyrics and a combination of both. Unfortunately, due to the scarcity of digitalized resources, Indian music fares poorly in music mood retrieval research. In this paper, we identified the mood taxonomy and prepared multimodal mood annotated datasets for Hindi and Western songs. We identified important audio and lyric features using correlation based feature selection technique. Finally, we developed mood classification systems using Support Vector Machines and Feed Forward Neural Networks based on the features collected from audio, lyrics, and a combination of both. The best performing multimodal systems achieved F-measures of 75.1 and 83.5 for classifying the moods of the Hindi and Western songs respectively using Feed Forward Neural Networks. A comparative analysis indicates that the selected features work well for mood classification of the Western songs and produces better results as compared to the mood classification systems for Hindi songs.",2016
burga-etal-2015-towards,https://aclanthology.org/W15-2107,0,,,,,,,"Towards a multi-layered dependency annotation of Finnish. We present a dependency annotation scheme for Finnish which aims at respecting the multilayered nature of language. We first tackle the annotation of surface-syntactic structures (SSyntS) as inspired by the Meaning-Text framework. Exclusively syntactic criteria are used when defining the surface-syntactic relations tagset. Our annotation scheme allows for a direct mapping between surface-syntax and a more semantics-oriented representation, in particular predicate-argument structures. It has been applied to a corpus of Finnish, composed of 2,025 sentences related to weather conditions.",Towards a multi-layered dependency annotation of {F}innish,"We present a dependency annotation scheme for Finnish which aims at respecting the multilayered nature of language. We first tackle the annotation of surface-syntactic structures (SSyntS) as inspired by the Meaning-Text framework. Exclusively syntactic criteria are used when defining the surface-syntactic relations tagset. Our annotation scheme allows for a direct mapping between surface-syntax and a more semantics-oriented representation, in particular predicate-argument structures. It has been applied to a corpus of Finnish, composed of 2,025 sentences related to weather conditions.",Towards a multi-layered dependency annotation of Finnish,"We present a dependency annotation scheme for Finnish which aims at respecting the multilayered nature of language. We first tackle the annotation of surface-syntactic structures (SSyntS) as inspired by the Meaning-Text framework. Exclusively syntactic criteria are used when defining the surface-syntactic relations tagset. Our annotation scheme allows for a direct mapping between surface-syntax and a more semantics-oriented representation, in particular predicate-argument structures. It has been applied to a corpus of Finnish, composed of 2,025 sentences related to weather conditions.","The work described in this paper has been carried out in the framework of the project Personalized Environmental Service Configuration and Delivery Orchestration (PESCaDO), supported by the European Commission under the contract number FP7-ICT-248594.","Towards a multi-layered dependency annotation of Finnish. We present a dependency annotation scheme for Finnish which aims at respecting the multilayered nature of language. We first tackle the annotation of surface-syntactic structures (SSyntS) as inspired by the Meaning-Text framework. Exclusively syntactic criteria are used when defining the surface-syntactic relations tagset. Our annotation scheme allows for a direct mapping between surface-syntax and a more semantics-oriented representation, in particular predicate-argument structures. It has been applied to a corpus of Finnish, composed of 2,025 sentences related to weather conditions.",2015
iwai-etal-2019-applying,https://aclanthology.org/W19-6704,0,,,,,,,"Applying Machine Translation to Psychology: Automatic Translation of Personality Adjectives. We introduce our approach to apply machine translation to psychology, especially to translate English adjectives in a psychological personality questionnaire. We first extend seed English personality adjectives with a word2vec model trained with web sentences, and then feed the acquired words to a phrase-based machine translation model. We use Moses trained with bilingual corpora that consist of TED subtitles, movie' subtitles and Wikipedia. We collect Japanese translations whose translation probabilities are higher than .01 and filter them based on human evaluations. This resulted in 507 Japanese personality descriptors. We conducted a web-survey (N=17,751) and finalized a personality questionnaire. Statistical analyses supported the five-factor structure, reliability and criterion-validity of the newly developed questionnaire. This shows the potential applicability of machine translation to psychology. We discuss further issues related to machine translation application to psychology.",Applying Machine Translation to Psychology: Automatic Translation of Personality Adjectives,"We introduce our approach to apply machine translation to psychology, especially to translate English adjectives in a psychological personality questionnaire. We first extend seed English personality adjectives with a word2vec model trained with web sentences, and then feed the acquired words to a phrase-based machine translation model. We use Moses trained with bilingual corpora that consist of TED subtitles, movie' subtitles and Wikipedia. We collect Japanese translations whose translation probabilities are higher than .01 and filter them based on human evaluations. This resulted in 507 Japanese personality descriptors. We conducted a web-survey (N=17,751) and finalized a personality questionnaire. Statistical analyses supported the five-factor structure, reliability and criterion-validity of the newly developed questionnaire. This shows the potential applicability of machine translation to psychology. We discuss further issues related to machine translation application to psychology.",Applying Machine Translation to Psychology: Automatic Translation of Personality Adjectives,"We introduce our approach to apply machine translation to psychology, especially to translate English adjectives in a psychological personality questionnaire. We first extend seed English personality adjectives with a word2vec model trained with web sentences, and then feed the acquired words to a phrase-based machine translation model. We use Moses trained with bilingual corpora that consist of TED subtitles, movie' subtitles and Wikipedia. We collect Japanese translations whose translation probabilities are higher than .01 and filter them based on human evaluations. This resulted in 507 Japanese personality descriptors. We conducted a web-survey (N=17,751) and finalized a personality questionnaire. Statistical analyses supported the five-factor structure, reliability and criterion-validity of the newly developed questionnaire. This shows the potential applicability of machine translation to psychology. We discuss further issues related to machine translation application to psychology.",,"Applying Machine Translation to Psychology: Automatic Translation of Personality Adjectives. 
We introduce our approach to apply machine translation to psychology, especially to translate English adjectives in a psychological personality questionnaire. We first extend seed English personality adjectives with a word2vec model trained with web sentences, and then feed the acquired words to a phrase-based machine translation model. We use Moses trained with bilingual corpora that consist of TED subtitles, movie' subtitles and Wikipedia. We collect Japanese translations whose translation probabilities are higher than .01 and filter them based on human evaluations. This resulted in 507 Japanese personality descriptors. We conducted a web-survey (N=17,751) and finalized a personality questionnaire. Statistical analyses supported the five-factor structure, reliability and criterion-validity of the newly developed questionnaire. This shows the potential applicability of machine translation to psychology. We discuss further issues related to machine translation application to psychology.",2019
samuel-etal-1998-dialogue-act,https://aclanthology.org/P98-2188,0,,,,,,,"Dialogue Act Tagging with Transformation-Based Learning. For the task of recognizing dialogue acts, we are applying the Transformation-Based Learning (TBL) machine learning algorithm. To circumvent a sparse data problem, we extract values of well-motivated features of utterances, such as speaker direction, punctuation marks, and a new feature, called dialogue act cues, which we find to be more effective than cue phrases and word n-grams in practice. We present strategies for constructing a set of dialogue act cues automatically by minimizing the entropy of the distribution of dialogue acts in a training corpus, filtering out irrelevant dialogue act cues, and clustering semantically-related words. In addition, to address limitations of TBL, we introduce a Monte Carlo strategy for training efficiently and a committee method for computing confidence measures. These ideas are combined in our working implementation, which labels held-out data as accurately as any other reported system for the dialogue act tagging task.",Dialogue Act Tagging with Transformation-Based Learning,"For the task of recognizing dialogue acts, we are applying the Transformation-Based Learning (TBL) machine learning algorithm. To circumvent a sparse data problem, we extract values of well-motivated features of utterances, such as speaker direction, punctuation marks, and a new feature, called dialogue act cues, which we find to be more effective than cue phrases and word n-grams in practice. We present strategies for constructing a set of dialogue act cues automatically by minimizing the entropy of the distribution of dialogue acts in a training corpus, filtering out irrelevant dialogue act cues, and clustering semantically-related words. In addition, to address limitations of TBL, we introduce a Monte Carlo strategy for training efficiently and a committee method for computing confidence measures. These ideas are combined in our working implementation, which labels held-out data as accurately as any other reported system for the dialogue act tagging task.",Dialogue Act Tagging with Transformation-Based Learning,"For the task of recognizing dialogue acts, we are applying the Transformation-Based Learning (TBL) machine learning algorithm. To circumvent a sparse data problem, we extract values of well-motivated features of utterances, such as speaker direction, punctuation marks, and a new feature, called dialogue act cues, which we find to be more effective than cue phrases and word n-grams in practice. We present strategies for constructing a set of dialogue act cues automatically by minimizing the entropy of the distribution of dialogue acts in a training corpus, filtering out irrelevant dialogue act cues, and clustering semantically-related words. In addition, to address limitations of TBL, we introduce a Monte Carlo strategy for training efficiently and a committee method for computing confidence measures. These ideas are combined in our working implementation, which labels held-out data as accurately as any other reported system for the dialogue act tagging task.","We wish to thank the members of the VERBMOBIL research group at DFKI in Germany, particularly Norbert Reithinger, Jan Alexandersson, and Elisabeth Maier, for providing us with the opportunity to work with them and generously granting us access to the VERBMOBIL corpora. 
This work was partially supported by the NSF Grant #GER-9354869.","Dialogue Act Tagging with Transformation-Based Learning. For the task of recognizing dialogue acts, we are applying the Transformation-Based Learning (TBL) machine learning algorithm. To circumvent a sparse data problem, we extract values of well-motivated features of utterances, such as speaker direction, punctuation marks, and a new feature, called dialogue act cues, which we find to be more effective than cue phrases and word n-grams in practice. We present strategies for constructing a set of dialogue act cues automatically by minimizing the entropy of the distribution of dialogue acts in a training corpus, filtering out irrelevant dialogue act cues, and clustering semantically-related words. In addition, to address limitations of TBL, we introduce a Monte Carlo strategy for training efficiently and a committee method for computing confidence measures. These ideas are combined in our working implementation, which labels held-out data as accurately as any other reported system for the dialogue act tagging task.",1998
bod-2007-unsupervised,https://aclanthology.org/2007.mtsummit-papers.8,0,,,,,,,"Unsupervised syntax-based machine translation: the contribution of discontiguous phrases. We present a new unsupervised syntax-based MT system, termed U-DOT, which uses the unsupervised U-DOP model for learning paired trees, and which computes the most probable target sentence from the relative frequencies of paired subtrees. We test U-DOT on the German-English Europarl corpus, showing that it outperforms the state-of-the-art phrase-based Pharaoh system. We demonstrate that the inclusion of noncontiguous phrases significantly improves the translation accuracy. This paper presents the first translation results with the data-oriented translation (DOT) model on the Europarl corpus, to the best of our knowledge.",Unsupervised syntax-based machine translation: the contribution of discontiguous phrases,"We present a new unsupervised syntax-based MT system, termed U-DOT, which uses the unsupervised U-DOP model for learning paired trees, and which computes the most probable target sentence from the relative frequencies of paired subtrees. We test U-DOT on the German-English Europarl corpus, showing that it outperforms the state-of-the-art phrase-based Pharaoh system. We demonstrate that the inclusion of noncontiguous phrases significantly improves the translation accuracy. This paper presents the first translation results with the data-oriented translation (DOT) model on the Europarl corpus, to the best of our knowledge.",Unsupervised syntax-based machine translation: the contribution of discontiguous phrases,"We present a new unsupervised syntax-based MT system, termed U-DOT, which uses the unsupervised U-DOP model for learning paired trees, and which computes the most probable target sentence from the relative frequencies of paired subtrees. We test U-DOT on the German-English Europarl corpus, showing that it outperforms the state-of-the-art phrase-based Pharaoh system. We demonstrate that the inclusion of noncontiguous phrases significantly improves the translation accuracy. This paper presents the first translation results with the data-oriented translation (DOT) model on the Europarl corpus, to the best of our knowledge.",,"Unsupervised syntax-based machine translation: the contribution of discontiguous phrases. We present a new unsupervised syntax-based MT system, termed U-DOT, which uses the unsupervised U-DOP model for learning paired trees, and which computes the most probable target sentence from the relative frequencies of paired subtrees. We test U-DOT on the German-English Europarl corpus, showing that it outperforms the state-of-the-art phrase-based Pharaoh system. We demonstrate that the inclusion of noncontiguous phrases significantly improves the translation accuracy. This paper presents the first translation results with the data-oriented translation (DOT) model on the Europarl corpus, to the best of our knowledge.",2007
zhang-2018-comparison,https://aclanthology.org/Y18-1095,0,,,,,,,"A Comparison of Tone Normalization Methods for Language Variation Research. One methodological issue in tonal acoustic analyses is revisited and resolved in this study. Previous tone normalization methods mainly served for categorizing tones but did not aim to preserve sociolinguistic variation. This study, from the perspective of variationist studies, reevaluates the effectiveness of sixteen tone normalization methods and finds that the best tone normalization method is a semitone transformation relative to each speaker's average pitch in hertz.",A Comparison of Tone Normalization Methods for Language Variation Research,"One methodological issue in tonal acoustic analyses is revisited and resolved in this study. Previous tone normalization methods mainly served for categorizing tones but did not aim to preserve sociolinguistic variation. This study, from the perspective of variationist studies, reevaluates the effectiveness of sixteen tone normalization methods and finds that the best tone normalization method is a semitone transformation relative to each speaker's average pitch in hertz.",A Comparison of Tone Normalization Methods for Language Variation Research,"One methodological issue in tonal acoustic analyses is revisited and resolved in this study. Previous tone normalization methods mainly served for categorizing tones but did not aim to preserve sociolinguistic variation. This study, from the perspective of variationist studies, reevaluates the effectiveness of sixteen tone normalization methods and finds that the best tone normalization method is a semitone transformation relative to each speaker's average pitch in hertz.","The author would like to thank Utrecht Institute of Linguistics of Utrecht University, Chinese Scholarship Council and the University of Macau (Startup Research Grant SRG2018-00131-FAH) for supporting this study. Thanks also go to Prof. René Kager and Dr. Hans van de Velde for their very helpful comments and suggestions.","A Comparison of Tone Normalization Methods for Language Variation Research. One methodological issue in tonal acoustic analyses is revisited and resolved in this study. Previous tone normalization methods mainly served for categorizing tones but did not aim to preserve sociolinguistic variation. This study, from the perspective of variationist studies, reevaluates the effectiveness of sixteen tone normalization methods and finds that the best tone normalization method is a semitone transformation relative to each speaker's average pitch in hertz.",2018
miller-2009-improved,https://aclanthology.org/N09-1074,0,,,,,,,"Improved Syntactic Models for Parsing Speech with Repairs. This paper introduces three new syntactic models for representing speech with repairs. These models are developed to test the intuition that the erroneous parts of speech repairs (reparanda) are not generated or recognized as such while occurring, but only after they have been corrected. Thus, they are designed to minimize the differences in grammar rule applications between fluent and disfluent speech containing similar structure. The three models considered in this paper are also designed to isolate the mechanism of impact, by systematically exploring different variables.",Improved Syntactic Models for Parsing Speech with Repairs,"This paper introduces three new syntactic models for representing speech with repairs. These models are developed to test the intuition that the erroneous parts of speech repairs (reparanda) are not generated or recognized as such while occurring, but only after they have been corrected. Thus, they are designed to minimize the differences in grammar rule applications between fluent and disfluent speech containing similar structure. The three models considered in this paper are also designed to isolate the mechanism of impact, by systematically exploring different variables.",Improved Syntactic Models for Parsing Speech with Repairs,"This paper introduces three new syntactic models for representing speech with repairs. These models are developed to test the intuition that the erroneous parts of speech repairs (reparanda) are not generated or recognized as such while occurring, but only after they have been corrected. Thus, they are designed to minimize the differences in grammar rule applications between fluent and disfluent speech containing similar structure. The three models considered in this paper are also designed to isolate the mechanism of impact, by systematically exploring different variables.",,"Improved Syntactic Models for Parsing Speech with Repairs. This paper introduces three new syntactic models for representing speech with repairs. These models are developed to test the intuition that the erroneous parts of speech repairs (reparanda) are not generated or recognized as such while occurring, but only after they have been corrected. Thus, they are designed to minimize the differences in grammar rule applications between fluent and disfluent speech containing similar structure. The three models considered in this paper are also designed to isolate the mechanism of impact, by systematically exploring different variables.",2009
khwaileh-al-asad-2020-elmo,https://aclanthology.org/2020.semeval-1.130,0,,,,,,,"ELMo-NB at SemEval-2020 Task 7: Assessing Sense of Humor in EditedNews Headlines Using ELMo and NB. In this paper, we present our submission for SemEval-2020 competition subtask 1 in Task 7 (Hossain et al., 2020a): Assessing Humor in Edited News Headlines. The task consists of estimating the hilariousness of news headlines that have been modified manually by humans using micro-edit changes to make them funny. Our approach is constructed to improve on a couple of aspects; preprocessing with an emphasis on humor sense detection, using embeddings from state-of-the-art language model (ELMo), and ensembling the results came up with using machine learning model Naïve Bayes (NB) with a deep learning pretrained models. ELMo-NB participation has scored (0.5642) on the competition leader board, where results were measured by Root Mean Squared Error (RMSE).",{ELM}o-{NB} at {S}em{E}val-2020 Task 7: Assessing Sense of Humor in {E}dited{N}ews Headlines Using {ELM}o and {NB},"In this paper, we present our submission for SemEval-2020 competition subtask 1 in Task 7 (Hossain et al., 2020a): Assessing Humor in Edited News Headlines. The task consists of estimating the hilariousness of news headlines that have been modified manually by humans using micro-edit changes to make them funny. Our approach is constructed to improve on a couple of aspects; preprocessing with an emphasis on humor sense detection, using embeddings from state-of-the-art language model (ELMo), and ensembling the results came up with using machine learning model Naïve Bayes (NB) with a deep learning pretrained models. ELMo-NB participation has scored (0.5642) on the competition leader board, where results were measured by Root Mean Squared Error (RMSE).",ELMo-NB at SemEval-2020 Task 7: Assessing Sense of Humor in EditedNews Headlines Using ELMo and NB,"In this paper, we present our submission for SemEval-2020 competition subtask 1 in Task 7 (Hossain et al., 2020a): Assessing Humor in Edited News Headlines. The task consists of estimating the hilariousness of news headlines that have been modified manually by humans using micro-edit changes to make them funny. Our approach is constructed to improve on a couple of aspects; preprocessing with an emphasis on humor sense detection, using embeddings from state-of-the-art language model (ELMo), and ensembling the results came up with using machine learning model Naïve Bayes (NB) with a deep learning pretrained models. ELMo-NB participation has scored (0.5642) on the competition leader board, where results were measured by Root Mean Squared Error (RMSE).","We would like to extend our sincere thanks to Dr. Malak Abdullah for her efforts and support. In order to finish this work, we had a lot of straight directions and advice from her, during the fall semester, 2019.","ELMo-NB at SemEval-2020 Task 7: Assessing Sense of Humor in EditedNews Headlines Using ELMo and NB. In this paper, we present our submission for SemEval-2020 competition subtask 1 in Task 7 (Hossain et al., 2020a): Assessing Humor in Edited News Headlines. The task consists of estimating the hilariousness of news headlines that have been modified manually by humans using micro-edit changes to make them funny. 
Our approach is constructed to improve on a couple of aspects; preprocessing with an emphasis on humor sense detection, using embeddings from state-of-the-art language model (ELMo), and ensembling the results came up with using machine learning model Naïve Bayes (NB) with a deep learning pretrained models. ELMo-NB participation has scored (0.5642) on the competition leader board, where results were measured by Root Mean Squared Error (RMSE).",2020
liu-etal-2020-metaphor,https://aclanthology.org/2020.figlang-1.34,0,,,,,,,"Metaphor Detection Using Contextual Word Embeddings From Transformers. The detection of metaphors can provide valuable information about a given text and is crucial to sentiment analysis and machine translation. In this paper, we outline the techniques for word-level metaphor detection used in our submission to the Second Shared Task on Metaphor Detection. We propose using both BERT and XLNet language models to create contextualized embeddings and a bidirectional LSTM to identify whether a given word is a metaphor. Our best model achieved F1-scores of 68.0% on VUA AllPOS, 73.0% on VUA Verbs, 66.9% on TOEFL AllPOS, and 69.7% on TOEFL Verbs, placing 7th, 6th, 5th, and 5th respectively. In addition, we outline another potential approach with a KNN-LSTM ensemble model that we did not have enough time to implement given the deadline for the competition. We show that a KNN classifier provides a similar F1-score on a validation set as the LSTM and yields different information on metaphors.",Metaphor Detection Using Contextual Word Embeddings From Transformers,"The detection of metaphors can provide valuable information about a given text and is crucial to sentiment analysis and machine translation. In this paper, we outline the techniques for word-level metaphor detection used in our submission to the Second Shared Task on Metaphor Detection. We propose using both BERT and XLNet language models to create contextualized embeddings and a bidirectional LSTM to identify whether a given word is a metaphor. Our best model achieved F1-scores of 68.0% on VUA AllPOS, 73.0% on VUA Verbs, 66.9% on TOEFL AllPOS, and 69.7% on TOEFL Verbs, placing 7th, 6th, 5th, and 5th respectively. In addition, we outline another potential approach with a KNN-LSTM ensemble model that we did not have enough time to implement given the deadline for the competition. We show that a KNN classifier provides a similar F1-score on a validation set as the LSTM and yields different information on metaphors.",Metaphor Detection Using Contextual Word Embeddings From Transformers,"The detection of metaphors can provide valuable information about a given text and is crucial to sentiment analysis and machine translation. In this paper, we outline the techniques for word-level metaphor detection used in our submission to the Second Shared Task on Metaphor Detection. We propose using both BERT and XLNet language models to create contextualized embeddings and a bidirectional LSTM to identify whether a given word is a metaphor. Our best model achieved F1-scores of 68.0% on VUA AllPOS, 73.0% on VUA Verbs, 66.9% on TOEFL AllPOS, and 69.7% on TOEFL Verbs, placing 7th, 6th, 5th, and 5th respectively. In addition, we outline another potential approach with a KNN-LSTM ensemble model that we did not have enough time to implement given the deadline for the competition. We show that a KNN classifier provides a similar F1-score on a validation set as the LSTM and yields different information on metaphors.",The authors thank the organizers of the Second Shared Task on Metaphor Detection and the rest of the Duke Data Science Team. We also thank the anonymous reviewers for their insightful comments.,"Metaphor Detection Using Contextual Word Embeddings From Transformers. The detection of metaphors can provide valuable information about a given text and is crucial to sentiment analysis and machine translation. 
In this paper, we outline the techniques for word-level metaphor detection used in our submission to the Second Shared Task on Metaphor Detection. We propose using both BERT and XLNet language models to create contextualized embeddings and a bidirectional LSTM to identify whether a given word is a metaphor. Our best model achieved F1-scores of 68.0% on VUA AllPOS, 73.0% on VUA Verbs, 66.9% on TOEFL AllPOS, and 69.7% on TOEFL Verbs, placing 7th, 6th, 5th, and 5th respectively. In addition, we outline another potential approach with a KNN-LSTM ensemble model that we did not have enough time to implement given the deadline for the competition. We show that a KNN classifier provides a similar F1-score on a validation set as the LSTM and yields different information on metaphors.",2020
vertanen-kristensson-2011-imagination,https://aclanthology.org/D11-1065,0,,,,,,,"The Imagination of Crowds: Conversational AAC Language Modeling using Crowdsourcing and Large Data Sources. Augmented and alternative communication (AAC) devices enable users with certain communication disabilities to participate in everyday conversations. Such devices often rely on statistical language models to improve text entry by offering word predictions. These predictions can be improved if the language model is trained on data that closely reflects the style of the users' intended communications. Unfortunately, there is no large dataset consisting of genuine AAC messages. In this paper we demonstrate how we can crowdsource the creation of a large set of fictional AAC messages. We show that these messages model conversational AAC better than the currently used datasets based on telephone conversations or newswire text. We leverage our crowdsourced messages to intelligently select sentences from much larger sets of Twitter, blog and Usenet data. Compared to a model trained only on telephone transcripts, our best performing model reduced perplexity on three test sets of AAC-like communications by 60-82% relative. This translated to a potential keystroke savings in a predictive keyboard interface of 5-11%.",The Imagination of Crowds: Conversational {AAC} Language Modeling using Crowdsourcing and Large Data Sources,"Augmented and alternative communication (AAC) devices enable users with certain communication disabilities to participate in everyday conversations. Such devices often rely on statistical language models to improve text entry by offering word predictions. These predictions can be improved if the language model is trained on data that closely reflects the style of the users' intended communications. Unfortunately, there is no large dataset consisting of genuine AAC messages. In this paper we demonstrate how we can crowdsource the creation of a large set of fictional AAC messages. We show that these messages model conversational AAC better than the currently used datasets based on telephone conversations or newswire text. We leverage our crowdsourced messages to intelligently select sentences from much larger sets of Twitter, blog and Usenet data. Compared to a model trained only on telephone transcripts, our best performing model reduced perplexity on three test sets of AAC-like communications by 60-82% relative. This translated to a potential keystroke savings in a predictive keyboard interface of 5-11%.",The Imagination of Crowds: Conversational AAC Language Modeling using Crowdsourcing and Large Data Sources,"Augmented and alternative communication (AAC) devices enable users with certain communication disabilities to participate in everyday conversations. Such devices often rely on statistical language models to improve text entry by offering word predictions. These predictions can be improved if the language model is trained on data that closely reflects the style of the users' intended communications. Unfortunately, there is no large dataset consisting of genuine AAC messages. In this paper we demonstrate how we can crowdsource the creation of a large set of fictional AAC messages. We show that these messages model conversational AAC better than the currently used datasets based on telephone conversations or newswire text. We leverage our crowdsourced messages to intelligently select sentences from much larger sets of Twitter, blog and Usenet data. 
Compared to a model trained only on telephone transcripts, our best performing model reduced perplexity on three test sets of AAC-like communications by 60-82% relative. This translated to a potential keystroke savings in a predictive keyboard interface of 5-11%.",We thank Keith Trnka and Horabail Venkatagiri for their assistance. This work was supported by the Engineering and Physical Sciences Research Council (grant number EP/H027408/1).,"The Imagination of Crowds: Conversational AAC Language Modeling using Crowdsourcing and Large Data Sources. Augmented and alternative communication (AAC) devices enable users with certain communication disabilities to participate in everyday conversations. Such devices often rely on statistical language models to improve text entry by offering word predictions. These predictions can be improved if the language model is trained on data that closely reflects the style of the users' intended communications. Unfortunately, there is no large dataset consisting of genuine AAC messages. In this paper we demonstrate how we can crowdsource the creation of a large set of fictional AAC messages. We show that these messages model conversational AAC better than the currently used datasets based on telephone conversations or newswire text. We leverage our crowdsourced messages to intelligently select sentences from much larger sets of Twitter, blog and Usenet data. Compared to a model trained only on telephone transcripts, our best performing model reduced perplexity on three test sets of AAC-like communications by 60-82% relative. This translated to a potential keystroke savings in a predictive keyboard interface of 5-11%.",2011
artetxe-etal-2019-effective,https://aclanthology.org/P19-1019,0,,,,,,,"An Effective Approach to Unsupervised Machine Translation. While machine translation has traditionally relied on large amounts of parallel corpora, a recent research line has managed to train both Neural Machine Translation (NMT) and Statistical Machine Translation (SMT) systems using monolingual corpora only. In this paper, we identify and address several deficiencies of existing unsupervised SMT approaches by exploiting subword information, developing a theoretically well founded unsupervised tuning method, and incorporating a joint refinement procedure. Moreover, we use our improved SMT system to initialize a dual NMT model, which is further fine-tuned through on-the-fly back-translation. Together, we obtain large improvements over the previous state-of-the-art in unsupervised machine translation. For instance, we get 22.5 BLEU points in English-to-German WMT 2014, 5.5 points more than the previous best unsupervised system, and 0.5 points more than the (supervised) shared task winner back in 2014.",An Effective Approach to Unsupervised Machine Translation,"While machine translation has traditionally relied on large amounts of parallel corpora, a recent research line has managed to train both Neural Machine Translation (NMT) and Statistical Machine Translation (SMT) systems using monolingual corpora only. In this paper, we identify and address several deficiencies of existing unsupervised SMT approaches by exploiting subword information, developing a theoretically well founded unsupervised tuning method, and incorporating a joint refinement procedure. Moreover, we use our improved SMT system to initialize a dual NMT model, which is further fine-tuned through on-the-fly back-translation. Together, we obtain large improvements over the previous state-of-the-art in unsupervised machine translation. For instance, we get 22.5 BLEU points in English-to-German WMT 2014, 5.5 points more than the previous best unsupervised system, and 0.5 points more than the (supervised) shared task winner back in 2014.",An Effective Approach to Unsupervised Machine Translation,"While machine translation has traditionally relied on large amounts of parallel corpora, a recent research line has managed to train both Neural Machine Translation (NMT) and Statistical Machine Translation (SMT) systems using monolingual corpora only. In this paper, we identify and address several deficiencies of existing unsupervised SMT approaches by exploiting subword information, developing a theoretically well founded unsupervised tuning method, and incorporating a joint refinement procedure. Moreover, we use our improved SMT system to initialize a dual NMT model, which is further fine-tuned through on-the-fly back-translation. Together, we obtain large improvements over the previous state-of-the-art in unsupervised machine translation. For instance, we get 22.5 BLEU points in English-to-German WMT 2014, 5.5 points more than the previous best unsupervised system, and 0.5 points more than the (supervised) shared task winner back in 2014.","This research was partially supported by the Spanish MINECO (UnsupNMT TIN2017-91692-EXP and DOMINO PGC2018-102041-B-I00, cofunded by EU FEDER), the BigKnowledge project (BBVA foundation grant 2018), the UPV/EHU (excellence research group), and the NVIDIA GPU grant program. Mikel Artetxe was supported by a doctoral grant from the Spanish MECD.","An Effective Approach to Unsupervised Machine Translation. 
While machine translation has traditionally relied on large amounts of parallel corpora, a recent research line has managed to train both Neural Machine Translation (NMT) and Statistical Machine Translation (SMT) systems using monolingual corpora only. In this paper, we identify and address several deficiencies of existing unsupervised SMT approaches by exploiting subword information, developing a theoretically well founded unsupervised tuning method, and incorporating a joint refinement procedure. Moreover, we use our improved SMT system to initialize a dual NMT model, which is further fine-tuned through on-the-fly back-translation. Together, we obtain large improvements over the previous state-of-the-art in unsupervised machine translation. For instance, we get 22.5 BLEU points in English-to-German WMT 2014, 5.5 points more than the previous best unsupervised system, and 0.5 points more than the (supervised) shared task winner back in 2014.",2019
melamud-etal-2016-role,https://aclanthology.org/N16-1118,0,,,,,,,"The Role of Context Types and Dimensionality in Learning Word Embeddings. We provide the first extensive evaluation of how using different types of context to learn skip-gram word embeddings affects performance on a wide range of intrinsic and extrinsic NLP tasks. Our results suggest that while intrinsic tasks tend to exhibit a clear preference to particular types of contexts and higher dimensionality, more careful tuning is required for finding the optimal settings for most of the extrinsic tasks that we considered. Furthermore, for these extrinsic tasks, we find that once the benefit from increasing the embedding dimensionality is mostly exhausted, simple concatenation of word embeddings, learned with different context types, can yield further performance gains. As an additional contribution, we propose a new variant of the skip-gram model that learns word embeddings from weighted contexts of substitute words.",The Role of Context Types and Dimensionality in Learning Word Embeddings,"We provide the first extensive evaluation of how using different types of context to learn skip-gram word embeddings affects performance on a wide range of intrinsic and extrinsic NLP tasks. Our results suggest that while intrinsic tasks tend to exhibit a clear preference to particular types of contexts and higher dimensionality, more careful tuning is required for finding the optimal settings for most of the extrinsic tasks that we considered. Furthermore, for these extrinsic tasks, we find that once the benefit from increasing the embedding dimensionality is mostly exhausted, simple concatenation of word embeddings, learned with different context types, can yield further performance gains. As an additional contribution, we propose a new variant of the skip-gram model that learns word embeddings from weighted contexts of substitute words.",The Role of Context Types and Dimensionality in Learning Word Embeddings,"We provide the first extensive evaluation of how using different types of context to learn skip-gram word embeddings affects performance on a wide range of intrinsic and extrinsic NLP tasks. Our results suggest that while intrinsic tasks tend to exhibit a clear preference to particular types of contexts and higher dimensionality, more careful tuning is required for finding the optimal settings for most of the extrinsic tasks that we considered. Furthermore, for these extrinsic tasks, we find that once the benefit from increasing the embedding dimensionality is mostly exhausted, simple concatenation of word embeddings, learned with different context types, can yield further performance gains. As an additional contribution, we propose a new variant of the skip-gram model that learns word embeddings from weighted contexts of substitute words.","We thank Do Kook Choe for providing us the jackknifed version of WSJ. We also wish to thank the IBM Watson team for helpful discussions and our anonymous reviewers for their comments. This work was partially supported by the Israel Science Foundation grant 880/12 and the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1).","The Role of Context Types and Dimensionality in Learning Word Embeddings. We provide the first extensive evaluation of how using different types of context to learn skip-gram word embeddings affects performance on a wide range of intrinsic and extrinsic NLP tasks. 
Our results suggest that while intrinsic tasks tend to exhibit a clear preference to particular types of contexts and higher dimensionality, more careful tuning is required for finding the optimal settings for most of the extrinsic tasks that we considered. Furthermore, for these extrinsic tasks, we find that once the benefit from increasing the embedding dimensionality is mostly exhausted, simple concatenation of word embeddings, learned with different context types, can yield further performance gains. As an additional contribution, we propose a new variant of the skip-gram model that learns word embeddings from weighted contexts of substitute words.",2016
tillmann-2004-unigram,https://aclanthology.org/N04-4026,0,,,,,,,"A Unigram Orientation Model for Statistical Machine Translation. In this paper, we present a unigram segmentation model for statistical machine translation where the segmentation units are blocks: pairs of phrases without internal structure. The segmentation model uses a novel orientation component to handle swapping of neighbor blocks. During training, we collect block unigram counts with orientation: we count how often a block occurs to the left or to the right of some predecessor block. The orientation model is shown to improve translation performance over two models: 1) no block reordering is used, and 2) the block swapping is controlled only by a language model. We show experimental results on a standard Arabic-English translation task.",A Unigram Orientation Model for Statistical Machine Translation,"In this paper, we present a unigram segmentation model for statistical machine translation where the segmentation units are blocks: pairs of phrases without internal structure. The segmentation model uses a novel orientation component to handle swapping of neighbor blocks. During training, we collect block unigram counts with orientation: we count how often a block occurs to the left or to the right of some predecessor block. The orientation model is shown to improve translation performance over two models: 1) no block reordering is used, and 2) the block swapping is controlled only by a language model. We show experimental results on a standard Arabic-English translation task.",A Unigram Orientation Model for Statistical Machine Translation,"In this paper, we present a unigram segmentation model for statistical machine translation where the segmentation units are blocks: pairs of phrases without internal structure. The segmentation model uses a novel orientation component to handle swapping of neighbor blocks. During training, we collect block unigram counts with orientation: we count how often a block occurs to the left or to the right of some predecessor block. The orientation model is shown to improve translation performance over two models: 1) no block reordering is used, and 2) the block swapping is controlled only by a language model. We show experimental results on a standard Arabic-English translation task.",This work was partially supported by DARPA and monitored by SPAWAR under contract No. N66001-99-2-8916. The paper has greatly profited from discussion with Kishore Papineni and Fei Xia.,"A Unigram Orientation Model for Statistical Machine Translation. In this paper, we present a unigram segmentation model for statistical machine translation where the segmentation units are blocks: pairs of phrases without internal structure. The segmentation model uses a novel orientation component to handle swapping of neighbor blocks. During training, we collect block unigram counts with orientation: we count how often a block occurs to the left or to the right of some predecessor block. The orientation model is shown to improve translation performance over two models: 1) no block reordering is used, and 2) the block swapping is controlled only by a language model. We show experimental results on a standard Arabic-English translation task.",2004
wang-etal-2019-role,https://aclanthology.org/D19-6405,0,,,,,,,"On the Role of Scene Graphs in Image Captioning. Scene graphs represent semantic information in images, which can help image captioning system to produce more descriptive outputs versus using only the image as context. Recent captioning approaches rely on ad-hoc approaches to obtain graphs for images. However, those graphs introduce noise and it is unclear the effect of parser errors on captioning accuracy. In this work, we investigate to what extent scene graphs can help image captioning. Our results show that a state-of-the-art scene graph parser can boost performance almost as much as the ground truth graphs, showing that the bottleneck currently resides more on the captioning models than on the performance of the scene graph parser.",On the Role of Scene Graphs in Image Captioning,"Scene graphs represent semantic information in images, which can help image captioning system to produce more descriptive outputs versus using only the image as context. Recent captioning approaches rely on ad-hoc approaches to obtain graphs for images. However, those graphs introduce noise and it is unclear the effect of parser errors on captioning accuracy. In this work, we investigate to what extent scene graphs can help image captioning. Our results show that a state-of-the-art scene graph parser can boost performance almost as much as the ground truth graphs, showing that the bottleneck currently resides more on the captioning models than on the performance of the scene graph parser.",On the Role of Scene Graphs in Image Captioning,"Scene graphs represent semantic information in images, which can help image captioning system to produce more descriptive outputs versus using only the image as context. Recent captioning approaches rely on ad-hoc approaches to obtain graphs for images. However, those graphs introduce noise and it is unclear the effect of parser errors on captioning accuracy. In this work, we investigate to what extent scene graphs can help image captioning. Our results show that a state-of-the-art scene graph parser can boost performance almost as much as the ground truth graphs, showing that the bottleneck currently resides more on the captioning models than on the performance of the scene graph parser.",,"On the Role of Scene Graphs in Image Captioning. Scene graphs represent semantic information in images, which can help image captioning system to produce more descriptive outputs versus using only the image as context. Recent captioning approaches rely on ad-hoc approaches to obtain graphs for images. However, those graphs introduce noise and it is unclear the effect of parser errors on captioning accuracy. In this work, we investigate to what extent scene graphs can help image captioning. Our results show that a state-of-the-art scene graph parser can boost performance almost as much as the ground truth graphs, showing that the bottleneck currently resides more on the captioning models than on the performance of the scene graph parser.",2019
gong-etal-2015-hashtag,https://aclanthology.org/D15-1046,0,,,,,,,"Hashtag Recommendation Using Dirichlet Process Mixture Models Incorporating Types of Hashtags. In recent years, the task of recommending hashtags for microblogs has been given increasing attention. Various methods have been proposed to study the problem from different aspects. However, most of the recent studies have not considered the differences in the types or uses of hashtags. In this paper, we introduce a novel nonparametric Bayesian method for this task. Based on the Dirichlet Process Mixture Models (DPMM), we incorporate the type of hashtag as a hidden variable. The results of experiments on the data collected from a real world microblogging service demonstrate that the proposed method outperforms state-of-the-art methods that do not consider these aspects. By taking these aspects into consideration, the relative improvement of the proposed method over the state-of-the-art methods is around 12.2% in F1-score.",Hashtag Recommendation Using {D}irichlet Process Mixture Models Incorporating Types of Hashtags,"In recent years, the task of recommending hashtags for microblogs has been given increasing attention. Various methods have been proposed to study the problem from different aspects. However, most of the recent studies have not considered the differences in the types or uses of hashtags. In this paper, we introduce a novel nonparametric Bayesian method for this task. Based on the Dirichlet Process Mixture Models (DPMM), we incorporate the type of hashtag as a hidden variable. The results of experiments on the data collected from a real world microblogging service demonstrate that the proposed method outperforms state-of-the-art methods that do not consider these aspects. By taking these aspects into consideration, the relative improvement of the proposed method over the state-of-the-art methods is around 12.2% in F1-score.",Hashtag Recommendation Using Dirichlet Process Mixture Models Incorporating Types of Hashtags,"In recent years, the task of recommending hashtags for microblogs has been given increasing attention. Various methods have been proposed to study the problem from different aspects. However, most of the recent studies have not considered the differences in the types or uses of hashtags. In this paper, we introduce a novel nonparametric Bayesian method for this task. Based on the Dirichlet Process Mixture Models (DPMM), we incorporate the type of hashtag as a hidden variable. The results of experiments on the data collected from a real world microblogging service demonstrate that the proposed method outperforms state-of-the-art methods that do not consider these aspects. By taking these aspects into consideration, the relative improvement of the proposed method over the state-of-the-art methods is around 12.2% in F1-score.","Partially funded by National Natural Science Foundation of China (No. 61473092 and 61472088), the National High Technology Research and Development Program of China (No. 2015AA011802), and Shanghai Science and Technology Development Funds (13dz226020013511504300).","Hashtag Recommendation Using Dirichlet Process Mixture Models Incorporating Types of Hashtags. In recent years, the task of recommending hashtags for microblogs has been given increasing attention. Various methods have been proposed to study the problem from different aspects. However, most of the recent studies have not considered the differences in the types or uses of hashtags. 
In this paper, we introduce a novel nonparametric Bayesian method for this task. Based on the Dirichlet Process Mixture Models (DPMM), we incorporate the type of hashtag as a hidden variable. The results of experiments on the data collected from a real world microblogging service demonstrate that the proposed method outperforms state-of-the-art methods that do not consider these aspects. By taking these aspects into consideration, the relative improvement of the proposed method over the state-of-the-art methods is around 12.2% in F1-score.",2015
ambati-etal-2010-active-semi,https://aclanthology.org/W10-0102,0,,,,,,,Active Semi-Supervised Learning for Improving Word Alignment. Word alignment models form an important part of building statistical machine translation systems. Semi-supervised word alignment aims to improve the accuracy of automatic word alignment by incorporating full or partial alignments acquired from humans. Such dedicated elicitation effort is often expensive and depends on availability of bilingual speakers for the language-pair. In this paper we study active learning query strategies to carefully identify highly uncertain or most informative alignment links that are proposed under an unsupervised word alignment model. Manual correction of such informative links can then be applied to create a labeled dataset used by a semi-supervised word alignment model. Our experiments show that using active learning leads to maximal reduction of alignment error rates with reduced human effort.,Active Semi-Supervised Learning for Improving Word Alignment,Word alignment models form an important part of building statistical machine translation systems. Semi-supervised word alignment aims to improve the accuracy of automatic word alignment by incorporating full or partial alignments acquired from humans. Such dedicated elicitation effort is often expensive and depends on availability of bilingual speakers for the language-pair. In this paper we study active learning query strategies to carefully identify highly uncertain or most informative alignment links that are proposed under an unsupervised word alignment model. Manual correction of such informative links can then be applied to create a labeled dataset used by a semi-supervised word alignment model. Our experiments show that using active learning leads to maximal reduction of alignment error rates with reduced human effort.,Active Semi-Supervised Learning for Improving Word Alignment,Word alignment models form an important part of building statistical machine translation systems. Semi-supervised word alignment aims to improve the accuracy of automatic word alignment by incorporating full or partial alignments acquired from humans. Such dedicated elicitation effort is often expensive and depends on availability of bilingual speakers for the language-pair. In this paper we study active learning query strategies to carefully identify highly uncertain or most informative alignment links that are proposed under an unsupervised word alignment model. Manual correction of such informative links can then be applied to create a labeled dataset used by a semi-supervised word alignment model. Our experiments show that using active learning leads to maximal reduction of alignment error rates with reduced human effort.,"This research was partially supported by DARPA under grant NBCHC080097. Any opinions, findings, and conclusions expressed in this paper are those of the authors and do not necessarily reflect the views of the DARPA. The first author would like to thank Qin Gao for the semi-supervised word alignment software and help with running experiments.",Active Semi-Supervised Learning for Improving Word Alignment. Word alignment models form an important part of building statistical machine translation systems. Semi-supervised word alignment aims to improve the accuracy of automatic word alignment by incorporating full or partial alignments acquired from humans. Such dedicated elicitation effort is often expensive and depends on availability of bilingual speakers for the language-pair. 
In this paper we study active learning query strategies to carefully identify highly uncertain or most informative alignment links that are proposed under an unsupervised word alignment model. Manual correction of such informative links can then be applied to create a labeled dataset used by a semi-supervised word alignment model. Our experiments show that using active learning leads to maximal reduction of alignment error rates with reduced human effort.,2010
tilk-alumae-2017-low,https://aclanthology.org/W17-4503,0,,,,,,,"Low-Resource Neural Headline Generation. Recent neural headline generation models have shown great results, but are generally trained on very large datasets. We focus our efforts on improving headline quality on smaller datasets by the means of pretraining. We propose new methods that enable pre-training all the parameters of the model and utilize all available text, resulting in improvements by up to 32.4% relative in perplexity and 2.84 points in ROUGE.",Low-Resource Neural Headline Generation,"Recent neural headline generation models have shown great results, but are generally trained on very large datasets. We focus our efforts on improving headline quality on smaller datasets by the means of pretraining. We propose new methods that enable pre-training all the parameters of the model and utilize all available text, resulting in improvements by up to 32.4% relative in perplexity and 2.84 points in ROUGE.",Low-Resource Neural Headline Generation,"Recent neural headline generation models have shown great results, but are generally trained on very large datasets. We focus our efforts on improving headline quality on smaller datasets by the means of pretraining. We propose new methods that enable pre-training all the parameters of the model and utilize all available text, resulting in improvements by up to 32.4% relative in perplexity and 2.84 points in ROUGE.","We would like to thank NVIDIA for the donated GPU, the anonymous reviewers for their valuable comments, and Kyunghyun Cho for the help with the CNN/Daily Mail dataset.","Low-Resource Neural Headline Generation. Recent neural headline generation models have shown great results, but are generally trained on very large datasets. We focus our efforts on improving headline quality on smaller datasets by the means of pretraining. We propose new methods that enable pre-training all the parameters of the model and utilize all available text, resulting in improvements by up to 32.4% relative in perplexity and 2.84 points in ROUGE.",2017
stafanovics-etal-2020-mitigating,https://aclanthology.org/2020.wmt-1.73,1,,,,gender_equality,,,"Mitigating Gender Bias in Machine Translation with Target Gender Annotations. When translating ""The secretary asked for details."" to a language with grammatical gender, it might be necessary to determine the gender of the subject ""secretary"". If the sentence does not contain the necessary information, it is not always possible to disambiguate. In such cases, machine translation systems select the most common translation option, which often corresponds to the stereotypical translations, thus potentially exacerbating prejudice and marginalisation of certain groups and people. We argue that the information necessary for an adequate translation can not always be deduced from the sentence being translated or even might depend on external knowledge. Therefore, in this work, we propose to decouple the task of acquiring the necessary information from the task of learning to translate correctly when such information is available. To that end, we present a method for training machine translation systems to use word-level annotations containing information about subject's gender. To prepare training data, we annotate regular source language words with grammatical gender information of the corresponding target language words. Using such data to train machine translation systems reduces their reliance on gender stereotypes when information about the subject's gender is available. Our experiments on five language pairs show that this allows improving accuracy on the WinoMT test set by up to 25.8 percentage points.",Mitigating Gender Bias in Machine Translation with Target Gender Annotations,"When translating ""The secretary asked for details."" to a language with grammatical gender, it might be necessary to determine the gender of the subject ""secretary"". If the sentence does not contain the necessary information, it is not always possible to disambiguate. In such cases, machine translation systems select the most common translation option, which often corresponds to the stereotypical translations, thus potentially exacerbating prejudice and marginalisation of certain groups and people. We argue that the information necessary for an adequate translation can not always be deduced from the sentence being translated or even might depend on external knowledge. Therefore, in this work, we propose to decouple the task of acquiring the necessary information from the task of learning to translate correctly when such information is available. To that end, we present a method for training machine translation systems to use word-level annotations containing information about subject's gender. To prepare training data, we annotate regular source language words with grammatical gender information of the corresponding target language words. Using such data to train machine translation systems reduces their reliance on gender stereotypes when information about the subject's gender is available. Our experiments on five language pairs show that this allows improving accuracy on the WinoMT test set by up to 25.8 percentage points.",Mitigating Gender Bias in Machine Translation with Target Gender Annotations,"When translating ""The secretary asked for details."" to a language with grammatical gender, it might be necessary to determine the gender of the subject ""secretary"". If the sentence does not contain the necessary information, it is not always possible to disambiguate. 
In such cases, machine translation systems select the most common translation option, which often corresponds to the stereotypical translations, thus potentially exacerbating prejudice and marginalisation of certain groups and people. We argue that the information necessary for an adequate translation can not always be deduced from the sentence being translated or even might depend on external knowledge. Therefore, in this work, we propose to decouple the task of acquiring the necessary information from the task of learning to translate correctly when such information is available. To that end, we present a method for training machine translation systems to use word-level annotations containing information about subject's gender. To prepare training data, we annotate regular source language words with grammatical gender information of the corresponding target language words. Using such data to train machine translation systems reduces their reliance on gender stereotypes when information about the subject's gender is available. Our experiments on five language pairs show that this allows improving accuracy on the WinoMT test set by up to 25.8 percentage points.","This research was partly done within the scope of the undergraduate thesis project of the first author at the University of Latvia and supervised at Tilde. This research has been supported by the European Regional Development Fund within the joint project of SIA TILDE and University of Latvia ""Multilingual Artificial Intelligence Based Human Computer Interaction"" No. 1.1.1.1/18/A/148.","Mitigating Gender Bias in Machine Translation with Target Gender Annotations. When translating ""The secretary asked for details."" to a language with grammatical gender, it might be necessary to determine the gender of the subject ""secretary"". If the sentence does not contain the necessary information, it is not always possible to disambiguate. In such cases, machine translation systems select the most common translation option, which often corresponds to the stereotypical translations, thus potentially exacerbating prejudice and marginalisation of certain groups and people. We argue that the information necessary for an adequate translation can not always be deduced from the sentence being translated or even might depend on external knowledge. Therefore, in this work, we propose to decouple the task of acquiring the necessary information from the task of learning to translate correctly when such information is available. To that end, we present a method for training machine translation systems to use word-level annotations containing information about subject's gender. To prepare training data, we annotate regular source language words with grammatical gender information of the corresponding target language words. Using such data to train machine translation systems reduces their reliance on gender stereotypes when information about the subject's gender is available. Our experiments on five language pairs show that this allows improving accuracy on the WinoMT test set by up to 25.8 percentage points.",2020
tan-etal-2014-sensible,https://aclanthology.org/S14-2094,0,,,,,,,"Sensible: L2 Translation Assistance by Emulating the Manual Post-Editing Process. This paper describes the Post-Editor Z system submitted to the L2 writing assistant task in SemEval-2014. The aim of task is to build a translation assistance system to translate untranslated sentence fragments. This is not unlike the task of post-editing where human translators improve machine-generated translations. Post-Editor Z emulates the manual process of post-editing by (i) crawling and extracting parallel sentences that contain the untranslated fragments from a Web-based translation memory, (ii) extracting the possible translations of the fragments indexed by the translation memory and (iii) applying simple cosine-based sentence similarity to rank possible translations for the untranslated fragment.",{S}ensible: {L}2 Translation Assistance by Emulating the Manual Post-Editing Process,"This paper describes the Post-Editor Z system submitted to the L2 writing assistant task in SemEval-2014. The aim of task is to build a translation assistance system to translate untranslated sentence fragments. This is not unlike the task of post-editing where human translators improve machine-generated translations. Post-Editor Z emulates the manual process of post-editing by (i) crawling and extracting parallel sentences that contain the untranslated fragments from a Web-based translation memory, (ii) extracting the possible translations of the fragments indexed by the translation memory and (iii) applying simple cosine-based sentence similarity to rank possible translations for the untranslated fragment.",Sensible: L2 Translation Assistance by Emulating the Manual Post-Editing Process,"This paper describes the Post-Editor Z system submitted to the L2 writing assistant task in SemEval-2014. The aim of task is to build a translation assistance system to translate untranslated sentence fragments. This is not unlike the task of post-editing where human translators improve machine-generated translations. Post-Editor Z emulates the manual process of post-editing by (i) crawling and extracting parallel sentences that contain the untranslated fragments from a Web-based translation memory, (ii) extracting the possible translations of the fragments indexed by the translation memory and (iii) applying simple cosine-based sentence similarity to rank possible translations for the untranslated fragment.",The research leading to these results has received funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme FP7/2007-2013/ under REA grant agreement n • 317471.,"Sensible: L2 Translation Assistance by Emulating the Manual Post-Editing Process. This paper describes the Post-Editor Z system submitted to the L2 writing assistant task in SemEval-2014. The aim of task is to build a translation assistance system to translate untranslated sentence fragments. This is not unlike the task of post-editing where human translators improve machine-generated translations. Post-Editor Z emulates the manual process of post-editing by (i) crawling and extracting parallel sentences that contain the untranslated fragments from a Web-based translation memory, (ii) extracting the possible translations of the fragments indexed by the translation memory and (iii) applying simple cosine-based sentence similarity to rank possible translations for the untranslated fragment.",2014
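The record above describes ranking candidate translations of an untranslated fragment by cosine similarity between the input sentence and the translation-memory sentence each candidate came from (step iii). Below is a minimal bag-of-words sketch of that ranking step only; the function names and the (translation, TM sentence) pair structure are assumptions for illustration.

# Sketch of cosine-based candidate ranking with plain bag-of-words counts.
from collections import Counter
import math

def cosine(a, b):
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def rank_candidates(context_sentence, candidates):
    # candidates: list of (translation_fragment, tm_sentence) pairs
    return sorted(candidates, key=lambda c: cosine(context_sentence, c[1]), reverse=True)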
eckart-etal-2012-influence,http://www.lrec-conf.org/proceedings/lrec2012/pdf/476_Paper.pdf,0,,,,,,,"The Influence of Corpus Quality on Statistical Measurements on Language Resources. The quality of statistical measurements on corpora is strongly related to a strict definition of the measuring process and to corpus quality. In the case of multiple result inspections, an exact measurement of previously specified parameters ensures compatibility of the different measurements performed by different researchers on possibly different objects. Hence, the comparison of different values requires an exact description of the measuring process. To illustrate this correlation the influence of different definitions for the concepts word and sentence is shown for several properties of large text corpora. It is also shown that corpus pre-processing strongly influences corpus size and quality as well. As an example near duplicate sentences are identified as source of many statistical irregularities. The problem of strongly varying results especially holds for Web corpora with a large set of pre-processing steps. Here, a well-defined and language independent pre-processing is indispensable for language comparison based on measured values. Conversely, irregularities found in such measurements are often a result of poor pre-processing and therefore such measurements can help to improve corpus quality.",The Influence of Corpus Quality on Statistical Measurements on Language Resources,"The quality of statistical measurements on corpora is strongly related to a strict definition of the measuring process and to corpus quality. In the case of multiple result inspections, an exact measurement of previously specified parameters ensures compatibility of the different measurements performed by different researchers on possibly different objects. Hence, the comparison of different values requires an exact description of the measuring process. To illustrate this correlation the influence of different definitions for the concepts word and sentence is shown for several properties of large text corpora. It is also shown that corpus pre-processing strongly influences corpus size and quality as well. As an example near duplicate sentences are identified as source of many statistical irregularities. The problem of strongly varying results especially holds for Web corpora with a large set of pre-processing steps. Here, a well-defined and language independent pre-processing is indispensable for language comparison based on measured values. Conversely, irregularities found in such measurements are often a result of poor pre-processing and therefore such measurements can help to improve corpus quality.",The Influence of Corpus Quality on Statistical Measurements on Language Resources,"The quality of statistical measurements on corpora is strongly related to a strict definition of the measuring process and to corpus quality. In the case of multiple result inspections, an exact measurement of previously specified parameters ensures compatibility of the different measurements performed by different researchers on possibly different objects. Hence, the comparison of different values requires an exact description of the measuring process. To illustrate this correlation the influence of different definitions for the concepts word and sentence is shown for several properties of large text corpora. It is also shown that corpus pre-processing strongly influences corpus size and quality as well. 
As an example near duplicate sentences are identified as source of many statistical irregularities. The problem of strongly varying results especially holds for Web corpora with a large set of pre-processing steps. Here, a well-defined and language independent pre-processing is indispensable for language comparison based on measured values. Conversely, irregularities found in such measurements are often a result of poor pre-processing and therefore such measurements can help to improve corpus quality.",,"The Influence of Corpus Quality on Statistical Measurements on Language Resources. The quality of statistical measurements on corpora is strongly related to a strict definition of the measuring process and to corpus quality. In the case of multiple result inspections, an exact measurement of previously specified parameters ensures compatibility of the different measurements performed by different researchers on possibly different objects. Hence, the comparison of different values requires an exact description of the measuring process. To illustrate this correlation the influence of different definitions for the concepts word and sentence is shown for several properties of large text corpora. It is also shown that corpus pre-processing strongly influences corpus size and quality as well. As an example near duplicate sentences are identified as source of many statistical irregularities. The problem of strongly varying results especially holds for Web corpora with a large set of pre-processing steps. Here, a well-defined and language independent pre-processing is indispensable for language comparison based on measured values. Conversely, irregularities found in such measurements are often a result of poor pre-processing and therefore such measurements can help to improve corpus quality.",2012
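The corpus-quality record above identifies near-duplicate sentences as a source of statistical irregularities. As one way to make that concrete, the sketch below flags sentence pairs whose word-level Jaccard overlap exceeds a threshold; the criterion and threshold are assumptions, not the paper's exact definition, and the quadratic pairwise loop is only suitable for small samples.

# Sketch (assumed near-duplicate criterion): Jaccard word overlap above a threshold.
def near_duplicates(sentences, threshold=0.8):
    token_sets = [set(s.lower().split()) for s in sentences]
    pairs = []
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            inter = len(token_sets[i] & token_sets[j])
            union = len(token_sets[i] | token_sets[j]) or 1
            if inter / union >= threshold:
                pairs.append((i, j))
    return pairs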
fujiki-etal-2003-automatic,https://aclanthology.org/E03-1061,0,,,,,,,"Automatic Acquisition of Script Knowledge from a Text Collection. In this paper, we describe a method for automatic acquisition of script knowledge from a Japanese text collection. Script knowledge represents a typical sequence of actions that occur in a particular situation. We extracted sequences (pairs) of actions occurring in time order from a Japanese text collection and then chose those that were typical of certain situations by ranking these sequences (pairs) in terms of the frequency of their occurrence. To extract sequences of actions occurring in time order, we constructed a text collection in which texts describing facts relating to a similar situation were clustered together and arranged in time order. We also describe a preliminary experiment with our acquisition system and discuss the results.",Automatic Acquisition of Script Knowledge from a Text Collection,"In this paper, we describe a method for automatic acquisition of script knowledge from a Japanese text collection. Script knowledge represents a typical sequence of actions that occur in a particular situation. We extracted sequences (pairs) of actions occurring in time order from a Japanese text collection and then chose those that were typical of certain situations by ranking these sequences (pairs) in terms of the frequency of their occurrence. To extract sequences of actions occurring in time order, we constructed a text collection in which texts describing facts relating to a similar situation were clustered together and arranged in time order. We also describe a preliminary experiment with our acquisition system and discuss the results.",Automatic Acquisition of Script Knowledge from a Text Collection,"In this paper, we describe a method for automatic acquisition of script knowledge from a Japanese text collection. Script knowledge represents a typical sequence of actions that occur in a particular situation. We extracted sequences (pairs) of actions occurring in time order from a Japanese text collection and then chose those that were typical of certain situations by ranking these sequences (pairs) in terms of the frequency of their occurrence. To extract sequences of actions occurring in time order, we constructed a text collection in which texts describing facts relating to a similar situation were clustered together and arranged in time order. We also describe a preliminary experiment with our acquisition system and discuss the results.",,"Automatic Acquisition of Script Knowledge from a Text Collection. In this paper, we describe a method for automatic acquisition of script knowledge from a Japanese text collection. Script knowledge represents a typical sequence of actions that occur in a particular situation. We extracted sequences (pairs) of actions occurring in time order from a Japanese text collection and then chose those that were typical of certain situations by ranking these sequences (pairs) in terms of the frequency of their occurrence. To extract sequences of actions occurring in time order, we constructed a text collection in which texts describing facts relating to a similar situation were clustered together and arranged in time order. We also describe a preliminary experiment with our acquisition system and discuss the results.",2003
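The script-acquisition record above extracts action pairs occurring in time order and ranks them by frequency. A minimal sketch of that ranking idea follows, assuming each document has already been reduced to a time-ordered list of action strings; the data structures are illustrative.

# Sketch: count ordered action pairs within time-ordered documents, rank by frequency.
from collections import Counter
from itertools import combinations

def ranked_action_pairs(documents):
    counts = Counter()
    for actions in documents:
        # combinations() preserves the list's (temporal) order
        counts.update(combinations(actions, 2))
    return counts.most_common()

docs = [["reserve ticket", "board train", "arrive"],
        ["reserve ticket", "board train", "eat lunch", "arrive"]]
print(ranked_action_pairs(docs)[:3])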
utt-etal-2013-curious,https://aclanthology.org/W13-0604,0,,,,,,,"The Curious Case of Metonymic Verbs: A Distributional Characterization. Logical metonymy combines an event-selecting verb with an entity-denoting noun (e.g., The writer began the novel), triggering a covert event interpretation (e.g., reading, writing). Experimental investigations of logical metonymy must assume a binary distinction between metonymic (i.e. eventselecting) verbs and non-metonymic verbs to establish a control condition. However, this binary distinction (whether a verb is metonymic or not) is mostly made on intuitive grounds, which introduces a potential confounding factor. We describe a corpus-based approach which characterizes verbs in terms of their behavior at the syntax-semantics interface. The model assesses the extent to which transitive verbs prefer event-denoting objects over entity-denoting objects. We then test this ""eventhood"" measure on psycholinguistic datasets, showing that it can distinguish not only metonymic from non-metonymic verbs, but that it can also capture more fine-grained distinctions among different classes of metonymic verbs, putting such distinctions into a new graded perspective.",The Curious Case of Metonymic Verbs: A Distributional Characterization,"Logical metonymy combines an event-selecting verb with an entity-denoting noun (e.g., The writer began the novel), triggering a covert event interpretation (e.g., reading, writing). Experimental investigations of logical metonymy must assume a binary distinction between metonymic (i.e. eventselecting) verbs and non-metonymic verbs to establish a control condition. However, this binary distinction (whether a verb is metonymic or not) is mostly made on intuitive grounds, which introduces a potential confounding factor. We describe a corpus-based approach which characterizes verbs in terms of their behavior at the syntax-semantics interface. The model assesses the extent to which transitive verbs prefer event-denoting objects over entity-denoting objects. We then test this ""eventhood"" measure on psycholinguistic datasets, showing that it can distinguish not only metonymic from non-metonymic verbs, but that it can also capture more fine-grained distinctions among different classes of metonymic verbs, putting such distinctions into a new graded perspective.",The Curious Case of Metonymic Verbs: A Distributional Characterization,"Logical metonymy combines an event-selecting verb with an entity-denoting noun (e.g., The writer began the novel), triggering a covert event interpretation (e.g., reading, writing). Experimental investigations of logical metonymy must assume a binary distinction between metonymic (i.e. eventselecting) verbs and non-metonymic verbs to establish a control condition. However, this binary distinction (whether a verb is metonymic or not) is mostly made on intuitive grounds, which introduces a potential confounding factor. We describe a corpus-based approach which characterizes verbs in terms of their behavior at the syntax-semantics interface. The model assesses the extent to which transitive verbs prefer event-denoting objects over entity-denoting objects. 
We then test this ""eventhood"" measure on psycholinguistic datasets, showing that it can distinguish not only metonymic from non-metonymic verbs, but that it can also capture more fine-grained distinctions among different classes of metonymic verbs, putting such distinctions into a new graded perspective.","Acknowledgements The research for this paper was funded by the German Research Foundation (Deutsche Forschungsgemeinschaft) as part of the SFB 732 ""Incremental specification in context"" / project D6 ""Lexical-semantic factors in event interpretation"" at the University of Stuttgart.","The Curious Case of Metonymic Verbs: A Distributional Characterization. Logical metonymy combines an event-selecting verb with an entity-denoting noun (e.g., The writer began the novel), triggering a covert event interpretation (e.g., reading, writing). Experimental investigations of logical metonymy must assume a binary distinction between metonymic (i.e. eventselecting) verbs and non-metonymic verbs to establish a control condition. However, this binary distinction (whether a verb is metonymic or not) is mostly made on intuitive grounds, which introduces a potential confounding factor. We describe a corpus-based approach which characterizes verbs in terms of their behavior at the syntax-semantics interface. The model assesses the extent to which transitive verbs prefer event-denoting objects over entity-denoting objects. We then test this ""eventhood"" measure on psycholinguistic datasets, showing that it can distinguish not only metonymic from non-metonymic verbs, but that it can also capture more fine-grained distinctions among different classes of metonymic verbs, putting such distinctions into a new graded perspective.",2013
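The record above describes an "eventhood" measure: the extent to which a transitive verb prefers event-denoting direct objects. The sketch below computes a simplified version, the fraction of a verb's object tokens that are event-denoting, assuming corpus-derived (verb, object) counts and a hypothetical `is_event_noun` lookup (e.g. from WordNet supersenses); it is not the paper's exact formulation.

# Sketch of an eventhood-style score from (verb, object) co-occurrence counts.
from collections import defaultdict

def eventhood_scores(verb_object_counts, is_event_noun):
    totals, events = defaultdict(int), defaultdict(int)
    for (verb, obj), count in verb_object_counts.items():
        totals[verb] += count
        if is_event_noun(obj):
            events[verb] += count
    return {v: events[v] / totals[v] for v in totals}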
hsu-glass-2008-n,https://aclanthology.org/D08-1087,0,,,,,,,"N-gram Weighting: Reducing Training Data Mismatch in Cross-Domain Language Model Estimation. In domains with insufficient matched training data, language models are often constructed by interpolating component models trained from partially matched corpora. Since the ngrams from such corpora may not be of equal relevance to the target domain, we propose an n-gram weighting technique to adjust the component n-gram probabilities based on features derived from readily available segmentation and metadata information for each corpus. Using a log-linear combination of such features, the resulting model achieves up to a 1.2% absolute word error rate reduction over a linearly interpolated baseline language model on a lecture transcription task.",{N}-gram Weighting: {R}educing Training Data Mismatch in Cross-Domain Language Model Estimation,"In domains with insufficient matched training data, language models are often constructed by interpolating component models trained from partially matched corpora. Since the ngrams from such corpora may not be of equal relevance to the target domain, we propose an n-gram weighting technique to adjust the component n-gram probabilities based on features derived from readily available segmentation and metadata information for each corpus. Using a log-linear combination of such features, the resulting model achieves up to a 1.2% absolute word error rate reduction over a linearly interpolated baseline language model on a lecture transcription task.",N-gram Weighting: Reducing Training Data Mismatch in Cross-Domain Language Model Estimation,"In domains with insufficient matched training data, language models are often constructed by interpolating component models trained from partially matched corpora. Since the ngrams from such corpora may not be of equal relevance to the target domain, we propose an n-gram weighting technique to adjust the component n-gram probabilities based on features derived from readily available segmentation and metadata information for each corpus. Using a log-linear combination of such features, the resulting model achieves up to a 1.2% absolute word error rate reduction over a linearly interpolated baseline language model on a lecture transcription task.","We would like to thank the anonymous reviewers for their constructive feedback. This research is supported in part by the T-Party Project, a joint research program between MIT and Quanta Computer Inc.","N-gram Weighting: Reducing Training Data Mismatch in Cross-Domain Language Model Estimation. In domains with insufficient matched training data, language models are often constructed by interpolating component models trained from partially matched corpora. Since the ngrams from such corpora may not be of equal relevance to the target domain, we propose an n-gram weighting technique to adjust the component n-gram probabilities based on features derived from readily available segmentation and metadata information for each corpus. Using a log-linear combination of such features, the resulting model achieves up to a 1.2% absolute word error rate reduction over a linearly interpolated baseline language model on a lecture transcription task.",2008
berend-etal-2013-lfg,https://aclanthology.org/W13-3608,0,,,,,,,"LFG-based Features for Noun Number and Article Grammatical Errors. We introduce here a participating system of the CoNLL-2013 Shared Task ""Grammatical Error Correction"". We focused on the noun number and article error categories and constructed a supervised learning system for solving these tasks. We carried out feature engineering and we found that (among others) the f-structure of an LFG parser can provide very informative features for the machine learning system.",{LFG}-based Features for Noun Number and Article Grammatical Errors,"We introduce here a participating system of the CoNLL-2013 Shared Task ""Grammatical Error Correction"". We focused on the noun number and article error categories and constructed a supervised learning system for solving these tasks. We carried out feature engineering and we found that (among others) the f-structure of an LFG parser can provide very informative features for the machine learning system.",LFG-based Features for Noun Number and Article Grammatical Errors,"We introduce here a participating system of the CoNLL-2013 Shared Task ""Grammatical Error Correction"". We focused on the noun number and article error categories and constructed a supervised learning system for solving these tasks. We carried out feature engineering and we found that (among others) the f-structure of an LFG parser can provide very informative features for the machine learning system.",This work was supported in part by the European Union and the European Social Fund through the project FuturICT.hu (grant no.: TÁMOP-4.2.2.C-11/1/KONV-2012-0013).,"LFG-based Features for Noun Number and Article Grammatical Errors. We introduce here a participating system of the CoNLL-2013 Shared Task ""Grammatical Error Correction"". We focused on the noun number and article error categories and constructed a supervised learning system for solving these tasks. We carried out feature engineering and we found that (among others) the f-structure of an LFG parser can provide very informative features for the machine learning system.",2013
guo-etal-2018-soft,https://aclanthology.org/P18-1064,0,,,,,,,"Soft Layer-Specific Multi-Task Summarization with Entailment and Question Generation. An accurate abstractive summary of a document should contain all its salient information and should be logically entailed by the input document. We improve these important aspects of abstractive summarization via multi-task learning with the auxiliary tasks of question generation and entailment generation, where the former teaches the summarization model how to look for salient questioning-worthy details, and the latter teaches the model how to rewrite a summary which is a directed-logical subset of the input document. We also propose novel multitask architectures with high-level (semantic) layer-specific sharing across multiple encoder and decoder layers of the three tasks, as well as soft-sharing mechanisms (and show performance ablations and analysis examples of each contribution). Overall, we achieve statistically significant improvements over the state-ofthe-art on both the CNN/DailyMail and Gigaword datasets, as well as on the DUC-2002 transfer setup. We also present several quantitative and qualitative analysis studies of our model's learned saliency and entailment skills.",Soft Layer-Specific Multi-Task Summarization with Entailment and Question Generation,"An accurate abstractive summary of a document should contain all its salient information and should be logically entailed by the input document. We improve these important aspects of abstractive summarization via multi-task learning with the auxiliary tasks of question generation and entailment generation, where the former teaches the summarization model how to look for salient questioning-worthy details, and the latter teaches the model how to rewrite a summary which is a directed-logical subset of the input document. We also propose novel multitask architectures with high-level (semantic) layer-specific sharing across multiple encoder and decoder layers of the three tasks, as well as soft-sharing mechanisms (and show performance ablations and analysis examples of each contribution). Overall, we achieve statistically significant improvements over the state-ofthe-art on both the CNN/DailyMail and Gigaword datasets, as well as on the DUC-2002 transfer setup. We also present several quantitative and qualitative analysis studies of our model's learned saliency and entailment skills.",Soft Layer-Specific Multi-Task Summarization with Entailment and Question Generation,"An accurate abstractive summary of a document should contain all its salient information and should be logically entailed by the input document. We improve these important aspects of abstractive summarization via multi-task learning with the auxiliary tasks of question generation and entailment generation, where the former teaches the summarization model how to look for salient questioning-worthy details, and the latter teaches the model how to rewrite a summary which is a directed-logical subset of the input document. We also propose novel multitask architectures with high-level (semantic) layer-specific sharing across multiple encoder and decoder layers of the three tasks, as well as soft-sharing mechanisms (and show performance ablations and analysis examples of each contribution). Overall, we achieve statistically significant improvements over the state-ofthe-art on both the CNN/DailyMail and Gigaword datasets, as well as on the DUC-2002 transfer setup. 
We also present several quantitative and qualitative analysis studies of our model's learned saliency and entailment skills.","We thank the reviewers for their helpful comments. This work was supported by DARPA (YFA17-D17AP00022), Google Faculty Research Award, Bloomberg Data Science Research Grant, and NVidia GPU awards. The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency.","Soft Layer-Specific Multi-Task Summarization with Entailment and Question Generation. An accurate abstractive summary of a document should contain all its salient information and should be logically entailed by the input document. We improve these important aspects of abstractive summarization via multi-task learning with the auxiliary tasks of question generation and entailment generation, where the former teaches the summarization model how to look for salient questioning-worthy details, and the latter teaches the model how to rewrite a summary which is a directed-logical subset of the input document. We also propose novel multitask architectures with high-level (semantic) layer-specific sharing across multiple encoder and decoder layers of the three tasks, as well as soft-sharing mechanisms (and show performance ablations and analysis examples of each contribution). Overall, we achieve statistically significant improvements over the state-ofthe-art on both the CNN/DailyMail and Gigaword datasets, as well as on the DUC-2002 transfer setup. We also present several quantitative and qualitative analysis studies of our model's learned saliency and entailment skills.",2018
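The summarization record above relies on soft-sharing across task-specific layers rather than hard parameter tying. The PyTorch sketch below shows one common form of such a penalty, an L2 distance between corresponding parameters of two task layers added to the training loss; the layer sizes and coefficient are placeholders and this is not the paper's architecture.

# Sketch of a soft parameter-sharing penalty between two task-specific layers.
import torch
import torch.nn as nn

summ_layer = nn.Linear(256, 256)   # stand-in for a summarization encoder layer
aux_layer = nn.Linear(256, 256)    # stand-in for a question-generation encoder layer

def soft_sharing_penalty(layer_a, layer_b, coeff=1e-3):
    penalty = 0.0
    for pa, pb in zip(layer_a.parameters(), layer_b.parameters()):
        penalty = penalty + torch.sum((pa - pb) ** 2)
    return coeff * penalty

# total_loss = summ_task_loss + aux_task_loss + soft_sharing_penalty(summ_layer, aux_layer)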
schulz-etal-2019-analysis,https://aclanthology.org/P19-1265,0,,,,,,,"Analysis of Automatic Annotation Suggestions for Hard Discourse-Level Tasks in Expert Domains. Many complex discourse-level tasks can aid domain experts in their work but require costly expert annotations for data creation. To speed up and ease annotations, we investigate the viability of automatically generated annotation suggestions for such tasks. As an example, we choose a task that is particularly hard for both humans and machines: the segmentation and classification of epistemic activities in diagnostic reasoning texts. We create and publish a new dataset covering two domains and carefully analyse the suggested annotations. We find that suggestions have positive effects on annotation speed and performance, while not introducing noteworthy biases. Envisioning suggestion models that improve with newly annotated texts, we contrast methods for continuous model adjustment and suggest the most effective setup for suggestions in future expert tasks.",Analysis of Automatic Annotation Suggestions for Hard Discourse-Level Tasks in Expert Domains,"Many complex discourse-level tasks can aid domain experts in their work but require costly expert annotations for data creation. To speed up and ease annotations, we investigate the viability of automatically generated annotation suggestions for such tasks. As an example, we choose a task that is particularly hard for both humans and machines: the segmentation and classification of epistemic activities in diagnostic reasoning texts. We create and publish a new dataset covering two domains and carefully analyse the suggested annotations. We find that suggestions have positive effects on annotation speed and performance, while not introducing noteworthy biases. Envisioning suggestion models that improve with newly annotated texts, we contrast methods for continuous model adjustment and suggest the most effective setup for suggestions in future expert tasks.",Analysis of Automatic Annotation Suggestions for Hard Discourse-Level Tasks in Expert Domains,"Many complex discourse-level tasks can aid domain experts in their work but require costly expert annotations for data creation. To speed up and ease annotations, we investigate the viability of automatically generated annotation suggestions for such tasks. As an example, we choose a task that is particularly hard for both humans and machines: the segmentation and classification of epistemic activities in diagnostic reasoning texts. We create and publish a new dataset covering two domains and carefully analyse the suggested annotations. We find that suggestions have positive effects on annotation speed and performance, while not introducing noteworthy biases. Envisioning suggestion models that improve with newly annotated texts, we contrast methods for continuous model adjustment and suggest the most effective setup for suggestions in future expert tasks.","This work was supported by the German Federal Ministry of Education and Research (BMBF) under the reference 16DHL1040 (FAMULUS). We thank our annotators M. Achtner, S. Eichler, V. Jung, H. Mißbach, K. Nederstigt, P. Schäffner, R. Schönberger, and H. Werl. We also acknowledge Samaun Ibna Faiz for his contributions to the model adjustment experiments.","Analysis of Automatic Annotation Suggestions for Hard Discourse-Level Tasks in Expert Domains. Many complex discourse-level tasks can aid domain experts in their work but require costly expert annotations for data creation. 
To speed up and ease annotations, we investigate the viability of automatically generated annotation suggestions for such tasks. As an example, we choose a task that is particularly hard for both humans and machines: the segmentation and classification of epistemic activities in diagnostic reasoning texts. We create and publish a new dataset covering two domains and carefully analyse the suggested annotations. We find that suggestions have positive effects on annotation speed and performance, while not introducing noteworthy biases. Envisioning suggestion models that improve with newly annotated texts, we contrast methods for continuous model adjustment and suggest the most effective setup for suggestions in future expert tasks.",2019
de-melo-bansal-2013-good,https://aclanthology.org/Q13-1023,0,,,,,,,"Good, Great, Excellent: Global Inference of Semantic Intensities. Adjectives like good, great, and excellent are similar in meaning, but differ in intensity. Intensity order information is very useful for language learners as well as in several NLP tasks, but is missing in most lexical resources (dictionaries, WordNet, and thesauri). In this paper, we present a primarily unsupervised approach that uses semantics from Web-scale data (e.g., phrases like good but not excellent) to rank words by assigning them positions on a continuous scale. We rely on Mixed Integer Linear Programming to jointly determine the ranks, such that individual decisions benefit from global information. When ranking English adjectives, our global algorithm achieves substantial improvements over previous work on both pairwise and rank correlation metrics (specifically, 70% pairwise accuracy as compared to only 56% by previous work). Moreover, our approach can incorporate external synonymy information (increasing its pairwise accuracy to 78%) and extends easily to new languages. We also make our code and data freely available. 1","Good, Great, Excellent: Global Inference of Semantic Intensities","Adjectives like good, great, and excellent are similar in meaning, but differ in intensity. Intensity order information is very useful for language learners as well as in several NLP tasks, but is missing in most lexical resources (dictionaries, WordNet, and thesauri). In this paper, we present a primarily unsupervised approach that uses semantics from Web-scale data (e.g., phrases like good but not excellent) to rank words by assigning them positions on a continuous scale. We rely on Mixed Integer Linear Programming to jointly determine the ranks, such that individual decisions benefit from global information. When ranking English adjectives, our global algorithm achieves substantial improvements over previous work on both pairwise and rank correlation metrics (specifically, 70% pairwise accuracy as compared to only 56% by previous work). Moreover, our approach can incorporate external synonymy information (increasing its pairwise accuracy to 78%) and extends easily to new languages. We also make our code and data freely available. 1","Good, Great, Excellent: Global Inference of Semantic Intensities","Adjectives like good, great, and excellent are similar in meaning, but differ in intensity. Intensity order information is very useful for language learners as well as in several NLP tasks, but is missing in most lexical resources (dictionaries, WordNet, and thesauri). In this paper, we present a primarily unsupervised approach that uses semantics from Web-scale data (e.g., phrases like good but not excellent) to rank words by assigning them positions on a continuous scale. We rely on Mixed Integer Linear Programming to jointly determine the ranks, such that individual decisions benefit from global information. When ranking English adjectives, our global algorithm achieves substantial improvements over previous work on both pairwise and rank correlation metrics (specifically, 70% pairwise accuracy as compared to only 56% by previous work). Moreover, our approach can incorporate external synonymy information (increasing its pairwise accuracy to 78%) and extends easily to new languages. We also make our code and data freely available. 
1",We would like to thank the editor and the anonymous reviewers for their helpful feedback.,"Good, Great, Excellent: Global Inference of Semantic Intensities. Adjectives like good, great, and excellent are similar in meaning, but differ in intensity. Intensity order information is very useful for language learners as well as in several NLP tasks, but is missing in most lexical resources (dictionaries, WordNet, and thesauri). In this paper, we present a primarily unsupervised approach that uses semantics from Web-scale data (e.g., phrases like good but not excellent) to rank words by assigning them positions on a continuous scale. We rely on Mixed Integer Linear Programming to jointly determine the ranks, such that individual decisions benefit from global information. When ranking English adjectives, our global algorithm achieves substantial improvements over previous work on both pairwise and rank correlation metrics (specifically, 70% pairwise accuracy as compared to only 56% by previous work). Moreover, our approach can incorporate external synonymy information (increasing its pairwise accuracy to 78%) and extends easily to new languages. We also make our code and data freely available. 1",2013
mizuta-2004-analysis,https://aclanthology.org/Y04-1007,0,,,,,,,"An Analysis of Japanese ta / teiru in a Dynamic Semantics Framework and a Comparison with Korean Temporal Markers a nohta / a twuta. In this paper I will shed new light on the semantics of Japanese tense-aspect markers ta and teiru from dynamic semantics and contrastive perspectives. The focus of investigation will be on the essential difference between ta and teiru used in an aspectual sense related to a perfect. I analyze the asymmetry between ta and teiru with empirical data and illustrate it in the DRT framework (Discourse Representation Theory: Kamp and Reyle (1993)). Defending the intuition that ta and teiru make respectively an eventive and a stative description of eventualities, I argue that ta is committed to an assertion of the triggering event whereas teiru is not. In the case of teiru, a triggering event, if there is any, is only entailed. In DRT, ta and teiru introduce respectively an event and a state as a codition into the main DRS. Teiru may introduce a triggering event only as a codition in an embedded DRS. I also illustrate how the proposed analysis of the perfect meaning fits into a more general scheme of ta and teiru. and analyze ta and teiru in a discourse. Furthermore, in DRT terms, I will compare Japanese ta / teiru with Korean perfect-related temporal markers a nohta / a twuta in light of Lee (1996). (1) a. [The water in the kettle comes to the boil while the speaker sees it.] Yoshi, o-yu-ga wai-ta / ?? teiru. All right, Hon-hot-water-Nom (come-to-the-)boil-Past / State-Nonpast o.k. 'All right, the water has (just) come to the boil.' / ?? 'The water is on the boil.' b. [The speaker put the kettle on the gas and left. Some time later he comes back and finds the water boiling.] Ah, o-yu-ga wai-ta / teiru. 'Oh, the water has come to the boil.' / 'Oh, the water is on the boil.' c. [The speaker comes to the kitchen and finds the water boiling. (He doesn't know who put the kettle on the gas.)] 1 I assume that the Japanese tense / aspect is encoded in terms of tei(ru) (stative) / non-tei(ru) (non-stative) forms and ta (past) / non-ta (nonpast) forms. Here I focus on 'non-tei(ru) + ta' and 'tei(ru) + non-ta' combinations. 2 For practical reasons I use a single gloss 'Past' for ta with any meaning.",An Analysis of {J}apanese ta / teiru in a Dynamic Semantics Framework and a Comparison with {K}orean Temporal Markers a nohta / a twuta,"In this paper I will shed new light on the semantics of Japanese tense-aspect markers ta and teiru from dynamic semantics and contrastive perspectives. The focus of investigation will be on the essential difference between ta and teiru used in an aspectual sense related to a perfect. I analyze the asymmetry between ta and teiru with empirical data and illustrate it in the DRT framework (Discourse Representation Theory: Kamp and Reyle (1993)). Defending the intuition that ta and teiru make respectively an eventive and a stative description of eventualities, I argue that ta is committed to an assertion of the triggering event whereas teiru is not. In the case of teiru, a triggering event, if there is any, is only entailed. In DRT, ta and teiru introduce respectively an event and a state as a codition into the main DRS. Teiru may introduce a triggering event only as a codition in an embedded DRS. I also illustrate how the proposed analysis of the perfect meaning fits into a more general scheme of ta and teiru. and analyze ta and teiru in a discourse. 
Furthermore, in DRT terms, I will compare Japanese ta / teiru with Korean perfect-related temporal markers a nohta / a twuta in light of Lee (1996). (1) a. [The water in the kettle comes to the boil while the speaker sees it.] Yoshi, o-yu-ga wai-{ta / ?? teiru}. All right, Hon-hot-water-Nom (come-to-the-)boil-{Past / State-Nonpast} o.k. 'All right, the water has (just) come to the boil.' / ?? 'The water is on the boil.' b. [The speaker put the kettle on the gas and left. Some time later he comes back and finds the water boiling.] Ah, o-yu-ga wai-{ta / teiru}. 'Oh, the water has come to the boil.' / 'Oh, the water is on the boil.' c. [The speaker comes to the kitchen and finds the water boiling. (He doesn't know who put the kettle on the gas.)] 1 I assume that the Japanese tense / aspect is encoded in terms of tei(ru) (stative) / non-tei(ru) (non-stative) forms and ta (past) / non-ta (nonpast) forms. Here I focus on 'non-tei(ru) + ta' and 'tei(ru) + non-ta' combinations. 2 For practical reasons I use a single gloss 'Past' for ta with any meaning.",An Analysis of Japanese ta / teiru in a Dynamic Semantics Framework and a Comparison with Korean Temporal Markers a nohta / a twuta,"In this paper I will shed new light on the semantics of Japanese tense-aspect markers ta and teiru from dynamic semantics and contrastive perspectives. The focus of investigation will be on the essential difference between ta and teiru used in an aspectual sense related to a perfect. I analyze the asymmetry between ta and teiru with empirical data and illustrate it in the DRT framework (Discourse Representation Theory: Kamp and Reyle (1993)). Defending the intuition that ta and teiru make respectively an eventive and a stative description of eventualities, I argue that ta is committed to an assertion of the triggering event whereas teiru is not. In the case of teiru, a triggering event, if there is any, is only entailed. In DRT, ta and teiru introduce respectively an event and a state as a codition into the main DRS. Teiru may introduce a triggering event only as a codition in an embedded DRS. I also illustrate how the proposed analysis of the perfect meaning fits into a more general scheme of ta and teiru. and analyze ta and teiru in a discourse. Furthermore, in DRT terms, I will compare Japanese ta / teiru with Korean perfect-related temporal markers a nohta / a twuta in light of Lee (1996). (1) a. [The water in the kettle comes to the boil while the speaker sees it.] Yoshi, o-yu-ga wai-ta / ?? teiru. All right, Hon-hot-water-Nom (come-to-the-)boil-Past / State-Nonpast o.k. 'All right, the water has (just) come to the boil.' / ?? 'The water is on the boil.' b. [The speaker put the kettle on the gas and left. Some time later he comes back and finds the water boiling.] Ah, o-yu-ga wai-ta / teiru. 'Oh, the water has come to the boil.' / 'Oh, the water is on the boil.' c. [The speaker comes to the kitchen and finds the water boiling. (He doesn't know who put the kettle on the gas.)] 1 I assume that the Japanese tense / aspect is encoded in terms of tei(ru) (stative) / non-tei(ru) (non-stative) forms and ta (past) / non-ta (nonpast) forms. Here I focus on 'non-tei(ru) + ta' and 'tei(ru) + non-ta' combinations. 2 For practical reasons I use a single gloss 'Past' for ta with any meaning.",I am grateful to the anonymous reviewer of my abstract and to Norihiro Ogata (Osaka University) for helpful comments. 
Shortcomings are of course solely mine.,"An Analysis of Japanese ta / teiru in a Dynamic Semantics Framework and a Comparison with Korean Temporal Markers a nohta / a twuta. In this paper I will shed new light on the semantics of Japanese tense-aspect markers ta and teiru from dynamic semantics and contrastive perspectives. The focus of investigation will be on the essential difference between ta and teiru used in an aspectual sense related to a perfect. I analyze the asymmetry between ta and teiru with empirical data and illustrate it in the DRT framework (Discourse Representation Theory: Kamp and Reyle (1993)). Defending the intuition that ta and teiru make respectively an eventive and a stative description of eventualities, I argue that ta is committed to an assertion of the triggering event whereas teiru is not. In the case of teiru, a triggering event, if there is any, is only entailed. In DRT, ta and teiru introduce respectively an event and a state as a codition into the main DRS. Teiru may introduce a triggering event only as a codition in an embedded DRS. I also illustrate how the proposed analysis of the perfect meaning fits into a more general scheme of ta and teiru. and analyze ta and teiru in a discourse. Furthermore, in DRT terms, I will compare Japanese ta / teiru with Korean perfect-related temporal markers a nohta / a twuta in light of Lee (1996). (1) a. [The water in the kettle comes to the boil while the speaker sees it.] Yoshi, o-yu-ga wai-ta / ?? teiru. All right, Hon-hot-water-Nom (come-to-the-)boil-Past / State-Nonpast o.k. 'All right, the water has (just) come to the boil.' / ?? 'The water is on the boil.' b. [The speaker put the kettle on the gas and left. Some time later he comes back and finds the water boiling.] Ah, o-yu-ga wai-ta / teiru. 'Oh, the water has come to the boil.' / 'Oh, the water is on the boil.' c. [The speaker comes to the kitchen and finds the water boiling. (He doesn't know who put the kettle on the gas.)] 1 I assume that the Japanese tense / aspect is encoded in terms of tei(ru) (stative) / non-tei(ru) (non-stative) forms and ta (past) / non-ta (nonpast) forms. Here I focus on 'non-tei(ru) + ta' and 'tei(ru) + non-ta' combinations. 2 For practical reasons I use a single gloss 'Past' for ta with any meaning.",2004
obermeier-1985-temporal,https://aclanthology.org/P85-1002,1,,,,health,,,"Temporal Inferences in Medical Texts. The objectives of this paper are twofold, whereby the computer program is meant to be a particular implementation of a general natural Language
[NL] processing system [NLPS] which could be used for different domains.",Temporal Inferences in Medical Texts,"The objectives of this paper are twofold, whereby the computer program is meant to be a particular implementation of a general natural Language
[NL] processing system [NLPS] which could be used for different domains.",Temporal Inferences in Medical Texts,"The objectives of this paper are twofold, whereby the computer program is meant to be a particular implementation of a general natural Language
[NL] processing system [NLPS] which could be used for different domains.",,"Temporal Inferences in Medical Texts. The objectives of this paper are twofold, whereby the computer program is meant to be a particular implementation of a general natural Language
[NL] processing system [NLPS] which could be used for different domains.",1985
matsuzaki-etal-2013-complexity,https://aclanthology.org/I13-1009,0,,,,,,,"The Complexity of Math Problems -- Linguistic, or Computational?. We present a simple, logic-based architecture for solving math problems written in natural language. A problem is firstly translated to a logical form. It is then rewritten into the input language of a solver algorithm and finally the solver finds an answer. Such a clean decomposition of the task however does not come for free. First, despite its formality, math text still exploits the flexibility of natural language to convey its complex logical content succinctly. We propose a mechanism to fill the gap between the simple form and the complex meaning while adhering to the principle of compositionality. Second, since the input to the solver is derived by strictly following the text, it may require far more computation than those derived by a human, and may go beyond the capability of the current solvers. Empirical study on Japanese university entrance examination problems showed positive results indicating the viability of the approach, which opens up a way towards a true end-to-end problem solving system through the synthesis of the advances in linguistics, NLP, and computer math.","The Complexity of Math Problems {--} Linguistic, or Computational?","We present a simple, logic-based architecture for solving math problems written in natural language. A problem is firstly translated to a logical form. It is then rewritten into the input language of a solver algorithm and finally the solver finds an answer. Such a clean decomposition of the task however does not come for free. First, despite its formality, math text still exploits the flexibility of natural language to convey its complex logical content succinctly. We propose a mechanism to fill the gap between the simple form and the complex meaning while adhering to the principle of compositionality. Second, since the input to the solver is derived by strictly following the text, it may require far more computation than those derived by a human, and may go beyond the capability of the current solvers. Empirical study on Japanese university entrance examination problems showed positive results indicating the viability of the approach, which opens up a way towards a true end-to-end problem solving system through the synthesis of the advances in linguistics, NLP, and computer math.","The Complexity of Math Problems -- Linguistic, or Computational?","We present a simple, logic-based architecture for solving math problems written in natural language. A problem is firstly translated to a logical form. It is then rewritten into the input language of a solver algorithm and finally the solver finds an answer. Such a clean decomposition of the task however does not come for free. First, despite its formality, math text still exploits the flexibility of natural language to convey its complex logical content succinctly. We propose a mechanism to fill the gap between the simple form and the complex meaning while adhering to the principle of compositionality. Second, since the input to the solver is derived by strictly following the text, it may require far more computation than those derived by a human, and may go beyond the capability of the current solvers. 
Empirical study on Japanese university entrance examination problems showed positive results indicating the viability of the approach, which opens up a way towards a true end-to-end problem solving system through the synthesis of the advances in linguistics, NLP, and computer math.",,"The Complexity of Math Problems -- Linguistic, or Computational?. We present a simple, logic-based architecture for solving math problems written in natural language. A problem is firstly translated to a logical form. It is then rewritten into the input language of a solver algorithm and finally the solver finds an answer. Such a clean decomposition of the task however does not come for free. First, despite its formality, math text still exploits the flexibility of natural language to convey its complex logical content succinctly. We propose a mechanism to fill the gap between the simple form and the complex meaning while adhering to the principle of compositionality. Second, since the input to the solver is derived by strictly following the text, it may require far more computation than those derived by a human, and may go beyond the capability of the current solvers. Empirical study on Japanese university entrance examination problems showed positive results indicating the viability of the approach, which opens up a way towards a true end-to-end problem solving system through the synthesis of the advances in linguistics, NLP, and computer math.",2013
wang-etal-2021-predicting,https://aclanthology.org/2021.rocling-1.18,1,,,,health,,,"Predicting elders' cognitive flexibility from their language use. Increasing research efforts are directed towards the relationship between cognitive decline and language use. However, few of them had focused specifically on how language use is related to cognitive flexibility. This study recruited 51 elders aged 53-74 to discuss their daily activities in focus groups. The transcribed discourse was analyzed using the Chinese version of LIWC (Lin et al., 2020; Pennebaker et al., 2015) for cognitive complexity and dynamic language as well as content words related to elders' daily activities. The interruption behavior during conversation was also analyzed. The results showed that, after controlling for education, gender and age, cognitive flexibility performance was accompanied by the increasing adoption of dynamic language, insight words and family words. These findings serve as the basis for the prediction of elders' cognitive flexibility through their daily language use.",Predicting elders{'} cognitive flexibility from their language use,"Increasing research efforts are directed towards the relationship between cognitive decline and language use. However, few of them had focused specifically on how language use is related to cognitive flexibility. This study recruited 51 elders aged 53-74 to discuss their daily activities in focus groups. The transcribed discourse was analyzed using the Chinese version of LIWC (Lin et al., 2020; Pennebaker et al., 2015) for cognitive complexity and dynamic language as well as content words related to elders' daily activities. The interruption behavior during conversation was also analyzed. The results showed that, after controlling for education, gender and age, cognitive flexibility performance was accompanied by the increasing adoption of dynamic language, insight words and family words. These findings serve as the basis for the prediction of elders' cognitive flexibility through their daily language use.",Predicting elders' cognitive flexibility from their language use,"Increasing research efforts are directed towards the relationship between cognitive decline and language use. However, few of them had focused specifically on how language use is related to cognitive flexibility. This study recruited 51 elders aged 53-74 to discuss their daily activities in focus groups. The transcribed discourse was analyzed using the Chinese version of LIWC (Lin et al., 2020; Pennebaker et al., 2015) for cognitive complexity and dynamic language as well as content words related to elders' daily activities. The interruption behavior during conversation was also analyzed. The results showed that, after controlling for education, gender and age, cognitive flexibility performance was accompanied by the increasing adoption of dynamic language, insight words and family words. These findings serve as the basis for the prediction of elders' cognitive flexibility through their daily language use.",,"Predicting elders' cognitive flexibility from their language use. Increasing research efforts are directed towards the relationship between cognitive decline and language use. However, few of them had focused specifically on how language use is related to cognitive flexibility. This study recruited 51 elders aged 53-74 to discuss their daily activities in focus groups. 
The transcribed discourse was analyzed using the Chinese version of LIWC (Lin et al., 2020; Pennebaker et al., 2015) for cognitive complexity and dynamic language as well as content words related to elders' daily activities. The interruption behavior during conversation was also analyzed. The results showed that, after controlling for education, gender and age, cognitive flexibility performance was accompanied by the increasing adoption of dynamic language, insight words and family words. These findings serve as the basis for the prediction of elders' cognitive flexibility through their daily language use.",2021
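The record above relates LIWC-style language features to elders' cognitive flexibility after controlling for education, gender, and age. A minimal sketch of that kind of analysis with statsmodels is shown below; the file name and column names are assumptions about the data layout, not the study's actual variables.

# Sketch: regress a flexibility score on language features with demographic controls.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("elder_discourse_features.csv")  # hypothetical feature table
model = smf.ols(
    "flexibility ~ dynamic_language + insight_words + family_words"
    " + education + C(gender) + age",
    data=df,
).fit()
print(model.summary())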
yao-etal-2012-probabilistic,https://aclanthology.org/W12-3022,0,,,,,,,"Probabilistic Databases of Universal Schema. In data integration we transform information from a source into a target schema. A general problem in this task is loss of fidelity and coverage: the source expresses more knowledge than can fit into the target schema, or knowledge that is hard to fit into any schema at all. This problem is taken to an extreme in information extraction (IE) where the source is natural language. To address this issue, one can either automatically learn a latent schema emergent in text (a brittle and ill-defined task), or manually extend schemas. We propose instead to store data in a probabilistic database of universal schema. This schema is simply the union of all source schemas, and the probabilistic database learns how to predict the cells of each source relation in this union. For example, the database could store Freebase relations and relations that correspond to natural language surface patterns. The database would learn to predict what freebase relations hold true based on what surface patterns appear, and vice versa. We describe an analogy between such databases and collaborative filtering models, and use it to implement our paradigm with probabilistic PCA, a scalable and effective collaborative filtering method.",Probabilistic Databases of Universal Schema,"In data integration we transform information from a source into a target schema. A general problem in this task is loss of fidelity and coverage: the source expresses more knowledge than can fit into the target schema, or knowledge that is hard to fit into any schema at all. This problem is taken to an extreme in information extraction (IE) where the source is natural language. To address this issue, one can either automatically learn a latent schema emergent in text (a brittle and ill-defined task), or manually extend schemas. We propose instead to store data in a probabilistic database of universal schema. This schema is simply the union of all source schemas, and the probabilistic database learns how to predict the cells of each source relation in this union. For example, the database could store Freebase relations and relations that correspond to natural language surface patterns. The database would learn to predict what freebase relations hold true based on what surface patterns appear, and vice versa. We describe an analogy between such databases and collaborative filtering models, and use it to implement our paradigm with probabilistic PCA, a scalable and effective collaborative filtering method.",Probabilistic Databases of Universal Schema,"In data integration we transform information from a source into a target schema. A general problem in this task is loss of fidelity and coverage: the source expresses more knowledge than can fit into the target schema, or knowledge that is hard to fit into any schema at all. This problem is taken to an extreme in information extraction (IE) where the source is natural language. To address this issue, one can either automatically learn a latent schema emergent in text (a brittle and ill-defined task), or manually extend schemas. We propose instead to store data in a probabilistic database of universal schema. This schema is simply the union of all source schemas, and the probabilistic database learns how to predict the cells of each source relation in this union. For example, the database could store Freebase relations and relations that correspond to natural language surface patterns. 
The database would learn to predict what freebase relations hold true based on what surface patterns appear, and vice versa. We describe an analogy between such databases and collaborative filtering models, and use it to implement our paradigm with probabilistic PCA, a scalable and effective collaborative filtering method.","This work was supported in part by the Center for Intelligent Information Retrieval and the University of Massachusetts and in part by UPenn NSF medium IIS-0803847. We gratefully acknowledge the support of Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0181. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of DARPA, AFRL, or the US government.","Probabilistic Databases of Universal Schema. In data integration we transform information from a source into a target schema. A general problem in this task is loss of fidelity and coverage: the source expresses more knowledge than can fit into the target schema, or knowledge that is hard to fit into any schema at all. This problem is taken to an extreme in information extraction (IE) where the source is natural language. To address this issue, one can either automatically learn a latent schema emergent in text (a brittle and ill-defined task), or manually extend schemas. We propose instead to store data in a probabilistic database of universal schema. This schema is simply the union of all source schemas, and the probabilistic database learns how to predict the cells of each source relation in this union. For example, the database could store Freebase relations and relations that correspond to natural language surface patterns. The database would learn to predict what freebase relations hold true based on what surface patterns appear, and vice versa. We describe an analogy between such databases and collaborative filtering models, and use it to implement our paradigm with probabilistic PCA, a scalable and effective collaborative filtering method.",2012
wu-etal-2006-computational,https://aclanthology.org/P06-4011,1,,,,industry_innovation_infrastructure,,,"Computational Analysis of Move Structures in Academic Abstracts. This paper introduces a method for computational analysis of move structures in abstracts of research articles. In our approach, sentences in a given abstract are analyzed and labeled with a specific move in light of various rhetorical functions. The method involves automatically gathering a large number of abstracts from the Web and building a language model of abstract moves. We also present a prototype concordancer, CARE, which exploits the move-tagged abstracts for digital learning. This system provides a promising approach to Web-based computer-assisted academic writing.",Computational Analysis of Move Structures in Academic Abstracts,"This paper introduces a method for computational analysis of move structures in abstracts of research articles. In our approach, sentences in a given abstract are analyzed and labeled with a specific move in light of various rhetorical functions. The method involves automatically gathering a large number of abstracts from the Web and building a language model of abstract moves. We also present a prototype concordancer, CARE, which exploits the move-tagged abstracts for digital learning. This system provides a promising approach to Web-based computer-assisted academic writing.",Computational Analysis of Move Structures in Academic Abstracts,"This paper introduces a method for computational analysis of move structures in abstracts of research articles. In our approach, sentences in a given abstract are analyzed and labeled with a specific move in light of various rhetorical functions. The method involves automatically gathering a large number of abstracts from the Web and building a language model of abstract moves. We also present a prototype concordancer, CARE, which exploits the move-tagged abstracts for digital learning. This system provides a promising approach to Web-based computer-assisted academic writing.",,"Computational Analysis of Move Structures in Academic Abstracts. This paper introduces a method for computational analysis of move structures in abstracts of research articles. In our approach, sentences in a given abstract are analyzed and labeled with a specific move in light of various rhetorical functions. The method involves automatically gathering a large number of abstracts from the Web and building a language model of abstract moves. We also present a prototype concordancer, CARE, which exploits the move-tagged abstracts for digital learning. This system provides a promising approach to Web-based computer-assisted academic writing.",2006
clemenceau-roche-1993-enhancing,https://aclanthology.org/E93-1059,0,,,,,,,"Enhancing a large scale dictionary with a two-level system. We present in this paper a morphological analyzer and generator for French that contains a dictionary of 700,000 inflected words called DELAF, and a full two-level system aimed at the analysis of new derivatives. Hence, this tool recognizes and generates both correct inflected forms of French simple words (DELAF lookup procedure) and new derivatives and their inflected forms (two-level analysis). Moreover, a clear distinction is made between dictionary look-up processes and new words analyses in order to clearly identify the analyses that involve heuristic rules. We tested this tool upon a French corpus of 1,300,000 words with significant results (Clemenceau D. 1992). With regards to efficiency, since this tool is compiled into a unique transducer, it provides a very fast look-up procedure (1,100 words per second) at a low memory cost (around 1.3 Mb in RAM).",Enhancing a large scale dictionary with a two-level system,"We present in this paper a morphological analyzer and generator for French that contains a dictionary of 700,000 inflected words called DELAF, and a full two-level system aimed at the analysis of new derivatives. Hence, this tool recognizes and generates both correct inflected forms of French simple words (DELAF lookup procedure) and new derivatives and their inflected forms (two-level analysis). Moreover, a clear distinction is made between dictionary look-up processes and new words analyses in order to clearly identify the analyses that involve heuristic rules. We tested this tool upon a French corpus of 1,300,000 words with significant results (Clemenceau D. 1992). With regards to efficiency, since this tool is compiled into a unique transducer, it provides a very fast look-up procedure (1,100 words per second) at a low memory cost (around 1.3 Mb in RAM).",Enhancing a large scale dictionary with a two-level system,"We present in this paper a morphological analyzer and generator for French that contains a dictionary of 700,000 inflected words called DELAF, and a full two-level system aimed at the analysis of new derivatives. Hence, this tool recognizes and generates both correct inflected forms of French simple words (DELAF lookup procedure) and new derivatives and their inflected forms (two-level analysis). Moreover, a clear distinction is made between dictionary look-up processes and new words analyses in order to clearly identify the analyses that involve heuristic rules. We tested this tool upon a French corpus of 1,300,000 words with significant results (Clemenceau D. 1992). With regards to efficiency, since this tool is compiled into a unique transducer, it provides a very fast look-up procedure (1,100 words per second) at a low memory cost (around 1.3 Mb in RAM).",,"Enhancing a large scale dictionary with a two-level system. We present in this paper a morphological analyzer and generator for French that contains a dictionary of 700,000 inflected words called DELAF, and a full two-level system aimed at the analysis of new derivatives. Hence, this tool recognizes and generates both correct inflected forms of French simple words (DELAF lookup procedure) and new derivatives and their inflected forms (two-level analysis). Moreover, a clear distinction is made between dictionary look-up processes and new words analyses in order to clearly identify the analyses that involve heuristic rules. 
We tested this tool upon a French corpus of 1,300,000 words with significant results (Clemenceau D. 1992). With regards to efficiency, since this tool is compiled into a unique transducer, it provides a very fast look-up procedure (1,100 words per second) at a low memory cost (around 1.3 Mb in RAM).",1993
theeramunkong-etal-1997-exploiting,https://aclanthology.org/W97-1511,0,,,,,,,"Exploiting Contextual Information in Hypothesis Selection for Grammar Refinement. In this paper, we propose a new framework of grammar development and some techniques for exploiting contextual information in a process of grammar refinement. The proposed framework involves two processes, partial grammar acquisition and grammar refinement. In the former process, a rough grammar is constructed from a bracketed corpus. The grammar is later refined by the latter process where a combination of rule-based and corpus-based approaches is applied. Since there may be more than one rule introduced as alternative hypotheses to recover the analysis of sentences which cannot be parsed by the current grammar, we propose a method to give priority to these hypotheses based on local contextual information. By experiments, our hypothesis selection is evaluated and its effectiveness is shown.",Exploiting Contextual Information in Hypothesis Selection for Grammar Refinement,"In this paper, we propose a new framework of grammar development and some techniques for exploiting contextual information in a process of grammar refinement. The proposed framework involves two processes, partial grammar acquisition and grammar refinement. In the former process, a rough grammar is constructed from a bracketed corpus. The grammar is later refined by the latter process where a combination of rule-based and corpus-based approaches is applied. Since there may be more than one rule introduced as alternative hypotheses to recover the analysis of sentences which cannot be parsed by the current grammar, we propose a method to give priority to these hypotheses based on local contextual information. By experiments, our hypothesis selection is evaluated and its effectiveness is shown.",Exploiting Contextual Information in Hypothesis Selection for Grammar Refinement,"In this paper, we propose a new framework of grammar development and some techniques for exploiting contextual information in a process of grammar refinement. The proposed framework involves two processes, partial grammar acquisition and grammar refinement. In the former process, a rough grammar is constructed from a bracketed corpus. The grammar is later refined by the latter process where a combination of rule-based and corpus-based approaches is applied. Since there may be more than one rule introduced as alternative hypotheses to recover the analysis of sentences which cannot be parsed by the current grammar, we propose a method to give priority to these hypotheses based on local contextual information. By experiments, our hypothesis selection is evaluated and its effectiveness is shown.","We would like to thank the EDR organization for permitting us to access the EDR corpus. Special thanks go to Dr. Ratana Rujiravanit, who helped me to keenly proofread a draft of this paper. We also wish to thank the members in Okumura laboratory at JAIST for their useful comments and their technical support.","Exploiting Contextual Information in Hypothesis Selection for Grammar Refinement. In this paper, we propose a new framework of grammar development and some techniques for exploiting contextual information in a process of grammar refinement. The proposed framework involves two processes, partial grammar acquisition and grammar refinement. In the former process, a rough grammar is constructed from a bracketed corpus. 
The grammar is later refined by the latter process where a combination of rule-based and corpus-based approaches is applied. Since there may be more than one rule introduced as alternative hypotheses to recover the analysis of sentences which cannot be parsed by the current grammar, we propose a method to give priority to these hypotheses based on local contextual information. By experiments, our hypothesis selection is evaluated and its effectiveness is shown.",1997
rasooli-tetreault-2013-joint,https://aclanthology.org/D13-1013,0,,,,,,,"Joint Parsing and Disfluency Detection in Linear Time. We introduce a novel method to jointly parse and detect disfluencies in spoken utterances. Our model can use arbitrary features for parsing sentences and adapt itself with out-of-domain data. We show that our method, based on transition-based parsing, performs at a high level of accuracy for both the parsing and disfluency detection tasks. Additionally, our method is the fastest for the joint task, running in linear time.",Joint Parsing and Disfluency Detection in Linear Time,"We introduce a novel method to jointly parse and detect disfluencies in spoken utterances. Our model can use arbitrary features for parsing sentences and adapt itself with out-of-domain data. We show that our method, based on transition-based parsing, performs at a high level of accuracy for both the parsing and disfluency detection tasks. Additionally, our method is the fastest for the joint task, running in linear time.",Joint Parsing and Disfluency Detection in Linear Time,"We introduce a novel method to jointly parse and detect disfluencies in spoken utterances. Our model can use arbitrary features for parsing sentences and adapt itself with out-of-domain data. We show that our method, based on transition-based parsing, performs at a high level of accuracy for both the parsing and disfluency detection tasks. Additionally, our method is the fastest for the joint task, running in linear time.","We would like to thank anonymous reviewers for their helpful comments on the paper. Additionally, we were aided by researchers by their prompt responses to our many questions: Mark Core, Luciana Ferrer, Kallirroi Georgila, Mark Johnson, Jeremy Kahn, Yang Liu, Xian Qian, Kenji Sagae, and Wen Wang. Finally, this work was conducted during the first author's summer internship at the Nuance Sunnyvale Research Lab. We would like to thank the researchers in the group for the helpful discussions and assistance on different aspects of the problem. In particular, we would like to thank Chris Brew, Ron Kaplan, Deepak Ramachandran and Adwait Ratnaparkhi.","Joint Parsing and Disfluency Detection in Linear Time. We introduce a novel method to jointly parse and detect disfluencies in spoken utterances. Our model can use arbitrary features for parsing sentences and adapt itself with out-of-domain data. We show that our method, based on transition-based parsing, performs at a high level of accuracy for both the parsing and disfluency detection tasks. Additionally, our method is the fastest for the joint task, running in linear time.",2013
delmonte-2016-venseseval,https://aclanthology.org/S16-1123,0,,,,,,,"VENSESEVAL at Semeval-2016 Task 2 iSTS - with a full-fledged rule-based approach. In our paper we present our rule-based system for semantic processing. In particular we show examples and solutions that may challenge our approach. We then discuss problems and shortcomings of Task 2-iSTS. We comment on the existence of a tension between the inherent need, on the one side, to make the task as ""semantically feasible"" as possible, and, on the other, the fact that the detailed presentation and some notes in the guidelines refer to inferential processes, paraphrases and the use of commonsense knowledge of the world for the interpretation to work. We then present results and some conclusions.",{VENSESEVAL} at {S}emeval-2016 Task 2 i{STS} - with a full-fledged rule-based approach,"In our paper we present our rule-based system for semantic processing. In particular we show examples and solutions that may challenge our approach. We then discuss problems and shortcomings of Task 2-iSTS. We comment on the existence of a tension between the inherent need, on the one side, to make the task as ""semantically feasible"" as possible, and, on the other, the fact that the detailed presentation and some notes in the guidelines refer to inferential processes, paraphrases and the use of commonsense knowledge of the world for the interpretation to work. We then present results and some conclusions.",VENSESEVAL at Semeval-2016 Task 2 iSTS - with a full-fledged rule-based approach,"In our paper we present our rule-based system for semantic processing. In particular we show examples and solutions that may challenge our approach. We then discuss problems and shortcomings of Task 2-iSTS. We comment on the existence of a tension between the inherent need, on the one side, to make the task as ""semantically feasible"" as possible, and, on the other, the fact that the detailed presentation and some notes in the guidelines refer to inferential processes, paraphrases and the use of commonsense knowledge of the world for the interpretation to work. We then present results and some conclusions.",,"VENSESEVAL at Semeval-2016 Task 2 iSTS - with a full-fledged rule-based approach. In our paper we present our rule-based system for semantic processing. In particular we show examples and solutions that may challenge our approach. We then discuss problems and shortcomings of Task 2-iSTS. We comment on the existence of a tension between the inherent need, on the one side, to make the task as ""semantically feasible"" as possible, and, on the other, the fact that the detailed presentation and some notes in the guidelines refer to inferential processes, paraphrases and the use of commonsense knowledge of the world for the interpretation to work. We then present results and some conclusions.",2016
shwartz-etal-2020-unsupervised,https://aclanthology.org/2020.emnlp-main.373,0,,,,,,,"Unsupervised Commonsense Question Answering with Self-Talk. Natural language understanding involves reading between the lines with implicit background knowledge. Current systems either rely on pretrained language models as the sole implicit source of world knowledge, or resort to external knowledge bases (KBs) to incorporate additional relevant knowledge. We propose an unsupervised framework based on self-talk as a novel alternative to multiple-choice commonsense tasks. Inspired by inquiry-based discovery learning (Bruner, 1961), our approach inquires language models with a number of information seeking questions such as ""what is the definition of ..."" to discover additional background knowledge. Empirical results demonstrate that the self-talk procedure substantially improves the performance of zero-shot language model baselines on four out of six commonsense benchmarks, and competes with models that obtain knowledge from external KBs. While our approach improves performance on several benchmarks, the self-talk induced knowledge even when leading to correct answers is not always seen as helpful by human judges, raising interesting questions about the inner-workings of pre-trained language models for commonsense reasoning.",Unsupervised Commonsense Question Answering with Self-Talk,"Natural language understanding involves reading between the lines with implicit background knowledge. Current systems either rely on pretrained language models as the sole implicit source of world knowledge, or resort to external knowledge bases (KBs) to incorporate additional relevant knowledge. We propose an unsupervised framework based on self-talk as a novel alternative to multiple-choice commonsense tasks. Inspired by inquiry-based discovery learning (Bruner, 1961), our approach inquires language models with a number of information seeking questions such as ""what is the definition of ..."" to discover additional background knowledge. Empirical results demonstrate that the self-talk procedure substantially improves the performance of zero-shot language model baselines on four out of six commonsense benchmarks, and competes with models that obtain knowledge from external KBs. While our approach improves performance on several benchmarks, the self-talk induced knowledge even when leading to correct answers is not always seen as helpful by human judges, raising interesting questions about the inner-workings of pre-trained language models for commonsense reasoning.",Unsupervised Commonsense Question Answering with Self-Talk,"Natural language understanding involves reading between the lines with implicit background knowledge. Current systems either rely on pretrained language models as the sole implicit source of world knowledge, or resort to external knowledge bases (KBs) to incorporate additional relevant knowledge. We propose an unsupervised framework based on self-talk as a novel alternative to multiple-choice commonsense tasks. Inspired by inquiry-based discovery learning (Bruner, 1961), our approach inquires language models with a number of information seeking questions such as ""what is the definition of ..."" to discover additional background knowledge. Empirical results demonstrate that the self-talk procedure substantially improves the performance of zero-shot language model baselines on four out of six commonsense benchmarks, and competes with models that obtain knowledge from external KBs. 
While our approach improves performance on several benchmarks, the self-talk induced knowledge even when leading to correct answers is not always seen as helpful by human judges, raising interesting questions about the inner-workings of pre-trained language models for commonsense reasoning.","This research was supported in part by NSF (IIS-1524371, IIS-1714566), DARPA under the CwC program through the ARO (W911NF-15-1-0543), and DARPA under the MCS program through NIWC Pacific (N66001-19-2-4031).","Unsupervised Commonsense Question Answering with Self-Talk. Natural language understanding involves reading between the lines with implicit background knowledge. Current systems either rely on pretrained language models as the sole implicit source of world knowledge, or resort to external knowledge bases (KBs) to incorporate additional relevant knowledge. We propose an unsupervised framework based on self-talk as a novel alternative to multiple-choice commonsense tasks. Inspired by inquiry-based discovery learning (Bruner, 1961), our approach inquires language models with a number of information seeking questions such as ""what is the definition of ..."" to discover additional background knowledge. Empirical results demonstrate that the self-talk procedure substantially improves the performance of zero-shot language model baselines on four out of six commonsense benchmarks, and competes with models that obtain knowledge from external KBs. While our approach improves performance on several benchmarks, the self-talk induced knowledge even when leading to correct answers is not always seen as helpful by human judges, raising interesting questions about the inner-workings of pre-trained language models for commonsense reasoning.",2020
berwick-1984-strong,https://aclanthology.org/J84-3005,0,,,,,,,"Strong Generative Capacity, Weak Generative Capacity, and Modern Linguistic Theories. What makes a language a natural language? A longstanding tradition in generative grammar holds that a language is natural just in case it is learnable under a constellation of auxiliary assumptions about input evidence available to children. Yet another approach seeks some key mathematical property that distinguishes the natural languages from all possible symbol-systems. With some exceptions -for example, Chomsky's demonstration that a complete characterization of our grammatical knowledge lies beyond the power of finite state languages -the mathematical approach has not provided clear-cut results. For example, for a variety of reasons we cannot say that the predicate is context-free characterizes all and only the natural languages.
Still another use of mathematical analysis in linguistics has been to diagnose a proposed grammatical formalism as too powerful (allowing too many grammars or languages) rather than as too weak. Such a diagnosis was supposed by some to follow from Peters and Ritchie's demonstration that the theory of transformational grammar as described in Chomsky's Aspects of the Theory of Syntax could specify grammars to generate any recursively enumerable set. For some this demonstration marked a watershed in the formal analysis transformational grammar. One general reaction (not prompted by the Peters and Ritchie result alone) was to turn to other theories of grammar designed to explicitly avoid the problems of a theory that could specify an arbitrary Turing machine computation. The proposals for generalized phrase structure grammar (GPSG) and lexical-functional grammar (LFG) have explicitly emphasized this point. GPSG aims for grammars that generate context-free languages (though there is some recent wavering on this point; see Pullum 1984) ; LFG, for languages that are at worst context-sensitive. Whatever the merits of the arguments for this restriction in terms of weak generative capacity -and they are far from obvious, as discussed at length in Berwick and Weinberg (1983) -one point remains: the switch was prompted by criticism of the nearly two-decades old Aspects theory.","Strong Generative Capacity, Weak Generative Capacity, and Modern Linguistic Theories","What makes a language a natural language? A longstanding tradition in generative grammar holds that a language is natural just in case it is learnable under a constellation of auxiliary assumptions about input evidence available to children. Yet another approach seeks some key mathematical property that distinguishes the natural languages from all possible symbol-systems. With some exceptions -for example, Chomsky's demonstration that a complete characterization of our grammatical knowledge lies beyond the power of finite state languages -the mathematical approach has not provided clear-cut results. For example, for a variety of reasons we cannot say that the predicate is context-free characterizes all and only the natural languages.
Still another use of mathematical analysis in linguistics has been to diagnose a proposed grammatical formalism as too powerful (allowing too many grammars or languages) rather than as too weak. Such a diagnosis was supposed by some to follow from Peters and Ritchie's demonstration that the theory of transformational grammar as described in Chomsky's Aspects of the Theory of Syntax could specify grammars to generate any recursively enumerable set. For some this demonstration marked a watershed in the formal analysis transformational grammar. One general reaction (not prompted by the Peters and Ritchie result alone) was to turn to other theories of grammar designed to explicitly avoid the problems of a theory that could specify an arbitrary Turing machine computation. The proposals for generalized phrase structure grammar (GPSG) and lexical-functional grammar (LFG) have explicitly emphasized this point. GPSG aims for grammars that generate context-free languages (though there is some recent wavering on this point; see Pullum 1984) ; LFG, for languages that are at worst context-sensitive. Whatever the merits of the arguments for this restriction in terms of weak generative capacity -and they are far from obvious, as discussed at length in Berwick and Weinberg (1983) -one point remains: the switch was prompted by criticism of the nearly two-decades old Aspects theory.","Strong Generative Capacity, Weak Generative Capacity, and Modern Linguistic Theories","What makes a language a natural language? A longstanding tradition in generative grammar holds that a language is natural just in case it is learnable under a constellation of auxiliary assumptions about input evidence available to children. Yet another approach seeks some key mathematical property that distinguishes the natural languages from all possible symbol-systems. With some exceptions -for example, Chomsky's demonstration that a complete characterization of our grammatical knowledge lies beyond the power of finite state languages -the mathematical approach has not provided clear-cut results. For example, for a variety of reasons we cannot say that the predicate is context-free characterizes all and only the natural languages.
Still another use of mathematical analysis in linguistics has been to diagnose a proposed grammatical formalism as too powerful (allowing too many grammars or languages) rather than as too weak. Such a diagnosis was supposed by some to follow from Peters and Ritchie's demonstration that the theory of transformational grammar as described in Chomsky's Aspects of the Theory of Syntax could specify grammars to generate any recursively enumerable set. For some this demonstration marked a watershed in the formal analysis transformational grammar. One general reaction (not prompted by the Peters and Ritchie result alone) was to turn to other theories of grammar designed to explicitly avoid the problems of a theory that could specify an arbitrary Turing machine computation. The proposals for generalized phrase structure grammar (GPSG) and lexical-functional grammar (LFG) have explicitly emphasized this point. GPSG aims for grammars that generate context-free languages (though there is some recent wavering on this point; see Pullum 1984) ; LFG, for languages that are at worst context-sensitive. Whatever the merits of the arguments for this restriction in terms of weak generative capacity -and they are far from obvious, as discussed at length in Berwick and Weinberg (1983) -one point remains: the switch was prompted by criticism of the nearly two-decades old Aspects theory.",Much of this research has been sparked by collaboration with Amy S. Weinberg. Thanks to her for many discussions on GB theory. Portions of this work have appeared in The Grammatical Basis of Linguistic Performance. The research has been carried out at the MIT Artificial Intelligence Laboratory. Support for the Laboratory's work comes in part from the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-80-C-0505. ,"Strong Generative Capacity, Weak Generative Capacity, and Modern Linguistic Theories. What makes a language a natural language? A longstanding tradition in generative grammar holds that a language is natural just in case it is learnable under a constellation of auxiliary assumptions about input evidence available to children. Yet another approach seeks some key mathematical property that distinguishes the natural languages from all possible symbol-systems. With some exceptions -for example, Chomsky's demonstration that a complete characterization of our grammatical knowledge lies beyond the power of finite state languages -the mathematical approach has not provided clear-cut results. For example, for a variety of reasons we cannot say that the predicate is context-free characterizes all and only the natural languages.
Still another use of mathematical analysis in linguistics has been to diagnose a proposed grammatical formalism as too powerful (allowing too many grammars or languages) rather than as too weak. Such a diagnosis was supposed by some to follow from Peters and Ritchie's demonstration that the theory of transformational grammar as described in Chomsky's Aspects of the Theory of Syntax could specify grammars to generate any recursively enumerable set. For some this demonstration marked a watershed in the formal analysis transformational grammar. One general reaction (not prompted by the Peters and Ritchie result alone) was to turn to other theories of grammar designed to explicitly avoid the problems of a theory that could specify an arbitrary Turing machine computation. The proposals for generalized phrase structure grammar (GPSG) and lexical-functional grammar (LFG) have explicitly emphasized this point. GPSG aims for grammars that generate context-free languages (though there is some recent wavering on this point; see Pullum 1984) ; LFG, for languages that are at worst context-sensitive. Whatever the merits of the arguments for this restriction in terms of weak generative capacity -and they are far from obvious, as discussed at length in Berwick and Weinberg (1983) -one point remains: the switch was prompted by criticism of the nearly two-decades old Aspects theory.",1984
zhang-feng-2021-universal,https://aclanthology.org/2021.emnlp-main.581,0,,,,,,,"Universal Simultaneous Machine Translation with Mixture-of-Experts Wait-k Policy. Simultaneous machine translation (SiMT) generates translation before reading the entire source sentence and hence it has to trade off between translation quality and latency. To fulfill the requirements of different translation quality and latency in practical applications, the previous methods usually need to train multiple SiMT models for different latency levels, resulting in large computational costs. In this paper, we propose a universal SiMT model with Mixture-of-Experts Wait-k Policy to achieve the best translation quality under arbitrary latency with only one trained model. Specifically, our method employs multi-head attention to accomplish the mixture of experts where each head is treated as a wait-k expert with its own waiting words number, and given a test latency and source inputs, the weights of the experts are accordingly adjusted to produce the best translation. Experiments on three datasets show that our method outperforms all the strong baselines under different latency, including the state-of-the-art adaptive policy.",Universal Simultaneous Machine Translation with Mixture-of-Experts Wait-k Policy,"Simultaneous machine translation (SiMT) generates translation before reading the entire source sentence and hence it has to trade off between translation quality and latency. To fulfill the requirements of different translation quality and latency in practical applications, the previous methods usually need to train multiple SiMT models for different latency levels, resulting in large computational costs. In this paper, we propose a universal SiMT model with Mixture-of-Experts Wait-k Policy to achieve the best translation quality under arbitrary latency with only one trained model. Specifically, our method employs multi-head attention to accomplish the mixture of experts where each head is treated as a wait-k expert with its own waiting words number, and given a test latency and source inputs, the weights of the experts are accordingly adjusted to produce the best translation. Experiments on three datasets show that our method outperforms all the strong baselines under different latency, including the state-of-the-art adaptive policy.",Universal Simultaneous Machine Translation with Mixture-of-Experts Wait-k Policy,"Simultaneous machine translation (SiMT) generates translation before reading the entire source sentence and hence it has to trade off between translation quality and latency. To fulfill the requirements of different translation quality and latency in practical applications, the previous methods usually need to train multiple SiMT models for different latency levels, resulting in large computational costs. In this paper, we propose a universal SiMT model with Mixture-of-Experts Wait-k Policy to achieve the best translation quality under arbitrary latency with only one trained model. Specifically, our method employs multi-head attention to accomplish the mixture of experts where each head is treated as a wait-k expert with its own waiting words number, and given a test latency and source inputs, the weights of the experts are accordingly adjusted to produce the best translation. 
Experiments on three datasets show that our method outperforms all the strong baselines under different latency, including the state-of-the-art adaptive policy.",We thank all the anonymous reviewers for their insightful and valuable comments. This work was supported by National Key R&D Program of China (NO. 2017YFE0192900).,"Universal Simultaneous Machine Translation with Mixture-of-Experts Wait-k Policy. Simultaneous machine translation (SiMT) generates translation before reading the entire source sentence and hence it has to trade off between translation quality and latency. To fulfill the requirements of different translation quality and latency in practical applications, the previous methods usually need to train multiple SiMT models for different latency levels, resulting in large computational costs. In this paper, we propose a universal SiMT model with Mixture-of-Experts Wait-k Policy to achieve the best translation quality under arbitrary latency with only one trained model. Specifically, our method employs multi-head attention to accomplish the mixture of experts where each head is treated as a wait-k expert with its own waiting words number, and given a test latency and source inputs, the weights of the experts are accordingly adjusted to produce the best translation. Experiments on three datasets show that our method outperforms all the strong baselines under different latency, including the state-of-the-art adaptive policy.",2021
angelov-2009-incremental,https://aclanthology.org/E09-1009,0,,,,,,,Incremental Parsing with Parallel Multiple Context-Free Grammars. Parallel Multiple Context-Free Grammar (PMCFG) is an extension of context-free grammar for which the recognition problem is still solvable in polynomial time. We describe a new parsing algorithm that has the advantage to be incremental and to support PMCFG directly rather than the weaker MCFG formalism. The algorithm is also top-down which allows it to be used for grammar based word prediction.,Incremental Parsing with Parallel Multiple Context-Free Grammars,Parallel Multiple Context-Free Grammar (PMCFG) is an extension of context-free grammar for which the recognition problem is still solvable in polynomial time. We describe a new parsing algorithm that has the advantage to be incremental and to support PMCFG directly rather than the weaker MCFG formalism. The algorithm is also top-down which allows it to be used for grammar based word prediction.,Incremental Parsing with Parallel Multiple Context-Free Grammars,Parallel Multiple Context-Free Grammar (PMCFG) is an extension of context-free grammar for which the recognition problem is still solvable in polynomial time. We describe a new parsing algorithm that has the advantage to be incremental and to support PMCFG directly rather than the weaker MCFG formalism. The algorithm is also top-down which allows it to be used for grammar based word prediction.,,Incremental Parsing with Parallel Multiple Context-Free Grammars. Parallel Multiple Context-Free Grammar (PMCFG) is an extension of context-free grammar for which the recognition problem is still solvable in polynomial time. We describe a new parsing algorithm that has the advantage to be incremental and to support PMCFG directly rather than the weaker MCFG formalism. The algorithm is also top-down which allows it to be used for grammar based word prediction.,2009
anechitei-ignat-2013-multilingual,https://aclanthology.org/W13-3110,0,,,,,,,"Multilingual summarization system based on analyzing the discourse structure at MultiLing 2013. This paper describes the architecture of UAIC 1 's Summarization system participating at MultiLing-2013. The architecture includes language independent text processing modules, but also modules that are adapted for one language or another. In our experiments, the languages under consideration are Bulgarian, German, Greek, English, and Romanian. Our method exploits the cohesion and coherence properties of texts to build discourse structures. The output of the parsing process is used to extract general summaries.",Multilingual summarization system based on analyzing the discourse structure at {M}ulti{L}ing 2013,"This paper describes the architecture of UAIC 1 's Summarization system participating at MultiLing-2013. The architecture includes language independent text processing modules, but also modules that are adapted for one language or another. In our experiments, the languages under consideration are Bulgarian, German, Greek, English, and Romanian. Our method exploits the cohesion and coherence properties of texts to build discourse structures. The output of the parsing process is used to extract general summaries.",Multilingual summarization system based on analyzing the discourse structure at MultiLing 2013,"This paper describes the architecture of UAIC 1 's Summarization system participating at MultiLing-2013. The architecture includes language independent text processing modules, but also modules that are adapted for one language or another. In our experiments, the languages under consideration are Bulgarian, German, Greek, English, and Romanian. Our method exploits the cohesion and coherence properties of texts to build discourse structures. The output of the parsing process is used to extract general summaries.",,"Multilingual summarization system based on analyzing the discourse structure at MultiLing 2013. This paper describes the architecture of UAIC 1 's Summarization system participating at MultiLing-2013. The architecture includes language independent text processing modules, but also modules that are adapted for one language or another. In our experiments, the languages under consideration are Bulgarian, German, Greek, English, and Romanian. Our method exploits the cohesion and coherence properties of texts to build discourse structures. The output of the parsing process is used to extract general summaries.",2013
boussidan-ploux-2011-using,https://aclanthology.org/W11-0134,0,,,,,,,"Using Topic Salience and Connotational Drifts to Detect Candidates to Semantic Change. Semantic change has mostly been studied by historical linguists and typically at the scale of centuries. Here we study semantic change at a finer-grained level, the decade, making use of recent newspaper corpora. We detect semantic change candidates by observing context shifts which can be triggered by topic salience or may be independent from it. To discriminate these phenomena with accuracy, we combine variation filters with a series of indices which enable building a coherent and flexible semantic change detection model. The indices include widely adaptable tools such as frequency counts, co-occurrence patterns and networks, ranks, as well as model-specific items such as a variability and cohesion measure and graphical representations. The research uses ACOM, a co-occurrence based geometrical model, which is an extension of the Semantic Atlas. Compared to other models of semantic representation, it allows for extremely detailed analysis and provides insight as to how connotational drift processes unfold.",Using Topic Salience and Connotational Drifts to Detect Candidates to Semantic Change,"Semantic change has mostly been studied by historical linguists and typically at the scale of centuries. Here we study semantic change at a finer-grained level, the decade, making use of recent newspaper corpora. We detect semantic change candidates by observing context shifts which can be triggered by topic salience or may be independent from it. To discriminate these phenomena with accuracy, we combine variation filters with a series of indices which enable building a coherent and flexible semantic change detection model. The indices include widely adaptable tools such as frequency counts, co-occurrence patterns and networks, ranks, as well as model-specific items such as a variability and cohesion measure and graphical representations. The research uses ACOM, a co-occurrence based geometrical model, which is an extension of the Semantic Atlas. Compared to other models of semantic representation, it allows for extremely detailed analysis and provides insight as to how connotational drift processes unfold.",Using Topic Salience and Connotational Drifts to Detect Candidates to Semantic Change,"Semantic change has mostly been studied by historical linguists and typically at the scale of centuries. Here we study semantic change at a finer-grained level, the decade, making use of recent newspaper corpora. We detect semantic change candidates by observing context shifts which can be triggered by topic salience or may be independent from it. To discriminate these phenomena with accuracy, we combine variation filters with a series of indices which enable building a coherent and flexible semantic change detection model. The indices include widely adaptable tools such as frequency counts, co-occurrence patterns and networks, ranks, as well as model-specific items such as a variability and cohesion measure and graphical representations. The research uses ACOM, a co-occurrence based geometrical model, which is an extension of the Semantic Atlas. Compared to other models of semantic representation, it allows for extremely detailed analysis and provides insight as to how connotational drift processes unfold.","This research is supported by the Région Rhône-Alpes, via the Cible Project 2009. 
Many thanks to Sylvain Lupone, previously engineer at the L2c2 for the tools he developed in this research's framework.","Using Topic Salience and Connotational Drifts to Detect Candidates to Semantic Change. Semantic change has mostly been studied by historical linguists and typically at the scale of centuries. Here we study semantic change at a finer-grained level, the decade, making use of recent newspaper corpora. We detect semantic change candidates by observing context shifts which can be triggered by topic salience or may be independent from it. To discriminate these phenomena with accuracy, we combine variation filters with a series of indices which enable building a coherent and flexible semantic change detection model. The indices include widely adaptable tools such as frequency counts, co-occurrence patterns and networks, ranks, as well as model-specific items such as a variability and cohesion measure and graphical representations. The research uses ACOM, a co-occurrence based geometrical model, which is an extension of the Semantic Atlas. Compared to other models of semantic representation, it allows for extremely detailed analysis and provides insight as to how connotational drift processes unfold.",2011
cooke-1999-interactive,https://aclanthology.org/W99-0805,0,,,,,,,"Interactive Auditory Demonstrations. The subject matter of speech and hearing is packed full of phenomena and processes which lend themselves to or require auditory demonstration. In the past, this has been achieved through passive media such as tape or CD (e.g. Houtsma et al., 1987; Bregman & Ahad, 1995). The advent of languages such as MATLAB which supports sound handling, modern interface elements and powerful signal processing routines, coupled with the availability of fast processors and ubiquitous soundcards allows for a more interactive style of demonstration. A significant effort is now underway in the speech and hearing community to exploit these favourable conditions (see the MATISSE proceedings (1999), for instance).",Interactive Auditory Demonstrations,"The subject matter of speech and hearing is packed full of phenomena and processes which lend themselves to or require auditory demonstration. In the past, this has been achieved through passive media such as tape or CD (e.g. Houtsma et al., 1987; Bregman & Ahad, 1995). The advent of languages such as MATLAB which supports sound handling, modern interface elements and powerful signal processing routines, coupled with the availability of fast processors and ubiquitous soundcards allows for a more interactive style of demonstration. A significant effort is now underway in the speech and hearing community to exploit these favourable conditions (see the MATISSE proceedings (1999), for instance).",Interactive Auditory Demonstrations,"The subject matter of speech and hearing is packed full of phenomena and processes which lend themselves to or require auditory demonstration. In the past, this has been achieved through passive media such as tape or CD (e.g. Houtsma et al., 1987; Bregman & Ahad, 1995). The advent of languages such as MATLAB which supports sound handling, modern interface elements and powerful signal processing routines, coupled with the availability of fast processors and ubiquitous soundcards allows for a more interactive style of demonstration. A significant effort is now underway in the speech and hearing community to exploit these favourable conditions (see the MATISSE proceedings (1999), for instance).","Demonstrations described here were programmed by Guy Brown, Martin Cooke and Stuart Wrigley (Sheffield, UK) and Dan Ellis (ICSI, Berkeley, USA). Stuart Cunningham and Ljubomir Josifovski helped with the testing. Funding for some of the development work was provided by the ELSNET LE Training Showcase, 98/02.","Interactive Auditory Demonstrations. The subject matter of speech and hearing is packed full of phenomena and processes which lend themselves to or require auditory demonstration. In the past, this has been achieved through passive media such as tape or CD (e.g. Houtsma et al., 1987; Bregman & Ahad, 1995). The advent of languages such as MATLAB which supports sound handling, modern interface elements and powerful signal processing routines, coupled with the availability of fast processors and ubiquitous soundcards allows for a more interactive style of demonstration. A significant effort is now underway in the speech and hearing community to exploit these favourable conditions (see the MATISSE proceedings (1999), for instance).",1999
vasconcelos-etal-2020-aspect,https://aclanthology.org/2020.lrec-1.183,0,,,,,,,"Aspect Flow Representation and Audio Inspired Analysis for Texts. For better understanding how people write texts, it is fundamental to examine how a particular linguistic aspect (e.g., subjectivity, sentiment, argumentation) is exploited in a text. Analysing such an aspect of a text as a whole (i.e., through a summarised single feature) can lead to significant information loss. In this paper, we propose a novel method of representing and analysing texts that consider how an aspect behaves throughout the text. We represent the texts by aspect flows for capturing all the aspect behaviour. Then, inspired by the resemblance between these flows format and a sound waveform, we fragment them into frames and calculate an adaptation of audio analysis features, named here Audio-Like Features, as a way of analysing the texts. The results of the conducted classification tasks reveal that our approach can surpass methods based on summarised features. We also show that a detailed examination of the Audio-Like Features can lead to a more profound knowledge about the represented texts.",Aspect Flow Representation and Audio Inspired Analysis for Texts,"For better understanding how people write texts, it is fundamental to examine how a particular linguistic aspect (e.g., subjectivity, sentiment, argumentation) is exploited in a text. Analysing such an aspect of a text as a whole (i.e., through a summarised single feature) can lead to significant information loss. In this paper, we propose a novel method of representing and analysing texts that consider how an aspect behaves throughout the text. We represent the texts by aspect flows for capturing all the aspect behaviour. Then, inspired by the resemblance between these flows format and a sound waveform, we fragment them into frames and calculate an adaptation of audio analysis features, named here Audio-Like Features, as a way of analysing the texts. The results of the conducted classification tasks reveal that our approach can surpass methods based on summarised features. We also show that a detailed examination of the Audio-Like Features can lead to a more profound knowledge about the represented texts.",Aspect Flow Representation and Audio Inspired Analysis for Texts,"For better understanding how people write texts, it is fundamental to examine how a particular linguistic aspect (e.g., subjectivity, sentiment, argumentation) is exploited in a text. Analysing such an aspect of a text as a whole (i.e., through a summarised single feature) can lead to significant information loss. In this paper, we propose a novel method of representing and analysing texts that consider how an aspect behaves throughout the text. We represent the texts by aspect flows for capturing all the aspect behaviour. Then, inspired by the resemblance between these flows format and a sound waveform, we fragment them into frames and calculate an adaptation of audio analysis features, named here Audio-Like Features, as a way of analysing the texts. The results of the conducted classification tasks reveal that our approach can surpass methods based on summarised features. We also show that a detailed examination of the Audio-Like Features can lead to a more profound knowledge about the represented texts.",,"Aspect Flow Representation and Audio Inspired Analysis for Texts. 
For better understanding how people write texts, it is fundamental to examine how a particular linguistic aspect (e.g., subjectivity, sentiment, argumentation) is exploited in a text. Analysing such an aspect of a text as a whole (i.e., through a summarised single feature) can lead to significant information loss. In this paper, we propose a novel method of representing and analysing texts that consider how an aspect behaves throughout the text. We represent the texts by aspect flows for capturing all the aspect behaviour. Then, inspired by the resemblance between these flows format and a sound waveform, we fragment them into frames and calculate an adaptation of audio analysis features, named here Audio-Like Features, as a way of analysing the texts. The results of the conducted classification tasks reveal that our approach can surpass methods based on summarised features. We also show that a detailed examination of the Audio-Like Features can lead to a more profound knowledge about the represented texts.",2020
sultan-etal-2020-importance,https://aclanthology.org/2020.acl-main.500,0,,,,,,,"On the Importance of Diversity in Question Generation for QA. Automatic question generation (QG) has shown promise as a source of synthetic training data for question answering (QA). In this paper we ask: Is textual diversity in QG beneficial for downstream QA? Using top-p nucleus sampling to derive samples from a transformer-based question generator, we show that diversity-promoting QG indeed provides better QA training than likelihood maximization approaches such as beam search. We also show that standard QG evaluation metrics such as BLEU, ROUGE and METEOR are inversely correlated with diversity, and propose a diversity-aware intrinsic measure of overall QG quality that correlates well with extrinsic evaluation on QA.",On the Importance of Diversity in Question Generation for {QA},"Automatic question generation (QG) has shown promise as a source of synthetic training data for question answering (QA). In this paper we ask: Is textual diversity in QG beneficial for downstream QA? Using top-p nucleus sampling to derive samples from a transformer-based question generator, we show that diversity-promoting QG indeed provides better QA training than likelihood maximization approaches such as beam search. We also show that standard QG evaluation metrics such as BLEU, ROUGE and METEOR are inversely correlated with diversity, and propose a diversity-aware intrinsic measure of overall QG quality that correlates well with extrinsic evaluation on QA.",On the Importance of Diversity in Question Generation for QA,"Automatic question generation (QG) has shown promise as a source of synthetic training data for question answering (QA). In this paper we ask: Is textual diversity in QG beneficial for downstream QA? Using top-p nucleus sampling to derive samples from a transformer-based question generator, we show that diversity-promoting QG indeed provides better QA training than likelihood maximization approaches such as beam search. We also show that standard QG evaluation metrics such as BLEU, ROUGE and METEOR are inversely correlated with diversity, and propose a diversity-aware intrinsic measure of overall QG quality that correlates well with extrinsic evaluation on QA.",We thank the anonymous reviewers for their valuable feedback.,"On the Importance of Diversity in Question Generation for QA. Automatic question generation (QG) has shown promise as a source of synthetic training data for question answering (QA). In this paper we ask: Is textual diversity in QG beneficial for downstream QA? Using top-p nucleus sampling to derive samples from a transformer-based question generator, we show that diversity-promoting QG indeed provides better QA training than likelihood maximization approaches such as beam search. We also show that standard QG evaluation metrics such as BLEU, ROUGE and METEOR are inversely correlated with diversity, and propose a diversity-aware intrinsic measure of overall QG quality that correlates well with extrinsic evaluation on QA.",2020
li-church-2005-using,https://aclanthology.org/H05-1089,0,,,,,,,"Using Sketches to Estimate Associations. We should not have to look at the entire corpus (e.g., the Web) to know if two words are associated or not. A powerful sampling technique called Sketches was originally introduced to remove duplicate Web pages. We generalize sketches to estimate contingency tables and associations, using a maximum likelihood estimator to find the most likely contingency table given the sample, the margins (document frequencies) and the size of the collection. Not unsurprisingly, computational work and statistical accuracy (variance or errors) depend on sampling rate, as will be shown both theoretically and empirically. Sampling methods become more and more important with larger and larger collections. At Web scale, sampling rates as low as 10^-4 may suffice.",Using Sketches to Estimate Associations,"We should not have to look at the entire corpus (e.g., the Web) to know if two words are associated or not. A powerful sampling technique called Sketches was originally introduced to remove duplicate Web pages. We generalize sketches to estimate contingency tables and associations, using a maximum likelihood estimator to find the most likely contingency table given the sample, the margins (document frequencies) and the size of the collection. Not unsurprisingly, computational work and statistical accuracy (variance or errors) depend on sampling rate, as will be shown both theoretically and empirically. Sampling methods become more and more important with larger and larger collections. At Web scale, sampling rates as low as 10^-4 may suffice.",Using Sketches to Estimate Associations,"We should not have to look at the entire corpus (e.g., the Web) to know if two words are associated or not. A powerful sampling technique called Sketches was originally introduced to remove duplicate Web pages. We generalize sketches to estimate contingency tables and associations, using a maximum likelihood estimator to find the most likely contingency table given the sample, the margins (document frequencies) and the size of the collection. Not unsurprisingly, computational work and statistical accuracy (variance or errors) depend on sampling rate, as will be shown both theoretically and empirically. Sampling methods become more and more important with larger and larger collections. At Web scale, sampling rates as low as 10^-4 may suffice.",,"Using Sketches to Estimate Associations. We should not have to look at the entire corpus (e.g., the Web) to know if two words are associated or not. A powerful sampling technique called Sketches was originally introduced to remove duplicate Web pages. We generalize sketches to estimate contingency tables and associations, using a maximum likelihood estimator to find the most likely contingency table given the sample, the margins (document frequencies) and the size of the collection. Not unsurprisingly, computational work and statistical accuracy (variance or errors) depend on sampling rate, as will be shown both theoretically and empirically. Sampling methods become more and more important with larger and larger collections. At Web scale, sampling rates as low as 10^-4 may suffice.",2005
bernardi-etal-2006-multilingual,http://www.lrec-conf.org/proceedings/lrec2006/pdf/433_pdf.pdf,0,,,,,,,"Multilingual Search in Libraries. The case-study of the Free University of Bozen-Bolzano. This paper presents an ongoing project aiming at enhancing the OPAC (Online Public Access Catalog) search system of the Library of the Free University of Bozen-Bolzano with multilingual access. The Multilingual search system (MUSIL), we have developed, integrates advanced linguistic technologies in a user friendly interface and bridges the gap between the world of free text search and the world of conceptual librarian search. In this paper we present the architecture of the system, its interface and preliminary evaluations of the precision of the search results.",Multilingual Search in Libraries. The case-study of the Free {U}niversity of {B}ozen-{B}olzano,"This paper presents an ongoing project aiming at enhancing the OPAC (Online Public Access Catalog) search system of the Library of the Free University of Bozen-Bolzano with multilingual access. The Multilingual search system (MUSIL), we have developed, integrates advanced linguistic technologies in a user friendly interface and bridges the gap between the world of free text search and the world of conceptual librarian search. In this paper we present the architecture of the system, its interface and preliminary evaluations of the precision of the search results.",Multilingual Search in Libraries. The case-study of the Free University of Bozen-Bolzano,"This paper presents an ongoing project aiming at enhancing the OPAC (Online Public Access Catalog) search system of the Library of the Free University of Bozen-Bolzano with multilingual access. The Multilingual search system (MUSIL), we have developed, integrates advanced linguistic technologies in a user friendly interface and bridges the gap between the world of free text search and the world of conceptual librarian search. In this paper we present the architecture of the system, its interface and preliminary evaluations of the precision of the search results.",,"Multilingual Search in Libraries. The case-study of the Free University of Bozen-Bolzano. This paper presents an ongoing project aiming at enhancing the OPAC (Online Public Access Catalog) search system of the Library of the Free University of Bozen-Bolzano with multilingual access. The Multilingual search system (MUSIL), we have developed, integrates advanced linguistic technologies in a user friendly interface and bridges the gap between the world of free text search and the world of conceptual librarian search. In this paper we present the architecture of the system, its interface and preliminary evaluations of the precision of the search results.",2006
ball-1994-practical,https://aclanthology.org/1994.tc-1.10,0,,,,,,,"Practical Choices for Hardware and Software. The choices we have to make when selecting hardware and software appear very difficult for most of us. The unrelenting rate of change and the torrent of information we are bombarded with is so confusing and intimidating that it can make navigating a traffic-jam in the Parisian rush hour look easy. One of the problems is that when it comes to computers we are all surrounded by well meaning semi-experts-the bloke in the pub, your husband, your children and even minicab drivers. The clue is that these so called experts are usually more interested in demonstrating their skills and playing with the technology than your need to earn a living-they are what I would call computer freaks. So you are being patronised by the computer experts and your desk is a metre deep in computer magazines. In short, you have more raw data than any one person can assimilate in a lifetime. How do you make sense of it all? How do you take sensible decisions? What you need to do is start from some basic principles which follow from answering two simple questions-""Why do you need a computer?"" and ""Are you going to waste your money?"":",Practical Choices for Hardware and Software,"The choices we have to make when selecting hardware and software appear very difficult for most of us. The unrelenting rate of change and the torrent of information we are bombarded with is so confusing and intimidating that it can make navigating a traffic-jam in the Parisian rush hour look easy. One of the problems is that when it comes to computers we are all surrounded by well meaning semi-experts-the bloke in the pub, your husband, your children and even minicab drivers. The clue is that these so called experts are usually more interested in demonstrating their skills and playing with the technology than your need to earn a living-they are what I would call computer freaks. So you are being patronised by the computer experts and your desk is a metre deep in computer magazines. In short, you have more raw data than any one person can assimilate in a lifetime. How do you make sense of it all? How do you take sensible decisions? What you need to do is start from some basic principles which follow from answering two simple questions-""Why do you need a computer?"" and ""Are you going to waste your money?"":",Practical Choices for Hardware and Software,"The choices we have to make when selecting hardware and software appear very difficult for most of us. The unrelenting rate of change and the torrent of information we are bombarded with is so confusing and intimidating that it can make navigating a traffic-jam in the Parisian rush hour look easy. One of the problems is that when it comes to computers we are all surrounded by well meaning semi-experts-the bloke in the pub, your husband, your children and even minicab drivers. The clue is that these so called experts are usually more interested in demonstrating their skills and playing with the technology than your need to earn a living-they are what I would call computer freaks. So you are being patronised by the computer experts and your desk is a metre deep in computer magazines. In short, you have more raw data than any one person can assimilate in a lifetime. How do you make sense of it all? How do you take sensible decisions? 
What you need to do is start from some basic principles which follow from answering two simple questions-""Why do you need a computer?"" and ""Are you going to waste your money?"":",,"Practical Choices for Hardware and Software. The choices we have to make when selecting hardware and software appear very difficult for most of us. The unrelenting rate of change and the torrent of information we are bombarded with is so confusing and intimidating that it can make navigating a traffic-jam in the Parisian rush hour look easy. One of the problems is that when it comes to computers we are all surrounded by well meaning semi-experts-the bloke in the pub, your husband, your children and even minicab drivers. The clue is that these so called experts are usually more interested in demonstrating their skills and playing with the technology than your need to earn a living-they are what I would call computer freaks. So you are being patronised by the computer experts and your desk is a metre deep in computer magazines. In short, you have more raw data than any one person can assimilate in a lifetime. How do you make sense of it all? How do you take sensible decisions? What you need to do is start from some basic principles which follow from answering two simple questions-""Why do you need a computer?"" and ""Are you going to waste your money?"":",1994
dang-etal-1998-investigating-regular,https://aclanthology.org/P98-1046,0,,,,,,,"Investigating Regular Sense Extensions based on Intersective Levin Classes. In this paper we specifically address questions of polysemy with respect to verbs, and how regular extensions of meaning can be achieved through the adjunction of particular syntactic phrases. We see verb classes as the key to making generalizations about regular extensions of meaning. Current approaches to English classification, Levin classes and WordNet, have limitations in their applicability that impede their utility as general classification schemes. We present a refinement of Levin classes, intersective sets, which are a more fine-grained classification and have more coherent sets of syntactic frames and associated semantic components. We have preliminary indications that the membership of our intersective sets will be more compatible with WordNet than the original Levin classes. We also have begun to examine related classes in Portuguese, and find that these verbs demonstrate similarly coherent syntactic and semantic properties.",Investigating Regular Sense Extensions based on Intersective {L}evin Classes,"In this paper we specifically address questions of polysemy with respect to verbs, and how regular extensions of meaning can be achieved through the adjunction of particular syntactic phrases. We see verb classes as the key to making generalizations about regular extensions of meaning. Current approaches to English classification, Levin classes and WordNet, have limitations in their applicability that impede their utility as general classification schemes. We present a refinement of Levin classes, intersective sets, which are a more fine-grained classification and have more coherent sets of syntactic frames and associated semantic components. We have preliminary indications that the membership of our intersective sets will be more compatible with WordNet than the original Levin classes. We also have begun to examine related classes in Portuguese, and find that these verbs demonstrate similarly coherent syntactic and semantic properties.",Investigating Regular Sense Extensions based on Intersective Levin Classes,"In this paper we specifically address questions of polysemy with respect to verbs, and how regular extensions of meaning can be achieved through the adjunction of particular syntactic phrases. We see verb classes as the key to making generalizations about regular extensions of meaning. Current approaches to English classification, Levin classes and WordNet, have limitations in their applicability that impede their utility as general classification schemes. We present a refinement of Levin classes, intersective sets, which are a more fine-grained classification and have more coherent sets of syntactic frames and associated semantic components. We have preliminary indications that the membership of our intersective sets will be more compatible with WordNet than the original Levin classes. We also have begun to examine related classes in Portuguese, and find that these verbs demonstrate similarly coherent syntactic and semantic properties.",,"Investigating Regular Sense Extensions based on Intersective Levin Classes. In this paper we specifically address questions of polysemy with respect to verbs, and how regular extensions of meaning can be achieved through the adjunction of particular syntactic phrases. We see verb classes as the key to making generalizations about regular extensions of meaning. 
Current approaches to English classification, Levin classes and WordNet, have limitations in their applicability that impede their utility as general classification schemes. We present a refinement of Levin classes, intersective sets, which are a more fine-grained classification and have more coherent sets of syntactic frames and associated semantic components. We have preliminary indications that the membership of our intersective sets will be more compatible with WordNet than the original Levin classes. We also have begun to examine related classes in Portuguese, and find that these verbs demonstrate similarly coherent syntactic and semantic properties.",1998
bonheme-grzes-2020-sesam,https://aclanthology.org/2020.semeval-1.102,0,,,,,,,"SESAM at SemEval-2020 Task 8: Investigating the Relationship between Image and Text in Sentiment Analysis of Memes. This paper presents our submission to task 8 (memotion analysis) of the SemEval 2020 competition. We explain the algorithms that were used to learn our models along with the process of tuning the algorithms and selecting the best model. Since meme analysis is a challenging task with two distinct modalities, we studied the impact of different multimodal representation strategies. The results of several approaches to dealing with multimodal data are therefore discussed in the paper. We found that alignment-based strategies did not perform well on memes. Our quantitative results also showed that images and text were uncorrelated. Fusion-based strategies did not show significant improvements and using one modality only (text or image) tends to lead to better results when applied with the predictive models that we used in our research. 2 Related work Sentiment analysis of text is a very active research area which still faces multiple challenges such as irony and humour detection (Hernández Farias and Rosso, 2017) and low inter-annotator agreement caused by the high subjectivity of the content (Mohammad, 2017). Research has been extended to multimodal sentiment analysis during the last years (Soleymani et al., 2017), but the focus was mostly on video and text or speech and text. The specific multi-modality of memes in sentiment analysis has only been addressed recently by French (2017), who investigated their correlation with other comments in online discussions. 1 We use the term meme to refer to internet memes as defined in Davidson (2012). The memes considered in this task are only composed of image and text.",{SESAM} at {S}em{E}val-2020 Task 8: Investigating the Relationship between Image and Text in Sentiment Analysis of Memes,"This paper presents our submission to task 8 (memotion analysis) of the SemEval 2020 competition. We explain the algorithms that were used to learn our models along with the process of tuning the algorithms and selecting the best model. Since meme analysis is a challenging task with two distinct modalities, we studied the impact of different multimodal representation strategies. The results of several approaches to dealing with multimodal data are therefore discussed in the paper. We found that alignment-based strategies did not perform well on memes. Our quantitative results also showed that images and text were uncorrelated. Fusion-based strategies did not show significant improvements and using one modality only (text or image) tends to lead to better results when applied with the predictive models that we used in our research. 2 Related work Sentiment analysis of text is a very active research area which still faces multiple challenges such as irony and humour detection (Hernández Farias and Rosso, 2017) and low inter-annotator agreement caused by the high subjectivity of the content (Mohammad, 2017). Research has been extended to multimodal sentiment analysis during the last years (Soleymani et al., 2017), but the focus was mostly on video and text or speech and text. The specific multi-modality of memes in sentiment analysis has only been addressed recently by French (2017), who investigated their correlation with other comments in online discussions. 1 We use the term meme to refer to internet memes as defined in Davidson (2012). 
The memes considered in this task are only composed of image and text.",SESAM at SemEval-2020 Task 8: Investigating the Relationship between Image and Text in Sentiment Analysis of Memes,"This paper presents our submission to task 8 (memotion analysis) of the SemEval 2020 competition. We explain the algorithms that were used to learn our models along with the process of tuning the algorithms and selecting the best model. Since meme analysis is a challenging task with two distinct modalities, we studied the impact of different multimodal representation strategies. The results of several approaches to dealing with multimodal data are therefore discussed in the paper. We found that alignment-based strategies did not perform well on memes. Our quantitative results also showed that images and text were uncorrelated. Fusion-based strategies did not show significant improvements and using one modality only (text or image) tends to lead to better results when applied with the predictive models that we used in our research. 2 Related work Sentiment analysis of text is a very active research area which still faces multiple challenges such as irony and humour detection (Hernández Farias and Rosso, 2017) and low inter-annotator agreement caused by the high subjectivity of the content (Mohammad, 2017). Research has been extended to multimodal sentiment analysis during the last years (Soleymani et al., 2017), but the focus was mostly on video and text or speech and text. The specific multi-modality of memes in sentiment analysis has only been addressed recently by French (2017), who investigated their correlation with other comments in online discussions. 1 We use the term meme to refer to internet memes as defined in Davidson (2012). The memes considered in this task are only composed of image and text.","We thank the SemEval-2020 organisers for their time to prepare the data and run the competition, and the reviewers for their insightful comments.","SESAM at SemEval-2020 Task 8: Investigating the Relationship between Image and Text in Sentiment Analysis of Memes. This paper presents our submission to task 8 (memotion analysis) of the SemEval 2020 competition. We explain the algorithms that were used to learn our models along with the process of tuning the algorithms and selecting the best model. Since meme analysis is a challenging task with two distinct modalities, we studied the impact of different multimodal representation strategies. The results of several approaches to dealing with multimodal data are therefore discussed in the paper. We found that alignment-based strategies did not perform well on memes. Our quantitative results also showed that images and text were uncorrelated. Fusion-based strategies did not show significant improvements and using one modality only (text or image) tends to lead to better results when applied with the predictive models that we used in our research. 2 Related work Sentiment analysis of text is a very active research area which still faces multiple challenges such as irony and humour detection (Hernández Farias and Rosso, 2017) and low inter-annotator agreement caused by the high subjectivity of the content (Mohammad, 2017). Research has been extended to multimodal sentiment analysis during the last years (Soleymani et al., 2017), but the focus was mostly on video and text or speech and text. 
The specific multi-modality of memes in sentiment analysis has only been addressed recently by French (2017), who investigated their correlation with other comments in online discussions. 1 We use the term meme to refer to internet memes as defined in Davidson (2012). The memes considered in this task are only composed of image and text.",2020
kanzaki-isahara-2018-building,https://aclanthology.org/L18-1376,0,,,,,,,"Building a List of Synonymous Words and Phrases of Japanese Compound Verbs. We started to construct a database of synonymous expressions of Japanese ""Verb + Verb"" compounds semi-automatically. Japanese is known to be rich in compound verbs consisting of two verbs joined together. However, we did not have a comprehensive Japanese compound lexicon. Recently a Japanese compound verb lexicon was constructed by the National Institute for Japanese Language and Linguistics(NINJAL)(2013-15). Though it has meanings, example sentences, syntactic patterns and actual sentences from the corpus that they possess, it has no information on relationships with another words, such as synonymous words and phrases. We automatically extracted synonymous expressions of compound verbs from corpus which is ""five hundred million Japanese texts gathered from the web"" produced by Kawahara et.al. (2006) by using word2vec and cosine similarity and find suitable clusters which correspond to meanings of the compound verbs by using k-means++ and PCA. The automatic extraction from corpus helps humans find not only typical synonyms but also unexpected synonymous words and phrases. Then we manually compile the list of synonymous expressions of Japanese compound verbs by assessing the result and also link it to the ""Compound Verb Lexicon"" published by NINJAL.",Building a List of Synonymous Words and Phrases of {J}apanese Compound Verbs,"We started to construct a database of synonymous expressions of Japanese ""Verb + Verb"" compounds semi-automatically. Japanese is known to be rich in compound verbs consisting of two verbs joined together. However, we did not have a comprehensive Japanese compound lexicon. Recently a Japanese compound verb lexicon was constructed by the National Institute for Japanese Language and Linguistics(NINJAL)(2013-15). Though it has meanings, example sentences, syntactic patterns and actual sentences from the corpus that they possess, it has no information on relationships with another words, such as synonymous words and phrases. We automatically extracted synonymous expressions of compound verbs from corpus which is ""five hundred million Japanese texts gathered from the web"" produced by Kawahara et.al. (2006) by using word2vec and cosine similarity and find suitable clusters which correspond to meanings of the compound verbs by using k-means++ and PCA. The automatic extraction from corpus helps humans find not only typical synonyms but also unexpected synonymous words and phrases. Then we manually compile the list of synonymous expressions of Japanese compound verbs by assessing the result and also link it to the ""Compound Verb Lexicon"" published by NINJAL.",Building a List of Synonymous Words and Phrases of Japanese Compound Verbs,"We started to construct a database of synonymous expressions of Japanese ""Verb + Verb"" compounds semi-automatically. Japanese is known to be rich in compound verbs consisting of two verbs joined together. However, we did not have a comprehensive Japanese compound lexicon. Recently a Japanese compound verb lexicon was constructed by the National Institute for Japanese Language and Linguistics(NINJAL)(2013-15). Though it has meanings, example sentences, syntactic patterns and actual sentences from the corpus that they possess, it has no information on relationships with another words, such as synonymous words and phrases. 
We automatically extracted synonymous expressions of compound verbs from corpus which is ""five hundred million Japanese texts gathered from the web"" produced by Kawahara et.al. (2006) by using word2vec and cosine similarity and find suitable clusters which correspond to meanings of the compound verbs by using k-means++ and PCA. The automatic extraction from corpus helps humans find not only typical synonyms but also unexpected synonymous words and phrases. Then we manually compile the list of synonymous expressions of Japanese compound verbs by assessing the result and also link it to the ""Compound Verb Lexicon"" published by NINJAL.",This work was supported by JSPS KAKENHI(Grant-in-Aid for Scientific Research (C) ) Grant Number JP 16K02727.,"Building a List of Synonymous Words and Phrases of Japanese Compound Verbs. We started to construct a database of synonymous expressions of Japanese ""Verb + Verb"" compounds semi-automatically. Japanese is known to be rich in compound verbs consisting of two verbs joined together. However, we did not have a comprehensive Japanese compound lexicon. Recently a Japanese compound verb lexicon was constructed by the National Institute for Japanese Language and Linguistics(NINJAL)(2013-15). Though it has meanings, example sentences, syntactic patterns and actual sentences from the corpus that they possess, it has no information on relationships with another words, such as synonymous words and phrases. We automatically extracted synonymous expressions of compound verbs from corpus which is ""five hundred million Japanese texts gathered from the web"" produced by Kawahara et.al. (2006) by using word2vec and cosine similarity and find suitable clusters which correspond to meanings of the compound verbs by using k-means++ and PCA. The automatic extraction from corpus helps humans find not only typical synonyms but also unexpected synonymous words and phrases. Then we manually compile the list of synonymous expressions of Japanese compound verbs by assessing the result and also link it to the ""Compound Verb Lexicon"" published by NINJAL.",2018
lin-etal-2021-contextualized,https://aclanthology.org/2021.emnlp-main.77,0,,,,,,,"Contextualized Query Embeddings for Conversational Search. This paper describes a compact and effective model for low-latency passage retrieval in conversational search based on learned dense representations. Prior to our work, the state-of-the-art approach uses a multi-stage pipeline comprising conversational query reformulation and information retrieval modules. Despite its effectiveness, such a pipeline often includes multiple neural models that require long inference times. In addition, independently optimizing each module ignores dependencies among them. To address these shortcomings, we propose to integrate conversational query reformulation directly into a dense retrieval model. To aid in this goal, we create a dataset with pseudo-relevance labels for conversational search to overcome the lack of training data and to explore different training strategies. We demonstrate that our model effectively rewrites conversational queries as dense representations in conversational search and open-domain question answering datasets. Finally, after observing that our model learns to adjust the L2 norm of query token embeddings, we leverage this property for hybrid retrieval and to support error analysis.",Contextualized Query Embeddings for Conversational Search,"This paper describes a compact and effective model for low-latency passage retrieval in conversational search based on learned dense representations. Prior to our work, the state-of-the-art approach uses a multi-stage pipeline comprising conversational query reformulation and information retrieval modules. Despite its effectiveness, such a pipeline often includes multiple neural models that require long inference times. In addition, independently optimizing each module ignores dependencies among them. To address these shortcomings, we propose to integrate conversational query reformulation directly into a dense retrieval model. To aid in this goal, we create a dataset with pseudo-relevance labels for conversational search to overcome the lack of training data and to explore different training strategies. We demonstrate that our model effectively rewrites conversational queries as dense representations in conversational search and open-domain question answering datasets. Finally, after observing that our model learns to adjust the L2 norm of query token embeddings, we leverage this property for hybrid retrieval and to support error analysis.",Contextualized Query Embeddings for Conversational Search,"This paper describes a compact and effective model for low-latency passage retrieval in conversational search based on learned dense representations. Prior to our work, the state-of-the-art approach uses a multi-stage pipeline comprising conversational query reformulation and information retrieval modules. Despite its effectiveness, such a pipeline often includes multiple neural models that require long inference times. In addition, independently optimizing each module ignores dependencies among them. To address these shortcomings, we propose to integrate conversational query reformulation directly into a dense retrieval model. To aid in this goal, we create a dataset with pseudo-relevance labels for conversational search to overcome the lack of training data and to explore different training strategies. 
We demonstrate that our model effectively rewrites conversational queries as dense representations in conversational search and open-domain question answering datasets. Finally, after observing that our model learns to adjust the L2 norm of query token embeddings, we leverage this property for hybrid retrieval and to support error analysis.","This research was supported in part by the Canada First Research Excellence Fund and the Natural Sciences and Engineering Research Council (NSERC) of Canada. Additionally, we would like to thank the support of Cloud TPUs from Google's TPU Research Cloud (TRC).","Contextualized Query Embeddings for Conversational Search. This paper describes a compact and effective model for low-latency passage retrieval in conversational search based on learned dense representations. Prior to our work, the state-of-the-art approach uses a multi-stage pipeline comprising conversational query reformulation and information retrieval modules. Despite its effectiveness, such a pipeline often includes multiple neural models that require long inference times. In addition, independently optimizing each module ignores dependencies among them. To address these shortcomings, we propose to integrate conversational query reformulation directly into a dense retrieval model. To aid in this goal, we create a dataset with pseudo-relevance labels for conversational search to overcome the lack of training data and to explore different training strategies. We demonstrate that our model effectively rewrites conversational queries as dense representations in conversational search and open-domain question answering datasets. Finally, after observing that our model learns to adjust the L2 norm of query token embeddings, we leverage this property for hybrid retrieval and to support error analysis.",2021
yamauchi-etal-2013-robotic,https://aclanthology.org/W13-4060,0,,,,,,,A Robotic Agent in a Virtual Environment that Performs Situated Incremental Understanding of Navigational Utterances. We demonstrate a robotic agent in a 3D virtual environment that understands human navigational instructions. Such an agent needs to select actions based on not only instructions but also situations. It is also expected to immediately react to the instructions. Our agent incrementally understands spoken instructions and immediately controls a mobile robot based on the incremental understanding results and situation information such as the locations of obstacles and moving history. It can be used as an experimental system for collecting human-robot interactions in dynamically changing situations.,A Robotic Agent in a Virtual Environment that Performs Situated Incremental Understanding of Navigational Utterances,We demonstrate a robotic agent in a 3D virtual environment that understands human navigational instructions. Such an agent needs to select actions based on not only instructions but also situations. It is also expected to immediately react to the instructions. Our agent incrementally understands spoken instructions and immediately controls a mobile robot based on the incremental understanding results and situation information such as the locations of obstacles and moving history. It can be used as an experimental system for collecting human-robot interactions in dynamically changing situations.,A Robotic Agent in a Virtual Environment that Performs Situated Incremental Understanding of Navigational Utterances,We demonstrate a robotic agent in a 3D virtual environment that understands human navigational instructions. Such an agent needs to select actions based on not only instructions but also situations. It is also expected to immediately react to the instructions. Our agent incrementally understands spoken instructions and immediately controls a mobile robot based on the incremental understanding results and situation information such as the locations of obstacles and moving history. It can be used as an experimental system for collecting human-robot interactions in dynamically changing situations.,"We thank Antoine Raux and Shun Sato for their contribution to building the previous versions of this system. Thanks also go to Timo Baumann Okko Buß, and David Schlangen for making their InproTK available.",A Robotic Agent in a Virtual Environment that Performs Situated Incremental Understanding of Navigational Utterances. We demonstrate a robotic agent in a 3D virtual environment that understands human navigational instructions. Such an agent needs to select actions based on not only instructions but also situations. It is also expected to immediately react to the instructions. Our agent incrementally understands spoken instructions and immediately controls a mobile robot based on the incremental understanding results and situation information such as the locations of obstacles and moving history. It can be used as an experimental system for collecting human-robot interactions in dynamically changing situations.,2013
zhang-etal-2010-machine,https://aclanthology.org/C10-2165,0,,,,,,,"Machine Transliteration: Leveraging on Third Languages. This paper presents two pivot strategies for statistical machine transliteration, namely system-based pivot strategy and model-based pivot strategy. Given two independent source-pivot and pivot-target name pair corpora, the model-based strategy learns a direct source-target transliteration model while the system-based strategy learns a source-pivot model and a pivot-target model, respectively. Experimental results on benchmark data show that the system-based pivot strategy is effective in reducing the high resource requirement of training corpus for low-density language pairs while the model-based pivot strategy performs worse than the system-based one.",Machine Transliteration: Leveraging on Third Languages,"This paper presents two pivot strategies for statistical machine transliteration, namely system-based pivot strategy and model-based pivot strategy. Given two independent source-pivot and pivot-target name pair corpora, the model-based strategy learns a direct source-target transliteration model while the system-based strategy learns a source-pivot model and a pivot-target model, respectively. Experimental results on benchmark data show that the system-based pivot strategy is effective in reducing the high resource requirement of training corpus for low-density language pairs while the model-based pivot strategy performs worse than the system-based one.",Machine Transliteration: Leveraging on Third Languages,"This paper presents two pivot strategies for statistical machine transliteration, namely system-based pivot strategy and model-based pivot strategy. Given two independent source-pivot and pivot-target name pair corpora, the model-based strategy learns a direct source-target transliteration model while the system-based strategy learns a source-pivot model and a pivot-target model, respectively. Experimental results on benchmark data show that the system-based pivot strategy is effective in reducing the high resource requirement of training corpus for low-density language pairs while the model-based pivot strategy performs worse than the system-based one.",,"Machine Transliteration: Leveraging on Third Languages. This paper presents two pivot strategies for statistical machine transliteration, namely system-based pivot strategy and model-based pivot strategy. Given two independent source-pivot and pivot-target name pair corpora, the model-based strategy learns a direct source-target transliteration model while the system-based strategy learns a source-pivot model and a pivot-target model, respectively. Experimental results on benchmark data show that the system-based pivot strategy is effective in reducing the high resource requirement of training corpus for low-density language pairs while the model-based pivot strategy performs worse than the system-based one.",2010
barthelemy-1998-morphological,https://aclanthology.org/W98-1010,0,,,,,,,A Morphological Analyzer for Akkadian Verbal Forms with a Model of Phonetic Transformations. The paper describes a first attempt to design a morphological analyzer for Akkadian verbal forms. Akkadian is a semitic dead language which was used in the ancient Mesopotamia. The analyzer described has two levels: the first one is a deterministic and unique paradigm that describes the flexion of Akkadian verbs. The second level is a non deterministic rewriting system which describes possible phonetic transformations of the forms. The results obtained so far are encouraging.,A Morphological Analyzer for {A}kkadian Verbal Forms with a Model of Phonetic Transformations,The paper describes a first attempt to design a morphological analyzer for Akkadian verbal forms. Akkadian is a semitic dead language which was used in the ancient Mesopotamia. The analyzer described has two levels: the first one is a deterministic and unique paradigm that describes the flexion of Akkadian verbs. The second level is a non deterministic rewriting system which describes possible phonetic transformations of the forms. The results obtained so far are encouraging.,A Morphological Analyzer for Akkadian Verbal Forms with a Model of Phonetic Transformations,The paper describes a first attempt to design a morphological analyzer for Akkadian verbal forms. Akkadian is a semitic dead language which was used in the ancient Mesopotamia. The analyzer described has two levels: the first one is a deterministic and unique paradigm that describes the flexion of Akkadian verbs. The second level is a non deterministic rewriting system which describes possible phonetic transformations of the forms. The results obtained so far are encouraging.,"The following references, given by one of the referees as relevant to our work, were not used for lack of time.",A Morphological Analyzer for Akkadian Verbal Forms with a Model of Phonetic Transformations. The paper describes a first attempt to design a morphological analyzer for Akkadian verbal forms. Akkadian is a semitic dead language which was used in the ancient Mesopotamia. The analyzer described has two levels: the first one is a deterministic and unique paradigm that describes the flexion of Akkadian verbs. The second level is a non deterministic rewriting system which describes possible phonetic transformations of the forms. The results obtained so far are encouraging.,1998
park-caragea-2020-scientific,https://aclanthology.org/2020.coling-main.472,0,,,,,,,"Scientific Keyphrase Identification and Classification by Pre-Trained Language Models Intermediate Task Transfer Learning. Scientific keyphrase identification and classification is the task of detecting and classifying keyphrases from scholarly text with their types from a set of predefined classes. This task has a wide range of benefits, but it is still challenging in performance due to the lack of large amounts of labeled data required for training deep neural models. In order to overcome this challenge, we explore pre-trained language models BERT and SciBERT with intermediate task transfer learning, using 42 data-rich related intermediate-target task combinations. We reveal that intermediate task transfer learning on SciBERT induces a better starting point for target task fine-tuning compared with BERT and achieves competitive performance in scientific keyphrase identification and classification compared to both previous works and strong baselines. Interestingly, we observe that BERT with intermediate task transfer learning fails to improve the performance of scientific keyphrase identification and classification potentially due to significant catastrophic forgetting. This result highlights that scientific knowledge achieved during the pre-training of language models on large scientific collections plays an important role in the target tasks. We also observe that sequence tagging related intermediate tasks, especially syntactic structure learning tasks such as POS Tagging, tend to work best for scientific keyphrase identification and classification.",Scientific Keyphrase Identification and Classification by Pre-Trained Language Models Intermediate Task Transfer Learning,"Scientific keyphrase identification and classification is the task of detecting and classifying keyphrases from scholarly text with their types from a set of predefined classes. This task has a wide range of benefits, but it is still challenging in performance due to the lack of large amounts of labeled data required for training deep neural models. In order to overcome this challenge, we explore pre-trained language models BERT and SciBERT with intermediate task transfer learning, using 42 data-rich related intermediate-target task combinations. We reveal that intermediate task transfer learning on SciBERT induces a better starting point for target task fine-tuning compared with BERT and achieves competitive performance in scientific keyphrase identification and classification compared to both previous works and strong baselines. Interestingly, we observe that BERT with intermediate task transfer learning fails to improve the performance of scientific keyphrase identification and classification potentially due to significant catastrophic forgetting. This result highlights that scientific knowledge achieved during the pre-training of language models on large scientific collections plays an important role in the target tasks. We also observe that sequence tagging related intermediate tasks, especially syntactic structure learning tasks such as POS Tagging, tend to work best for scientific keyphrase identification and classification.",Scientific Keyphrase Identification and Classification by Pre-Trained Language Models Intermediate Task Transfer Learning,"Scientific keyphrase identification and classification is the task of detecting and classifying keyphrases from scholarly text with their types from a set of predefined classes. 
This task has a wide range of benefits, but it is still challenging in performance due to the lack of large amounts of labeled data required for training deep neural models. In order to overcome this challenge, we explore pre-trained language models BERT and SciBERT with intermediate task transfer learning, using 42 data-rich related intermediate-target task combinations. We reveal that intermediate task transfer learning on SciBERT induces a better starting point for target task fine-tuning compared with BERT and achieves competitive performance in scientific keyphrase identification and classification compared to both previous works and strong baselines. Interestingly, we observe that BERT with intermediate task transfer learning fails to improve the performance of scientific keyphrase identification and classification potentially due to significant catastrophic forgetting. This result highlights that scientific knowledge achieved during the pre-training of language models on large scientific collections plays an important role in the target tasks. We also observe that sequence tagging related intermediate tasks, especially syntactic structure learning tasks such as POS Tagging, tend to work best for scientific keyphrase identification and classification.","We thank Isabelle Augenstein for several clarifications of the task and the evaluation approach. We also thank our anonymous reviewers for their constructive comments and feedback, which helped improve our paper. This research is supported in part by NSF CAREER award #1802358, NSF CRI award #1823292, and UIC Discovery Partners Institute to Cornelia Caragea. Any opinions, findings, and conclusions expressed here are those of the authors and do not necessarily reflect the views of NSF.","Scientific Keyphrase Identification and Classification by Pre-Trained Language Models Intermediate Task Transfer Learning. Scientific keyphrase identification and classification is the task of detecting and classifying keyphrases from scholarly text with their types from a set of predefined classes. This task has a wide range of benefits, but it is still challenging in performance due to the lack of large amounts of labeled data required for training deep neural models. In order to overcome this challenge, we explore pre-trained language models BERT and SciBERT with intermediate task transfer learning, using 42 data-rich related intermediate-target task combinations. We reveal that intermediate task transfer learning on SciBERT induces a better starting point for target task fine-tuning compared with BERT and achieves competitive performance in scientific keyphrase identification and classification compared to both previous works and strong baselines. Interestingly, we observe that BERT with intermediate task transfer learning fails to improve the performance of scientific keyphrase identification and classification potentially due to significant catastrophic forgetting. This result highlights that scientific knowledge achieved during the pre-training of language models on large scientific collections plays an important role in the target tasks. We also observe that sequence tagging related intermediate tasks, especially syntactic structure learning tasks such as POS Tagging, tend to work best for scientific keyphrase identification and classification.",2020
milajevs-etal-2016-robust,https://aclanthology.org/P16-3009,0,,,,,,,"Robust Co-occurrence Quantification for Lexical Distributional Semantics. Previous optimisations of parameters affecting the word-context association measure used in distributional vector space models have focused either on highdimensional vectors with hundreds of thousands of dimensions, or dense vectors with dimensionality of a few hundreds; but dimensionality of a few thousands is often applied in compositional tasks as it is still computationally feasible and does not require the dimensionality reduction step. We present a systematic study of the interaction of the parameters of the association measure and vector dimensionality, and derive parameter selection heuristics that achieve performance across word similarity and relevance datasets competitive with the results previously reported in the literature achieved by highly dimensional or dense models.",Robust Co-occurrence Quantification for Lexical Distributional Semantics,"Previous optimisations of parameters affecting the word-context association measure used in distributional vector space models have focused either on highdimensional vectors with hundreds of thousands of dimensions, or dense vectors with dimensionality of a few hundreds; but dimensionality of a few thousands is often applied in compositional tasks as it is still computationally feasible and does not require the dimensionality reduction step. We present a systematic study of the interaction of the parameters of the association measure and vector dimensionality, and derive parameter selection heuristics that achieve performance across word similarity and relevance datasets competitive with the results previously reported in the literature achieved by highly dimensional or dense models.",Robust Co-occurrence Quantification for Lexical Distributional Semantics,"Previous optimisations of parameters affecting the word-context association measure used in distributional vector space models have focused either on highdimensional vectors with hundreds of thousands of dimensions, or dense vectors with dimensionality of a few hundreds; but dimensionality of a few thousands is often applied in compositional tasks as it is still computationally feasible and does not require the dimensionality reduction step. We present a systematic study of the interaction of the parameters of the association measure and vector dimensionality, and derive parameter selection heuristics that achieve performance across word similarity and relevance datasets competitive with the results previously reported in the literature achieved by highly dimensional or dense models.","We thank Ann Copestake for her valuable comments as part of the ACL SRW mentorship program and the anonymous reviewers for their comments. Support from EPSRC grant EP/J002607/1 is gratefully acknowledged by Dmitrijs Milajevs and Mehrnoosh Sadrzadeh. Matthew Purver is partly supported by ConCreTe: the project ConCreTe acknowledges the financial support of the Future and Emerging Technologies (FET) programme within the Seventh Framework Pro-gramme for Research of the European Commission, under FET grant number 611733.","Robust Co-occurrence Quantification for Lexical Distributional Semantics. 
Previous optimisations of parameters affecting the word-context association measure used in distributional vector space models have focused either on highdimensional vectors with hundreds of thousands of dimensions, or dense vectors with dimensionality of a few hundreds; but dimensionality of a few thousands is often applied in compositional tasks as it is still computationally feasible and does not require the dimensionality reduction step. We present a systematic study of the interaction of the parameters of the association measure and vector dimensionality, and derive parameter selection heuristics that achieve performance across word similarity and relevance datasets competitive with the results previously reported in the literature achieved by highly dimensional or dense models.",2016
lin-etal-2019-task,https://aclanthology.org/D19-1463,0,,,,,,,"Task-Oriented Conversation Generation Using Heterogeneous Memory Networks. How to incorporate external knowledge into a neural dialogue model is critically important for dialogue systems to behave like real humans. To handle this problem, memory networks are usually a great choice and a promising way. However, existing memory networks do not perform well when leveraging heterogeneous information from different sources. In this paper, we propose a novel and versatile external memory networks called Heterogeneous Memory Networks (HMNs), to simultaneously utilize user utterances, dialogue history and background knowledge tuples. In our method, historical sequential dialogues are encoded and stored into the context-aware memory enhanced by gating mechanism while grounding knowledge tuples are encoded and stored into the context-free memory. During decoding, the decoder augmented with HMNs recurrently selects each word in one response utterance from these two memories and a general vocabulary. Experimental results on multiple real-world datasets show that HMNs significantly outperform the state-of-the-art datadriven task-oriented dialogue models in most domains.",Task-Oriented Conversation Generation Using Heterogeneous Memory Networks,"How to incorporate external knowledge into a neural dialogue model is critically important for dialogue systems to behave like real humans. To handle this problem, memory networks are usually a great choice and a promising way. However, existing memory networks do not perform well when leveraging heterogeneous information from different sources. In this paper, we propose a novel and versatile external memory networks called Heterogeneous Memory Networks (HMNs), to simultaneously utilize user utterances, dialogue history and background knowledge tuples. In our method, historical sequential dialogues are encoded and stored into the context-aware memory enhanced by gating mechanism while grounding knowledge tuples are encoded and stored into the context-free memory. During decoding, the decoder augmented with HMNs recurrently selects each word in one response utterance from these two memories and a general vocabulary. Experimental results on multiple real-world datasets show that HMNs significantly outperform the state-of-the-art datadriven task-oriented dialogue models in most domains.",Task-Oriented Conversation Generation Using Heterogeneous Memory Networks,"How to incorporate external knowledge into a neural dialogue model is critically important for dialogue systems to behave like real humans. To handle this problem, memory networks are usually a great choice and a promising way. However, existing memory networks do not perform well when leveraging heterogeneous information from different sources. In this paper, we propose a novel and versatile external memory networks called Heterogeneous Memory Networks (HMNs), to simultaneously utilize user utterances, dialogue history and background knowledge tuples. In our method, historical sequential dialogues are encoded and stored into the context-aware memory enhanced by gating mechanism while grounding knowledge tuples are encoded and stored into the context-free memory. During decoding, the decoder augmented with HMNs recurrently selects each word in one response utterance from these two memories and a general vocabulary. 
Experimental results on multiple real-world datasets show that HMNs significantly outperform the state-of-the-art datadriven task-oriented dialogue models in most domains.","We thank the anonymous reviewers for their insightful comments on this paper. This work was supported by the NSFC (No.61402403), DAMO Academy (Alibaba Group), Alibaba-Zhejiang University Joint Institute of Frontier Technologies, Chinese Knowledge Center for Engineering Sciences and Technology, and the Fundamental Research Funds for the Central Universities.","Task-Oriented Conversation Generation Using Heterogeneous Memory Networks. How to incorporate external knowledge into a neural dialogue model is critically important for dialogue systems to behave like real humans. To handle this problem, memory networks are usually a great choice and a promising way. However, existing memory networks do not perform well when leveraging heterogeneous information from different sources. In this paper, we propose a novel and versatile external memory networks called Heterogeneous Memory Networks (HMNs), to simultaneously utilize user utterances, dialogue history and background knowledge tuples. In our method, historical sequential dialogues are encoded and stored into the context-aware memory enhanced by gating mechanism while grounding knowledge tuples are encoded and stored into the context-free memory. During decoding, the decoder augmented with HMNs recurrently selects each word in one response utterance from these two memories and a general vocabulary. Experimental results on multiple real-world datasets show that HMNs significantly outperform the state-of-the-art datadriven task-oriented dialogue models in most domains.",2019
agarwal-kann-2020-acrostic,https://aclanthology.org/2020.emnlp-main.94,0,,,,,,,"Acrostic Poem Generation. We propose a new task in the area of computational creativity: acrostic poem generation in English. Acrostic poems are poems that contain a hidden message; typically, the first letter of each line spells out a word or short phrase. We define the task as a generation task with multiple constraints: given an input word, 1) the initial letters of each line should spell out the provided word, 2) the poem's semantics should also relate to it, and 3) the poem should conform to a rhyming scheme. We further provide a baseline model for the task, which consists of a conditional neural language model in combination with a neural rhyming model. Since no dedicated datasets for acrostic poem generation exist, we create training data for our task by first training a separate topic prediction model on a small set of topic-annotated poems and then predicting topics for additional poems. Our experiments show that the acrostic poems generated by our baseline are received well by humans and do not lose much quality due to the additional constraints. Last, we confirm that poems generated by our model are indeed closely related to the provided prompts, and that pretraining on Wikipedia can boost performance.",Acrostic Poem Generation,"We propose a new task in the area of computational creativity: acrostic poem generation in English. Acrostic poems are poems that contain a hidden message; typically, the first letter of each line spells out a word or short phrase. We define the task as a generation task with multiple constraints: given an input word, 1) the initial letters of each line should spell out the provided word, 2) the poem's semantics should also relate to it, and 3) the poem should conform to a rhyming scheme. We further provide a baseline model for the task, which consists of a conditional neural language model in combination with a neural rhyming model. Since no dedicated datasets for acrostic poem generation exist, we create training data for our task by first training a separate topic prediction model on a small set of topic-annotated poems and then predicting topics for additional poems. Our experiments show that the acrostic poems generated by our baseline are received well by humans and do not lose much quality due to the additional constraints. Last, we confirm that poems generated by our model are indeed closely related to the provided prompts, and that pretraining on Wikipedia can boost performance.",Acrostic Poem Generation,"We propose a new task in the area of computational creativity: acrostic poem generation in English. Acrostic poems are poems that contain a hidden message; typically, the first letter of each line spells out a word or short phrase. We define the task as a generation task with multiple constraints: given an input word, 1) the initial letters of each line should spell out the provided word, 2) the poem's semantics should also relate to it, and 3) the poem should conform to a rhyming scheme. We further provide a baseline model for the task, which consists of a conditional neural language model in combination with a neural rhyming model. Since no dedicated datasets for acrostic poem generation exist, we create training data for our task by first training a separate topic prediction model on a small set of topic-annotated poems and then predicting topics for additional poems. 
Our experiments show that the acrostic poems generated by our baseline are received well by humans and do not lose much quality due to the additional constraints. Last, we confirm that poems generated by our model are indeed closely related to the provided prompts, and that pretraining on Wikipedia can boost performance.",We would like to thank the members of NYU's ML 2 group for their help with the human evaluation and their feedback on our paper! We are also grateful to the anonymous reviewers for their insightful comments.,"Acrostic Poem Generation. We propose a new task in the area of computational creativity: acrostic poem generation in English. Acrostic poems are poems that contain a hidden message; typically, the first letter of each line spells out a word or short phrase. We define the task as a generation task with multiple constraints: given an input word, 1) the initial letters of each line should spell out the provided word, 2) the poem's semantics should also relate to it, and 3) the poem should conform to a rhyming scheme. We further provide a baseline model for the task, which consists of a conditional neural language model in combination with a neural rhyming model. Since no dedicated datasets for acrostic poem generation exist, we create training data for our task by first training a separate topic prediction model on a small set of topic-annotated poems and then predicting topics for additional poems. Our experiments show that the acrostic poems generated by our baseline are received well by humans and do not lose much quality due to the additional constraints. Last, we confirm that poems generated by our model are indeed closely related to the provided prompts, and that pretraining on Wikipedia can boost performance.",2020
vu-etal-2018-sentence,https://aclanthology.org/N18-2013,0,,,,,,,"Sentence Simplification with Memory-Augmented Neural Networks. Sentence simplification aims to simplify the content and structure of complex sentences, and thus make them easier to interpret for human readers, and easier to process for downstream NLP applications. Recent advances in neural machine translation have paved the way for novel approaches to the task. In this paper, we adapt an architecture with augmented memory capacities called Neural Semantic Encoders (Munkhdalai and Yu, 2017) for sentence simplification. Our experiments demonstrate the effectiveness of our approach on different simplification datasets, both in terms of automatic evaluation measures and human judgments.",Sentence Simplification with Memory-Augmented Neural Networks,"Sentence simplification aims to simplify the content and structure of complex sentences, and thus make them easier to interpret for human readers, and easier to process for downstream NLP applications. Recent advances in neural machine translation have paved the way for novel approaches to the task. In this paper, we adapt an architecture with augmented memory capacities called Neural Semantic Encoders (Munkhdalai and Yu, 2017) for sentence simplification. Our experiments demonstrate the effectiveness of our approach on different simplification datasets, both in terms of automatic evaluation measures and human judgments.",Sentence Simplification with Memory-Augmented Neural Networks,"Sentence simplification aims to simplify the content and structure of complex sentences, and thus make them easier to interpret for human readers, and easier to process for downstream NLP applications. Recent advances in neural machine translation have paved the way for novel approaches to the task. In this paper, we adapt an architecture with augmented memory capacities called Neural Semantic Encoders (Munkhdalai and Yu, 2017) for sentence simplification. Our experiments demonstrate the effectiveness of our approach on different simplification datasets, both in terms of automatic evaluation measures and human judgments.","We would like to thank Emily Druhl, Jesse Lingeman, and the UMass BioNLP team for their help with this work. We also thank Xingxing Zhang and Sergiu Nisioi for valuable discussions, and the anonymous reviewers for their thoughtful comments and suggestions.","Sentence Simplification with Memory-Augmented Neural Networks. Sentence simplification aims to simplify the content and structure of complex sentences, and thus make them easier to interpret for human readers, and easier to process for downstream NLP applications. Recent advances in neural machine translation have paved the way for novel approaches to the task. In this paper, we adapt an architecture with augmented memory capacities called Neural Semantic Encoders (Munkhdalai and Yu, 2017) for sentence simplification. Our experiments demonstrate the effectiveness of our approach on different simplification datasets, both in terms of automatic evaluation measures and human judgments.",2018
pedersen-2001-machine,https://aclanthology.org/S01-1034,0,,,,,,,"Machine Learning with Lexical Features: The Duluth Approach to SENSEVAL-2. This paper describes the sixteen Duluth entries in the SENSEVAL-2 comparative exercise among word sense disambiguation systems. There were eight pairs of Duluth systems entered in the Spanish and English lexical sample tasks. These are all based on standard machine learning algorithms that induce classifiers from sense-tagged training text where the context in which ambiguous words occur are represented by simple lexical features. These are highly portable, robust methods that can serve as a foundation for more tailored approaches.",Machine Learning with Lexical Features: {T}he {D}uluth Approach to {SENSEVAL}-2,"This paper describes the sixteen Duluth entries in the SENSEVAL-2 comparative exercise among word sense disambiguation systems. There were eight pairs of Duluth systems entered in the Spanish and English lexical sample tasks. These are all based on standard machine learning algorithms that induce classifiers from sense-tagged training text where the context in which ambiguous words occur are represented by simple lexical features. These are highly portable, robust methods that can serve as a foundation for more tailored approaches.",Machine Learning with Lexical Features: The Duluth Approach to SENSEVAL-2,"This paper describes the sixteen Duluth entries in the SENSEVAL-2 comparative exercise among word sense disambiguation systems. There were eight pairs of Duluth systems entered in the Spanish and English lexical sample tasks. These are all based on standard machine learning algorithms that induce classifiers from sense-tagged training text where the context in which ambiguous words occur are represented by simple lexical features. These are highly portable, robust methods that can serve as a foundation for more tailored approaches.",This work has been partially supported by a National Science Foundation Faculty Early CAREER Development award (#0092784). The Bigram Statistics Package and Sense-Tools have been implemented by Satanjeev Banerjee.,"Machine Learning with Lexical Features: The Duluth Approach to SENSEVAL-2. This paper describes the sixteen Duluth entries in the SENSEVAL-2 comparative exercise among word sense disambiguation systems. There were eight pairs of Duluth systems entered in the Spanish and English lexical sample tasks. These are all based on standard machine learning algorithms that induce classifiers from sense-tagged training text where the context in which ambiguous words occur are represented by simple lexical features. These are highly portable, robust methods that can serve as a foundation for more tailored approaches.",2001
huang-etal-2009-bilingually,https://aclanthology.org/D09-1127,0,,,,,,,"Bilingually-Constrained (Monolingual) Shift-Reduce Parsing. Jointly parsing two languages has been shown to improve accuracies on either or both sides. However, its search space is much bigger than the monolingual case, forcing existing approaches to employ complicated modeling and crude approximations. Here we propose a much simpler alternative, bilingually-constrained monolingual parsing, where a source-language parser learns to exploit reorderings as additional observation, but not bothering to build the target-side tree as well. We show specifically how to enhance a shift-reduce dependency parser with alignment features to resolve shift-reduce conflicts. Experiments on the bilingual portion of Chinese Treebank show that, with just 3 bilingual features, we can improve parsing accuracies by 0.6% (absolute) for both English and Chinese over a state-of-the-art baseline, with negligible (∼6%) efficiency overhead, thus much faster than biparsing.",Bilingually-Constrained (Monolingual) Shift-Reduce Parsing,"Jointly parsing two languages has been shown to improve accuracies on either or both sides. However, its search space is much bigger than the monolingual case, forcing existing approaches to employ complicated modeling and crude approximations. Here we propose a much simpler alternative, bilingually-constrained monolingual parsing, where a source-language parser learns to exploit reorderings as additional observation, but not bothering to build the target-side tree as well. We show specifically how to enhance a shift-reduce dependency parser with alignment features to resolve shift-reduce conflicts. Experiments on the bilingual portion of Chinese Treebank show that, with just 3 bilingual features, we can improve parsing accuracies by 0.6% (absolute) for both English and Chinese over a state-of-the-art baseline, with negligible (∼6%) efficiency overhead, thus much faster than biparsing.",Bilingually-Constrained (Monolingual) Shift-Reduce Parsing,"Jointly parsing two languages has been shown to improve accuracies on either or both sides. However, its search space is much bigger than the monolingual case, forcing existing approaches to employ complicated modeling and crude approximations. Here we propose a much simpler alternative, bilingually-constrained monolingual parsing, where a source-language parser learns to exploit reorderings as additional observation, but not bothering to build the target-side tree as well. We show specifically how to enhance a shift-reduce dependency parser with alignment features to resolve shift-reduce conflicts. Experiments on the bilingual portion of Chinese Treebank show that, with just 3 bilingual features, we can improve parsing accuracies by 0.6% (absolute) for both English and Chinese over a state-of-the-art baseline, with negligible (∼6%) efficiency overhead, thus much faster than biparsing.","We thank the anonymous reviewers for pointing to us references about ""arc-standard"". We also thank Aravind Joshi and Mitch Marcus for insights on PP attachment, Joakim Nivre for discussions on arc-eager, Yang Liu for suggestion to look at manual alignments, and David A. Smith for sending us his paper. The second and third authors were supported by National Natural Science Foundation of China, Contracts 60603095 and 60736014, and 863 State Key Project No. 2006AA010108. ","Bilingually-Constrained (Monolingual) Shift-Reduce Parsing. 
Jointly parsing two languages has been shown to improve accuracies on either or both sides. However, its search space is much bigger than the monolingual case, forcing existing approaches to employ complicated modeling and crude approximations. Here we propose a much simpler alternative, bilingually-constrained monolingual parsing, where a source-language parser learns to exploit reorderings as additional observation, but not bothering to build the target-side tree as well. We show specifically how to enhance a shift-reduce dependency parser with alignment features to resolve shift-reduce conflicts. Experiments on the bilingual portion of Chinese Treebank show that, with just 3 bilingual features, we can improve parsing accuracies by 0.6% (absolute) for both English and Chinese over a state-of-the-art baseline, with negligible (∼6%) efficiency overhead, thus much faster than biparsing.",2009
shirakawa-etal-2017-never,https://aclanthology.org/D17-1251,0,,,,,,,"Never Abandon Minorities: Exhaustive Extraction of Bursty Phrases on Microblogs Using Set Cover Problem. We propose a language-independent data-driven method to exhaustively extract bursty phrases of arbitrary forms (e.g., phrases other than simple noun phrases) from microblogs. The burst (i.e., the rapid increase of the occurrence) of a phrase causes the burst of overlapping N-grams including incomplete ones. In other words, bursty incomplete N-grams inevitably overlap bursty phrases. Thus, the proposed method performs the extraction of bursty phrases as the set cover problem in which all bursty N-grams are covered by a minimum set of bursty phrases. Experimental results using Japanese Twitter data showed that the proposed method outperformed word-based, noun phrase-based, and segmentation-based methods both in terms of accuracy and coverage.",Never Abandon Minorities: Exhaustive Extraction of Bursty Phrases on Microblogs Using Set Cover Problem,"We propose a language-independent data-driven method to exhaustively extract bursty phrases of arbitrary forms (e.g., phrases other than simple noun phrases) from microblogs. The burst (i.e., the rapid increase of the occurrence) of a phrase causes the burst of overlapping N-grams including incomplete ones. In other words, bursty incomplete N-grams inevitably overlap bursty phrases. Thus, the proposed method performs the extraction of bursty phrases as the set cover problem in which all bursty N-grams are covered by a minimum set of bursty phrases. Experimental results using Japanese Twitter data showed that the proposed method outperformed word-based, noun phrase-based, and segmentation-based methods both in terms of accuracy and coverage.",Never Abandon Minorities: Exhaustive Extraction of Bursty Phrases on Microblogs Using Set Cover Problem,"We propose a language-independent data-driven method to exhaustively extract bursty phrases of arbitrary forms (e.g., phrases other than simple noun phrases) from microblogs. The burst (i.e., the rapid increase of the occurrence) of a phrase causes the burst of overlapping N-grams including incomplete ones. In other words, bursty incomplete N-grams inevitably overlap bursty phrases. Thus, the proposed method performs the extraction of bursty phrases as the set cover problem in which all bursty N-grams are covered by a minimum set of bursty phrases. Experimental results using Japanese Twitter data showed that the proposed method outperformed word-based, noun phrase-based, and segmentation-based methods both in terms of accuracy and coverage.","This research is partially supported by the Grant-in-Aid for Scientific Research (A)(2620013) of the Ministry of Education, Culture, Sports, Science and Technology, Japan, and JST, Strategic International Collaborative Research Program, SICORP.","Never Abandon Minorities: Exhaustive Extraction of Bursty Phrases on Microblogs Using Set Cover Problem. We propose a language-independent data-driven method to exhaustively extract bursty phrases of arbitrary forms (e.g., phrases other than simple noun phrases) from microblogs. The burst (i.e., the rapid increase of the occurrence) of a phrase causes the burst of overlapping N-grams including incomplete ones. In other words, bursty incomplete N-grams inevitably overlap bursty phrases. Thus, the proposed method performs the extraction of bursty phrases as the set cover problem in which all bursty N-grams are covered by a minimum set of bursty phrases. 
Experimental results using Japanese Twitter data showed that the proposed method outperformed word-based, noun phrase-based, and segmentation-based methods both in terms of accuracy and coverage.",2017
mohammadi-etal-2017-native,https://aclanthology.org/W17-5022,0,,,,,,,"Native Language Identification Using a Mixture of Character and Word N-grams. Native language identification (NLI) is the task of determining an author's native language, based on a piece of his/her writing in a second language. In recent years, NLI has received much attention due to its challenging nature and its applications in language pedagogy and forensic linguistics. We participated in the NLI Shared Task 2017 under the name UT-DSP. In our effort to implement a method for native language identification, we made use of a mixture of character and word N-grams, and achieved an optimal F1-score of 0.7748, using both essay and speech transcription datasets.",Native Language Identification Using a Mixture of Character and Word N-grams,"Native language identification (NLI) is the task of determining an author's native language, based on a piece of his/her writing in a second language. In recent years, NLI has received much attention due to its challenging nature and its applications in language pedagogy and forensic linguistics. We participated in the NLI Shared Task 2017 under the name UT-DSP. In our effort to implement a method for native language identification, we made use of a mixture of character and word N-grams, and achieved an optimal F1-score of 0.7748, using both essay and speech transcription datasets.",Native Language Identification Using a Mixture of Character and Word N-grams,"Native language identification (NLI) is the task of determining an author's native language, based on a piece of his/her writing in a second language. In recent years, NLI has received much attention due to its challenging nature and its applications in language pedagogy and forensic linguistics. We participated in the NLI Shared Task 2017 under the name UT-DSP. In our effort to implement a method for native language identification, we made use of a mixture of character and word N-grams, and achieved an optimal F1-score of 0.7748, using both essay and speech transcription datasets.",,"Native Language Identification Using a Mixture of Character and Word N-grams. Native language identification (NLI) is the task of determining an author's native language, based on a piece of his/her writing in a second language. In recent years, NLI has received much attention due to its challenging nature and its applications in language pedagogy and forensic linguistics. We participated in the NLI Shared Task 2017 under the name UT-DSP. In our effort to implement a method for native language identification, we made use of a mixture of character and word N-grams, and achieved an optimal F1-score of 0.7748, using both essay and speech transcription datasets.",2017
goyal-etal-2012-distributed,https://aclanthology.org/C12-1062,0,,,,,,,"A Distributed Platform for Sanskrit Processing. Sanskrit, the classical language of India, presents specific challenges for computational linguistics: exact phonetic transcription in writing that obscures word boundaries, rich morphology and an enormous corpus, among others. Recent international cooperation has developed innovative solutions to these problems and significant resources for linguistic research. Solutions include efficient segmenting and tagging algorithms and dependency parsers based on constraint programming. The integration of lexical resources, text archives and linguistic software is achieved by distributed interoperable Web services. Resources include a morphological tagger and tagged corpus.",A Distributed Platform for {S}anskrit Processing,"Sanskrit, the classical language of India, presents specific challenges for computational linguistics: exact phonetic transcription in writing that obscures word boundaries, rich morphology and an enormous corpus, among others. Recent international cooperation has developed innovative solutions to these problems and significant resources for linguistic research. Solutions include efficient segmenting and tagging algorithms and dependency parsers based on constraint programming. The integration of lexical resources, text archives and linguistic software is achieved by distributed interoperable Web services. Resources include a morphological tagger and tagged corpus.",A Distributed Platform for Sanskrit Processing,"Sanskrit, the classical language of India, presents specific challenges for computational linguistics: exact phonetic transcription in writing that obscures word boundaries, rich morphology and an enormous corpus, among others. Recent international cooperation has developed innovative solutions to these problems and significant resources for linguistic research. Solutions include efficient segmenting and tagging algorithms and dependency parsers based on constraint programming. The integration of lexical resources, text archives and linguistic software is achieved by distributed interoperable Web services. Resources include a morphological tagger and tagged corpus.",,"A Distributed Platform for Sanskrit Processing. Sanskrit, the classical language of India, presents specific challenges for computational linguistics: exact phonetic transcription in writing that obscures word boundaries, rich morphology and an enormous corpus, among others. Recent international cooperation has developed innovative solutions to these problems and significant resources for linguistic research. Solutions include efficient segmenting and tagging algorithms and dependency parsers based on constraint programming. The integration of lexical resources, text archives and linguistic software is achieved by distributed interoperable Web services. Resources include a morphological tagger and tagged corpus.",2012
dusek-jurcicek-2015-training,https://aclanthology.org/P15-1044,0,,,,,,,"Training a Natural Language Generator From Unaligned Data. We present a novel syntax-based natural language generation system that is trainable from unaligned pairs of input meaning representations and output sentences. It is divided into sentence planning, which incrementally builds deep-syntactic dependency trees, and surface realization. Sentence planner is based on A* search with a perceptron ranker that uses novel differing subtree updates and a simple future promise estimation; surface realization uses a rule-based pipeline from the Treex NLP toolkit. Our first results show that training from unaligned data is feasible, the outputs of our generator are mostly fluent and relevant.",Training a Natural Language Generator From Unaligned Data,"We present a novel syntax-based natural language generation system that is trainable from unaligned pairs of input meaning representations and output sentences. It is divided into sentence planning, which incrementally builds deep-syntactic dependency trees, and surface realization. Sentence planner is based on A* search with a perceptron ranker that uses novel differing subtree updates and a simple future promise estimation; surface realization uses a rule-based pipeline from the Treex NLP toolkit. Our first results show that training from unaligned data is feasible, the outputs of our generator are mostly fluent and relevant.",Training a Natural Language Generator From Unaligned Data,"We present a novel syntax-based natural language generation system that is trainable from unaligned pairs of input meaning representations and output sentences. It is divided into sentence planning, which incrementally builds deep-syntactic dependency trees, and surface realization. Sentence planner is based on A* search with a perceptron ranker that uses novel differing subtree updates and a simple future promise estimation; surface realization uses a rule-based pipeline from the Treex NLP toolkit. Our first results show that training from unaligned data is feasible, the outputs of our generator are mostly fluent and relevant.","This work was funded by the Ministry of Education, Youth and Sports of the Czech Republic under the grant agreement LK11221 and core research funding, SVV project 260 104, and GAUK grant 2058214 of Charles University in Prague. It used language resources stored and distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2010013).The authors would like to thank Lukáš Žilka, Ondřej Plátek, and the anonymous reviewers for helpful comments on the draft.","Training a Natural Language Generator From Unaligned Data. We present a novel syntax-based natural language generation system that is trainable from unaligned pairs of input meaning representations and output sentences. It is divided into sentence planning, which incrementally builds deep-syntactic dependency trees, and surface realization. Sentence planner is based on A* search with a perceptron ranker that uses novel differing subtree updates and a simple future promise estimation; surface realization uses a rule-based pipeline from the Treex NLP toolkit. Our first results show that training from unaligned data is feasible, the outputs of our generator are mostly fluent and relevant.",2015
jiang-etal-2016-ecnu,https://aclanthology.org/S16-1058,0,,,,,,,"ECNU at SemEval-2016 Task 5: Extracting Effective Features from Relevant Fragments in Sentence for Aspect-Based Sentiment Analysis in Reviews. This paper describes our systems submitted to the Sentence-level and Text-level Aspect-Based Sentiment Analysis (ABSA) task (i.e., Task 5) in SemEval-2016. The task involves two phases, namely, Aspect Detection phase and Sentiment Polarity Classification phase. We participated in the second phase of both subtasks in laptop and restaurant domains, which focuses on the sentiment analysis based on the given aspect. In this task, we extracted four types of features (i.e., Sentiment Lexicon Features, Linguistic Features, Topic Model Features and Word2vec Feature) from certain fragments related to aspect rather than the whole sentence. Then the proposed features are fed into supervised classifiers for sentiment analysis. Our submissions rank above average.",{ECNU} at {S}em{E}val-2016 Task 5: Extracting Effective Features from Relevant Fragments in Sentence for Aspect-Based Sentiment Analysis in Reviews,"This paper describes our systems submitted to the Sentence-level and Text-level Aspect-Based Sentiment Analysis (ABSA) task (i.e., Task 5) in SemEval-2016. The task involves two phases, namely, Aspect Detection phase and Sentiment Polarity Classification phase. We participated in the second phase of both subtasks in laptop and restaurant domains, which focuses on the sentiment analysis based on the given aspect. In this task, we extracted four types of features (i.e., Sentiment Lexicon Features, Linguistic Features, Topic Model Features and Word2vec Feature) from certain fragments related to aspect rather than the whole sentence. Then the proposed features are fed into supervised classifiers for sentiment analysis. Our submissions rank above average.",ECNU at SemEval-2016 Task 5: Extracting Effective Features from Relevant Fragments in Sentence for Aspect-Based Sentiment Analysis in Reviews,"This paper describes our systems submitted to the Sentence-level and Text-level Aspect-Based Sentiment Analysis (ABSA) task (i.e., Task 5) in SemEval-2016. The task involves two phases, namely, Aspect Detection phase and Sentiment Polarity Classification phase. We participated in the second phase of both subtasks in laptop and restaurant domains, which focuses on the sentiment analysis based on the given aspect. In this task, we extracted four types of features (i.e., Sentiment Lexicon Features, Linguistic Features, Topic Model Features and Word2vec Feature) from certain fragments related to aspect rather than the whole sentence. Then the proposed features are fed into supervised classifiers for sentiment analysis. Our submissions rank above average.","This research is supported by grants from Science and Technology Commission of Shanghai Municipality (14DZ2260800 and 15ZR1410700), Shanghai Collaborative Innovation Center of Trustworthy Software for Internet of Things (ZF1213).","ECNU at SemEval-2016 Task 5: Extracting Effective Features from Relevant Fragments in Sentence for Aspect-Based Sentiment Analysis in Reviews. This paper describes our systems submitted to the Sentence-level and Text-level Aspect-Based Sentiment Analysis (ABSA) task (i.e., Task 5) in SemEval-2016. The task involves two phases, namely, Aspect Detection phase and Sentiment Polarity Classification phase. 
We participated in the second phase of both subtasks in laptop and restaurant domains, which focuses on the sentiment analysis based on the given aspect. In this task, we extracted four types of features (i.e., Sentiment Lexicon Features, Linguistic Features, Topic Model Features and Word2vec Feature) from certain fragments related to aspect rather than the whole sentence. Then the proposed features are fed into supervised classifiers for sentiment analysis. Our submissions rank above average.",2016
takehisa-2016-possessor,https://aclanthology.org/Y16-3014,0,,,,,,,"On the Possessor Interpretation of Non-Agentive Subjects. It has been observed that the relation of possession contributes to the formation of socalled adversity causatives, whose subject is understood as a possessor of an object referent. This interpretation is reflected at face value in some studies, and it is assumed there that the subject argument is introduced as a possessor in syntax. This paper addresses the question of whether the observed relation should be directly encoded as such and argues that the subject argument is introduced as merely an event participant whose manner is underspecified. Moreover, it argues that the possessor interpretation arises from inference based on both linguistic and extralinguistic contexts, such as the presence of a possessum argument. This view is implemented as an analysis making use of a kind of applicative head (Pylkkänen, 2008) in conjunction with the postsyntactic inferential strategy (Rivero, 2004). 1 The following abbreviations are used: ACC = accusative, CAUS, C = causative, CL = classifier, COP = copula, DAT = dative, DV = dummy verb, GEN = genitive, INCH, I = inchoative, INST = instrumental, LOC = locative, NEG = negative, NML = nominalizer, NPST = nonpast, PASS = passive, pro = null pronoun, PST = past, TOP = topic, ¥verb = verbal root. (1) Taroo 1-ga kare 1-no/ zibun 1-no/Ø 1 T.-NOM he-GEN/ self-GEN/ pro ude-o or-Ø-ta (>ot-ta) arm-ACC ¥break-CAUS-PST 'Taroo broke his arm.' That the ambiguity is real can be shown by the sentence in (2), where the second conjunct serves to ensure the subject is not an agent. (2) Taroo 1-ga kare 1-no/ zibun 1-no/Ø 1 T.-NOM he-GEN/ self-GEN/ pro ude-o or-Ø-ta (>ot-ta) kedo, arm-ACC ¥break-CAUS-PST but zibun 1-de-wa or-Ø-anak-at-ta self-INST-TOP break-CAUS-NEG-DV-PST 'Taroo broke his arm, but he didn't break it himself.'",On the Possessor Interpretation of Non-Agentive Subjects,"It has been observed that the relation of possession contributes to the formation of socalled adversity causatives, whose subject is understood as a possessor of an object referent. This interpretation is reflected at face value in some studies, and it is assumed there that the subject argument is introduced as a possessor in syntax. This paper addresses the question of whether the observed relation should be directly encoded as such and argues that the subject argument is introduced as merely an event participant whose manner is underspecified. Moreover, it argues that the possessor interpretation arises from inference based on both linguistic and extralinguistic contexts, such as the presence of a possessum argument. This view is implemented as an analysis making use of a kind of applicative head (Pylkkänen, 2008) in conjunction with the postsyntactic inferential strategy (Rivero, 2004). 1 The following abbreviations are used: ACC = accusative, CAUS, C = causative, CL = classifier, COP = copula, DAT = dative, DV = dummy verb, GEN = genitive, INCH, I = inchoative, INST = instrumental, LOC = locative, NEG = negative, NML = nominalizer, NPST = nonpast, PASS = passive, pro = null pronoun, PST = past, TOP = topic, ¥verb = verbal root. (1) Taroo 1-ga { kare 1-no/ zibun 1-no/Ø 1 } T.-NOM he-GEN/ self-GEN/ pro ude-o or-Ø-ta (>ot-ta) arm-ACC ¥break-CAUS-PST 'Taroo broke his arm.' That the ambiguity is real can be shown by the sentence in (2), where the second conjunct serves to ensure the subject is not an agent. 
(2) Taroo 1-ga { kare 1-no/ zibun 1-no/Ø 1 } T.-NOM he-GEN/ self-GEN/ pro ude-o or-Ø-ta (>ot-ta) kedo, arm-ACC ¥break-CAUS-PST but zibun 1-de-wa or-Ø-anak-at-ta self-INST-TOP break-CAUS-NEG-DV-PST 'Taroo broke his arm, but he didn't break it himself.'",On the Possessor Interpretation of Non-Agentive Subjects,"It has been observed that the relation of possession contributes to the formation of socalled adversity causatives, whose subject is understood as a possessor of an object referent. This interpretation is reflected at face value in some studies, and it is assumed there that the subject argument is introduced as a possessor in syntax. This paper addresses the question of whether the observed relation should be directly encoded as such and argues that the subject argument is introduced as merely an event participant whose manner is underspecified. Moreover, it argues that the possessor interpretation arises from inference based on both linguistic and extralinguistic contexts, such as the presence of a possessum argument. This view is implemented as an analysis making use of a kind of applicative head (Pylkkänen, 2008) in conjunction with the postsyntactic inferential strategy (Rivero, 2004). 1 The following abbreviations are used: ACC = accusative, CAUS, C = causative, CL = classifier, COP = copula, DAT = dative, DV = dummy verb, GEN = genitive, INCH, I = inchoative, INST = instrumental, LOC = locative, NEG = negative, NML = nominalizer, NPST = nonpast, PASS = passive, pro = null pronoun, PST = past, TOP = topic, ¥verb = verbal root. (1) Taroo 1-ga kare 1-no/ zibun 1-no/Ø 1 T.-NOM he-GEN/ self-GEN/ pro ude-o or-Ø-ta (>ot-ta) arm-ACC ¥break-CAUS-PST 'Taroo broke his arm.' That the ambiguity is real can be shown by the sentence in (2), where the second conjunct serves to ensure the subject is not an agent. (2) Taroo 1-ga kare 1-no/ zibun 1-no/Ø 1 T.-NOM he-GEN/ self-GEN/ pro ude-o or-Ø-ta (>ot-ta) kedo, arm-ACC ¥break-CAUS-PST but zibun 1-de-wa or-Ø-anak-at-ta self-INST-TOP break-CAUS-NEG-DV-PST 'Taroo broke his arm, but he didn't break it himself.'","I am grateful to Chigusa Morita and three anonymous reviewers for their invaluable comments, which helped clarify the manuscript. I am solely responsible for any errors and inadequacies contained herein.","On the Possessor Interpretation of Non-Agentive Subjects. It has been observed that the relation of possession contributes to the formation of socalled adversity causatives, whose subject is understood as a possessor of an object referent. This interpretation is reflected at face value in some studies, and it is assumed there that the subject argument is introduced as a possessor in syntax. This paper addresses the question of whether the observed relation should be directly encoded as such and argues that the subject argument is introduced as merely an event participant whose manner is underspecified. Moreover, it argues that the possessor interpretation arises from inference based on both linguistic and extralinguistic contexts, such as the presence of a possessum argument. This view is implemented as an analysis making use of a kind of applicative head (Pylkkänen, 2008) in conjunction with the postsyntactic inferential strategy (Rivero, 2004). 
1 The following abbreviations are used: ACC = accusative, CAUS, C = causative, CL = classifier, COP = copula, DAT = dative, DV = dummy verb, GEN = genitive, INCH, I = inchoative, INST = instrumental, LOC = locative, NEG = negative, NML = nominalizer, NPST = nonpast, PASS = passive, pro = null pronoun, PST = past, TOP = topic, ¥verb = verbal root. (1) Taroo 1-ga kare 1-no/ zibun 1-no/Ø 1 T.-NOM he-GEN/ self-GEN/ pro ude-o or-Ø-ta (>ot-ta) arm-ACC ¥break-CAUS-PST 'Taroo broke his arm.' That the ambiguity is real can be shown by the sentence in (2), where the second conjunct serves to ensure the subject is not an agent. (2) Taroo 1-ga kare 1-no/ zibun 1-no/Ø 1 T.-NOM he-GEN/ self-GEN/ pro ude-o or-Ø-ta (>ot-ta) kedo, arm-ACC ¥break-CAUS-PST but zibun 1-de-wa or-Ø-anak-at-ta self-INST-TOP break-CAUS-NEG-DV-PST 'Taroo broke his arm, but he didn't break it himself.'",2016
wang-etal-2020-neural,https://aclanthology.org/2020.aacl-main.21,0,,,,,,,"Neural Gibbs Sampling for Joint Event Argument Extraction. Event Argument Extraction (EAE) aims at predicting event argument roles of entities in text, which is a crucial subtask and bottleneck of event extraction. Existing EAE methods either extract each event argument roles independently or sequentially, which cannot adequately model the joint probability distribution among event arguments and their roles. In this paper, we propose a Bayesian model named Neural Gibbs Sampling (NGS) to jointly extract event arguments. Specifically, we train two neural networks to model the prior distribution and conditional distribution over event arguments respectively and then use Gibbs sampling to approximate the joint distribution with the learned distributions. For overcoming the shortcoming of the high complexity of the original Gibbs sampling algorithm, we further apply simulated annealing to efficiently estimate the joint probability distribution over event arguments and make predictions. We conduct experiments on the two widely-used benchmark datasets ACE 2005 and TAC KBP 2016. The Experimental results show that our NGS model can achieve comparable results to existing state-of-the-art EAE methods. The source code can be obtained from https:// github.com/THU-KEG/NGS.",{N}eural {G}ibbs {S}ampling for {J}oint {E}vent {A}rgument {E}xtraction,"Event Argument Extraction (EAE) aims at predicting event argument roles of entities in text, which is a crucial subtask and bottleneck of event extraction. Existing EAE methods either extract each event argument roles independently or sequentially, which cannot adequately model the joint probability distribution among event arguments and their roles. In this paper, we propose a Bayesian model named Neural Gibbs Sampling (NGS) to jointly extract event arguments. Specifically, we train two neural networks to model the prior distribution and conditional distribution over event arguments respectively and then use Gibbs sampling to approximate the joint distribution with the learned distributions. For overcoming the shortcoming of the high complexity of the original Gibbs sampling algorithm, we further apply simulated annealing to efficiently estimate the joint probability distribution over event arguments and make predictions. We conduct experiments on the two widely-used benchmark datasets ACE 2005 and TAC KBP 2016. The Experimental results show that our NGS model can achieve comparable results to existing state-of-the-art EAE methods. The source code can be obtained from https:// github.com/THU-KEG/NGS.",Neural Gibbs Sampling for Joint Event Argument Extraction,"Event Argument Extraction (EAE) aims at predicting event argument roles of entities in text, which is a crucial subtask and bottleneck of event extraction. Existing EAE methods either extract each event argument roles independently or sequentially, which cannot adequately model the joint probability distribution among event arguments and their roles. In this paper, we propose a Bayesian model named Neural Gibbs Sampling (NGS) to jointly extract event arguments. Specifically, we train two neural networks to model the prior distribution and conditional distribution over event arguments respectively and then use Gibbs sampling to approximate the joint distribution with the learned distributions. 
For overcoming the shortcoming of the high complexity of the original Gibbs sampling algorithm, we further apply simulated annealing to efficiently estimate the joint probability distribution over event arguments and make predictions. We conduct experiments on the two widely-used benchmark datasets ACE 2005 and TAC KBP 2016. The Experimental results show that our NGS model can achieve comparable results to existing state-of-the-art EAE methods. The source code can be obtained from https:// github.com/THU-KEG/NGS.","We thank Hedong (Ben) Hou for his help in the mathematical proof. This work is supported by the Key-Area Research and Development Program of Guangdong Province (2019B010153002), NSFC Key Projects (U1736204, 61533018), a grant from Institute for Guo Qiang, Tsinghua University (2019GQB0003) and THUNUS NExT Co-Lab. This work is also supported by the Pattern Recognition Center, WeChat AI, Tencent Inc. Xiaozhi Wang is supported by Tsinghua University Initiative Scientific Research Program.","Neural Gibbs Sampling for Joint Event Argument Extraction. Event Argument Extraction (EAE) aims at predicting event argument roles of entities in text, which is a crucial subtask and bottleneck of event extraction. Existing EAE methods either extract each event argument roles independently or sequentially, which cannot adequately model the joint probability distribution among event arguments and their roles. In this paper, we propose a Bayesian model named Neural Gibbs Sampling (NGS) to jointly extract event arguments. Specifically, we train two neural networks to model the prior distribution and conditional distribution over event arguments respectively and then use Gibbs sampling to approximate the joint distribution with the learned distributions. For overcoming the shortcoming of the high complexity of the original Gibbs sampling algorithm, we further apply simulated annealing to efficiently estimate the joint probability distribution over event arguments and make predictions. We conduct experiments on the two widely-used benchmark datasets ACE 2005 and TAC KBP 2016. The Experimental results show that our NGS model can achieve comparable results to existing state-of-the-art EAE methods. The source code can be obtained from https:// github.com/THU-KEG/NGS.",2020
tan-etal-2019-expressing,https://aclanthology.org/P19-1182,0,,,,,,,"Expressing Visual Relationships via Language. Describing images with text is a fundamental problem in vision-language research. Current studies in this domain mostly focus on single image captioning. However, in various real applications (e.g., image editing, difference interpretation, and retrieval), generating relational captions for two images, can also be very useful. This important problem has not been explored mostly due to lack of datasets and effective models. To push forward the research in this direction, we first introduce a new language-guided image editing dataset that contains a large number of real image pairs with corresponding editing instructions. We then propose a new relational speaker model based on an encoder-decoder architecture with static relational attention and sequential multi-head attention. We also extend the model with dynamic relational attention, which calculates visual alignment while decoding. Our models are evaluated on our newly collected and two public datasets consisting of image pairs annotated with relationship sentences. Experimental results, based on both automatic and human evaluation, demonstrate that our model outperforms all baselines and existing methods on all the datasets. 1",Expressing Visual Relationships via Language,"Describing images with text is a fundamental problem in vision-language research. Current studies in this domain mostly focus on single image captioning. However, in various real applications (e.g., image editing, difference interpretation, and retrieval), generating relational captions for two images, can also be very useful. This important problem has not been explored mostly due to lack of datasets and effective models. To push forward the research in this direction, we first introduce a new language-guided image editing dataset that contains a large number of real image pairs with corresponding editing instructions. We then propose a new relational speaker model based on an encoder-decoder architecture with static relational attention and sequential multi-head attention. We also extend the model with dynamic relational attention, which calculates visual alignment while decoding. Our models are evaluated on our newly collected and two public datasets consisting of image pairs annotated with relationship sentences. Experimental results, based on both automatic and human evaluation, demonstrate that our model outperforms all baselines and existing methods on all the datasets. 1",Expressing Visual Relationships via Language,"Describing images with text is a fundamental problem in vision-language research. Current studies in this domain mostly focus on single image captioning. However, in various real applications (e.g., image editing, difference interpretation, and retrieval), generating relational captions for two images, can also be very useful. This important problem has not been explored mostly due to lack of datasets and effective models. To push forward the research in this direction, we first introduce a new language-guided image editing dataset that contains a large number of real image pairs with corresponding editing instructions. We then propose a new relational speaker model based on an encoder-decoder architecture with static relational attention and sequential multi-head attention. We also extend the model with dynamic relational attention, which calculates visual alignment while decoding. 
Our models are evaluated on our newly collected and two public datasets consisting of image pairs annotated with relationship sentences. Experimental results, based on both automatic and human evaluation, demonstrate that our model outperforms all baselines and existing methods on all the datasets. 1","We thank the reviewers for their helpful comments and Nham Le for helping with the initial data collection. This work was supported by Adobe, ARO-YIP Award #W911NF-18-1-0336, and faculty awards from Google, Facebook, and Salesforce. The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency.","Expressing Visual Relationships via Language. Describing images with text is a fundamental problem in vision-language research. Current studies in this domain mostly focus on single image captioning. However, in various real applications (e.g., image editing, difference interpretation, and retrieval), generating relational captions for two images, can also be very useful. This important problem has not been explored mostly due to lack of datasets and effective models. To push forward the research in this direction, we first introduce a new language-guided image editing dataset that contains a large number of real image pairs with corresponding editing instructions. We then propose a new relational speaker model based on an encoder-decoder architecture with static relational attention and sequential multi-head attention. We also extend the model with dynamic relational attention, which calculates visual alignment while decoding. Our models are evaluated on our newly collected and two public datasets consisting of image pairs annotated with relationship sentences. Experimental results, based on both automatic and human evaluation, demonstrate that our model outperforms all baselines and existing methods on all the datasets. 1",2019
hao-etal-2019-modeling,https://aclanthology.org/N19-1122,0,,,,,,,"Modeling Recurrence for Transformer. Recently, the Transformer model (Vaswani et al., 2017) that is based solely on attention mechanisms, has advanced the state-of-the-art on various machine translation tasks. However, recent studies reveal that the lack of recurrence hinders its further improvement of translation capacity (Chen et al., 2018; Dehghani et al., 2019). In response to this problem, we propose to directly model recurrence for Transformer with an additional recurrence encoder. In addition to the standard recurrent neural network, we introduce a novel attentive recurrent network to leverage the strengths of both attention and recurrent networks. Experimental results on the widely-used WMT14 English⇒German and WMT17 Chinese⇒English translation tasks demonstrate the effectiveness of the proposed approach. Our studies also reveal that the proposed model benefits from a shortcut that bridges the source and target sequences with a single recurrent layer, which outperforms its deep counterpart.",Modeling Recurrence for Transformer,"Recently, the Transformer model (Vaswani et al., 2017) that is based solely on attention mechanisms, has advanced the state-of-the-art on various machine translation tasks. However, recent studies reveal that the lack of recurrence hinders its further improvement of translation capacity (Chen et al., 2018; Dehghani et al., 2019). In response to this problem, we propose to directly model recurrence for Transformer with an additional recurrence encoder. In addition to the standard recurrent neural network, we introduce a novel attentive recurrent network to leverage the strengths of both attention and recurrent networks. Experimental results on the widely-used WMT14 English⇒German and WMT17 Chinese⇒English translation tasks demonstrate the effectiveness of the proposed approach. Our studies also reveal that the proposed model benefits from a shortcut that bridges the source and target sequences with a single recurrent layer, which outperforms its deep counterpart.",Modeling Recurrence for Transformer,"Recently, the Transformer model (Vaswani et al., 2017) that is based solely on attention mechanisms, has advanced the state-of-the-art on various machine translation tasks. However, recent studies reveal that the lack of recurrence hinders its further improvement of translation capacity (Chen et al., 2018; Dehghani et al., 2019). In response to this problem, we propose to directly model recurrence for Transformer with an additional recurrence encoder. In addition to the standard recurrent neural network, we introduce a novel attentive recurrent network to leverage the strengths of both attention and recurrent networks. Experimental results on the widely-used WMT14 English⇒German and WMT17 Chinese⇒English translation tasks demonstrate the effectiveness of the proposed approach. Our studies also reveal that the proposed model benefits from a shortcut that bridges the source and target sequences with a single recurrent layer, which outperforms its deep counterpart.",Acknowledgments J.Z. was supported by the National Institute of General Medical Sciences of the National Institute of Health under award number R01GM126558. We thank the anonymous reviewers for their insightful comments.,"Modeling Recurrence for Transformer. Recently, the Transformer model (Vaswani et al., 2017) that is based solely on attention mechanisms, has advanced the state-of-the-art on various machine translation tasks. 
However, recent studies reveal that the lack of recurrence hinders its further improvement of translation capacity (Chen et al., 2018; Dehghani et al., 2019). In response to this problem, we propose to directly model recurrence for Transformer with an additional recurrence encoder. In addition to the standard recurrent neural network, we introduce a novel attentive recurrent network to leverage the strengths of both attention and recurrent networks. Experimental results on the widely-used WMT14 English⇒German and WMT17 Chinese⇒English translation tasks demonstrate the effectiveness of the proposed approach. Our studies also reveal that the proposed model benefits from a shortcut that bridges the source and target sequences with a single recurrent layer, which outperforms its deep counterpart.",2019
gustavson-1981-forarbeten,https://aclanthology.org/W81-0119,0,,,,,,,"F\""orarbeten till en datoriserad runordbok (Preliminary work for a computerized rune lexicon) [In Swedish]. I det följande beskrivs arbetet med att upprätta ett ADB-baserat register över det språkliga materialet i Sveriges runinskrifter. Det skall också tjäna som en utgångspunkt för en planerad runordbok. Ur datorteknisk synpunkt kan det vara av intresse, eftersom det bygger på ett mikrodatorsystem och tillämpning av interaktiva program, som medger direktkommunikation i klartext. Registrets innehåll kommer huvudsakligen att bygga på innehållet i inskrifterna i seriverket Sveriges runinskrifter (1900-). Registret är tänkt att bestå av två större delar: ett ordregister och ett register över de enskilda inskrifterna. Till ordregistret knyts, om så är lämpligt och möjligt, ett namnregister över personnamnen och ortnamnen i runinskrifterna.
Förutom att registren kommer att ligga lagrade för löpande ADB-behandling blir de också utgångspunkt för tryckta förteckningar, bl a i form av den nämnda runordboken.","F{\""o}rarbeten till en datoriserad runordbok (Preliminary work for a computerized rune lexicon) [In {S}wedish]","I det följande beskrivs arbetet med att upprätta ett ADB-baserat register över det språkliga materialet i Sveriges runinskrifter. Det skall också tjäna som en utgångspunkt för en planerad runordbok. Ur datorteknisk synpunkt kan det vara av intresse, eftersom det bygger på ett mikrodatorsystem och tillämpning av interaktiva program, som medger direktkommunikation i klartext. Registrets innehåll kommer huvudsakligen att bygga på innehållet i inskrifterna i seriverket Sveriges runinskrifter (1900-). Registret är tänkt att bestå av två större delar: ett ordregister och ett register över de enskilda inskrifterna. Till ordregistret knyts, om så är lämpligt och möjligt, ett namnregister över personnamnen och ortnamnen i runinskrifterna.
Förutom att registren kommer att ligga lagrade för löpande ADB-behandling blir de också utgångspunkt för tryckta förteckningar, bl a i form av den nämnda runordboken.","F\""orarbeten till en datoriserad runordbok (Preliminary work for a computerized rune lexicon) [In Swedish]","I det följande beskrivs arbetet med att upprätta ett ADB-baserat register över det språkliga materialet i Sveriges runinskrifter. Det skall också tjäna som en utgångspunkt för en planerad runordbok. Ur datorteknisk synpunkt kan det vara av intresse, eftersom det bygger på ett mikrodatorsystem och tillämpning av interaktiva program, som medger direktkommunikation i klartext. Registrets innehåll kommer huvudsakligen att bygga på innehållet i inskrifterna i seriverket Sveriges runinskrifter (1900-). Registret är tänkt att bestå av två större delar: ett ordregister och ett register över de enskilda inskrifterna. Till ordregistret knyts, om så är lämpligt och möjligt, ett namnregister över personnamnen och ortnamnen i runinskrifterna.
Förutom att registren kommer att ligga lagrade för löpande ADB-behandling blir de också utgångspunkt för tryckta förteckningar, bl a i form av den nämnda runordboken.",,"F\""orarbeten till en datoriserad runordbok (Preliminary work for a computerized rune lexicon) [In Swedish]. I det följande beskrivs arbetet med att upprätta ett ADB-baserat register över det språkliga materialet i Sveriges runinskrifter. Det skall också tjäna som en utgångspunkt för en planerad runordbok. Ur datorteknisk synpunkt kan det vara av intresse, eftersom det bygger på ett mikrodatorsystem och tillämpning av interaktiva program, som medger direktkommunikation i klartext. Registrets innehåll kommer huvudsakligen att bygga på innehållet i inskrifterna i seriverket Sveriges runinskrifter (1900-). Registret är tänkt att bestå av två större delar: ett ordregister och ett register över de enskilda inskrifterna. Till ordregistret knyts, om så är lämpligt och möjligt, ett namnregister över personnamnen och ortnamnen i runinskrifterna.
Förutom att registren kommer att ligga lagrade för löpande ADB-behandling blir de också utgångspunkt för tryckta förteckningar, bl a i form av den nämnda runordboken.",1981
yu-etal-2020-mooccube,https://aclanthology.org/2020.acl-main.285,1,,,,education,,,"MOOCCube: A Large-scale Data Repository for NLP Applications in MOOCs. The prosperity of Massive Open Online Courses (MOOCs) provides fodder for many NLP and AI research for education applications, e.g., course concept extraction, prerequisite relation discovery, etc. However, the publicly available datasets of MOOC are limited in size with few types of data, which hinders advanced models and novel attempts in related topics. Therefore, we present MOOCCube, a large-scale data repository of over 700 MOOC courses, 100k concepts, 8 million student behaviors with an external resource. Moreover, we conduct a prerequisite discovery task as an example application to show the potential of MOOCCube in facilitating relevant research. The data repository is now available at http: //moocdata.cn/data/MOOCCube.",{MOOCC}ube: A Large-scale Data Repository for {NLP} Applications in {MOOC}s,"The prosperity of Massive Open Online Courses (MOOCs) provides fodder for many NLP and AI research for education applications, e.g., course concept extraction, prerequisite relation discovery, etc. However, the publicly available datasets of MOOC are limited in size with few types of data, which hinders advanced models and novel attempts in related topics. Therefore, we present MOOCCube, a large-scale data repository of over 700 MOOC courses, 100k concepts, 8 million student behaviors with an external resource. Moreover, we conduct a prerequisite discovery task as an example application to show the potential of MOOCCube in facilitating relevant research. The data repository is now available at http: //moocdata.cn/data/MOOCCube.",MOOCCube: A Large-scale Data Repository for NLP Applications in MOOCs,"The prosperity of Massive Open Online Courses (MOOCs) provides fodder for many NLP and AI research for education applications, e.g., course concept extraction, prerequisite relation discovery, etc. However, the publicly available datasets of MOOC are limited in size with few types of data, which hinders advanced models and novel attempts in related topics. Therefore, we present MOOCCube, a large-scale data repository of over 700 MOOC courses, 100k concepts, 8 million student behaviors with an external resource. Moreover, we conduct a prerequisite discovery task as an example application to show the potential of MOOCCube in facilitating relevant research. The data repository is now available at http: //moocdata.cn/data/MOOCCube.","Zhiyuan Liu is supported by the National KeyResearch and Development Program of China(No. 2018YFB1004503), and others are supported by NSFC key project (U1736204, 61533018), a grant from Beijing Academy of Artificial Intelligence (BAAI2019ZD0502), a grant from the Insititute for Guo Qiang, Tsinghua University, THUNUS NExT Co-Lab, the Center for Massive Online Education of Tsinghua Univerisity, and XuetangX.","MOOCCube: A Large-scale Data Repository for NLP Applications in MOOCs. The prosperity of Massive Open Online Courses (MOOCs) provides fodder for many NLP and AI research for education applications, e.g., course concept extraction, prerequisite relation discovery, etc. However, the publicly available datasets of MOOC are limited in size with few types of data, which hinders advanced models and novel attempts in related topics. Therefore, we present MOOCCube, a large-scale data repository of over 700 MOOC courses, 100k concepts, 8 million student behaviors with an external resource. 
Moreover, we conduct a prerequisite discovery task as an example application to show the potential of MOOCCube in facilitating relevant research. The data repository is now available at http://moocdata.cn/data/MOOCCube.",2020
chen-chang-1998-topical,https://aclanthology.org/J98-1003,0,,,,,,,"Topical Clustering of MRD Senses Based on Information Retrieval Techniques. This paper describes a heuristic approach capable of automatically clustering senses in a machinereadable dictionary (MRD). Including these clusters in the MRD-based lexical database offers several positive benefits for word sense disambiguation (WSD). First, the clusters can be used as a coarser sense division, so unnecessarily fine sense distinction can be avoided. The clustered entries in the MRD can also be used as materials for supervised training to develop a WSD system. Furthermore, if the algorithm is run on several MRDs, the clusters also provide a means of linking different senses across multiple MRDs to create an integrated lexical database. An implementation of the method for clustering definition sentences in the Longman Dictionary of Contemporary English (LDOCE) is described. To this end, the topical word lists and topical cross-references in the Longman Lexicon of Contemporary English (LLOCE) are used. Nearly half of the senses in the LDOCE can be linked precisely to a relevant LLOCE topic using a simple heuristic. With the definitions of senses linked to the same topic viewed as a document, topical clustering of the MRD senses bears a striking resemblance to retrieval of relevant documents for a given query in information retrieval (IR) research. Relatively well-established IR techniques of weighting terms and ranking document relevancy are applied to find the topical clusters that are most relevant to the definition of each MRD sense. Finally, we describe an implemented version of the algorithms for the LDOCE and the LLOCE and assess the performance of the proposed approach in a series of experiments and evaluations.",Topical Clustering of {MRD} Senses Based on Information Retrieval Techniques,"This paper describes a heuristic approach capable of automatically clustering senses in a machinereadable dictionary (MRD). Including these clusters in the MRD-based lexical database offers several positive benefits for word sense disambiguation (WSD). First, the clusters can be used as a coarser sense division, so unnecessarily fine sense distinction can be avoided. The clustered entries in the MRD can also be used as materials for supervised training to develop a WSD system. Furthermore, if the algorithm is run on several MRDs, the clusters also provide a means of linking different senses across multiple MRDs to create an integrated lexical database. An implementation of the method for clustering definition sentences in the Longman Dictionary of Contemporary English (LDOCE) is described. To this end, the topical word lists and topical cross-references in the Longman Lexicon of Contemporary English (LLOCE) are used. Nearly half of the senses in the LDOCE can be linked precisely to a relevant LLOCE topic using a simple heuristic. With the definitions of senses linked to the same topic viewed as a document, topical clustering of the MRD senses bears a striking resemblance to retrieval of relevant documents for a given query in information retrieval (IR) research. Relatively well-established IR techniques of weighting terms and ranking document relevancy are applied to find the topical clusters that are most relevant to the definition of each MRD sense. 
Finally, we describe an implemented version of the algorithms for the LDOCE and the LLOCE and assess the performance of the proposed approach in a series of experiments and evaluations.",Topical Clustering of MRD Senses Based on Information Retrieval Techniques,"This paper describes a heuristic approach capable of automatically clustering senses in a machinereadable dictionary (MRD). Including these clusters in the MRD-based lexical database offers several positive benefits for word sense disambiguation (WSD). First, the clusters can be used as a coarser sense division, so unnecessarily fine sense distinction can be avoided. The clustered entries in the MRD can also be used as materials for supervised training to develop a WSD system. Furthermore, if the algorithm is run on several MRDs, the clusters also provide a means of linking different senses across multiple MRDs to create an integrated lexical database. An implementation of the method for clustering definition sentences in the Longman Dictionary of Contemporary English (LDOCE) is described. To this end, the topical word lists and topical cross-references in the Longman Lexicon of Contemporary English (LLOCE) are used. Nearly half of the senses in the LDOCE can be linked precisely to a relevant LLOCE topic using a simple heuristic. With the definitions of senses linked to the same topic viewed as a document, topical clustering of the MRD senses bears a striking resemblance to retrieval of relevant documents for a given query in information retrieval (IR) research. Relatively well-established IR techniques of weighting terms and ranking document relevancy are applied to find the topical clusters that are most relevant to the definition of each MRD sense. Finally, we describe an implemented version of the algorithms for the LDOCE and the LLOCE and assess the performance of the proposed approach in a series of experiments and evaluations.","This work is partially supported by ROC NSC grants 84-2213-E-007-023 and NSC 85-2213-E-007-042. We are grateful to Betty Teng and Nora Liu from Longman Asia Limited for the permission to use their lexicographical resources for research purposes. Finally, we would like to thank the anonymous reviewers for many constructive and insightful suggestions.","Topical Clustering of MRD Senses Based on Information Retrieval Techniques. This paper describes a heuristic approach capable of automatically clustering senses in a machinereadable dictionary (MRD). Including these clusters in the MRD-based lexical database offers several positive benefits for word sense disambiguation (WSD). First, the clusters can be used as a coarser sense division, so unnecessarily fine sense distinction can be avoided. The clustered entries in the MRD can also be used as materials for supervised training to develop a WSD system. Furthermore, if the algorithm is run on several MRDs, the clusters also provide a means of linking different senses across multiple MRDs to create an integrated lexical database. An implementation of the method for clustering definition sentences in the Longman Dictionary of Contemporary English (LDOCE) is described. To this end, the topical word lists and topical cross-references in the Longman Lexicon of Contemporary English (LLOCE) are used. Nearly half of the senses in the LDOCE can be linked precisely to a relevant LLOCE topic using a simple heuristic. 
With the definitions of senses linked to the same topic viewed as a document, topical clustering of the MRD senses bears a striking resemblance to retrieval of relevant documents for a given query in information retrieval (IR) research. Relatively well-established IR techniques of weighting terms and ranking document relevancy are applied to find the topical clusters that are most relevant to the definition of each MRD sense. Finally, we describe an implemented version of the algorithms for the LDOCE and the LLOCE and assess the performance of the proposed approach in a series of experiments and evaluations.",1998
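The abstract above frames topic assignment for dictionary senses as an IR problem: topic word lists act as documents and a sense definition acts as the query. The snippet below is a minimal stand-in using plain TF-IDF weighting and cosine ranking; the two topic word lists and the sense definition are invented, and the paper's actual weighting scheme, linking heuristics, and LDOCE/LLOCE data are not reproduced.

import math
from collections import Counter

# Toy stand-ins: each "topic" is the bag of words from a hypothetical topic word list,
# and the query is one dictionary-style sense definition.
topics = {
    "money": "coin bank pay price cash cheap money cost".split(),
    "water": "river bank shore stream flood water flow".split(),
}
definition = "land along the side of a river".split()

def tf_idf_vec(tokens, df, n_docs):
    # Term frequency weighted by a smoothed inverse document frequency.
    tf = Counter(tokens)
    return {t: tf[t] * math.log((1 + n_docs) / (1 + df[t])) for t in tf}

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Document frequencies computed over the topic "documents" plus the query.
docs = list(topics.values()) + [definition]
df = Counter(t for doc in docs for t in set(doc))
vecs = {name: tf_idf_vec(words, df, len(docs)) for name, words in topics.items()}
qvec = tf_idf_vec(definition, df, len(docs))

ranked = sorted(topics, key=lambda name: cosine(qvec, vecs[name]), reverse=True)
print(ranked[0])  # "water" ranks above "money" for this definition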
jagarlamudi-daume-iii-2012-low,https://aclanthology.org/N12-1088,0,,,,,,,"Low-Dimensional Discriminative Reranking. The accuracy of many natural language processing tasks can be improved by a reranking step, which involves selecting a single output from a list of candidate outputs generated by a baseline system. We propose a novel family of reranking algorithms based on learning separate low-dimensional embeddings of the task's input and output spaces. This embedding is learned in such a way that prediction becomes a low-dimensional nearest-neighbor search, which can be done computationally efficiently. A key quality of our approach is that feature engineering can be done separately on the input and output spaces; the relationship between inputs and outputs is learned automatically. Experiments on part-of-speech tagging task in four languages show significant improvements over a baseline decoder and existing reranking approaches.",Low-Dimensional Discriminative Reranking,"The accuracy of many natural language processing tasks can be improved by a reranking step, which involves selecting a single output from a list of candidate outputs generated by a baseline system. We propose a novel family of reranking algorithms based on learning separate low-dimensional embeddings of the task's input and output spaces. This embedding is learned in such a way that prediction becomes a low-dimensional nearest-neighbor search, which can be done computationally efficiently. A key quality of our approach is that feature engineering can be done separately on the input and output spaces; the relationship between inputs and outputs is learned automatically. Experiments on part-of-speech tagging task in four languages show significant improvements over a baseline decoder and existing reranking approaches.",Low-Dimensional Discriminative Reranking,"The accuracy of many natural language processing tasks can be improved by a reranking step, which involves selecting a single output from a list of candidate outputs generated by a baseline system. We propose a novel family of reranking algorithms based on learning separate low-dimensional embeddings of the task's input and output spaces. This embedding is learned in such a way that prediction becomes a low-dimensional nearest-neighbor search, which can be done computationally efficiently. A key quality of our approach is that feature engineering can be done separately on the input and output spaces; the relationship between inputs and outputs is learned automatically. Experiments on part-of-speech tagging task in four languages show significant improvements over a baseline decoder and existing reranking approaches.","We thank Zhongqiang Huang for providing the code for the baseline systems, Raghavendra Udupa and the anonymous reviewers for their insightful comments. This work is partially funded by NSF grants IIS-1153487 and IIS-1139909.","Low-Dimensional Discriminative Reranking. The accuracy of many natural language processing tasks can be improved by a reranking step, which involves selecting a single output from a list of candidate outputs generated by a baseline system. We propose a novel family of reranking algorithms based on learning separate low-dimensional embeddings of the task's input and output spaces. This embedding is learned in such a way that prediction becomes a low-dimensional nearest-neighbor search, which can be done computationally efficiently. 
A key quality of our approach is that feature engineering can be done separately on the input and output spaces; the relationship between inputs and outputs is learned automatically. Experiments on part-of-speech tagging task in four languages show significant improvements over a baseline decoder and existing reranking approaches.",2012
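The reranking idea summarized above turns prediction into a nearest-neighbor search between separately embedded input and output feature vectors. The sketch below shows only the prediction-time mechanics; the random projection matrices stand in for the discriminatively learned embeddings, and all feature definitions and dimensions are hypothetical.

import numpy as np

rng = np.random.default_rng(1)
d_in, d_out, k = 20, 12, 3  # feature sizes and embedding dimension (all hypothetical)

# In the paper these projections are learned; here they are random placeholders.
A = rng.normal(size=(d_in, k))   # projects input-side feature vectors
B = rng.normal(size=(d_out, k))  # projects output-side (candidate) feature vectors

def rerank(x_feats, candidate_feats):
    # Pick the candidate whose low-dimensional embedding is nearest to the input's embedding.
    zx = x_feats @ A
    zc = candidate_feats @ B
    dists = np.linalg.norm(zc - zx, axis=1)
    return int(dists.argmin())

x = rng.normal(size=d_in)                 # features of one input sentence
candidates = rng.normal(size=(5, d_out))  # features of 5 candidate outputs from a baseline system
print("selected candidate:", rerank(x, candidates))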
massung-etal-2016-meta,https://aclanthology.org/P16-4016,0,,,,,,,"MeTA: A Unified Toolkit for Text Retrieval and Analysis. META is developed to unite machine learning, information retrieval, and natural language processing in one easy-to-use toolkit. Its focus on indexing allows it to perform well on large datasets, supporting online classification and other out-of-core algorithms. META's liberal open source license encourages contributions, and its extensive online documentation, forum, and tutorials make this process straightforward. We run experiments and show META's performance is competitive with or better than existing software.",{M}e{TA}: A Unified Toolkit for Text Retrieval and Analysis,"META is developed to unite machine learning, information retrieval, and natural language processing in one easy-to-use toolkit. Its focus on indexing allows it to perform well on large datasets, supporting online classification and other out-of-core algorithms. META's liberal open source license encourages contributions, and its extensive online documentation, forum, and tutorials make this process straightforward. We run experiments and show META's performance is competitive with or better than existing software.",MeTA: A Unified Toolkit for Text Retrieval and Analysis,"META is developed to unite machine learning, information retrieval, and natural language processing in one easy-to-use toolkit. Its focus on indexing allows it to perform well on large datasets, supporting online classification and other out-of-core algorithms. META's liberal open source license encourages contributions, and its extensive online documentation, forum, and tutorials make this process straightforward. We run experiments and show META's performance is competitive with or better than existing software.",This material is based upon work supported by the NSF GRFP under Grant Number DGE-1144245. 22 ftp://largescale.ml.tu-berlin.de/largescale/ 23 It took 12m 24s to generate the index.,"MeTA: A Unified Toolkit for Text Retrieval and Analysis. META is developed to unite machine learning, information retrieval, and natural language processing in one easy-to-use toolkit. Its focus on indexing allows it to perform well on large datasets, supporting online classification and other out-of-core algorithms. META's liberal open source license encourages contributions, and its extensive online documentation, forum, and tutorials make this process straightforward. We run experiments and show META's performance is competitive with or better than existing software.",2016
kay-1987-machines,https://aclanthology.org/1987.mtsummit-1.21,0,,,,,,,"Machines and People in Translation. It is useful to distinguish a narrower and a wider use for the term ""machine translation"". The narrow sense is the more usual one. In this sense, the term refers to a batch process in which a text is given over to a machine from which, some time later, a result is collected which we think of as the output of the machine translation process. When we use the term in the wider sense, it includes all the process required to obtain final translation output on paper. In particular, the wider usage allows for the possibility of an interactive process involving people and machines.
Machine translation, narrowly conceived, is not appropriate for achieving engineering objectives. Machine translation, narrowly conceived, provides an extremely rich framework within which to conduct research on theoretical and computational linguistics, on cognitive modeling and, indeed, a variety of scientific problems. I believe that it provides the best view we can get of human cognitive performance, without introducing a perceptual component. When we learn more about vision, or other perceptual modalities, this situation may change. Machine translation, narrowly conceived, requires a solution to be found to almost every imaginable linguistic problem, and the solutions must be coherent with one another, so that it is a very demanding framework in which to work.",Machines and People in Translation,"It is useful to distinguish a narrower and a wider use for the term ""machine translation"". The narrow sense is the more usual one. In this sense, the term refers to a batch process in which a text is given over to a machine from which, some time later, a result is collected which we think of as the output of the machine translation process. When we use the term in the wider sense, it includes all the process required to obtain final translation output on paper. In particular, the wider usage allows for the possibility of an interactive process involving people and machines.
Machine translation, narrowly conceived, is not appropriate for achieving engineering objectives. Machine translation, narrowly conceived, provides an extremely rich framework within which to conduct research on theoretical and computational linguistics, on cognitive modeling and, indeed, a variety of scientific problems. I believe that it provides the best view we can get of human cognitive performance, without introducing a perceptual component. When we learn more about vision, or other perceptual modalities, this situation may change. Machine translation, narrowly conceived, requires a solution to be found to almost every imaginable linguistic problem, and the solutions must be coherent with one another, so that it is a very demanding framework in which to work.",Machines and People in Translation,"It is useful to distinguish a narrower and a wider use for the term ""machine translation"". The narrow sense is the more usual one. In this sense, the term refers to a batch process in which a text is given over to a machine from which, some time later, a result is collected which we think of as the output of the machine translation process. When we use the term in the wider sense, it includes all the process required to obtain final translation output on paper. In particular, the wider usage allows for the possibility of an interactive process involving people and machines.
Machine translation, narrowly conceived, is not appropriate for achieving engineering objectives. Machine translation, narrowly conceived, provides an extremely rich framework within which to conduct research on theoretical and computational linguistics, on cognitive modeling and, indeed, a variety of scientific problems. I believe that it provides the best view we can get of human cognitive performance, without introducing a perceptual component. When we learn more about vision, or other perceptual modalities, this situation may change. Machine translation, narrowly conceived, requires a solution to be found to almost every imaginable linguistic problem, and the solutions must be coherent with one another, so that it is a very demanding framework in which to work.",,"Machines and People in Translation. It is useful to distinguish a narrower and a wider use for the term ""machine translation"". The narrow sense is the more usual one. In this sense, the term refers to a batch process in which a text is given over to a machine from which, some time later, a result is collected which we think of as the output of the machine translation process. When we use the term in the wider sense, it includes all the process required to obtain final translation output on paper. In particular, the wider usage allows for the possibility of an interactive process involving people and machines.
Machine translation, narrowly conceived, is not appropriate for achieving engineering objectives. Machine translation, narrowly conceived, provides an extremely rich framework within which to conduct research on theoretical and computational linguistics, on cognitive modeling and, indeed, a variety of scientific problems. I believe that it provides the best view we can get of human cognitive performance, without introducing a perceptual component. When we learn more about vision, or other perceptual modalities, this situation may change. Machine translation, narrowly conceived, requires a solution to be found to almost every imaginable linguistic problem, and the solutions must be coherent with one another, so that it is a very demanding framework in which to work.",1987
pilault-etal-2020-extractive,https://aclanthology.org/2020.emnlp-main.748,0,,,,,,,"On Extractive and Abstractive Neural Document Summarization with Transformer Language Models. We present a method to produce abstractive summaries of long documents that exceed several thousand words via neural abstractive summarization. We perform a simple extractive step before generating a summary, which is then used to condition the transformer language model on relevant information before being tasked with generating a summary. We also show that this approach produces more abstractive summaries compared to prior work that employs a copy mechanism while still achieving higher ROUGE scores. We provide extensive comparisons with strong baseline methods, prior state of the art work as well as multiple variants of our approach including those using only transformers, only extractive techniques and combinations of the two. We examine these models using four different summarization tasks and datasets: arXiv papers, PubMed papers, the Newsroom and BigPatent datasets. We find that transformer based methods produce summaries with fewer n-gram copies, leading to n-gram copying statistics that are more similar to human generated abstracts. We include a human evaluation, finding that transformers are ranked highly for coherence and fluency, but purely extractive methods score higher for informativeness and relevance. We hope that these architectures and experiments may serve as strong points of comparison for future work. 1",On Extractive and Abstractive Neural Document Summarization with Transformer Language Models,"We present a method to produce abstractive summaries of long documents that exceed several thousand words via neural abstractive summarization. We perform a simple extractive step before generating a summary, which is then used to condition the transformer language model on relevant information before being tasked with generating a summary. We also show that this approach produces more abstractive summaries compared to prior work that employs a copy mechanism while still achieving higher ROUGE scores. We provide extensive comparisons with strong baseline methods, prior state of the art work as well as multiple variants of our approach including those using only transformers, only extractive techniques and combinations of the two. We examine these models using four different summarization tasks and datasets: arXiv papers, PubMed papers, the Newsroom and BigPatent datasets. We find that transformer based methods produce summaries with fewer n-gram copies, leading to n-gram copying statistics that are more similar to human generated abstracts. We include a human evaluation, finding that transformers are ranked highly for coherence and fluency, but purely extractive methods score higher for informativeness and relevance. We hope that these architectures and experiments may serve as strong points of comparison for future work. 1",On Extractive and Abstractive Neural Document Summarization with Transformer Language Models,"We present a method to produce abstractive summaries of long documents that exceed several thousand words via neural abstractive summarization. We perform a simple extractive step before generating a summary, which is then used to condition the transformer language model on relevant information before being tasked with generating a summary. 
We also show that this approach produces more abstractive summaries compared to prior work that employs a copy mechanism while still achieving higher ROUGE scores. We provide extensive comparisons with strong baseline methods, prior state of the art work as well as multiple variants of our approach including those using only transformers, only extractive techniques and combinations of the two. We examine these models using four different summarization tasks and datasets: arXiv papers, PubMed papers, the Newsroom and BigPatent datasets. We find that transformer based methods produce summaries with fewer n-gram copies, leading to n-gram copying statistics that are more similar to human generated abstracts. We include a human evaluation, finding that transformers are ranked highly for coherence and fluency, but purely extractive methods score higher for informativeness and relevance. We hope that these architectures and experiments may serve as strong points of comparison for future work. 1",1 Note: The abstract above was collaboratively written by the authors and one of the models presented in this paper based on an earlier draft of this paper.,"On Extractive and Abstractive Neural Document Summarization with Transformer Language Models. We present a method to produce abstractive summaries of long documents that exceed several thousand words via neural abstractive summarization. We perform a simple extractive step before generating a summary, which is then used to condition the transformer language model on relevant information before being tasked with generating a summary. We also show that this approach produces more abstractive summaries compared to prior work that employs a copy mechanism while still achieving higher ROUGE scores. We provide extensive comparisons with strong baseline methods, prior state of the art work as well as multiple variants of our approach including those using only transformers, only extractive techniques and combinations of the two. We examine these models using four different summarization tasks and datasets: arXiv papers, PubMed papers, the Newsroom and BigPatent datasets. We find that transformer based methods produce summaries with fewer n-gram copies, leading to n-gram copying statistics that are more similar to human generated abstracts. We include a human evaluation, finding that transformers are ranked highly for coherence and fluency, but purely extractive methods score higher for informativeness and relevance. We hope that these architectures and experiments may serve as strong points of comparison for future work. 1",2020
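The summarization pipeline described above runs an extractive step first and then conditions a transformer language model on the selected sentences. The sketch below shows only the shape of that pipeline: a crude frequency-based sentence selector followed by prompt assembly. The selector, the example document, and the "TL;DR:" conditioning string are all invented for illustration, and the final abstractive generation step (which needs a trained language model) is not run.

from collections import Counter
import re

def sentences(text):
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def extract(text, k=2):
    # Crude extractive step: keep the k sentences sharing the most vocabulary with the whole document.
    sents = sentences(text)
    doc_counts = Counter(w for s in sents for w in s.lower().split())
    def score(s):
        return sum(doc_counts[w] for w in set(s.lower().split()))
    return sorted(sents, key=score, reverse=True)[:k]

document = (
    "We study long document summarization. "
    "Long documents exceed the input budget of standard language models. "
    "An extractive step selects the most relevant sentences first. "
    "A language model then generates the abstract conditioned on that selection. "
    "The weather today is unrelated to the topic."
)

selected = extract(document, k=2)
prompt = " ".join(selected) + "\nTL;DR:"  # conditioning text handed to an abstractive LM (not run here)
print(prompt)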
neubig-etal-2018-xnmt,https://aclanthology.org/W18-1818,0,,,,,,,"XNMT: The eXtensible Neural Machine Translation Toolkit. This paper describes XNMT, the eXtensible Neural Machine Translation toolkit. XNMT distinguishes itself from other open-source NMT toolkits by its focus on modular code design, with the purpose of enabling fast iteration in research and replicable, reliable results. In this paper we describe the design of XNMT and its experiment configuration system, and demonstrate its utility on the tasks of machine translation, speech recognition, and multi-tasked machine translation/parsing. XNMT is available open-source at https://github.com/neulab/xnmt.",{XNMT}: The e{X}tensible Neural Machine Translation Toolkit,"This paper describes XNMT, the eXtensible Neural Machine Translation toolkit. XNMT distinguishes itself from other open-source NMT toolkits by its focus on modular code design, with the purpose of enabling fast iteration in research and replicable, reliable results. In this paper we describe the design of XNMT and its experiment configuration system, and demonstrate its utility on the tasks of machine translation, speech recognition, and multi-tasked machine translation/parsing. XNMT is available open-source at https://github.com/neulab/xnmt.",XNMT: The eXtensible Neural Machine Translation Toolkit,"This paper describes XNMT, the eXtensible Neural Machine Translation toolkit. XNMT distinguishes itself from other open-source NMT toolkits by its focus on modular code design, with the purpose of enabling fast iteration in research and replicable, reliable results. In this paper we describe the design of XNMT and its experiment configuration system, and demonstrate its utility on the tasks of machine translation, speech recognition, and multi-tasked machine translation/parsing. XNMT is available open-source at https://github.com/neulab/xnmt.","Part of the development of XNMT was performed at the Jelinek Summer Workshop in Speech and Language Technology (JSALT) ""Speaking Rosetta Stone"" project (Scharenborg et al., 2018) , and we are grateful to the JSALT organizers for the financial/logistical support, and also participants of the workshop for their feedback on XNMT as a tool.Parts of this work were sponsored by Defense Advanced Research Projects Agency Information Innovation Office (I2O). Program: Low Resource Languages for Emergent Incidents (LORELEI). Issued by DARPA/I2O under Contract No. HR0011-15-C-0114. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on.","XNMT: The eXtensible Neural Machine Translation Toolkit. This paper describes XNMT, the eXtensible Neural Machine Translation toolkit. XNMT distinguishes itself from other open-source NMT toolkits by its focus on modular code design, with the purpose of enabling fast iteration in research and replicable, reliable results. In this paper we describe the design of XNMT and its experiment configuration system, and demonstrate its utility on the tasks of machine translation, speech recognition, and multi-tasked machine translation/parsing. XNMT is available open-source at https://github.com/neulab/xnmt.",2018
scheible-schutze-2014-multi,https://aclanthology.org/E14-4039,0,,,,,,,"Multi-Domain Sentiment Relevance Classification with Automatic Representation Learning. Sentiment relevance (SR) aims at identifying content that does not contribute to sentiment analysis. Previously, automatic SR classification has been studied in a limited scope, using a single domain and feature augmentation techniques that require large hand-crafted databases. In this paper, we present experiments on SR classification with automatically learned feature representations on multiple domains. We show that a combination of transfer learning and in-task supervision using features learned unsupervisedly by the stacked denoising autoencoder significantly outperforms a bag-of-words baseline for in-domain and cross-domain classification.",Multi-Domain Sentiment Relevance Classification with Automatic Representation Learning,"Sentiment relevance (SR) aims at identifying content that does not contribute to sentiment analysis. Previously, automatic SR classification has been studied in a limited scope, using a single domain and feature augmentation techniques that require large hand-crafted databases. In this paper, we present experiments on SR classification with automatically learned feature representations on multiple domains. We show that a combination of transfer learning and in-task supervision using features learned unsupervisedly by the stacked denoising autoencoder significantly outperforms a bag-of-words baseline for in-domain and cross-domain classification.",Multi-Domain Sentiment Relevance Classification with Automatic Representation Learning,"Sentiment relevance (SR) aims at identifying content that does not contribute to sentiment analysis. Previously, automatic SR classification has been studied in a limited scope, using a single domain and feature augmentation techniques that require large hand-crafted databases. In this paper, we present experiments on SR classification with automatically learned feature representations on multiple domains. We show that a combination of transfer learning and in-task supervision using features learned unsupervisedly by the stacked denoising autoencoder significantly outperforms a bag-of-words baseline for in-domain and cross-domain classification.",,"Multi-Domain Sentiment Relevance Classification with Automatic Representation Learning. Sentiment relevance (SR) aims at identifying content that does not contribute to sentiment analysis. Previously, automatic SR classification has been studied in a limited scope, using a single domain and feature augmentation techniques that require large hand-crafted databases. In this paper, we present experiments on SR classification with automatically learned feature representations on multiple domains. We show that a combination of transfer learning and in-task supervision using features learned unsupervisedly by the stacked denoising autoencoder significantly outperforms a bag-of-words baseline for in-domain and cross-domain classification.",2014
stede-grishina-2016-anaphoricity,https://aclanthology.org/W16-0706,0,,,,,,,"Anaphoricity in Connectives: A Case Study on German. Anaphoric connectives are event anaphors (or abstract anaphors) that in addition convey a coherence relation holding between the antecedent and the host clause of the connective. Some of them carry an explicitly-anaphoric morpheme, others do not. We analysed the set of German connectives for this property and found that many have an additional nonconnective reading, where they serve as nominal anaphors. Furthermore, many connectives can have multiple senses, so altogether the processing of these words can involve substantial disambiguation. We study the problem for one specific German word, demzufolge, which can be taken as representative for a large group of similar words.",Anaphoricity in Connectives: A Case Study on {G}erman,"Anaphoric connectives are event anaphors (or abstract anaphors) that in addition convey a coherence relation holding between the antecedent and the host clause of the connective. Some of them carry an explicitly-anaphoric morpheme, others do not. We analysed the set of German connectives for this property and found that many have an additional nonconnective reading, where they serve as nominal anaphors. Furthermore, many connectives can have multiple senses, so altogether the processing of these words can involve substantial disambiguation. We study the problem for one specific German word, demzufolge, which can be taken as representative for a large group of similar words.",Anaphoricity in Connectives: A Case Study on German,"Anaphoric connectives are event anaphors (or abstract anaphors) that in addition convey a coherence relation holding between the antecedent and the host clause of the connective. Some of them carry an explicitly-anaphoric morpheme, others do not. We analysed the set of German connectives for this property and found that many have an additional nonconnective reading, where they serve as nominal anaphors. Furthermore, many connectives can have multiple senses, so altogether the processing of these words can involve substantial disambiguation. We study the problem for one specific German word, demzufolge, which can be taken as representative for a large group of similar words.","We thank Tatjana Scheffler and Erik Haegert for their help with corpus annotation, and the anonymous reviewers for their valuable suggestions on improving the paper.","Anaphoricity in Connectives: A Case Study on German. Anaphoric connectives are event anaphors (or abstract anaphors) that in addition convey a coherence relation holding between the antecedent and the host clause of the connective. Some of them carry an explicitly-anaphoric morpheme, others do not. We analysed the set of German connectives for this property and found that many have an additional nonconnective reading, where they serve as nominal anaphors. Furthermore, many connectives can have multiple senses, so altogether the processing of these words can involve substantial disambiguation. We study the problem for one specific German word, demzufolge, which can be taken as representative for a large group of similar words.",2016
goldwasser-zhang-2016-understanding,https://aclanthology.org/Q16-1038,0,,,,,,,"Understanding Satirical Articles Using Common-Sense. Automatic satire detection is a subtle text classification task, for machines and at times, even for humans. In this paper we argue that satire detection should be approached using common-sense inferences, rather than traditional text classification methods. We present a highly structured latent variable model capturing the required inferences. The model abstracts over the specific entities appearing in the articles, grouping them into generalized categories, thus allowing the model to adapt to previously unseen situations.",Understanding Satirical Articles Using Common-Sense,"Automatic satire detection is a subtle text classification task, for machines and at times, even for humans. In this paper we argue that satire detection should be approached using common-sense inferences, rather than traditional text classification methods. We present a highly structured latent variable model capturing the required inferences. The model abstracts over the specific entities appearing in the articles, grouping them into generalized categories, thus allowing the model to adapt to previously unseen situations.",Understanding Satirical Articles Using Common-Sense,"Automatic satire detection is a subtle text classification task, for machines and at times, even for humans. In this paper we argue that satire detection should be approached using common-sense inferences, rather than traditional text classification methods. We present a highly structured latent variable model capturing the required inferences. The model abstracts over the specific entities appearing in the articles, grouping them into generalized categories, thus allowing the model to adapt to previously unseen situations.",,"Understanding Satirical Articles Using Common-Sense. Automatic satire detection is a subtle text classification task, for machines and at times, even for humans. In this paper we argue that satire detection should be approached using common-sense inferences, rather than traditional text classification methods. We present a highly structured latent variable model capturing the required inferences. The model abstracts over the specific entities appearing in the articles, grouping them into generalized categories, thus allowing the model to adapt to previously unseen situations.",2016
dong-etal-2020-transformer,https://aclanthology.org/2020.figlang-1.38,0,,,,,,,"Transformer-based Context-aware Sarcasm Detection in Conversation Threads from Social Media. We present a transformer-based sarcasm detection model that accounts for the context from the entire conversation thread for more robust predictions. Our model uses deep transformer layers to perform multi-head attentions among the target utterance and the relevant context in the thread. The context-aware models are evaluated on two datasets from social media, Twitter and Reddit, and show 3.1% and 7.0% improvements over their baselines. Our best models give the F1-scores of 79.0% and 75.0% for the Twitter and Reddit datasets respectively, becoming one of the highest performing systems among 36 participants in this shared task.",Transformer-based Context-aware Sarcasm Detection in Conversation Threads from Social Media,"We present a transformer-based sarcasm detection model that accounts for the context from the entire conversation thread for more robust predictions. Our model uses deep transformer layers to perform multi-head attentions among the target utterance and the relevant context in the thread. The context-aware models are evaluated on two datasets from social media, Twitter and Reddit, and show 3.1% and 7.0% improvements over their baselines. Our best models give the F1-scores of 79.0% and 75.0% for the Twitter and Reddit datasets respectively, becoming one of the highest performing systems among 36 participants in this shared task.",Transformer-based Context-aware Sarcasm Detection in Conversation Threads from Social Media,"We present a transformer-based sarcasm detection model that accounts for the context from the entire conversation thread for more robust predictions. Our model uses deep transformer layers to perform multi-head attentions among the target utterance and the relevant context in the thread. The context-aware models are evaluated on two datasets from social media, Twitter and Reddit, and show 3.1% and 7.0% improvements over their baselines. Our best models give the F1-scores of 79.0% and 75.0% for the Twitter and Reddit datasets respectively, becoming one of the highest performing systems among 36 participants in this shared task.",We gratefully acknowledge the support of the AWS Machine Learning Research Awards (MLRA). Any contents in this material are those of the authors and do not necessarily reflect the views of AWS.,"Transformer-based Context-aware Sarcasm Detection in Conversation Threads from Social Media. We present a transformer-based sarcasm detection model that accounts for the context from the entire conversation thread for more robust predictions. Our model uses deep transformer layers to perform multi-head attentions among the target utterance and the relevant context in the thread. The context-aware models are evaluated on two datasets from social media, Twitter and Reddit, and show 3.1% and 7.0% improvements over their baselines. Our best models give the F1-scores of 79.0% and 75.0% for the Twitter and Reddit datasets respectively, becoming one of the highest performing systems among 36 participants in this shared task.",2020
klementiev-roth-2006-weakly,https://aclanthology.org/P06-1103,0,,,,,,,"Weakly Supervised Named Entity Transliteration and Discovery from Multilingual Comparable Corpora. Named Entity recognition (NER) is an important part of many natural language processing tasks. Current approaches often employ machine learning techniques and require supervised data. However, many languages lack such resources. This paper presents an (almost) unsupervised learning algorithm for automatic discovery of Named Entities (NEs) in a resource free language, given a bilingual corpora in which it is weakly temporally aligned with a resource rich language. NEs have similar time distributions across such corpora, and often some of the tokens in a multi-word NE are transliterated. We develop an algorithm that exploits both observations iteratively. The algorithm makes use of a new, frequency based, metric for time distributions and a resource free discriminative approach to transliteration. Seeded with a small number of transliteration pairs, our algorithm discovers multi-word NEs, and takes advantage of a dictionary (if one exists) to account for translated or partially translated NEs. We evaluate the algorithm on an English-Russian corpus, and show high level of NEs discovery in Russian.",Weakly Supervised Named Entity Transliteration and Discovery from Multilingual Comparable Corpora,"Named Entity recognition (NER) is an important part of many natural language processing tasks. Current approaches often employ machine learning techniques and require supervised data. However, many languages lack such resources. This paper presents an (almost) unsupervised learning algorithm for automatic discovery of Named Entities (NEs) in a resource free language, given a bilingual corpora in which it is weakly temporally aligned with a resource rich language. NEs have similar time distributions across such corpora, and often some of the tokens in a multi-word NE are transliterated. We develop an algorithm that exploits both observations iteratively. The algorithm makes use of a new, frequency based, metric for time distributions and a resource free discriminative approach to transliteration. Seeded with a small number of transliteration pairs, our algorithm discovers multi-word NEs, and takes advantage of a dictionary (if one exists) to account for translated or partially translated NEs. We evaluate the algorithm on an English-Russian corpus, and show high level of NEs discovery in Russian.",Weakly Supervised Named Entity Transliteration and Discovery from Multilingual Comparable Corpora,"Named Entity recognition (NER) is an important part of many natural language processing tasks. Current approaches often employ machine learning techniques and require supervised data. However, many languages lack such resources. This paper presents an (almost) unsupervised learning algorithm for automatic discovery of Named Entities (NEs) in a resource free language, given a bilingual corpora in which it is weakly temporally aligned with a resource rich language. NEs have similar time distributions across such corpora, and often some of the tokens in a multi-word NE are transliterated. We develop an algorithm that exploits both observations iteratively. The algorithm makes use of a new, frequency based, metric for time distributions and a resource free discriminative approach to transliteration. 
Seeded with a small number of transliteration pairs, our algorithm discovers multi-word NEs, and takes advantage of a dictionary (if one exists) to account for translated or partially translated NEs. We evaluate the algorithm on an English-Russian corpus, and show high level of NEs discovery in Russian.","We thank Richard Sproat, ChengXiang Zhai, and Kevin Small for their useful feedback during this work, and the anonymous referees for their helpful comments. This research is supported by the Advanced Research and Development Activity (ARDA)'s Advanced Question Answering for Intelligence (AQUAINT) Program and a DOI grant under the Reflex program.","Weakly Supervised Named Entity Transliteration and Discovery from Multilingual Comparable Corpora. Named Entity recognition (NER) is an important part of many natural language processing tasks. Current approaches often employ machine learning techniques and require supervised data. However, many languages lack such resources. This paper presents an (almost) unsupervised learning algorithm for automatic discovery of Named Entities (NEs) in a resource free language, given a bilingual corpora in which it is weakly temporally aligned with a resource rich language. NEs have similar time distributions across such corpora, and often some of the tokens in a multi-word NE are transliterated. We develop an algorithm that exploits both observations iteratively. The algorithm makes use of a new, frequency based, metric for time distributions and a resource free discriminative approach to transliteration. Seeded with a small number of transliteration pairs, our algorithm discovers multi-word NEs, and takes advantage of a dictionary (if one exists) to account for translated or partially translated NEs. We evaluate the algorithm on an English-Russian corpus, and show high level of NEs discovery in Russian.",2006
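A key signal in the approach above is that a named entity and its transliteration show similar temporal frequency profiles in weakly time-aligned corpora. The snippet below illustrates only that signal, with invented weekly counts and cosine similarity as a stand-in for the paper's own frequency-based metric; the discriminative transliteration model and the iterative bootstrapping are not shown.

import math

def normalize(counts):
    total = sum(counts)
    return [c / total for c in counts] if total else counts

def cosine(p, q):
    dot = sum(a * b for a, b in zip(p, q))
    norm_p = math.sqrt(sum(a * a for a in p))
    norm_q = math.sqrt(sum(b * b for b in q))
    return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0

# Invented weekly mention counts in an English corpus and a weakly time-aligned Russian corpus.
english_counts = {"putin": [0, 5, 9, 2, 0, 1], "election": [3, 3, 4, 3, 3, 4]}
russian_counts = {"путин": [1, 6, 8, 1, 0, 0], "стол": [2, 2, 2, 3, 2, 2]}

def best_match(en_word):
    # Rank Russian candidates by how similar their temporal profile is to the English word's.
    en_dist = normalize(english_counts[en_word])
    scored = [(cosine(en_dist, normalize(c)), ru) for ru, c in russian_counts.items()]
    return max(scored)  # the full method would combine this with a transliteration score

print(best_match("putin"))  # "путин" has the most similar time profile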
costa-etal-2016-mapping,https://aclanthology.org/2016.gwc-1.36,0,,,,,,,"Mapping and Generating Classifiers using an Open Chinese Ontology. In languages such as Chinese, classifiers (CLs) play a central role in the quantification of noun-phrases. This can be a problem when generating text from input that does not specify the classifier, as in machine translation (MT) from English to Chinese. Many solutions to this problem rely on dictionaries of noun-CL pairs. However, there is no open large-scale machine-tractable dictionary of noun-CL associations. Many published resources exist, but they tend to focus on how a CL is used (e.g. what kinds of nouns can be used with it, or what features seem to be selected by each CL). In fact, since nouns are open class words, producing an exhaustive definite list of noun-CL associations is not possible, since it would quickly get out of date. Our work tries to address this problem by providing an algorithm for automatic building of a frequency based dictionary of noun-CL pairs, mapped to concepts in the Chinese Open Wordnet (Wang and Bond, 2013), an open machinetractable dictionary for Chinese. All results will released under an open license.",Mapping and Generating Classifiers using an Open {C}hinese Ontology,"In languages such as Chinese, classifiers (CLs) play a central role in the quantification of noun-phrases. This can be a problem when generating text from input that does not specify the classifier, as in machine translation (MT) from English to Chinese. Many solutions to this problem rely on dictionaries of noun-CL pairs. However, there is no open large-scale machine-tractable dictionary of noun-CL associations. Many published resources exist, but they tend to focus on how a CL is used (e.g. what kinds of nouns can be used with it, or what features seem to be selected by each CL). In fact, since nouns are open class words, producing an exhaustive definite list of noun-CL associations is not possible, since it would quickly get out of date. Our work tries to address this problem by providing an algorithm for automatic building of a frequency based dictionary of noun-CL pairs, mapped to concepts in the Chinese Open Wordnet (Wang and Bond, 2013), an open machinetractable dictionary for Chinese. All results will released under an open license.",Mapping and Generating Classifiers using an Open Chinese Ontology,"In languages such as Chinese, classifiers (CLs) play a central role in the quantification of noun-phrases. This can be a problem when generating text from input that does not specify the classifier, as in machine translation (MT) from English to Chinese. Many solutions to this problem rely on dictionaries of noun-CL pairs. However, there is no open large-scale machine-tractable dictionary of noun-CL associations. Many published resources exist, but they tend to focus on how a CL is used (e.g. what kinds of nouns can be used with it, or what features seem to be selected by each CL). In fact, since nouns are open class words, producing an exhaustive definite list of noun-CL associations is not possible, since it would quickly get out of date. Our work tries to address this problem by providing an algorithm for automatic building of a frequency based dictionary of noun-CL pairs, mapped to concepts in the Chinese Open Wordnet (Wang and Bond, 2013), an open machinetractable dictionary for Chinese. 
All results will be released under an open license.",This research was supported in part by the MOE Tier 2 grant That's what you meant: a Rich Representation for Manipulation of Meaning (MOE ARC41/13).,"Mapping and Generating Classifiers using an Open Chinese Ontology. In languages such as Chinese, classifiers (CLs) play a central role in the quantification of noun-phrases. This can be a problem when generating text from input that does not specify the classifier, as in machine translation (MT) from English to Chinese. Many solutions to this problem rely on dictionaries of noun-CL pairs. However, there is no open large-scale machine-tractable dictionary of noun-CL associations. Many published resources exist, but they tend to focus on how a CL is used (e.g. what kinds of nouns can be used with it, or what features seem to be selected by each CL). In fact, since nouns are open class words, producing an exhaustive definite list of noun-CL associations is not possible, since it would quickly get out of date. Our work tries to address this problem by providing an algorithm for automatic building of a frequency-based dictionary of noun-CL pairs, mapped to concepts in the Chinese Open Wordnet (Wang and Bond, 2013), an open machine-tractable dictionary for Chinese. All results will be released under an open license.",2016
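The resource described above rests on counting which classifiers occur with which nouns and keeping the frequencies. The sketch below shows only that counting step over a handful of invented (noun, classifier) pairs; the paper's extraction from parsed text and its mapping to Chinese Open Wordnet concepts are not reproduced.

from collections import Counter, defaultdict

# Invented (noun, classifier) pairs as they might be extracted from a parsed Chinese corpus.
pairs = [("书", "本"), ("书", "本"), ("书", "个"), ("车", "辆"), ("车", "辆"), ("狗", "只")]

freq = defaultdict(Counter)
for noun, cl in pairs:
    freq[noun][cl] += 1  # frequency-based noun -> classifier dictionary

def best_classifier(noun, default="个"):
    # Most frequent classifier seen with this noun, falling back to the general classifier.
    return freq[noun].most_common(1)[0][0] if freq[noun] else default

print(best_classifier("书"))  # 本
print(best_classifier("猫"))  # unseen noun: falls back to 个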
bernier-colborne-drouin-2016-evaluation-distributional,https://aclanthology.org/W16-4707,0,,,,,,,"Evaluation of distributional semantic models: a holistic approach. We investigate how both model-related factors and application-related factors affect the accuracy of distributional semantic models (DSMs) in the context of specialized lexicography, and how these factors interact. This holistic approach to the evaluation of DSMs provides valuable guidelines for the use of these models and insight into the kind of semantic information they capture.",Evaluation of distributional semantic models: a holistic approach,"We investigate how both model-related factors and application-related factors affect the accuracy of distributional semantic models (DSMs) in the context of specialized lexicography, and how these factors interact. This holistic approach to the evaluation of DSMs provides valuable guidelines for the use of these models and insight into the kind of semantic information they capture.",Evaluation of distributional semantic models: a holistic approach,"We investigate how both model-related factors and application-related factors affect the accuracy of distributional semantic models (DSMs) in the context of specialized lexicography, and how these factors interact. This holistic approach to the evaluation of DSMs provides valuable guidelines for the use of these models and insight into the kind of semantic information they capture.",This work was supported by the Social Sciences and Humanities Research Council (SSHRC) of Canada.,"Evaluation of distributional semantic models: a holistic approach. We investigate how both model-related factors and application-related factors affect the accuracy of distributional semantic models (DSMs) in the context of specialized lexicography, and how these factors interact. This holistic approach to the evaluation of DSMs provides valuable guidelines for the use of these models and insight into the kind of semantic information they capture.",2016
tong-etal-2021-learning,https://aclanthology.org/2021.acl-long.487,0,,,,,,,"Learning from Miscellaneous Other-Class Words for Few-shot Named Entity Recognition. Few-shot Named Entity Recognition (NER) exploits only a handful of annotations to identify and classify named entity mentions. Prototypical network shows superior performance on few-shot NER. However, existing prototypical methods fail to differentiate rich semantics in other-class words, which will aggravate overfitting under few shot scenario. To address the issue, we propose a novel model, Mining Undefined Classes from Other-class (MUCO), that can automatically induce different undefined classes from the other class to improve few-shot NER. With these extra-labeled undefined classes, our method will improve the discriminative ability of NER classifier and enhance the understanding of predefined classes with stand-by semantic knowledge. Experimental results demonstrate that our model outperforms five state-of-the-art models in both 1shot and 5-shots settings on four NER benchmarks. We will release the code upon acceptance. The source code is released on https: //github.com/shuaiwa16/OtherClassNER.git.",Learning from Miscellaneous Other-Class Words for Few-shot Named Entity Recognition,"Few-shot Named Entity Recognition (NER) exploits only a handful of annotations to identify and classify named entity mentions. Prototypical network shows superior performance on few-shot NER. However, existing prototypical methods fail to differentiate rich semantics in other-class words, which will aggravate overfitting under few shot scenario. To address the issue, we propose a novel model, Mining Undefined Classes from Other-class (MUCO), that can automatically induce different undefined classes from the other class to improve few-shot NER. With these extra-labeled undefined classes, our method will improve the discriminative ability of NER classifier and enhance the understanding of predefined classes with stand-by semantic knowledge. Experimental results demonstrate that our model outperforms five state-of-the-art models in both 1shot and 5-shots settings on four NER benchmarks. We will release the code upon acceptance. The source code is released on https: //github.com/shuaiwa16/OtherClassNER.git.",Learning from Miscellaneous Other-Class Words for Few-shot Named Entity Recognition,"Few-shot Named Entity Recognition (NER) exploits only a handful of annotations to identify and classify named entity mentions. Prototypical network shows superior performance on few-shot NER. However, existing prototypical methods fail to differentiate rich semantics in other-class words, which will aggravate overfitting under few shot scenario. To address the issue, we propose a novel model, Mining Undefined Classes from Other-class (MUCO), that can automatically induce different undefined classes from the other class to improve few-shot NER. With these extra-labeled undefined classes, our method will improve the discriminative ability of NER classifier and enhance the understanding of predefined classes with stand-by semantic knowledge. Experimental results demonstrate that our model outperforms five state-of-the-art models in both 1shot and 5-shots settings on four NER benchmarks. We will release the code upon acceptance. The source code is released on https: //github.com/shuaiwa16/OtherClassNER.git.","This work is supported by the National Key Research and Development Program of China (2018YFB1005100 and 2018YFB1005101) and NSFC Key Project (U1736204). 
This work is supported by National Engineering Laboratory for Cyberlearning and Intelligent Technology, Beijing Key Lab of Networked Multimedia and the Institute for Guo Qiang, Tsinghua University (2019GQB0003). This research was conducted in collaboration with SenseTime. This work is partially supported by A*STAR through the Industry Alignment Fund - Industry Collaboration Projects Grant, by NTU (NTU-ACE2020-01) and Ministry of Education (RG96/20).","Learning from Miscellaneous Other-Class Words for Few-shot Named Entity Recognition. Few-shot Named Entity Recognition (NER) exploits only a handful of annotations to identify and classify named entity mentions. Prototypical network shows superior performance on few-shot NER. However, existing prototypical methods fail to differentiate rich semantics in other-class words, which will aggravate overfitting under the few-shot scenario. To address the issue, we propose a novel model, Mining Undefined Classes from Other-class (MUCO), that can automatically induce different undefined classes from the other class to improve few-shot NER. With these extra-labeled undefined classes, our method will improve the discriminative ability of the NER classifier and enhance the understanding of predefined classes with stand-by semantic knowledge. Experimental results demonstrate that our model outperforms five state-of-the-art models in both 1-shot and 5-shot settings on four NER benchmarks. We will release the code upon acceptance. The source code is released on https://github.com/shuaiwa16/OtherClassNER.git.",2021
krishna-iyyer-2019-generating,https://aclanthology.org/P19-1224,0,,,,,,,"Generating Question-Answer Hierarchies. The process of knowledge acquisition can be viewed as a question-answer game between a student and a teacher in which the student typically starts by asking broad, open-ended questions before drilling down into specifics (Hintikka, 1981; Hakkarainen and Sintonen, 2002). This pedagogical perspective motivates a new way of representing documents. In this paper, we present SQUASH (Specificity-controlled Question-Answer Hierarchies), a novel and challenging text generation task that converts an input document into a hierarchy of question-answer pairs. Users can click on high-level questions (e.g., ""Why did Frodo leave the Fellowship?"") to reveal related but more specific questions (e.g., ""Who did Frodo leave with?""). Using a question taxonomy loosely based on Lehnert (1978), we classify questions in existing reading comprehension datasets as either GENERAL or SPECIFIC. We then use these labels as input to a pipelined system centered around a conditional neural language model. We extensively evaluate the quality of the generated QA hierarchies through crowdsourced experiments and report strong empirical results.",Generating Question-Answer Hierarchies,"The process of knowledge acquisition can be viewed as a question-answer game between a student and a teacher in which the student typically starts by asking broad, open-ended questions before drilling down into specifics (Hintikka, 1981; Hakkarainen and Sintonen, 2002). This pedagogical perspective motivates a new way of representing documents. In this paper, we present SQUASH (Specificity-controlled Question-Answer Hierarchies), a novel and challenging text generation task that converts an input document into a hierarchy of question-answer pairs. Users can click on high-level questions (e.g., ""Why did Frodo leave the Fellowship?"") to reveal related but more specific questions (e.g., ""Who did Frodo leave with?""). 
Using a question taxonomy loosely based on Lehnert (1978), we classify questions in existing reading comprehension datasets as either GENERAL or SPECIFIC. We then use these labels as input to a pipelined system centered around a conditional neural language model. We extensively evaluate the quality of the generated QA hierarchies through crowdsourced experiments and report strong empirical results.",Generating Question-Answer Hierarchies,"The process of knowledge acquisition can be viewed as a question-answer game between a student and a teacher in which the student typically starts by asking broad, open-ended questions before drilling down into specifics (Hintikka, 1981; Hakkarainen and Sintonen, 2002). This pedagogical perspective motivates a new way of representing documents. In this paper, we present SQUASH (Specificity-controlled Question-Answer Hierarchies), a novel and challenging text generation task that converts an input document into a hierarchy of question-answer pairs. Users can click on high-level questions (e.g., ""Why did Frodo leave the Fellowship?"") to reveal related but more specific questions (e.g., ""Who did Frodo leave with?""). Using a question taxonomy loosely based on Lehnert (1978), we classify questions in existing reading comprehension datasets as either GENERAL or SPECIFIC. We then use these labels as input to a pipelined system centered around a conditional neural language model. We extensively evaluate the quality of the generated QA hierarchies through crowdsourced experiments and report strong empirical results.","We thank the anonymous reviewers for their insightful comments. In addition, we thank Nader Akoury, Ari Kobren, Tu Vu and the other members of the UMass NLP group for helpful comments on earlier drafts of the paper and suggestions on the paper's presentation. This work was supported in part by research awards from the Allen Institute for Artificial Intelligence and Adobe Research.","Generating Question-Answer Hierarchies. The process of knowledge acquisition can be viewed as a question-answer game between a student and a teacher in which the student typically starts by asking broad, open-ended questions before drilling down into specifics (Hintikka, 1981; Hakkarainen and Sintonen, 2002). 
This pedagogical perspective motivates a new way of representing documents. In this paper, we present SQUASH (Specificity-controlled Question-Answer Hierarchies), a novel and challenging text generation task that converts an input document into a hierarchy of question-answer pairs. Users can click on high-level questions (e.g., ""Why did Frodo leave the Fellowship?"") to reveal related but more specific questions (e.g., ""Who did Frodo leave with?""). Using a question taxonomy loosely based on Lehnert (1978), we classify questions in existing reading comprehension datasets as either GENERAL or SPECIFIC. We then use these labels as input to a pipelined system centered around a conditional neural language model. We extensively evaluate the quality of the generated QA hierarchies through crowdsourced experiments and report strong empirical results.",2019
clark-curran-2006-partial,https://aclanthology.org/N06-1019,0,,,,,,,"Partial Training for a Lexicalized-Grammar Parser. We propose a solution to the annotation bottleneck for statistical parsing, by exploiting the lexicalized nature of Combinatory Categorial Grammar (CCG). The parsing model uses predicate-argument dependencies for training, which are derived from sequences of CCG lexical categories rather than full derivations. A simple method is used for extracting dependencies from lexical category sequences, resulting in high precision, yet incomplete and noisy data. The dependency parsing model of Clark and Curran (2004b) is extended to exploit this partial training data. Remarkably, the accuracy of the parser trained on data derived from category sequences alone is only 1.3% worse in terms of F-score than the parser trained on complete dependency structures.",Partial Training for a Lexicalized-Grammar Parser,"We propose a solution to the annotation bottleneck for statistical parsing, by exploiting the lexicalized nature of Combinatory Categorial Grammar (CCG). The parsing model uses predicate-argument dependencies for training, which are derived from sequences of CCG lexical categories rather than full derivations. A simple method is used for extracting dependencies from lexical category sequences, resulting in high precision, yet incomplete and noisy data. The dependency parsing model of Clark and Curran (2004b) is extended to exploit this partial training data. Remarkably, the accuracy of the parser trained on data derived from category sequences alone is only 1.3% worse in terms of F-score than the parser trained on complete dependency structures.",Partial Training for a Lexicalized-Grammar Parser,"We propose a solution to the annotation bottleneck for statistical parsing, by exploiting the lexicalized nature of Combinatory Categorial Grammar (CCG). The parsing model uses predicate-argument dependencies for training, which are derived from sequences of CCG lexical categories rather than full derivations. A simple method is used for extracting dependencies from lexical category sequences, resulting in high precision, yet incomplete and noisy data. The dependency parsing model of Clark and Curran (2004b) is extended to exploit this partial training data. Remarkably, the accuracy of the parser trained on data derived from category sequences alone is only 1.3% worse in terms of F-score than the parser trained on complete dependency structures.",,"Partial Training for a Lexicalized-Grammar Parser. We propose a solution to the annotation bottleneck for statistical parsing, by exploiting the lexicalized nature of Combinatory Categorial Grammar (CCG). The parsing model uses predicate-argument dependencies for training, which are derived from sequences of CCG lexical categories rather than full derivations. A simple method is used for extracting dependencies from lexical category sequences, resulting in high precision, yet incomplete and noisy data. The dependency parsing model of Clark and Curran (2004b) is extended to exploit this partial training data. Remarkably, the accuracy of the parser trained on data derived from category sequences alone is only 1.3% worse in terms of F-score than the parser trained on complete dependency structures.",2006
alshenaifi-azmi-2020-faheem,https://aclanthology.org/2020.wanlp-1.29,0,,,,,,,"Faheem at NADI shared task: Identifying the dialect of Arabic tweet. This paper describes Faheem (adj. of understand), our submission to NADI (Nuanced Arabic Dialect Identification) shared task. With so many Arabic dialects being understudied due to the scarcity of the resources, the objective is to identify the Arabic dialect used in the tweet, at the country-level. We propose a machine learning approach where we utilize word-level ngram (n = 1 to 3) and tf-idf features and feed them to six different classifiers. We train the system using a data set of 21,000 tweets-provided by the organizers-covering twenty-one Arab countries. Our top performing classifiers are: Logistic Regression, Support Vector Machines, and Multinomial Naïve Bayes (MNB). We achieved our best result of macro-F 1 = 0.151 using the MNB classifier.",Faheem at {NADI} shared task: Identifying the dialect of {A}rabic tweet,"This paper describes Faheem (adj. of understand), our submission to NADI (Nuanced Arabic Dialect Identification) shared task. With so many Arabic dialects being understudied due to the scarcity of the resources, the objective is to identify the Arabic dialect used in the tweet, at the country-level. We propose a machine learning approach where we utilize word-level ngram (n = 1 to 3) and tf-idf features and feed them to six different classifiers. We train the system using a data set of 21,000 tweets-provided by the organizers-covering twenty-one Arab countries. Our top performing classifiers are: Logistic Regression, Support Vector Machines, and Multinomial Naïve Bayes (MNB). We achieved our best result of macro-F 1 = 0.151 using the MNB classifier.",Faheem at NADI shared task: Identifying the dialect of Arabic tweet,"This paper describes Faheem (adj. of understand), our submission to NADI (Nuanced Arabic Dialect Identification) shared task. With so many Arabic dialects being understudied due to the scarcity of the resources, the objective is to identify the Arabic dialect used in the tweet, at the country-level. We propose a machine learning approach where we utilize word-level ngram (n = 1 to 3) and tf-idf features and feed them to six different classifiers. We train the system using a data set of 21,000 tweets-provided by the organizers-covering twenty-one Arab countries. Our top performing classifiers are: Logistic Regression, Support Vector Machines, and Multinomial Naïve Bayes (MNB). We achieved our best result of macro-F 1 = 0.151 using the MNB classifier.",,"Faheem at NADI shared task: Identifying the dialect of Arabic tweet. This paper describes Faheem (adj. of understand), our submission to NADI (Nuanced Arabic Dialect Identification) shared task. With so many Arabic dialects being understudied due to the scarcity of the resources, the objective is to identify the Arabic dialect used in the tweet, at the country-level. We propose a machine learning approach where we utilize word-level ngram (n = 1 to 3) and tf-idf features and feed them to six different classifiers. We train the system using a data set of 21,000 tweets-provided by the organizers-covering twenty-one Arab countries. Our top performing classifiers are: Logistic Regression, Support Vector Machines, and Multinomial Naïve Bayes (MNB). We achieved our best result of macro-F 1 = 0.151 using the MNB classifier.",2020
nguyen-etal-2021-trankit,https://aclanthology.org/2021.eacl-demos.10,0,,,,,,,"Trankit: A Light-Weight Transformer-based Toolkit for Multilingual Natural Language Processing. We introduce Trankit, a lightweight Transformer-based Toolkit for multilingual Natural Language Processing (NLP). It provides a trainable pipeline for fundamental NLP tasks over 100 languages, and 90 pretrained pipelines for 56 languages. Built on a state-of-the-art pretrained language model, Trankit significantly outperforms prior multilingual NLP pipelines over sentence segmentation, part-of-speech tagging, morphological feature tagging, and dependency parsing while maintaining competitive performance for tokenization, multi-word token expansion, and lemmatization over 90 Universal Dependencies treebanks. Despite the use of a large pretrained transformer, our toolkit is still efficient in memory usage and speed. This is achieved by our novel plug-and-play mechanism with Adapters where a multilingual pretrained transformer is shared across pipelines for different languages. Our toolkit along with pretrained models and code are publicly available at: https://github.com/nlp-uoregon/trankit.",Trankit: A Light-Weight Transformer-based Toolkit for Multilingual Natural Language Processing,"We introduce Trankit, a lightweight Transformer-based Toolkit for multilingual Natural Language Processing (NLP). It provides a trainable pipeline for fundamental NLP tasks over 100 languages, and 90 pretrained pipelines for 56 languages. Built on a state-of-the-art pretrained language model, Trankit significantly outperforms prior multilingual NLP pipelines over sentence segmentation, part-of-speech tagging, morphological feature tagging, and dependency parsing while maintaining competitive performance for tokenization, multi-word token expansion, and lemmatization over 90 Universal Dependencies treebanks. Despite the use of a large pretrained transformer, our toolkit is still efficient in memory usage and speed. This is achieved by our novel plug-and-play mechanism with Adapters where a multilingual pretrained transformer is shared across pipelines for different languages. Our toolkit along with pretrained models and code are publicly available at: https://github.com/nlp-uoregon/trankit.",Trankit: A Light-Weight Transformer-based Toolkit for Multilingual Natural Language Processing,"We introduce Trankit, a lightweight Transformer-based Toolkit for multilingual Natural Language Processing (NLP). It provides a trainable pipeline for fundamental NLP tasks over 100 languages, and 90 pretrained pipelines for 56 languages. Built on a state-of-the-art pretrained language model, Trankit significantly outperforms prior multilingual NLP pipelines over sentence segmentation, part-of-speech tagging, morphological feature tagging, and dependency parsing while maintaining competitive performance for tokenization, multi-word token expansion, and lemmatization over 90 Universal Dependencies treebanks. Despite the use of a large pretrained transformer, our toolkit is still efficient in memory usage and speed. This is achieved by our novel plug-and-play mechanism with Adapters where a multilingual pretrained transformer is shared across pipelines for different languages. Our toolkit along with pretrained models and code are publicly available at: https://github.com/nlp-uoregon/trankit.",,"Trankit: A Light-Weight Transformer-based Toolkit for Multilingual Natural Language Processing. We introduce Trankit, a lightweight Transformer-based Toolkit for multilingual Natural Language Processing (NLP). It provides a trainable pipeline for fundamental NLP tasks over 100 languages, and 90 pretrained pipelines for 56 languages. Built on a state-of-the-art pretrained language model, Trankit significantly outperforms prior multilingual NLP pipelines over sentence segmentation, part-of-speech tagging, morphological feature tagging, and dependency parsing while maintaining competitive performance for tokenization, multi-word token expansion, and lemmatization over 90 Universal Dependencies treebanks. 
Despite the use of a large pretrained transformer, our toolkit is still efficient in memory usage and speed. This is achieved by our novel plug-and-play mechanism with Adapters where a multilingual pretrained transformer is shared across pipelines for different languages. Our toolkit along with pretrained models and code are publicly available at: https://github.com/nlp-uoregon/trankit.",2021
kurematsu-1993-automatic,https://aclanthology.org/1993.mtsummit-1.8,0,,,,,,,"Automatic Speech Translation at ATR. Since Graham Bell first invented the telephone in 1876, it has become an indispensable means for communications. We can easily communicate with others domestically as well as internationally. However, another great barrier has not been overcome yet; communications between people speaking different languages. An interpreting telephone system, or a speech translation system, will solve this problem which has been annoying human-being from the beginning of their history. The first effort was made by NEC; they demonstrated a system in Telecom'83 held in Geneva. In 1987, British Telecom Research Laboratories implemented an experimental system which was based on fixed phrase translation [Stentiford] . At Carnegie-Mellon University (CMU), a speech translation system was developed on doctor patient domain in 1988 [Saitoh] . These systems were small and simple, but showed the possibility of speech translation.",Automatic Speech Translation at {ATR},"Since Graham Bell first invented the telephone in 1876, it has become an indispensable means for communications. We can easily communicate with others domestically as well as internationally. However, another great barrier has not been overcome yet; communications between people speaking different languages. An interpreting telephone system, or a speech translation system, will solve this problem which has been annoying human-being from the beginning of their history. The first effort was made by NEC; they demonstrated a system in Telecom'83 held in Geneva. In 1987, British Telecom Research Laboratories implemented an experimental system which was based on fixed phrase translation [Stentiford] . At Carnegie-Mellon University (CMU), a speech translation system was developed on doctor patient domain in 1988 [Saitoh] . These systems were small and simple, but showed the possibility of speech translation.",Automatic Speech Translation at ATR,"Since Graham Bell first invented the telephone in 1876, it has become an indispensable means for communications. We can easily communicate with others domestically as well as internationally. However, another great barrier has not been overcome yet; communications between people speaking different languages. An interpreting telephone system, or a speech translation system, will solve this problem which has been annoying human-being from the beginning of their history. The first effort was made by NEC; they demonstrated a system in Telecom'83 held in Geneva. In 1987, British Telecom Research Laboratories implemented an experimental system which was based on fixed phrase translation [Stentiford] . At Carnegie-Mellon University (CMU), a speech translation system was developed on doctor patient domain in 1988 [Saitoh] . These systems were small and simple, but showed the possibility of speech translation.",,"Automatic Speech Translation at ATR. Since Graham Bell first invented the telephone in 1876, it has become an indispensable means for communications. We can easily communicate with others domestically as well as internationally. However, another great barrier has not been overcome yet; communications between people speaking different languages. An interpreting telephone system, or a speech translation system, will solve this problem which has been annoying human-being from the beginning of their history. The first effort was made by NEC; they demonstrated a system in Telecom'83 held in Geneva. 
In 1987, British Telecom Research Laboratories implemented an experimental system which was based on fixed-phrase translation [Stentiford]. At Carnegie-Mellon University (CMU), a speech translation system was developed for the doctor-patient domain in 1988 [Saitoh]. These systems were small and simple, but showed the possibility of speech translation.",1993
van-der-meer-2013-dqf,https://aclanthology.org/2013.tc-1.8,0,,,,,,,"The DQF - industry best-practices, metrics and benchmarks for translation quality estimation. ","The {DQF} - industry best-practices, metrics and benchmarks for translation quality estimation",,"The DQF - industry best-practices, metrics and benchmarks for translation quality estimation",,,"The DQF - industry best-practices, metrics and benchmarks for translation quality estimation. ",2013
louis-nenkova-2014-verbose,https://aclanthology.org/E14-1067,0,,,,,,,"Verbose, Laconic or Just Right: A Simple Computational Model of Content Appropriateness under Length Constraints. Length constraints impose implicit requirements on the type of content that can be included in a text. Here we propose the first model to computationally assess if a text deviates from these requirements. Specifically, our model predicts the appropriate length for texts based on content types present in a snippet of constant length. We consider a range of features to approximate content type, including syntactic phrasing, constituent compression probability, presence of named entities, sentence specificity and intersentence continuity. Weights for these features are learned using a corpus of summaries written by experts and on high quality journalistic writing. During test time, the difference between actual and predicted length allows us to quantify text verbosity. We use data from manual evaluation of summarization systems to assess the verbosity scores produced by our model. We show that the automatic verbosity scores are significantly negatively correlated with manual content quality scores given to the summaries.","Verbose, Laconic or Just Right: A Simple Computational Model of Content Appropriateness under Length Constraints","Length constraints impose implicit requirements on the type of content that can be included in a text. Here we propose the first model to computationally assess if a text deviates from these requirements. Specifically, our model predicts the appropriate length for texts based on content types present in a snippet of constant length. We consider a range of features to approximate content type, including syntactic phrasing, constituent compression probability, presence of named entities, sentence specificity and intersentence continuity. Weights for these features are learned using a corpus of summaries written by experts and on high quality journalistic writing. During test time, the difference between actual and predicted length allows us to quantify text verbosity. We use data from manual evaluation of summarization systems to assess the verbosity scores produced by our model. We show that the automatic verbosity scores are significantly negatively correlated with manual content quality scores given to the summaries.","Verbose, Laconic or Just Right: A Simple Computational Model of Content Appropriateness under Length Constraints","Length constraints impose implicit requirements on the type of content that can be included in a text. Here we propose the first model to computationally assess if a text deviates from these requirements. Specifically, our model predicts the appropriate length for texts based on content types present in a snippet of constant length. We consider a range of features to approximate content type, including syntactic phrasing, constituent compression probability, presence of named entities, sentence specificity and intersentence continuity. Weights for these features are learned using a corpus of summaries written by experts and on high quality journalistic writing. During test time, the difference between actual and predicted length allows us to quantify text verbosity. We use data from manual evaluation of summarization systems to assess the verbosity scores produced by our model. 
We show that the automatic verbosity scores are significantly negatively correlated with manual content quality scores given to the summaries.",This work was partially supported by an NSF CAREER 0953445 award. We also thank the anonymous reviewers for their comments.","Verbose, Laconic or Just Right: A Simple Computational Model of Content Appropriateness under Length Constraints. Length constraints impose implicit requirements on the type of content that can be included in a text. Here we propose the first model to computationally assess if a text deviates from these requirements. Specifically, our model predicts the appropriate length for texts based on content types present in a snippet of constant length. We consider a range of features to approximate content type, including syntactic phrasing, constituent compression probability, presence of named entities, sentence specificity and intersentence continuity. Weights for these features are learned using a corpus of summaries written by experts and on high quality journalistic writing. During test time, the difference between actual and predicted length allows us to quantify text verbosity. We use data from manual evaluation of summarization systems to assess the verbosity scores produced by our model. We show that the automatic verbosity scores are significantly negatively correlated with manual content quality scores given to the summaries.",2014
valkenier-etal-2011-psycho,https://aclanthology.org/W11-4630,0,,,,,,,"Psycho-acoustically motivated formant feature extraction. Psycho-acoustical research investigates how human listeners are able to separate sounds that stem from different sources. This ability might be one of the reasons that human speech processing is robust to noise but methods that exploit this are, to our knowledge, not used in systems for automatic formant extraction or in modern speech recognition systems. Therefore we investigate the possibility to use harmonics that are consistent with a harmonic complex as the basis for a robust formant extraction algorithm. With this new method we aim to overcome limitations of most modern automatic speech recognition systems by taking advantage of the robustness of harmonics at formant positions. We tested the effectiveness of our formant detection algorithm on Hillenbrand's annotated American English Vowels dataset and found that in pink noise the results are competitive with existing systems. Furthermore, our method needs no training and is implementable as a realtime system which contrasts many of the existing systems.",Psycho-acoustically motivated formant feature extraction,"Psycho-acoustical research investigates how human listeners are able to separate sounds that stem from different sources. This ability might be one of the reasons that human speech processing is robust to noise but methods that exploit this are, to our knowledge, not used in systems for automatic formant extraction or in modern speech recognition systems. Therefore we investigate the possibility to use harmonics that are consistent with a harmonic complex as the basis for a robust formant extraction algorithm. With this new method we aim to overcome limitations of most modern automatic speech recognition systems by taking advantage of the robustness of harmonics at formant positions. We tested the effectiveness of our formant detection algorithm on Hillenbrand's annotated American English Vowels dataset and found that in pink noise the results are competitive with existing systems. Furthermore, our method needs no training and is implementable as a realtime system which contrasts many of the existing systems.",Psycho-acoustically motivated formant feature extraction,"Psycho-acoustical research investigates how human listeners are able to separate sounds that stem from different sources. This ability might be one of the reasons that human speech processing is robust to noise but methods that exploit this are, to our knowledge, not used in systems for automatic formant extraction or in modern speech recognition systems. Therefore we investigate the possibility to use harmonics that are consistent with a harmonic complex as the basis for a robust formant extraction algorithm. With this new method we aim to overcome limitations of most modern automatic speech recognition systems by taking advantage of the robustness of harmonics at formant positions. We tested the effectiveness of our formant detection algorithm on Hillenbrand's annotated American English Vowels dataset and found that in pink noise the results are competitive with existing systems. Furthermore, our method needs no training and is implementable as a realtime system which contrasts many of the existing systems.","BV was supported by STW grant DTF 7459, JDK was supported by NWO grant 634.000.432. 
The authors would like to thank Odette Scharenborg, Jennifer Spenader, Maria Niessen, Hedde van de Vooren and three anonymous reviewers for their useful comments on earlier versions of this manuscript.","Psycho-acoustically motivated formant feature extraction. Psycho-acoustical research investigates how human listeners are able to separate sounds that stem from different sources. This ability might be one of the reasons that human speech processing is robust to noise but methods that exploit this are, to our knowledge, not used in systems for automatic formant extraction or in modern speech recognition systems. Therefore we investigate the possibility to use harmonics that are consistent with a harmonic complex as the basis for a robust formant extraction algorithm. With this new method we aim to overcome limitations of most modern automatic speech recognition systems by taking advantage of the robustness of harmonics at formant positions. We tested the effectiveness of our formant detection algorithm on Hillenbrand's annotated American English Vowels dataset and found that in pink noise the results are competitive with existing systems. Furthermore, our method needs no training and is implementable as a realtime system which contrasts many of the existing systems.",2011
popescu-2009-person,https://aclanthology.org/D09-1104,0,,,,,,,"Person Cross Document Coreference with Name Perplexity Estimates. The Person Cross Document Coreference systems depend on the context for making decisions on the possible coreferences between person name mentions. The amount of context required is a parameter that varies from corpora to corpora, which makes it difficult for usual disambiguation methods. In this paper we show that the amount of context required can be dynamically controlled on the basis of the prior probabilities of coreference and we present a new statistical model for the computation of these probabilities. The experiment we carried on a news corpus proves that the prior probabilities of coreference are an important factor for maintaining a good balance between precision and recall for cross document coreference systems.",Person Cross Document Coreference with Name Perplexity Estimates,"The Person Cross Document Coreference systems depend on the context for making decisions on the possible coreferences between person name mentions. The amount of context required is a parameter that varies from corpora to corpora, which makes it difficult for usual disambiguation methods. In this paper we show that the amount of context required can be dynamically controlled on the basis of the prior probabilities of coreference and we present a new statistical model for the computation of these probabilities. The experiment we carried on a news corpus proves that the prior probabilities of coreference are an important factor for maintaining a good balance between precision and recall for cross document coreference systems.",Person Cross Document Coreference with Name Perplexity Estimates,"The Person Cross Document Coreference systems depend on the context for making decisions on the possible coreferences between person name mentions. The amount of context required is a parameter that varies from corpora to corpora, which makes it difficult for usual disambiguation methods. In this paper we show that the amount of context required can be dynamically controlled on the basis of the prior probabilities of coreference and we present a new statistical model for the computation of these probabilities. The experiment we carried on a news corpus proves that the prior probabilities of coreference are an important factor for maintaining a good balance between precision and recall for cross document coreference systems.","The corpus used in this paper is Adige500k, a seven-year news corpus from an Italian local newspaper. The author thanks to all the people involved in the construction of Adige500k.","Person Cross Document Coreference with Name Perplexity Estimates. The Person Cross Document Coreference systems depend on the context for making decisions on the possible coreferences between person name mentions. The amount of context required is a parameter that varies from corpora to corpora, which makes it difficult for usual disambiguation methods. In this paper we show that the amount of context required can be dynamically controlled on the basis of the prior probabilities of coreference and we present a new statistical model for the computation of these probabilities. The experiment we carried on a news corpus proves that the prior probabilities of coreference are an important factor for maintaining a good balance between precision and recall for cross document coreference systems.",2009
rita-etal-2020-lazimpa,https://aclanthology.org/2020.conll-1.26,0,,,,,,,"``LazImpa'': Lazy and Impatient neural agents learn to communicate efficiently. Previous work has shown that artificial neural agents naturally develop surprisingly nonefficient codes. This is illustrated by the fact that in a referential game involving a speaker and a listener neural networks optimizing accurate transmission over a discrete channel, the emergent messages fail to achieve an optimal length. Furthermore, frequent messages tend to be longer than infrequent ones, a pattern contrary to the Zipf Law of Abbreviation (ZLA) observed in all natural languages. Here, we show that near-optimal and ZLA-compatible messages can emerge, but only if both the speaker and the listener are modified. We hence introduce a new communication system, ""Laz-Impa"", where the speaker is made increasingly lazy, i.e., avoids long messages, and the listener impatient, i.e., seeks to guess the intended content as soon as possible.",{``}{L}az{I}mpa{''}: Lazy and Impatient neural agents learn to communicate efficiently,"Previous work has shown that artificial neural agents naturally develop surprisingly nonefficient codes. This is illustrated by the fact that in a referential game involving a speaker and a listener neural networks optimizing accurate transmission over a discrete channel, the emergent messages fail to achieve an optimal length. Furthermore, frequent messages tend to be longer than infrequent ones, a pattern contrary to the Zipf Law of Abbreviation (ZLA) observed in all natural languages. Here, we show that near-optimal and ZLA-compatible messages can emerge, but only if both the speaker and the listener are modified. We hence introduce a new communication system, ""Laz-Impa"", where the speaker is made increasingly lazy, i.e., avoids long messages, and the listener impatient, i.e., seeks to guess the intended content as soon as possible.",``LazImpa'': Lazy and Impatient neural agents learn to communicate efficiently,"Previous work has shown that artificial neural agents naturally develop surprisingly nonefficient codes. This is illustrated by the fact that in a referential game involving a speaker and a listener neural networks optimizing accurate transmission over a discrete channel, the emergent messages fail to achieve an optimal length. Furthermore, frequent messages tend to be longer than infrequent ones, a pattern contrary to the Zipf Law of Abbreviation (ZLA) observed in all natural languages. Here, we show that near-optimal and ZLA-compatible messages can emerge, but only if both the speaker and the listener are modified. We hence introduce a new communication system, ""Laz-Impa"", where the speaker is made increasingly lazy, i.e., avoids long messages, and the listener impatient, i.e., seeks to guess the intended content as soon as possible.","We would like to thank Emmanuel Chemla, Marco Baroni, Eugene Kharitonov, and the anonymous reviewers for helpful comments and suggestions.This work was funded in part by the European Research Council (ERC-2011-AdG-295810 BOOT-PHON), the Agence Nationale pour la Recherche (ANR-17-EURE-0017 Frontcog, ANR-10-IDEX-0001-02 PSL*, ANR-19-P3IA-0001 PRAIRIE 3IA Institute) and grants from CIFAR (Learning in Machines and Brains), Facebook AI Research (Research Grant), Google (Faculty Research Award), Microsoft Research (Azure Credits and Grant), and Amazon Web Service (AWS Research Credits).","``LazImpa'': Lazy and Impatient neural agents learn to communicate efficiently. 
Previous work has shown that artificial neural agents naturally develop surprisingly non-efficient codes. This is illustrated by the fact that in a referential game involving a speaker and a listener neural networks optimizing accurate transmission over a discrete channel, the emergent messages fail to achieve an optimal length. Furthermore, frequent messages tend to be longer than infrequent ones, a pattern contrary to the Zipf Law of Abbreviation (ZLA) observed in all natural languages. Here, we show that near-optimal and ZLA-compatible messages can emerge, but only if both the speaker and the listener are modified. We hence introduce a new communication system, ""LazImpa"", where the speaker is made increasingly lazy, i.e., avoids long messages, and the listener impatient, i.e., seeks to guess the intended content as soon as possible.",2020
weller-seppi-2020-rjokes,https://aclanthology.org/2020.lrec-1.753,0,,,,,,,"The rJokes Dataset: a Large Scale Humor Collection. Humor is a complicated language phenomenon that depends upon many factors, including topic, date, and recipient. Because of this variation, it can be hard to determine what exactly makes a joke humorous, leading to difficulties in joke identification and related tasks. Furthermore, current humor datasets are lacking in both joke variety and size, with almost all current datasets having less than 100k jokes. In order to alleviate this issue we compile a collection of over 550,000 jokes posted over an 11 year period on the Reddit r/Jokes subreddit (an online forum), providing a large scale humor dataset that can easily be used for a myriad of tasks. This dataset also provides quantitative metrics for the level of humor in each joke, as determined by subreddit user feedback. We explore this dataset through the years, examining basic statistics, most mentioned entities, and sentiment proportions. We also introduce this dataset as a task for future work, where models learn to predict the level of humor in a joke. On that task we provide strong state-of-the-art baseline models and show room for future improvement. We hope that this dataset will not only help those researching computational humor, but also help social scientists who seek to understand popular culture through humor.",The r{J}okes Dataset: a Large Scale Humor Collection,"Humor is a complicated language phenomenon that depends upon many factors, including topic, date, and recipient. Because of this variation, it can be hard to determine what exactly makes a joke humorous, leading to difficulties in joke identification and related tasks. Furthermore, current humor datasets are lacking in both joke variety and size, with almost all current datasets having less than 100k jokes. In order to alleviate this issue we compile a collection of over 550,000 jokes posted over an 11 year period on the Reddit r/Jokes subreddit (an online forum), providing a large scale humor dataset that can easily be used for a myriad of tasks. This dataset also provides quantitative metrics for the level of humor in each joke, as determined by subreddit user feedback. We explore this dataset through the years, examining basic statistics, most mentioned entities, and sentiment proportions. We also introduce this dataset as a task for future work, where models learn to predict the level of humor in a joke. On that task we provide strong state-of-the-art baseline models and show room for future improvement. We hope that this dataset will not only help those researching computational humor, but also help social scientists who seek to understand popular culture through humor.",The rJokes Dataset: a Large Scale Humor Collection,"Humor is a complicated language phenomenon that depends upon many factors, including topic, date, and recipient. Because of this variation, it can be hard to determine what exactly makes a joke humorous, leading to difficulties in joke identification and related tasks. Furthermore, current humor datasets are lacking in both joke variety and size, with almost all current datasets having less than 100k jokes. In order to alleviate this issue we compile a collection of over 550,000 jokes posted over an 11 year period on the Reddit r/Jokes subreddit (an online forum), providing a large scale humor dataset that can easily be used for a myriad of tasks. 
This dataset also provides quantitative metrics for the level of humor in each joke, as determined by subreddit user feedback. We explore this dataset through the years, examining basic statistics, most mentioned entities, and sentiment proportions. We also introduce this dataset as a task for future work, where models learn to predict the level of humor in a joke. On that task we provide strong state-of-the-art baseline models and show room for future improvement. We hope that this dataset will not only help those researching computational humor, but also help social scientists who seek to understand popular culture through humor.",,"The rJokes Dataset: a Large Scale Humor Collection. Humor is a complicated language phenomenon that depends upon many factors, including topic, date, and recipient. Because of this variation, it can be hard to determine what exactly makes a joke humorous, leading to difficulties in joke identification and related tasks. Furthermore, current humor datasets are lacking in both joke variety and size, with almost all current datasets having less than 100k jokes. In order to alleviate this issue we compile a collection of over 550,000 jokes posted over an 11 year period on the Reddit r/Jokes subreddit (an online forum), providing a large scale humor dataset that can easily be used for a myriad of tasks. This dataset also provides quantitative metrics for the level of humor in each joke, as determined by subreddit user feedback. We explore this dataset through the years, examining basic statistics, most mentioned entities, and sentiment proportions. We also introduce this dataset as a task for future work, where models learn to predict the level of humor in a joke. On that task we provide strong state-of-the-art baseline models and show room for future improvement. We hope that this dataset will not only help those researching computational humor, but also help social scientists who seek to understand popular culture through humor.",2020
bicici-van-genabith-2013-cngl-grading,https://aclanthology.org/S13-2098,1,,,,education,,,"CNGL: Grading Student Answers by Acts of Translation. We invent referential translation machines (RTMs), a computational model for identifying the translation acts between any two data sets with respect to a reference corpus selected in the same domain, which can be used for automatically grading student answers. RTMs make quality and semantic similarity judgments possible by using retrieved relevant training data as interpretants for reaching shared semantics. An MTPP (machine translation performance predictor) model derives features measuring the closeness of the test sentences to the training data, the difficulty of translating them, and the presence of acts of translation involved. We view question answering as translation from the question to the answer, from the question to the reference answer, from the answer to the reference answer, or from the question and the answer to the reference answer. Each view is modeled by an RTM model, giving us a new perspective on the ternary relationship between the question, the answer, and the reference answer. We show that all RTM models contribute and a prediction model based on all four perspectives performs the best. Our prediction model is the 2nd best system on some tasks according to the official results of the Student Response Analysis (SRA 2013) challenge.",{CNGL}: Grading Student Answers by Acts of Translation,"We invent referential translation machines (RTMs), a computational model for identifying the translation acts between any two data sets with respect to a reference corpus selected in the same domain, which can be used for automatically grading student answers. RTMs make quality and semantic similarity judgments possible by using retrieved relevant training data as interpretants for reaching shared semantics. An MTPP (machine translation performance predictor) model derives features measuring the closeness of the test sentences to the training data, the difficulty of translating them, and the presence of acts of translation involved. We view question answering as translation from the question to the answer, from the question to the reference answer, from the answer to the reference answer, or from the question and the answer to the reference answer. Each view is modeled by an RTM model, giving us a new perspective on the ternary relationship between the question, the answer, and the reference answer. We show that all RTM models contribute and a prediction model based on all four perspectives performs the best. Our prediction model is the 2nd best system on some tasks according to the official results of the Student Response Analysis (SRA 2013) challenge.",CNGL: Grading Student Answers by Acts of Translation,"We invent referential translation machines (RTMs), a computational model for identifying the translation acts between any two data sets with respect to a reference corpus selected in the same domain, which can be used for automatically grading student answers. RTMs make quality and semantic similarity judgments possible by using retrieved relevant training data as interpretants for reaching shared semantics. An MTPP (machine translation performance predictor) model derives features measuring the closeness of the test sentences to the training data, the difficulty of translating them, and the presence of acts of translation involved. 
We view question answering as translation from the question to the answer, from the question to the reference answer, from the answer to the reference answer, or from the question and the answer to the reference answer. Each view is modeled by an RTM model, giving us a new perspective on the ternary relationship between the question, the answer, and the reference answer. We show that all RTM models contribute and a prediction model based on all four perspectives performs the best. Our prediction model is the 2nd best system on some tasks according to the official results of the Student Response Analysis (SRA 2013) challenge.",This work is supported in part by SFI (07/CE/I1142) as part of the Centre for Next Generation Localisation (www.cngl.ie) at Dublin City University and in part by the European Commission through the QTLaunchPad FP7 project (No: 296347). We also thank the SFI/HEA Irish Centre for High-End Computing (ICHEC) for the provision of computational facilities and support.,"CNGL: Grading Student Answers by Acts of Translation. We invent referential translation machines (RTMs), a computational model for identifying the translation acts between any two data sets with respect to a reference corpus selected in the same domain, which can be used for automatically grading student answers. RTMs make quality and semantic similarity judgments possible by using retrieved relevant training data as interpretants for reaching shared semantics. An MTPP (machine translation performance predictor) model derives features measuring the closeness of the test sentences to the training data, the difficulty of translating them, and the presence of acts of translation involved. We view question answering as translation from the question to the answer, from the question to the reference answer, from the answer to the reference answer, or from the question and the answer to the reference answer. Each view is modeled by an RTM model, giving us a new perspective on the ternary relationship between the question, the answer, and the reference answer. We show that all RTM models contribute and a prediction model based on all four perspectives performs the best. Our prediction model is the 2nd best system on some tasks according to the official results of the Student Response Analysis (SRA 2013) challenge.",2013
nyberg-etal-2002-deriving,https://link.springer.com/chapter/10.1007/3-540-45820-4_15,0,,,,,,,Deriving semantic knowledge from descriptive texts using an MT system. ,Deriving semantic knowledge from descriptive texts using an {MT} system,,Deriving semantic knowledge from descriptive texts using an MT system,,,Deriving semantic knowledge from descriptive texts using an MT system. ,2002
morales-etal-2018-linguistically,https://aclanthology.org/W18-0602,1,,,,health,,,"A Linguistically-Informed Fusion Approach for Multimodal Depression Detection. Automated depression detection is inherently a multimodal problem. Therefore, it is critical that researchers investigate fusion techniques for multimodal design. This paper presents the first ever comprehensive study of fusion techniques for depression detection. In addition, we present novel linguistically-motivated fusion techniques, which we find outperform existing approaches.",A Linguistically-Informed Fusion Approach for Multimodal Depression Detection,"Automated depression detection is inherently a multimodal problem. Therefore, it is critical that researchers investigate fusion techniques for multimodal design. This paper presents the first ever comprehensive study of fusion techniques for depression detection. In addition, we present novel linguistically-motivated fusion techniques, which we find outperform existing approaches.",A Linguistically-Informed Fusion Approach for Multimodal Depression Detection,"Automated depression detection is inherently a multimodal problem. Therefore, it is critical that researchers investigate fusion techniques for multimodal design. This paper presents the first ever comprehensive study of fusion techniques for depression detection. In addition, we present novel linguistically-motivated fusion techniques, which we find outperform existing approaches.",,"A Linguistically-Informed Fusion Approach for Multimodal Depression Detection. Automated depression detection is inherently a multimodal problem. Therefore, it is critical that researchers investigate fusion techniques for multimodal design. This paper presents the first ever comprehensive study of fusion techniques for depression detection. In addition, we present novel linguistically-motivated fusion techniques, which we find outperform existing approaches.",2018
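For context on what "fusion techniques for multimodal design" means in practice, here is a minimal sketch of the two generic strategies such studies build on, feature-level (early) and decision-level (late) fusion, using synthetic stand-in features rather than the paper's linguistically informed variants:

```python
# Generic early vs. late fusion on synthetic "text" and "audio" features.
# This is a sketch of the standard baselines only, not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
X_text = rng.normal(size=(n, 10))    # stand-in linguistic features
X_audio = rng.normal(size=(n, 5))    # stand-in acoustic features
y = (X_text[:, 0] + X_audio[:, 0] > 0).astype(int)  # synthetic labels

# Early fusion: one classifier over the concatenated feature vectors.
early = LogisticRegression().fit(np.hstack([X_text, X_audio]), y)
early_acc = early.score(np.hstack([X_text, X_audio]), y)

# Late fusion: one classifier per modality, probabilities averaged at decision time.
clf_text = LogisticRegression().fit(X_text, y)
clf_audio = LogisticRegression().fit(X_audio, y)
late_probs = (clf_text.predict_proba(X_text)[:, 1]
              + clf_audio.predict_proba(X_audio)[:, 1]) / 2
late_acc = np.mean((late_probs > 0.5).astype(int) == y)

print(f"early fusion (train) accuracy: {early_acc:.2f}")
print(f"late fusion (train) accuracy:  {late_acc:.2f}")
```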
jain-mausam-2016-knowledge,https://aclanthology.org/N16-1011,0,,,,,,,"Knowledge-Guided Linguistic Rewrites for Inference Rule Verification. A corpus of inference rules between a pair of relation phrases is typically generated using the statistical overlap of argument-pairs associated with the relations (e.g., PATTY, CLEAN). We investigate knowledge-guided linguistic rewrites as a secondary source of evidence and find that they can vastly improve the quality of inference rule corpora, obtaining 27 to 33 point precision improvement while retaining substantial recall. The facts inferred using cleaned inference rules are 29-32 points more accurate.",Knowledge-Guided Linguistic Rewrites for Inference Rule Verification,"A corpus of inference rules between a pair of relation phrases is typically generated using the statistical overlap of argument-pairs associated with the relations (e.g., PATTY, CLEAN). We investigate knowledge-guided linguistic rewrites as a secondary source of evidence and find that they can vastly improve the quality of inference rule corpora, obtaining 27 to 33 point precision improvement while retaining substantial recall. The facts inferred using cleaned inference rules are 29-32 points more accurate.",Knowledge-Guided Linguistic Rewrites for Inference Rule Verification,"A corpus of inference rules between a pair of relation phrases is typically generated using the statistical overlap of argument-pairs associated with the relations (e.g., PATTY, CLEAN). We investigate knowledge-guided linguistic rewrites as a secondary source of evidence and find that they can vastly improve the quality of inference rule corpora, obtaining 27 to 33 point precision improvement while retaining substantial recall. The facts inferred using cleaned inference rules are 29-32 points more accurate.","Acknowledgments: We thank Ashwini Vaidya and the anonymous reviewers for their helpful suggestions and feedback. We thank Abhishek, Aditya, Ankit, Jatin, Kabir, and Shikhar for helping with the data annotation. This work was supported by Google language understanding and knowledge discovery focused research grants to Mausam, a KISTI grant and a Bloomberg grant also to Mausam. Prachi was supported by a TCS fellowship.","Knowledge-Guided Linguistic Rewrites for Inference Rule Verification. A corpus of inference rules between a pair of relation phrases is typically generated using the statistical overlap of argument-pairs associated with the relations (e.g., PATTY, CLEAN). We investigate knowledge-guided linguistic rewrites as a secondary source of evidence and find that they can vastly improve the quality of inference rule corpora, obtaining 27 to 33 point precision improvement while retaining substantial recall. The facts inferred using cleaned inference rules are 29-32 points more accurate.",2016
sokolova-etal-2008-telling,https://aclanthology.org/I08-1034,1,,,,partnership,peace_justice_and_strong_institutions,,"The Telling Tail: Signals of Success in Electronic Negotiation Texts. We analyze the linguistic behaviour of participants in bilateral electronic negotiations, and discover that particular language characteristics are in contrast with face-to-face negotiations. Language patterns in the later part of electronic negotiation are highly indicative of the successful or unsuccessful outcome of the process, whereas in face-to-face negotiations, the first part of the negotiation is more useful for predicting the outcome. We formulate our problem in terms of text classification on negotiation segments of different sizes. The data are represented by a variety of linguistic features that capture the gist of the discussion: negotiation- or strategy-related words. We show that, as we consider ever smaller final segments of a negotiation transcript, the negotiation-related words become more indicative of the negotiation outcome, and give predictions with higher Accuracy than larger segments from the beginning of the process.",The Telling Tail: Signals of Success in Electronic Negotiation Texts,"We analyze the linguistic behaviour of participants in bilateral electronic negotiations, and discover that particular language characteristics are in contrast with face-to-face negotiations. Language patterns in the later part of electronic negotiation are highly indicative of the successful or unsuccessful outcome of the process, whereas in face-to-face negotiations, the first part of the negotiation is more useful for predicting the outcome. We formulate our problem in terms of text classification on negotiation segments of different sizes. The data are represented by a variety of linguistic features that capture the gist of the discussion: negotiation- or strategy-related words. We show that, as we consider ever smaller final segments of a negotiation transcript, the negotiation-related words become more indicative of the negotiation outcome, and give predictions with higher Accuracy than larger segments from the beginning of the process.",The Telling Tail: Signals of Success in Electronic Negotiation Texts,"We analyze the linguistic behaviour of participants in bilateral electronic negotiations, and discover that particular language characteristics are in contrast with face-to-face negotiations. Language patterns in the later part of electronic negotiation are highly indicative of the successful or unsuccessful outcome of the process, whereas in face-to-face negotiations, the first part of the negotiation is more useful for predicting the outcome. We formulate our problem in terms of text classification on negotiation segments of different sizes. The data are represented by a variety of linguistic features that capture the gist of the discussion: negotiation- or strategy-related words. We show that, as we consider ever smaller final segments of a negotiation transcript, the negotiation-related words become more indicative of the negotiation outcome, and give predictions with higher Accuracy than larger segments from the beginning of the process.",Partial support for this work came from the Natural Sciences and Engineering Research Council of Canada.,"The Telling Tail: Signals of Success in Electronic Negotiation Texts. 
We analyze the linguistic behaviour of participants in bilateral electronic negotiations, and discover that particular language characteristics are in contrast with face-to-face negotiations. Language patterns in the later part of electronic negotiation are highly indicative of the successful or unsuccessful outcome of the process, whereas in face-to-face negotiations, the first part of the negotiation is more useful for predicting the outcome. We formulate our problem in terms of text classification on negotiation segments of different sizes. The data are represented by a variety of linguistic features that capture the gist of the discussion: negotiation- or strategy-related words. We show that, as we consider ever smaller final segments of a negotiation transcript, the negotiation-related words become more indicative of the negotiation outcome, and give predictions with higher Accuracy than larger segments from the beginning of the process.",2008
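The experimental setup described above, classifying outcome from ever-smaller final segments of a transcript, can be sketched as follows; the transcripts, labels, and classifier are illustrative stand-ins, not the paper's negotiation data or feature lexicons:

```python
# Sketch of segment-based outcome classification on toy data: train on only the
# final fraction of each transcript and shrink that fraction.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical toy transcripts with success labels (1 = agreement reached).
transcripts = [
    "i propose a price of ten we can discuss delivery terms later agreed deal accepted thanks",
    "your offer is too low i cannot accept this we are done no deal",
    "let us split the difference that works for me we have a deal",
    "this is my final offer take it or leave it i refuse goodbye",
] * 10
labels = [1, 0, 1, 0] * 10

def final_segment(text: str, fraction: float) -> str:
    """Keep only the last `fraction` of the transcript's tokens."""
    tokens = text.split()
    k = max(1, int(len(tokens) * fraction))
    return " ".join(tokens[-k:])

for fraction in (1.0, 0.5, 0.25):
    segments = [final_segment(t, fraction) for t in transcripts]
    X = CountVectorizer(binary=True).fit_transform(segments)
    score = cross_val_score(LogisticRegression(), X, labels, cv=5).mean()
    print(f"final {int(fraction * 100)}% of transcript -> CV accuracy {score:.2f}")
```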
jana-biemann-2021-investigation,https://aclanthology.org/2021.privatenlp-1.4,1,,,,privacy_protection,,,"An Investigation towards Differentially Private Sequence Tagging in a Federated Framework. To build machine learning-based applications for sensitive domains like medical, legal, etc. where the digitized text contains private information, anonymization of text is required for preserving privacy. Sequence tagging, e.g. as used for Named Entity Recognition (NER), can help to detect private information. However, to train sequence tagging models, a sufficient amount of labeled data are required but for privacy-sensitive domains, such labeled data also can not be shared directly. In this paper, we investigate the applicability of a privacy-preserving framework for sequence tagging tasks, specifically NER. Hence, we analyze a framework for the NER task, which incorporates two levels of privacy protection. Firstly, we deploy a federated learning (FL) framework where the labeled data are neither shared with the centralized server nor with the peer clients. Secondly, we apply differential privacy (DP) while the models are being trained in each client instance. While both privacy measures are suitable for privacy-aware models, their combination results in unstable models. To our knowledge, this is the first study of its kind on privacy-aware sequence tagging models.",An Investigation towards Differentially Private Sequence Tagging in a Federated Framework,"To build machine learning-based applications for sensitive domains like medical, legal, etc. where the digitized text contains private information, anonymization of text is required for preserving privacy. Sequence tagging, e.g. as used for Named Entity Recognition (NER), can help to detect private information. However, to train sequence tagging models, a sufficient amount of labeled data are required but for privacy-sensitive domains, such labeled data also can not be shared directly. In this paper, we investigate the applicability of a privacy-preserving framework for sequence tagging tasks, specifically NER. Hence, we analyze a framework for the NER task, which incorporates two levels of privacy protection. Firstly, we deploy a federated learning (FL) framework where the labeled data are neither shared with the centralized server nor with the peer clients. Secondly, we apply differential privacy (DP) while the models are being trained in each client instance. While both privacy measures are suitable for privacy-aware models, their combination results in unstable models. To our knowledge, this is the first study of its kind on privacy-aware sequence tagging models.",An Investigation towards Differentially Private Sequence Tagging in a Federated Framework,"To build machine learning-based applications for sensitive domains like medical, legal, etc. where the digitized text contains private information, anonymization of text is required for preserving privacy. Sequence tagging, e.g. as used for Named Entity Recognition (NER), can help to detect private information. However, to train sequence tagging models, a sufficient amount of labeled data are required but for privacy-sensitive domains, such labeled data also can not be shared directly. In this paper, we investigate the applicability of a privacy-preserving framework for sequence tagging tasks, specifically NER. Hence, we analyze a framework for the NER task, which incorporates two levels of privacy protection. 
Firstly, we deploy a federated learning (FL) framework where the labeled data are neither shared with the centralized server nor with the peer clients. Secondly, we apply differential privacy (DP) while the models are being trained in each client instance. While both privacy measures are suitable for privacy-aware models, their combination results in unstable models. To our knowledge, this is the first study of its kind on privacy-aware sequence tagging models.","This research was funded by the German Federal Ministry of Education and Research (BMBF) as part of the HILANO project, ID 01IS18085C.","An Investigation towards Differentially Private Sequence Tagging in a Federated Framework. To build machine learning-based applications for sensitive domains like medical, legal, etc. where the digitized text contains private information, anonymization of text is required for preserving privacy. Sequence tagging, e.g. as used for Named Entity Recognition (NER), can help to detect private information. However, to train sequence tagging models, a sufficient amount of labeled data are required but for privacy-sensitive domains, such labeled data also can not be shared directly. In this paper, we investigate the applicability of a privacy-preserving framework for sequence tagging tasks, specifically NER. Hence, we analyze a framework for the NER task, which incorporates two levels of privacy protection. Firstly, we deploy a federated learning (FL) framework where the labeled data are neither shared with the centralized server nor with the peer clients. Secondly, we apply differential privacy (DP) while the models are being trained in each client instance. While both privacy measures are suitable for privacy-aware models, their combination results in unstable models. To our knowledge, this is the first study of its kind on privacy-aware sequence tagging models.",2021
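A toy sketch of how the two privacy layers described above interact, using a one-parameter linear model instead of a sequence tagger: each client clips and noises its local update before the server averages them. The clipping bound, noise scale, and learning rate are arbitrary illustrative values:

```python
# Federated averaging with per-client clipping and Gaussian noise (DP-SGD style),
# on a toy regression problem. This is not the paper's NER architecture.
import numpy as np

rng = np.random.default_rng(0)
CLIP, SIGMA, LR, ROUNDS = 1.0, 0.1, 0.1, 50

# Hypothetical per-client data: y = 2*x plus noise, split across 3 clients;
# raw data never leaves a client.
clients = []
for _ in range(3):
    x = rng.normal(size=100)
    clients.append((x, 2.0 * x + 0.1 * rng.normal(size=100)))

w = 0.0  # global model weight held by the server
for _ in range(ROUNDS):
    updates = []
    for x, y in clients:
        grad = np.mean((w * x - y) * x)               # local gradient of squared error
        update = -LR * grad
        update = float(np.clip(update, -CLIP, CLIP))  # bound each client's contribution
        update += rng.normal(scale=SIGMA * CLIP)      # Gaussian noise toward DP
        updates.append(update)
    w += np.mean(updates)                             # server only sees noisy updates

print(f"learned weight ~ {w:.2f} (true value 2.0)")
```

Raising SIGMA makes the averaged model visibly noisier, which loosely mirrors the instability the record above reports when DP and FL are combined.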
fang-etal-2020-video2commonsense,https://aclanthology.org/2020.emnlp-main.61,0,,,,,,,"Video2Commonsense: Generating Commonsense Descriptions to Enrich Video Captioning. Captioning is a crucial and challenging task for video understanding. In videos that involve active agents such as humans, the agent's actions can bring about myriad changes in the scene. Observable changes such as movements, manipulations, and transformations of the objects in the scene, are reflected in conventional video captioning. Unlike images, actions in videos are also inherently linked to social aspects such as intentions (why the action is taking place), effects (what changes due to the action), and attributes that describe the agent. Thus for video understanding, such as when captioning videos or when answering questions about videos, one must have an understanding of these commonsense aspects. We present the first work on generating commonsense captions directly from videos, to describe latent aspects such as intentions, effects, and attributes. We present a new dataset ""Video-to-Commonsense (V2C)"" that contains ∼ 9k videos of human agents performing various actions, annotated with 3 types of commonsense descriptions. Additionally we explore the use of open-ended video-based commonsense question answering (V2C-QA) as a way to enrich our captions. Both the generation task and the QA task can be used to enrich video captions.",{V}ideo2{C}ommonsense: Generating Commonsense Descriptions to Enrich Video Captioning,"Captioning is a crucial and challenging task for video understanding. In videos that involve active agents such as humans, the agent's actions can bring about myriad changes in the scene. Observable changes such as movements, manipulations, and transformations of the objects in the scene, are reflected in conventional video captioning. Unlike images, actions in videos are also inherently linked to social aspects such as intentions (why the action is taking place), effects (what changes due to the action), and attributes that describe the agent. Thus for video understanding, such as when captioning videos or when answering questions about videos, one must have an understanding of these commonsense aspects. We present the first work on generating commonsense captions directly from videos, to describe latent aspects such as intentions, effects, and attributes. We present a new dataset ""Video-to-Commonsense (V2C)"" that contains ∼ 9k videos of human agents performing various actions, annotated with 3 types of commonsense descriptions. Additionally we explore the use of open-ended video-based commonsense question answering (V2C-QA) as a way to enrich our captions. Both the generation task and the QA task can be used to enrich video captions.",Video2Commonsense: Generating Commonsense Descriptions to Enrich Video Captioning,"Captioning is a crucial and challenging task for video understanding. In videos that involve active agents such as humans, the agent's actions can bring about myriad changes in the scene. Observable changes such as movements, manipulations, and transformations of the objects in the scene, are reflected in conventional video captioning. Unlike images, actions in videos are also inherently linked to social aspects such as intentions (why the action is taking place), effects (what changes due to the action), and attributes that describe the agent. 
Thus for video understanding, such as when captioning videos or when answering questions about videos, one must have an understanding of these commonsense aspects. We present the first work on generating commonsense captions directly from videos, to describe latent aspects such as intentions, effects, and attributes. We present a new dataset ""Video-to-Commonsense (V2C)"" that contains ∼ 9k videos of human agents performing various actions, annotated with 3 types of commonsense descriptions. Additionally we explore the use of open-ended video-based commonsense question answering (V2C-QA) as a way to enrich our captions. Both the generation task and the QA task can be used to enrich video captions.","The authors acknowledge support from the NSF Robust Intelligence Program project #1816039, the DARPA KAIROS program (LESTAT project), the DARPA SAIL-ON program, and ONR award N00014-20-1-2332. ZF, TG, YY thank the organizers and the participants of the Telluride Neuromorphic Cognition Workshop, especially the Machine Common Sense (MCS) group.","Video2Commonsense: Generating Commonsense Descriptions to Enrich Video Captioning. Captioning is a crucial and challenging task for video understanding. In videos that involve active agents such as humans, the agent's actions can bring about myriad changes in the scene. Observable changes such as movements, manipulations, and transformations of the objects in the scene, are reflected in conventional video captioning. Unlike images, actions in videos are also inherently linked to social aspects such as intentions (why the action is taking place), effects (what changes due to the action), and attributes that describe the agent. Thus for video understanding, such as when captioning videos or when answering questions about videos, one must have an understanding of these commonsense aspects. We present the first work on generating commonsense captions directly from videos, to describe latent aspects such as intentions, effects, and attributes. We present a new dataset ""Video-to-Commonsense (V2C)"" that contains ∼ 9k videos of human agents performing various actions, annotated with 3 types of commonsense descriptions. Additionally we explore the use of open-ended video-based commonsense question answering (V2C-QA) as a way to enrich our captions. Both the generation task and the QA task can be used to enrich video captions.",2020
hossain-etal-2019-cnl,https://aclanthology.org/U19-1017,0,,,,,,,"CNL-ER: A Controlled Natural Language for Specifying and Verbalising Entity Relationship Models. The first step towards designing an information system is conceptual modelling where domain experts and knowledge engineers identify the necessary information together to build an information system. Entity relationship modelling is one of the most popular conceptual modelling techniques that represents an information system in terms of entities, attributes and relationships. Entity relationship models are constructed graphically but are often difficult to understand by domain experts. To overcome this problem, we suggest to verbalise these models in a controlled natural language. In this paper, we present CNL-ER, a controlled natural language for specifying and verbalising entity relationship (ER) models that not only solves the verbalisation problem for these models but also provides the benefits of automatic verification and validation, and semantic round-tripping which makes the communication process transparent between the domain experts and the knowledge engineers.",{CNL}-{ER}: A Controlled Natural Language for Specifying and Verbalising Entity Relationship Models,"The first step towards designing an information system is conceptual modelling where domain experts and knowledge engineers identify the necessary information together to build an information system. Entity relationship modelling is one of the most popular conceptual modelling techniques that represents an information system in terms of entities, attributes and relationships. Entity relationship models are constructed graphically but are often difficult to understand by domain experts. To overcome this problem, we suggest to verbalise these models in a controlled natural language. In this paper, we present CNL-ER, a controlled natural language for specifying and verbalising entity relationship (ER) models that not only solves the verbalisation problem for these models but also provides the benefits of automatic verification and validation, and semantic round-tripping which makes the communication process transparent between the domain experts and the knowledge engineers.",CNL-ER: A Controlled Natural Language for Specifying and Verbalising Entity Relationship Models,"The first step towards designing an information system is conceptual modelling where domain experts and knowledge engineers identify the necessary information together to build an information system. Entity relationship modelling is one of the most popular conceptual modelling techniques that represents an information system in terms of entities, attributes and relationships. Entity relationship models are constructed graphically but are often difficult to understand by domain experts. To overcome this problem, we suggest to verbalise these models in a controlled natural language. In this paper, we present CNL-ER, a controlled natural language for specifying and verbalising entity relationship (ER) models that not only solves the verbalisation problem for these models but also provides the benefits of automatic verification and validation, and semantic round-tripping which makes the communication process transparent between the domain experts and the knowledge engineers.",,"CNL-ER: A Controlled Natural Language for Specifying and Verbalising Entity Relationship Models. 
The first step towards designing an information system is conceptual modelling where domain experts and knowledge engineers identify the necessary information together to build an information system. Entity relationship modelling is one of the most popular conceptual modelling techniques that represents an information system in terms of entities, attributes and relationships. Entity relationship models are constructed graphically but are often difficult to understand by domain experts. To overcome this problem, we suggest to verbalise these models in a controlled natural language. In this paper, we present CNL-ER, a controlled natural language for specifying and verbalising entity relationship (ER) models that not only solves the verbalisation problem for these models but also provides the benefits of automatic verification and validation, and semantic round-tripping which makes the communication process transparent between the domain experts and the knowledge engineers.",2019
kumar-etal-2015-error,https://aclanthology.org/2015.mtsummit-papers.18,0,,,,,,,"Error-tolerant speech-to-speech translation. Recent efforts to improve two-way speech-to-speech translation (S2S) systems have focused on developing error detection and interactive error recovery capabilities. This article describes our current work on developing an eyes-free English-Iraqi Arabic S2S system that detects ASR errors and attempts to resolve them by eliciting user feedback. Here, we report improvements in performance across multiple system components (ASR, MT and error detection). We also present a controlled evaluation of the S2S system that quantifies the effect of error recovery on user effort and conversational goal achievement.",Error-tolerant speech-to-speech translation,"Recent efforts to improve two-way speech-to-speech translation (S2S) systems have focused on developing error detection and interactive error recovery capabilities. This article describes our current work on developing an eyes-free English-Iraqi Arabic S2S system that detects ASR errors and attempts to resolve them by eliciting user feedback. Here, we report improvements in performance across multiple system components (ASR, MT and error detection). We also present a controlled evaluation of the S2S system that quantifies the effect of error recovery on user effort and conversational goal achievement.",Error-tolerant speech-to-speech translation,"Recent efforts to improve two-way speech-to-speech translation (S2S) systems have focused on developing error detection and interactive error recovery capabilities. This article describes our current work on developing an eyes-free English-Iraqi Arabic S2S system that detects ASR errors and attempts to resolve them by eliciting user feedback. Here, we report improvements in performance across multiple system components (ASR, MT and error detection). We also present a controlled evaluation of the S2S system that quantifies the effect of error recovery on user effort and conversational goal achievement.",,"Error-tolerant speech-to-speech translation. Recent efforts to improve two-way speech-to-speech translation (S2S) systems have focused on developing error detection and interactive error recovery capabilities. This article describes our current work on developing an eyes-free English-Iraqi Arabic S2S system that detects ASR errors and attempts to resolve them by eliciting user feedback. Here, we report improvements in performance across multiple system components (ASR, MT and error detection). We also present a controlled evaluation of the S2S system that quantifies the effect of error recovery on user effort and conversational goal achievement.",2015
piccioni-zanchetta-2004-xterm,http://www.lrec-conf.org/proceedings/lrec2004/pdf/588.pdf,0,,,,,,,"XTERM: A Flexible Standard-Compliant XML-Based Termbase Management System. This paper introduces XTerm, a Termbase management system (TBMS) currently under development at the Terminology Center of the School for Interpreters and Translators of the University of Bologna. The system is designed to be ISO and XML compliant and to provide a friendly environment for the insertion and visualization of terminological data. It is also open to the future evolution of international standards since it does not rely on a closed set of hard-coded data representation models. In this paper we will first introduce the project ""Languages and Productive Activities"", then we will outline the main features of the XTerm TBMS: XTerm.NET, the graphical user interface (the main tool of the terminographer), XTerm.portal, the web application that provides online access to the termbase and two tools that provide innovative functionalities to the whole system: CARMA and COSY Generator.",{XTERM}: A Flexible Standard-Compliant {XML}-Based Termbase Management System,"This paper introduces XTerm, a Termbase management system (TBMS) currently under development at the Terminology Center of the School for Interpreters and Translators of the University of Bologna. The system is designed to be ISO and XML compliant and to provide a friendly environment for the insertion and visualization of terminological data. It is also open to the future evolution of international standards since it does not rely on a closed set of hard-coded data representation models. In this paper we will first introduce the project ""Languages and Productive Activities"", then we will outline the main features of the XTerm TBMS: XTerm.NET, the graphical user interface (the main tool of the terminographer), XTerm.portal, the web application that provides online access to the termbase and two tools that provide innovative functionalities to the whole system: CARMA and COSY Generator.",XTERM: A Flexible Standard-Compliant XML-Based Termbase Management System,"This paper introduces XTerm, a Termbase management system (TBMS) currently under development at the Terminology Center of the School for Interpreters and Translators of the University of Bologna. The system is designed to be ISO and XML compliant and to provide a friendly environment for the insertion and visualization of terminological data. It is also open to the future evolution of international standards since it does not rely on a closed set of hard-coded data representation models. In this paper we will first introduce the project ""Languages and Productive Activities"", then we will outline the main features of the XTerm TBMS: XTerm.NET, the graphical user interface (the main tool of the terminographer), XTerm.portal, the web application that provides online access to the termbase and two tools that provide innovative functionalities to the whole system: CARMA and COSY Generator.",,"XTERM: A Flexible Standard-Compliant XML-Based Termbase Management System. This paper introduces XTerm, a Termbase management system (TBMS) currently under development at the Terminology Center of the School for Interpreters and Translators of the University of Bologna. The system is designed to be ISO and XML compliant and to provide a friendly environment for the insertion and visualization of terminological data. 
It is also open to the future evolution of international standards since it does not rely on a closed set of hard-coded data representation models. In this paper we will first introduce the project ""Languages and Productive Activities"", then we will outline the main features of the XTerm TBMS: XTerm.NET, the graphical user interface (the main tool of the terminographer), XTerm.portal, the web application that provides online access to the termbase and two tools that provide innovative functionalities to the whole system: CARMA and COSY Generator.",2004
beigman-klebanov-etal-2010-vocabulary,https://aclanthology.org/P10-2047,0,,,,,,,"Vocabulary Choice as an Indicator of Perspective. We establish the following characteristics of the task of perspective classification: (a) using term frequencies in a document does not improve classification achieved with absence/presence features; (b) for datasets allowing the relevant comparisons, a small number of top features is found to be as effective as the full feature set and indispensable for the best achieved performance, testifying to the existence of perspective-specific keywords. We relate our findings to research on word frequency distributions and to discourse analytic studies of perspective.",Vocabulary Choice as an Indicator of Perspective,"We establish the following characteristics of the task of perspective classification: (a) using term frequencies in a document does not improve classification achieved with absence/presence features; (b) for datasets allowing the relevant comparisons, a small number of top features is found to be as effective as the full feature set and indispensable for the best achieved performance, testifying to the existence of perspective-specific keywords. We relate our findings to research on word frequency distributions and to discourse analytic studies of perspective.",Vocabulary Choice as an Indicator of Perspective,"We establish the following characteristics of the task of perspective classification: (a) using term frequencies in a document does not improve classification achieved with absence/presence features; (b) for datasets allowing the relevant comparisons, a small number of top features is found to be as effective as the full feature set and indispensable for the best achieved performance, testifying to the existence of perspective-specific keywords. We relate our findings to research on word frequency distributions and to discourse analytic studies of perspective.",,"Vocabulary Choice as an Indicator of Perspective. We establish the following characteristics of the task of perspective classification: (a) using term frequencies in a document does not improve classification achieved with absence/presence features; (b) for datasets allowing the relevant comparisons, a small number of top features is found to be as effective as the full feature set and indispensable for the best achieved performance, testifying to the existence of perspective-specific keywords. We relate our findings to research on word frequency distributions and to discourse analytic studies of perspective.",2010
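A small sketch of the comparison the record above describes, presence/absence features versus raw term frequencies plus a reduced top-k keyword set, on a toy two-perspective corpus that stands in for the paper's data:

```python
# Presence/absence vs. term-frequency features, and a "top keywords" reduction.
# Corpus, labels, and classifier are illustrative only.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

docs = (["we support the new policy it will really really help our schools"] * 10
        + ["we oppose the new policy it will badly hurt our schools"] * 10)
labels = [0] * 10 + [1] * 10

for name, binary in (("term frequency", False), ("presence/absence", True)):
    X = CountVectorizer(binary=binary).fit_transform(docs)
    acc = cross_val_score(LogisticRegression(), X, labels, cv=5).mean()
    print(f"{name}: CV accuracy {acc:.2f}")

# "Top features": keep only the words with the largest absolute weights.
vec = CountVectorizer(binary=True)
X = vec.fit_transform(docs)
clf = LogisticRegression().fit(X, labels)
top = np.argsort(-np.abs(clf.coef_[0]))[:3]
print("top keywords:", [vec.get_feature_names_out()[i] for i in top])
```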
chen-yang-2021-structure,https://aclanthology.org/2021.naacl-main.109,0,,,,,,,"Structure-Aware Abstractive Conversation Summarization via Discourse and Action Graphs. Abstractive conversation summarization has received much attention recently. However, these generated summaries often suffer from insufficient, redundant, or incorrect content, largely due to the unstructured and complex characteristics of human-human interactions. To this end, we propose to explicitly model the rich structures in conversations for more precise and accurate conversation summarization, by first incorporating discourse relations between utterances and action triples (""WHO-DOING-WHAT"") in utterances through structured graphs to better encode conversations, and then designing a multi-granularity decoder to generate summaries by combining all levels of information. Experiments show that our proposed models outperform state-of-the-art methods and generalize well in other domains in terms of both automatic evaluations and human judgments. We have publicly released our code at https://github.com/GT-SALT/Structure-Aware-BART.",Structure-Aware Abstractive Conversation Summarization via Discourse and Action Graphs,"Abstractive conversation summarization has received much attention recently. However, these generated summaries often suffer from insufficient, redundant, or incorrect content, largely due to the unstructured and complex characteristics of human-human interactions. To this end, we propose to explicitly model the rich structures in conversations for more precise and accurate conversation summarization, by first incorporating discourse relations between utterances and action triples (""WHO-DOING-WHAT"") in utterances through structured graphs to better encode conversations, and then designing a multi-granularity decoder to generate summaries by combining all levels of information. Experiments show that our proposed models outperform state-of-the-art methods and generalize well in other domains in terms of both automatic evaluations and human judgments. We have publicly released our code at https://github.com/GT-SALT/Structure-Aware-BART.",Structure-Aware Abstractive Conversation Summarization via Discourse and Action Graphs,"Abstractive conversation summarization has received much attention recently. However, these generated summaries often suffer from insufficient, redundant, or incorrect content, largely due to the unstructured and complex characteristics of human-human interactions. To this end, we propose to explicitly model the rich structures in conversations for more precise and accurate conversation summarization, by first incorporating discourse relations between utterances and action triples (""WHO-DOING-WHAT"") in utterances through structured graphs to better encode conversations, and then designing a multi-granularity decoder to generate summaries by combining all levels of information. Experiments show that our proposed models outperform state-of-the-art methods and generalize well in other domains in terms of both automatic evaluations and human judgments. We have publicly released our code at https://github.com/GT-SALT/Structure-Aware-BART.","We would like to thank the anonymous reviewers for their helpful comments, and the members of Georgia Tech SALT group for their feedback. This work is supported in part by grants from Google, Amazon and Salesforce.","Structure-Aware Abstractive Conversation Summarization via Discourse and Action Graphs. Abstractive conversation summarization has received much attention recently. 
However, these generated summaries often suffer from insufficient, redundant, or incorrect content, largely due to the unstructured and complex characteristics of human-human interactions. To this end, we propose to explicitly model the rich structures in conversations for more precise and accurate conversation summarization, by first incorporating discourse relations between utterances and action triples (""WHO-DOING-WHAT"") in utterances through structured graphs to better encode conversations, and then designing a multi-granularity decoder to generate summaries by combining all levels of information. Experiments show that our proposed models outperform state-of-the-art methods and generalize well in other domains in terms of both automatic evaluations and human judgments. We have publicly released our code at https://github.com/GT-SALT/Structure-Aware-BART.",2021
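One simple way to obtain "WHO-DOING-WHAT" action triples of the kind mentioned above is subject-verb-object extraction over a dependency parse; this sketch uses an off-the-shelf spaCy model (assumed to be installed) and does not reproduce the paper's discourse graphs or its structure-aware decoder:

```python
# Extract rough (WHO, DOING, WHAT) triples from dialogue turns via dependency
# parsing. Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def action_triples(utterance: str):
    """Return (subject, verb lemma, object) triples found in one utterance."""
    doc = nlp(utterance)
    triples = []
    for token in doc:
        if token.pos_ != "VERB":
            continue
        subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
        objects = [c for c in token.children if c.dep_ in ("dobj", "obj", "attr", "dative")]
        for s in subjects:
            for o in objects:
                triples.append((s.text, token.lemma_, o.text))
    return triples

dialogue = [
    "Amanda: I baked cookies yesterday.",
    "Jerry: Can you bring some to the office?",
]
for turn in dialogue:
    print(turn, "->", action_triples(turn))
```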
san-segundo-etal-2001-telephone,https://aclanthology.org/W01-1619,0,,,,,,,"A Telephone-Based Railway Information System for Spanish: Development of a Methodology for Spoken Dialogue Design. This methodology is similar to the Life-Cycle Model presented in (Bernsen, 1998) and (www.disc2.dk), but we incorporate the step ""design by observation"" where human-human interactions are analysed and we present measures to evaluate the different design alternatives at every step of the methodology. ",A Telephone-Based Railway Information System for {S}panish: Development of a Methodology for Spoken Dialogue Design,"This methodology is similar to the Life-Cycle Model presented in (Bernsen, 1998) and (www.disc2.dk), but we incorporate the step ""design by observation"" where human-human interactions are analysed and we present measures to evaluate the different design alternatives at every step of the methodology. ",A Telephone-Based Railway Information System for Spanish: Development of a Methodology for Spoken Dialogue Design,"This methodology is similar to the Life-Cycle Model presented in (Bernsen, 1998) and (www.disc2.dk), but we incorporate the step ""design by observation"" where human-human interactions are analysed and we present measures to evaluate the different design alternatives at every step of the methodology. ",,"A Telephone-Based Railway Information System for Spanish: Development of a Methodology for Spoken Dialogue Design. This methodology is similar to the Life-Cycle Model presented in (Bernsen, 1998) and (www.disc2.dk), but we incorporate the step ""design by observation"" where human-human interactions are analysed and we present measures to evaluate the different design alternatives at every step of the methodology. ",2001
zhang-clark-2010-fast,https://aclanthology.org/D10-1082,0,,,,,,,"A Fast Decoder for Joint Word Segmentation and POS-Tagging Using a Single Discriminative Model. We show that the standard beam-search algorithm can be used as an efficient decoder for the global linear model of Zhang and Clark (2008) for joint word segmentation and POS-tagging, achieving a significant speed improvement. Such decoding is enabled by: (1) separating full word features from partial word features so that feature templates can be instantiated incrementally, according to whether the current character is separated or appended; (2) deciding the POS-tag of a potential word when its first character is processed. Early-update is used with perceptron training so that the linear model gives a high score to a correct partial candidate as well as a full output. Effective scoring of partial structures allows the decoder to give high accuracy with a small beam-size of 16. In our 10-fold crossvalidation experiments with the Chinese Treebank, our system performed over 10 times as fast as Zhang and Clark (2008) with little accuracy loss. The accuracy of our system on the standard CTB 5 test was competitive with the best in the literature.",A Fast Decoder for Joint Word Segmentation and {POS}-Tagging Using a Single Discriminative Model,"We show that the standard beam-search algorithm can be used as an efficient decoder for the global linear model of Zhang and Clark (2008) for joint word segmentation and POS-tagging, achieving a significant speed improvement. Such decoding is enabled by: (1) separating full word features from partial word features so that feature templates can be instantiated incrementally, according to whether the current character is separated or appended; (2) deciding the POS-tag of a potential word when its first character is processed. Early-update is used with perceptron training so that the linear model gives a high score to a correct partial candidate as well as a full output. Effective scoring of partial structures allows the decoder to give high accuracy with a small beam-size of 16. In our 10-fold crossvalidation experiments with the Chinese Treebank, our system performed over 10 times as fast as Zhang and Clark (2008) with little accuracy loss. The accuracy of our system on the standard CTB 5 test was competitive with the best in the literature.",A Fast Decoder for Joint Word Segmentation and POS-Tagging Using a Single Discriminative Model,"We show that the standard beam-search algorithm can be used as an efficient decoder for the global linear model of Zhang and Clark (2008) for joint word segmentation and POS-tagging, achieving a significant speed improvement. Such decoding is enabled by: (1) separating full word features from partial word features so that feature templates can be instantiated incrementally, according to whether the current character is separated or appended; (2) deciding the POS-tag of a potential word when its first character is processed. Early-update is used with perceptron training so that the linear model gives a high score to a correct partial candidate as well as a full output. Effective scoring of partial structures allows the decoder to give high accuracy with a small beam-size of 16. In our 10-fold crossvalidation experiments with the Chinese Treebank, our system performed over 10 times as fast as Zhang and Clark (2008) with little accuracy loss. 
The accuracy of our system on the standard CTB 5 test was competitive with the best in the literature.","We thank Canasai Kruengkrai for discussion on efficiency issues, and the anonymous reviewers for their suggestions. Yue Zhang and Stephen Clark are supported by the European Union Seventh Framework Programme (FP7-ICT-2009-4) under grant agreement no. 247762.","A Fast Decoder for Joint Word Segmentation and POS-Tagging Using a Single Discriminative Model. We show that the standard beam-search algorithm can be used as an efficient decoder for the global linear model of Zhang and Clark (2008) for joint word segmentation and POS-tagging, achieving a significant speed improvement. Such decoding is enabled by: (1) separating full word features from partial word features so that feature templates can be instantiated incrementally, according to whether the current character is separated or appended; (2) deciding the POS-tag of a potential word when its first character is processed. Early-update is used with perceptron training so that the linear model gives a high score to a correct partial candidate as well as a full output. Effective scoring of partial structures allows the decoder to give high accuracy with a small beam-size of 16. In our 10-fold crossvalidation experiments with the Chinese Treebank, our system performed over 10 times as fast as Zhang and Clark (2008) with little accuracy loss. The accuracy of our system on the standard CTB 5 test was competitive with the best in the literature.",2010
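The decoding strategy described above, append-or-separate decisions per character with the tag fixed at a word's first character and a beam of 16, can be sketched with a hand-made scoring function standing in for the learned global linear model:

```python
# Toy beam-search decoder for joint segmentation and tagging. The scorer below
# is a hand-made placeholder, not the paper's discriminative model or features.
BEAM = 16
TAGS = ("N", "V")

def score(words, tags):
    """Stand-in for a learned linear model: prefer two-character nouns."""
    s = 0.0
    for w, t in zip(words, tags):
        s += 1.0 if (len(w) == 2 and t == "N") else -0.5
    return s

def decode(chars):
    # A hypothesis is (completed_words, their_tags, current_word, current_tag);
    # the tag of a word is decided as soon as its first character is seen.
    beam = [((), (), chars[0], t) for t in TAGS]
    for ch in chars[1:]:
        candidates = []
        for words, tags, cur, cur_tag in beam:
            # Action 1: append the character to the current partial word.
            candidates.append((words, tags, cur + ch, cur_tag))
            # Action 2: separate -- close the current word and start a new one.
            for t in TAGS:
                candidates.append((words + (cur,), tags + (cur_tag,), ch, t))
        # Keep the BEAM highest-scoring hypotheses (partial word included).
        candidates.sort(key=lambda h: score(h[0] + (h[2],), h[1] + (h[3],)), reverse=True)
        beam = candidates[:BEAM]
    best = max(beam, key=lambda h: score(h[0] + (h[2],), h[1] + (h[3],)))
    words, tags, cur, cur_tag = best
    return list(zip(words + (cur,), tags + (cur_tag,)))

print(decode("ABCD"))  # -> [('AB', 'N'), ('CD', 'N')] under the toy scorer
```

Scoring partial words inside the loop is what lets a small beam keep good partial candidates, which is the property the early-update training in the record above is meant to reinforce.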
eskander-etal-2013-automatic-correction,https://aclanthology.org/W13-2301,0,,,,,,,"Automatic Correction and Extension of Morphological Annotations. For languages with complex morphologies, limited resources and tools, and/or lack of standard grammars, developing annotated resources can be a challenging task. Annotated resources developed under time/money constraints for such languages tend to tradeoff depth of representation with degree of noise. We present two methods for automatic correction and extension of morphological annotations, and demonstrate their success on three divergent Egyptian Arabic corpora. 2 Habash and Rambow (2006) reported that a state-of-the-art MSA morphological analyzer has only 60% coverage of Levantine Arabic verb forms. 3 Arabic orthographic transliteration is presented in the Habash-Soudi-Buckwalter scheme (Habash et al., 2007): A b t θ j H x dð r z s š S D TĎ ς γ f q k l m n hw y",Automatic Correction and Extension of Morphological Annotations,"For languages with complex morphologies, limited resources and tools, and/or lack of standard grammars, developing annotated resources can be a challenging task. Annotated resources developed under time/money constraints for such languages tend to tradeoff depth of representation with degree of noise. We present two methods for automatic correction and extension of morphological annotations, and demonstrate their success on three divergent Egyptian Arabic corpora. 2 Habash and Rambow (2006) reported that a state-of-the-art MSA morphological analyzer has only 60% coverage of Levantine Arabic verb forms. 3 Arabic orthographic transliteration is presented in the Habash-Soudi-Buckwalter scheme (Habash et al., 2007): A b t θ j H x dð r z s š S D TĎ ς γ f q k l m n hw y",Automatic Correction and Extension of Morphological Annotations,"For languages with complex morphologies, limited resources and tools, and/or lack of standard grammars, developing annotated resources can be a challenging task. Annotated resources developed under time/money constraints for such languages tend to tradeoff depth of representation with degree of noise. We present two methods for automatic correction and extension of morphological annotations, and demonstrate their success on three divergent Egyptian Arabic corpora. 2 Habash and Rambow (2006) reported that a state-of-the-art MSA morphological analyzer has only 60% coverage of Levantine Arabic verb forms. 3 Arabic orthographic transliteration is presented in the Habash-Soudi-Buckwalter scheme (Habash et al., 2007): A b t θ j H x dð r z s š S D TĎ ς γ f q k l m n hw y","This paper is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under contracts No. HR0011-12-C-0014 and HR0011-11-C-0145. Any opinions, findings and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of DARPA. We also would like to thank Emad Mohamed and Kemal Oflazer for providing us with the CMUEAC corpus. We thank Ryan Roth for help with MADA-ARZ. Finally, we thank Owen Rambow, Mona Diab and Warren Churchill for helpful discussions.","Automatic Correction and Extension of Morphological Annotations. For languages with complex morphologies, limited resources and tools, and/or lack of standard grammars, developing annotated resources can be a challenging task. Annotated resources developed under time/money constraints for such languages tend to tradeoff depth of representation with degree of noise. 
We present two methods for automatic correction and extension of morphological annotations, and demonstrate their success on three divergent Egyptian Arabic corpora. 2 Habash and Rambow (2006) reported that a state-of-the-art MSA morphological analyzer has only 60% coverage of Levantine Arabic verb forms. 3 Arabic orthographic transliteration is presented in the Habash-Soudi-Buckwalter scheme (Habash et al., 2007): A b t θ j H x dð r z s š S D TĎ ς γ f q k l m n hw y",2013
sokolova-schramm-2011-building,https://aclanthology.org/R11-1111,1,,,,health,,,Building a Patient-based Ontology for User-written Web Messages. We introduce an ontology that is representative of health discussions and vocabulary used by the general public. The ontology structure is built upon general categories of information that patients use when describing their health in clinical encounters. The pilot study shows that the general structure makes the ontology useful in text mining of social networking web sites.,Building a Patient-based Ontology for User-written Web Messages,We introduce an ontology that is representative of health discussions and vocabulary used by the general public. The ontology structure is built upon general categories of information that patients use when describing their health in clinical encounters. The pilot study shows that the general structure makes the ontology useful in text mining of social networking web sites.,Building a Patient-based Ontology for User-written Web Messages,We introduce an ontology that is representative of health discussions and vocabulary used by the general public. The ontology structure is built upon general categories of information that patients use when describing their health in clinical encounters. The pilot study shows that the general structure makes the ontology useful in text mining of social networking web sites.,This work is in part funded by a NSERC Discovery grant available to the first author and The Ottawa Hospital Academic Medical Organization to the second author.,Building a Patient-based Ontology for User-written Web Messages. We introduce an ontology that is representative of health discussions and vocabulary used by the general public. The ontology structure is built upon general categories of information that patients use when describing their health in clinical encounters. The pilot study shows that the general structure makes the ontology useful in text mining of social networking web sites.,2011
l-2014-keynote,https://aclanthology.org/W14-5110,0,,,,,,,Keynote Lecture 2: Text Analysis for identifying Entities and their mentions in Indian languages. The talk deals with the analysis of text at syntactic-semantic level to identify a common feature set which can work across various Indian languages for recognizing named entities and their mentions. The development of corpora and the method adopted to develop each module is discussed. The talk includes the evaluation of the common feature set using a statistical method which gives acceptable levels of recall and precision.,Keynote Lecture 2: Text Analysis for identifying Entities and their mentions in {I}ndian languages,The talk deals with the analysis of text at syntactic-semantic level to identify a common feature set which can work across various Indian languages for recognizing named entities and their mentions. The development of corpora and the method adopted to develop each module is discussed. The talk includes the evaluation of the common feature set using a statistical method which gives acceptable levels of recall and precision.,Keynote Lecture 2: Text Analysis for identifying Entities and their mentions in Indian languages,The talk deals with the analysis of text at syntactic-semantic level to identify a common feature set which can work across various Indian languages for recognizing named entities and their mentions. The development of corpora and the method adopted to develop each module is discussed. The talk includes the evaluation of the common feature set using a statistical method which gives acceptable levels of recall and precision.,,Keynote Lecture 2: Text Analysis for identifying Entities and their mentions in Indian languages. The talk deals with the analysis of text at syntactic-semantic level to identify a common feature set which can work across various Indian languages for recognizing named entities and their mentions. The development of corpora and the method adopted to develop each module is discussed. The talk includes the evaluation of the common feature set using a statistical method which gives acceptable levels of recall and precision.,2014
nn-2007-briefly-noted,https://aclanthology.org/J07-4008,0,,,,,,,"Briefly Noted/Publications Received. This comprehensive NLP textbook is strongly algorithm-oriented and designed for talented computer programmers who might or might not be linguists. The book occupies a market niche in between that of Jurafsky and Martin (2008) and my own humble effort (Covington 1994); it resembles the latter in approach and the former in scope. Perhaps more than either of those, Nugues's book is also useful to working professionals as a handbook of techniques and algorithms. Everything is here-everything, that is, except speech synthesis and recognition; phonetics receives only a four-page summary. Those wanting to start an NLP course by covering phonetics in some depth should consider Coleman (2005) as well as Jurafsky and Martin (2008). After a brief overview, Nugues covers corpus linguistics, markup languages, text statistics, morphology, part-of-speech tagging (two ways), parsing (several ways), semantics, and discourse. ""Neat"" and ""scruffy"" approaches are deftly interleaved and compared. Unification-based grammar, event semantics, and tools such as WordNet and the Penn Treebank are covered in some detail. The syntax section includes dependency grammar and even the very recent work of Nivre (2006), as well as partial parsing and statistical approaches. Many important algorithms are presented ready to run, or nearly so, as Prolog or Perl code. If, for example, you want to build a Cocke-Kasami-Younger parser, this is the place to look for directions. Explanations are lucid and to-the-point. Here is an example. Nugues is discussing the fact that, if you sample a corpus for n-grams, some will not occur in your sample at all, but it would be a mistake to consider the unseen ones to be infinitely rare (frequency 0). Thus the counts need to be adjusted: Good-Turing estimation. .. reestimates the counts of the n-grams observed in the corpus by discounting them, and it shifts the probability mass it has shaved to the unseen bigrams.",Briefly Noted/Publications Received,"This comprehensive NLP textbook is strongly algorithm-oriented and designed for talented computer programmers who might or might not be linguists. The book occupies a market niche in between that of Jurafsky and Martin (2008) and my own humble effort (Covington 1994); it resembles the latter in approach and the former in scope. Perhaps more than either of those, Nugues's book is also useful to working professionals as a handbook of techniques and algorithms. Everything is here-everything, that is, except speech synthesis and recognition; phonetics receives only a four-page summary. Those wanting to start an NLP course by covering phonetics in some depth should consider Coleman (2005) as well as Jurafsky and Martin (2008). After a brief overview, Nugues covers corpus linguistics, markup languages, text statistics, morphology, part-of-speech tagging (two ways), parsing (several ways), semantics, and discourse. ""Neat"" and ""scruffy"" approaches are deftly interleaved and compared. Unification-based grammar, event semantics, and tools such as WordNet and the Penn Treebank are covered in some detail. The syntax section includes dependency grammar and even the very recent work of Nivre (2006), as well as partial parsing and statistical approaches. Many important algorithms are presented ready to run, or nearly so, as Prolog or Perl code. If, for example, you want to build a Cocke-Kasami-Younger parser, this is the place to look for directions. 
Explanations are lucid and to-the-point. Here is an example. Nugues is discussing the fact that, if you sample a corpus for n-grams, some will not occur in your sample at all, but it would be a mistake to consider the unseen ones to be infinitely rare (frequency 0). Thus the counts need to be adjusted: Good-Turing estimation. .. reestimates the counts of the n-grams observed in the corpus by discounting them, and it shifts the probability mass it has shaved to the unseen bigrams.",Briefly Noted/Publications Received,"This comprehensive NLP textbook is strongly algorithm-oriented and designed for talented computer programmers who might or might not be linguists. The book occupies a market niche in between that of Jurafsky and Martin (2008) and my own humble effort (Covington 1994); it resembles the latter in approach and the former in scope. Perhaps more than either of those, Nugues's book is also useful to working professionals as a handbook of techniques and algorithms. Everything is here-everything, that is, except speech synthesis and recognition; phonetics receives only a four-page summary. Those wanting to start an NLP course by covering phonetics in some depth should consider Coleman (2005) as well as Jurafsky and Martin (2008). After a brief overview, Nugues covers corpus linguistics, markup languages, text statistics, morphology, part-of-speech tagging (two ways), parsing (several ways), semantics, and discourse. ""Neat"" and ""scruffy"" approaches are deftly interleaved and compared. Unification-based grammar, event semantics, and tools such as WordNet and the Penn Treebank are covered in some detail. The syntax section includes dependency grammar and even the very recent work of Nivre (2006), as well as partial parsing and statistical approaches. Many important algorithms are presented ready to run, or nearly so, as Prolog or Perl code. If, for example, you want to build a Cocke-Kasami-Younger parser, this is the place to look for directions. Explanations are lucid and to-the-point. Here is an example. Nugues is discussing the fact that, if you sample a corpus for n-grams, some will not occur in your sample at all, but it would be a mistake to consider the unseen ones to be infinitely rare (frequency 0). Thus the counts need to be adjusted: Good-Turing estimation. .. reestimates the counts of the n-grams observed in the corpus by discounting them, and it shifts the probability mass it has shaved to the unseen bigrams.",,"Briefly Noted/Publications Received. This comprehensive NLP textbook is strongly algorithm-oriented and designed for talented computer programmers who might or might not be linguists. The book occupies a market niche in between that of Jurafsky and Martin (2008) and my own humble effort (Covington 1994); it resembles the latter in approach and the former in scope. Perhaps more than either of those, Nugues's book is also useful to working professionals as a handbook of techniques and algorithms. Everything is here-everything, that is, except speech synthesis and recognition; phonetics receives only a four-page summary. Those wanting to start an NLP course by covering phonetics in some depth should consider Coleman (2005) as well as Jurafsky and Martin (2008). After a brief overview, Nugues covers corpus linguistics, markup languages, text statistics, morphology, part-of-speech tagging (two ways), parsing (several ways), semantics, and discourse. ""Neat"" and ""scruffy"" approaches are deftly interleaved and compared. 
Unification-based grammar, event semantics, and tools such as WordNet and the Penn Treebank are covered in some detail. The syntax section includes dependency grammar and even the very recent work of Nivre (2006), as well as partial parsing and statistical approaches. Many important algorithms are presented ready to run, or nearly so, as Prolog or Perl code. If, for example, you want to build a Cocke-Kasami-Younger parser, this is the place to look for directions. Explanations are lucid and to-the-point. Here is an example. Nugues is discussing the fact that, if you sample a corpus for n-grams, some will not occur in your sample at all, but it would be a mistake to consider the unseen ones to be infinitely rare (frequency 0). Thus the counts need to be adjusted: Good-Turing estimation. .. reestimates the counts of the n-grams observed in the corpus by discounting them, and it shifts the probability mass it has shaved to the unseen bigrams.",2007
fischer-1997-formal,https://aclanthology.org/W97-0804,0,,,,,,,"Formal redundancy and consistency checking rules for the lexical database WordNet 1.5. In a manually built-up semantic net in which not the concept definitions automatically determine the position of the concepts in the net, but rather the links coded by the lexicographers, the formal properties of the encoded attributes and relations provide necessary but not sufficient conditions to support maintenance of internal consistency and avoidance of redundancy. According to our experience the potential of this methodology has not yet been fully exploited due to lack of understanding of applicable formal rules, or due to inflexibility of available software tools. Based on a more comprehensive inquiry performed on the lexical database WordNet TM 1.5, this paper presents a selection of pertinent checking rules and the results of their application to WordNet 1.5. Transferable insights are: 1. Semantic relations which are closely related but differing in a checkable property, should be differentiated. 2. Inferable relations-such as the transitive closure of a hierarchical relation or semantic relations induced by lexical ones-need to be taken into account when checking real relations, i.e. directly stored relations. 3. A semantic net needs proper representation of lexical gaps. A disjunctive hypernym, implemented as a set of hypernyms, is considered harmful.",Formal redundancy and consistency checking rules for the lexical database {W}ord{N}et 1.5,"In a manually built-up semantic net in which not the concept definitions automatically determine the position of the concepts in the net, but rather the links coded by the lexicographers, the formal properties of the encoded attributes and relations provide necessary but not sufficient conditions to support maintenance of internal consistency and avoidance of redundancy. According to our experience the potential of this methodology has not yet been fully exploited due to lack of understanding of applicable formal rules, or due to inflexibility of available software tools. Based on a more comprehensive inquiry performed on the lexical database WordNet TM 1.5, this paper presents a selection of pertinent checking rules and the results of their application to WordNet 1.5. Transferable insights are: 1. Semantic relations which are closely related but differing in a checkable property, should be differentiated. 2. Inferable relations-such as the transitive closure of a hierarchical relation or semantic relations induced by lexical ones-need to be taken into account when checking real relations, i.e. directly stored relations. 3. A semantic net needs proper representation of lexical gaps. A disjunctive hypernym, implemented as a set of hypernyms, is considered harmful.",Formal redundancy and consistency checking rules for the lexical database WordNet 1.5,"In a manually built-up semantic net in which not the concept definitions automatically determine the position of the concepts in the net, but rather the links coded by the lexicographers, the formal properties of the encoded attributes and relations provide necessary but not sufficient conditions to support maintenance of internal consistency and avoidance of redundancy. According to our experience the potential of this methodology has not yet been fully exploited due to lack of understanding of applicable formal rules, or due to inflexibility of available software tools. 
Based on a more comprehensive inquiry performed on the lexical database WordNet TM 1.5, this paper presents a selection of pertinent checking rules and the results of their application to WordNet 1.5. Transferable insights are: 1. Semantic relations which are closely related but differing in a checkable property, should be differentiated. 2. Inferable relations-such as the transitive closure of a hierarchical relation or semantic relations induced by lexical ones-need to be taken into account when checking real relations, i.e. directly stored relations. 3. A semantic net needs proper representation of lexical gaps. A disjunctive hypernym, implemented as a set of hypernyms, is considered harmful.","I am indebted to Melina Alexa and John Bateman for encouraging this work, and to them both and Wiebke Möhr, Renato Reinau, Lothar Rostek, and Ingrid Schmidt for valuable help to improve this paper.","Formal redundancy and consistency checking rules for the lexical database WordNet 1.5. In a manually built-up semantic net in which not the concept definitions automatically determine the position of the concepts in the net, but rather the links coded by the lexicographers, the formal properties of the encoded attributes and relations provide necessary but not sufficient conditions to support maintenance of internal consistency and avoidance of redundancy. According to our experience the potential of this methodology has not yet been fully exploited due to lack of understanding of applicable formal rules, or due to inflexibility of available software tools. Based on a more comprehensive inquiry performed on the lexical database WordNet TM 1.5, this paper presents a selection of pertinent checking rules and the results of their application to WordNet 1.5. Transferable insights are: 1. Semantic relations which are closely related but differing in a checkable property, should be differentiated. 2. Inferable relations-such as the transitive closure of a hierarchical relation or semantic relations induced by lexical ones-need to be taken into account when checking real relations, i.e. directly stored relations. 3. A semantic net needs proper representation of lexical gaps. A disjunctive hypernym, implemented as a set of hypernyms, is considered harmful.",1997
barrena-etal-2016-alleviating,https://aclanthology.org/P16-1179,0,,,,,,,"Alleviating Poor Context with Background Knowledge for Named Entity Disambiguation. Named Entity Disambiguation (NED) algorithms disambiguate mentions of named entities with respect to a knowledge-base, but sometimes the context might be poor or misleading. In this paper we introduce the acquisition of two kinds of background information to alleviate that problem: entity similarity and selectional preferences for syntactic positions. We show, using a generative Naïve Bayes model for NED, that the additional sources of context are complementary, and improve results in the CoNLL 2003 and TAC KBP DEL 2014 datasets, yielding the third best and the best results, respectively. We provide examples and analysis which show the value of the acquired background information.",Alleviating Poor Context with Background Knowledge for Named Entity Disambiguation,"Named Entity Disambiguation (NED) algorithms disambiguate mentions of named entities with respect to a knowledge-base, but sometimes the context might be poor or misleading. In this paper we introduce the acquisition of two kinds of background information to alleviate that problem: entity similarity and selectional preferences for syntactic positions. We show, using a generative Naïve Bayes model for NED, that the additional sources of context are complementary, and improve results in the CoNLL 2003 and TAC KBP DEL 2014 datasets, yielding the third best and the best results, respectively. We provide examples and analysis which show the value of the acquired background information.",Alleviating Poor Context with Background Knowledge for Named Entity Disambiguation,"Named Entity Disambiguation (NED) algorithms disambiguate mentions of named entities with respect to a knowledge-base, but sometimes the context might be poor or misleading. In this paper we introduce the acquisition of two kinds of background information to alleviate that problem: entity similarity and selectional preferences for syntactic positions. We show, using a generative Naïve Bayes model for NED, that the additional sources of context are complementary, and improve results in the CoNLL 2003 and TAC KBP DEL 2014 datasets, yielding the third best and the best results, respectively. We provide examples and analysis which show the value of the acquired background information.","We thank the reviewers for their suggestions. This work was partially funded by MINECO (TUNER project, TIN2015-65308-C5-1-R). The IXA group","Alleviating Poor Context with Background Knowledge for Named Entity Disambiguation. Named Entity Disambiguation (NED) algorithms disambiguate mentions of named entities with respect to a knowledge-base, but sometimes the context might be poor or misleading. In this paper we introduce the acquisition of two kinds of background information to alleviate that problem: entity similarity and selectional preferences for syntactic positions. We show, using a generative Naïve Bayes model for NED, that the additional sources of context are complementary, and improve results in the CoNLL 2003 and TAC KBP DEL 2014 datasets, yielding the third best and the best results, respectively. We provide examples and analysis which show the value of the acquired background information.",2016
xie-etal-2021-zjuklab,https://aclanthology.org/2021.semeval-1.108,0,,,,,,,"ZJUKLAB at SemEval-2021 Task 4: Negative Augmentation with Language Model for Reading Comprehension of Abstract Meaning. This paper presents our systems for the three Subtasks of SemEval Task 4: Reading Comprehension of Abstract Meaning (ReCAM). We explain the algorithms used to learn our models and the process of tuning the algorithms and selecting the best model. Inspired by the similarity of the ReCAM task and the language pre-training, we propose a simple yet effective technology, namely, negative augmentation with language model. Evaluation results demonstrate the effectiveness of our proposed approach. Our models achieve the 4th rank on both official test sets of Subtask 1 and Subtask 2 with an accuracy of 87.9% and an accuracy of 92.8%, respectively. We further conduct comprehensive model analysis and observe interesting error cases, which may promote future research.",{ZJUKLAB} at {S}em{E}val-2021 Task 4: Negative Augmentation with Language Model for Reading Comprehension of Abstract Meaning,"This paper presents our systems for the three Subtasks of SemEval Task 4: Reading Comprehension of Abstract Meaning (ReCAM). We explain the algorithms used to learn our models and the process of tuning the algorithms and selecting the best model. Inspired by the similarity of the ReCAM task and the language pre-training, we propose a simple yet effective technology, namely, negative augmentation with language model. Evaluation results demonstrate the effectiveness of our proposed approach. Our models achieve the 4th rank on both official test sets of Subtask 1 and Subtask 2 with an accuracy of 87.9% and an accuracy of 92.8%, respectively. We further conduct comprehensive model analysis and observe interesting error cases, which may promote future research.",ZJUKLAB at SemEval-2021 Task 4: Negative Augmentation with Language Model for Reading Comprehension of Abstract Meaning,"This paper presents our systems for the three Subtasks of SemEval Task 4: Reading Comprehension of Abstract Meaning (ReCAM). We explain the algorithms used to learn our models and the process of tuning the algorithms and selecting the best model. Inspired by the similarity of the ReCAM task and the language pre-training, we propose a simple yet effective technology, namely, negative augmentation with language model. Evaluation results demonstrate the effectiveness of our proposed approach. Our models achieve the 4th rank on both official test sets of Subtask 1 and Subtask 2 with an accuracy of 87.9% and an accuracy of 92.8%, respectively. We further conduct comprehensive model analysis and observe interesting error cases, which may promote future research.",We want to express gratitude to the anonymous reviewers for their hard work and kind comments. This work is funded by 2018YFB1402800/NSFC91846204/NSFCU19B2027.,"ZJUKLAB at SemEval-2021 Task 4: Negative Augmentation with Language Model for Reading Comprehension of Abstract Meaning. This paper presents our systems for the three Subtasks of SemEval Task 4: Reading Comprehension of Abstract Meaning (ReCAM). We explain the algorithms used to learn our models and the process of tuning the algorithms and selecting the best model. Inspired by the similarity of the ReCAM task and the language pre-training, we propose a simple yet effective technology, namely, negative augmentation with language model. Evaluation results demonstrate the effectiveness of our proposed approach. 
Our models achieve the 4th rank on both official test sets of Subtask 1 and Subtask 2 with an accuracy of 87.9% and an accuracy of 92.8%, respectively. We further conduct comprehensive model analysis and observe interesting error cases, which may promote future research.",2021
gimenez-marquez-2006-low,https://aclanthology.org/P06-2037,0,,,,,,,"Low-Cost Enrichment of Spanish WordNet with Automatically Translated Glosses: Combining General and Specialized Models. This paper studies the enrichment of Spanish WordNet with synset glosses automatically obtained from the English Word-Net glosses using a phrase-based Statistical Machine Translation system. We construct the English-Spanish translation system from a parallel corpus of proceedings of the European Parliament, and study how to adapt statistical models to the domain of dictionary definitions. We build specialized language and translation models from a small set of parallel definitions and experiment with robust manners to combine them. A statistically significant increase in performance is obtained. The best system is finally used to generate a definition for all Spanish synsets, which are currently ready for a manual revision. As a complementary issue, we analyze the impact of the amount of in-domain data needed to improve a system trained entirely on out-of-domain data.",Low-Cost Enrichment of {S}panish {W}ord{N}et with Automatically Translated Glosses: Combining General and Specialized Models,"This paper studies the enrichment of Spanish WordNet with synset glosses automatically obtained from the English Word-Net glosses using a phrase-based Statistical Machine Translation system. We construct the English-Spanish translation system from a parallel corpus of proceedings of the European Parliament, and study how to adapt statistical models to the domain of dictionary definitions. We build specialized language and translation models from a small set of parallel definitions and experiment with robust manners to combine them. A statistically significant increase in performance is obtained. The best system is finally used to generate a definition for all Spanish synsets, which are currently ready for a manual revision. As a complementary issue, we analyze the impact of the amount of in-domain data needed to improve a system trained entirely on out-of-domain data.",Low-Cost Enrichment of Spanish WordNet with Automatically Translated Glosses: Combining General and Specialized Models,"This paper studies the enrichment of Spanish WordNet with synset glosses automatically obtained from the English Word-Net glosses using a phrase-based Statistical Machine Translation system. We construct the English-Spanish translation system from a parallel corpus of proceedings of the European Parliament, and study how to adapt statistical models to the domain of dictionary definitions. We build specialized language and translation models from a small set of parallel definitions and experiment with robust manners to combine them. A statistically significant increase in performance is obtained. The best system is finally used to generate a definition for all Spanish synsets, which are currently ready for a manual revision. As a complementary issue, we analyze the impact of the amount of in-domain data needed to improve a system trained entirely on out-of-domain data.","This research has been funded by the Spanish Ministry of Science and Technology (ALIADO TIC2002-04447-C02) and the Spanish Ministry of Education and Science (TRANGRAM, TIN2004-07925-C03-02). Our research group, TALP Research Center, is recognized as a Quality Research Group (2001 SGR 00254) by DURSI, the Research Department of the Catalan Government. 
Authors are grateful to Patrik Lambert for providing us with the implementation of the Simplex Method, and specially to German Rigau for motivating in its origin all this work.","Low-Cost Enrichment of Spanish WordNet with Automatically Translated Glosses: Combining General and Specialized Models. This paper studies the enrichment of Spanish WordNet with synset glosses automatically obtained from the English Word-Net glosses using a phrase-based Statistical Machine Translation system. We construct the English-Spanish translation system from a parallel corpus of proceedings of the European Parliament, and study how to adapt statistical models to the domain of dictionary definitions. We build specialized language and translation models from a small set of parallel definitions and experiment with robust manners to combine them. A statistically significant increase in performance is obtained. The best system is finally used to generate a definition for all Spanish synsets, which are currently ready for a manual revision. As a complementary issue, we analyze the impact of the amount of in-domain data needed to improve a system trained entirely on out-of-domain data.",2006
vijay-etal-2018-corpus,https://aclanthology.org/N18-4018,0,,,,,,,"Corpus Creation and Emotion Prediction for Hindi-English Code-Mixed Social Media Text. Emotion Prediction is a Natural Language Processing (NLP) task dealing with detection and classification of emotions in various monolingual and bilingual texts. While some work has been done on code-mixed social media text and in emotion prediction separately, our work is the first attempt which aims at identifying the emotion associated with Hindi-English code-mixed social media text. In this paper, we analyze the problem of emotion identification in code-mixed content and present a Hindi-English code-mixed corpus extracted from twitter and annotated with the associated emotion. For every tweet in the dataset, we annotate the source language of all the words present, and also the causal language of the expressed emotion. Finally, we propose a supervised classification system which uses various machine learning techniques for detecting the emotion associated with the text using a variety of character level, word level, and lexicon based features.",Corpus Creation and Emotion Prediction for {H}indi-{E}nglish Code-Mixed Social Media Text,"Emotion Prediction is a Natural Language Processing (NLP) task dealing with detection and classification of emotions in various monolingual and bilingual texts. While some work has been done on code-mixed social media text and in emotion prediction separately, our work is the first attempt which aims at identifying the emotion associated with Hindi-English code-mixed social media text. In this paper, we analyze the problem of emotion identification in code-mixed content and present a Hindi-English code-mixed corpus extracted from twitter and annotated with the associated emotion. For every tweet in the dataset, we annotate the source language of all the words present, and also the causal language of the expressed emotion. Finally, we propose a supervised classification system which uses various machine learning techniques for detecting the emotion associated with the text using a variety of character level, word level, and lexicon based features.",Corpus Creation and Emotion Prediction for Hindi-English Code-Mixed Social Media Text,"Emotion Prediction is a Natural Language Processing (NLP) task dealing with detection and classification of emotions in various monolingual and bilingual texts. While some work has been done on code-mixed social media text and in emotion prediction separately, our work is the first attempt which aims at identifying the emotion associated with Hindi-English code-mixed social media text. In this paper, we analyze the problem of emotion identification in code-mixed content and present a Hindi-English code-mixed corpus extracted from twitter and annotated with the associated emotion. For every tweet in the dataset, we annotate the source language of all the words present, and also the causal language of the expressed emotion. Finally, we propose a supervised classification system which uses various machine learning techniques for detecting the emotion associated with the text using a variety of character level, word level, and lexicon based features.",,"Corpus Creation and Emotion Prediction for Hindi-English Code-Mixed Social Media Text. Emotion Prediction is a Natural Language Processing (NLP) task dealing with detection and classification of emotions in various monolingual and bilingual texts. 
While some work has been done on code-mixed social media text and in emotion prediction separately, our work is the first attempt which aims at identifying the emotion associated with Hindi-English code-mixed social media text. In this paper, we analyze the problem of emotion identification in code-mixed content and present a Hindi-English code-mixed corpus extracted from twitter and annotated with the associated emotion. For every tweet in the dataset, we annotate the source language of all the words present, and also the causal language of the expressed emotion. Finally, we propose a supervised classification system which uses various machine learning techniques for detecting the emotion associated with the text using a variety of character level, word level, and lexicon based features.",2018
schafer-burtenshaw-2019-offence,https://aclanthology.org/R19-1125,1,,,,hate_speech,,,"Offence in Dialogues: A Corpus-Based Study. In recent years an increasing number of analyses of offensive language has been published, however, dealing mainly with the automatic detection and classification of isolated instances. In this paper we aim to understand the impact of offensive messages in online conversations diachronically, and in particular the change in offensiveness of dialogue turns. In turn, we aim to measure the progression of offence level as well as its direction-For example, whether a conversation is escalating or declining in offence. We present our method of extracting linear dialogues from tree-structured conversations in social media data and make our code publicly available. Furthermore, we discuss methods to analyse this dataset through changes in discourse offensiveness. Our paper includes two main contributions; first, using a neural network to measure the level of offensiveness in conversations; and second, the analysis of conversations around offensive comments using decoupling functions.",Offence in Dialogues: A Corpus-Based Study,"In recent years an increasing number of analyses of offensive language has been published, however, dealing mainly with the automatic detection and classification of isolated instances. In this paper we aim to understand the impact of offensive messages in online conversations diachronically, and in particular the change in offensiveness of dialogue turns. In turn, we aim to measure the progression of offence level as well as its direction-For example, whether a conversation is escalating or declining in offence. We present our method of extracting linear dialogues from tree-structured conversations in social media data and make our code publicly available. Furthermore, we discuss methods to analyse this dataset through changes in discourse offensiveness. Our paper includes two main contributions; first, using a neural network to measure the level of offensiveness in conversations; and second, the analysis of conversations around offensive comments using decoupling functions.",Offence in Dialogues: A Corpus-Based Study,"In recent years an increasing number of analyses of offensive language has been published, however, dealing mainly with the automatic detection and classification of isolated instances. In this paper we aim to understand the impact of offensive messages in online conversations diachronically, and in particular the change in offensiveness of dialogue turns. In turn, we aim to measure the progression of offence level as well as its direction-For example, whether a conversation is escalating or declining in offence. We present our method of extracting linear dialogues from tree-structured conversations in social media data and make our code publicly available. Furthermore, we discuss methods to analyse this dataset through changes in discourse offensiveness. Our paper includes two main contributions; first, using a neural network to measure the level of offensiveness in conversations; and second, the analysis of conversations around offensive comments using decoupling functions.",,"Offence in Dialogues: A Corpus-Based Study. In recent years an increasing number of analyses of offensive language has been published, however, dealing mainly with the automatic detection and classification of isolated instances. 
In this paper we aim to understand the impact of offensive messages in online conversations diachronically, and in particular the change in offensiveness of dialogue turns. In turn, we aim to measure the progression of offence level as well as its direction-For example, whether a conversation is escalating or declining in offence. We present our method of extracting linear dialogues from tree-structured conversations in social media data and make our code publicly available. Furthermore, we discuss methods to analyse this dataset through changes in discourse offensiveness. Our paper includes two main contributions; first, using a neural network to measure the level of offensiveness in conversations; and second, the analysis of conversations around offensive comments using decoupling functions.",2019
minkov-cohen-2012-graph,https://aclanthology.org/W12-4104,0,,,,,,,"Graph Based Similarity Measures for Synonym Extraction from Parsed Text. We learn graph-based similarity measures for the task of extracting word synonyms from a corpus of parsed text. A constrained graph walk variant that has been successfully applied in the past in similar settings is shown to outperform a state-of-the-art syntactic vectorbased approach on this task. Further, we show that learning specialized similarity measures for different word types is advantageous.",Graph Based Similarity Measures for Synonym Extraction from Parsed Text,"We learn graph-based similarity measures for the task of extracting word synonyms from a corpus of parsed text. A constrained graph walk variant that has been successfully applied in the past in similar settings is shown to outperform a state-of-the-art syntactic vectorbased approach on this task. Further, we show that learning specialized similarity measures for different word types is advantageous.",Graph Based Similarity Measures for Synonym Extraction from Parsed Text,"We learn graph-based similarity measures for the task of extracting word synonyms from a corpus of parsed text. A constrained graph walk variant that has been successfully applied in the past in similar settings is shown to outperform a state-of-the-art syntactic vectorbased approach on this task. Further, we show that learning specialized similarity measures for different word types is advantageous.",,"Graph Based Similarity Measures for Synonym Extraction from Parsed Text. We learn graph-based similarity measures for the task of extracting word synonyms from a corpus of parsed text. A constrained graph walk variant that has been successfully applied in the past in similar settings is shown to outperform a state-of-the-art syntactic vectorbased approach on this task. Further, we show that learning specialized similarity measures for different word types is advantageous.",2012
yin-neubig-2018-tranx,https://aclanthology.org/D18-2002,0,,,,,,,"TRANX: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation. We present TRANX, a transition-based neural semantic parser that maps natural language (NL) utterances into formal meaning representations (MRs). TRANX uses a transition system based on the abstract syntax description language for the target MR, which gives it two major advantages: (1) it is highly accurate, using information from the syntax of the target MR to constrain the output space and model the information flow, and (2) it is highly generalizable, and can easily be applied to new types of MR by just writing a new abstract syntax description corresponding to the allowable structures in the MR. Experiments on four different semantic parsing and code generation tasks show that our system is generalizable, extensible, and effective, registering strong results compared to existing neural semantic parsers.",{TRANX}: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation,"We present TRANX, a transition-based neural semantic parser that maps natural language (NL) utterances into formal meaning representations (MRs). TRANX uses a transition system based on the abstract syntax description language for the target MR, which gives it two major advantages: (1) it is highly accurate, using information from the syntax of the target MR to constrain the output space and model the information flow, and (2) it is highly generalizable, and can easily be applied to new types of MR by just writing a new abstract syntax description corresponding to the allowable structures in the MR. Experiments on four different semantic parsing and code generation tasks show that our system is generalizable, extensible, and effective, registering strong results compared to existing neural semantic parsers.",TRANX: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation,"We present TRANX, a transition-based neural semantic parser that maps natural language (NL) utterances into formal meaning representations (MRs). TRANX uses a transition system based on the abstract syntax description language for the target MR, which gives it two major advantages: (1) it is highly accurate, using information from the syntax of the target MR to constrain the output space and model the information flow, and (2) it is highly generalizable, and can easily be applied to new types of MR by just writing a new abstract syntax description corresponding to the allowable structures in the MR. Experiments on four different semantic parsing and code generation tasks show that our system is generalizable, extensible, and effective, registering strong results compared to existing neural semantic parsers.",This material is based upon work supported by the National Science Foundation under Grant No. 1815287.,"TRANX: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation. We present TRANX, a transition-based neural semantic parser that maps natural language (NL) utterances into formal meaning representations (MRs). 
TRANX uses a transition system based on the abstract syntax description language for the target MR, which gives it two major advantages: (1) it is highly accurate, using information from the syntax of the target MR to constrain the output space and model the information flow, and (2) it is highly generalizable, and can easily be applied to new types of MR by just writing a new abstract syntax description corresponding to the allowable structures in the MR. Experiments on four different semantic parsing and code generation tasks show that our system is generalizable, extensible, and effective, registering strong results compared to existing neural semantic parsers.",2018
grishman-1976-survey,https://aclanthology.org/J76-2006,0,,,,,,,"A Survey of Syntactic Analysis Procedures for Natural Language. This survey was prepared under contract No. N00014-67A-0467-0032 with the Office of Naval Research, and was originally issued as Report No. NSO-8 of the Courant Institute of Mathematical Sciences, New York University.",A Survey of Syntactic Analysis Procedures for Natural Language,"This survey was prepared under contract No. N00014-67A-0467-0032 with the Office of Naval Research, and was originally issued as Report No. NSO-8 of the Courant Institute of Mathematical Sciences, New York University.",A Survey of Syntactic Analysis Procedures for Natural Language,"This survey was prepared under contract No. N00014-67A-0467-0032 with the Office of Naval Research, and was originally issued as Report No. NSO-8 of the Courant Institute of Mathematical Sciences, New York University.",,"A Survey of Syntactic Analysis Procedures for Natural Language. This survey was prepared under contract No. N00014-67A-0467-0032 with the Office of Naval Research, and was originally issued as Report No. NSO-8 of the Courant Institute of Mathematical Sciences, New York University.",1976
bai-etal-2013-translating,https://aclanthology.org/I13-1103,0,,,,,,,"Translating Chinese Unknown Words by Automatically Acquired Templates. In this paper, we present a translation template model to translate Chinese unknown words. The model exploits translation templates, which are extracted automatically from a word-aligned parallel corpus, to translate unknown words. The translation templates are designed in accordance with the structure of unknown words. When an unknown word is detected during translation, the model applies translation templates to the word to get a set of matched templates, and then translates the word into a set of suggested translations. Our experiment results demonstrate that the translations suggested by the unknown word translation template model significantly improve the performance of the Moses machine translation system.",Translating {C}hinese Unknown Words by Automatically Acquired Templates,"In this paper, we present a translation template model to translate Chinese unknown words. The model exploits translation templates, which are extracted automatically from a word-aligned parallel corpus, to translate unknown words. The translation templates are designed in accordance with the structure of unknown words. When an unknown word is detected during translation, the model applies translation templates to the word to get a set of matched templates, and then translates the word into a set of suggested translations. Our experiment results demonstrate that the translations suggested by the unknown word translation template model significantly improve the performance of the Moses machine translation system.",Translating Chinese Unknown Words by Automatically Acquired Templates,"In this paper, we present a translation template model to translate Chinese unknown words. The model exploits translation templates, which are extracted automatically from a word-aligned parallel corpus, to translate unknown words. The translation templates are designed in accordance with the structure of unknown words. When an unknown word is detected during translation, the model applies translation templates to the word to get a set of matched templates, and then translates the word into a set of suggested translations. Our experiment results demonstrate that the translations suggested by the unknown word translation template model significantly improve the performance of the Moses machine translation system.",,"Translating Chinese Unknown Words by Automatically Acquired Templates. In this paper, we present a translation template model to translate Chinese unknown words. The model exploits translation templates, which are extracted automatically from a word-aligned parallel corpus, to translate unknown words. The translation templates are designed in accordance with the structure of unknown words. When an unknown word is detected during translation, the model applies translation templates to the word to get a set of matched templates, and then translates the word into a set of suggested translations. Our experiment results demonstrate that the translations suggested by the unknown word translation template model significantly improve the performance of the Moses machine translation system.",2013
wu-1995-trainable,https://aclanthology.org/W95-0106,0,,,,,,,"Trainable Coarse Bilingual Grammars for Parallel Text Bracketing. We describe two new strategies to automatic bracketing of parallel corpora, with particular application to languages where prior grammar resources are scarce: (1) coarse bilingual grammars, and (2) unsupervised training of such grammars via EM (expectation-maximization). Both methods build upon a formalism we recently introduced called stochastic inversion transduction grammars. The first approach borrows a coarse monolingual grammar into our bilingual formalism, in order to transfer knowledge of one language's constraints to the task of bracketing the texts in both languages. The second approach generalizes the inside-outside algorithm to adjust the grammar parameters so as to improve the likelihood of a training corpus. Preliminary experiments on parallel English-Chinese text are supportive of these strategies.",Trainable Coarse Bilingual Grammars for Parallel Text Bracketing,"We describe two new strategies to automatic bracketing of parallel corpora, with particular application to languages where prior grammar resources are scarce: (1) coarse bilingual grammars, and (2) unsupervised training of such grammars via EM (expectation-maximization). Both methods build upon a formalism we recently introduced called stochastic inversion transduction grammars. The first approach borrows a coarse monolingual grammar into our bilingual formalism, in order to transfer knowledge of one language's constraints to the task of bracketing the texts in both languages. The second approach generalizes the inside-outside algorithm to adjust the grammar parameters so as to improve the likelihood of a training corpus. Preliminary experiments on parallel English-Chinese text are supportive of these strategies.",Trainable Coarse Bilingual Grammars for Parallel Text Bracketing,"We describe two new strategies to automatic bracketing of parallel corpora, with particular application to languages where prior grammar resources are scarce: (1) coarse bilingual grammars, and (2) unsupervised training of such grammars via EM (expectation-maximization). Both methods build upon a formalism we recently introduced called stochastic inversion transduction grammars. The first approach borrows a coarse monolingual grammar into our bilingual formalism, in order to transfer knowledge of one language's constraints to the task of bracketing the texts in both languages. The second approach generalizes the inside-outside algorithm to adjust the grammar parameters so as to improve the likelihood of a training corpus. Preliminary experiments on parallel English-Chinese text are supportive of these strategies.",,"Trainable Coarse Bilingual Grammars for Parallel Text Bracketing. We describe two new strategies to automatic bracketing of parallel corpora, with particular application to languages where prior grammar resources are scarce: (1) coarse bilingual grammars, and (2) unsupervised training of such grammars via EM (expectation-maximization). Both methods build upon a formalism we recently introduced called stochastic inversion transduction grammars. The first approach borrows a coarse monolingual grammar into our bilingual formalism, in order to transfer knowledge of one language's constraints to the task of bracketing the texts in both languages. The second approach generalizes the inside-outside algorithm to adjust the grammar parameters so as to improve the likelihood of a training corpus. 
Preliminary experiments on parallel English-Chinese text are supportive of these strategies.",1995
laokulrat-etal-2018-incorporating,https://aclanthology.org/L18-1477,0,,,,,,,"Incorporating Semantic Attention in Video Description Generation. Automatically generating video description is one of the approaches to enable computers to deeply understand videos, which can have a great impact and can be useful to many other applications. However, generated descriptions by computers often fail to correctly mention objects and actions appearing in the videos. This work aims to alleviate this problem by including external fine-grained visual information, which can be detected from all video frames, in the description generation model. In this paper, we propose an LSTM-based sequence-to-sequence model with semantic attention mechanism for video description generation. The model is flexible so that we can change the source of the external information without affecting the encoding and decoding parts of the model. The results show that using semantic attention to selectively focus on external fine-grained visual information can guide the system to correctly mention objects and actions in videos and have a better quality of video descriptions.",Incorporating Semantic Attention in Video Description Generation,"Automatically generating video description is one of the approaches to enable computers to deeply understand videos, which can have a great impact and can be useful to many other applications. However, generated descriptions by computers often fail to correctly mention objects and actions appearing in the videos. This work aims to alleviate this problem by including external fine-grained visual information, which can be detected from all video frames, in the description generation model. In this paper, we propose an LSTM-based sequence-to-sequence model with semantic attention mechanism for video description generation. The model is flexible so that we can change the source of the external information without affecting the encoding and decoding parts of the model. The results show that using semantic attention to selectively focus on external fine-grained visual information can guide the system to correctly mention objects and actions in videos and have a better quality of video descriptions.",Incorporating Semantic Attention in Video Description Generation,"Automatically generating video description is one of the approaches to enable computers to deeply understand videos, which can have a great impact and can be useful to many other applications. However, generated descriptions by computers often fail to correctly mention objects and actions appearing in the videos. This work aims to alleviate this problem by including external fine-grained visual information, which can be detected from all video frames, in the description generation model. In this paper, we propose an LSTM-based sequence-to-sequence model with semantic attention mechanism for video description generation. The model is flexible so that we can change the source of the external information without affecting the encoding and decoding parts of the model. The results show that using semantic attention to selectively focus on external fine-grained visual information can guide the system to correctly mention objects and actions in videos and have a better quality of video descriptions.","This paper is based on results obtained from a project commissioned by the New Energy and Industrial Technology Development Organization (NEDO). 
We also would like to thank the anonymous reviewers for their insightful comments and suggestions, which were helpful in improving the quality of the paper.","Incorporating Semantic Attention in Video Description Generation. Automatically generating video description is one of the approaches to enable computers to deeply understand videos, which can have a great impact and can be useful to many other applications. However, generated descriptions by computers often fail to correctly mention objects and actions appearing in the videos. This work aims to alleviate this problem by including external fine-grained visual information, which can be detected from all video frames, in the description generation model. In this paper, we propose an LSTM-based sequence-to-sequence model with semantic attention mechanism for video description generation. The model is flexible so that we can change the source of the external information without affecting the encoding and decoding parts of the model. The results show that using semantic attention to selectively focus on external fine-grained visual information can guide the system to correctly mention objects and actions in videos and have a better quality of video descriptions.",2018
walker-etal-1992-case,https://aclanthology.org/C92-2122,0,,,,,,,A Case Study of Natural Language Customisation: The Practical Effects of World Knowledge. This paper proposes a methodology for the customisation of natural language interfaces to information retrieval applications. We report a field study in which we tested this methodology by customising a commercially available natural language system to a large database of sales and marketing information. We note that it was difficult to tailor the common sense reasoning capabilities of the particular system we used to our application. This study validates aspects of the suggested methodology as well as providing insights that should inform the design of natural language systems for this class of applications.,A Case Study of Natural Language Customisation: The Practical Effects of World Knowledge,This paper proposes a methodology for the customisation of natural language interfaces to information retrieval applications. We report a field study in which we tested this methodology by customising a commercially available natural language system to a large database of sales and marketing information. We note that it was difficult to tailor the common sense reasoning capabilities of the particular system we used to our application. This study validates aspects of the suggested methodology as well as providing insights that should inform the design of natural language systems for this class of applications.,A Case Study of Natural Language Customisation: The Practical Effects of World Knowledge,This paper proposes a methodology for the customisation of natural language interfaces to information retrieval applications. We report a field study in which we tested this methodology by customising a commercially available natural language system to a large database of sales and marketing information. We note that it was difficult to tailor the common sense reasoning capabilities of the particular system we used to our application. This study validates aspects of the suggested methodology as well as providing insights that should inform the design of natural language systems for this class of applications.,,A Case Study of Natural Language Customisation: The Practical Effects of World Knowledge. This paper proposes a methodology for the customisation of natural language interfaces to information retrieval applications. We report a field study in which we tested this methodology by customising a commercially available natural language system to a large database of sales and marketing information. We note that it was difficult to tailor the common sense reasoning capabilities of the particular system we used to our application. This study validates aspects of the suggested methodology as well as providing insights that should inform the design of natural language systems for this class of applications.,1992
liu-etal-2013-tuning,https://aclanthology.org/I13-1032,0,,,,,,,"Tuning SMT with a Large Number of Features via Online Feature Grouping. In this paper, we consider the tuning of statistical machine translation (SMT) models employing a large number of features. We argue that existing tuning methods for these models suffer serious sparsity problems, in which features appearing in the tuning data may not appear in the testing data and thus those features may be over tuned in the tuning data. As a result, we face an over-fitting problem, which limits the generalization abilities of the learned models. Based on our analysis, we propose a novel method based on feature grouping via OSCAR to overcome these pitfalls. Our feature grouping is implemented within an online learning framework and thus it is efficient for a large scale (both for features and examples) of learning in our scenario. Experiment results on IWSLT translation tasks show that the proposed method significantly outperforms the state of the art tuning methods.",Tuning {SMT} with a Large Number of Features via Online Feature Grouping,"In this paper, we consider the tuning of statistical machine translation (SMT) models employing a large number of features. We argue that existing tuning methods for these models suffer serious sparsity problems, in which features appearing in the tuning data may not appear in the testing data and thus those features may be over tuned in the tuning data. As a result, we face an over-fitting problem, which limits the generalization abilities of the learned models. Based on our analysis, we propose a novel method based on feature grouping via OSCAR to overcome these pitfalls. Our feature grouping is implemented within an online learning framework and thus it is efficient for a large scale (both for features and examples) of learning in our scenario. Experiment results on IWSLT translation tasks show that the proposed method significantly outperforms the state of the art tuning methods.",Tuning SMT with a Large Number of Features via Online Feature Grouping,"In this paper, we consider the tuning of statistical machine translation (SMT) models employing a large number of features. We argue that existing tuning methods for these models suffer serious sparsity problems, in which features appearing in the tuning data may not appear in the testing data and thus those features may be over tuned in the tuning data. As a result, we face an over-fitting problem, which limits the generalization abilities of the learned models. Based on our analysis, we propose a novel method based on feature grouping via OSCAR to overcome these pitfalls. Our feature grouping is implemented within an online learning framework and thus it is efficient for a large scale (both for features and examples) of learning in our scenario. Experiment results on IWSLT translation tasks show that the proposed method significantly outperforms the state of the art tuning methods.","We would like to thank our colleagues in both HIT and NICT for insightful discussions, and three anonymous reviewers for many invaluable comments and suggestions to improve our paper. This work is supported by National Natural Science Foundation of China (61173073, 61100093, 61073130, 61272384), and the Key Project of the National High Technology Research and Development Program of China (2011AA01A207).","Tuning SMT with a Large Number of Features via Online Feature Grouping. 
In this paper, we consider the tuning of statistical machine translation (SMT) models employing a large number of features. We argue that existing tuning methods for these models suffer serious sparsity problems, in which features appearing in the tuning data may not appear in the testing data and thus those features may be over tuned in the tuning data. As a result, we face an over-fitting problem, which limits the generalization abilities of the learned models. Based on our analysis, we propose a novel method based on feature grouping via OSCAR to overcome these pitfalls. Our feature grouping is implemented within an online learning framework and thus it is efficient for a large scale (both for features and examples) of learning in our scenario. Experiment results on IWSLT translation tasks show that the proposed method significantly outperforms the state of the art tuning methods.",2013
ying-etal-2021-longsumm,https://aclanthology.org/2021.sdp-1.12,1,,,,industry_innovation_infrastructure,,,"LongSumm 2021: Session based automatic summarization model for scientific document. Most summarization tasks focus on generating relatively short summaries. Such a length constraint might not be appropriate when summarizing scientific work. The LongSumm task requires participants to generate a long summary for a scientific document. This task can usually be solved by a language model, but an important problem is that models like BERT are limited by memory and cannot deal with a long input like a document. Generating a long output is also hard. In this paper, we propose a session based automatic summarization model (SBAS) which uses a session and ensemble mechanism to generate a long summary. Our model achieves the best performance in the LongSumm task.",{L}ong{S}umm 2021: Session based automatic summarization model for scientific document,"Most summarization tasks focus on generating relatively short summaries. Such a length constraint might not be appropriate when summarizing scientific work. The LongSumm task requires participants to generate a long summary for a scientific document. This task can usually be solved by a language model, but an important problem is that models like BERT are limited by memory and cannot deal with a long input like a document. Generating a long output is also hard. In this paper, we propose a session based automatic summarization model (SBAS) which uses a session and ensemble mechanism to generate a long summary. Our model achieves the best performance in the LongSumm task.",LongSumm 2021: Session based automatic summarization model for scientific document,"Most summarization tasks focus on generating relatively short summaries. Such a length constraint might not be appropriate when summarizing scientific work. The LongSumm task requires participants to generate a long summary for a scientific document. This task can usually be solved by a language model, but an important problem is that models like BERT are limited by memory and cannot deal with a long input like a document. Generating a long output is also hard. In this paper, we propose a session based automatic summarization model (SBAS) which uses a session and ensemble mechanism to generate a long summary. Our model achieves the best performance in the LongSumm task.",,"LongSumm 2021: Session based automatic summarization model for scientific document. Most summarization tasks focus on generating relatively short summaries. Such a length constraint might not be appropriate when summarizing scientific work. The LongSumm task requires participants to generate a long summary for a scientific document. This task can usually be solved by a language model, but an important problem is that models like BERT are limited by memory and cannot deal with a long input like a document. Generating a long output is also hard. In this paper, we propose a session based automatic summarization model (SBAS) which uses a session and ensemble mechanism to generate a long summary. Our model achieves the best performance in the LongSumm task.",2021
brugman-etal-2008-common,http://www.lrec-conf.org/proceedings/lrec2008/pdf/330_paper.pdf,0,,,,,,,"A Common Multimedia Annotation Framework for Cross Linking Cultural Heritage Digital Collections. In the context of the CATCH research program that is currently carried out at a number of large Dutch cultural heritage institutions our ambition is to combine and exchange heterogeneous multimedia annotations between projects and institutions. As first step we designed an Annotation Meta Model: a simple but powerful RDF/OWL model mainly addressing the anchoring of annotations to segments of the many different media types used in the collections of the archives, museums and libraries involved. The model includes support for the annotation of annotations themselves, and of segments of annotation values, to be able to layer annotations and in this way enable projects to process each other's annotation data as the primary data for further annotation. On basis of AMM we designed an application programming interface for accessing annotation repositories and implemented it both as a software library and as a web service. Finally, we report on our experiences with the application of model, API and repository when developing web applications for collection managers in cultural heritage institutions.",A Common Multimedia Annotation Framework for Cross Linking Cultural Heritage Digital Collections,"In the context of the CATCH research program that is currently carried out at a number of large Dutch cultural heritage institutions our ambition is to combine and exchange heterogeneous multimedia annotations between projects and institutions. As first step we designed an Annotation Meta Model: a simple but powerful RDF/OWL model mainly addressing the anchoring of annotations to segments of the many different media types used in the collections of the archives, museums and libraries involved. The model includes support for the annotation of annotations themselves, and of segments of annotation values, to be able to layer annotations and in this way enable projects to process each other's annotation data as the primary data for further annotation. On basis of AMM we designed an application programming interface for accessing annotation repositories and implemented it both as a software library and as a web service. Finally, we report on our experiences with the application of model, API and repository when developing web applications for collection managers in cultural heritage institutions.",A Common Multimedia Annotation Framework for Cross Linking Cultural Heritage Digital Collections,"In the context of the CATCH research program that is currently carried out at a number of large Dutch cultural heritage institutions our ambition is to combine and exchange heterogeneous multimedia annotations between projects and institutions. As first step we designed an Annotation Meta Model: a simple but powerful RDF/OWL model mainly addressing the anchoring of annotations to segments of the many different media types used in the collections of the archives, museums and libraries involved. The model includes support for the annotation of annotations themselves, and of segments of annotation values, to be able to layer annotations and in this way enable projects to process each other's annotation data as the primary data for further annotation. On basis of AMM we designed an application programming interface for accessing annotation repositories and implemented it both as a software library and as a web service. 
Finally, we report on our experiences with the application of model, API and repository when developing web applications for collection managers in cultural heritage institutions.",,"A Common Multimedia Annotation Framework for Cross Linking Cultural Heritage Digital Collections. In the context of the CATCH research program that is currently carried out at a number of large Dutch cultural heritage institutions our ambition is to combine and exchange heterogeneous multimedia annotations between projects and institutions. As first step we designed an Annotation Meta Model: a simple but powerful RDF/OWL model mainly addressing the anchoring of annotations to segments of the many different media types used in the collections of the archives, museums and libraries involved. The model includes support for the annotation of annotations themselves, and of segments of annotation values, to be able to layer annotations and in this way enable projects to process each other's annotation data as the primary data for further annotation. On basis of AMM we designed an application programming interface for accessing annotation repositories and implemented it both as a software library and as a web service. Finally, we report on our experiences with the application of model, API and repository when developing web applications for collection managers in cultural heritage institutions.",2008
wilton-1973-bilingual,https://aclanthology.org/C73-1029,0,,,,,,,"Bilingual Lexicography: Computer-Aided Editing. Bilingual dictionaries present special difficulties for the lexicographer who is determined to employ the computer to facilitate his editorial work. In a sense, these dictionaries include everything contained in a normal monolingual edition and a good deal more. The single-language definition dictionary is consulted as the authority on orthography, pronunciation and stress-pattern, grammar, level of formality, field of application, definitions, examples, usage and etymology. A bilingual dictionary which purports to be more than a pocket edition will treat all of these with the exception of etymology, which is not normally in the domain of the translator. In addition, it will devote itself to providing accurate translations, which necessarily presuppose an intimate acquaintance with the correct definitions in both languages. Such a dictionary is a far cry from its mediaeval ancestor, the two-language glossary, which was usually a one-way device furnishing equivalent forms for simple words and expressions in the opposite language. The modern bilingual dictionary is usually two-way, each section constituting a complete dictionary in its own right and contrived to cater for a variety of translation requirements. Yet the two sections are inextricably linked by an intricate network of translations and cross-references which guide the consulter and ensure that he does not falter when semantic equivalence fails to overlap smoothly. Since semantic equivalence is the important basic feature of bilingual dictionaries, deviations from the normal pattern will require special treatment. In closely related languages, like French and English, numerous pairs of words of common origin are only slightly, if at all, altered in their modern form (e.g. Eng. versatile/ Fr. versatile). But the disparate development of two modes of expression in different cultural and historical environments has left a residue of such word pairs whose only similarity is in fact the visual image of the sign. Their definitions are often very remote from each other. It is yet another task of bilingual lexicography to distinguish clearly between the meanings of these deceptive cognates or ""faux amis"".
These, then, in very brief outline, are some of the features common to all good bilingual dictionaries. The Canadian Dictionary (Dictionnaire canadien) is no exception to these general remarks. First published ten years ago under the editorship of Professor Jean-Paul Vinay at the University of Montreal, it is now undergoing a major revision and updating at the University of Victoria, still under Vinay's supervision. The new editions should see the corpus of the original version increased from 40,000 to about 100,000 entry words. The first edition was specifically tailored for the unique linguistic situation in Canada and takes into account the two main dialects of each of the official languages it represents, namely, European and Canadian French, and British and Canadian English. This, however, is a gross simplification of a complicated dialect situation fraught with all the problems associated with social and official acceptability. But it is sufficient for the purposes of this discussion to mention that a good deal of importance is attached to Canadian content in both languages, thereby adding a further unit of complexity to the material to be presented. Accordingly, in addition to the data common to all bilingual dictionaries, The Canadian Dictionary furnishes information on the dialect status of most words and expressions.",Bilingual Lexicography: Computer-Aided Editing,"Bilingual dictionaries present special difficulties for the lexicographer who is determined to employ the computer to facilitate his editorial work. In a sense, these dictionaries include everything contained in a normal monolingual edition and a good deal more. The single-language definition dictionary is consulted as the authority on orthography, pronunciation and stress-pattern, grammar, level of formality, field of application, definitions, examples, usage and etymology. A bilingual dictionary which purports to be more than a pocket edition will treat all of these with the exception of etymology, which is not normally in the domain of the translator. In addition, it will devote itself to providing accurate translations, which necessarily presuppose an intimate acquaintance with the correct definitions in both languages. Such a dictionary is a far cry from its mediaeval ancestor, the two-language glossary, which was usually a one-way device furnishing equivalent forms for simple words and expressions in the opposite language. The modern bilingual dictionary is usually two-way, each section constituting a complete dictionary in its own right and contrived to cater for a variety of translation requirements. Yet the two sections are inextricably linked by an intricate network of translations and cross-references which guide the consulter and ensure that he does not falter when semantic equivalence fails to overlap smoothly. Since semantic equivalence is the important basic feature of bilingual dictionaries, deviations from the normal pattern will require special treatment. In closely related languages, like French and English, numerous pairs of words of common origin are only slightly, if at all, altered in their modern form (e.g. Eng. versatile/ Fr. versatile). But the disparate development of two modes of expression in different cultural and historical environments has left a residue of such word pairs whose only similarity is in fact the visual image of the sign. Their definitions are often very remote from each other. 
It is yet another task of bilingual lexicography to distinguish clearly between the meanings of these deceptive cognates or ""faux amis"".
These, then, in very brief outline, are some of the features common to all good bilingual dictionaries. The Canadian Dictionary (Dictionnaire canadien) is no exception to these general remarks. First published ten years ago under the editorship of Professor Jean-Paul Vinay at the University of Montreal, it is now undergoing a major revision and updating at the University of Victoria, still under Vinay's supervision. The new editions should see the corpus of the original version increased from 40,000 to about 100,000 entry words. The first edition was specifically tailored for the unique linguistic situation in Canada and takes into account the two main dialects of each of the official languages it represents, namely, European and Canadian French, and British and Canadian English. This, however, is a gross simplification of a complicated dialect situation fraught with all the problems associated with social and official acceptability. But it is sufficient for the purposes of this discussion to mention that a good deal of importance is attached to Canadian content in both languages, thereby adding a further unit of complexity to the material to be presented. Accordingly, in addition to the data common to all bilingual dictionaries, The Canadian Dictionary furnishes information on the dialect status of most words and expressions.",Bilingual Lexicography: Computer-Aided Editing,"Bilingual dictionaries present special difficulties for the lexicographer who is determined to employ the computer to facilitate his editorial work. In a sense, these dictionaries include everything contained in a normal monolingual edition and a good deal more. The single-language definition dictionary is consulted as the authority on orthography, pronunciation and stress-pattern, grammar, level of formality, field of application, definitions, examples, usage and etymology. A bilingual dictionary which purports to be more than a pocket edition will treat all of these with the exception of etymology, which is not normally in the domain of the translator. In addition, it will devote itself to providing accurate translations, which necessarily presuppose an intimate acquaintance with the correct definitions in both languages. Such a dictionary is a far cry from its mediaeval ancestor, the two-language glossary, which was usually a one-way device furnishing equivalent forms for simple words and expressions in the opposite language. The modern bilingual dictionary is usually two-way, each section constituting a complete dictionary in its own right and contrived to cater for a variety of translation requirements. Yet the two sections are inextricably linked by an intricate network of translations and cross-references which guide the consulter and ensure that he does not falter when semantic equivalence fails to overlap smoothly. Since semantic equivalence is the important basic feature of bilingual dictionaries, deviations from the normal pattern will require special treatment. In closely related languages, like French and English, numerous pairs of words of common origin are only slightly, if at all, altered in their modern form (e.g. Eng. versatile/ Fr. versatile). But the disparate development of two modes of expression in different cultural and historical environments has left a residue of such word pairs whose only similarity is in fact the visual image of the sign. Their definitions are often very remote from each other. 
It is yet another task of bilingual lexicography to distinguish clearly between the meanings of these deceptive cognates or ""faux amis"".
These, then, in very brief outline, are some of the features common to all good bilingual dictionaries. The Canadian Dictionary (Dictionnaire canadien) is no exception to these general remarks. First published ten years ago under the editorship of Professor Jean-Paul Vinay at the University of Montreal, it is now undergoing a major revision and updating at the University of Victoria, still under Vinay's supervision. The new editions should see the corpus of the original version increased from 40,000 to about 100,000 entry words. The first edition was specifically tailored for the unique linguistic situation in Canada and takes into account the two main dialects of each of the official languages it represents, namely, European and Canadian French, and British and Canadian English. This, however, is a gross simplification of a complicated dialect situation fraught with all the problems associated with social and official acceptability. But it is sufficient for the purposes of this discussion to mention that a good deal of importance is attached to Canadian content in both languages, thereby adding a further unit of complexity to the material to be presented. Accordingly, in addition to the data common to all bilingual dictionaries, The Canadian Dictionary furnishes information on the dialect status of most words and expressions.",,"Bilingual Lexicography: Computer-Aided Editing. Bilingual dictionaries present special difficulties for the lexicographer who is determined to employ the computer to facilitate his editorial work. In a sense, these dictionaries include everything contained in a normal monolingual edition and a good deal more. The single-language definition dictionary is consulted as the authority on orthography, pronunciation and stress-pattern, grammar, level of formality, field of application, definitions, examples, usage and etymology. A bilingual dictionary which purports to be more than a pocket edition will treat all of these with the exception of etymology, which is not normally in the domain of the translator. In addition, it will devote itself to providing accurate translations, which necessarily presuppose an intimate acquaintance with the correct definitions in both languages. Such a dictionary is a far cry from its mediaeval ancestor, the two-language glossary, which was usually a one-way device furnishing equivalent forms for simple words and expressions in the opposite language. The modern bilingual dictionary is usually two-way, each section constituting a complete dictionary in its own right and contrived to cater for a variety of translation requirements. Yet the two sections are inextricably linked by an intricate network of translations and cross-references which guide the consulter and ensure that he does not falter when semantic equivalence fails to overlap smoothly. Since semantic equivalence is the important basic feature of bilingual dictionaries, deviations from the normal pattern will require special treatment. In closely related languages, like French and English, numerous pairs of words of common origin are only slightly, if at all, altered in their modern form (e.g. Eng. versatile/ Fr. versatile). But the disparate development of two modes of expression in different cultural and historical environments has left a residue of such word pairs whose only similarity is in fact the visual image of the sign. Their definitions are often very remote from each other. 
It is yet another task of bilingual lexicography to distinguish clearly between the meanings of these deceptive cognates or ""faux amis"".
These, then, in very brief outline, are some of the features common to all good bilingual dictionaries. The Canadian Dictionary (Dictionnaire canadien) is no exception to these general remarks. First published ten years ago under the editorship of Professor Jean-Paul Vinay at the University of Montreal, it is now undergoing a major revision and updating at the University of Victoria, still under Vinay's supervision. The new editions should see the corpus of the original version increased from 40,000 to about 100,000 entry words. The first edition was specifically tailored for the unique linguistic situation in Canada and takes into account the two main dialects of each of the official languages it represents, namely, European and Canadian French, and British and Canadian English. This, however, is a gross simplification of a complicated dialect situation fraught with all the problems associated with social and official acceptability. But it is sufficient for the purposes of this discussion to mention that a good deal of importance is attached to Canadian content in both languages, thereby adding a further unit of complexity to the material to be presented. Accordingly, in addition to the data common to all bilingual dictionaries, The Canadian Dictionary furnishes information on the dialect status of most words and expressions.",1973
jing-2000-sentence,https://aclanthology.org/A00-1043,0,,,,,,,"Sentence Reduction for Automatic Text Summarization. We present a novel sentence reduction system for automatically removing extraneous phrases from sentences that are extracted from a document for summarization purpose. The system uses multiple sources of knowledge to decide which phrases in an extracted sentence can be removed, including syntactic knowledge, context information, and statistics computed from a corpus which consists of examples written by human professionals. Reduction can significantly improve the conciseness of automatic summaries.",Sentence Reduction for Automatic Text Summarization,"We present a novel sentence reduction system for automatically removing extraneous phrases from sentences that are extracted from a document for summarization purpose. The system uses multiple sources of knowledge to decide which phrases in an extracted sentence can be removed, including syntactic knowledge, context information, and statistics computed from a corpus which consists of examples written by human professionals. Reduction can significantly improve the conciseness of automatic summaries.",Sentence Reduction for Automatic Text Summarization,"We present a novel sentence reduction system for automatically removing extraneous phrases from sentences that are extracted from a document for summarization purpose. The system uses multiple sources of knowledge to decide which phrases in an extracted sentence can be removed, including syntactic knowledge, context information, and statistics computed from a corpus which consists of examples written by human professionals. Reduction can significantly improve the conciseness of automatic summaries.","This material is based upon work supported by the National Science Foundation under Grant No. IRI 96-19124 and IRI 96-18797. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.","Sentence Reduction for Automatic Text Summarization. We present a novel sentence reduction system for automatically removing extraneous phrases from sentences that are extracted from a document for summarization purpose. The system uses multiple sources of knowledge to decide which phrases in an extracted sentence can be removed, including syntactic knowledge, context information, and statistics computed from a corpus which consists of examples written by human professionals. Reduction can significantly improve the conciseness of automatic summaries.",2000
shirani-etal-2021-psed,https://aclanthology.org/2021.findings-acl.377,0,,,,,,,"PSED: A Dataset for Selecting Emphasis in Presentation Slides. Emphasizing words in presentation slides allows viewers to direct their gaze to focal points without reading the entire slide, retaining their attention on the speaker. Despite many studies on automatic slide generation, few have addressed helping authors choose which words to emphasize. Motivated by this, we study the problem of choosing candidates for emphasis by introducing a new dataset containing presentation slides with a wide variety of topics. We evaluated a range of state-of-the-art models on this novel dataset by organizing a shared task and inviting multiple researchers to model emphasis in slides.",{PSED}: A Dataset for Selecting Emphasis in Presentation Slides,"Emphasizing words in presentation slides allows viewers to direct their gaze to focal points without reading the entire slide, retaining their attention on the speaker. Despite many studies on automatic slide generation, few have addressed helping authors choose which words to emphasize. Motivated by this, we study the problem of choosing candidates for emphasis by introducing a new dataset containing presentation slides with a wide variety of topics. We evaluated a range of state-of-the-art models on this novel dataset by organizing a shared task and inviting multiple researchers to model emphasis in slides.",PSED: A Dataset for Selecting Emphasis in Presentation Slides,"Emphasizing words in presentation slides allows viewers to direct their gaze to focal points without reading the entire slide, retaining their attention on the speaker. Despite many studies on automatic slide generation, few have addressed helping authors choose which words to emphasize. Motivated by this, we study the problem of choosing candidates for emphasis by introducing a new dataset containing presentation slides with a wide variety of topics. We evaluated a range of state-of-the-art models on this novel dataset by organizing a shared task and inviting multiple researchers to model emphasis in slides.",We thank the reviewers for their thoughtful comments and efforts towards improving our work. We also thank Andrew Greene for his help in creating the corpus.,"PSED: A Dataset for Selecting Emphasis in Presentation Slides. Emphasizing words in presentation slides allows viewers to direct their gaze to focal points without reading the entire slide, retaining their attention on the speaker. Despite many studies on automatic slide generation, few have addressed helping authors choose which words to emphasize. Motivated by this, we study the problem of choosing candidates for emphasis by introducing a new dataset containing presentation slides with a wide variety of topics. We evaluated a range of state-of-the-art models on this novel dataset by organizing a shared task and inviting multiple researchers to model emphasis in slides.",2021
christodoulopoulos-etal-2012-turning,https://aclanthology.org/W12-1913,0,,,,,,,"Turning the pipeline into a loop: Iterated unsupervised dependency parsing and PoS induction. Most unsupervised dependency systems rely on gold-standard Part-of-Speech (PoS) tags, either directly, using the PoS tags instead of words, or indirectly in the back-off mechanism of fully lexicalized models (Headden et al., 2009) .",Turning the pipeline into a loop: Iterated unsupervised dependency parsing and {P}o{S} induction,"Most unsupervised dependency systems rely on gold-standard Part-of-Speech (PoS) tags, either directly, using the PoS tags instead of words, or indirectly in the back-off mechanism of fully lexicalized models (Headden et al., 2009) .",Turning the pipeline into a loop: Iterated unsupervised dependency parsing and PoS induction,"Most unsupervised dependency systems rely on gold-standard Part-of-Speech (PoS) tags, either directly, using the PoS tags instead of words, or indirectly in the back-off mechanism of fully lexicalized models (Headden et al., 2009) .",,"Turning the pipeline into a loop: Iterated unsupervised dependency parsing and PoS induction. Most unsupervised dependency systems rely on gold-standard Part-of-Speech (PoS) tags, either directly, using the PoS tags instead of words, or indirectly in the back-off mechanism of fully lexicalized models (Headden et al., 2009) .",2012
sheng-etal-2021-nice,https://aclanthology.org/2021.naacl-main.60,1,,,,hate_speech,,,"``Nice Try, Kiddo'': Investigating Ad Hominems in Dialogue Responses. Ad hominem attacks are those that target some feature of a person's character instead of the position the person is maintaining. These attacks are harmful because they propagate implicit biases and diminish a person's credibility. Since dialogue systems respond directly to user input, it is important to study ad hominems in dialogue responses. To this end, we propose categories of ad hominems, compose an annotated dataset, and build a classifier to analyze human and dialogue system responses to English Twitter posts. We specifically compare responses to Twitter topics about marginalized communities (#Black-LivesMatter, #MeToo) versus other topics (#Vegan, #WFH), because the abusive language of ad hominems could further amplify the skew of power away from marginalized populations. Furthermore, we propose a constrained decoding technique that uses salient n-gram similarity as a soft constraint for top-k sampling to reduce the amount of ad hominems generated. Our results indicate that 1) responses from both humans and DialoGPT contain more ad hominems for discussions around marginalized communities, 2) different quantities of ad hominems in the training data can influence the likelihood of generating ad hominems, and 3) we can use constrained decoding techniques to reduce ad hominems in generated dialogue responses. Post: Many are trying to co-opt and mischaracterize the #blacklivesmatter movement. We won't allow it! Resp: I hate how much of a victim complex you guys have.","{``}Nice Try, Kiddo{''}: Investigating Ad Hominems in Dialogue Responses","Ad hominem attacks are those that target some feature of a person's character instead of the position the person is maintaining. These attacks are harmful because they propagate implicit biases and diminish a person's credibility. Since dialogue systems respond directly to user input, it is important to study ad hominems in dialogue responses. To this end, we propose categories of ad hominems, compose an annotated dataset, and build a classifier to analyze human and dialogue system responses to English Twitter posts. We specifically compare responses to Twitter topics about marginalized communities (#Black-LivesMatter, #MeToo) versus other topics (#Vegan, #WFH), because the abusive language of ad hominems could further amplify the skew of power away from marginalized populations. Furthermore, we propose a constrained decoding technique that uses salient n-gram similarity as a soft constraint for top-k sampling to reduce the amount of ad hominems generated. Our results indicate that 1) responses from both humans and DialoGPT contain more ad hominems for discussions around marginalized communities, 2) different quantities of ad hominems in the training data can influence the likelihood of generating ad hominems, and 3) we can use constrained decoding techniques to reduce ad hominems in generated dialogue responses. Post: Many are trying to co-opt and mischaracterize the #blacklivesmatter movement. We won't allow it! Resp: I hate how much of a victim complex you guys have.","``Nice Try, Kiddo'': Investigating Ad Hominems in Dialogue Responses","Ad hominem attacks are those that target some feature of a person's character instead of the position the person is maintaining. These attacks are harmful because they propagate implicit biases and diminish a person's credibility. 
Since dialogue systems respond directly to user input, it is important to study ad hominems in dialogue responses. To this end, we propose categories of ad hominems, compose an annotated dataset, and build a classifier to analyze human and dialogue system responses to English Twitter posts. We specifically compare responses to Twitter topics about marginalized communities (#Black-LivesMatter, #MeToo) versus other topics (#Vegan, #WFH), because the abusive language of ad hominems could further amplify the skew of power away from marginalized populations. Furthermore, we propose a constrained decoding technique that uses salient n-gram similarity as a soft constraint for top-k sampling to reduce the amount of ad hominems generated. Our results indicate that 1) responses from both humans and DialoGPT contain more ad hominems for discussions around marginalized communities, 2) different quantities of ad hominems in the training data can influence the likelihood of generating ad hominems, and 3) we can use constrained decoding techniques to reduce ad hominems in generated dialogue responses. Post: Many are trying to co-opt and mischaracterize the #blacklivesmatter movement. We won't allow it! Resp: I hate how much of a victim complex you guys have.","We would like to thank members of the PLUS Lab and the anonymous reviewers for the helpful feedback, and Jason Teoh for the many discussions. This paper is supported in part by NSF IIS 1927554 and by the CwC program under Con-tract W911NF-15-1-0543 with the US Defense Advanced Research Projects Agency (DARPA). The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.","``Nice Try, Kiddo'': Investigating Ad Hominems in Dialogue Responses. Ad hominem attacks are those that target some feature of a person's character instead of the position the person is maintaining. These attacks are harmful because they propagate implicit biases and diminish a person's credibility. Since dialogue systems respond directly to user input, it is important to study ad hominems in dialogue responses. To this end, we propose categories of ad hominems, compose an annotated dataset, and build a classifier to analyze human and dialogue system responses to English Twitter posts. We specifically compare responses to Twitter topics about marginalized communities (#Black-LivesMatter, #MeToo) versus other topics (#Vegan, #WFH), because the abusive language of ad hominems could further amplify the skew of power away from marginalized populations. Furthermore, we propose a constrained decoding technique that uses salient n-gram similarity as a soft constraint for top-k sampling to reduce the amount of ad hominems generated. Our results indicate that 1) responses from both humans and DialoGPT contain more ad hominems for discussions around marginalized communities, 2) different quantities of ad hominems in the training data can influence the likelihood of generating ad hominems, and 3) we can use constrained decoding techniques to reduce ad hominems in generated dialogue responses. Post: Many are trying to co-opt and mischaracterize the #blacklivesmatter movement. We won't allow it! Resp: I hate how much of a victim complex you guys have.",2021
zhou-etal-2013-statistical,https://aclanthology.org/P13-1084,0,,,,,,,"Statistical Machine Translation Improves Question Retrieval in Community Question Answering via Matrix Factorization. Community question answering (CQA) has become an increasingly popular research topic. In this paper, we focus on the problem of question retrieval. Question retrieval in CQA can automatically find the most relevant and recent questions that have been solved by other users. However, the word ambiguity and word mismatch problems bring about new challenges for question retrieval in CQA. State-of-the-art approaches address these issues by implicitly expanding the queried questions with additional words or phrases using monolingual translation models. While useful, the effectiveness of these models is highly dependent on the availability of quality parallel monolingual corpora (e.g., question-answer pairs) in the absence of which they are troubled by noise issue. In this work, we propose an alternative way to address the word ambiguity and word mismatch problems by taking advantage of potentially rich semantic information drawn from other languages. Our proposed method employs statistical machine translation to improve question retrieval and enriches the question representation with the translated words from other languages via matrix factorization. Experiments conducted on a real CQA data show that our proposed approach is promising.",Statistical Machine Translation Improves Question Retrieval in Community Question Answering via Matrix Factorization,"Community question answering (CQA) has become an increasingly popular research topic. In this paper, we focus on the problem of question retrieval. Question retrieval in CQA can automatically find the most relevant and recent questions that have been solved by other users. However, the word ambiguity and word mismatch problems bring about new challenges for question retrieval in CQA. State-of-the-art approaches address these issues by implicitly expanding the queried questions with additional words or phrases using monolingual translation models. While useful, the effectiveness of these models is highly dependent on the availability of quality parallel monolingual corpora (e.g., question-answer pairs) in the absence of which they are troubled by noise issue. In this work, we propose an alternative way to address the word ambiguity and word mismatch problems by taking advantage of potentially rich semantic information drawn from other languages. Our proposed method employs statistical machine translation to improve question retrieval and enriches the question representation with the translated words from other languages via matrix factorization. Experiments conducted on a real CQA data show that our proposed approach is promising.",Statistical Machine Translation Improves Question Retrieval in Community Question Answering via Matrix Factorization,"Community question answering (CQA) has become an increasingly popular research topic. In this paper, we focus on the problem of question retrieval. Question retrieval in CQA can automatically find the most relevant and recent questions that have been solved by other users. However, the word ambiguity and word mismatch problems bring about new challenges for question retrieval in CQA. State-of-the-art approaches address these issues by implicitly expanding the queried questions with additional words or phrases using monolingual translation models. 
While useful, the effectiveness of these models is highly dependent on the availability of quality parallel monolingual corpora (e.g., question-answer pairs) in the absence of which they are troubled by noise issue. In this work, we propose an alternative way to address the word ambiguity and word mismatch problems by taking advantage of potentially rich semantic information drawn from other languages. Our proposed method employs statistical machine translation to improve question retrieval and enriches the question representation with the translated words from other languages via matrix factorization. Experiments conducted on a real CQA data show that our proposed approach is promising.","This work was supported by the National Natural Science Foundation of China (No. 61070106, No. 61272332 and No. 61202329) We thank the anonymous reviewers for their insightful comments. We also thank Dr. Gao Cong for providing the data set and Dr. Li Cai for some discussion.","Statistical Machine Translation Improves Question Retrieval in Community Question Answering via Matrix Factorization. Community question answering (CQA) has become an increasingly popular research topic. In this paper, we focus on the problem of question retrieval. Question retrieval in CQA can automatically find the most relevant and recent questions that have been solved by other users. However, the word ambiguity and word mismatch problems bring about new challenges for question retrieval in CQA. State-of-the-art approaches address these issues by implicitly expanding the queried questions with additional words or phrases using monolingual translation models. While useful, the effectiveness of these models is highly dependent on the availability of quality parallel monolingual corpora (e.g., question-answer pairs) in the absence of which they are troubled by noise issue. In this work, we propose an alternative way to address the word ambiguity and word mismatch problems by taking advantage of potentially rich semantic information drawn from other languages. Our proposed method employs statistical machine translation to improve question retrieval and enriches the question representation with the translated words from other languages via matrix factorization. Experiments conducted on a real CQA data show that our proposed approach is promising.",2013
gokhman-etal-2012-search,https://aclanthology.org/W12-0404,1,,,,deception_detection,,,"In Search of a Gold Standard in Studies of Deception. In this study, we explore several popular techniques for obtaining corpora for deception research. Through a survey of traditional as well as non-gold standard creation approaches, we identify advantages and limitations of these techniques for webbased deception detection and offer crowdsourcing as a novel avenue toward achieving a gold standard corpus. Through an indepth case study of online hotel reviews, we demonstrate the implementation of this crowdsourcing technique and illustrate its applicability to a broad array of online reviews.",In Search of a Gold Standard in Studies of Deception,"In this study, we explore several popular techniques for obtaining corpora for deception research. Through a survey of traditional as well as non-gold standard creation approaches, we identify advantages and limitations of these techniques for webbased deception detection and offer crowdsourcing as a novel avenue toward achieving a gold standard corpus. Through an indepth case study of online hotel reviews, we demonstrate the implementation of this crowdsourcing technique and illustrate its applicability to a broad array of online reviews.",In Search of a Gold Standard in Studies of Deception,"In this study, we explore several popular techniques for obtaining corpora for deception research. Through a survey of traditional as well as non-gold standard creation approaches, we identify advantages and limitations of these techniques for webbased deception detection and offer crowdsourcing as a novel avenue toward achieving a gold standard corpus. Through an indepth case study of online hotel reviews, we demonstrate the implementation of this crowdsourcing technique and illustrate its applicability to a broad array of online reviews.","This work was supported in part by National Science Foundation Grant NSCC-0904913, and the Jack Kent Cooke Foundation. We also thank the EACL reviewers for their insightful comments, suggestions and advice on various aspects of this work.","In Search of a Gold Standard in Studies of Deception. In this study, we explore several popular techniques for obtaining corpora for deception research. Through a survey of traditional as well as non-gold standard creation approaches, we identify advantages and limitations of these techniques for webbased deception detection and offer crowdsourcing as a novel avenue toward achieving a gold standard corpus. Through an indepth case study of online hotel reviews, we demonstrate the implementation of this crowdsourcing technique and illustrate its applicability to a broad array of online reviews.",2012
sen-etal-2018-tempo,https://aclanthology.org/N18-1026,0,,,,,,,"Tempo-Lexical Context Driven Word Embedding for Cross-Session Search Task Extraction. Search task extraction in information retrieval is the process of identifying search intents over a set of queries relating to the same topical information need. Search tasks may potentially span across multiple search sessions. Most existing research on search task extraction has focused on identifying tasks within a single session, where the notion of a session is defined by a fixed length time window. By contrast, in this work we seek to identify tasks that span across multiple sessions. To identify tasks, we conduct a global analysis of a query log in its entirety without restricting analysis to individual temporal windows. To capture inherent task semantics, we represent queries as vectors in an abstract space. We learn the embedding of query words in this space by leveraging the temporal and lexical contexts of queries. To evaluate the effectiveness of the proposed query embedding, we conduct experiments of clustering queries into tasks with a particular interest of measuring the cross-session search task recall. Results of our experiments demonstrate that task extraction effectiveness, including cross-session recall, is improved significantly with the help of our proposed method of embedding the query terms by leveraging the temporal and templexical contexts of queries.",Tempo-Lexical Context Driven Word Embedding for Cross-Session Search Task Extraction,"Search task extraction in information retrieval is the process of identifying search intents over a set of queries relating to the same topical information need. Search tasks may potentially span across multiple search sessions. Most existing research on search task extraction has focused on identifying tasks within a single session, where the notion of a session is defined by a fixed length time window. By contrast, in this work we seek to identify tasks that span across multiple sessions. To identify tasks, we conduct a global analysis of a query log in its entirety without restricting analysis to individual temporal windows. To capture inherent task semantics, we represent queries as vectors in an abstract space. We learn the embedding of query words in this space by leveraging the temporal and lexical contexts of queries. To evaluate the effectiveness of the proposed query embedding, we conduct experiments of clustering queries into tasks with a particular interest of measuring the cross-session search task recall. Results of our experiments demonstrate that task extraction effectiveness, including cross-session recall, is improved significantly with the help of our proposed method of embedding the query terms by leveraging the temporal and templexical contexts of queries.",Tempo-Lexical Context Driven Word Embedding for Cross-Session Search Task Extraction,"Search task extraction in information retrieval is the process of identifying search intents over a set of queries relating to the same topical information need. Search tasks may potentially span across multiple search sessions. Most existing research on search task extraction has focused on identifying tasks within a single session, where the notion of a session is defined by a fixed length time window. By contrast, in this work we seek to identify tasks that span across multiple sessions. To identify tasks, we conduct a global analysis of a query log in its entirety without restricting analysis to individual temporal windows. 
To capture inherent task semantics, we represent queries as vectors in an abstract space. We learn the embedding of query words in this space by leveraging the temporal and lexical contexts of queries. To evaluate the effectiveness of the proposed query embedding, we conduct experiments of clustering queries into tasks with a particular interest of measuring the cross-session search task recall. Results of our experiments demonstrate that task extraction effectiveness, including cross-session recall, is improved significantly with the help of our proposed method of embedding the query terms by leveraging the temporal and templexical contexts of queries.",This work was supported by Science Foundation Ireland as part of the ADAPT Centre (Grant No. 13/RC/2106) (www.adaptcentre.ie).,"Tempo-Lexical Context Driven Word Embedding for Cross-Session Search Task Extraction. Search task extraction in information retrieval is the process of identifying search intents over a set of queries relating to the same topical information need. Search tasks may potentially span across multiple search sessions. Most existing research on search task extraction has focused on identifying tasks within a single session, where the notion of a session is defined by a fixed length time window. By contrast, in this work we seek to identify tasks that span across multiple sessions. To identify tasks, we conduct a global analysis of a query log in its entirety without restricting analysis to individual temporal windows. To capture inherent task semantics, we represent queries as vectors in an abstract space. We learn the embedding of query words in this space by leveraging the temporal and lexical contexts of queries. To evaluate the effectiveness of the proposed query embedding, we conduct experiments of clustering queries into tasks with a particular interest of measuring the cross-session search task recall. Results of our experiments demonstrate that task extraction effectiveness, including cross-session recall, is improved significantly with the help of our proposed method of embedding the query terms by leveraging the temporal and templexical contexts of queries.",2018
xu-etal-2018-unpaired,https://aclanthology.org/P18-1090,0,,,,,,,"Unpaired Sentiment-to-Sentiment Translation: A Cycled Reinforcement Learning Approach. The goal of sentiment-to-sentiment ""translation"" is to change the underlying sentiment of a sentence while keeping its content. The main challenge is the lack of parallel data. To solve this problem, we propose a cycled reinforcement learning method that enables training on unpaired data by collaboration between a neutralization module and an emotionalization module. We evaluate our approach on two review datasets, Yelp and Amazon. Experimental results show that our approach significantly outperforms the state-of-the-art systems. Especially, the proposed method substantially improves the content preservation performance. The BLEU score is improved from 1.64 to 22.46 and from 0.56 to 14.06 on the two datasets, respectively. 1",Unpaired Sentiment-to-Sentiment Translation: A Cycled Reinforcement Learning Approach,"The goal of sentiment-to-sentiment ""translation"" is to change the underlying sentiment of a sentence while keeping its content. The main challenge is the lack of parallel data. To solve this problem, we propose a cycled reinforcement learning method that enables training on unpaired data by collaboration between a neutralization module and an emotionalization module. We evaluate our approach on two review datasets, Yelp and Amazon. Experimental results show that our approach significantly outperforms the state-of-the-art systems. Especially, the proposed method substantially improves the content preservation performance. The BLEU score is improved from 1.64 to 22.46 and from 0.56 to 14.06 on the two datasets, respectively. 1",Unpaired Sentiment-to-Sentiment Translation: A Cycled Reinforcement Learning Approach,"The goal of sentiment-to-sentiment ""translation"" is to change the underlying sentiment of a sentence while keeping its content. The main challenge is the lack of parallel data. To solve this problem, we propose a cycled reinforcement learning method that enables training on unpaired data by collaboration between a neutralization module and an emotionalization module. We evaluate our approach on two review datasets, Yelp and Amazon. Experimental results show that our approach significantly outperforms the state-of-the-art systems. Especially, the proposed method substantially improves the content preservation performance. The BLEU score is improved from 1.64 to 22.46 and from 0.56 to 14.06 on the two datasets, respectively. 1","This work was supported in part by National Natural Science Foundation of China (No. 61673028), National High Technology Research and Development Program of China (863 Program, No. 2015AA015404), and the National Thousand Young Talents Program. Xu Sun is the corresponding author of this paper.","Unpaired Sentiment-to-Sentiment Translation: A Cycled Reinforcement Learning Approach. The goal of sentiment-to-sentiment ""translation"" is to change the underlying sentiment of a sentence while keeping its content. The main challenge is the lack of parallel data. To solve this problem, we propose a cycled reinforcement learning method that enables training on unpaired data by collaboration between a neutralization module and an emotionalization module. We evaluate our approach on two review datasets, Yelp and Amazon. Experimental results show that our approach significantly outperforms the state-of-the-art systems. 
Especially, the proposed method substantially improves the content preservation performance. The BLEU score is improved from 1.64 to 22.46 and from 0.56 to 14.06 on the two datasets, respectively. 1",2018
delpeuch-preller-2014-natural,https://aclanthology.org/W14-1407,0,,,,,,,"From Natural Language to RDF Graphs with Pregroups. We define an algorithm translating natural language sentences to the formal syntax of RDF, an existential conjunctive logic widely used on the Semantic Web. Our translation is based on pregroup grammars, an efficient type-logical grammatical framework with a transparent syntax-semantics interface. We introduce a restricted notion of side effects in the semantic category of finitely generated free semimodules over 0, 1 to that end. The translation gives an intensional counterpart to previous extensional models. We establish a one-to-one correspondence between extensional models and RDF models such that satisfaction is preserved. Our translation encompasses the expressivity of the target language and supports complex linguistic constructions like relative clauses and unbounded dependencies.",From Natural Language to {RDF} Graphs with Pregroups,"We define an algorithm translating natural language sentences to the formal syntax of RDF, an existential conjunctive logic widely used on the Semantic Web. Our translation is based on pregroup grammars, an efficient type-logical grammatical framework with a transparent syntax-semantics interface. We introduce a restricted notion of side effects in the semantic category of finitely generated free semimodules over {0, 1} to that end. The translation gives an intensional counterpart to previous extensional models. We establish a one-to-one correspondence between extensional models and RDF models such that satisfaction is preserved. Our translation encompasses the expressivity of the target language and supports complex linguistic constructions like relative clauses and unbounded dependencies.",From Natural Language to RDF Graphs with Pregroups,"We define an algorithm translating natural language sentences to the formal syntax of RDF, an existential conjunctive logic widely used on the Semantic Web. Our translation is based on pregroup grammars, an efficient type-logical grammatical framework with a transparent syntax-semantics interface. We introduce a restricted notion of side effects in the semantic category of finitely generated free semimodules over 0, 1 to that end. The translation gives an intensional counterpart to previous extensional models. We establish a one-to-one correspondence between extensional models and RDF models such that satisfaction is preserved. Our translation encompasses the expressivity of the target language and supports complex linguistic constructions like relative clauses and unbounded dependencies.","This work was supported by the École Normale Supérieure and the LIRMM. The first author wishes to thank David Naccache, Alain Lecomte, Antoine Amarilli, Hugo Vanneuville and both authors the members of the TEXTE group at the LIRMM for their interest in the project.","From Natural Language to RDF Graphs with Pregroups. We define an algorithm translating natural language sentences to the formal syntax of RDF, an existential conjunctive logic widely used on the Semantic Web. Our translation is based on pregroup grammars, an efficient type-logical grammatical framework with a transparent syntax-semantics interface. We introduce a restricted notion of side effects in the semantic category of finitely generated free semimodules over 0, 1 to that end. The translation gives an intensional counterpart to previous extensional models. 
We establish a one-to-one correspondence between extensional models and RDF models such that satisfaction is preserved. Our translation encompasses the expressivity of the target language and supports complex linguistic constructions like relative clauses and unbounded dependencies.",2014
lager-1998-logic,https://aclanthology.org/W98-1616,0,,,,,,,Logic for Part-of-Speech Tagging and Shallow Parsing. ,Logic for Part-of-Speech Tagging and Shallow Parsing,,Logic for Part-of-Speech Tagging and Shallow Parsing,,"This work was conducted within the TagLog Project, supported by NUTEK and HSFR. I am grateful to my colleagues at Uppsala University and Göteborg University for useful discussions, and in particular to Joakim Nivre in Göteborg.",Logic for Part-of-Speech Tagging and Shallow Parsing. ,1998
wang-etal-2018-denoising,https://aclanthology.org/W18-6314,0,,,,,,,"Denoising Neural Machine Translation Training with Trusted Data and Online Data Selection. Measuring domain relevance of data and identifying or selecting well-fit domain data for machine translation (MT) is a well-studied topic, but denoising is not yet. Denoising is concerned with a different type of data quality and tries to reduce the negative impact of data noise on MT training, in particular, neural MT (NMT) training. This paper generalizes methods for measuring and selecting data for domain MT and applies them to denoising NMT training. The proposed approach uses trusted data and a denoising curriculum realized by online data selection. Intrinsic and extrinsic evaluations of the approach show its significant effectiveness for NMT to train on data with severe noise.",Denoising Neural Machine Translation Training with Trusted Data and Online Data Selection,"Measuring domain relevance of data and identifying or selecting well-fit domain data for machine translation (MT) is a well-studied topic, but denoising is not yet. Denoising is concerned with a different type of data quality and tries to reduce the negative impact of data noise on MT training, in particular, neural MT (NMT) training. This paper generalizes methods for measuring and selecting data for domain MT and applies them to denoising NMT training. The proposed approach uses trusted data and a denoising curriculum realized by online data selection. Intrinsic and extrinsic evaluations of the approach show its significant effectiveness for NMT to train on data with severe noise.",Denoising Neural Machine Translation Training with Trusted Data and Online Data Selection,"Measuring domain relevance of data and identifying or selecting well-fit domain data for machine translation (MT) is a well-studied topic, but denoising is not yet. Denoising is concerned with a different type of data quality and tries to reduce the negative impact of data noise on MT training, in particular, neural MT (NMT) training. This paper generalizes methods for measuring and selecting data for domain MT and applies them to denoising NMT training. The proposed approach uses trusted data and a denoising curriculum realized by online data selection. Intrinsic and extrinsic evaluations of the approach show its significant effectiveness for NMT to train on data with severe noise.","The authors would like to thank George Foster for his help refine the paper and advice on various technical isses in the paper, Thorsten Brants for his earlier work on the topic, Christian Buck for his help with the Paracrawl data, Yuan Cao for his valuable comments and suggestions on the paper, and the anonymous reviewers for their constructive reviews.","Denoising Neural Machine Translation Training with Trusted Data and Online Data Selection. Measuring domain relevance of data and identifying or selecting well-fit domain data for machine translation (MT) is a well-studied topic, but denoising is not yet. Denoising is concerned with a different type of data quality and tries to reduce the negative impact of data noise on MT training, in particular, neural MT (NMT) training. This paper generalizes methods for measuring and selecting data for domain MT and applies them to denoising NMT training. The proposed approach uses trusted data and a denoising curriculum realized by online data selection. 
Intrinsic and extrinsic evaluations of the approach show its significant effectiveness for NMT to train on data with severe noise.",2018
templeton-burger-1983-problems,https://aclanthology.org/A83-1002,0,,,,,,,"Problems in Natural-Language Interface to DBMS With Examples From EUFID. For five years the End-User Friendly Interface to Data management (EUFID) project team at System Development Corporation worked on the design and implementation of a Natural-Language Interface (NLI) system that was to be independent of both the application and the database management system. In this paper we describe application, natural-language and database management problems involved in NLI development, with specific reference to the EUFID system as an example. Language"", ""World
Language"", and ""Data Base Language"" and appear to correspond roughly to the ""external"", ""conceptual"", and ""internal"" views of data as described by C. J. Date",Problems in Natural-Language Interface to {DBMS} With Examples From {EUFID},"For five years the End-User Friendly Interface to Data management (EUFID) project team at System Development Corporation worked on the design and implementation of a Natural-Language Interface (NLI) system that was to be independent of both the application and the database management system. In this paper we describe application, natural-language and database management problems involved in NLI development, with specific reference to the EUFID system as an example. Language"", ""World
Language"", and ""Data Base Language"" and appear to correspond roughly to the ""external"", ""conceptual"", and ""internal"" views of data as described by C. J. Date",Problems in Natural-Language Interface to DBMS With Examples From EUFID,"For five years the End-User Friendly Interface to Data management (EUFID) project team at System Development Corporation worked on the design and implementation of a Natural-Language Interface (NLI) system that was to be independent of both the application and the database management system. In this paper we describe application, natural-language and database management problems involved in NLI development, with specific reference to the EUFID system as an example. Language"", ""World
Language"", and ""Data Base Language"" and appear to correspond roughly to the ""external"", ""conceptual"", and ""internal"" views of data as described by C. J. Date",We would like to acknowledge,"Problems in Natural-Language Interface to DBMS With Examples From EUFID. For five years the End-User Friendly Interface to Data management (EUFID) project team at System Development Corporation worked on the design and implementation of a Natural-Language Interface (NLI) system that was to be independent of both the application and the database management system. In this paper we describe application, natural-language and database management problems involved in NLI development, with specific reference to the EUFID system as an example. Language"", ""World
Language"", and ""Data Base Language"" and appear to correspond roughly to the ""external"", ""conceptual"", and ""internal"" views of data as described by C. J. Date",1983
kar-etal-2018-folksonomication,https://aclanthology.org/C18-1244,0,,,,,,,"Folksonomication: Predicting Tags for Movies from Plot Synopses using Emotion Flow Encoded Neural Network. Folksonomy of movies covers a wide range of heterogeneous information about movies, like the genre, plot structure, visual experiences, soundtracks, metadata, and emotional experiences from watching a movie. Being able to automatically generate or predict tags for movies can help recommendation engines improve retrieval of similar movies, and help viewers know what to expect from a movie in advance. In this work, we explore the problem of creating tags for movies from plot synopses. We propose a novel neural network model that merges information from synopses and emotion flows throughout the plots to predict a set of tags for movies. We compare our system with multiple baselines and found that the addition of emotion flows boosts the performance of the network by learning ≈18% more tags than a traditional machine learning system.",{F}olksonomication: Predicting Tags for Movies from Plot Synopses using Emotion Flow Encoded Neural Network,"Folksonomy of movies covers a wide range of heterogeneous information about movies, like the genre, plot structure, visual experiences, soundtracks, metadata, and emotional experiences from watching a movie. Being able to automatically generate or predict tags for movies can help recommendation engines improve retrieval of similar movies, and help viewers know what to expect from a movie in advance. In this work, we explore the problem of creating tags for movies from plot synopses. We propose a novel neural network model that merges information from synopses and emotion flows throughout the plots to predict a set of tags for movies. We compare our system with multiple baselines and found that the addition of emotion flows boosts the performance of the network by learning ≈18% more tags than a traditional machine learning system.",Folksonomication: Predicting Tags for Movies from Plot Synopses using Emotion Flow Encoded Neural Network,"Folksonomy of movies covers a wide range of heterogeneous information about movies, like the genre, plot structure, visual experiences, soundtracks, metadata, and emotional experiences from watching a movie. Being able to automatically generate or predict tags for movies can help recommendation engines improve retrieval of similar movies, and help viewers know what to expect from a movie in advance. In this work, we explore the problem of creating tags for movies from plot synopses. We propose a novel neural network model that merges information from synopses and emotion flows throughout the plots to predict a set of tags for movies. We compare our system with multiple baselines and found that the addition of emotion flows boosts the performance of the network by learning ≈18% more tags than a traditional machine learning system.",This work was partially supported by the National Science Foundation under grant number 1462141 and by the U.S. Department of Defense under grant W911NF-16-1-0422.,"Folksonomication: Predicting Tags for Movies from Plot Synopses using Emotion Flow Encoded Neural Network. Folksonomy of movies covers a wide range of heterogeneous information about movies, like the genre, plot structure, visual experiences, soundtracks, metadata, and emotional experiences from watching a movie. 
Being able to automatically generate or predict tags for movies can help recommendation engines improve retrieval of similar movies, and help viewers know what to expect from a movie in advance. In this work, we explore the problem of creating tags for movies from plot synopses. We propose a novel neural network model that merges information from synopses and emotion flows throughout the plots to predict a set of tags for movies. We compare our system with multiple baselines and found that the addition of emotion flows boosts the performance of the network by learning ≈18% more tags than a traditional machine learning system.",2018
thomas-1980-computer,https://aclanthology.org/P80-1022,0,,,,,,,"The Computer as an Active Communication Medium. goals r4imetacomments that direct the conversation [~ Communication is often conceived of in basically the following terms. A person has some idea which he or she wants to communicate to a second person. The first person translates that idea into some symbol system which is transmitted through some medium to the receiver. The receiver receives the transmission and translates it into some internal idea. Communication, in this view, is considered good to the extent that there is an isomorphism between the idea in the head of the sender before sending the message and the idea in the receiver's head after receiving the message. A good medium of communication, in this view, is one that adds minimal noise to the signal. Messages are considered good partly to the extent that they are unambiguous. This is, by and large, the view of many of the people concerned with computers and communication.
For a moment, consider a quite different view of communication. In this view, communication is basically a design-interpretation process. One person has goals that they believe can be aided by communicating. The person therefore designs a message which is intended to facilitate those goals. In most cases, the goal includes changing some cognitive structure in one or more other people's minds. Each receiver of a message however has his or her own goals in mind and a model of the world (including a model of the sender) and interprets the received message in light of that other world information and relative to the perceived goals of the sender. This view has been articulated further elsewhere !~] .",The Computer as an Active Communication Medium,"goals r4imetacomments that direct the conversation [~ Communication is often conceived of in basically the following terms. A person has some idea which he or she wants to communicate to a second person. The first person translates that idea into some symbol system which is transmitted through some medium to the receiver. The receiver receives the transmission and translates it into some internal idea. Communication, in this view, is considered good to the extent that there is an isomorphism between the idea in the head of the sender before sending the message and the idea in the receiver's head after receiving the message. A good medium of communication, in this view, is one that adds minimal noise to the signal. Messages are considered good partly to the extent that they are unambiguous. This is, by and large, the view of many of the people concerned with computers and communication.
For a moment, consider a quite different view of communication. In this view, communication is basically a design-interpretation process. One person has goals that they believe can be aided by communicating. The person therefore designs a message which is intended to facilitate those goals. In most cases, the goal includes changing some cognitive structure in one or more other people's minds. Each receiver of a message however has his or her own goals in mind and a model of the world (including a model of the sender) and interprets the received message in light of that other world information and relative to the perceived goals of the sender. This view has been articulated further elsewhere !~] .",The Computer as an Active Communication Medium,"goals r4imetacomments that direct the conversation [~ Communication is often conceived of in basically the following terms. A person has some idea which he or she wants to communicate to a second person. The first person translates that idea into some symbol system which is transmitted through some medium to the receiver. The receiver receives the transmission and translates it into some internal idea. Communication, in this view, is considered good to the extent that there is an isomorphism between the idea in the head of the sender before sending the message and the idea in the receiver's head after receiving the message. A good medium of communication, in this view, is one that adds minimal noise to the signal. Messages are considered good partly to the extent that they are unambiguous. This is, by and large, the view of many of the people concerned with computers and communication.
For a moment, consider a quite different view of communication. In this view, communication is basically a design-interpretation process. One person has goals that they believe can be aided by communicating. The person therefore designs a message which is intended to facilitate those goals. In most cases, the goal includes changing some cognitive structure in one or more other people's minds. Each receiver of a message however has his or her own goals in mind and a model of the world (including a model of the sender) and interprets the received message in light of that other world information and relative to the perceived goals of the sender. This view has been articulated further elsewhere !~] .",,"The Computer as an Active Communication Medium. goals r4imetacomments that direct the conversation [~ Communication is often conceived of in basically the following terms. A person has some idea which he or she wants to communicate to a second person. The first person translates that idea into some symbol system which is transmitted through some medium to the receiver. The receiver receives the transmission and translates it into some internal idea. Communication, in this view, is considered good to the extent that there is an isomorphism between the idea in the head of the sender before sending the message and the idea in the receiver's head after receiving the message. A good medium of communication, in this view, is one that adds minimal noise to the signal. Messages are considered good partly to the extent that they are unambiguous. This is, by and large, the view of many of the people concerned with computers and communication.
For a moment, consider a quite different view of communication. In this view, communication is basically a design-interpretation process. One person has goals that they believe can be aided by communicating. The person therefore designs a message which is intended to facilitate those goals. In most cases, the goal includes changing some cognitive structure in one or more other people's minds. Each receiver of a message however has his or her own goals in mind and a model of the world (including a model of the sender) and interprets the received message in light of that other world information and relative to the perceived goals of the sender. This view has been articulated further elsewhere !~] .",1980
joshi-srinivas-1994-disambiguation,https://aclanthology.org/C94-1024,0,,,,,,,"Disambiguation of Super Parts of Speech (or Supertags): Almost Parsing. In a lexicalized grammar formalism such as Lexicalized Tree-Adjoining Grammar (LTAG), each lexical item is associated with at least one elementary structure (supertag) that localizes syntactic and semantic dependencies. Thus a parser for a lexicalized grammar must search a large set of supertags to choose the right ones to combine for the parse of the sentence. We present techniques for disambiguating supertags using local information such as lexical preference and local lexical dependencies. The similarity between LTAG and Dependency grammars is exploited in the dependency model of supertag disambiguation. The performance results for various models of supertag disambiguation such as unigram, trigram and dependency-based models are presented.",Disambiguation of Super Parts of Speech (or Supertags): Almost Parsing,"In a lexicalized grammar formalism such as Lexicalized Tree-Adjoining Grammar (LTAG), each lexical item is associated with at least one elementary structure (supertag) that localizes syntactic and semantic dependencies. Thus a parser for a lexicalized grammar must search a large set of supertags to choose the right ones to combine for the parse of the sentence. We present techniques for disambiguating supertags using local information such as lexical preference and local lexical dependencies. The similarity between LTAG and Dependency grammars is exploited in the dependency model of supertag disambiguation. The performance results for various models of supertag disambiguation such as unigram, trigram and dependency-based models are presented.",Disambiguation of Super Parts of Speech (or Supertags): Almost Parsing,"In a lexicalized grammar formalism such as Lexicalized Tree-Adjoining Grammar (LTAG), each lexical item is associated with at least one elementary structure (supertag) that localizes syntactic and semantic dependencies. Thus a parser for a lexicalized grammar must search a large set of supertags to choose the right ones to combine for the parse of the sentence. We present techniques for disambiguating supertags using local information such as lexical preference and local lexical dependencies. The similarity between LTAG and Dependency grammars is exploited in the dependency model of supertag disambiguation. The performance results for various models of supertag disambiguation such as unigram, trigram and dependency-based models are presented.",,"Disambiguation of Super Parts of Speech (or Supertags): Almost Parsing. In a lexicalized grammar formalism such as Lexicalized Tree-Adjoining Grammar (LTAG), each lexical item is associated with at least one elementary structure (supertag) that localizes syntactic and semantic dependencies. Thus a parser for a lexicalized grammar must search a large set of supertags to choose the right ones to combine for the parse of the sentence. We present techniques for disambiguating supertags using local information such as lexical preference and local lexical dependencies. The similarity between LTAG and Dependency grammars is exploited in the dependency model of supertag disambiguation. The performance results for various models of supertag disambiguation such as unigram, trigram and dependency-based models are presented.",1994
dugast-etal-2008-relearn,https://aclanthology.org/W08-0327,0,,,,,,,"Can we Relearn an RBMT System?. This paper describes SYSTRAN submissions for the shared task of the third Workshop on Statistical Machine Translation at ACL. Our main contribution consists in a French-English statistical model trained without the use of any human-translated parallel corpus. In substitution, we translated a monolingual corpus with SYSTRAN rule-based translation engine to produce the parallel corpus. The results are provided herein, along with a measure of error analysis.",Can we Relearn an {RBMT} System?,"This paper describes SYSTRAN submissions for the shared task of the third Workshop on Statistical Machine Translation at ACL. Our main contribution consists in a French-English statistical model trained without the use of any human-translated parallel corpus. In substitution, we translated a monolingual corpus with SYSTRAN rule-based translation engine to produce the parallel corpus. The results are provided herein, along with a measure of error analysis.",Can we Relearn an RBMT System?,"This paper describes SYSTRAN submissions for the shared task of the third Workshop on Statistical Machine Translation at ACL. Our main contribution consists in a French-English statistical model trained without the use of any human-translated parallel corpus. In substitution, we translated a monolingual corpus with SYSTRAN rule-based translation engine to produce the parallel corpus. The results are provided herein, along with a measure of error analysis.",,"Can we Relearn an RBMT System?. This paper describes SYSTRAN submissions for the shared task of the third Workshop on Statistical Machine Translation at ACL. Our main contribution consists in a French-English statistical model trained without the use of any human-translated parallel corpus. In substitution, we translated a monolingual corpus with SYSTRAN rule-based translation engine to produce the parallel corpus. The results are provided herein, along with a measure of error analysis.",2008
shutova-2009-sense,https://aclanthology.org/P09-3001,0,,,,,,,"Sense-based Interpretation of Logical Metonymy Using a Statistical Method. The use of figurative language is ubiquitous in natural language texts and it is a serious bottleneck in automatic text understanding. We address the problem of interpretation of logical metonymy, using a statistical method. Our approach originates from that of Lapata and Lascarides (2003), which generates a list of nondisambiguated interpretations with their likelihood derived from a corpus. We propose a novel sense-based representation of the interpretation of logical metonymy and a more thorough evaluation method than that of Lapata and Lascarides (2003). By carrying out a human experiment we prove that such a representation is intuitive to human subjects. We derive a ranking scheme for verb senses using an unannotated corpus, WordNet sense numbering and glosses. We also provide an account of the requirements that different aspectual verbs impose onto the interpretation of logical metonymy. We tested our system on verb-object metonymic phrases. It identifies and ranks metonymic interpretations with the mean average precision of 0.83 as compared to the gold standard.",Sense-based Interpretation of Logical Metonymy Using a Statistical Method,"The use of figurative language is ubiquitous in natural language texts and it is a serious bottleneck in automatic text understanding. We address the problem of interpretation of logical metonymy, using a statistical method. Our approach originates from that of Lapata and Lascarides (2003), which generates a list of nondisambiguated interpretations with their likelihood derived from a corpus. We propose a novel sense-based representation of the interpretation of logical metonymy and a more thorough evaluation method than that of Lapata and Lascarides (2003). By carrying out a human experiment we prove that such a representation is intuitive to human subjects. We derive a ranking scheme for verb senses using an unannotated corpus, WordNet sense numbering and glosses. We also provide an account of the requirements that different aspectual verbs impose onto the interpretation of logical metonymy. We tested our system on verb-object metonymic phrases. It identifies and ranks metonymic interpretations with the mean average precision of 0.83 as compared to the gold standard.",Sense-based Interpretation of Logical Metonymy Using a Statistical Method,"The use of figurative language is ubiquitous in natural language texts and it is a serious bottleneck in automatic text understanding. We address the problem of interpretation of logical metonymy, using a statistical method. Our approach originates from that of Lapata and Lascarides (2003), which generates a list of nondisambiguated interpretations with their likelihood derived from a corpus. We propose a novel sense-based representation of the interpretation of logical metonymy and a more thorough evaluation method than that of Lapata and Lascarides (2003). By carrying out a human experiment we prove that such a representation is intuitive to human subjects. We derive a ranking scheme for verb senses using an unannotated corpus, WordNet sense numbering and glosses. We also provide an account of the requirements that different aspectual verbs impose onto the interpretation of logical metonymy. We tested our system on verb-object metonymic phrases. 
It identifies and ranks metonymic interpretations with the mean average precision of 0.83 as compared to the gold standard.",I would like to thank Simone Teufel and Anna Korhonen for their valuable feedback on this project and my anonymous reviewers whose comments helped to improve the paper. I am also very grateful to Cambridge Overseas Trust who made this research possible by funding my studies.,"Sense-based Interpretation of Logical Metonymy Using a Statistical Method. The use of figurative language is ubiquitous in natural language texts and it is a serious bottleneck in automatic text understanding. We address the problem of interpretation of logical metonymy, using a statistical method. Our approach originates from that of Lapata and Lascarides (2003), which generates a list of nondisambiguated interpretations with their likelihood derived from a corpus. We propose a novel sense-based representation of the interpretation of logical metonymy and a more thorough evaluation method than that of Lapata and Lascarides (2003). By carrying out a human experiment we prove that such a representation is intuitive to human subjects. We derive a ranking scheme for verb senses using an unannotated corpus, WordNet sense numbering and glosses. We also provide an account of the requirements that different aspectual verbs impose onto the interpretation of logical metonymy. We tested our system on verb-object metonymic phrases. It identifies and ranks metonymic interpretations with the mean average precision of 0.83 as compared to the gold standard.",2009
wilks-1993-corpora,https://aclanthology.org/1993.mtsummit-1.12,0,,,,,,,"Corpora and Machine Translation. The paper discusses the benefits of the world-wide move in recent years towards the use of corpora in natural language processing. The spoken paper will discuss a range of trends in that area, but this version concentrates on one extreme example of work based only on corpora and statistics: the IBM approach to machine translation, where I argue that it has done rather better after a few years than many sceptics believed it could. However, it is neither as novel as its proponents suggest nor is it making claims as clear and simple as they would have us believe. The performance of the purely statistical system (and we discuss what that phrase could mean) has not equalled the performance of SYSTRAN. More importantly, the system is now being shifted to a hybrid that incorporates much of the linguistic information that it was initially claimed by IBM would not be needed for MT. Hence, one might infer that its own proponent do not believe ""pure"" statistics sufficient for MT of a usable quality. In addition to real limits on the statistical method, there are also strong economic limits imposed by their methodology of data gathering. However, the paper concludes that the IBM group have done the field a great service in pushing these methods far further than before, and by reminding everyone of the virtues of empiricism in the field and the need for large scale gathering of data.",Corpora and Machine Translation,"The paper discusses the benefits of the world-wide move in recent years towards the use of corpora in natural language processing. The spoken paper will discuss a range of trends in that area, but this version concentrates on one extreme example of work based only on corpora and statistics: the IBM approach to machine translation, where I argue that it has done rather better after a few years than many sceptics believed it could. However, it is neither as novel as its proponents suggest nor is it making claims as clear and simple as they would have us believe. The performance of the purely statistical system (and we discuss what that phrase could mean) has not equalled the performance of SYSTRAN. More importantly, the system is now being shifted to a hybrid that incorporates much of the linguistic information that it was initially claimed by IBM would not be needed for MT. Hence, one might infer that its own proponent do not believe ""pure"" statistics sufficient for MT of a usable quality. In addition to real limits on the statistical method, there are also strong economic limits imposed by their methodology of data gathering. However, the paper concludes that the IBM group have done the field a great service in pushing these methods far further than before, and by reminding everyone of the virtues of empiricism in the field and the need for large scale gathering of data.",Corpora and Machine Translation,"The paper discusses the benefits of the world-wide move in recent years towards the use of corpora in natural language processing. The spoken paper will discuss a range of trends in that area, but this version concentrates on one extreme example of work based only on corpora and statistics: the IBM approach to machine translation, where I argue that it has done rather better after a few years than many sceptics believed it could. However, it is neither as novel as its proponents suggest nor is it making claims as clear and simple as they would have us believe. 
The performance of the purely statistical system (and we discuss what that phrase could mean) has not equalled the performance of SYSTRAN. More importantly, the system is now being shifted to a hybrid that incorporates much of the linguistic information that it was initially claimed by IBM would not be needed for MT. Hence, one might infer that its own proponent do not believe ""pure"" statistics sufficient for MT of a usable quality. In addition to real limits on the statistical method, there are also strong economic limits imposed by their methodology of data gathering. However, the paper concludes that the IBM group have done the field a great service in pushing these methods far further than before, and by reminding everyone of the virtues of empiricism in the field and the need for large scale gathering of data.","Acknowledgements: James Pustejovsky, Bob Ingria, Bran Boguraev, Sergei Nirenburg, Ted Dunning and others in the CRL natural language processing group.","Corpora and Machine Translation. The paper discusses the benefits of the world-wide move in recent years towards the use of corpora in natural language processing. The spoken paper will discuss a range of trends in that area, but this version concentrates on one extreme example of work based only on corpora and statistics: the IBM approach to machine translation, where I argue that it has done rather better after a few years than many sceptics believed it could. However, it is neither as novel as its proponents suggest nor is it making claims as clear and simple as they would have us believe. The performance of the purely statistical system (and we discuss what that phrase could mean) has not equalled the performance of SYSTRAN. More importantly, the system is now being shifted to a hybrid that incorporates much of the linguistic information that it was initially claimed by IBM would not be needed for MT. Hence, one might infer that its own proponent do not believe ""pure"" statistics sufficient for MT of a usable quality. In addition to real limits on the statistical method, there are also strong economic limits imposed by their methodology of data gathering. However, the paper concludes that the IBM group have done the field a great service in pushing these methods far further than before, and by reminding everyone of the virtues of empiricism in the field and the need for large scale gathering of data.",1993
luce-etal-2016-cogalex,https://aclanthology.org/W16-5315,0,,,,,,,"CogALex-V Shared Task: LOPE. This paper attempts to answer two questions posed by the CogALex shared task: How to determine if two words are semantically related and, if they are related, which semantic relation holds between them. We present a simple, effective approach to the first problem, using word vectors to calculate similarity, and a naive approach to the second problem, by assigning word pairs semantic relations based on their parts of speech. The results of the second task are significantly improved in our post-hoc experiment, where we attempt to apply linguistic regularities in word representations (Mikolov 2013b) to these particular semantic relations.",{C}og{AL}ex-{V} Shared Task: {LOPE},"This paper attempts to answer two questions posed by the CogALex shared task: How to determine if two words are semantically related and, if they are related, which semantic relation holds between them. We present a simple, effective approach to the first problem, using word vectors to calculate similarity, and a naive approach to the second problem, by assigning word pairs semantic relations based on their parts of speech. The results of the second task are significantly improved in our post-hoc experiment, where we attempt to apply linguistic regularities in word representations (Mikolov 2013b) to these particular semantic relations.",CogALex-V Shared Task: LOPE,"This paper attempts to answer two questions posed by the CogALex shared task: How to determine if two words are semantically related and, if they are related, which semantic relation holds between them. We present a simple, effective approach to the first problem, using word vectors to calculate similarity, and a naive approach to the second problem, by assigning word pairs semantic relations based on their parts of speech. The results of the second task are significantly improved in our post-hoc experiment, where we attempt to apply linguistic regularities in word representations (Mikolov 2013b) to these particular semantic relations.",,"CogALex-V Shared Task: LOPE. This paper attempts to answer two questions posed by the CogALex shared task: How to determine if two words are semantically related and, if they are related, which semantic relation holds between them. We present a simple, effective approach to the first problem, using word vectors to calculate similarity, and a naive approach to the second problem, by assigning word pairs semantic relations based on their parts of speech. The results of the second task are significantly improved in our post-hoc experiment, where we attempt to apply linguistic regularities in word representations (Mikolov 2013b) to these particular semantic relations.",2016
yoshimura-etal-2020-reference,https://aclanthology.org/2020.coling-main.573,0,,,,,,,"SOME: Reference-less Sub-Metrics Optimized for Manual Evaluations of Grammatical Error Correction. We propose a reference-less metric trained on manual evaluations of system outputs for grammatical error correction. Previous studies have shown that reference-less metrics are promising; however, existing metrics are not optimized for manual evaluation of the system output because there is no dataset of system output with manual evaluation. This study manually evaluates the output of grammatical error correction systems to optimize the metrics. Experimental results show that the proposed metric improves the correlation with manual evaluation in both system- and sentence-level meta-evaluation. Our dataset and metric will be made publicly available. 1 2 Related Work Napoles et al. (2016) pioneered the reference-less GEC metric. They presented a metric based on grammatical error detection tools and linguistic features such as language models, and demonstrated that its performance was close to that of reference-based metrics. Asano et al. (2017) combined three submetrics: grammaticality, fluency, and meaning preservation, and outperformed reference-based metrics. They trained a logistic regression model on the GUG dataset 2 (Heilman et al.",{SOME}: Reference-less Sub-Metrics Optimized for Manual Evaluations of Grammatical Error Correction,"We propose a reference-less metric trained on manual evaluations of system outputs for grammatical error correction. Previous studies have shown that reference-less metrics are promising; however, existing metrics are not optimized for manual evaluation of the system output because there is no dataset of system output with manual evaluation. This study manually evaluates the output of grammatical error correction systems to optimize the metrics. Experimental results show that the proposed metric improves the correlation with manual evaluation in both system- and sentence-level meta-evaluation. Our dataset and metric will be made publicly available. 1 2 Related Work Napoles et al. (2016) pioneered the reference-less GEC metric. They presented a metric based on grammatical error detection tools and linguistic features such as language models, and demonstrated that its performance was close to that of reference-based metrics. Asano et al. (2017) combined three submetrics: grammaticality, fluency, and meaning preservation, and outperformed reference-based metrics. They trained a logistic regression model on the GUG dataset 2 (Heilman et al.",SOME: Reference-less Sub-Metrics Optimized for Manual Evaluations of Grammatical Error Correction,"We propose a reference-less metric trained on manual evaluations of system outputs for grammatical error correction. Previous studies have shown that reference-less metrics are promising; however, existing metrics are not optimized for manual evaluation of the system output because there is no dataset of system output with manual evaluation. This study manually evaluates the output of grammatical error correction systems to optimize the metrics. Experimental results show that the proposed metric improves the correlation with manual evaluation in both system- and sentence-level meta-evaluation. Our dataset and metric will be made publicly available. 1 2 Related Work Napoles et al. (2016) pioneered the reference-less GEC metric. 
They presented a metric based on grammatical error detection tools and linguistic features such as language models, and demonstrated that its performance was close to that of reference-based metrics. Asano et al. (2017) combined three submetrics: grammaticality, fluency, and meaning preservation, and outperformed reference-based metrics. They trained a logistic regression model on the GUG dataset 2 (Heilman et al.",This work was supported by JSPS KAKENHI Grant Number JP20K19861. We would like to thank Hiroki Asano for giving the implementation code and Keisuke Sakaguchi for the system output of JFLEG.,"SOME: Reference-less Sub-Metrics Optimized for Manual Evaluations of Grammatical Error Correction. We propose a reference-less metric trained on manual evaluations of system outputs for grammatical error correction. Previous studies have shown that reference-less metrics are promising; however, existing metrics are not optimized for manual evaluation of the system output because there is no dataset of system output with manual evaluation. This study manually evaluates the output of grammatical error correction systems to optimize the metrics. Experimental results show that the proposed metric improves the correlation with manual evaluation in both system- and sentence-level meta-evaluation. Our dataset and metric will be made publicly available. 1 2 Related Work Napoles et al. (2016) pioneered the reference-less GEC metric. They presented a metric based on grammatical error detection tools and linguistic features such as language models, and demonstrated that its performance was close to that of reference-based metrics. Asano et al. (2017) combined three submetrics: grammaticality, fluency, and meaning preservation, and outperformed reference-based metrics. They trained a logistic regression model on the GUG dataset 2 (Heilman et al.",2020
michel-etal-2020-exploring,https://aclanthology.org/2020.lrec-1.313,0,,,,,,,"Exploring Bilingual Word Embeddings for Hiligaynon, a Low-Resource Language. This paper investigates the use of bilingual word embeddings for mining Hiligaynon translations of English words. There is very little research on Hiligaynon, an extremely low-resource language of Malayo-Polynesian origin with over 9 million speakers in the Philippines (we found just one paper). We use a publicly available Hiligaynon corpus with only 300K words, and match it with a comparable corpus in English. As there are no bilingual resources available, we manually develop a English-Hiligaynon lexicon and use this to train bilingual word embeddings. But we fail to mine accurate translations due to the small amount of data. To find out if the same holds true for a related language pair, we simulate the same low-resource setup on English to German and arrive at similar results. We then vary the size of the comparable English and German corpora to determine the minimum corpus size necessary to achieve competitive results. Further, we investigate the role of the seed lexicon. We show that with the same corpus size but with a smaller seed lexicon, performance can surpass results of previous studies. We release the lexicon of 1,200 English-Hiligaynon word pairs we created to encourage further investigation.","Exploring Bilingual Word Embeddings for {H}iligaynon, a Low-Resource Language","This paper investigates the use of bilingual word embeddings for mining Hiligaynon translations of English words. There is very little research on Hiligaynon, an extremely low-resource language of Malayo-Polynesian origin with over 9 million speakers in the Philippines (we found just one paper). We use a publicly available Hiligaynon corpus with only 300K words, and match it with a comparable corpus in English. As there are no bilingual resources available, we manually develop a English-Hiligaynon lexicon and use this to train bilingual word embeddings. But we fail to mine accurate translations due to the small amount of data. To find out if the same holds true for a related language pair, we simulate the same low-resource setup on English to German and arrive at similar results. We then vary the size of the comparable English and German corpora to determine the minimum corpus size necessary to achieve competitive results. Further, we investigate the role of the seed lexicon. We show that with the same corpus size but with a smaller seed lexicon, performance can surpass results of previous studies. We release the lexicon of 1,200 English-Hiligaynon word pairs we created to encourage further investigation.","Exploring Bilingual Word Embeddings for Hiligaynon, a Low-Resource Language","This paper investigates the use of bilingual word embeddings for mining Hiligaynon translations of English words. There is very little research on Hiligaynon, an extremely low-resource language of Malayo-Polynesian origin with over 9 million speakers in the Philippines (we found just one paper). We use a publicly available Hiligaynon corpus with only 300K words, and match it with a comparable corpus in English. As there are no bilingual resources available, we manually develop a English-Hiligaynon lexicon and use this to train bilingual word embeddings. But we fail to mine accurate translations due to the small amount of data. 
To find out if the same holds true for a related language pair, we simulate the same low-resource setup on English to German and arrive at similar results. We then vary the size of the comparable English and German corpora to determine the minimum corpus size necessary to achieve competitive results. Further, we investigate the role of the seed lexicon. We show that with the same corpus size but with a smaller seed lexicon, performance can surpass results of previous studies. We release the lexicon of 1,200 English-Hiligaynon word pairs we created to encourage further investigation.",We would like to thank the reviewers for their valuable input. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement № 640550).,"Exploring Bilingual Word Embeddings for Hiligaynon, a Low-Resource Language. This paper investigates the use of bilingual word embeddings for mining Hiligaynon translations of English words. There is very little research on Hiligaynon, an extremely low-resource language of Malayo-Polynesian origin with over 9 million speakers in the Philippines (we found just one paper). We use a publicly available Hiligaynon corpus with only 300K words, and match it with a comparable corpus in English. As there are no bilingual resources available, we manually develop a English-Hiligaynon lexicon and use this to train bilingual word embeddings. But we fail to mine accurate translations due to the small amount of data. To find out if the same holds true for a related language pair, we simulate the same low-resource setup on English to German and arrive at similar results. We then vary the size of the comparable English and German corpora to determine the minimum corpus size necessary to achieve competitive results. Further, we investigate the role of the seed lexicon. We show that with the same corpus size but with a smaller seed lexicon, performance can surpass results of previous studies. We release the lexicon of 1,200 English-Hiligaynon word pairs we created to encourage further investigation.",2020
zhang-zong-2013-learning,https://aclanthology.org/P13-1140,0,,,,,,,"Learning a Phrase-based Translation Model from Monolingual Data with Application to Domain Adaptation. Currently, almost all of the statistical machine translation (SMT) models are trained with the parallel corpora in some specific domains. However, when it comes to a language pair or a different domain without any bilingual resources, the traditional SMT loses its power. Recently, some research works study the unsupervised SMT for inducing a simple word-based translation model from the monolingual corpora. It successfully bypasses the constraint of bitext for SMT and obtains a relatively promising result. In this paper, we take a step forward and propose a simple but effective method to induce a phrase-based model from the monolingual corpora given an automatically-induced translation lexicon or a manually-edited translation dictionary. We apply our method for the domain adaptation task and the extensive experiments show that our proposed method can substantially improve the translation quality.",Learning a Phrase-based Translation Model from Monolingual Data with Application to Domain Adaptation,"Currently, almost all of the statistical machine translation (SMT) models are trained with the parallel corpora in some specific domains. However, when it comes to a language pair or a different domain without any bilingual resources, the traditional SMT loses its power. Recently, some research works study the unsupervised SMT for inducing a simple word-based translation model from the monolingual corpora. It successfully bypasses the constraint of bitext for SMT and obtains a relatively promising result. In this paper, we take a step forward and propose a simple but effective method to induce a phrase-based model from the monolingual corpora given an automatically-induced translation lexicon or a manually-edited translation dictionary. We apply our method for the domain adaptation task and the extensive experiments show that our proposed method can substantially improve the translation quality.",Learning a Phrase-based Translation Model from Monolingual Data with Application to Domain Adaptation,"Currently, almost all of the statistical machine translation (SMT) models are trained with the parallel corpora in some specific domains. However, when it comes to a language pair or a different domain without any bilingual resources, the traditional SMT loses its power. Recently, some research works study the unsupervised SMT for inducing a simple word-based translation model from the monolingual corpora. It successfully bypasses the constraint of bitext for SMT and obtains a relatively promising result. In this paper, we take a step forward and propose a simple but effective method to induce a phrase-based model from the monolingual corpora given an automatically-induced translation lexicon or a manually-edited translation dictionary. We apply our method for the domain adaptation task and the extensive experiments show that our proposed method can substantially improve the translation quality.","The research work has been funded by the Hi-Tech Research and Development Program (""863"" Program) of China under Grant No. 2011AA01A207, 2012AA011101 and 2012AA011102, and also supported by the Key Project of Knowledge Innovation of Program of Chinese Academy of Sciences under Grant No. KGZD-EW-501. We would also like to thank the anonymous reviewers for their valuable suggestions. 
","Learning a Phrase-based Translation Model from Monolingual Data with Application to Domain Adaptation. Currently, almost all of the statistical machine translation (SMT) models are trained with the parallel corpora in some specific domains. However, when it comes to a language pair or a different domain without any bilingual resources, the traditional SMT loses its power. Recently, some research works study the unsupervised SMT for inducing a simple word-based translation model from the monolingual corpora. It successfully bypasses the constraint of bitext for SMT and obtains a relatively promising result. In this paper, we take a step forward and propose a simple but effective method to induce a phrase-based model from the monolingual corpora given an automatically-induced translation lexicon or a manually-edited translation dictionary. We apply our method for the domain adaptation task and the extensive experiments show that our proposed method can substantially improve the translation quality.",2013
grimm-cimiano-2021-biquad,https://aclanthology.org/2021.starsem-1.10,0,,,,,,,"BiQuAD: Towards QA based on deeper text understanding. Recent question answering and machine reading benchmarks frequently reduce the task to one of pinpointing spans within a certain text passage that answers the given question. Typically, these systems are not required to actually understand the text on a deeper level that allows for more complex reasoning on the information contained. We introduce a new dataset called BiQuAD that requires deeper comprehension in order to answer questions in both extractive and deductive fashion. The dataset consists of 4,190 closed-domain texts and a total of 99,149 question-answer pairs. The texts are synthetically generated soccer match reports that verbalize the main events of each match. All texts are accompanied by a structured Datalog program that represents a (logical) model of its information. We show that state-of-the-art QA models do not perform well on the challenging long form contexts and reasoning requirements posed by the dataset. In particular, transformer-based state-of-the-art models achieve F1-scores of only 39.0. We demonstrate how these synthetic datasets align structured knowledge with natural text and aid model introspection when approaching complex text understanding.",{B}i{Q}u{AD}: Towards {QA} based on deeper text understanding,"Recent question answering and machine reading benchmarks frequently reduce the task to one of pinpointing spans within a certain text passage that answers the given question. Typically, these systems are not required to actually understand the text on a deeper level that allows for more complex reasoning on the information contained. We introduce a new dataset called BiQuAD that requires deeper comprehension in order to answer questions in both extractive and deductive fashion. The dataset consists of 4,190 closed-domain texts and a total of 99,149 question-answer pairs. The texts are synthetically generated soccer match reports that verbalize the main events of each match. All texts are accompanied by a structured Datalog program that represents a (logical) model of its information. We show that state-of-the-art QA models do not perform well on the challenging long form contexts and reasoning requirements posed by the dataset. In particular, transformer-based state-of-the-art models achieve F1-scores of only 39.0. We demonstrate how these synthetic datasets align structured knowledge with natural text and aid model introspection when approaching complex text understanding.",BiQuAD: Towards QA based on deeper text understanding,"Recent question answering and machine reading benchmarks frequently reduce the task to one of pinpointing spans within a certain text passage that answers the given question. Typically, these systems are not required to actually understand the text on a deeper level that allows for more complex reasoning on the information contained. We introduce a new dataset called BiQuAD that requires deeper comprehension in order to answer questions in both extractive and deductive fashion. The dataset consists of 4,190 closed-domain texts and a total of 99,149 question-answer pairs. The texts are synthetically generated soccer match reports that verbalize the main events of each match. All texts are accompanied by a structured Datalog program that represents a (logical) model of its information. 
We show that state-of-the-art QA models do not perform well on the challenging long form contexts and reasoning requirements posed by the dataset. In particular, transformer-based state-of-the-art models achieve F1-scores of only 39.0. We demonstrate how these synthetic datasets align structured knowledge with natural text and aid model introspection when approaching complex text understanding.",We would like to thank the anonymous reviewers for their valuable feedback.,"BiQuAD: Towards QA based on deeper text understanding. Recent question answering and machine reading benchmarks frequently reduce the task to one of pinpointing spans within a certain text passage that answers the given question. Typically, these systems are not required to actually understand the text on a deeper level that allows for more complex reasoning on the information contained. We introduce a new dataset called BiQuAD that requires deeper comprehension in order to answer questions in both extractive and deductive fashion. The dataset consists of 4,190 closed-domain texts and a total of 99,149 question-answer pairs. The texts are synthetically generated soccer match reports that verbalize the main events of each match. All texts are accompanied by a structured Datalog program that represents a (logical) model of its information. We show that state-of-the-art QA models do not perform well on the challenging long form contexts and reasoning requirements posed by the dataset. In particular, transformer-based state-of-the-art models achieve F1-scores of only 39.0. We demonstrate how these synthetic datasets align structured knowledge with natural text and aid model introspection when approaching complex text understanding.",2021
candido-etal-2009-supporting,https://aclanthology.org/W09-2105,1,,,,social_equality,education,,"Supporting the Adaptation of Texts for Poor Literacy Readers: a Text Simplification Editor for Brazilian Portuguese. In this paper we investigate the task of text simplification for Brazilian Portuguese. Our purpose is threefold: to introduce a simplification tool for such language and its underlying development methodology, to present an on-line authoring system of simplified text based on the previous tool, and finally to discuss the potentialities of such technology for education. The resources and tools we present are new for Portuguese and innovative in many aspects with respect to previous initiatives for other languages.",Supporting the Adaptation of Texts for Poor Literacy Readers: a Text Simplification Editor for {B}razilian {P}ortuguese,"In this paper we investigate the task of text simplification for Brazilian Portuguese. Our purpose is threefold: to introduce a simplification tool for such language and its underlying development methodology, to present an on-line authoring system of simplified text based on the previous tool, and finally to discuss the potentialities of such technology for education. The resources and tools we present are new for Portuguese and innovative in many aspects with respect to previous initiatives for other languages.",Supporting the Adaptation of Texts for Poor Literacy Readers: a Text Simplification Editor for Brazilian Portuguese,"In this paper we investigate the task of text simplification for Brazilian Portuguese. Our purpose is threefold: to introduce a simplification tool for such language and its underlying development methodology, to present an on-line authoring system of simplified text based on the previous tool, and finally to discuss the potentialities of such technology for education. The resources and tools we present are new for Portuguese and innovative in many aspects with respect to previous initiatives for other languages.",We thank the Brazilian Science Foundation FAPESP and Microsoft Research for financial support.,"Supporting the Adaptation of Texts for Poor Literacy Readers: a Text Simplification Editor for Brazilian Portuguese. In this paper we investigate the task of text simplification for Brazilian Portuguese. Our purpose is threefold: to introduce a simplification tool for such language and its underlying development methodology, to present an on-line authoring system of simplified text based on the previous tool, and finally to discuss the potentialities of such technology for education. The resources and tools we present are new for Portuguese and innovative in many aspects with respect to previous initiatives for other languages.",2009
obamuyide-vlachos-2019-model,https://aclanthology.org/P19-1589,0,,,,,,,"Model-Agnostic Meta-Learning for Relation Classification with Limited Supervision. In this paper we frame the task of supervised relation classification as an instance of metalearning. We propose a model-agnostic metalearning protocol for training relation classifiers to achieve enhanced predictive performance in limited supervision settings. During training, we aim to not only learn good parameters for classifying relations with sufficient supervision, but also learn model parameters that can be fine-tuned to enhance predictive performance for relations with limited supervision. In experiments conducted on two relation classification datasets, we demonstrate that the proposed meta-learning approach improves the predictive performance of two state-of-the-art supervised relation classification models.",Model-Agnostic Meta-Learning for Relation Classification with Limited Supervision,"In this paper we frame the task of supervised relation classification as an instance of metalearning. We propose a model-agnostic metalearning protocol for training relation classifiers to achieve enhanced predictive performance in limited supervision settings. During training, we aim to not only learn good parameters for classifying relations with sufficient supervision, but also learn model parameters that can be fine-tuned to enhance predictive performance for relations with limited supervision. In experiments conducted on two relation classification datasets, we demonstrate that the proposed meta-learning approach improves the predictive performance of two state-of-the-art supervised relation classification models.",Model-Agnostic Meta-Learning for Relation Classification with Limited Supervision,"In this paper we frame the task of supervised relation classification as an instance of metalearning. We propose a model-agnostic metalearning protocol for training relation classifiers to achieve enhanced predictive performance in limited supervision settings. During training, we aim to not only learn good parameters for classifying relations with sufficient supervision, but also learn model parameters that can be fine-tuned to enhance predictive performance for relations with limited supervision. In experiments conducted on two relation classification datasets, we demonstrate that the proposed meta-learning approach improves the predictive performance of two state-of-the-art supervised relation classification models.",The authors acknowledge support from the EU H2020 SUMMA project (grant agreement number 688139). We are grateful to Yuhao Zhang for sharing his data with us.,"Model-Agnostic Meta-Learning for Relation Classification with Limited Supervision. In this paper we frame the task of supervised relation classification as an instance of metalearning. We propose a model-agnostic metalearning protocol for training relation classifiers to achieve enhanced predictive performance in limited supervision settings. During training, we aim to not only learn good parameters for classifying relations with sufficient supervision, but also learn model parameters that can be fine-tuned to enhance predictive performance for relations with limited supervision. In experiments conducted on two relation classification datasets, we demonstrate that the proposed meta-learning approach improves the predictive performance of two state-of-the-art supervised relation classification models.",2019
lehman-etal-2019-inferring,https://aclanthology.org/N19-1371,1,,,,health,,,"Inferring Which Medical Treatments Work from Reports of Clinical Trials. How do we know if a particular medical treatment actually works? Ideally one would consult all available evidence from relevant clinical trials. Unfortunately, such results are primarily disseminated in natural language scientific articles, imposing substantial burden on those trying to make sense of them. In this paper, we present a new task and corpus for making this unstructured evidence actionable. The task entails inferring reported findings from a full-text article describing a randomized controlled trial (RCT) with respect to a given intervention, comparator, and outcome of interest, e.g., inferring if an article provides evidence supporting the use of aspirin to reduce risk of stroke, as compared to placebo. We present a new corpus for this task comprising 10,000+ prompts coupled with fulltext articles describing RCTs. Results using a suite of models-ranging from heuristic (rule-based) approaches to attentive neural architectures-demonstrate the difficulty of the task, which we believe largely owes to the lengthy, technical input texts. To facilitate further work on this important, challenging problem we make the corpus, documentation, a website and leaderboard, and code for baselines and evaluation available at http:",Inferring Which Medical Treatments Work from Reports of Clinical Trials,"How do we know if a particular medical treatment actually works? Ideally one would consult all available evidence from relevant clinical trials. Unfortunately, such results are primarily disseminated in natural language scientific articles, imposing substantial burden on those trying to make sense of them. In this paper, we present a new task and corpus for making this unstructured evidence actionable. The task entails inferring reported findings from a full-text article describing a randomized controlled trial (RCT) with respect to a given intervention, comparator, and outcome of interest, e.g., inferring if an article provides evidence supporting the use of aspirin to reduce risk of stroke, as compared to placebo. We present a new corpus for this task comprising 10,000+ prompts coupled with fulltext articles describing RCTs. Results using a suite of models-ranging from heuristic (rule-based) approaches to attentive neural architectures-demonstrate the difficulty of the task, which we believe largely owes to the lengthy, technical input texts. To facilitate further work on this important, challenging problem we make the corpus, documentation, a website and leaderboard, and code for baselines and evaluation available at http:",Inferring Which Medical Treatments Work from Reports of Clinical Trials,"How do we know if a particular medical treatment actually works? Ideally one would consult all available evidence from relevant clinical trials. Unfortunately, such results are primarily disseminated in natural language scientific articles, imposing substantial burden on those trying to make sense of them. In this paper, we present a new task and corpus for making this unstructured evidence actionable. The task entails inferring reported findings from a full-text article describing a randomized controlled trial (RCT) with respect to a given intervention, comparator, and outcome of interest, e.g., inferring if an article provides evidence supporting the use of aspirin to reduce risk of stroke, as compared to placebo. 
We present a new corpus for this task comprising 10,000+ prompts coupled with fulltext articles describing RCTs. Results using a suite of models-ranging from heuristic (rule-based) approaches to attentive neural architectures-demonstrate the difficulty of the task, which we believe largely owes to the lengthy, technical input texts. To facilitate further work on this important, challenging problem we make the corpus, documentation, a website and leaderboard, and code for baselines and evaluation available at http:",This work was supported by NSF CAREER Award 1750978.We also acknowledge ITS at Northeastern for providing high performance computing resources that have supported this research.,"Inferring Which Medical Treatments Work from Reports of Clinical Trials. How do we know if a particular medical treatment actually works? Ideally one would consult all available evidence from relevant clinical trials. Unfortunately, such results are primarily disseminated in natural language scientific articles, imposing substantial burden on those trying to make sense of them. In this paper, we present a new task and corpus for making this unstructured evidence actionable. The task entails inferring reported findings from a full-text article describing a randomized controlled trial (RCT) with respect to a given intervention, comparator, and outcome of interest, e.g., inferring if an article provides evidence supporting the use of aspirin to reduce risk of stroke, as compared to placebo. We present a new corpus for this task comprising 10,000+ prompts coupled with fulltext articles describing RCTs. Results using a suite of models-ranging from heuristic (rule-based) approaches to attentive neural architectures-demonstrate the difficulty of the task, which we believe largely owes to the lengthy, technical input texts. To facilitate further work on this important, challenging problem we make the corpus, documentation, a website and leaderboard, and code for baselines and evaluation available at http:",2019
knorz-1982-recognition,https://aclanthology.org/C82-1026,0,,,,,,,"Recognition of Abstract Objects - A Decision Theory Approach Within Natural Language Processing. The DAISY/ALIBABA-system developed within the WAI-project represents both a specific solution to the automatic indexing problem and a general framework for problems in the field of natural language processing, characterized by fuzziness and uncertainty. The WAI approach to the indexing problem has already been published [3], [5]. This paper however presents the underlying paradigm of recognizing abstract objects. The basic concepts are described, including the decision theory approach used for recognition.",Recognition of Abstract Objects - A Decision Theory Approach Within Natural Language Processing,"The DAISY/ALIBABA-system developed within the WAI-project represents both a specific solution to the automatic indexing problem and a general framework for problems in the field of natural language processing, characterized by fuzziness and uncertainty. The WAI approach to the indexing problem has already been published [3], [5]. This paper however presents the underlying paradigm of recognizing abstract objects. The basic concepts are described, including the decision theory approach used for recognition.",Recognition of Abstract Objects - A Decision Theory Approach Within Natural Language Processing,"The DAISY/ALIBABA-system developed within the WAI-project represents both a specific solution to the automatic indexing problem and a general framework for problems in the field of natural language processing, characterized by fuzziness and uncertainty. The WAI approach to the indexing problem has already been published [3], [5]. This paper however presents the underlying paradigm of recognizing abstract objects. The basic concepts are described, including the decision theory approach used for recognition.",Abstracts (FSTA 71/72) containing about 33.000 documents were used as a basis for dictionary construction.,"Recognition of Abstract Objects - A Decision Theory Approach Within Natural Language Processing. The DAISY/ALIBABA-system developed within the WAI-project represents both a specific solution to the automatic indexing problem and a general framework for problems in the field of natural language processing, characterized by fuzziness and uncertainty. The WAI approach to the indexing problem has already been published [3], [5]. This paper however presents the underlying paradigm of recognizing abstract objects. The basic concepts are described, including the decision theory approach used for recognition.",1982
becker-1975-phrasal,https://aclanthology.org/T75-2013,0,,,,,,,"The Phrasal Lexicon. to understand the workings of these systems without vainly pretending that they can be reduced to pristine-pure mathematical formulations.
Let's Face Facts",The Phrasal Lexicon,"to understand the workings of these systems without vainly pretending that they can be reduced to pristine-pure mathematical formulations.
Let's Face Facts",The Phrasal Lexicon,"to understand the workings of these systems without vainly pretending that they can be reduced to pristine-pure mathematical formulations.
Let's Face Facts",,"The Phrasal Lexicon. to understand the workings of these systems without vainly pretending that they can be reduced to pristine-pure mathematical formulations.
Let's Face Facts",1975
loveys-etal-2017-small,https://aclanthology.org/W17-3110,1,,,,health,,,"Small but Mighty: Affective Micropatterns for Quantifying Mental Health from Social Media Language. Many psychological phenomena occur in small time windows, measured in minutes or hours. However, most computational linguistic techniques look at data on the order of weeks, months, or years. We explore micropatterns in sequences of messages occurring over a short time window for their prevalence and power for quantifying psychological phenomena, specifically, patterns in affect. We examine affective micropatterns in social media posts from users with anxiety, eating disorders, panic attacks, schizophrenia, suicidality, and matched controls.",Small but Mighty: Affective Micropatterns for Quantifying Mental Health from Social Media Language,"Many psychological phenomena occur in small time windows, measured in minutes or hours. However, most computational linguistic techniques look at data on the order of weeks, months, or years. We explore micropatterns in sequences of messages occurring over a short time window for their prevalence and power for quantifying psychological phenomena, specifically, patterns in affect. We examine affective micropatterns in social media posts from users with anxiety, eating disorders, panic attacks, schizophrenia, suicidality, and matched controls.",Small but Mighty: Affective Micropatterns for Quantifying Mental Health from Social Media Language,"Many psychological phenomena occur in small time windows, measured in minutes or hours. However, most computational linguistic techniques look at data on the order of weeks, months, or years. We explore micropatterns in sequences of messages occurring over a short time window for their prevalence and power for quantifying psychological phenomena, specifically, patterns in affect. We examine affective micropatterns in social media posts from users with anxiety, eating disorders, panic attacks, schizophrenia, suicidality, and matched controls.","The authors would like ackowledge the support of the 2016 Jelinek Memorial Workshop on Speech and Language Technology, at Johns Hopkins University, for providing the concerted time to perform this research. The authors would like to especially thank Craig and Annabelle Bryan for the inspiration for this work and the generosity with which they shared their time to mutually explore results. Finally and more importantly, the authors would like to thank the people who donated their data at OurDataHelps.org to support this and other research endeavors at the intersection of data science and mental health.","Small but Mighty: Affective Micropatterns for Quantifying Mental Health from Social Media Language. Many psychological phenomena occur in small time windows, measured in minutes or hours. However, most computational linguistic techniques look at data on the order of weeks, months, or years. We explore micropatterns in sequences of messages occurring over a short time window for their prevalence and power for quantifying psychological phenomena, specifically, patterns in affect. We examine affective micropatterns in social media posts from users with anxiety, eating disorders, panic attacks, schizophrenia, suicidality, and matched controls.",2017
avvaru-vobilisetty-2020-bert,https://aclanthology.org/2020.semeval-1.144,0,,,,,,,"BERT at SemEval-2020 Task 8: Using BERT to Analyse Meme Emotions. Sentiment analysis, being one of the most sought after research problems within Natural Language Processing (NLP) researchers. The range of problems being addressed by sentiment analysis is ever increasing. Till now, most of the research focuses on predicting sentiment, or sentiment categories like sarcasm, humor, offense and motivation on text data. But, there is very limited research that is focusing on predicting or analyzing the sentiment of internet memes. We try to address this problem as part of ""Task 8 of SemEval 2020: Memotion Analysis"" (Sharma et al., 2020). We have participated in all the three tasks of Memotion Analysis. Our system built using state-of-the-art pre-trained Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018) performed better compared to baseline models for the two tasks A and C and performed close to the baseline model for task B. In this paper, we present the data used for training, data cleaning and preparation steps, the fine-tuning process of BERT based model and finally predict the sentiment or sentiment categories. We found that the sequence models like Long Short Term Memory(LSTM) (Hochreiter and Schmidhuber, 1997) and its variants performed below par in predicting the sentiments. We also performed a comparative analysis with other Transformer based models like DistilBERT (Sanh et al., 2019) and XLNet (Yang et al., 2019).",{BERT} at {S}em{E}val-2020 Task 8: Using {BERT} to Analyse Meme Emotions,"Sentiment analysis, being one of the most sought after research problems within Natural Language Processing (NLP) researchers. The range of problems being addressed by sentiment analysis is ever increasing. Till now, most of the research focuses on predicting sentiment, or sentiment categories like sarcasm, humor, offense and motivation on text data. But, there is very limited research that is focusing on predicting or analyzing the sentiment of internet memes. We try to address this problem as part of ""Task 8 of SemEval 2020: Memotion Analysis"" (Sharma et al., 2020). We have participated in all the three tasks of Memotion Analysis. Our system built using state-of-the-art pre-trained Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018) performed better compared to baseline models for the two tasks A and C and performed close to the baseline model for task B. In this paper, we present the data used for training, data cleaning and preparation steps, the fine-tuning process of BERT based model and finally predict the sentiment or sentiment categories. We found that the sequence models like Long Short Term Memory(LSTM) (Hochreiter and Schmidhuber, 1997) and its variants performed below par in predicting the sentiments. We also performed a comparative analysis with other Transformer based models like DistilBERT (Sanh et al., 2019) and XLNet (Yang et al., 2019).",BERT at SemEval-2020 Task 8: Using BERT to Analyse Meme Emotions,"Sentiment analysis, being one of the most sought after research problems within Natural Language Processing (NLP) researchers. The range of problems being addressed by sentiment analysis is ever increasing. Till now, most of the research focuses on predicting sentiment, or sentiment categories like sarcasm, humor, offense and motivation on text data. 
But, there is very limited research that is focusing on predicting or analyzing the sentiment of internet memes. We try to address this problem as part of ""Task 8 of SemEval 2020: Memotion Analysis"" (Sharma et al., 2020). We have participated in all the three tasks of Memotion Analysis. Our system built using state-of-the-art pre-trained Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018) performed better compared to baseline models for the two tasks A and C and performed close to the baseline model for task B. In this paper, we present the data used for training, data cleaning and preparation steps, the fine-tuning process of BERT based model and finally predict the sentiment or sentiment categories. We found that the sequence models like Long Short Term Memory(LSTM) (Hochreiter and Schmidhuber, 1997) and its variants performed below par in predicting the sentiments. We also performed a comparative analysis with other Transformer based models like DistilBERT (Sanh et al., 2019) and XLNet (Yang et al., 2019).",,"BERT at SemEval-2020 Task 8: Using BERT to Analyse Meme Emotions. Sentiment analysis, being one of the most sought after research problems within Natural Language Processing (NLP) researchers. The range of problems being addressed by sentiment analysis is ever increasing. Till now, most of the research focuses on predicting sentiment, or sentiment categories like sarcasm, humor, offense and motivation on text data. But, there is very limited research that is focusing on predicting or analyzing the sentiment of internet memes. We try to address this problem as part of ""Task 8 of SemEval 2020: Memotion Analysis"" (Sharma et al., 2020). We have participated in all the three tasks of Memotion Analysis. Our system built using state-of-the-art pre-trained Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018) performed better compared to baseline models for the two tasks A and C and performed close to the baseline model for task B. In this paper, we present the data used for training, data cleaning and preparation steps, the fine-tuning process of BERT based model and finally predict the sentiment or sentiment categories. We found that the sequence models like Long Short Term Memory(LSTM) (Hochreiter and Schmidhuber, 1997) and its variants performed below par in predicting the sentiments. We also performed a comparative analysis with other Transformer based models like DistilBERT (Sanh et al., 2019) and XLNet (Yang et al., 2019).",2020
adam-etal-2017-zikahack,https://aclanthology.org/W17-5806,1,,,,health,,,"ZikaHack 2016: A digital disease detection competition. Effective response to infectious diseases outbreaks relies on the rapid and early detection of those outbreaks. Invalidated, yet timely and openly available digital information can be used for the early detection of outbreaks. Public health surveillance authorities can exploit these early warnings to plan and coordinate rapid surveillance and emergency response programs. In 2016, a digital disease detection competition named ZikaHack was launched. The objective of the competition was for multidisciplinary teams to design, develop and demonstrate innovative digital disease detection solutions to retrospectively detect the 2015-16 Brazilian Zika virus outbreak earlier than traditional surveillance methods. In this paper, an overview of the Zik-aHack competition is provided. The challenges and lessons learned in organizing this competition are also discussed for use by other researchers interested in organizing similar competitions.",{Z}ika{H}ack 2016: A digital disease detection competition,"Effective response to infectious diseases outbreaks relies on the rapid and early detection of those outbreaks. Invalidated, yet timely and openly available digital information can be used for the early detection of outbreaks. Public health surveillance authorities can exploit these early warnings to plan and coordinate rapid surveillance and emergency response programs. In 2016, a digital disease detection competition named ZikaHack was launched. The objective of the competition was for multidisciplinary teams to design, develop and demonstrate innovative digital disease detection solutions to retrospectively detect the 2015-16 Brazilian Zika virus outbreak earlier than traditional surveillance methods. In this paper, an overview of the Zik-aHack competition is provided. The challenges and lessons learned in organizing this competition are also discussed for use by other researchers interested in organizing similar competitions.",ZikaHack 2016: A digital disease detection competition,"Effective response to infectious diseases outbreaks relies on the rapid and early detection of those outbreaks. Invalidated, yet timely and openly available digital information can be used for the early detection of outbreaks. Public health surveillance authorities can exploit these early warnings to plan and coordinate rapid surveillance and emergency response programs. In 2016, a digital disease detection competition named ZikaHack was launched. The objective of the competition was for multidisciplinary teams to design, develop and demonstrate innovative digital disease detection solutions to retrospectively detect the 2015-16 Brazilian Zika virus outbreak earlier than traditional surveillance methods. In this paper, an overview of the Zik-aHack competition is provided. The challenges and lessons learned in organizing this competition are also discussed for use by other researchers interested in organizing similar competitions.",Funding for this competition was provided by the National health and medical research council's,"ZikaHack 2016: A digital disease detection competition. Effective response to infectious diseases outbreaks relies on the rapid and early detection of those outbreaks. Invalidated, yet timely and openly available digital information can be used for the early detection of outbreaks. 
Public health surveillance authorities can exploit these early warnings to plan and coordinate rapid surveillance and emergency response programs. In 2016, a digital disease detection competition named ZikaHack was launched. The objective of the competition was for multidisciplinary teams to design, develop and demonstrate innovative digital disease detection solutions to retrospectively detect the 2015-16 Brazilian Zika virus outbreak earlier than traditional surveillance methods. In this paper, an overview of the Zik-aHack competition is provided. The challenges and lessons learned in organizing this competition are also discussed for use by other researchers interested in organizing similar competitions.",2017
martinez-etal-2002-syntactic,https://aclanthology.org/C02-1112,0,,,,,,,"Syntactic Features for High Precision Word Sense Disambiguation. This paper explores the contribution of a broad range of syntactic features to WSD: grammatical relations coded as the presence of adjuncts/arguments in isolation or as subcategorization frames, and instantiated grammatical relations between words. We have tested the performance of syntactic features using two different ML algorithms (Decision Lists and AdaBoost) on the Senseval-2 data. Adding syntactic features to a basic set of traditional features improves performance, especially for AdaBoost. In addition, several methods to build arbitrarily high accuracy WSD systems are also tried, showing that syntactic features allow for a precision of 86% and a coverage of 26% or 95% precision and 8% coverage.",Syntactic Features for High Precision Word Sense Disambiguation,"This paper explores the contribution of a broad range of syntactic features to WSD: grammatical relations coded as the presence of adjuncts/arguments in isolation or as subcategorization frames, and instantiated grammatical relations between words. We have tested the performance of syntactic features using two different ML algorithms (Decision Lists and AdaBoost) on the Senseval-2 data. Adding syntactic features to a basic set of traditional features improves performance, especially for AdaBoost. In addition, several methods to build arbitrarily high accuracy WSD systems are also tried, showing that syntactic features allow for a precision of 86% and a coverage of 26% or 95% precision and 8% coverage.",Syntactic Features for High Precision Word Sense Disambiguation,"This paper explores the contribution of a broad range of syntactic features to WSD: grammatical relations coded as the presence of adjuncts/arguments in isolation or as subcategorization frames, and instantiated grammatical relations between words. We have tested the performance of syntactic features using two different ML algorithms (Decision Lists and AdaBoost) on the Senseval-2 data. Adding syntactic features to a basic set of traditional features improves performance, especially for AdaBoost. In addition, several methods to build arbitrarily high accuracy WSD systems are also tried, showing that syntactic features allow for a precision of 86% and a coverage of 26% or 95% precision and 8% coverage.","This research has been partially funded by McyT (Hermes project TIC-2000-0335-C03-03). David Martinez was funded by the Basque Government, grant AE-BFI:01.245).","Syntactic Features for High Precision Word Sense Disambiguation. This paper explores the contribution of a broad range of syntactic features to WSD: grammatical relations coded as the presence of adjuncts/arguments in isolation or as subcategorization frames, and instantiated grammatical relations between words. We have tested the performance of syntactic features using two different ML algorithms (Decision Lists and AdaBoost) on the Senseval-2 data. Adding syntactic features to a basic set of traditional features improves performance, especially for AdaBoost. In addition, several methods to build arbitrarily high accuracy WSD systems are also tried, showing that syntactic features allow for a precision of 86% and a coverage of 26% or 95% precision and 8% coverage.",2002
martinez-alonso-etal-2013-annotation,https://aclanthology.org/P13-2127,0,,,,,,,"Annotation of regular polysemy and underspecification. We present the result of an annotation task on regular polysemy for a series of semantic classes or dot types in English, Danish and Spanish. This article describes the annotation process, the results in terms of inter-encoder agreement, and the sense distributions obtained with two methods: majority voting with a theory-compliant backoff strategy, and MACE, an unsupervised system to choose the most likely sense from all the annotations.",Annotation of regular polysemy and underspecification,"We present the result of an annotation task on regular polysemy for a series of semantic classes or dot types in English, Danish and Spanish. This article describes the annotation process, the results in terms of inter-encoder agreement, and the sense distributions obtained with two methods: majority voting with a theory-compliant backoff strategy, and MACE, an unsupervised system to choose the most likely sense from all the annotations.",Annotation of regular polysemy and underspecification,"We present the result of an annotation task on regular polysemy for a series of semantic classes or dot types in English, Danish and Spanish. This article describes the annotation process, the results in terms of inter-encoder agreement, and the sense distributions obtained with two methods: majority voting with a theory-compliant backoff strategy, and MACE, an unsupervised system to choose the most likely sense from all the annotations.",The research leading to these results has been funded by the European Commission's 7th Framework Program under grant agreement 238405 (CLARA).,"Annotation of regular polysemy and underspecification. We present the result of an annotation task on regular polysemy for a series of semantic classes or dot types in English, Danish and Spanish. This article describes the annotation process, the results in terms of inter-encoder agreement, and the sense distributions obtained with two methods: majority voting with a theory-compliant backoff strategy, and MACE, an unsupervised system to choose the most likely sense from all the annotations.",2013
agrawal-carpuat-2020-generating,https://aclanthology.org/2020.ngt-1.21,1,,,,education,,,"Generating Diverse Translations via Weighted Fine-tuning and Hypotheses Filtering for the Duolingo STAPLE Task. This paper describes the University of Maryland's submission to the Duolingo Shared Task on Simultaneous Translation And Paraphrase for Language Education (STAPLE). Unlike the standard machine translation task, STAPLE requires generating a set of outputs for a given input sequence, aiming to cover the space of translations produced by language learners. We adapt neural machine translation models to this requirement by (a) generating n-best translation hypotheses from a model fine-tuned on learner translations, oversampled to reflect the distribution of learner responses, and (b) filtering hypotheses using a feature-rich binary classifier that directly optimizes a close approximation of the official evaluation metric. Combination of systems that use these two strategies achieves F1 scores of 53.9% and 52.5% on Vietnamese and Portuguese, respectively ranking 2 nd and 4 th on the leaderboard.",Generating Diverse Translations via Weighted Fine-tuning and Hypotheses Filtering for the {D}uolingo {STAPLE} Task,"This paper describes the University of Maryland's submission to the Duolingo Shared Task on Simultaneous Translation And Paraphrase for Language Education (STAPLE). Unlike the standard machine translation task, STAPLE requires generating a set of outputs for a given input sequence, aiming to cover the space of translations produced by language learners. We adapt neural machine translation models to this requirement by (a) generating n-best translation hypotheses from a model fine-tuned on learner translations, oversampled to reflect the distribution of learner responses, and (b) filtering hypotheses using a feature-rich binary classifier that directly optimizes a close approximation of the official evaluation metric. Combination of systems that use these two strategies achieves F1 scores of 53.9% and 52.5% on Vietnamese and Portuguese, respectively ranking 2 nd and 4 th on the leaderboard.",Generating Diverse Translations via Weighted Fine-tuning and Hypotheses Filtering for the Duolingo STAPLE Task,"This paper describes the University of Maryland's submission to the Duolingo Shared Task on Simultaneous Translation And Paraphrase for Language Education (STAPLE). Unlike the standard machine translation task, STAPLE requires generating a set of outputs for a given input sequence, aiming to cover the space of translations produced by language learners. We adapt neural machine translation models to this requirement by (a) generating n-best translation hypotheses from a model fine-tuned on learner translations, oversampled to reflect the distribution of learner responses, and (b) filtering hypotheses using a feature-rich binary classifier that directly optimizes a close approximation of the official evaluation metric. Combination of systems that use these two strategies achieves F1 scores of 53.9% and 52.5% on Vietnamese and Portuguese, respectively ranking 2 nd and 4 th on the leaderboard.",,"Generating Diverse Translations via Weighted Fine-tuning and Hypotheses Filtering for the Duolingo STAPLE Task. This paper describes the University of Maryland's submission to the Duolingo Shared Task on Simultaneous Translation And Paraphrase for Language Education (STAPLE). 
Unlike the standard machine translation task, STAPLE requires generating a set of outputs for a given input sequence, aiming to cover the space of translations produced by language learners. We adapt neural machine translation models to this requirement by (a) generating n-best translation hypotheses from a model fine-tuned on learner translations, oversampled to reflect the distribution of learner responses, and (b) filtering hypotheses using a feature-rich binary classifier that directly optimizes a close approximation of the official evaluation metric. Combination of systems that use these two strategies achieves F1 scores of 53.9% and 52.5% on Vietnamese and Portuguese, respectively ranking 2 nd and 4 th on the leaderboard.",2020
yang-etal-2008-chinese,https://aclanthology.org/C08-1130,0,,,,,,,Chinese Term Extraction Using Minimal Resources. This paper presents Chinese term extraction using minimal resources. A term candidate extraction algorithm is proposed to i tures of the 1 Ter st fun domain. Term,{C}hinese Term Extraction Using Minimal Resources,This paper presents Chinese term extraction using minimal resources. A term candidate extraction algorithm is proposed to i tures of the 1 Ter st fun domain. Term,Chinese Term Extraction Using Minimal Resources,This paper presents Chinese term extraction using minimal resources. A term candidate extraction algorithm is proposed to i tures of the 1 Ter st fun domain. Term,This work was done while the first author was working at the Hong Kong Polytechnic University supported by CERG Grant B-Q941 and Central Research Grant: G-U297.,Chinese Term Extraction Using Minimal Resources. This paper presents Chinese term extraction using minimal resources. A term candidate extraction algorithm is proposed to i tures of the 1 Ter st fun domain. Term,2008
goldberg-etal-2013-efficient,https://aclanthology.org/P13-2111,0,,,,,,,"Efficient Implementation of Beam-Search Incremental Parsers. Beam search incremental parsers are accurate, but not as fast as they could be. We demonstrate that, contrary to popular belief, most current implementations of beam parsers in fact run in O(n^2), rather than linear time, because each state transition is actually implemented as an O(n) operation. We present an improved implementation, based on Tree Structured Stack (TSS), in which a transition is performed in O(1), resulting in a real linear-time algorithm, which is verified empirically. We further improve parsing speed by sharing feature-extraction and dot-product across beam items. Practically, our methods combined offer a speedup of ∼2x over strong baselines on Penn Treebank sentences, and are orders of magnitude faster on much longer sentences.",Efficient Implementation of Beam-Search Incremental Parsers,"Beam search incremental parsers are accurate, but not as fast as they could be. We demonstrate that, contrary to popular belief, most current implementations of beam parsers in fact run in O(n^2), rather than linear time, because each state transition is actually implemented as an O(n) operation. We present an improved implementation, based on Tree Structured Stack (TSS), in which a transition is performed in O(1), resulting in a real linear-time algorithm, which is verified empirically. We further improve parsing speed by sharing feature-extraction and dot-product across beam items. Practically, our methods combined offer a speedup of ∼2x over strong baselines on Penn Treebank sentences, and are orders of magnitude faster on much longer sentences.",Efficient Implementation of Beam-Search Incremental Parsers,"Beam search incremental parsers are accurate, but not as fast as they could be. We demonstrate that, contrary to popular belief, most current implementations of beam parsers in fact run in O(n^2), rather than linear time, because each state transition is actually implemented as an O(n) operation. We present an improved implementation, based on Tree Structured Stack (TSS), in which a transition is performed in O(1), resulting in a real linear-time algorithm, which is verified empirically. We further improve parsing speed by sharing feature-extraction and dot-product across beam items. Practically, our methods combined offer a speedup of ∼2x over strong baselines on Penn Treebank sentences, and are orders of magnitude faster on much longer sentences.",,"Efficient Implementation of Beam-Search Incremental Parsers. Beam search incremental parsers are accurate, but not as fast as they could be. We demonstrate that, contrary to popular belief, most current implementations of beam parsers in fact run in O(n^2), rather than linear time, because each state transition is actually implemented as an O(n) operation. We present an improved implementation, based on Tree Structured Stack (TSS), in which a transition is performed in O(1), resulting in a real linear-time algorithm, which is verified empirically. We further improve parsing speed by sharing feature-extraction and dot-product across beam items. Practically, our methods combined offer a speedup of ∼2x over strong baselines on Penn Treebank sentences, and are orders of magnitude faster on much longer sentences.",2013
nissim-etal-2013-cross,https://aclanthology.org/W13-0501,0,,,,,,,"Cross-linguistic annotation of modality: a data-driven hierarchical model. We present an annotation model of modality which is (i) cross-linguistic, relying on a wide, strongly typologically motivated approach, and (ii) hierarchical and layered, accounting for both factuality and speaker's attitude, while modelling these two aspects through separate annotation schemes. Modality is defined through cross-linguistic categories, but the classification of actual linguistic expressions is language-specific. This makes our annotation model a powerful tool for investigating linguistic diversity in the field of modality on the basis of real language data, being thus also useful from the perspective of machine translation systems.",Cross-linguistic annotation of modality: a data-driven hierarchical model,"We present an annotation model of modality which is (i) cross-linguistic, relying on a wide, strongly typologically motivated approach, and (ii) hierarchical and layered, accounting for both factuality and speaker's attitude, while modelling these two aspects through separate annotation schemes. Modality is defined through cross-linguistic categories, but the classification of actual linguistic expressions is language-specific. This makes our annotation model a powerful tool for investigating linguistic diversity in the field of modality on the basis of real language data, being thus also useful from the perspective of machine translation systems.",Cross-linguistic annotation of modality: a data-driven hierarchical model,"We present an annotation model of modality which is (i) cross-linguistic, relying on a wide, strongly typologically motivated approach, and (ii) hierarchical and layered, accounting for both factuality and speaker's attitude, while modelling these two aspects through separate annotation schemes. Modality is defined through cross-linguistic categories, but the classification of actual linguistic expressions is language-specific. This makes our annotation model a powerful tool for investigating linguistic diversity in the field of modality on the basis of real language data, being thus also useful from the perspective of machine translation systems.",,"Cross-linguistic annotation of modality: a data-driven hierarchical model. We present an annotation model of modality which is (i) cross-linguistic, relying on a wide, strongly typologically motivated approach, and (ii) hierarchical and layered, accounting for both factuality and speaker's attitude, while modelling these two aspects through separate annotation schemes. Modality is defined through cross-linguistic categories, but the classification of actual linguistic expressions is language-specific. This makes our annotation model a powerful tool for investigating linguistic diversity in the field of modality on the basis of real language data, being thus also useful from the perspective of machine translation systems.",2013
jauregi-unanue-etal-2020-leveraging,https://aclanthology.org/2020.coling-main.395,0,,,,,,,"Leveraging Discourse Rewards for Document-Level Neural Machine Translation. Document-level machine translation focuses on the translation of entire documents from a source to a target language. It is widely regarded as a challenging task since the translation of the individual sentences in the document needs to retain aspects of the discourse at document level. However, document-level translation models are usually not trained to explicitly ensure discourse quality. Therefore, in this paper we propose a training approach that explicitly optimizes two established discourse metrics, lexical cohesion (LC) and coherence (COH), by using a reinforcement learning objective. Experiments over four different language pairs and three translation domains have shown that our training approach has been able to achieve more cohesive and coherent document translations than other competitive approaches, yet without compromising the faithfulness to the reference translation. In the case of the Zh-En language pair, our method has achieved an improvement of 2.46 percentage points (pp) in LC and 1.17 pp in COH over the runner-up, while at the same time improving 0.63 pp in BLEU score and 0.47 pp in F BERT .",Leveraging Discourse Rewards for Document-Level Neural Machine Translation,"Document-level machine translation focuses on the translation of entire documents from a source to a target language. It is widely regarded as a challenging task since the translation of the individual sentences in the document needs to retain aspects of the discourse at document level. However, document-level translation models are usually not trained to explicitly ensure discourse quality. Therefore, in this paper we propose a training approach that explicitly optimizes two established discourse metrics, lexical cohesion (LC) and coherence (COH), by using a reinforcement learning objective. Experiments over four different language pairs and three translation domains have shown that our training approach has been able to achieve more cohesive and coherent document translations than other competitive approaches, yet without compromising the faithfulness to the reference translation. In the case of the Zh-En language pair, our method has achieved an improvement of 2.46 percentage points (pp) in LC and 1.17 pp in COH over the runner-up, while at the same time improving 0.63 pp in BLEU score and 0.47 pp in F BERT .",Leveraging Discourse Rewards for Document-Level Neural Machine Translation,"Document-level machine translation focuses on the translation of entire documents from a source to a target language. It is widely regarded as a challenging task since the translation of the individual sentences in the document needs to retain aspects of the discourse at document level. However, document-level translation models are usually not trained to explicitly ensure discourse quality. Therefore, in this paper we propose a training approach that explicitly optimizes two established discourse metrics, lexical cohesion (LC) and coherence (COH), by using a reinforcement learning objective. Experiments over four different language pairs and three translation domains have shown that our training approach has been able to achieve more cohesive and coherent document translations than other competitive approaches, yet without compromising the faithfulness to the reference translation. 
In the case of the Zh-En language pair, our method has achieved an improvement of 2.46 percentage points (pp) in LC and 1.17 pp in COH over the runner-up, while at the same time improving 0.63 pp in BLEU score and 0.47 pp in F BERT .",The authors would like to thank the RoZetta Institute (formerly CMCRC) for providing financial support to this research. Warmest thanks also go to Dr. Sameen Maruf for her feedback on an early version of this paper.,"Leveraging Discourse Rewards for Document-Level Neural Machine Translation. Document-level machine translation focuses on the translation of entire documents from a source to a target language. It is widely regarded as a challenging task since the translation of the individual sentences in the document needs to retain aspects of the discourse at document level. However, document-level translation models are usually not trained to explicitly ensure discourse quality. Therefore, in this paper we propose a training approach that explicitly optimizes two established discourse metrics, lexical cohesion (LC) and coherence (COH), by using a reinforcement learning objective. Experiments over four different language pairs and three translation domains have shown that our training approach has been able to achieve more cohesive and coherent document translations than other competitive approaches, yet without compromising the faithfulness to the reference translation. In the case of the Zh-En language pair, our method has achieved an improvement of 2.46 percentage points (pp) in LC and 1.17 pp in COH over the runner-up, while at the same time improving 0.63 pp in BLEU score and 0.47 pp in F BERT .",2020
alonso-etal-2000-redefinition,https://aclanthology.org/W00-2002,0,,,,,,,A redefinition of Embedded Push-Down Automata. A new definition of Embedded Push-Down Automata is provided. We prove this new definition preserves the equivalence with tree adjoining languages and we provide a tabulation framework to execute any automaton in polynomial time with respect to the length of the input string.,A redefinition of Embedded Push-Down Automata,A new definition of Embedded Push-Down Automata is provided. We prove this new definition preserves the equivalence with tree adjoining languages and we provide a tabulation framework to execute any automaton in polynomial time with respect to the length of the input string.,A redefinition of Embedded Push-Down Automata,A new definition of Embedded Push-Down Automata is provided. We prove this new definition preserves the equivalence with tree adjoining languages and we provide a tabulation framework to execute any automaton in polynomial time with respect to the length of the input string.,,A redefinition of Embedded Push-Down Automata. A new definition of Embedded Push-Down Automata is provided. We prove this new definition preserves the equivalence with tree adjoining languages and we provide a tabulation framework to execute any automaton in polynomial time with respect to the length of the input string.,2000
lafourcade-1996-structured,https://aclanthology.org/C96-2199,0,,,,,,,"Structured lexical data: how to make them widely available, useful and reasonably protected? A practical example with a trilingual dictionary. We are studying under which constraints structured lexical data can be made, at the same time, widely available to the general public (freely or not), electronically supported, published and reasonably protected from piracy? A three facet approach-with dictionary tools, web servers and e-mail servers-seems to be effective. We illustrate our views with Alex, a generic dictionary tool, which is used with a French-English-Malay dictionary. The very distinction between output, logical and coding formats is made. Storage is based on the latter and output formats are dynamically generated on the fly at request times-making the tool usable in many configurations. Keeping the data structured is necessary to make them usable also by automated processes and to allow dynamic filtering.","Structured lexical data: how to make them widely available, useful and reasonably protected? A practical example with a trilingual dictionary","We are studying under which constraints structured lexical data can be made, at the same time, widely available to the general public (freely or not), electronically supported, published and reasonably protected from piracy? A three facet approach-with dictionary tools, web servers and e-mail servers-seems to be effective. We illustrate our views with Alex, a generic dictionary tool, which is used with a French-English-Malay dictionary. The very distinction between output, logical and coding formats is made. Storage is based on the latter and output formats are dynamically generated on the fly at request times-making the tool usable in many configurations. Keeping the data structured is necessary to make them usable also by automated processes and to allow dynamic filtering.","Structured lexical data: how to make them widely available, useful and reasonably protected? A practical example with a trilingual dictionary","We are studying under which constraints structured lexical data can be made, at the same time, widely available to the general public (freely or not), electronically supported, published and reasonably protected from piracy? A three facet approach-with dictionary tools, web servers and e-mail servers-seems to be effective. We illustrate our views with Alex, a generic dictionary tool, which is used with a French-English-Malay dictionary. The very distinction between output, logical and coding formats is made. Storage is based on the latter and output formats are dynamically generated on the fly at request times-making the tool usable in many configurations. Keeping the data structured is necessary to make them usable also by automated processes and to allow dynamic filtering.","My gratefulness goes to the staff of the UTMK and USM, the Dewan Bahasa dan Pustaka and the French Embassy at Kuala Lumpur. I do not forget the staff of the GETA-CLIPS-IMAG laboratory for supporting this project and the reviewers of this paper, namely H. Blanchon, Ch. Boitet, J. Gaschler and G. Sdrasset. Of course, all errors remain mine.","Structured lexical data: how to make them widely available, useful and reasonably protected? A practical example with a trilingual dictionary. We are studying under which constraints structured lexical data can be made, at the same time, widely available to the general public (freely or not), electronically supported, published and reasonably protected from piracy? 
A three facet approach-with dictionary tools, web servers and e-mail servers-seems to be effective. We illustrate our views with Alex, a generic dictionary tool, which is used with a French-English-Malay dictionary. The very distinction between output, logical and coding formats is made. Storage is based on the latter and output formats are dynamically generated on the fly at request times-making the tool usable in many configurations. Keeping the data structured is necessary to make them usable also by automated processes and to allow dynamic filtering.",1996
grenager-etal-2005-unsupervised,https://aclanthology.org/P05-1046,0,,,,,,,"Unsupervised Learning of Field Segmentation Models for Information Extraction. The applicability of many current information extraction techniques is severely limited by the need for supervised training data. We demonstrate that for certain field structured extraction tasks, such as classified advertisements and bibliographic citations, small amounts of prior knowledge can be used to learn effective models in a primarily unsupervised fashion. Although hidden Markov models (HMMs) provide a suitable generative model for field structured text, general unsupervised HMM learning fails to learn useful structure in either of our domains. However, one can dramatically improve the quality of the learned structure by exploiting simple prior knowledge of the desired solutions. In both domains, we found that unsupervised methods can attain accuracies with 400 unlabeled examples comparable to those attained by supervised methods on 50 labeled examples, and that semi-supervised methods can make good use of small amounts of labeled data.",Unsupervised Learning of Field Segmentation Models for Information Extraction,"The applicability of many current information extraction techniques is severely limited by the need for supervised training data. We demonstrate that for certain field structured extraction tasks, such as classified advertisements and bibliographic citations, small amounts of prior knowledge can be used to learn effective models in a primarily unsupervised fashion. Although hidden Markov models (HMMs) provide a suitable generative model for field structured text, general unsupervised HMM learning fails to learn useful structure in either of our domains. However, one can dramatically improve the quality of the learned structure by exploiting simple prior knowledge of the desired solutions. In both domains, we found that unsupervised methods can attain accuracies with 400 unlabeled examples comparable to those attained by supervised methods on 50 labeled examples, and that semi-supervised methods can make good use of small amounts of labeled data.",Unsupervised Learning of Field Segmentation Models for Information Extraction,"The applicability of many current information extraction techniques is severely limited by the need for supervised training data. We demonstrate that for certain field structured extraction tasks, such as classified advertisements and bibliographic citations, small amounts of prior knowledge can be used to learn effective models in a primarily unsupervised fashion. Although hidden Markov models (HMMs) provide a suitable generative model for field structured text, general unsupervised HMM learning fails to learn useful structure in either of our domains. However, one can dramatically improve the quality of the learned structure by exploiting simple prior knowledge of the desired solutions. In both domains, we found that unsupervised methods can attain accuracies with 400 unlabeled examples comparable to those attained by supervised methods on 50 labeled examples, and that semi-supervised methods can make good use of small amounts of labeled data.",We would like to thank the reviewers for their consideration and insightful comments.,"Unsupervised Learning of Field Segmentation Models for Information Extraction. The applicability of many current information extraction techniques is severely limited by the need for supervised training data. 
We demonstrate that for certain field structured extraction tasks, such as classified advertisements and bibliographic citations, small amounts of prior knowledge can be used to learn effective models in a primarily unsupervised fashion. Although hidden Markov models (HMMs) provide a suitable generative model for field structured text, general unsupervised HMM learning fails to learn useful structure in either of our domains. However, one can dramatically improve the quality of the learned structure by exploiting simple prior knowledge of the desired solutions. In both domains, we found that unsupervised methods can attain accuracies with 400 unlabeled examples comparable to those attained by supervised methods on 50 labeled examples, and that semi-supervised methods can make good use of small amounts of labeled data.",2005
mencarini-2018-potential,https://aclanthology.org/W18-1109,1,,,,peace_justice_and_strong_institutions,,,"The Potential of the Computational Linguistic Analysis of Social Media for Population Studies. The paper provides an outline of the scope for synergy between computational linguistic analysis and population studies. It first reviews where population studies stand in terms of using social media data. Demographers are entering the realm of big data in force. But, this paper argues, population studies have much to gain from computational linguistic analysis, especially in terms of explaining the drivers behind population processes. The paper gives two examples of how the method can be applied, and concludes with a fundamental caveat. Yes, computational linguistic analysis provides a possible key for integrating micro theory into any demographic analysis of social media data. But results may be of little value in as much as knowledge about fundamental sample characteristics are unknown.",The Potential of the Computational Linguistic Analysis of Social Media for Population Studies,"The paper provides an outline of the scope for synergy between computational linguistic analysis and population studies. It first reviews where population studies stand in terms of using social media data. Demographers are entering the realm of big data in force. But, this paper argues, population studies have much to gain from computational linguistic analysis, especially in terms of explaining the drivers behind population processes. The paper gives two examples of how the method can be applied, and concludes with a fundamental caveat. Yes, computational linguistic analysis provides a possible key for integrating micro theory into any demographic analysis of social media data. But results may be of little value in as much as knowledge about fundamental sample characteristics are unknown.",The Potential of the Computational Linguistic Analysis of Social Media for Population Studies,"The paper provides an outline of the scope for synergy between computational linguistic analysis and population studies. It first reviews where population studies stand in terms of using social media data. Demographers are entering the realm of big data in force. But, this paper argues, population studies have much to gain from computational linguistic analysis, especially in terms of explaining the drivers behind population processes. The paper gives two examples of how the method can be applied, and concludes with a fundamental caveat. Yes, computational linguistic analysis provides a possible key for integrating micro theory into any demographic analysis of social media data. But results may be of little value in as much as knowledge about fundamental sample characteristics are unknown.",,"The Potential of the Computational Linguistic Analysis of Social Media for Population Studies. The paper provides an outline of the scope for synergy between computational linguistic analysis and population studies. It first reviews where population studies stand in terms of using social media data. Demographers are entering the realm of big data in force. But, this paper argues, population studies have much to gain from computational linguistic analysis, especially in terms of explaining the drivers behind population processes. The paper gives two examples of how the method can be applied, and concludes with a fundamental caveat. 
Yes, computational linguistic analysis provides a possible key for integrating micro theory into any demographic analysis of social media data. But results may be of little value inasmuch as fundamental sample characteristics remain unknown.",2018
liu-etal-2017-generating,https://aclanthology.org/P17-1010,0,,,,,,,"Generating and Exploiting Large-scale Pseudo Training Data for Zero Pronoun Resolution. Most existing approaches for zero pronoun resolution are heavily relying on annotated data, which is often released by shared task organizers. Therefore, the lack of annotated data becomes a major obstacle in the progress of zero pronoun resolution task. Also, it is expensive to spend manpower on labeling the data for better performance. To alleviate the problem above, in this paper, we propose a simple but novel approach to automatically generate large-scale pseudo training data for zero pronoun resolution. Furthermore, we successfully transfer the cloze-style reading comprehension neural network model into zero pronoun resolution task and propose a two-step training mechanism to overcome the gap between the pseudo training data and the real one. Experimental results show that the proposed approach significantly outperforms the state-of-the-art systems with an absolute improvements of 3.1% F-score on OntoNotes 5.0 data.",Generating and Exploiting Large-scale Pseudo Training Data for Zero Pronoun Resolution,"Most existing approaches for zero pronoun resolution are heavily relying on annotated data, which is often released by shared task organizers. Therefore, the lack of annotated data becomes a major obstacle in the progress of zero pronoun resolution task. Also, it is expensive to spend manpower on labeling the data for better performance. To alleviate the problem above, in this paper, we propose a simple but novel approach to automatically generate large-scale pseudo training data for zero pronoun resolution. Furthermore, we successfully transfer the cloze-style reading comprehension neural network model into zero pronoun resolution task and propose a two-step training mechanism to overcome the gap between the pseudo training data and the real one. Experimental results show that the proposed approach significantly outperforms the state-of-the-art systems with an absolute improvements of 3.1% F-score on OntoNotes 5.0 data.",Generating and Exploiting Large-scale Pseudo Training Data for Zero Pronoun Resolution,"Most existing approaches for zero pronoun resolution are heavily relying on annotated data, which is often released by shared task organizers. Therefore, the lack of annotated data becomes a major obstacle in the progress of zero pronoun resolution task. Also, it is expensive to spend manpower on labeling the data for better performance. To alleviate the problem above, in this paper, we propose a simple but novel approach to automatically generate large-scale pseudo training data for zero pronoun resolution. Furthermore, we successfully transfer the cloze-style reading comprehension neural network model into zero pronoun resolution task and propose a two-step training mechanism to overcome the gap between the pseudo training data and the real one. Experimental results show that the proposed approach significantly outperforms the state-of-the-art systems with an absolute improvements of 3.1% F-score on OntoNotes 5.0 data.","We would like to thank the anonymous reviewers for their thorough reviewing and proposing thoughtful comments to improve our paper. This work was supported by the National 863 Leading Technology Research Project via grant 2015AA015407, Key Projects of National Natural Science Foundation of China via grant 61632011,","Generating and Exploiting Large-scale Pseudo Training Data for Zero Pronoun Resolution. 
Most existing approaches for zero pronoun resolution rely heavily on annotated data, which is often released by shared task organizers. Therefore, the lack of annotated data becomes a major obstacle to progress on the zero pronoun resolution task. Also, it is expensive to spend manpower on labeling the data for better performance. To alleviate the problem above, in this paper, we propose a simple but novel approach to automatically generate large-scale pseudo training data for zero pronoun resolution. Furthermore, we successfully transfer the cloze-style reading comprehension neural network model to the zero pronoun resolution task and propose a two-step training mechanism to overcome the gap between the pseudo training data and the real one. Experimental results show that the proposed approach significantly outperforms the state-of-the-art systems with an absolute improvement of 3.1% in F-score on OntoNotes 5.0 data.",2017
kuhn-etal-2006-segment,https://aclanthology.org/N06-1004,0,,,,,,,"Segment Choice Models: Feature-Rich Models for Global Distortion in Statistical Machine Translation. This paper presents a new approach to distortion (phrase reordering) in phrasebased machine translation (MT). Distortion is modeled as a sequence of choices during translation. The approach yields trainable, probabilistic distortion models that are global: they assign a probability to each possible phrase reordering. These ""segment choice"" models (SCMs) can be trained on ""segment-aligned"" sentence pairs; they can be applied during decoding or rescoring. The approach yields a metric called ""distortion perplexity"" (""disperp"") for comparing SCMs offline on test data, analogous to perplexity for language models. A decision-tree-based SCM is tested on Chinese-to-English translation, and outperforms a baseline distortion penalty approach at the 99% confidence level.",Segment Choice Models: Feature-Rich Models for Global Distortion in Statistical Machine Translation,"This paper presents a new approach to distortion (phrase reordering) in phrasebased machine translation (MT). Distortion is modeled as a sequence of choices during translation. The approach yields trainable, probabilistic distortion models that are global: they assign a probability to each possible phrase reordering. These ""segment choice"" models (SCMs) can be trained on ""segment-aligned"" sentence pairs; they can be applied during decoding or rescoring. The approach yields a metric called ""distortion perplexity"" (""disperp"") for comparing SCMs offline on test data, analogous to perplexity for language models. A decision-tree-based SCM is tested on Chinese-to-English translation, and outperforms a baseline distortion penalty approach at the 99% confidence level.",Segment Choice Models: Feature-Rich Models for Global Distortion in Statistical Machine Translation,"This paper presents a new approach to distortion (phrase reordering) in phrasebased machine translation (MT). Distortion is modeled as a sequence of choices during translation. The approach yields trainable, probabilistic distortion models that are global: they assign a probability to each possible phrase reordering. These ""segment choice"" models (SCMs) can be trained on ""segment-aligned"" sentence pairs; they can be applied during decoding or rescoring. The approach yields a metric called ""distortion perplexity"" (""disperp"") for comparing SCMs offline on test data, analogous to perplexity for language models. A decision-tree-based SCM is tested on Chinese-to-English translation, and outperforms a baseline distortion penalty approach at the 99% confidence level.",,"Segment Choice Models: Feature-Rich Models for Global Distortion in Statistical Machine Translation. This paper presents a new approach to distortion (phrase reordering) in phrasebased machine translation (MT). Distortion is modeled as a sequence of choices during translation. The approach yields trainable, probabilistic distortion models that are global: they assign a probability to each possible phrase reordering. These ""segment choice"" models (SCMs) can be trained on ""segment-aligned"" sentence pairs; they can be applied during decoding or rescoring. The approach yields a metric called ""distortion perplexity"" (""disperp"") for comparing SCMs offline on test data, analogous to perplexity for language models. 
A decision-tree-based SCM is tested on Chinese-to-English translation, and outperforms a baseline distortion penalty approach at the 99% confidence level.",2006
gnehm-clematide-2020-text,https://aclanthology.org/2020.nlpcss-1.10,1,,,,decent_work_and_economy,,,"Text Zoning and Classification for Job Advertisements in German, French and English. We present experiments to structure job ads into text zones and classify them into professions, industries and management functions, thereby facilitating social science analyses on labor marked demand. Our main contribution are empirical findings on the benefits of contextualized embeddings and the potential of multi-task models for this purpose. With contextualized in-domain embeddings in BiLSTM-CRF models, we reach an accuracy of 91% for token-level text zoning and outperform previous approaches. A multi-tasking BERT model performs well for our classification tasks. We further compare transfer approaches for our multilingual data.","Text Zoning and Classification for Job Advertisements in {G}erman, {F}rench and {E}nglish","We present experiments to structure job ads into text zones and classify them into professions, industries and management functions, thereby facilitating social science analyses on labor marked demand. Our main contribution are empirical findings on the benefits of contextualized embeddings and the potential of multi-task models for this purpose. With contextualized in-domain embeddings in BiLSTM-CRF models, we reach an accuracy of 91% for token-level text zoning and outperform previous approaches. A multi-tasking BERT model performs well for our classification tasks. We further compare transfer approaches for our multilingual data.","Text Zoning and Classification for Job Advertisements in German, French and English","We present experiments to structure job ads into text zones and classify them into professions, industries and management functions, thereby facilitating social science analyses on labor marked demand. Our main contribution are empirical findings on the benefits of contextualized embeddings and the potential of multi-task models for this purpose. With contextualized in-domain embeddings in BiLSTM-CRF models, we reach an accuracy of 91% for token-level text zoning and outperform previous approaches. A multi-tasking BERT model performs well for our classification tasks. We further compare transfer approaches for our multilingual data.","We thank Dong Nguyen and the anonymous reviewers for their careful reading of this article and their helpful comments and suggestions, and Helen Buchs for her efforts in post-evaluation. This work is supported by the Swiss National Science Foundation under grant number 407740 187333.","Text Zoning and Classification for Job Advertisements in German, French and English. We present experiments to structure job ads into text zones and classify them into professions, industries and management functions, thereby facilitating social science analyses on labor marked demand. Our main contribution are empirical findings on the benefits of contextualized embeddings and the potential of multi-task models for this purpose. With contextualized in-domain embeddings in BiLSTM-CRF models, we reach an accuracy of 91% for token-level text zoning and outperform previous approaches. A multi-tasking BERT model performs well for our classification tasks. We further compare transfer approaches for our multilingual data.",2020
yu-etal-2018-device,https://aclanthology.org/C18-2028,0,,,,,,,"On-Device Neural Language Model Based Word Prediction. Recent developments in deep learning with application to language modeling have led to success in tasks of text processing, summarizing and machine translation. However, deploying huge language models on mobile devices for on-device keyboards poses computation as a bottleneck due to their puny computation capacities. In this work, we propose an on-device neural language model based word prediction method that optimizes run-time memory and also provides a realtime prediction environment. Our model size is 7.40MB and has average prediction time of 6.47 ms. The proposed model outperforms existing methods for word prediction in terms of keystroke savings and word prediction rate and has been successfully commercialized.",On-Device Neural Language Model Based Word Prediction,"Recent developments in deep learning with application to language modeling have led to success in tasks of text processing, summarizing and machine translation. However, deploying huge language models on mobile devices for on-device keyboards poses computation as a bottleneck due to their puny computation capacities. In this work, we propose an on-device neural language model based word prediction method that optimizes run-time memory and also provides a realtime prediction environment. Our model size is 7.40MB and has average prediction time of 6.47 ms. The proposed model outperforms existing methods for word prediction in terms of keystroke savings and word prediction rate and has been successfully commercialized.",On-Device Neural Language Model Based Word Prediction,"Recent developments in deep learning with application to language modeling have led to success in tasks of text processing, summarizing and machine translation. However, deploying huge language models on mobile devices for on-device keyboards poses computation as a bottleneck due to their puny computation capacities. In this work, we propose an on-device neural language model based word prediction method that optimizes run-time memory and also provides a realtime prediction environment. Our model size is 7.40MB and has average prediction time of 6.47 ms. The proposed model outperforms existing methods for word prediction in terms of keystroke savings and word prediction rate and has been successfully commercialized.",,"On-Device Neural Language Model Based Word Prediction. Recent developments in deep learning with application to language modeling have led to success in tasks of text processing, summarizing and machine translation. However, deploying huge language models on mobile devices for on-device keyboards poses computation as a bottleneck due to their puny computation capacities. In this work, we propose an on-device neural language model based word prediction method that optimizes run-time memory and also provides a realtime prediction environment. Our model size is 7.40MB and has average prediction time of 6.47 ms. The proposed model outperforms existing methods for word prediction in terms of keystroke savings and word prediction rate and has been successfully commercialized.",2018
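The yu-etal-2018-device record concerns an on-device neural language model for keyboard word prediction. The sketch below is not that model; it is a tiny count-based trigram predictor with backoff, included only to make the prediction interface concrete (context in, ranked next-word candidates out). All class and variable names are illustrative.

```python
from collections import Counter, defaultdict

class BackoffPredictor:
    """Tiny next-word predictor: trigram counts with bigram/unigram backoff.
    Stands in for the paper's compressed neural LM only to illustrate the
    keyboard-prediction interface."""

    def __init__(self):
        self.tri = defaultdict(Counter)   # (w1, w2) -> next-word counts
        self.bi = defaultdict(Counter)    # (w2,)    -> next-word counts
        self.uni = Counter()

    def train(self, sentences):
        for s in sentences:
            toks = ["<s>", "<s>"] + s.lower().split()
            for a, b, c in zip(toks, toks[1:], toks[2:]):
                self.tri[(a, b)][c] += 1
                self.bi[(b,)][c] += 1
                self.uni[c] += 1

    def predict(self, context, k=3):
        toks = ("<s> <s> " + context.lower()).split()
        for key in [tuple(toks[-2:]), tuple(toks[-1:])]:
            table = self.tri if len(key) == 2 else self.bi
            if table[key]:
                return [w for w, _ in table[key].most_common(k)]
        return [w for w, _ in self.uni.most_common(k)]

p = BackoffPredictor()
p.train(["see you later today", "see you soon", "talk to you later"])
print(p.predict("see you"))   # e.g. ['later', 'soon']
```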
chen-etal-2013-identifying,https://aclanthology.org/N13-1124,0,,,,,,,"Identifying Intention Posts in Discussion Forums. This paper proposes to study the problem of identifying intention posts in online discussion forums. For example, in a discussion forum, a user wrote ""I plan to buy a camera,"" which indicates a buying intention. This intention can be easily exploited by advertisers. To the best of our knowledge, there is still no reported study of this problem. Our research found that this problem is particularly suited to transfer learning because in different domains, people express the same intention in similar ways. We then propose a new transfer learning method which, unlike a general transfer learning algorithm, exploits several special characteristics of the problem. Experimental results show that the proposed method outperforms several strong baselines, including supervised learning in the target domain and a recent transfer learning method.",Identifying Intention Posts in Discussion Forums,"This paper proposes to study the problem of identifying intention posts in online discussion forums. For example, in a discussion forum, a user wrote ""I plan to buy a camera,"" which indicates a buying intention. This intention can be easily exploited by advertisers. To the best of our knowledge, there is still no reported study of this problem. Our research found that this problem is particularly suited to transfer learning because in different domains, people express the same intention in similar ways. We then propose a new transfer learning method which, unlike a general transfer learning algorithm, exploits several special characteristics of the problem. Experimental results show that the proposed method outperforms several strong baselines, including supervised learning in the target domain and a recent transfer learning method.",Identifying Intention Posts in Discussion Forums,"This paper proposes to study the problem of identifying intention posts in online discussion forums. For example, in a discussion forum, a user wrote ""I plan to buy a camera,"" which indicates a buying intention. This intention can be easily exploited by advertisers. To the best of our knowledge, there is still no reported study of this problem. Our research found that this problem is particularly suited to transfer learning because in different domains, people express the same intention in similar ways. We then propose a new transfer learning method which, unlike a general transfer learning algorithm, exploits several special characteristics of the problem. Experimental results show that the proposed method outperforms several strong baselines, including supervised learning in the target domain and a recent transfer learning method.","This work was supported in part by a grant from National Science Foundation (NSF) under grant no. IIS-1111092, and a grant from HP Labs Innovation Research Program.","Identifying Intention Posts in Discussion Forums. This paper proposes to study the problem of identifying intention posts in online discussion forums. For example, in a discussion forum, a user wrote ""I plan to buy a camera,"" which indicates a buying intention. This intention can be easily exploited by advertisers. To the best of our knowledge, there is still no reported study of this problem. Our research found that this problem is particularly suited to transfer learning because in different domains, people express the same intention in similar ways. 
We then propose a new transfer learning method which, unlike a general transfer learning algorithm, exploits several special characteristics of the problem. Experimental results show that the proposed method outperforms several strong baselines, including supervised learning in the target domain and a recent transfer learning method.",2013
nguyen-verspoor-2018-improved,https://aclanthology.org/K18-2008,0,,,,,,,"An Improved Neural Network Model for Joint POS Tagging and Dependency Parsing. We propose a novel neural network model for joint part-of-speech (POS) tagging and dependency parsing. Our model extends the well-known BIST graph-based dependency parser (Kiperwasser and Goldberg, 2016) by incorporating a BiLSTM-based tagging component to produce automatically predicted POS tags for the parser. On the benchmark English Penn treebank, our model obtains strong UAS and LAS scores at 94.51% and 92.87%, respectively, producing 1.5+% absolute improvements to the BIST graph-based parser, and also obtaining a state-of-the-art POS tagging accuracy at 97.97%. Furthermore, experimental results on parsing 61 ""big"" Universal Dependencies treebanks from raw texts show that our model outperforms the baseline UDPipe (Straka and Straková, 2017) with 0.8% higher average POS tagging score and 3.6% higher average LAS score. In addition, with our model, we also obtain state-of-the-art downstream task scores for biomedical event extraction and opinion analysis applications.",An Improved Neural Network Model for Joint {POS} Tagging and Dependency Parsing,"We propose a novel neural network model for joint part-of-speech (POS) tagging and dependency parsing. Our model extends the well-known BIST graph-based dependency parser (Kiperwasser and Goldberg, 2016) by incorporating a BiLSTM-based tagging component to produce automatically predicted POS tags for the parser. On the benchmark English Penn treebank, our model obtains strong UAS and LAS scores at 94.51% and 92.87%, respectively, producing 1.5+% absolute improvements to the BIST graph-based parser, and also obtaining a state-of-the-art POS tagging accuracy at 97.97%. Furthermore, experimental results on parsing 61 ""big"" Universal Dependencies treebanks from raw texts show that our model outperforms the baseline UDPipe (Straka and Straková, 2017) with 0.8% higher average POS tagging score and 3.6% higher average LAS score. In addition, with our model, we also obtain state-of-the-art downstream task scores for biomedical event extraction and opinion analysis applications.",An Improved Neural Network Model for Joint POS Tagging and Dependency Parsing,"We propose a novel neural network model for joint part-of-speech (POS) tagging and dependency parsing. Our model extends the well-known BIST graph-based dependency parser (Kiperwasser and Goldberg, 2016) by incorporating a BiLSTM-based tagging component to produce automatically predicted POS tags for the parser. On the benchmark English Penn treebank, our model obtains strong UAS and LAS scores at 94.51% and 92.87%, respectively, producing 1.5+% absolute improvements to the BIST graph-based parser, and also obtaining a state-of-the-art POS tagging accuracy at 97.97%. Furthermore, experimental results on parsing 61 ""big"" Universal Dependencies treebanks from raw texts show that our model outperforms the baseline UDPipe (Straka and Straková, 2017) with 0.8% higher average POS tagging score and 3.6% higher average LAS score. In addition, with our model, we also obtain state-of-the-art downstream task scores for biomedical event extraction and opinion analysis applications.",This work was supported by the ARC Discovery Project DP150101550 and ARC Linkage Project LP160101469.,"An Improved Neural Network Model for Joint POS Tagging and Dependency Parsing. 
We propose a novel neural network model for joint part-of-speech (POS) tagging and dependency parsing. Our model extends the well-known BIST graph-based dependency parser (Kiperwasser and Goldberg, 2016) by incorporating a BiLSTM-based tagging component to produce automatically predicted POS tags for the parser. On the benchmark English Penn treebank, our model obtains strong UAS and LAS scores at 94.51% and 92.87%, respectively, producing 1.5+% absolute improvements to the BIST graph-based parser, and also obtaining a state-of-the-art POS tagging accuracy at 97.97%. Furthermore, experimental results on parsing 61 ""big"" Universal Dependencies treebanks from raw texts show that our model outperforms the baseline UDPipe (Straka and Straková, 2017) with 0.8% higher average POS tagging score and 3.6% higher average LAS score. In addition, with our model, we also obtain state-of-the-art downstream task scores for biomedical event extraction and opinion analysis applications.",2018
hsu-2014-frequency,https://aclanthology.org/W14-4714,0,,,,,,,"When Frequency Data Meet Dispersion Data in the Extraction of Multi-word Units from a Corpus: A Study of Trigrams in Chinese. One of the main approaches to extract multi-word units is the frequency threshold approach, but the way this approach considers dispersion data still leaves a lot to be desired. This study adopts Gries's (2008) dispersion measure to extract trigrams from a Chinese corpus, and the results are compared with those of the frequency threshold approach. It is found that the overlap between the two approaches is not very large. This demonstrates the necessity of taking dispersion data more seriously and the dynamic nature of lexical representations. Moreover, the trigrams extracted in the present study can be used in a wide range of language resources in Chinese.",When Frequency Data Meet Dispersion Data in the Extraction of Multi-word Units from a Corpus: A Study of Trigrams in {C}hinese,"One of the main approaches to extract multi-word units is the frequency threshold approach, but the way this approach considers dispersion data still leaves a lot to be desired. This study adopts Gries's (2008) dispersion measure to extract trigrams from a Chinese corpus, and the results are compared with those of the frequency threshold approach. It is found that the overlap between the two approaches is not very large. This demonstrates the necessity of taking dispersion data more seriously and the dynamic nature of lexical representations. Moreover, the trigrams extracted in the present study can be used in a wide range of language resources in Chinese.",When Frequency Data Meet Dispersion Data in the Extraction of Multi-word Units from a Corpus: A Study of Trigrams in Chinese,"One of the main approaches to extract multi-word units is the frequency threshold approach, but the way this approach considers dispersion data still leaves a lot to be desired. This study adopts Gries's (2008) dispersion measure to extract trigrams from a Chinese corpus, and the results are compared with those of the frequency threshold approach. It is found that the overlap between the two approaches is not very large. This demonstrates the necessity of taking dispersion data more seriously and the dynamic nature of lexical representations. Moreover, the trigrams extracted in the present study can be used in a wide range of language resources in Chinese.",,"When Frequency Data Meet Dispersion Data in the Extraction of Multi-word Units from a Corpus: A Study of Trigrams in Chinese. One of the main approaches to extract multi-word units is the frequency threshold approach, but the way this approach considers dispersion data still leaves a lot to be desired. This study adopts Gries's (2008) dispersion measure to extract trigrams from a Chinese corpus, and the results are compared with those of the frequency threshold approach. It is found that the overlap between the two approaches is not very large. This demonstrates the necessity of taking dispersion data more seriously and the dynamic nature of lexical representations. Moreover, the trigrams extracted in the present study can be used in a wide range of language resources in Chinese.",2014
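The hsu-2014-frequency record contrasts a plain frequency threshold with Gries's (2008) dispersion measure (DP) for trigram extraction. A short sketch of DP under its usual definition: compare each corpus part's share of the item's occurrences with that part's share of the corpus tokens and take half the sum of absolute differences (0 means perfectly even dispersion, values near 1 mean the item is concentrated in few parts). The counts below are invented for illustration.

```python
def gries_dp(occurrences_per_part, tokens_per_part):
    """Gries's (2008) DP: 0.5 * sum |v_i - s_i|, where v_i is the share of the
    item's occurrences falling in part i and s_i is part i's share of all
    corpus tokens."""
    total_occ = sum(occurrences_per_part)
    total_tok = sum(tokens_per_part)
    if total_occ == 0:
        raise ValueError("item does not occur in the corpus")
    return 0.5 * sum(
        abs(o / total_occ - t / total_tok)
        for o, t in zip(occurrences_per_part, tokens_per_part)
    )

# A trigram spread evenly over three equal-sized parts vs. one concentrated
# in a single part (toy counts, not the paper's data).
print(gries_dp([10, 10, 10], [1000, 1000, 1000]))  # 0.0   -> well dispersed
print(gries_dp([30, 0, 0],  [1000, 1000, 1000]))   # ~0.67 -> concentrated
```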
vijay-shanker-etal-1987-characterizing,https://aclanthology.org/P87-1015,0,,,,,,,"Characterizing Structural Descriptions Produced by Various Grammatical Formalisms. We consider the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. In considering the relationship between formalisms, we show that it is useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties of their derivation trees. We find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. On the basis of this observation, we describe a class of formalisms which we call Linear Context-Free Rewriting Systems, and show they are recognizable in polynomial time and generate only semilinear languages.",Characterizing Structural Descriptions Produced by Various Grammatical Formalisms,"We consider the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. In considering the relationship between formalisms, we show that it is useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties of their derivation trees. We find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. On the basis of this observation, we describe a class of formalisms which we call Linear Context-Free Rewriting Systems, and show they are recognizable in polynomial time and generate only semilinear languages.",Characterizing Structural Descriptions Produced by Various Grammatical Formalisms,"We consider the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. In considering the relationship between formalisms, we show that it is useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties of their derivation trees. We find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. On the basis of this observation, we describe a class of formalisms which we call Linear Context-Free Rewriting Systems, and show they are recognizable in polynomial time and generate only semilinear languages.",,"Characterizing Structural Descriptions Produced by Various Grammatical Formalisms. We consider the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. In considering the relationship between formalisms, we show that it is useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties of their derivation trees. 
We find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. On the basis of this observation, we describe a class of formalisms which we call Linear Context-Free Rewriting Systems, and show they are recognizable in polynomial time and generate only semilinear languages.",1987
bos-etal-2003-dipper,https://aclanthology.org/W03-2123,0,,,,,,,"DIPPER: Description and Formalisation of an Information-State Update Dialogue System Architecture. The DIPPER architecture is a collection of software agents for prototyping spoken dialogue systems. Implemented on top of the Open Agent Architecture (OAA), it comprises agents for speech input and output, dialogue management, and further supporting agents. We define a formal syntax and semantics for the DIP-PER information state update language. The language is independent of particular programming languages, and incorporates procedural attachments for access to external resources using OAA. Definition: Update. An ordered set of effects e 1 ,. .. , e n are successfully applied to an information state s, resulting an information state s if U(",{DIPPER}: Description and Formalisation of an Information-State Update Dialogue System Architecture,"The DIPPER architecture is a collection of software agents for prototyping spoken dialogue systems. Implemented on top of the Open Agent Architecture (OAA), it comprises agents for speech input and output, dialogue management, and further supporting agents. We define a formal syntax and semantics for the DIP-PER information state update language. The language is independent of particular programming languages, and incorporates procedural attachments for access to external resources using OAA. Definition: Update. An ordered set of effects {e 1 ,. .. , e n } are successfully applied to an information state s, resulting an information state s if U(",DIPPER: Description and Formalisation of an Information-State Update Dialogue System Architecture,"The DIPPER architecture is a collection of software agents for prototyping spoken dialogue systems. Implemented on top of the Open Agent Architecture (OAA), it comprises agents for speech input and output, dialogue management, and further supporting agents. We define a formal syntax and semantics for the DIP-PER information state update language. The language is independent of particular programming languages, and incorporates procedural attachments for access to external resources using OAA. Definition: Update. An ordered set of effects e 1 ,. .. , e n are successfully applied to an information state s, resulting an information state s if U(",Part of this work was supported by the EU Project MagiCster (IST 1999-29078). We thank Nuance for permission to use their software and tools.,"DIPPER: Description and Formalisation of an Information-State Update Dialogue System Architecture. The DIPPER architecture is a collection of software agents for prototyping spoken dialogue systems. Implemented on top of the Open Agent Architecture (OAA), it comprises agents for speech input and output, dialogue management, and further supporting agents. We define a formal syntax and semantics for the DIP-PER information state update language. The language is independent of particular programming languages, and incorporates procedural attachments for access to external resources using OAA. Definition: Update. An ordered set of effects e 1 ,. .. , e n are successfully applied to an information state s, resulting an information state s if U(",2003
fort-etal-2012-modeling,https://aclanthology.org/C12-1055,0,,,,,,,"Modeling the Complexity of Manual Annotation Tasks: a Grid of Analysis. Manual corpus annotation is getting widely used in Natural Language Processing (NLP). While being recognized as a difficult task, no in-depth analysis of its complexity has been performed yet. We provide in this article a grid of analysis of the different complexity dimensions of an annotation task, which helps estimating beforehand the difficulties and cost of annotation campaigns. We observe the applicability of this grid on existing annotation campaigns and detail its application on a real-world example.",Modeling the Complexity of Manual Annotation Tasks: a Grid of Analysis,"Manual corpus annotation is getting widely used in Natural Language Processing (NLP). While being recognized as a difficult task, no in-depth analysis of its complexity has been performed yet. We provide in this article a grid of analysis of the different complexity dimensions of an annotation task, which helps estimating beforehand the difficulties and cost of annotation campaigns. We observe the applicability of this grid on existing annotation campaigns and detail its application on a real-world example.",Modeling the Complexity of Manual Annotation Tasks: a Grid of Analysis,"Manual corpus annotation is getting widely used in Natural Language Processing (NLP). While being recognized as a difficult task, no in-depth analysis of its complexity has been performed yet. We provide in this article a grid of analysis of the different complexity dimensions of an annotation task, which helps estimating beforehand the difficulties and cost of annotation campaigns. We observe the applicability of this grid on existing annotation campaigns and detail its application on a real-world example.","This work was realized as part of the Quaero Programme 12 , funded by OSEO, French State agency for innovation.","Modeling the Complexity of Manual Annotation Tasks: a Grid of Analysis. Manual corpus annotation is getting widely used in Natural Language Processing (NLP). While being recognized as a difficult task, no in-depth analysis of its complexity has been performed yet. We provide in this article a grid of analysis of the different complexity dimensions of an annotation task, which helps estimating beforehand the difficulties and cost of annotation campaigns. We observe the applicability of this grid on existing annotation campaigns and detail its application on a real-world example.",2012
wilson-etal-2009-articles,https://aclanthology.org/J09-3003,0,,,,,,,"Articles: Recognizing Contextual Polarity: An Exploration of Features for Phrase-Level Sentiment Analysis. Many approaches to automatic sentiment analysis begin with a large lexicon of words marked with their prior polarity (also called semantic orientation). However, the contextual polarity of the phrase in which a particular instance of a word appears may be quite different from the word's prior polarity. Positive words are used in phrases expressing negative sentiments, or vice versa. Also, quite often words that are positive or negative out of context are neutral in context, meaning they are not even being used to express a sentiment. The goal of this work is to automatically distinguish between prior and contextual polarity, with a focus on understanding which features are important for this task. Because an important aspect of the problem is identifying when polar terms are being used in neutral contexts, features for distinguishing between neutral and polar instances are evaluated, as well as features for distinguishing between positive and negative contextual polarity. The evaluation includes assessing the performance of features across multiple machine learning algorithms. For all learning algorithms except one, the combination of all features together gives the best performance. Another facet of the evaluation considers how the presence of neutral instances affects the performance of features for distinguishing between positive and negative polarity. These experiments show that the presence of neutral instances greatly degrades the performance of these features, and that perhaps the best way to improve performance across all polarity classes is to improve the system's ability to identify when an instance is neutral.",{A}rticles: Recognizing Contextual Polarity: An Exploration of Features for Phrase-Level Sentiment Analysis,"Many approaches to automatic sentiment analysis begin with a large lexicon of words marked with their prior polarity (also called semantic orientation). However, the contextual polarity of the phrase in which a particular instance of a word appears may be quite different from the word's prior polarity. Positive words are used in phrases expressing negative sentiments, or vice versa. Also, quite often words that are positive or negative out of context are neutral in context, meaning they are not even being used to express a sentiment. The goal of this work is to automatically distinguish between prior and contextual polarity, with a focus on understanding which features are important for this task. Because an important aspect of the problem is identifying when polar terms are being used in neutral contexts, features for distinguishing between neutral and polar instances are evaluated, as well as features for distinguishing between positive and negative contextual polarity. The evaluation includes assessing the performance of features across multiple machine learning algorithms. For all learning algorithms except one, the combination of all features together gives the best performance. Another facet of the evaluation considers how the presence of neutral instances affects the performance of features for distinguishing between positive and negative polarity. 
These experiments show that the presence of neutral instances greatly degrades the performance of these features, and that perhaps the best way to improve performance across all polarity classes is to improve the system's ability to identify when an instance is neutral.",Articles: Recognizing Contextual Polarity: An Exploration of Features for Phrase-Level Sentiment Analysis,"Many approaches to automatic sentiment analysis begin with a large lexicon of words marked with their prior polarity (also called semantic orientation). However, the contextual polarity of the phrase in which a particular instance of a word appears may be quite different from the word's prior polarity. Positive words are used in phrases expressing negative sentiments, or vice versa. Also, quite often words that are positive or negative out of context are neutral in context, meaning they are not even being used to express a sentiment. The goal of this work is to automatically distinguish between prior and contextual polarity, with a focus on understanding which features are important for this task. Because an important aspect of the problem is identifying when polar terms are being used in neutral contexts, features for distinguishing between neutral and polar instances are evaluated, as well as features for distinguishing between positive and negative contextual polarity. The evaluation includes assessing the performance of features across multiple machine learning algorithms. For all learning algorithms except one, the combination of all features together gives the best performance. Another facet of the evaluation considers how the presence of neutral instances affects the performance of features for distinguishing between positive and negative polarity. These experiments show that the presence of neutral instances greatly degrades the performance of these features, and that perhaps the best way to improve performance across all polarity classes is to improve the system's ability to identify when an instance is neutral.","We would like to thank the anonymous reviewers for their valuable comments and suggestions. This work was supported in part by an Andrew Mellow Predoctoral Fellowship, by the NSF under grant IIS-0208798, by the Advanced Research and Development Activity (ARDA), and by the European IST Programme through the AMIDA Integrated Project FP6-0033812.","Articles: Recognizing Contextual Polarity: An Exploration of Features for Phrase-Level Sentiment Analysis. Many approaches to automatic sentiment analysis begin with a large lexicon of words marked with their prior polarity (also called semantic orientation). However, the contextual polarity of the phrase in which a particular instance of a word appears may be quite different from the word's prior polarity. Positive words are used in phrases expressing negative sentiments, or vice versa. Also, quite often words that are positive or negative out of context are neutral in context, meaning they are not even being used to express a sentiment. The goal of this work is to automatically distinguish between prior and contextual polarity, with a focus on understanding which features are important for this task. Because an important aspect of the problem is identifying when polar terms are being used in neutral contexts, features for distinguishing between neutral and polar instances are evaluated, as well as features for distinguishing between positive and negative contextual polarity. 
The evaluation includes assessing the performance of features across multiple machine learning algorithms. For all learning algorithms except one, the combination of all features together gives the best performance. Another facet of the evaluation considers how the presence of neutral instances affects the performance of features for distinguishing between positive and negative polarity. These experiments show that the presence of neutral instances greatly degrades the performance of these features, and that perhaps the best way to improve performance across all polarity classes is to improve the system's ability to identify when an instance is neutral.",2009
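The wilson-etal-2009 record turns on the difference between a word's prior polarity and its contextual polarity. The sketch below illustrates only that distinction with one of the contextual cues the paper discusses (local negation); the tiny lexicon, the negator list, and the two-token window are hypothetical simplifications, not the paper's feature set or classifiers.

```python
# Hypothetical prior-polarity lexicon; real work would draw on a full
# subjectivity lexicon rather than this four-word toy.
PRIOR = {"great": "positive", "trust": "positive",
         "terrible": "negative", "problem": "negative"}
NEGATORS = {"not", "no", "never", "hardly", "n't"}
FLIP = {"positive": "negative", "negative": "positive"}

def contextual_polarity(tokens, window=2):
    """Return (token, prior, contextual) triples for lexicon words, flipping
    the prior polarity when a negator occurs in the preceding `window` tokens.
    A toy stand-in for the paper's much richer contextual features."""
    out = []
    for i, tok in enumerate(tokens):
        prior = PRIOR.get(tok.lower())
        if prior is None:
            continue
        negated = any(t.lower() in NEGATORS for t in tokens[max(0, i - window):i])
        out.append((tok, prior, FLIP[prior] if negated else prior))
    return out

print(contextual_polarity("I do not trust this great camera".split()))
# -> [('trust', 'positive', 'negative'), ('great', 'positive', 'positive')]
```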
konig-1990-complexity,https://aclanthology.org/C90-2041,0,,,,,,,"The Complexity of Parsing With Extended Categorial Grammars. Instead of incorporating a gap-percolation mechanism for handling certain ""movement"" phenomena, the extended categorial grammars contain special inference rules for treating these problems. The Lambek categorial grammar is one representative of the grammar family under consideration. It allows for a restricted use of hypothetical reasoning. We define a modification of the Cocke-Younger-Kasami (CKY) parsing algorithm which covers this additional deductive power and analyze its time complexity.",The Complexity of Parsing With Extended Categorial Grammars,"Instead of incorporating a gap-percolation mechanism for handling certain ""movement"" phenomena, the extended categorial grammars contain special inference rules for treating these problems. The Lambek categorial grammar is one representative of the grammar family under consideration. It allows for a restricted use of hypothetical reasoning. We define a modification of the Cocke-Younger-Kasami (CKY) parsing algorithm which covers this additional deductive power and analyze its time complexity.",The Complexity of Parsing With Extended Categorial Grammars,"Instead of incorporating a gap-percolation mechanism for handling certain ""movement"" phenomena, the extended categorial grammars contain special inference rules for treating these problems. The Lambek categorial grammar is one representative of the grammar family under consideration. It allows for a restricted use of hypothetical reasoning. We define a modification of the Cocke-Younger-Kasami (CKY) parsing algorithm which covers this additional deductive power and analyze its time complexity.","The research reported in this paper is supported by the LILOG project, and a doctoral fellowship, both from IBM Deutschland OmbH, and by the Esprit Basic Research Action Project 3175 (DYANA). I thank Andreas Eisele for discussion. The responsibility for errors resides with me.","The Complexity of Parsing With Extended Categorial Grammars. Instead of incorporating a gap-percolation mechanism for handling certain ""movement"" phenomena, the extended categorial grammars contain special inference rules for treating these problems. The Lambek categorial grammar is one representative of the grammar family under consideration. It allows for a restricted use of hypothetical reasoning. We define a modification of the Cocke-Younger-Kasami (CKY) parsing algorithm which covers this additional deductive power and analyze its time complexity.",1990
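The konig-1990-complexity record modifies the Cocke-Younger-Kasami (CKY) algorithm to handle the extra deductive power of extended categorial grammars. The Lambek-specific machinery is beyond a short sketch, but the base CKY recognizer it extends can be shown for a context-free grammar in Chomsky normal form; the toy grammar below is an assumption for illustration only.

```python
from itertools import product

def cky_recognize(tokens, lexical, binary, start="S"):
    """Standard CKY recognition for a CFG in Chomsky normal form.
    lexical maps a word to the preterminals that yield it; binary maps a
    pair (B, C) to the set of nonterminals A with a rule A -> B C."""
    n = len(tokens)
    chart = [[set() for _ in range(n + 1)] for _ in range(n)]  # chart[i][j]: tokens[i:j]
    for i, w in enumerate(tokens):
        chart[i][i + 1] = set(lexical.get(w, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):                      # split point
                for B, C in product(chart[i][k], chart[k][j]):
                    chart[i][j] |= binary.get((B, C), set())
    return start in chart[0][n]

# Toy CNF grammar, purely for illustration.
lexical = {"the": {"Det"}, "dog": {"N"}, "barks": {"V"}}
binary = {("Det", "N"): {"NP"}, ("NP", "V"): {"S"}}
print(cky_recognize("the dog barks".split(), lexical, binary))  # True
```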
sharma-etal-2021-lrg,https://aclanthology.org/2021.semeval-1.21,0,,,,,,,"LRG at SemEval-2021 Task 4: Improving Reading Comprehension with Abstract Words using Augmentation, Linguistic Features and Voting. In this article, we present our methodologies for SemEval-2021 Task-4: Reading Comprehension of Abstract Meaning. Given a fill-inthe-blank-type question and a corresponding context, the task is to predict the most suitable word from a list of 5 options. There are three sub-tasks within this task: Imperceptibility (subtask-I), Non-Specificity (subtask-II), and Intersection (subtask-III). We use encoders of transformers-based models pre-trained on the masked language modelling (MLM) task to build our Fill-in-the-blank (FitB) models. Moreover, to model imperceptibility, we define certain linguistic features, and to model non-specificity, we leverage information from hypernyms and hyponyms provided by a lexical database. Specifically, for non-specificity, we try out augmentation techniques, and other statistical techniques. We also propose variants, namely Chunk Voting and Max Context, to take care of input length restrictions for BERT, etc. Additionally, we perform a thorough ablation study, and use Integrated Gradients to explain our predictions on a few samples. Our best submissions achieve accuracies of 75.31% and 77.84%, on the test sets for subtask-I and subtask-II, respectively. For subtask-III, we achieve accuracies of 65.64% and 62.27%. The code is available here.","{LRG} at {S}em{E}val-2021 Task 4: Improving Reading Comprehension with Abstract Words using Augmentation, Linguistic Features and Voting","In this article, we present our methodologies for SemEval-2021 Task-4: Reading Comprehension of Abstract Meaning. Given a fill-inthe-blank-type question and a corresponding context, the task is to predict the most suitable word from a list of 5 options. There are three sub-tasks within this task: Imperceptibility (subtask-I), Non-Specificity (subtask-II), and Intersection (subtask-III). We use encoders of transformers-based models pre-trained on the masked language modelling (MLM) task to build our Fill-in-the-blank (FitB) models. Moreover, to model imperceptibility, we define certain linguistic features, and to model non-specificity, we leverage information from hypernyms and hyponyms provided by a lexical database. Specifically, for non-specificity, we try out augmentation techniques, and other statistical techniques. We also propose variants, namely Chunk Voting and Max Context, to take care of input length restrictions for BERT, etc. Additionally, we perform a thorough ablation study, and use Integrated Gradients to explain our predictions on a few samples. Our best submissions achieve accuracies of 75.31% and 77.84%, on the test sets for subtask-I and subtask-II, respectively. For subtask-III, we achieve accuracies of 65.64% and 62.27%. The code is available here.","LRG at SemEval-2021 Task 4: Improving Reading Comprehension with Abstract Words using Augmentation, Linguistic Features and Voting","In this article, we present our methodologies for SemEval-2021 Task-4: Reading Comprehension of Abstract Meaning. Given a fill-inthe-blank-type question and a corresponding context, the task is to predict the most suitable word from a list of 5 options. There are three sub-tasks within this task: Imperceptibility (subtask-I), Non-Specificity (subtask-II), and Intersection (subtask-III). 
We use encoders of transformers-based models pre-trained on the masked language modelling (MLM) task to build our Fill-in-the-blank (FitB) models. Moreover, to model imperceptibility, we define certain linguistic features, and to model non-specificity, we leverage information from hypernyms and hyponyms provided by a lexical database. Specifically, for non-specificity, we try out augmentation techniques, and other statistical techniques. We also propose variants, namely Chunk Voting and Max Context, to take care of input length restrictions for BERT, etc. Additionally, we perform a thorough ablation study, and use Integrated Gradients to explain our predictions on a few samples. Our best submissions achieve accuracies of 75.31% and 77.84%, on the test sets for subtask-I and subtask-II, respectively. For subtask-III, we achieve accuracies of 65.64% and 62.27%. The code is available here.","We thank Rajaswa Patil 1 and Somesh Singh 2 for their support. We would also like to express our gratitude to our colleagues at the Language Research Group (LRG) 3 , who have been with us at every stepping stone.","LRG at SemEval-2021 Task 4: Improving Reading Comprehension with Abstract Words using Augmentation, Linguistic Features and Voting. In this article, we present our methodologies for SemEval-2021 Task-4: Reading Comprehension of Abstract Meaning. Given a fill-inthe-blank-type question and a corresponding context, the task is to predict the most suitable word from a list of 5 options. There are three sub-tasks within this task: Imperceptibility (subtask-I), Non-Specificity (subtask-II), and Intersection (subtask-III). We use encoders of transformers-based models pre-trained on the masked language modelling (MLM) task to build our Fill-in-the-blank (FitB) models. Moreover, to model imperceptibility, we define certain linguistic features, and to model non-specificity, we leverage information from hypernyms and hyponyms provided by a lexical database. Specifically, for non-specificity, we try out augmentation techniques, and other statistical techniques. We also propose variants, namely Chunk Voting and Max Context, to take care of input length restrictions for BERT, etc. Additionally, we perform a thorough ablation study, and use Integrated Gradients to explain our predictions on a few samples. Our best submissions achieve accuracies of 75.31% and 77.84%, on the test sets for subtask-I and subtask-II, respectively. For subtask-III, we achieve accuracies of 65.64% and 62.27%. The code is available here.",2021
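The sharma-etal-2021-lrg record builds fill-in-the-blank models from MLM-pretrained transformer encoders and scores a small set of candidate options. A rough sketch of that idea using the Hugging Face transformers fill-mask pipeline, restricting predictions to a candidate list; the model name, sentence, and options are placeholders, and this is not the authors' system (which adds linguistic features, augmentation, and voting).

```python
from transformers import pipeline

# Score a small set of candidate words for a masked slot with an MLM.
# "bert-base-uncased" and the example sentence are placeholders, not the
# task data or the authors' fine-tuned models.
fill = pipeline("fill-mask", model="bert-base-uncased")

question = "The committee reached a unanimous [MASK] after the debate."
options = ["decision", "banana", "conclusion", "verdict", "chair"]

# The pipeline returns one dict per candidate with a probability-like score;
# pick the option the MLM finds most plausible in context.
scored = fill(question, targets=options)
best = max(scored, key=lambda d: d["score"])
print(best["token_str"], best["score"])
```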
liberman-2009-annotation,https://aclanthology.org/W09-0102,0,,,,,,,"The Annotation Conundrum. Without lengthy, iterative refinement of guidelines, and equally lengthy and iterative training of annotators, the level of inter-subjective agreement on simple tasks of phonetic, phonological, syntactic, semantic, and pragmatic annotation is shockingly low. This is a significant practical problem in speech and language technology, but it poses questions of interest to psychologists, philosophers of language, and theoretical linguists as well.",The Annotation Conundrum,"Without lengthy, iterative refinement of guidelines, and equally lengthy and iterative training of annotators, the level of inter-subjective agreement on simple tasks of phonetic, phonological, syntactic, semantic, and pragmatic annotation is shockingly low. This is a significant practical problem in speech and language technology, but it poses questions of interest to psychologists, philosophers of language, and theoretical linguists as well.",The Annotation Conundrum,"Without lengthy, iterative refinement of guidelines, and equally lengthy and iterative training of annotators, the level of inter-subjective agreement on simple tasks of phonetic, phonological, syntactic, semantic, and pragmatic annotation is shockingly low. This is a significant practical problem in speech and language technology, but it poses questions of interest to psychologists, philosophers of language, and theoretical linguists as well.",,"The Annotation Conundrum. Without lengthy, iterative refinement of guidelines, and equally lengthy and iterative training of annotators, the level of inter-subjective agreement on simple tasks of phonetic, phonological, syntactic, semantic, and pragmatic annotation is shockingly low. This is a significant practical problem in speech and language technology, but it poses questions of interest to psychologists, philosophers of language, and theoretical linguists as well.",2009
indurthi-etal-2019-fermi,https://aclanthology.org/S19-2009,1,,,,hate_speech,,,"FERMI at SemEval-2019 Task 5: Using Sentence embeddings to Identify Hate Speech Against Immigrants and Women in Twitter. This paper describes our system (Fermi) for Task 5 of SemEval-2019: HatEval: Multilingual Detection of Hate Speech Against Immigrants and Women on Twitter. We participated in the subtask A for English and ranked first in the evaluation on the test set. We evaluate the quality of multiple sentence embeddings and explore multiple training models to evaluate the performance of simple yet effective embedding-ML combination algorithms. Our team-Fermi's model achieved an accuracy of 65.00% for English language in task A. Our models, which use pretrained Universal Encoder sentence embeddings for transforming the input and SVM (with RBF kernel) for classification, scored first position (among 68) in the leaderboard on the test set for Subtask A in English language. In this paper we provide a detailed description of the approach, as well as the results obtained in the task.",{FERMI} at {S}em{E}val-2019 Task 5: Using Sentence embeddings to Identify Hate Speech Against Immigrants and Women in {T}witter,"This paper describes our system (Fermi) for Task 5 of SemEval-2019: HatEval: Multilingual Detection of Hate Speech Against Immigrants and Women on Twitter. We participated in the subtask A for English and ranked first in the evaluation on the test set. We evaluate the quality of multiple sentence embeddings and explore multiple training models to evaluate the performance of simple yet effective embedding-ML combination algorithms. Our team-Fermi's model achieved an accuracy of 65.00% for English language in task A. Our models, which use pretrained Universal Encoder sentence embeddings for transforming the input and SVM (with RBF kernel) for classification, scored first position (among 68) in the leaderboard on the test set for Subtask A in English language. In this paper we provide a detailed description of the approach, as well as the results obtained in the task.",FERMI at SemEval-2019 Task 5: Using Sentence embeddings to Identify Hate Speech Against Immigrants and Women in Twitter,"This paper describes our system (Fermi) for Task 5 of SemEval-2019: HatEval: Multilingual Detection of Hate Speech Against Immigrants and Women on Twitter. We participated in the subtask A for English and ranked first in the evaluation on the test set. We evaluate the quality of multiple sentence embeddings and explore multiple training models to evaluate the performance of simple yet effective embedding-ML combination algorithms. Our team-Fermi's model achieved an accuracy of 65.00% for English language in task A. Our models, which use pretrained Universal Encoder sentence embeddings for transforming the input and SVM (with RBF kernel) for classification, scored first position (among 68) in the leaderboard on the test set for Subtask A in English language. In this paper we provide a detailed description of the approach, as well as the results obtained in the task.",,"FERMI at SemEval-2019 Task 5: Using Sentence embeddings to Identify Hate Speech Against Immigrants and Women in Twitter. This paper describes our system (Fermi) for Task 5 of SemEval-2019: HatEval: Multilingual Detection of Hate Speech Against Immigrants and Women on Twitter. We participated in the subtask A for English and ranked first in the evaluation on the test set. 
We evaluate the quality of multiple sentence embeddings and explore multiple training models to evaluate the performance of simple yet effective embedding-ML combination algorithms. Our team-Fermi's model achieved an accuracy of 65.00% for English language in task A. Our models, which use pretrained Universal Encoder sentence embeddings for transforming the input and SVM (with RBF kernel) for classification, scored first position (among 68) in the leaderboard on the test set for Subtask A in English language. In this paper we provide a detailed description of the approach, as well as the results obtained in the task.",2019
lewin-2007-basenps,https://aclanthology.org/W07-1022,1,,,,health,,,"BaseNPs that contain gene names: domain specificity and genericity. The names of named entities very often occur as constituents of larger noun phrases which denote different types of entity. Understanding the structure of the embedding phrase can be an enormously beneficial first step to enhancing whatever processing is intended to follow the named entity recognition in the first place. In this paper, we examine the integration of general purpose linguistic processors together with domain specific named entity recognition in order to carry out the task of baseNP detection. We report a best F-score of 87.17% on this task. We also report an inter-annotator agreement score of 98.8 Kappa on the task of baseNP annotation of a new data set.",{B}ase{NP}s that contain gene names: domain specificity and genericity,"The names of named entities very often occur as constituents of larger noun phrases which denote different types of entity. Understanding the structure of the embedding phrase can be an enormously beneficial first step to enhancing whatever processing is intended to follow the named entity recognition in the first place. In this paper, we examine the integration of general purpose linguistic processors together with domain specific named entity recognition in order to carry out the task of baseNP detection. We report a best F-score of 87.17% on this task. We also report an inter-annotator agreement score of 98.8 Kappa on the task of baseNP annotation of a new data set.",BaseNPs that contain gene names: domain specificity and genericity,"The names of named entities very often occur as constituents of larger noun phrases which denote different types of entity. Understanding the structure of the embedding phrase can be an enormously beneficial first step to enhancing whatever processing is intended to follow the named entity recognition in the first place. In this paper, we examine the integration of general purpose linguistic processors together with domain specific named entity recognition in order to carry out the task of baseNP detection. We report a best F-score of 87.17% on this task. We also report an inter-annotator agreement score of 98.8 Kappa on the task of baseNP annotation of a new data set.",,"BaseNPs that contain gene names: domain specificity and genericity. The names of named entities very often occur as constituents of larger noun phrases which denote different types of entity. Understanding the structure of the embedding phrase can be an enormously beneficial first step to enhancing whatever processing is intended to follow the named entity recognition in the first place. In this paper, we examine the integration of general purpose linguistic processors together with domain specific named entity recognition in order to carry out the task of baseNP detection. We report a best F-score of 87.17% on this task. We also report an inter-annotator agreement score of 98.8 Kappa on the task of baseNP annotation of a new data set.",2007
cieri-etal-2012-twenty,http://www.lrec-conf.org/proceedings/lrec2012/pdf/1117_Paper.pdf,0,,,,,,,"Twenty Years of Language Resource Development and Distribution: A Progress Report on LDC Activities. On the Linguistic Data Consortium's (LDC) 20th anniversary, this paper describes the changes to the language resource landscape over the past two decades, how LDC has adjusted its practice to adapt to them and how the business model continues to grow. Specifically, we will discuss LDC's evolving roles and changes in the sizes and types of LDC language resources (LR) as well as the data they include and the annotations of that data. We will also discuss adaptations of the LDC business model and the sponsored projects it supports.",Twenty Years of Language Resource Development and Distribution: A Progress Report on {LDC} Activities,"On the Linguistic Data Consortium's (LDC) 20th anniversary, this paper describes the changes to the language resource landscape over the past two decades, how LDC has adjusted its practice to adapt to them and how the business model continues to grow. Specifically, we will discuss LDC's evolving roles and changes in the sizes and types of LDC language resources (LR) as well as the data they include and the annotations of that data. We will also discuss adaptations of the LDC business model and the sponsored projects it supports.",Twenty Years of Language Resource Development and Distribution: A Progress Report on LDC Activities,"On the Linguistic Data Consortium's (LDC) 20th anniversary, this paper describes the changes to the language resource landscape over the past two decades, how LDC has adjusted its practice to adapt to them and how the business model continues to grow. Specifically, we will discuss LDC's evolving roles and changes in the sizes and types of LDC language resources (LR) as well as the data they include and the annotations of that data. We will also discuss adaptations of the LDC business model and the sponsored projects it supports.",,"Twenty Years of Language Resource Development and Distribution: A Progress Report on LDC Activities. On the Linguistic Data Consortium's (LDC) 20th anniversary, this paper describes the changes to the language resource landscape over the past two decades, how LDC has adjusted its practice to adapt to them and how the business model continues to grow. Specifically, we will discuss LDC's evolving roles and changes in the sizes and types of LDC language resources (LR) as well as the data they include and the annotations of that data. We will also discuss adaptations of the LDC business model and the sponsored projects it supports.",2012
lee-haug-2010-porting,http://www.lrec-conf.org/proceedings/lrec2010/pdf/631_Paper.pdf,0,,,,,,,"Porting an Ancient Greek and Latin Treebank. We have recently converted a dependency treebank, consisting of ancient Greek and Latin texts, from one annotation scheme to another that was independently designed. This paper makes two observations about this conversion process. First, we show that, despite significant surface differences between the two treebanks, a number of straightforward transformation rules yield a substantial level of compatibility between them, giving evidence for their sound design and high quality of annotation. Second, we analyze some linguistic annotations that require further disambiguation, proposing some simple yet effective machine learning methods.",Porting an {A}ncient {G}reek and {L}atin Treebank,"We have recently converted a dependency treebank, consisting of ancient Greek and Latin texts, from one annotation scheme to another that was independently designed. This paper makes two observations about this conversion process. First, we show that, despite significant surface differences between the two treebanks, a number of straightforward transformation rules yield a substantial level of compatibility between them, giving evidence for their sound design and high quality of annotation. Second, we analyze some linguistic annotations that require further disambiguation, proposing some simple yet effective machine learning methods.",Porting an Ancient Greek and Latin Treebank,"We have recently converted a dependency treebank, consisting of ancient Greek and Latin texts, from one annotation scheme to another that was independently designed. This paper makes two observations about this conversion process. First, we show that, despite significant surface differences between the two treebanks, a number of straightforward transformation rules yield a substantial level of compatibility between them, giving evidence for their sound design and high quality of annotation. Second, we analyze some linguistic annotations that require further disambiguation, proposing some simple yet effective machine learning methods.","We thank the Perseus Project, Tufts University, for providing the Greek test set. The first author gratefully acknowledges the support of the Faculty of Humanities at the University of Oslo, where he conducted part of this research.","Porting an Ancient Greek and Latin Treebank. We have recently converted a dependency treebank, consisting of ancient Greek and Latin texts, from one annotation scheme to another that was independently designed. This paper makes two observations about this conversion process. First, we show that, despite significant surface differences between the two treebanks, a number of straightforward transformation rules yield a substantial level of compatibility between them, giving evidence for their sound design and high quality of annotation. Second, we analyze some linguistic annotations that require further disambiguation, proposing some simple yet effective machine learning methods.",2010
shelmanov-etal-2021-certain,https://aclanthology.org/2021.eacl-main.157,0,,,,,,,"How Certain is Your Transformer?. In this work, we consider the problem of uncertainty estimation for Transformer-based models. We investigate the applicability of uncertainty estimates based on dropout usage at the inference stage (Monte Carlo dropout). The series of experiments on natural language understanding tasks shows that the resulting uncertainty estimates improve the quality of detection of error-prone instances. Special attention is paid to the construction of computationally inexpensive estimates via Monte Carlo dropout and Determinantal Point Processes.",How Certain is Your {T}ransformer?,"In this work, we consider the problem of uncertainty estimation for Transformer-based models. We investigate the applicability of uncertainty estimates based on dropout usage at the inference stage (Monte Carlo dropout). The series of experiments on natural language understanding tasks shows that the resulting uncertainty estimates improve the quality of detection of error-prone instances. Special attention is paid to the construction of computationally inexpensive estimates via Monte Carlo dropout and Determinantal Point Processes.",How Certain is Your Transformer?,"In this work, we consider the problem of uncertainty estimation for Transformer-based models. We investigate the applicability of uncertainty estimates based on dropout usage at the inference stage (Monte Carlo dropout). The series of experiments on natural language understanding tasks shows that the resulting uncertainty estimates improve the quality of detection of error-prone instances. Special attention is paid to the construction of computationally inexpensive estimates via Monte Carlo dropout and Determinantal Point Processes.","We thank the reviewers for their valuable feedback. The development of uncertainty estimation algorithms for Transformer models (Section 3) was supported by the joint MTS-Skoltech lab. The development of a software system for the experimental study of uncertainty estimation methods and its application to NLP tasks (Section 4) was supported by the Russian Science Foundation grant 20-71-10135. The Zhores supercomputer (Zacharov et al., 2019) was used for computations.","How Certain is Your Transformer?. In this work, we consider the problem of uncertainty estimation for Transformer-based models. We investigate the applicability of uncertainty estimates based on dropout usage at the inference stage (Monte Carlo dropout). The series of experiments on natural language understanding tasks shows that the resulting uncertainty estimates improve the quality of detection of error-prone instances. Special attention is paid to the construction of computationally inexpensive estimates via Monte Carlo dropout and Determinantal Point Processes.",2021
hernandez-calvo-2014-conll,https://aclanthology.org/W14-1707,0,,,,,,,"CoNLL 2014 Shared Task: Grammatical Error Correction with a Syntactic N-gram Language Model from a Big Corpora. We describe our approach to grammatical error correction presented in the CoNLL Shared Task 2014. Our work is focused on error detection in sentences with a language model based on syntactic tri-grams and bi-grams extracted from dependency trees generated from 90% of the English Wikipedia. Also, we add a naïve module to error correction that outputs a set of possible answers, those sentences are scored using a syntactic n-gram language model. The sentence with the best score is the final suggestion of the system. The system was ranked 11th, evidently this is a very simple approach, but since the beginning our main goal was to test the syntactic n-gram language model with a big corpus to future comparison.",{C}o{NLL} 2014 Shared Task: Grammatical Error Correction with a Syntactic N-gram Language Model from a Big Corpora,"We describe our approach to grammatical error correction presented in the CoNLL Shared Task 2014. Our work is focused on error detection in sentences with a language model based on syntactic tri-grams and bi-grams extracted from dependency trees generated from 90% of the English Wikipedia. Also, we add a naïve module to error correction that outputs a set of possible answers, those sentences are scored using a syntactic n-gram language model. The sentence with the best score is the final suggestion of the system. The system was ranked 11th, evidently this is a very simple approach, but since the beginning our main goal was to test the syntactic n-gram language model with a big corpus to future comparison.",CoNLL 2014 Shared Task: Grammatical Error Correction with a Syntactic N-gram Language Model from a Big Corpora,"We describe our approach to grammatical error correction presented in the CoNLL Shared Task 2014. Our work is focused on error detection in sentences with a language model based on syntactic tri-grams and bi-grams extracted from dependency trees generated from 90% of the English Wikipedia. Also, we add a naïve module to error correction that outputs a set of possible answers, those sentences are scored using a syntactic n-gram language model. The sentence with the best score is the final suggestion of the system. The system was ranked 11th, evidently this is a very simple approach, but since the beginning our main goal was to test the syntactic n-gram language model with a big corpus to future comparison.","Work done under partial support of Mexican Government (CONACYT, SNI) and Instituto Politécnico Nacional, México (SIP-IPN, COFAA-IPN, PIFI-IPN).","CoNLL 2014 Shared Task: Grammatical Error Correction with a Syntactic N-gram Language Model from a Big Corpora. We describe our approach to grammatical error correction presented in the CoNLL Shared Task 2014. Our work is focused on error detection in sentences with a language model based on syntactic tri-grams and bi-grams extracted from dependency trees generated from 90% of the English Wikipedia. Also, we add a naïve module to error correction that outputs a set of possible answers, those sentences are scored using a syntactic n-gram language model. The sentence with the best score is the final suggestion of the system. The system was ranked 11th, evidently this is a very simple approach, but since the beginning our main goal was to test the syntactic n-gram language model with a big corpus to future comparison.",2014
ayari-etal-2010-fine,http://www.lrec-conf.org/proceedings/lrec2010/pdf/520_Paper.pdf,0,,,,,,,"Fine-grained Linguistic Evaluation of Question Answering Systems. Question answering systems are complex systems using natural language processing. Some evaluation campaigns are organized to evaluate such systems in order to propose a classification of systems based on final results (number of correct answers). Nevertheless, teams need to evaluate more precisely the results obtained by their systems if they want to do a diagnostic evaluation. There are no tools or methods to do these evaluations systematically. We present REVISE, a tool for glass box evaluation based on diagnostic of question answering system results.",Fine-grained Linguistic Evaluation of Question Answering Systems,"Question answering systems are complex systems using natural language processing. Some evaluation campaigns are organized to evaluate such systems in order to propose a classification of systems based on final results (number of correct answers). Nevertheless, teams need to evaluate more precisely the results obtained by their systems if they want to do a diagnostic evaluation. There are no tools or methods to do these evaluations systematically. We present REVISE, a tool for glass box evaluation based on diagnostic of question answering system results.",Fine-grained Linguistic Evaluation of Question Answering Systems,"Question answering systems are complex systems using natural language processing. Some evaluation campaigns are organized to evaluate such systems in order to propose a classification of systems based on final results (number of correct answers). Nevertheless, teams need to evaluate more precisely the results obtained by their systems if they want to do a diagnostic evaluation. There are no tools or methods to do these evaluations systematically. We present REVISE, a tool for glass box evaluation based on diagnostic of question answering system results.",,"Fine-grained Linguistic Evaluation of Question Answering Systems. Question answering systems are complex systems using natural language processing. Some evaluation campaigns are organized to evaluate such systems in order to propose a classification of systems based on final results (number of correct answers). Nevertheless, teams need to evaluate more precisely the results obtained by their systems if they want to do a diagnostic evaluation. There are no tools or methods to do these evaluations systematically. We present REVISE, a tool for glass box evaluation based on diagnostic of question answering system results.",2010
swanson-charniak-2014-data,https://aclanthology.org/E14-4033,0,,,,,,,"Data Driven Language Transfer Hypotheses. Language transfer, the preferential second language behavior caused by similarities to the speaker's native language, requires considerable expertise to be detected by humans alone. Our goal in this work is to replace expert intervention by data-driven methods wherever possible. We define a computational methodology that produces a concise list of lexicalized syntactic patterns that are controlled for redundancy and ranked by relevancy to language transfer. We demonstrate the ability of our methodology to detect hundreds of such candidate patterns from currently available data sources, and validate the quality of the proposed patterns through classification experiments.",Data Driven Language Transfer Hypotheses,"Language transfer, the preferential second language behavior caused by similarities to the speaker's native language, requires considerable expertise to be detected by humans alone. Our goal in this work is to replace expert intervention by data-driven methods wherever possible. We define a computational methodology that produces a concise list of lexicalized syntactic patterns that are controlled for redundancy and ranked by relevancy to language transfer. We demonstrate the ability of our methodology to detect hundreds of such candidate patterns from currently available data sources, and validate the quality of the proposed patterns through classification experiments.",Data Driven Language Transfer Hypotheses,"Language transfer, the preferential second language behavior caused by similarities to the speaker's native language, requires considerable expertise to be detected by humans alone. Our goal in this work is to replace expert intervention by data-driven methods wherever possible. We define a computational methodology that produces a concise list of lexicalized syntactic patterns that are controlled for redundancy and ranked by relevancy to language transfer. We demonstrate the ability of our methodology to detect hundreds of such candidate patterns from currently available data sources, and validate the quality of the proposed patterns through classification experiments.",,"Data Driven Language Transfer Hypotheses. Language transfer, the preferential second language behavior caused by similarities to the speaker's native language, requires considerable expertise to be detected by humans alone. Our goal in this work is to replace expert intervention by data-driven methods wherever possible. We define a computational methodology that produces a concise list of lexicalized syntactic patterns that are controlled for redundancy and ranked by relevancy to language transfer. We demonstrate the ability of our methodology to detect hundreds of such candidate patterns from currently available data sources, and validate the quality of the proposed patterns through classification experiments.",2014
ruder-plank-2017-learning,https://aclanthology.org/D17-1038,0,,,,,,,"Learning to select data for transfer learning with Bayesian Optimization. Domain similarity measures can be used to gauge adaptability and select suitable data for transfer learning, but existing approaches define ad hoc measures that are deemed suitable for respective tasks. Inspired by work on curriculum learning, we propose to learn data selection measures using Bayesian Optimization and evaluate them across models, domains and tasks. Our learned measures outperform existing domain similarity measures significantly on three tasks: sentiment analysis, part-of-speech tagging, and parsing. We show the importance of complementing similarity with diversity, and that learned measures are, to some degree, transferable across models, domains, and even tasks.",Learning to select data for transfer learning with {B}ayesian Optimization,"Domain similarity measures can be used to gauge adaptability and select suitable data for transfer learning, but existing approaches define ad hoc measures that are deemed suitable for respective tasks. Inspired by work on curriculum learning, we propose to learn data selection measures using Bayesian Optimization and evaluate them across models, domains and tasks. Our learned measures outperform existing domain similarity measures significantly on three tasks: sentiment analysis, part-of-speech tagging, and parsing. We show the importance of complementing similarity with diversity, and that learned measures are, to some degree, transferable across models, domains, and even tasks.",Learning to select data for transfer learning with Bayesian Optimization,"Domain similarity measures can be used to gauge adaptability and select suitable data for transfer learning, but existing approaches define ad hoc measures that are deemed suitable for respective tasks. Inspired by work on curriculum learning, we propose to learn data selection measures using Bayesian Optimization and evaluate them across models, domains and tasks. Our learned measures outperform existing domain similarity measures significantly on three tasks: sentiment analysis, part-of-speech tagging, and parsing. We show the importance of complementing similarity with diversity, and that learned measures are, to some degree, transferable across models, domains, and even tasks.",We thank the anonymous reviewers for their valuable feedback. Sebastian is supported by Irish Research Council Grant Number EBPPG/2014/30 and Science Foundation Ireland Grant Number SFI/12/RC/2289. Barbara is supported by NVIDIA corporation and the Computing Center of the University of Groningen.,"Learning to select data for transfer learning with Bayesian Optimization. Domain similarity measures can be used to gauge adaptability and select suitable data for transfer learning, but existing approaches define ad hoc measures that are deemed suitable for respective tasks. Inspired by work on curriculum learning, we propose to learn data selection measures using Bayesian Optimization and evaluate them across models, domains and tasks. Our learned measures outperform existing domain similarity measures significantly on three tasks: sentiment analysis, part-of-speech tagging, and parsing. We show the importance of complementing similarity with diversity, and that learned measures are, to some degree, transferable across models, domains, and even tasks.",2017
ogbuju-onyesolu-2019-development,https://aclanthology.org/W19-3601,0,,,,,,,Development of a General Purpose Sentiment Lexicon for Igbo Language. ,Development of a General Purpose Sentiment Lexicon for {I}gbo Language,,Development of a General Purpose Sentiment Lexicon for Igbo Language,,,Development of a General Purpose Sentiment Lexicon for Igbo Language. ,2019
yates-etal-2016-effects,https://aclanthology.org/L16-1479,1,,,,peace_justice_and_strong_institutions,,,"Effects of Sampling on Twitter Trend Detection. Much research has focused on detecting trends on Twitter, including health-related trends such as mentions of Influenza-like illnesses or their symptoms. The majority of this research has been conducted using Twitter's public feed, which includes only about 1% of all public tweets. It is unclear if, when, and how using Twitter's 1% feed has affected the evaluation of trend detection methods. In this work we use a larger feed to investigate the effects of sampling on Twitter trend detection. We focus on using health-related trends to estimate the prevalence of Influenza-like illnesses based on tweets, and use ground truth obtained from the CDC and Google Flu Trends to explore how the prevalence estimates degrade when moving from a 100% to a 1% sample. We find that using the public 1% sample is unlikely to substantially harm ILI estimates made at the national level, but can cause poor performance when estimates are made at the city level.",Effects of Sampling on {T}witter Trend Detection,"Much research has focused on detecting trends on Twitter, including health-related trends such as mentions of Influenza-like illnesses or their symptoms. The majority of this research has been conducted using Twitter's public feed, which includes only about 1% of all public tweets. It is unclear if, when, and how using Twitter's 1% feed has affected the evaluation of trend detection methods. In this work we use a larger feed to investigate the effects of sampling on Twitter trend detection. We focus on using health-related trends to estimate the prevalence of Influenza-like illnesses based on tweets, and use ground truth obtained from the CDC and Google Flu Trends to explore how the prevalence estimates degrade when moving from a 100% to a 1% sample. We find that using the public 1% sample is unlikely to substantially harm ILI estimates made at the national level, but can cause poor performance when estimates are made at the city level.",Effects of Sampling on Twitter Trend Detection,"Much research has focused on detecting trends on Twitter, including health-related trends such as mentions of Influenza-like illnesses or their symptoms. The majority of this research has been conducted using Twitter's public feed, which includes only about 1% of all public tweets. It is unclear if, when, and how using Twitter's 1% feed has affected the evaluation of trend detection methods. In this work we use a larger feed to investigate the effects of sampling on Twitter trend detection. We focus on using health-related trends to estimate the prevalence of Influenza-like illnesses based on tweets, and use ground truth obtained from the CDC and Google Flu Trends to explore how the prevalence estimates degrade when moving from a 100% to a 1% sample. We find that using the public 1% sample is unlikely to substantially harm ILI estimates made at the national level, but can cause poor performance when estimates are made at the city level.",,"Effects of Sampling on Twitter Trend Detection. Much research has focused on detecting trends on Twitter, including health-related trends such as mentions of Influenza-like illnesses or their symptoms. The majority of this research has been conducted using Twitter's public feed, which includes only about 1% of all public tweets. It is unclear if, when, and how using Twitter's 1% feed has affected the evaluation of trend detection methods. 
In this work we use a larger feed to investigate the effects of sampling on Twitter trend detection. We focus on using health-related trends to estimate the prevalence of Influenza-like illnesses based on tweets, and use ground truth obtained from the CDC and Google Flu Trends to explore how the prevalence estimates degrade when moving from a 100% to a 1% sample. We find that using the public 1% sample is unlikely to substantially harm ILI estimates made at the national level, but can cause poor performance when estimates are made at the city level.",2016
hubert-etal-2016-training,https://aclanthology.org/L16-1514,0,,,,,,,"Training \& Quality Assessment of an Optical Character Recognition Model for Northern Haida. In this paper, we are presenting our work on the creation of the first optical character recognition (OCR) model for Northern Haida, also known as Masset or Xaad Kil, a nearly extinct First Nations language spoken in the Haida Gwaii archipelago in British Columbia, Canada. We are addressing the challenges of training an OCR model for a language with an extensive, non-standard Latin character set as follows: (1) We have compared various training approaches and present the results of practical analyses to maximize recognition accuracy and minimize manual labor. An approach using just one or two pages of Source Images directly performed better than the Image Generation approach, and better than models based on three or more pages. Analyses also suggest that a character's frequency is directly correlated with its recognition accuracy. (2) We present an overview of current OCR accuracy analysis tools available. (3) We have ported the once de-facto standardized OCR accuracy tools to be able to cope with Unicode input. We hope that our work can encourage further OCR endeavors for other endangered and/or underresearched languages. Our work adds to a growing body of research on OCR for particularly challenging character sets, and contributes to creating the largest electronic corpus for this severely endangered language.",Training {\&} Quality Assessment of an Optical Character Recognition Model for {N}orthern {H}aida,"In this paper, we are presenting our work on the creation of the first optical character recognition (OCR) model for Northern Haida, also known as Masset or Xaad Kil, a nearly extinct First Nations language spoken in the Haida Gwaii archipelago in British Columbia, Canada. We are addressing the challenges of training an OCR model for a language with an extensive, non-standard Latin character set as follows: (1) We have compared various training approaches and present the results of practical analyses to maximize recognition accuracy and minimize manual labor. An approach using just one or two pages of Source Images directly performed better than the Image Generation approach, and better than models based on three or more pages. Analyses also suggest that a character's frequency is directly correlated with its recognition accuracy. (2) We present an overview of current OCR accuracy analysis tools available. (3) We have ported the once de-facto standardized OCR accuracy tools to be able to cope with Unicode input. We hope that our work can encourage further OCR endeavors for other endangered and/or underresearched languages. Our work adds to a growing body of research on OCR for particularly challenging character sets, and contributes to creating the largest electronic corpus for this severely endangered language.",Training \& Quality Assessment of an Optical Character Recognition Model for Northern Haida,"In this paper, we are presenting our work on the creation of the first optical character recognition (OCR) model for Northern Haida, also known as Masset or Xaad Kil, a nearly extinct First Nations language spoken in the Haida Gwaii archipelago in British Columbia, Canada. 
We are addressing the challenges of training an OCR model for a language with an extensive, non-standard Latin character set as follows: (1) We have compared various training approaches and present the results of practical analyses to maximize recognition accuracy and minimize manual labor. An approach using just one or two pages of Source Images directly performed better than the Image Generation approach, and better than models based on three or more pages. Analyses also suggest that a character's frequency is directly correlated with its recognition accuracy. (2) We present an overview of current OCR accuracy analysis tools available. (3) We have ported the once de-facto standardized OCR accuracy tools to be able to cope with Unicode input. We hope that our work can encourage further OCR endeavors for other endangered and/or underresearched languages. Our work adds to a growing body of research on OCR for particularly challenging character sets, and contributes to creating the largest electronic corpus for this severely endangered language.",,"Training \& Quality Assessment of an Optical Character Recognition Model for Northern Haida. In this paper, we are presenting our work on the creation of the first optical character recognition (OCR) model for Northern Haida, also known as Masset or Xaad Kil, a nearly extinct First Nations language spoken in the Haida Gwaii archipelago in British Columbia, Canada. We are addressing the challenges of training an OCR model for a language with an extensive, non-standard Latin character set as follows: (1) We have compared various training approaches and present the results of practical analyses to maximize recognition accuracy and minimize manual labor. An approach using just one or two pages of Source Images directly performed better than the Image Generation approach, and better than models based on three or more pages. Analyses also suggest that a character's frequency is directly correlated with its recognition accuracy. (2) We present an overview of current OCR accuracy analysis tools available. (3) We have ported the once de-facto standardized OCR accuracy tools to be able to cope with Unicode input. We hope that our work can encourage further OCR endeavors for other endangered and/or underresearched languages. Our work adds to a growing body of research on OCR for particularly challenging character sets, and contributes to creating the largest electronic corpus for this severely endangered language.",2016
segond-etal-2005-situational,https://aclanthology.org/W05-0213,1,,,,decent_work_and_economy,,,"Situational Language Training for Hotel Receptionists. This paper presents the lessons learned in experimenting with Thetis, an EC project focusing on the creation and localization of enhanced on-line pedagogical content for language learning in the tourism industry. It is based on a general innovative approach to language learning that allows employees to acquire practical oral and written skills while navigating a relevant professional scenario. The approach is enabled by an underlying platform (EXILLS) that integrates virtual reality with a set of linguistic technologies to create a new form of dynamic, extensible, goal-directed e-content. The work described in this paper has been supported by the European Commission in the frame of the eContent program.",Situational Language Training for Hotel Receptionists,"This paper presents the lessons learned in experimenting with Thetis, an EC project focusing on the creation and localization of enhanced on-line pedagogical content for language learning in the tourism industry. It is based on a general innovative approach to language learning that allows employees to acquire practical oral and written skills while navigating a relevant professional scenario. The approach is enabled by an underlying platform (EXILLS) that integrates virtual reality with a set of linguistic technologies to create a new form of dynamic, extensible, goal-directed e-content. The work described in this paper has been supported by the European Commission in the frame of the eContent program.",Situational Language Training for Hotel Receptionists,"This paper presents the lessons learned in experimenting with Thetis, an EC project focusing on the creation and localization of enhanced on-line pedagogical content for language learning in the tourism industry. It is based on a general innovative approach to language learning that allows employees to acquire practical oral and written skills while navigating a relevant professional scenario. The approach is enabled by an underlying platform (EXILLS) that integrates virtual reality with a set of linguistic technologies to create a new form of dynamic, extensible, goal-directed e-content. The work described in this paper has been supported by the European Commission in the frame of the eContent program.",,"Situational Language Training for Hotel Receptionists. This paper presents the lessons learned in experimenting with Thetis, an EC project focusing on the creation and localization of enhanced on-line pedagogical content for language learning in the tourism industry. It is based on a general innovative approach to language learning that allows employees to acquire practical oral and written skills while navigating a relevant professional scenario. The approach is enabled by an underlying platform (EXILLS) that integrates virtual reality with a set of linguistic technologies to create a new form of dynamic, extensible, goal-directed e-content. The work described in this paper has been supported by the European Commission in the frame of the eContent program.",2005
tiedemann-2008-synchronizing,http://www.lrec-conf.org/proceedings/lrec2008/pdf/484_paper.pdf,0,,,,,,,"Synchronizing Translated Movie Subtitles. This paper addresses the problem of synchronizing movie subtitles, which is necessary to improve alignment quality when building a parallel corpus out of translated subtitles. In particular, synchronization is done on the basis of aligned anchor points. Previous studies have shown that cognate filters are useful for the identification of such points. However, this restricts the approach to related languages with similar alphabets. Here, we propose a dictionary-based approach using automatic word alignment. We can show an improvement in alignment quality even for related languages compared to the cognate-based approach.",Synchronizing Translated Movie Subtitles,"This paper addresses the problem of synchronizing movie subtitles, which is necessary to improve alignment quality when building a parallel corpus out of translated subtitles. In particular, synchronization is done on the basis of aligned anchor points. Previous studies have shown that cognate filters are useful for the identification of such points. However, this restricts the approach to related languages with similar alphabets. Here, we propose a dictionary-based approach using automatic word alignment. We can show an improvement in alignment quality even for related languages compared to the cognate-based approach.",Synchronizing Translated Movie Subtitles,"This paper addresses the problem of synchronizing movie subtitles, which is necessary to improve alignment quality when building a parallel corpus out of translated subtitles. In particular, synchronization is done on the basis of aligned anchor points. Previous studies have shown that cognate filters are useful for the identification of such points. However, this restricts the approach to related languages with similar alphabets. Here, we propose a dictionary-based approach using automatic word alignment. We can show an improvement in alignment quality even for related languages compared to the cognate-based approach.",,"Synchronizing Translated Movie Subtitles. This paper addresses the problem of synchronizing movie subtitles, which is necessary to improve alignment quality when building a parallel corpus out of translated subtitles. In particular, synchronization is done on the basis of aligned anchor points. Previous studies have shown that cognate filters are useful for the identification of such points. However, this restricts the approach to related languages with similar alphabets. Here, we propose a dictionary-based approach using automatic word alignment. We can show an improvement in alignment quality even for related languages compared to the cognate-based approach.",2008
hsiao-etal-2017-integrating,https://aclanthology.org/I17-1098,0,,,,,,,"Integrating Subject, Type, and Property Identification for Simple Question Answering over Knowledge Base. This paper presents an approach to identify subject, type and property from knowledge base for answering simple questions. We propose new features to rank entity candidates in KB. Besides, we split a relation in KB into type and property. Each of them is modeled by a bi-directional LSTM. Experimental results show that our model achieves the state-of-the-art performance on the SimpleQuestions dataset. The hard questions in the experiments are also analyzed in detail.","Integrating Subject, Type, and Property Identification for Simple Question Answering over Knowledge Base","This paper presents an approach to identify subject, type and property from knowledge base for answering simple questions. We propose new features to rank entity candidates in KB. Besides, we split a relation in KB into type and property. Each of them is modeled by a bi-directional LSTM. Experimental results show that our model achieves the state-of-the-art performance on the SimpleQuestions dataset. The hard questions in the experiments are also analyzed in detail.","Integrating Subject, Type, and Property Identification for Simple Question Answering over Knowledge Base","This paper presents an approach to identify subject, type and property from knowledge base for answering simple questions. We propose new features to rank entity candidates in KB. Besides, we split a relation in KB into type and property. Each of them is modeled by a bi-directional LSTM. Experimental results show that our model achieves the state-of-the-art performance on the SimpleQuestions dataset. The hard questions in the experiments are also analyzed in detail.",,"Integrating Subject, Type, and Property Identification for Simple Question Answering over Knowledge Base. This paper presents an approach to identify subject, type and property from knowledge base for answering simple questions. We propose new features to rank entity candidates in KB. Besides, we split a relation in KB into type and property. Each of them is modeled by a bi-directional LSTM. Experimental results show that our model achieves the state-of-the-art performance on the SimpleQuestions dataset. The hard questions in the experiments are also analyzed in detail.",2017
kwong-2001-forming,https://aclanthology.org/Y01-1010,0,,,,,,,"Forming an Integrated Lexical Resource for Word Sense Disambiguation. This paper reports a full-scale linkage of noun senses between two existing lexical resources, namely WordNet and Roget's Thesaurus, to form an Integrated Lexical Resource (ILR) for use in natural language processing (NLP). The linkage was founded on a structurally-based sense-mapping algorithm. About 18,000 nouns with over 30,000 senses were mapped. Although exhaustive verification is impractical, we show that it is reasonable to expect some 70-80% accuracy of the resultant mappings. More importantly, the ILR, which contains enriched lexical information, is readily usable in many NLP tasks. We shall explore some practical use of the ILR in word sense disambiguation (WSD), as WSD notably requires a wide range of lexical information.",Forming an Integrated Lexical Resource for Word Sense Disambiguation,"This paper reports a full-scale linkage of noun senses between two existing lexical resources, namely WordNet and Roget's Thesaurus, to form an Integrated Lexical Resource (ILR) for use in natural language processing (NLP). The linkage was founded on a structurally-based sense-mapping algorithm. About 18,000 nouns with over 30,000 senses were mapped. Although exhaustive verification is impractical, we show that it is reasonable to expect some 70-80% accuracy of the resultant mappings. More importantly, the ILR, which contains enriched lexical information, is readily usable in many NLP tasks. We shall explore some practical use of the ILR in word sense disambiguation (WSD), as WSD notably requires a wide range of lexical information.",Forming an Integrated Lexical Resource for Word Sense Disambiguation,"This paper reports a full-scale linkage of noun senses between two existing lexical resources, namely WordNet and Roget's Thesaurus, to form an Integrated Lexical Resource (ILR) for use in natural language processing (NLP). The linkage was founded on a structurally-based sense-mapping algorithm. About 18,000 nouns with over 30,000 senses were mapped. Although exhaustive verification is impractical, we show that it is reasonable to expect some 70-80% accuracy of the resultant mappings. More importantly, the ILR, which contains enriched lexical information, is readily usable in many NLP tasks. We shall explore some practical use of the ILR in word sense disambiguation (WSD), as WSD notably requires a wide range of lexical information.","This work was done at the Computer Laboratory, University of Cambridge. The author would like to thank Prof. Karen Sparck Jones for her advice and comments. The work was financially supported by the Committee of Vice-Chancellors and Principals of the Universities of the United Kingdom, the Cambridge Commonwealth Trust, Downing College, and the Croucher Foundation.","Forming an Integrated Lexical Resource for Word Sense Disambiguation. This paper reports a full-scale linkage of noun senses between two existing lexical resources, namely WordNet and Roget's Thesaurus, to form an Integrated Lexical Resource (ILR) for use in natural language processing (NLP). The linkage was founded on a structurally-based sense-mapping algorithm. About 18,000 nouns with over 30,000 senses were mapped. Although exhaustive verification is impractical, we show that it is reasonable to expect some 70-80% accuracy of the resultant mappings. More importantly, the ILR, which contains enriched lexical information, is readily usable in many NLP tasks. 
We shall explore some practical use of the ILR in word sense disambiguation (WSD), as WSD notably requires a wide range of lexical information.",2001
hirasawa-komachi-2019-debiasing,https://aclanthology.org/W19-6604,0,,,,,,,"Debiasing Word Embeddings Improves Multimodal Machine Translation. In recent years, pretrained word embeddings have proved useful for multimodal neural machine translation (NMT) models to address the shortage of available datasets. However, the integration of pretrained word embeddings has not yet been explored extensively. Further, pretrained word embeddings in high dimensional spaces have been reported to suffer from the hubness problem. Although some debiasing techniques have been proposed to address this problem for other natural language processing tasks, they have seldom been studied for multimodal NMT models. In this study, we examine various kinds of word embeddings and introduce two debiasing techniques for three multimodal NMT models and two language pairs-English-German translation and English-French translation. With our optimal settings, the overall performance of multimodal models was improved by up to +1.62 BLEU and +1.14 METEOR for English-German translation and +1.40 BLEU and +1.13 METEOR for English-French translation.",Debiasing Word Embeddings Improves Multimodal Machine Translation,"In recent years, pretrained word embeddings have proved useful for multimodal neural machine translation (NMT) models to address the shortage of available datasets. However, the integration of pretrained word embeddings has not yet been explored extensively. Further, pretrained word embeddings in high dimensional spaces have been reported to suffer from the hubness problem. Although some debiasing techniques have been proposed to address this problem for other natural language processing tasks, they have seldom been studied for multimodal NMT models. In this study, we examine various kinds of word embeddings and introduce two debiasing techniques for three multimodal NMT models and two language pairs-English-German translation and English-French translation. With our optimal settings, the overall performance of multimodal models was improved by up to +1.62 BLEU and +1.14 METEOR for English-German translation and +1.40 BLEU and +1.13 METEOR for English-French translation.",Debiasing Word Embeddings Improves Multimodal Machine Translation,"In recent years, pretrained word embeddings have proved useful for multimodal neural machine translation (NMT) models to address the shortage of available datasets. However, the integration of pretrained word embeddings has not yet been explored extensively. Further, pretrained word embeddings in high dimensional spaces have been reported to suffer from the hubness problem. Although some debiasing techniques have been proposed to address this problem for other natural language processing tasks, they have seldom been studied for multimodal NMT models. In this study, we examine various kinds of word embeddings and introduce two debiasing techniques for three multimodal NMT models and two language pairs-English-German translation and English-French translation. With our optimal settings, the overall performance of multimodal models was improved by up to +1.62 BLEU and +1.14 METEOR for English-German translation and +1.40 BLEU and +1.13 METEOR for English-French translation.",This work was partially supported by JSPS Grantin-Aid for Scientific Research (C) Grant Number JP19K12099.,"Debiasing Word Embeddings Improves Multimodal Machine Translation. 
In recent years, pretrained word embeddings have proved useful for multimodal neural machine translation (NMT) models to address the shortage of available datasets. However, the integration of pretrained word embeddings has not yet been explored extensively. Further, pretrained word embeddings in high dimensional spaces have been reported to suffer from the hubness problem. Although some debiasing techniques have been proposed to address this problem for other natural language processing tasks, they have seldom been studied for multimodal NMT models. In this study, we examine various kinds of word embeddings and introduce two debiasing techniques for three multimodal NMT models and two language pairs-English-German translation and English-French translation. With our optimal settings, the overall performance of multimodal models was improved by up to +1.62 BLEU and +1.14 METEOR for English-German translation and +1.40 BLEU and +1.13 METEOR for English-French translation.",2019
van-deemter-etal-2017-investigating,https://aclanthology.org/W17-3532,0,,,,,,,"Investigating the content and form of referring expressions in Mandarin: introducing the Mtuna corpus. East Asian languages are thought to handle reference differently from English, particularly in terms of the marking of definiteness and number. We present the first Data-Text corpus for Referring Expressions in Mandarin, and we use this corpus to test some initial hypotheses inspired by the theoretical linguistics literature. Our findings suggest that function words deserve more attention in Referring Expression Generation than they have so far received, and they have a bearing on the debate about whether different languages make different trade-offs between clarity and brevity.",Investigating the content and form of referring expressions in {M}andarin: introducing the Mtuna corpus,"East Asian languages are thought to handle reference differently from English, particularly in terms of the marking of definiteness and number. We present the first Data-Text corpus for Referring Expressions in Mandarin, and we use this corpus to test some initial hypotheses inspired by the theoretical linguistics literature. Our findings suggest that function words deserve more attention in Referring Expression Generation than they have so far received, and they have a bearing on the debate about whether different languages make different trade-offs between clarity and brevity.",Investigating the content and form of referring expressions in Mandarin: introducing the Mtuna corpus,"East Asian languages are thought to handle reference differently from English, particularly in terms of the marking of definiteness and number. We present the first Data-Text corpus for Referring Expressions in Mandarin, and we use this corpus to test some initial hypotheses inspired by the theoretical linguistics literature. Our findings suggest that function words deserve more attention in Referring Expression Generation than they have so far received, and they have a bearing on the debate about whether different languages make different trade-offs between clarity and brevity.","This work is partly supported by the National Natural Science Foundation of China, Grant no. 61433015. We thank Stephen Matthews, University of Hong Kong, for comments, and Albert Gatt, University of Malta, for access to Dutch TUNA.","Investigating the content and form of referring expressions in Mandarin: introducing the Mtuna corpus. East Asian languages are thought to handle reference differently from English, particularly in terms of the marking of definiteness and number. We present the first Data-Text corpus for Referring Expressions in Mandarin, and we use this corpus to test some initial hypotheses inspired by the theoretical linguistics literature. Our findings suggest that function words deserve more attention in Referring Expression Generation than they have so far received, and they have a bearing on the debate about whether different languages make different trade-offs between clarity and brevity.",2017
ruan-etal-2016-finding,https://aclanthology.org/P16-2052,1,,,,health,,,"Finding Optimists and Pessimists on Twitter. Optimism is linked to various personality factors as well as both psychological and physical health, but how does it relate to the way a person tweets? We analyze the online activity of a set of Twitter users in order to determine how well machine learning algorithms can detect a person's outlook on life by reading their tweets. A sample of tweets from each user is manually annotated in order to establish ground truth labels, and classifiers are trained to distinguish between optimistic and pessimistic users. Our results suggest that the words in people's tweets provide ample evidence to identify them as optimists, pessimists, or somewhere in between. Additionally, several applications of these trained models are explored.",Finding Optimists and Pessimists on {T}witter,"Optimism is linked to various personality factors as well as both psychological and physical health, but how does it relate to the way a person tweets? We analyze the online activity of a set of Twitter users in order to determine how well machine learning algorithms can detect a person's outlook on life by reading their tweets. A sample of tweets from each user is manually annotated in order to establish ground truth labels, and classifiers are trained to distinguish between optimistic and pessimistic users. Our results suggest that the words in people's tweets provide ample evidence to identify them as optimists, pessimists, or somewhere in between. Additionally, several applications of these trained models are explored.",Finding Optimists and Pessimists on Twitter,"Optimism is linked to various personality factors as well as both psychological and physical health, but how does it relate to the way a person tweets? We analyze the online activity of a set of Twitter users in order to determine how well machine learning algorithms can detect a person's outlook on life by reading their tweets. A sample of tweets from each user is manually annotated in order to establish ground truth labels, and classifiers are trained to distinguish between optimistic and pessimistic users. Our results suggest that the words in people's tweets provide ample evidence to identify them as optimists, pessimists, or somewhere in between. Additionally, several applications of these trained models are explored.","We would like to thank Seong Ju Park, Tian Bao, and Yihan Li for their assistance in the initial project that led to this work. This material is based in part upon work supported by the National Science Foundation award #1344257 and by grant #48503 from the John Templeton Foundation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or the John Templeton Foundation.","Finding Optimists and Pessimists on Twitter. Optimism is linked to various personality factors as well as both psychological and physical health, but how does it relate to the way a person tweets? We analyze the online activity of a set of Twitter users in order to determine how well machine learning algorithms can detect a person's outlook on life by reading their tweets. A sample of tweets from each user is manually annotated in order to establish ground truth labels, and classifiers are trained to distinguish between optimistic and pessimistic users. 
Our results suggest that the words in people's tweets provide ample evidence to identify them as optimists, pessimists, or somewhere in between. Additionally, several applications of these trained models are explored.",2016
mirzaei-etal-2016-automatic,https://aclanthology.org/W16-4122,1,,,,education,,,"Automatic Speech Recognition Errors as a Predictor of L2 Listening Difficulties. This paper investigates the use of automatic speech recognition (ASR) errors as indicators of the second language (L2) learners' listening difficulties and in doing so strives to overcome the shortcomings of Partial and Synchronized Caption (PSC) system. PSC is a system that generates a partial caption including difficult words detected based on high speech rate, low frequency, and specificity. To improve the choice of words in this system, and explore a better method to detect speech challenges, ASR errors were investigated as a model of the L2 listener, hypothesizing that some of these errors are similar to those of language learners' when transcribing the videos. To investigate this hypothesis, ASR errors in transcription of several TED talks were analyzed and compared with PSC's selected words. Both the overlapping and mismatching cases were analyzed to investigate possible improvement for the PSC system. Those ASR errors that were not detected by PSC as cases of learners' difficulties were further analyzed and classified into four categories: homophones, minimal pairs, breached boundaries and negatives. These errors were embedded into the baseline PSC to make the enhanced version and were evaluated in an experiment with L2 learners. The results indicated that the enhanced version, which encompasses the ASR errors addresses most of the L2 learners' difficulties and better assists them in comprehending challenging video segments as compared with the baseline.",Automatic Speech Recognition Errors as a Predictor of {L}2 Listening Difficulties,"This paper investigates the use of automatic speech recognition (ASR) errors as indicators of the second language (L2) learners' listening difficulties and in doing so strives to overcome the shortcomings of Partial and Synchronized Caption (PSC) system. PSC is a system that generates a partial caption including difficult words detected based on high speech rate, low frequency, and specificity. To improve the choice of words in this system, and explore a better method to detect speech challenges, ASR errors were investigated as a model of the L2 listener, hypothesizing that some of these errors are similar to those of language learners' when transcribing the videos. To investigate this hypothesis, ASR errors in transcription of several TED talks were analyzed and compared with PSC's selected words. Both the overlapping and mismatching cases were analyzed to investigate possible improvement for the PSC system. Those ASR errors that were not detected by PSC as cases of learners' difficulties were further analyzed and classified into four categories: homophones, minimal pairs, breached boundaries and negatives. These errors were embedded into the baseline PSC to make the enhanced version and were evaluated in an experiment with L2 learners. 
The results indicated that the enhanced version, which encompasses the ASR errors addresses most of the L2 learners' difficulties and better assists them in comprehending challenging video segments as compared with the baseline.",Automatic Speech Recognition Errors as a Predictor of L2 Listening Difficulties,"This paper investigates the use of automatic speech recognition (ASR) errors as indicators of the second language (L2) learners' listening difficulties and in doing so strives to overcome the shortcomings of Partial and Synchronized Caption (PSC) system. PSC is a system that generates a partial caption including difficult words detected based on high speech rate, low frequency, and specificity. To improve the choice of words in this system, and explore a better method to detect speech challenges, ASR errors were investigated as a model of the L2 listener, hypothesizing that some of these errors are similar to those of language learners' when transcribing the videos. To investigate this hypothesis, ASR errors in transcription of several TED talks were analyzed and compared with PSC's selected words. Both the overlapping and mismatching cases were analyzed to investigate possible improvement for the PSC system. Those ASR errors that were not detected by PSC as cases of learners' difficulties were further analyzed and classified into four categories: homophones, minimal pairs, breached boundaries and negatives. These errors were embedded into the baseline PSC to make the enhanced version and were evaluated in an experiment with L2 learners. The results indicated that the enhanced version, which encompasses the ASR errors addresses most of the L2 learners' difficulties and better assists them in comprehending challenging video segments as compared with the baseline.",,"Automatic Speech Recognition Errors as a Predictor of L2 Listening Difficulties. This paper investigates the use of automatic speech recognition (ASR) errors as indicators of the second language (L2) learners' listening difficulties and in doing so strives to overcome the shortcomings of Partial and Synchronized Caption (PSC) system. PSC is a system that generates a partial caption including difficult words detected based on high speech rate, low frequency, and specificity. To improve the choice of words in this system, and explore a better method to detect speech challenges, ASR errors were investigated as a model of the L2 listener, hypothesizing that some of these errors are similar to those of language learners' when transcribing the videos. To investigate this hypothesis, ASR errors in transcription of several TED talks were analyzed and compared with PSC's selected words. Both the overlapping and mismatching cases were analyzed to investigate possible improvement for the PSC system. Those ASR errors that were not detected by PSC as cases of learners' difficulties were further analyzed and classified into four categories: homophones, minimal pairs, breached boundaries and negatives. These errors were embedded into the baseline PSC to make the enhanced version and were evaluated in an experiment with L2 learners. The results indicated that the enhanced version, which encompasses the ASR errors addresses most of the L2 learners' difficulties and better assists them in comprehending challenging video segments as compared with the baseline.",2016
forcada-2000-learning,https://aclanthology.org/2000.bcs-1.7,0,,,,business_use,,,Learning machine translation strategies using commercial systems: discovering word reordering rules. ,Learning machine translation strategies using commercial systems: discovering word reordering rules,,Learning machine translation strategies using commercial systems: discovering word reordering rules,,,Learning machine translation strategies using commercial systems: discovering word reordering rules. ,2000
patankar-etal-2022-optimize,https://aclanthology.org/2022.dravidianlangtech-1.36,1,,,,hate_speech,,,"Optimize\_Prime@DravidianLangTech-ACL2022: Abusive Comment Detection in Tamil. This paper tries to address the problem of abusive comment detection in low-resource indic languages. Abusive comments are statements that are offensive to a person or a group of people. These comments are targeted toward individuals belonging to specific ethnicities, genders, caste, race, sexuality, etc. Abusive Comment Detection is a significant problem, especially with the recent rise in social media users. This paper presents the approach used by our team-Optimize_Prime, in the ACL 2022 shared task ""Abusive Comment Detection in Tamil."" This task detects and classifies YouTube comments in Tamil and Tamil-English Codemixed format into multiple categories. We have used three methods to optimize our results: Ensemble models, Recurrent Neural Networks, and Transformers. In the Tamil data, MuRIL and XLM-RoBERTA were our best performing models with a macro-averaged f1 score of 0.43. Furthermore, for the Codemixed data, MuRIL and M-BERT provided sublime results, with a macro-averaged f1 score of 0.45.",{O}ptimize{\_}{P}rime@{D}ravidian{L}ang{T}ech-{ACL}2022: Abusive Comment Detection in {T}amil,"This paper tries to address the problem of abusive comment detection in low-resource indic languages. Abusive comments are statements that are offensive to a person or a group of people. These comments are targeted toward individuals belonging to specific ethnicities, genders, caste, race, sexuality, etc. Abusive Comment Detection is a significant problem, especially with the recent rise in social media users. This paper presents the approach used by our team-Optimize_Prime, in the ACL 2022 shared task ""Abusive Comment Detection in Tamil."" This task detects and classifies YouTube comments in Tamil and Tamil-English Codemixed format into multiple categories. We have used three methods to optimize our results: Ensemble models, Recurrent Neural Networks, and Transformers. In the Tamil data, MuRIL and XLM-RoBERTA were our best performing models with a macro-averaged f1 score of 0.43. Furthermore, for the Codemixed data, MuRIL and M-BERT provided sublime results, with a macro-averaged f1 score of 0.45.",Optimize\_Prime@DravidianLangTech-ACL2022: Abusive Comment Detection in Tamil,"This paper tries to address the problem of abusive comment detection in low-resource indic languages. Abusive comments are statements that are offensive to a person or a group of people. These comments are targeted toward individuals belonging to specific ethnicities, genders, caste, race, sexuality, etc. Abusive Comment Detection is a significant problem, especially with the recent rise in social media users. This paper presents the approach used by our team-Optimize_Prime, in the ACL 2022 shared task ""Abusive Comment Detection in Tamil."" This task detects and classifies YouTube comments in Tamil and Tamil-English Codemixed format into multiple categories. We have used three methods to optimize our results: Ensemble models, Recurrent Neural Networks, and Transformers. In the Tamil data, MuRIL and XLM-RoBERTA were our best performing models with a macro-averaged f1 score of 0.43. Furthermore, for the Codemixed data, MuRIL and M-BERT provided sublime results, with a macro-averaged f1 score of 0.45.",,"Optimize\_Prime@DravidianLangTech-ACL2022: Abusive Comment Detection in Tamil. 
This paper tries to address the problem of abusive comment detection in low-resource indic languages. Abusive comments are statements that are offensive to a person or a group of people. These comments are targeted toward individuals belonging to specific ethnicities, genders, caste, race, sexuality, etc. Abusive Comment Detection is a significant problem, especially with the recent rise in social media users. This paper presents the approach used by our team-Optimize_Prime, in the ACL 2022 shared task ""Abusive Comment Detection in Tamil."" This task detects and classifies YouTube comments in Tamil and Tamil-English Codemixed format into multiple categories. We have used three methods to optimize our results: Ensemble models, Recurrent Neural Networks, and Transformers. In the Tamil data, MuRIL and XLM-RoBERTA were our best performing models with a macro-averaged f1 score of 0.43. Furthermore, for the Codemixed data, MuRIL and M-BERT provided sublime results, with a macro-averaged f1 score of 0.45.",2022
cieri-etal-2004-fisher,http://www.lrec-conf.org/proceedings/lrec2004/pdf/767.pdf,0,,,,,,,"The Fisher Corpus: a Resource for the Next Generations of Speech-to-Text. This paper describes, within the context of the DARPA EARS program, the design and implementation of the Fisher protocol for collecting conversational telephone speech which has yielded more than 16,000 English conversations. It also discusses the Quick Transcription specification that allowed 2000 hours of Fisher audio to be transcribed in less than one year. Fisher data is already in use within the DARPA EARS programs and will be published via the Linguistic Data Consortium for general use beginning in 2004.",The Fisher Corpus: a Resource for the Next Generations of Speech-to-Text,"This paper describes, within the context of the DARPA EARS program, the design and implementation of the Fisher protocol for collecting conversational telephone speech which has yielded more than 16,000 English conversations. It also discusses the Quick Transcription specification that allowed 2000 hours of Fisher audio to be transcribed in less than one year. Fisher data is already in use within the DARPA EARS programs and will be published via the Linguistic Data Consortium for general use beginning in 2004.",The Fisher Corpus: a Resource for the Next Generations of Speech-to-Text,"This paper describes, within the context of the DARPA EARS program, the design and implementation of the Fisher protocol for collecting conversational telephone speech which has yielded more than 16,000 English conversations. It also discusses the Quick Transcription specification that allowed 2000 hours of Fisher audio to be transcribed in less than one year. Fisher data is already in use within the DARPA EARS programs and will be published via the Linguistic Data Consortium for general use beginning in 2004.",,"The Fisher Corpus: a Resource for the Next Generations of Speech-to-Text. This paper describes, within the context of the DARPA EARS program, the design and implementation of the Fisher protocol for collecting conversational telephone speech which has yielded more than 16,000 English conversations. It also discusses the Quick Transcription specification that allowed 2000 hours of Fisher audio to be transcribed in less than one year. Fisher data is already in use within the DARPA EARS programs and will be published via the Linguistic Data Consortium for general use beginning in 2004.",2004
zhang-etal-2020-query,https://aclanthology.org/2020.coling-industry.4,0,,,,,,,"Query Distillation: BERT-based Distillation for Ensemble Ranking. Recent years have witnessed substantial progress in the development of neural ranking networks, but also an increasingly heavy computational burden due to growing numbers of parameters and the adoption of model ensembles. Knowledge Distillation (KD) is a common solution to balance the effectiveness and efficiency. However, it is not straightforward to apply KD to ranking problems. Ranking Distillation (RD) has been proposed to address this issue, but only shows effectiveness on recommendation tasks. We present a novel two-stage distillation method for ranking problems that allows a smaller student model to be trained while benefitting from the better performance of the teacher model, providing better control of the inference latency and computational burden. We design a novel BERT-based ranking model structure for list-wise ranking to serve as our student model. All ranking candidates are fed to the BERT model simultaneously, such that the self-attention mechanism can enable joint inference to rank the document list. Our experiments confirm the advantages of our method, not just with regard to the inference latency but also in terms of higher-quality rankings compared to the original teacher model.",Query Distillation: {BERT}-based Distillation for Ensemble Ranking,"Recent years have witnessed substantial progress in the development of neural ranking networks, but also an increasingly heavy computational burden due to growing numbers of parameters and the adoption of model ensembles. Knowledge Distillation (KD) is a common solution to balance the effectiveness and efficiency. However, it is not straightforward to apply KD to ranking problems. Ranking Distillation (RD) has been proposed to address this issue, but only shows effectiveness on recommendation tasks. We present a novel two-stage distillation method for ranking problems that allows a smaller student model to be trained while benefitting from the better performance of the teacher model, providing better control of the inference latency and computational burden. We design a novel BERT-based ranking model structure for list-wise ranking to serve as our student model. All ranking candidates are fed to the BERT model simultaneously, such that the self-attention mechanism can enable joint inference to rank the document list. Our experiments confirm the advantages of our method, not just with regard to the inference latency but also in terms of higher-quality rankings compared to the original teacher model.",Query Distillation: BERT-based Distillation for Ensemble Ranking,"Recent years have witnessed substantial progress in the development of neural ranking networks, but also an increasingly heavy computational burden due to growing numbers of parameters and the adoption of model ensembles. Knowledge Distillation (KD) is a common solution to balance the effectiveness and efficiency. However, it is not straightforward to apply KD to ranking problems. Ranking Distillation (RD) has been proposed to address this issue, but only shows effectiveness on recommendation tasks. We present a novel two-stage distillation method for ranking problems that allows a smaller student model to be trained while benefitting from the better performance of the teacher model, providing better control of the inference latency and computational burden. 
We design a novel BERT-based ranking model structure for list-wise ranking to serve as our student model. All ranking candidates are fed to the BERT model simultaneously, such that the self-attention mechanism can enable joint inference to rank the document list. Our experiments confirm the advantages of our method, not just with regard to the inference latency but also in terms of higher-quality rankings compared to the original teacher model.",,"Query Distillation: BERT-based Distillation for Ensemble Ranking. Recent years have witnessed substantial progress in the development of neural ranking networks, but also an increasingly heavy computational burden due to growing numbers of parameters and the adoption of model ensembles. Knowledge Distillation (KD) is a common solution to balance the effectiveness and efficiency. However, it is not straightforward to apply KD to ranking problems. Ranking Distillation (RD) has been proposed to address this issue, but only shows effectiveness on recommendation tasks. We present a novel two-stage distillation method for ranking problems that allows a smaller student model to be trained while benefitting from the better performance of the teacher model, providing better control of the inference latency and computational burden. We design a novel BERT-based ranking model structure for list-wise ranking to serve as our student model. All ranking candidates are fed to the BERT model simultaneously, such that the self-attention mechanism can enable joint inference to rank the document list. Our experiments confirm the advantages of our method, not just with regard to the inference latency but also in terms of higher-quality rankings compared to the original teacher model.",2020
liu-etal-2016-agreement,https://aclanthology.org/N16-1046,0,,,,,,,"Agreement on Target-bidirectional Neural Machine Translation. Neural machine translation (NMT) with recurrent neural networks, has proven to be an effective technique for end-to-end machine translation. However, in spite of its promising advances over traditional translation methods, it typically suffers from an issue of unbalanced outputs, that arise from both the nature of recurrent neural networks themselves, and the challenges inherent in machine translation. To overcome this issue, we propose an agreement model for neural machine translation and show its effectiveness on large-scale Japanese-to-English and Chinese-to-English translation tasks. Our results show the model can achieve improvements of up to 1.4 BLEU over the strongest baseline NMT system. With the help of an ensemble technique, this new end-to-end NMT approach finally outperformed phrase-based and hierarchical phrase-based Moses baselines by up to 5.6 BLEU points.",Agreement on Target-bidirectional Neural Machine Translation,"Neural machine translation (NMT) with recurrent neural networks, has proven to be an effective technique for end-to-end machine translation. However, in spite of its promising advances over traditional translation methods, it typically suffers from an issue of unbalanced outputs, that arise from both the nature of recurrent neural networks themselves, and the challenges inherent in machine translation. To overcome this issue, we propose an agreement model for neural machine translation and show its effectiveness on large-scale Japanese-to-English and Chinese-to-English translation tasks. Our results show the model can achieve improvements of up to 1.4 BLEU over the strongest baseline NMT system. With the help of an ensemble technique, this new end-to-end NMT approach finally outperformed phrase-based and hierarchical phrase-based Moses baselines by up to 5.6 BLEU points.",Agreement on Target-bidirectional Neural Machine Translation,"Neural machine translation (NMT) with recurrent neural networks, has proven to be an effective technique for end-to-end machine translation. However, in spite of its promising advances over traditional translation methods, it typically suffers from an issue of unbalanced outputs, that arise from both the nature of recurrent neural networks themselves, and the challenges inherent in machine translation. To overcome this issue, we propose an agreement model for neural machine translation and show its effectiveness on large-scale Japanese-to-English and Chinese-to-English translation tasks. Our results show the model can achieve improvements of up to 1.4 BLEU over the strongest baseline NMT system. With the help of an ensemble technique, this new end-to-end NMT approach finally outperformed phrase-based and hierarchical phrase-based Moses baselines by up to 5.6 BLEU points.","We would like to thank the three anonymous reviewers for helpful comments and suggestions. In addition, we would like to thank Rico Sennrich for fruitful discussions.","Agreement on Target-bidirectional Neural Machine Translation. Neural machine translation (NMT) with recurrent neural networks, has proven to be an effective technique for end-to-end machine translation. However, in spite of its promising advances over traditional translation methods, it typically suffers from an issue of unbalanced outputs, that arise from both the nature of recurrent neural networks themselves, and the challenges inherent in machine translation. 
To overcome this issue, we propose an agreement model for neural machine translation and show its effectiveness on large-scale Japanese-to-English and Chinese-to-English translation tasks. Our results show the model can achieve improvements of up to 1.4 BLEU over the strongest baseline NMT system. With the help of an ensemble technique, this new end-to-end NMT approach finally outperformed phrase-based and hierarchical phrase-based Moses baselines by up to 5.6 BLEU points.",2016
rapp-2006-exploring,https://aclanthology.org/E06-2018,0,,,,,,,"Exploring the Sense Distributions of Homographs. This paper quantitatively investigates in how far local context is useful to disambiguate the senses of an ambiguous word. This is done by comparing the co-occurrence frequencies of particular context words. First, one context word representing a certain sense is chosen, and then the co-occurrence frequencies with two other context words, one of the same and one of another sense, are compared. As expected, it turns out that context words belonging to the same sense have considerably higher co-occurrence frequencies than words belonging to different senses. In our study, the sense inventory is taken from the University of South Florida homograph norms, and the co-occurrence counts are based on the British National Corpus.",Exploring the Sense Distributions of Homographs,"This paper quantitatively investigates in how far local context is useful to disambiguate the senses of an ambiguous word. This is done by comparing the co-occurrence frequencies of particular context words. First, one context word representing a certain sense is chosen, and then the co-occurrence frequencies with two other context words, one of the same and one of another sense, are compared. As expected, it turns out that context words belonging to the same sense have considerably higher co-occurrence frequencies than words belonging to different senses. In our study, the sense inventory is taken from the University of South Florida homograph norms, and the co-occurrence counts are based on the British National Corpus.",Exploring the Sense Distributions of Homographs,"This paper quantitatively investigates in how far local context is useful to disambiguate the senses of an ambiguous word. This is done by comparing the co-occurrence frequencies of particular context words. First, one context word representing a certain sense is chosen, and then the co-occurrence frequencies with two other context words, one of the same and one of another sense, are compared. As expected, it turns out that context words belonging to the same sense have considerably higher co-occurrence frequencies than words belonging to different senses. In our study, the sense inventory is taken from the University of South Florida homograph norms, and the co-occurrence counts are based on the British National Corpus.",I would like to thank the three anonymous reviewers for their detailed and helpful comments.,"Exploring the Sense Distributions of Homographs. This paper quantitatively investigates in how far local context is useful to disambiguate the senses of an ambiguous word. This is done by comparing the co-occurrence frequencies of particular context words. First, one context word representing a certain sense is chosen, and then the co-occurrence frequencies with two other context words, one of the same and one of another sense, are compared. As expected, it turns out that context words belonging to the same sense have considerably higher co-occurrence frequencies than words belonging to different senses. In our study, the sense inventory is taken from the University of South Florida homograph norms, and the co-occurrence counts are based on the British National Corpus.",2006
ding-etal-2020-coupling,https://aclanthology.org/2020.acl-main.595,0,,,,,,,"Coupling Distant Annotation and Adversarial Training for Cross-Domain Chinese Word Segmentation. Fully supervised neural approaches have achieved significant progress in the task of Chinese word segmentation (CWS). Nevertheless, the performance of supervised models tends to drop dramatically when they are applied to out-of-domain data. Performance degradation is caused by the distribution gap across domains and the out of vocabulary (OOV) problem. In order to simultaneously alleviate these two issues, this paper proposes to couple distant annotation and adversarial training for cross-domain CWS. For distant annotation, we rethink the essence of ""Chinese words"" and design an automatic distant annotation mechanism that does not need any supervision or pre-defined dictionaries from the target domain. The approach could effectively explore domain-specific words and distantly annotate the raw texts for the target domain. For adversarial training, we develop a sentence-level training procedure to perform noise reduction and maximum utilization of the source domain information. Experiments on multiple real-world datasets across various domains show the superiority and robustness of our model, significantly outperforming previous state-of-the-art cross-domain CWS methods.",Coupling Distant Annotation and Adversarial Training for Cross-Domain {C}hinese Word Segmentation,"Fully supervised neural approaches have achieved significant progress in the task of Chinese word segmentation (CWS). Nevertheless, the performance of supervised models tends to drop dramatically when they are applied to out-of-domain data. Performance degradation is caused by the distribution gap across domains and the out of vocabulary (OOV) problem. In order to simultaneously alleviate these two issues, this paper proposes to couple distant annotation and adversarial training for cross-domain CWS. For distant annotation, we rethink the essence of ""Chinese words"" and design an automatic distant annotation mechanism that does not need any supervision or pre-defined dictionaries from the target domain. The approach could effectively explore domain-specific words and distantly annotate the raw texts for the target domain. For adversarial training, we develop a sentence-level training procedure to perform noise reduction and maximum utilization of the source domain information. Experiments on multiple real-world datasets across various domains show the superiority and robustness of our model, significantly outperforming previous state-of-the-art cross-domain CWS methods.",Coupling Distant Annotation and Adversarial Training for Cross-Domain Chinese Word Segmentation,"Fully supervised neural approaches have achieved significant progress in the task of Chinese word segmentation (CWS). Nevertheless, the performance of supervised models tends to drop dramatically when they are applied to out-of-domain data. Performance degradation is caused by the distribution gap across domains and the out of vocabulary (OOV) problem. In order to simultaneously alleviate these two issues, this paper proposes to couple distant annotation and adversarial training for cross-domain CWS. For distant annotation, we rethink the essence of ""Chinese words"" and design an automatic distant annotation mechanism that does not need any supervision or pre-defined dictionaries from the target domain. 
The approach could effectively explore domain-specific words and distantly annotate the raw texts for the target domain. For adversarial training, we develop a sentence-level training procedure to perform noise reduction and maximum utilization of the source domain information. Experiments on multiple real-world datasets across various domains show the superiority and robustness of our model, significantly outperforming previous state-of-the-art cross-domain CWS methods.","We sincerely thank all the reviewers for their insightful comments and suggestions. This research is partially supported by National Natural Science Foundation of China (Grant No. 61773229 and 61972219), the Basic Research Fund of Shenzhen City (Grant No. JCYJ20190813165003837), and Overseas Cooperation Research Fund of Graduate School at Shenzhen, Tsinghua University (Grant No. HW2018002).","Coupling Distant Annotation and Adversarial Training for Cross-Domain Chinese Word Segmentation. Fully supervised neural approaches have achieved significant progress in the task of Chinese word segmentation (CWS). Nevertheless, the performance of supervised models tends to drop dramatically when they are applied to out-of-domain data. Performance degradation is caused by the distribution gap across domains and the out of vocabulary (OOV) problem. In order to simultaneously alleviate these two issues, this paper proposes to couple distant annotation and adversarial training for cross-domain CWS. For distant annotation, we rethink the essence of ""Chinese words"" and design an automatic distant annotation mechanism that does not need any supervision or pre-defined dictionaries from the target domain. The approach could effectively explore domain-specific words and distantly annotate the raw texts for the target domain. For adversarial training, we develop a sentence-level training procedure to perform noise reduction and maximum utilization of the source domain information. Experiments on multiple real-world datasets across various domains show the superiority and robustness of our model, significantly outperforming previous state-of-the-art cross-domain CWS methods.",2020
broeder-etal-2010-data,http://www.lrec-conf.org/proceedings/lrec2010/pdf/163_Paper.pdf,0,,,,,,,"A Data Category Registry- and Component-based Metadata Framework. We describe our computer-supported framework to overcome the rule of metadata schism. It combines the use of controlled vocabularies, managed by a data category registry, with a component-based approach, where the categories can be combined to yield complex metadata structures. A metadata scheme devised in this way will thus be grounded in its use of categories. Schema designers will profit from existing prefabricated larger building blocks, motivating re-use at a larger scale. The common base of any two metadata schemes within this framework will solve, at least to a good extent, the semantic interoperability problem, and consequently, further promote systematic use of metadata for existing resources and tools to be shared.",A Data Category Registry- and Component-based Metadata Framework,"We describe our computer-supported framework to overcome the rule of metadata schism. It combines the use of controlled vocabularies, managed by a data category registry, with a component-based approach, where the categories can be combined to yield complex metadata structures. A metadata scheme devised in this way will thus be grounded in its use of categories. Schema designers will profit from existing prefabricated larger building blocks, motivating re-use at a larger scale. The common base of any two metadata schemes within this framework will solve, at least to a good extent, the semantic interoperability problem, and consequently, further promote systematic use of metadata for existing resources and tools to be shared.",A Data Category Registry- and Component-based Metadata Framework,"We describe our computer-supported framework to overcome the rule of metadata schism. It combines the use of controlled vocabularies, managed by a data category registry, with a component-based approach, where the categories can be combined to yield complex metadata structures. A metadata scheme devised in this way will thus be grounded in its use of categories. Schema designers will profit from existing prefabricated larger building blocks, motivating re-use at a larger scale. The common base of any two metadata schemes within this framework will solve, at least to a good extent, the semantic interoperability problem, and consequently, further promote systematic use of metadata for existing resources and tools to be shared.",,"A Data Category Registry- and Component-based Metadata Framework. We describe our computer-supported framework to overcome the rule of metadata schism. It combines the use of controlled vocabularies, managed by a data category registry, with a component-based approach, where the categories can be combined to yield complex metadata structures. A metadata scheme devised in this way will thus be grounded in its use of categories. Schema designers will profit from existing prefabricated larger building blocks, motivating re-use at a larger scale. The common base of any two metadata schemes within this framework will solve, at least to a good extent, the semantic interoperability problem, and consequently, further promote systematic use of metadata for existing resources and tools to be shared.",2010
sinclair-etal-2018-ability,https://aclanthology.org/W18-5005,1,,,,education,,,"Does Ability Affect Alignment in Second Language Tutorial Dialogue?. The role of alignment between interlocutors in second language learning is different to that in fluent conversational dialogue. Learners gain linguistic skill through increased alignment, yet the extent to which they can align will be constrained by their ability. Tutors may use alignment to teach and encourage the student, yet still must push the student and correct their errors, decreasing alignment. To understand how learner ability interacts with alignment, we measure the influence of ability on lexical priming, an indicator of alignment. We find that lexical priming in learner-tutor dialogues differs from that in conversational and task-based dialogues, and we find evidence that alignment increases with ability and with word complexity.",Does Ability Affect Alignment in Second Language Tutorial Dialogue?,"The role of alignment between interlocutors in second language learning is different to that in fluent conversational dialogue. Learners gain linguistic skill through increased alignment, yet the extent to which they can align will be constrained by their ability. Tutors may use alignment to teach and encourage the student, yet still must push the student and correct their errors, decreasing alignment. To understand how learner ability interacts with alignment, we measure the influence of ability on lexical priming, an indicator of alignment. We find that lexical priming in learner-tutor dialogues differs from that in conversational and task-based dialogues, and we find evidence that alignment increases with ability and with word complexity.",Does Ability Affect Alignment in Second Language Tutorial Dialogue?,"The role of alignment between interlocutors in second language learning is different to that in fluent conversational dialogue. Learners gain linguistic skill through increased alignment, yet the extent to which they can align will be constrained by their ability. Tutors may use alignment to teach and encourage the student, yet still must push the student and correct their errors, decreasing alignment. To understand how learner ability interacts with alignment, we measure the influence of ability on lexical priming, an indicator of alignment. We find that lexical priming in learner-tutor dialogues differs from that in conversational and task-based dialogues, and we find evidence that alignment increases with ability and with word complexity.","Thanks to Amy Isard, Maria Gorinova, Maria Wolters, Federico Fancellu, Sorcha Gilroy, Clara Vania and Marco Damonte as well as the three anonymous reviewers for their useful comments in relation to this paper. A. Sinclair especially acknowledges the help and support of Jon Oberlander during the early development of this idea.","Does Ability Affect Alignment in Second Language Tutorial Dialogue?. The role of alignment between interlocutors in second language learning is different to that in fluent conversational dialogue. Learners gain linguistic skill through increased alignment, yet the extent to which they can align will be constrained by their ability. Tutors may use alignment to teach and encourage the student, yet still must push the student and correct their errors, decreasing alignment. To understand how learner ability interacts with alignment, we measure the influence of ability on lexical priming, an indicator of alignment. 
We find that lexical priming in learner-tutor dialogues differs from that in conversational and task-based dialogues, and we find evidence that alignment increases with ability and with word complexity.",2018
shudo-etal-2000-collocations,http://www.lrec-conf.org/proceedings/lrec2000/pdf/2.pdf,0,,,,,,,Collocations as Word Co-ocurrence Restriction Data - An Application to Japanese Word Processor -. ,Collocations as Word Co-ocurrence Restriction Data - An Application to {J}apanese Word Processor -,,Collocations as Word Co-ocurrence Restriction Data - An Application to Japanese Word Processor -,,,Collocations as Word Co-ocurrence Restriction Data - An Application to Japanese Word Processor -. ,2000
khlyzova-etal-2022-complementarity,https://aclanthology.org/2022.wassa-1.1,0,,,,,,,"On the Complementarity of Images and Text for the Expression of Emotions in Social Media. Authors of posts in social media communicate their emotions and what causes them with text and images. While there is work on emotion and stimulus detection for each modality separately, it is yet unknown if the modalities contain complementary emotion information in social media. We aim at filling this research gap and contribute a novel, annotated corpus of English multimodal Reddit posts. On this resource, we develop models to automatically detect the relation between image and text, an emotion stimulus category and the emotion class. We evaluate if these tasks require both modalities and find for the image-text relations, that text alone is sufficient for most categories (complementary, illustrative, opposing): the information in the text allows to predict if an image is required for emotion understanding. The emotions of anger and sadness are best predicted with a multimodal model, while text alone is sufficient for disgust, joy, and surprise. Stimuli depicted by objects, animals, food, or a person are best predicted by image-only models, while multimodal models are most effective on art, events, memes, places, or screenshots. My everyday joy is to see my adorable cat smiles. And I've just realized, my cat can ""dance with music"". Amazing!",On the Complementarity of Images and Text for the Expression of Emotions in Social Media,"Authors of posts in social media communicate their emotions and what causes them with text and images. While there is work on emotion and stimulus detection for each modality separately, it is yet unknown if the modalities contain complementary emotion information in social media. We aim at filling this research gap and contribute a novel, annotated corpus of English multimodal Reddit posts. On this resource, we develop models to automatically detect the relation between image and text, an emotion stimulus category and the emotion class. We evaluate if these tasks require both modalities and find for the image-text relations, that text alone is sufficient for most categories (complementary, illustrative, opposing): the information in the text allows to predict if an image is required for emotion understanding. The emotions of anger and sadness are best predicted with a multimodal model, while text alone is sufficient for disgust, joy, and surprise. Stimuli depicted by objects, animals, food, or a person are best predicted by image-only models, while multimodal models are most effective on art, events, memes, places, or screenshots. My everyday joy is to see my adorable cat smiles. And I've just realized, my cat can ""dance with music"". Amazing!",On the Complementarity of Images and Text for the Expression of Emotions in Social Media,"Authors of posts in social media communicate their emotions and what causes them with text and images. While there is work on emotion and stimulus detection for each modality separately, it is yet unknown if the modalities contain complementary emotion information in social media. We aim at filling this research gap and contribute a novel, annotated corpus of English multimodal Reddit posts. On this resource, we develop models to automatically detect the relation between image and text, an emotion stimulus category and the emotion class. 
We evaluate if these tasks require both modalities and find for the image-text relations, that text alone is sufficient for most categories (complementary, illustrative, opposing): the information in the text allows to predict if an image is required for emotion understanding. The emotions of anger and sadness are best predicted with a multimodal model, while text alone is sufficient for disgust, joy, and surprise. Stimuli depicted by objects, animals, food, or a person are best predicted by image-only models, while multimodal models are most effective on art, events, memes, places, or screenshots. My everyday joy is to see my adorable cat smiles. And I've just realized, my cat can ""dance with music"". Amazing!","This work was supported by Deutsche Forschungsgemeinschaft (project CEAT, KL 2869/1-2).","On the Complementarity of Images and Text for the Expression of Emotions in Social Media. Authors of posts in social media communicate their emotions and what causes them with text and images. While there is work on emotion and stimulus detection for each modality separately, it is yet unknown if the modalities contain complementary emotion information in social media. We aim at filling this research gap and contribute a novel, annotated corpus of English multimodal Reddit posts. On this resource, we develop models to automatically detect the relation between image and text, an emotion stimulus category and the emotion class. We evaluate if these tasks require both modalities and find for the image-text relations, that text alone is sufficient for most categories (complementary, illustrative, opposing): the information in the text allows to predict if an image is required for emotion understanding. The emotions of anger and sadness are best predicted with a multimodal model, while text alone is sufficient for disgust, joy, and surprise. Stimuli depicted by objects, animals, food, or a person are best predicted by image-only models, while multimodal models are most effective on art, events, memes, places, or screenshots. My everyday joy is to see my adorable cat smiles. And I've just realized, my cat can ""dance with music"". Amazing!",2022
juhasz-etal-2019-tuefact,https://aclanthology.org/S19-2206,1,,,,disinformation_and_fake_news,,,"TueFact at SemEval 2019 Task 8: Fact checking in community question answering forums: context matters. The SemEval 2019 Task 8 on Fact-Checking in community question answering forums aimed to classify questions into categories and verify the correctness of answers given on the QatarLiving public forum. The task was divided into two subtasks: the first classifying the question, the second the answers. The TueFact system described in this paper used different approaches for the two subtasks. Subtask A makes use of word vectors based on a bag-of-word-ngram model using up to trigrams. Predictions are done using multi-class logistic regression. The official SemEval result lists an accuracy of 0.60. Subtask B uses vectorized character n-grams up to trigrams instead. Predictions are done using an LSTM model and achieved an accuracy of 0.53 on the final SemEval Task 8 evaluation set. In a comparison of contextual inputs to subtask B, it was determined that more contextual data improved results, but only up to a point.",{T}ue{F}act at {S}em{E}val 2019 Task 8: Fact checking in community question answering forums: context matters,"The SemEval 2019 Task 8 on Fact-Checking in community question answering forums aimed to classify questions into categories and verify the correctness of answers given on the QatarLiving public forum. The task was divided into two subtasks: the first classifying the question, the second the answers. The TueFact system described in this paper used different approaches for the two subtasks. Subtask A makes use of word vectors based on a bag-of-word-ngram model using up to trigrams. Predictions are done using multi-class logistic regression. The official SemEval result lists an accuracy of 0.60. Subtask B uses vectorized character n-grams up to trigrams instead. Predictions are done using an LSTM model and achieved an accuracy of 0.53 on the final SemEval Task 8 evaluation set. In a comparison of contextual inputs to subtask B, it was determined that more contextual data improved results, but only up to a point.",TueFact at SemEval 2019 Task 8: Fact checking in community question answering forums: context matters,"The SemEval 2019 Task 8 on Fact-Checking in community question answering forums aimed to classify questions into categories and verify the correctness of answers given on the QatarLiving public forum. The task was divided into two subtasks: the first classifying the question, the second the answers. The TueFact system described in this paper used different approaches for the two subtasks. Subtask A makes use of word vectors based on a bag-of-word-ngram model using up to trigrams. Predictions are done using multi-class logistic regression. The official SemEval result lists an accuracy of 0.60. Subtask B uses vectorized character n-grams up to trigrams instead. Predictions are done using an LSTM model and achieved an accuracy of 0.53 on the final SemEval Task 8 evaluation set. In a comparison of contextual inputs to subtask B, it was determined that more contextual data improved results, but only up to a point.",,"TueFact at SemEval 2019 Task 8: Fact checking in community question answering forums: context matters. The SemEval 2019 Task 8 on Fact-Checking in community question answering forums aimed to classify questions into categories and verify the correctness of answers given on the QatarLiving public forum. 
The task was divided into two subtasks: the first classifying the question, the second the answers. The TueFact system described in this paper used different approaches for the two subtasks. Subtask A makes use of word vectors based on a bag-of-word-ngram model using up to trigrams. Predictions are done using multi-class logistic regression. The official SemEval result lists an accuracy of 0.60. Subtask B uses vectorized character n-grams up to trigrams instead. Predictions are done using an LSTM model and achieved an accuracy of 0.53 on the final SemEval Task 8 evaluation set. In a comparison of contextual inputs to subtask B, it was determined that more contextual data improved results, but only up to a point.",2019
carreras-padro-2002-flexible,http://www.lrec-conf.org/proceedings/lrec2002/pdf/243.pdf,0,,,,,,,"A Flexible Distributed Architecture for Natural Language Analyzers. Many modern NLP applications require basic language processors such as POS taggers, parsers, etc. All these tools are usually preexisting, and must be adapted to fit in the requirements of the application to be developed. This adaptation procedure is usually time consuming and increases the application development cost. Our proposal to minimize this effort is to use standard engineering solutions for software reusability. In that sense, we converted all our language processors to classes which may be instantiated and accessed from any application via a CORBA broker. Reusability is not the only advantage, since the distributed CORBA approach also makes it possible to access the analyzers from any remote application, developed in any language, and running on any operating system.",A Flexible Distributed Architecture for Natural Language Analyzers,"Many modern NLP applications require basic language processors such as POS taggers, parsers, etc. All these tools are usually preexisting, and must be adapted to fit in the requirements of the application to be developed. This adaptation procedure is usually time consuming and increases the application development cost. Our proposal to minimize this effort is to use standard engineering solutions for software reusability. In that sense, we converted all our language processors to classes which may be instantiated and accessed from any application via a CORBA broker. Reusability is not the only advantage, since the distributed CORBA approach also makes it possible to access the analyzers from any remote application, developed in any language, and running on any operating system.",A Flexible Distributed Architecture for Natural Language Analyzers,"Many modern NLP applications require basic language processors such as POS taggers, parsers, etc. All these tools are usually preexisting, and must be adapted to fit in the requirements of the application to be developed. This adaptation procedure is usually time consuming and increases the application development cost. Our proposal to minimize this effort is to use standard engineering solutions for software reusability. In that sense, we converted all our language processors to classes which may be instantiated and accessed from any application via a CORBA broker. Reusability is not the only advantage, since the distributed CORBA approach also makes it possible to access the analyzers from any remote application, developed in any language, and running on any operating system.",,"A Flexible Distributed Architecture for Natural Language Analyzers. Many modern NLP applications require basic language processors such as POS taggers, parsers, etc. All these tools are usually preexisting, and must be adapted to fit in the requirements of the application to be developed. This adaptation procedure is usually time consuming and increases the application development cost. Our proposal to minimize this effort is to use standard engineering solutions for software reusability. In that sense, we converted all our language processors to classes which may be instantiated and accessed from any application via a CORBA broker. Reusability is not the only advantage, since the distributed CORBA approach also makes it possible to access the analyzers from any remote application, developed in any language, and running on any operating system.",2002
meerkamp-zhou-2017-boosting,https://aclanthology.org/W17-4307,0,,,,,,,"Boosting Information Extraction Systems with Character-level Neural Networks and Free Noisy Supervision. We present an architecture to boost the precision of existing information extraction systems. This is achieved by augmenting the existing parser, which may be constraint-based or hybrid statistical, with a character-level neural network. Our architecture combines the ability of constraint-based or hybrid extraction systems to easily incorporate domain knowledge with the ability of deep neural networks to leverage large amounts of data to learn complex features. The network is trained using a measure of consistency between extracted data and existing databases as a form of cheap, noisy supervision. Our architecture does not require large scale manual annotation or a system rewrite. It has led to large precision improvements over an existing, highly-tuned production information extraction system used at Bloomberg LP for financial language text.",Boosting Information Extraction Systems with Character-level Neural Networks and Free Noisy Supervision,"We present an architecture to boost the precision of existing information extraction systems. This is achieved by augmenting the existing parser, which may be constraint-based or hybrid statistical, with a character-level neural network. Our architecture combines the ability of constraint-based or hybrid extraction systems to easily incorporate domain knowledge with the ability of deep neural networks to leverage large amounts of data to learn complex features. The network is trained using a measure of consistency between extracted data and existing databases as a form of cheap, noisy supervision. Our architecture does not require large scale manual annotation or a system rewrite. It has led to large precision improvements over an existing, highly-tuned production information extraction system used at Bloomberg LP for financial language text.",Boosting Information Extraction Systems with Character-level Neural Networks and Free Noisy Supervision,"We present an architecture to boost the precision of existing information extraction systems. This is achieved by augmenting the existing parser, which may be constraint-based or hybrid statistical, with a character-level neural network. Our architecture combines the ability of constraint-based or hybrid extraction systems to easily incorporate domain knowledge with the ability of deep neural networks to leverage large amounts of data to learn complex features. The network is trained using a measure of consistency between extracted data and existing databases as a form of cheap, noisy supervision. Our architecture does not require large scale manual annotation or a system rewrite. It has led to large precision improvements over an existing, highly-tuned production information extraction system used at Bloomberg LP for financial language text.","We would like to thank my managers Alex Bozic, Tim Phelan, and Joshwini Pereira for supporting this project, as well as David Rosenberg from the CTO's office for providing access to GPU infrastructure.","Boosting Information Extraction Systems with Character-level Neural Networks and Free Noisy Supervision. We present an architecture to boost the precision of existing information extraction systems. This is achieved by augmenting the existing parser, which may be constraint-based or hybrid statistical, with a character-level neural network. 
Our architecture combines the ability of constraint-based or hybrid extraction systems to easily incorporate domain knowledge with the ability of deep neural networks to leverage large amounts of data to learn complex features. The network is trained using a measure of consistency between extracted data and existing databases as a form of cheap, noisy supervision. Our architecture does not require large scale manual annotation or a system rewrite. It has led to large precision improvements over an existing, highly-tuned production information extraction system used at Bloomberg LP for financial language text.",2017
meng-wang-2009-mining,https://aclanthology.org/P09-2045,0,,,,,,,"Mining User Reviews: from Specification to Summarization. This paper proposes a method to extract product features from user reviews and generate a review summary. This method only relies on product specifications, which usually are easy to obtain. Other resources like segmenter, POS tagger or parser are not required. At feature extraction stage, multiple specifications are clustered to extend the vocabulary of product features. Hierarchy structure information and unit of measurement information are mined from the specification to improve the accuracy of feature extraction. At summary generation stage, hierarchy information in specifications is used to provide a natural conceptual view of product features.",Mining User Reviews: from Specification to Summarization,"This paper proposes a method to extract product features from user reviews and generate a review summary. This method only relies on product specifications, which usually are easy to obtain. Other resources like segmenter, POS tagger or parser are not required. At feature extraction stage, multiple specifications are clustered to extend the vocabulary of product features. Hierarchy structure information and unit of measurement information are mined from the specification to improve the accuracy of feature extraction. At summary generation stage, hierarchy information in specifications is used to provide a natural conceptual view of product features.",Mining User Reviews: from Specification to Summarization,"This paper proposes a method to extract product features from user reviews and generate a review summary. This method only relies on product specifications, which usually are easy to obtain. Other resources like segmenter, POS tagger or parser are not required. At feature extraction stage, multiple specifications are clustered to extend the vocabulary of product features. Hierarchy structure information and unit of measurement information are mined from the specification to improve the accuracy of feature extraction. At summary generation stage, hierarchy information in specifications is used to provide a natural conceptual view of product features.",This research is supported by National Natural Science Foundation of Chinese (No.60675035) and Beijing Natural Science Foundation (No.4072012).,"Mining User Reviews: from Specification to Summarization. This paper proposes a method to extract product features from user reviews and generate a review summary. This method only relies on product specifications, which usually are easy to obtain. Other resources like segmenter, POS tagger or parser are not required. At feature extraction stage, multiple specifications are clustered to extend the vocabulary of product features. Hierarchy structure information and unit of measurement information are mined from the specification to improve the accuracy of feature extraction. At summary generation stage, hierarchy information in specifications is used to provide a natural conceptual view of product features.",2009
mahata-etal-2017-bucc2017,https://aclanthology.org/W17-2511,0,,,,,,,"BUCC2017: A Hybrid Approach for Identifying Parallel Sentences in Comparable Corpora. A Statistical Machine Translation (SMT) system is always trained using large parallel corpus to produce effective translation. Not only is the corpus scarce, it also involves a lot of manual labor and cost. Parallel corpus can be prepared by employing comparable corpora where a pair of corpora is in two different languages pointing to the same domain. In the present work, we try to build a parallel corpus for French-English language pair from a given comparable corpus. The data and the problem set are provided as part of the shared task organized by BUCC 2017. We have proposed a system that first translates the sentences by heavily relying on Moses and then group the sentences based on sentence length similarity. Finally, the one to one sentence selection was done based on Cosine Similarity algorithm.",{BUCC}2017: A Hybrid Approach for Identifying Parallel Sentences in Comparable Corpora,"A Statistical Machine Translation (SMT) system is always trained using large parallel corpus to produce effective translation. Not only is the corpus scarce, it also involves a lot of manual labor and cost. Parallel corpus can be prepared by employing comparable corpora where a pair of corpora is in two different languages pointing to the same domain. In the present work, we try to build a parallel corpus for French-English language pair from a given comparable corpus. The data and the problem set are provided as part of the shared task organized by BUCC 2017. We have proposed a system that first translates the sentences by heavily relying on Moses and then group the sentences based on sentence length similarity. Finally, the one to one sentence selection was done based on Cosine Similarity algorithm.",BUCC2017: A Hybrid Approach for Identifying Parallel Sentences in Comparable Corpora,"A Statistical Machine Translation (SMT) system is always trained using large parallel corpus to produce effective translation. Not only is the corpus scarce, it also involves a lot of manual labor and cost. Parallel corpus can be prepared by employing comparable corpora where a pair of corpora is in two different languages pointing to the same domain. In the present work, we try to build a parallel corpus for French-English language pair from a given comparable corpus. The data and the problem set are provided as part of the shared task organized by BUCC 2017. We have proposed a system that first translates the sentences by heavily relying on Moses and then group the sentences based on sentence length similarity. Finally, the one to one sentence selection was done based on Cosine Similarity algorithm.",,"BUCC2017: A Hybrid Approach for Identifying Parallel Sentences in Comparable Corpora. A Statistical Machine Translation (SMT) system is always trained using large parallel corpus to produce effective translation. Not only is the corpus scarce, it also involves a lot of manual labor and cost. Parallel corpus can be prepared by employing comparable corpora where a pair of corpora is in two different languages pointing to the same domain. In the present work, we try to build a parallel corpus for French-English language pair from a given comparable corpus. The data and the problem set are provided as part of the shared task organized by BUCC 2017. 
We have proposed a system that first translates the sentences by heavily relying on Moses and then groups the sentences based on sentence length similarity. Finally, the one-to-one sentence selection was done based on the cosine similarity algorithm.",2017
yokoyama-2013-analysis,https://aclanthology.org/2013.mtsummit-wpt.3,0,,,,,,,"Analysis of parallel structures in patent sentences, focusing on the head words. One of the characteristics of patent sentences is long, complicated modifications. A modification is identified by the presence of a head word in the modifier. We extracted head words with a high occurrence frequency from about 1 million patent sentences. Based on the results, we constructed a modifier correcting system using these head words. About 60% of the errors could be modified with our system.","Analysis of parallel structures in patent sentences, focusing on the head words","One of the characteristics of patent sentences is long, complicated modifications. A modification is identified by the presence of a head word in the modifier. We extracted head words with a high occurrence frequency from about 1 million patent sentences. Based on the results, we constructed a modifier correcting system using these head words. About 60% of the errors could be modified with our system.","Analysis of parallel structures in patent sentences, focusing on the head words","One of the characteristics of patent sentences is long, complicated modifications. A modification is identified by the presence of a head word in the modifier. We extracted head words with a high occurrence frequency from about 1 million patent sentences. Based on the results, we constructed a modifier correcting system using these head words. About 60% of the errors could be modified with our system.",We thank Japio and the committee members for supporting this research and supplying the patent database.,"Analysis of parallel structures in patent sentences, focusing on the head words. One of the characteristics of patent sentences is long, complicated modifications. A modification is identified by the presence of a head word in the modifier. We extracted head words with a high occurrence frequency from about 1 million patent sentences. Based on the results, we constructed a modifier correcting system using these head words. About 60% of the errors could be modified with our system.",2013
schulze-2001-loom,https://aclanthology.org/Y02-1039,0,,,,,,,"The Loom-LAG for Syntax Analysis : Adding a Language-independent Level to LAG. The left-associative grammar model (LAG) has been applied successfully to the morphologic and syntactic analysis of various european and asian languages. The algebraic definition of the LAG is very well suited for the application to natural language processing as it inherently obeys de Saussure's second law (de Saussure, 1913, p. 103) on the linear nature of language, which phrase-structure grammar (PSG) and categorial grammar (CG) do not. This paper describes the so-called Loom-LAGs (LLAG)-a specialisation of LAGs for the analysis of natural language. Whereas the only means of language-independent abstraction in ordinary LAG is the principle of possible continuations, LLAGs introduce a set of more detailed language-independent generalisations that form the so-called loom of a Loom-LAG. Every LLAG uses the very same loom and adds the language-specific information in the form of a declarative description of the language-much like an ancient mechanised Jacquard-loom would take a program-card providing the specific pattern for the cloth to be woven. The linguistic information is formulated declaratively in so-called syntax plans that describe the sequential structure of clauses and phrases. This approach introduces the explicit notion of phrases and sentence structure to LAG without violating de Saussure's second law and without leaving the ground of the original algebraic definition of LAG. LLAGs can in fact be shown to be just a notational variant of LAG-but one that is much better suited for the manual development of syntax grammars for the robust analysis of free texts. 'For an in-depth discussion see (Hausser, 1989), and (Hausser, 2001).",The Loom-{LAG} for Syntax Analysis : Adding a Language-independent Level to {LAG},"The left-associative grammar model (LAG) has been applied successfully to the morphologic and syntactic analysis of various european and asian languages. The algebraic definition of the LAG is very well suited for the application to natural language processing as it inherently obeys de Saussure's second law (de Saussure, 1913, p. 103) on the linear nature of language, which phrase-structure grammar (PSG) and categorial grammar (CG) do not. This paper describes the so-called Loom-LAGs (LLAG)-a specialisation of LAGs for the analysis of natural language. Whereas the only means of language-independent abstraction in ordinary LAG is the principle of possible continuations, LLAGs introduce a set of more detailed language-independent generalisations that form the so-called loom of a Loom-LAG. Every LLAG uses the very same loom and adds the language-specific information in the form of a declarative description of the language-much like an ancient mechanised Jacquard-loom would take a program-card providing the specific pattern for the cloth to be woven. The linguistic information is formulated declaratively in so-called syntax plans that describe the sequential structure of clauses and phrases. This approach introduces the explicit notion of phrases and sentence structure to LAG without violating de Saussure's second law and without leaving the ground of the original algebraic definition of LAG. LLAGs can in fact be shown to be just a notational variant of LAG-but one that is much better suited for the manual development of syntax grammars for the robust analysis of free texts. 
'For an in-depth discussion see (Hausser, 1989), and (Hausser, 2001).",The Loom-LAG for Syntax Analysis : Adding a Language-independent Level to LAG,"The left-associative grammar model (LAG) has been applied successfully to the morphologic and syntactic analysis of various european and asian languages. The algebraic definition of the LAG is very well suited for the application to natural language processing as it inherently obeys de Saussure's second law (de Saussure, 1913, p. 103) on the linear nature of language, which phrase-structure grammar (PSG) and categorial grammar (CG) do not. This paper describes the so-called Loom-LAGs (LLAG)-a specialisation of LAGs for the analysis of natural language. Whereas the only means of language-independent abstraction in ordinary LAG is the principle of possible continuations, LLAGs introduce a set of more detailed language-independent generalisations that form the so-called loom of a Loom-LAG. Every LLAG uses the very same loom and adds the language-specific information in the form of a declarative description of the language-much like an ancient mechanised Jacquard-loom would take a program-card providing the specific pattern for the cloth to be woven. The linguistic information is formulated declaratively in so-called syntax plans that describe the sequential structure of clauses and phrases. This approach introduces the explicit notion of phrases and sentence structure to LAG without violating de Saussure's second law and without leaving the ground of the original algebraic definition of LAG. LLAGs can in fact be shown to be just a notational variant of LAG-but one that is much better suited for the manual development of syntax grammars for the robust analysis of free texts. 'For an in-depth discussion see (Hausser, 1989), and (Hausser, 2001).",,"The Loom-LAG for Syntax Analysis : Adding a Language-independent Level to LAG. The left-associative grammar model (LAG) has been applied successfully to the morphologic and syntactic analysis of various european and asian languages. The algebraic definition of the LAG is very well suited for the application to natural language processing as it inherently obeys de Saussure's second law (de Saussure, 1913, p. 103) on the linear nature of language, which phrase-structure grammar (PSG) and categorial grammar (CG) do not. This paper describes the so-called Loom-LAGs (LLAG)-a specialisation of LAGs for the analysis of natural language. Whereas the only means of language-independent abstraction in ordinary LAG is the principle of possible continuations, LLAGs introduce a set of more detailed language-independent generalisations that form the so-called loom of a Loom-LAG. Every LLAG uses the very same loom and adds the language-specific information in the form of a declarative description of the language-much like an ancient mechanised Jacquard-loom would take a program-card providing the specific pattern for the cloth to be woven. The linguistic information is formulated declaratively in so-called syntax plans that describe the sequential structure of clauses and phrases. This approach introduces the explicit notion of phrases and sentence structure to LAG without violating de Saussure's second law and without leaving the ground of the original algebraic definition of LAG. LLAGs can in fact be shown to be just a notational variant of LAG-but one that is much better suited for the manual development of syntax grammars for the robust analysis of free texts. 
For an in-depth discussion see (Hausser, 1989) and (Hausser, 2001).",2001
balahur-turchi-2013-improving,https://aclanthology.org/R13-1007,0,,,,,,,"Improving Sentiment Analysis in Twitter Using Multilingual Machine Translated Data. Sentiment analysis is currently a very dynamic field in Computational Linguistics. Research herein has concentrated on the development of methods and resources for different types of texts and various languages. Nonetheless, the implementation of a multilingual system that is able to classify sentiment expressed in various languages has not been approached so far. The main challenge this paper addresses is sentiment analysis from tweets in a multilingual setting. We first build a simple sentiment analysis system for tweets in English. Subsequently, we translate the data from English to four other languages-Italian, Spanish, French and German-using a standard machine translation system. Further on, we manually correct the test data and create Gold Standards for each of the target languages. Finally, we test the performance of the sentiment analysis classifiers for the different languages concerned and show that the joint use of training data from multiple languages (especially those pertaining to the same family of languages) significantly improves the results of the sentiment classification.",Improving Sentiment Analysis in {T}witter Using Multilingual Machine Translated Data,"Sentiment analysis is currently a very dynamic field in Computational Linguistics. Research herein has concentrated on the development of methods and resources for different types of texts and various languages. Nonetheless, the implementation of a multilingual system that is able to classify sentiment expressed in various languages has not been approached so far. The main challenge this paper addresses is sentiment analysis from tweets in a multilingual setting. We first build a simple sentiment analysis system for tweets in English. Subsequently, we translate the data from English to four other languages-Italian, Spanish, French and German-using a standard machine translation system. Further on, we manually correct the test data and create Gold Standards for each of the target languages. Finally, we test the performance of the sentiment analysis classifiers for the different languages concerned and show that the joint use of training data from multiple languages (especially those pertaining to the same family of languages) significantly improves the results of the sentiment classification.",Improving Sentiment Analysis in Twitter Using Multilingual Machine Translated Data,"Sentiment analysis is currently a very dynamic field in Computational Linguistics. Research herein has concentrated on the development of methods and resources for different types of texts and various languages. Nonetheless, the implementation of a multilingual system that is able to classify sentiment expressed in various languages has not been approached so far. The main challenge this paper addresses is sentiment analysis from tweets in a multilingual setting. We first build a simple sentiment analysis system for tweets in English. Subsequently, we translate the data from English to four other languages-Italian, Spanish, French and German-using a standard machine translation system. Further on, we manually correct the test data and create Gold Standards for each of the target languages. 
Finally, we test the performance of the sentiment analysis classifiers for the different languages concerned and show that the joint use of training data from multiple languages (especially those pertaining to the same family of languages) significantly improves the results of the sentiment classification.",,"Improving Sentiment Analysis in Twitter Using Multilingual Machine Translated Data. Sentiment analysis is currently a very dynamic field in Computational Linguistics. Research herein has concentrated on the development of methods and resources for different types of texts and various languages. Nonetheless, the implementation of a multilingual system that is able to classify sentiment expressed in various languages has not been approached so far. The main challenge this paper addresses is sentiment analysis from tweets in a multilingual setting. We first build a simple sentiment analysis system for tweets in English. Subsequently, we translate the data from English to four other languages-Italian, Spanish, French and German-using a standard machine translation system. Further on, we manually correct the test data and create Gold Standards for each of the target languages. Finally, we test the performance of the sentiment analysis classifiers for the different languages concerned and show that the joint use of training data from multiple languages (especially those pertaining to the same family of languages) significantly improves the results of the sentiment classification.",2013
lai-etal-2021-supervised,https://aclanthology.org/2021.paclic-1.62,0,,,,,,,"Supervised Word Sense Disambiguation on Taiwan Hakka Polysemy with Neural Network Models: A Case Study of BUN, TUNG and LAU. This research aims to explore an optimal model for automatic word sense disambiguation for highly polysemous markers BUN, TUNG and LAU in Taiwan Hakka, a low-resource language. The performance of word sense disambiguation tasks is carried out by examining DNN, BiLSTM and CNN models under different window spans. The results show that the CNN model can achieve the best performance with a multiple sliding window of L2R2+ L3R3 and L5R5.","Supervised Word Sense Disambiguation on {T}aiwan {H}akka Polysemy with Neural Network Models: A Case Study of {BUN}, {TUNG} and {LAU}","This research aims to explore an optimal model for automatic word sense disambiguation for highly polysemous markers BUN, TUNG and LAU in Taiwan Hakka, a low-resource language. The performance of word sense disambiguation tasks is carried out by examining DNN, BiLSTM and CNN models under different window spans. The results show that the CNN model can achieve the best performance with a multiple sliding window of L2R2+ L3R3 and L5R5.","Supervised Word Sense Disambiguation on Taiwan Hakka Polysemy with Neural Network Models: A Case Study of BUN, TUNG and LAU","This research aims to explore an optimal model for automatic word sense disambiguation for highly polysemous markers BUN, TUNG and LAU in Taiwan Hakka, a low-resource language. The performance of word sense disambiguation tasks is carried out by examining DNN, BiLSTM and CNN models under different window spans. The results show that the CNN model can achieve the best performance with a multiple sliding window of L2R2+ L3R3 and L5R5.",We would like to thank the PACLIC 2021 anonymous reviewers for the valuable comments on this paper and MOST grant (MOST-108-2410-H-004-050-MY3) for supporting the research discussed herein.,"Supervised Word Sense Disambiguation on Taiwan Hakka Polysemy with Neural Network Models: A Case Study of BUN, TUNG and LAU. This research aims to explore an optimal model for automatic word sense disambiguation for highly polysemous markers BUN, TUNG and LAU in Taiwan Hakka, a low-resource language. The performance of word sense disambiguation tasks is carried out by examining DNN, BiLSTM and CNN models under different window spans. The results show that the CNN model can achieve the best performance with a multiple sliding window of L2R2+ L3R3 and L5R5.",2021
abdul-mageed-etal-2020-nadi,https://aclanthology.org/2020.wanlp-1.9,0,,,,,,,"NADI 2020: The First Nuanced Arabic Dialect Identification Shared Task. We present the results and findings of the First Nuanced Arabic Dialect Identification Shared Task (NADI). This Shared Task includes two subtasks: country-level dialect identification (Subtask 1) and province-level sub-dialect identification (Subtask 2). The data for the shared task covers a total of 100 provinces from 21 Arab countries and are collected from the Twitter domain. As such, NADI is the first shared task to target naturally-occurring fine-grained dialectal text at the sub-country level. A total of 61 teams from 25 countries registered to participate in the tasks, thus reflecting the interest of the community in this area. We received 47 submissions for Subtask 1 from 18 teams and 9 submissions for Subtask 2 from 9 teams. The Arab world is an extensive geographical region across Africa and Asia, with a population of ∼ 400 million people whose native tongue is Arabic. Arabic could be classified into three major types: (1) Classical Arabic (CA), the language of the Qur'an and early literature, (2) Modern Standard Arabic (MSA), the medium used in education and formal and pan-Arab media, and (3) dialectal Arabic (DA), a host of geographically and politically defined variants. Modern day Arabic is also usually described as a diglossic language with a so-called 'High' variety that is used in formal settings (MSA), and a 'Low' variety that is the medium of everyday communication (DA). The presumably 'Low variety' is in reality a collection of variants. One axis of variation for Arabic is geography where people from various sub-regions, countries, or even provinces within the same country, may be using language differently. The goal of the First Nuanced Arabic Dialect Identification (NADI) Shared Task is to provide resources and encourage efforts to investigate questions focused on dialectal variation within the collection of Arabic variants. The NADI shared task targets 21 Arab countries and a total of 100 provinces across these countries. The shared task consists of two subtasks: country-level dialect identification (Subtask 1) and province-level detection (Subtask 2). We provide participants with a new Twitter labeled dataset that we collected exclusively for the purpose of the shared task. The dataset is publicly available for research. 1 A total of 52 teams registered for the shard task, of whom 18 teams ended up submitting their systems for scoring. We then",{NADI} 2020: The First Nuanced {A}rabic Dialect Identification Shared Task,"We present the results and findings of the First Nuanced Arabic Dialect Identification Shared Task (NADI). This Shared Task includes two subtasks: country-level dialect identification (Subtask 1) and province-level sub-dialect identification (Subtask 2). The data for the shared task covers a total of 100 provinces from 21 Arab countries and are collected from the Twitter domain. As such, NADI is the first shared task to target naturally-occurring fine-grained dialectal text at the sub-country level. A total of 61 teams from 25 countries registered to participate in the tasks, thus reflecting the interest of the community in this area. We received 47 submissions for Subtask 1 from 18 teams and 9 submissions for Subtask 2 from 9 teams. The Arab world is an extensive geographical region across Africa and Asia, with a population of ∼ 400 million people whose native tongue is Arabic. 
Arabic could be classified into three major types: (1) Classical Arabic (CA), the language of the Qur'an and early literature, (2) Modern Standard Arabic (MSA), the medium used in education and formal and pan-Arab media, and (3) dialectal Arabic (DA), a host of geographically and politically defined variants. Modern day Arabic is also usually described as a diglossic language with a so-called 'High' variety that is used in formal settings (MSA), and a 'Low' variety that is the medium of everyday communication (DA). The presumably 'Low variety' is in reality a collection of variants. One axis of variation for Arabic is geography where people from various sub-regions, countries, or even provinces within the same country, may be using language differently. The goal of the First Nuanced Arabic Dialect Identification (NADI) Shared Task is to provide resources and encourage efforts to investigate questions focused on dialectal variation within the collection of Arabic variants. The NADI shared task targets 21 Arab countries and a total of 100 provinces across these countries. The shared task consists of two subtasks: country-level dialect identification (Subtask 1) and province-level detection (Subtask 2). We provide participants with a new Twitter labeled dataset that we collected exclusively for the purpose of the shared task. The dataset is publicly available for research. 1 A total of 52 teams registered for the shard task, of whom 18 teams ended up submitting their systems for scoring. We then",NADI 2020: The First Nuanced Arabic Dialect Identification Shared Task,"We present the results and findings of the First Nuanced Arabic Dialect Identification Shared Task (NADI). This Shared Task includes two subtasks: country-level dialect identification (Subtask 1) and province-level sub-dialect identification (Subtask 2). The data for the shared task covers a total of 100 provinces from 21 Arab countries and are collected from the Twitter domain. As such, NADI is the first shared task to target naturally-occurring fine-grained dialectal text at the sub-country level. A total of 61 teams from 25 countries registered to participate in the tasks, thus reflecting the interest of the community in this area. We received 47 submissions for Subtask 1 from 18 teams and 9 submissions for Subtask 2 from 9 teams. The Arab world is an extensive geographical region across Africa and Asia, with a population of ∼ 400 million people whose native tongue is Arabic. Arabic could be classified into three major types: (1) Classical Arabic (CA), the language of the Qur'an and early literature, (2) Modern Standard Arabic (MSA), the medium used in education and formal and pan-Arab media, and (3) dialectal Arabic (DA), a host of geographically and politically defined variants. Modern day Arabic is also usually described as a diglossic language with a so-called 'High' variety that is used in formal settings (MSA), and a 'Low' variety that is the medium of everyday communication (DA). The presumably 'Low variety' is in reality a collection of variants. One axis of variation for Arabic is geography where people from various sub-regions, countries, or even provinces within the same country, may be using language differently. The goal of the First Nuanced Arabic Dialect Identification (NADI) Shared Task is to provide resources and encourage efforts to investigate questions focused on dialectal variation within the collection of Arabic variants. 
The NADI shared task targets 21 Arab countries and a total of 100 provinces across these countries. The shared task consists of two subtasks: country-level dialect identification (Subtask 1) and province-level detection (Subtask 2). We provide participants with a new Twitter labeled dataset that we collected exclusively for the purpose of the shared task. The dataset is publicly available for research. 1 A total of 52 teams registered for the shard task, of whom 18 teams ended up submitting their systems for scoring. We then","We gratefully acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), the Social Sciences Research Council of Canada (SSHRC), and Compute Canada (www.computecanada.ca). We thank AbdelRahim Elmadany for assisting with dataset preparation, setting up the Codalab for the shared task, and providing the map in Figure 2 .","NADI 2020: The First Nuanced Arabic Dialect Identification Shared Task. We present the results and findings of the First Nuanced Arabic Dialect Identification Shared Task (NADI). This Shared Task includes two subtasks: country-level dialect identification (Subtask 1) and province-level sub-dialect identification (Subtask 2). The data for the shared task covers a total of 100 provinces from 21 Arab countries and are collected from the Twitter domain. As such, NADI is the first shared task to target naturally-occurring fine-grained dialectal text at the sub-country level. A total of 61 teams from 25 countries registered to participate in the tasks, thus reflecting the interest of the community in this area. We received 47 submissions for Subtask 1 from 18 teams and 9 submissions for Subtask 2 from 9 teams. The Arab world is an extensive geographical region across Africa and Asia, with a population of ∼ 400 million people whose native tongue is Arabic. Arabic could be classified into three major types: (1) Classical Arabic (CA), the language of the Qur'an and early literature, (2) Modern Standard Arabic (MSA), the medium used in education and formal and pan-Arab media, and (3) dialectal Arabic (DA), a host of geographically and politically defined variants. Modern day Arabic is also usually described as a diglossic language with a so-called 'High' variety that is used in formal settings (MSA), and a 'Low' variety that is the medium of everyday communication (DA). The presumably 'Low variety' is in reality a collection of variants. One axis of variation for Arabic is geography where people from various sub-regions, countries, or even provinces within the same country, may be using language differently. The goal of the First Nuanced Arabic Dialect Identification (NADI) Shared Task is to provide resources and encourage efforts to investigate questions focused on dialectal variation within the collection of Arabic variants. The NADI shared task targets 21 Arab countries and a total of 100 provinces across these countries. The shared task consists of two subtasks: country-level dialect identification (Subtask 1) and province-level detection (Subtask 2). We provide participants with a new Twitter labeled dataset that we collected exclusively for the purpose of the shared task. The dataset is publicly available for research. 1 A total of 52 teams registered for the shard task, of whom 18 teams ended up submitting their systems for scoring. We then",2020
chen-kit-2012-higher,https://aclanthology.org/P12-2001,0,,,,,,,"Higher-order Constituent Parsing and Parser Combination. This paper presents a higher-order model for constituent parsing aimed at utilizing more local structural context to decide the score of a grammar rule instance in a parse tree. Experiments on English and Chinese treebanks confirm its advantage over its first-order version. It achieves its best F1 scores of 91.86% and 85.58% on the two languages, respectively, and further pushes them to 92.80% and 85.60% via combination with other highperformance parsers.",Higher-order Constituent Parsing and Parser Combination,"This paper presents a higher-order model for constituent parsing aimed at utilizing more local structural context to decide the score of a grammar rule instance in a parse tree. Experiments on English and Chinese treebanks confirm its advantage over its first-order version. It achieves its best F1 scores of 91.86% and 85.58% on the two languages, respectively, and further pushes them to 92.80% and 85.60% via combination with other highperformance parsers.",Higher-order Constituent Parsing and Parser Combination,"This paper presents a higher-order model for constituent parsing aimed at utilizing more local structural context to decide the score of a grammar rule instance in a parse tree. Experiments on English and Chinese treebanks confirm its advantage over its first-order version. It achieves its best F1 scores of 91.86% and 85.58% on the two languages, respectively, and further pushes them to 92.80% and 85.60% via combination with other highperformance parsers.",,"Higher-order Constituent Parsing and Parser Combination. This paper presents a higher-order model for constituent parsing aimed at utilizing more local structural context to decide the score of a grammar rule instance in a parse tree. Experiments on English and Chinese treebanks confirm its advantage over its first-order version. It achieves its best F1 scores of 91.86% and 85.58% on the two languages, respectively, and further pushes them to 92.80% and 85.60% via combination with other highperformance parsers.",2012
siddharthan-etal-2011-information,https://aclanthology.org/J11-4007,0,,,,,,,"Information Status Distinctions and Referring Expressions: An Empirical Study of References to People in News Summaries. Although there has been much theoretical work on using various information status distinctions to explain the form of references in written text, there have been few studies that attempt to automatically learn these distinctions for generating references in the context of computerregenerated text. In this article, we present a model for generating references to people in news summaries that incorporates insights from both theory and a corpus analysis of human written summaries. In particular, our model captures how two properties of a person referred to in the summary-familiarity to the reader and global salience in the news story-affect the content and form of the initial reference to that person in a summary. We demonstrate that these two distinctions can be learned from a typical input for multi-document summarization and that they can be used to make regeneration decisions that improve the quality of extractive summaries.",Information Status Distinctions and Referring Expressions: An Empirical Study of References to People in News Summaries,"Although there has been much theoretical work on using various information status distinctions to explain the form of references in written text, there have been few studies that attempt to automatically learn these distinctions for generating references in the context of computerregenerated text. In this article, we present a model for generating references to people in news summaries that incorporates insights from both theory and a corpus analysis of human written summaries. In particular, our model captures how two properties of a person referred to in the summary-familiarity to the reader and global salience in the news story-affect the content and form of the initial reference to that person in a summary. We demonstrate that these two distinctions can be learned from a typical input for multi-document summarization and that they can be used to make regeneration decisions that improve the quality of extractive summaries.",Information Status Distinctions and Referring Expressions: An Empirical Study of References to People in News Summaries,"Although there has been much theoretical work on using various information status distinctions to explain the form of references in written text, there have been few studies that attempt to automatically learn these distinctions for generating references in the context of computerregenerated text. In this article, we present a model for generating references to people in news summaries that incorporates insights from both theory and a corpus analysis of human written summaries. In particular, our model captures how two properties of a person referred to in the summary-familiarity to the reader and global salience in the news story-affect the content and form of the initial reference to that person in a summary. We demonstrate that these two distinctions can be learned from a typical input for multi-document summarization and that they can be used to make regeneration decisions that improve the quality of extractive summaries.","Approaches to Reference (PRE-CogSci 2009). Grice, Paul. 1975. Logic and conversation. In P. Cole and J. L. Morgan, editors, Syntax and Semantics, volume 3. Academic Press, New York, pages 43-58. Grosz, Barbara, Aravind Joshi, and Scott Weinstein. 
1995 ","Information Status Distinctions and Referring Expressions: An Empirical Study of References to People in News Summaries. Although there has been much theoretical work on using various information status distinctions to explain the form of references in written text, there have been few studies that attempt to automatically learn these distinctions for generating references in the context of computerregenerated text. In this article, we present a model for generating references to people in news summaries that incorporates insights from both theory and a corpus analysis of human written summaries. In particular, our model captures how two properties of a person referred to in the summary-familiarity to the reader and global salience in the news story-affect the content and form of the initial reference to that person in a summary. We demonstrate that these two distinctions can be learned from a typical input for multi-document summarization and that they can be used to make regeneration decisions that improve the quality of extractive summaries.",2011
cer-etal-2010-best,https://aclanthology.org/N10-1080,0,,,,,,,"The Best Lexical Metric for Phrase-Based Statistical MT System Optimization. Translation systems are generally trained to optimize BLEU, but many alternative metrics are available. We explore how optimizing toward various automatic evaluation metrics (BLEU, METEOR, NIST, TER) affects the resulting model. We train a state-of-the-art MT system using MERT on many parameterizations of each metric and evaluate the resulting models on the other metrics and also using human judges. In accordance with popular wisdom, we find that it's important to train on the same metric used in testing. However, we also find that training to a newer metric is only useful to the extent that the MT model's structure and features allow it to take advantage of the metric. Contrasting with TER's good correlation with human judgments, we show that people tend to prefer BLEU and NIST trained models to those trained on edit distance based metrics like TER or WER. Human preferences for METEOR trained models varies depending on the source language. Since using BLEU or NIST produces models that are more robust to evaluation by other metrics and perform well in human judgments, we conclude they are still the best choice for training.",The Best Lexical Metric for Phrase-Based Statistical {MT} System Optimization,"Translation systems are generally trained to optimize BLEU, but many alternative metrics are available. We explore how optimizing toward various automatic evaluation metrics (BLEU, METEOR, NIST, TER) affects the resulting model. We train a state-of-the-art MT system using MERT on many parameterizations of each metric and evaluate the resulting models on the other metrics and also using human judges. In accordance with popular wisdom, we find that it's important to train on the same metric used in testing. However, we also find that training to a newer metric is only useful to the extent that the MT model's structure and features allow it to take advantage of the metric. Contrasting with TER's good correlation with human judgments, we show that people tend to prefer BLEU and NIST trained models to those trained on edit distance based metrics like TER or WER. Human preferences for METEOR trained models varies depending on the source language. Since using BLEU or NIST produces models that are more robust to evaluation by other metrics and perform well in human judgments, we conclude they are still the best choice for training.",The Best Lexical Metric for Phrase-Based Statistical MT System Optimization,"Translation systems are generally trained to optimize BLEU, but many alternative metrics are available. We explore how optimizing toward various automatic evaluation metrics (BLEU, METEOR, NIST, TER) affects the resulting model. We train a state-of-the-art MT system using MERT on many parameterizations of each metric and evaluate the resulting models on the other metrics and also using human judges. In accordance with popular wisdom, we find that it's important to train on the same metric used in testing. However, we also find that training to a newer metric is only useful to the extent that the MT model's structure and features allow it to take advantage of the metric. Contrasting with TER's good correlation with human judgments, we show that people tend to prefer BLEU and NIST trained models to those trained on edit distance based metrics like TER or WER. Human preferences for METEOR trained models varies depending on the source language. 
Since using BLEU or NIST produces models that are more robust to evaluation by other metrics and perform well in human judgments, we conclude they are still the best choice for training.","The authors thank Alon Lavie for suggesting setting α to 0.5 when training to METEOR. This work was supported by the Defense Advanced Research Projects Agency through IBM. The content does not necessarily reflect the views of the U.S. Government, and no official endorsement should be inferred.","The Best Lexical Metric for Phrase-Based Statistical MT System Optimization. Translation systems are generally trained to optimize BLEU, but many alternative metrics are available. We explore how optimizing toward various automatic evaluation metrics (BLEU, METEOR, NIST, TER) affects the resulting model. We train a state-of-the-art MT system using MERT on many parameterizations of each metric and evaluate the resulting models on the other metrics and also using human judges. In accordance with popular wisdom, we find that it's important to train on the same metric used in testing. However, we also find that training to a newer metric is only useful to the extent that the MT model's structure and features allow it to take advantage of the metric. Contrasting with TER's good correlation with human judgments, we show that people tend to prefer BLEU and NIST trained models to those trained on edit distance based metrics like TER or WER. Human preferences for METEOR trained models varies depending on the source language. Since using BLEU or NIST produces models that are more robust to evaluation by other metrics and perform well in human judgments, we conclude they are still the best choice for training.",2010
moran-etal-2018-cross,https://aclanthology.org/L18-1646,0,,,,,,,"Cross-linguistically Small World Networks are Ubiquitous in Child-directed Speech. In this paper we use network theory to model graphs of child-directed speech from caregivers of children from nine typologically and morphologically diverse languages. With the resulting lexical adjacency graphs, we calculate the network statistics N, E, , L, C and compare them against the standard baseline of the same parameters from randomly generated networks of the same size. We show that typologically and morphologically diverse languages all share small world properties in their child-directed speech. Our results add to the repertoire of universal distributional patterns found in the input to children cross-linguistically. We discuss briefly some implications for language acquisition research.",Cross-linguistically Small World Networks are Ubiquitous in Child-directed Speech,"In this paper we use network theory to model graphs of child-directed speech from caregivers of children from nine typologically and morphologically diverse languages. With the resulting lexical adjacency graphs, we calculate the network statistics {N, E, , L, C} and compare them against the standard baseline of the same parameters from randomly generated networks of the same size. We show that typologically and morphologically diverse languages all share small world properties in their child-directed speech. Our results add to the repertoire of universal distributional patterns found in the input to children cross-linguistically. We discuss briefly some implications for language acquisition research.",Cross-linguistically Small World Networks are Ubiquitous in Child-directed Speech,"In this paper we use network theory to model graphs of child-directed speech from caregivers of children from nine typologically and morphologically diverse languages. With the resulting lexical adjacency graphs, we calculate the network statistics N, E, , L, C and compare them against the standard baseline of the same parameters from randomly generated networks of the same size. We show that typologically and morphologically diverse languages all share small world properties in their child-directed speech. Our results add to the repertoire of universal distributional patterns found in the input to children cross-linguistically. We discuss briefly some implications for language acquisition research.","The research leading to these results has received funding from the European Unions Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 615988 (PI Sabine Stoll). We gratefully acknowledge Shanley Allen, Aylin Küntay, and Barbara Pfeiler, who provided the Inuktitut, Turkish, and Yucatec data for our analysis, respectively. We also thank three anonymous reviewers for their feedback.","Cross-linguistically Small World Networks are Ubiquitous in Child-directed Speech. In this paper we use network theory to model graphs of child-directed speech from caregivers of children from nine typologically and morphologically diverse languages. With the resulting lexical adjacency graphs, we calculate the network statistics N, E, , L, C and compare them against the standard baseline of the same parameters from randomly generated networks of the same size. We show that typologically and morphologically diverse languages all share small world properties in their child-directed speech. 
Our results add to the repertoire of universal distributional patterns found in the input to children cross-linguistically. We discuss briefly some implications for language acquisition research.",2018
wittmann-etal-2014-automatic,http://www.lrec-conf.org/proceedings/lrec2014/pdf/574_Paper.pdf,0,,,,,,,"Automatic Extraction of Synonyms for German Particle Verbs from Parallel Data with Distributional Similarity as a Re-Ranking Feature. We present a method for the extraction of synonyms for German particle verbs based on a word-aligned German-English parallel corpus: by translating the particle verb to a pivot, which is then translated back, a set of synonym candidates can be extracted and ranked according to the respective translation probabilities. In order to deal with separated particle verbs, we apply reordering rules to the German part of the data. In our evaluation against a gold standard, we compare different pre-processing strategies (lemmatized vs. inflected forms) and introduce language model scores of synonym candidates in the context of the input particle verb as well as distributional similarity as additional re-ranking criteria. Our evaluation shows that distributional similarity as a re-ranking feature is more robust than language model scores and leads to an improved ranking of the synonym candidates. In addition to evaluating against a gold standard, we also present a small-scale manual evaluation.",Automatic Extraction of Synonyms for {G}erman Particle Verbs from Parallel Data with Distributional Similarity as a Re-Ranking Feature,"We present a method for the extraction of synonyms for German particle verbs based on a word-aligned German-English parallel corpus: by translating the particle verb to a pivot, which is then translated back, a set of synonym candidates can be extracted and ranked according to the respective translation probabilities. In order to deal with separated particle verbs, we apply reordering rules to the German part of the data. In our evaluation against a gold standard, we compare different pre-processing strategies (lemmatized vs. inflected forms) and introduce language model scores of synonym candidates in the context of the input particle verb as well as distributional similarity as additional re-ranking criteria. Our evaluation shows that distributional similarity as a re-ranking feature is more robust than language model scores and leads to an improved ranking of the synonym candidates. In addition to evaluating against a gold standard, we also present a small-scale manual evaluation.",Automatic Extraction of Synonyms for German Particle Verbs from Parallel Data with Distributional Similarity as a Re-Ranking Feature,"We present a method for the extraction of synonyms for German particle verbs based on a word-aligned German-English parallel corpus: by translating the particle verb to a pivot, which is then translated back, a set of synonym candidates can be extracted and ranked according to the respective translation probabilities. In order to deal with separated particle verbs, we apply reordering rules to the German part of the data. In our evaluation against a gold standard, we compare different pre-processing strategies (lemmatized vs. inflected forms) and introduce language model scores of synonym candidates in the context of the input particle verb as well as distributional similarity as additional re-ranking criteria. Our evaluation shows that distributional similarity as a re-ranking feature is more robust than language model scores and leads to an improved ranking of the synonym candidates. 
In addition to evaluating against a gold standard, we also present a small-scale manual evaluation.","This work was funded by the DFG Research Project ""Distributional Approaches to Semantic Relatedness"" (Moritz Wittmann, Marion Weller) and the DFG Heisenberg Fellowship SCHU-2580/1-1 (Sabine Schulte im Walde).","Automatic Extraction of Synonyms for German Particle Verbs from Parallel Data with Distributional Similarity as a Re-Ranking Feature. We present a method for the extraction of synonyms for German particle verbs based on a word-aligned German-English parallel corpus: by translating the particle verb to a pivot, which is then translated back, a set of synonym candidates can be extracted and ranked according to the respective translation probabilities. In order to deal with separated particle verbs, we apply reordering rules to the German part of the data. In our evaluation against a gold standard, we compare different pre-processing strategies (lemmatized vs. inflected forms) and introduce language model scores of synonym candidates in the context of the input particle verb as well as distributional similarity as additional re-ranking criteria. Our evaluation shows that distributional similarity as a re-ranking feature is more robust than language model scores and leads to an improved ranking of the synonym candidates. In addition to evaluating against a gold standard, we also present a small-scale manual evaluation.",2014
shioda-etal-2017-suggesting,https://aclanthology.org/W17-5911,0,,,,,,,"Suggesting Sentences for ESL using Kernel Embeddings. Sentence retrieval is an important NLP application for English as a Second Language (ESL) learners. ESL learners are familiar with web search engines, but generic web search results may not be adequate for composing documents in a specific domain. However, if we build our own search system specialized to a domain, it may be subject to the data sparseness problem. Recently proposed word2vec partially addresses the data sparseness problem, but fails to extract sentences relevant to queries owing to the modeling of the latent intent of the query. Thus, we propose a method of retrieving example sentences using kernel embeddings and N-gram windows. This method implicitly models latent intent of query and sentences, and alleviates the problem of noisy alignment. Our results show that our method achieved higher precision in sentence retrieval for ESL in the domain of a university press release corpus, as compared to a previous unsupervised method used for a semantic textual similarity task.",Suggesting Sentences for {ESL} using Kernel Embeddings,"Sentence retrieval is an important NLP application for English as a Second Language (ESL) learners. ESL learners are familiar with web search engines, but generic web search results may not be adequate for composing documents in a specific domain. However, if we build our own search system specialized to a domain, it may be subject to the data sparseness problem. Recently proposed word2vec partially addresses the data sparseness problem, but fails to extract sentences relevant to queries owing to the modeling of the latent intent of the query. Thus, we propose a method of retrieving example sentences using kernel embeddings and N-gram windows. This method implicitly models latent intent of query and sentences, and alleviates the problem of noisy alignment. Our results show that our method achieved higher precision in sentence retrieval for ESL in the domain of a university press release corpus, as compared to a previous unsupervised method used for a semantic textual similarity task.",Suggesting Sentences for ESL using Kernel Embeddings,"Sentence retrieval is an important NLP application for English as a Second Language (ESL) learners. ESL learners are familiar with web search engines, but generic web search results may not be adequate for composing documents in a specific domain. However, if we build our own search system specialized to a domain, it may be subject to the data sparseness problem. Recently proposed word2vec partially addresses the data sparseness problem, but fails to extract sentences relevant to queries owing to the modeling of the latent intent of the query. Thus, we propose a method of retrieving example sentences using kernel embeddings and N-gram windows. This method implicitly models latent intent of query and sentences, and alleviates the problem of noisy alignment. Our results show that our method achieved higher precision in sentence retrieval for ESL in the domain of a university press release corpus, as compared to a previous unsupervised method used for a semantic textual similarity task.",,"Suggesting Sentences for ESL using Kernel Embeddings. Sentence retrieval is an important NLP application for English as a Second Language (ESL) learners. ESL learners are familiar with web search engines, but generic web search results may not be adequate for composing documents in a specific domain. 
However, if we build our own search system specialized to a domain, it may be subject to the data sparseness problem. Recently proposed word2vec partially addresses the data sparseness problem, but fails to extract sentences relevant to queries owing to the modeling of the latent intent of the query. Thus, we propose a method of retrieving example sentences using kernel embeddings and N-gram windows. This method implicitly models latent intent of query and sentences, and alleviates the problem of noisy alignment. Our results show that our method achieved higher precision in sentence retrieval for ESL in the domain of a university press release corpus, as compared to a previous unsupervised method used for a semantic textual similarity task.",2017
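As a rough illustration of retrieving sentences via N-gram windows, the sketch below scores each candidate sentence by the best cosine match between an averaged query vector and the averaged vector of any n-word window. It deliberately substitutes plain averaged word vectors for the paper's kernel embeddings, and the vectors, sentences and helper names (score_sentence, avg_vector) are toy assumptions.

# Simplified sketch of N-gram-window sentence retrieval: score each candidate
# sentence by the best cosine match between the query vector and the vector of
# any n-word window in the sentence. This stand-in uses averaged toy word
# vectors; the paper's kernel embeddings are not reproduced here.

import numpy as np

def avg_vector(words, vecs, dim=4):
    known = [vecs[w] for w in words if w in vecs]
    return np.mean(known, axis=0) if known else np.zeros(dim)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def score_sentence(query, sentence, vecs, n=3):
    q = avg_vector(query.split(), vecs)
    toks = sentence.split()
    windows = [toks[i:i + n] for i in range(max(1, len(toks) - n + 1))]
    return max(cosine(q, avg_vector(w, vecs)) for w in windows)

# Toy 4-dimensional word vectors (illustrative only).
rng = np.random.default_rng(0)
vocab = "researchers announce new method for protein analysis press release".split()
vecs = {w: rng.normal(size=4) for w in vocab}

sentences = ["researchers announce new method", "press release for protein analysis"]
ranked = sorted(sentences, key=lambda s: score_sentence("new analysis method", s, vecs),
                reverse=True)
print(ranked)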
bonin-etal-2020-hbcp,https://aclanthology.org/2020.lrec-1.242,1,,,,health,,,"HBCP Corpus: A New Resource for the Analysis of Behavioural Change Intervention Reports. Due to the fast pace at which research reports in behaviour change are published, researchers, consultants and policymakers would benefit from more automatic ways to process these reports. Automatic extraction of the reports' intervention content, population, settings and their results etc. are essential in synthesising and summarising the literature. However, to the best of our knowledge, no unique resource exists at the moment to facilitate this synthesis. In this paper, we describe the construction of a corpus of published behaviour change intervention evaluation reports aimed at smoking cessation. We also describe and release the annotation of 57 entities, that can be used as an off-the-shelf data resource for tasks such as entity recognition, etc. Both the corpus and the annotation dataset are being made available to the community.",{HBCP} Corpus: A New Resource for the Analysis of Behavioural Change Intervention Reports,"Due to the fast pace at which research reports in behaviour change are published, researchers, consultants and policymakers would benefit from more automatic ways to process these reports. Automatic extraction of the reports' intervention content, population, settings and their results etc. are essential in synthesising and summarising the literature. However, to the best of our knowledge, no unique resource exists at the moment to facilitate this synthesis. In this paper, we describe the construction of a corpus of published behaviour change intervention evaluation reports aimed at smoking cessation. We also describe and release the annotation of 57 entities, that can be used as an off-the-shelf data resource for tasks such as entity recognition, etc. Both the corpus and the annotation dataset are being made available to the community.",HBCP Corpus: A New Resource for the Analysis of Behavioural Change Intervention Reports,"Due to the fast pace at which research reports in behaviour change are published, researchers, consultants and policymakers would benefit from more automatic ways to process these reports. Automatic extraction of the reports' intervention content, population, settings and their results etc. are essential in synthesising and summarising the literature. However, to the best of our knowledge, no unique resource exists at the moment to facilitate this synthesis. In this paper, we describe the construction of a corpus of published behaviour change intervention evaluation reports aimed at smoking cessation. We also describe and release the annotation of 57 entities, that can be used as an off-the-shelf data resource for tasks such as entity recognition, etc. Both the corpus and the annotation dataset are being made available to the community.",,"HBCP Corpus: A New Resource for the Analysis of Behavioural Change Intervention Reports. Due to the fast pace at which research reports in behaviour change are published, researchers, consultants and policymakers would benefit from more automatic ways to process these reports. Automatic extraction of the reports' intervention content, population, settings and their results etc. are essential in synthesising and summarising the literature. However, to the best of our knowledge, no unique resource exists at the moment to facilitate this synthesis. 
In this paper, we describe the construction of a corpus of published behaviour change intervention evaluation reports aimed at smoking cessation. We also describe and release the annotation of 57 entities, that can be used as an off-the-shelf data resource for tasks such as entity recognition, etc. Both the corpus and the annotation dataset are being made available to the community.",2020
tauchmann-mieskes-2020-language,https://aclanthology.org/2020.lrec-1.822,0,,,,,,,"Language Agnostic Automatic Summarization Evaluation. So far work on automatic summarization has dealt primarily with English data. Accordingly, evaluation methods were primarily developed with this language in mind. In our work, we present experiments of adapting available evaluation methods such as ROUGE and PYRAMID to non-English data. We base our experiments on various English and non-English homogeneous benchmark data sets as well as a non-English heterogeneous data set. Our results indicate that ROUGE can indeed be adapted to non-English data-both homogeneous and heterogeneous. Using a recent implementation of performing an automatic PYRAMID evaluation, we also show its adaptability to non-English data.",Language Agnostic Automatic Summarization Evaluation,"So far work on automatic summarization has dealt primarily with English data. Accordingly, evaluation methods were primarily developed with this language in mind. In our work, we present experiments of adapting available evaluation methods such as ROUGE and PYRAMID to non-English data. We base our experiments on various English and non-English homogeneous benchmark data sets as well as a non-English heterogeneous data set. Our results indicate that ROUGE can indeed be adapted to non-English data-both homogeneous and heterogeneous. Using a recent implementation of performing an automatic PYRAMID evaluation, we also show its adaptability to non-English data.",Language Agnostic Automatic Summarization Evaluation,"So far work on automatic summarization has dealt primarily with English data. Accordingly, evaluation methods were primarily developed with this language in mind. In our work, we present experiments of adapting available evaluation methods such as ROUGE and PYRAMID to non-English data. We base our experiments on various English and non-English homogeneous benchmark data sets as well as a non-English heterogeneous data set. Our results indicate that ROUGE can indeed be adapted to non-English data-both homogeneous and heterogeneous. Using a recent implementation of performing an automatic PYRAMID evaluation, we also show its adaptability to non-English data.","We would like to thank Yanjun Gao and Rebecca Passonneau for providing the PyrEval code as well as kindly assisting with related questions. This work has been supported by the research center for Digital Communication and Media Innovation (DKMI) and the Institute for Communication and Media (IKUM) at the University of Applied Sciences Darmstadt. Part of this research further received support from the German Research Foundation as part of the Research Training Group ""Adaptive Preparation of Information from Heterogeneous Sources"" (AIPHES) under grant No. GRK 1994/1.","Language Agnostic Automatic Summarization Evaluation. So far work on automatic summarization has dealt primarily with English data. Accordingly, evaluation methods were primarily developed with this language in mind. In our work, we present experiments of adapting available evaluation methods such as ROUGE and PYRAMID to non-English data. We base our experiments on various English and non-English homogeneous benchmark data sets as well as a non-English heterogeneous data set. Our results indicate that ROUGE can indeed be adapted to non-English data-both homogeneous and heterogeneous. Using a recent implementation of performing an automatic PYRAMID evaluation, we also show its adaptability to non-English data.",2020
moreau-vogel-2014-limitations,https://aclanthology.org/C14-1208,0,,,,,,,"Limitations of MT Quality Estimation Supervised Systems: The Tails Prediction Problem. In this paper we address the question of the reliability of the predictions made by MT Quality Estimation (QE) systems. In particular, we show that standard supervised QE systems, usually trained to minimize MAE, make serious mistakes at predicting the quality of the sentences in the tails of the quality range. We describe the problem and propose several experiments to clarify their causes and effects. We use the WMT12 and WMT13 QE Shared Task datasets to prove that our claims hold in general and are not specific to a dataset or a system.",Limitations of {MT} Quality Estimation Supervised Systems: The Tails Prediction Problem,"In this paper we address the question of the reliability of the predictions made by MT Quality Estimation (QE) systems. In particular, we show that standard supervised QE systems, usually trained to minimize MAE, make serious mistakes at predicting the quality of the sentences in the tails of the quality range. We describe the problem and propose several experiments to clarify their causes and effects. We use the WMT12 and WMT13 QE Shared Task datasets to prove that our claims hold in general and are not specific to a dataset or a system.",Limitations of MT Quality Estimation Supervised Systems: The Tails Prediction Problem,"In this paper we address the question of the reliability of the predictions made by MT Quality Estimation (QE) systems. In particular, we show that standard supervised QE systems, usually trained to minimize MAE, make serious mistakes at predicting the quality of the sentences in the tails of the quality range. We describe the problem and propose several experiments to clarify their causes and effects. We use the WMT12 and WMT13 QE Shared Task datasets to prove that our claims hold in general and are not specific to a dataset or a system.","We are grateful to Lucia Specia, Radu Soricut and Christian Buck, the organizers of the WMT 2012 and 2013 Shared Task on Quality Estimation, for releasing all the data related to the competition, including post-edited sentences, features sets, etc.This research is supported by Science Foundation Ireland (Grant 12/CE/I2267) as part of the Centre for Next Generation Localisation (www.cngl.ie) funding at Trinity College, University of Dublin.The graphics in this paper were created with R (R Core Team, 2012), using the ggplot2 library (Wickham, 2009) .","Limitations of MT Quality Estimation Supervised Systems: The Tails Prediction Problem. In this paper we address the question of the reliability of the predictions made by MT Quality Estimation (QE) systems. In particular, we show that standard supervised QE systems, usually trained to minimize MAE, make serious mistakes at predicting the quality of the sentences in the tails of the quality range. We describe the problem and propose several experiments to clarify their causes and effects. We use the WMT12 and WMT13 QE Shared Task datasets to prove that our claims hold in general and are not specific to a dataset or a system.",2014
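The tails problem discussed in this abstract is easy to demonstrate numerically: a predictor that shrinks scores towards the mean can look acceptable under an overall MAE while erring badly on the highest- and lowest-quality sentences. The scores below are synthetic, and the decile cut-off is an arbitrary choice for illustration.

# Illustration of the "tails" issue: a model whose predictions are pulled
# towards the mean can show a reasonable overall MAE while making much larger
# errors on the best and worst sentences. All scores here are invented.

import numpy as np

rng = np.random.default_rng(0)
true_quality = rng.uniform(1.0, 5.0, size=1000)           # gold quality scores
# A "regress to the mean" predictor: shrink every score towards 3.0.
predicted = 3.0 + 0.4 * (true_quality - 3.0) + rng.normal(scale=0.2, size=1000)

def mae(y_true, y_pred):
    return float(np.mean(np.abs(y_true - y_pred)))

lo, hi = np.quantile(true_quality, [0.1, 0.9])
tails = (true_quality <= lo) | (true_quality >= hi)

print("overall MAE:", round(mae(true_quality, predicted), 3))
print("tails-only MAE:", round(mae(true_quality[tails], predicted[tails]), 3))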
ejerhed-1990-swedish,https://aclanthology.org/W89-0102,0,,,,,,,"A Swedish Clause Grammar And Its Implementation. The paper is concerned with the notion of clause as a basic, minimal unit for the segmentation and processing of natural language. The first part of the paper surveys various criteria for clausehood that have been proposed in theoretical linguistics and computational linguistics, and proposes that a clause in English or Swedish or any other natural language can be defined in structural terms at the surface level as a regular expression of syntactic categories, equivalently, as a set of sequences of word classes, a possibility which has been explicitly denied by Harris (1968) and later transformational grammarians. The second part of the paper presents a grammar for Swedish clauses, and a newspaper text segmented into clauses by an experimental clause parser intended for a speech synthesis application. The third part of the paper presents some phonetic data concerning the distribution of",A {S}wedish Clause Grammar And Its Implementation,"The paper is concerned with the notion of clause as a basic, minimal unit for the segmentation and processing of natural language. The first part of the paper surveys various criteria for clausehood that have been proposed in theoretical linguistics and computational linguistics, and proposes that a clause in English or Swedish or any other natural language can be defined in structural terms at the surface level as a regular expression of syntactic categories, equivalently, as a set of sequences of word classes, a possibility which has been explicitly denied by Harris (1968) and later transformational grammarians. The second part of the paper presents a grammar for Swedish clauses, and a newspaper text segmented into clauses by an experimental clause parser intended for a speech synthesis application. The third part of the paper presents some phonetic data concerning the distribution of",A Swedish Clause Grammar And Its Implementation,"The paper is concerned with the notion of clause as a basic, minimal unit for the segmentation and processing of natural language. The first part of the paper surveys various criteria for clausehood that have been proposed in theoretical linguistics and computational linguistics, and proposes that a clause in English or Swedish or any other natural language can be defined in structural terms at the surface level as a regular expression of syntactic categories, equivalently, as a set of sequences of word classes, a possibility which has been explicitly denied by Harris (1968) and later transformational grammarians. The second part of the paper presents a grammar for Swedish clauses, and a newspaper text segmented into clauses by an experimental clause parser intended for a speech synthesis application. The third part of the paper presents some phonetic data concerning the distribution of",,"A Swedish Clause Grammar And Its Implementation. The paper is concerned with the notion of clause as a basic, minimal unit for the segmentation and processing of natural language. 
The first part of the paper surveys various criteria for clausehood that have been proposed in theoretical linguistics and computational linguistics, and proposes that a clause in English or Swedish or any other natural language can be defined in structural terms at the surface level as a regular expression of syntactic categories, equivalently, as a set of sequences of word classes, a possibility which has been explicitly denied by Harris (1968) and later transformational grammarians. The second part of the paper presents a grammar for Swedish clauses, and a newspaper text segmented into clauses by an experimental clause parser intended for a speech synthesis application. The third part of the paper presents some phonetic data concerning the distribution of",1990
rubin-vashchilko-2012-identification,https://aclanthology.org/W12-0415,1,,,,deception_detection,,,"Identification of Truth and Deception in Text: Application of Vector Space Model to Rhetorical Structure Theory. The paper proposes to use Rhetorical Structure Theory (RST) analytic framework to identify systematic differences between deceptive and truthful stories in terms of their coherence and structure. A sample of 36 elicited personal stories, self-ranked as completely truthful or completely deceptive, is manually analyzed by assigning RST discourse relations among a story's constituent parts. Vector Space Model (VSM) assesses each story's position in multi-dimensional RST space with respect to its distance to truth and deceptive centers as measures of the story's level of deception and truthfulness. Ten human judges evaluate if each story is deceptive or not, and assign their confidence levels, which produce measures of the human expected deception and truthfulness levels. The paper contributes to deception detection research and RST twofold: a) demonstration of discourse structure analysis in pragmatics as a prominent way of automated deception detection and, as such, an effective complement to lexico-semantic analysis, and b) development of RST-VSM methodology to interpret RST analysis in identification of previously unseen deceptive texts.",Identification of Truth and Deception in Text: Application of Vector Space Model to {R}hetorical {S}tructure {T}heory,"The paper proposes to use Rhetorical Structure Theory (RST) analytic framework to identify systematic differences between deceptive and truthful stories in terms of their coherence and structure. A sample of 36 elicited personal stories, self-ranked as completely truthful or completely deceptive, is manually analyzed by assigning RST discourse relations among a story's constituent parts. Vector Space Model (VSM) assesses each story's position in multi-dimensional RST space with respect to its distance to truth and deceptive centers as measures of the story's level of deception and truthfulness. Ten human judges evaluate if each story is deceptive or not, and assign their confidence levels, which produce measures of the human expected deception and truthfulness levels. The paper contributes to deception detection research and RST twofold: a) demonstration of discourse structure analysis in pragmatics as a prominent way of automated deception detection and, as such, an effective complement to lexico-semantic analysis, and b) development of RST-VSM methodology to interpret RST analysis in identification of previously unseen deceptive texts.",Identification of Truth and Deception in Text: Application of Vector Space Model to Rhetorical Structure Theory,"The paper proposes to use Rhetorical Structure Theory (RST) analytic framework to identify systematic differences between deceptive and truthful stories in terms of their coherence and structure. A sample of 36 elicited personal stories, self-ranked as completely truthful or completely deceptive, is manually analyzed by assigning RST discourse relations among a story's constituent parts. Vector Space Model (VSM) assesses each story's position in multi-dimensional RST space with respect to its distance to truth and deceptive centers as measures of the story's level of deception and truthfulness. Ten human judges evaluate if each story is deceptive or not, and assign their confidence levels, which produce measures of the human expected deception and truthfulness levels. 
The paper contributes to deception detection research and RST twofold: a) demonstration of discourse structure analysis in pragmatics as a prominent way of automated deception detection and, as such, an effective complement to lexico-semantic analysis, and b) development of RST-VSM methodology to interpret RST analysis in identification of previously unseen deceptive texts.",This research is funded by the New Research and Scholarly Initiative Award (10-303) from the Academic Development Fund at Western.,"Identification of Truth and Deception in Text: Application of Vector Space Model to Rhetorical Structure Theory. The paper proposes to use Rhetorical Structure Theory (RST) analytic framework to identify systematic differences between deceptive and truthful stories in terms of their coherence and structure. A sample of 36 elicited personal stories, self-ranked as completely truthful or completely deceptive, is manually analyzed by assigning RST discourse relations among a story's constituent parts. Vector Space Model (VSM) assesses each story's position in multi-dimensional RST space with respect to its distance to truth and deceptive centers as measures of the story's level of deception and truthfulness. Ten human judges evaluate if each story is deceptive or not, and assign their confidence levels, which produce measures of the human expected deception and truthfulness levels. The paper contributes to deception detection research and RST twofold: a) demonstration of discourse structure analysis in pragmatics as a prominent way of automated deception detection and, as such, an effective complement to lexico-semantic analysis, and b) development of RST-VSM methodology to interpret RST analysis in identification of previously unseen deceptive texts.",2012
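A minimal sketch of the RST-VSM idea described above: encode each story as a normalised vector of rhetorical-relation counts, average labelled stories into a truth centre and a deception centre, and label a new story by the nearer centre. The relation inventory, the counts and the distance choice (plain Euclidean, rather than any particular measure from the paper) are illustrative assumptions.

# Sketch of the RST-VSM idea: represent each story as a vector of rhetorical-
# relation counts, build a "truth" and a "deception" centre from labelled
# stories, and score a new story by which centre it is closer to.
# Relation inventory and counts below are invented for illustration.

import numpy as np

RELATIONS = ["elaboration", "background", "evidence", "contrast", "condition"]

def to_vector(relation_counts):
    v = np.array([relation_counts.get(r, 0) for r in RELATIONS], dtype=float)
    return v / v.sum() if v.sum() else v            # normalise to proportions

truthful = [{"elaboration": 5, "evidence": 3, "background": 2},
            {"elaboration": 4, "evidence": 2, "contrast": 1}]
deceptive = [{"elaboration": 2, "condition": 3, "contrast": 2},
             {"background": 1, "condition": 2, "contrast": 3}]

truth_centre = np.mean([to_vector(s) for s in truthful], axis=0)
decep_centre = np.mean([to_vector(s) for s in deceptive], axis=0)

def judge(story_counts):
    v = to_vector(story_counts)
    d_truth = np.linalg.norm(v - truth_centre)
    d_decep = np.linalg.norm(v - decep_centre)
    return ("truthful" if d_truth < d_decep else "deceptive"), d_truth, d_decep

print(judge({"elaboration": 3, "condition": 2, "contrast": 2}))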
venturott-mitkov-2021-fake,https://aclanthology.org/2021.triton-1.16,1,,,,disinformation_and_fake_news,,,"Fake News Detection for Portuguese with Deep Learning. The exponential growth of the internet and social media in the past decade gave way to the increase in dissemination of false or misleading information. Since the 2016 US presidential election, the term ""fake news"" became increasingly popular and this phenomenon has received more attention. In the past years several fact-checking agencies were created, but due to the great number of daily posts on social media, manual checking is insufficient. Currently, there is a pressing need for automatic fake news detection tools, either to assist manual fact-checkers or to operate as standalone tools. There are several projects underway on this topic, but most of them focus on English. This research-in-progress paper discusses the employment of deep learning methods, and the development of a tool, for detecting false news in Portuguese. As a first step we shall compare well-established architectures that were tested in other languages and analyse their performance on our Portuguese data. Based on the preliminary results of these classifiers, we shall choose a deep learning model or combine several deep learning models which hold promise to enhance the performance of our fake news detection system.",Fake News Detection for {P}ortuguese with Deep Learning,"The exponential growth of the internet and social media in the past decade gave way to the increase in dissemination of false or misleading information. Since the 2016 US presidential election, the term ""fake news"" became increasingly popular and this phenomenon has received more attention. In the past years several fact-checking agencies were created, but due to the great number of daily posts on social media, manual checking is insufficient. Currently, there is a pressing need for automatic fake news detection tools, either to assist manual fact-checkers or to operate as standalone tools. There are several projects underway on this topic, but most of them focus on English. This research-in-progress paper discusses the employment of deep learning methods, and the development of a tool, for detecting false news in Portuguese. As a first step we shall compare well-established architectures that were tested in other languages and analyse their performance on our Portuguese data. Based on the preliminary results of these classifiers, we shall choose a deep learning model or combine several deep learning models which hold promise to enhance the performance of our fake news detection system.",Fake News Detection for Portuguese with Deep Learning,"The exponential growth of the internet and social media in the past decade gave way to the increase in dissemination of false or misleading information. Since the 2016 US presidential election, the term ""fake news"" became increasingly popular and this phenomenon has received more attention. In the past years several fact-checking agencies were created, but due to the great number of daily posts on social media, manual checking is insufficient. Currently, there is a pressing need for automatic fake news detection tools, either to assist manual fact-checkers or to operate as standalone tools. There are several projects underway on this topic, but most of them focus on English. This research-in-progress paper discusses the employment of deep learning methods, and the development of a tool, for detecting false news in Portuguese. 
As a first step we shall compare well-established architectures that were tested in other languages and analyse their performance on our Portuguese data. Based on the preliminary results of these classifiers, we shall choose a deep learning model or combine several deep learning models which hold promise to enhance the performance of our fake news detection system.",,"Fake News Detection for Portuguese with Deep Learning. The exponential growth of the internet and social media in the past decade gave way to the increase in dissemination of false or misleading information. Since the 2016 US presidential election, the term ""fake news"" became increasingly popular and this phenomenon has received more attention. In the past years several fact-checking agencies were created, but due to the great number of daily posts on social media, manual checking is insufficient. Currently, there is a pressing need for automatic fake news detection tools, either to assist manual fact-checkers or to operate as standalone tools. There are several projects underway on this topic, but most of them focus on English. This research-in-progress paper discusses the employment of deep learning methods, and the development of a tool, for detecting false news in Portuguese. As a first step we shall compare well-established architectures that were tested in other languages and analyse their performance on our Portuguese data. Based on the preliminary results of these classifiers, we shall choose a deep learning model or combine several deep learning models which hold promise to enhance the performance of our fake news detection system.",2021
miao-etal-2020-diverse,https://aclanthology.org/2020.acl-main.92,0,,,,,,,"A Diverse Corpus for Evaluating and Developing English Math Word Problem Solvers. We present ASDiv (Academia Sinica Diverse MWP Dataset), a diverse (in terms of both language patterns and problem types) English math word problem (MWP) corpus for evaluating the capability of various MWP solvers. Existing MWP corpora for studying AI progress remain limited either in language usage patterns or in problem types. We thus present a new English MWP corpus with 2,305 MWPs that cover more text patterns and most problem types taught in elementary school. Each MWP is annotated with its problem type and grade level (for indicating the level of difficulty). Furthermore, we propose a metric to measure the lexicon usage diversity of a given MWP corpus, and demonstrate that ASDiv is more diverse than existing corpora. Experiments show that our proposed corpus reflects the true capability of MWP solvers more faithfully.",A Diverse Corpus for Evaluating and Developing {E}nglish Math Word Problem Solvers,"We present ASDiv (Academia Sinica Diverse MWP Dataset), a diverse (in terms of both language patterns and problem types) English math word problem (MWP) corpus for evaluating the capability of various MWP solvers. Existing MWP corpora for studying AI progress remain limited either in language usage patterns or in problem types. We thus present a new English MWP corpus with 2,305 MWPs that cover more text patterns and most problem types taught in elementary school. Each MWP is annotated with its problem type and grade level (for indicating the level of difficulty). Furthermore, we propose a metric to measure the lexicon usage diversity of a given MWP corpus, and demonstrate that ASDiv is more diverse than existing corpora. Experiments show that our proposed corpus reflects the true capability of MWP solvers more faithfully.",A Diverse Corpus for Evaluating and Developing English Math Word Problem Solvers,"We present ASDiv (Academia Sinica Diverse MWP Dataset), a diverse (in terms of both language patterns and problem types) English math word problem (MWP) corpus for evaluating the capability of various MWP solvers. Existing MWP corpora for studying AI progress remain limited either in language usage patterns or in problem types. We thus present a new English MWP corpus with 2,305 MWPs that cover more text patterns and most problem types taught in elementary school. Each MWP is annotated with its problem type and grade level (for indicating the level of difficulty). Furthermore, we propose a metric to measure the lexicon usage diversity of a given MWP corpus, and demonstrate that ASDiv is more diverse than existing corpora. Experiments show that our proposed corpus reflects the true capability of MWP solvers more faithfully.",,"A Diverse Corpus for Evaluating and Developing English Math Word Problem Solvers. We present ASDiv (Academia Sinica Diverse MWP Dataset), a diverse (in terms of both language patterns and problem types) English math word problem (MWP) corpus for evaluating the capability of various MWP solvers. Existing MWP corpora for studying AI progress remain limited either in language usage patterns or in problem types. We thus present a new English MWP corpus with 2,305 MWPs that cover more text patterns and most problem types taught in elementary school. Each MWP is annotated with its problem type and grade level (for indicating the level of difficulty). 
Furthermore, we propose a metric to measure the lexicon usage diversity of a given MWP corpus, and demonstrate that ASDiv is more diverse than existing corpora. Experiments show that our proposed corpus reflects the true capability of MWP solvers more faithfully.",2020
moreno-ortiz-etal-2002-new,http://www.lrec-conf.org/proceedings/lrec2002/pdf/181.pdf,0,,,,,,,"New Developments in Ontological Semantics. In this paper we discuss ongoing activity within the approach to natural language processing known as ontological semantics, as defined in Nirenburg and Raskin (forthcoming). After a brief discussion of the principal tenets on which this approach is built, and a revision of extant implementations that have led toward its present form, we concentrate on some specific aspects that are key to the development of this approach, such as the acquisition of the semantics of lexical items and, intimately connected with this, the ontology, the central resource in this approach. Although we review the fundamentals of the approach, the focus is on practical aspects of implementation, such as the automation of static knowledge acquisition and the acquisition of scripts to enrich the ontology further.",New Developments in Ontological Semantics,"In this paper we discuss ongoing activity within the approach to natural language processing known as ontological semantics, as defined in Nirenburg and Raskin (forthcoming). After a brief discussion of the principal tenets on which this approach is built, and a revision of extant implementations that have led toward its present form, we concentrate on some specific aspects that are key to the development of this approach, such as the acquisition of the semantics of lexical items and, intimately connected with this, the ontology, the central resource in this approach. Although we review the fundamentals of the approach, the focus is on practical aspects of implementation, such as the automation of static knowledge acquisition and the acquisition of scripts to enrich the ontology further.",New Developments in Ontological Semantics,"In this paper we discuss ongoing activity within the approach to natural language processing known as ontological semantics, as defined in Nirenburg and Raskin (forthcoming). After a brief discussion of the principal tenets on which this approach is built, and a revision of extant implementations that have led toward its present form, we concentrate on some specific aspects that are key to the development of this approach, such as the acquisition of the semantics of lexical items and, intimately connected with this, the ontology, the central resource in this approach. Although we review the fundamentals of the approach, the focus is on practical aspects of implementation, such as the automation of static knowledge acquisition and the acquisition of scripts to enrich the ontology further.",,"New Developments in Ontological Semantics. In this paper we discuss ongoing activity within the approach to natural language processing known as ontological semantics, as defined in Nirenburg and Raskin (forthcoming). After a brief discussion of the principal tenets on which this approach is built, and a revision of extant implementations that have led toward its present form, we concentrate on some specific aspects that are key to the development of this approach, such as the acquisition of the semantics of lexical items and, intimately connected with this, the ontology, the central resource in this approach. Although we review the fundamentals of the approach, the focus is on practical aspects of implementation, such as the automation of static knowledge acquisition and the acquisition of scripts to enrich the ontology further.",2002
guo-etal-2014-crab,https://aclanthology.org/C14-2017,1,,,,health,,,"CRAB 2.0: A text mining tool for supporting literature review in chemical cancer risk assessment. Chemical cancer risk assessment is a literature-dependent task which could greatly benefit from text mining support. In this paper we describe CRAB-the first publicly available tool for supporting the risk assessment workflow. CRAB, currently at version 2.0, facilitates the gathering of relevant literature via PubMed queries as well as semantic classification, statistical analysis and efficient study of the literature. The tool is freely available as an in-browser application.",{CRAB} 2.0: A text mining tool for supporting literature review in chemical cancer risk assessment,"Chemical cancer risk assessment is a literature-dependent task which could greatly benefit from text mining support. In this paper we describe CRAB-the first publicly available tool for supporting the risk assessment workflow. CRAB, currently at version 2.0, facilitates the gathering of relevant literature via PubMed queries as well as semantic classification, statistical analysis and efficient study of the literature. The tool is freely available as an in-browser application.",CRAB 2.0: A text mining tool for supporting literature review in chemical cancer risk assessment,"Chemical cancer risk assessment is a literature-dependent task which could greatly benefit from text mining support. In this paper we describe CRAB-the first publicly available tool for supporting the risk assessment workflow. CRAB, currently at version 2.0, facilitates the gathering of relevant literature via PubMed queries as well as semantic classification, statistical analysis and efficient study of the literature. The tool is freely available as an in-browser application.","This work was supported by the Royal Society, Vinnova and the Swedish Research Council.","CRAB 2.0: A text mining tool for supporting literature review in chemical cancer risk assessment. Chemical cancer risk assessment is a literature-dependent task which could greatly benefit from text mining support. In this paper we describe CRAB-the first publicly available tool for supporting the risk assessment workflow. CRAB, currently at version 2.0, facilitates the gathering of relevant literature via PubMed queries as well as semantic classification, statistical analysis and efficient study of the literature. The tool is freely available as an in-browser application.",2014
sun-grishman-2010-semi,https://aclanthology.org/C10-2137,0,,,,,,,"Semi-supervised Semantic Pattern Discovery with Guidance from Unsupervised Pattern Clusters. We present a simple algorithm for clustering semantic patterns based on distributional similarity and use cluster memberships to guide semi-supervised pattern discovery. We apply this approach to the task of relation extraction. The evaluation results demonstrate that our novel bootstrapping procedure significantly outperforms a standard bootstrapping. Most importantly, our algorithm can effectively prevent semantic drift and provide semi-supervised learning with a natural stopping criterion.",Semi-supervised Semantic Pattern Discovery with Guidance from Unsupervised Pattern Clusters,"We present a simple algorithm for clustering semantic patterns based on distributional similarity and use cluster memberships to guide semi-supervised pattern discovery. We apply this approach to the task of relation extraction. The evaluation results demonstrate that our novel bootstrapping procedure significantly outperforms a standard bootstrapping. Most importantly, our algorithm can effectively prevent semantic drift and provide semi-supervised learning with a natural stopping criterion.",Semi-supervised Semantic Pattern Discovery with Guidance from Unsupervised Pattern Clusters,"We present a simple algorithm for clustering semantic patterns based on distributional similarity and use cluster memberships to guide semi-supervised pattern discovery. We apply this approach to the task of relation extraction. The evaluation results demonstrate that our novel bootstrapping procedure significantly outperforms a standard bootstrapping. Most importantly, our algorithm can effectively prevent semantic drift and provide semi-supervised learning with a natural stopping criterion.",We would like to thank Prof. Satoshi Sekine for his useful suggestions.,"Semi-supervised Semantic Pattern Discovery with Guidance from Unsupervised Pattern Clusters. We present a simple algorithm for clustering semantic patterns based on distributional similarity and use cluster memberships to guide semi-supervised pattern discovery. We apply this approach to the task of relation extraction. The evaluation results demonstrate that our novel bootstrapping procedure significantly outperforms a standard bootstrapping. Most importantly, our algorithm can effectively prevent semantic drift and provide semi-supervised learning with a natural stopping criterion.",2010
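One way to picture how unsupervised pattern clusters can guide bootstrapping, under toy assumptions: cluster extraction patterns by cosine similarity of their context vectors, then let a bootstrapping step accept a candidate pattern only if it lands in the same cluster as a seed. The greedy clustering routine, the threshold and the vectors below are invented for illustration and are not the authors' algorithm.

# Sketch of using distributional pattern clusters to constrain bootstrapping:
# patterns are clustered by cosine similarity of their context vectors, and a
# bootstrapping step only accepts candidate patterns that fall in the same
# cluster as a seed pattern. Vectors and thresholds are toy values.

import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def cluster(patterns, vectors, threshold=0.8):
    """Greedy single-pass clustering: join a pattern to the first cluster whose
    centroid is similar enough, otherwise start a new cluster."""
    clusters = []                                   # list of {"centroid", "members"}
    for p in patterns:
        v = vectors[p]
        for c in clusters:
            if cosine(v, c["centroid"]) >= threshold:
                c["members"].append(p)
                c["centroid"] = np.mean([vectors[m] for m in c["members"]], axis=0)
                break
        else:
            clusters.append({"centroid": v.copy(), "members": [p]})
    return clusters

rng = np.random.default_rng(1)
base = rng.normal(size=8)
vectors = {
    "X was hired by Y": base + rng.normal(scale=0.05, size=8),
    "X joined Y":        base + rng.normal(scale=0.05, size=8),
    "X was born in Y":   rng.normal(size=8),
}
clusters = cluster(list(vectors), vectors)

seed = "X was hired by Y"
candidate = "X joined Y"
same_cluster = any(seed in c["members"] and candidate in c["members"] for c in clusters)
print("accept candidate" if same_cluster else "reject candidate")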
nayan-etal-2008-named,https://aclanthology.org/I08-5014,0,,,,,,,Named Entity Recognition for Indian Languages. Stub This paper talks about a new approach to recognize named entities for Indian languages. Phonetic matching technique is used to match the strings of different languages on the basis of their similar sounding property. We have tested our system with a comparable corpus of English and Hindi language data. This approach is language independent and requires only a set of rules appropriate for a language.,Named Entity Recognition for {I}ndian Languages,Stub This paper talks about a new approach to recognize named entities for Indian languages. Phonetic matching technique is used to match the strings of different languages on the basis of their similar sounding property. We have tested our system with a comparable corpus of English and Hindi language data. This approach is language independent and requires only a set of rules appropriate for a language.,Named Entity Recognition for Indian Languages,Stub This paper talks about a new approach to recognize named entities for Indian languages. Phonetic matching technique is used to match the strings of different languages on the basis of their similar sounding property. We have tested our system with a comparable corpus of English and Hindi language data. This approach is language independent and requires only a set of rules appropriate for a language.,"The authors gratefully acknowledge financial assistance from TDIL, MCIT (Govt. of India).",Named Entity Recognition for Indian Languages. Stub This paper talks about a new approach to recognize named entities for Indian languages. Phonetic matching technique is used to match the strings of different languages on the basis of their similar sounding property. We have tested our system with a comparable corpus of English and Hindi language data. This approach is language independent and requires only a set of rules appropriate for a language.,2008
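The similar-sounding-string idea in the abstract above can be illustrated with a crude Soundex-style key over romanised tokens; the paper's own language-specific rule set and transliteration step are not reproduced, and the name pairs below are toy examples.

# Minimal illustration of phonetic matching of name strings: map each romanised
# token to a crude Soundex-style key and treat tokens with equal keys as
# matches. The paper's language-specific rules are not reproduced here.

def phonetic_key(word, length=4):
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    key = word[0]
    prev = codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            key += code
        prev = code
    return (key + "000")[:length]

# English tokens vs. romanised Hindi tokens from a comparable corpus (toy data).
english = ["Delhi", "Gandhi", "Ganga"]
hindi_romanised = ["Dilli", "Gandhee", "Ganga"]

for e in english:
    matches = [h for h in hindi_romanised if phonetic_key(h) == phonetic_key(e)]
    print(e, "->", matches)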
mori-2002-stochastic,https://aclanthology.org/C02-1157,0,,,,,,,"A Stochastic Parser Based on an SLM with Arboreal Context Trees. In this paper, we present a parser based on a stochastic structured language model (SLM) with a flexible history reference mechanism. An SLM is an alternative to an n-gram model as a language model for a speech recognizer. The advantage of an SLM against an n-gram model is the ability to return the structure of a given sentence. Thus SLMs are expected to play an important part in spoken language understanding systems. The current SLMs refer to a fixed part of the history for prediction just like an n-gram model. We introduce a flexible history reference mechanism called an ACT (arboreal context tree; an extension of the context tree to tree-shaped histories) and describe a parser based on an SLM with ACTs. In the experiment, we built an SLM-based parser with a fixed history and one with ACTs, and compared their parsing accuracies. The accuracy of our parser was 92.8%, which was higher than that for the parser with the fixed history (89.8%). This result shows that the flexible history reference mechanism improves the parsing ability of an SLM, which has great importance for language understanding.",A Stochastic Parser Based on an {SLM} with Arboreal Context Trees,"In this paper, we present a parser based on a stochastic structured language model (SLM) with a flexible history reference mechanism. An SLM is an alternative to an n-gram model as a language model for a speech recognizer. The advantage of an SLM against an n-gram model is the ability to return the structure of a given sentence. Thus SLMs are expected to play an important part in spoken language understanding systems. The current SLMs refer to a fixed part of the history for prediction just like an n-gram model. We introduce a flexible history reference mechanism called an ACT (arboreal context tree; an extension of the context tree to tree-shaped histories) and describe a parser based on an SLM with ACTs. In the experiment, we built an SLM-based parser with a fixed history and one with ACTs, and compared their parsing accuracies. The accuracy of our parser was 92.8%, which was higher than that for the parser with the fixed history (89.8%). This result shows that the flexible history reference mechanism improves the parsing ability of an SLM, which has great importance for language understanding.",A Stochastic Parser Based on an SLM with Arboreal Context Trees,"In this paper, we present a parser based on a stochastic structured language model (SLM) with a flexible history reference mechanism. An SLM is an alternative to an n-gram model as a language model for a speech recognizer. The advantage of an SLM against an n-gram model is the ability to return the structure of a given sentence. Thus SLMs are expected to play an important part in spoken language understanding systems. The current SLMs refer to a fixed part of the history for prediction just like an n-gram model. We introduce a flexible history reference mechanism called an ACT (arboreal context tree; an extension of the context tree to tree-shaped histories) and describe a parser based on an SLM with ACTs. In the experiment, we built an SLM-based parser with a fixed history and one with ACTs, and compared their parsing accuracies. The accuracy of our parser was 92.8%, which was higher than that for the parser with the fixed history (89.8%). 
This result shows that the flexible history reference mechanism improves the parsing ability of an SLM, which has great importance for language understanding.",,"A Stochastic Parser Based on an SLM with Arboreal Context Trees. In this paper, we present a parser based on a stochastic structured language model (SLM) with a flexible history reference mechanism. An SLM is an alternative to an n-gram model as a language model for a speech recognizer. The advantage of an SLM against an n-gram model is the ability to return the structure of a given sentence. Thus SLMs are expected to play an important part in spoken language understanding systems. The current SLMs refer to a fixed part of the history for prediction just like an n-gram model. We introduce a flexible history reference mechanism called an ACT (arboreal context tree; an extension of the context tree to tree-shaped histories) and describe a parser based on an SLM with ACTs. In the experiment, we built an SLM-based parser with a fixed history and one with ACTs, and compared their parsing accuracies. The accuracy of our parser was 92.8%, which was higher than that for the parser with the fixed history (89.8%). This result shows that the flexible history reference mechanism improves the parsing ability of an SLM, which has great importance for language understanding.",2002
seyffarth-2019-identifying,https://aclanthology.org/W19-0115,0,,,,,,,"Identifying Participation of Individual Verbs or VerbNet Classes in the Causative Alternation. Verbs that participate in diathesis alternations have different semantics in their different syntactic environments, which need to be distinguished in order to process these verbs and their contexts correctly. We design and implement 8 approaches to the automatic identification of the causative alternation in English (3 based on VerbNet classes, 5 based on individual verbs). For verbs in this alternation, the semantic roles that contribute to the meaning of the verb can be associated with different syntactic slots. Our most successful approaches use distributional vectors and achieve an F1 score of up to 79% on a balanced test set. We also apply our approaches to the distinction between the causative alternation and the unexpressed object alternation. Our best system for this is based on syntactic information, with an F1 score of 75% on a balanced test set.",Identifying Participation of Individual Verbs or {V}erb{N}et Classes in the Causative Alternation,"Verbs that participate in diathesis alternations have different semantics in their different syntactic environments, which need to be distinguished in order to process these verbs and their contexts correctly. We design and implement 8 approaches to the automatic identification of the causative alternation in English (3 based on VerbNet classes, 5 based on individual verbs). For verbs in this alternation, the semantic roles that contribute to the meaning of the verb can be associated with different syntactic slots. Our most successful approaches use distributional vectors and achieve an F1 score of up to 79% on a balanced test set. We also apply our approaches to the distinction between the causative alternation and the unexpressed object alternation. Our best system for this is based on syntactic information, with an F1 score of 75% on a balanced test set.",Identifying Participation of Individual Verbs or VerbNet Classes in the Causative Alternation,"Verbs that participate in diathesis alternations have different semantics in their different syntactic environments, which need to be distinguished in order to process these verbs and their contexts correctly. We design and implement 8 approaches to the automatic identification of the causative alternation in English (3 based on VerbNet classes, 5 based on individual verbs). For verbs in this alternation, the semantic roles that contribute to the meaning of the verb can be associated with different syntactic slots. Our most successful approaches use distributional vectors and achieve an F1 score of up to 79% on a balanced test set. We also apply our approaches to the distinction between the causative alternation and the unexpressed object alternation. Our best system for this is based on syntactic information, with an F1 score of 75% on a balanced test set.","The work presented in this paper was financed by the Deutsche Forschungsgemeinschaft (DFG) within the CRC 991 ""The Structure of Representations in Language, Cognition, and Science"". The author wishes to thank Laura Kallmeyer, Kilian Evang, Jakub Waszczuk, and three anonymous reviewers for their valuable feedback and helpful comments.","Identifying Participation of Individual Verbs or VerbNet Classes in the Causative Alternation. 
Verbs that participate in diathesis alternations have different semantics in their different syntactic environments, which need to be distinguished in order to process these verbs and their contexts correctly. We design and implement 8 approaches to the automatic identification of the causative alternation in English (3 based on VerbNet classes, 5 based on individual verbs). For verbs in this alternation, the semantic roles that contribute to the meaning of the verb can be associated with different syntactic slots. Our most successful approaches use distributional vectors and achieve an F1 score of up to 79% on a balanced test set. We also apply our approaches to the distinction between the causative alternation and the unexpressed object alternation. Our best system for this is based on syntactic information, with an F1 score of 75% on a balanced test set.",2019
brooke-etal-2012-building,https://aclanthology.org/W12-2205,1,,,,education,,,"Building Readability Lexicons with Unannotated Corpora. Lexicons of word difficulty are useful for various educational applications, including readability classification and text simplification. In this work, we explore automatic creation of these lexicons using methods which go beyond simple term frequency, but without relying on age-graded texts. In particular, we derive information for each word type from the readability of the web documents they appear in and the words they co-occur with, linearly combining these various features. We show the efficacy of this approach by comparing our lexicon with an existing coarse-grained, low-coverage resource and a new crowdsourced annotation.",Building Readability Lexicons with Unannotated Corpora,"Lexicons of word difficulty are useful for various educational applications, including readability classification and text simplification. In this work, we explore automatic creation of these lexicons using methods which go beyond simple term frequency, but without relying on age-graded texts. In particular, we derive information for each word type from the readability of the web documents they appear in and the words they co-occur with, linearly combining these various features. We show the efficacy of this approach by comparing our lexicon with an existing coarse-grained, low-coverage resource and a new crowdsourced annotation.",Building Readability Lexicons with Unannotated Corpora,"Lexicons of word difficulty are useful for various educational applications, including readability classification and text simplification. In this work, we explore automatic creation of these lexicons using methods which go beyond simple term frequency, but without relying on age-graded texts. In particular, we derive information for each word type from the readability of the web documents they appear in and the words they co-occur with, linearly combining these various features. We show the efficacy of this approach by comparing our lexicon with an existing coarse-grained, low-coverage resource and a new crowdsourced annotation.",This work was financially supported by the Natural Sciences and Engineering Research Council of Canada.,"Building Readability Lexicons with Unannotated Corpora. Lexicons of word difficulty are useful for various educational applications, including readability classification and text simplification. In this work, we explore automatic creation of these lexicons using methods which go beyond simple term frequency, but without relying on age-graded texts. In particular, we derive information for each word type from the readability of the web documents they appear in and the words they co-occur with, linearly combining these various features. We show the efficacy of this approach by comparing our lexicon with an existing coarse-grained, low-coverage resource and a new crowdsourced annotation.",2012
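A small sketch of the linear-combination idea above, under invented data: each word gets two features (the average readability of the toy documents it occurs in and its log frequency), the features are z-scored, and a weighted sum gives a difficulty score. The weights 0.7 and -0.3 and the document readability scores are arbitrary placeholders, not values from the paper.

# Sketch of deriving a word-difficulty score by linearly combining per-word
# features: the average readability of the (toy) documents a word appears in
# and its log frequency. Weights and data are illustrative only.

import math
from collections import defaultdict

# Toy web documents with a precomputed readability score (higher = harder).
documents = [
    ("the cat sat on the mat", 1.0),
    ("the committee ratified the amendment", 4.0),
    ("he ate the cake", 1.5),
    ("the legislature convened to ratify the treaty", 4.5),
]

doc_read = defaultdict(list)
freq = defaultdict(int)
for text, readability in documents:
    for tok in text.split():
        doc_read[tok].append(readability)
        freq[tok] += 1

def zscores(values):
    mean = sum(values.values()) / len(values)
    var = sum((v - mean) ** 2 for v in values.values()) / len(values)
    std = math.sqrt(var) or 1.0
    return {k: (v - mean) / std for k, v in values.items()}

read_feat = zscores({w: sum(r) / len(r) for w, r in doc_read.items()})
freq_feat = zscores({w: math.log(c) for w, c in freq.items()})

w_read, w_freq = 0.7, -0.3        # harder documents raise difficulty, frequency lowers it
difficulty = {w: w_read * read_feat[w] + w_freq * freq_feat[w] for w in freq}
for word, score in sorted(difficulty.items(), key=lambda x: -x[1])[:5]:
    print(f"{word:12s} {score:+.2f}")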
zhou-etal-2019-unsupervised,https://aclanthology.org/D19-1192,0,,,,,,,"Unsupervised Context Rewriting for Open Domain Conversation. Context modeling has a pivotal role in open domain conversation. Existing works either use heuristic methods or jointly learn context modeling and response generation with an encoder-decoder framework. This paper proposes an explicit context rewriting method, which rewrites the last utterance by considering context history. We leverage pseudoparallel data and elaborate a context rewriting network, which is built upon the Copy-Net with the reinforcement learning method. The rewritten utterance is beneficial to candidate retrieval, explainable context modeling, as well as enabling to employ a single-turn framework to the multi-turn scenario. The empirical results show that our model outperforms baselines in terms of the rewriting quality, the multi-turn response generation, and the end-to-end retrieval-based chatbots.",Unsupervised Context Rewriting for Open Domain Conversation,"Context modeling has a pivotal role in open domain conversation. Existing works either use heuristic methods or jointly learn context modeling and response generation with an encoder-decoder framework. This paper proposes an explicit context rewriting method, which rewrites the last utterance by considering context history. We leverage pseudoparallel data and elaborate a context rewriting network, which is built upon the Copy-Net with the reinforcement learning method. The rewritten utterance is beneficial to candidate retrieval, explainable context modeling, as well as enabling to employ a single-turn framework to the multi-turn scenario. The empirical results show that our model outperforms baselines in terms of the rewriting quality, the multi-turn response generation, and the end-to-end retrieval-based chatbots.",Unsupervised Context Rewriting for Open Domain Conversation,"Context modeling has a pivotal role in open domain conversation. Existing works either use heuristic methods or jointly learn context modeling and response generation with an encoder-decoder framework. This paper proposes an explicit context rewriting method, which rewrites the last utterance by considering context history. We leverage pseudoparallel data and elaborate a context rewriting network, which is built upon the Copy-Net with the reinforcement learning method. The rewritten utterance is beneficial to candidate retrieval, explainable context modeling, as well as enabling to employ a single-turn framework to the multi-turn scenario. The empirical results show that our model outperforms baselines in terms of the rewriting quality, the multi-turn response generation, and the end-to-end retrieval-based chatbots.","We are thankful to Yue Liu, Sawyer Zeng and Libin Shi for their supportive work. We also gratefully thank the anonymous reviewers for their insightful comments.","Unsupervised Context Rewriting for Open Domain Conversation. Context modeling has a pivotal role in open domain conversation. Existing works either use heuristic methods or jointly learn context modeling and response generation with an encoder-decoder framework. This paper proposes an explicit context rewriting method, which rewrites the last utterance by considering context history. We leverage pseudoparallel data and elaborate a context rewriting network, which is built upon the Copy-Net with the reinforcement learning method. 
The rewritten utterance is beneficial to candidate retrieval, explainable context modeling, as well as enabling to employ a single-turn framework to the multi-turn scenario. The empirical results show that our model outperforms baselines in terms of the rewriting quality, the multi-turn response generation, and the end-to-end retrieval-based chatbots.",2019
merchant-1993-tipster,https://aclanthology.org/X93-1001,0,,,,,,,"TIPSTER Program Overview. The task of TIPSTER Phase I was to advance the state of the art in two language technologies, Document Detection and Information Extraction. Document Detection includes two subtasks, routing (running static queries against a stream of new data), and retrieval (running ad hoc queries against archival data).",{TIPSTER} Program Overview,"The task of TIPSTER Phase I was to advance the state of the art in two language technologies, Document Detection and Information Extraction. Document Detection includes two subtasks, routing (running static queries against a stream of new data), and retrieval (running ad hoc queries against archival data).",TIPSTER Program Overview,"The task of TIPSTER Phase I was to advance the state of the art in two language technologies, Document Detection and Information Extraction. Document Detection includes two subtasks, routing (running static queries against a stream of new data), and retrieval (running ad hoc queries against archival data).",,"TIPSTER Program Overview. The task of TIPSTER Phase I was to advance the state of the art in two language technologies, Document Detection and Information Extraction. Document Detection includes two subtasks, routing (running static queries against a stream of new data), and retrieval (running ad hoc queries against archival data).",1993
kilgarriff-1997-using,https://aclanthology.org/W97-0122,0,,,,,,,"Using Word Frequency Lists to Measure Corpus Homogeneity and Similarity between Corpora. How similar are two corpora? A measure of corpus similarity would be very useful for lexicography and language engineering. Word frequency lists are cheap and easy to generate so a measure based on them would be of use as a quick guide in many circumstances; for example, to judge how a newly available corpus related to existing resources, or how easy it might be to port an NLP system designed to work with one text type to work with another. We show that corpus similarity can only be interpreted in the light of corpus homogeneity. The paper presents a measure, based on the χ2 statistic, for measuring both corpus similarity and corpus homogeneity. The measure is compared with a rank-based measure and shown to outperform it. Some results are presented. A method for evaluating the accuracy of the measure is introduced and some results of using the measure are presented.",Using Word Frequency Lists to Measure Corpus Homogeneity and Similarity between Corpora,"How similar are two corpora? A measure of corpus similarity would be very useful for lexicography and language engineering. Word frequency lists are cheap and easy to generate so a measure based on them would be of use as a quick guide in many circumstances; for example, to judge how a newly available corpus related to existing resources, or how easy it might be to port an NLP system designed to work with one text type to work with another. We show that corpus similarity can only be interpreted in the light of corpus homogeneity. The paper presents a measure, based on the χ2 statistic, for measuring both corpus similarity and corpus homogeneity. The measure is compared with a rank-based measure and shown to outperform it. Some results are presented. A method for evaluating the accuracy of the measure is introduced and some results of using the measure are presented.",Using Word Frequency Lists to Measure Corpus Homogeneity and Similarity between Corpora,"How similar are two corpora? A measure of corpus similarity would be very useful for lexicography and language engineering. Word frequency lists are cheap and easy to generate so a measure based on them would be of use as a quick guide in many circumstances; for example, to judge how a newly available corpus related to existing resources, or how easy it might be to port an NLP system designed to work with one text type to work with another. 
We show that corpus similarity can only be interpreted in the light of corpus homogeneity. The paper presents a measure, based on the χ2 statistic, for measuring both corpus similarity and corpus homogeneity. The measure is compared with a rank-based measure and shown to outperform it. Some results are presented. A method for evaluating the accuracy of the measure is introduced and some results of using the measure are presented.",,"Using Word Frequency Lists to Measure Corpus Homogeneity and Similarity between Corpora. How similar are two corpora? A measure of corpus similarity would be very useful for lexicography and language engineering. Word frequency lists are cheap and easy to generate so a measure based on them would be of use as a quick guide in many circumstances; for example, to judge how a newly available corpus related to existing resources, or how easy it might be to port an NLP system designed to work with one text type to work with another. We show that corpus similarity can only be interpreted in the light of corpus homogeneity. The paper presents a measure, based on the χ2 statistic, for measuring both corpus similarity and corpus homogeneity. The measure is compared with a rank-based measure and shown to outperform it. Some results are presented. A method for evaluating the accuracy of the measure is introduced and some results of using the measure are presented.",1997
song-lee-2017-learning,https://aclanthology.org/E17-2116,0,,,,,,,"Learning User Embeddings from Emails. Many important email-related tasks, such as email classification or search, highly rely on building quality document representations (e.g., bag-of-words or key phrases) to assist matching and understanding. Despite prior success on representing textual messages, creating quality user representations from emails was overlooked. In this paper, we propose to represent users using embeddings that are trained to reflect the email communication network. Our experiments on Enron dataset suggest that the resulting embeddings capture the semantic distance between users. To assess the quality of embeddings in a real-world application, we carry out auto-foldering task where the lexical representation of an email is enriched with user embedding features. Our results show that folder prediction accuracy is improved when embedding features are present across multiple settings.",Learning User Embeddings from Emails,"Many important email-related tasks, such as email classification or search, highly rely on building quality document representations (e.g., bag-of-words or key phrases) to assist matching and understanding. Despite prior success on representing textual messages, creating quality user representations from emails was overlooked. In this paper, we propose to represent users using embeddings that are trained to reflect the email communication network. Our experiments on Enron dataset suggest that the resulting embeddings capture the semantic distance between users. To assess the quality of embeddings in a real-world application, we carry out auto-foldering task where the lexical representation of an email is enriched with user embedding features. Our results show that folder prediction accuracy is improved when embedding features are present across multiple settings.",Learning User Embeddings from Emails,"Many important email-related tasks, such as email classification or search, highly rely on building quality document representations (e.g., bag-of-words or key phrases) to assist matching and understanding. Despite prior success on representing textual messages, creating quality user representations from emails was overlooked. In this paper, we propose to represent users using embeddings that are trained to reflect the email communication network. Our experiments on Enron dataset suggest that the resulting embeddings capture the semantic distance between users. To assess the quality of embeddings in a real-world application, we carry out auto-foldering task where the lexical representation of an email is enriched with user embedding features. Our results show that folder prediction accuracy is improved when embedding features are present across multiple settings.",,"Learning User Embeddings from Emails. Many important email-related tasks, such as email classification or search, highly rely on building quality document representations (e.g., bag-of-words or key phrases) to assist matching and understanding. Despite prior success on representing textual messages, creating quality user representations from emails was overlooked. In this paper, we propose to represent users using embeddings that are trained to reflect the email communication network. Our experiments on Enron dataset suggest that the resulting embeddings capture the semantic distance between users. 
To assess the quality of embeddings in a real-world application, we carry out an auto-foldering task where the lexical representation of an email is enriched with user embedding features. Our results show that folder prediction accuracy is improved when embedding features are present across multiple settings.",2017
xiong-etal-2019-open,https://aclanthology.org/D19-1521,0,,,,,,,"Open Domain Web Keyphrase Extraction Beyond Language Modeling. This paper studies keyphrase extraction in real-world scenarios where documents are from diverse domains and have variant content quality. We curate and release OpenKP, a large scale open domain keyphrase extraction dataset with near one hundred thousand web documents and expert keyphrase annotations. To handle the variations of domain and content quality, we develop BLING-KPE, a neural keyphrase extraction model that goes beyond language understanding using visual presentations of documents and weak supervision from search queries. Experimental results on OpenKP confirm the effectiveness of BLING-KPE and the contributions of its neural architecture, visual features, and search log weak supervision. Zero-shot evaluations on DUC-2001 demonstrate the improved generalization ability of learning from the open domain data compared to a specific domain.",Open Domain Web Keyphrase Extraction Beyond Language Modeling,"This paper studies keyphrase extraction in real-world scenarios where documents are from diverse domains and have variant content quality. We curate and release OpenKP, a large scale open domain keyphrase extraction dataset with near one hundred thousand web documents and expert keyphrase annotations. To handle the variations of domain and content quality, we develop BLING-KPE, a neural keyphrase extraction model that goes beyond language understanding using visual presentations of documents and weak supervision from search queries. Experimental results on OpenKP confirm the effectiveness of BLING-KPE and the contributions of its neural architecture, visual features, and search log weak supervision. Zero-shot evaluations on DUC-2001 demonstrate the improved generalization ability of learning from the open domain data compared to a specific domain.",Open Domain Web Keyphrase Extraction Beyond Language Modeling,"This paper studies keyphrase extraction in real-world scenarios where documents are from diverse domains and have variant content quality. We curate and release OpenKP, a large scale open domain keyphrase extraction dataset with near one hundred thousand web documents and expert keyphrase annotations. To handle the variations of domain and content quality, we develop BLING-KPE, a neural keyphrase extraction model that goes beyond language understanding using visual presentations of documents and weak supervision from search queries. Experimental results on OpenKP confirm the effectiveness of BLING-KPE and the contributions of its neural architecture, visual features, and search log weak supervision. Zero-shot evaluations on DUC-2001 demonstrate the improved generalization ability of learning from the open domain data compared to a specific domain.",,"Open Domain Web Keyphrase Extraction Beyond Language Modeling. This paper studies keyphrase extraction in real-world scenarios where documents are from diverse domains and have variant content quality. We curate and release OpenKP, a large scale open domain keyphrase extraction dataset with near one hundred thousand web documents and expert keyphrase annotations. To handle the variations of domain and content quality, we develop BLING-KPE, a neural keyphrase extraction model that goes beyond language understanding using visual presentations of documents and weak supervision from search queries. 
Experimental results on OpenKP confirm the effectiveness of BLING-KPE and the contributions of its neural architecture, visual features, and search log weak supervision. Zero-shot evaluations on DUC-2001 demonstrate the improved generalization ability of learning from the open domain data compared to a specific domain.",2019
yu-kubler-2011-filling,https://aclanthology.org/W11-0323,0,,,,,,,"Filling the Gap: Semi-Supervised Learning for Opinion Detection Across Domains. We investigate the use of Semi-Supervised Learning (SSL) in opinion detection both in sparse data situations and for domain adaptation. We show that co-training reaches the best results in an in-domain setting with small labeled data sets, with a maximum absolute gain of 33.5%. For domain transfer, we show that self-training gains an absolute improvement in labeling accuracy for blog data of 16% over the supervised approach with target domain training data.",Filling the Gap: Semi-Supervised Learning for Opinion Detection Across Domains,"We investigate the use of Semi-Supervised Learning (SSL) in opinion detection both in sparse data situations and for domain adaptation. We show that co-training reaches the best results in an in-domain setting with small labeled data sets, with a maximum absolute gain of 33.5%. For domain transfer, we show that self-training gains an absolute improvement in labeling accuracy for blog data of 16% over the supervised approach with target domain training data.",Filling the Gap: Semi-Supervised Learning for Opinion Detection Across Domains,"We investigate the use of Semi-Supervised Learning (SSL) in opinion detection both in sparse data situations and for domain adaptation. We show that co-training reaches the best results in an in-domain setting with small labeled data sets, with a maximum absolute gain of 33.5%. For domain transfer, we show that self-training gains an absolute improvement in labeling accuracy for blog data of 16% over the supervised approach with target domain training data.",,"Filling the Gap: Semi-Supervised Learning for Opinion Detection Across Domains. We investigate the use of Semi-Supervised Learning (SSL) in opinion detection both in sparse data situations and for domain adaptation. We show that co-training reaches the best results in an in-domain setting with small labeled data sets, with a maximum absolute gain of 33.5%. For domain transfer, we show that self-training gains an absolute improvement in labeling accuracy for blog data of 16% over the supervised approach with target domain training data.",2011
vassileva-etal-2021-automatic-transformation,https://aclanthology.org/2021.ranlp-srw.30,1,,,,health,,,"Automatic Transformation of Clinical Narratives into Structured Format. Vast amounts of data in healthcare are available in unstructured text format, usually in the local language of the countries. These documents contain valuable information. Secondary use of clinical narratives and information extraction of key facts and relations from them about the patient disease history can foster preventive medicine and improve healthcare. In this paper, we propose a hybrid method for the automatic transformation of clinical text into a structured format. The documents are automatically sectioned into the following parts: diagnosis, patient history, patient status, lab results. For the ""Diagnosis"" section a deep learning text-based encoding into ICD-10 codes is applied using MBG-ClinicalBERT-a fine-tuned ClinicalBERT model for Bulgarian medical text. From the ""Patient History"" section, we identify patient symptoms using a rule-based approach enhanced with similarity search based on MBG-ClinicalBERT word embeddings. We also identify symptom relations like negation. For the ""Patient Status"" description, binary classification is used to determine the status of each anatomic organ. In this paper, we demonstrate different methods for adapting NLP tools for English and other languages to a low resource language like Bulgarian.",Automatic Transformation of Clinical Narratives into Structured Format,"Vast amounts of data in healthcare are available in unstructured text format, usually in the local language of the countries. These documents contain valuable information. Secondary use of clinical narratives and information extraction of key facts and relations from them about the patient disease history can foster preventive medicine and improve healthcare. In this paper, we propose a hybrid method for the automatic transformation of clinical text into a structured format. The documents are automatically sectioned into the following parts: diagnosis, patient history, patient status, lab results. For the ""Diagnosis"" section a deep learning text-based encoding into ICD-10 codes is applied using MBG-ClinicalBERT-a fine-tuned ClinicalBERT model for Bulgarian medical text. From the ""Patient History"" section, we identify patient symptoms using a rule-based approach enhanced with similarity search based on MBG-ClinicalBERT word embeddings. We also identify symptom relations like negation. For the ""Patient Status"" description, binary classification is used to determine the status of each anatomic organ. In this paper, we demonstrate different methods for adapting NLP tools for English and other languages to a low resource language like Bulgarian.",Automatic Transformation of Clinical Narratives into Structured Format,"Vast amounts of data in healthcare are available in unstructured text format, usually in the local language of the countries. These documents contain valuable information. Secondary use of clinical narratives and information extraction of key facts and relations from them about the patient disease history can foster preventive medicine and improve healthcare. In this paper, we propose a hybrid method for the automatic transformation of clinical text into a structured format. The documents are automatically sectioned into the following parts: diagnosis, patient history, patient status, lab results. 
For the ""Diagnosis"" section a deep learning text-based encoding into ICD-10 codes is applied using MBG-ClinicalBERT-a fine-tuned ClinicalBERT model for Bulgarian medical text. From the ""Patient History"" section, we identify patient symptoms using a rule-based approach enhanced with similarity search based on MBG-ClinicalBERT word embeddings. We also identify symptom relations like negation. For the ""Patient Status"" description, binary classification is used to determine the status of each anatomic organ. In this paper, we demonstrate different methods for adapting NLP tools for English and other languages to a low resource language like Bulgarian.","This research is funded by the Bulgarian Ministry of Education and Science, grant DO1-200/2018 'Electronic health care in Bulgaria' (e-Zdrave).Also is partially funded via GATE project by the EU Horizon 2020 WIDESPREAD-2018-2020 TEAMING Phase 2 under GA No. 857155 and OP SE4SG under GA No. BG05M2OP001-1.003-0002-C01.","Automatic Transformation of Clinical Narratives into Structured Format. Vast amounts of data in healthcare are available in unstructured text format, usually in the local language of the countries. These documents contain valuable information. Secondary use of clinical narratives and information extraction of key facts and relations from them about the patient disease history can foster preventive medicine and improve healthcare. In this paper, we propose a hybrid method for the automatic transformation of clinical text into a structured format. The documents are automatically sectioned into the following parts: diagnosis, patient history, patient status, lab results. For the ""Diagnosis"" section a deep learning text-based encoding into ICD-10 codes is applied using MBG-ClinicalBERT-a fine-tuned ClinicalBERT model for Bulgarian medical text. From the ""Patient History"" section, we identify patient symptoms using a rule-based approach enhanced with similarity search based on MBG-ClinicalBERT word embeddings. We also identify symptom relations like negation. For the ""Patient Status"" description, binary classification is used to determine the status of each anatomic organ. In this paper, we demonstrate different methods for adapting NLP tools for English and other languages to a low resource language like Bulgarian.",2021
che-etal-2009-multilingual,https://aclanthology.org/W09-1207,0,,,,,,,"Multilingual Dependency-based Syntactic and Semantic Parsing. Our CoNLL 2009 Shared Task system includes three cascaded components: syntactic parsing, predicate classification, and semantic role labeling. A pseudo-projective high-order graph-based model is used in our syntactic dependency parser. A support vector machine (SVM) model is used to classify predicate senses. Semantic role labeling is achieved using maximum entropy (MaxEnt) model based semantic role classification and integer linear programming (ILP) based post inference. Finally, we win the first place in the joint task, including both the closed and open challenges.",Multilingual Dependency-based Syntactic and Semantic Parsing,"Our CoNLL 2009 Shared Task system includes three cascaded components: syntactic parsing, predicate classification, and semantic role labeling. A pseudo-projective high-order graph-based model is used in our syntactic dependency parser. A support vector machine (SVM) model is used to classify predicate senses. Semantic role labeling is achieved using maximum entropy (MaxEnt) model based semantic role classification and integer linear programming (ILP) based post inference. Finally, we win the first place in the joint task, including both the closed and open challenges.",Multilingual Dependency-based Syntactic and Semantic Parsing,"Our CoNLL 2009 Shared Task system includes three cascaded components: syntactic parsing, predicate classification, and semantic role labeling. A pseudo-projective high-order graph-based model is used in our syntactic dependency parser. A support vector machine (SVM) model is used to classify predicate senses. Semantic role labeling is achieved using maximum entropy (MaxEnt) model based semantic role classification and integer linear programming (ILP) based post inference. Finally, we win the first place in the joint task, including both the closed and open challenges.","This work was supported by National Natural Science Foundation of China (NSFC) via grant 60803093, 60675034, and the ""863"" National High-Tech Research and Development of China via grant 2008AA01Z144.","Multilingual Dependency-based Syntactic and Semantic Parsing. Our CoNLL 2009 Shared Task system includes three cascaded components: syntactic parsing, predicate classification, and semantic role labeling. A pseudo-projective high-order graph-based model is used in our syntactic dependency parser. A support vector machine (SVM) model is used to classify predicate senses. Semantic role labeling is achieved using maximum entropy (MaxEnt) model based semantic role classification and integer linear programming (ILP) based post inference. Finally, we win the first place in the joint task, including both the closed and open challenges.",2009
minkov-zettlemoyer-2012-discriminative,https://aclanthology.org/P12-1089,0,,,,,,,"Discriminative Learning for Joint Template Filling. This paper presents a joint model for template filling, where the goal is to automatically specify the fields of target relations such as seminar announcements or corporate acquisition events. The approach models mention detection, unification and field extraction in a flexible, feature-rich model that allows for joint modeling of interdependencies at all levels and across fields. Such an approach can, for example, learn likely event durations and the fact that start times should come before end times. While the joint inference space is large, we demonstrate effective learning with a Perceptron-style approach that uses simple, greedy beam decoding. Empirical results in two benchmark domains demonstrate consistently strong performance on both mention detection and template filling tasks.",Discriminative Learning for Joint Template Filling,"This paper presents a joint model for template filling, where the goal is to automatically specify the fields of target relations such as seminar announcements or corporate acquisition events. The approach models mention detection, unification and field extraction in a flexible, feature-rich model that allows for joint modeling of interdependencies at all levels and across fields. Such an approach can, for example, learn likely event durations and the fact that start times should come before end times. While the joint inference space is large, we demonstrate effective learning with a Perceptron-style approach that uses simple, greedy beam decoding. Empirical results in two benchmark domains demonstrate consistently strong performance on both mention detection and template filling tasks.",Discriminative Learning for Joint Template Filling,"This paper presents a joint model for template filling, where the goal is to automatically specify the fields of target relations such as seminar announcements or corporate acquisition events. The approach models mention detection, unification and field extraction in a flexible, feature-rich model that allows for joint modeling of interdependencies at all levels and across fields. Such an approach can, for example, learn likely event durations and the fact that start times should come before end times. While the joint inference space is large, we demonstrate effective learning with a Perceptron-style approach that uses simple, greedy beam decoding. Empirical results in two benchmark domains demonstrate consistently strong performance on both mention detection and template filling tasks.",,"Discriminative Learning for Joint Template Filling. This paper presents a joint model for template filling, where the goal is to automatically specify the fields of target relations such as seminar announcements or corporate acquisition events. The approach models mention detection, unification and field extraction in a flexible, feature-rich model that allows for joint modeling of interdependencies at all levels and across fields. Such an approach can, for example, learn likely event durations and the fact that start times should come before end times. While the joint inference space is large, we demonstrate effective learning with a Perceptron-style approach that uses simple, greedy beam decoding. Empirical results in two benchmark domains demonstrate consistently strong performance on both mention detection and template filling tasks.",2012
jones-etal-2014-finding,https://aclanthology.org/C14-1044,0,,,,,,,"Finding Zelig in Text: A Measure for Normalising Linguistic Accommodation. Linguistic accommodation is a recognised indicator of social power and social distance. However, different individuals will vary their language to different degrees, and only a portion of this variance will be due to accommodation. This paper presents the Zelig Quotient, a method of normalising linguistic variation towards a particular individual, using an author's other communications as a baseline, thence to derive a method for identifying accommodation-induced variation with statistical significance. This work provides a platform for future efforts towards examining the importance of such phenomena in large communications datasets.",Finding Zelig in Text: A Measure for Normalising Linguistic Accommodation,"Linguistic accommodation is a recognised indicator of social power and social distance. However, different individuals will vary their language to different degrees, and only a portion of this variance will be due to accommodation. This paper presents the Zelig Quotient, a method of normalising linguistic variation towards a particular individual, using an author's other communications as a baseline, thence to derive a method for identifying accommodation-induced variation with statistical significance. This work provides a platform for future efforts towards examining the importance of such phenomena in large communications datasets.",Finding Zelig in Text: A Measure for Normalising Linguistic Accommodation,"Linguistic accommodation is a recognised indicator of social power and social distance. However, different individuals will vary their language to different degrees, and only a portion of this variance will be due to accommodation. This paper presents the Zelig Quotient, a method of normalising linguistic variation towards a particular individual, using an author's other communications as a baseline, thence to derive a method for identifying accommodation-induced variation with statistical significance. This work provides a platform for future efforts towards examining the importance of such phenomena in large communications datasets.",,"Finding Zelig in Text: A Measure for Normalising Linguistic Accommodation. Linguistic accommodation is a recognised indicator of social power and social distance. However, different individuals will vary their language to different degrees, and only a portion of this variance will be due to accommodation. This paper presents the Zelig Quotient, a method of normalising linguistic variation towards a particular individual, using an author's other communications as a baseline, thence to derive a method for identifying accommodation-induced variation with statistical significance. This work provides a platform for future efforts towards examining the importance of such phenomena in large communications datasets.",2014
rajalakshmi-etal-2022-dlrg,https://aclanthology.org/2022.dravidianlangtech-1.32,1,,,,hate_speech,,,"DLRG@DravidianLangTech-ACL2022: Abusive Comment Detection in Tamil using Multilingual Transformer Models. Online Social Network has let people connect and interact with each other. It does, however, also provide a platform for online abusers to propagate abusive content. The majority of these abusive remarks are written in a multilingual style, which allows them to easily slip past internet inspection. This paper presents a system developed for the Shared Task on Abusive Comment Detection (Misogyny, Misandry, Homophobia, Transphobic, Xenophobia, CounterSpeech, Hope Speech) in Tamil DravidianLangTech@ACL 2022 to detect the abusive category of each comment. We approach the task with three methodologies-Machine Learning, Deep Learning and Transformer-based modeling, for two sets of data-Tamil and Tamil+English language dataset. The dataset used in our system can be accessed from the competition on CodaLab. For Machine Learning, eight algorithms were implemented, among which Random Forest gave the best result with Tamil+English dataset, with a weighted average F1-score of 0.78. For Deep Learning, Bi-Directional LSTM gave best result with pre-trained word embeddings. In Transformer-based modeling, we used IndicBERT and mBERT with fine-tuning, among which mBERT gave the best result for Tamil dataset with a weighted average F1-score of 0.7.",{DLRG}@{D}ravidian{L}ang{T}ech-{ACL}2022: Abusive Comment Detection in {T}amil using Multilingual Transformer Models,"Online Social Network has let people connect and interact with each other. It does, however, also provide a platform for online abusers to propagate abusive content. The majority of these abusive remarks are written in a multilingual style, which allows them to easily slip past internet inspection. This paper presents a system developed for the Shared Task on Abusive Comment Detection (Misogyny, Misandry, Homophobia, Transphobic, Xenophobia, CounterSpeech, Hope Speech) in Tamil DravidianLangTech@ACL 2022 to detect the abusive category of each comment. We approach the task with three methodologies-Machine Learning, Deep Learning and Transformer-based modeling, for two sets of data-Tamil and Tamil+English language dataset. The dataset used in our system can be accessed from the competition on CodaLab. For Machine Learning, eight algorithms were implemented, among which Random Forest gave the best result with Tamil+English dataset, with a weighted average F1-score of 0.78. For Deep Learning, Bi-Directional LSTM gave best result with pre-trained word embeddings. In Transformer-based modeling, we used IndicBERT and mBERT with fine-tuning, among which mBERT gave the best result for Tamil dataset with a weighted average F1-score of 0.7.",DLRG@DravidianLangTech-ACL2022: Abusive Comment Detection in Tamil using Multilingual Transformer Models,"Online Social Network has let people connect and interact with each other. It does, however, also provide a platform for online abusers to propagate abusive content. The majority of these abusive remarks are written in a multilingual style, which allows them to easily slip past internet inspection. This paper presents a system developed for the Shared Task on Abusive Comment Detection (Misogyny, Misandry, Homophobia, Transphobic, Xenophobia, CounterSpeech, Hope Speech) in Tamil DravidianLangTech@ACL 2022 to detect the abusive category of each comment. 
We approach the task with three methodologies-Machine Learning, Deep Learning and Transformer-based modeling, for two sets of data-Tamil and Tamil+English language dataset. The dataset used in our system can be accessed from the competition on CodaLab. For Machine Learning, eight algorithms were implemented, among which Random Forest gave the best result with Tamil+English dataset, with a weighted average F1-score of 0.78. For Deep Learning, Bi-Directional LSTM gave best result with pre-trained word embeddings. In Transformer-based modeling, we used IndicBERT and mBERT with fine-tuning, among which mBERT gave the best result for Tamil dataset with a weighted average F1-score of 0.7.","We would like to thank the management of Vellore Institute of Technology, Chennai for their support to carry out this research.","DLRG@DravidianLangTech-ACL2022: Abusive Comment Detection in Tamil using Multilingual Transformer Models. Online Social Network has let people connect and interact with each other. It does, however, also provide a platform for online abusers to propagate abusive content. The majority of these abusive remarks are written in a multilingual style, which allows them to easily slip past internet inspection. This paper presents a system developed for the Shared Task on Abusive Comment Detection (Misogyny, Misandry, Homophobia, Transphobic, Xenophobia, CounterSpeech, Hope Speech) in Tamil DravidianLangTech@ACL 2022 to detect the abusive category of each comment. We approach the task with three methodologies-Machine Learning, Deep Learning and Transformer-based modeling, for two sets of data-Tamil and Tamil+English language dataset. The dataset used in our system can be accessed from the competition on CodaLab. For Machine Learning, eight algorithms were implemented, among which Random Forest gave the best result with Tamil+English dataset, with a weighted average F1-score of 0.78. For Deep Learning, Bi-Directional LSTM gave best result with pre-trained word embeddings. In Transformer-based modeling, we used IndicBERT and mBERT with fine-tuning, among which mBERT gave the best result for Tamil dataset with a weighted average F1-score of 0.7.",2022
szubert-steedman-2019-node,https://aclanthology.org/D19-5321,0,,,,,,,"Node Embeddings for Graph Merging: Case of Knowledge Graph Construction. Combining two graphs requires merging the nodes which are counterparts of each other. In this process errors occur, resulting in incorrect merging or incorrect failure to merge. We find a high prevalence of such errors when using AskNET, an algorithm for building Knowledge Graphs from text corpora. AskNET node matching method uses string similarity, which we propose to replace with vector embedding similarity. We explore graph-based and word-based embedding models and show an overall error reduction from 56% to 23.6%, with a reduction of over a half in both types of incorrect node matching.",Node Embeddings for Graph Merging: Case of Knowledge Graph Construction,"Combining two graphs requires merging the nodes which are counterparts of each other. In this process errors occur, resulting in incorrect merging or incorrect failure to merge. We find a high prevalence of such errors when using AskNET, an algorithm for building Knowledge Graphs from text corpora. AskNET node matching method uses string similarity, which we propose to replace with vector embedding similarity. We explore graph-based and word-based embedding models and show an overall error reduction from 56% to 23.6%, with a reduction of over a half in both types of incorrect node matching.",Node Embeddings for Graph Merging: Case of Knowledge Graph Construction,"Combining two graphs requires merging the nodes which are counterparts of each other. In this process errors occur, resulting in incorrect merging or incorrect failure to merge. We find a high prevalence of such errors when using AskNET, an algorithm for building Knowledge Graphs from text corpora. AskNET node matching method uses string similarity, which we propose to replace with vector embedding similarity. We explore graph-based and word-based embedding models and show an overall error reduction from 56% to 23.6%, with a reduction of over a half in both types of incorrect node matching.",,"Node Embeddings for Graph Merging: Case of Knowledge Graph Construction. Combining two graphs requires merging the nodes which are counterparts of each other. In this process errors occur, resulting in incorrect merging or incorrect failure to merge. We find a high prevalence of such errors when using AskNET, an algorithm for building Knowledge Graphs from text corpora. AskNET node matching method uses string similarity, which we propose to replace with vector embedding similarity. We explore graph-based and word-based embedding models and show an overall error reduction from 56% to 23.6%, with a reduction of over a half in both types of incorrect node matching.",2019
beinborn-etal-2013-cognate,https://aclanthology.org/I13-1112,0,,,,,,,"Cognate Production using Character-based Machine Translation. Cognates are words in different languages that are associated with each other by language learners. Thus, cognates are important indicators for the prediction of the perceived difficulty of a text. We introduce a method for automatic cognate production using character-based machine translation. We show that our approach is able to learn production patterns from noisy training data and that it works for a wide range of language pairs. It even works across different alphabets, e.g. we obtain good results on the tested language pairs English-Russian, English-Greek, and English-Farsi. Our method performs significantly better than similarity measures used in previous work on cognates.",Cognate Production using Character-based Machine Translation,"Cognates are words in different languages that are associated with each other by language learners. Thus, cognates are important indicators for the prediction of the perceived difficulty of a text. We introduce a method for automatic cognate production using character-based machine translation. We show that our approach is able to learn production patterns from noisy training data and that it works for a wide range of language pairs. It even works across different alphabets, e.g. we obtain good results on the tested language pairs English-Russian, English-Greek, and English-Farsi. Our method performs significantly better than similarity measures used in previous work on cognates.",Cognate Production using Character-based Machine Translation,"Cognates are words in different languages that are associated with each other by language learners. Thus, cognates are important indicators for the prediction of the perceived difficulty of a text. We introduce a method for automatic cognate production using character-based machine translation. We show that our approach is able to learn production patterns from noisy training data and that it works for a wide range of language pairs. It even works across different alphabets, e.g. we obtain good results on the tested language pairs English-Russian, English-Greek, and English-Farsi. Our method performs significantly better than similarity measures used in previous work on cognates.","This work has been supported by the Volkswagen Foundation as part of the Lichtenberg-Professorship Program under grant No. I/82806, and by the Klaus Tschira Foundation under project No. 00.133.2008. ","Cognate Production using Character-based Machine Translation. Cognates are words in different languages that are associated with each other by language learners. Thus, cognates are important indicators for the prediction of the perceived difficulty of a text. We introduce a method for automatic cognate production using character-based machine translation. We show that our approach is able to learn production patterns from noisy training data and that it works for a wide range of language pairs. It even works across different alphabets, e.g. we obtain good results on the tested language pairs English-Russian, English-Greek, and English-Farsi. Our method performs significantly better than similarity measures used in previous work on cognates.",2013
khokhlova-zakharov-2010-studying,http://www.lrec-conf.org/proceedings/lrec2010/pdf/21_Paper.pdf,0,,,,,,,"Studying Word Sketches for Russian. Without any doubt corpora are vital tools for linguistic studies and solution for applied tasks. Although corpora opportunities are very useful, there is a need of another kind of software for further improvement of linguistic research as it is impossible to process huge amount of linguistic data manually. The Sketch Engine representing itself a corpus tool which takes as input a corpus of any language and corresponding grammar patterns. The paper describes the writing of Sketch grammar for the Russian language as a part of the Sketch Engine system. The system gives information about a word's collocability on concrete dependency models, and generates lists of the most frequent phrases for a given word based on appropriate models. The paper deals with two different approaches to writing rules for the grammar, based on morphological information, and also with applying word sketches to the Russian language. The data evidences that such results may find an extensive use in various fields of linguistics, such as dictionary compiling, language learning and teaching, translation (including machine translation), phraseology, information retrieval etc.",Studying Word Sketches for {R}ussian,"Without any doubt corpora are vital tools for linguistic studies and solution for applied tasks. Although corpora opportunities are very useful, there is a need of another kind of software for further improvement of linguistic research as it is impossible to process huge amount of linguistic data manually. The Sketch Engine representing itself a corpus tool which takes as input a corpus of any language and corresponding grammar patterns. The paper describes the writing of Sketch grammar for the Russian language as a part of the Sketch Engine system. The system gives information about a word's collocability on concrete dependency models, and generates lists of the most frequent phrases for a given word based on appropriate models. The paper deals with two different approaches to writing rules for the grammar, based on morphological information, and also with applying word sketches to the Russian language. The data evidences that such results may find an extensive use in various fields of linguistics, such as dictionary compiling, language learning and teaching, translation (including machine translation), phraseology, information retrieval etc.",Studying Word Sketches for Russian,"Without any doubt corpora are vital tools for linguistic studies and solution for applied tasks. Although corpora opportunities are very useful, there is a need of another kind of software for further improvement of linguistic research as it is impossible to process huge amount of linguistic data manually. The Sketch Engine representing itself a corpus tool which takes as input a corpus of any language and corresponding grammar patterns. The paper describes the writing of Sketch grammar for the Russian language as a part of the Sketch Engine system. The system gives information about a word's collocability on concrete dependency models, and generates lists of the most frequent phrases for a given word based on appropriate models. The paper deals with two different approaches to writing rules for the grammar, based on morphological information, and also with applying word sketches to the Russian language. 
The data evidences that such results may find an extensive use in various fields of linguistics, such as dictionary compiling, language learning and teaching, translation (including machine translation), phraseology, information retrieval etc.",,"Studying Word Sketches for Russian. Without any doubt corpora are vital tools for linguistic studies and solution for applied tasks. Although corpora opportunities are very useful, there is a need of another kind of software for further improvement of linguistic research as it is impossible to process huge amount of linguistic data manually. The Sketch Engine representing itself a corpus tool which takes as input a corpus of any language and corresponding grammar patterns. The paper describes the writing of Sketch grammar for the Russian language as a part of the Sketch Engine system. The system gives information about a word's collocability on concrete dependency models, and generates lists of the most frequent phrases for a given word based on appropriate models. The paper deals with two different approaches to writing rules for the grammar, based on morphological information, and also with applying word sketches to the Russian language. The data evidences that such results may find an extensive use in various fields of linguistics, such as dictionary compiling, language learning and teaching, translation (including machine translation), phraseology, information retrieval etc.",2010
chauhan-etal-2020-one,https://aclanthology.org/2020.aacl-main.31,1,,,,hate_speech,,,"All-in-One: A Deep Attentive Multi-task Learning Framework for Humour, Sarcasm, Offensive, Motivation, and Sentiment on Memes. In this paper, we aim at learning the relationships and similarities of a variety of tasks, such as humour detection, sarcasm detection, offensive content detection, motivational content detection and sentiment analysis on a somewhat complicated form of information, i.e., memes. We propose a multi-task, multi-modal deep learning framework to solve multiple tasks simultaneously. For multi-tasking, we propose two attention-like mechanisms viz., Inter-task Relationship Module (iTRM) and Inter-class Relationship Module (iCRM). The main motivation of iTRM is to learn the relationship between the tasks to realize how they help each other. In contrast, iCRM develops relations between the different classes of tasks. Finally, representations from both the attentions are concatenated and shared across the five tasks (i.e., humour, sarcasm, offensive, motivational, and sentiment) for multi-tasking. We use the recently released dataset in the Memotion Analysis task @ SemEval 2020, which consists of memes annotated for the classes as mentioned above. Empirical results on Memotion dataset show the efficacy of our proposed approach over the existing state-of-the-art systems (Baseline and SemEval 2020 winner). The evaluation also indicates that the proposed multi-task framework yields better performance over the single-task learning.","All-in-One: A Deep Attentive Multi-task Learning Framework for Humour, Sarcasm, Offensive, Motivation, and Sentiment on Memes","In this paper, we aim at learning the relationships and similarities of a variety of tasks, such as humour detection, sarcasm detection, offensive content detection, motivational content detection and sentiment analysis on a somewhat complicated form of information, i.e., memes. We propose a multi-task, multi-modal deep learning framework to solve multiple tasks simultaneously. For multi-tasking, we propose two attention-like mechanisms viz., Inter-task Relationship Module (iTRM) and Inter-class Relationship Module (iCRM). The main motivation of iTRM is to learn the relationship between the tasks to realize how they help each other. In contrast, iCRM develops relations between the different classes of tasks. Finally, representations from both the attentions are concatenated and shared across the five tasks (i.e., humour, sarcasm, offensive, motivational, and sentiment) for multi-tasking. We use the recently released dataset in the Memotion Analysis task @ SemEval 2020, which consists of memes annotated for the classes as mentioned above. Empirical results on Memotion dataset show the efficacy of our proposed approach over the existing state-of-the-art systems (Baseline and SemEval 2020 winner). The evaluation also indicates that the proposed multi-task framework yields better performance over the single-task learning.","All-in-One: A Deep Attentive Multi-task Learning Framework for Humour, Sarcasm, Offensive, Motivation, and Sentiment on Memes","In this paper, we aim at learning the relationships and similarities of a variety of tasks, such as humour detection, sarcasm detection, offensive content detection, motivational content detection and sentiment analysis on a somewhat complicated form of information, i.e., memes. We propose a multi-task, multi-modal deep learning framework to solve multiple tasks simultaneously. 
For multi-tasking, we propose two attention-like mechanisms viz., Inter-task Relationship Module (iTRM) and Inter-class Relationship Module (iCRM). The main motivation of iTRM is to learn the relationship between the tasks to realize how they help each other. In contrast, iCRM develops relations between the different classes of tasks. Finally, representations from both the attentions are concatenated and shared across the five tasks (i.e., humour, sarcasm, offensive, motivational, and sentiment) for multi-tasking. We use the recently released dataset in the Memotion Analysis task @ SemEval 2020, which consists of memes annotated for the classes as mentioned above. Empirical results on Memotion dataset show the efficacy of our proposed approach over the existing state-of-the-art systems (Baseline and SemEval 2020 winner). The evaluation also indicates that the proposed multi-task framework yields better performance over the single-task learning.","The research reported here is partially supported by SkyMap Global India Private Limited. Dushyant Singh Chauhan acknowledges the support of Prime Minister Research Fellowship (PMRF), Govt. of India. Asif Ekbal acknowledges the Young Faculty Research Fellowship (YFRF), supported by Visvesvaraya PhD scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, being implemented by Digital India Corporation (formerly Media Lab Asia).","All-in-One: A Deep Attentive Multi-task Learning Framework for Humour, Sarcasm, Offensive, Motivation, and Sentiment on Memes. In this paper, we aim at learning the relationships and similarities of a variety of tasks, such as humour detection, sarcasm detection, offensive content detection, motivational content detection and sentiment analysis on a somewhat complicated form of information, i.e., memes. We propose a multi-task, multi-modal deep learning framework to solve multiple tasks simultaneously. For multi-tasking, we propose two attention-like mechanisms viz., Inter-task Relationship Module (iTRM) and Inter-class Relationship Module (iCRM). The main motivation of iTRM is to learn the relationship between the tasks to realize how they help each other. In contrast, iCRM develops relations between the different classes of tasks. Finally, representations from both the attentions are concatenated and shared across the five tasks (i.e., humour, sarcasm, offensive, motivational, and sentiment) for multi-tasking. We use the recently released dataset in the Memotion Analysis task @ SemEval 2020, which consists of memes annotated for the classes as mentioned above. Empirical results on Memotion dataset show the efficacy of our proposed approach over the existing state-of-the-art systems (Baseline and SemEval 2020 winner). The evaluation also indicates that the proposed multi-task framework yields better performance over the single-task learning.",2020
lavie-2011-evaluating,https://aclanthology.org/2011.mtsummit-tutorials.3,0,,,,,,,"Evaluating the Output of Machine Translation Systems. Over the past twenty years, we have attacked the historical methodological barriers between statistical machine translation and traditional models of syntax, semantics, and structure. In this tutorial, we will survey some of the central issues and techniques from each of these aspects, with an emphasis on `deeply theoretically integrated' models, rather than hybrid approaches such as superficial statistical aggregation or system combination of outputs produced by traditional symbolic components. On syntactic SMT, we will explore the trade-offs for SMT between learnability and representational expressiveness. After establishing a foundation in the theory and practice of stochastic transduction grammars, we will examine very recent new approaches to automatic unsupervised induction of various classes of",Evaluating the Output of Machine Translation Systems,"Over the past twenty years, we have attacked the historical methodological barriers between statistical machine translation and traditional models of syntax, semantics, and structure. In this tutorial, we will survey some of the central issues and techniques from each of these aspects, with an emphasis on `deeply theoretically integrated' models, rather than hybrid approaches such as superficial statistical aggregation or system combination of outputs produced by traditional symbolic components. On syntactic SMT, we will explore the trade-offs for SMT between learnability and representational expressiveness. After establishing a foundation in the theory and practice of stochastic transduction grammars, we will examine very recent new approaches to automatic unsupervised induction of various classes of",Evaluating the Output of Machine Translation Systems,"Over the past twenty years, we have attacked the historical methodological barriers between statistical machine translation and traditional models of syntax, semantics, and structure. In this tutorial, we will survey some of the central issues and techniques from each of these aspects, with an emphasis on `deeply theoretically integrated' models, rather than hybrid approaches such as superficial statistical aggregation or system combination of outputs produced by traditional symbolic components. On syntactic SMT, we will explore the trade-offs for SMT between learnability and representational expressiveness. After establishing a foundation in the theory and practice of stochastic transduction grammars, we will examine very recent new approaches to automatic unsupervised induction of various classes of",,"Evaluating the Output of Machine Translation Systems. Over the past twenty years, we have attacked the historical methodological barriers between statistical machine translation and traditional models of syntax, semantics, and structure. In this tutorial, we will survey some of the central issues and techniques from each of these aspects, with an emphasis on `deeply theoretically integrated' models, rather than hybrid approaches such as superficial statistical aggregation or system combination of outputs produced by traditional symbolic components. On syntactic SMT, we will explore the trade-offs for SMT between learnability and representational expressiveness. 
After establishing a foundation in the theory and practice of stochastic transduction grammars, we will examine very recent new approaches to automatic unsupervised induction of various classes of",2011
ouyang-etal-2009-integrated,https://aclanthology.org/P09-2029,0,,,,,,,"An Integrated Multi-document Summarization Approach based on Word Hierarchical Representation. This paper introduces a novel hierarchical summarization approach for automatic multidocument summarization. By creating a hierarchical representation of the words in the input document set, the proposed approach is able to incorporate various objectives of multidocument summarization through an integrated framework. The evaluation is conducted on the DUC 2007 data set.",An Integrated Multi-document Summarization Approach based on Word Hierarchical Representation,"This paper introduces a novel hierarchical summarization approach for automatic multidocument summarization. By creating a hierarchical representation of the words in the input document set, the proposed approach is able to incorporate various objectives of multidocument summarization through an integrated framework. The evaluation is conducted on the DUC 2007 data set.",An Integrated Multi-document Summarization Approach based on Word Hierarchical Representation,"This paper introduces a novel hierarchical summarization approach for automatic multidocument summarization. By creating a hierarchical representation of the words in the input document set, the proposed approach is able to incorporate various objectives of multidocument summarization through an integrated framework. The evaluation is conducted on the DUC 2007 data set.",The work described in this paper was partially supported by Hong Kong RGC Projects (No. PolyU 5217/07E) and partially supported by The Hong Kong Polytechnic University internal grants (A-PA6L and G-YG80).,"An Integrated Multi-document Summarization Approach based on Word Hierarchical Representation. This paper introduces a novel hierarchical summarization approach for automatic multidocument summarization. By creating a hierarchical representation of the words in the input document set, the proposed approach is able to incorporate various objectives of multidocument summarization through an integrated framework. The evaluation is conducted on the DUC 2007 data set.",2009
williams-koehn-2014-syntax,https://aclanthology.org/D14-2005,0,,,,,,,Syntax-Based Statistical Machine Translation. ,Syntax-Based Statistical Machine Translation,,Syntax-Based Statistical Machine Translation,,,Syntax-Based Statistical Machine Translation. ,2014
eisenstein-2013-phonological,https://aclanthology.org/W13-1102,0,,,,,,,"Phonological Factors in Social Media Writing. Does phonological variation get transcribed into social media text? This paper investigates examples of the phonological variable of consonant cluster reduction in Twitter. Not only does this variable appear frequently, but it displays the same sensitivity to linguistic context as in spoken language. This suggests that when social media writing transcribes phonological properties of speech, it is not merely a case of inventing orthographic transcriptions. Rather, social media displays influence from structural properties of the phonological system.",Phonological Factors in Social Media Writing,"Does phonological variation get transcribed into social media text? This paper investigates examples of the phonological variable of consonant cluster reduction in Twitter. Not only does this variable appear frequently, but it displays the same sensitivity to linguistic context as in spoken language. This suggests that when social media writing transcribes phonological properties of speech, it is not merely a case of inventing orthographic transcriptions. Rather, social media displays influence from structural properties of the phonological system.",Phonological Factors in Social Media Writing,"Does phonological variation get transcribed into social media text? This paper investigates examples of the phonological variable of consonant cluster reduction in Twitter. Not only does this variable appear frequently, but it displays the same sensitivity to linguistic context as in spoken language. This suggests that when social media writing transcribes phonological properties of speech, it is not merely a case of inventing orthographic transcriptions. Rather, social media displays influence from structural properties of the phonological system.",Thanks to Brendan O'Connor for building the Twitter dataset that made this research possible. Thanks to the reviewers for their helpful comments.,"Phonological Factors in Social Media Writing. Does phonological variation get transcribed into social media text? This paper investigates examples of the phonological variable of consonant cluster reduction in Twitter. Not only does this variable appear frequently, but it displays the same sensitivity to linguistic context as in spoken language. This suggests that when social media writing transcribes phonological properties of speech, it is not merely a case of inventing orthographic transcriptions. Rather, social media displays influence from structural properties of the phonological system.",2013
paetzold-specia-2016-simplenets,https://aclanthology.org/W16-2388,0,,,,,,,"SimpleNets: Quality Estimation with Resource-Light Neural Networks. We introduce SimpleNets: a resource-light solution to the sentence-level Quality Estimation task of WMT16 that combines Recurrent Neural Networks, word embedding models, and the principle of compositionality. The SimpleNets systems explore the idea that the quality of a translation can be derived from the quality of its n-grams. This approach has been successfully employed in Text Simplification quality assessment in the past. Our experiments show that, surprisingly, our models can learn more about a translation's quality by focusing on the original sentence, rather than on the translation itself.",{S}imple{N}ets: Quality Estimation with Resource-Light Neural Networks,"We introduce SimpleNets: a resource-light solution to the sentence-level Quality Estimation task of WMT16 that combines Recurrent Neural Networks, word embedding models, and the principle of compositionality. The SimpleNets systems explore the idea that the quality of a translation can be derived from the quality of its n-grams. This approach has been successfully employed in Text Simplification quality assessment in the past. Our experiments show that, surprisingly, our models can learn more about a translation's quality by focusing on the original sentence, rather than on the translation itself.",SimpleNets: Quality Estimation with Resource-Light Neural Networks,"We introduce SimpleNets: a resource-light solution to the sentence-level Quality Estimation task of WMT16 that combines Recurrent Neural Networks, word embedding models, and the principle of compositionality. The SimpleNets systems explore the idea that the quality of a translation can be derived from the quality of its n-grams. This approach has been successfully employed in Text Simplification quality assessment in the past. Our experiments show that, surprisingly, our models can learn more about a translation's quality by focusing on the original sentence, rather than on the translation itself.",,"SimpleNets: Quality Estimation with Resource-Light Neural Networks. We introduce SimpleNets: a resource-light solution to the sentence-level Quality Estimation task of WMT16 that combines Recurrent Neural Networks, word embedding models, and the principle of compositionality. The SimpleNets systems explore the idea that the quality of a translation can be derived from the quality of its n-grams. This approach has been successfully employed in Text Simplification quality assessment in the past. Our experiments show that, surprisingly, our models can learn more about a translation's quality by focusing on the original sentence, rather than on the translation itself.",2016
bladier-etal-2018-german,https://aclanthology.org/P18-3009,0,,,,,,,"German and French Neural Supertagging Experiments for LTAG Parsing. We present ongoing work on data-driven parsing of German and French with Lexicalized Tree Adjoining Grammars. We use a supertagging approach combined with deep learning. We show the challenges of extracting LTAG supertags from the French Treebank, introduce the use of left- and right-sister-adjunction, present a neural architecture for the supertagger, and report experiments of n-best supertagging for French and German.",{G}erman and {F}rench Neural Supertagging Experiments for {LTAG} Parsing,"We present ongoing work on data-driven parsing of German and French with Lexicalized Tree Adjoining Grammars. We use a supertagging approach combined with deep learning. We show the challenges of extracting LTAG supertags from the French Treebank, introduce the use of left- and right-sister-adjunction, present a neural architecture for the supertagger, and report experiments of n-best supertagging for French and German.",German and French Neural Supertagging Experiments for LTAG Parsing,"We present ongoing work on data-driven parsing of German and French with Lexicalized Tree Adjoining Grammars. We use a supertagging approach combined with deep learning. We show the challenges of extracting LTAG supertags from the French Treebank, introduce the use of left- and right-sister-adjunction, present a neural architecture for the supertagger, and report experiments of n-best supertagging for French and German.","This work was carried out as a part of the research project TREEGRASP (http://treegrasp.phil.hhu.de) funded by a Consolidator Grant of the European Research Council (ERC). We thank three anonymous reviewers for their careful reading, valuable suggestions and constructive comments.","German and French Neural Supertagging Experiments for LTAG Parsing. We present ongoing work on data-driven parsing of German and French with Lexicalized Tree Adjoining Grammars. We use a supertagging approach combined with deep learning. We show the challenges of extracting LTAG supertags from the French Treebank, introduce the use of left- and right-sister-adjunction, present a neural architecture for the supertagger, and report experiments of n-best supertagging for French and German.",2018
qin-etal-2021-neural,https://aclanthology.org/2021.acl-long.456,0,,,,,,,"Neural-Symbolic Solver for Math Word Problems with Auxiliary Tasks. Previous math word problem solvers following the encoder-decoder paradigm fail to explicitly incorporate essential math symbolic constraints, leading to unexplainable and unreasonable predictions. Herein, we propose Neural-Symbolic Solver (NS-Solver) to explicitly and seamlessly incorporate different levels of symbolic constraints by auxiliary tasks. Our NS-Solver consists of a problem reader to encode problems, a programmer to generate symbolic equations, and a symbolic executor to obtain answers. Along with target expression supervision, our solver is also optimized via 4 new auxiliary objectives to enforce different symbolic reasoning: a) self-supervised number prediction task predicting both number quantity and number locations; b) commonsense constant prediction task predicting what prior knowledge (e.g. how many legs a chicken has) is required; c) program consistency checker computing the semantic loss between predicted equation and target equation to ensure reasonable equation mapping; d) duality exploiting task exploiting the quasi duality between symbolic equation generation and problem's part-of-speech generation to enhance the understanding ability of a solver. Besides, to provide a more realistic and challenging benchmark for developing a universal and scalable solver, we also construct a new largescale MWP benchmark CM17K consisting of 4 kinds of MWPs (arithmetic, one-unknown linear, one-unknown non-linear, equation set) with more than 17K samples. Extensive experiments on Math23K and our CM17k demonstrate the superiority of our NS-Solver compared to state-of-the-art methods 1 .
Deep neural networks have achieved remarkable successes in natural language processing recently. Although neural models have demonstrated performance superior to humans on some tasks, e.g. reading comprehension (Rajpurkar et al., 2016; Devlin et al., 2019; Lan et al.) , it still lacks the ability of discrete reasoning, resulting in low accuracy on math reasoning. Thus, it is hard for pure neural network approaches to tackle the task of solving math word problems (MWPs) , which requires a model to be capable of natural language understanding and discrete reasoning. MWP solving aims to automatically answer a math word problem by understanding the textual description of the problem and reasoning out the underlying answer. A typical MWP is a short story that describes a partial state of the world and poses a question about an unknown quantity or multiple unknown quantities. To solve an MWP, the relevant quantities need to be identified from the text. Furthermore, the correct operators along with their computation order among these quantities need to be determined. Therefore, integrating neural networks with symbolic reasoning is crucial for solving MWPs. Inspired by the recent amazing progress on neural semantic parsing (Liang et al., 2017a) and reading comprehension , we address this problem by neural-symbolic computing.",Neural-Symbolic Solver for Math Word Problems with Auxiliary Tasks,"Previous math word problem solvers following the encoder-decoder paradigm fail to explicitly incorporate essential math symbolic constraints, leading to unexplainable and unreasonable predictions. Herein, we propose Neural-Symbolic Solver (NS-Solver) to explicitly and seamlessly incorporate different levels of symbolic constraints by auxiliary tasks. Our NS-Solver consists of a problem reader to encode problems, a programmer to generate symbolic equations, and a symbolic executor to obtain answers. Along with target expression supervision, our solver is also optimized via 4 new auxiliary objectives to enforce different symbolic reasoning: a) self-supervised number prediction task predicting both number quantity and number locations; b) commonsense constant prediction task predicting what prior knowledge (e.g. how many legs a chicken has) is required; c) program consistency checker computing the semantic loss between predicted equation and target equation to ensure reasonable equation mapping; d) duality exploiting task exploiting the quasi duality between symbolic equation generation and problem's part-of-speech generation to enhance the understanding ability of a solver. Besides, to provide a more realistic and challenging benchmark for developing a universal and scalable solver, we also construct a new largescale MWP benchmark CM17K consisting of 4 kinds of MWPs (arithmetic, one-unknown linear, one-unknown non-linear, equation set) with more than 17K samples. Extensive experiments on Math23K and our CM17k demonstrate the superiority of our NS-Solver compared to state-of-the-art methods 1 .
Deep neural networks have achieved remarkable successes in natural language processing recently. Although neural models have demonstrated performance superior to humans on some tasks, e.g. reading comprehension (Rajpurkar et al., 2016; Devlin et al., 2019; Lan et al.) , it still lacks the ability of discrete reasoning, resulting in low accuracy on math reasoning. Thus, it is hard for pure neural network approaches to tackle the task of solving math word problems (MWPs) , which requires a model to be capable of natural language understanding and discrete reasoning. MWP solving aims to automatically answer a math word problem by understanding the textual description of the problem and reasoning out the underlying answer. A typical MWP is a short story that describes a partial state of the world and poses a question about an unknown quantity or multiple unknown quantities. To solve an MWP, the relevant quantities need to be identified from the text. Furthermore, the correct operators along with their computation order among these quantities need to be determined. Therefore, integrating neural networks with symbolic reasoning is crucial for solving MWPs. Inspired by the recent amazing progress on neural semantic parsing (Liang et al., 2017a) and reading comprehension , we address this problem by neural-symbolic computing.",Neural-Symbolic Solver for Math Word Problems with Auxiliary Tasks,"Previous math word problem solvers following the encoder-decoder paradigm fail to explicitly incorporate essential math symbolic constraints, leading to unexplainable and unreasonable predictions. Herein, we propose Neural-Symbolic Solver (NS-Solver) to explicitly and seamlessly incorporate different levels of symbolic constraints by auxiliary tasks. Our NS-Solver consists of a problem reader to encode problems, a programmer to generate symbolic equations, and a symbolic executor to obtain answers. Along with target expression supervision, our solver is also optimized via 4 new auxiliary objectives to enforce different symbolic reasoning: a) self-supervised number prediction task predicting both number quantity and number locations; b) commonsense constant prediction task predicting what prior knowledge (e.g. how many legs a chicken has) is required; c) program consistency checker computing the semantic loss between predicted equation and target equation to ensure reasonable equation mapping; d) duality exploiting task exploiting the quasi duality between symbolic equation generation and problem's part-of-speech generation to enhance the understanding ability of a solver. Besides, to provide a more realistic and challenging benchmark for developing a universal and scalable solver, we also construct a new largescale MWP benchmark CM17K consisting of 4 kinds of MWPs (arithmetic, one-unknown linear, one-unknown non-linear, equation set) with more than 17K samples. Extensive experiments on Math23K and our CM17k demonstrate the superiority of our NS-Solver compared to state-of-the-art methods 1 .
Deep neural networks have achieved remarkable successes in natural language processing recently. Although neural models have demonstrated performance superior to humans on some tasks, e.g. reading comprehension (Rajpurkar et al., 2016; Devlin et al., 2019; Lan et al.) , it still lacks the ability of discrete reasoning, resulting in low accuracy on math reasoning. Thus, it is hard for pure neural network approaches to tackle the task of solving math word problems (MWPs) , which requires a model to be capable of natural language understanding and discrete reasoning. MWP solving aims to automatically answer a math word problem by understanding the textual description of the problem and reasoning out the underlying answer. A typical MWP is a short story that describes a partial state of the world and poses a question about an unknown quantity or multiple unknown quantities. To solve an MWP, the relevant quantities need to be identified from the text. Furthermore, the correct operators along with their computation order among these quantities need to be determined. Therefore, integrating neural networks with symbolic reasoning is crucial for solving MWPs. Inspired by the recent amazing progress on neural semantic parsing (Liang et al., 2017a) and reading comprehension , we address this problem by neural-symbolic computing.",,"Neural-Symbolic Solver for Math Word Problems with Auxiliary Tasks. Previous math word problem solvers following the encoder-decoder paradigm fail to explicitly incorporate essential math symbolic constraints, leading to unexplainable and unreasonable predictions. Herein, we propose Neural-Symbolic Solver (NS-Solver) to explicitly and seamlessly incorporate different levels of symbolic constraints by auxiliary tasks. Our NS-Solver consists of a problem reader to encode problems, a programmer to generate symbolic equations, and a symbolic executor to obtain answers. Along with target expression supervision, our solver is also optimized via 4 new auxiliary objectives to enforce different symbolic reasoning: a) self-supervised number prediction task predicting both number quantity and number locations; b) commonsense constant prediction task predicting what prior knowledge (e.g. how many legs a chicken has) is required; c) program consistency checker computing the semantic loss between predicted equation and target equation to ensure reasonable equation mapping; d) duality exploiting task exploiting the quasi duality between symbolic equation generation and problem's part-of-speech generation to enhance the understanding ability of a solver. Besides, to provide a more realistic and challenging benchmark for developing a universal and scalable solver, we also construct a new largescale MWP benchmark CM17K consisting of 4 kinds of MWPs (arithmetic, one-unknown linear, one-unknown non-linear, equation set) with more than 17K samples. Extensive experiments on Math23K and our CM17k demonstrate the superiority of our NS-Solver compared to state-of-the-art methods 1 .
Deep neural networks have achieved remarkable successes in natural language processing recently. Although neural models have demonstrated performance superior to humans on some tasks, e.g. reading comprehension (Rajpurkar et al., 2016; Devlin et al., 2019; Lan et al.) , it still lacks the ability of discrete reasoning, resulting in low accuracy on math reasoning. Thus, it is hard for pure neural network approaches to tackle the task of solving math word problems (MWPs) , which requires a model to be capable of natural language understanding and discrete reasoning. MWP solving aims to automatically answer a math word problem by understanding the textual description of the problem and reasoning out the underlying answer. A typical MWP is a short story that describes a partial state of the world and poses a question about an unknown quantity or multiple unknown quantities. To solve an MWP, the relevant quantities need to be identified from the text. Furthermore, the correct operators along with their computation order among these quantities need to be determined. Therefore, integrating neural networks with symbolic reasoning is crucial for solving MWPs. Inspired by the recent amazing progress on neural semantic parsing (Liang et al., 2017a) and reading comprehension , we address this problem by neural-symbolic computing.",2021
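The NS-Solver record above describes optimizing a main equation-generation objective jointly with four auxiliary objectives. As a rough, hedged sketch of how such multi-task training is commonly wired up (the loss names, weights, and values below are illustrative placeholders, not the paper's actual configuration):

```python
# Illustrative multi-task loss combination for a solver trained with auxiliary
# objectives. All names, weights, and numbers are hypothetical placeholders;
# the actual NS-Solver losses and weighting are defined in the paper.
def total_loss(losses, weights):
    """Weighted sum of per-task losses; unknown tasks default to weight 1.0."""
    return sum(weights.get(name, 1.0) * value for name, value in losses.items())

losses = {
    "equation_generation": 1.25,   # main supervision on the target expression
    "number_prediction": 0.40,     # self-supervised number quantity / location task
    "constant_prediction": 0.30,   # commonsense constant prediction task
    "program_consistency": 0.20,   # consistency between predicted and target equation
    "duality": 0.15,               # quasi duality with part-of-speech generation
}
weights = {"number_prediction": 0.5, "constant_prediction": 0.5, "duality": 0.5}

print(total_loss(losses, weights))  # -> 1.875
```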
aziz-specia-2010-combining,https://aclanthology.org/S10-1024,0,,,,,,,"Combining Dictionaries and Contextual Information for Cross-Lingual Lexical Substitution. We describe two systems participating in Semeval-2010's Cross-Lingual Lexical Substitution task: USPwlv and WLVusp. Both systems are based on two main components: (i) a dictionary to provide a number of possible translations for each source word, and (ii) a contextual model to select the best translation according to the context where the source word occurs. These components and the way they are integrated are different in the two systems: they exploit corpus-based and linguistic resources, and supervised and unsupervised learning methods. Among the 14 participants in the subtask to identify the best translation, our systems were ranked 2nd and 4th in terms of recall, 3rd and 4th in terms of precision. Both systems outperformed the baselines in all subtasks according to all metrics used.",Combining Dictionaries and Contextual Information for Cross-Lingual Lexical Substitution,"We describe two systems participating in Semeval-2010's Cross-Lingual Lexical Substitution task: USPwlv and WLVusp. Both systems are based on two main components: (i) a dictionary to provide a number of possible translations for each source word, and (ii) a contextual model to select the best translation according to the context where the source word occurs. These components and the way they are integrated are different in the two systems: they exploit corpus-based and linguistic resources, and supervised and unsupervised learning methods. Among the 14 participants in the subtask to identify the best translation, our systems were ranked 2nd and 4th in terms of recall, 3rd and 4th in terms of precision. Both systems outperformed the baselines in all subtasks according to all metrics used.",Combining Dictionaries and Contextual Information for Cross-Lingual Lexical Substitution,"We describe two systems participating in Semeval-2010's Cross-Lingual Lexical Substitution task: USPwlv and WLVusp. Both systems are based on two main components: (i) a dictionary to provide a number of possible translations for each source word, and (ii) a contextual model to select the best translation according to the context where the source word occurs. These components and the way they are integrated are different in the two systems: they exploit corpus-based and linguistic resources, and supervised and unsupervised learning methods. Among the 14 participants in the subtask to identify the best translation, our systems were ranked 2nd and 4th in terms of recall, 3rd and 4th in terms of precision. Both systems outperformed the baselines in all subtasks according to all metrics used.",,"Combining Dictionaries and Contextual Information for Cross-Lingual Lexical Substitution. We describe two systems participating in Semeval-2010's Cross-Lingual Lexical Substitution task: USPwlv and WLVusp. Both systems are based on two main components: (i) a dictionary to provide a number of possible translations for each source word, and (ii) a contextual model to select the best translation according to the context where the source word occurs. These components and the way they are integrated are different in the two systems: they exploit corpus-based and linguistic resources, and supervised and unsupervised learning methods. Among the 14 participants in the subtask to identify the best translation, our systems were ranked 2nd and 4th in terms of recall, 3rd and 4th in terms of precision. 
Both systems outperformed the baselines in all subtasks according to all metrics used.",2010
danlos-2005-automatic,https://aclanthology.org/I05-2013,0,,,,,,,"Automatic Recognition of French Expletive Pronoun Occurrences. We present a tool, called ILIMP, which takes as input a raw text in French and produces as output the same text in which every occurrence of the pronoun il is tagged either with tag [ANA] for anaphoric or [IMP] for impersonal or expletive. This tool is therefore designed to distinguish between the anaphoric occurrences of il, for which an anaphora resolution system has to look for an antecedent, and the expletive occurrences of this pronoun, for which it does not make sense to look for an antecedent. The precision rate for ILIMP is 97,5%. The few errors are analyzed in detail. Other tasks using the method developed for ILIMP are described briefly, as well as the use of ILIMP in a modular syntactic analysis system.",Automatic Recognition of {F}rench Expletive Pronoun Occurrences,"We present a tool, called ILIMP, which takes as input a raw text in French and produces as output the same text in which every occurrence of the pronoun il is tagged either with tag [ANA] for anaphoric or [IMP] for impersonal or expletive. This tool is therefore designed to distinguish between the anaphoric occurrences of il, for which an anaphora resolution system has to look for an antecedent, and the expletive occurrences of this pronoun, for which it does not make sense to look for an antecedent. The precision rate for ILIMP is 97,5%. The few errors are analyzed in detail. Other tasks using the method developed for ILIMP are described briefly, as well as the use of ILIMP in a modular syntactic analysis system.",Automatic Recognition of French Expletive Pronoun Occurrences,"We present a tool, called ILIMP, which takes as input a raw text in French and produces as output the same text in which every occurrence of the pronoun il is tagged either with tag [ANA] for anaphoric or [IMP] for impersonal or expletive. This tool is therefore designed to distinguish between the anaphoric occurrences of il, for which an anaphora resolution system has to look for an antecedent, and the expletive occurrences of this pronoun, for which it does not make sense to look for an antecedent. The precision rate for ILIMP is 97,5%. The few errors are analyzed in detail. Other tasks using the method developed for ILIMP are described briefly, as well as the use of ILIMP in a modular syntactic analysis system.",,"Automatic Recognition of French Expletive Pronoun Occurrences. We present a tool, called ILIMP, which takes as input a raw text in French and produces as output the same text in which every occurrence of the pronoun il is tagged either with tag [ANA] for anaphoric or [IMP] for impersonal or expletive. This tool is therefore designed to distinguish between the anaphoric occurrences of il, for which an anaphora resolution system has to look for an antecedent, and the expletive occurrences of this pronoun, for which it does not make sense to look for an antecedent. The precision rate for ILIMP is 97,5%. The few errors are analyzed in detail. Other tasks using the method developed for ILIMP are described briefly, as well as the use of ILIMP in a modular syntactic analysis system.",2005
johnson-charniak-2004-tag,https://aclanthology.org/P04-1005,0,,,,,,,"A TAG-based noisy-channel model of speech repairs. This paper describes a noisy channel model of speech repairs, which can identify and correct repairs in speech transcripts. A syntactic parser is used as the source model, and a novel type of TAG-based transducer is the channel model. The use of TAG is motivated by the intuition that the reparandum is a ""rough copy"" of the repair. The model is trained and tested on the Switchboard disfluency-annotated corpus.",A {TAG}-based noisy-channel model of speech repairs,"This paper describes a noisy channel model of speech repairs, which can identify and correct repairs in speech transcripts. A syntactic parser is used as the source model, and a novel type of TAG-based transducer is the channel model. The use of TAG is motivated by the intuition that the reparandum is a ""rough copy"" of the repair. The model is trained and tested on the Switchboard disfluency-annotated corpus.",A TAG-based noisy-channel model of speech repairs,"This paper describes a noisy channel model of speech repairs, which can identify and correct repairs in speech transcripts. A syntactic parser is used as the source model, and a novel type of TAG-based transducer is the channel model. The use of TAG is motivated by the intuition that the reparandum is a ""rough copy"" of the repair. The model is trained and tested on the Switchboard disfluency-annotated corpus.",,"A TAG-based noisy-channel model of speech repairs. This paper describes a noisy channel model of speech repairs, which can identify and correct repairs in speech transcripts. A syntactic parser is used as the source model, and a novel type of TAG-based transducer is the channel model. The use of TAG is motivated by the intuition that the reparandum is a ""rough copy"" of the repair. The model is trained and tested on the Switchboard disfluency-annotated corpus.",2004
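For reference, the noisy-channel setup named in the record above follows the standard decomposition (general formulation, not quoted from the paper), with y the observed transcript containing repairs and x a candidate fluent source string:

```latex
% Standard noisy-channel decomposition: the source model P(x) is the
% syntactic parser over fluent strings and the channel model P(y | x)
% is the TAG-based transducer that introduces the "rough copy" repairs.
\[
  \hat{x} \;=\; \operatorname*{arg\,max}_{x} P(x \mid y)
          \;=\; \operatorname*{arg\,max}_{x} P(x)\, P(y \mid x)
\]
```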
sagot-boullier-2006-deep,http://www.lrec-conf.org/proceedings/lrec2006/pdf/806_pdf.pdf,0,,,,,,,"Deep non-probabilistic parsing of large corpora. This paper reports a large-scale non-probabilistic parsing experiment with a deep LFG parser. We briefly introduce the parser we used, named SXLFG, and the resources that were used together with it. Then we report quantitative results about the parsing of a multi-million word journalistic corpus. We show that we can parse more than 6 million words in less than 12 hours, only 6.7% of all sentences reaching the 1s timeout. This shows that deep large-coverage non-probabilistic parsers can be efficient enough to parse very large corpora in a reasonable amount of time.",Deep non-probabilistic parsing of large corpora,"This paper reports a large-scale non-probabilistic parsing experiment with a deep LFG parser. We briefly introduce the parser we used, named SXLFG, and the resources that were used together with it. Then we report quantitative results about the parsing of a multi-million word journalistic corpus. We show that we can parse more than 6 million words in less than 12 hours, only 6.7% of all sentences reaching the 1s timeout. This shows that deep large-coverage non-probabilistic parsers can be efficient enough to parse very large corpora in a reasonable amount of time.",Deep non-probabilistic parsing of large corpora,"This paper reports a large-scale non-probabilistic parsing experiment with a deep LFG parser. We briefly introduce the parser we used, named SXLFG, and the resources that were used together with it. Then we report quantitative results about the parsing of a multi-million word journalistic corpus. We show that we can parse more than 6 million words in less than 12 hours, only 6.7% of all sentences reaching the 1s timeout. This shows that deep large-coverage non-probabilistic parsers can be efficient enough to parse very large corpora in a reasonable amount of time.",,"Deep non-probabilistic parsing of large corpora. This paper reports a large-scale non-probabilistic parsing experiment with a deep LFG parser. We briefly introduce the parser we used, named SXLFG, and the resources that were used together with it. Then we report quantitative results about the parsing of a multi-million word journalistic corpus. We show that we can parse more than 6 million words in less than 12 hours, only 6.7% of all sentences reaching the 1s timeout. This shows that deep large-coverage non-probabilistic parsers can be efficient enough to parse very large corpora in a reasonable amount of time.",2006
chen-etal-2022-focus,https://aclanthology.org/2022.acl-short.74,0,,,,,,,"Focus on the Target's Vocabulary: Masked Label Smoothing for Machine Translation. Label smoothing and vocabulary sharing are two widely used techniques in neural machine translation models. However, we argue that simply applying both techniques can be conflicting and even leads to sub-optimal performance. When allocating smoothed probability, original label smoothing treats the source-side words that would never appear in the target language equally to the real target-side words, which could bias the translation model. To address this issue, we propose Masked Label Smoothing (MLS), a new mechanism that masks the soft label probability of source-side words to zero. Simple yet effective, MLS manages to better integrate label smoothing with vocabulary sharing. Our extensive experiments show that MLS consistently yields improvement over original label smoothing on different datasets, including bilingual and multilingual translation from both translation quality and model's calibration. Our code is released at PKUnlp-icler.",Focus on the Target{'}s Vocabulary: Masked Label Smoothing for Machine Translation,"Label smoothing and vocabulary sharing are two widely used techniques in neural machine translation models. However, we argue that simply applying both techniques can be conflicting and even leads to sub-optimal performance. When allocating smoothed probability, original label smoothing treats the source-side words that would never appear in the target language equally to the real target-side words, which could bias the translation model. To address this issue, we propose Masked Label Smoothing (MLS), a new mechanism that masks the soft label probability of source-side words to zero. Simple yet effective, MLS manages to better integrate label smoothing with vocabulary sharing. Our extensive experiments show that MLS consistently yields improvement over original label smoothing on different datasets, including bilingual and multilingual translation from both translation quality and model's calibration. Our code is released at PKUnlp-icler.",Focus on the Target's Vocabulary: Masked Label Smoothing for Machine Translation,"Label smoothing and vocabulary sharing are two widely used techniques in neural machine translation models. However, we argue that simply applying both techniques can be conflicting and even leads to sub-optimal performance. When allocating smoothed probability, original label smoothing treats the source-side words that would never appear in the target language equally to the real target-side words, which could bias the translation model. To address this issue, we propose Masked Label Smoothing (MLS), a new mechanism that masks the soft label probability of source-side words to zero. Simple yet effective, MLS manages to better integrate label smoothing with vocabulary sharing. Our extensive experiments show that MLS consistently yields improvement over original label smoothing on different datasets, including bilingual and multilingual translation from both translation quality and model's calibration. Our code is released at PKUnlp-icler.",We thank all reviewers for their valuable suggestions for this work. This paper is supported by the ,"Focus on the Target's Vocabulary: Masked Label Smoothing for Machine Translation. Label smoothing and vocabulary sharing are two widely used techniques in neural machine translation models. 
However, we argue that simply applying both techniques can be conflicting and even leads to sub-optimal performance. When allocating smoothed probability, original label smoothing treats the source-side words that would never appear in the target language equally to the real target-side words, which could bias the translation model. To address this issue, we propose Masked Label Smoothing (MLS), a new mechanism that masks the soft label probability of source-side words to zero. Simple yet effective, MLS manages to better integrate label smoothing with vocabulary sharing. Our extensive experiments show that MLS consistently yields improvement over original label smoothing on different datasets, including bilingual and multilingual translation from both translation quality and model's calibration. Our code is released at PKUnlp-icler.",2022
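The masking mechanism described in the record above (zeroing the smoothed label probability of source-only vocabulary entries) can be sketched as follows; this is a minimal reconstruction under assumed toy values, not the released PKUnlp-icler implementation:

```python
import numpy as np

def masked_label_smoothing(gold_idx, vocab_size, target_side_mask, eps=0.1):
    """Smoothed label distribution that spreads the epsilon mass only over
    target-side tokens (mask == 1) and assigns zero to source-only tokens,
    instead of smoothing uniformly over the whole shared vocabulary."""
    dist = np.zeros(vocab_size)
    allowed = target_side_mask.astype(bool)   # astype returns a copy, safe to edit
    allowed[gold_idx] = False                 # gold token gets 1 - eps, not the shared mass
    if allowed.any():
        dist[allowed] = eps / allowed.sum()
        dist[gold_idx] = 1.0 - eps
    else:
        dist[gold_idx] = 1.0
    return dist

# Toy shared vocabulary of 6 tokens; tokens 4 and 5 appear only on the source side.
mask = np.array([1, 1, 1, 1, 0, 0])
print(masked_label_smoothing(gold_idx=2, vocab_size=6, target_side_mask=mask))
# -> [0.0333 0.0333 0.9    0.0333 0.     0.    ]
```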
gottwald-etal-2008-tapping,http://www.lrec-conf.org/proceedings/lrec2008/pdf/117_paper.pdf,0,,,,,,,"Tapping Huge Temporally Indexed Textual Resources with WCTAnalyze. WCTAnalyze is a tool for storing, accessing and visually analyzing huge collections of temporally indexed data. It is motivated by applications in media analysis, business intelligence etc. where higher level analysis is performed on top of linguistically and statistically processed unstructured textual data. WCTAnalyze combines fast access with economically storage behaviour and appropriates a lot of built in visualization options for result presentation in detail as well as in contrast. So it enables an efficient and effective way to explore chronological text patterns of word forms, their co-occurrence sets and co-occurrence set intersections. Digging deep into co-occurrences of the same semantic or syntactic describing wordforms, some entities can be recognized as to be temporal related, whereas other differ significantly. This behaviour motivates approaches in interactive discovering events based on co-occurrence subsets.",Tapping Huge Temporally Indexed Textual Resources with {WCTA}nalyze,"WCTAnalyze is a tool for storing, accessing and visually analyzing huge collections of temporally indexed data. It is motivated by applications in media analysis, business intelligence etc. where higher level analysis is performed on top of linguistically and statistically processed unstructured textual data. WCTAnalyze combines fast access with economically storage behaviour and appropriates a lot of built in visualization options for result presentation in detail as well as in contrast. So it enables an efficient and effective way to explore chronological text patterns of word forms, their co-occurrence sets and co-occurrence set intersections. Digging deep into co-occurrences of the same semantic or syntactic describing wordforms, some entities can be recognized as to be temporal related, whereas other differ significantly. This behaviour motivates approaches in interactive discovering events based on co-occurrence subsets.",Tapping Huge Temporally Indexed Textual Resources with WCTAnalyze,"WCTAnalyze is a tool for storing, accessing and visually analyzing huge collections of temporally indexed data. It is motivated by applications in media analysis, business intelligence etc. where higher level analysis is performed on top of linguistically and statistically processed unstructured textual data. WCTAnalyze combines fast access with economically storage behaviour and appropriates a lot of built in visualization options for result presentation in detail as well as in contrast. So it enables an efficient and effective way to explore chronological text patterns of word forms, their co-occurrence sets and co-occurrence set intersections. Digging deep into co-occurrences of the same semantic or syntactic describing wordforms, some entities can be recognized as to be temporal related, whereas other differ significantly. This behaviour motivates approaches in interactive discovering events based on co-occurrence subsets.",,"Tapping Huge Temporally Indexed Textual Resources with WCTAnalyze. WCTAnalyze is a tool for storing, accessing and visually analyzing huge collections of temporally indexed data. It is motivated by applications in media analysis, business intelligence etc. where higher level analysis is performed on top of linguistically and statistically processed unstructured textual data. 
WCTAnalyze combines fast access with economically storage behaviour and appropriates a lot of built in visualization options for result presentation in detail as well as in contrast. So it enables an efficient and effective way to explore chronological text patterns of word forms, their co-occurrence sets and co-occurrence set intersections. Digging deep into co-occurrences of the same semantic or syntactic describing wordforms, some entities can be recognized as to be temporal related, whereas other differ significantly. This behaviour motivates approaches in interactive discovering events based on co-occurrence subsets.",2008
fong-berwick-1992-isolating,https://aclanthology.org/C92-2095,0,,,,,,,"Isolating Cross-linguistic Parsing Complexity with a Principles-and-Parameters Parser: A Case Study of Japanese and English. As parsing models and linguistic theories have broadened to encompass a wider range of non-English languages, a particularly useful ""stress test"" is to build a single theory/parser pair that can work for multiple languages, in the best case with minor variation, perhaps restricted to the lexicon. This paper reports on the results of just such a test applied to a fully operational (Prolog) implementation of a so-called principles-and-parameters model of syntax, for the case of Japanese and English. This paper has two basic aims: (1) to show how an implemented model for an entire principles-and-parameters model (essentially all of the linguistic theory in Lasnik & Uriagereka (1988) ), see figure 2 for a computer snapshot, leads directly to both a parser for multiple languages and a useful ""computational linguistics workbench"" in which one can easily experiment with alternative linguistic theoretical formulations of grammatical principles as well as alternative computational strategies;",Isolating Cross-linguistic Parsing Complexity with a Principles-and-Parameters Parser: A Case Study of {J}apanese and {E}nglish,"As parsing models and linguistic theories have broadened to encompass a wider range of non-English languages, a particularly useful ""stress test"" is to build a single theory/parser pair that can work for multiple languages, in the best case with minor variation, perhaps restricted to the lexicon. This paper reports on the results of just such a test applied to a fully operational (Prolog) implementation of a so-called principles-and-parameters model of syntax, for the case of Japanese and English. This paper has two basic aims: (1) to show how an implemented model for an entire principles-and-parameters model (essentially all of the linguistic theory in Lasnik & Uriagereka (1988) ), see figure 2 for a computer snapshot, leads directly to both a parser for multiple languages and a useful ""computational linguistics workbench"" in which one can easily experiment with alternative linguistic theoretical formulations of grammatical principles as well as alternative computational strategies;",Isolating Cross-linguistic Parsing Complexity with a Principles-and-Parameters Parser: A Case Study of Japanese and English,"As parsing models and linguistic theories have broadened to encompass a wider range of non-English languages, a particularly useful ""stress test"" is to build a single theory/parser pair that can work for multiple languages, in the best case with minor variation, perhaps restricted to the lexicon. This paper reports on the results of just such a test applied to a fully operational (Prolog) implementation of a so-called principles-and-parameters model of syntax, for the case of Japanese and English. 
This paper has two basic aims: (1) to show how an implemented model for an entire principles-and-parameters model (essentially all of the linguistic theory in Lasnik & Uriagereka (1988) ), see figure 2 for a computer snapshot, leads directly to both a parser for multiple languages and a useful ""computational linguistics workbench"" in which one can easily experiment with alternative linguistic theoretical formulations of grammatical principles as well as alternative computational strategies;",,"Isolating Cross-linguistic Parsing Complexity with a Principles-and-Parameters Parser: A Case Study of Japanese and English. As parsing models and linguistic theories have broadened to encompass a wider range of non-English languages, a particularly useful ""stress test"" is to build a single theory/parser pair that can work for multiple languages, in the best case with minor variation, perhaps restricted to the lexicon. This paper reports on the results of just such a test applied to a fully operational (Prolog) implementation of a so-called principles-and-parameters model of syntax, for the case of Japanese and English. This paper has two basic aims: (1) to show how an implemented model for an entire principles-and-parameters model (essentially all of the linguistic theory in Lasnik & Uriagereka (1988) ), see figure 2 for a computer snapshot, leads directly to both a parser for multiple languages and a useful ""computational linguistics workbench"" in which one can easily experiment with alternative linguistic theoretical formulations of grammatical principles as well as alternative computational strategies;",1992
chanen-patrick-2004-complex,https://aclanthology.org/U04-1001,0,,,,,,,"Complex, Corpus-Driven, Syntactic Features for Word Sense Disambiguation. Although syntactic features offer more specific information about the context surrounding a target word in a Word Sense Disambiguation (WSD) task, in general, they have not distinguished themselves much above positional features such as bag-of-words. In this paper we offer two methods for increasing the recall rate when using syntactic features on the WSD task by: 1) using an algorithm for discovering in the corpus every possible syntactic feature involving a target word, and 2) using wildcards in place of the lemmas in the templates of the syntactic features. In the best experimental results on the SENSEVAL-2 data we achieved an Fmeasure of 53.1% which is well above the mean F-measure performance of official SENSEVAL-2 entries, of 44.2%. These results are encouraging considering that only one kind of feature is used and only a simple Support Vector Machine (SVM) running with the defaults is used for the machine learning.","Complex, Corpus-Driven, Syntactic Features for Word Sense Disambiguation","Although syntactic features offer more specific information about the context surrounding a target word in a Word Sense Disambiguation (WSD) task, in general, they have not distinguished themselves much above positional features such as bag-of-words. In this paper we offer two methods for increasing the recall rate when using syntactic features on the WSD task by: 1) using an algorithm for discovering in the corpus every possible syntactic feature involving a target word, and 2) using wildcards in place of the lemmas in the templates of the syntactic features. In the best experimental results on the SENSEVAL-2 data we achieved an Fmeasure of 53.1% which is well above the mean F-measure performance of official SENSEVAL-2 entries, of 44.2%. These results are encouraging considering that only one kind of feature is used and only a simple Support Vector Machine (SVM) running with the defaults is used for the machine learning.","Complex, Corpus-Driven, Syntactic Features for Word Sense Disambiguation","Although syntactic features offer more specific information about the context surrounding a target word in a Word Sense Disambiguation (WSD) task, in general, they have not distinguished themselves much above positional features such as bag-of-words. In this paper we offer two methods for increasing the recall rate when using syntactic features on the WSD task by: 1) using an algorithm for discovering in the corpus every possible syntactic feature involving a target word, and 2) using wildcards in place of the lemmas in the templates of the syntactic features. In the best experimental results on the SENSEVAL-2 data we achieved an Fmeasure of 53.1% which is well above the mean F-measure performance of official SENSEVAL-2 entries, of 44.2%. These results are encouraging considering that only one kind of feature is used and only a simple Support Vector Machine (SVM) running with the defaults is used for the machine learning.",The word sense disambiguation architecture was jointly constructed with David Bell. We would like to thank the Capital Markets CRC and the University of Sydney for financial supported and everyone in the Sydney Language Technology Research Group for their support.,"Complex, Corpus-Driven, Syntactic Features for Word Sense Disambiguation. 
Although syntactic features offer more specific information about the context surrounding a target word in a Word Sense Disambiguation (WSD) task, in general, they have not distinguished themselves much above positional features such as bag-of-words. In this paper we offer two methods for increasing the recall rate when using syntactic features on the WSD task by: 1) using an algorithm for discovering in the corpus every possible syntactic feature involving a target word, and 2) using wildcards in place of the lemmas in the templates of the syntactic features. In the best experimental results on the SENSEVAL-2 data we achieved an Fmeasure of 53.1% which is well above the mean F-measure performance of official SENSEVAL-2 entries, of 44.2%. These results are encouraging considering that only one kind of feature is used and only a simple Support Vector Machine (SVM) running with the defaults is used for the machine learning.",2004
li-etal-2021-recommend-reason,https://aclanthology.org/2021.findings-emnlp.66,0,,,,,,,"Recommend for a Reason: Unlocking the Power of Unsupervised Aspect-Sentiment Co-Extraction. Compliments and concerns in reviews are valuable for understanding users' shopping interests and their opinions with respect to specific aspects of certain items. Existing reviewbased recommenders favor large and complex language encoders that can only learn latent and uninterpretable text representations. They lack explicit user-attention and item-property modeling, which however could provide valuable information beyond the ability to recommend items. Therefore, we propose a tightly coupled two-stage approach, including an Aspect-Sentiment Pair Extractor (ASPE) and an Attention-Property-aware Rating Estimator (APRE). Unsupervised ASPE mines Aspect-Sentiment pairs (AS-pairs) and APRE predicts ratings using AS-pairs as concrete aspect-level evidences. Extensive experiments on seven real-world Amazon Review Datasets demonstrate that ASPE can effectively extract AS-pairs which enable APRE to deliver superior accuracy over the leading baselines.",Recommend for a Reason: Unlocking the Power of Unsupervised Aspect-Sentiment Co-Extraction,"Compliments and concerns in reviews are valuable for understanding users' shopping interests and their opinions with respect to specific aspects of certain items. Existing reviewbased recommenders favor large and complex language encoders that can only learn latent and uninterpretable text representations. They lack explicit user-attention and item-property modeling, which however could provide valuable information beyond the ability to recommend items. Therefore, we propose a tightly coupled two-stage approach, including an Aspect-Sentiment Pair Extractor (ASPE) and an Attention-Property-aware Rating Estimator (APRE). Unsupervised ASPE mines Aspect-Sentiment pairs (AS-pairs) and APRE predicts ratings using AS-pairs as concrete aspect-level evidences. Extensive experiments on seven real-world Amazon Review Datasets demonstrate that ASPE can effectively extract AS-pairs which enable APRE to deliver superior accuracy over the leading baselines.",Recommend for a Reason: Unlocking the Power of Unsupervised Aspect-Sentiment Co-Extraction,"Compliments and concerns in reviews are valuable for understanding users' shopping interests and their opinions with respect to specific aspects of certain items. Existing reviewbased recommenders favor large and complex language encoders that can only learn latent and uninterpretable text representations. They lack explicit user-attention and item-property modeling, which however could provide valuable information beyond the ability to recommend items. Therefore, we propose a tightly coupled two-stage approach, including an Aspect-Sentiment Pair Extractor (ASPE) and an Attention-Property-aware Rating Estimator (APRE). Unsupervised ASPE mines Aspect-Sentiment pairs (AS-pairs) and APRE predicts ratings using AS-pairs as concrete aspect-level evidences. Extensive experiments on seven real-world Amazon Review Datasets demonstrate that ASPE can effectively extract AS-pairs which enable APRE to deliver superior accuracy over the leading baselines.",We would like to thank the reviewers for their helpful comments. The work was partially supported by NSF DGE-1829071 and NSF IIS-2106859.,"Recommend for a Reason: Unlocking the Power of Unsupervised Aspect-Sentiment Co-Extraction. 
Compliments and concerns in reviews are valuable for understanding users' shopping interests and their opinions with respect to specific aspects of certain items. Existing reviewbased recommenders favor large and complex language encoders that can only learn latent and uninterpretable text representations. They lack explicit user-attention and item-property modeling, which however could provide valuable information beyond the ability to recommend items. Therefore, we propose a tightly coupled two-stage approach, including an Aspect-Sentiment Pair Extractor (ASPE) and an Attention-Property-aware Rating Estimator (APRE). Unsupervised ASPE mines Aspect-Sentiment pairs (AS-pairs) and APRE predicts ratings using AS-pairs as concrete aspect-level evidences. Extensive experiments on seven real-world Amazon Review Datasets demonstrate that ASPE can effectively extract AS-pairs which enable APRE to deliver superior accuracy over the leading baselines.",2021
wang-etal-2021-chemner,https://aclanthology.org/2021.emnlp-main.424,1,,,,industry_innovation_infrastructure,,,"ChemNER: Fine-Grained Chemistry Named Entity Recognition with Ontology-Guided Distant Supervision. Scientific literature analysis needs fine-grained named entity recognition (NER) to provide a wide range of information for scientific discovery. For example, chemistry research needs to study dozens to hundreds of distinct, fine-grained entity types, making consistent and accurate annotation difficult even for crowds of domain experts. On the other hand, domain-specific ontologies and knowledge bases (KBs) can be easily accessed, constructed, or integrated, which makes distant supervision realistic for fine-grained chemistry NER. In distant supervision, training labels are generated by matching mentions in a document with the concepts in the knowledge bases (KBs). However, this kind of KB-matching suffers from two major challenges: incomplete annotation and noisy annotation. We propose CHEMNER, an ontology-guided, distantly-supervised method for fine-grained chemistry NER to tackle these challenges. It leverages the chemistry type ontology structure to generate distant labels with novel methods of flexible KB-matching and ontology-guided multi-type disambiguation. It significantly improves the distant label generation for the subsequent sequence labeling model training. We also provide an expert-labeled, chemistry NER dataset with 62 fine-grained chemistry types (e.g., chemical compounds and chemical reactions). Experimental results show that CHEMNER is highly effective, outperforming substantially the state-of-the-art NER methods (with .25 absolute F1 score improvement).",{C}hem{NER}: Fine-Grained Chemistry Named Entity Recognition with Ontology-Guided Distant Supervision,"Scientific literature analysis needs fine-grained named entity recognition (NER) to provide a wide range of information for scientific discovery. For example, chemistry research needs to study dozens to hundreds of distinct, fine-grained entity types, making consistent and accurate annotation difficult even for crowds of domain experts. On the other hand, domain-specific ontologies and knowledge bases (KBs) can be easily accessed, constructed, or integrated, which makes distant supervision realistic for fine-grained chemistry NER. In distant supervision, training labels are generated by matching mentions in a document with the concepts in the knowledge bases (KBs). However, this kind of KB-matching suffers from two major challenges: incomplete annotation and noisy annotation. We propose CHEMNER, an ontology-guided, distantly-supervised method for fine-grained chemistry NER to tackle these challenges. It leverages the chemistry type ontology structure to generate distant labels with novel methods of flexible KB-matching and ontology-guided multi-type disambiguation. It significantly improves the distant label generation for the subsequent sequence labeling model training. We also provide an expert-labeled, chemistry NER dataset with 62 fine-grained chemistry types (e.g., chemical compounds and chemical reactions). Experimental results show that CHEMNER is highly effective, outperforming substantially the state-of-the-art NER methods (with .25 absolute F1 score improvement).",ChemNER: Fine-Grained Chemistry Named Entity Recognition with Ontology-Guided Distant Supervision,"Scientific literature analysis needs fine-grained named entity recognition (NER) to provide a wide range of information for scientific discovery. 
For example, chemistry research needs to study dozens to hundreds of distinct, fine-grained entity types, making consistent and accurate annotation difficult even for crowds of domain experts. On the other hand, domain-specific ontologies and knowledge bases (KBs) can be easily accessed, constructed, or integrated, which makes distant supervision realistic for fine-grained chemistry NER. In distant supervision, training labels are generated by matching mentions in a document with the concepts in the knowledge bases (KBs). However, this kind of KB-matching suffers from two major challenges: incomplete annotation and noisy annotation. We propose CHEMNER, an ontology-guided, distantly-supervised method for fine-grained chemistry NER to tackle these challenges. It leverages the chemistry type ontology structure to generate distant labels with novel methods of flexible KB-matching and ontology-guided multi-type disambiguation. It significantly improves the distant label generation for the subsequent sequence labeling model training. We also provide an expert-labeled, chemistry NER dataset with 62 fine-grained chemistry types (e.g., chemical compounds and chemical reactions). Experimental results show that CHEMNER is highly effective, outperforming substantially the state-of-the-art NER methods (with .25 absolute F1 score improvement).","This work was supported by the Molecule Maker Lab Institute: An AI Research Institutes program supported by NSF under Award No. 2019897, US DARPA KAIROS Program No. FA8750-19-2-1004, SocialSim Program No. W911NF-17-C-0099, and INCAS Program No. HR001121C0165, and National Science Foundation IIS-19-56151, IIS-17-41317, and IIS 17-04532. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect those of the National Science Foundation, DARPA or the U.S. Government.","ChemNER: Fine-Grained Chemistry Named Entity Recognition with Ontology-Guided Distant Supervision. Scientific literature analysis needs fine-grained named entity recognition (NER) to provide a wide range of information for scientific discovery. For example, chemistry research needs to study dozens to hundreds of distinct, fine-grained entity types, making consistent and accurate annotation difficult even for crowds of domain experts. On the other hand, domain-specific ontologies and knowledge bases (KBs) can be easily accessed, constructed, or integrated, which makes distant supervision realistic for fine-grained chemistry NER. In distant supervision, training labels are generated by matching mentions in a document with the concepts in the knowledge bases (KBs). However, this kind of KB-matching suffers from two major challenges: incomplete annotation and noisy annotation. We propose CHEMNER, an ontology-guided, distantly-supervised method for fine-grained chemistry NER to tackle these challenges. It leverages the chemistry type ontology structure to generate distant labels with novel methods of flexible KB-matching and ontology-guided multi-type disambiguation. It significantly improves the distant label generation for the subsequent sequence labeling model training. We also provide an expert-labeled, chemistry NER dataset with 62 fine-grained chemistry types (e.g., chemical compounds and chemical reactions). Experimental results show that CHEMNER is highly effective, outperforming substantially the state-of-the-art NER methods (with .25 absolute F1 score improvement).",2021
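The ChemNER entry above relies on distant supervision by matching document mentions against KB concepts. The following minimal sketch shows only the generic KB string-matching step (longest match wins, emitting BIO tags); it is not the CHEMNER system, and the toy KB entries and type names are made up for illustration.

# Minimal sketch of distant supervision by KB string matching (not the
# CHEMNER system itself): tag tokens with BIO labels whenever a span
# matches an entry in a toy chemistry "KB". Longest match wins.
TOY_KB = {
    "sodium chloride": "ChemicalCompound",   # invented KB entries
    "hydrolysis": "ChemicalReaction",
    "benzene": "ChemicalCompound",
}

def distant_label(tokens, kb=TOY_KB, max_len=4):
    tags = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        matched = False
        # Try the longest candidate span first.
        for span in range(min(max_len, len(tokens) - i), 0, -1):
            phrase = " ".join(tokens[i:i + span]).lower()
            if phrase in kb:
                etype = kb[phrase]
                tags[i] = f"B-{etype}"
                for j in range(i + 1, i + span):
                    tags[j] = f"I-{etype}"
                i += span
                matched = True
                break
        if not matched:
            i += 1
    return tags

tokens = "Sodium chloride dissolves before hydrolysis occurs".split()
print(list(zip(tokens, distant_label(tokens))))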
chang-etal-2017-novel-trajectory,https://aclanthology.org/O17-3008,0,,,,,,,"A Novel Trajectory-based Spatial-Temporal Spectral Features for Speech Emotion Recognition. Speech is one of the most natural form of human communication. Recognizing emotion from speech continues to be an important research venue to advance human-machine interface design and human behavior understanding. In this work, we propose a novel set of features, termed trajectory-based spatial-temporal spectral features, to recognize emotions from speech. The core idea centers on deriving descriptors both spatially and temporally on speech spectrograms over a sub-utterance frame (e.g., 250ms)-an inspiration from dense trajectory-based video descriptors. We conduct categorical and dimensional emotion recognition experiments and compare our proposed features to both the well-established set of prosodic and spectral features and the state-of-the-art exhaustive feature extraction. Our experiment demonstrate that our features by itself achieves comparable accuracies in the 4-class emotion recognition and valence detection task, and it obtains a significant improvement in the activation detection. We additionally show that there exists complementary information in our proposed features to the existing acoustic features set, which can be used to obtain an improved emotion recognition accuracy.",A Novel Trajectory-based Spatial-Temporal Spectral Features for Speech Emotion Recognition,"Speech is one of the most natural form of human communication. Recognizing emotion from speech continues to be an important research venue to advance human-machine interface design and human behavior understanding. In this work, we propose a novel set of features, termed trajectory-based spatial-temporal spectral features, to recognize emotions from speech. The core idea centers on deriving descriptors both spatially and temporally on speech spectrograms over a sub-utterance frame (e.g., 250ms)-an inspiration from dense trajectory-based video descriptors. We conduct categorical and dimensional emotion recognition experiments and compare our proposed features to both the well-established set of prosodic and spectral features and the state-of-the-art exhaustive feature extraction. Our experiment demonstrate that our features by itself achieves comparable accuracies in the 4-class emotion recognition and valence detection task, and it obtains a significant improvement in the activation detection. We additionally show that there exists complementary information in our proposed features to the existing acoustic features set, which can be used to obtain an improved emotion recognition accuracy.",A Novel Trajectory-based Spatial-Temporal Spectral Features for Speech Emotion Recognition,"Speech is one of the most natural form of human communication. Recognizing emotion from speech continues to be an important research venue to advance human-machine interface design and human behavior understanding. In this work, we propose a novel set of features, termed trajectory-based spatial-temporal spectral features, to recognize emotions from speech. The core idea centers on deriving descriptors both spatially and temporally on speech spectrograms over a sub-utterance frame (e.g., 250ms)-an inspiration from dense trajectory-based video descriptors. 
We conduct categorical and dimensional emotion recognition experiments and compare our proposed features to both the well-established set of prosodic and spectral features and the state-of-the-art exhaustive feature extraction. Our experiment demonstrate that our features by itself achieves comparable accuracies in the 4-class emotion recognition and valence detection task, and it obtains a significant improvement in the activation detection. We additionally show that there exists complementary information in our proposed features to the existing acoustic features set, which can be used to obtain an improved emotion recognition accuracy.",Thanks to Ministry of Science and Technology (103-2218-E-007-012-MY3) for funding.,"A Novel Trajectory-based Spatial-Temporal Spectral Features for Speech Emotion Recognition. Speech is one of the most natural form of human communication. Recognizing emotion from speech continues to be an important research venue to advance human-machine interface design and human behavior understanding. In this work, we propose a novel set of features, termed trajectory-based spatial-temporal spectral features, to recognize emotions from speech. The core idea centers on deriving descriptors both spatially and temporally on speech spectrograms over a sub-utterance frame (e.g., 250ms)-an inspiration from dense trajectory-based video descriptors. We conduct categorical and dimensional emotion recognition experiments and compare our proposed features to both the well-established set of prosodic and spectral features and the state-of-the-art exhaustive feature extraction. Our experiment demonstrate that our features by itself achieves comparable accuracies in the 4-class emotion recognition and valence detection task, and it obtains a significant improvement in the activation detection. We additionally show that there exists complementary information in our proposed features to the existing acoustic features set, which can be used to obtain an improved emotion recognition accuracy.",2017
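The entry above derives descriptors both spatially and temporally from speech spectrograms over roughly 250 ms windows. The sketch below is only loosely inspired by that idea and is not the paper's trajectory feature extractor: it computes a plain STFT of a synthetic signal with numpy and summarizes temporal and spectral differences per window; the window length, hop size, and summary statistics are assumptions made for the example.

# Rough sketch of spatial-temporal descriptors from a spectrogram over
# sub-utterance windows (inspired by the idea above, not the paper's exact
# trajectory features). Uses a synthetic signal and a plain numpy STFT.
import numpy as np

sr = 16000
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(sr)

def stft_mag(x, win=400, hop=160):
    frames = [x[i:i + win] * np.hanning(win)
              for i in range(0, len(x) - win, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))  # (frames, bins)

spec = stft_mag(signal)

def window_descriptor(spec, start, n_frames=25):   # ~250 ms at a 10 ms hop
    patch = spec[start:start + n_frames]
    d_time = np.diff(patch, axis=0)   # temporal ("trajectory") change
    d_freq = np.diff(patch, axis=1)   # spectral ("spatial") change
    return np.array([d_time.mean(), d_time.std(),
                     d_freq.mean(), d_freq.std()])

descriptors = np.array([window_descriptor(spec, s)
                        for s in range(0, spec.shape[0] - 25, 25)])
print(descriptors.shape)   # one 4-dim descriptor per 250 ms window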
nandi-etal-2013-towards,https://aclanthology.org/W13-3725,0,,,,,,,"Towards Building Parallel Dependency Treebanks: Intra-Chunk Expansion and Alignment for English Dependency Treebank. The paper presents our work on the annotation of intra-chunk dependencies on an English treebank that was previously annotated with Inter-chunk dependencies, and for which there exists a fully expanded parallel Hindi dependency treebank. This provides fully parsed dependency trees for the English treebank. We also report an analysis of the inter-annotator agreement for this chunk expansion task. Further, these fully expanded parallel Hindi and English treebanks were word aligned and an analysis for the task has been given. Issues related to intra-chunk expansion and alignment for the language pair Hindi-English are discussed and guidelines for these tasks have been prepared and released.",Towards Building Parallel Dependency Treebanks: Intra-Chunk Expansion and Alignment for {E}nglish Dependency Treebank,"The paper presents our work on the annotation of intra-chunk dependencies on an English treebank that was previously annotated with Inter-chunk dependencies, and for which there exists a fully expanded parallel Hindi dependency treebank. This provides fully parsed dependency trees for the English treebank. We also report an analysis of the inter-annotator agreement for this chunk expansion task. Further, these fully expanded parallel Hindi and English treebanks were word aligned and an analysis for the task has been given. Issues related to intra-chunk expansion and alignment for the language pair Hindi-English are discussed and guidelines for these tasks have been prepared and released.",Towards Building Parallel Dependency Treebanks: Intra-Chunk Expansion and Alignment for English Dependency Treebank,"The paper presents our work on the annotation of intra-chunk dependencies on an English treebank that was previously annotated with Inter-chunk dependencies, and for which there exists a fully expanded parallel Hindi dependency treebank. This provides fully parsed dependency trees for the English treebank. We also report an analysis of the inter-annotator agreement for this chunk expansion task. Further, these fully expanded parallel Hindi and English treebanks were word aligned and an analysis for the task has been given. Issues related to intra-chunk expansion and alignment for the language pair Hindi-English are discussed and guidelines for these tasks have been prepared and released.","We gratefully acknowledge the provision of the useful resource by way of the Hindi Treebank developed under HUTB, of which the Hindi treebank used for our research purpose is a part, and the work for which is supported by the NSF grant (Award Number: CNS 0751202; CFDA Number: 47.070). Also, any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.","Towards Building Parallel Dependency Treebanks: Intra-Chunk Expansion and Alignment for English Dependency Treebank. The paper presents our work on the annotation of intra-chunk dependencies on an English treebank that was previously annotated with Inter-chunk dependencies, and for which there exists a fully expanded parallel Hindi dependency treebank. This provides fully parsed dependency trees for the English treebank. We also report an analysis of the inter-annotator agreement for this chunk expansion task. 
Further, these fully expanded parallel Hindi and English treebanks were word aligned and an analysis for the task has been given. Issues related to intra-chunk expansion and alignment for the language pair Hindi-English are discussed and guidelines for these tasks have been prepared and released.",2013
xia-zong-2011-pos,https://aclanthology.org/I11-1069,0,,,,,,,"A POS-based Ensemble Model for Cross-domain Sentiment Classification. In this paper, we focus on the tasks of cross-domain sentiment classification. We find across different domains, features with some types of part-of-speech (POS) tags are domain-dependent, while some others are domain-free. Based on this finding, we proposed a POS-based ensemble model to efficiently integrate features with different types of POS tags to improve the classification performance. Weights are trained by stochastic gradient descent (SGD) to optimize the perceptron and minimal classification error (MCE) criteria. Experimental results show that the proposed ensemble model is quite effective for the task of cross-domain sentiment classification.",A {POS}-based Ensemble Model for Cross-domain Sentiment Classification,"In this paper, we focus on the tasks of cross-domain sentiment classification. We find across different domains, features with some types of part-of-speech (POS) tags are domain-dependent, while some others are domain-free. Based on this finding, we proposed a POS-based ensemble model to efficiently integrate features with different types of POS tags to improve the classification performance. Weights are trained by stochastic gradient descent (SGD) to optimize the perceptron and minimal classification error (MCE) criteria. Experimental results show that the proposed ensemble model is quite effective for the task of cross-domain sentiment classification.",A POS-based Ensemble Model for Cross-domain Sentiment Classification,"In this paper, we focus on the tasks of cross-domain sentiment classification. We find across different domains, features with some types of part-of-speech (POS) tags are domain-dependent, while some others are domain-free. Based on this finding, we proposed a POS-based ensemble model to efficiently integrate features with different types of POS tags to improve the classification performance. Weights are trained by stochastic gradient descent (SGD) to optimize the perceptron and minimal classification error (MCE) criteria. Experimental results show that the proposed ensemble model is quite effective for the task of cross-domain sentiment classification.","The research work has been funded by the Natural Science Foundation of China under Grant No. 60975053 and 61003160, and supported by the External Cooperation Program of the Chinese Academy of Sciences.","A POS-based Ensemble Model for Cross-domain Sentiment Classification. In this paper, we focus on the tasks of cross-domain sentiment classification. We find across different domains, features with some types of part-of-speech (POS) tags are domain-dependent, while some others are domain-free. Based on this finding, we proposed a POS-based ensemble model to efficiently integrate features with different types of POS tags to improve the classification performance. Weights are trained by stochastic gradient descent (SGD) to optimize the perceptron and minimal classification error (MCE) criteria. Experimental results show that the proposed ensemble model is quite effective for the task of cross-domain sentiment classification.",2011
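The POS-based ensemble entry above combines feature groups with weights trained by SGD under a perceptron criterion. A minimal sketch of that kind of weight learning follows, assuming two hypothetical POS-group classifiers whose per-document scores are simulated with random numbers; it illustrates the perceptron-style update only, not the paper's model.

# Toy sketch of a POS-group ensemble (not the paper's exact model): two
# sub-classifiers score a document from different POS feature groups, and
# the combination weights are tuned by a simple SGD perceptron update.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-document scores from two POS-group classifiers
# (e.g., adjectives/adverbs vs. nouns/verbs) and gold labels in {-1, +1}.
scores = rng.normal(size=(200, 2))
labels = np.sign(scores @ np.array([1.5, 0.3]) + 0.1 * rng.normal(size=200))

w = np.zeros(2)
lr = 0.1
for epoch in range(10):
    for x, y in zip(scores, labels):
        if y * (w @ x) <= 0:      # perceptron criterion: update on mistakes
            w += lr * y * x

pred = np.sign(scores @ w)
print("ensemble weights:", w, "train accuracy:", (pred == labels).mean())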
pulman-sukkarieh-2005-automatic,https://aclanthology.org/W05-0202,0,,,,,,,"Automatic Short Answer Marking. Our aim is to investigate computational linguistics (CL) techniques in marking short free text responses automatically. Successful automatic marking of free text answers would seem to presuppose an advanced level of performance in automated natural language understanding. However, recent advances in CL techniques have opened up the possibility of being able to automate the marking of free text responses typed into a computer without having to create systems that fully understand the answers. This paper describes some of the techniques we have tried so far vis-à-vis this problem with results, discussion and description of the main issues encountered.",Automatic Short Answer Marking,"Our aim is to investigate computational linguistics (CL) techniques in marking short free text responses automatically. Successful automatic marking of free text answers would seem to presuppose an advanced level of performance in automated natural language understanding. However, recent advances in CL techniques have opened up the possibility of being able to automate the marking of free text responses typed into a computer without having to create systems that fully understand the answers. This paper describes some of the techniques we have tried so far vis-à-vis this problem with results, discussion and description of the main issues encountered.",Automatic Short Answer Marking,"Our aim is to investigate computational linguistics (CL) techniques in marking short free text responses automatically. Successful automatic marking of free text answers would seem to presuppose an advanced level of performance in automated natural language understanding. However, recent advances in CL techniques have opened up the possibility of being able to automate the marking of free text responses typed into a computer without having to create systems that fully understand the answers. This paper describes some of the techniques we have tried so far vis-à-vis this problem with results, discussion and description of the main issues encountered.",,"Automatic Short Answer Marking. Our aim is to investigate computational linguistics (CL) techniques in marking short free text responses automatically. Successful automatic marking of free text answers would seem to presuppose an advanced level of performance in automated natural language understanding. However, recent advances in CL techniques have opened up the possibility of being able to automate the marking of free text responses typed into a computer without having to create systems that fully understand the answers. This paper describes some of the techniques we have tried so far vis-à-vis this problem with results, discussion and description of the main issues encountered.",2005
tanaka-ishii-etal-1998-reactive,https://aclanthology.org/C98-2204,0,,,,,,,"Reactive Content Selection in the Generation of Real-time Soccer Commentary. MIKE is an automatic commentary system that generates a commentary of a simulated soccer game in English, French, or Japanese. One of the major technical challenges involved in live sports commentary is the reactive selection of content to describe complex, rapidly unfolding situation. To address this challenge, MIKE employs importance scores that intuitively capture the amount of information communicated to the audience. We describe how a principle of maximizing the total gain of importance scores during a game can be used to incorporate content selection into the surface generation module, thus accounting for issues such as interruption and abbreviation. Sample commentaries produced by MIKE are presented and used to evaluate different methods for content selection and generation in terms of efficiency of communication.",Reactive Content Selection in the Generation of Real-time Soccer Commentary,"MIKE is an automatic commentary system that generates a commentary of a simulated soccer game in English, French, or Japanese. One of the major technical challenges involved in live sports commentary is the reactive selection of content to describe complex, rapidly unfolding situation. To address this challenge, MIKE employs importance scores that intuitively capture the amount of information communicated to the audience. We describe how a principle of maximizing the total gain of importance scores during a game can be used to incorporate content selection into the surface generation module, thus accounting for issues such as interruption and abbreviation. Sample commentaries produced by MIKE are presented and used to evaluate different methods for content selection and generation in terms of efficiency of communication.",Reactive Content Selection in the Generation of Real-time Soccer Commentary,"MIKE is an automatic commentary system that generates a commentary of a simulated soccer game in English, French, or Japanese. One of the major technical challenges involved in live sports commentary is the reactive selection of content to describe complex, rapidly unfolding situation. To address this challenge, MIKE employs importance scores that intuitively capture the amount of information communicated to the audience. We describe how a principle of maximizing the total gain of importance scores during a game can be used to incorporate content selection into the surface generation module, thus accounting for issues such as interruption and abbreviation. Sample commentaries produced by MIKE are presented and used to evaluate different methods for content selection and generation in terms of efficiency of communication.",,"Reactive Content Selection in the Generation of Real-time Soccer Commentary. MIKE is an automatic commentary system that generates a commentary of a simulated soccer game in English, French, or Japanese. One of the major technical challenges involved in live sports commentary is the reactive selection of content to describe complex, rapidly unfolding situation. To address this challenge, MIKE employs importance scores that intuitively capture the amount of information communicated to the audience. We describe how a principle of maximizing the total gain of importance scores during a game can be used to incorporate content selection into the surface generation module, thus accounting for issues such as interruption and abbreviation. 
Sample commentaries produced by MIKE are presented and used to evaluate different methods for content selection and generation in terms of efficiency of communication.",1998
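The MIKE entry above frames commentary as maximizing the total gain of importance scores under time pressure. The sketch below shows one simple greedy reading of that principle (importance gain per second, with an invented event list and time budget); the paper's actual content-selection algorithm may well differ.

# Minimal sketch of reactive content selection by importance scores (in the
# spirit of the description above, not MIKE's implementation): greedily pick
# the event with the highest importance gain per second of commentary time
# until the available time budget is exhausted.
events = [  # (description, importance, seconds needed); values invented
    ("pass from 7 to 9", 2.0, 1.5),
    ("shot on goal", 9.0, 2.0),
    ("throw-in on the left", 1.0, 1.0),
    ("goalkeeper save", 7.0, 2.0),
]

def select(events, time_budget):
    chosen = []
    ranked = sorted(events, key=lambda e: e[1] / e[2], reverse=True)
    for desc, importance, cost in ranked:
        if cost <= time_budget:
            chosen.append(desc)
            time_budget -= cost
    return chosen

print(select(events, time_budget=4.0))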
zhang-etal-2012-lazy,https://aclanthology.org/C12-1189,0,,,,,,,"A Lazy Learning Model for Entity Linking using Query-Specific Information. Entity linking disambiguates a mention of an entity in text to a Knowledge Base (KB). Most previous studies disambiguate a mention of a name (e.g.""AZ"") based on the distribution knowledge learned from labeled instances, which are related to other names (e.g.""Hoffman"",""Chad Johnson"", etc.). The gaps among the distributions of the instances related to different names hinder the further improvement of the previous approaches. This paper proposes a lazy learning model, which allows us to improve the learning process with the distribution information specific to the queried name (e.g.""AZ""). To obtain this distribution information, we automatically label some relevant instances for the queried name leveraging its unambiguous synonyms. Besides, another advantage is that our approach still can benefit from the labeled data related to other names (e.g.""Hoffman"",""Chad Johnson"", etc.), because our model is trained on both the labeled data sets of queried and other names by mining their shared predictive structure.",A Lazy Learning Model for Entity Linking using Query-Specific Information,"Entity linking disambiguates a mention of an entity in text to a Knowledge Base (KB). Most previous studies disambiguate a mention of a name (e.g.""AZ"") based on the distribution knowledge learned from labeled instances, which are related to other names (e.g.""Hoffman"",""Chad Johnson"", etc.). The gaps among the distributions of the instances related to different names hinder the further improvement of the previous approaches. This paper proposes a lazy learning model, which allows us to improve the learning process with the distribution information specific to the queried name (e.g.""AZ""). To obtain this distribution information, we automatically label some relevant instances for the queried name leveraging its unambiguous synonyms. Besides, another advantage is that our approach still can benefit from the labeled data related to other names (e.g.""Hoffman"",""Chad Johnson"", etc.), because our model is trained on both the labeled data sets of queried and other names by mining their shared predictive structure.",A Lazy Learning Model for Entity Linking using Query-Specific Information,"Entity linking disambiguates a mention of an entity in text to a Knowledge Base (KB). Most previous studies disambiguate a mention of a name (e.g.""AZ"") based on the distribution knowledge learned from labeled instances, which are related to other names (e.g.""Hoffman"",""Chad Johnson"", etc.). The gaps among the distributions of the instances related to different names hinder the further improvement of the previous approaches. This paper proposes a lazy learning model, which allows us to improve the learning process with the distribution information specific to the queried name (e.g.""AZ""). To obtain this distribution information, we automatically label some relevant instances for the queried name leveraging its unambiguous synonyms. Besides, another advantage is that our approach still can benefit from the labeled data related to other names (e.g.""Hoffman"",""Chad Johnson"", etc.), because our model is trained on both the labeled data sets of queried and other names by mining their shared predictive structure.",This work is partially supported by Microsoft Research Asia eHealth Theme Program.,"A Lazy Learning Model for Entity Linking using Query-Specific Information. 
Entity linking disambiguates a mention of an entity in text to a Knowledge Base (KB). Most previous studies disambiguate a mention of a name (e.g.""AZ"") based on the distribution knowledge learned from labeled instances, which are related to other names (e.g.""Hoffman"",""Chad Johnson"", etc.). The gaps among the distributions of the instances related to different names hinder the further improvement of the previous approaches. This paper proposes a lazy learning model, which allows us to improve the learning process with the distribution information specific to the queried name (e.g.""AZ""). To obtain this distribution information, we automatically label some relevant instances for the queried name leveraging its unambiguous synonyms. Besides, another advantage is that our approach still can benefit from the labeled data related to other names (e.g.""Hoffman"",""Chad Johnson"", etc.), because our model is trained on both the labeled data sets of queried and other names by mining their shared predictive structure.",2012
rajendran-etal-2018-sentiment,https://aclanthology.org/L18-1099,0,,,,,,,"Sentiment-Stance-Specificity (SSS) Dataset: Identifying Support-based Entailment among Opinions.. Computational argumentation aims to model arguments as a set of premises that either support each other or collectively support a conclusion. We prepare three datasets of text-hypothesis pairs with support-based entailment based on opinions present in hotel reviews using a distant supervision approach. Support-based entailment is defined as the existence of a specific opinion (premise) that supports as well as entails a more general opinion and where these together support a generalised conclusion. A set of rules is proposed based on three different components-sentiment, stance and specificity to automatically predict support-based entailment. Two annotators manually annotated the relations among text-hypothesis pairs with an inter-rater agreement of 0.80. We compare the performance of the rules which gave an overall accuracy of 0.83. Further, we compare the performance of textual entailment under various conditions. The overall accuracy was 89.54%, 90.00% and 96.19% for our three datasets.",Sentiment-Stance-Specificity ({SSS}) Dataset: Identifying Support-based Entailment among Opinions.,"Computational argumentation aims to model arguments as a set of premises that either support each other or collectively support a conclusion. We prepare three datasets of text-hypothesis pairs with support-based entailment based on opinions present in hotel reviews using a distant supervision approach. Support-based entailment is defined as the existence of a specific opinion (premise) that supports as well as entails a more general opinion and where these together support a generalised conclusion. A set of rules is proposed based on three different components-sentiment, stance and specificity to automatically predict support-based entailment. Two annotators manually annotated the relations among text-hypothesis pairs with an inter-rater agreement of 0.80. We compare the performance of the rules which gave an overall accuracy of 0.83. Further, we compare the performance of textual entailment under various conditions. The overall accuracy was 89.54%, 90.00% and 96.19% for our three datasets.",Sentiment-Stance-Specificity (SSS) Dataset: Identifying Support-based Entailment among Opinions.,"Computational argumentation aims to model arguments as a set of premises that either support each other or collectively support a conclusion. We prepare three datasets of text-hypothesis pairs with support-based entailment based on opinions present in hotel reviews using a distant supervision approach. Support-based entailment is defined as the existence of a specific opinion (premise) that supports as well as entails a more general opinion and where these together support a generalised conclusion. A set of rules is proposed based on three different components-sentiment, stance and specificity to automatically predict support-based entailment. Two annotators manually annotated the relations among text-hypothesis pairs with an inter-rater agreement of 0.80. We compare the performance of the rules which gave an overall accuracy of 0.83. Further, we compare the performance of textual entailment under various conditions. The overall accuracy was 89.54%, 90.00% and 96.19% for our three datasets.",,"Sentiment-Stance-Specificity (SSS) Dataset: Identifying Support-based Entailment among Opinions.. 
Computational argumentation aims to model arguments as a set of premises that either support each other or collectively support a conclusion. We prepare three datasets of text-hypothesis pairs with support-based entailment based on opinions present in hotel reviews using a distant supervision approach. Support-based entailment is defined as the existence of a specific opinion (premise) that supports as well as entails a more general opinion and where these together support a generalised conclusion. A set of rules is proposed based on three different components-sentiment, stance and specificity to automatically predict support-based entailment. Two annotators manually annotated the relations among text-hypothesis pairs with an inter-rater agreement of 0.80. We compare the performance of the rules which gave an overall accuracy of 0.83. Further, we compare the performance of textual entailment under various conditions. The overall accuracy was 89.54%, 90.00% and 96.19% for our three datasets.",2018
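The SSS entry above predicts support-based entailment from three components: sentiment, stance, and specificity. The toy rule below illustrates how such component outputs could be combined; it is a guess at a plausible rule shape, not the paper's released rule set, and the feature values are invented.

# Illustrative rule sketch (not the paper's exact rules): predict
# support-based entailment for a text-hypothesis pair when the two opinions
# agree in sentiment and stance and the text is more specific than the
# hypothesis. The component values would come from upstream classifiers.
def support_based_entailment(text_feats, hyp_feats):
    same_sentiment = text_feats["sentiment"] == hyp_feats["sentiment"]
    same_stance = text_feats["stance"] == hyp_feats["stance"]
    more_specific = text_feats["specificity"] > hyp_feats["specificity"]
    return same_sentiment and same_stance and more_specific

text = {"sentiment": "positive", "stance": "favor", "specificity": 0.8}
hyp = {"sentiment": "positive", "stance": "favor", "specificity": 0.3}
print(support_based_entailment(text, hyp))   # True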
gatt-etal-2009-tuna,https://aclanthology.org/W09-0629,0,,,,,,,"The TUNA-REG Challenge 2009: Overview and Evaluation Results. The TUNA-REG'09 Challenge was one of the shared-task evaluation competitions at Generation Challenges 2009. TUNA-REG'09 used data from the TUNA Corpus of paired representations of entities and human-authored referring expressions. The shared task was to create systems that generate referring expressions for entities given representations of sets of entities and their properties. Four teams submitted six systems to TUNA-REG'09. We evaluated the six systems and two sets of human-authored referring expressions using several automatic intrinsic measures, a human-assessed intrinsic evaluation and a human task performance experiment. This report describes the TUNA-REG task and the evaluation methods used, and presents the evaluation results.",The {TUNA}-{REG} {C}hallenge 2009: Overview and Evaluation Results,"The TUNA-REG'09 Challenge was one of the shared-task evaluation competitions at Generation Challenges 2009. TUNA-REG'09 used data from the TUNA Corpus of paired representations of entities and human-authored referring expressions. The shared task was to create systems that generate referring expressions for entities given representations of sets of entities and their properties. Four teams submitted six systems to TUNA-REG'09. We evaluated the six systems and two sets of human-authored referring expressions using several automatic intrinsic measures, a human-assessed intrinsic evaluation and a human task performance experiment. This report describes the TUNA-REG task and the evaluation methods used, and presents the evaluation results.",The TUNA-REG Challenge 2009: Overview and Evaluation Results,"The TUNA-REG'09 Challenge was one of the shared-task evaluation competitions at Generation Challenges 2009. TUNA-REG'09 used data from the TUNA Corpus of paired representations of entities and human-authored referring expressions. The shared task was to create systems that generate referring expressions for entities given representations of sets of entities and their properties. Four teams submitted six systems to TUNA-REG'09. We evaluated the six systems and two sets of human-authored referring expressions using several automatic intrinsic measures, a human-assessed intrinsic evaluation and a human task performance experiment. This report describes the TUNA-REG task and the evaluation methods used, and presents the evaluation results.","We thank our colleagues at the University of Brighton who participated in the identification experiment, and the Masters students at UCL, Sussex and Brighton who participated in the quality assessment experiment. The evaluations were funded by EPSRC (UK) grant EP/G03995X/1.","The TUNA-REG Challenge 2009: Overview and Evaluation Results. The TUNA-REG'09 Challenge was one of the shared-task evaluation competitions at Generation Challenges 2009. TUNA-REG'09 used data from the TUNA Corpus of paired representations of entities and human-authored referring expressions. The shared task was to create systems that generate referring expressions for entities given representations of sets of entities and their properties. Four teams submitted six systems to TUNA-REG'09. We evaluated the six systems and two sets of human-authored referring expressions using several automatic intrinsic measures, a human-assessed intrinsic evaluation and a human task performance experiment. 
This report describes the TUNA-REG task and the evaluation methods used, and presents the evaluation results.",2009
fraisse-etal-2012-context,https://aclanthology.org/C12-3018,0,,,,,,,"An In-Context and Collaborative Software Localisation Model. We propose a demonstration of our in context and collaborative software localisation model. It involves volunteer localisers and end users in the localisation process via an efficient and dynamic workflow: while using an application (in context), users knowing the source language of the application (often but not always English) can modify strings of the user interface presented by the application in their current context. The implementation of that approach to localisation requires the integration of a collaborative platform. That leads to a new tripartite localisation workflow. We have experimented with our approach on Notepad++. A demonstration video is proposed as a supplementary material.",An In-Context and Collaborative Software Localisation Model,"We propose a demonstration of our in context and collaborative software localisation model. It involves volunteer localisers and end users in the localisation process via an efficient and dynamic workflow: while using an application (in context), users knowing the source language of the application (often but not always English) can modify strings of the user interface presented by the application in their current context. The implementation of that approach to localisation requires the integration of a collaborative platform. That leads to a new tripartite localisation workflow. We have experimented with our approach on Notepad++. A demonstration video is proposed as a supplementary material.",An In-Context and Collaborative Software Localisation Model,"We propose a demonstration of our in context and collaborative software localisation model. It involves volunteer localisers and end users in the localisation process via an efficient and dynamic workflow: while using an application (in context), users knowing the source language of the application (often but not always English) can modify strings of the user interface presented by the application in their current context. The implementation of that approach to localisation requires the integration of a collaborative platform. That leads to a new tripartite localisation workflow. We have experimented with our approach on Notepad++. A demonstration video is proposed as a supplementary material.",,"An In-Context and Collaborative Software Localisation Model. We propose a demonstration of our in context and collaborative software localisation model. It involves volunteer localisers and end users in the localisation process via an efficient and dynamic workflow: while using an application (in context), users knowing the source language of the application (often but not always English) can modify strings of the user interface presented by the application in their current context. The implementation of that approach to localisation requires the integration of a collaborative platform. That leads to a new tripartite localisation workflow. We have experimented with our approach on Notepad++. A demonstration video is proposed as a supplementary material.",2012
utsuro-etal-1992-lexical,https://aclanthology.org/C92-2088,0,,,,,,,"Lexical Knowledge Acquisition from Bilingual Corpora. For practical research in natural language processing, it is indispensable to develop a large scale semantic dictionary for computers. It is especially important to improve the techniques for compiling semantic dictionaries from natural language texts such as those in existing human dictionaries or in large corpora. However, there are at least two difficulties in analyzing existing texts: the problem of syntactic ambiguities and the problem of polysemy. Our approach to solve these difficulties is to make use of translation examples in two distinct languages that have quite different syntactic structures and word meanings. The reason we took this approach is that in many cases both syntactic and semantic ambiguities are resolved by comparing analyzed results from both languages. In this paper, we propose a method for resolving the syntactic ambiguities of translation examples of bilingual corpora and a method for acquiring lexical knowledge, such as case frames of verbs and attribute sets of nouns.",Lexical Knowledge Acquisition from Bilingual Corpora,"For practical research in natural language processing, it is indispensable to develop a large scale semantic dictionary for computers. It is especially important to improve the techniques for compiling semantic dictionaries from natural language texts such as those in existing human dictionaries or in large corpora. However, there are at least two difficulties in analyzing existing texts: the problem of syntactic ambiguities and the problem of polysemy. Our approach to solve these difficulties is to make use of translation examples in two distinct languages that have quite different syntactic structures and word meanings. The reason we took this approach is that in many cases both syntactic and semantic ambiguities are resolved by comparing analyzed results from both languages. In this paper, we propose a method for resolving the syntactic ambiguities of translation examples of bilingual corpora and a method for acquiring lexical knowledge, such as case frames of verbs and attribute sets of nouns.",Lexical Knowledge Acquisition from Bilingual Corpora,"For practical research in natural language processing, it is indispensable to develop a large scale semantic dictionary for computers. It is especially important to improve the techniques for compiling semantic dictionaries from natural language texts such as those in existing human dictionaries or in large corpora. However, there are at least two difficulties in analyzing existing texts: the problem of syntactic ambiguities and the problem of polysemy. Our approach to solve these difficulties is to make use of translation examples in two distinct languages that have quite different syntactic structures and word meanings. The reason we took this approach is that in many cases both syntactic and semantic ambiguities are resolved by comparing analyzed results from both languages. In this paper, we propose a method for resolving the syntactic ambiguities of translation examples of bilingual corpora and a method for acquiring lexical knowledge, such as case frames of verbs and attribute sets of nouns.",,"Lexical Knowledge Acquisition from Bilingual Corpora. For practical research in natural language processing, it is indispensable to develop a large scale semantic dictionary for computers. 
It is especially important to improve the techniques for compiling semantic dictionaries from natural language texts such as those in existing human dictionaries or in large corpora. However, there are at least two difficulties in analyzing existing texts: the problem of syntactic ambiguities and the problem of polysemy. Our approach to solve these difficulties is to make use of translation examples in two distinct languages that have quite different syntactic structures and word meanings. The reason we took this approach is that in many cases both syntactic and semantic ambiguities are resolved by comparing analyzed results from both languages. In this paper, we propose a method for resolving the syntactic ambiguities of translation examples of bilingual corpora and a method for acquiring lexical knowledge, such as case frames of verbs and attribute sets of nouns.",1992
han-etal-2021-fine,https://aclanthology.org/2021.naacl-main.122,0,,,,,,,"Fine-grained Post-training for Improving Retrieval-based Dialogue Systems. Retrieval-based dialogue systems display an outstanding performance when pre-trained language models are used, which includes bidirectional encoder representations from transformers (BERT). During the multi-turn response selection, BERT focuses on training the relationship between the context with multiple utterances and the response. However, this method of training is insufficient when considering the relations between each utterance in the context. This leads to a problem of not completely understanding the context flow that is required to select a response. To address this issue, we propose a new fine-grained post-training method that reflects the characteristics of the multi-turn dialogue. Specifically, the model learns the utterance level interactions by training every short context-response pair in a dialogue session. Furthermore, by using a new training objective, the utterance relevance classification, the model understands the semantic relevance and coherence between the dialogue utterances. Experimental results show that our model achieves new state-of-the-art with significant margins on three benchmark datasets. This suggests that the fine-grained post-training method is highly effective for the response selection task. 1",Fine-grained Post-training for Improving Retrieval-based Dialogue Systems,"Retrieval-based dialogue systems display an outstanding performance when pre-trained language models are used, which includes bidirectional encoder representations from transformers (BERT). During the multi-turn response selection, BERT focuses on training the relationship between the context with multiple utterances and the response. However, this method of training is insufficient when considering the relations between each utterance in the context. This leads to a problem of not completely understanding the context flow that is required to select a response. To address this issue, we propose a new fine-grained post-training method that reflects the characteristics of the multi-turn dialogue. Specifically, the model learns the utterance level interactions by training every short context-response pair in a dialogue session. Furthermore, by using a new training objective, the utterance relevance classification, the model understands the semantic relevance and coherence between the dialogue utterances. Experimental results show that our model achieves new state-of-the-art with significant margins on three benchmark datasets. This suggests that the fine-grained post-training method is highly effective for the response selection task. 1",Fine-grained Post-training for Improving Retrieval-based Dialogue Systems,"Retrieval-based dialogue systems display an outstanding performance when pre-trained language models are used, which includes bidirectional encoder representations from transformers (BERT). During the multi-turn response selection, BERT focuses on training the relationship between the context with multiple utterances and the response. However, this method of training is insufficient when considering the relations between each utterance in the context. This leads to a problem of not completely understanding the context flow that is required to select a response. To address this issue, we propose a new fine-grained post-training method that reflects the characteristics of the multi-turn dialogue. 
Specifically, the model learns the utterance level interactions by training every short context-response pair in a dialogue session. Furthermore, by using a new training objective, the utterance relevance classification, the model understands the semantic relevance and coherence between the dialogue utterances. Experimental results show that our model achieves new state-of-the-art with significant margins on three benchmark datasets. This suggests that the fine-grained post-training method is highly effective for the response selection task. 1",,"Fine-grained Post-training for Improving Retrieval-based Dialogue Systems. Retrieval-based dialogue systems display an outstanding performance when pre-trained language models are used, which includes bidirectional encoder representations from transformers (BERT). During the multi-turn response selection, BERT focuses on training the relationship between the context with multiple utterances and the response. However, this method of training is insufficient when considering the relations between each utterance in the context. This leads to a problem of not completely understanding the context flow that is required to select a response. To address this issue, we propose a new fine-grained post-training method that reflects the characteristics of the multi-turn dialogue. Specifically, the model learns the utterance level interactions by training every short context-response pair in a dialogue session. Furthermore, by using a new training objective, the utterance relevance classification, the model understands the semantic relevance and coherence between the dialogue utterances. Experimental results show that our model achieves new state-of-the-art with significant margins on three benchmark datasets. This suggests that the fine-grained post-training method is highly effective for the response selection task. 1",2021
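The fine-grained post-training entry above trains on every short context-response pair within a dialogue session. The sketch below shows only the data-construction step under an assumed sliding window of three utterances (the window size and the sample session are invented); the actual method then post-trains BERT on such pairs, which is not shown here.

# Sketch of building short context-response pairs from a dialogue session,
# as suggested by the fine-grained post-training idea above. The window
# size and the example session are assumptions for illustration only.
def short_context_response_pairs(session, window=3):
    """Yield (context_utterances, response) pairs over a sliding window."""
    pairs = []
    for i in range(1, len(session)):
        context = session[max(0, i - window):i]
        pairs.append((context, session[i]))
    return pairs

session = [
    "Hi, is the apartment still available?",
    "Yes, it is. Would you like a viewing?",
    "That would be great. Is Saturday ok?",
    "Saturday afternoon works for me.",
]
for ctx, resp in short_context_response_pairs(session):
    print(ctx, "->", resp)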
alfalahi-etal-2015-expanding,https://aclanthology.org/W15-2611,0,,,,,,,"Expanding a dictionary of marker words for uncertainty and negation using distributional semantics. Approaches to determining the factuality of diagnoses and findings in clinical text tend to rely on dictionaries of marker words for uncertainty and negation. Here, a method for semi-automatically expanding a dictionary of marker words using distributional semantics is presented and evaluated. It is shown that ranking candidates for inclusion according to their proximity to cluster centroids of semantically similar seed words is more successful than ranking them according to proximity to each individual seed word.",Expanding a dictionary of marker words for uncertainty and negation using distributional semantics,"Approaches to determining the factuality of diagnoses and findings in clinical text tend to rely on dictionaries of marker words for uncertainty and negation. Here, a method for semi-automatically expanding a dictionary of marker words using distributional semantics is presented and evaluated. It is shown that ranking candidates for inclusion according to their proximity to cluster centroids of semantically similar seed words is more successful than ranking them according to proximity to each individual seed word.",Expanding a dictionary of marker words for uncertainty and negation using distributional semantics,"Approaches to determining the factuality of diagnoses and findings in clinical text tend to rely on dictionaries of marker words for uncertainty and negation. Here, a method for semi-automatically expanding a dictionary of marker words using distributional semantics is presented and evaluated. It is shown that ranking candidates for inclusion according to their proximity to cluster centroids of semantically similar seed words is more successful than ranking them according to proximity to each individual seed word.","This work was partly funded through the project StaViCTA by the framework grant ""the Digitized Society Past, Present, and Future"" with No. 2012-5659 from the Swedish Research Council (Vetenskapsrådet) and partly by the Swedish Foundation for Strategic Research through the project High-Performance Data Mining for Drug Effect at Stockholm University, Sweden. The authors would also like to direct thanks to the reviewers for valuable comments.","Expanding a dictionary of marker words for uncertainty and negation using distributional semantics. Approaches to determining the factuality of diagnoses and findings in clinical text tend to rely on dictionaries of marker words for uncertainty and negation. Here, a method for semi-automatically expanding a dictionary of marker words using distributional semantics is presented and evaluated. It is shown that ranking candidates for inclusion according to their proximity to cluster centroids of semantically similar seed words is more successful than ranking them according to proximity to each individual seed word.",2015
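The dictionary-expansion entry above ranks candidate marker words by proximity to centroids of semantically similar seed words. The sketch below shows that centroid-plus-cosine ranking step with random placeholder vectors standing in for real distributional-semantic embeddings; the vocabulary and seed words are invented for the example.

# Minimal sketch of ranking expansion candidates by cosine similarity to the
# centroid of seed-word embeddings (the general idea described above; the
# vectors here are random stand-ins for trained distributional vectors).
import numpy as np

rng = np.random.default_rng(1)
vocab = ["possibly", "maybe", "table", "unlikely", "window"]
embeddings = {w: rng.normal(size=50) for w in vocab}   # placeholder vectors

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_candidates(seeds, candidates, emb):
    centroid = np.mean([emb[s] for s in seeds], axis=0)
    scored = [(c, cosine(emb[c], centroid)) for c in candidates]
    return sorted(scored, key=lambda x: x[1], reverse=True)

seeds = ["possibly", "maybe"]
candidates = ["table", "unlikely", "window"]
print(rank_candidates(seeds, candidates, embeddings))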
martens-passarotti-2014-thomas,http://www.lrec-conf.org/proceedings/lrec2014/pdf/70_Paper.pdf,0,,,,,,,"Thomas Aquinas in the T\""uNDRA: Integrating the Index Thomisticus Treebank into CLARIN-D. This paper describes the integration of the Index Thomisticus Treebank (IT-TB) into the web-based treebank search and visualization application TüNDRA (Tübingen aNnotated Data Retrieval & Analysis). TüNDRA was originally designed to provide access via the Internet to constituency treebanks and to tools for searching and visualizing them, as well as tabulating statistics about their contents. TüNDRA has now been extended to also provide full support for dependency treebanks with non-projective dependencies, in order to integrate the IT-TB and future treebanks with similar properties. These treebanks are queried using an adapted form of the TIGERSearch query language, which can search both hierarchical and sequential information in treebanks in a single query. As a web application, making the IT-TB accessible via TüNDRA makes the treebank and the tools to use of it available to a large community without having to distribute software and show users how to install it.","{T}homas {A}quinas in the {T}{\""u}{NDRA}: Integrating the Index {T}homisticus Treebank into {CLARIN}-{D}","This paper describes the integration of the Index Thomisticus Treebank (IT-TB) into the web-based treebank search and visualization application TüNDRA (Tübingen aNnotated Data Retrieval & Analysis). TüNDRA was originally designed to provide access via the Internet to constituency treebanks and to tools for searching and visualizing them, as well as tabulating statistics about their contents. TüNDRA has now been extended to also provide full support for dependency treebanks with non-projective dependencies, in order to integrate the IT-TB and future treebanks with similar properties. These treebanks are queried using an adapted form of the TIGERSearch query language, which can search both hierarchical and sequential information in treebanks in a single query. As a web application, making the IT-TB accessible via TüNDRA makes the treebank and the tools to use of it available to a large community without having to distribute software and show users how to install it.","Thomas Aquinas in the T\""uNDRA: Integrating the Index Thomisticus Treebank into CLARIN-D","This paper describes the integration of the Index Thomisticus Treebank (IT-TB) into the web-based treebank search and visualization application TüNDRA (Tübingen aNnotated Data Retrieval & Analysis). TüNDRA was originally designed to provide access via the Internet to constituency treebanks and to tools for searching and visualizing them, as well as tabulating statistics about their contents. TüNDRA has now been extended to also provide full support for dependency treebanks with non-projective dependencies, in order to integrate the IT-TB and future treebanks with similar properties. These treebanks are queried using an adapted form of the TIGERSearch query language, which can search both hierarchical and sequential information in treebanks in a single query. As a web application, making the IT-TB accessible via TüNDRA makes the treebank and the tools to use of it available to a large community without having to distribute software and show users how to install it.",,"Thomas Aquinas in the T\""uNDRA: Integrating the Index Thomisticus Treebank into CLARIN-D. 
This paper describes the integration of the Index Thomisticus Treebank (IT-TB) into the web-based treebank search and visualization application TüNDRA (Tübingen aNnotated Data Retrieval & Analysis). TüNDRA was originally designed to provide access via the Internet to constituency treebanks and to tools for searching and visualizing them, as well as tabulating statistics about their contents. TüNDRA has now been extended to also provide full support for dependency treebanks with non-projective dependencies, in order to integrate the IT-TB and future treebanks with similar properties. These treebanks are queried using an adapted form of the TIGERSearch query language, which can search both hierarchical and sequential information in treebanks in a single query. As a web application, making the IT-TB accessible via TüNDRA makes the treebank and the tools to use of it available to a large community without having to distribute software and show users how to install it.",2014
tait-etal-1999-mable,https://aclanthology.org/1999.tc-1.12,0,,,,business_use,general_purpose_productivity,,"MABLe: A Multi-lingual Authoring Tool for Business Letters. MABLe allows Spanish or Greek business letters writers with limited English to construct good quality, stylistically well formed British English letters. MABLe is a PC program, based on a domain dependant text grammar of fixed and variable phrases that together enforce linguistic cohesion. Interactions with the system are in the user's own language, and the constructed letter may be viewed in that language for sense checking. Our experience to date has shown that the approach it uses to machine aided translation, gives a sufficiently effective and flexible regime for the construction of genuine finished quality documents in the limited domain of business correspondence.
In this paper, we will first review the application domain in which MABLe is intended to operate. Then we will review the approach to machine assisted translation it embodies. Next, we describe the system, its architecture, and the implementation of the linguistic and programming elements. An example interaction sequence is then presented. Finally the successes and shortcomings of the work will be identified and some directions for possible future work will be identified.",{MABL}e: A Multi-lingual Authoring Tool for Business Letters,"MABLe allows Spanish or Greek business letters writers with limited English to construct good quality, stylistically well formed British English letters. MABLe is a PC program, based on a domain dependant text grammar of fixed and variable phrases that together enforce linguistic cohesion. Interactions with the system are in the user's own language, and the constructed letter may be viewed in that language for sense checking. Our experience to date has shown that the approach it uses to machine aided translation, gives a sufficiently effective and flexible regime for the construction of genuine finished quality documents in the limited domain of business correspondence.
In this paper, we will first review the application domain in which MABLe is intended to operate. Then we will review the approach to machine assisted translation it embodies. Next, we describe the system, its architecture, and the implementation of the linguistic and programming elements. An example interaction sequence is then presented. Finally the successes and shortcomings of the work will be identified and some directions for possible future work will be identified.",MABLe: A Multi-lingual Authoring Tool for Business Letters,"MABLe allows Spanish or Greek business letters writers with limited English to construct good quality, stylistically well formed British English letters. MABLe is a PC program, based on a domain dependant text grammar of fixed and variable phrases that together enforce linguistic cohesion. Interactions with the system are in the user's own language, and the constructed letter may be viewed in that language for sense checking. Our experience to date has shown that the approach it uses to machine aided translation, gives a sufficiently effective and flexible regime for the construction of genuine finished quality documents in the limited domain of business correspondence.
In this paper, we will first review the application domain in which MABLe is intended to operate. Then we will review the approach to machine assisted translation it embodies. Next, we describe the system, its architecture, and the implementation of the linguistic and programming elements. An example interaction sequence is then presented. Finally the successes and shortcomings of the work will be identified and some directions for possible future work will be identified.","The authors would like to thank our numerous colleagues who worked on the MABLe Project, especially Prof. Dr. Peter Hell wig and Heinz-Detlev Koch of the University of Heidelberg, Dr. Chris Smith of MARI, Periklis Tsahageas of SENA, and our other partners STI, PEA and CES.The MABLe project was supported in part by the EU Framework IV Language Engineering programme under contract LEI203.Microsoft® Word and Access are registered trademarks of Microsoft Corporation.","MABLe: A Multi-lingual Authoring Tool for Business Letters. MABLe allows Spanish or Greek business letters writers with limited English to construct good quality, stylistically well formed British English letters. MABLe is a PC program, based on a domain dependant text grammar of fixed and variable phrases that together enforce linguistic cohesion. Interactions with the system are in the user's own language, and the constructed letter may be viewed in that language for sense checking. Our experience to date has shown that the approach it uses to machine aided translation, gives a sufficiently effective and flexible regime for the construction of genuine finished quality documents in the limited domain of business correspondence.
In this paper, we will first review the application domain in which MABLe is intended to operate. Then we will review the approach to machine assisted translation it embodies. Next, we describe the system, its architecture, and the implementation of the linguistic and programming elements. An example interaction sequence is then presented. Finally the successes and shortcomings of the work will be identified and some directions for possible future work will be identified.",1999
fu-etal-2013-exploiting,https://aclanthology.org/D13-1122,0,,,,,,,"Exploiting Multiple Sources for Open-Domain Hypernym Discovery. Hypernym discovery aims to extract such noun pairs that one noun is a hypernym of the other. Most previous methods are based on lexical patterns but perform badly on open-domain data. Other work extracts hypernym relations from encyclopedias but has limited coverage. This paper proposes a simple yet effective distant supervision framework for Chinese open-domain hypernym discovery. Given an entity name, we try to discover its hypernyms by leveraging knowledge from multiple sources, i.e., search engine results, encyclopedias, and morphology of the entity name. First, we extract candidate hypernyms from the above sources. Then, we apply a statistical ranking model to select correct hypernyms. A set of novel features is proposed for the ranking model. We also present a heuristic strategy to build a large-scale noisy training data for the model without human annotation. Experimental results demonstrate that our approach outperforms the state-of-the-art methods on a manually labeled test dataset.",Exploiting Multiple Sources for Open-Domain Hypernym Discovery,"Hypernym discovery aims to extract such noun pairs that one noun is a hypernym of the other. Most previous methods are based on lexical patterns but perform badly on open-domain data. Other work extracts hypernym relations from encyclopedias but has limited coverage. This paper proposes a simple yet effective distant supervision framework for Chinese open-domain hypernym discovery. Given an entity name, we try to discover its hypernyms by leveraging knowledge from multiple sources, i.e., search engine results, encyclopedias, and morphology of the entity name. First, we extract candidate hypernyms from the above sources. Then, we apply a statistical ranking model to select correct hypernyms. A set of novel features is proposed for the ranking model. We also present a heuristic strategy to build a large-scale noisy training data for the model without human annotation. Experimental results demonstrate that our approach outperforms the state-of-the-art methods on a manually labeled test dataset.",Exploiting Multiple Sources for Open-Domain Hypernym Discovery,"Hypernym discovery aims to extract such noun pairs that one noun is a hypernym of the other. Most previous methods are based on lexical patterns but perform badly on open-domain data. Other work extracts hypernym relations from encyclopedias but has limited coverage. This paper proposes a simple yet effective distant supervision framework for Chinese open-domain hypernym discovery. Given an entity name, we try to discover its hypernyms by leveraging knowledge from multiple sources, i.e., search engine results, encyclopedias, and morphology of the entity name. First, we extract candidate hypernyms from the above sources. Then, we apply a statistical ranking model to select correct hypernyms. A set of novel features is proposed for the ranking model. We also present a heuristic strategy to build a large-scale noisy training data for the model without human annotation. Experimental results demonstrate that our approach outperforms the state-of-the-art methods on a manually labeled test dataset.","This work was supported by National Natural Science Foundation of China (NSFC) via grant 61133012, 61073126 and the National 863 Leading Technology Research Project via grant 2012AA011102. 
Special thanks to Zhenghua Li, Wanxiang Che, Wei Song, Yanyan Zhao, Yuhang Guo and the anonymous reviewers for insightful comments and suggestions. Thanks are also due to our annotators Ni Han and Zhenghua Li.","Exploiting Multiple Sources for Open-Domain Hypernym Discovery. Hypernym discovery aims to extract such noun pairs that one noun is a hypernym of the other. Most previous methods are based on lexical patterns but perform badly on open-domain data. Other work extracts hypernym relations from encyclopedias but has limited coverage. This paper proposes a simple yet effective distant supervision framework for Chinese open-domain hypernym discovery. Given an entity name, we try to discover its hypernyms by leveraging knowledge from multiple sources, i.e., search engine results, encyclopedias, and morphology of the entity name. First, we extract candidate hypernyms from the above sources. Then, we apply a statistical ranking model to select correct hypernyms. A set of novel features is proposed for the ranking model. We also present a heuristic strategy to build a large-scale noisy training data for the model without human annotation. Experimental results demonstrate that our approach outperforms the state-of-the-art methods on a manually labeled test dataset.",2013
tomlinson-etal-2014-mygoal,http://www.lrec-conf.org/proceedings/lrec2014/pdf/1120_Paper.pdf,1,,,,health,,,"\#mygoal: Finding Motivations on Twitter. Our everyday language reflects our psychological and cognitive state and effects the states of other individuals. In this contribution we look at the intersection between motivational state and language. We create a set of hashtags, which are annotated for the degree to which they are used by individuals to markup language that is indicative of a collection of factors that interact with an individual's motivational state. We look for tags that reflect a goal mention, reward, or a perception of control. Finally, we present results for a language-model based classifier which is able to predict the presence of one of these factors in a tweet with between 69% and 80% accuracy on a balanced testing set. Our approach suggests that hashtags can be used to understand, not just the language of topics, but the deeper psychological and social meaning of a tweet.",{\#}mygoal: Finding Motivations on {T}witter,"Our everyday language reflects our psychological and cognitive state and effects the states of other individuals. In this contribution we look at the intersection between motivational state and language. We create a set of hashtags, which are annotated for the degree to which they are used by individuals to markup language that is indicative of a collection of factors that interact with an individual's motivational state. We look for tags that reflect a goal mention, reward, or a perception of control. Finally, we present results for a language-model based classifier which is able to predict the presence of one of these factors in a tweet with between 69% and 80% accuracy on a balanced testing set. Our approach suggests that hashtags can be used to understand, not just the language of topics, but the deeper psychological and social meaning of a tweet.",\#mygoal: Finding Motivations on Twitter,"Our everyday language reflects our psychological and cognitive state and effects the states of other individuals. In this contribution we look at the intersection between motivational state and language. We create a set of hashtags, which are annotated for the degree to which they are used by individuals to markup language that is indicative of a collection of factors that interact with an individual's motivational state. We look for tags that reflect a goal mention, reward, or a perception of control. Finally, we present results for a language-model based classifier which is able to predict the presence of one of these factors in a tweet with between 69% and 80% accuracy on a balanced testing set. Our approach suggests that hashtags can be used to understand, not just the language of topics, but the deeper psychological and social meaning of a tweet.","This research was funded by the Intelligence Advanced Research Projects Activity (IARPA) through the Department of Defense US Army Research Laboratory (DoD / ARL). The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoD/ARL, or the U.S. Government.","\#mygoal: Finding Motivations on Twitter. Our everyday language reflects our psychological and cognitive state and effects the states of other individuals. 
In this contribution we look at the intersection between motivational state and language. We create a set of hashtags, which are annotated for the degree to which they are used by individuals to markup language that is indicative of a collection of factors that interact with an individual's motivational state. We look for tags that reflect a goal mention, reward, or a perception of control. Finally, we present results for a language-model based classifier which is able to predict the presence of one of these factors in a tweet with between 69% and 80% accuracy on a balanced testing set. Our approach suggests that hashtags can be used to understand, not just the language of topics, but the deeper psychological and social meaning of a tweet.",2014
taslimipoor-etal-2020-mtlb,https://aclanthology.org/2020.mwe-1.19,0,,,,,,,"MTLB-STRUCT @Parseme 2020: Capturing Unseen Multiword Expressions Using Multi-task Learning and Pre-trained Masked Language Models. This paper describes a semi-supervised system that jointly learns verbal multiword expressions (VMWEs) and dependency parse trees as an auxiliary task. The model benefits from pre-trained multilingual BERT. BERT hidden layers are shared among the two tasks and we introduce an additional linear layer to retrieve VMWE tags. The dependency parse tree prediction is modelled by a linear layer and a bilinear one plus a tree CRF on top of BERT. The system has participated in the open track of the PARSEME shared task 2020 and ranked first in terms of F1-score in identifying unseen VMWEs as well as VMWEs in general, averaged across all 14 languages.",{MTLB}-{STRUCT} @Parseme 2020: Capturing Unseen Multiword Expressions Using Multi-task Learning and Pre-trained Masked Language Models,"This paper describes a semi-supervised system that jointly learns verbal multiword expressions (VMWEs) and dependency parse trees as an auxiliary task. The model benefits from pre-trained multilingual BERT. BERT hidden layers are shared among the two tasks and we introduce an additional linear layer to retrieve VMWE tags. The dependency parse tree prediction is modelled by a linear layer and a bilinear one plus a tree CRF on top of BERT. The system has participated in the open track of the PARSEME shared task 2020 and ranked first in terms of F1-score in identifying unseen VMWEs as well as VMWEs in general, averaged across all 14 languages.",MTLB-STRUCT @Parseme 2020: Capturing Unseen Multiword Expressions Using Multi-task Learning and Pre-trained Masked Language Models,"This paper describes a semi-supervised system that jointly learns verbal multiword expressions (VMWEs) and dependency parse trees as an auxiliary task. The model benefits from pre-trained multilingual BERT. BERT hidden layers are shared among the two tasks and we introduce an additional linear layer to retrieve VMWE tags. The dependency parse tree prediction is modelled by a linear layer and a bilinear one plus a tree CRF on top of BERT. The system has participated in the open track of the PARSEME shared task 2020 and ranked first in terms of F1-score in identifying unseen VMWEs as well as VMWEs in general, averaged across all 14 languages.","This paper reports on research supported by Cambridge Assessment, University of Cambridge. We are grateful to the anonymous reviewers for their valuable feedback. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan V GPU used in this research.","MTLB-STRUCT @Parseme 2020: Capturing Unseen Multiword Expressions Using Multi-task Learning and Pre-trained Masked Language Models. This paper describes a semi-supervised system that jointly learns verbal multiword expressions (VMWEs) and dependency parse trees as an auxiliary task. The model benefits from pre-trained multilingual BERT. BERT hidden layers are shared among the two tasks and we introduce an additional linear layer to retrieve VMWE tags. The dependency parse tree prediction is modelled by a linear layer and a bilinear one plus a tree CRF on top of BERT. The system has participated in the open track of the PARSEME shared task 2020 and ranked first in terms of F1-score in identifying unseen VMWEs as well as VMWEs in general, averaged across all 14 languages.",2020
stojanovski-fraser-2018-coreference,https://aclanthology.org/W18-6306,0,,,,,,,"Coreference and Coherence in Neural Machine Translation: A Study Using Oracle Experiments. Cross-sentence context can provide valuable information in Machine Translation and is critical for translation of anaphoric pronouns and for providing consistent translations. In this paper, we devise simple oracle experiments targeting coreference and coherence. Oracles are an easy way to evaluate the effect of different discourse-level phenomena in NMT using BLEU and eliminate the necessity to manually define challenge sets for this purpose. We propose two context-aware NMT models and compare them against models working on a concatenation of consecutive sentences. Concatenation models perform better, but are computationally expensive. We show that NMT models taking advantage of context oracle signals can achieve considerable gains in BLEU, of up to 7.02 BLEU for coreference and 1.89 BLEU for coherence on subtitles translation. Access to strong signals allows us to make clear comparisons between context-aware models.",Coreference and Coherence in Neural Machine Translation: A Study Using Oracle Experiments,"Cross-sentence context can provide valuable information in Machine Translation and is critical for translation of anaphoric pronouns and for providing consistent translations. In this paper, we devise simple oracle experiments targeting coreference and coherence. Oracles are an easy way to evaluate the effect of different discourse-level phenomena in NMT using BLEU and eliminate the necessity to manually define challenge sets for this purpose. We propose two context-aware NMT models and compare them against models working on a concatenation of consecutive sentences. Concatenation models perform better, but are computationally expensive. We show that NMT models taking advantage of context oracle signals can achieve considerable gains in BLEU, of up to 7.02 BLEU for coreference and 1.89 BLEU for coherence on subtitles translation. Access to strong signals allows us to make clear comparisons between context-aware models.",Coreference and Coherence in Neural Machine Translation: A Study Using Oracle Experiments,"Cross-sentence context can provide valuable information in Machine Translation and is critical for translation of anaphoric pronouns and for providing consistent translations. In this paper, we devise simple oracle experiments targeting coreference and coherence. Oracles are an easy way to evaluate the effect of different discourse-level phenomena in NMT using BLEU and eliminate the necessity to manually define challenge sets for this purpose. We propose two context-aware NMT models and compare them against models working on a concatenation of consecutive sentences. Concatenation models perform better, but are computationally expensive. We show that NMT models taking advantage of context oracle signals can achieve considerable gains in BLEU, of up to 7.02 BLEU for coreference and 1.89 BLEU for coherence on subtitles translation. Access to strong signals allows us to make clear comparisons between context-aware models.",We would like to thank the anonymous reviewers for their valuable input and Daniel Ledda for his help with examples. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement № 640550).,"Coreference and Coherence in Neural Machine Translation: A Study Using Oracle Experiments. 
Cross-sentence context can provide valuable information in Machine Translation and is critical for translation of anaphoric pronouns and for providing consistent translations. In this paper, we devise simple oracle experiments targeting coreference and coherence. Oracles are an easy way to evaluate the effect of different discourse-level phenomena in NMT using BLEU and eliminate the necessity to manually define challenge sets for this purpose. We propose two context-aware NMT models and compare them against models working on a concatenation of consecutive sentences. Concatenation models perform better, but are computationally expensive. We show that NMT models taking advantage of context oracle signals can achieve considerable gains in BLEU, of up to 7.02 BLEU for coreference and 1.89 BLEU for coherence on subtitles translation. Access to strong signals allows us to make clear comparisons between context-aware models.",2018
zhou-liu-1997-similarity,https://aclanthology.org/O97-2011,0,,,,,,,Similarity Comparison between Chinese Sentences. ,Similarity Comparison between {C}hinese Sentences,,Similarity Comparison between Chinese Sentences,,,Similarity Comparison between Chinese Sentences. ,1997
asthana-ekbal-2017-supervised,https://aclanthology.org/W17-7529,0,,,,,,,"Supervised Methods For Ranking Relations In Web Search. In this paper we propose an efficient technique for ranking triples of knowledge base using information of full text. We devise supervised machine learning algorithms to compute the relevance scores for item-property pairs where an item can have more than one value.Such a score measures the degree to which an entity belongs to a type, and this plays an important role in ranking the search results. The problem is, in itself, new and not explored so much in the literature, possibly because of the heterogeneous behaviors of both semantic knowledge base and fulltext articles. The classifiers exploit statistical features computed from the Wikipedia articles and the semantic information obtained from the word embedding concepts. We develop models based on traditional supervised models like Suport Vector Machine (SVM) and Random Forest (RF); and then using deep Convolution Neural Network (CNN). We perform experiments as provided by WSDM cup 2017, which provides about 1k human judgments of person-profession pairs. Evaluation shows that machine learning based approaches produce encouraging performance with the highest accuracy of 71%. The contributions of the current work are twofold , viz. we focus on a problem that has not been explored much, and show the usage of powerful word-embedding features that produce promising results.",Supervised Methods For Ranking Relations In Web Search,"In this paper we propose an efficient technique for ranking triples of knowledge base using information of full text. We devise supervised machine learning algorithms to compute the relevance scores for item-property pairs where an item can have more than one value.Such a score measures the degree to which an entity belongs to a type, and this plays an important role in ranking the search results. The problem is, in itself, new and not explored so much in the literature, possibly because of the heterogeneous behaviors of both semantic knowledge base and fulltext articles. The classifiers exploit statistical features computed from the Wikipedia articles and the semantic information obtained from the word embedding concepts. We develop models based on traditional supervised models like Suport Vector Machine (SVM) and Random Forest (RF); and then using deep Convolution Neural Network (CNN). We perform experiments as provided by WSDM cup 2017, which provides about 1k human judgments of person-profession pairs. Evaluation shows that machine learning based approaches produce encouraging performance with the highest accuracy of 71%. The contributions of the current work are twofold , viz. we focus on a problem that has not been explored much, and show the usage of powerful word-embedding features that produce promising results.",Supervised Methods For Ranking Relations In Web Search,"In this paper we propose an efficient technique for ranking triples of knowledge base using information of full text. We devise supervised machine learning algorithms to compute the relevance scores for item-property pairs where an item can have more than one value.Such a score measures the degree to which an entity belongs to a type, and this plays an important role in ranking the search results. The problem is, in itself, new and not explored so much in the literature, possibly because of the heterogeneous behaviors of both semantic knowledge base and fulltext articles. 
The classifiers exploit statistical features computed from the Wikipedia articles and the semantic information obtained from the word embedding concepts. We develop models based on traditional supervised models like Suport Vector Machine (SVM) and Random Forest (RF); and then using deep Convolution Neural Network (CNN). We perform experiments as provided by WSDM cup 2017, which provides about 1k human judgments of person-profession pairs. Evaluation shows that machine learning based approaches produce encouraging performance with the highest accuracy of 71%. The contributions of the current work are twofold , viz. we focus on a problem that has not been explored much, and show the usage of powerful word-embedding features that produce promising results.",,"Supervised Methods For Ranking Relations In Web Search. In this paper we propose an efficient technique for ranking triples of knowledge base using information of full text. We devise supervised machine learning algorithms to compute the relevance scores for item-property pairs where an item can have more than one value.Such a score measures the degree to which an entity belongs to a type, and this plays an important role in ranking the search results. The problem is, in itself, new and not explored so much in the literature, possibly because of the heterogeneous behaviors of both semantic knowledge base and fulltext articles. The classifiers exploit statistical features computed from the Wikipedia articles and the semantic information obtained from the word embedding concepts. We develop models based on traditional supervised models like Suport Vector Machine (SVM) and Random Forest (RF); and then using deep Convolution Neural Network (CNN). We perform experiments as provided by WSDM cup 2017, which provides about 1k human judgments of person-profession pairs. Evaluation shows that machine learning based approaches produce encouraging performance with the highest accuracy of 71%. The contributions of the current work are twofold , viz. we focus on a problem that has not been explored much, and show the usage of powerful word-embedding features that produce promising results.",2017
zheng-etal-2021-consistency,https://aclanthology.org/2021.acl-long.264,0,,,,,,,"Consistency Regularization for Cross-Lingual Fine-Tuning. Fine-tuning pre-trained cross-lingual language models can transfer task-specific supervision from one language to the others. In this work, we propose to improve cross-lingual fine-tuning with consistency regularization. Specifically, we use example consistency regularization to penalize the prediction sensitivity to four types of data augmentations, i.e., subword sampling, Gaussian noise, code-switch substitution, and machine translation. In addition, we employ model consistency to regularize the models trained with two augmented versions of the same training set. Experimental results on the XTREME benchmark show that our method significantly improves cross-lingual fine-tuning across various tasks, including text classification, question answering, and sequence labeling.",Consistency Regularization for Cross-Lingual Fine-Tuning,"Fine-tuning pre-trained cross-lingual language models can transfer task-specific supervision from one language to the others. In this work, we propose to improve cross-lingual fine-tuning with consistency regularization. Specifically, we use example consistency regularization to penalize the prediction sensitivity to four types of data augmentations, i.e., subword sampling, Gaussian noise, code-switch substitution, and machine translation. In addition, we employ model consistency to regularize the models trained with two augmented versions of the same training set. Experimental results on the XTREME benchmark show that our method significantly improves cross-lingual fine-tuning across various tasks, including text classification, question answering, and sequence labeling.",Consistency Regularization for Cross-Lingual Fine-Tuning,"Fine-tuning pre-trained cross-lingual language models can transfer task-specific supervision from one language to the others. In this work, we propose to improve cross-lingual fine-tuning with consistency regularization. Specifically, we use example consistency regularization to penalize the prediction sensitivity to four types of data augmentations, i.e., subword sampling, Gaussian noise, code-switch substitution, and machine translation. In addition, we employ model consistency to regularize the models trained with two augmented versions of the same training set. Experimental results on the XTREME benchmark show that our method significantly improves cross-lingual fine-tuning across various tasks, including text classification, question answering, and sequence labeling.",Wanxiang Che is the corresponding author. This work was supported by the National Key R&D Program of China via grant 2020AAA0106501 and the National Natural Science Foundation of China (NSFC) via grant 61976072 and 61772153.,"Consistency Regularization for Cross-Lingual Fine-Tuning. Fine-tuning pre-trained cross-lingual language models can transfer task-specific supervision from one language to the others. In this work, we propose to improve cross-lingual fine-tuning with consistency regularization. Specifically, we use example consistency regularization to penalize the prediction sensitivity to four types of data augmentations, i.e., subword sampling, Gaussian noise, code-switch substitution, and machine translation. In addition, we employ model consistency to regularize the models trained with two augmented versions of the same training set. 
Experimental results on the XTREME benchmark show that our method significantly improves cross-lingual fine-tuning across various tasks, including text classification, question answering, and sequence labeling.",2021
condon-miller-2002-sharing,https://aclanthology.org/W02-0713,0,,,,,,,Sharing Problems and Solutions for Machine Translation of Spoken and Written Interaction. Examples from chat interaction are presented to demonstrate that machine translation of written interaction shares many problems with translation of spoken interaction. The potential for common solutions to the problems is illustrated by describing operations that normalize and tag input before translation. Segmenting utterances into small translation units and processing short turns separately are also motivated using data from chat.,Sharing Problems and Solutions for Machine Translation of Spoken and Written Interaction,Examples from chat interaction are presented to demonstrate that machine translation of written interaction shares many problems with translation of spoken interaction. The potential for common solutions to the problems is illustrated by describing operations that normalize and tag input before translation. Segmenting utterances into small translation units and processing short turns separately are also motivated using data from chat.,Sharing Problems and Solutions for Machine Translation of Spoken and Written Interaction,Examples from chat interaction are presented to demonstrate that machine translation of written interaction shares many problems with translation of spoken interaction. The potential for common solutions to the problems is illustrated by describing operations that normalize and tag input before translation. Segmenting utterances into small translation units and processing short turns separately are also motivated using data from chat.,,Sharing Problems and Solutions for Machine Translation of Spoken and Written Interaction. Examples from chat interaction are presented to demonstrate that machine translation of written interaction shares many problems with translation of spoken interaction. The potential for common solutions to the problems is illustrated by describing operations that normalize and tag input before translation. Segmenting utterances into small translation units and processing short turns separately are also motivated using data from chat.,2002
nakamura-kawahara-2016-constructing,https://aclanthology.org/W16-1006,0,,,,,,,"Constructing a Dictionary Describing Feature Changes of Arguments in Event Sentences. Common sense knowledge plays an essential role for natural language understanding, human-machine communication and so forth. In this paper, we acquire knowledge of events as common sense knowledge because there is a possibility that dictionaries of such knowledge are useful for recognition of implication relations in texts, inference of human activities and their planning, and so on. As for event knowledge, we focus on feature changes of arguments (hereafter, FCAs) in event sentences as knowledge of events. To construct a dictionary of FCAs, we propose a framework for acquiring such knowledge based on both of the automatic approach and the collective intelligence approach to exploit merits of both approaches. We acquired FCAs in event sentences through crowdsourcing and conducted the subjective evaluation to validate whether the FCAs are adequately acquired. As a result of the evaluation, it was shown that we were able to reasonably well capture FCAs in event sentences.",Constructing a Dictionary Describing Feature Changes of Arguments in Event Sentences,"Common sense knowledge plays an essential role for natural language understanding, human-machine communication and so forth. In this paper, we acquire knowledge of events as common sense knowledge because there is a possibility that dictionaries of such knowledge are useful for recognition of implication relations in texts, inference of human activities and their planning, and so on. As for event knowledge, we focus on feature changes of arguments (hereafter, FCAs) in event sentences as knowledge of events. To construct a dictionary of FCAs, we propose a framework for acquiring such knowledge based on both of the automatic approach and the collective intelligence approach to exploit merits of both approaches. We acquired FCAs in event sentences through crowdsourcing and conducted the subjective evaluation to validate whether the FCAs are adequately acquired. As a result of the evaluation, it was shown that we were able to reasonably well capture FCAs in event sentences.",Constructing a Dictionary Describing Feature Changes of Arguments in Event Sentences,"Common sense knowledge plays an essential role for natural language understanding, human-machine communication and so forth. In this paper, we acquire knowledge of events as common sense knowledge because there is a possibility that dictionaries of such knowledge are useful for recognition of implication relations in texts, inference of human activities and their planning, and so on. As for event knowledge, we focus on feature changes of arguments (hereafter, FCAs) in event sentences as knowledge of events. To construct a dictionary of FCAs, we propose a framework for acquiring such knowledge based on both of the automatic approach and the collective intelligence approach to exploit merits of both approaches. We acquired FCAs in event sentences through crowdsourcing and conducted the subjective evaluation to validate whether the FCAs are adequately acquired. As a result of the evaluation, it was shown that we were able to reasonably well capture FCAs in event sentences.",,"Constructing a Dictionary Describing Feature Changes of Arguments in Event Sentences. Common sense knowledge plays an essential role for natural language understanding, human-machine communication and so forth. 
In this paper, we acquire knowledge of events as common sense knowledge because there is a possibility that dictionaries of such knowledge are useful for recognition of implication relations in texts, inference of human activities and their planning, and so on. As for event knowledge, we focus on feature changes of arguments (hereafter, FCAs) in event sentences as knowledge of events. To construct a dictionary of FCAs, we propose a framework for acquiring such knowledge based on both of the automatic approach and the collective intelligence approach to exploit merits of both approaches. We acquired FCAs in event sentences through crowdsourcing and conducted the subjective evaluation to validate whether the FCAs are adequately acquired. As a result of the evaluation, it was shown that we were able to reasonably well capture FCAs in event sentences.",2016
mohammad-etal-2016-dataset,https://aclanthology.org/L16-1623,1,,,,peace_justice_and_strong_institutions,,,"A Dataset for Detecting Stance in Tweets. We can often detect from a person's utterances whether he/she is in favor of or against a given target entity (a product, topic, another person, etc.). Here for the first time we present a dataset of tweets annotated for whether the tweeter is in favor of or against pre-chosen targets of interest-their stance. The targets of interest may or may not be referred to in the tweets, and they may or may not be the target of opinion in the tweets. The data pertains to six targets of interest commonly known and debated in the United States. Apart from stance, the tweets are also annotated for whether the target of interest is the target of opinion in the tweet. The annotations were performed by crowdsourcing. Several techniques were employed to encourage high-quality annotations (for example, providing clear and simple instructions) and to identify and discard poor annotations (for example, using a small set of check questions annotated by the authors). This Stance Dataset, which was subsequently also annotated for sentiment, can be used to better understand the relationship between stance, sentiment, entity relationships, and textual inference.",A Dataset for Detecting Stance in Tweets,"We can often detect from a person's utterances whether he/she is in favor of or against a given target entity (a product, topic, another person, etc.). Here for the first time we present a dataset of tweets annotated for whether the tweeter is in favor of or against pre-chosen targets of interest-their stance. The targets of interest may or may not be referred to in the tweets, and they may or may not be the target of opinion in the tweets. The data pertains to six targets of interest commonly known and debated in the United States. Apart from stance, the tweets are also annotated for whether the target of interest is the target of opinion in the tweet. The annotations were performed by crowdsourcing. Several techniques were employed to encourage high-quality annotations (for example, providing clear and simple instructions) and to identify and discard poor annotations (for example, using a small set of check questions annotated by the authors). This Stance Dataset, which was subsequently also annotated for sentiment, can be used to better understand the relationship between stance, sentiment, entity relationships, and textual inference.",A Dataset for Detecting Stance in Tweets,"We can often detect from a person's utterances whether he/she is in favor of or against a given target entity (a product, topic, another person, etc.). Here for the first time we present a dataset of tweets annotated for whether the tweeter is in favor of or against pre-chosen targets of interest-their stance. The targets of interest may or may not be referred to in the tweets, and they may or may not be the target of opinion in the tweets. The data pertains to six targets of interest commonly known and debated in the United States. Apart from stance, the tweets are also annotated for whether the target of interest is the target of opinion in the tweet. The annotations were performed by crowdsourcing. Several techniques were employed to encourage high-quality annotations (for example, providing clear and simple instructions) and to identify and discard poor annotations (for example, using a small set of check questions annotated by the authors). 
This Stance Dataset, which was subsequently also annotated for sentiment, can be used to better understand the relationship between stance, sentiment, entity relationships, and textual inference.",,"A Dataset for Detecting Stance in Tweets. We can often detect from a person's utterances whether he/she is in favor of or against a given target entity (a product, topic, another person, etc.). Here for the first time we present a dataset of tweets annotated for whether the tweeter is in favor of or against pre-chosen targets of interest-their stance. The targets of interest may or may not be referred to in the tweets, and they may or may not be the target of opinion in the tweets. The data pertains to six targets of interest commonly known and debated in the United States. Apart from stance, the tweets are also annotated for whether the target of interest is the target of opinion in the tweet. The annotations were performed by crowdsourcing. Several techniques were employed to encourage high-quality annotations (for example, providing clear and simple instructions) and to identify and discard poor annotations (for example, using a small set of check questions annotated by the authors). This Stance Dataset, which was subsequently also annotated for sentiment, can be used to better understand the relationship between stance, sentiment, entity relationships, and textual inference.",2016
moghimifar-etal-2020-cosmo,https://aclanthology.org/2020.coling-main.467,0,,,,,,,"CosMo: Conditional Seq2Seq-based Mixture Model for Zero-Shot Commonsense Question Answering. Commonsense reasoning refers to the ability of evaluating a social situation and acting accordingly. Identification of the implicit causes and effects of a social context is the driving capability which can enable machines to perform commonsense reasoning. The dynamic world of social interactions requires context-dependent on-demand systems to infer such underlying information. However, current approaches in this realm lack the ability to perform commonsense reasoning upon facing an unseen situation, mostly due to incapability of identifying a diverse range of implicit social relations. Hence they fail to estimate the correct reasoning path. In this paper, we present Conditional SEQ2SEQ-based Mixture model (COSMO), which provides us with the capabilities of dynamic and diverse content generation. We use COSMO to generate context-dependent clauses, which form a dynamic Knowledge Graph (KG) on-the-fly for commonsense reasoning. To show the adaptability of our model to context-dependant knowledge generation, we address the task of zero-shot commonsense question answering. The empirical results indicate an improvement of up to +5.2% over the state-of-the-art models.",{C}os{M}o: Conditional {S}eq2{S}eq-based Mixture Model for Zero-Shot Commonsense Question Answering,"Commonsense reasoning refers to the ability of evaluating a social situation and acting accordingly. Identification of the implicit causes and effects of a social context is the driving capability which can enable machines to perform commonsense reasoning. The dynamic world of social interactions requires context-dependent on-demand systems to infer such underlying information. However, current approaches in this realm lack the ability to perform commonsense reasoning upon facing an unseen situation, mostly due to incapability of identifying a diverse range of implicit social relations. Hence they fail to estimate the correct reasoning path. In this paper, we present Conditional SEQ2SEQ-based Mixture model (COSMO), which provides us with the capabilities of dynamic and diverse content generation. We use COSMO to generate context-dependent clauses, which form a dynamic Knowledge Graph (KG) on-the-fly for commonsense reasoning. To show the adaptability of our model to context-dependant knowledge generation, we address the task of zero-shot commonsense question answering. The empirical results indicate an improvement of up to +5.2% over the state-of-the-art models.",CosMo: Conditional Seq2Seq-based Mixture Model for Zero-Shot Commonsense Question Answering,"Commonsense reasoning refers to the ability of evaluating a social situation and acting accordingly. Identification of the implicit causes and effects of a social context is the driving capability which can enable machines to perform commonsense reasoning. The dynamic world of social interactions requires context-dependent on-demand systems to infer such underlying information. However, current approaches in this realm lack the ability to perform commonsense reasoning upon facing an unseen situation, mostly due to incapability of identifying a diverse range of implicit social relations. Hence they fail to estimate the correct reasoning path. In this paper, we present Conditional SEQ2SEQ-based Mixture model (COSMO), which provides us with the capabilities of dynamic and diverse content generation. 
We use COSMO to generate context-dependent clauses, which form a dynamic Knowledge Graph (KG) on-the-fly for commonsense reasoning. To show the adaptability of our model to context-dependant knowledge generation, we address the task of zero-shot commonsense question answering. The empirical results indicate an improvement of up to +5.2% over the state-of-the-art models.",,"CosMo: Conditional Seq2Seq-based Mixture Model for Zero-Shot Commonsense Question Answering. Commonsense reasoning refers to the ability of evaluating a social situation and acting accordingly. Identification of the implicit causes and effects of a social context is the driving capability which can enable machines to perform commonsense reasoning. The dynamic world of social interactions requires context-dependent on-demand systems to infer such underlying information. However, current approaches in this realm lack the ability to perform commonsense reasoning upon facing an unseen situation, mostly due to incapability of identifying a diverse range of implicit social relations. Hence they fail to estimate the correct reasoning path. In this paper, we present Conditional SEQ2SEQ-based Mixture model (COSMO), which provides us with the capabilities of dynamic and diverse content generation. We use COSMO to generate context-dependent clauses, which form a dynamic Knowledge Graph (KG) on-the-fly for commonsense reasoning. To show the adaptability of our model to context-dependant knowledge generation, we address the task of zero-shot commonsense question answering. The empirical results indicate an improvement of up to +5.2% over the state-of-the-art models.",2020
mellish-evans-1989-natural,https://aclanthology.org/J89-4002,0,,,,,,,"Natural Language Generation from Plans. This paper addresses the problem of designing a system that accepts a plan structure of the sort generated by AI planning programs and produces natural language text explaining how to execute the plan. We describe a system that generates text from plans produced by the NONLIN planner (Tate 1976). The results of our system are promising, but the texts still lack much of the smoothness of human-generated text. This is partly because, although the domain of plans seems a priori to provide rich structure that a natural language generator can use, in practice a plan that is generated without the production of explanations in mind rarely contains the kinds of information that would yield an interesting natural language account. For instance, the hierarchical organization assigned to a plan is liable to reflect more a programmer's approach to generating a class of plans efficiently than the way that a human would naturally ""chunk"" the relevant actions. Such problems are, of course, similar to those that Swartout (1983) encountered with expert systems. In addition, AI planners have a restricted view of the world that is hard to match up with the normal semantics of natural language expressions. Thus constructs that are primitive to the planner may be only clumsily or misleadingly expressed in natural language, and the range of possible natural language constructs may be artificially limited by the shallowness of the planner's representations.",Natural Language Generation from Plans,"This paper addresses the problem of designing a system that accepts a plan structure of the sort generated by AI planning programs and produces natural language text explaining how to execute the plan. We describe a system that generates text from plans produced by the NONLIN planner (Tate 1976). The results of our system are promising, but the texts still lack much of the smoothness of human-generated text. This is partly because, although the domain of plans seems a priori to provide rich structure that a natural language generator can use, in practice a plan that is generated without the production of explanations in mind rarely contains the kinds of information that would yield an interesting natural language account. For instance, the hierarchical organization assigned to a plan is liable to reflect more a programmer's approach to generating a class of plans efficiently than the way that a human would naturally ""chunk"" the relevant actions. Such problems are, of course, similar to those that Swartout (1983) encountered with expert systems. In addition, AI planners have a restricted view of the world that is hard to match up with the normal semantics of natural language expressions. Thus constructs that are primitive to the planner may be only clumsily or misleadingly expressed in natural language, and the range of possible natural language constructs may be artificially limited by the shallowness of the planner's representations.",Natural Language Generation from Plans,"This paper addresses the problem of designing a system that accepts a plan structure of the sort generated by AI planning programs and produces natural language text explaining how to execute the plan. We describe a system that generates text from plans produced by the NONLIN planner (Tate 1976). The results of our system are promising, but the texts still lack much of the smoothness of human-generated text. 
This is partly because, although the domain of plans seems a priori to provide rich structure that a natural language generator can use, in practice a plan that is generated without the production of explanations in mind rarely contains the kinds of information that would yield an interesting natural language account. For instance, the hierarchical organization assigned to a plan is liable to reflect more a programmer's approach to generating a class of plans efficiently than the way that a human would naturally ""chunk"" the relevant actions. Such problems are, of course, similar to those that Swartout (1983) encountered with expert systems. In addition, AI planners have a restricted view of the world that is hard to match up with the normal semantics of natural language expressions. Thus constructs that are primitive to the planner may be only clumsily or misleadingly expressed in natural language, and the range of possible natural language constructs may be artificially limited by the shallowness of the planner's representations.",The work reported here was made possible by SERC grant GR/D/ 08876. Both authors are currently supported by SERC Advanced Fellowships.,"Natural Language Generation from Plans. This paper addresses the problem of designing a system that accepts a plan structure of the sort generated by AI planning programs and produces natural language text explaining how to execute the plan. We describe a system that generates text from plans produced by the NONLIN planner (Tate 1976). The results of our system are promising, but the texts still lack much of the smoothness of human-generated text. This is partly because, although the domain of plans seems a priori to provide rich structure that a natural language generator can use, in practice a plan that is generated without the production of explanations in mind rarely contains the kinds of information that would yield an interesting natural language account. For instance, the hierarchical organization assigned to a plan is liable to reflect more a programmer's approach to generating a class of plans efficiently than the way that a human would naturally ""chunk"" the relevant actions. Such problems are, of course, similar to those that Swartout (1983) encountered with expert systems. In addition, AI planners have a restricted view of the world that is hard to match up with the normal semantics of natural language expressions. Thus constructs that are primitive to the planner may be only clumsily or misleadingly expressed in natural language, and the range of possible natural language constructs may be artificially limited by the shallowness of the planner's representations.",1989
hinkelman-allen-1989-two,https://aclanthology.org/P89-1026,0,,,,,,,"Two Constraints on Speech Act Ambiguity. Existing plan-based theories of speech act interpretation do not account for the conventional aspect of speech acts. We use patterns of linguistic features (e.g. mood, verb form, sentence adverbials, thematic roles) to suggest a range of speech act interpretations for the utterance. These are filtered using plan-based conversational implicatures to eliminate inappropriate ones. Extended plan reasoning is available but not necessary for familiar forms. Taking speech act ambiguity seriously, with these two constraints, explains how ""Can you pass the salt?"" is a typical indirect request while ""Are you able to pass the salt?"" is not.",Two Constraints on Speech Act Ambiguity,"Existing plan-based theories of speech act interpretation do not account for the conventional aspect of speech acts. We use patterns of linguistic features (e.g. mood, verb form, sentence adverbials, thematic roles) to suggest a range of speech act interpretations for the utterance. These are filtered using plan-based conversational implicatures to eliminate inappropriate ones. Extended plan reasoning is available but not necessary for familiar forms. Taking speech act ambiguity seriously, with these two constraints, explains how ""Can you pass the salt?"" is a typical indirect request while ""Are you able to pass the salt?"" is not.",Two Constraints on Speech Act Ambiguity,"Existing plan-based theories of speech act interpretation do not account for the conventional aspect of speech acts. We use patterns of linguistic features (e.g. mood, verb form, sentence adverbials, thematic roles) to suggest a range of speech act interpretations for the utterance. These are filtered using plan-based conversational implicatures to eliminate inappropriate ones. Extended plan reasoning is available but not necessary for familiar forms. Taking speech act ambiguity seriously, with these two constraints, explains how ""Can you pass the salt?"" is a typical indirect request while ""Are you able to pass the salt?"" is not.",,"Two Constraints on Speech Act Ambiguity. Existing plan-based theories of speech act interpretation do not account for the conventional aspect of speech acts. We use patterns of linguistic features (e.g. mood, verb form, sentence adverbials, thematic roles) to suggest a range of speech act interpretations for the utterance. These are filtered using plan-based conversational implicatures to eliminate inappropriate ones. Extended plan reasoning is available but not necessary for familiar forms. Taking speech act ambiguity seriously, with these two constraints, explains how ""Can you pass the salt?"" is a typical indirect request while ""Are you able to pass the salt?"" is not.",1989
garcia-2021-exploring,https://aclanthology.org/2021.acl-long.281,0,,,,,,,"Exploring the Representation of Word Meanings in Context: A Case Study on Homonymy and Synonymy. This paper presents a multilingual study of word meaning representations in context. We assess the ability of both static and contextualized models to adequately represent different lexical-semantic relations, such as homonymy and synonymy. To do so, we created a new multilingual dataset that allows us to perform a controlled evaluation of several factors such as the impact of the surrounding context or the overlap between words, conveying the same or different senses. A systematic assessment on four scenarios shows that the best monolingual models based on Transformers can adequately disambiguate homonyms in context. However, as they rely heavily on context, these models fail at representing words with different senses when occurring in similar sentences. Experiments are performed in Galician, Portuguese, English, and Spanish, and both the dataset (with more than 3,000 evaluation items) and new models are freely released with this study.",Exploring the Representation of Word Meanings in Context: {A} Case Study on Homonymy and Synonymy,"This paper presents a multilingual study of word meaning representations in context. We assess the ability of both static and contextualized models to adequately represent different lexical-semantic relations, such as homonymy and synonymy. To do so, we created a new multilingual dataset that allows us to perform a controlled evaluation of several factors such as the impact of the surrounding context or the overlap between words, conveying the same or different senses. A systematic assessment on four scenarios shows that the best monolingual models based on Transformers can adequately disambiguate homonyms in context. However, as they rely heavily on context, these models fail at representing words with different senses when occurring in similar sentences. Experiments are performed in Galician, Portuguese, English, and Spanish, and both the dataset (with more than 3,000 evaluation items) and new models are freely released with this study.",Exploring the Representation of Word Meanings in Context: A Case Study on Homonymy and Synonymy,"This paper presents a multilingual study of word meaning representations in context. We assess the ability of both static and contextualized models to adequately represent different lexical-semantic relations, such as homonymy and synonymy. To do so, we created a new multilingual dataset that allows us to perform a controlled evaluation of several factors such as the impact of the surrounding context or the overlap between words, conveying the same or different senses. A systematic assessment on four scenarios shows that the best monolingual models based on Transformers can adequately disambiguate homonyms in context. However, as they rely heavily on context, these models fail at representing words with different senses when occurring in similar sentences. Experiments are performed in Galician, Portuguese, English, and Spanish, and both the dataset (with more than 3,000 evaluation items) and new models are freely released with this study.","We would like to thank the anonymous reviewers for their valuable comments, and NVIDIA Corporation for the donation of a Titan Xp GPU. 
This research is funded by a Ramón y Cajal grant (RYC2019-028473-I) and by the Galician Government (ERDF 2014-2020: Call ED431G 2019/04).","Exploring the Representation of Word Meanings in Context: A Case Study on Homonymy and Synonymy. This paper presents a multilingual study of word meaning representations in context. We assess the ability of both static and contextualized models to adequately represent different lexical-semantic relations, such as homonymy and synonymy. To do so, we created a new multilingual dataset that allows us to perform a controlled evaluation of several factors such as the impact of the surrounding context or the overlap between words, conveying the same or different senses. A systematic assessment on four scenarios shows that the best monolingual models based on Transformers can adequately disambiguate homonyms in context. However, as they rely heavily on context, these models fail at representing words with different senses when occurring in similar sentences. Experiments are performed in Galician, Portuguese, English, and Spanish, and both the dataset (with more than 3,000 evaluation items) and new models are freely released with this study.",2021
andersen-etal-2013-developing,https://aclanthology.org/W13-1704,1,,,,education,,,"Developing and testing a self-assessment and tutoring system. Automated feedback on writing may be a useful complement to teacher comments in the process of learning a foreign language. This paper presents a self-assessment and tutoring system which combines an holistic score with detection and correction of frequent errors and furthermore provides a qualitative assessment of each individual sentence, thus making the language learner aware of potentially problematic areas rather than providing a panacea. The system has been tested by learners in a range of educational institutions, and their feedback has guided its development.",Developing and testing a self-assessment and tutoring system,"Automated feedback on writing may be a useful complement to teacher comments in the process of learning a foreign language. This paper presents a self-assessment and tutoring system which combines an holistic score with detection and correction of frequent errors and furthermore provides a qualitative assessment of each individual sentence, thus making the language learner aware of potentially problematic areas rather than providing a panacea. The system has been tested by learners in a range of educational institutions, and their feedback has guided its development.",Developing and testing a self-assessment and tutoring system,"Automated feedback on writing may be a useful complement to teacher comments in the process of learning a foreign language. This paper presents a self-assessment and tutoring system which combines an holistic score with detection and correction of frequent errors and furthermore provides a qualitative assessment of each individual sentence, thus making the language learner aware of potentially problematic areas rather than providing a panacea. The system has been tested by learners in a range of educational institutions, and their feedback has guided its development.","Special thanks to Ted Briscoe and Marek Rei, as well as to the anonymous reviewers, for their valuable contributions at various stages.","Developing and testing a self-assessment and tutoring system. Automated feedback on writing may be a useful complement to teacher comments in the process of learning a foreign language. This paper presents a self-assessment and tutoring system which combines an holistic score with detection and correction of frequent errors and furthermore provides a qualitative assessment of each individual sentence, thus making the language learner aware of potentially problematic areas rather than providing a panacea. The system has been tested by learners in a range of educational institutions, and their feedback has guided its development.",2013
nawaz-etal-2010-evaluating,https://aclanthology.org/W10-3112,1,,,,health,,,"Evaluating a meta-knowledge annotation scheme for bio-events. The correct interpretation of biomedical texts by text mining systems requires the recognition of a range of types of high-level information (or meta-knowledge) about the text. Examples include expressions of negation and speculation, as well as pragmatic/rhetorical intent (e.g. whether the information expressed represents a hypothesis, generally accepted knowledge, new experimental knowledge, etc.) Although such types of information have previously been annotated at the text-span level (most commonly sentences), annotation at the level of the event is currently quite sparse. In this paper, we focus on the evaluation of the multi-dimensional annotation scheme that we have developed specifically for enriching bio-events with meta-knowledge information. Our annotation scheme is intended to be general enough to allow integration with different types of bio-event annotation, whilst being detailed enough to capture important subtleties in the nature of the meta-knowledge expressed in the text. To our knowledge, our scheme is unique within the field with regards to the diversity of meta-knowledge aspects annotated for each event, whilst the evaluation results have confirmed its feasibility and soundness.",Evaluating a meta-knowledge annotation scheme for bio-events,"The correct interpretation of biomedical texts by text mining systems requires the recognition of a range of types of high-level information (or meta-knowledge) about the text. Examples include expressions of negation and speculation, as well as pragmatic/rhetorical intent (e.g. whether the information expressed represents a hypothesis, generally accepted knowledge, new experimental knowledge, etc.) Although such types of information have previously been annotated at the text-span level (most commonly sentences), annotation at the level of the event is currently quite sparse. In this paper, we focus on the evaluation of the multi-dimensional annotation scheme that we have developed specifically for enriching bio-events with meta-knowledge information. Our annotation scheme is intended to be general enough to allow integration with different types of bio-event annotation, whilst being detailed enough to capture important subtleties in the nature of the meta-knowledge expressed in the text. To our knowledge, our scheme is unique within the field with regards to the diversity of meta-knowledge aspects annotated for each event, whilst the evaluation results have confirmed its feasibility and soundness.",Evaluating a meta-knowledge annotation scheme for bio-events,"The correct interpretation of biomedical texts by text mining systems requires the recognition of a range of types of high-level information (or meta-knowledge) about the text. Examples include expressions of negation and speculation, as well as pragmatic/rhetorical intent (e.g. whether the information expressed represents a hypothesis, generally accepted knowledge, new experimental knowledge, etc.) Although such types of information have previously been annotated at the text-span level (most commonly sentences), annotation at the level of the event is currently quite sparse. In this paper, we focus on the evaluation of the multi-dimensional annotation scheme that we have developed specifically for enriching bio-events with meta-knowledge information. 
Our annotation scheme is intended to be general enough to allow integration with different types of bio-event annotation, whilst being detailed enough to capture important subtleties in the nature of the meta-knowledge expressed in the text. To our knowledge, our scheme is unique within the field with regards to the diversity of meta-knowledge aspects annotated for each event, whilst the evaluation results have confirmed its feasibility and soundness.","The work described in this paper has been funded by the Biotechnology and Biological Sciences Research Council through grant numbers BBS/B/13640, BB/F006039/1 (ONDEX)","Evaluating a meta-knowledge annotation scheme for bio-events. The correct interpretation of biomedical texts by text mining systems requires the recognition of a range of types of high-level information (or meta-knowledge) about the text. Examples include expressions of negation and speculation, as well as pragmatic/rhetorical intent (e.g. whether the information expressed represents a hypothesis, generally accepted knowledge, new experimental knowledge, etc.) Although such types of information have previously been annotated at the text-span level (most commonly sentences), annotation at the level of the event is currently quite sparse. In this paper, we focus on the evaluation of the multi-dimensional annotation scheme that we have developed specifically for enriching bio-events with meta-knowledge information. Our annotation scheme is intended to be general enough to allow integration with different types of bio-event annotation, whilst being detailed enough to capture important subtleties in the nature of the meta-knowledge expressed in the text. To our knowledge, our scheme is unique within the field with regards to the diversity of meta-knowledge aspects annotated for each event, whilst the evaluation results have confirmed its feasibility and soundness.",2010
lafourcade-boitet-2002-unl,http://www.lrec-conf.org/proceedings/lrec2002/pdf/354.pdf,0,,,,,,,"UNL Lexical Selection with Conceptual Vectors. When deconverting a UNL graph into some natural language LG, we often encounter lexical items (called UWs) made of an English headword and formalized semantic restrictions, such as ""look for (icl>do, agt>person)"", which are not yet connected to lemmas, so that it is necessary to find a ""nearest"" UW in the UNL-LG dictionary, such as ""look for (icl>action, agt>human, obj>thing)"". Then, this UW may be connected to several lemmas of LG. In order to solve these problems of incompleteness and polysemy, we are applying a method based on the computation of ""conceptual vectors"", previously used successfully in the context of thematic indexing of French and English documents.",{UNL} Lexical Selection with Conceptual Vectors,"When deconverting a UNL graph into some natural language LG, we often encounter lexical items (called UWs) made of an English headword and formalized semantic restrictions, such as ""look for (icl>do, agt>person)"", which are not yet connected to lemmas, so that it is necessary to find a ""nearest"" UW in the UNL-LG dictionary, such as ""look for (icl>action, agt>human, obj>thing)"". Then, this UW may be connected to several lemmas of LG. In order to solve these problems of incompleteness and polysemy, we are applying a method based on the computation of ""conceptual vectors"", previously used successfully in the context of thematic indexing of French and English documents.",UNL Lexical Selection with Conceptual Vectors,"When deconverting a UNL graph into some natural language LG, we often encounter lexical items (called UWs) made of an English headword and formalized semantic restrictions, such as ""look for (icl>do, agt>person)"", which are not yet connected to lemmas, so that it is necessary to find a ""nearest"" UW in the UNL-LG dictionary, such as ""look for (icl>action, agt>human, obj>thing)"". Then, this UW may be connected to several lemmas of LG. In order to solve these problems of incompleteness and polysemy, we are applying a method based on the computation of ""conceptual vectors"", previously used successfully in the context of thematic indexing of French and English documents.",,"UNL Lexical Selection with Conceptual Vectors. When deconverting a UNL graph into some natural language LG, we often encounter lexical items (called UWs) made of an English headword and formalized semantic restrictions, such as ""look for (icl>do, agt>person)"", which are not yet connected to lemmas, so that it is necessary to find a ""nearest"" UW in the UNL-LG dictionary, such as ""look for (icl>action, agt>human, obj>thing)"". Then, this UW may be connected to several lemmas of LG. In order to solve these problems of incompleteness and polysemy, we are applying a method based on the computation of ""conceptual vectors"", previously used successfully in the context of thematic indexing of French and English documents.",2002
chinchor-marsh-1998-appendix,https://aclanthology.org/M98-1027,0,,,,,,,"Appendix D: MUC-7 Information Extraction Task Definition (version 5.1). Brief Definition of Information Extraction Task Information extraction in the sense of the Message Understanding Conferences has been traditionally defined as the extraction of information from a text in the form of text strings and processed text strings which are placed into slots labeled to indicate the kind of information that can fill them. So, for example, a slot labeled NAME would contain a name string taken directly out of the text or modified in some well-defined way, such as by deleting all but the person's surname. Another example could be a slot called WEAPON which requires as a fill one of a set of designated classes of weapons based on some categorization of the weapons that has meaning in the events of import such as GUN or BOMB in a terrorist event. The input to information extraction is a set of texts, usually unclassified newswire articles, and the output is a set of filled slots. The set of filled slots may represent an entity with its attributes, a relationship between two or more entities, or an event with various entities playing roles and/or being in certain relationships. Entities with their attributes are extracted in the Template Element task; relationships between two or more entities are extracted in the Template Relation task; and events with various entities playing roles and/or being in certain relationships are extracted in the Scenario Template task. 1.",Appendix {D}: {MUC}-7 Information Extraction Task Definition (version 5.1),"Brief Definition of Information Extraction Task Information extraction in the sense of the Message Understanding Conferences has been traditionally defined as the extraction of information from a text in the form of text strings and processed text strings which are placed into slots labeled to indicate the kind of information that can fill them. So, for example, a slot labeled NAME would contain a name string taken directly out of the text or modified in some well-defined way, such as by deleting all but the person's surname. Another example could be a slot called WEAPON which requires as a fill one of a set of designated classes of weapons based on some categorization of the weapons that has meaning in the events of import such as GUN or BOMB in a terrorist event. The input to information extraction is a set of texts, usually unclassified newswire articles, and the output is a set of filled slots. The set of filled slots may represent an entity with its attributes, a relationship between two or more entities, or an event with various entities playing roles and/or being in certain relationships. Entities with their attributes are extracted in the Template Element task; relationships between two or more entities are extracted in the Template Relation task; and events with various entities playing roles and/or being in certain relationships are extracted in the Scenario Template task. 1.",Appendix D: MUC-7 Information Extraction Task Definition (version 5.1),"Brief Definition of Information Extraction Task Information extraction in the sense of the Message Understanding Conferences has been traditionally defined as the extraction of information from a text in the form of text strings and processed text strings which are placed into slots labeled to indicate the kind of information that can fill them. 
So, for example, a slot labeled NAME would contain a name string taken directly out of the text or modified in some well-defined way, such as by deleting all but the person's surname. Another example could be a slot called WEAPON which requires as a fill one of a set of designated classes of weapons based on some categorization of the weapons that has meaning in the events of import such as GUN or BOMB in a terrorist event. The input to information extraction is a set of texts, usually unclassified newswire articles, and the output is a set of filled slots. The set of filled slots may represent an entity with its attributes, a relationship between two or more entities, or an event with various entities playing roles and/or being in certain relationships. Entities with their attributes are extracted in the Template Element task; relationships between two or more entities are extracted in the Template Relation task; and events with various entities playing roles and/or being in certain relationships are extracted in the Scenario Template task. 1.",,"Appendix D: MUC-7 Information Extraction Task Definition (version 5.1). Brief Definition of Information Extraction Task Information extraction in the sense of the Message Understanding Conferences has been traditionally defined as the extraction of information from a text in the form of text strings and processed text strings which are placed into slots labeled to indicate the kind of information that can fill them. So, for example, a slot labeled NAME would contain a name string taken directly out of the text or modified in some well-defined way, such as by deleting all but the person's surname. Another example could be a slot called WEAPON which requires as a fill one of a set of designated classes of weapons based on some categorization of the weapons that has meaning in the events of import such as GUN or BOMB in a terrorist event. The input to information extraction is a set of texts, usually unclassified newswire articles, and the output is a set of filled slots. The set of filled slots may represent an entity with its attributes, a relationship between two or more entities, or an event with various entities playing roles and/or being in certain relationships. Entities with their attributes are extracted in the Template Element task; relationships between two or more entities are extracted in the Template Relation task; and events with various entities playing roles and/or being in certain relationships are extracted in the Scenario Template task. 1.",1998
choi-etal-2008-overcome,https://aclanthology.org/Y08-1015,0,,,,,,,"How to Overcome the Domain Barriers in Pattern-Based Machine Translation System. One of difficult issues in pattern-based machine translation system is maybe to find how to overcome the domain difference in adapting a system from one domain to other domain. This paper describes how we have resolved such barriers among domains as default target word of any domain, domain-specific patterns, and domain adaptation of engine modules in pattern-based machine translation system, especially English-Korean pattern-based machine translation system. For this, we will discuss two types of customization methods which mean a method adapting an existing system to new domain. One is the pure customization method introduced for patent machine translation system in 2006 and another is the upgraded customization method applied to scientific paper machine translation system in 2007. By introducing an upgraded customization method, we could implement a practical machine translation system for scientific paper translation within 8 months, in comparison with the patent machine translation system that was completed even in 24 months by the pure customization method. The translation accuracy of scientific paper machine translation system also rose 77.25% to 81.10% in spite of short term of 8 months.",How to Overcome the Domain Barriers in Pattern-Based Machine Translation System,"One of difficult issues in pattern-based machine translation system is maybe to find how to overcome the domain difference in adapting a system from one domain to other domain. This paper describes how we have resolved such barriers among domains as default target word of any domain, domain-specific patterns, and domain adaptation of engine modules in pattern-based machine translation system, especially English-Korean pattern-based machine translation system. For this, we will discuss two types of customization methods which mean a method adapting an existing system to new domain. One is the pure customization method introduced for patent machine translation system in 2006 and another is the upgraded customization method applied to scientific paper machine translation system in 2007. By introducing an upgraded customization method, we could implement a practical machine translation system for scientific paper translation within 8 months, in comparison with the patent machine translation system that was completed even in 24 months by the pure customization method. The translation accuracy of scientific paper machine translation system also rose 77.25% to 81.10% in spite of short term of 8 months.",How to Overcome the Domain Barriers in Pattern-Based Machine Translation System,"One of difficult issues in pattern-based machine translation system is maybe to find how to overcome the domain difference in adapting a system from one domain to other domain. This paper describes how we have resolved such barriers among domains as default target word of any domain, domain-specific patterns, and domain adaptation of engine modules in pattern-based machine translation system, especially English-Korean pattern-based machine translation system. For this, we will discuss two types of customization methods which mean a method adapting an existing system to new domain. One is the pure customization method introduced for patent machine translation system in 2006 and another is the upgraded customization method applied to scientific paper machine translation system in 2007. 
By introducing an upgraded customization method, we could implement a practical machine translation system for scientific paper translation within 8 months, in comparison with the patent machine translation system that was completed even in 24 months by the pure customization method. The translation accuracy of scientific paper machine translation system also rose 77.25% to 81.10% in spite of short term of 8 months.",,"How to Overcome the Domain Barriers in Pattern-Based Machine Translation System. One of difficult issues in pattern-based machine translation system is maybe to find how to overcome the domain difference in adapting a system from one domain to other domain. This paper describes how we have resolved such barriers among domains as default target word of any domain, domain-specific patterns, and domain adaptation of engine modules in pattern-based machine translation system, especially English-Korean pattern-based machine translation system. For this, we will discuss two types of customization methods which mean a method adapting an existing system to new domain. One is the pure customization method introduced for patent machine translation system in 2006 and another is the upgraded customization method applied to scientific paper machine translation system in 2007. By introducing an upgraded customization method, we could implement a practical machine translation system for scientific paper translation within 8 months, in comparison with the patent machine translation system that was completed even in 24 months by the pure customization method. The translation accuracy of scientific paper machine translation system also rose 77.25% to 81.10% in spite of short term of 8 months.",2008
hassert-etal-2021-ud,https://aclanthology.org/2021.udw-1.5,0,,,,,,,"UD on Software Requirements: Application and Challenges. Technical documents present distinct challenges when used in natural language processing tasks such as part-of-speech tagging or syntactic parsing. This is mainly due to the nature of their content, which may differ greatly from more studied texts like news articles, encyclopedic extracts or social media entries. This work contributes an English corpus composed of software requirement texts annotated in Universal Dependencies (UD) to study the differences, challenges and issues encountered on these documents when following the UD guidelines. Different structural and linguistic phenomena are studied in the light of their impact on manual and automatic dependency annotation. To better cope with texts of this nature, some modifications and features are proposed in order to enrich the existing UD guidelines to better cover technical texts. The proposed corpus is compared to other existing corpora to show the structural complexity of the texts as well as the challenge it presents to recent processing methods. This contribution is the first software requirement corpus annotated with UD relations.",{UD} on Software Requirements: Application and Challenges,"Technical documents present distinct challenges when used in natural language processing tasks such as part-of-speech tagging or syntactic parsing. This is mainly due to the nature of their content, which may differ greatly from more studied texts like news articles, encyclopedic extracts or social media entries. This work contributes an English corpus composed of software requirement texts annotated in Universal Dependencies (UD) to study the differences, challenges and issues encountered on these documents when following the UD guidelines. Different structural and linguistic phenomena are studied in the light of their impact on manual and automatic dependency annotation. To better cope with texts of this nature, some modifications and features are proposed in order to enrich the existing UD guidelines to better cover technical texts. The proposed corpus is compared to other existing corpora to show the structural complexity of the texts as well as the challenge it presents to recent processing methods. This contribution is the first software requirement corpus annotated with UD relations.",UD on Software Requirements: Application and Challenges,"Technical documents present distinct challenges when used in natural language processing tasks such as part-of-speech tagging or syntactic parsing. This is mainly due to the nature of their content, which may differ greatly from more studied texts like news articles, encyclopedic extracts or social media entries. This work contributes an English corpus composed of software requirement texts annotated in Universal Dependencies (UD) to study the differences, challenges and issues encountered on these documents when following the UD guidelines. Different structural and linguistic phenomena are studied in the light of their impact on manual and automatic dependency annotation. To better cope with texts of this nature, some modifications and features are proposed in order to enrich the existing UD guidelines to better cover technical texts. The proposed corpus is compared to other existing corpora to show the structural complexity of the texts as well as the challenge it presents to recent processing methods. 
This contribution is the first software requirement corpus annotated with UD relations.",,"UD on Software Requirements: Application and Challenges. Technical documents present distinct challenges when used in natural language processing tasks such as part-of-speech tagging or syntactic parsing. This is mainly due to the nature of their content, which may differ greatly from more studied texts like news articles, encyclopedic extracts or social media entries. This work contributes an English corpus composed of software requirement texts annotated in Universal Dependencies (UD) to study the differences, challenges and issues encountered on these documents when following the UD guidelines. Different structural and linguistic phenomena are studied in the light of their impact on manual and automatic dependency annotation. To better cope with texts of this nature, some modifications and features are proposed in order to enrich the existing UD guidelines to better cover technical texts. The proposed corpus is compared to other existing corpora to show the structural complexity of the texts as well as the challenge it presents to recent processing methods. This contribution is the first software requirement corpus annotated with UD relations.",2021
sakakini-etal-2019-equipping,https://aclanthology.org/W19-4448,1,,,,education,,,"Equipping Educational Applications with Domain Knowledge. One of the challenges of building natural language processing (NLP) applications for education is finding a large domain-specific corpus for the subject of interest (e.g., history or science). To address this challenge, we propose a tool, Dexter, that extracts a subject-specific corpus from a heterogeneous corpus, such as Wikipedia, by relying on a small seed corpus and distributed document representations. We empirically show the impact of the generated corpus on language modeling, estimating word embeddings, and consequently, distractor generation, resulting in a better performance than while using a general domain corpus, a heuristically constructed domain-specific corpus, and a corpus generated by a popular system: BootCaT.",Equipping Educational Applications with Domain Knowledge,"One of the challenges of building natural language processing (NLP) applications for education is finding a large domain-specific corpus for the subject of interest (e.g., history or science). To address this challenge, we propose a tool, Dexter, that extracts a subject-specific corpus from a heterogeneous corpus, such as Wikipedia, by relying on a small seed corpus and distributed document representations. We empirically show the impact of the generated corpus on language modeling, estimating word embeddings, and consequently, distractor generation, resulting in a better performance than while using a general domain corpus, a heuristically constructed domain-specific corpus, and a corpus generated by a popular system: BootCaT.",Equipping Educational Applications with Domain Knowledge,"One of the challenges of building natural language processing (NLP) applications for education is finding a large domain-specific corpus for the subject of interest (e.g., history or science). To address this challenge, we propose a tool, Dexter, that extracts a subject-specific corpus from a heterogeneous corpus, such as Wikipedia, by relying on a small seed corpus and distributed document representations. We empirically show the impact of the generated corpus on language modeling, estimating word embeddings, and consequently, distractor generation, resulting in a better performance than while using a general domain corpus, a heuristically constructed domain-specific corpus, and a corpus generated by a popular system: BootCaT.",This work is supported by IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR) - a research collaboration as part of the IBM AI Horizons Network.,"Equipping Educational Applications with Domain Knowledge. One of the challenges of building natural language processing (NLP) applications for education is finding a large domain-specific corpus for the subject of interest (e.g., history or science). To address this challenge, we propose a tool, Dexter, that extracts a subject-specific corpus from a heterogeneous corpus, such as Wikipedia, by relying on a small seed corpus and distributed document representations. We empirically show the impact of the generated corpus on language modeling, estimating word embeddings, and consequently, distractor generation, resulting in a better performance than while using a general domain corpus, a heuristically constructed domain-specific corpus, and a corpus generated by a popular system: BootCaT.",2019
ulinski-etal-2019-spatialnet,https://aclanthology.org/W19-1607,0,,,,,,,"SpatialNet: A Declarative Resource for Spatial Relations. This paper introduces SpatialNet, a novel resource which links linguistic expressions to actual spatial configurations. SpatialNet is based on FrameNet (Ruppenhofer et al., 2016) and VigNet (Coyne et al., 2011), two resources which use frame semantics to encode lexical meaning. SpatialNet uses a deep semantic representation of spatial relations to provide a formal description of how a language expresses spatial information. This formal representation of the lexical semantics of spatial language also provides a consistent way to represent spatial meaning across multiple languages. In this paper, we describe the structure of SpatialNet, with examples from English and German. We also show how SpatialNet can be combined with other existing NLP tools to create a text-to-scene system for a language.",{S}patial{N}et: A Declarative Resource for Spatial Relations,"This paper introduces SpatialNet, a novel resource which links linguistic expressions to actual spatial configurations. SpatialNet is based on FrameNet (Ruppenhofer et al., 2016) and VigNet (Coyne et al., 2011), two resources which use frame semantics to encode lexical meaning. SpatialNet uses a deep semantic representation of spatial relations to provide a formal description of how a language expresses spatial information. This formal representation of the lexical semantics of spatial language also provides a consistent way to represent spatial meaning across multiple languages. In this paper, we describe the structure of SpatialNet, with examples from English and German. We also show how SpatialNet can be combined with other existing NLP tools to create a text-to-scene system for a language.",SpatialNet: A Declarative Resource for Spatial Relations,"This paper introduces SpatialNet, a novel resource which links linguistic expressions to actual spatial configurations. SpatialNet is based on FrameNet (Ruppenhofer et al., 2016) and VigNet (Coyne et al., 2011), two resources which use frame semantics to encode lexical meaning. SpatialNet uses a deep semantic representation of spatial relations to provide a formal description of how a language expresses spatial information. This formal representation of the lexical semantics of spatial language also provides a consistent way to represent spatial meaning across multiple languages. In this paper, we describe the structure of SpatialNet, with examples from English and German. We also show how SpatialNet can be combined with other existing NLP tools to create a text-to-scene system for a language.",,"SpatialNet: A Declarative Resource for Spatial Relations. This paper introduces SpatialNet, a novel resource which links linguistic expressions to actual spatial configurations. SpatialNet is based on FrameNet (Ruppenhofer et al., 2016) and VigNet (Coyne et al., 2011), two resources which use frame semantics to encode lexical meaning. SpatialNet uses a deep semantic representation of spatial relations to provide a formal description of how a language expresses spatial information. This formal representation of the lexical semantics of spatial language also provides a consistent way to represent spatial meaning across multiple languages. In this paper, we describe the structure of SpatialNet, with examples from English and German. We also show how SpatialNet can be combined with other existing NLP tools to create a text-to-scene system for a language.",2019
van-der-wees-etal-2015-whats,https://aclanthology.org/P15-2092,0,,,,,,,"What's in a Domain? Analyzing Genre and Topic Differences in Statistical Machine Translation. Domain adaptation is an active field of research in statistical machine translation (SMT), but so far most work has ignored the distinction between the topic and genre of documents. In this paper we quantify and disentangle the impact of genre and topic differences on translation quality by introducing a new data set that has controlled topic and genre distributions. In addition, we perform a detailed analysis showing that differences across topics only explain to a limited degree translation performance differences across genres, and that genre-specific errors are more attributable to model coverage than to suboptimal scoring of translation candidates.",What{'}s in a Domain? Analyzing Genre and Topic Differences in Statistical Machine Translation,"Domain adaptation is an active field of research in statistical machine translation (SMT), but so far most work has ignored the distinction between the topic and genre of documents. In this paper we quantify and disentangle the impact of genre and topic differences on translation quality by introducing a new data set that has controlled topic and genre distributions. In addition, we perform a detailed analysis showing that differences across topics only explain to a limited degree translation performance differences across genres, and that genre-specific errors are more attributable to model coverage than to suboptimal scoring of translation candidates.",What's in a Domain? Analyzing Genre and Topic Differences in Statistical Machine Translation,"Domain adaptation is an active field of research in statistical machine translation (SMT), but so far most work has ignored the distinction between the topic and genre of documents. In this paper we quantify and disentangle the impact of genre and topic differences on translation quality by introducing a new data set that has controlled topic and genre distributions. In addition, we perform a detailed analysis showing that differences across topics only explain to a limited degree translation performance differences across genres, and that genre-specific errors are more attributable to model coverage than to suboptimal scoring of translation candidates.","This research was funded in part by the Netherlands Organization for Scientific Research (NWO) under project number 639.022.213. We thank Rachel Cotterill, Nigel Dewdney, and the anonymous reviewers for their valuable comments.","What's in a Domain? Analyzing Genre and Topic Differences in Statistical Machine Translation. Domain adaptation is an active field of research in statistical machine translation (SMT), but so far most work has ignored the distinction between the topic and genre of documents. In this paper we quantify and disentangle the impact of genre and topic differences on translation quality by introducing a new data set that has controlled topic and genre distributions. In addition, we perform a detailed analysis showing that differences across topics only explain to a limited degree translation performance differences across genres, and that genre-specific errors are more attributable to model coverage than to suboptimal scoring of translation candidates.",2015
suresh-ong-2021-negatives,https://aclanthology.org/2021.emnlp-main.359,0,,,,,,,"Not All Negatives are Equal: Label-Aware Contrastive Loss for Fine-grained Text Classification. Fine-grained classification involves dealing with datasets with larger number of classes with subtle differences between them. Guiding the model to focus on differentiating dimensions between these commonly confusable classes is key to improving performance on fine-grained tasks. In this work, we analyse the contrastive fine-tuning of pre-trained language models on two fine-grained text classification tasks, emotion classification and sentiment analysis. We adaptively embed class relationships into a contrastive objective function to help differently weigh the positives and negatives, and in particular, weighting closely confusable negatives more than less similar negative examples. We find that Label-aware Contrastive Loss outperforms previous contrastive methods, in the presence of larger number and/or more confusable classes, and helps models to produce output distributions that are more differentiated.",Not All Negatives are Equal: {L}abel-Aware Contrastive Loss for Fine-grained Text Classification,"Fine-grained classification involves dealing with datasets with larger number of classes with subtle differences between them. Guiding the model to focus on differentiating dimensions between these commonly confusable classes is key to improving performance on fine-grained tasks. In this work, we analyse the contrastive fine-tuning of pre-trained language models on two fine-grained text classification tasks, emotion classification and sentiment analysis. We adaptively embed class relationships into a contrastive objective function to help differently weigh the positives and negatives, and in particular, weighting closely confusable negatives more than less similar negative examples. We find that Label-aware Contrastive Loss outperforms previous contrastive methods, in the presence of larger number and/or more confusable classes, and helps models to produce output distributions that are more differentiated.",Not All Negatives are Equal: Label-Aware Contrastive Loss for Fine-grained Text Classification,"Fine-grained classification involves dealing with datasets with larger number of classes with subtle differences between them. Guiding the model to focus on differentiating dimensions between these commonly confusable classes is key to improving performance on fine-grained tasks. In this work, we analyse the contrastive fine-tuning of pre-trained language models on two fine-grained text classification tasks, emotion classification and sentiment analysis. We adaptively embed class relationships into a contrastive objective function to help differently weigh the positives and negatives, and in particular, weighting closely confusable negatives more than less similar negative examples. We find that Label-aware Contrastive Loss outperforms previous contrastive methods, in the presence of larger number and/or more confusable classes, and helps models to produce output distributions that are more differentiated.","This research is supported by the National Research Foundation, Singapore under its AI Singapore Program (AISG Award No: AISG2-RP-2020-016).","Not All Negatives are Equal: Label-Aware Contrastive Loss for Fine-grained Text Classification. Fine-grained classification involves dealing with datasets with larger number of classes with subtle differences between them. 
Guiding the model to focus on differentiating dimensions between these commonly confusable classes is key to improving performance on fine-grained tasks. In this work, we analyse the contrastive fine-tuning of pre-trained language models on two fine-grained text classification tasks, emotion classification and sentiment analysis. We adaptively embed class relationships into a contrastive objective function to help differently weigh the positives and negatives, and in particular, weighting closely confusable negatives more than less similar negative examples. We find that Label-aware Contrastive Loss outperforms previous contrastive methods, in the presence of larger number and/or more confusable classes, and helps models to produce output distributions that are more differentiated.",2021
gollapalli-etal-2020-ester,https://aclanthology.org/2020.findings-emnlp.93,0,,,,,,,"ESTeR: Combining Word Co-occurrences and Word Associations for Unsupervised Emotion Detection. Accurate detection of emotions in user-generated text was shown to have several applications for e-commerce, public well-being, and disaster management. Currently, the state-of-the-art performance for emotion detection in text is obtained using complex, deep learning models trained on domain-specific, labeled data. In this paper, we propose Emotion-Sensitive TextRank (ESTeR), an unsupervised model for identifying emotions using a novel similarity function based on random walks on graphs. Our model combines large-scale word co-occurrence information with word associations from lexicons avoiding not only the dependence on labeled datasets, but also an explicit mapping of words to latent spaces used in emotion-enriched word embeddings. Our similarity function can also be computed efficiently. We study a diverse range of datasets including recent tweets related to COVID-19 to illustrate the superior performance of our model and report insights on public emotions during the ongoing pandemic.",{EST}e{R}: Combining Word Co-occurrences and Word Associations for Unsupervised Emotion Detection,"Accurate detection of emotions in user-generated text was shown to have several applications for e-commerce, public well-being, and disaster management. Currently, the state-of-the-art performance for emotion detection in text is obtained using complex, deep learning models trained on domain-specific, labeled data. In this paper, we propose Emotion-Sensitive TextRank (ESTeR), an unsupervised model for identifying emotions using a novel similarity function based on random walks on graphs. Our model combines large-scale word co-occurrence information with word associations from lexicons avoiding not only the dependence on labeled datasets, but also an explicit mapping of words to latent spaces used in emotion-enriched word embeddings. Our similarity function can also be computed efficiently. We study a diverse range of datasets including recent tweets related to COVID-19 to illustrate the superior performance of our model and report insights on public emotions during the ongoing pandemic.",ESTeR: Combining Word Co-occurrences and Word Associations for Unsupervised Emotion Detection,"Accurate detection of emotions in user-generated text was shown to have several applications for e-commerce, public well-being, and disaster management. Currently, the state-of-the-art performance for emotion detection in text is obtained using complex, deep learning models trained on domain-specific, labeled data. In this paper, we propose Emotion-Sensitive TextRank (ESTeR), an unsupervised model for identifying emotions using a novel similarity function based on random walks on graphs. Our model combines large-scale word co-occurrence information with word associations from lexicons avoiding not only the dependence on labeled datasets, but also an explicit mapping of words to latent spaces used in emotion-enriched word embeddings. Our similarity function can also be computed efficiently. We study a diverse range of datasets including recent tweets related to COVID-19 to illustrate the superior performance of our model and report insights on public emotions during the ongoing pandemic.","This research/project is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG-GC-2019-001). 
Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.","ESTeR: Combining Word Co-occurrences and Word Associations for Unsupervised Emotion Detection. Accurate detection of emotions in user-generated text was shown to have several applications for e-commerce, public well-being, and disaster management. Currently, the state-of-the-art performance for emotion detection in text is obtained using complex, deep learning models trained on domain-specific, labeled data. In this paper, we propose Emotion-Sensitive TextRank (ESTeR), an unsupervised model for identifying emotions using a novel similarity function based on random walks on graphs. Our model combines large-scale word co-occurrence information with word associations from lexicons avoiding not only the dependence on labeled datasets, but also an explicit mapping of words to latent spaces used in emotion-enriched word embeddings. Our similarity function can also be computed efficiently. We study a diverse range of datasets including recent tweets related to COVID-19 to illustrate the superior performance of our model and report insights on public emotions during the ongoing pandemic.",2020
hu-etal-2018-shot,https://aclanthology.org/C18-1041,1,,,,peace_justice_and_strong_institutions,,,"Few-Shot Charge Prediction with Discriminative Legal Attributes. Automatic charge prediction aims to predict the final charges according to the fact descriptions in criminal cases and plays a crucial role in legal assistant systems. Existing works on charge prediction perform adequately on those high-frequency charges but are not yet capable of predicting few-shot charges with limited cases. Moreover, there exist many confusing charge pairs, whose fact descriptions are fairly similar to each other. To address these issues, we introduce several discriminative attributes of charges as the internal mapping between fact descriptions and charges. These attributes provide additional information for few-shot charges, as well as effective signals for distinguishing confusing charges. More specifically, we propose an attribute-attentive charge prediction model to infer the attributes and charges simultaneously. Experimental results on real-world datasets demonstrate that our proposed model achieves significant and consistent improvements over other state-of-the-art baselines. Specifically, our model outperforms other baselines by more than 50% in the few-shot scenario. Our codes and datasets can be obtained from https://github.com/thunlp/attribute_charge.",Few-Shot Charge Prediction with Discriminative Legal Attributes,"Automatic charge prediction aims to predict the final charges according to the fact descriptions in criminal cases and plays a crucial role in legal assistant systems. Existing works on charge prediction perform adequately on those high-frequency charges but are not yet capable of predicting few-shot charges with limited cases. Moreover, there exist many confusing charge pairs, whose fact descriptions are fairly similar to each other. To address these issues, we introduce several discriminative attributes of charges as the internal mapping between fact descriptions and charges. These attributes provide additional information for few-shot charges, as well as effective signals for distinguishing confusing charges. More specifically, we propose an attribute-attentive charge prediction model to infer the attributes and charges simultaneously. Experimental results on real-world datasets demonstrate that our proposed model achieves significant and consistent improvements over other state-of-the-art baselines. Specifically, our model outperforms other baselines by more than 50% in the few-shot scenario. Our codes and datasets can be obtained from https://github.com/thunlp/attribute_charge.",Few-Shot Charge Prediction with Discriminative Legal Attributes,"Automatic charge prediction aims to predict the final charges according to the fact descriptions in criminal cases and plays a crucial role in legal assistant systems. Existing works on charge prediction perform adequately on those high-frequency charges but are not yet capable of predicting few-shot charges with limited cases. Moreover, there exist many confusing charge pairs, whose fact descriptions are fairly similar to each other. To address these issues, we introduce several discriminative attributes of charges as the internal mapping between fact descriptions and charges. These attributes provide additional information for few-shot charges, as well as effective signals for distinguishing confusing charges. More specifically, we propose an attribute-attentive charge prediction model to infer the attributes and charges simultaneously. 
Experimental results on real-world datasets demonstrate that our proposed model achieves significant and consistent improvements over other state-of-the-art baselines. Specifically, our model outperforms other baselines by more than 50% in the few-shot scenario. Our codes and datasets can be obtained from https://github.com/thunlp/attribute_charge.","We thank all the anonymous reviewers for their insightful comments. This work is supported by the National Natural Science Foundation of China (NSFC No. 61661146007, 61572273) and Tsinghua University Initiative Scientific Research Program (20151080406). This research is part of the NExT++ project, supported by the National Research Foundation, Prime Minister's Office, Singapore under its IRC@Singapore Funding Initiative.","Few-Shot Charge Prediction with Discriminative Legal Attributes. Automatic charge prediction aims to predict the final charges according to the fact descriptions in criminal cases and plays a crucial role in legal assistant systems. Existing works on charge prediction perform adequately on those high-frequency charges but are not yet capable of predicting few-shot charges with limited cases. Moreover, there exist many confusing charge pairs, whose fact descriptions are fairly similar to each other. To address these issues, we introduce several discriminative attributes of charges as the internal mapping between fact descriptions and charges. These attributes provide additional information for few-shot charges, as well as effective signals for distinguishing confusing charges. More specifically, we propose an attribute-attentive charge prediction model to infer the attributes and charges simultaneously. Experimental results on real-world datasets demonstrate that our proposed model achieves significant and consistent improvements over other state-of-the-art baselines. Specifically, our model outperforms other baselines by more than 50% in the few-shot scenario. Our codes and datasets can be obtained from https://github.com/thunlp/attribute_charge.",2018
nouvel-etal-2012-coupling,https://aclanthology.org/W12-0510,0,,,,,,,"Coupling Knowledge-Based and Data-Driven Systems for Named Entity Recognition. Within Information Extraction tasks, Named Entity Recognition has received much attention over latest decades. From symbolic / knowledge-based to data-driven / machine-learning systems, many approaches have been experimented. Our work may be viewed as an attempt to bridge the gap from the data-driven perspective back to the knowledge-based one. We use a knowledge-based system, based on manually implemented transducers, that reaches satisfactory performances. It has the undisputable advantage of being modular. However, such a hand-crafted system requires substantial efforts to cope with dedicated tasks. In this context, we implemented a pattern extractor that extracts symbolic knowledge, using hierarchical sequential pattern mining over annotated corpora. To assess the accuracy of mined patterns, we designed a module that recognizes Named Entities in texts by determining their most probable boundaries. Instead of considering Named Entity Recognition as a labeling task, it relies on complex context-aware features provided by lower-level systems and considers the tagging task as a markovian process. Using thos systems, coupling knowledge-based system with extracted patterns is straightforward and leads to a competitive hybrid NE-tagger. We report experiments using this system and compare it to other hybridization strategies along with a baseline CRF model.",Coupling Knowledge-Based and Data-Driven Systems for Named Entity Recognition,"Within Information Extraction tasks, Named Entity Recognition has received much attention over latest decades. From symbolic / knowledge-based to data-driven / machine-learning systems, many approaches have been experimented. Our work may be viewed as an attempt to bridge the gap from the data-driven perspective back to the knowledge-based one. We use a knowledge-based system, based on manually implemented transducers, that reaches satisfactory performances. It has the undisputable advantage of being modular. However, such a hand-crafted system requires substantial efforts to cope with dedicated tasks. In this context, we implemented a pattern extractor that extracts symbolic knowledge, using hierarchical sequential pattern mining over annotated corpora. To assess the accuracy of mined patterns, we designed a module that recognizes Named Entities in texts by determining their most probable boundaries. Instead of considering Named Entity Recognition as a labeling task, it relies on complex context-aware features provided by lower-level systems and considers the tagging task as a markovian process. Using thos systems, coupling knowledge-based system with extracted patterns is straightforward and leads to a competitive hybrid NE-tagger. We report experiments using this system and compare it to other hybridization strategies along with a baseline CRF model.",Coupling Knowledge-Based and Data-Driven Systems for Named Entity Recognition,"Within Information Extraction tasks, Named Entity Recognition has received much attention over latest decades. From symbolic / knowledge-based to data-driven / machine-learning systems, many approaches have been experimented. Our work may be viewed as an attempt to bridge the gap from the data-driven perspective back to the knowledge-based one. We use a knowledge-based system, based on manually implemented transducers, that reaches satisfactory performances. 
It has the undisputable advantage of being modular. However, such a hand-crafted system requires substantial efforts to cope with dedicated tasks. In this context, we implemented a pattern extractor that extracts symbolic knowledge, using hierarchical sequential pattern mining over annotated corpora. To assess the accuracy of mined patterns, we designed a module that recognizes Named Entities in texts by determining their most probable boundaries. Instead of considering Named Entity Recognition as a labeling task, it relies on complex context-aware features provided by lower-level systems and considers the tagging task as a Markovian process. Using those systems, coupling the knowledge-based system with extracted patterns is straightforward and leads to a competitive hybrid NE-tagger. We report experiments using this system and compare it to other hybridization strategies along with a baseline CRF model.",,"Coupling Knowledge-Based and Data-Driven Systems for Named Entity Recognition. Within Information Extraction tasks, Named Entity Recognition has received much attention over the last decades. From symbolic / knowledge-based to data-driven / machine-learning systems, many approaches have been experimented with. Our work may be viewed as an attempt to bridge the gap from the data-driven perspective back to the knowledge-based one. We use a knowledge-based system, based on manually implemented transducers, that reaches satisfactory performances. It has the undisputable advantage of being modular. However, such a hand-crafted system requires substantial efforts to cope with dedicated tasks. In this context, we implemented a pattern extractor that extracts symbolic knowledge, using hierarchical sequential pattern mining over annotated corpora. To assess the accuracy of mined patterns, we designed a module that recognizes Named Entities in texts by determining their most probable boundaries. Instead of considering Named Entity Recognition as a labeling task, it relies on complex context-aware features provided by lower-level systems and considers the tagging task as a Markovian process. Using those systems, coupling the knowledge-based system with extracted patterns is straightforward and leads to a competitive hybrid NE-tagger. We report experiments using this system and compare it to other hybridization strategies along with a baseline CRF model.",2012
su-etal-2021-dependency,https://aclanthology.org/2021.conll-1.2,0,,,,,,,"Dependency Induction Through the Lens of Visual Perception. Most previous work on grammar induction focuses on learning phrasal or dependency structure purely from text. However, because the signal provided by text alone is limited, recently introduced visually grounded syntax models make use of multimodal information leading to improved performance in constituency grammar induction. However, as compared to dependency grammars, constituency grammars do not provide a straightforward way to incorporate visual information without enforcing language-specific heuristics. In this paper, we propose an unsupervised grammar induction model that leverages word concreteness and a structural vision-based heuristic to jointly learn constituency-structure and dependency-structure grammars. Our experiments find that concreteness is a strong indicator for learning dependency grammars, improving the direct attachment score (DAS) by over 50% as compared to state-of-the-art models trained on pure text. Next, we propose an extension of our model that leverages both word concreteness and visual semantic role labels in constituency and dependency parsing. Our experiments show that the proposed extension outperforms the current state-of-the-art visually grounded models in constituency parsing even with a smaller grammar size. 1",Dependency Induction Through the Lens of Visual Perception,"Most previous work on grammar induction focuses on learning phrasal or dependency structure purely from text. However, because the signal provided by text alone is limited, recently introduced visually grounded syntax models make use of multimodal information leading to improved performance in constituency grammar induction. However, as compared to dependency grammars, constituency grammars do not provide a straightforward way to incorporate visual information without enforcing language-specific heuristics. In this paper, we propose an unsupervised grammar induction model that leverages word concreteness and a structural vision-based heuristic to jointly learn constituency-structure and dependency-structure grammars. Our experiments find that concreteness is a strong indicator for learning dependency grammars, improving the direct attachment score (DAS) by over 50% as compared to state-of-the-art models trained on pure text. Next, we propose an extension of our model that leverages both word concreteness and visual semantic role labels in constituency and dependency parsing. Our experiments show that the proposed extension outperforms the current state-of-the-art visually grounded models in constituency parsing even with a smaller grammar size. 1",Dependency Induction Through the Lens of Visual Perception,"Most previous work on grammar induction focuses on learning phrasal or dependency structure purely from text. However, because the signal provided by text alone is limited, recently introduced visually grounded syntax models make use of multimodal information leading to improved performance in constituency grammar induction. However, as compared to dependency grammars, constituency grammars do not provide a straightforward way to incorporate visual information without enforcing language-specific heuristics. In this paper, we propose an unsupervised grammar induction model that leverages word concreteness and a structural vision-based heuristic to jointly learn constituency-structure and dependency-structure grammars. 
Our experiments find that concreteness is a strong indicator for learning dependency grammars, improving the direct attachment score (DAS) by over 50% as compared to state-of-the-art models trained on pure text. Next, we propose an extension of our model that leverages both word concreteness and visual semantic role labels in constituency and dependency parsing. Our experiments show that the proposed extension outperforms the current state-of-the-art visually grounded models in constituency parsing even with a smaller grammar size. 1","This work was supported in part by the DARPA GAILA project (award HR00111990063). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on. We would also like to thank the reviewers for their thoughtful comments.","Dependency Induction Through the Lens of Visual Perception. Most previous work on grammar induction focuses on learning phrasal or dependency structure purely from text. However, because the signal provided by text alone is limited, recently introduced visually grounded syntax models make use of multimodal information leading to improved performance in constituency grammar induction. However, as compared to dependency grammars, constituency grammars do not provide a straightforward way to incorporate visual information without enforcing language-specific heuristics. In this paper, we propose an unsupervised grammar induction model that leverages word concreteness and a structural vision-based heuristic to jointly learn constituency-structure and dependency-structure grammars. Our experiments find that concreteness is a strong indicator for learning dependency grammars, improving the direct attachment score (DAS) by over 50% as compared to state-of-the-art models trained on pure text. Next, we propose an extension of our model that leverages both word concreteness and visual semantic role labels in constituency and dependency parsing. Our experiments show that the proposed extension outperforms the current state-of-the-art visually grounded models in constituency parsing even with a smaller grammar size. 1",2021
inui-1996-internet,https://aclanthology.org/C96-2175,1,,,,education,,,"The Internet a ``natural'' channel for language learning. The network as a motivational source for using a foreign language. Electronic networks can be useful in many ways for language learners. First of all, network facilities (e-mail, news, WWW home-pages) minimize not only the boundaries of time and space, but they also help to break communication barriers. They are a wonderful tool for USING a foreign language. E-mail, for example, can be used not only for interaction between teachers and students, but also for interaction among students (collaborative learning). Students can even ask for help from friends or ""experts"" living elsewhere, on the other side of the globe. There have been quite a few attempts to introduce these new tools into the classroom. For example, there are several well established mailing lists between Japanese and foreign schools. This allows Japanese kids to practice, let's say English, by exchanging messages with students from ""abroad"", chatting about their favorite topics like music, sport or any other hobby. Obviously, this kind of communication is meaningful for the student, since s/he can talk about things s/he is concerned with. What role then can a CALL system play in this new setting? Rather than trying to play the role people are very good at (answering on the fly questions on any topic, common sense reasoning, etc.), a CALL system should assist people by providing the learner with information humans are generally fairly poor at. One way to help the user is by providing him with information (databases) he is looking for. For example, all language learners are concerned with lexicons. Having fabulous browsing tools, computers have a great advantage over traditional dictionaries. Also, people are not very good in explaining the contexts in which a word may be used, or in explaining the difference between two words. Last, but not least, existing NLP technology, such as parsing or machine translation, could be incorporated into the development of 'intelligent dictionaries'. However, before doing so, we have to consider several basic issues: what information is useful, that is, what information should be provided to the learner, when and how? For example, rather than killing the user by an information overflow, like these long lists of translations that most electronic dictionaries provide, lists in which the user has to dig deeply in order to find the relevant word, one could parametrize the level of detail, scope and grain size of translations for a given text or text fragment. In sum, there should be a balance between the information provided by the system and the user's competence. Following this line of reasoning we have started to work on a user friendly interface for a bilingual lexicon (English-Japanese). Two features of our prototype are worth mentioning: (a) the tool is implemented as a WWW",The {I}nternet a {``}natural{''} channel for language learning,"The network as a motivational source for using a foreign language. Electronic networks can be useful in many ways for language learners. First of all, network facilities (e-mail, news, WWW home-pages) minimize not only the boundaries of time and space, but they also help to break communication barriers. They are a wonderful tool for USING a foreign language. E-mail, for example, can be used not only for interaction between teachers and students, but also for interaction among students (collaborative learning). 
Students can even ask for help from friends or ""experts"" living elsewhere, on the other side of the globe. There have been quite a few attempts to introduce these new tools into the classroom. For example, there are several well established mailing lists between Japanese and foreign schools. This allows Japanese kids to practice, let's say English, by exchanging messages with students from ""abroad"", chatting about their favorite topics like music, sport or any other hobby. Obviously, this kind of communication is meaningful for the student, since s/he can talk about things s/he is concerned with. What role then can a CALL system play in this new setting? Rather than trying to play the role people are very good at (answering on the fly questions on any topic, common sense reasoning, etc.), a CALL system should assist people by providing the learner with information humans are generally fairly poor at. One way to help the user is by providing him with information (databases) he is looking for. For example, all language learners are concerned with lexicons. Having fabulous browsing tools, computers have a great advantage over traditional dictionaries. Also, people are not very good in explaining the contexts in which a word may be used, or in explaining the difference between two words. Last, but not least, existing NLP technology, such as parsing or machine translation, could be incorporated into the development of 'intelligent dictionaries'. However, before doing so, we have to consider several basic issues: what information is useful, that is, what information should be provided to the learner, when and how? For example, rather than killing the user by an information overflow, like these long lists of translations that most electronic dictionaries provide, lists in which the user has to dig deeply in order to find the relevant word, one could parametrize the level of detail, scope and grain size of translations for a given text or text fragment. In sum, there should be a balance between the information provided by the system and the user's competence. Following this line of reasoning we have started to work on a user friendly interface for a bilingual lexicon (English-Japanese). Two features of our prototype are worth mentioning: (a) the tool is implemented as a WWW",The Internet a ``natural'' channel for language learning,"The network as a motivational source for using a foreign language. Electronic networks can be useful in many ways for language learners. First of all, network facilities (e-mail, news, WWW home-pages) minimize not only the boundaries of time and space, but they also help to break communication barriers. They are a wonderful tool for USING a foreign language. E-mail, for example, can be used not only for interaction between teachers and students, but also for interaction among students (collaborative learning). Students can even ask for help from friends or ""experts"" living elsewhere, on the other side of the globe. There have been quite a few attempts to introduce these new tools into the classroom. For example, there are several well established mailing lists between Japanese and foreign schools. This allows Japanese kids to practice, let's say English, by exchanging messages with students from ""abroad"", chatting about their favorite topics like music, sport or any other hobby. Obviously, this kind of communication is meaningful for the student, since s/he can talk about things s/he is concerned with. What role then can a CALL system play in this new setting? 
Rather than trying to play the role people are very good at (answering on the fly questions on any topic, common sense reasoning, etc.), a CALL system should assist people by providing the learner with information humans are generally fairly poor at. One way to help the user is by providing him with information (databases) he is looking for. For example, all language learners are concerned with lexicons. Having fabulous browsing tools, computers have a great advantage over traditional dictionaries. Also, people are not very good in explaining the contexts in which a word may be used, or in explaining the difference between two words. Last, but not least, existing NLP technology, such as parsing or machine translation, could be incorporated into the development of 'intelligent dictionaries'. However, before doing so, we have to consider several basic issues: what information is useful, that is, what information should be provided to the learner, when and how? For example, rather than killing the user by an information overflow, like these long lists of translations that most electronic dictionaries provide, lists in which the user has to dig deeply in order to find the relevant word, one could parametrize the level of detail, scope and grain size of translations for a given text or text fragment. In sum, there should be a balance between the information provided by the system and the user's competence. Following this line of reasoning we have started to work on a user friendly interface for a bilingual lexicon (English-Japanese). Two features of our prototype are worth mentioning: (a) the tool is implemented as a WWW",,"The Internet a ``natural'' channel for language learning. The network as a motivational source for using a foreign language. Electronic networks can be useful in many ways for language learners. First of all, network facilities (e-mail, news, WWW home-pages) minimize not only the boundaries of time and space, but they also help to break communication barriers. They are a wonderful tool for USING a foreign language. E-mail, for example, can be used not only for interaction between teachers and students, but also for interaction among students (collaborative learning). Students can even ask for help from friends or ""experts"" living elsewhere, on the other side of the globe. There have been quite a few attempts to introduce these new tools into the classroom. For example, there are several well established mailing lists between Japanese and foreign schools. This allows Japanese kids to practice, let's say English, by exchanging messages with students from ""abroad"", chatting about their favorite topics like music, sport or any other hobby. Obviously, this kind of communication is meaningful for the student, since s/he can talk about things s/he is concerned with. What role then can a CALL system play in this new setting? Rather than trying to play the role people are very good at (answering on the fly questions on any topic, common sense reasoning, etc.), a CALL system should assist people by providing the learner with information humans are generally fairly poor at. One way to help the user is by providing him with information (databases) he is looking for. For example, all language learners are concerned with lexicons. Having fabulous browsing tools, computers have a great advantage over traditional dictionaries. Also, people are not very good in explaining the contexts in which a word may be used, or in explaining the difference between two words. 
Last, but not least, existing NLP technology, such as parsing or machine translation, could be incorporated into the development of 'intelligent dictionaries'. However, before doing so, we have to consider several basic issues: what information is useful, that is, what information should be provided to the learner, when and how? For example, rather than killing the user by an information overflow, like these long lists of translations that most electronic dictionaries provide, lists in which the user has to dig deeply in order to find the relevant word, one could parametrize the level of detail, scope and grain size of translations for a given text or text fragment. In sum, there should be a balance between the information provided by the system and the user's competence. Following this line of reasoning we have started to work on a user friendly interface for a bilingual lexicon (English-Japanese). Two features of our prototype are worth mentioning: (a) the tool is implemented as a WWW",1996
strzalkowskl-1990-invert,https://aclanthology.org/C90-2060,0,,,,,,,"How to Invert a Natural Language Parser Into an Efficient Generator: An Algorithm for Logic Grammars. The use of a single grammar in natural language parsing and generation is most desirable for a variety of reasons including efficiency, perspicuity, integrity, robustness, and a certain amount of elegance. In this paper we present an algorithm for automated inversion of a PROLOG-coded unification parser into an efficient unification generator, using the collections of minimal sets of essential arguments (MSEA) for predicates. The algorithm is also applicable to more abstract systems for writing logic grammars, such as DCG.",How to Invert a Natural Language Parser Into an Efficient Generator: An Algorithm for Logic Grammars,"The use of a single grammar in natural language parsing and generation is most desirable for a variety of reasons including efficiency, perspicuity, integrity, robustness, and a certain amount of elegance. In this paper we present an algorithm for automated inversion of a PROLOG-coded unification parser into an efficient unification generator, using the collections of minimal sets of essential arguments (MSEA) for predicates. The algorithm is also applicable to more abstract systems for writing logic grammars, such as DCG.",How to Invert a Natural Language Parser Into an Efficient Generator: An Algorithm for Logic Grammars,"The use of a single grammar in natural language parsing and generation is most desirable for a variety of reasons including efficiency, perspicuity, integrity, robustness, and a certain amount of elegance. In this paper we present an algorithm for automated inversion of a PROLOG-coded unification parser into an efficient unification generator, using the collections of minimal sets of essential arguments (MSEA) for predicates. The algorithm is also applicable to more abstract systems for writing logic grammars, such as DCG.",,"How to Invert a Natural Language Parser Into an Efficient Generator: An Algorithm for Logic Grammars. The use of a single grammar in natural language parsing and generation is most desirable for a variety of reasons including efficiency, perspicuity, integrity, robustness, and a certain amount of elegance. In this paper we present an algorithm for automated inversion of a PROLOG-coded unification parser into an efficient unification generator, using the collections of minimal sets of essential arguments (MSEA) for predicates. The algorithm is also applicable to more abstract systems for writing logic grammars, such as DCG.",1990
liu-etal-2010-semantic,https://aclanthology.org/C10-1079,0,,,,,,,"Semantic Role Labeling for News Tweets. News tweets that report what is happening have become an important real-time information source. We raise the problem of Semantic Role Labeling (SRL) for news tweets, which is meaningful for fine grained information extraction and retrieval. We present a self-supervised learning approach to train a domain specific SRL system to resolve the problem. A large volume of training data is automatically labeled, by leveraging the existing SRL system on news domain and content similarity between news and news tweets. On a human annotated test set, our system achieves state-of-the-art performance, outperforming the SRL system trained on news.",Semantic Role Labeling for News Tweets,"News tweets that report what is happening have become an important real-time information source. We raise the problem of Semantic Role Labeling (SRL) for news tweets, which is meaningful for fine grained information extraction and retrieval. We present a self-supervised learning approach to train a domain specific SRL system to resolve the problem. A large volume of training data is automatically labeled, by leveraging the existing SRL system on news domain and content similarity between news and news tweets. On a human annotated test set, our system achieves state-of-the-art performance, outperforming the SRL system trained on news.",Semantic Role Labeling for News Tweets,"News tweets that report what is happening have become an important real-time information source. We raise the problem of Semantic Role Labeling (SRL) for news tweets, which is meaningful for fine grained information extraction and retrieval. We present a self-supervised learning approach to train a domain specific SRL system to resolve the problem. A large volume of training data is automatically labeled, by leveraging the existing SRL system on news domain and content similarity between news and news tweets. On a human annotated test set, our system achieves state-of-the-art performance, outperforming the SRL system trained on news.",,"Semantic Role Labeling for News Tweets. News tweets that report what is happening have become an important real-time information source. We raise the problem of Semantic Role Labeling (SRL) for news tweets, which is meaningful for fine grained information extraction and retrieval. We present a self-supervised learning approach to train a domain specific SRL system to resolve the problem. A large volume of training data is automatically labeled, by leveraging the existing SRL system on news domain and content similarity between news and news tweets. On a human annotated test set, our system achieves state-of-the-art performance, outperforming the SRL system trained on news.",2010
chen-etal-2006-reordering,https://aclanthology.org/2006.iwslt-papers.4,0,,,,,,,"Reordering rules for phrase-based statistical machine translation. This paper proposes the use of rules automatically extracted from word aligned training data to model word reordering phenomena in phrase-based statistical machine translation. Scores computed from matching rules are used as additional feature functions in the rescoring stage of the automatic translation process from various languages to English, in the ambit of a popular traveling domain task. Rules are defined either on Part-of-Speech or words. Part-of-Speech rules are extracted from and applied to Chinese, while lexicalized rules are extracted from and applied to Chinese, Japanese and Arabic. Both Part-of-Speech and lexicalized rules yield an absolute improvement of the BLEU score of 0.4-0.9 points without affecting the NIST score, on the Chinese-to-English translation task. On other language pairs which differ a lot in the word order, the use of lexicalized rules allows to observe significant improvements as well.",Reordering rules for phrase-based statistical machine translation,"This paper proposes the use of rules automatically extracted from word aligned training data to model word reordering phenomena in phrase-based statistical machine translation. Scores computed from matching rules are used as additional feature functions in the rescoring stage of the automatic translation process from various languages to English, in the ambit of a popular traveling domain task. Rules are defined either on Part-of-Speech or words. Part-of-Speech rules are extracted from and applied to Chinese, while lexicalized rules are extracted from and applied to Chinese, Japanese and Arabic. Both Part-of-Speech and lexicalized rules yield an absolute improvement of the BLEU score of 0.4-0.9 points without affecting the NIST score, on the Chinese-to-English translation task. On other language pairs which differ a lot in the word order, the use of lexicalized rules allows to observe significant improvements as well.",Reordering rules for phrase-based statistical machine translation,"This paper proposes the use of rules automatically extracted from word aligned training data to model word reordering phenomena in phrase-based statistical machine translation. Scores computed from matching rules are used as additional feature functions in the rescoring stage of the automatic translation process from various languages to English, in the ambit of a popular traveling domain task. Rules are defined either on Part-of-Speech or words. Part-of-Speech rules are extracted from and applied to Chinese, while lexicalized rules are extracted from and applied to Chinese, Japanese and Arabic. Both Part-of-Speech and lexicalized rules yield an absolute improvement of the BLEU score of 0.4-0.9 points without affecting the NIST score, on the Chinese-to-English translation task. On other language pairs which differ a lot in the word order, the use of lexicalized rules allows to observe significant improvements as well.","This work has been funded by the European Union under the integrated project TC-STAR -Technology and Corpora for Speech-to-Speech Translation -(IST-2002-FP6-506738, http://www.tc-star.org).","Reordering rules for phrase-based statistical machine translation. This paper proposes the use of rules automatically extracted from word aligned training data to model word reordering phenomena in phrase-based statistical machine translation. 
Scores computed from matching rules are used as additional feature functions in the rescoring stage of the automatic translation process from various languages to English, in the ambit of a popular traveling domain task. Rules are defined either on Part-of-Speech or words. Part-of-Speech rules are extracted from and applied to Chinese, while lexicalized rules are extracted from and applied to Chinese, Japanese and Arabic. Both Part-of-Speech and lexicalized rules yield an absolute improvement of the BLEU score of 0.4-0.9 points without affecting the NIST score, on the Chinese-to-English translation task. On other language pairs which differ a lot in the word order, the use of lexicalized rules allows to observe significant improvements as well.",2006
padro-etal-2010-freeling,http://www.lrec-conf.org/proceedings/lrec2010/pdf/14_Paper.pdf,0,,,,,,,"FreeLing 2.1: Five Years of Open-source Language Processing Tools. FreeLing is an open-source multilingual language processing library providing a wide range of language analyzers for several languages. It offers text processing and language annotation facilities to natural language processing application developers, simplifying the task of building those applications. FreeLing is customizable and extensible. Developers can use the default linguistic resources (dictionaries, lexicons, grammars, etc.) directly, or extend them, adapt them to specific domains, or even develop new ones for specific languages. This paper overviews the recent history of this tool, summarizes the improvements and extensions incorporated in the latest version, and depicts the architecture of the library. Special focus is brought to the fact and consequences of the library being open-source: After five years and over 35,000 downloads, a growing user community has extended the initial three languages (English, Spanish and Catalan) to eight (adding Galician, Italian, Welsh, Portuguese, and Asturian), proving that the collaborative open model is a productive approach for the development of NLP tools and resources.",{F}ree{L}ing 2.1: Five Years of Open-source Language Processing Tools,"FreeLing is an open-source multilingual language processing library providing a wide range of language analyzers for several languages. It offers text processing and language annotation facilities to natural language processing application developers, simplifying the task of building those applications. FreeLing is customizable and extensible. Developers can use the default linguistic resources (dictionaries, lexicons, grammars, etc.) directly, or extend them, adapt them to specific domains, or even develop new ones for specific languages. This paper overviews the recent history of this tool, summarizes the improvements and extensions incorporated in the latest version, and depicts the architecture of the library. Special focus is brought to the fact and consequences of the library being open-source: After five years and over 35,000 downloads, a growing user community has extended the initial three languages (English, Spanish and Catalan) to eight (adding Galician, Italian, Welsh, Portuguese, and Asturian), proving that the collaborative open model is a productive approach for the development of NLP tools and resources.",FreeLing 2.1: Five Years of Open-source Language Processing Tools,"FreeLing is an open-source multilingual language processing library providing a wide range of language analyzers for several languages. It offers text processing and language annotation facilities to natural language processing application developers, simplifying the task of building those applications. FreeLing is customizable and extensible. Developers can use the default linguistic resources (dictionaries, lexicons, grammars, etc.) directly, or extend them, adapt them to specific domains, or even develop new ones for specific languages. This paper overviews the recent history of this tool, summarizes the improvements and extensions incorporated in the latest version, and depicts the architecture of the library. 
Special focus is brought to the fact and consequences of the library being open-source: After five years and over 35,000 downloads, a growing user community has extended the initial three languages (English, Spanish and Catalan) to eight (adding Galician, Italian, Welsh, Portuguese, and Asturian), proving that the collaborative open model is a productive approach for the development of NLP tools and resources.",This work has been partially funded by the Spanish Government via the KNOW2 (TIN2009-14715-C04-03/04) project.,"FreeLing 2.1: Five Years of Open-source Language Processing Tools. FreeLing is an open-source multilingual language processing library providing a wide range of language analyzers for several languages. It offers text processing and language annotation facilities to natural language processing application developers, simplifying the task of building those applications. FreeLing is customizable and extensible. Developers can use the default linguistic resources (dictionaries, lexicons, grammars, etc.) directly, or extend them, adapt them to specific domains, or even develop new ones for specific languages. This paper overviews the recent history of this tool, summarizes the improvements and extensions incorporated in the latest version, and depicts the architecture of the library. Special focus is brought to the fact and consequences of the library being open-source: After five years and over 35,000 downloads, a growing user community has extended the initial three languages (English, Spanish and Catalan) to eight (adding Galician, Italian, Welsh, Portuguese, and Asturian), proving that the collaborative open model is a productive approach for the development of NLP tools and resources.",2010
attnas-etal-2005-integration,https://aclanthology.org/2005.mtsummit-papers.28,0,,,,,,,Integration of SYSTRAN MT Systems in an Open Workflow. ,Integration of {SYSTRAN} {MT} Systems in an Open Workflow,,Integration of SYSTRAN MT Systems in an Open Workflow,,,Integration of SYSTRAN MT Systems in an Open Workflow. ,2005
trisedya-etal-2018-gtr,https://aclanthology.org/P18-1151,0,,,,,,,"GTR-LSTM: A Triple Encoder for Sentence Generation from RDF Data. A knowledge base is a large repository of facts that are mainly represented as RDF triples, each of which consists of a subject, a predicate (relationship), and an object. The RDF triple representation offers a simple interface for applications to access the facts. However, this representation is not in a natural language form, which is difficult for humans to understand. We address this problem by proposing a system to translate a set of RDF triples into natural sentences based on an encoder-decoder framework. To preserve as much information from RDF triples as possible, we propose a novel graph-based triple encoder. The proposed encoder encodes not only the elements of the triples but also the relationships both within a triple and between the triples. Experimental results show that the proposed encoder achieves a consistent improvement over the baseline models by up to 17.6%, 6.0%, and 16.4% in three common metrics BLEU, METEOR, and TER, respectively.",{GTR}-{LSTM}: A Triple Encoder for Sentence Generation from {RDF} Data,"A knowledge base is a large repository of facts that are mainly represented as RDF triples, each of which consists of a subject, a predicate (relationship), and an object. The RDF triple representation offers a simple interface for applications to access the facts. However, this representation is not in a natural language form, which is difficult for humans to understand. We address this problem by proposing a system to translate a set of RDF triples into natural sentences based on an encoder-decoder framework. To preserve as much information from RDF triples as possible, we propose a novel graph-based triple encoder. The proposed encoder encodes not only the elements of the triples but also the relationships both within a triple and between the triples. Experimental results show that the proposed encoder achieves a consistent improvement over the baseline models by up to 17.6%, 6.0%, and 16.4% in three common metrics BLEU, METEOR, and TER, respectively.",GTR-LSTM: A Triple Encoder for Sentence Generation from RDF Data,"A knowledge base is a large repository of facts that are mainly represented as RDF triples, each of which consists of a subject, a predicate (relationship), and an object. The RDF triple representation offers a simple interface for applications to access the facts. However, this representation is not in a natural language form, which is difficult for humans to understand. We address this problem by proposing a system to translate a set of RDF triples into natural sentences based on an encoder-decoder framework. To preserve as much information from RDF triples as possible, we propose a novel graph-based triple encoder. The proposed encoder encodes not only the elements of the triples but also the relationships both within a triple and between the triples. Experimental results show that the proposed encoder achieves a consistent improvement over the baseline models by up to 17.6%, 6.0%, and 16.4% in three common metrics BLEU, METEOR, and TER, respectively.","Bayu Distiawan Trisedya is supported by the Indonesian Endowment Fund for Education (LPDP). This work is supported by Australian Research Council (ARC) Discovery Project DP180102050 and Future Fellowships Project FT120100832, and Google Faculty Research Award. This work is partly done while Jianzhong Qi is visiting the University of New South Wales. 
Wei Wang was partially supported by D2DCRC DC25002, DC25003, ARC DP 170103710 and 180103411.","GTR-LSTM: A Triple Encoder for Sentence Generation from RDF Data. A knowledge base is a large repository of facts that are mainly represented as RDF triples, each of which consists of a subject, a predicate (relationship), and an object. The RDF triple representation offers a simple interface for applications to access the facts. However, this representation is not in a natural language form, which is difficult for humans to understand. We address this problem by proposing a system to translate a set of RDF triples into natural sentences based on an encoder-decoder framework. To preserve as much information from RDF triples as possible, we propose a novel graph-based triple encoder. The proposed encoder encodes not only the elements of the triples but also the relationships both within a triple and between the triples. Experimental results show that the proposed encoder achieves a consistent improvement over the baseline models by up to 17.6%, 6.0%, and 16.4% in three common metrics BLEU, METEOR, and TER, respectively.",2018
messina-etal-2021-aimh,https://aclanthology.org/2021.semeval-1.140,0,,,,,,,"AIMH at SemEval-2021 Task 6: Multimodal Classification Using an Ensemble of Transformer Models. This paper describes the system used by the AIMH Team to approach the SemEval Task 6. We propose an approach that relies on an architecture based on the transformer model to process multimodal content (text and images) in memes. Our architecture, called DVTT (Double Visual Textual Transformer), approaches Subtasks 1 and 3 of Task 6 as multi-label classification problems, where the text and/or images of the meme are processed, and the probabilities of the presence of each possible persuasion technique are returned as a result. DVTT uses two complete networks of transformers that work on text and images that are mutually conditioned. One of the two modalities acts as the main one and the second one intervenes to enrich the first one, thus obtaining two distinct ways of operation. The two transformers outputs are merged by averaging the inferred probabilities for each possible label, and the overall network is trained end-to-end with a binary cross-entropy loss.",{AIMH} at {S}em{E}val-2021 Task 6: Multimodal Classification Using an Ensemble of Transformer Models,"This paper describes the system used by the AIMH Team to approach the SemEval Task 6. We propose an approach that relies on an architecture based on the transformer model to process multimodal content (text and images) in memes. Our architecture, called DVTT (Double Visual Textual Transformer), approaches Subtasks 1 and 3 of Task 6 as multi-label classification problems, where the text and/or images of the meme are processed, and the probabilities of the presence of each possible persuasion technique are returned as a result. DVTT uses two complete networks of transformers that work on text and images that are mutually conditioned. One of the two modalities acts as the main one and the second one intervenes to enrich the first one, thus obtaining two distinct ways of operation. The two transformers outputs are merged by averaging the inferred probabilities for each possible label, and the overall network is trained end-to-end with a binary cross-entropy loss.",AIMH at SemEval-2021 Task 6: Multimodal Classification Using an Ensemble of Transformer Models,"This paper describes the system used by the AIMH Team to approach the SemEval Task 6. We propose an approach that relies on an architecture based on the transformer model to process multimodal content (text and images) in memes. Our architecture, called DVTT (Double Visual Textual Transformer), approaches Subtasks 1 and 3 of Task 6 as multi-label classification problems, where the text and/or images of the meme are processed, and the probabilities of the presence of each possible persuasion technique are returned as a result. DVTT uses two complete networks of transformers that work on text and images that are mutually conditioned. One of the two modalities acts as the main one and the second one intervenes to enrich the first one, thus obtaining two distinct ways of operation. The two transformers outputs are merged by averaging the inferred probabilities for each possible label, and the overall network is trained end-to-end with a binary cross-entropy loss.","This work was partially supported by ""Intelligenza Artificiale per il Monitoraggio Visuale dei Siti Culturali"" (AI4CHSites) CNR4C program, CUP B15J19001040004, by the AI4EU project, funded by the EC (H2020 -Contract n. 
825619), and AI4Media under GA 951911.","AIMH at SemEval-2021 Task 6: Multimodal Classification Using an Ensemble of Transformer Models. This paper describes the system used by the AIMH Team to approach the SemEval Task 6. We propose an approach that relies on an architecture based on the transformer model to process multimodal content (text and images) in memes. Our architecture, called DVTT (Double Visual Textual Transformer), approaches Subtasks 1 and 3 of Task 6 as multi-label classification problems, where the text and/or images of the meme are processed, and the probabilities of the presence of each possible persuasion technique are returned as a result. DVTT uses two complete networks of transformers that work on text and images that are mutually conditioned. One of the two modalities acts as the main one and the second one intervenes to enrich the first one, thus obtaining two distinct ways of operation. The two transformers outputs are merged by averaging the inferred probabilities for each possible label, and the overall network is trained end-to-end with a binary cross-entropy loss.",2021
balouchzahi-etal-2021-mucs,https://aclanthology.org/2021.dravidianlangtech-1.47,1,,,,hate_speech,,,"MUCS@DravidianLangTech-EACL2021:COOLI-Code-Mixing Offensive Language Identification. This paper describes the models submitted by the team MUCS for Offensive Language Identification in Dravidian Languages-EACL 2021 shared task that aims at identifying and classifying code-mixed texts of three language pairs namely, Kannada-English (Kn-En), Malayalam-English (Ma-En), and Tamil-English (Ta-En) into six predefined categories (5 categories in Ma-En language pair). Two models, namely, COOLI-Ensemble and COOLI-Keras are trained with the char sequences extracted from the sentences combined with words in the sentences as features. Out of the two proposed models, COOLI-Ensemble model (best among our models) obtained first rank for Ma-En language pair with 0.97 weighted F1-score and fourth and sixth ranks with 0.75 and 0.69 weighted F1-score for Ta-En and Kn-En language pairs respectively.",{MUCS}@{D}ravidian{L}ang{T}ech-{EACL}2021:{COOLI}-Code-Mixing Offensive Language Identification,"This paper describes the models submitted by the team MUCS for Offensive Language Identification in Dravidian Languages-EACL 2021 shared task that aims at identifying and classifying code-mixed texts of three language pairs namely, Kannada-English (Kn-En), Malayalam-English (Ma-En), and Tamil-English (Ta-En) into six predefined categories (5 categories in Ma-En language pair). Two models, namely, COOLI-Ensemble and COOLI-Keras are trained with the char sequences extracted from the sentences combined with words in the sentences as features. Out of the two proposed models, COOLI-Ensemble model (best among our models) obtained first rank for Ma-En language pair with 0.97 weighted F1-score and fourth and sixth ranks with 0.75 and 0.69 weighted F1-score for Ta-En and Kn-En language pairs respectively.",MUCS@DravidianLangTech-EACL2021:COOLI-Code-Mixing Offensive Language Identification,"This paper describes the models submitted by the team MUCS for Offensive Language Identification in Dravidian Languages-EACL 2021 shared task that aims at identifying and classifying code-mixed texts of three language pairs namely, Kannada-English (Kn-En), Malayalam-English (Ma-En), and Tamil-English (Ta-En) into six predefined categories (5 categories in Ma-En language pair). Two models, namely, COOLI-Ensemble and COOLI-Keras are trained with the char sequences extracted from the sentences combined with words in the sentences as features. Out of the two proposed models, COOLI-Ensemble model (best among our models) obtained first rank for Ma-En language pair with 0.97 weighted F1-score and fourth and sixth ranks with 0.75 and 0.69 weighted F1-score for Ta-En and Kn-En language pairs respectively.",,"MUCS@DravidianLangTech-EACL2021:COOLI-Code-Mixing Offensive Language Identification. This paper describes the models submitted by the team MUCS for Offensive Language Identification in Dravidian Languages-EACL 2021 shared task that aims at identifying and classifying code-mixed texts of three language pairs namely, Kannada-English (Kn-En), Malayalam-English (Ma-En), and Tamil-English (Ta-En) into six predefined categories (5 categories in Ma-En language pair). Two models, namely, COOLI-Ensemble and COOLI-Keras are trained with the char sequences extracted from the sentences combined with words in the sentences as features. 
Out of the two proposed models, COOLI-Ensemble model (best among our models) obtained first rank for Ma-En language pair with 0.97 weighted F1-score and fourth and sixth ranks with 0.75 and 0.69 weighted F1-score for Ta-En and Kn-En language pairs respectively.",2021
zhang-etal-2022-multilingual,https://aclanthology.org/2022.acl-long.287,0,,,,,,,"Multilingual Document-Level Translation Enables Zero-Shot Transfer From Sentences to Documents. Document-level neural machine translation (DocNMT) achieves coherent translations by incorporating cross-sentence context. However, for most language pairs there's a shortage of parallel documents, although parallel sentences are readily available. In this paper, we study whether and how contextual modeling in DocNMT is transferable via multilingual modeling. We focus on the scenario of zero-shot transfer from teacher languages with document level data to student languages with no documents but sentence level data, and for the first time treat document-level translation as a transfer learning problem. Using simple concatenation-based DocNMT, we explore the effect of 3 factors on the transfer: the number of teacher languages with document level data, the balance between document and sentence level data at training, and the data condition of parallel documents (genuine vs. backtranslated). Our experiments on Europarl-7 and IWSLT-10 show the feasibility of multilingual transfer for DocNMT, particularly on document-specific metrics. We observe that more teacher languages and adequate data balance both contribute to better transfer quality. Surprisingly, the transfer is less sensitive to the data condition, where multilingual DocNMT delivers decent performance with either backtranslated or genuine document pairs.",Multilingual Document-Level Translation Enables Zero-Shot Transfer From Sentences to Documents,"Document-level neural machine translation (DocNMT) achieves coherent translations by incorporating cross-sentence context. However, for most language pairs there's a shortage of parallel documents, although parallel sentences are readily available. In this paper, we study whether and how contextual modeling in DocNMT is transferable via multilingual modeling. We focus on the scenario of zero-shot transfer from teacher languages with document level data to student languages with no documents but sentence level data, and for the first time treat document-level translation as a transfer learning problem. Using simple concatenation-based DocNMT, we explore the effect of 3 factors on the transfer: the number of teacher languages with document level data, the balance between document and sentence level data at training, and the data condition of parallel documents (genuine vs. backtranslated). Our experiments on Europarl-7 and IWSLT-10 show the feasibility of multilingual transfer for DocNMT, particularly on document-specific metrics. We observe that more teacher languages and adequate data balance both contribute to better transfer quality. Surprisingly, the transfer is less sensitive to the data condition, where multilingual DocNMT delivers decent performance with either backtranslated or genuine document pairs.",Multilingual Document-Level Translation Enables Zero-Shot Transfer From Sentences to Documents,"Document-level neural machine translation (DocNMT) achieves coherent translations by incorporating cross-sentence context. However, for most language pairs there's a shortage of parallel documents, although parallel sentences are readily available. In this paper, we study whether and how contextual modeling in DocNMT is transferable via multilingual modeling. 
We focus on the scenario of zero-shot transfer from teacher languages with document level data to student languages with no documents but sentence level data, and for the first time treat document-level translation as a transfer learning problem. Using simple concatenation-based DocNMT, we explore the effect of 3 factors on the transfer: the number of teacher languages with document level data, the balance between document and sentence level data at training, and the data condition of parallel documents (genuine vs. backtranslated). Our experiments on Europarl-7 and IWSLT-10 show the feasibility of multilingual transfer for DocNMT, particularly on document-specific metrics. We observe that more teacher languages and adequate data balance both contribute to better transfer quality. Surprisingly, the transfer is less sensitive to the data condition, where multilingual DocNMT delivers decent performance with either backtranslated or genuine document pairs.",We thank the reviewers for their insightful comments. We want to thank Macduff Hughes and Wolfgang Macherey for their valuable feedback. We would also like to thank the Google Translate team for their constructive discussions and comments.,"Multilingual Document-Level Translation Enables Zero-Shot Transfer From Sentences to Documents. Document-level neural machine translation (DocNMT) achieves coherent translations by incorporating cross-sentence context. However, for most language pairs there's a shortage of parallel documents, although parallel sentences are readily available. In this paper, we study whether and how contextual modeling in DocNMT is transferable via multilingual modeling. We focus on the scenario of zero-shot transfer from teacher languages with document level data to student languages with no documents but sentence level data, and for the first time treat document-level translation as a transfer learning problem. Using simple concatenation-based DocNMT, we explore the effect of 3 factors on the transfer: the number of teacher languages with document level data, the balance between document and sentence level data at training, and the data condition of parallel documents (genuine vs. backtranslated). Our experiments on Europarl-7 and IWSLT-10 show the feasibility of multilingual transfer for DocNMT, particularly on document-specific metrics. We observe that more teacher languages and adequate data balance both contribute to better transfer quality. Surprisingly, the transfer is less sensitive to the data condition, where multilingual DocNMT delivers decent performance with either backtranslated or genuine document pairs.",2022
sanchis-trilles-etal-2011-bilingual,https://aclanthology.org/2011.eamt-1.35,0,,,,,,,"Bilingual segmentation for phrasetable pruning in Statistical Machine Translation. Statistical machine translation systems have greatly improved in the last years. However, this boost in performance usually comes at a high computational cost, yielding systems that are often not suitable for integration in hand-held or real-time devices. We describe a novel technique for reducing such cost by performing a Viterbi-style selection of the parameters of the translation model. We present results with finite state transducers and phrasebased models showing a 98% reduction of the number of parameters and a 15-fold increase in translation speed without any significant loss in translation quality.",Bilingual segmentation for phrasetable pruning in Statistical Machine Translation,"Statistical machine translation systems have greatly improved in the last years. However, this boost in performance usually comes at a high computational cost, yielding systems that are often not suitable for integration in hand-held or real-time devices. We describe a novel technique for reducing such cost by performing a Viterbi-style selection of the parameters of the translation model. We present results with finite state transducers and phrasebased models showing a 98% reduction of the number of parameters and a 15-fold increase in translation speed without any significant loss in translation quality.",Bilingual segmentation for phrasetable pruning in Statistical Machine Translation,"Statistical machine translation systems have greatly improved in the last years. However, this boost in performance usually comes at a high computational cost, yielding systems that are often not suitable for integration in hand-held or real-time devices. We describe a novel technique for reducing such cost by performing a Viterbi-style selection of the parameters of the translation model. We present results with finite state transducers and phrasebased models showing a 98% reduction of the number of parameters and a 15-fold increase in translation speed without any significant loss in translation quality.","This paper is based upon work supported by the EC (FEDER/FSE) and the Spanish MICINN under projects MIPRCV ""Consolider Ingenio 2010"" (CSD2007-00018) and iTrans2 (TIN2009-14511). Also supported by the Spanish MITyC under the erudito.com (TSI-020110-2009-439) project, by the Generalitat Valenciana under grant Prometeo/2009/014, and by the UPV under grant 20091027.The authors would also like to thank the anonymous reviewers for their constructive and detailed comments.","Bilingual segmentation for phrasetable pruning in Statistical Machine Translation. Statistical machine translation systems have greatly improved in the last years. However, this boost in performance usually comes at a high computational cost, yielding systems that are often not suitable for integration in hand-held or real-time devices. We describe a novel technique for reducing such cost by performing a Viterbi-style selection of the parameters of the translation model. We present results with finite state transducers and phrasebased models showing a 98% reduction of the number of parameters and a 15-fold increase in translation speed without any significant loss in translation quality.",2011
bonnema-etal-1997-dop,https://aclanthology.org/P97-1021,0,,,,,,,"A DOP Model for Semantic Interpretation. In data-oriented language processing, an annotated language corpus is used as a stochastic grammar. The most probable analysis of a new sentence is constructed by combining fragments from the corpus in the most probable way. This approach has been successfully used for syntactic analysis, using corpora with syntactic annotations such as the Penn Tree-bank. If a corpus with semantically annotated sentences is used, the same approach can also generate the most probable semantic interpretation of an input sentence. The present paper explains this semantic interpretation method. A data-oriented semantic interpretation algorithm was tested on two semantically annotated corpora: the English ATIS corpus and the Dutch OVIS corpus. Experiments show an increase in semantic accuracy if larger corpus-fragments are taken into consideration.",A {DOP} Model for Semantic Interpretation,"In data-oriented language processing, an annotated language corpus is used as a stochastic grammar. The most probable analysis of a new sentence is constructed by combining fragments from the corpus in the most probable way. This approach has been successfully used for syntactic analysis, using corpora with syntactic annotations such as the Penn Tree-bank. If a corpus with semantically annotated sentences is used, the same approach can also generate the most probable semantic interpretation of an input sentence. The present paper explains this semantic interpretation method. A data-oriented semantic interpretation algorithm was tested on two semantically annotated corpora: the English ATIS corpus and the Dutch OVIS corpus. Experiments show an increase in semantic accuracy if larger corpus-fragments are taken into consideration.",A DOP Model for Semantic Interpretation,"In data-oriented language processing, an annotated language corpus is used as a stochastic grammar. The most probable analysis of a new sentence is constructed by combining fragments from the corpus in the most probable way. This approach has been successfully used for syntactic analysis, using corpora with syntactic annotations such as the Penn Tree-bank. If a corpus with semantically annotated sentences is used, the same approach can also generate the most probable semantic interpretation of an input sentence. The present paper explains this semantic interpretation method. A data-oriented semantic interpretation algorithm was tested on two semantically annotated corpora: the English ATIS corpus and the Dutch OVIS corpus. Experiments show an increase in semantic accuracy if larger corpus-fragments are taken into consideration.",,"A DOP Model for Semantic Interpretation. In data-oriented language processing, an annotated language corpus is used as a stochastic grammar. The most probable analysis of a new sentence is constructed by combining fragments from the corpus in the most probable way. This approach has been successfully used for syntactic analysis, using corpora with syntactic annotations such as the Penn Tree-bank. If a corpus with semantically annotated sentences is used, the same approach can also generate the most probable semantic interpretation of an input sentence. The present paper explains this semantic interpretation method. A data-oriented semantic interpretation algorithm was tested on two semantically annotated corpora: the English ATIS corpus and the Dutch OVIS corpus. 
Experiments show an increase in semantic accuracy if larger corpus-fragments are taken into consideration.",1997
stromback-1994-achieving,https://aclanthology.org/C94-2135,0,,,,,,,Achieving Flexibility in Unification Formalisms. We argue that flexibility is an important property for unification-based formalisms. By flexibility we mean the ability for the user to modify and extend the formalism according to the needs of his problem. The paper discusses some properties necessary to achieve a flexible formalism and presents the FLUF formalism as a realization of these ideas.,Achieving Flexibility in Unification Formalisms,We argue that flexibility is an important property for unification-based formalisms. By flexibility we mean the ability for the user to modify and extend the formalism according to the needs of his problem. The paper discusses some properties necessary to achieve a flexible formalism and presents the FLUF formalism as a realization of these ideas.,Achieving Flexibility in Unification Formalisms,We argue that flexibility is an important property for unification-based formalisms. By flexibility we mean the ability for the user to modify and extend the formalism according to the needs of his problem. The paper discusses some properties necessary to achieve a flexible formalism and presents the FLUF formalism as a realization of these ideas.,This work has been supported by the Swedish Research Council for Engineering Sciences. I am also grateful to Lars Ahrenberg for guidance on this work.,Achieving Flexibility in Unification Formalisms. We argue that flexibility is an important property for unification-based formalisms. By flexibility we mean the ability for the user to modify and extend the formalism according to the needs of his problem. The paper discusses some properties necessary to achieve a flexible formalism and presents the FLUF formalism as a realization of these ideas.,1994
fox-2005-dependency,https://aclanthology.org/P05-2016,0,,,,,,,Dependency-Based Statistical Machine Translation. We present a Czech-English statistical machine translation system which performs tree-to-tree translation of dependency structures. The only bilingual resource required is a sentence-aligned parallel corpus. All other resources are monolingual. We also refer to an evaluation method and plan to compare our system's output with a benchmark system.,Dependency-Based Statistical Machine Translation,We present a Czech-English statistical machine translation system which performs tree-to-tree translation of dependency structures. The only bilingual resource required is a sentence-aligned parallel corpus. All other resources are monolingual. We also refer to an evaluation method and plan to compare our system's output with a benchmark system.,Dependency-Based Statistical Machine Translation,We present a Czech-English statistical machine translation system which performs tree-to-tree translation of dependency structures. The only bilingual resource required is a sentence-aligned parallel corpus. All other resources are monolingual. We also refer to an evaluation method and plan to compare our system's output with a benchmark system.,"This work was supported in part by NSF grant IGERT-9870676. We would like to thank Jan Hajič, MartinČmejrek, Jan Cuřín for all of their assistance.",Dependency-Based Statistical Machine Translation. We present a Czech-English statistical machine translation system which performs tree-to-tree translation of dependency structures. The only bilingual resource required is a sentence-aligned parallel corpus. All other resources are monolingual. We also refer to an evaluation method and plan to compare our system's output with a benchmark system.,2005
mason-2013-domain,https://aclanthology.org/N13-2010,0,,,,,,,"Domain-Independent Captioning of Domain-Specific Images. Automatically describing visual content is an extremely difficult task, with hard AI problems in Computer Vision (CV) and Natural Language Processing (NLP) at its core. Previous work relies on supervised visual recognition systems to determine the content of images. These systems require massive amounts of hand-labeled data for training, so the number of visual classes that can be recognized is typically very small. We argue that these approaches place unrealistic limits on the kinds of images that can be captioned, and are unlikely to produce captions which reflect human interpretations. We present a framework for image caption generation that does not rely on visual recognition systems, which we have implemented on a dataset of online shopping images and product descriptions. We propose future work to improve this method, and extensions for other domains of images and natural text.",Domain-Independent Captioning of Domain-Specific Images,"Automatically describing visual content is an extremely difficult task, with hard AI problems in Computer Vision (CV) and Natural Language Processing (NLP) at its core. Previous work relies on supervised visual recognition systems to determine the content of images. These systems require massive amounts of hand-labeled data for training, so the number of visual classes that can be recognized is typically very small. We argue that these approaches place unrealistic limits on the kinds of images that can be captioned, and are unlikely to produce captions which reflect human interpretations. We present a framework for image caption generation that does not rely on visual recognition systems, which we have implemented on a dataset of online shopping images and product descriptions. We propose future work to improve this method, and extensions for other domains of images and natural text.",Domain-Independent Captioning of Domain-Specific Images,"Automatically describing visual content is an extremely difficult task, with hard AI problems in Computer Vision (CV) and Natural Language Processing (NLP) at its core. Previous work relies on supervised visual recognition systems to determine the content of images. These systems require massive amounts of hand-labeled data for training, so the number of visual classes that can be recognized is typically very small. We argue that these approaches place unrealistic limits on the kinds of images that can be captioned, and are unlikely to produce captions which reflect human interpretations. We present a framework for image caption generation that does not rely on visual recognition systems, which we have implemented on a dataset of online shopping images and product descriptions. We propose future work to improve this method, and extensions for other domains of images and natural text.",,"Domain-Independent Captioning of Domain-Specific Images. Automatically describing visual content is an extremely difficult task, with hard AI problems in Computer Vision (CV) and Natural Language Processing (NLP) at its core. Previous work relies on supervised visual recognition systems to determine the content of images. These systems require massive amounts of hand-labeled data for training, so the number of visual classes that can be recognized is typically very small. 
We argue that these approaches place unrealistic limits on the kinds of images that can be captioned, and are unlikely to produce captions which reflect human interpretations. We present a framework for image caption generation that does not rely on visual recognition systems, which we have implemented on a dataset of online shopping images and product descriptions. We propose future work to improve this method, and extensions for other domains of images and natural text.",2013
petrick-1981-field,https://aclanthology.org/P81-1009,0,,,,,,,"Field Testing the Transformational Question Answering (TQA) System. The Transformational Question Answering (TQA) system was developed over a period of time beginning in the early part of the last decade and continuing to the present. Its syntactic component is a transformational grammar parser [1, 2, 3], and its semantic component is a Knuth attribute grammar [4, 5]. The combination of these components provides sufficient generality, convenience, and efficiency to implement a broad range of linguistic models; in addition to a wide spectrum of transformational grammars, Gazdar-type phrase structure grammar [6] and lexical functional grammar [7] systems appear to be cases in point, for example. The particular grammar which was, in fact, developed, however, was closest to those of the generative semantics variety of transformational grammar; both the underlying structures assigned to sentences and the transformations employed to effect that assignment traced their origins to the generative semantics model. It was the one employed in the field testing to be described. In a more recent version of the system, however, it has been replaced by a translation of logical forms, first to equivalent logical forms in a set domain relational calculus and then to appropriate expressions in the SQL language, System R's high level query language.
The first data base to which the system was applied was one concerning business statistics such as the sales, earnings, number of employees, etc. of 60 large companies over a five-year",Field Testing the {T}ransformational {Q}uestion {A}nswering ({TQA}) System,"The Transformational Question Answering (TQA) system was developed over a period of time beginning in the early part of the last decade and continuing to the present. Its syntactic component is a transformational grammar parser [1, 2, 3], and its semantic component is a Knuth attribute grammar [4, 5]. The combination of these components provides sufficient generality, convenience, and efficiency to implement a broad range of linguistic models; in addition to a wide spectrum of transformational grammars, Gazdar-type phrase structure grammar [6] and lexical functional grammar [7] systems appear to be cases in point, for example. The particular grammar which was, in fact, developed, however, was closest to those of the generative semantics variety of transformational grammar; both the underlying structures assigned to sentences and the transformations employed to effect that assignment traced their origins to the generative semantics model. It was the one employed in the field testing to be described. In a more recent version of the system, however, it has been replaced by a translation of logical forms, first to equivalent logical forms in a set domain relational calculus and then to appropriate expressions in the SQL language, System R's high level query language.
The first data base to which the system was applied was one concerning business statistics such as the sales, earnings, number of employees, etc. of 60 large companies over a five-year",Field Testing the Transformational Question Answering (TQA) System,"The Transformational Question Answering (TQA) system was developed over a period of time beginning in the early part of the last decade and continuing to the present. Its syntactic component is a transformational grammar parser [1, 2, 3], and its semantic component is a Knuth attribute grammar [4, 5]. The combination of these components provides sufficient generality, convenience, and efficiency to implement a broad range of linguistic models; in addition to a wide spectrum of transformational grammars, Gazdar-type phrase structure grammar [6] and lexical functional grammar [7] systems appear to be cases in point, for example. The particular grammar which was, in fact, developed, however, was closest to those of the generative semantics variety of transformational grammar; both the underlying structures assigned to sentences and the transformations employed to effect that assignment traced their origins to the generative semantics model. It was the one employed in the field testing to be described. In a more recent version of the system, however, it has been replaced by a translation of logical forms, first to equivalent logical forms in a set domain relational calculus and then to appropriate expressions in the SQL language, System R's high level query language.
The first data base to which the system was applied was one concerning business statistics such as the sales, earnings, number of employees, etc. of 60 large companies over a five-year",,"Field Testing the Transformational Question Answering (TQA) System. The Transformational Question Answering (TQA) system was developed over a period of time beginning in the early part of the last decade and continuing to the present. Its syntactic component is a transformational grammar parser [1, 2, 3], and its semantic component is a Knuth attribute grammar [4, 5]. The combination of these components provides sufficient generality, convenience, and efficiency to implement a broad range of linguistic models; in addition to a wide spectrum of transformational grammars, Gazdar-type phrase structure grammar [6] and lexical functional grammar [7] systems appear to be cases in point, for example. The particular grammar which was, in fact, developed, however, was closest to those of the generative semantics variety of transformational grammar; both the underlying structures assigned to sentences and the transformations employed to effect that assignment traced their origins to the generative semantics model. It was the one employed in the field testing to be described. In a more recent version of the system, however, it has been replaced by a translation of logical forms, first to equivalent logical forms in a set domain relational calculus and then to appropriate expressions in the SQL language, System R's high level query language.
The first data base to which the system was applied was one concerning business statistics such as the sales, earnings, number of employees, etc. of 60 large companies over a five-year",1981
smith-2009-copyright,https://aclanthology.org/2009.tc-1.13,0,,,,,,,"Copyright issues in translation memory ownership. In the last two decades terminological databases and translation memory (TM) software have become ubiquitous in the professional translation community, to such an extent that translating without these tools has now become almost unthinkable in most technical fields. Until recently, however, the question of who actually owned these tools was not considered relevant. Most users installed the software on their desktop computers and built their termbanks and TMs without ever considering the possibility that they could be extracted and sent elsewhere, or that the content of their databases might in fact belong wholly or partly to someone else. With the advent of high-speed data transmission over computer networks, however, these resources have been released from the confines of individual PCs and have begun circulating around the Internet, causing a major shift in the manner in which they are perceived and uncovering new commercial possibilities for their exploitation.
The first outlet for TMs as tradable products was set up in 2007 as a joint initiative between Multilingual Computing, Inc. and International Writers' Group, with the name of TM Marketplace (www.tmmarketplace.com). The creators of TM Marketplace use the term ""TM assets"" to refer to the products traded on this site. As a result, translation memory files can now be bought, sold and licensed as individual assets. Termbanks, already commercialised in the form of word-lists and specialised glossaries, are also affected by the Internet revolution because as well as being easy to transfer electronically, they are particularly vulnerable to illegal copying.",Copyright issues in translation memory ownership,"In the last two decades terminological databases and translation memory (TM) software have become ubiquitous in the professional translation community, to such an extent that translating without these tools has now become almost unthinkable in most technical fields. Until recently, however, the question of who actually owned these tools was not considered relevant. Most users installed the software on their desktop computers and built their termbanks and TMs without ever considering the possibility that they could be extracted and sent elsewhere, or that the content of their databases might in fact belong wholly or partly to someone else. With the advent of high-speed data transmission over computer networks, however, these resources have been released from the confines of individual PCs and have begun circulating around the Internet, causing a major shift in the manner in which they are perceived and uncovering new commercial possibilities for their exploitation.
The first outlet for TMs as tradable products was set up in 2007 as a joint initiative between Multilingual Computing, Inc. and International Writers' Group, with the name of TM Marketplace (www.tmmarketplace.com). The creators of TM Marketplace use the term ""TM assets"" to refer to the products traded on this site. As a result, translation memory files can now be bought, sold and licensed as individual assets. Termbanks, already commercialised in the form of word-lists and specialised glossaries, are also affected by the Internet revolution because as well as being easy to transfer electronically, they are particularly vulnerable to illegal copying.",Copyright issues in translation memory ownership,"In the last two decades terminological databases and translation memory (TM) software have become ubiquitous in the professional translation community, to such an extent that translating without these tools has now become almost unthinkable in most technical fields. Until recently, however, the question of who actually owned these tools was not considered relevant. Most users installed the software on their desktop computers and built their termbanks and TMs without ever considering the possibility that they could be extracted and sent elsewhere, or that the content of their databases might in fact belong wholly or partly to someone else. With the advent of high-speed data transmission over computer networks, however, these resources have been released from the confines of individual PCs and have begun circulating around the Internet, causing a major shift in the manner in which they are perceived and uncovering new commercial possibilities for their exploitation.
The first outlet for TMs as tradable products was set up in 2007 as a joint initiative between Multilingual Computing, Inc. and International Writers' Group, with the name of TM Marketplace (www.tmmarketplace.com). The creators of TM Marketplace use the term ""TM assets"" to refer to the products traded on this site. As a result, translation memory files can now be bought, sold and licensed as individual assets. Termbanks, already commercialised in the form of word-lists and specialised glossaries, are also affected by the Internet revolution because as well as being easy to transfer electronically, they are particularly vulnerable to illegal copying.",,"Copyright issues in translation memory ownership. In the last two decades terminological databases and translation memory (TM) software have become ubiquitous in the professional translation community, to such an extent that translating without these tools has now become almost unthinkable in most technical fields. Until recently, however, the question of who actually owned these tools was not considered relevant. Most users installed the software on their desktop computers and built their termbanks and TMs without ever considering the possibility that they could be extracted and sent elsewhere, or that the content of their databases might in fact belong wholly or partly to someone else. With the advent of high-speed data transmission over computer networks, however, these resources have been released from the confines of individual PCs and have begun circulating around the Internet, causing a major shift in the manner in which they are perceived and uncovering new commercial possibilities for their exploitation.
The first outlet for TMs as tradable products was set up in 2007 as a joint initiative between Multilingual Computing, Inc. and International Writers' Group, with the name of TM Marketplace (www.tmmarketplace.com). The creators of TM Marketplace use the term ""TM assets"" to refer to the products traded on this site. As a result, translation memory files can now be bought, sold and licensed as individual assets. Termbanks, already commercialised in the form of word-lists and specialised glossaries, are also affected by the Internet revolution because as well as being easy to transfer electronically, they are particularly vulnerable to illegal copying.",2009
chakravarthy-etal-2008-learning,https://aclanthology.org/I08-2118,0,,,,,,,"Learning Decision Lists with Known Rules for Text Mining. Many real-world systems for handling unstructured text data are rule-based. Examples of such systems are named entity annotators, information extraction systems, and text classifiers. In each of these applications, ordering rules into a decision list is an important issue. In this paper, we assume that a set of rules is given and study the problem (MaxDL) of ordering them into an optimal decision list with respect to a given training set. We formalize this problem and show that it is NP-Hard and cannot be approximated within any reasonable factors. We then propose some heuristic algorithms and conduct exhaustive experiments to evaluate their performance. In our experiments we also observe performance improvement over an existing decision list learning algorithm, by merely reordering the rules output by it.",Learning Decision Lists with Known Rules for Text Mining,"Many real-world systems for handling unstructured text data are rule-based. Examples of such systems are named entity annotators, information extraction systems, and text classifiers. In each of these applications, ordering rules into a decision list is an important issue. In this paper, we assume that a set of rules is given and study the problem (MaxDL) of ordering them into an optimal decision list with respect to a given training set. We formalize this problem and show that it is NP-Hard and cannot be approximated within any reasonable factors. We then propose some heuristic algorithms and conduct exhaustive experiments to evaluate their performance. In our experiments we also observe performance improvement over an existing decision list learning algorithm, by merely reordering the rules output by it.",Learning Decision Lists with Known Rules for Text Mining,"Many real-world systems for handling unstructured text data are rule-based. Examples of such systems are named entity annotators, information extraction systems, and text classifiers. In each of these applications, ordering rules into a decision list is an important issue. In this paper, we assume that a set of rules is given and study the problem (MaxDL) of ordering them into an optimal decision list with respect to a given training set. We formalize this problem and show that it is NP-Hard and cannot be approximated within any reasonable factors. We then propose some heuristic algorithms and conduct exhaustive experiments to evaluate their performance. In our experiments we also observe performance improvement over an existing decision list learning algorithm, by merely reordering the rules output by it.",,"Learning Decision Lists with Known Rules for Text Mining. Many real-world systems for handling unstructured text data are rule-based. Examples of such systems are named entity annotators, information extraction systems, and text classifiers. In each of these applications, ordering rules into a decision list is an important issue. In this paper, we assume that a set of rules is given and study the problem (MaxDL) of ordering them into an optimal decision list with respect to a given training set. We formalize this problem and show that it is NP-Hard and cannot be approximated within any reasonable factors. We then propose some heuristic algorithms and conduct exhaustive experiments to evaluate their performance. 
In our experiments we also observe performance improvement over an existing decision list learning algorithm, by merely reordering the rules output by it.",2008
nivre-1994-pragmatics,https://aclanthology.org/W93-0414,0,,,,,,,"Pragmatics Through Context Management. Pragmatically based dialogue management requires flexible and efficient representation of contextual information. The approach described in this paper uses logical knowledge bases to represent contextual information and special abductive reasoning tools to manage these knowledge bases. One of the advantages of such a reasoning based approach to computational dialogue pragmatics is that the same rules, stated declaratively, can be used both in analysis and generation.",Pragmatics Through Context Management,"Pragmatically based dialogue management requires flexible and efficient representation of contextual information. The approach described in this paper uses logical knowledge bases to represent contextual information and special abductive reasoning tools to manage these knowledge bases. One of the advantages of such a reasoning based approach to computational dialogue pragmatics is that the same rules, stated declaratively, can be used both in analysis and generation.",Pragmatics Through Context Management,"Pragmatically based dialogue management requires flexible and efficient representation of contextual information. The approach described in this paper uses logical knowledge bases to represent contextual information and special abductive reasoning tools to manage these knowledge bases. One of the advantages of such a reasoning based approach to computational dialogue pragmatics is that the same rules, stated declaratively, can be used both in analysis and generation.",,"Pragmatics Through Context Management. Pragmatically based dialogue management requires flexible and efficient representation of contextual information. The approach described in this paper uses logical knowledge bases to represent contextual information and special abductive reasoning tools to manage these knowledge bases. One of the advantages of such a reasoning based approach to computational dialogue pragmatics is that the same rules, stated declaratively, can be used both in analysis and generation.",1994
rajagopal-etal-2019-domain,https://aclanthology.org/W19-5009,1,,,,health,,,"Domain Adaptation of SRL Systems for Biological Processes. Domain adaptation remains one of the most challenging aspects in the widespread use of Semantic Role Labeling (SRL) systems. Current state-of-the-art methods are typically trained on large-scale datasets, but their performances do not directly transfer to low-resource domain-specific settings. In this paper, we propose two approaches for domain adaptation in biological domain that involve pre-training LSTM-CRF based on existing large-scale datasets and adapting it for a low-resource corpus of biological processes. Our first approach defines a mapping between the source labels and the target labels, and the other approach modifies the final CRF layer in sequence-labeling neural network architecture. We perform our experiments on ProcessBank (Berant et al., 2014) dataset which contains less than 200 paragraphs on biological processes. We improve over the previous state-of-the-art system on this dataset by 21 F1 points. We also show that, by incorporating event-event relationship in ProcessBank, we are able to achieve an additional 2.6 F1 gain, giving us possible insights into how to improve SRL systems for biological process using richer annotations.",Domain Adaptation of {SRL} Systems for Biological Processes,"Domain adaptation remains one of the most challenging aspects in the widespread use of Semantic Role Labeling (SRL) systems. Current state-of-the-art methods are typically trained on large-scale datasets, but their performances do not directly transfer to low-resource domain-specific settings. In this paper, we propose two approaches for domain adaptation in biological domain that involve pre-training LSTM-CRF based on existing large-scale datasets and adapting it for a low-resource corpus of biological processes. Our first approach defines a mapping between the source labels and the target labels, and the other approach modifies the final CRF layer in sequence-labeling neural network architecture. We perform our experiments on ProcessBank (Berant et al., 2014) dataset which contains less than 200 paragraphs on biological processes. We improve over the previous state-of-the-art system on this dataset by 21 F1 points. We also show that, by incorporating event-event relationship in ProcessBank, we are able to achieve an additional 2.6 F1 gain, giving us possible insights into how to improve SRL systems for biological process using richer annotations.",Domain Adaptation of SRL Systems for Biological Processes,"Domain adaptation remains one of the most challenging aspects in the widespread use of Semantic Role Labeling (SRL) systems. Current state-of-the-art methods are typically trained on large-scale datasets, but their performances do not directly transfer to low-resource domain-specific settings. In this paper, we propose two approaches for domain adaptation in biological domain that involve pre-training LSTM-CRF based on existing large-scale datasets and adapting it for a low-resource corpus of biological processes. Our first approach defines a mapping between the source labels and the target labels, and the other approach modifies the final CRF layer in sequence-labeling neural network architecture. We perform our experiments on ProcessBank (Berant et al., 2014) dataset which contains less than 200 paragraphs on biological processes. We improve over the previous state-of-the-art system on this dataset by 21 F1 points. 
We also show that, by incorporating event-event relationship in ProcessBank, we are able to achieve an additional 2.6 F1 gain, giving us possible insights into how to improve SRL systems for biological process using richer annotations.",,"Domain Adaptation of SRL Systems for Biological Processes. Domain adaptation remains one of the most challenging aspects in the widespread use of Semantic Role Labeling (SRL) systems. Current state-of-the-art methods are typically trained on large-scale datasets, but their performances do not directly transfer to low-resource domain-specific settings. In this paper, we propose two approaches for domain adaptation in biological domain that involve pre-training LSTM-CRF based on existing large-scale datasets and adapting it for a low-resource corpus of biological processes. Our first approach defines a mapping between the source labels and the target labels, and the other approach modifies the final CRF layer in sequence-labeling neural network architecture. We perform our experiments on ProcessBank (Berant et al., 2014) dataset which contains less than 200 paragraphs on biological processes. We improve over the previous state-of-the-art system on this dataset by 21 F1 points. We also show that, by incorporating event-event relationship in ProcessBank, we are able to achieve an additional 2.6 F1 gain, giving us possible insights into how to improve SRL systems for biological process using richer annotations.",2019
xing-etal-2020-tasty,https://aclanthology.org/2020.emnlp-main.292,0,,,,,,,"Tasty Burgers, Soggy Fries: Probing Aspect Robustness in Aspect-Based Sentiment Analysis. Aspect-based sentiment analysis (ABSA) aims to predict the sentiment towards a specific aspect in the text. However, existing ABSA test sets cannot be used to probe whether a model can distinguish the sentiment of the target aspect from the non-target aspects. To solve this problem, we develop a simple but effective approach to enrich ABSA test sets. Specifically, we generate new examples to disentangle the confounding sentiments of the non-target aspects from the target aspect's sentiment. Based on the SemEval 2014 dataset, we construct the Aspect Robustness Test Set (ARTS) as a comprehensive probe of the aspect robustness of ABSA models. Over 92% data of ARTS show high fluency and desired sentiment on all aspects by human evaluation. Using ARTS, we analyze the robustness of nine ABSA models, and observe, surprisingly, that their accuracy drops by up to 69.73%. We explore several ways to improve aspect robustness, and find that adversarial training can improve models' performance on ARTS by up to 32.85%. 1","Tasty Burgers, Soggy Fries: Probing Aspect Robustness in Aspect-Based Sentiment Analysis","Aspect-based sentiment analysis (ABSA) aims to predict the sentiment towards a specific aspect in the text. However, existing ABSA test sets cannot be used to probe whether a model can distinguish the sentiment of the target aspect from the non-target aspects. To solve this problem, we develop a simple but effective approach to enrich ABSA test sets. Specifically, we generate new examples to disentangle the confounding sentiments of the non-target aspects from the target aspect's sentiment. Based on the SemEval 2014 dataset, we construct the Aspect Robustness Test Set (ARTS) as a comprehensive probe of the aspect robustness of ABSA models. Over 92% data of ARTS show high fluency and desired sentiment on all aspects by human evaluation. Using ARTS, we analyze the robustness of nine ABSA models, and observe, surprisingly, that their accuracy drops by up to 69.73%. We explore several ways to improve aspect robustness, and find that adversarial training can improve models' performance on ARTS by up to 32.85%. 1","Tasty Burgers, Soggy Fries: Probing Aspect Robustness in Aspect-Based Sentiment Analysis","Aspect-based sentiment analysis (ABSA) aims to predict the sentiment towards a specific aspect in the text. However, existing ABSA test sets cannot be used to probe whether a model can distinguish the sentiment of the target aspect from the non-target aspects. To solve this problem, we develop a simple but effective approach to enrich ABSA test sets. Specifically, we generate new examples to disentangle the confounding sentiments of the non-target aspects from the target aspect's sentiment. Based on the SemEval 2014 dataset, we construct the Aspect Robustness Test Set (ARTS) as a comprehensive probe of the aspect robustness of ABSA models. Over 92% data of ARTS show high fluency and desired sentiment on all aspects by human evaluation. Using ARTS, we analyze the robustness of nine ABSA models, and observe, surprisingly, that their accuracy drops by up to 69.73%. We explore several ways to improve aspect robustness, and find that adversarial training can improve models' performance on ARTS by up to 32.85%. 
1","We appreciate Professor Rada Mihalcea for her insights that helped us plan this research, Pengfei Liu for valuable suggestions on writing, and Yuchun Dai for helping to code some functions in our annotation tool. We also want to convey special thanks to Mahi Shafiullah and Osmond Wang for brilliant suggestions on the wording of the title. This work was partially funded by China National Key R&D Program (No. 2018YFC0831105, 2018YFB1005104, 2017YFB1002104), National Natural Science Foundation of China (No. 61751201, 61976056, 61532011, 62076069), Shanghai Municipal Science and Technology Major Project (No.2018SHZDZX01), Science and Technology Commission of Shanghai Municipality Grant (No.18DZ1201000, 17JC1420200).","Tasty Burgers, Soggy Fries: Probing Aspect Robustness in Aspect-Based Sentiment Analysis. Aspect-based sentiment analysis (ABSA) aims to predict the sentiment towards a specific aspect in the text. However, existing ABSA test sets cannot be used to probe whether a model can distinguish the sentiment of the target aspect from the non-target aspects. To solve this problem, we develop a simple but effective approach to enrich ABSA test sets. Specifically, we generate new examples to disentangle the confounding sentiments of the non-target aspects from the target aspect's sentiment. Based on the SemEval 2014 dataset, we construct the Aspect Robustness Test Set (ARTS) as a comprehensive probe of the aspect robustness of ABSA models. Over 92% data of ARTS show high fluency and desired sentiment on all aspects by human evaluation. Using ARTS, we analyze the robustness of nine ABSA models, and observe, surprisingly, that their accuracy drops by up to 69.73%. We explore several ways to improve aspect robustness, and find that adversarial training can improve models' performance on ARTS by up to 32.85%. 1",2020
azadi-khadivi-2015-improved,https://aclanthology.org/2015.mtsummit-papers.25,0,,,,,,,"Improved search strategy for interactive predictions in computer-assisted translation. The statistical machine translation outputs are not error-free and in a high quality yet. So in the cases that we need high quality translations we definitely need the human intervention. An interactive-predictive machine translation is a framework, which enables the collaboration of the human and the translation system. Here, we address the problem of searching the best suffix to propose to the user in the phrase-based interactive prediction scenario. By adding the jump operation to the common edit distance based search, we try to overcome the lack of some of the reorderings in the search graph which might be desired by the user. The experiments results shows that this method improves the base method by 1.35% in KSMR 2 , and if we combine the edit error in the proposed method with the translation scores given by the statistical models to select the offered suffix, we could gain the KSMR improvement of about 1.63% compared to the base search method.",Improved search strategy for interactive predictions in computer-assisted translation,"The statistical machine translation outputs are not error-free and in a high quality yet. So in the cases that we need high quality translations we definitely need the human intervention. An interactive-predictive machine translation is a framework, which enables the collaboration of the human and the translation system. Here, we address the problem of searching the best suffix to propose to the user in the phrase-based interactive prediction scenario. By adding the jump operation to the common edit distance based search, we try to overcome the lack of some of the reorderings in the search graph which might be desired by the user. The experiments results shows that this method improves the base method by 1.35% in KSMR 2 , and if we combine the edit error in the proposed method with the translation scores given by the statistical models to select the offered suffix, we could gain the KSMR improvement of about 1.63% compared to the base search method.",Improved search strategy for interactive predictions in computer-assisted translation,"The statistical machine translation outputs are not error-free and in a high quality yet. So in the cases that we need high quality translations we definitely need the human intervention. An interactive-predictive machine translation is a framework, which enables the collaboration of the human and the translation system. Here, we address the problem of searching the best suffix to propose to the user in the phrase-based interactive prediction scenario. By adding the jump operation to the common edit distance based search, we try to overcome the lack of some of the reorderings in the search graph which might be desired by the user. The experiments results shows that this method improves the base method by 1.35% in KSMR 2 , and if we combine the edit error in the proposed method with the translation scores given by the statistical models to select the offered suffix, we could gain the KSMR improvement of about 1.63% compared to the base search method.",,"Improved search strategy for interactive predictions in computer-assisted translation. The statistical machine translation outputs are not error-free and in a high quality yet. So in the cases that we need high quality translations we definitely need the human intervention. 
An interactive-predictive machine translation is a framework, which enables the collaboration of the human and the translation system. Here, we address the problem of searching the best suffix to propose to the user in the phrase-based interactive prediction scenario. By adding the jump operation to the common edit distance based search, we try to overcome the lack of some of the reorderings in the search graph which might be desired by the user. The experiments results shows that this method improves the base method by 1.35% in KSMR 2 , and if we combine the edit error in the proposed method with the translation scores given by the statistical models to select the offered suffix, we could gain the KSMR improvement of about 1.63% compared to the base search method.",2015
dong-etal-2013-difficulties,https://aclanthology.org/Y13-1012,1,,,,education,,,"Difficulties in Perception and Pronunciation of Mandarin Chinese Disyllabic Word Tone Acquisition: A Study of Some Japanese University Students. Tonal errors pose a serious problem to Mandarin Chinese learners, making them stumble in their communication. The purpose of this paper is to investigate beginner level Japanese students' difficulties in the perception and pronunciation of disyllabic words, particularly to find out which combinations of tones these errors mostly occur in. As a result, the errors made by the 10 subjects were mostly found in tonal patterns 1-3, 2-1, 2-3, 3-2 and 4-3 in both perception and pronunciation. Furthermore, by comparing the ratio of tonal errors of initial to final syllables, we can tell that the initial syllables appear more difficult than the final syllables in perception, but in pronunciation this tendency is not found. Moreover, there seems to be some connection between learners' perception and pronunciation in their acquisition process.",Difficulties in Perception and Pronunciation of {M}andarin {C}hinese Disyllabic Word Tone Acquisition: A Study of Some {J}apanese {U}niversity Students,"Tonal errors pose a serious problem to Mandarin Chinese learners, making them stumble in their communication. The purpose of this paper is to investigate beginner level Japanese students' difficulties in the perception and pronunciation of disyllabic words, particularly to find out which combinations of tones these errors mostly occur in. As a result, the errors made by the 10 subjects were mostly found in tonal patterns 1-3, 2-1, 2-3, 3-2 and 4-3 in both perception and pronunciation. Furthermore, by comparing the ratio of tonal errors of initial to final syllables, we can tell that the initial syllables appear more difficult than the final syllables in perception, but in pronunciation this tendency is not found. Moreover, there seems to be some connection between learners' perception and pronunciation in their acquisition process.",Difficulties in Perception and Pronunciation of Mandarin Chinese Disyllabic Word Tone Acquisition: A Study of Some Japanese University Students,"Tonal errors pose a serious problem to Mandarin Chinese learners, making them stumble in their communication. The purpose of this paper is to investigate beginner level Japanese students' difficulties in the perception and pronunciation of disyllabic words, particularly to find out which combinations of tones these errors mostly occur in. As a result, the errors made by the 10 subjects were mostly found in tonal patterns 1-3, 2-1, 2-3, 3-2 and 4-3 in both perception and pronunciation. Furthermore, by comparing the ratio of tonal errors of initial to final syllables, we can tell that the initial syllables appear more difficult than the final syllables in perception, but in pronunciation this tendency is not found. Moreover, there seems to be some connection between learners' perception and pronunciation in their acquisition process.",,"Difficulties in Perception and Pronunciation of Mandarin Chinese Disyllabic Word Tone Acquisition: A Study of Some Japanese University Students. Tonal errors pose a serious problem to Mandarin Chinese learners, making them stumble in their communication. The purpose of this paper is to investigate beginner level Japanese students' difficulties in the perception and pronunciation of disyllabic words, particularly to find out which combinations of tones these errors mostly occur in. 
As a result, the errors made by the 10 subjects were mostly found in tonal patterns 1-3, 2-1, 2-3, 3-2 and 4-3 in both perception and pronunciation. Furthermore, by comparing the ratio of tonal errors of initial to final syllables, we can tell that the initial syllables appear more difficult than the final syllables in perception, but in pronunciation this tendency is not found. Moreover, there seems to be some connection between learners' perception and pronunciation in their acquisition process.",2013
spreyer-kuhn-2009-data,https://aclanthology.org/W09-1104,0,,,,,,,"Data-Driven Dependency Parsing of New Languages Using Incomplete and Noisy Training Data. We present a simple but very effective approach to identifying high-quality data in noisy data sets for structured problems like parsing, by greedily exploiting partial structures. We analyze our approach in an annotation projection framework for dependency trees, and show how dependency parsers from two different paradigms (graph-based and transition-based) can be trained on the resulting tree fragments. We train parsers for Dutch to evaluate our method and to investigate to which degree graph-based and transitionbased parsers can benefit from incomplete training data. We find that partial correspondence projection gives rise to parsers that outperform parsers trained on aggressively filtered data sets, and achieve unlabeled attachment scores that are only 5% behind the average UAS for Dutch in the CoNLL-X Shared Task on supervised parsing (Buchholz and Marsi, 2006).",Data-Driven Dependency Parsing of New Languages Using Incomplete and Noisy Training Data,"We present a simple but very effective approach to identifying high-quality data in noisy data sets for structured problems like parsing, by greedily exploiting partial structures. We analyze our approach in an annotation projection framework for dependency trees, and show how dependency parsers from two different paradigms (graph-based and transition-based) can be trained on the resulting tree fragments. We train parsers for Dutch to evaluate our method and to investigate to which degree graph-based and transitionbased parsers can benefit from incomplete training data. We find that partial correspondence projection gives rise to parsers that outperform parsers trained on aggressively filtered data sets, and achieve unlabeled attachment scores that are only 5% behind the average UAS for Dutch in the CoNLL-X Shared Task on supervised parsing (Buchholz and Marsi, 2006).",Data-Driven Dependency Parsing of New Languages Using Incomplete and Noisy Training Data,"We present a simple but very effective approach to identifying high-quality data in noisy data sets for structured problems like parsing, by greedily exploiting partial structures. We analyze our approach in an annotation projection framework for dependency trees, and show how dependency parsers from two different paradigms (graph-based and transition-based) can be trained on the resulting tree fragments. We train parsers for Dutch to evaluate our method and to investigate to which degree graph-based and transitionbased parsers can benefit from incomplete training data. We find that partial correspondence projection gives rise to parsers that outperform parsers trained on aggressively filtered data sets, and achieve unlabeled attachment scores that are only 5% behind the average UAS for Dutch in the CoNLL-X Shared Task on supervised parsing (Buchholz and Marsi, 2006).","The research reported in this paper has been supported by the German Research Foundation DFG as part of SFB 632 ""Information structure"" (project D4; PI: Kuhn).","Data-Driven Dependency Parsing of New Languages Using Incomplete and Noisy Training Data. We present a simple but very effective approach to identifying high-quality data in noisy data sets for structured problems like parsing, by greedily exploiting partial structures. 
We analyze our approach in an annotation projection framework for dependency trees, and show how dependency parsers from two different paradigms (graph-based and transition-based) can be trained on the resulting tree fragments. We train parsers for Dutch to evaluate our method and to investigate to which degree graph-based and transitionbased parsers can benefit from incomplete training data. We find that partial correspondence projection gives rise to parsers that outperform parsers trained on aggressively filtered data sets, and achieve unlabeled attachment scores that are only 5% behind the average UAS for Dutch in the CoNLL-X Shared Task on supervised parsing (Buchholz and Marsi, 2006).",2009
parmentier-etal-2008-tulipa,https://aclanthology.org/W08-2316,0,,,,,,,"TuLiPA: A syntax-semantics parsing environment for mildly context-sensitive formalisms. In this paper we present a parsing architecture that allows processing of different mildly context-sensitive formalisms, in particular Tree-Adjoining Grammar (TAG), Multi-Component Tree-Adjoining Grammar with Tree Tuples (TT-MCTAG) and simple Range Concatenation Grammar (RCG). Furthermore, for tree-based grammars, the parser computes not only syntactic analyses but also the corresponding semantic representations.",{T}u{L}i{PA}: A syntax-semantics parsing environment for mildly context-sensitive formalisms,"In this paper we present a parsing architecture that allows processing of different mildly context-sensitive formalisms, in particular Tree-Adjoining Grammar (TAG), Multi-Component Tree-Adjoining Grammar with Tree Tuples (TT-MCTAG) and simple Range Concatenation Grammar (RCG). Furthermore, for tree-based grammars, the parser computes not only syntactic analyses but also the corresponding semantic representations.",TuLiPA: A syntax-semantics parsing environment for mildly context-sensitive formalisms,"In this paper we present a parsing architecture that allows processing of different mildly context-sensitive formalisms, in particular Tree-Adjoining Grammar (TAG), Multi-Component Tree-Adjoining Grammar with Tree Tuples (TT-MCTAG) and simple Range Concatenation Grammar (RCG). Furthermore, for tree-based grammars, the parser computes not only syntactic analyses but also the corresponding semantic representations.",,"TuLiPA: A syntax-semantics parsing environment for mildly context-sensitive formalisms. In this paper we present a parsing architecture that allows processing of different mildly context-sensitive formalisms, in particular Tree-Adjoining Grammar (TAG), Multi-Component Tree-Adjoining Grammar with Tree Tuples (TT-MCTAG) and simple Range Concatenation Grammar (RCG). Furthermore, for tree-based grammars, the parser computes not only syntactic analyses but also the corresponding semantic representations.",2008
faisal-etal-2021-sd-qa,https://aclanthology.org/2021.findings-emnlp.281,0,,,,,,,"SD-QA: Spoken Dialectal Question Answering for the Real World. Question answering (QA) systems are now available through numerous commercial applications for a wide variety of domains, serving millions of users that interact with them via speech interfaces. However, current benchmarks in QA research do not account for the errors that speech recognition models might introduce, nor do they consider the language variations (dialects) of the users. To address this gap, we augment an existing QA dataset to construct a multi-dialect, spoken QA benchmark on five languages (Arabic, Bengali, English, Kiswahili, Korean) with more than 68k audio prompts in 24 dialects from 255 speakers. We provide baseline results showcasing the real-world performance of QA systems and analyze the effect of language variety and other sensitive speaker attributes on downstream performance. Last, we study the fairness of the ASR and QA models with respect to the underlying user populations.",{SD}-{QA}: Spoken Dialectal Question Answering for the Real World,"Question answering (QA) systems are now available through numerous commercial applications for a wide variety of domains, serving millions of users that interact with them via speech interfaces. However, current benchmarks in QA research do not account for the errors that speech recognition models might introduce, nor do they consider the language variations (dialects) of the users. To address this gap, we augment an existing QA dataset to construct a multi-dialect, spoken QA benchmark on five languages (Arabic, Bengali, English, Kiswahili, Korean) with more than 68k audio prompts in 24 dialects from 255 speakers. We provide baseline results showcasing the real-world performance of QA systems and analyze the effect of language variety and other sensitive speaker attributes on downstream performance. Last, we study the fairness of the ASR and QA models with respect to the underlying user populations.",SD-QA: Spoken Dialectal Question Answering for the Real World,"Question answering (QA) systems are now available through numerous commercial applications for a wide variety of domains, serving millions of users that interact with them via speech interfaces. However, current benchmarks in QA research do not account for the errors that speech recognition models might introduce, nor do they consider the language variations (dialects) of the users. To address this gap, we augment an existing QA dataset to construct a multi-dialect, spoken QA benchmark on five languages (Arabic, Bengali, English, Kiswahili, Korean) with more than 68k audio prompts in 24 dialects from 255 speakers. We provide baseline results showcasing the real-world performance of QA systems and analyze the effect of language variety and other sensitive speaker attributes on downstream performance. Last, we study the fairness of the ASR and QA models with respect to the underlying user populations.","This work is generously supported by NSF Awards 2040926 and 2125466. The dataset creation was supported through a Google Award for Inclusion Research. We also want to thank Jacob Eisenstein, Manaal Faruqui, and Jon Clark for helpful discussions on question answering and data collection. 
The authors are grateful to Kathleen Siminyu for her help with collecting Kiswahili and Kenyan English speech samples, to Sylwia Tur and Moana Wilkinson from Appen for help with the rest of the data collection and quality assurance process, and to all the annotators who participated in the creation of SD-QA. We also thank Abdulrahman Alshammari for his help with analyzing and correcting the Arabic data.","SD-QA: Spoken Dialectal Question Answering for the Real World. Question answering (QA) systems are now available through numerous commercial applications for a wide variety of domains, serving millions of users that interact with them via speech interfaces. However, current benchmarks in QA research do not account for the errors that speech recognition models might introduce, nor do they consider the language variations (dialects) of the users. To address this gap, we augment an existing QA dataset to construct a multi-dialect, spoken QA benchmark on five languages (Arabic, Bengali, English, Kiswahili, Korean) with more than 68k audio prompts in 24 dialects from 255 speakers. We provide baseline results showcasing the real-world performance of QA systems and analyze the effect of language variety and other sensitive speaker attributes on downstream performance. Last, we study the fairness of the ASR and QA models with respect to the underlying user populations.",2021
rocha-chaves-machado-rino-2008-mitkov,https://aclanthology.org/2008.jeptalnrecital-long.1,0,,,,,,,"The Mitkov algorithm for anaphora resolution in Portuguese. This paper reports on the use of Mitkov's algorithm for pronoun resolution in texts written in Brazilian Portuguese. Third person pronouns are the only ones focused upon here, with noun phrases as antecedents. A system for anaphora resolution in Brazilian Portuguese texts was built that embeds most of Mitkov's features. Some of his resolution factors were directly incorporated into the system; others had to be slightly modified for language adequacy. The resulting approach was intrinsically evaluated on hand-annotated corpora. It was also compared to Lappin & Leass's algorithm for pronoun resolution, also customized to Portuguese. Success rate was the evaluation measure used in both experiments. The results of both evaluations are discussed here.",The Mitkov algorithm for anaphora resolution in {P}ortuguese,"This paper reports on the use of Mitkov's algorithm for pronoun resolution in texts written in Brazilian Portuguese. Third person pronouns are the only ones focused upon here, with noun phrases as antecedents. A system for anaphora resolution in Brazilian Portuguese texts was built that embeds most of Mitkov's features. Some of his resolution factors were directly incorporated into the system; others had to be slightly modified for language adequacy. The resulting approach was intrinsically evaluated on hand-annotated corpora. It was also compared to Lappin & Leass's algorithm for pronoun resolution, also customized to Portuguese. Success rate was the evaluation measure used in both experiments. The results of both evaluations are discussed here.",The Mitkov algorithm for anaphora resolution in Portuguese,"This paper reports on the use of Mitkov's algorithm for pronoun resolution in texts written in Brazilian Portuguese. Third person pronouns are the only ones focused upon here, with noun phrases as antecedents. A system for anaphora resolution in Brazilian Portuguese texts was built that embeds most of Mitkov's features. Some of his resolution factors were directly incorporated into the system; others had to be slightly modified for language adequacy. The resulting approach was intrinsically evaluated on hand-annotated corpora. It was also compared to Lappin & Leass's algorithm for pronoun resolution, also customized to Portuguese. Success rate was the evaluation measure used in both experiments. The results of both evaluations are discussed here.",,"The Mitkov algorithm for anaphora resolution in Portuguese. This paper reports on the use of Mitkov's algorithm for pronoun resolution in texts written in Brazilian Portuguese. Third person pronouns are the only ones focused upon here, with noun phrases as antecedents. A system for anaphora resolution in Brazilian Portuguese texts was built that embeds most of Mitkov's features. Some of his resolution factors were directly incorporated into the system; others had to be slightly modified for language adequacy. The resulting approach was intrinsically evaluated on hand-annotated corpora. It was also compared to Lappin & Leass's algorithm for pronoun resolution, also customized to Portuguese. Success rate was the evaluation measure used in both experiments. The results of both evaluations are discussed here.",2008
ahia-etal-2021-low-resource,https://aclanthology.org/2021.findings-emnlp.282,0,,,,,,,"The Low-Resource Double Bind: An Empirical Study of Pruning for Low-Resource Machine Translation. A ""bigger is better"" explosion in the number of parameters in deep neural networks has made it increasingly challenging to make state-of-the-art networks accessible in compute-restricted environments. Compression techniques have taken on renewed importance as a way to bridge the gap. However, evaluation of the trade-offs incurred by popular compression techniques has been centered on high-resource datasets. In this work, we instead consider the impact of compression in a data-limited regime. We introduce the term low-resource double bind to refer to the co-occurrence of data limitations and compute resource constraints. This is a common setting for NLP for low-resource languages, yet the trade-offs in performance are poorly studied. Our work offers surprising insights into the relationship between capacity and generalization in data-limited regimes for the task of machine translation. Our experiments on magnitude pruning for translations from English into Yoruba, Hausa, Igbo and German show that in low-resource regimes, sparsity preserves performance on frequent sentences but has a disparate impact on infrequent ones. However, it improves robustness to out-of-distribution shifts, especially for datasets that are very distinct from the training distribution. Our findings suggest that sparsity can play a beneficial role at curbing memorization of low frequency attributes, and therefore offers a promising solution to the low-resource double bind.",The Low-Resource Double Bind: An Empirical Study of Pruning for Low-Resource Machine Translation,"A ""bigger is better"" explosion in the number of parameters in deep neural networks has made it increasingly challenging to make state-of-the-art networks accessible in compute-restricted environments. Compression techniques have taken on renewed importance as a way to bridge the gap. However, evaluation of the trade-offs incurred by popular compression techniques has been centered on high-resource datasets. In this work, we instead consider the impact of compression in a data-limited regime. We introduce the term low-resource double bind to refer to the co-occurrence of data limitations and compute resource constraints. This is a common setting for NLP for low-resource languages, yet the trade-offs in performance are poorly studied. Our work offers surprising insights into the relationship between capacity and generalization in data-limited regimes for the task of machine translation. Our experiments on magnitude pruning for translations from English into Yoruba, Hausa, Igbo and German show that in low-resource regimes, sparsity preserves performance on frequent sentences but has a disparate impact on infrequent ones. However, it improves robustness to out-of-distribution shifts, especially for datasets that are very distinct from the training distribution. Our findings suggest that sparsity can play a beneficial role at curbing memorization of low frequency attributes, and therefore offers a promising solution to the low-resource double bind.",The Low-Resource Double Bind: An Empirical Study of Pruning for Low-Resource Machine Translation,"A ""bigger is better"" explosion in the number of parameters in deep neural networks has made it increasingly challenging to make state-of-the-art networks accessible in compute-restricted environments. 
Compression techniques have taken on renewed importance as a way to bridge the gap. However, evaluation of the trade-offs incurred by popular compression techniques has been centered on high-resource datasets. In this work, we instead consider the impact of compression in a data-limited regime. We introduce the term low-resource double bind to refer to the co-occurrence of data limitations and compute resource constraints. This is a common setting for NLP for low-resource languages, yet the trade-offs in performance are poorly studied. Our work offers surprising insights into the relationship between capacity and generalization in data-limited regimes for the task of machine translation. Our experiments on magnitude pruning for translations from English into Yoruba, Hausa, Igbo and German show that in low-resource regimes, sparsity preserves performance on frequent sentences but has a disparate impact on infrequent ones. However, it improves robustness to out-of-distribution shifts, especially for datasets that are very distinct from the training distribution. Our findings suggest that sparsity can play a beneficial role at curbing memorization of low frequency attributes, and therefore offers a promising solution to the low-resource double bind.",,"The Low-Resource Double Bind: An Empirical Study of Pruning for Low-Resource Machine Translation. A ""bigger is better"" explosion in the number of parameters in deep neural networks has made it increasingly challenging to make state-of-the-art networks accessible in compute-restricted environments. Compression techniques have taken on renewed importance as a way to bridge the gap. However, evaluation of the trade-offs incurred by popular compression techniques has been centered on high-resource datasets. In this work, we instead consider the impact of compression in a data-limited regime. We introduce the term low-resource double bind to refer to the co-occurrence of data limitations and compute resource constraints. This is a common setting for NLP for low-resource languages, yet the trade-offs in performance are poorly studied. Our work offers surprising insights into the relationship between capacity and generalization in data-limited regimes for the task of machine translation. Our experiments on magnitude pruning for translations from English into Yoruba, Hausa, Igbo and German show that in low-resource regimes, sparsity preserves performance on frequent sentences but has a disparate impact on infrequent ones. However, it improves robustness to out-of-distribution shifts, especially for datasets that are very distinct from the training distribution. Our findings suggest that sparsity can play a beneficial role at curbing memorization of low frequency attributes, and therefore offers a promising solution to the low-resource double bind.",2021
burtenshaw-kestemont-2021-uantwerp,https://aclanthology.org/2021.semeval-1.121,1,,,,hate_speech,,,"UAntwerp at SemEval-2021 Task 5: Spans are Spans, stacking a binary word level approach to toxic span detection. This paper describes the system developed by the Antwerp Centre for Digital humanities and literary Criticism [UAntwerp] for toxic span detection. We used a stacked generalisation ensemble of five component models, with two distinct interpretations of the task. Two models attempted to predict binary word toxicity based on ngram sequences, whilst 3 categorical span based models were trained to predict toxic token labels based on complete sequence tokens. The five models' predictions were ensembled within an LSTM model. As well as describing the system, we perform error analysis to explore model performance in relation to textual features. The system described in this paper scored 0.6755 and ranked 26 th .","{UA}ntwerp at {S}em{E}val-2021 Task 5: Spans are Spans, stacking a binary word level approach to toxic span detection","This paper describes the system developed by the Antwerp Centre for Digital humanities and literary Criticism [UAntwerp] for toxic span detection. We used a stacked generalisation ensemble of five component models, with two distinct interpretations of the task. Two models attempted to predict binary word toxicity based on ngram sequences, whilst 3 categorical span based models were trained to predict toxic token labels based on complete sequence tokens. The five models' predictions were ensembled within an LSTM model. As well as describing the system, we perform error analysis to explore model performance in relation to textual features. The system described in this paper scored 0.6755 and ranked 26 th .","UAntwerp at SemEval-2021 Task 5: Spans are Spans, stacking a binary word level approach to toxic span detection","This paper describes the system developed by the Antwerp Centre for Digital humanities and literary Criticism [UAntwerp] for toxic span detection. We used a stacked generalisation ensemble of five component models, with two distinct interpretations of the task. Two models attempted to predict binary word toxicity based on ngram sequences, whilst 3 categorical span based models were trained to predict toxic token labels based on complete sequence tokens. The five models' predictions were ensembled within an LSTM model. As well as describing the system, we perform error analysis to explore model performance in relation to textual features. The system described in this paper scored 0.6755 and ranked 26 th .",,"UAntwerp at SemEval-2021 Task 5: Spans are Spans, stacking a binary word level approach to toxic span detection. This paper describes the system developed by the Antwerp Centre for Digital humanities and literary Criticism [UAntwerp] for toxic span detection. We used a stacked generalisation ensemble of five component models, with two distinct interpretations of the task. Two models attempted to predict binary word toxicity based on ngram sequences, whilst 3 categorical span based models were trained to predict toxic token labels based on complete sequence tokens. The five models' predictions were ensembled within an LSTM model. As well as describing the system, we perform error analysis to explore model performance in relation to textual features. The system described in this paper scored 0.6755 and ranked 26 th .",2021
zens-ney-2005-word,https://aclanthology.org/W05-0834,0,,,,,,,"Word Graphs for Statistical Machine Translation. Word graphs have various applications in the field of machine translation. Therefore it is important for machine translation systems to produce compact word graphs of high quality. We will describe the generation of word graphs for state of the art phrase-based statistical machine translation. We will use these word graphs to provide an analysis of the search process. We will evaluate the quality of the word graphs using the well-known graph word error rate. Additionally, we introduce the two novel graph-to-string criteria: the position-independent graph word error rate and the graph BLEU score. Experimental results are presented for two Chinese-English tasks: the small IWSLT task and the NIST large data track task. For both tasks, we achieve significant reductions of the graph error rate already with compact word graphs.",Word Graphs for Statistical Machine Translation,"Word graphs have various applications in the field of machine translation. Therefore it is important for machine translation systems to produce compact word graphs of high quality. We will describe the generation of word graphs for state of the art phrase-based statistical machine translation. We will use these word graphs to provide an analysis of the search process. We will evaluate the quality of the word graphs using the well-known graph word error rate. Additionally, we introduce the two novel graph-to-string criteria: the position-independent graph word error rate and the graph BLEU score. Experimental results are presented for two Chinese-English tasks: the small IWSLT task and the NIST large data track task. For both tasks, we achieve significant reductions of the graph error rate already with compact word graphs.",Word Graphs for Statistical Machine Translation,"Word graphs have various applications in the field of machine translation. Therefore it is important for machine translation systems to produce compact word graphs of high quality. We will describe the generation of word graphs for state of the art phrase-based statistical machine translation. We will use these word graphs to provide an analysis of the search process. We will evaluate the quality of the word graphs using the well-known graph word error rate. Additionally, we introduce the two novel graph-to-string criteria: the position-independent graph word error rate and the graph BLEU score. Experimental results are presented for two Chinese-English tasks: the small IWSLT task and the NIST large data track task. For both tasks, we achieve significant reductions of the graph error rate already with compact word graphs.","This work was partly funded by the European Union under the integrated project TC-Star (Technology and Corpora for Speech to Speech Translation, IST-2002-FP6-506738, http://www.tc-star.org).","Word Graphs for Statistical Machine Translation. Word graphs have various applications in the field of machine translation. Therefore it is important for machine translation systems to produce compact word graphs of high quality. We will describe the generation of word graphs for state of the art phrase-based statistical machine translation. We will use these word graphs to provide an analysis of the search process. We will evaluate the quality of the word graphs using the well-known graph word error rate. 
Additionally, we introduce the two novel graph-to-string criteria: the position-independent graph word error rate and the graph BLEU score. Experimental results are presented for two Chinese-English tasks: the small IWSLT task and the NIST large data track task. For both tasks, we achieve significant reductions of the graph error rate already with compact word graphs.",2005
boleda-etal-2013-intensionality,https://aclanthology.org/W13-0104,0,,,,,,,"Intensionality was only alleged: On adjective-noun composition in distributional semantics. Distributional semantics has very successfully modeled semantic phenomena at the word level, and recently interest has grown in extending it to capture the meaning of phrases via semantic composition. We present experiments in adjective-noun composition which (1) show that adjectival modification can be successfully modeled with distributional semantics, (2) show that composition models inspired by the semantics of higher-order predication fare better than those that perform simple feature union or intersection, (3) contrary to what the theoretical literature might lead one to expect, do not yield a distinction between intensional and non-intensional modification, and (4) suggest that head noun polysemy and whether the adjective corresponds to a typical attribute of the noun are relevant factors in the distributional representation of adjective phrases.",Intensionality was only alleged: On adjective-noun composition in distributional semantics,"Distributional semantics has very successfully modeled semantic phenomena at the word level, and recently interest has grown in extending it to capture the meaning of phrases via semantic composition. We present experiments in adjective-noun composition which (1) show that adjectival modification can be successfully modeled with distributional semantics, (2) show that composition models inspired by the semantics of higher-order predication fare better than those that perform simple feature union or intersection, (3) contrary to what the theoretical literature might lead one to expect, do not yield a distinction between intensional and non-intensional modification, and (4) suggest that head noun polysemy and whether the adjective corresponds to a typical attribute of the noun are relevant factors in the distributional representation of adjective phrases.",Intensionality was only alleged: On adjective-noun composition in distributional semantics,"Distributional semantics has very successfully modeled semantic phenomena at the word level, and recently interest has grown in extending it to capture the meaning of phrases via semantic composition. We present experiments in adjective-noun composition which (1) show that adjectival modification can be successfully modeled with distributional semantics, (2) show that composition models inspired by the semantics of higher-order predication fare better than those that perform simple feature union or intersection, (3) contrary to what the theoretical literature might lead one to expect, do not yield a distinction between intensional and non-intensional modification, and (4) suggest that head noun polysemy and whether the adjective corresponds to a typical attribute of the noun are relevant factors in the distributional representation of adjective phrases.","We thank Miquel Cornudella for help in constructing the dataset. We acknowledge the support of Spanish MICINN grant FFI2010-09464-E (McNally, Boleda), the ICREA Foundation (McNally), Catalan AGAUR grant 2010BP-A00070, MICINN grant TIN2009-14715-C04-04, EU grant PASCAL2, FP7-ICT-216886, the DARPA DEFT program under AFRL grant FA8750-13-2-0026 (Boleda) and the ERC under the 2011 Starting Independent Research Grant 283554 to the COMPOSES project (Baroni, Pham). 
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of DARPA, AFRL or the US government.","Intensionality was only alleged: On adjective-noun composition in distributional semantics. Distributional semantics has very successfully modeled semantic phenomena at the word level, and recently interest has grown in extending it to capture the meaning of phrases via semantic composition. We present experiments in adjective-noun composition which (1) show that adjectival modification can be successfully modeled with distributional semantics, (2) show that composition models inspired by the semantics of higher-order predication fare better than those that perform simple feature union or intersection, (3) contrary to what the theoretical literature might lead one to expect, do not yield a distinction between intensional and non-intensional modification, and (4) suggest that head noun polysemy and whether the adjective corresponds to a typical attribute of the noun are relevant factors in the distributional representation of adjective phrases.",2013
murveit-weintraub-1990-real,https://aclanthology.org/H90-1097,0,,,,,,,"Real-Time Speech Recognition Systems. SRI and U.C. Berkeley have begun a cooperative effort to develop a new architecture for real-time implementation of spoken language systems (SLS). Our goal is to develop fast speech recognition algorithms, and supporting hardware capable of recognizing continuous speech from a bigram or trigram based 20,000 word vocabulary or a 1,000 to 5,000 word SLS systems.
• We have designed eight special purpose VLSI chips for the HMM board, six chips at U.C. Berkeley for HMM beam search and viterbi processing, and two chips at SRI for interfacing to the grammar board.",Real-Time Speech Recognition Systems,"SRI and U.C. Berkeley have begun a cooperative effort to develop a new architecture for real-time implementation of spoken language systems (SLS). Our goal is to develop fast speech recognition algorithms, and supporting hardware capable of recognizing continuous speech from a bigram or trigram based 20,000 word vocabulary or a 1,000 to 5,000 word SLS systems.
• We have designed eight special purpose VLSI chips for the HMM board, six chips at U.C. Berkeley for HMM beam search and viterbi processing, and two chips at SRI for interfacing to the grammar board.",Real-Time Speech Recognition Systems,"SRI and U.C. Berkeley have begun a cooperative effort to develop a new architecture for real-time implementation of spoken language systems (SLS). Our goal is to develop fast speech recognition algorithms, and supporting hardware capable of recognizing continuous speech from a bigram or trigram based 20,000 word vocabulary or a 1,000 to 5,000 word SLS systems.
• We have designed eight special purpose VLSI chips for the HMM board, six chips at U.C. Berkeley for HMM beam search and viterbi processing, and two chips at SRI for interfacing to the grammar board.",,"Real-Time Speech Recognition Systems. SRI and U.C. Berkeley have begun a cooperative effort to develop a new architecture for real-time implementation of spoken language systems (SLS). Our goal is to develop fast speech recognition algorithms, and supporting hardware capable of recognizing continuous speech from a bigram or trigram based 20,000 word vocabulary or a 1,000 to 5,000 word SLS systems.
• We have designed eight special purpose VLSI chips for the HMM board, six chips at U.C. Berkeley for HMM beam search and viterbi processing, and two chips at SRI for interfacing to the grammar board.",1990
nogueira-cho-2017-task,https://aclanthology.org/D17-1061,0,,,,,,,"Task-Oriented Query Reformulation with Reinforcement Learning. Search engines play an important role in our everyday lives by assisting us in finding the information we need. When we input a complex query, however, results are often far from satisfactory. In this work, we introduce a query reformulation system based on a neural network that rewrites a query to maximize the number of relevant documents returned. We train this neural network with reinforcement learning. The actions correspond to selecting terms to build a reformulated query, and the reward is the document recall. We evaluate our approach on three datasets against strong baselines and show a relative improvement of 5-20% in terms of recall. Furthermore, we present a simple method to estimate a conservative upper-bound performance of a model in a particular environment and verify that there is still large room for improvements.",Task-Oriented Query Reformulation with Reinforcement Learning,"Search engines play an important role in our everyday lives by assisting us in finding the information we need. When we input a complex query, however, results are often far from satisfactory. In this work, we introduce a query reformulation system based on a neural network that rewrites a query to maximize the number of relevant documents returned. We train this neural network with reinforcement learning. The actions correspond to selecting terms to build a reformulated query, and the reward is the document recall. We evaluate our approach on three datasets against strong baselines and show a relative improvement of 5-20% in terms of recall. Furthermore, we present a simple method to estimate a conservative upper-bound performance of a model in a particular environment and verify that there is still large room for improvements.",Task-Oriented Query Reformulation with Reinforcement Learning,"Search engines play an important role in our everyday lives by assisting us in finding the information we need. When we input a complex query, however, results are often far from satisfactory. In this work, we introduce a query reformulation system based on a neural network that rewrites a query to maximize the number of relevant documents returned. We train this neural network with reinforcement learning. The actions correspond to selecting terms to build a reformulated query, and the reward is the document recall. We evaluate our approach on three datasets against strong baselines and show a relative improvement of 5-20% in terms of recall. Furthermore, we present a simple method to estimate a conservative upper-bound performance of a model in a particular environment and verify that there is still large room for improvements.","RN is funded by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES). KC thanks support by Facebook, Google and NVIDIA. This work was partly funded by the Defense Advanced Research Projects Agency (DARPA) D3M program. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.","Task-Oriented Query Reformulation with Reinforcement Learning. Search engines play an important role in our everyday lives by assisting us in finding the information we need. When we input a complex query, however, results are often far from satisfactory. 
In this work, we introduce a query reformulation system based on a neural network that rewrites a query to maximize the number of relevant documents returned. We train this neural network with reinforcement learning. The actions correspond to selecting terms to build a reformulated query, and the reward is the document recall. We evaluate our approach on three datasets against strong baselines and show a relative improvement of 5-20% in terms of recall. Furthermore, we present a simple method to estimate a conservative upper-bound performance of a model in a particular environment and verify that there is still large room for improvements.",2017
monsalve-etal-2019-assessing,https://aclanthology.org/W19-4010,0,,,,,,,"Assessing Back-Translation as a Corpus Generation Strategy for non-English Tasks: A Study in Reading Comprehension and Word Sense Disambiguation. Corpora curated by experts have sustained Natural Language Processing mainly in English, but the expensiveness of corpora creation is a barrier for the development in further languages. Thus, we propose a corpus generation strategy that only requires a machine translation system between English and the target language in both directions, where we filter the best translations by computing automatic translation metrics and the task performance score. By studying Reading Comprehension in Spanish and Word Sense Disambiguation in Portuguese, we identified that a more quality-oriented metric has high potential in the corpora selection without degrading the task performance. We conclude that it is possible to systematise the building of quality corpora using machine translation and automatic metrics, besides some prior effort to clean and process the data.",Assessing Back-Translation as a Corpus Generation Strategy for non-{E}nglish Tasks: A Study in Reading Comprehension and Word Sense Disambiguation,"Corpora curated by experts have sustained Natural Language Processing mainly in English, but the expensiveness of corpora creation is a barrier for the development in further languages. Thus, we propose a corpus generation strategy that only requires a machine translation system between English and the target language in both directions, where we filter the best translations by computing automatic translation metrics and the task performance score. By studying Reading Comprehension in Spanish and Word Sense Disambiguation in Portuguese, we identified that a more quality-oriented metric has high potential in the corpora selection without degrading the task performance. We conclude that it is possible to systematise the building of quality corpora using machine translation and automatic metrics, besides some prior effort to clean and process the data.",Assessing Back-Translation as a Corpus Generation Strategy for non-English Tasks: A Study in Reading Comprehension and Word Sense Disambiguation,"Corpora curated by experts have sustained Natural Language Processing mainly in English, but the expensiveness of corpora creation is a barrier for the development in further languages. Thus, we propose a corpus generation strategy that only requires a machine translation system between English and the target language in both directions, where we filter the best translations by computing automatic translation metrics and the task performance score. By studying Reading Comprehension in Spanish and Word Sense Disambiguation in Portuguese, we identified that a more quality-oriented metric has high potential in the corpora selection without degrading the task performance. We conclude that it is possible to systematise the building of quality corpora using machine translation and automatic metrics, besides some prior effort to clean and process the data.","This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 825299. Besides, we acknowledge the support of the NVIDIA Corporation with the donation of the Titan Xp GPU used for this study. 
Finally, the first author is granted by the ""Programa de apoyo al desarrollo de tesis de licenciatura"" (Support programme of undergraduate thesis development, PADET 2018, PUCP).","Assessing Back-Translation as a Corpus Generation Strategy for non-English Tasks: A Study in Reading Comprehension and Word Sense Disambiguation. Corpora curated by experts have sustained Natural Language Processing mainly in English, but the expensiveness of corpora creation is a barrier for the development in further languages. Thus, we propose a corpus generation strategy that only requires a machine translation system between English and the target language in both directions, where we filter the best translations by computing automatic translation metrics and the task performance score. By studying Reading Comprehension in Spanish and Word Sense Disambiguation in Portuguese, we identified that a more quality-oriented metric has high potential in the corpora selection without degrading the task performance. We conclude that it is possible to systematise the building of quality corpora using machine translation and automatic metrics, besides some prior effort to clean and process the data.",2019
prolo-2006-handling,https://aclanthology.org/W06-1520,0,,,,,,,"Handling Unlike Coordinated Phrases in TAG by Mixing Syntactic Category and Grammatical Function. Coordination of phrases of different syntactic categories has posed a problem for generative systems based only on syntactic categories. Although some prefer to treat them as exceptional cases that should require some extra mechanism (as for elliptical constructions), or to allow for unrestricted cross-category coordination, they can be naturally derived in a grammatical functional generative approach. In this paper we explore the idea of how mixing syntactic categories and grammatical functions in the label set of a Tree Adjoining Grammar allows us to develop grammars that elegantly handle both the cases of same- and cross-category coordination in a uniform way.",Handling Unlike Coordinated Phrases in {TAG} by Mixing Syntactic Category and Grammatical Function,"Coordination of phrases of different syntactic categories has posed a problem for generative systems based only on syntactic categories. Although some prefer to treat them as exceptional cases that should require some extra mechanism (as for elliptical constructions), or to allow for unrestricted cross-category coordination, they can be naturally derived in a grammatical functional generative approach. In this paper we explore the idea of how mixing syntactic categories and grammatical functions in the label set of a Tree Adjoining Grammar allows us to develop grammars that elegantly handle both the cases of same- and cross-category coordination in a uniform way.",Handling Unlike Coordinated Phrases in TAG by Mixing Syntactic Category and Grammatical Function,"Coordination of phrases of different syntactic categories has posed a problem for generative systems based only on syntactic categories. Although some prefer to treat them as exceptional cases that should require some extra mechanism (as for elliptical constructions), or to allow for unrestricted cross-category coordination, they can be naturally derived in a grammatical functional generative approach. In this paper we explore the idea of how mixing syntactic categories and grammatical functions in the label set of a Tree Adjoining Grammar allows us to develop grammars that elegantly handle both the cases of same- and cross-category coordination in a uniform way.",,"Handling Unlike Coordinated Phrases in TAG by Mixing Syntactic Category and Grammatical Function. Coordination of phrases of different syntactic categories has posed a problem for generative systems based only on syntactic categories. Although some prefer to treat them as exceptional cases that should require some extra mechanism (as for elliptical constructions), or to allow for unrestricted cross-category coordination, they can be naturally derived in a grammatical functional generative approach. In this paper we explore the idea of how mixing syntactic categories and grammatical functions in the label set of a Tree Adjoining Grammar allows us to develop grammars that elegantly handle both the cases of same- and cross-category coordination in a uniform way.",2006
bateman-etal-2002-brief,https://aclanthology.org/W02-1703,0,,,,,,,"A Brief Introduction to the GeM Annotation Schema for Complex Document Layout. In this paper we sketch the design, motivation and use of the GeM annotation scheme: an XML-based annotation framework for preparing corpora involving documents with complex layout of text, graphics, diagrams, layout and other navigational elements. We set out the basic organizational layers, contrast the technical approach with some other schemes for complex markup in the XML tradition, and indicate some of the applications we are pursuing.",A Brief Introduction to the {G}e{M} Annotation Schema for Complex Document Layout,"In this paper we sketch the design, motivation and use of the GeM annotation scheme: an XML-based annotation framework for preparing corpora involving documents with complex layout of text, graphics, diagrams, layout and other navigational elements. We set out the basic organizational layers, contrast the technical approach with some other schemes for complex markup in the XML tradition, and indicate some of the applications we are pursuing.",A Brief Introduction to the GeM Annotation Schema for Complex Document Layout,"In this paper we sketch the design, motivation and use of the GeM annotation scheme: an XML-based annotation framework for preparing corpora involving documents with complex layout of text, graphics, diagrams, layout and other navigational elements. We set out the basic organizational layers, contrast the technical approach with some other schemes for complex markup in the XML tradition, and indicate some of the applications we are pursuing.",,"A Brief Introduction to the GeM Annotation Schema for Complex Document Layout. In this paper we sketch the design, motivation and use of the GeM annotation scheme: an XML-based annotation framework for preparing corpora involving documents with complex layout of text, graphics, diagrams, layout and other navigational elements. We set out the basic organizational layers, contrast the technical approach with some other schemes for complex markup in the XML tradition, and indicate some of the applications we are pursuing.",2002
kreutzer-etal-2020-inference,https://aclanthology.org/2020.emnlp-main.465,0,,,,,,,"Inference Strategies for Machine Translation with Conditional Masking. Conditional masked language model (CMLM) training has proven successful for nonautoregressive and semi-autoregressive sequence generation tasks, such as machine translation. Given a trained CMLM, however, it is not clear what the best inference strategy is. We formulate masked inference as a factorization of conditional probabilities of partial sequences, show that this does not harm performance, and investigate a number of simple heuristics motivated by this perspective. We identify a thresholding strategy that has advantages over the standard ""mask-predict"" algorithm, and provide analyses of its behavior on machine translation tasks.",Inference Strategies for Machine Translation with Conditional Masking,"Conditional masked language model (CMLM) training has proven successful for nonautoregressive and semi-autoregressive sequence generation tasks, such as machine translation. Given a trained CMLM, however, it is not clear what the best inference strategy is. We formulate masked inference as a factorization of conditional probabilities of partial sequences, show that this does not harm performance, and investigate a number of simple heuristics motivated by this perspective. We identify a thresholding strategy that has advantages over the standard ""mask-predict"" algorithm, and provide analyses of its behavior on machine translation tasks.",Inference Strategies for Machine Translation with Conditional Masking,"Conditional masked language model (CMLM) training has proven successful for nonautoregressive and semi-autoregressive sequence generation tasks, such as machine translation. Given a trained CMLM, however, it is not clear what the best inference strategy is. We formulate masked inference as a factorization of conditional probabilities of partial sequences, show that this does not harm performance, and investigate a number of simple heuristics motivated by this perspective. We identify a thresholding strategy that has advantages over the standard ""mask-predict"" algorithm, and provide analyses of its behavior on machine translation tasks.",,"Inference Strategies for Machine Translation with Conditional Masking. Conditional masked language model (CMLM) training has proven successful for nonautoregressive and semi-autoregressive sequence generation tasks, such as machine translation. Given a trained CMLM, however, it is not clear what the best inference strategy is. We formulate masked inference as a factorization of conditional probabilities of partial sequences, show that this does not harm performance, and investigate a number of simple heuristics motivated by this perspective. We identify a thresholding strategy that has advantages over the standard ""mask-predict"" algorithm, and provide analyses of its behavior on machine translation tasks.",2020
knight-koehn-2003-whats,https://aclanthology.org/N03-5005,0,,,,,,,What's New in Statistical Machine Translation. ,What{'}s New in Statistical Machine Translation,,What's New in Statistical Machine Translation,,,What's New in Statistical Machine Translation. ,2003
miquel-ribe-rodriguez-2011-cultural,https://aclanthology.org/R11-1044,0,,,,,,,"Cultural Configuration of Wikipedia: measuring Autoreferentiality in Different Languages. Among the motivations to write in Wikipedia given by the current literature there is often coincidence, but none of the studies presents the hypothesis of contributing for the visibility of the own national or language related content. Similar to topical coverage studies, we outline a method which allows collecting the articles of this content, to later analyse them in several dimensions. To prove its universality, the tests are repeated for up to twenty language editions of Wikipedia. Finally, through the best indicators from each dimension we obtain an index which represents the degree of autoreferentiality of the encyclopedia. Last, we point out the impact of this fact and the risk of not considering its existence in the design of applications based on user generated content.",Cultural Configuration of {W}ikipedia: measuring Autoreferentiality in Different Languages,"Among the motivations to write in Wikipedia given by the current literature there is often coincidence, but none of the studies presents the hypothesis of contributing for the visibility of the own national or language related content. Similar to topical coverage studies, we outline a method which allows collecting the articles of this content, to later analyse them in several dimensions. To prove its universality, the tests are repeated for up to twenty language editions of Wikipedia. Finally, through the best indicators from each dimension we obtain an index which represents the degree of autoreferentiality of the encyclopedia. Last, we point out the impact of this fact and the risk of not considering its existence in the design of applications based on user generated content.",Cultural Configuration of Wikipedia: measuring Autoreferentiality in Different Languages,"Among the motivations to write in Wikipedia given by the current literature there is often coincidence, but none of the studies presents the hypothesis of contributing for the visibility of the own national or language related content. Similar to topical coverage studies, we outline a method which allows collecting the articles of this content, to later analyse them in several dimensions. To prove its universality, the tests are repeated for up to twenty language editions of Wikipedia. Finally, through the best indicators from each dimension we obtain an index which represents the degree of autoreferentiality of the encyclopedia. Last, we point out the impact of this fact and the risk of not considering its existence in the design of applications based on user generated content.","This work has been partially funded by KNOW2 (TIN2009-14715-C04-04) Eduard Aibar, Amical Viquipèdia, Joan Campàs, Marcos Faúndez. Diana Petri, Pere Tuset, Fina Ribé, Jordi Miquel, Joan Ribé, Peius Cotonat.","Cultural Configuration of Wikipedia: measuring Autoreferentiality in Different Languages. Among the motivations to write in Wikipedia given by the current literature there is often coincidence, but none of the studies presents the hypothesis of contributing for the visibility of the own national or language related content. Similar to topical coverage studies, we outline a method which allows collecting the articles of this content, to later analyse them in several dimensions. To prove its universality, the tests are repeated for up to twenty language editions of Wikipedia. 
Finally, through the best indicators from each dimension we obtain an index which represents the degree of autoreferentiality of the encyclopedia. Last, we point out the impact of this fact and the risk of not considering its existence in the design of applications based on user generated content.",2011
stambolieva-2011-parallel,https://aclanthology.org/W11-4306,0,,,,,,,"Parallel Corpora in Aspectual Studies of Non-Aspect Languages. The paper presents the first results, for Bulgarian and English, of a multilingual Trans-Verba project in progress at the NBU Laboratory for Language Technologies. The project explores the possibility to use Bulgarian translation equivalents in parallel corpora and translation memories as a metalanguage in assigning aspectual values to ""non-aspect"" language equivalents. The resulting subcorpora of Perfective Aspect and Imperfective Aspect units are then quantitatively analysed and concordanced to obtain parameters of aspectual build-up.",Parallel Corpora in Aspectual Studies of Non-Aspect Languages,"The paper presents the first results, for Bulgarian and English, of a multilingual Trans-Verba project in progress at the NBU Laboratory for Language Technologies. The project explores the possibility to use Bulgarian translation equivalents in parallel corpora and translation memories as a metalanguage in assigning aspectual values to ""non-aspect"" language equivalents. The resulting subcorpora of Perfective Aspect and Imperfective Aspect units are then quantitatively analysed and concordanced to obtain parameters of aspectual build-up.",Parallel Corpora in Aspectual Studies of Non-Aspect Languages,"The paper presents the first results, for Bulgarian and English, of a multilingual Trans-Verba project in progress at the NBU Laboratory for Language Technologies. The project explores the possibility to use Bulgarian translation equivalents in parallel corpora and translation memories as a metalanguage in assigning aspectual values to ""non-aspect"" language equivalents. The resulting subcorpora of Perfective Aspect and Imperfective Aspect units are then quantitatively analysed and concordanced to obtain parameters of aspectual build-up.",,"Parallel Corpora in Aspectual Studies of Non-Aspect Languages. The paper presents the first results, for Bulgarian and English, of a multilingual Trans-Verba project in progress at the NBU Laboratory for Language Technologies. The project explores the possibility to use Bulgarian translation equivalents in parallel corpora and translation memories as a metalanguage in assigning aspectual values to ""non-aspect"" language equivalents. The resulting subcorpora of Perfective Aspect and Imperfective Aspect units are then quantitatively analysed and concordanced to obtain parameters of aspectual build-up.",2011
lee-etal-2011-discriminative,https://aclanthology.org/P11-1089,0,,,,,,,"A Discriminative Model for Joint Morphological Disambiguation and Dependency Parsing. Most previous studies of morphological disambiguation and dependency parsing have been pursued independently. Morphological taggers operate on n-grams and do not take into account syntactic relations; parsers use the ""pipeline"" approach, assuming that morphological information has been separately obtained. However, in morphologically-rich languages, there is often considerable interaction between morphology and syntax, such that neither can be disambiguated without the other. In this paper, we propose a discriminative model that jointly infers morphological properties and syntactic structures. In evaluations on various highly-inflected languages, this joint model outperforms both a baseline tagger in morphological disambiguation, and a pipeline parser in head selection.",A Discriminative Model for Joint Morphological Disambiguation and Dependency Parsing,"Most previous studies of morphological disambiguation and dependency parsing have been pursued independently. Morphological taggers operate on n-grams and do not take into account syntactic relations; parsers use the ""pipeline"" approach, assuming that morphological information has been separately obtained. However, in morphologically-rich languages, there is often considerable interaction between morphology and syntax, such that neither can be disambiguated without the other. In this paper, we propose a discriminative model that jointly infers morphological properties and syntactic structures. In evaluations on various highly-inflected languages, this joint model outperforms both a baseline tagger in morphological disambiguation, and a pipeline parser in head selection.",A Discriminative Model for Joint Morphological Disambiguation and Dependency Parsing,"Most previous studies of morphological disambiguation and dependency parsing have been pursued independently. Morphological taggers operate on n-grams and do not take into account syntactic relations; parsers use the ""pipeline"" approach, assuming that morphological information has been separately obtained. However, in morphologically-rich languages, there is often considerable interaction between morphology and syntax, such that neither can be disambiguated without the other. In this paper, we propose a discriminative model that jointly infers morphological properties and syntactic structures. In evaluations on various highly-inflected languages, this joint model outperforms both a baseline tagger in morphological disambiguation, and a pipeline parser in head selection.","We thank David Bamman and Gregory Crane for their feedback and support. Part of this research was performed by the first author while visiting Perseus Digital Library at Tufts University, under the grants A Reading Environment for Arabic and Islamic Culture, Department of Education (P017A060068-08) and The Dynamic Lexicon: Cyberinfrastructure and the Automatic Analysis of Historical Languages, National Endowment for the Humanities (PR-50013-08). The latter two authors were supported by Army prime contract #W911NF-07-1-0216 and University of Pennsylvania subaward #103-548106; by SRI International subcontract #27-001338 and ARFL prime contract #FA8750-09-C-0181; and by the Center for Intelligent Information Retrieval. 
Any opinions, findings, and conclusions or recommendations expressed in this material are the authors' and do not necessarily reflect those of the sponsors.","A Discriminative Model for Joint Morphological Disambiguation and Dependency Parsing. Most previous studies of morphological disambiguation and dependency parsing have been pursued independently. Morphological taggers operate on n-grams and do not take into account syntactic relations; parsers use the ""pipeline"" approach, assuming that morphological information has been separately obtained. However, in morphologically-rich languages, there is often considerable interaction between morphology and syntax, such that neither can be disambiguated without the other. In this paper, we propose a discriminative model that jointly infers morphological properties and syntactic structures. In evaluations on various highly-inflected languages, this joint model outperforms both a baseline tagger in morphological disambiguation, and a pipeline parser in head selection.",2011
chang-etal-2013-constrained,https://aclanthology.org/D13-1057,0,,,,,,,"A Constrained Latent Variable Model for Coreference Resolution. Coreference resolution is a well known clustering task in Natural Language Processing. In this paper, we describe the Latent Left Linking model (L3M), a novel, principled, and linguistically motivated latent structured prediction approach to coreference resolution. We show that L3M admits efficient inference and can be augmented with knowledge-based constraints; we also present a fast stochastic gradient based learning. Experiments on ACE and Ontonotes data show that L3M and its constrained version, CL3M, are more accurate than several state-of-the-art approaches as well as some structured prediction models proposed in the literature.",A Constrained Latent Variable Model for Coreference Resolution,"Coreference resolution is a well known clustering task in Natural Language Processing. In this paper, we describe the Latent Left Linking model (L3M), a novel, principled, and linguistically motivated latent structured prediction approach to coreference resolution. We show that L3M admits efficient inference and can be augmented with knowledge-based constraints; we also present a fast stochastic gradient based learning. Experiments on ACE and Ontonotes data show that L3M and its constrained version, CL3M, are more accurate than several state-of-the-art approaches as well as some structured prediction models proposed in the literature.",A Constrained Latent Variable Model for Coreference Resolution,"Coreference resolution is a well known clustering task in Natural Language Processing. In this paper, we describe the Latent Left Linking model (L3M), a novel, principled, and linguistically motivated latent structured prediction approach to coreference resolution. We show that L3M admits efficient inference and can be augmented with knowledge-based constraints; we also present a fast stochastic gradient based learning. Experiments on ACE and Ontonotes data show that L3M and its constrained version, CL3M, are more accurate than several state-of-the-art approaches as well as some structured prediction models proposed in the literature.",,"A Constrained Latent Variable Model for Coreference Resolution. Coreference resolution is a well known clustering task in Natural Language Processing. In this paper, we describe the Latent Left Linking model (L3M), a novel, principled, and linguistically motivated latent structured prediction approach to coreference resolution. We show that L3M admits efficient inference and can be augmented with knowledge-based constraints; we also present a fast stochastic gradient based learning. Experiments on ACE and Ontonotes data show that L3M and its constrained version, CL3M, are more accurate than several state-of-the-art approaches as well as some structured prediction models proposed in the literature.",2013
appelt-hobbs-1990-making,https://aclanthology.org/H90-1012,0,,,,,,,"Making Abduction More Efficient. The TACITUS system uses a cost-based abduction scheme for finding and choosing among possible interpretations for natural language texts. Ordinary Prolog-style, backchaining deduction is augmented with the capability of making assumptions and of factoring two goal literals that are unifiable (see Hobbs et al., 1988).
Deduction is combinatorially explosive, and since the abduction scheme augments deduction with two more options at each node--assumption and factoring--it is even more explosive. We have been engaged in an empirical investigation of the behavior of this abductive scheme on a knowledge base of nearly 400 axioms, performing relatively sophisticated linguistic processing. So far, we have begun to experiment, with good results, with three different techniques for controlling abduction--a type hierarchy, unwinding or avoiding transitivity axioms, and various heuristics for reducing the branch factor of the search.",Making Abduction More Efficient,"The TACITUS system uses a cost-based abduction scheme for finding and choosing among possible interpretations for natural language texts. Ordinary Prolog-style, backchaining deduction is augmented with the capability of making assumptions and of factoring two goal literals that are unifiable (see Hobbs et al., 1988).
Deduction is combinatorially explosive, and since the abduction scheme augments deduction with two more options at each node--assumption and factoring--it is even more explosive. We have been engaged in an empirical investigation of the behavior of this abductive scheme on a knowledge base of nearly 400 axioms, performing relatively sophisticated linguistic processing. So far, we have begun to experiment, with good results, with three different techniques for controlling abduction--a type hierarchy, unwinding or avoiding transitivity axioms, and various heuristics for reducing the branch factor of the search.",Making Abduction More Efficient,"The TACITUS system uses a cost-based abduction scheme for finding and choosing among possible interpretations for natural language texts. Ordinary Prolog-style, backchaining deduction is augmented with the capability of making assumptions and of factoring two goal literals that are unifiable (see Hobbs et al., 1988).
Deduction is combinatorially explosive, and since the abduction scheme augments deduction with two more options at each node--assumption and factoring--it is even more explosive. We have been engaged in an empirical investigation of the behavior of this abductive scheme on a knowledge base of nearly 400 axioms, performing relatively sophisticated linguistic processing. So far, we have begun to experiment, with good results, with three different techniques for controlling abduction--a type hierarchy, unwinding or avoiding transitivity axioms, and various heuristics for reducing the branch factor of the search.",,"Making Abduction More Efficient. The TACITUS system uses a cost-based abduction scheme for finding and choosing among possible interpretations for natural language texts. Ordinary Prolog-style, backchaining deduction is augmented with the capability of making assumptions and of factoring two goal literals that are unifiable (see Hobbs et al., 1988).
Deduction is combinatorially explosive, and since the abduction scheme augments deduction with two more options at each node--assumption and factoring--it is even more explosive. We have been engaged in an empirical investigation of the behavior of this abductive scheme on a knowledge base of nearly 400 axioms, performing relatively sophisticated linguistic processing. So far, we have begun to experiment, with good results, with three different techniques for controlling abduction--a type hierarchy, unwinding or avoiding transitivity axioms, and various heuristics for reducing the branch factor of the search.",1990
roxas-borra-2000-panel,https://aclanthology.org/P00-1074,0,,,,,,,"Panel: Computational Linguistics Research on Philippine Languages. This is a paper that describes computational linguistic activities on Philippine languages. The Philippines is an archipelago with vast numbers of islands and numerous languages. The tasks of understanding, representing and implementing these languages require enormous work. An extensive amount of work has been done on understanding at least some of the major Philippine languages, but little has been done on the computational aspect. The majority of the latter has been for the purpose of machine translation.",Panel: Computational Linguistics Research on {P}hilippine Languages,"This is a paper that describes computational linguistic activities on Philippine languages. The Philippines is an archipelago with vast numbers of islands and numerous languages. The tasks of understanding, representing and implementing these languages require enormous work. An extensive amount of work has been done on understanding at least some of the major Philippine languages, but little has been done on the computational aspect. The majority of the latter has been for the purpose of machine translation.",Panel: Computational Linguistics Research on Philippine Languages,"This is a paper that describes computational linguistic activities on Philippine languages. The Philippines is an archipelago with vast numbers of islands and numerous languages. The tasks of understanding, representing and implementing these languages require enormous work. An extensive amount of work has been done on understanding at least some of the major Philippine languages, but little has been done on the computational aspect. The majority of the latter has been for the purpose of machine translation.",,"Panel: Computational Linguistics Research on Philippine Languages. This is a paper that describes computational linguistic activities on Philippine languages. The Philippines is an archipelago with vast numbers of islands and numerous languages. The tasks of understanding, representing and implementing these languages require enormous work. An extensive amount of work has been done on understanding at least some of the major Philippine languages, but little has been done on the computational aspect. The majority of the latter has been for the purpose of machine translation.",2000
r-l-m-2020-nitk,https://aclanthology.org/2020.fnp-1.9,0,,,,finance,,,"NITK NLP at FinCausal-2020 Task 1 Using BERT and Linear models.. FinCausal-2020 is the shared task which focuses on the causality detection of factual data for financial analysis. The financial data facts don't provide much explanation on the variability of these data. This paper aims to propose an efficient method to classify the data into one which has any financial cause or not. Many models were used to classify the data, out of which an SVM model gave an F-Score of 0.9435, while BERT with specific fine-tuning achieved the best results with an F-Score of 0.9677.",{NITK} {NLP} at {F}in{C}ausal-2020 Task 1 Using {BERT} and Linear models.,"FinCausal-2020 is the shared task which focuses on the causality detection of factual data for financial analysis. The financial data facts don't provide much explanation on the variability of these data. This paper aims to propose an efficient method to classify the data into one which has any financial cause or not. Many models were used to classify the data, out of which an SVM model gave an F-Score of 0.9435, while BERT with specific fine-tuning achieved the best results with an F-Score of 0.9677.",NITK NLP at FinCausal-2020 Task 1 Using BERT and Linear models.,"FinCausal-2020 is the shared task which focuses on the causality detection of factual data for financial analysis. The financial data facts don't provide much explanation on the variability of these data. This paper aims to propose an efficient method to classify the data into one which has any financial cause or not. Many models were used to classify the data, out of which an SVM model gave an F-Score of 0.9435, while BERT with specific fine-tuning achieved the best results with an F-Score of 0.9677.",,"NITK NLP at FinCausal-2020 Task 1 Using BERT and Linear models.. FinCausal-2020 is the shared task which focuses on the causality detection of factual data for financial analysis. The financial data facts don't provide much explanation on the variability of these data. This paper aims to propose an efficient method to classify the data into one which has any financial cause or not. Many models were used to classify the data, out of which an SVM model gave an F-Score of 0.9435, while BERT with specific fine-tuning achieved the best results with an F-Score of 0.9677.",2020
parida-etal-2020-odianlps,https://aclanthology.org/2020.wat-1.10,0,,,,,,,"ODIANLP's Participation in WAT2020. This paper describes the team (""ODIANLP"")'s submission to WAT 2020. We have participated in the English→Hindi Multimodal task and Indic task. We have used the state-of-the-art Transformer model for the translation task and InceptionResNetV2 for the Hindi Image Captioning task. Our submission tops in English→Hindi Multimodal task in its track and Odia↔English translation tasks. Also, our submissions performed well in the Indic Multilingual tasks.",{ODIANLP}{'}s Participation in {WAT}2020,"This paper describes the team (""ODIANLP"")'s submission to WAT 2020. We have participated in the English→Hindi Multimodal task and Indic task. We have used the state-of-the-art Transformer model for the translation task and InceptionResNetV2 for the Hindi Image Captioning task. Our submission tops in English→Hindi Multimodal task in its track and Odia↔English translation tasks. Also, our submissions performed well in the Indic Multilingual tasks.",ODIANLP's Participation in WAT2020,"This paper describes the team (""ODIANLP"")'s submission to WAT 2020. We have participated in the English→Hindi Multimodal task and Indic task. We have used the state-of-the-art Transformer model for the translation task and InceptionResNetV2 for the Hindi Image Captioning task. Our submission tops in English→Hindi Multimodal task in its track and Odia↔English translation tasks. Also, our submissions performed well in the Indic Multilingual tasks.","At Idiap, the work was supported by the EU H2020 project ""Real-time network, text, and speaker analytics for combating organized crime"" (ROXANNE), grant agreement: 833635. At Charles University, the work was supported by the grants 19-26934X (NEUREM3) of the Czech Science Foundation and ""Progress"" Q18+Q48 of Charles University, and using language resources distributed by the LINDAT/CLARIN project of the Ministry of","ODIANLP's Participation in WAT2020. This paper describes the team (""ODIANLP"")'s submission to WAT 2020. We have participated in the English→Hindi Multimodal task and Indic task. We have used the state-of-the-art Transformer model for the translation task and InceptionResNetV2 for the Hindi Image Captioning task. Our submission tops in English→Hindi Multimodal task in its track and Odia↔English translation tasks. Also, our submissions performed well in the Indic Multilingual tasks.",2020
filimonov-harper-2011-syntactic,https://aclanthology.org/D11-1064,0,,,,,,,"Syntactic Decision Tree LMs: Random Selection or Intelligent Design?. Decision trees have been applied to a variety of NLP tasks, including language modeling, for their ability to handle a variety of attributes and sparse context space. Moreover, forests (collections of decision trees) have been shown to substantially outperform individual decision trees. In this work, we investigate methods for combining trees in a forest, as well as methods for diversifying trees for the task of syntactic language modeling. We show that our tree interpolation technique outperforms the standard method used in the literature, and that, on this particular task, restricting tree contexts in a principled way produces smaller and better forests, with the best achieving an 8% relative reduction in Word Error Rate over an n-gram baseline.",Syntactic Decision Tree {LM}s: Random Selection or Intelligent Design?,"Decision trees have been applied to a variety of NLP tasks, including language modeling, for their ability to handle a variety of attributes and sparse context space. Moreover, forests (collections of decision trees) have been shown to substantially outperform individual decision trees. In this work, we investigate methods for combining trees in a forest, as well as methods for diversifying trees for the task of syntactic language modeling. We show that our tree interpolation technique outperforms the standard method used in the literature, and that, on this particular task, restricting tree contexts in a principled way produces smaller and better forests, with the best achieving an 8% relative reduction in Word Error Rate over an n-gram baseline.",Syntactic Decision Tree LMs: Random Selection or Intelligent Design?,"Decision trees have been applied to a variety of NLP tasks, including language modeling, for their ability to handle a variety of attributes and sparse context space. Moreover, forests (collections of decision trees) have been shown to substantially outperform individual decision trees. In this work, we investigate methods for combining trees in a forest, as well as methods for diversifying trees for the task of syntactic language modeling. We show that our tree interpolation technique outperforms the standard method used in the literature, and that, on this particular task, restricting tree contexts in a principled way produces smaller and better forests, with the best achieving an 8% relative reduction in Word Error Rate over an n-gram baseline.",We would like to thank Ariya Rastrow for providing word lattices for the ASR rescoring experiments.,"Syntactic Decision Tree LMs: Random Selection or Intelligent Design?. Decision trees have been applied to a variety of NLP tasks, including language modeling, for their ability to handle a variety of attributes and sparse context space. Moreover, forests (collections of decision trees) have been shown to substantially outperform individual decision trees. In this work, we investigate methods for combining trees in a forest, as well as methods for diversifying trees for the task of syntactic language modeling. We show that our tree interpolation technique outperforms the standard method used in the literature, and that, on this particular task, restricting tree contexts in a principled way produces smaller and better forests, with the best achieving an 8% relative reduction in Word Error Rate over an n-gram baseline.",2011
burlot-2019-lingua,https://aclanthology.org/W19-5310,0,,,,,,,"Lingua Custodia at WMT'19: Attempts to Control Terminology. This paper describes Lingua Custodia's submission to the WMT'19 news shared task for German-to-French on the topic of the EU elections. We report experiments on the adaptation of the terminology of a machine translation system to a specific topic, aimed at providing more accurate translations of specific entities like political parties and person names, given that the shared task provided no in-domain training parallel data dealing with the restricted topic. Our primary submission to the shared task uses backtranslation generated with a type of decoding allowing the insertion of constraints in the output in order to guarantee the correct translation of specific terms that are not necessarily observed in the data.",Lingua Custodia at {WMT}{'}19: Attempts to Control Terminology,"This paper describes Lingua Custodia's submission to the WMT'19 news shared task for German-to-French on the topic of the EU elections. We report experiments on the adaptation of the terminology of a machine translation system to a specific topic, aimed at providing more accurate translations of specific entities like political parties and person names, given that the shared task provided no in-domain training parallel data dealing with the restricted topic. Our primary submission to the shared task uses backtranslation generated with a type of decoding allowing the insertion of constraints in the output in order to guarantee the correct translation of specific terms that are not necessarily observed in the data.",Lingua Custodia at WMT'19: Attempts to Control Terminology,"This paper describes Lingua Custodia's submission to the WMT'19 news shared task for German-to-French on the topic of the EU elections. We report experiments on the adaptation of the terminology of a machine translation system to a specific topic, aimed at providing more accurate translations of specific entities like political parties and person names, given that the shared task provided no in-domain training parallel data dealing with the restricted topic. Our primary submission to the shared task uses backtranslation generated with a type of decoding allowing the insertion of constraints in the output in order to guarantee the correct translation of specific terms that are not necessarily observed in the data.",,"Lingua Custodia at WMT'19: Attempts to Control Terminology. This paper describes Lingua Custodia's submission to the WMT'19 news shared task for German-to-French on the topic of the EU elections. We report experiments on the adaptation of the terminology of a machine translation system to a specific topic, aimed at providing more accurate translations of specific entities like political parties and person names, given that the shared task provided no in-domain training parallel data dealing with the restricted topic. Our primary submission to the shared task uses backtranslation generated with a type of decoding allowing the insertion of constraints in the output in order to guarantee the correct translation of specific terms that are not necessarily observed in the data.",2019
fang-cohn-2017-model,https://aclanthology.org/P17-2093,0,,,,,,,"Model Transfer for Tagging Low-resource Languages using a Bilingual Dictionary. Cross-lingual model transfer is a compelling and popular method for predicting annotations in a low-resource language, whereby parallel corpora provide a bridge to a high-resource language and its associated annotated corpora. However, parallel data is not readily available for many languages, limiting the applicability of these approaches. We address these drawbacks in our framework which takes advantage of cross-lingual word embeddings trained solely on a high coverage bilingual dictionary. We propose a novel neural network model for joint training from both sources of data based on cross-lingual word embeddings, and show substantial empirical improvements over baseline techniques. We also propose several active learning heuristics, which result in improvements over competitive benchmark methods.",Model Transfer for Tagging Low-resource Languages using a Bilingual Dictionary,"Cross-lingual model transfer is a compelling and popular method for predicting annotations in a low-resource language, whereby parallel corpora provide a bridge to a high-resource language and its associated annotated corpora. However, parallel data is not readily available for many languages, limiting the applicability of these approaches. We address these drawbacks in our framework which takes advantage of cross-lingual word embeddings trained solely on a high coverage bilingual dictionary. We propose a novel neural network model for joint training from both sources of data based on cross-lingual word embeddings, and show substantial empirical improvements over baseline techniques. We also propose several active learning heuristics, which result in improvements over competitive benchmark methods.",Model Transfer for Tagging Low-resource Languages using a Bilingual Dictionary,"Cross-lingual model transfer is a compelling and popular method for predicting annotations in a low-resource language, whereby parallel corpora provide a bridge to a high-resource language and its associated annotated corpora. However, parallel data is not readily available for many languages, limiting the applicability of these approaches. We address these drawbacks in our framework which takes advantage of cross-lingual word embeddings trained solely on a high coverage bilingual dictionary. We propose a novel neural network model for joint training from both sources of data based on cross-lingual word embeddings, and show substantial empirical improvements over baseline techniques. We also propose several active learning heuristics, which result in improvements over competitive benchmark methods.",,"Model Transfer for Tagging Low-resource Languages using a Bilingual Dictionary. Cross-lingual model transfer is a compelling and popular method for predicting annotations in a low-resource language, whereby parallel corpora provide a bridge to a high-resource language and its associated annotated corpora. However, parallel data is not readily available for many languages, limiting the applicability of these approaches. We address these drawbacks in our framework which takes advantage of cross-lingual word embeddings trained solely on a high coverage bilingual dictionary. We propose a novel neural network model for joint training from both sources of data based on cross-lingual word embeddings, and show substantial empirical improvements over baseline techniques. 
We also propose several active learning heuristics, which result in improvements over competitive benchmark methods.",2017
mcintyre-1998-babel-testbed,https://aclanthology.org/P98-2137,0,,,,,,,"Babel: A Testbed for Research in Origins of Language. We believe that language is a complex adaptive system that emerges from adaptive interactions between language users and continues to evolve and adapt through repeated interactions. Our research looks at the mechanisms and processes involved in such emergence and adaptation. To provide a basis for our computer simulations, we have implemented an open-ended, extensible testbed called Babel which allows rapid construction of experiments and flexible visualization of results.",{B}abel: A Testbed for Research in Origins of Language,"We believe that language is a complex adaptive system that emerges from adaptive interactions between language users and continues to evolve and adapt through repeated interactions. Our research looks at the mechanisms and processes involved in such emergence and adaptation. To provide a basis for our computer simulations, we have implemented an open-ended, extensible testbed called Babel which allows rapid construction of experiments and flexible visualization of results.",Babel: A Testbed for Research in Origins of Language,"We believe that language is a complex adaptive system that emerges from adaptive interactions between language users and continues to evolve and adapt through repeated interactions. Our research looks at the mechanisms and processes involved in such emergence and adaptation. To provide a basis for our computer simulations, we have implemented an open-ended, extensible testbed called Babel which allows rapid construction of experiments and flexible visualization of results.",,"Babel: A Testbed for Research in Origins of Language. We believe that language is a complex adaptive system that emerges from adaptive interactions between language users and continues to evolve and adapt through repeated interactions. Our research looks at the mechanisms and processes involved in such emergence and adaptation. To provide a basis for our computer simulations, we have implemented an open-ended, extensible testbed called Babel which allows rapid construction of experiments and flexible visualization of results.",1998
gust-reddig-1982-logic,https://aclanthology.org/C82-2026,0,,,,,,,"A LOGIC-ORIENTED ATN: Grammar Knowledge as Part of the System's Knowledge. The system BACON (Berlin Automatic COnstruction for semantic Networks) is an experimental intelligent question-answering system with a natural language interface based on single sentence input. This system has been developed in the project ""Automatische Erstellung semantischer Netze"" (Automatic construction of semantic networks) at the Institute of Applied Computer Science at the Technical University of Berlin. The project was supported by the Ministry for Science and Technology (BMFT) of the Federal Republic of Germany.
Explanations of the system's structure:",A LOGIC-ORIENTED {ATN}: Grammar Knowledge as Part of the System{'}s Knowledge,"The system BACON (Berlin Automatic COnstruction for semantic Networks) is an experimental intelligent question-answering system with a natural language interface based on single sentence input. This system has been developed in the project ""Automatische Erstellung semantischer Netze"" (Automatic construction of semantic networks) at the Institute of Applied Computer Science at the Technical University of Berlin. The project was supported by the Ministry for Science and Technology (BMFT) of the Federal Republic of Germany.
Explanations of the system's structure:",A LOGIC-ORIENTED ATN: Grammar Knowledge as Part of the System's Knowledge,"The system BACON (Berlin Automatic COnstruction for semantic Networks) is an experimental intelligent question-answering system with a natural language interface based on single sentence input. This system has been developed in the project ""Automatische Erstellung semantischer Netze"" (Automatic construction of semantic networks) at the Institute of Applied Computer Science at the Technical University of Berlin. The project was supported by the Ministry for Science and Technology (BMFT) of the Federal Republic of Germany.
Explanations of the system's structure:",,"A LOGIC-ORIENTED ATN: Grammar Knowledge as Part of the System's Knowledge. The system BACON (Berlin Automatic COnstruction for semantic Networks) is an experimental intelligent question-answering system with a natural language interface based on single sentence input. This system has been developed in the project ""Automatische Erstellung semantischer Netze"" (Automatic construction of semantic networks) at the Institute of Applied Computer Science at the Technical University of Berlin. The project was supported by the Ministry for Science and Technology (BMFT) of the Federal Republic of Germany.
Explanations of the system's structure:",1982
gianfortoni-etal-2011-modeling,https://aclanthology.org/W11-2606,0,,,,,,,"Modeling of Stylistic Variation in Social Media with Stretchy Patterns. In this paper we describe a novel feature discovery technique that can be used to model stylistic variation in sociolects. While structural features offer much in terms of expressive power over simpler features used more frequently in machine learning approaches to modeling linguistic variation, they frequently come at an excessive cost in terms of feature space size expansion. We propose a novel form of structural features referred to as ""stretchy patterns"" that strike a balance between expressive power and compactness in order to enable modeling stylistic variation with reasonably small datasets. As an example we focus on the problem of modeling variation related to gender in personal blogs. Our evaluation demonstrates a significant improvement over standard baselines.",Modeling of Stylistic Variation in Social Media with Stretchy Patterns,"In this paper we describe a novel feature discovery technique that can be used to model stylistic variation in sociolects. While structural features offer much in terms of expressive power over simpler features used more frequently in machine learning approaches to modeling linguistic variation, they frequently come at an excessive cost in terms of feature space size expansion. We propose a novel form of structural features referred to as ""stretchy patterns"" that strike a balance between expressive power and compactness in order to enable modeling stylistic variation with reasonably small datasets. As an example we focus on the problem of modeling variation related to gender in personal blogs. Our evaluation demonstrates a significant improvement over standard baselines.",Modeling of Stylistic Variation in Social Media with Stretchy Patterns,"In this paper we describe a novel feature discovery technique that can be used to model stylistic variation in sociolects. While structural features offer much in terms of expressive power over simpler features used more frequently in machine learning approaches to modeling linguistic variation, they frequently come at an excessive cost in terms of feature space size expansion. We propose a novel form of structural features referred to as ""stretchy patterns"" that strike a balance between expressive power and compactness in order to enable modeling stylistic variation with reasonably small datasets. As an example we focus on the problem of modeling variation related to gender in personal blogs. Our evaluation demonstrates a significant improvement over standard baselines.",This research was funded by ONR grant N000141110221 and NSF DRL-0835426.,"Modeling of Stylistic Variation in Social Media with Stretchy Patterns. In this paper we describe a novel feature discovery technique that can be used to model stylistic variation in sociolects. While structural features offer much in terms of expressive power over simpler features used more frequently in machine learning approaches to modeling linguistic variation, they frequently come at an excessive cost in terms of feature space size expansion. We propose a novel form of structural features referred to as ""stretchy patterns"" that strike a balance between expressive power and compactness in order to enable modeling stylistic variation with reasonably small datasets. As an example we focus on the problem of modeling variation related to gender in personal blogs. 
Our evaluation demonstrates a significant improvement over standard baselines.",2011
spitkovsky-etal-2011-punctuation,https://aclanthology.org/W11-0303,0,,,,,,,"Punctuation: Making a Point in Unsupervised Dependency Parsing. We show how punctuation can be used to improve unsupervised dependency parsing. Our linguistic analysis confirms the strong connection between English punctuation and phrase boundaries in the Penn Treebank. However, approaches that naively include punctuation marks in the grammar (as if they were words) do not perform well with Klein and Manning's Dependency Model with Valence (DMV). Instead, we split a sentence at punctuation and impose parsing restrictions over its fragments. Our grammar inducer is trained on the Wall Street Journal (WSJ) and achieves 59.5% accuracy out-of-domain (Brown sentences with 100 or fewer words), more than 6% higher than the previous best results. Further evaluation, using the 2006/7 CoNLL sets, reveals that punctuation aids grammar induction in 17 of 18 languages, for an overall average net gain of 1.3%. Some of this improvement is from training, but more than half is from parsing with induced constraints, in inference. Punctuation-aware decoding works with existing (even already-trained) parsing models and always increased accuracy in our experiments.",{P}unctuation: Making a Point in Unsupervised Dependency Parsing,"We show how punctuation can be used to improve unsupervised dependency parsing. Our linguistic analysis confirms the strong connection between English punctuation and phrase boundaries in the Penn Treebank. However, approaches that naively include punctuation marks in the grammar (as if they were words) do not perform well with Klein and Manning's Dependency Model with Valence (DMV). Instead, we split a sentence at punctuation and impose parsing restrictions over its fragments. Our grammar inducer is trained on the Wall Street Journal (WSJ) and achieves 59.5% accuracy out-of-domain (Brown sentences with 100 or fewer words), more than 6% higher than the previous best results. Further evaluation, using the 2006/7 CoNLL sets, reveals that punctuation aids grammar induction in 17 of 18 languages, for an overall average net gain of 1.3%. Some of this improvement is from training, but more than half is from parsing with induced constraints, in inference. Punctuation-aware decoding works with existing (even already-trained) parsing models and always increased accuracy in our experiments.",Punctuation: Making a Point in Unsupervised Dependency Parsing,"We show how punctuation can be used to improve unsupervised dependency parsing. Our linguistic analysis confirms the strong connection between English punctuation and phrase boundaries in the Penn Treebank. However, approaches that naively include punctuation marks in the grammar (as if they were words) do not perform well with Klein and Manning's Dependency Model with Valence (DMV). Instead, we split a sentence at punctuation and impose parsing restrictions over its fragments. Our grammar inducer is trained on the Wall Street Journal (WSJ) and achieves 59.5% accuracy out-of-domain (Brown sentences with 100 or fewer words), more than 6% higher than the previous best results. Further evaluation, using the 2006/7 CoNLL sets, reveals that punctuation aids grammar induction in 17 of 18 languages, for an overall average net gain of 1.3%. Some of this improvement is from training, but more than half is from parsing with induced constraints, in inference. 
Punctuation-aware decoding works with existing (even already-trained) parsing models and always increased accuracy in our experiments.","Partially funded by the Air Force Research Laboratory (AFRL), under prime contract no. FA8750-09-C-0181, and by NSF, via award #IIS-0811974. We thank Omri Abend, Slav Petrov and anonymous reviewers for many helpful suggestions, and we are especially grateful to Jenny R. Finkel for shaming us into using punctuation, to Christopher D. Manning for reminding us to explore ""punctuation as words"" baselines, and to Noah A. Smith for encouraging us to test against languages other than English.","Punctuation: Making a Point in Unsupervised Dependency Parsing. We show how punctuation can be used to improve unsupervised dependency parsing. Our linguistic analysis confirms the strong connection between English punctuation and phrase boundaries in the Penn Treebank. However, approaches that naively include punctuation marks in the grammar (as if they were words) do not perform well with Klein and Manning's Dependency Model with Valence (DMV). Instead, we split a sentence at punctuation and impose parsing restrictions over its fragments. Our grammar inducer is trained on the Wall Street Journal (WSJ) and achieves 59.5% accuracy out-of-domain (Brown sentences with 100 or fewer words), more than 6% higher than the previous best results. Further evaluation, using the 2006/7 CoNLL sets, reveals that punctuation aids grammar induction in 17 of 18 languages, for an overall average net gain of 1.3%. Some of this improvement is from training, but more than half is from parsing with induced constraints, in inference. Punctuation-aware decoding works with existing (even already-trained) parsing models and always increased accuracy in our experiments.",2011
rasanen-driesen-2009-comparison,https://aclanthology.org/W09-4640,0,,,,,,,"A comparison and combination of segmental and fixed-frame signal representations in NMF-based word recognition. Segmental and fixed-frame signal representations were compared in different noise conditions in a weakly supervised word recognition task using a non-negative matrix factorization (NMF) framework. The experiments show that fixed-frame windowing results in better recognition rates with clean signals. When noise is introduced to the system, robustness of segmental signal representations becomes useful, decreasing the overall word error rate. It is shown that a combination of fixed-frame and segmental representations yields the best recognition rates in different noise conditions. An entropy based method for dynamically adjusting the weight between representations is also introduced, leading to near-optimal weighting and therefore enhanced recognition rates in varying SNR conditions.",A comparison and combination of segmental and fixed-frame signal representations in {NMF}-based word recognition,"Segmental and fixed-frame signal representations were compared in different noise conditions in a weakly supervised word recognition task using a non-negative matrix factorization (NMF) framework. The experiments show that fixed-frame windowing results in better recognition rates with clean signals. When noise is introduced to the system, robustness of segmental signal representations becomes useful, decreasing the overall word error rate. It is shown that a combination of fixed-frame and segmental representations yields the best recognition rates in different noise conditions. An entropy based method for dynamically adjusting the weight between representations is also introduced, leading to near-optimal weighting and therefore enhanced recognition rates in varying SNR conditions.",A comparison and combination of segmental and fixed-frame signal representations in NMF-based word recognition,"Segmental and fixed-frame signal representations were compared in different noise conditions in a weakly supervised word recognition task using a non-negative matrix factorization (NMF) framework. The experiments show that fixed-frame windowing results in better recognition rates with clean signals. When noise is introduced to the system, robustness of segmental signal representations becomes useful, decreasing the overall word error rate. It is shown that a combination of fixed-frame and segmental representations yields the best recognition rates in different noise conditions. An entropy based method for dynamically adjusting the weight between representations is also introduced, leading to near-optimal weighting and therefore enhanced recognition rates in varying SNR conditions.","This research is funded as part of the EU FP6 FET project Acquisition of Communication and Recognition Skills (ACORNS), contract no. FP6-034362.","A comparison and combination of segmental and fixed-frame signal representations in NMF-based word recognition. Segmental and fixed-frame signal representations were compared in different noise conditions in a weakly supervised word recognition task using a non-negative matrix factorization (NMF) framework. The experiments show that fixed-frame windowing results in better recognition rates with clean signals. When noise is introduced to the system, robustness of segmental signal representations becomes useful, decreasing the overall word error rate. 
It is shown that a combination of fixed-frame and segmental representations yields the best recognition rates in different noise conditions. An entropy based method for dynamically adjusting the weight between representations is also introduced, leading to near-optimal weighting and therefore enhanced recognition rates in varying SNR conditions.",2009
wang-etal-2019-stac,https://aclanthology.org/W19-2608,0,,,,,,,"STAC: Science Toolkit Based on Chinese Idiom Knowledge Graph. Chinese idioms (Cheng Yu) have seen five thousand years' history and culture of China, meanwhile they contain large number of scientific achievement of ancient China. However, existing Chinese online idiom dictionaries have limited function for scientific exploration. In this paper, we first construct a Chinese idiom knowledge graph by extracting domains and dynasties and associating them with idioms, and based on the idiom knowledge graph, we propose a Science Toolkit for Ancient China (STAC) aiming to support scientific exploration. In the STAC toolkit, idiom navigator helps users explore overall scientific progress from idiom perspective with visualization tools, and idiom card and idiom QA shorten action path and avoid thinking being interrupted while users are reading and writing. The current STAC toolkit is deployed at",{STAC}: Science Toolkit Based on {C}hinese Idiom Knowledge Graph,"Chinese idioms (Cheng Yu) have seen five thousand years' history and culture of China, meanwhile they contain large number of scientific achievement of ancient China. However, existing Chinese online idiom dictionaries have limited function for scientific exploration. In this paper, we first construct a Chinese idiom knowledge graph by extracting domains and dynasties and associating them with idioms, and based on the idiom knowledge graph, we propose a Science Toolkit for Ancient China (STAC) aiming to support scientific exploration. In the STAC toolkit, idiom navigator helps users explore overall scientific progress from idiom perspective with visualization tools, and idiom card and idiom QA shorten action path and avoid thinking being interrupted while users are reading and writing. The current STAC toolkit is deployed at",STAC: Science Toolkit Based on Chinese Idiom Knowledge Graph,"Chinese idioms (Cheng Yu) have seen five thousand years' history and culture of China, meanwhile they contain large number of scientific achievement of ancient China. However, existing Chinese online idiom dictionaries have limited function for scientific exploration. In this paper, we first construct a Chinese idiom knowledge graph by extracting domains and dynasties and associating them with idioms, and based on the idiom knowledge graph, we propose a Science Toolkit for Ancient China (STAC) aiming to support scientific exploration. In the STAC toolkit, idiom navigator helps users explore overall scientific progress from idiom perspective with visualization tools, and idiom card and idiom QA shorten action path and avoid thinking being interrupted while users are reading and writing. The current STAC toolkit is deployed at",,"STAC: Science Toolkit Based on Chinese Idiom Knowledge Graph. Chinese idioms (Cheng Yu) have seen five thousand years' history and culture of China, meanwhile they contain large number of scientific achievement of ancient China. However, existing Chinese online idiom dictionaries have limited function for scientific exploration. In this paper, we first construct a Chinese idiom knowledge graph by extracting domains and dynasties and associating them with idioms, and based on the idiom knowledge graph, we propose a Science Toolkit for Ancient China (STAC) aiming to support scientific exploration. 
In the STAC toolkit, idiom navigator helps users explore overall scientific progress from idiom perspective with visualization tools, and idiom card and idiom QA shorten action path and avoid thinking being interrupted while users are reading and writing. The current STAC toolkit is deployed at",2019
lopez-etal-2011-automatic,https://aclanthology.org/R11-1106,0,,,,,,,"Automatic titling of Articles Using Position and Statistical Information. This paper describes a system facilitating information retrieval in a set of textual documents by tackling the automatic titling and subtitling issue. Automatic titling here consists in extracting relevant noun phrases from texts as candidate titles. An original approach combining statistical criteria and noun phrases positions in the text helps collecting relevant titles and subtitles. So, the user may benefit from an outline of all the subjects evoked in a mass of documents, and easily find the information he/she is looking for. An evaluation on real data shows that the solutions given by this automatic titling approach are relevant.",Automatic titling of Articles Using Position and Statistical Information,"This paper describes a system facilitating information retrieval in a set of textual documents by tackling the automatic titling and subtitling issue. Automatic titling here consists in extracting relevant noun phrases from texts as candidate titles. An original approach combining statistical criteria and noun phrases positions in the text helps collecting relevant titles and subtitles. So, the user may benefit from an outline of all the subjects evoked in a mass of documents, and easily find the information he/she is looking for. An evaluation on real data shows that the solutions given by this automatic titling approach are relevant.",Automatic titling of Articles Using Position and Statistical Information,"This paper describes a system facilitating information retrieval in a set of textual documents by tackling the automatic titling and subtitling issue. Automatic titling here consists in extracting relevant noun phrases from texts as candidate titles. An original approach combining statistical criteria and noun phrases positions in the text helps collecting relevant titles and subtitles. So, the user may benefit from an outline of all the subjects evoked in a mass of documents, and easily find the information he/she is looking for. An evaluation on real data shows that the solutions given by this automatic titling approach are relevant.",,"Automatic titling of Articles Using Position and Statistical Information. This paper describes a system facilitating information retrieval in a set of textual documents by tackling the automatic titling and subtitling issue. Automatic titling here consists in extracting relevant noun phrases from texts as candidate titles. An original approach combining statistical criteria and noun phrases positions in the text helps collecting relevant titles and subtitles. So, the user may benefit from an outline of all the subjects evoked in a mass of documents, and easily find the information he/she is looking for. An evaluation on real data shows that the solutions given by this automatic titling approach are relevant.",2011
aksenova-etal-2016-morphotactics,https://aclanthology.org/W16-2019,0,,,,,,,"Morphotactics as Tier-Based Strictly Local Dependencies. It is commonly accepted that morphological dependencies are finite-state in nature. We argue that the upper bound on morphological expressivity is much lower. Drawing on technical results from computational phonology, we show that a variety of morphotactic phenomena are tier-based strictly local and do not fall into weaker subclasses such as the strictly local or strictly piecewise languages. Since the tier-based strictly local languages are learnable in the limit from positive texts, this marks a first important step towards general machine learning algorithms for morphology. Furthermore, the limitation to tier-based strictly local languages explains typological gaps that are puzzling from a purely linguistic perspective.",Morphotactics as Tier-Based Strictly Local Dependencies,"It is commonly accepted that morphological dependencies are finite-state in nature. We argue that the upper bound on morphological expressivity is much lower. Drawing on technical results from computational phonology, we show that a variety of morphotactic phenomena are tier-based strictly local and do not fall into weaker subclasses such as the strictly local or strictly piecewise languages. Since the tier-based strictly local languages are learnable in the limit from positive texts, this marks a first important step towards general machine learning algorithms for morphology. Furthermore, the limitation to tier-based strictly local languages explains typological gaps that are puzzling from a purely linguistic perspective.",Morphotactics as Tier-Based Strictly Local Dependencies,"It is commonly accepted that morphological dependencies are finite-state in nature. We argue that the upper bound on morphological expressivity is much lower. Drawing on technical results from computational phonology, we show that a variety of morphotactic phenomena are tier-based strictly local and do not fall into weaker subclasses such as the strictly local or strictly piecewise languages. Since the tier-based strictly local languages are learnable in the limit from positive texts, this marks a first important step towards general machine learning algorithms for morphology. Furthermore, the limitation to tier-based strictly local languages explains typological gaps that are puzzling from a purely linguistic perspective.",,"Morphotactics as Tier-Based Strictly Local Dependencies. It is commonly accepted that morphological dependencies are finite-state in nature. We argue that the upper bound on morphological expressivity is much lower. Drawing on technical results from computational phonology, we show that a variety of morphotactic phenomena are tier-based strictly local and do not fall into weaker subclasses such as the strictly local or strictly piecewise languages. Since the tier-based strictly local languages are learnable in the limit from positive texts, this marks a first important step towards general machine learning algorithms for morphology. Furthermore, the limitation to tier-based strictly local languages explains typological gaps that are puzzling from a purely linguistic perspective.",2016
chisholm-etal-2017-learning,https://aclanthology.org/E17-1060,0,,,,,,,"Learning to generate one-sentence biographies from Wikidata. We investigate the generation of one-sentence Wikipedia biographies from facts derived from Wikidata slot-value pairs. We train a recurrent neural network sequence-to-sequence model with attention to select facts and generate textual summaries. Our model incorporates a novel secondary objective that helps ensure it generates sentences that contain the input facts. The model achieves a BLEU score of 41, improving significantly upon the vanilla sequence-to-sequence model and scoring roughly twice that of a simple template baseline. Human preference evaluation suggests the model is nearly as good as the Wikipedia reference. Manual analysis explores content selection, suggesting the model can trade the ability to infer knowledge against the risk of hallucinating incorrect information.",Learning to generate one-sentence biographies from {W}ikidata,"We investigate the generation of one-sentence Wikipedia biographies from facts derived from Wikidata slot-value pairs. We train a recurrent neural network sequence-to-sequence model with attention to select facts and generate textual summaries. Our model incorporates a novel secondary objective that helps ensure it generates sentences that contain the input facts. The model achieves a BLEU score of 41, improving significantly upon the vanilla sequence-to-sequence model and scoring roughly twice that of a simple template baseline. Human preference evaluation suggests the model is nearly as good as the Wikipedia reference. Manual analysis explores content selection, suggesting the model can trade the ability to infer knowledge against the risk of hallucinating incorrect information.",Learning to generate one-sentence biographies from Wikidata,"We investigate the generation of one-sentence Wikipedia biographies from facts derived from Wikidata slot-value pairs. We train a recurrent neural network sequence-to-sequence model with attention to select facts and generate textual summaries. Our model incorporates a novel secondary objective that helps ensure it generates sentences that contain the input facts. The model achieves a BLEU score of 41, improving significantly upon the vanilla sequence-to-sequence model and scoring roughly twice that of a simple template baseline. Human preference evaluation suggests the model is nearly as good as the Wikipedia reference. Manual analysis explores content selection, suggesting the model can trade the ability to infer knowledge against the risk of hallucinating incorrect information.","This work was supported by a Google Faculty Research Award (Chisholm) and an Australian Research Council Discovery Early Career Researcher Award (DE120102900, Hachey). Many thanks to reviewers for insightful comments and suggestions, and to Glen Pink, Kellie Webster, Art Harol and Bo Han for feedback at various stages.","Learning to generate one-sentence biographies from Wikidata. We investigate the generation of one-sentence Wikipedia biographies from facts derived from Wikidata slot-value pairs. We train a recurrent neural network sequence-to-sequence model with attention to select facts and generate textual summaries. Our model incorporates a novel secondary objective that helps ensure it generates sentences that contain the input facts. The model achieves a BLEU score of 41, improving significantly upon the vanilla sequence-to-sequence model and scoring roughly twice that of a simple template baseline. 
Human preference evaluation suggests the model is nearly as good as the Wikipedia reference. Manual analysis explores content selection, suggesting the model can trade the ability to infer knowledge against the risk of hallucinating incorrect information.",2017
chiang-etal-2021-improved,https://aclanthology.org/2021.rocling-1.38,1,,,,health,,,"Improved Text Classification of Long-term Care Materials. Aging populations have posed a challenge to many countries including Taiwan, and with them comes the issue of long-term care. Given the current context, the aim of this study was to explore the hotly-discussed subtopics in the field of long-term care, and identify its features through NLP. Texts from forums and websites were utilized for data collection and analysis. The study applied TF-IDF, the logistic regression model, and the naive Bayes classifier to process data. In sum, the results showed that it reached an F1-score of 0.92 in identification, and a best accuracy of 0.71 in classification. Results of the study found that apart from TF-IDF features, certain words could be elicited as favorable features in classification. The results of this study could be used as a reference for future long-term care related applications.",Improved Text Classification of Long-term Care Materials,"Aging populations have posed a challenge to many countries including Taiwan, and with them comes the issue of long-term care. Given the current context, the aim of this study was to explore the hotly-discussed subtopics in the field of long-term care, and identify its features through NLP. Texts from forums and websites were utilized for data collection and analysis. The study applied TF-IDF, the logistic regression model, and the naive Bayes classifier to process data. In sum, the results showed that it reached an F1-score of 0.92 in identification, and a best accuracy of 0.71 in classification. Results of the study found that apart from TF-IDF features, certain words could be elicited as favorable features in classification. The results of this study could be used as a reference for future long-term care related applications.",Improved Text Classification of Long-term Care Materials,"Aging populations have posed a challenge to many countries including Taiwan, and with them comes the issue of long-term care. Given the current context, the aim of this study was to explore the hotly-discussed subtopics in the field of long-term care, and identify its features through NLP. Texts from forums and websites were utilized for data collection and analysis. The study applied TF-IDF, the logistic regression model, and the naive Bayes classifier to process data. In sum, the results showed that it reached an F1-score of 0.92 in identification, and a best accuracy of 0.71 in classification. Results of the study found that apart from TF-IDF features, certain words could be elicited as favorable features in classification. The results of this study could be used as a reference for future long-term care related applications.",,"Improved Text Classification of Long-term Care Materials. Aging populations have posed a challenge to many countries including Taiwan, and with them comes the issue of long-term care. Given the current context, the aim of this study was to explore the hotly-discussed subtopics in the field of long-term care, and identify its features through NLP. Texts from forums and websites were utilized for data collection and analysis. The study applied TF-IDF, the logistic regression model, and the naive Bayes classifier to process data. In sum, the results showed that it reached an F1-score of 0.92 in identification, and a best accuracy of 0.71 in classification. 
Results of the study found that apart from TF-IDF features, certain words could be elicited as favorable features in classification. The results of this study could be used as a reference for future long-term care related applications.",2021
lukasik-etal-2016-hawkes,https://aclanthology.org/P16-2064,1,,,,disinformation_and_fake_news,,,"Hawkes Processes for Continuous Time Sequence Classification: an Application to Rumour Stance Classification in Twitter. Classification of temporal textual data sequences is a common task in various domains such as social media and the Web. In this paper we propose to use Hawkes Processes for classifying sequences of temporal textual data, which exploit both temporal and textual information. Our experiments on rumour stance classification on four Twitter datasets show the importance of using the temporal information of tweets along with the textual content.",{H}awkes Processes for Continuous Time Sequence Classification: an Application to Rumour Stance Classification in {T}witter,"Classification of temporal textual data sequences is a common task in various domains such as social media and the Web. In this paper we propose to use Hawkes Processes for classifying sequences of temporal textual data, which exploit both temporal and textual information. Our experiments on rumour stance classification on four Twitter datasets show the importance of using the temporal information of tweets along with the textual content.",Hawkes Processes for Continuous Time Sequence Classification: an Application to Rumour Stance Classification in Twitter,"Classification of temporal textual data sequences is a common task in various domains such as social media and the Web. In this paper we propose to use Hawkes Processes for classifying sequences of temporal textual data, which exploit both temporal and textual information. Our experiments on rumour stance classification on four Twitter datasets show the importance of using the temporal information of tweets along with the textual content.",The work was supported by the European Union under grant agreement No. 611233 PHEME. Cohn was supported by an ARC Future Fellowship scheme (project number FT130101105).,"Hawkes Processes for Continuous Time Sequence Classification: an Application to Rumour Stance Classification in Twitter. Classification of temporal textual data sequences is a common task in various domains such as social media and the Web. In this paper we propose to use Hawkes Processes for classifying sequences of temporal textual data, which exploit both temporal and textual information. Our experiments on rumour stance classification on four Twitter datasets show the importance of using the temporal information of tweets along with the textual content.",2016
kwong-2009-graphemic,https://aclanthology.org/W09-3537,0,,,,,,,"Graphemic Approximation of Phonological Context for English-Chinese Transliteration. Although direct orthographic mapping has been shown to outperform phoneme-based methods in English-to-Chinese (E2C) transliteration, it is observed that phonological context plays an important role in resolving graphemic ambiguity. In this paper, we investigate the use of surface graphemic features to approximate local phonological context for E2C. In the absence of an explicit phonemic representation of the English source names, experiments show that the previous and next character of a given English segment could effectively capture the local context affecting its expected pronunciation, and thus its rendition in Chinese.",Graphemic Approximation of Phonological Context for {E}nglish-{C}hinese Transliteration,"Although direct orthographic mapping has been shown to outperform phoneme-based methods in English-to-Chinese (E2C) transliteration, it is observed that phonological context plays an important role in resolving graphemic ambiguity. In this paper, we investigate the use of surface graphemic features to approximate local phonological context for E2C. In the absence of an explicit phonemic representation of the English source names, experiments show that the previous and next character of a given English segment could effectively capture the local context affecting its expected pronunciation, and thus its rendition in Chinese.",Graphemic Approximation of Phonological Context for English-Chinese Transliteration,"Although direct orthographic mapping has been shown to outperform phoneme-based methods in English-to-Chinese (E2C) transliteration, it is observed that phonological context plays an important role in resolving graphemic ambiguity. In this paper, we investigate the use of surface graphemic features to approximate local phonological context for E2C. In the absence of an explicit phonemic representation of the English source names, experiments show that the previous and next character of a given English segment could effectively capture the local context affecting its expected pronunciation, and thus its rendition in Chinese.",The work described in this paper was substantially supported by a grant from City University of Hong Kong (Project No. 7002203).,"Graphemic Approximation of Phonological Context for English-Chinese Transliteration. Although direct orthographic mapping has been shown to outperform phoneme-based methods in English-to-Chinese (E2C) transliteration, it is observed that phonological context plays an important role in resolving graphemic ambiguity. In this paper, we investigate the use of surface graphemic features to approximate local phonological context for E2C. In the absence of an explicit phonemic representation of the English source names, experiments show that the previous and next character of a given English segment could effectively capture the local context affecting its expected pronunciation, and thus its rendition in Chinese.",2009
park-rim-2005-maximum,https://aclanthology.org/W05-0632,0,,,,,,,"Maximum Entropy Based Semantic Role Labeling. The semantic role labeling (SRL) refers to finding the semantic relation (e.g. Agent, Patient, etc.) between a predicate and syntactic constituents in the sentences. Especially, with the argument information of the predicate, we can derive the predicate-argument structures, which are useful for the applications such as automatic information extraction. As previous work on the SRL, there have been many machine learning approaches. (Gildea and Jurafsky, 2002; Pradhan et al., 2003; Lim et al., 2004) .",Maximum Entropy Based Semantic Role Labeling,"The semantic role labeling (SRL) refers to finding the semantic relation (e.g. Agent, Patient, etc.) between a predicate and syntactic constituents in the sentences. Especially, with the argument information of the predicate, we can derive the predicate-argument structures, which are useful for the applications such as automatic information extraction. As previous work on the SRL, there have been many machine learning approaches. (Gildea and Jurafsky, 2002; Pradhan et al., 2003; Lim et al., 2004) .",Maximum Entropy Based Semantic Role Labeling,"The semantic role labeling (SRL) refers to finding the semantic relation (e.g. Agent, Patient, etc.) between a predicate and syntactic constituents in the sentences. Especially, with the argument information of the predicate, we can derive the predicate-argument structures, which are useful for the applications such as automatic information extraction. As previous work on the SRL, there have been many machine learning approaches. (Gildea and Jurafsky, 2002; Pradhan et al., 2003; Lim et al., 2004) .",,"Maximum Entropy Based Semantic Role Labeling. The semantic role labeling (SRL) refers to finding the semantic relation (e.g. Agent, Patient, etc.) between a predicate and syntactic constituents in the sentences. Especially, with the argument information of the predicate, we can derive the predicate-argument structures, which are useful for the applications such as automatic information extraction. As previous work on the SRL, there have been many machine learning approaches. (Gildea and Jurafsky, 2002; Pradhan et al., 2003; Lim et al., 2004) .",2005
gartner-etal-2015-multi,https://aclanthology.org/P15-4005,0,,,,,,,"Multi-modal Visualization and Search for Text and Prosody Annotations. We present ICARUS for intonation, an interactive tool to browse and search automatically derived descriptions of fundamental frequency contours. It offers access to tonal features in combination with other annotation layers like part-of-speech, syntax or coreference and visualizes them in a highly customizable graphical interface with various playback functions. The built-in search allows multilevel queries, the construction of which can be done graphically or textually, and includes the ability to search F0 contours based on various similarity measures.",Multi-modal Visualization and Search for Text and Prosody Annotations,"We present ICARUS for intonation, an interactive tool to browse and search automatically derived descriptions of fundamental frequency contours. It offers access to tonal features in combination with other annotation layers like part-of-speech, syntax or coreference and visualizes them in a highly customizable graphical interface with various playback functions. The built-in search allows multilevel queries, the construction of which can be done graphically or textually, and includes the ability to search F0 contours based on various similarity measures.",Multi-modal Visualization and Search for Text and Prosody Annotations,"We present ICARUS for intonation, an interactive tool to browse and search automatically derived descriptions of fundamental frequency contours. It offers access to tonal features in combination with other annotation layers like part-of-speech, syntax or coreference and visualizes them in a highly customizable graphical interface with various playback functions. The built-in search allows multilevel queries, the construction of which can be done graphically or textually, and includes the ability to search F0 contours based on various similarity measures.","This work was funded by the German Federal Ministry of Education and Research (BMBF) via CLARIN-D, No. 01UG1120F and the German Research Foundation (DFG) via the SFB 732, project INF.","Multi-modal Visualization and Search for Text and Prosody Annotations. We present ICARUS for intonation, an interactive tool to browse and search automatically derived descriptions of fundamental frequency contours. It offers access to tonal features in combination with other annotation layers like part-of-speech, syntax or coreference and visualizes them in a highly customizable graphical interface with various playback functions. The built-in search allows multilevel queries, the construction of which can be done graphically or textually, and includes the ability to search F0 contours based on various similarity measures.",2015
dobrovoljc-nivre-2016-universal,https://aclanthology.org/L16-1248,0,,,,,,,"The Universal Dependencies Treebank of Spoken Slovenian. This paper presents the construction of an open-source dependency treebank of spoken Slovenian, the first syntactically annotated collection of spontaneous speech in Slovenian. The treebank has been manually annotated using the Universal Dependencies annotation scheme, a one-layer syntactic annotation scheme with a high degree of cross-modality, cross-framework and cross-language interoperability. In this original application of the scheme to spoken language transcripts, we address a wide spectrum of syntactic particularities in speech, either by extending the scope of application of existing universal labels or by proposing new speech-specific extensions. The initial analysis of the resulting treebank and its comparison with the written Slovenian UD treebank confirms significant syntactic differences between the two language modalities, with spoken data consisting of shorter and more elliptic sentences, less and simpler nominal phrases, and more relations marking disfluencies, interaction, deixis and modality.",The {U}niversal {D}ependencies Treebank of Spoken {S}lovenian,"This paper presents the construction of an open-source dependency treebank of spoken Slovenian, the first syntactically annotated collection of spontaneous speech in Slovenian. The treebank has been manually annotated using the Universal Dependencies annotation scheme, a one-layer syntactic annotation scheme with a high degree of cross-modality, cross-framework and cross-language interoperability. In this original application of the scheme to spoken language transcripts, we address a wide spectrum of syntactic particularities in speech, either by extending the scope of application of existing universal labels or by proposing new speech-specific extensions. The initial analysis of the resulting treebank and its comparison with the written Slovenian UD treebank confirms significant syntactic differences between the two language modalities, with spoken data consisting of shorter and more elliptic sentences, less and simpler nominal phrases, and more relations marking disfluencies, interaction, deixis and modality.",The Universal Dependencies Treebank of Spoken Slovenian,"This paper presents the construction of an open-source dependency treebank of spoken Slovenian, the first syntactically annotated collection of spontaneous speech in Slovenian. The treebank has been manually annotated using the Universal Dependencies annotation scheme, a one-layer syntactic annotation scheme with a high degree of cross-modality, cross-framework and cross-language interoperability. In this original application of the scheme to spoken language transcripts, we address a wide spectrum of syntactic particularities in speech, either by extending the scope of application of existing universal labels or by proposing new speech-specific extensions. 
The initial analysis of the resulting treebank and its comparison with the written Slovenian UD treebank confirms significant syntactic differences between the two language modalities, with spoken data consisting of shorter and more elliptic sentences, less and simpler nominal phrases, and more relations marking disfluencies, interaction, deixis and modality.",The work presented in this paper has been partially supported by the Young Researcher Programme of the Slovenian Research Agency and Parseme ICT COST Action IC1207 STSM grant.,"The Universal Dependencies Treebank of Spoken Slovenian. This paper presents the construction of an open-source dependency treebank of spoken Slovenian, the first syntactically annotated collection of spontaneous speech in Slovenian. The treebank has been manually annotated using the Universal Dependencies annotation scheme, a one-layer syntactic annotation scheme with a high degree of cross-modality, cross-framework and cross-language interoperability. In this original application of the scheme to spoken language transcripts, we address a wide spectrum of syntactic particularities in speech, either by extending the scope of application of existing universal labels or by proposing new speech-specific extensions. The initial analysis of the resulting treebank and its comparison with the written Slovenian UD treebank confirms significant syntactic differences between the two language modalities, with spoken data consisting of shorter and more elliptic sentences, less and simpler nominal phrases, and more relations marking disfluencies, interaction, deixis and modality.",2016
jayanthi-pratapa-2021-study,https://aclanthology.org/2021.sigmorphon-1.6,0,,,,,,,"A Study of Morphological Robustness of Neural Machine Translation. In this work, we analyze the robustness of neural machine translation systems towards grammatical perturbations in the source. In particular, we focus on morphological inflection related perturbations. While this has been recently studied for English→French translation (MORPHEUS) (Tan et al., 2020), it is unclear how this extends to Any→English translation systems. We propose MORPHEUS-MULTILINGUAL that utilizes UniMorph dictionaries to identify morphological perturbations to source that adversely affect the translation models. Along with an analysis of state-of-the-art pretrained MT systems, we train and analyze systems for 11 language pairs using the multilingual TED corpus (Qi et al., 2018). We also compare this to actual errors of non-native speakers using Grammatical Error Correction datasets. Finally, we present a qualitative and quantitative analysis of the robustness of Any→English translation systems. Code for our work is publicly available. 1 * Equal contribution.",A Study of Morphological Robustness of Neural Machine Translation,"In this work, we analyze the robustness of neural machine translation systems towards grammatical perturbations in the source. In particular, we focus on morphological inflection related perturbations. While this has been recently studied for English→French translation (MORPHEUS) (Tan et al., 2020), it is unclear how this extends to Any→English translation systems. We propose MORPHEUS-MULTILINGUAL that utilizes UniMorph dictionaries to identify morphological perturbations to source that adversely affect the translation models. Along with an analysis of state-of-the-art pretrained MT systems, we train and analyze systems for 11 language pairs using the multilingual TED corpus (Qi et al., 2018). We also compare this to actual errors of non-native speakers using Grammatical Error Correction datasets. Finally, we present a qualitative and quantitative analysis of the robustness of Any→English translation systems. Code for our work is publicly available. 1 * Equal contribution.",A Study of Morphological Robustness of Neural Machine Translation,"In this work, we analyze the robustness of neural machine translation systems towards grammatical perturbations in the source. In particular, we focus on morphological inflection related perturbations. While this has been recently studied for English→French translation (MORPHEUS) (Tan et al., 2020), it is unclear how this extends to Any→English translation systems. We propose MORPHEUS-MULTILINGUAL that utilizes UniMorph dictionaries to identify morphological perturbations to source that adversely affect the translation models. Along with an analysis of state-of-the-art pretrained MT systems, we train and analyze systems for 11 language pairs using the multilingual TED corpus (Qi et al., 2018). We also compare this to actual errors of non-native speakers using Grammatical Error Correction datasets. Finally, we present a qualitative and quantitative analysis of the robustness of Any→English translation systems. Code for our work is publicly available. 1 * Equal contribution.",,"A Study of Morphological Robustness of Neural Machine Translation. In this work, we analyze the robustness of neural machine translation systems towards grammatical perturbations in the source. In particular, we focus on morphological inflection related perturbations. 
While this has been recently studied for English→French translation (MORPHEUS) (Tan et al., 2020), it is unclear how this extends to Any→English translation systems. We propose MORPHEUS-MULTILINGUAL that utilizes UniMorph dictionaries to identify morphological perturbations to source that adversely affect the translation models. Along with an analysis of state-of-the-art pretrained MT systems, we train and analyze systems for 11 language pairs using the multilingual TED corpus (Qi et al., 2018). We also compare this to actual errors of non-native speakers using Grammatical Error Correction datasets. Finally, we present a qualitative and quantitative analysis of the robustness of Any→English translation systems. Code for our work is publicly available. 1 * Equal contribution.",2021
sleimi-gardent-2016-generating,https://aclanthology.org/W16-3511,0,,,,,,,"Generating Paraphrases from DBPedia using Deep Learning. Recent deep learning approaches to Natural Language Generation mostly rely on sequence-to-sequence models. In these approaches, the input is treated as a sequence whereas in most cases, input to generation usually is either a tree or a graph. In this paper, we describe an experiment showing how enriching a sequential input with structural information improves results and help support the generation of paraphrases.",Generating Paraphrases from {DBP}edia using Deep Learning,"Recent deep learning approaches to Natural Language Generation mostly rely on sequence-to-sequence models. In these approaches, the input is treated as a sequence whereas in most cases, input to generation usually is either a tree or a graph. In this paper, we describe an experiment showing how enriching a sequential input with structural information improves results and help support the generation of paraphrases.",Generating Paraphrases from DBPedia using Deep Learning,"Recent deep learning approaches to Natural Language Generation mostly rely on sequence-to-sequence models. In these approaches, the input is treated as a sequence whereas in most cases, input to generation usually is either a tree or a graph. In this paper, we describe an experiment showing how enriching a sequential input with structural information improves results and help support the generation of paraphrases.",We thank the French National Research Agency for funding the research presented in this paper in the context of the WebNLG project ANR-14-CE24-0033.,"Generating Paraphrases from DBPedia using Deep Learning. Recent deep learning approaches to Natural Language Generation mostly rely on sequence-to-sequence models. In these approaches, the input is treated as a sequence whereas in most cases, input to generation usually is either a tree or a graph. In this paper, we describe an experiment showing how enriching a sequential input with structural information improves results and help support the generation of paraphrases.",2016
liberman-2018-corpus,https://aclanthology.org/W18-3801,0,,,,,,,"Corpus Phonetics: Past, Present, and Future. Semi-automatic analysis of digital speech collections is transforming the science of phonetics, and offers interesting opportunities to researchers in other fields. Convenient search and analysis of large published bodies of recordings, transcripts, metadata, and annotations-as much as three or four orders of magnitude larger than a few decades ago-has created a trend towards ""corpus phonetics,"" whose benefits include greatly increased researcher productivity, better coverage of variation in speech patterns, and essential support for reproducibility. The results of this work include insight into theoretical questions at all levels of linguistic analysis, as well as applications in fields as diverse as psychology, sociology, medicine, and poetics, as well as within phonetics itself. Crucially, analytic inputs include annotation or categorization of speech recordings along many dimensions, from words and phrase structures to discourse structures, speaker attitudes, speaker demographics, and speech styles. Among the many near-term opportunities in this area we can single out the possibility of improving parsing algorithms by incorporating features from speech as well as text.","Corpus Phonetics: Past, Present, and Future","Semi-automatic analysis of digital speech collections is transforming the science of phonetics, and offers interesting opportunities to researchers in other fields. Convenient search and analysis of large published bodies of recordings, transcripts, metadata, and annotations-as much as three or four orders of magnitude larger than a few decades ago-has created a trend towards ""corpus phonetics,"" whose benefits include greatly increased researcher productivity, better coverage of variation in speech patterns, and essential support for reproducibility. The results of this work include insight into theoretical questions at all levels of linguistic analysis, as well as applications in fields as diverse as psychology, sociology, medicine, and poetics, as well as within phonetics itself. Crucially, analytic inputs include annotation or categorization of speech recordings along many dimensions, from words and phrase structures to discourse structures, speaker attitudes, speaker demographics, and speech styles. Among the many near-term opportunities in this area we can single out the possibility of improving parsing algorithms by incorporating features from speech as well as text.","Corpus Phonetics: Past, Present, and Future","Semi-automatic analysis of digital speech collections is transforming the science of phonetics, and offers interesting opportunities to researchers in other fields. Convenient search and analysis of large published bodies of recordings, transcripts, metadata, and annotations-as much as three or four orders of magnitude larger than a few decades ago-has created a trend towards ""corpus phonetics,"" whose benefits include greatly increased researcher productivity, better coverage of variation in speech patterns, and essential support for reproducibility. The results of this work include insight into theoretical questions at all levels of linguistic analysis, as well as applications in fields as diverse as psychology, sociology, medicine, and poetics, as well as within phonetics itself. 
Crucially, analytic inputs include annotation or categorization of speech recordings along many dimensions, from words and phrase structures to discourse structures, speaker attitudes, speaker demographics, and speech styles. Among the many near-term opportunities in this area we can single out the possibility of improving parsing algorithms by incorporating features from speech as well as text.",,"Corpus Phonetics: Past, Present, and Future. Semi-automatic analysis of digital speech collections is transforming the science of phonetics, and offers interesting opportunities to researchers in other fields. Convenient search and analysis of large published bodies of recordings, transcripts, metadata, and annotations-as much as three or four orders of magnitude larger than a few decades ago-has created a trend towards ""corpus phonetics,"" whose benefits include greatly increased researcher productivity, better coverage of variation in speech patterns, and essential support for reproducibility. The results of this work include insight into theoretical questions at all levels of linguistic analysis, as well as applications in fields as diverse as psychology, sociology, medicine, and poetics, as well as within phonetics itself. Crucially, analytic inputs include annotation or categorization of speech recordings along many dimensions, from words and phrase structures to discourse structures, speaker attitudes, speaker demographics, and speech styles. Among the many near-term opportunities in this area we can single out the possibility of improving parsing algorithms by incorporating features from speech as well as text.",2018
yuan-li-2007-breath,https://aclanthology.org/O07-3002,0,,,,,,,"The Breath Segment in Expressive Speech. This paper, based on a selected one hour of expressive speech, is a pilot study on how to use breath segments to get more natural and expressive speech. It mainly deals with the status of when the breath segments occur and how the acoustic features are affected by the speaker's emotional states in terms of valence and activation. Statistical analysis is made to investigate the relationship between the length and intensity of the breath segments and the two state parameters. Finally, a perceptual experiment is conducted by employing the analysis results to synthesized speech, the results of which demonstrate that breath segment insertion can help improve the expressiveness and naturalness of the synthesized speech.",The Breath Segment in Expressive Speech,"This paper, based on a selected one hour of expressive speech, is a pilot study on how to use breath segments to get more natural and expressive speech. It mainly deals with the status of when the breath segments occur and how the acoustic features are affected by the speaker's emotional states in terms of valence and activation. Statistical analysis is made to investigate the relationship between the length and intensity of the breath segments and the two state parameters. Finally, a perceptual experiment is conducted by employing the analysis results to synthesized speech, the results of which demonstrate that breath segment insertion can help improve the expressiveness and naturalness of the synthesized speech.",The Breath Segment in Expressive Speech,"This paper, based on a selected one hour of expressive speech, is a pilot study on how to use breath segments to get more natural and expressive speech. It mainly deals with the status of when the breath segments occur and how the acoustic features are affected by the speaker's emotional states in terms of valence and activation. Statistical analysis is made to investigate the relationship between the length and intensity of the breath segments and the two state parameters. Finally, a perceptual experiment is conducted by employing the analysis results to synthesized speech, the results of which demonstrate that breath segment insertion can help improve the expressiveness and naturalness of the synthesized speech.",,"The Breath Segment in Expressive Speech. This paper, based on a selected one hour of expressive speech, is a pilot study on how to use breath segments to get more natural and expressive speech. It mainly deals with the status of when the breath segments occur and how the acoustic features are affected by the speaker's emotional states in terms of valence and activation. Statistical analysis is made to investigate the relationship between the length and intensity of the breath segments and the two state parameters. Finally, a perceptual experiment is conducted by employing the analysis results to synthesized speech, the results of which demonstrate that breath segment insertion can help improve the expressiveness and naturalness of the synthesized speech.",2007
dsouza-etal-2021-semeval,https://aclanthology.org/2021.semeval-1.44,1,,,,industry_innovation_infrastructure,,,"SemEval-2021 Task 11: NLPContributionGraph - Structuring Scholarly NLP Contributions for a Research Knowledge Graph. There is currently a gap between the natural language expression of scholarly publications and their structured semantic content modeling to enable intelligent content search. With the volume of research growing exponentially every year, a search feature operating over semantically structured content is compelling. The SemEval-2021 Shared Task NLPCONTRIBUTIONGRAPH (a.k.a. 'the NCG task') tasks participants to develop automated systems that structure contributions from NLP scholarly articles in the English language. Being the first-of-its-kind in the SemEval series, the task released structured data from NLP scholarly articles at three levels of information granularity, i.e. at sentence-level, phrase-level, and phrases organized as triples toward Knowledge Graph (KG) building. The sentence-level annotations comprised the few sentences about the article's contribution. The phrase-level annotations were scientific term and predicate phrases from the contribution sentences. Finally, the triples constituted the research overview KG. For the Shared Task, participating systems were then expected to automatically classify contribution sentences, extract scientific terms and relations from the sentences, and organize them as KG triples. Overall, the task drew a strong participation demographic of seven teams and 27 participants. The best end-to-end task system classified contribution sentences at 57.27% F1, phrases at 46.41% F1, and triples at 22.28% F1. While the absolute performance to generate triples remains low, in the conclusion of this article, the difficulty of producing such data and as a consequence of modeling it is highlighted.",{S}em{E}val-2021 Task 11: {NLPC}ontribution{G}raph - Structuring Scholarly {NLP} Contributions for a Research Knowledge Graph,"There is currently a gap between the natural language expression of scholarly publications and their structured semantic content modeling to enable intelligent content search. With the volume of research growing exponentially every year, a search feature operating over semantically structured content is compelling. The SemEval-2021 Shared Task NLPCONTRIBUTIONGRAPH (a.k.a. 'the NCG task') tasks participants to develop automated systems that structure contributions from NLP scholarly articles in the English language. Being the first-of-its-kind in the SemEval series, the task released structured data from NLP scholarly articles at three levels of information granularity, i.e. at sentence-level, phrase-level, and phrases organized as triples toward Knowledge Graph (KG) building. The sentence-level annotations comprised the few sentences about the article's contribution. The phrase-level annotations were scientific term and predicate phrases from the contribution sentences. Finally, the triples constituted the research overview KG. For the Shared Task, participating systems were then expected to automatically classify contribution sentences, extract scientific terms and relations from the sentences, and organize them as KG triples. Overall, the task drew a strong participation demographic of seven teams and 27 participants. The best end-to-end task system classified contribution sentences at 57.27% F1, phrases at 46.41% F1, and triples at 22.28% F1. 
While the absolute performance to generate triples remains low, in the conclusion of this article, the difficulty of producing such data and as a consequence of modeling it is highlighted.",SemEval-2021 Task 11: NLPContributionGraph - Structuring Scholarly NLP Contributions for a Research Knowledge Graph,"There is currently a gap between the natural language expression of scholarly publications and their structured semantic content modeling to enable intelligent content search. With the volume of research growing exponentially every year, a search feature operating over semantically structured content is compelling. The SemEval-2021 Shared Task NLPCONTRIBUTIONGRAPH (a.k.a. 'the NCG task') tasks participants to develop automated systems that structure contributions from NLP scholarly articles in the English language. Being the first-of-its-kind in the SemEval series, the task released structured data from NLP scholarly articles at three levels of information granularity, i.e. at sentence-level, phrase-level, and phrases organized as triples toward Knowledge Graph (KG) building. The sentence-level annotations comprised the few sentences about the article's contribution. The phrase-level annotations were scientific term and predicate phrases from the contribution sentences. Finally, the triples constituted the research overview KG. For the Shared Task, participating systems were then expected to automatically classify contribution sentences, extract scientific terms and relations from the sentences, and organize them as KG triples. Overall, the task drew a strong participation demographic of seven teams and 27 participants. The best end-to-end task system classified contribution sentences at 57.27% F1, phrases at 46.41% F1, and triples at 22.28% F1. While the absolute performance to generate triples remains low, in the conclusion of this article, the difficulty of producing such data and as a consequence of modeling it is highlighted.",We thank the anonymous reviewers for their comments and suggestions. This work was co-funded by the European Research Council for the project ScienceGRAPH (Grant agreement ID: 819536) and by the TIB Leibniz Information Centre for Science and Technology.,"SemEval-2021 Task 11: NLPContributionGraph - Structuring Scholarly NLP Contributions for a Research Knowledge Graph. There is currently a gap between the natural language expression of scholarly publications and their structured semantic content modeling to enable intelligent content search. With the volume of research growing exponentially every year, a search feature operating over semantically structured content is compelling. The SemEval-2021 Shared Task NLPCONTRIBUTIONGRAPH (a.k.a. 'the NCG task') tasks participants to develop automated systems that structure contributions from NLP scholarly articles in the English language. Being the first-of-its-kind in the SemEval series, the task released structured data from NLP scholarly articles at three levels of information granularity, i.e. at sentence-level, phrase-level, and phrases organized as triples toward Knowledge Graph (KG) building. The sentence-level annotations comprised the few sentences about the article's contribution. The phrase-level annotations were scientific term and predicate phrases from the contribution sentences. Finally, the triples constituted the research overview KG. 
For the Shared Task, participating systems were then expected to automatically classify contribution sentences, extract scientific terms and relations from the sentences, and organize them as KG triples. Overall, the task drew a strong participation demographic of seven teams and 27 participants. The best end-to-end task system classified contribution sentences at 57.27% F1, phrases at 46.41% F1, and triples at 22.28% F1. While the absolute performance to generate triples remains low, in the conclusion of this article, the difficulty of producing such data and as a consequence of modeling it is highlighted.",2021
wilson-etal-2015-detection,https://aclanthology.org/D15-1307,0,,,,,,,"Detection of Steganographic Techniques on Twitter. We propose a method to detect hidden data in English text. We target a system previously thought secure, which hides messages in tweets. The method brings ideas from image steganalysis into the linguistic domain, including the training of a feature-rich model for detection. To identify Twitter users guilty of steganography, we aggregate evidence; a first, in any domain. We test our system on a set of 1M steganographic tweets, and show it to be effective.",Detection of Steganographic Techniques on {T}witter,"We propose a method to detect hidden data in English text. We target a system previously thought secure, which hides messages in tweets. The method brings ideas from image steganalysis into the linguistic domain, including the training of a feature-rich model for detection. To identify Twitter users guilty of steganography, we aggregate evidence; a first, in any domain. We test our system on a set of 1M steganographic tweets, and show it to be effective.",Detection of Steganographic Techniques on Twitter,"We propose a method to detect hidden data in English text. We target a system previously thought secure, which hides messages in tweets. The method brings ideas from image steganalysis into the linguistic domain, including the training of a feature-rich model for detection. To identify Twitter users guilty of steganography, we aggregate evidence; a first, in any domain. We test our system on a set of 1M steganographic tweets, and show it to be effective.","This paper aims to attack CoverTweet statistically. We are in the shoes of the warden, attempting to classify stego objects from innocent 2 cover objects. We propose techniques new to linguistic steganalysis, including a large set of features that detect unusual and inconsistent use of language and the aggregation of evidence from multiple sentences. This last development, known in the steganographic literature as pooled steganalysis (Ker, 2007) , represents a first in both linguistic and image steganalysis.","Detection of Steganographic Techniques on Twitter. We propose a method to detect hidden data in English text. We target a system previously thought secure, which hides messages in tweets. The method brings ideas from image steganalysis into the linguistic domain, including the training of a feature-rich model for detection. To identify Twitter users guilty of steganography, we aggregate evidence; a first, in any domain. We test our system on a set of 1M steganographic tweets, and show it to be effective.",2015
linzen-jaeger-2014-investigating,https://aclanthology.org/W14-2002,0,,,,,,,"Investigating the role of entropy in sentence processing. We outline four ways in which uncertainty might affect comprehension difficulty in human sentence processing. These four hypotheses motivate a self-paced reading experiment, in which we used verb subcategorization distributions to manipulate the uncertainty over the next step in the syntactic derivation (single step entropy) and the surprisal of the verb's complement. We additionally estimate wordby-word surprisal and total entropy over parses of the sentence using a probabilistic context-free grammar (PCFG). Surprisal and total entropy, but not single step entropy, were significant predictors of reading times in different parts of the sentence. This suggests that a complete model of sentence processing should incorporate both entropy and surprisal.",Investigating the role of entropy in sentence processing,"We outline four ways in which uncertainty might affect comprehension difficulty in human sentence processing. These four hypotheses motivate a self-paced reading experiment, in which we used verb subcategorization distributions to manipulate the uncertainty over the next step in the syntactic derivation (single step entropy) and the surprisal of the verb's complement. We additionally estimate wordby-word surprisal and total entropy over parses of the sentence using a probabilistic context-free grammar (PCFG). Surprisal and total entropy, but not single step entropy, were significant predictors of reading times in different parts of the sentence. This suggests that a complete model of sentence processing should incorporate both entropy and surprisal.",Investigating the role of entropy in sentence processing,"We outline four ways in which uncertainty might affect comprehension difficulty in human sentence processing. These four hypotheses motivate a self-paced reading experiment, in which we used verb subcategorization distributions to manipulate the uncertainty over the next step in the syntactic derivation (single step entropy) and the surprisal of the verb's complement. We additionally estimate wordby-word surprisal and total entropy over parses of the sentence using a probabilistic context-free grammar (PCFG). Surprisal and total entropy, but not single step entropy, were significant predictors of reading times in different parts of the sentence. This suggests that a complete model of sentence processing should incorporate both entropy and surprisal.",We thank Alec Marantz for discussion and Andrew Watts for technical assistance. This work was supported by an Alfred P. Sloan Fellowship to T. Florian Jaeger.,"Investigating the role of entropy in sentence processing. We outline four ways in which uncertainty might affect comprehension difficulty in human sentence processing. These four hypotheses motivate a self-paced reading experiment, in which we used verb subcategorization distributions to manipulate the uncertainty over the next step in the syntactic derivation (single step entropy) and the surprisal of the verb's complement. We additionally estimate wordby-word surprisal and total entropy over parses of the sentence using a probabilistic context-free grammar (PCFG). Surprisal and total entropy, but not single step entropy, were significant predictors of reading times in different parts of the sentence. This suggests that a complete model of sentence processing should incorporate both entropy and surprisal.",2014
lawson-etal-2010-annotating,https://aclanthology.org/W10-0712,0,,,,general_purpose_productivity,,,"Annotating Large Email Datasets for Named Entity Recognition with Mechanical Turk. Amazon's Mechanical Turk service has been successfully applied to many natural language processing tasks. However, the task of named entity recognition presents unique challenges. In a large annotation task involving over 20,000 emails, we demonstrate that a competitive bonus system and interannotator agreement can be used to improve the quality of named entity annotations from Mechanical Turk. We also build several statistical named entity recognition models trained with these annotations, which compare favorably to similar models trained on expert annotations.",Annotating Large Email Datasets for Named Entity Recognition with {M}echanical {T}urk,"Amazon's Mechanical Turk service has been successfully applied to many natural language processing tasks. However, the task of named entity recognition presents unique challenges. In a large annotation task involving over 20,000 emails, we demonstrate that a competitive bonus system and interannotator agreement can be used to improve the quality of named entity annotations from Mechanical Turk. We also build several statistical named entity recognition models trained with these annotations, which compare favorably to similar models trained on expert annotations.",Annotating Large Email Datasets for Named Entity Recognition with Mechanical Turk,"Amazon's Mechanical Turk service has been successfully applied to many natural language processing tasks. However, the task of named entity recognition presents unique challenges. In a large annotation task involving over 20,000 emails, we demonstrate that a competitive bonus system and interannotator agreement can be used to improve the quality of named entity annotations from Mechanical Turk. We also build several statistical named entity recognition models trained with these annotations, which compare favorably to similar models trained on expert annotations.",,"Annotating Large Email Datasets for Named Entity Recognition with Mechanical Turk. Amazon's Mechanical Turk service has been successfully applied to many natural language processing tasks. However, the task of named entity recognition presents unique challenges. In a large annotation task involving over 20,000 emails, we demonstrate that a competitive bonus system and interannotator agreement can be used to improve the quality of named entity annotations from Mechanical Turk. We also build several statistical named entity recognition models trained with these annotations, which compare favorably to similar models trained on expert annotations.",2010
srikumar-roth-2013-modeling,https://aclanthology.org/Q13-1019,0,,,,,,,"Modeling Semantic Relations Expressed by Prepositions. This paper introduces the problem of predicting semantic relations expressed by prepositions and develops statistical learning models for predicting the relations, their arguments and the semantic types of the arguments. We define an inventory of 32 relations, building on the word sense disambiguation task for prepositions and collapsing related senses across prepositions. Given a preposition in a sentence, our computational task is to jointly model the preposition relation and its arguments along with their semantic types, as a way to support the relation prediction. The annotated data, however, only provides labels for the relation label, and not the arguments and types. We address this by presenting two models for preposition relation labeling. Our generalization of latent structure SVM gives close to 90% accuracy on relation labeling. Further, by jointly predicting the relation, arguments, and their types along with preposition sense, we show that we can not only improve the relation accuracy, but also significantly improve sense prediction accuracy.",Modeling Semantic Relations Expressed by Prepositions,"This paper introduces the problem of predicting semantic relations expressed by prepositions and develops statistical learning models for predicting the relations, their arguments and the semantic types of the arguments. We define an inventory of 32 relations, building on the word sense disambiguation task for prepositions and collapsing related senses across prepositions. Given a preposition in a sentence, our computational task is to jointly model the preposition relation and its arguments along with their semantic types, as a way to support the relation prediction. The annotated data, however, only provides labels for the relation label, and not the arguments and types. We address this by presenting two models for preposition relation labeling. Our generalization of latent structure SVM gives close to 90% accuracy on relation labeling. Further, by jointly predicting the relation, arguments, and their types along with preposition sense, we show that we can not only improve the relation accuracy, but also significantly improve sense prediction accuracy.",Modeling Semantic Relations Expressed by Prepositions,"This paper introduces the problem of predicting semantic relations expressed by prepositions and develops statistical learning models for predicting the relations, their arguments and the semantic types of the arguments. We define an inventory of 32 relations, building on the word sense disambiguation task for prepositions and collapsing related senses across prepositions. Given a preposition in a sentence, our computational task is to jointly model the preposition relation and its arguments along with their semantic types, as a way to support the relation prediction. The annotated data, however, only provides labels for the relation label, and not the arguments and types. We address this by presenting two models for preposition relation labeling. Our generalization of latent structure SVM gives close to 90% accuracy on relation labeling. 
Further, by jointly predicting the relation, arguments, and their types along with preposition sense, we show that we can not only improve the relation accuracy, but also significantly improve sense prediction accuracy.","The authors wish to thank Martha Palmer, Nathan Schneider, the anonymous reviewers and the editor for their valuable feedback. The authors gratefully acknowledge the support of the ","Modeling Semantic Relations Expressed by Prepositions. This paper introduces the problem of predicting semantic relations expressed by prepositions and develops statistical learning models for predicting the relations, their arguments and the semantic types of the arguments. We define an inventory of 32 relations, building on the word sense disambiguation task for prepositions and collapsing related senses across prepositions. Given a preposition in a sentence, our computational task is to jointly model the preposition relation and its arguments along with their semantic types, as a way to support the relation prediction. The annotated data, however, only provides labels for the relation label, and not the arguments and types. We address this by presenting two models for preposition relation labeling. Our generalization of latent structure SVM gives close to 90% accuracy on relation labeling. Further, by jointly predicting the relation, arguments, and their types along with preposition sense, we show that we can not only improve the relation accuracy, but also significantly improve sense prediction accuracy.",2013
lin-etal-2006-information,https://aclanthology.org/N06-1059,0,,,,,,,An Information-Theoretic Approach to Automatic Evaluation of Summaries. ,An Information-Theoretic Approach to Automatic Evaluation of Summaries,,An Information-Theoretic Approach to Automatic Evaluation of Summaries,,,An Information-Theoretic Approach to Automatic Evaluation of Summaries. ,2006
may-knight-2007-syntactic,https://aclanthology.org/D07-1038,0,,,,,,,Syntactic Re-Alignment Models for Machine Translation. We present a method for improving word alignment for statistical syntax-based machine translation that employs a syntactically informed alignment model closer to the translation model than commonly-used word alignment models. This leads to extraction of more useful linguistic patterns and improved BLEU scores on translation experiments in Chinese and Arabic.,Syntactic Re-Alignment Models for Machine Translation,We present a method for improving word alignment for statistical syntax-based machine translation that employs a syntactically informed alignment model closer to the translation model than commonly-used word alignment models. This leads to extraction of more useful linguistic patterns and improved BLEU scores on translation experiments in Chinese and Arabic.,Syntactic Re-Alignment Models for Machine Translation,We present a method for improving word alignment for statistical syntax-based machine translation that employs a syntactically informed alignment model closer to the translation model than commonly-used word alignment models. This leads to extraction of more useful linguistic patterns and improved BLEU scores on translation experiments in Chinese and Arabic.,"We thank David Chiang, Steve DeNeefe, Alex Fraser, Victoria Fossum, Jonathan Graehl, Liang Huang, Daniel Marcu, Michael Pust, Oana Postolache, Michael Pust, Jason Riesa, Jens Vöckler, and Wei Wang for help and discussion. This research was supported by NSF (grant IIS-0428020) and DARPA (contract HR0011-06-C-0022).",Syntactic Re-Alignment Models for Machine Translation. We present a method for improving word alignment for statistical syntax-based machine translation that employs a syntactically informed alignment model closer to the translation model than commonly-used word alignment models. This leads to extraction of more useful linguistic patterns and improved BLEU scores on translation experiments in Chinese and Arabic.,2007
chiang-etal-2013-parsing,https://aclanthology.org/P13-1091,0,,,,,,,"Parsing Graphs with Hyperedge Replacement Grammars. Hyperedge replacement grammar (HRG) is a formalism for generating and transforming graphs that has potential applications in natural language understanding and generation. A recognition algorithm due to Lautemann is known to be polynomial-time for graphs that are connected and of bounded degree. We present a more precise characterization of the algorithm's complexity, an optimization analogous to binarization of contextfree grammars, and some important implementation details, resulting in an algorithm that is practical for natural-language applications. The algorithm is part of Bolinas, a new software toolkit for HRG processing.",Parsing Graphs with Hyperedge Replacement Grammars,"Hyperedge replacement grammar (HRG) is a formalism for generating and transforming graphs that has potential applications in natural language understanding and generation. A recognition algorithm due to Lautemann is known to be polynomial-time for graphs that are connected and of bounded degree. We present a more precise characterization of the algorithm's complexity, an optimization analogous to binarization of contextfree grammars, and some important implementation details, resulting in an algorithm that is practical for natural-language applications. The algorithm is part of Bolinas, a new software toolkit for HRG processing.",Parsing Graphs with Hyperedge Replacement Grammars,"Hyperedge replacement grammar (HRG) is a formalism for generating and transforming graphs that has potential applications in natural language understanding and generation. A recognition algorithm due to Lautemann is known to be polynomial-time for graphs that are connected and of bounded degree. We present a more precise characterization of the algorithm's complexity, an optimization analogous to binarization of contextfree grammars, and some important implementation details, resulting in an algorithm that is practical for natural-language applications. The algorithm is part of Bolinas, a new software toolkit for HRG processing.",We would like to thank the anonymous reviewers for their helpful comments. This research was supported in part by ARO grant W911NF-10-1-0533.,"Parsing Graphs with Hyperedge Replacement Grammars. Hyperedge replacement grammar (HRG) is a formalism for generating and transforming graphs that has potential applications in natural language understanding and generation. A recognition algorithm due to Lautemann is known to be polynomial-time for graphs that are connected and of bounded degree. We present a more precise characterization of the algorithm's complexity, an optimization analogous to binarization of contextfree grammars, and some important implementation details, resulting in an algorithm that is practical for natural-language applications. The algorithm is part of Bolinas, a new software toolkit for HRG processing.",2013
iordanskaja-etal-1992-generation,https://aclanthology.org/C92-3158,0,,,,,,,"Generation of Extended Bilingual Statistical Reports. During the past few years we have been concerned with developing models for the automatic planning and realization of report texts within technical sublanguages of English and French. Since 1987 we have been implementing Meaning-Text language models (MTMs) [6, 7] for the task of realizing sentences from semantic specifications that are output by a text planner. A relatively complete MTM implementation for English was tested in the domain of operating system audit summaries in the Gossip project of 1987-89 [3] . At COLING-90 a report was given on the fully operational FoG system for generating marine forecasts in both English and French at weather centres in Eastern Canada [1] . The work reported on here concerns the experimental generation of extended bilingual summaries of Canadian statistical data. Our first focus has been on labour force surveys (LFS), where an extensive corpus of published reports in each language is available for empirical study. The current LFS system has built on the experience of the two preceding systems, but goes beyond either of them 1. In contrast to FoG, but similar to Gossip, LFS uses a semantic net representation of sentences as input to the realization process. Like Gossip, LFS also makes use of theme/rheme constraints to help optimize lexical and syntactic choices during sentence realization. But in contrast to Gossip, which produced only English texts, LFS is bilingual, making use of the conceptual level of representation produced by the planner as an interlingua from which to derive the linguistic semantic representations for texts in the two languages independently. Hence the LFS interlingua is much ""deeper"" than FoG's deep-syntactic interlingua. This allows us to introduce certain semantic differences between English and French sentences that we observe in natural ""translation twin"" texts.",Generation of Extended Bilingual Statistical Reports,"During the past few years we have been concerned with developing models for the automatic planning and realization of report texts within technical sublanguages of English and French. Since 1987 we have been implementing Meaning-Text language models (MTMs) [6, 7] for the task of realizing sentences from semantic specifications that are output by a text planner. A relatively complete MTM implementation for English was tested in the domain of operating system audit summaries in the Gossip project of 1987-89 [3] . At COLING-90 a report was given on the fully operational FoG system for generating marine forecasts in both English and French at weather centres in Eastern Canada [1] . The work reported on here concerns the experimental generation of extended bilingual summaries of Canadian statistical data. Our first focus has been on labour force surveys (LFS), where an extensive corpus of published reports in each language is available for empirical study. The current LFS system has built on the experience of the two preceding systems, but goes beyond either of them 1. In contrast to FoG, but similar to Gossip, LFS uses a semantic net representation of sentences as input to the realization process. Like Gossip, LFS also makes use of theme/rheme constraints to help optimize lexical and syntactic choices during sentence realization. 
But in contrast to Gossip, which produced only English texts, LFS is bilingual, making use of the conceptual level of representation produced by the planner as an interlingua from which to derive the linguistic semantic representations for texts in the two languages independently. Hence the LFS interlingua is much ""deeper"" than FoG's deep-syntactic interlingua. This allows us to introduce certain semantic differences between English and French sentences that we observe in natural ""translation twin"" texts.",Generation of Extended Bilingual Statistical Reports,"During the past few years we have been concerned with developing models for the automatic planning and realization of report texts within technical sublanguages of English and French. Since 1987 we have been implementing Meaning-Text language models (MTMs) [6, 7] for the task of realizing sentences from semantic specifications that are output by a text planner. A relatively complete MTM implementation for English was tested in the domain of operating system audit summaries in the Gossip project of 1987-89 [3] . At COLING-90 a report was given on the fully operational FoG system for generating marine forecasts in both English and French at weather centres in Eastern Canada [1] . The work reported on here concerns the experimental generation of extended bilingual summaries of Canadian statistical data. Our first focus has been on labour force surveys (LFS), where an extensive corpus of published reports in each language is available for empirical study. The current LFS system has built on the experience of the two preceding systems, but goes beyond either of them 1. In contrast to FoG, but similar to Gossip, LFS uses a semantic net representation of sentences as input to the realization process. Like Gossip, LFS also makes use of theme/rheme constraints to help optimize lexical and syntactic choices during sentence realization. But in contrast to Gossip, which produced only English texts, LFS is bilingual, making use of the conceptual level of representation produced by the planner as an interlingua from which to derive the linguistic semantic representations for texts in the two languages independently. Hence the LFS interlingua is much ""deeper"" than FoG's deep-syntactic interlingua. This allows us to introduce certain semantic differences between English and French sentences that we observe in natural ""translation twin"" texts.",Generation of Extended Bilingual Statistical Reports,"During the past few years we have been concerned with developing models for the automatic planning and realization of report texts within technical sublanguages of English and French. Since 1987 we have been implementing Meaning-Text language models (MTMs) [6, 7] for the task of realizing sentences from semantic specifications that are output by a text planner. A relatively complete MTM implementation for English was tested in the domain of operating system audit summaries in the Gossip project of 1987-89 [3] . At COLING-90 a report was given on the fully operational FoG system for generating marine forecasts in both English and French at weather centres in Eastern Canada [1] . The work reported on here concerns the experimental generation of extended bilingual summaries of Canadian statistical data. Our first focus has been on labour force surveys (LFS), where an extensive corpus of published reports in each language is available for empirical study. 
The current LFS system has built on the experience of the two preceding systems, but goes beyond either of them 1. In contrast to FoG, but similar to Gossip, LFS uses a semantic net representation of sentences as input to the realization process. Like Gossip, LFS also makes use of theme/rheme constraints to help optimize lexical and syntactic choices during sentence realization. But in contrast to Gossip, which produced only English texts, LFS is bilingual, making use of the conceptual level of representation produced by the planner as an interlingua from which to derive the linguistic semantic representations for texts in the two languages independently. Hence the LFS interlingua is much ""deeper"" than FoG's deep-syntactic interlingua. This allows us to introduce certain semantic differences between English and French sentences that we observe in natural ""translation twin"" texts.",1992
bianchi-etal-1993-undestanding,https://aclanthology.org/E93-1058,0,,,,,,,Undestanding Stories in Different Languages with GETA-RUN. ,Undestanding Stories in Different Languages with {GETA}-{RUN},,Undestanding Stories in Different Languages with GETA-RUN,,,Undestanding Stories in Different Languages with GETA-RUN. ,1993
roukos-1993-automatic,https://aclanthology.org/H93-1092,0,,,,,,,"Automatic Extraction of Grammars From Annotated Text. The primary objective of this project is to develop a robust, high-performance parser for English by automatically extracting a grammar from an annotated corpus of bracketed sentences, called the Treebank. The project is a collaboration between the IBM Continuous Speech Recognition Group and the University of Pennsylvania Department of Computer Sciences 1. Our initial focus is the domain of computer manuals with a vocabulary of 3000 words. We use a Treebank that was developed jointly by IBM and the University of Lancaster, England, during the past three years.
We have an initial implementation of our parsing model where we used a simple set of features to guide us in our development of the approach. We used for training a Treebank of about 28,000 sentences. The parser's accuracy on a sample of 25 new sentences of length 7 to 17 words as judged, when compared to the Treebank, by three members of the group, is 52%. This is encouraging in light of the fact that we are in the process of increasing the features that the parser can look at. We give below a brief sketch of our approach.",Automatic Extraction of Grammars From Annotated Text,"The primary objective of this project is to develop a robust, high-performance parser for English by automatically extracting a grammar from an annotated corpus of bracketed sentences, called the Treebank. The project is a collaboration between the IBM Continuous Speech Recognition Group and the University of Pennsylvania Department of Computer Sciences 1. Our initial focus is the domain of computer manuals with a vocabulary of 3000 words. We use a Treebank that was developed jointly by IBM and the University of Lancaster, England, during the past three years.
We have an initial implementation of our parsing model where we used a simple set of features to guide us in our development of the approach. We used for training a Treebank of about 28,000 sentences. The parser's accuracy on a sample of 25 new sentences of length 7 to 17 words as judged, when compared to the Treebank, by three members of the group, is 52%. This is encouraging in light of the fact that we are in the process of increasing the features that the parser can look at. We give below a brief sketch of our approach.",Automatic Extraction of Grammars From Annotated Text,"The primary objective of this project is to develop a robust, high-performance parser for English by automatically extracting a grammar from an annotated corpus of bracketed sentences, called the Treebank. The project is a collaboration between the IBM Continuous Speech Recognition Group and the University of Pennsylvania Department of Computer Sciences 1. Our initial focus is the domain of computer manuals with a vocabulary of 3000 words. We use a Treebank that was developed jointly by IBM and the University of Lancaster, England, during the past three years.
We have an initial implementation of our parsing model where we used a simple set of features to guide us in our development of the approach. We used for training a Treebank of about 28,000 sentences. The parser's accuracy on a sample of 25 new sentences of length 7 to 17 words as judged, when compared to the Treebank, by three members of the group, is 52%. This is encouraging in light of the fact that we are in the process of increasing the features that the parser can look at. We give below a brief sketch of our approach.",,"Automatic Extraction of Grammars From Annotated Text. The primary objective of this project is to develop a robust, high-performance parser for English by automatically extracting a grammar from an annotated corpus of bracketed sentences, called the Treebank. The project is a collaboration between the IBM Continuous Speech Recognition Group and the University of Pennsylvania Department of Computer Sciences 1. Our initial focus is the domain of computer manuals with a vocabulary of 3000 words. We use a Treebank that was developed jointly by IBM and the University of Lancaster, England, during the past three years.
We have an initial implementation of our parsing model where we used a simple set of features to guide us in our development of the approach. We used for training a Treebank of about 28,000 sentences. The parser's accuracy on a sample of 25 new sentences of length 7 to 17 words as judged, when compared to the Treebank, by three members of the group, is 52%. This is encouraging in light of the fact that we are in the process of increasing the features that the parser can look at. We give below a brief sketch of our approach.",1993
li-etal-2019-modeling,https://aclanthology.org/P19-1619,0,,,,,,,"Modeling Intra-Relation in Math Word Problems with Different Functional Multi-Head Attentions. Several deep learning models have been proposed for solving math word problems (MWPs) automatically. Although these models have the ability to capture features without manual efforts, their approaches to capturing features are not specifically designed for MWPs. To utilize the merits of deep learning models with simultaneous consideration of MWPs' specific features, we propose a group attention mechanism to extract global features, quantity-related features, quantitypair features and question-related features in MWPs respectively. The experimental results show that the proposed approach performs significantly better than previous state-of-the-art methods, and boost performance from 66.9% to 69.5% on Math23K with training-test split, from 65.8% to 66.9% on Math23K with 5-fold cross-validation and from 69.2% to 76.1% on MAWPS.",Modeling Intra-Relation in Math Word Problems with Different Functional Multi-Head Attentions,"Several deep learning models have been proposed for solving math word problems (MWPs) automatically. Although these models have the ability to capture features without manual efforts, their approaches to capturing features are not specifically designed for MWPs. To utilize the merits of deep learning models with simultaneous consideration of MWPs' specific features, we propose a group attention mechanism to extract global features, quantity-related features, quantitypair features and question-related features in MWPs respectively. The experimental results show that the proposed approach performs significantly better than previous state-of-the-art methods, and boost performance from 66.9% to 69.5% on Math23K with training-test split, from 65.8% to 66.9% on Math23K with 5-fold cross-validation and from 69.2% to 76.1% on MAWPS.",Modeling Intra-Relation in Math Word Problems with Different Functional Multi-Head Attentions,"Several deep learning models have been proposed for solving math word problems (MWPs) automatically. Although these models have the ability to capture features without manual efforts, their approaches to capturing features are not specifically designed for MWPs. To utilize the merits of deep learning models with simultaneous consideration of MWPs' specific features, we propose a group attention mechanism to extract global features, quantity-related features, quantitypair features and question-related features in MWPs respectively. The experimental results show that the proposed approach performs significantly better than previous state-of-the-art methods, and boost performance from 66.9% to 69.5% on Math23K with training-test split, from 65.8% to 66.9% on Math23K with 5-fold cross-validation and from 69.2% to 76.1% on MAWPS.",,"Modeling Intra-Relation in Math Word Problems with Different Functional Multi-Head Attentions. Several deep learning models have been proposed for solving math word problems (MWPs) automatically. Although these models have the ability to capture features without manual efforts, their approaches to capturing features are not specifically designed for MWPs. To utilize the merits of deep learning models with simultaneous consideration of MWPs' specific features, we propose a group attention mechanism to extract global features, quantity-related features, quantitypair features and question-related features in MWPs respectively. 
The experimental results show that the proposed approach performs significantly better than previous state-of-the-art methods, and boost performance from 66.9% to 69.5% on Math23K with training-test split, from 65.8% to 66.9% on Math23K with 5-fold cross-validation and from 69.2% to 76.1% on MAWPS.",2019
eyigoz-2010-tag,https://aclanthology.org/W10-4419,0,,,,,,,"TAG Analysis of Turkish Long Distance Dependencies. All permutations of a two level embedding sentence in Turkish is analyzed, in order to develop an LTAG grammar that can account for Turkish long distance dependencies. The fact that Turkish allows only long distance topicalization and extraposition is shown to be connected to a condition-the coherence condition-that draws the boundary between the acceptable and inacceptable permutations of the five word sentence under investigation. The LTAG grammar for this fragment of Turkish has two levels: the first level assumes lexicalized and linguistically appropriate elementary trees, where as the second level assumes elementary trees that are derived from the elementary trees of the first level, and are not lexicalized.",{TAG} Analysis of {T}urkish Long Distance Dependencies,"All permutations of a two level embedding sentence in Turkish is analyzed, in order to develop an LTAG grammar that can account for Turkish long distance dependencies. The fact that Turkish allows only long distance topicalization and extraposition is shown to be connected to a condition-the coherence condition-that draws the boundary between the acceptable and inacceptable permutations of the five word sentence under investigation. The LTAG grammar for this fragment of Turkish has two levels: the first level assumes lexicalized and linguistically appropriate elementary trees, where as the second level assumes elementary trees that are derived from the elementary trees of the first level, and are not lexicalized.",TAG Analysis of Turkish Long Distance Dependencies,"All permutations of a two level embedding sentence in Turkish is analyzed, in order to develop an LTAG grammar that can account for Turkish long distance dependencies. The fact that Turkish allows only long distance topicalization and extraposition is shown to be connected to a condition-the coherence condition-that draws the boundary between the acceptable and inacceptable permutations of the five word sentence under investigation. The LTAG grammar for this fragment of Turkish has two levels: the first level assumes lexicalized and linguistically appropriate elementary trees, where as the second level assumes elementary trees that are derived from the elementary trees of the first level, and are not lexicalized.",,"TAG Analysis of Turkish Long Distance Dependencies. All permutations of a two level embedding sentence in Turkish is analyzed, in order to develop an LTAG grammar that can account for Turkish long distance dependencies. The fact that Turkish allows only long distance topicalization and extraposition is shown to be connected to a condition-the coherence condition-that draws the boundary between the acceptable and inacceptable permutations of the five word sentence under investigation. The LTAG grammar for this fragment of Turkish has two levels: the first level assumes lexicalized and linguistically appropriate elementary trees, where as the second level assumes elementary trees that are derived from the elementary trees of the first level, and are not lexicalized.",2010
ma-etal-2018-challenging,https://aclanthology.org/N18-1185,0,,,,,,,"Challenging Reading Comprehension on Daily Conversation: Passage Completion on Multiparty Dialog. This paper presents a new corpus and a robust deep learning architecture for a task in reading comprehension, passage completion, on multiparty dialog. Given a dialog in text and a passage containing factual descriptions about the dialog where mentions of the characters are replaced by blanks, the task is to fill the blanks with the most appropriate character names that reflect the contexts in the dialog. Since there is no dataset that challenges the task of passage completion in this genre, we create a corpus by selecting transcripts from a TV show that comprise 1,681 dialogs, generating passages for each dialog through crowdsourcing, and annotating mentions of characters in both the dialog and the passages. Given this dataset, we build a deep neural model that integrates rich feature extraction from convolutional neural networks into sequence modeling in recurrent neural networks, optimized by utterance and dialog level attentions. Our model outperforms the previous state-of-the-art model on this task in a different genre using bidirectional LSTM, showing a 13.0+% improvement for longer dialogs. Our analysis shows the effectiveness of the attention mechanisms and suggests a direction to machine comprehension on multiparty dialog.",Challenging Reading Comprehension on Daily Conversation: Passage Completion on Multiparty Dialog,"This paper presents a new corpus and a robust deep learning architecture for a task in reading comprehension, passage completion, on multiparty dialog. Given a dialog in text and a passage containing factual descriptions about the dialog where mentions of the characters are replaced by blanks, the task is to fill the blanks with the most appropriate character names that reflect the contexts in the dialog. Since there is no dataset that challenges the task of passage completion in this genre, we create a corpus by selecting transcripts from a TV show that comprise 1,681 dialogs, generating passages for each dialog through crowdsourcing, and annotating mentions of characters in both the dialog and the passages. Given this dataset, we build a deep neural model that integrates rich feature extraction from convolutional neural networks into sequence modeling in recurrent neural networks, optimized by utterance and dialog level attentions. Our model outperforms the previous state-of-the-art model on this task in a different genre using bidirectional LSTM, showing a 13.0+% improvement for longer dialogs. Our analysis shows the effectiveness of the attention mechanisms and suggests a direction to machine comprehension on multiparty dialog.",Challenging Reading Comprehension on Daily Conversation: Passage Completion on Multiparty Dialog,"This paper presents a new corpus and a robust deep learning architecture for a task in reading comprehension, passage completion, on multiparty dialog. Given a dialog in text and a passage containing factual descriptions about the dialog where mentions of the characters are replaced by blanks, the task is to fill the blanks with the most appropriate character names that reflect the contexts in the dialog. 
Since there is no dataset that challenges the task of passage completion in this genre, we create a corpus by selecting transcripts from a TV show that comprise 1,681 dialogs, generating passages for each dialog through crowdsourcing, and annotating mentions of characters in both the dialog and the passages. Given this dataset, we build a deep neural model that integrates rich feature extraction from convolutional neural networks into sequence modeling in recurrent neural networks, optimized by utterance and dialog level attentions. Our model outperforms the previous state-of-the-art model on this task in a different genre using bidirectional LSTM, showing a 13.0+% improvement for longer dialogs. Our analysis shows the effectiveness of the attention mechanisms and suggests a direction to machine comprehension on multiparty dialog.",,"Challenging Reading Comprehension on Daily Conversation: Passage Completion on Multiparty Dialog. This paper presents a new corpus and a robust deep learning architecture for a task in reading comprehension, passage completion, on multiparty dialog. Given a dialog in text and a passage containing factual descriptions about the dialog where mentions of the characters are replaced by blanks, the task is to fill the blanks with the most appropriate character names that reflect the contexts in the dialog. Since there is no dataset that challenges the task of passage completion in this genre, we create a corpus by selecting transcripts from a TV show that comprise 1,681 dialogs, generating passages for each dialog through crowdsourcing, and annotating mentions of characters in both the dialog and the passages. Given this dataset, we build a deep neural model that integrates rich feature extraction from convolutional neural networks into sequence modeling in recurrent neural networks, optimized by utterance and dialog level attentions. Our model outperforms the previous state-of-the-art model on this task in a different genre using bidirectional LSTM, showing a 13.0+% improvement for longer dialogs. Our analysis shows the effectiveness of the attention mechanisms and suggests a direction to machine comprehension on multiparty dialog.",2018
raux-eskenazi-2004-non,https://aclanthology.org/N04-1028,0,,,,,,,"Non-Native Users in the Let's Go!! Spoken Dialogue System: Dealing with Linguistic Mismatch. This paper describes the CMU Let's Go!! bus information system, an experimental system designed to study the use of spoken dialogue interfaces by non-native speakers. The differences in performance of the speech recognition and language understanding modules of the system when confronted with native and non-native spontaneous speech are analyzed. Focus is placed on the linguistic mismatch between the user input and the system's expectations, and on its implications in terms of language modeling and parsing performance. The effect of including non-native data when building the speech recognition and language understanding modules is discussed. In order to close the gap between non-native and native input, a method is proposed to automatically generate confirmation prompts that are both close to the user's input and covered by the system's language model and grammar, in order to help the user acquire idiomatic expressions appropriate to the task.",Non-Native Users in the {L}et{'}s {G}o!! Spoken Dialogue System: Dealing with Linguistic Mismatch,"This paper describes the CMU Let's Go!! bus information system, an experimental system designed to study the use of spoken dialogue interfaces by non-native speakers. The differences in performance of the speech recognition and language understanding modules of the system when confronted with native and non-native spontaneous speech are analyzed. Focus is placed on the linguistic mismatch between the user input and the system's expectations, and on its implications in terms of language modeling and parsing performance. The effect of including non-native data when building the speech recognition and language understanding modules is discussed. In order to close the gap between non-native and native input, a method is proposed to automatically generate confirmation prompts that are both close to the user's input and covered by the system's language model and grammar, in order to help the user acquire idiomatic expressions appropriate to the task.",Non-Native Users in the Let's Go!! Spoken Dialogue System: Dealing with Linguistic Mismatch,"This paper describes the CMU Let's Go!! bus information system, an experimental system designed to study the use of spoken dialogue interfaces by non-native speakers. The differences in performance of the speech recognition and language understanding modules of the system when confronted with native and non-native spontaneous speech are analyzed. Focus is placed on the linguistic mismatch between the user input and the system's expectations, and on its implications in terms of language modeling and parsing performance. The effect of including non-native data when building the speech recognition and language understanding modules is discussed. In order to close the gap between non-native and native input, a method is proposed to automatically generate confirmation prompts that are both close to the user's input and covered by the system's language model and grammar, in order to help the user acquire idiomatic expressions appropriate to the task.","The authors would like to thank Alan W Black, Dan Bohus and Brian Langner for their help with this research.This material is based upon work supported by the U.S. National Science Foundation under Grant No. 0208835, ""LET'S GO: improved speech interfaces for the general public"". 
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.","Non-Native Users in the Let's Go!! Spoken Dialogue System: Dealing with Linguistic Mismatch. This paper describes the CMU Let's Go!! bus information system, an experimental system designed to study the use of spoken dialogue interfaces by non-native speakers. The differences in performance of the speech recognition and language understanding modules of the system when confronted with native and non-native spontaneous speech are analyzed. Focus is placed on the linguistic mismatch between the user input and the system's expectations, and on its implications in terms of language modeling and parsing performance. The effect of including non-native data when building the speech recognition and language understanding modules is discussed. In order to close the gap between non-native and native input, a method is proposed to automatically generate confirmation prompts that are both close to the user's input and covered by the system's language model and grammar, in order to help the user acquire idiomatic expressions appropriate to the task.",2004
zhao-liu-2010-cips,https://aclanthology.org/W10-4126,0,,,,,,,"The CIPS-SIGHAN CLP2010 Chinese Word Segmentation Backoff. The CIPS-SIGHAN CLP 2010 Chinese Word Segmentation Bakeoff was held in the summer of 2010 to evaluate the current state of the art in word segmentation. It focused on the crossdomain performance of Chinese word segmentation algorithms. Eighteen groups submitted 128 results over two tracks (open training and closed training), four domains (literature, computer science, medicine and finance) and two subtasks (simplified Chinese and traditional Chinese). We found that compared with the previous Chinese word segmentation bakeoffs, the performance of cross-domain Chinese word segmentation is not much lower, and the out-of-vocabulary recall is improved.",The {CIPS}-{SIGHAN} {CLP}2010 {C}hinese Word Segmentation Backoff,"The CIPS-SIGHAN CLP 2010 Chinese Word Segmentation Bakeoff was held in the summer of 2010 to evaluate the current state of the art in word segmentation. It focused on the crossdomain performance of Chinese word segmentation algorithms. Eighteen groups submitted 128 results over two tracks (open training and closed training), four domains (literature, computer science, medicine and finance) and two subtasks (simplified Chinese and traditional Chinese). We found that compared with the previous Chinese word segmentation bakeoffs, the performance of cross-domain Chinese word segmentation is not much lower, and the out-of-vocabulary recall is improved.",The CIPS-SIGHAN CLP2010 Chinese Word Segmentation Backoff,"The CIPS-SIGHAN CLP 2010 Chinese Word Segmentation Bakeoff was held in the summer of 2010 to evaluate the current state of the art in word segmentation. It focused on the crossdomain performance of Chinese word segmentation algorithms. Eighteen groups submitted 128 results over two tracks (open training and closed training), four domains (literature, computer science, medicine and finance) and two subtasks (simplified Chinese and traditional Chinese). We found that compared with the previous Chinese word segmentation bakeoffs, the performance of cross-domain Chinese word segmentation is not much lower, and the out-of-vocabulary recall is improved.","This work is supported by the National Natural Science Foundation of China (Grant No. 90920004). We gratefully acknowledge the generous assistance of the organizations listed below who provided the data and the Chinese word segmentation standard for this bakeoff; without their support, it could not have taken place:City University of Hong Kong Institute for Computational Linguistics, Beijing University, Beijing, ChinaWe thank Le Sun for his organization of the First CIPS-SIGHAN Joint Conference on Chinese Language Processing of which this bakeoff is part. We thank Lili Zhao for her segmentation-inconsistency-checking program and other preparation works for this bakeoff, and thank Guanghui Luo and Siyang Cao for the online scoring system they set up and maintained. We thank Yajuan Lü for her helpful suggestions on this paper. Finally we thank all the participants for their interest and hard work in making this bakeoff a success.","The CIPS-SIGHAN CLP2010 Chinese Word Segmentation Backoff. The CIPS-SIGHAN CLP 2010 Chinese Word Segmentation Bakeoff was held in the summer of 2010 to evaluate the current state of the art in word segmentation. It focused on the crossdomain performance of Chinese word segmentation algorithms. 
Eighteen groups submitted 128 results over two tracks (open training and closed training), four domains (literature, computer science, medicine and finance) and two subtasks (simplified Chinese and traditional Chinese). We found that compared with the previous Chinese word segmentation bakeoffs, the performance of cross-domain Chinese word segmentation is not much lower, and the out-of-vocabulary recall is improved.",2010
blanco-sarabi-2016-automatic,https://aclanthology.org/N16-1169,0,,,,,,,"Automatic Generation and Scoring of Positive Interpretations from Negated Statements. This paper presents a methodology to extract positive interpretations from negated statements. First, we automatically generate plausible interpretations using well-known grammar rules and manipulating semantic roles. Second, we score plausible alternatives according to their likelihood. Manual annotations show that the positive interpretations are intuitive to humans, and experimental results show that the scoring task can be automated.",Automatic Generation and Scoring of Positive Interpretations from Negated Statements,"This paper presents a methodology to extract positive interpretations from negated statements. First, we automatically generate plausible interpretations using well-known grammar rules and manipulating semantic roles. Second, we score plausible alternatives according to their likelihood. Manual annotations show that the positive interpretations are intuitive to humans, and experimental results show that the scoring task can be automated.",Automatic Generation and Scoring of Positive Interpretations from Negated Statements,"This paper presents a methodology to extract positive interpretations from negated statements. First, we automatically generate plausible interpretations using well-known grammar rules and manipulating semantic roles. Second, we score plausible alternatives according to their likelihood. Manual annotations show that the positive interpretations are intuitive to humans, and experimental results show that the scoring task can be automated.",,"Automatic Generation and Scoring of Positive Interpretations from Negated Statements. This paper presents a methodology to extract positive interpretations from negated statements. First, we automatically generate plausible interpretations using well-known grammar rules and manipulating semantic roles. Second, we score plausible alternatives according to their likelihood. Manual annotations show that the positive interpretations are intuitive to humans, and experimental results show that the scoring task can be automated.",2016
rosenberg-2010-classification,https://aclanthology.org/N10-1109,0,,,,,,,"Classification of Prosodic Events using Quantized Contour Modeling. We present Quantized Contour Modeling (QCM), a Bayesian approach to the classification of acoustic contours. We evaluate the performance of this technique in the classification of prosodic events. We find that, on BURNC, this technique can successfully classify pitch accents with 63.99% accuracy (.4481 CER), and phrase ending tones with 72.91% accuracy.",Classification of Prosodic Events using Quantized Contour Modeling,"We present Quantized Contour Modeling (QCM), a Bayesian approach to the classification of acoustic contours. We evaluate the performance of this technique in the classification of prosodic events. We find that, on BURNC, this technique can successfully classify pitch accents with 63.99% accuracy (.4481 CER), and phrase ending tones with 72.91% accuracy.",Classification of Prosodic Events using Quantized Contour Modeling,"We present Quantized Contour Modeling (QCM), a Bayesian approach to the classification of acoustic contours. We evaluate the performance of this technique in the classification of prosodic events. We find that, on BURNC, this technique can successfully classify pitch accents with 63.99% accuracy (.4481 CER), and phrase ending tones with 72.91% accuracy.",,"Classification of Prosodic Events using Quantized Contour Modeling. We present Quantized Contour Modeling (QCM), a Bayesian approach to the classification of acoustic contours. We evaluate the performance of this technique in the classification of prosodic events. We find that, on BURNC, this technique can successfully classify pitch accents with 63.99% accuracy (.4481 CER), and phrase ending tones with 72.91% accuracy.",2010
brasoveanu-etal-2020-media,https://aclanthology.org/2020.conll-1.28,0,,,,,,,"In Media Res: A Corpus for Evaluating Named Entity Linking with Creative Works. Annotation styles express guidelines that direct human annotators by explicitly stating the rules to follow when creating gold standard annotations of text corpora. These guidelines not only shape the gold standards they help create, but also influence the training and evaluation of Named Entity Linking (NEL) tools, since different annotation styles correspond to divergent views on the entities present in a document. Such divergence is particularly relevant for texts from the media domain containing references to creative works. This paper presents a corpus of 1000 annotated documents from sources such as Wikipedia, TVTropes and WikiNews that are organized in ten partitions. Each document contains multiple gold standard annotations representing various annotation styles. The corpus is used to evaluate a series of Named Entity Linking tools in order to understand the impact of the differences in annotation styles on the reported accuracy when processing highly ambiguous entities such as names of creative works. Relaxed annotation guidelines that include overlap styles, for instance, lead to better results across all tools.",In Media Res: A Corpus for Evaluating Named Entity Linking with Creative Works,"Annotation styles express guidelines that direct human annotators by explicitly stating the rules to follow when creating gold standard annotations of text corpora. These guidelines not only shape the gold standards they help create, but also influence the training and evaluation of Named Entity Linking (NEL) tools, since different annotation styles correspond to divergent views on the entities present in a document. Such divergence is particularly relevant for texts from the media domain containing references to creative works. This paper presents a corpus of 1000 annotated documents from sources such as Wikipedia, TVTropes and WikiNews that are organized in ten partitions. Each document contains multiple gold standard annotations representing various annotation styles. The corpus is used to evaluate a series of Named Entity Linking tools in order to understand the impact of the differences in annotation styles on the reported accuracy when processing highly ambiguous entities such as names of creative works. Relaxed annotation guidelines that include overlap styles, for instance, lead to better results across all tools.",In Media Res: A Corpus for Evaluating Named Entity Linking with Creative Works,"Annotation styles express guidelines that direct human annotators by explicitly stating the rules to follow when creating gold standard annotations of text corpora. These guidelines not only shape the gold standards they help create, but also influence the training and evaluation of Named Entity Linking (NEL) tools, since different annotation styles correspond to divergent views on the entities present in a document. Such divergence is particularly relevant for texts from the media domain containing references to creative works. This paper presents a corpus of 1000 annotated documents from sources such as Wikipedia, TVTropes and WikiNews that are organized in ten partitions. Each document contains multiple gold standard annotations representing various annotation styles. 
The corpus is used to evaluate a series of Named Entity Linking tools in order to understand the impact of the differences in annotation styles on the reported accuracy when processing highly ambiguous entities such as names of creative works. Relaxed annotation guidelines that include overlap styles, for instance, lead to better results across all tools.","This research has been partially funded through the following projects: the ReTV project (www.retvproject.eu) funded by the European Union's Horizon 2020 Research and Innovation Programme (No. 780656), and MedMon (www.fhgr.ch/medmon) funded by the Swiss Innovation Agency Innosuisse.","In Media Res: A Corpus for Evaluating Named Entity Linking with Creative Works. Annotation styles express guidelines that direct human annotators by explicitly stating the rules to follow when creating gold standard annotations of text corpora. These guidelines not only shape the gold standards they help create, but also influence the training and evaluation of Named Entity Linking (NEL) tools, since different annotation styles correspond to divergent views on the entities present in a document. Such divergence is particularly relevant for texts from the media domain containing references to creative works. This paper presents a corpus of 1000 annotated documents from sources such as Wikipedia, TVTropes and WikiNews that are organized in ten partitions. Each document contains multiple gold standard annotations representing various annotation styles. The corpus is used to evaluate a series of Named Entity Linking tools in order to understand the impact of the differences in annotation styles on the reported accuracy when processing highly ambiguous entities such as names of creative works. Relaxed annotation guidelines that include overlap styles, for instance, lead to better results across all tools.",2020
jimenez-gutierrez-etal-2020-document,https://aclanthology.org/2020.findings-emnlp.332,1,,,,health,,,"Document Classification for COVID-19 Literature. The global pandemic has made it more important than ever to quickly and accurately retrieve relevant scientific literature for effective consumption by researchers in a wide range of fields. We provide an analysis of several multi-label document classification models on the LitCovid dataset, a growing collection of 23,000 research papers regarding the novel 2019 coronavirus. We find that pre-trained language models fine-tuned on this dataset outperform all other baselines and that BioBERT surpasses the others by a small margin with micro-F1 and accuracy scores of around 86% and 75% respectively on the test set. We evaluate the data efficiency and generalizability of these models as essential features of any system prepared to deal with an urgent situation like the current health crisis. We perform a data ablation study to determine how important article titles are for achieving reasonable performance on this dataset. Finally, we explore 50 errors made by the best performing models on LitCovid documents and find that they often (1) correlate certain labels too closely together and (2) fail to focus on discriminative sections of the articles; both of which are important issues to address in future work. Both data and code are available on GitHub 1 .",Document Classification for {COVID}-19 Literature,"The global pandemic has made it more important than ever to quickly and accurately retrieve relevant scientific literature for effective consumption by researchers in a wide range of fields. We provide an analysis of several multi-label document classification models on the LitCovid dataset, a growing collection of 23,000 research papers regarding the novel 2019 coronavirus. We find that pre-trained language models fine-tuned on this dataset outperform all other baselines and that BioBERT surpasses the others by a small margin with micro-F1 and accuracy scores of around 86% and 75% respectively on the test set. We evaluate the data efficiency and generalizability of these models as essential features of any system prepared to deal with an urgent situation like the current health crisis. We perform a data ablation study to determine how important article titles are for achieving reasonable performance on this dataset. Finally, we explore 50 errors made by the best performing models on LitCovid documents and find that they often (1) correlate certain labels too closely together and (2) fail to focus on discriminative sections of the articles; both of which are important issues to address in future work. Both data and code are available on GitHub 1 .",Document Classification for COVID-19 Literature,"The global pandemic has made it more important than ever to quickly and accurately retrieve relevant scientific literature for effective consumption by researchers in a wide range of fields. We provide an analysis of several multi-label document classification models on the LitCovid dataset, a growing collection of 23,000 research papers regarding the novel 2019 coronavirus. We find that pre-trained language models fine-tuned on this dataset outperform all other baselines and that BioBERT surpasses the others by a small margin with micro-F1 and accuracy scores of around 86% and 75% respectively on the test set. 
We evaluate the data efficiency and generalizability of these models as essential features of any system prepared to deal with an urgent situation like the current health crisis. We perform a data ablation study to determine how important article titles are for achieving reasonable performance on this dataset. Finally, we explore 50 errors made by the best performing models on LitCovid documents and find that they often (1) correlate certain labels too closely together and (2) fail to focus on discriminative sections of the articles; both of which are important issues to address in future work. Both data and code are available on GitHub 1 .","This research was sponsored in part by the Ohio Supercomputer Center (Center, 1987) . The authors would also like to thank Lang Li and Tanya Berger-Wolf for helpful discussions.","Document Classification for COVID-19 Literature. The global pandemic has made it more important than ever to quickly and accurately retrieve relevant scientific literature for effective consumption by researchers in a wide range of fields. We provide an analysis of several multi-label document classification models on the LitCovid dataset, a growing collection of 23,000 research papers regarding the novel 2019 coronavirus. We find that pre-trained language models fine-tuned on this dataset outperform all other baselines and that BioBERT surpasses the others by a small margin with micro-F1 and accuracy scores of around 86% and 75% respectively on the test set. We evaluate the data efficiency and generalizability of these models as essential features of any system prepared to deal with an urgent situation like the current health crisis. We perform a data ablation study to determine how important article titles are for achieving reasonable performance on this dataset. Finally, we explore 50 errors made by the best performing models on LitCovid documents and find that they often (1) correlate certain labels too closely together and (2) fail to focus on discriminative sections of the articles; both of which are important issues to address in future work. Both data and code are available on GitHub 1 .",2020
zhao-etal-2021-efficient,https://aclanthology.org/2021.emnlp-main.354,0,,,,,,,"Efficient Dialogue Complementary Policy Learning via Deep Q-network Policy and Episodic Memory Policy. Deep reinforcement learning has shown great potential in training dialogue policies. However, its favorable performance comes at the cost of many rounds of interaction. Most of the existing dialogue policy methods rely on a single learning system, while the human brain has two specialized learning and memory systems, supporting to find good solutions without requiring copious examples. Inspired by the human brain, this paper proposes a novel complementary policy learning (CPL) framework, which exploits the complementary advantages of the episodic memory (EM) policy and the deep Q-network (DQN) policy to achieve fast and effective dialogue policy learning. In order to coordinate between the two policies, we proposed a confidence controller to control the complementary time according to their relative efficacy at different stages. Furthermore, memory connectivity and time pruning are proposed to guarantee the flexible and adaptive generalization of the EM policy in dialog tasks. Experimental results on three dialogue datasets show that our method significantly outperforms existing methods relying on a single learning system.",Efficient Dialogue Complementary Policy Learning via Deep {Q}-network Policy and Episodic Memory Policy,"Deep reinforcement learning has shown great potential in training dialogue policies. However, its favorable performance comes at the cost of many rounds of interaction. Most of the existing dialogue policy methods rely on a single learning system, while the human brain has two specialized learning and memory systems, supporting to find good solutions without requiring copious examples. Inspired by the human brain, this paper proposes a novel complementary policy learning (CPL) framework, which exploits the complementary advantages of the episodic memory (EM) policy and the deep Q-network (DQN) policy to achieve fast and effective dialogue policy learning. In order to coordinate between the two policies, we proposed a confidence controller to control the complementary time according to their relative efficacy at different stages. Furthermore, memory connectivity and time pruning are proposed to guarantee the flexible and adaptive generalization of the EM policy in dialog tasks. Experimental results on three dialogue datasets show that our method significantly outperforms existing methods relying on a single learning system.",Efficient Dialogue Complementary Policy Learning via Deep Q-network Policy and Episodic Memory Policy,"Deep reinforcement learning has shown great potential in training dialogue policies. However, its favorable performance comes at the cost of many rounds of interaction. Most of the existing dialogue policy methods rely on a single learning system, while the human brain has two specialized learning and memory systems, supporting to find good solutions without requiring copious examples. Inspired by the human brain, this paper proposes a novel complementary policy learning (CPL) framework, which exploits the complementary advantages of the episodic memory (EM) policy and the deep Q-network (DQN) policy to achieve fast and effective dialogue policy learning. In order to coordinate between the two policies, we proposed a confidence controller to control the complementary time according to their relative efficacy at different stages. 
Furthermore, memory connectivity and time pruning are proposed to guarantee the flexible and adaptive generalization of the EM policy in dialog tasks. Experimental results on three dialogue datasets show that our method significantly outperforms existing methods relying on a single learning system.",,"Efficient Dialogue Complementary Policy Learning via Deep Q-network Policy and Episodic Memory Policy. Deep reinforcement learning has shown great potential in training dialogue policies. However, its favorable performance comes at the cost of many rounds of interaction. Most of the existing dialogue policy methods rely on a single learning system, while the human brain has two specialized learning and memory systems, supporting to find good solutions without requiring copious examples. Inspired by the human brain, this paper proposes a novel complementary policy learning (CPL) framework, which exploits the complementary advantages of the episodic memory (EM) policy and the deep Q-network (DQN) policy to achieve fast and effective dialogue policy learning. In order to coordinate between the two policies, we proposed a confidence controller to control the complementary time according to their relative efficacy at different stages. Furthermore, memory connectivity and time pruning are proposed to guarantee the flexible and adaptive generalization of the EM policy in dialog tasks. Experimental results on three dialogue datasets show that our method significantly outperforms existing methods relying on a single learning system.",2021
rios-kavuluru-2018-emr,https://aclanthology.org/N18-1189,0,,,,,,,"EMR Coding with Semi-Parametric Multi-Head Matching Networks. Coding EMRs with diagnosis and procedure codes is an indispensable task for billing, secondary data analyses, and monitoring health trends. Both speed and accuracy of coding are critical. While coding errors could lead to more patient-side financial burden and misinterpretation of a patient's well-being, timely coding is also needed to avoid backlogs and additional costs for the healthcare facility. In this paper, we present a new neural network architecture that combines ideas from few-shot learning matching networks, multi-label loss functions, and convolutional neural networks for text classification to significantly outperform other state-of-the-art models. Our evaluations are conducted using a well known deidentified EMR dataset (MIMIC) with a variety of multi-label performance measures.",{EMR} Coding with Semi-Parametric Multi-Head Matching Networks,"Coding EMRs with diagnosis and procedure codes is an indispensable task for billing, secondary data analyses, and monitoring health trends. Both speed and accuracy of coding are critical. While coding errors could lead to more patient-side financial burden and misinterpretation of a patient's well-being, timely coding is also needed to avoid backlogs and additional costs for the healthcare facility. In this paper, we present a new neural network architecture that combines ideas from few-shot learning matching networks, multi-label loss functions, and convolutional neural networks for text classification to significantly outperform other state-of-the-art models. Our evaluations are conducted using a well known deidentified EMR dataset (MIMIC) with a variety of multi-label performance measures.",EMR Coding with Semi-Parametric Multi-Head Matching Networks,"Coding EMRs with diagnosis and procedure codes is an indispensable task for billing, secondary data analyses, and monitoring health trends. Both speed and accuracy of coding are critical. While coding errors could lead to more patient-side financial burden and misinterpretation of a patient's well-being, timely coding is also needed to avoid backlogs and additional costs for the healthcare facility. In this paper, we present a new neural network architecture that combines ideas from few-shot learning matching networks, multi-label loss functions, and convolutional neural networks for text classification to significantly outperform other state-of-the-art models. Our evaluations are conducted using a well known deidentified EMR dataset (MIMIC) with a variety of multi-label performance measures.",Thanks to anonymous reviewers for their thorough reviews and constructive criticism that helped improve the clarity of the paper (especially leading to the addition of Section 3.5 in the revision). This research is supported by the U.S. National Library of Medicine through grant R21LM012274. We also gratefully acknowledge the support of the NVIDIA Corporation for its donation of the Titan X Pascal GPU used for this research.,"EMR Coding with Semi-Parametric Multi-Head Matching Networks. Coding EMRs with diagnosis and procedure codes is an indispensable task for billing, secondary data analyses, and monitoring health trends. Both speed and accuracy of coding are critical. 
While coding errors could lead to more patient-side financial burden and misinterpretation of a patient's well-being, timely coding is also needed to avoid backlogs and additional costs for the healthcare facility. In this paper, we present a new neural network architecture that combines ideas from few-shot learning matching networks, multi-label loss functions, and convolutional neural networks for text classification to significantly outperform other state-of-the-art models. Our evaluations are conducted using a well known deidentified EMR dataset (MIMIC) with a variety of multi-label performance measures.",2018
dsouza-ng-2014-ensemble,https://aclanthology.org/C14-1159,1,,,,health,,,"Ensemble-Based Medical Relation Classification. Despite the successes of distant supervision approaches to relation extraction in the news domain, the lack of a comprehensive ontology of medical relations makes it difficult to apply such approaches to relation classification in the medical domain. In light of this difficulty, we propose an ensemble approach to this task where we exploit human-supplied knowledge to guide the design of members of the ensemble. Results on the 2010 i2b2/VA Challenge corpus show that our ensemble approach yields a 19.8% relative error reduction over a state-of-the-art baseline.",Ensemble-Based Medical Relation Classification,"Despite the successes of distant supervision approaches to relation extraction in the news domain, the lack of a comprehensive ontology of medical relations makes it difficult to apply such approaches to relation classification in the medical domain. In light of this difficulty, we propose an ensemble approach to this task where we exploit human-supplied knowledge to guide the design of members of the ensemble. Results on the 2010 i2b2/VA Challenge corpus show that our ensemble approach yields a 19.8% relative error reduction over a state-of-the-art baseline.",Ensemble-Based Medical Relation Classification,"Despite the successes of distant supervision approaches to relation extraction in the news domain, the lack of a comprehensive ontology of medical relations makes it difficult to apply such approaches to relation classification in the medical domain. In light of this difficulty, we propose an ensemble approach to this task where we exploit human-supplied knowledge to guide the design of members of the ensemble. Results on the 2010 i2b2/VA Challenge corpus show that our ensemble approach yields a 19.8% relative error reduction over a state-of-the-art baseline.",We thank the three anonymous reviewers for their detailed and insightful comments on an earlier draft of this paper. This work was supported in part by NSF Grants IIS-1147644 and IIS-1219142.,"Ensemble-Based Medical Relation Classification. Despite the successes of distant supervision approaches to relation extraction in the news domain, the lack of a comprehensive ontology of medical relations makes it difficult to apply such approaches to relation classification in the medical domain. In light of this difficulty, we propose an ensemble approach to this task where we exploit human-supplied knowledge to guide the design of members of the ensemble. Results on the 2010 i2b2/VA Challenge corpus show that our ensemble approach yields a 19.8% relative error reduction over a state-of-the-art baseline.",2014
bateman-1999-using,https://aclanthology.org/P99-1017,0,,,,,,,Using aggregation for selecting content when generating referring expressions. Previous algorithms for the generation of referring expressions have been developed specifically for this purpose. Here we introduce an alternative approach based on a fully generic aggregation method also motivated for other generation tasks. We argue that the alternative contributes to a more integrated and uniform approach to content determination in the context of complete noun phrase generation.,Using aggregation for selecting content when generating referring expressions,Previous algorithms for the generation of referring expressions have been developed specifically for this purpose. Here we introduce an alternative approach based on a fully generic aggregation method also motivated for other generation tasks. We argue that the alternative contributes to a more integrated and uniform approach to content determination in the context of complete noun phrase generation.,Using aggregation for selecting content when generating referring expressions,Previous algorithms for the generation of referring expressions have been developed specifically for this purpose. Here we introduce an alternative approach based on a fully generic aggregation method also motivated for other generation tasks. We argue that the alternative contributes to a more integrated and uniform approach to content determination in the context of complete noun phrase generation.,This paper was improved by the anonymous comments of reviewers for both the ACL and the European Natural Language Generation Workshop (1999). Remaining errors and obscurities are my own.,Using aggregation for selecting content when generating referring expressions. Previous algorithms for the generation of referring expressions have been developed specifically for this purpose. Here we introduce an alternative approach based on a fully generic aggregation method also motivated for other generation tasks. We argue that the alternative contributes to a more integrated and uniform approach to content determination in the context of complete noun phrase generation.,1999
hautli-butt-2011-towards,https://aclanthology.org/W11-3412,0,,,,,,,"Towards a Computational Semantic Analyzer for Urdu. This paper describes a first approach to a computational semantic analyzer for Urdu on the basis of the deep syntactic analysis done by the Urdu grammar ParGram. Apart from the semantic construction, external lexical resources such as an Urdu WordNet and a preliminary VerbNet style resource for Urdu are developed and connected to the semantic analyzer. These resources allow for a deeper level of representation by providing real-world knowledge such as hypernyms of lexical entities and information on thematic roles. We therefore contribute to the overall goal of providing more insights into the computationally efficient analysis of Urdu, in particular to computational semantic analysis.",Towards a Computational Semantic Analyzer for {U}rdu,"This paper describes a first approach to a computational semantic analyzer for Urdu on the basis of the deep syntactic analysis done by the Urdu grammar ParGram. Apart from the semantic construction, external lexical resources such as an Urdu WordNet and a preliminary VerbNet style resource for Urdu are developed and connected to the semantic analyzer. These resources allow for a deeper level of representation by providing real-world knowledge such as hypernyms of lexical entities and information on thematic roles. We therefore contribute to the overall goal of providing more insights into the computationally efficient analysis of Urdu, in particular to computational semantic analysis.",Towards a Computational Semantic Analyzer for Urdu,"This paper describes a first approach to a computational semantic analyzer for Urdu on the basis of the deep syntactic analysis done by the Urdu grammar ParGram. Apart from the semantic construction, external lexical resources such as an Urdu WordNet and a preliminary VerbNet style resource for Urdu are developed and connected to the semantic analyzer. These resources allow for a deeper level of representation by providing real-world knowledge such as hypernyms of lexical entities and information on thematic roles. We therefore contribute to the overall goal of providing more insights into the computationally efficient analysis of Urdu, in particular to computational semantic analysis.",,"Towards a Computational Semantic Analyzer for Urdu. This paper describes a first approach to a computational semantic analyzer for Urdu on the basis of the deep syntactic analysis done by the Urdu grammar ParGram. Apart from the semantic construction, external lexical resources such as an Urdu WordNet and a preliminary VerbNet style resource for Urdu are developed and connected to the semantic analyzer. These resources allow for a deeper level of representation by providing real-world knowledge such as hypernyms of lexical entities and information on thematic roles. We therefore contribute to the overall goal of providing more insights into the computationally efficient analysis of Urdu, in particular to computational semantic analysis.",2011
servan-etal-2012-liums,https://aclanthology.org/W12-3147,0,,,,,,,"LIUM's SMT Machine Translation Systems for WMT 2012. This paper describes the development of French-English and English-French statistical machine translation systems for the 2012 WMT shared task evaluation. We developed phrase-based systems based on the Moses decoder, trained on the provided data only. Additionally, new features this year included improved language and translation model adaptation using the cross-entropy score for the corpus selection.",{LIUM}{'}s {SMT} Machine Translation Systems for {WMT} 2012,"This paper describes the development of French-English and English-French statistical machine translation systems for the 2012 WMT shared task evaluation. We developed phrase-based systems based on the Moses decoder, trained on the provided data only. Additionally, new features this year included improved language and translation model adaptation using the cross-entropy score for the corpus selection.",LIUM's SMT Machine Translation Systems for WMT 2012,"This paper describes the development of French-English and English-French statistical machine translation systems for the 2012 WMT shared task evaluation. We developed phrase-based systems based on the Moses decoder, trained on the provided data only. Additionally, new features this year included improved language and translation model adaptation using the cross-entropy score for the corpus selection.",This work has been partially funded by the European Union under the EuroMatrixPlus project ICT-2007.2.2-FP7-231720 and the French government under the ANR project COSMAT ANR-09-CORD-004.,"LIUM's SMT Machine Translation Systems for WMT 2012. This paper describes the development of French-English and English-French statistical machine translation systems for the 2012 WMT shared task evaluation. We developed phrase-based systems based on the Moses decoder, trained on the provided data only. Additionally, new features this year included improved language and translation model adaptation using the cross-entropy score for the corpus selection.",2012
fernandez-gonzalez-martins-2015-parsing,https://aclanthology.org/P15-1147,0,,,,,,,"Parsing as Reduction. We reduce phrase-based parsing to dependency parsing. Our reduction is grounded on a new intermediate representation, ""head-ordered dependency trees,"" shown to be isomorphic to constituent trees. By encoding order information in the dependency labels, we show that any off-the-shelf, trainable dependency parser can be used to produce constituents. When this parser is non-projective, we can perform discontinuous parsing in a very natural manner. Despite the simplicity of our approach, experiments show that the resulting parsers are on par with strong baselines, such as the Berkeley parser for English and the best non-reranking system in the SPMRL-2014 shared task. Results are particularly striking for discontinuous parsing of German, where we surpass the current state of the art by a wide margin. * This research was carried out during an internship at Priberam Labs.",Parsing as Reduction,"We reduce phrase-based parsing to dependency parsing. Our reduction is grounded on a new intermediate representation, ""head-ordered dependency trees,"" shown to be isomorphic to constituent trees. By encoding order information in the dependency labels, we show that any off-the-shelf, trainable dependency parser can be used to produce constituents. When this parser is non-projective, we can perform discontinuous parsing in a very natural manner. Despite the simplicity of our approach, experiments show that the resulting parsers are on par with strong baselines, such as the Berkeley parser for English and the best non-reranking system in the SPMRL-2014 shared task. Results are particularly striking for discontinuous parsing of German, where we surpass the current state of the art by a wide margin. * This research was carried out during an internship at Priberam Labs.",Parsing as Reduction,"We reduce phrase-based parsing to dependency parsing. Our reduction is grounded on a new intermediate representation, ""head-ordered dependency trees,"" shown to be isomorphic to constituent trees. By encoding order information in the dependency labels, we show that any off-the-shelf, trainable dependency parser can be used to produce constituents. When this parser is non-projective, we can perform discontinuous parsing in a very natural manner. Despite the simplicity of our approach, experiments show that the resulting parsers are on par with strong baselines, such as the Berkeley parser for English and the best non-reranking system in the SPMRL-2014 shared task. Results are particularly striking for discontinuous parsing of German, where we surpass the current state of the art by a wide margin. * This research was carried out during an internship at Priberam Labs.","We would like to thank the three reviewers for their insightful comments, and Slav Petrov, Djamé Seddah, Yannick Versley, David Hall, Muhua Zhu, Lingpeng Kong, Carlos Gómez-Rodríguez, and Andreas van Cranenburgh for valuable feedback and help in preparing data and running software code. This research has been partially funded by the Spanish Ministry of Economy and Competitiveness and FEDER (project TIN2010-18552-C03-01), Ministry of Education (FPU Grant Program) and Xunta de Galicia (projects R2014/029 and R2014/034). A. M. was supported by the EU/FEDER programme, QREN/POR Lisboa (Portugal), under the Intelligo project (contract 2012/24803), and by the FCT grants UID/EEA/50008/2013 and PTDC/EEI-SII/2312/2012.","Parsing as Reduction. 
We reduce phrase-based parsing to dependency parsing. Our reduction is grounded on a new intermediate representation, ""head-ordered dependency trees,"" shown to be isomorphic to constituent trees. By encoding order information in the dependency labels, we show that any off-the-shelf, trainable dependency parser can be used to produce constituents. When this parser is non-projective, we can perform discontinuous parsing in a very natural manner. Despite the simplicity of our approach, experiments show that the resulting parsers are on par with strong baselines, such as the Berkeley parser for English and the best non-reranking system in the SPMRL-2014 shared task. Results are particularly striking for discontinuous parsing of German, where we surpass the current state of the art by a wide margin. * This research was carried out during an internship at Priberam Labs.",2015
caucheteux-etal-2021-model-based,https://aclanthology.org/2021.findings-emnlp.308,0,,,,,,,"Model-based analysis of brain activity reveals the hierarchy of language in 305 subjects. A popular approach to decompose the neural bases of language consists in correlating, across individuals, the brain responses to different stimuli (e.g. regular speech versus scrambled words, sentences, or paragraphs). Although successful, this 'model-free' approach necessitates the acquisition of a large and costly set of neuroimaging data. Here, we show that a model-based approach can reach equivalent results within subjects exposed to natural stimuli. We capitalize on the recently discovered similarities between deep language models and the human brain to compute the mapping between i) the brain responses to regular speech and ii) the activations of deep language models elicited by modified stimuli (e.g. scrambled words, sentences, or paragraphs). Our model-based approach successfully replicates the seminal study of (Lerner et al., 2011), which revealed the hierarchy of language areas by comparing the functional magnetic resonance imaging (fMRI) of seven subjects listening to 7 min of both regular and scrambled narratives. We further extend and refine these results to the brain signals of 305 individuals listening to 4.1 hours of narrated stories. Overall, this study paves the way for efficient and flexible analyses of the brain bases of language.",Model-based analysis of brain activity reveals the hierarchy of language in 305 subjects,"A popular approach to decompose the neural bases of language consists in correlating, across individuals, the brain responses to different stimuli (e.g. regular speech versus scrambled words, sentences, or paragraphs). Although successful, this 'model-free' approach necessitates the acquisition of a large and costly set of neuroimaging data. Here, we show that a model-based approach can reach equivalent results within subjects exposed to natural stimuli. We capitalize on the recently discovered similarities between deep language models and the human brain to compute the mapping between i) the brain responses to regular speech and ii) the activations of deep language models elicited by modified stimuli (e.g. scrambled words, sentences, or paragraphs). Our model-based approach successfully replicates the seminal study of (Lerner et al., 2011), which revealed the hierarchy of language areas by comparing the functional magnetic resonance imaging (fMRI) of seven subjects listening to 7 min of both regular and scrambled narratives. We further extend and refine these results to the brain signals of 305 individuals listening to 4.1 hours of narrated stories. Overall, this study paves the way for efficient and flexible analyses of the brain bases of language.",Model-based analysis of brain activity reveals the hierarchy of language in 305 subjects,"A popular approach to decompose the neural bases of language consists in correlating, across individuals, the brain responses to different stimuli (e.g. regular speech versus scrambled words, sentences, or paragraphs). Although successful, this 'model-free' approach necessitates the acquisition of a large and costly set of neuroimaging data. Here, we show that a model-based approach can reach equivalent results within subjects exposed to natural stimuli. 
We capitalize on the recently discovered similarities between deep language models and the human brain to compute the mapping between i) the brain responses to regular speech and ii) the activations of deep language models elicited by modified stimuli (e.g. scrambled words, sentences, or paragraphs). Our model-based approach successfully replicates the seminal study of (Lerner et al., 2011), which revealed the hierarchy of language areas by comparing the functional magnetic resonance imaging (fMRI) of seven subjects listening to 7 min of both regular and scrambled narratives. We further extend and refine these results to the brain signals of 305 individuals listening to 4.1 hours of narrated stories. Overall, this study paves the way for efficient and flexible analyses of the brain bases of language.","This work was supported by the French ANR-20-CHIA-0016 and the European Research Council Starting Grant SLAB ERC-YStG-676943 to AG, and by the French ANR-17-EURE-0017 and the Fyssen Foundation to JRK for his work at PSL.","Model-based analysis of brain activity reveals the hierarchy of language in 305 subjects. A popular approach to decompose the neural bases of language consists in correlating, across individuals, the brain responses to different stimuli (e.g. regular speech versus scrambled words, sentences, or paragraphs). Although successful, this 'model-free' approach necessitates the acquisition of a large and costly set of neuroimaging data. Here, we show that a model-based approach can reach equivalent results within subjects exposed to natural stimuli. We capitalize on the recently discovered similarities between deep language models and the human brain to compute the mapping between i) the brain responses to regular speech and ii) the activations of deep language models elicited by modified stimuli (e.g. scrambled words, sentences, or paragraphs). Our model-based approach successfully replicates the seminal study of (Lerner et al., 2011), which revealed the hierarchy of language areas by comparing the functional magnetic resonance imaging (fMRI) of seven subjects listening to 7 min of both regular and scrambled narratives. We further extend and refine these results to the brain signals of 305 individuals listening to 4.1 hours of narrated stories. Overall, this study paves the way for efficient and flexible analyses of the brain bases of language.",2021
rao-etal-2017-scalable,https://aclanthology.org/W17-7547,1,,,,health,,,"Scalable Bio-Molecular Event Extraction System towards Knowledge Acquisition. This paper presents a robust system for the automatic extraction of bio-molecular events from scientific texts. Event extraction provides information in the understanding of physiological and pathogenesis mechanisms. Event extraction from biomedical literature has a broad range of applications, such as knowledge base creation, knowledge discovery. Automatic event extraction is a challenging task due to ambiguity and diversity of natural language and linguistic phenomena, such as negations, anaphora and coreferencing leading to incorrect interpretation. In this work a machine learning based approach has been used for the event extraction. The methodology framework proposed in this work is derived from the perspective of natural language processing. The system includes a robust anaphora and coreference resolution module, developed as part of this work. An overall F-score of 54.25% is obtained, which is an improvement of 4% in comparison with the state of the art systems.",Scalable Bio-Molecular Event Extraction System towards Knowledge Acquisition,"This paper presents a robust system for the automatic extraction of bio-molecular events from scientific texts. Event extraction provides information in the understanding of physiological and pathogenesis mechanisms. Event extraction from biomedical literature has a broad range of applications, such as knowledge base creation, knowledge discovery. Automatic event extraction is a challenging task due to ambiguity and diversity of natural language and linguistic phenomena, such as negations, anaphora and coreferencing leading to incorrect interpretation. In this work a machine learning based approach has been used for the event extraction. The methodology framework proposed in this work is derived from the perspective of natural language processing. The system includes a robust anaphora and coreference resolution module, developed as part of this work. An overall F-score of 54.25% is obtained, which is an improvement of 4% in comparison with the state of the art systems.",Scalable Bio-Molecular Event Extraction System towards Knowledge Acquisition,"This paper presents a robust system for the automatic extraction of bio-molecular events from scientific texts. Event extraction provides information in the understanding of physiological and pathogenesis mechanisms. Event extraction from biomedical literature has a broad range of applications, such as knowledge base creation, knowledge discovery. Automatic event extraction is a challenging task due to ambiguity and diversity of natural language and linguistic phenomena, such as negations, anaphora and coreferencing leading to incorrect interpretation. In this work a machine learning based approach has been used for the event extraction. The methodology framework proposed in this work is derived from the perspective of natural language processing. The system includes a robust anaphora and coreference resolution module, developed as part of this work. An overall F-score of 54.25% is obtained, which is an improvement of 4% in comparison with the state of the art systems.",,"Scalable Bio-Molecular Event Extraction System towards Knowledge Acquisition. This paper presents a robust system for the automatic extraction of bio-molecular events from scientific texts. 
Event extraction provides information in the understanding of physiological and pathogenesis mechanisms. Event extraction from biomedical literature has a broad range of applications, such as knowledge base creation, knowledge discovery. Automatic event extraction is a challenging task due to ambiguity and diversity of natural language and linguistic phenomena, such as negations, anaphora and coreferencing leading to incorrect interpretation. In this work a machine learning based approach has been used for the event extraction. The methodology framework proposed in this work is derived from the perspective of natural language processing. The system includes a robust anaphora and coreference resolution module, developed as part of this work. An overall F-score of 54.25% is obtained, which is an improvement of 4% in comparison with the state of the art systems.",2017
yu-etal-2016-product,https://aclanthology.org/C16-1106,0,,,,business_use,,,"Product Review Summarization by Exploiting Phrase Properties. We propose a phrase-based approach for generating product review summaries. The main idea of our method is to leverage phrase properties to choose a subset of optimal phrases for generating the final summary. Specifically, we exploit two phrase properties, popularity and specificity. Popularity describes how popular the phrase is in the original reviews. Specificity describes how descriptive a phrase is in comparison to generic comments. We formalize the phrase selection procedure as an optimization problem and solve it using integer linear programming (ILP). An aspect-based bigram language model is used for generating the final summary with the selected phrases. Experiments show that our summarizer outperforms the other baselines.",Product Review Summarization by Exploiting Phrase Properties,"We propose a phrase-based approach for generating product review summaries. The main idea of our method is to leverage phrase properties to choose a subset of optimal phrases for generating the final summary. Specifically, we exploit two phrase properties, popularity and specificity. Popularity describes how popular the phrase is in the original reviews. Specificity describes how descriptive a phrase is in comparison to generic comments. We formalize the phrase selection procedure as an optimization problem and solve it using integer linear programming (ILP). An aspect-based bigram language model is used for generating the final summary with the selected phrases. Experiments show that our summarizer outperforms the other baselines.",Product Review Summarization by Exploiting Phrase Properties,"We propose a phrase-based approach for generating product review summaries. The main idea of our method is to leverage phrase properties to choose a subset of optimal phrases for generating the final summary. Specifically, we exploit two phrase properties, popularity and specificity. Popularity describes how popular the phrase is in the original reviews. Specificity describes how descriptive a phrase is in comparison to generic comments. We formalize the phrase selection procedure as an optimization problem and solve it using integer linear programming (ILP). An aspect-based bigram language model is used for generating the final summary with the selected phrases. Experiments show that our summarizer outperforms the other baselines.",This work was partly supported by the National Basic Research Program (973 Program ,"Product Review Summarization by Exploiting Phrase Properties. We propose a phrase-based approach for generating product review summaries. The main idea of our method is to leverage phrase properties to choose a subset of optimal phrases for generating the final summary. Specifically, we exploit two phrase properties, popularity and specificity. Popularity describes how popular the phrase is in the original reviews. Specificity describes how descriptive a phrase is in comparison to generic comments. We formalize the phrase selection procedure as an optimization problem and solve it using integer linear programming (ILP). An aspect-based bigram language model is used for generating the final summary with the selected phrases. Experiments show that our summarizer outperforms the other baselines.",2016
adorni-massone-1984-production,https://aclanthology.org/1984.bcs-1.16,0,,,,,,,"Production of sentences: a general algorithm and a case study. In this paper a procedure for the production of sentences is described, producing written sentences in a particular language starting from formal representations of their meaning. After a brief description of the internal representation used, the algorithm is presented, and some results and future trends are discussed.",Production of sentences: a general algorithm and a case study,"In this paper a procedure for the production of sentences is described, producing written sentences in a particular language starting from formal representations of their meaning. After a brief description of the internal representation used, the algorithm is presented, and some results and future trends are discussed.",Production of sentences: a general algorithm and a case study,"In this paper a procedure for the production of sentences is described, producing written sentences in a particular language starting from formal representations of their meaning. After a brief description of the internal representation used, the algorithm is presented, and some results and future trends are discussed.","Authors wish to thank Domenico Parisi and Alessandra Giorgi for their helpful comments and discussion, leading to the implementation of the algorithm.","Production of sentences: a general algorithm and a case study. In this paper a procedure for the production of sentences is described, producing written sentences in a particular language starting from formal representations of their meaning. After a brief description of the internal representation used, the algorithm is presented, and some results and future trends are discussed.",1984
ni-etal-2019-justifying,https://aclanthology.org/D19-1018,0,,,,,,,"Justifying Recommendations using Distantly-Labeled Reviews and Fine-Grained Aspects. Several recent works have considered the problem of generating reviews (or 'tips') as a form of explanation as to why a recommendation might match a user's interests. While promising, we demonstrate that existing approaches struggle (in terms of both quality and content) to generate justifications that are relevant to users' decision-making process. We seek to introduce new datasets and methods to address this recommendation justification task. In terms of data, we first propose an 'extractive' approach to identify review segments which justify users' intentions; this approach is then used to distantly label massive review corpora and construct large-scale personalized recommendation justification datasets. In terms of generation, we design two personalized generation models with this data: (1) a reference-based Seq2Seq model with aspect-planning which can generate justifications covering different aspects, and (2) an aspect-conditional masked language model which can generate diverse justifications based on templates extracted from justification histories. We conduct experiments on two real-world datasets which show that our model is capable of generating convincing and diverse justifications.",Justifying Recommendations using Distantly-Labeled Reviews and Fine-Grained Aspects,"Several recent works have considered the problem of generating reviews (or 'tips') as a form of explanation as to why a recommendation might match a user's interests. While promising, we demonstrate that existing approaches struggle (in terms of both quality and content) to generate justifications that are relevant to users' decision-making process. We seek to introduce new datasets and methods to address this recommendation justification task. In terms of data, we first propose an 'extractive' approach to identify review segments which justify users' intentions; this approach is then used to distantly label massive review corpora and construct large-scale personalized recommendation justification datasets. In terms of generation, we design two personalized generation models with this data: (1) a reference-based Seq2Seq model with aspect-planning which can generate justifications covering different aspects, and (2) an aspect-conditional masked language model which can generate diverse justifications based on templates extracted from justification histories. We conduct experiments on two real-world datasets which show that our model is capable of generating convincing and diverse justifications.",Justifying Recommendations using Distantly-Labeled Reviews and Fine-Grained Aspects,"Several recent works have considered the problem of generating reviews (or 'tips') as a form of explanation as to why a recommendation might match a user's interests. While promising, we demonstrate that existing approaches struggle (in terms of both quality and content) to generate justifications that are relevant to users' decision-making process. We seek to introduce new datasets and methods to address this recommendation justification task. In terms of data, we first propose an 'extractive' approach to identify review segments which justify users' intentions; this approach is then used to distantly label massive review corpora and construct large-scale personalized recommendation justification datasets. 
In terms of generation, we design two personalized generation models with this data: (1) a reference-based Seq2Seq model with aspect-planning which can generate justifications covering different aspects, and (2) an aspect-conditional masked language model which can generate diverse justifications based on templates extracted from justification histories. We conduct experiments on two real-world datasets which show that our model is capable of generating convincing and diverse justifications.",Acknowledgements. This work is partly supported by NSF #1750063. We thank all the reviewers for their constructive suggestions.,"Justifying Recommendations using Distantly-Labeled Reviews and Fine-Grained Aspects. Several recent works have considered the problem of generating reviews (or 'tips') as a form of explanation as to why a recommendation might match a user's interests. While promising, we demonstrate that existing approaches struggle (in terms of both quality and content) to generate justifications that are relevant to users' decision-making process. We seek to introduce new datasets and methods to address this recommendation justification task. In terms of data, we first propose an 'extractive' approach to identify review segments which justify users' intentions; this approach is then used to distantly label massive review corpora and construct large-scale personalized recommendation justification datasets. In terms of generation, we design two personalized generation models with this data: (1) a reference-based Seq2Seq model with aspect-planning which can generate justifications covering different aspects, and (2) an aspect-conditional masked language model which can generate diverse justifications based on templates extracted from justification histories. We conduct experiments on two real-world datasets which show that our model is capable of generating convincing and diverse justifications.",2019
chen-etal-2021-retrack,https://aclanthology.org/2021.acl-demo.39,0,,,,,,,"ReTraCk: A Flexible and Efficient Framework for Knowledge Base Question Answering. We present Retriever-Transducer-Checker (ReTraCk), a neural semantic parsing framework for large-scale knowledge base question answering (KBQA). ReTraCk is designed as a modular framework to maintain high flexibility. It includes a retriever to retrieve relevant KB items efficiently, a transducer to generate logical form with syntax correctness guarantees and a checker to improve the transduction procedure. ReTraCk is ranked first in overall performance on the GrailQA leaderboard and obtains highly competitive performance on the typical WebQuestionsSP benchmark. Our system can interact with users in a timely manner, demonstrating the efficiency of the proposed framework. * The first three authors contributed equally. This work was conducted during Shuang and Qian's internship at Microsoft Research Asia.",{R}e{T}ra{C}k: A Flexible and Efficient Framework for Knowledge Base Question Answering,"We present Retriever-Transducer-Checker (ReTraCk), a neural semantic parsing framework for large-scale knowledge base question answering (KBQA). ReTraCk is designed as a modular framework to maintain high flexibility. It includes a retriever to retrieve relevant KB items efficiently, a transducer to generate logical form with syntax correctness guarantees and a checker to improve the transduction procedure. ReTraCk is ranked first in overall performance on the GrailQA leaderboard and obtains highly competitive performance on the typical WebQuestionsSP benchmark. Our system can interact with users in a timely manner, demonstrating the efficiency of the proposed framework. * The first three authors contributed equally. This work was conducted during Shuang and Qian's internship at Microsoft Research Asia.",ReTraCk: A Flexible and Efficient Framework for Knowledge Base Question Answering,"We present Retriever-Transducer-Checker (ReTraCk), a neural semantic parsing framework for large-scale knowledge base question answering (KBQA). ReTraCk is designed as a modular framework to maintain high flexibility. It includes a retriever to retrieve relevant KB items efficiently, a transducer to generate logical form with syntax correctness guarantees and a checker to improve the transduction procedure. ReTraCk is ranked first in overall performance on the GrailQA leaderboard and obtains highly competitive performance on the typical WebQuestionsSP benchmark. Our system can interact with users in a timely manner, demonstrating the efficiency of the proposed framework. * The first three authors contributed equally. This work was conducted during Shuang and Qian's internship at Microsoft Research Asia.","We would like to thank Audrey Lin and Börje F. Karlsson for their constructive comments and useful suggestions, and all the anonymous reviewers for their helpful feedback. We also thank Yu Gu for evaluating our submissions on the test set of the GrailQA benchmark and sharing preprocessed data on GrailQA and WebQuestionsSP.","ReTraCk: A Flexible and Efficient Framework for Knowledge Base Question Answering. We present Retriever-Transducer-Checker (ReTraCk), a neural semantic parsing framework for large-scale knowledge base question answering (KBQA). ReTraCk is designed as a modular framework to maintain high flexibility. 
It includes a retriever to retrieve relevant KB items efficiently, a transducer to generate logical form with syntax correctness guarantees and a checker to improve the transduction procedure. ReTraCk is ranked first in overall performance on the GrailQA leaderboard and obtains highly competitive performance on the typical WebQuestionsSP benchmark. Our system can interact with users in a timely manner, demonstrating the efficiency of the proposed framework. * The first three authors contributed equally. This work was conducted during Shuang and Qian's internship at Microsoft Research Asia.",2021
thieberger-2007-language,https://aclanthology.org/U07-1002,1,,,,social_equality,,,"Does Language Technology Offer Anything to Small Languages?. The effort currently going into recording the smaller and perhaps more endangered languages of the world may result in computationally tractable documents in those languages, but to date there has not been a tradition of corpus creation for these languages. In this talk I will outline the language situation of Australia's neighbouring region and discuss methods currently used in language documentation, observing that it is quite difficult to get linguists to create reusable records of the languages they record, let alone expecting them to create marked-up corpora. I will highlight the importance of creating shared infrastructure to support our work, including the development of Pacific and Regional Archive for Digital Sources in Endangered Cultures (PARADISEC), a facility for curation of linguistic data.",Does Language Technology Offer Anything to Small Languages?,"The effort currently going into recording the smaller and perhaps more endangered languages of the world may result in computationally tractable documents in those languages, but to date there has not been a tradition of corpus creation for these languages. In this talk I will outline the language situation of Australia's neighbouring region and discuss methods currently used in language documentation, observing that it is quite difficult to get linguists to create reusable records of the languages they record, let alone expecting them to create marked-up corpora. I will highlight the importance of creating shared infrastructure to support our work, including the development of Pacific and Regional Archive for Digital Sources in Endangered Cultures (PARADISEC), a facility for curation of linguistic data.",Does Language Technology Offer Anything to Small Languages?,"The effort currently going into recording the smaller and perhaps more endangered languages of the world may result in computationally tractable documents in those languages, but to date there has not been a tradition of corpus creation for these languages. In this talk I will outline the language situation of Australia's neighbouring region and discuss methods currently used in language documentation, observing that it is quite difficult to get linguists to create reusable records of the languages they record, let alone expecting them to create marked-up corpora. I will highlight the importance of creating shared infrastructure to support our work, including the development of Pacific and Regional Archive for Digital Sources in Endangered Cultures (PARADISEC), a facility for curation of linguistic data.",,"Does Language Technology Offer Anything to Small Languages?. The effort currently going into recording the smaller and perhaps more endangered languages of the world may result in computationally tractable documents in those languages, but to date there has not been a tradition of corpus creation for these languages. In this talk I will outline the language situation of Australia's neighbouring region and discuss methods currently used in language documentation, observing that it is quite difficult to get linguists to create reusable records of the languages they record, let alone expecting them to create marked-up corpora. 
I will highlight the importance of creating shared infrastructure to support our work, including the development of Pacific and Regional Archive for Digital Sources in Endangered Cultures (PARADISEC), a facility for curation of linguistic data.",2007
kishimoto-etal-2014-post,https://aclanthology.org/2014.amta-wptp.15,0,,,,,,,"Post-editing user interface using visualization of a sentence structure. Translation has become increasingly important by virtue of globalization. To reduce the cost of translation, it is necessary to use machine translation and further to take advantage of post-editing based on the result of a machine translation for accurate information dissemination. Such post-editing (e.g., PET [Aziz et al., 2012] ) can be used practically for translation between European languages, which has a high performance in statistical machine translation. However, due to the low accuracy of machine translation between languages with different word order, such as Japanese-English and Japanese-Chinese, post-editing has not been used actively.",Post-editing user interface using visualization of a sentence structure,"Translation has become increasingly important by virtue of globalization. To reduce the cost of translation, it is necessary to use machine translation and further to take advantage of post-editing based on the result of a machine translation for accurate information dissemination. Such post-editing (e.g., PET [Aziz et al., 2012] ) can be used practically for translation between European languages, which has a high performance in statistical machine translation. However, due to the low accuracy of machine translation between languages with different word order, such as Japanese-English and Japanese-Chinese, post-editing has not been used actively.",Post-editing user interface using visualization of a sentence structure,"Translation has become increasingly important by virtue of globalization. To reduce the cost of translation, it is necessary to use machine translation and further to take advantage of post-editing based on the result of a machine translation for accurate information dissemination. Such post-editing (e.g., PET [Aziz et al., 2012] ) can be used practically for translation between European languages, which has a high performance in statistical machine translation. However, due to the low accuracy of machine translation between languages with different word order, such as Japanese-English and Japanese-Chinese, post-editing has not been used actively.",,"Post-editing user interface using visualization of a sentence structure. Translation has become increasingly important by virtue of globalization. To reduce the cost of translation, it is necessary to use machine translation and further to take advantage of post-editing based on the result of a machine translation for accurate information dissemination. Such post-editing (e.g., PET [Aziz et al., 2012] ) can be used practically for translation between European languages, which has a high performance in statistical machine translation. However, due to the low accuracy of machine translation between languages with different word order, such as Japanese-English and Japanese-Chinese, post-editing has not been used actively.",2014
el-haj-etal-2018-profiling,https://aclanthology.org/L18-1726,1,,,,health,,,"Profiling Medical Journal Articles Using a Gene Ontology Semantic Tagger. In many areas of academic publishing, there is an explosion of literature, and subdivision of fields into subfields, leading to stove-piping where sub-communities of expertise become disconnected from each other. This is especially true in the genetics literature over the last 10 years where researchers are no longer able to maintain knowledge of previously related areas. This paper extends several approaches based on natural language processing and corpus linguistics which allow us to examine corpora derived from bodies of genetics literature and will help to make comparisons and improve retrieval methods using domain knowledge via an existing gene ontology. We derived two open access medical journal corpora from PubMed related to psychiatric genetics and immune disorder genetics. We created a novel Gene Ontology Semantic Tagger (GOST) and lexicon to annotate the corpora and are then able to compare subsets of literature to understand the relative distributions of genetic terminology, thereby enabling researchers to make improved connections between them.",Profiling Medical Journal Articles Using a Gene Ontology Semantic Tagger,"In many areas of academic publishing, there is an explosion of literature, and subdivision of fields into subfields, leading to stove-piping where sub-communities of expertise become disconnected from each other. This is especially true in the genetics literature over the last 10 years where researchers are no longer able to maintain knowledge of previously related areas. This paper extends several approaches based on natural language processing and corpus linguistics which allow us to examine corpora derived from bodies of genetics literature and will help to make comparisons and improve retrieval methods using domain knowledge via an existing gene ontology. We derived two open access medical journal corpora from PubMed related to psychiatric genetics and immune disorder genetics. We created a novel Gene Ontology Semantic Tagger (GOST) and lexicon to annotate the corpora and are then able to compare subsets of literature to understand the relative distributions of genetic terminology, thereby enabling researchers to make improved connections between them.",Profiling Medical Journal Articles Using a Gene Ontology Semantic Tagger,"In many areas of academic publishing, there is an explosion of literature, and subdivision of fields into subfields, leading to stove-piping where sub-communities of expertise become disconnected from each other. This is especially true in the genetics literature over the last 10 years where researchers are no longer able to maintain knowledge of previously related areas. This paper extends several approaches based on natural language processing and corpus linguistics which allow us to examine corpora derived from bodies of genetics literature and will help to make comparisons and improve retrieval methods using domain knowledge via an existing gene ontology. We derived two open access medical journal corpora from PubMed related to psychiatric genetics and immune disorder genetics. 
We created a novel Gene Ontology Semantic Tagger (GOST) and lexicon to annotate the corpora and are then able to compare subsets of literature to understand the relative distributions of genetic terminology, thereby enabling researchers to make improved connections between them.",,"Profiling Medical Journal Articles Using a Gene Ontology Semantic Tagger. In many areas of academic publishing, there is an explosion of literature, and subdivision of fields into subfields, leading to stove-piping where sub-communities of expertise become disconnected from each other. This is especially true in the genetics literature over the last 10 years where researchers are no longer able to maintain knowledge of previously related areas. This paper extends several approaches based on natural language processing and corpus linguistics which allow us to examine corpora derived from bodies of genetics literature and will help to make comparisons and improve retrieval methods using domain knowledge via an existing gene ontology. We derived two open access medical journal corpora from PubMed related to psychiatric genetics and immune disorder genetics. We created a novel Gene Ontology Semantic Tagger (GOST) and lexicon to annotate the corpora and are then able to compare subsets of literature to understand the relative distributions of genetic terminology, thereby enabling researchers to make improved connections between them.",2018
kolak-etal-2003-generative,https://aclanthology.org/N03-1018,0,,,,,,,"A Generative Probabilistic OCR Model for NLP Applications. In this paper, we introduce a generative probabilistic optical character recognition (OCR) model that describes an end-to-end process in the noisy channel framework, progressing from generation of true text through its transformation into the noisy output of an OCR system. The model is designed for use in error correction, with a focus on post-processing the output of black-box OCR systems in order to make it more useful for NLP tasks. We present an implementation of the model based on finite-state models, demonstrate the model's ability to significantly reduce character and word error rate, and provide evaluation results involving automatic extraction of translation lexicons from printed text.",A Generative Probabilistic {OCR} Model for {NLP} Applications,"In this paper, we introduce a generative probabilistic optical character recognition (OCR) model that describes an end-to-end process in the noisy channel framework, progressing from generation of true text through its transformation into the noisy output of an OCR system. The model is designed for use in error correction, with a focus on post-processing the output of black-box OCR systems in order to make it more useful for NLP tasks. We present an implementation of the model based on finite-state models, demonstrate the model's ability to significantly reduce character and word error rate, and provide evaluation results involving automatic extraction of translation lexicons from printed text.",A Generative Probabilistic OCR Model for NLP Applications,"In this paper, we introduce a generative probabilistic optical character recognition (OCR) model that describes an end-to-end process in the noisy channel framework, progressing from generation of true text through its transformation into the noisy output of an OCR system. The model is designed for use in error correction, with a focus on post-processing the output of black-box OCR systems in order to make it more useful for NLP tasks. We present an implementation of the model based on finite-state models, demonstrate the model's ability to significantly reduce character and word error rate, and provide evaluation results involving automatic extraction of translation lexicons from printed text.","This research was supported in part by National Science Foundation grant EIA0130422, Department of Defense contract RD-02-5700, DARPA/ITO Cooperative Agreement N660010028910, and Mitre agreement 010418-7712. We are grateful to Mohri et al. for the AT&T FSM Toolkit, Clarkson and Rosenfeld for CMU-Cambridge Toolkit, and David Doermann for providing the OCR output and useful discussion.","A Generative Probabilistic OCR Model for NLP Applications. In this paper, we introduce a generative probabilistic optical character recognition (OCR) model that describes an end-to-end process in the noisy channel framework, progressing from generation of true text through its transformation into the noisy output of an OCR system. The model is designed for use in error correction, with a focus on post-processing the output of black-box OCR systems in order to make it more useful for NLP tasks. We present an implementation of the model based on finite-state models, demonstrate the model's ability to significantly reduce character and word error rate, and provide evaluation results involving automatic extraction of translation lexicons from printed text.",2003
lohk-etal-2016-experiences,https://aclanthology.org/2016.gwc-1.28,0,,,,,,,"Experiences of Lexicographers and Computer Scientists in Validating Estonian Wordnet with Test Patterns. New concepts and semantic relations are constantly added to Estonian Wordnet (EstWN) to increase its size. In addition to this, with the use of test patterns, the validation of EstWN hierarchies is also performed. This parallel work was carried out over the past four years (2011-2014) with 10 different EstWN versions (60-70). This has been a collaboration between the creators of test patterns and the lexicographers currently working on EstWN. This paper describes the usage of test patterns from the points of views of information scientists (the creators of test patterns) as well as the users (lexicographers). Using EstWN as an example, we illustrate how the continuous use of test patterns has led to significant improvement of the semantic hierarchies in EstWN.",Experiences of Lexicographers and Computer Scientists in Validating {E}stonian {W}ordnet with Test Patterns,"New concepts and semantic relations are constantly added to Estonian Wordnet (EstWN) to increase its size. In addition to this, with the use of test patterns, the validation of EstWN hierarchies is also performed. This parallel work was carried out over the past four years (2011-2014) with 10 different EstWN versions (60-70). This has been a collaboration between the creators of test patterns and the lexicographers currently working on EstWN. This paper describes the usage of test patterns from the points of views of information scientists (the creators of test patterns) as well as the users (lexicographers). Using EstWN as an example, we illustrate how the continuous use of test patterns has led to significant improvement of the semantic hierarchies in EstWN.",Experiences of Lexicographers and Computer Scientists in Validating Estonian Wordnet with Test Patterns,"New concepts and semantic relations are constantly added to Estonian Wordnet (EstWN) to increase its size. In addition to this, with the use of test patterns, the validation of EstWN hierarchies is also performed. This parallel work was carried out over the past four years (2011-2014) with 10 different EstWN versions (60-70). This has been a collaboration between the creators of test patterns and the lexicographers currently working on EstWN. This paper describes the usage of test patterns from the points of views of information scientists (the creators of test patterns) as well as the users (lexicographers). Using EstWN as an example, we illustrate how the continuous use of test patterns has led to significant improvement of the semantic hierarchies in EstWN.",,"Experiences of Lexicographers and Computer Scientists in Validating Estonian Wordnet with Test Patterns. New concepts and semantic relations are constantly added to Estonian Wordnet (EstWN) to increase its size. In addition to this, with the use of test patterns, the validation of EstWN hierarchies is also performed. This parallel work was carried out over the past four years (2011-2014) with 10 different EstWN versions (60-70). This has been a collaboration between the creators of test patterns and the lexicographers currently working on EstWN. This paper describes the usage of test patterns from the points of views of information scientists (the creators of test patterns) as well as the users (lexicographers). 
Using EstWN as an example, we illustrate how the continuous use of test patterns has led to significant improvement of the semantic hierarchies in EstWN.",2016
gao-etal-2021-ream,https://aclanthology.org/2021.findings-acl.220,0,,,,,,,"REAM$\sharp$: An Enhancement Approach to Reference-based Evaluation Metrics for Open-domain Dialog Generation. The lack of reliable automatic evaluation metrics is a major impediment to the development of open-domain dialogue systems. Various reference-based metrics have been proposed to calculate a score between a predicted response and a small set of references. However, these metrics show unsatisfactory correlations with human judgments. For a referencebased metric, its reliability mainly depends on two factors: its ability to measure the similarity between the predicted response and the reference response, as well as the reliability of the given reference set. Yet, there are few discussions on the latter. Our work attempts to fill this vacancy. We first clarify an assumption on reference-based metrics that, if more high-quality references are added into the reference set, the reliability of the metric will increase. Next, we present REAM : an enhancement approach to Reference-based EvAluation Metrics 1 for open-domain dialogue systems. A prediction model is designed to estimate the reliability of the given reference set. We show how its predicted results can be helpful to augment the reference set, and thus improve the reliability of the metric. Experiments validate both the effectiveness of our prediction model and that the reliability of reference-based metrics improves with the augmented reference sets.",{REAM}$\sharp$: An Enhancement Approach to Reference-based Evaluation Metrics for Open-domain Dialog Generation,"The lack of reliable automatic evaluation metrics is a major impediment to the development of open-domain dialogue systems. Various reference-based metrics have been proposed to calculate a score between a predicted response and a small set of references. However, these metrics show unsatisfactory correlations with human judgments. For a referencebased metric, its reliability mainly depends on two factors: its ability to measure the similarity between the predicted response and the reference response, as well as the reliability of the given reference set. Yet, there are few discussions on the latter. Our work attempts to fill this vacancy. We first clarify an assumption on reference-based metrics that, if more high-quality references are added into the reference set, the reliability of the metric will increase. Next, we present REAM : an enhancement approach to Reference-based EvAluation Metrics 1 for open-domain dialogue systems. A prediction model is designed to estimate the reliability of the given reference set. We show how its predicted results can be helpful to augment the reference set, and thus improve the reliability of the metric. Experiments validate both the effectiveness of our prediction model and that the reliability of reference-based metrics improves with the augmented reference sets.",REAM$\sharp$: An Enhancement Approach to Reference-based Evaluation Metrics for Open-domain Dialog Generation,"The lack of reliable automatic evaluation metrics is a major impediment to the development of open-domain dialogue systems. Various reference-based metrics have been proposed to calculate a score between a predicted response and a small set of references. However, these metrics show unsatisfactory correlations with human judgments. 
For a reference-based metric, its reliability mainly depends on two factors: its ability to measure the similarity between the predicted response and the reference response, as well as the reliability of the given reference set. Yet, there are few discussions on the latter. Our work attempts to fill this vacancy. We first clarify an assumption on reference-based metrics that, if more high-quality references are added into the reference set, the reliability of the metric will increase. Next, we present REAM: an enhancement approach to Reference-based EvAluation Metrics for open-domain dialogue systems. A prediction model is designed to estimate the reliability of the given reference set. We show how its predicted results can be helpful to augment the reference set, and thus improve the reliability of the metric. Experiments validate both the effectiveness of our prediction model and that the reliability of reference-based metrics improves with the augmented reference sets.",REAM$\sharp$: An Enhancement Approach to Reference-based Evaluation Metrics for Open-domain Dialog Generation,"The lack of reliable automatic evaluation metrics is a major impediment to the development of open-domain dialogue systems. Various reference-based metrics have been proposed to calculate a score between a predicted response and a small set of references. However, these metrics show unsatisfactory correlations with human judgments. For a reference-based metric, its reliability mainly depends on two factors: its ability to measure the similarity between the predicted response and the reference response, as well as the reliability of the given reference set. Yet, there are few discussions on the latter. Our work attempts to fill this vacancy. We first clarify an assumption on reference-based metrics that, if more high-quality references are added into the reference set, the reliability of the metric will increase. Next, we present REAM: an enhancement approach to Reference-based EvAluation Metrics for open-domain dialogue systems. A prediction model is designed to estimate the reliability of the given reference set. We show how its predicted results can be helpful to augment the reference set, and thus improve the reliability of the metric. Experiments validate both the effectiveness of our prediction model and that the reliability of reference-based metrics improves with the augmented reference sets.",,"REAM$\sharp$: An Enhancement Approach to Reference-based Evaluation Metrics for Open-domain Dialog Generation. The lack of reliable automatic evaluation metrics is a major impediment to the development of open-domain dialogue systems. Various reference-based metrics have been proposed to calculate a score between a predicted response and a small set of references. However, these metrics show unsatisfactory correlations with human judgments. For a reference-based metric, its reliability mainly depends on two factors: its ability to measure the similarity between the predicted response and the reference response, as well as the reliability of the given reference set. Yet, there are few discussions on the latter. Our work attempts to fill this vacancy. We first clarify an assumption on reference-based metrics that, if more high-quality references are added into the reference set, the reliability of the metric will increase. Next, we present REAM: an enhancement approach to Reference-based EvAluation Metrics for open-domain dialogue systems. A prediction model is designed to estimate the reliability of the given reference set. We show how its predicted results can be helpful to augment the reference set, and thus improve the reliability of the metric. Experiments validate both the effectiveness of our prediction model and that the reliability of reference-based metrics improves with the augmented reference sets.",2021
mari-2002-specification,https://aclanthology.org/W02-0803,0,,,,,,,"Under-specification and contextual variability of abstract. In this paper we discuss some philosophical questions related to the treatment of abstract and underspecified prepositions. We consider three issues in particular: (i) the relation between sense and meanings, (ii) the privileged status of abstract meanings in the spectrum of contextual instantiations of basic senses, and finally (iii) the difference between prediction and inference. The discussion will be based on the study of avec (with) and the analysis of its abstract meaning of comitativity in particular. A model for avec semantic variability will also be suggested.",Under-specification and contextual variability of abstract,"In this paper we discuss some philosophical questions related to the treatment of abstract and underspecified prepositions. We consider three issues in particular: (i) the relation between sense and meanings, (ii) the privileged status of abstract meanings in the spectrum of contextual instantiations of basic senses, and finally (iii) the difference between prediction and inference. The discussion will be based on the study of avec (with) and the analysis of its abstract meaning of comitativity in particular. A model for avec semantic variability will also be suggested.",Under-specification and contextual variability of abstract,"In this paper we discuss some philosophical questions related to the treatment of abstract and underspecified prepositions. We consider three issues in particular: (i) the relation between sense and meanings, (ii) the privileged status of abstract meanings in the spectrum of contextual instantiations of basic senses, and finally (iii) the difference between prediction and inference. The discussion will be based on the study of avec (with) and the analysis of its abstract meaning of comitativity in particular. A model for avec semantic variability will also be suggested.",Acknowledgments Many thanks to Patrick Saint-Dizier and Jacques Jayez for their careful readings and useful suggestions.,"Under-specification and contextual variability of abstract. In this paper we discuss some philosophical questions related to the treatment of abstract and underspecified prepositions. We consider three issues in particular: (i) the relation between sense and meanings, (ii) the privileged status of abstract meanings in the spectrum of contextual instantiations of basic senses, and finally (iii) the difference between prediction and inference. The discussion will be based on the study of avec (with) and the analysis of its abstract meaning of comitativity in particular. A model for avec semantic variability will also be suggested.",2002
kannan-santhi-ponnusamy-2020-tukapo,https://aclanthology.org/2020.semeval-1.95,0,,,,,,,"TüKaPo at SemEval-2020 Task 6: Def(n)tly Not BERT: Definition Extraction Using pre-BERT Methods in a post-BERT World. We describe our system (TüKaPo) submitted for Task 6: DeftEval, at SemEval 2020. We developed a hybrid approach that combined existing CNN and RNN methods and investigated the impact of purely-syntactic and semantic features on the task of definition extraction, i.e, sentence classification. Our final model achieved a F1-score of 0.6851 in the first subtask.","{T}{\""u}{K}a{P}o at {S}em{E}val-2020 Task 6: Def(n)tly Not {BERT}: Definition Extraction Using pre-{BERT} Methods in a post-{BERT} World","We describe our system (TüKaPo) submitted for Task 6: DeftEval, at SemEval 2020. We developed a hybrid approach that combined existing CNN and RNN methods and investigated the impact of purely-syntactic and semantic features on the task of definition extraction, i.e, sentence classification. Our final model achieved a F1-score of 0.6851 in the first subtask.","TüKaPo at SemEval-2020 Task 6: Def(n)tly Not BERT: Definition Extraction Using pre-BERT Methods in a post-BERT World","We describe our system (TüKaPo) submitted for Task 6: DeftEval, at SemEval 2020. We developed a hybrid approach that combined existing CNN and RNN methods and investigated the impact of purely-syntactic and semantic features on the task of definition extraction, i.e, sentence classification. Our final model achieved a F1-score of 0.6851 in the first subtask.",,"TüKaPo at SemEval-2020 Task 6: Def(n)tly Not BERT: Definition Extraction Using pre-BERT Methods in a post-BERT World. We describe our system (TüKaPo) submitted for Task 6: DeftEval, at SemEval 2020. We developed a hybrid approach that combined existing CNN and RNN methods and investigated the impact of purely-syntactic and semantic features on the task of definition extraction, i.e, sentence classification. Our final model achieved a F1-score of 0.6851 in the first subtask.",2020
lindahl-2020-annotating,https://aclanthology.org/2020.argmining-1.11,0,,,,,,,"Annotating argumentation in Swedish social media. This paper presents a small study of annotating argumentation in Swedish social media. Annotators were asked to annotate spans of argumentation in 9 threads from two discussion forums. At the post level, Cohen's κ and Krippendorff's α 0.48 was achieved. When manually inspecting the annotations the annotators seemed to agree when conditions in the guidelines were explicitly met, but implicit argumentation and opinions, resulting in annotators having to interpret what's missing in the text, caused disagreements.",Annotating argumentation in {S}wedish social media,"This paper presents a small study of annotating argumentation in Swedish social media. Annotators were asked to annotate spans of argumentation in 9 threads from two discussion forums. At the post level, Cohen's κ and Krippendorff's α 0.48 was achieved. When manually inspecting the annotations the annotators seemed to agree when conditions in the guidelines were explicitly met, but implicit argumentation and opinions, resulting in annotators having to interpret what's missing in the text, caused disagreements.",Annotating argumentation in Swedish social media,"This paper presents a small study of annotating argumentation in Swedish social media. Annotators were asked to annotate spans of argumentation in 9 threads from two discussion forums. At the post level, Cohen's κ and Krippendorff's α 0.48 was achieved. When manually inspecting the annotations the annotators seemed to agree when conditions in the guidelines were explicitly met, but implicit argumentation and opinions, resulting in annotators having to interpret what's missing in the text, caused disagreements.","The work presented here has been partly supported by an infrastructure grant to Språkbanken Text, University of Gothenburg, for contributing to building and operating a national e-infrastructure funded jointly by the participating institutions and the Swedish Research Council (under contract no. 2017-00626). We would also like to thank the anonymous reviewers for their constructive comments and feedback.","Annotating argumentation in Swedish social media. This paper presents a small study of annotating argumentation in Swedish social media. Annotators were asked to annotate spans of argumentation in 9 threads from two discussion forums. At the post level, Cohen's κ and Krippendorff's α 0.48 was achieved. When manually inspecting the annotations the annotators seemed to agree when conditions in the guidelines were explicitly met, but implicit argumentation and opinions, resulting in annotators having to interpret what's missing in the text, caused disagreements.",2020
yan-pedersen-2017-duluth,https://aclanthology.org/S17-2064,0,,,,,,,"Duluth at SemEval-2017 Task 6: Language Models in Humor Detection. This paper describes the Duluth system that participated in SemEval-2017 Task 6 #HashtagWars: Learning a Sense of Humor. The system participated in Subtasks A and B using N-gram language models, ranking highly in the task evaluation. This paper discusses the results of our system in the development and evaluation stages and from two post-evaluation runs.",{D}uluth at {S}em{E}val-2017 Task 6: Language Models in Humor Detection,"This paper describes the Duluth system that participated in SemEval-2017 Task 6 #HashtagWars: Learning a Sense of Humor. The system participated in Subtasks A and B using N-gram language models, ranking highly in the task evaluation. This paper discusses the results of our system in the development and evaluation stages and from two post-evaluation runs.",Duluth at SemEval-2017 Task 6: Language Models in Humor Detection,"This paper describes the Duluth system that participated in SemEval-2017 Task 6 #HashtagWars: Learning a Sense of Humor. The system participated in Subtasks A and B using N-gram language models, ranking highly in the task evaluation. This paper discusses the results of our system in the development and evaluation stages and from two post-evaluation runs.",,"Duluth at SemEval-2017 Task 6: Language Models in Humor Detection. This paper describes the Duluth system that participated in SemEval-2017 Task 6 #HashtagWars: Learning a Sense of Humor. The system participated in Subtasks A and B using N-gram language models, ranking highly in the task evaluation. This paper discusses the results of our system in the development and evaluation stages and from two post-evaluation runs.",2017
li-2021-codewithzichao,https://aclanthology.org/2021.dravidianlangtech-1.21,1,,,,hate_speech,,,"Codewithzichao@DravidianLangTech-EACL2021: Exploring Multilingual Transformers for Offensive Language Identification on Code Mixing Text. This paper describes our solution submitted to shared task on Offensive Language Identification in Dravidian Languages. We participated in all three of offensive language identification. In order to address the task, we explored multilingual models based on XLM-RoBERTa and multilingual BERT trained on mixed data of three code-mixed languages. Besides, we solved the class-imbalance problem existed in training data by class combination, class weights and focal loss. Our model achieved weighted average F1 scores of 0.75 (ranked 4th), 0.94 (ranked 4th) and 0.72 (ranked 3rd) in Tamil-English task, Malayalam-English task and Kannada-English task, respectively.",Codewithzichao@{D}ravidian{L}ang{T}ech-{EACL}2021: Exploring Multilingual Transformers for Offensive Language Identification on Code Mixing Text,"This paper describes our solution submitted to shared task on Offensive Language Identification in Dravidian Languages. We participated in all three of offensive language identification. In order to address the task, we explored multilingual models based on XLM-RoBERTa and multilingual BERT trained on mixed data of three code-mixed languages. Besides, we solved the class-imbalance problem existed in training data by class combination, class weights and focal loss. Our model achieved weighted average F1 scores of 0.75 (ranked 4th), 0.94 (ranked 4th) and 0.72 (ranked 3rd) in Tamil-English task, Malayalam-English task and Kannada-English task, respectively.",Codewithzichao@DravidianLangTech-EACL2021: Exploring Multilingual Transformers for Offensive Language Identification on Code Mixing Text,"This paper describes our solution submitted to shared task on Offensive Language Identification in Dravidian Languages. We participated in all three of offensive language identification. In order to address the task, we explored multilingual models based on XLM-RoBERTa and multilingual BERT trained on mixed data of three code-mixed languages. Besides, we solved the class-imbalance problem existed in training data by class combination, class weights and focal loss. Our model achieved weighted average F1 scores of 0.75 (ranked 4th), 0.94 (ranked 4th) and 0.72 (ranked 3rd) in Tamil-English task, Malayalam-English task and Kannada-English task, respectively.",,"Codewithzichao@DravidianLangTech-EACL2021: Exploring Multilingual Transformers for Offensive Language Identification on Code Mixing Text. This paper describes our solution submitted to shared task on Offensive Language Identification in Dravidian Languages. We participated in all three of offensive language identification. In order to address the task, we explored multilingual models based on XLM-RoBERTa and multilingual BERT trained on mixed data of three code-mixed languages. Besides, we solved the class-imbalance problem existed in training data by class combination, class weights and focal loss. Our model achieved weighted average F1 scores of 0.75 (ranked 4th), 0.94 (ranked 4th) and 0.72 (ranked 3rd) in Tamil-English task, Malayalam-English task and Kannada-English task, respectively.",2021
shinnou-1998-revision,https://link.springer.com/chapter/10.1007/3-540-49478-2_36,0,,,,,,,Revision of morphological analysis errors through the person name construction model. ,Revision of morphological analysis errors through the person name construction model,,Revision of morphological analysis errors through the person name construction model,,,Revision of morphological analysis errors through the person name construction model. ,1998
finegan-dollak-verma-2020-layout,https://aclanthology.org/2020.insights-1.9,0,,,,,,,"Layout-Aware Text Representations Harm Clustering Documents by Type. Clustering documents by type-grouping invoices with invoices and articles with articles-is a desirable first step for organizing large collections of document scans. Humans approaching this task use both the semantics of the text and the document layout to assist in grouping like documents. LayoutLM (Xu et al., 2019), a layout-aware transformer built on top of BERT with state-of-the-art performance on document-type classification, could reasonably be expected to outperform regular BERT (Devlin et al., 2018) for document-type clustering. However, we find experimentally that BERT significantly outperforms LayoutLM on this task (p < 0.001). We analyze clusters to show where layout awareness is an asset and where it is a liability.",Layout-Aware Text Representations Harm Clustering Documents by Type,"Clustering documents by type-grouping invoices with invoices and articles with articles-is a desirable first step for organizing large collections of document scans. Humans approaching this task use both the semantics of the text and the document layout to assist in grouping like documents. LayoutLM (Xu et al., 2019), a layout-aware transformer built on top of BERT with state-of-the-art performance on document-type classification, could reasonably be expected to outperform regular BERT (Devlin et al., 2018) for document-type clustering. However, we find experimentally that BERT significantly outperforms LayoutLM on this task (p < 0.001). We analyze clusters to show where layout awareness is an asset and where it is a liability.",Layout-Aware Text Representations Harm Clustering Documents by Type,"Clustering documents by type-grouping invoices with invoices and articles with articles-is a desirable first step for organizing large collections of document scans. Humans approaching this task use both the semantics of the text and the document layout to assist in grouping like documents. LayoutLM (Xu et al., 2019), a layout-aware transformer built on top of BERT with state-of-the-art performance on document-type classification, could reasonably be expected to outperform regular BERT (Devlin et al., 2018) for document-type clustering. However, we find experimentally that BERT significantly outperforms LayoutLM on this task (p < 0.001). We analyze clusters to show where layout awareness is an asset and where it is a liability.","We would like to thank the anonymous reviewers for their helpful comments, as well as Anik Saha for many discussions on LayoutLM's strengths and weaknesses for supervised tasks.","Layout-Aware Text Representations Harm Clustering Documents by Type. Clustering documents by type-grouping invoices with invoices and articles with articles-is a desirable first step for organizing large collections of document scans. Humans approaching this task use both the semantics of the text and the document layout to assist in grouping like documents. LayoutLM (Xu et al., 2019), a layout-aware transformer built on top of BERT with state-of-the-art performance on document-type classification, could reasonably be expected to outperform regular BERT (Devlin et al., 2018) for document-type clustering. However, we find experimentally that BERT significantly outperforms LayoutLM on this task (p < 0.001). We analyze clusters to show where layout awareness is an asset and where it is a liability.",2020
khusainova-etal-2021-hierarchical,https://aclanthology.org/2021.vardial-1.2,0,,,,,,,"Hierarchical Transformer for Multilingual Machine Translation. The choice of parameter sharing strategy in multilingual machine translation models determines how optimally parameter space is used and hence, directly influences ultimate translation quality. Inspired by linguistic trees that show the degree of relatedness between different languages, the new general approach to parameter sharing in multilingual machine translation was suggested recently. The main idea is to use these expert language hierarchies as a basis for multilingual architecture: the closer two languages are, the more parameters they share. In this work, we test this idea using the Transformer architecture and show that despite the success in previous work there are problems inherent to training such hierarchical models. We demonstrate that in case of carefully chosen training strategy the hierarchical architecture can outperform bilingual models and multilingual models with full parameter sharing.",Hierarchical Transformer for Multilingual Machine Translation,"The choice of parameter sharing strategy in multilingual machine translation models determines how optimally parameter space is used and hence, directly influences ultimate translation quality. Inspired by linguistic trees that show the degree of relatedness between different languages, the new general approach to parameter sharing in multilingual machine translation was suggested recently. The main idea is to use these expert language hierarchies as a basis for multilingual architecture: the closer two languages are, the more parameters they share. In this work, we test this idea using the Transformer architecture and show that despite the success in previous work there are problems inherent to training such hierarchical models. We demonstrate that in case of carefully chosen training strategy the hierarchical architecture can outperform bilingual models and multilingual models with full parameter sharing.",Hierarchical Transformer for Multilingual Machine Translation,"The choice of parameter sharing strategy in multilingual machine translation models determines how optimally parameter space is used and hence, directly influences ultimate translation quality. Inspired by linguistic trees that show the degree of relatedness between different languages, the new general approach to parameter sharing in multilingual machine translation was suggested recently. The main idea is to use these expert language hierarchies as a basis for multilingual architecture: the closer two languages are, the more parameters they share. In this work, we test this idea using the Transformer architecture and show that despite the success in previous work there are problems inherent to training such hierarchical models. We demonstrate that in case of carefully chosen training strategy the hierarchical architecture can outperform bilingual models and multilingual models with full parameter sharing.",,"Hierarchical Transformer for Multilingual Machine Translation. The choice of parameter sharing strategy in multilingual machine translation models determines how optimally parameter space is used and hence, directly influences ultimate translation quality. Inspired by linguistic trees that show the degree of relatedness between different languages, the new general approach to parameter sharing in multilingual machine translation was suggested recently. 
The main idea is to use these expert language hierarchies as a basis for multilingual architecture: the closer two languages are, the more parameters they share. In this work, we test this idea using the Transformer architecture and show that despite the success in previous work there are problems inherent to training such hierarchical models. We demonstrate that in case of carefully chosen training strategy the hierarchical architecture can outperform bilingual models and multilingual models with full parameter sharing.",2021
ma-collins-2018-noise,https://aclanthology.org/D18-1405,0,,,,,,,"Noise Contrastive Estimation and Negative Sampling for Conditional Models: Consistency and Statistical Efficiency. Noise Contrastive Estimation (NCE) is a powerful parameter estimation method for log-linear models, which avoids calculation of the partition function or its derivatives at each training step, a computationally demanding step in many cases. It is closely related to negative sampling methods, now widely used in NLP. This paper considers NCE-based estimation of conditional models. Conditional models are frequently encountered in practice; however there has not been a rigorous theoretical analysis of NCE in this setting, and we will argue there are subtle but important questions when generalizing NCE to the conditional case. In particular, we analyze two variants of NCE for conditional models: one based on a classification objective, the other based on a ranking objective. We show that the ranking-based variant of NCE gives consistent parameter estimates under weaker assumptions than the classification-based method; we analyze the statistical efficiency of the ranking-based and classification-based variants of NCE; finally we describe experiments on synthetic data and language modeling showing the effectiveness and trade-offs of both methods.",Noise Contrastive Estimation and Negative Sampling for Conditional Models: Consistency and Statistical Efficiency,"Noise Contrastive Estimation (NCE) is a powerful parameter estimation method for log-linear models, which avoids calculation of the partition function or its derivatives at each training step, a computationally demanding step in many cases. It is closely related to negative sampling methods, now widely used in NLP. This paper considers NCE-based estimation of conditional models. Conditional models are frequently encountered in practice; however there has not been a rigorous theoretical analysis of NCE in this setting, and we will argue there are subtle but important questions when generalizing NCE to the conditional case. In particular, we analyze two variants of NCE for conditional models: one based on a classification objective, the other based on a ranking objective. We show that the ranking-based variant of NCE gives consistent parameter estimates under weaker assumptions than the classification-based method; we analyze the statistical efficiency of the ranking-based and classification-based variants of NCE; finally we describe experiments on synthetic data and language modeling showing the effectiveness and trade-offs of both methods.",Noise Contrastive Estimation and Negative Sampling for Conditional Models: Consistency and Statistical Efficiency,"Noise Contrastive Estimation (NCE) is a powerful parameter estimation method for log-linear models, which avoids calculation of the partition function or its derivatives at each training step, a computationally demanding step in many cases. It is closely related to negative sampling methods, now widely used in NLP. This paper considers NCE-based estimation of conditional models. Conditional models are frequently encountered in practice; however there has not been a rigorous theoretical analysis of NCE in this setting, and we will argue there are subtle but important questions when generalizing NCE to the conditional case. In particular, we analyze two variants of NCE for conditional models: one based on a classification objective, the other based on a ranking objective. 
We show that the ranking-based variant of NCE gives consistent parameter estimates under weaker assumptions than the classification-based method; we analyze the statistical efficiency of the ranking-based and classification-based variants of NCE; finally we describe experiments on synthetic data and language modeling showing the effectiveness and trade-offs of both methods.","The authors thank Emily Pitler and Ali Elkahky for many useful conversations about the work, and David Weiss for comments on an earlier draft of the paper.","Noise Contrastive Estimation and Negative Sampling for Conditional Models: Consistency and Statistical Efficiency. Noise Contrastive Estimation (NCE) is a powerful parameter estimation method for log-linear models, which avoids calculation of the partition function or its derivatives at each training step, a computationally demanding step in many cases. It is closely related to negative sampling methods, now widely used in NLP. This paper considers NCE-based estimation of conditional models. Conditional models are frequently encountered in practice; however there has not been a rigorous theoretical analysis of NCE in this setting, and we will argue there are subtle but important questions when generalizing NCE to the conditional case. In particular, we analyze two variants of NCE for conditional models: one based on a classification objective, the other based on a ranking objective. We show that the ranking-based variant of NCE gives consistent parameter estimates under weaker assumptions than the classification-based method; we analyze the statistical efficiency of the ranking-based and classification-based variants of NCE; finally we describe experiments on synthetic data and language modeling showing the effectiveness and trade-offs of both methods.",2018
king-1980-human,https://aclanthology.org/C80-1042,0,,,,,,,"Human Factors and Linguistic Considerations: Keys to High-Speed Chinese Character Input. With a keyboard and supporting system developed at Cornell University, input methods used to identify ideographs are adaptations of wellknown schemes; innovation is in the addition of automatic machine selection of ambiguously identified characters. The unique feature of the Cornell design is that a certain amount of intelligence has been built into the machine. This allows an operator to take advantage of the fact that about 60% of Chinese characters in text are paired with other characters to form two-syllable compounds or phrase words. In speech and writing these pairings eliminate about 95% of the ambiguities created by ambiguously identified syllables.",Human Factors and Linguistic Considerations: Keys to High-Speed {C}hinese Character Input,"With a keyboard and supporting system developed at Cornell University, input methods used to identify ideographs are adaptations of wellknown schemes; innovation is in the addition of automatic machine selection of ambiguously identified characters. The unique feature of the Cornell design is that a certain amount of intelligence has been built into the machine. This allows an operator to take advantage of the fact that about 60% of Chinese characters in text are paired with other characters to form two-syllable compounds or phrase words. In speech and writing these pairings eliminate about 95% of the ambiguities created by ambiguously identified syllables.",Human Factors and Linguistic Considerations: Keys to High-Speed Chinese Character Input,"With a keyboard and supporting system developed at Cornell University, input methods used to identify ideographs are adaptations of wellknown schemes; innovation is in the addition of automatic machine selection of ambiguously identified characters. The unique feature of the Cornell design is that a certain amount of intelligence has been built into the machine. This allows an operator to take advantage of the fact that about 60% of Chinese characters in text are paired with other characters to form two-syllable compounds or phrase words. In speech and writing these pairings eliminate about 95% of the ambiguities created by ambiguously identified syllables.",The work on which this paper was based received support from the NCR Corporation.,"Human Factors and Linguistic Considerations: Keys to High-Speed Chinese Character Input. With a keyboard and supporting system developed at Cornell University, input methods used to identify ideographs are adaptations of wellknown schemes; innovation is in the addition of automatic machine selection of ambiguously identified characters. The unique feature of the Cornell design is that a certain amount of intelligence has been built into the machine. This allows an operator to take advantage of the fact that about 60% of Chinese characters in text are paired with other characters to form two-syllable compounds or phrase words. In speech and writing these pairings eliminate about 95% of the ambiguities created by ambiguously identified syllables.",1980
ma-way-2010-hmm,https://aclanthology.org/W10-3813,0,,,,,,,"HMM Word-to-Phrase Alignment with Dependency Constraints. In this paper, we extend the HMM word-to-phrase alignment model with syntactic dependency constraints. The syntactic dependencies between multiple words in one language are introduced into the model in a bid to produce coherent alignments. Our experimental results on a variety of Chinese-English data show that our syntactically constrained model can lead to as much as a 3.24% relative improvement in BLEU score over current HMM word-to-phrase alignment models on a Phrase-Based Statistical Machine Translation system when the training data is small, and a comparable performance compared to IBM model 4 on a Hiero-style system with larger training data. An intrinsic alignment quality evaluation shows that our alignment model with dependency constraints leads to improvements in both precision (by 1.74% relative) and recall (by 1.75% relative) over the model without dependency information.",{HMM} Word-to-Phrase Alignment with Dependency Constraints,"In this paper, we extend the HMM word-to-phrase alignment model with syntactic dependency constraints. The syntactic dependencies between multiple words in one language are introduced into the model in a bid to produce coherent alignments. Our experimental results on a variety of Chinese-English data show that our syntactically constrained model can lead to as much as a 3.24% relative improvement in BLEU score over current HMM word-to-phrase alignment models on a Phrase-Based Statistical Machine Translation system when the training data is small, and a comparable performance compared to IBM model 4 on a Hiero-style system with larger training data. An intrinsic alignment quality evaluation shows that our alignment model with dependency constraints leads to improvements in both precision (by 1.74% relative) and recall (by 1.75% relative) over the model without dependency information.",HMM Word-to-Phrase Alignment with Dependency Constraints,"In this paper, we extend the HMM word-to-phrase alignment model with syntactic dependency constraints. The syntactic dependencies between multiple words in one language are introduced into the model in a bid to produce coherent alignments. Our experimental results on a variety of Chinese-English data show that our syntactically constrained model can lead to as much as a 3.24% relative improvement in BLEU score over current HMM word-to-phrase alignment models on a Phrase-Based Statistical Machine Translation system when the training data is small, and a comparable performance compared to IBM model 4 on a Hiero-style system with larger training data. An intrinsic alignment quality evaluation shows that our alignment model with dependency constraints leads to improvements in both precision (by 1.74% relative) and recall (by 1.75% relative) over the model without dependency information.",This research is supported by the Science Foundation Ireland (Grant 07/CE/I1142) as part of the Centre for Next Generation Localisation (www.cngl.ie) at Dublin City University. Part of the work was carried out at Cambridge University Engineering Department with Dr. William Byrne. The authors would also like to thank the anonymous reviewers for their insightful comments.,"HMM Word-to-Phrase Alignment with Dependency Constraints. In this paper, we extend the HMM word-to-phrase alignment model with syntactic dependency constraints. 
The syntactic dependencies between multiple words in one language are introduced into the model in a bid to produce coherent alignments. Our experimental results on a variety of Chinese-English data show that our syntactically constrained model can lead to as much as a 3.24% relative improvement in BLEU score over current HMM word-to-phrase alignment models on a Phrase-Based Statistical Machine Translation system when the training data is small, and a comparable performance compared to IBM model 4 on a Hiero-style system with larger training data. An intrinsic alignment quality evaluation shows that our alignment model with dependency constraints leads to improvements in both precision (by 1.74% relative) and recall (by 1.75% relative) over the model without dependency information.",2010
zhang-etal-2019-long,https://aclanthology.org/N19-1306,0,,,,,,,"Long-tail Relation Extraction via Knowledge Graph Embeddings and Graph Convolution Networks. We propose a distance supervised relation extraction approach for long-tailed, imbalanced data which is prevalent in real-world settings. Here, the challenge is to learn accurate ""few-shot"" models for classes existing at the tail of the class distribution, for which little data is available. Inspired by the rich semantic correlations between classes at the long tail and those at the head, we take advantage of the knowledge from data-rich classes at the head of the distribution to boost the performance of the data-poor classes at the tail. First, we propose to leverage implicit relational knowledge among class labels from knowledge graph embeddings and learn explicit relational knowledge using graph convolution networks. Second, we integrate that relational knowledge into relation extraction model by coarse-to-fine knowledge-aware attention mechanism. We demonstrate our results for a large-scale benchmark dataset which show that our approach significantly outperforms other baselines, especially for long-tail relations.",Long-tail Relation Extraction via Knowledge Graph Embeddings and Graph Convolution Networks,"We propose a distance supervised relation extraction approach for long-tailed, imbalanced data which is prevalent in real-world settings. Here, the challenge is to learn accurate ""few-shot"" models for classes existing at the tail of the class distribution, for which little data is available. Inspired by the rich semantic correlations between classes at the long tail and those at the head, we take advantage of the knowledge from data-rich classes at the head of the distribution to boost the performance of the data-poor classes at the tail. First, we propose to leverage implicit relational knowledge among class labels from knowledge graph embeddings and learn explicit relational knowledge using graph convolution networks. Second, we integrate that relational knowledge into relation extraction model by coarse-to-fine knowledge-aware attention mechanism. We demonstrate our results for a large-scale benchmark dataset which show that our approach significantly outperforms other baselines, especially for long-tail relations.",Long-tail Relation Extraction via Knowledge Graph Embeddings and Graph Convolution Networks,"We propose a distance supervised relation extraction approach for long-tailed, imbalanced data which is prevalent in real-world settings. Here, the challenge is to learn accurate ""few-shot"" models for classes existing at the tail of the class distribution, for which little data is available. Inspired by the rich semantic correlations between classes at the long tail and those at the head, we take advantage of the knowledge from data-rich classes at the head of the distribution to boost the performance of the data-poor classes at the tail. First, we propose to leverage implicit relational knowledge among class labels from knowledge graph embeddings and learn explicit relational knowledge using graph convolution networks. Second, we integrate that relational knowledge into relation extraction model by coarse-to-fine knowledge-aware attention mechanism. 
We demonstrate our results for a large-scale benchmark dataset which show that our approach significantly outperforms other baselines, especially for long-tail relations.","We want to express gratitude to the anonymous reviewers for their hard work and kind comments, which will further improve our work in the future. This work is funded by NSFC91846204/61473260, national key research program YS2018YFB140004, Alibaba CangJingGe (Knowledge Engine) Research Plan and Natural Science Foundation of Zhejiang Province of China (LQ19F030001).","Long-tail Relation Extraction via Knowledge Graph Embeddings and Graph Convolution Networks. We propose a distance supervised relation extraction approach for long-tailed, imbalanced data which is prevalent in real-world settings. Here, the challenge is to learn accurate ""few-shot"" models for classes existing at the tail of the class distribution, for which little data is available. Inspired by the rich semantic correlations between classes at the long tail and those at the head, we take advantage of the knowledge from data-rich classes at the head of the distribution to boost the performance of the data-poor classes at the tail. First, we propose to leverage implicit relational knowledge among class labels from knowledge graph embeddings and learn explicit relational knowledge using graph convolution networks. Second, we integrate that relational knowledge into relation extraction model by coarse-to-fine knowledge-aware attention mechanism. We demonstrate our results for a large-scale benchmark dataset which show that our approach significantly outperforms other baselines, especially for long-tail relations.",2019
braune-fraser-2010-improved,https://aclanthology.org/C10-2010,0,,,,,,,Improved Unsupervised Sentence Alignment for Symmetrical and Asymmetrical Parallel Corpora. We address the problem of unsupervised and language-pair independent alignment of symmetrical and asymmetrical parallel corpora. Asymmetrical parallel corpora contain a large proportion of 1-to-0/0-to-1 and 1-to-many/many-to-1 sentence correspondences. We have developed a novel approach which is fast and allows us to achieve high accuracy in terms of F 1 for the alignment of both asymmetrical and symmetrical parallel corpora. The source code of our aligner and the test sets are freely available.,Improved Unsupervised Sentence Alignment for Symmetrical and Asymmetrical Parallel Corpora,We address the problem of unsupervised and language-pair independent alignment of symmetrical and asymmetrical parallel corpora. Asymmetrical parallel corpora contain a large proportion of 1-to-0/0-to-1 and 1-to-many/many-to-1 sentence correspondences. We have developed a novel approach which is fast and allows us to achieve high accuracy in terms of F 1 for the alignment of both asymmetrical and symmetrical parallel corpora. The source code of our aligner and the test sets are freely available.,Improved Unsupervised Sentence Alignment for Symmetrical and Asymmetrical Parallel Corpora,We address the problem of unsupervised and language-pair independent alignment of symmetrical and asymmetrical parallel corpora. Asymmetrical parallel corpora contain a large proportion of 1-to-0/0-to-1 and 1-to-many/many-to-1 sentence correspondences. We have developed a novel approach which is fast and allows us to achieve high accuracy in terms of F 1 for the alignment of both asymmetrical and symmetrical parallel corpora. The source code of our aligner and the test sets are freely available.,The first author was partially supported by the Hasler Stiftung 19 . Support for both authors was provided by Deutsche Forschungsgemeinschaft grants Models of Morphosyntax for Statistical Machine Translation and SFB 732.,Improved Unsupervised Sentence Alignment for Symmetrical and Asymmetrical Parallel Corpora. We address the problem of unsupervised and language-pair independent alignment of symmetrical and asymmetrical parallel corpora. Asymmetrical parallel corpora contain a large proportion of 1-to-0/0-to-1 and 1-to-many/many-to-1 sentence correspondences. We have developed a novel approach which is fast and allows us to achieve high accuracy in terms of F 1 for the alignment of both asymmetrical and symmetrical parallel corpora. The source code of our aligner and the test sets are freely available.,2010
guibon-etal-2021-shot,https://aclanthology.org/2021.emnlp-main.549,0,,,,,,,"Few-Shot Emotion Recognition in Conversation with Sequential Prototypical Networks. Several recent studies on dyadic human-human interactions have been done on conversations without specific business objectives. However, many companies might benefit from studies dedicated to more precise environments such as after sales services or customer satisfaction surveys. In this work, we place ourselves in the scope of a live chat customer service in which we want to detect emotions and their evolution in the conversation flow. This context leads to multiple challenges that range from exploiting restricted, small and mostly unlabeled datasets to finding and adapting methods for such context. We tackle these challenges by using Few-Shot Learning while making the hypothesis it can serve conversational emotion classification for different languages and sparse labels. We contribute by proposing a variation of Prototypical Networks for sequence labeling in conversation that we name ProtoSeq. We test this method on two datasets with different languages: daily conversations in English and customer service chat conversations in French. When applied to emotion classification in conversations, our method proved to be competitive even when compared to other ones. The code for ProtoSeq is available at https://github.com/gguibon/ProtoSeq.",Few-Shot Emotion Recognition in Conversation with Sequential Prototypical Networks,"Several recent studies on dyadic human-human interactions have been done on conversations without specific business objectives. However, many companies might benefit from studies dedicated to more precise environments such as after sales services or customer satisfaction surveys. In this work, we place ourselves in the scope of a live chat customer service in which we want to detect emotions and their evolution in the conversation flow. This context leads to multiple challenges that range from exploiting restricted, small and mostly unlabeled datasets to finding and adapting methods for such context. We tackle these challenges by using Few-Shot Learning while making the hypothesis it can serve conversational emotion classification for different languages and sparse labels. We contribute by proposing a variation of Prototypical Networks for sequence labeling in conversation that we name ProtoSeq. We test this method on two datasets with different languages: daily conversations in English and customer service chat conversations in French. When applied to emotion classification in conversations, our method proved to be competitive even when compared to other ones. The code for ProtoSeq is available at https://github.com/gguibon/ProtoSeq.",Few-Shot Emotion Recognition in Conversation with Sequential Prototypical Networks,"Several recent studies on dyadic human-human interactions have been done on conversations without specific business objectives. However, many companies might benefit from studies dedicated to more precise environments such as after sales services or customer satisfaction surveys. In this work, we place ourselves in the scope of a live chat customer service in which we want to detect emotions and their evolution in the conversation flow. This context leads to multiple challenges that range from exploiting restricted, small and mostly unlabeled datasets to finding and adapting methods for such context. 
We tackle these challenges by using Few-Shot Learning while making the hypothesis it can serve conversational emotion classification for different languages and sparse labels. We contribute by proposing a variation of Prototypical Networks for sequence labeling in conversation that we name ProtoSeq. We test this method on two datasets with different languages: daily conversations in English and customer service chat conversations in French. When applied to emotion classification in conversations, our method proved to be competitive even when compared to other ones. The code for ProtoSeq is available at https://github.com/gguibon/ProtoSeq.","This project has received funding from SNCF, the French National Research Agency's grant ANR-17-MAOI and the DSAIDIS chair at Télécom-Paris.","Few-Shot Emotion Recognition in Conversation with Sequential Prototypical Networks. Several recent studies on dyadic human-human interactions have been done on conversations without specific business objectives. However, many companies might benefit from studies dedicated to more precise environments such as after sales services or customer satisfaction surveys. In this work, we place ourselves in the scope of a live chat customer service in which we want to detect emotions and their evolution in the conversation flow. This context leads to multiple challenges that range from exploiting restricted, small and mostly unlabeled datasets to finding and adapting methods for such context. We tackle these challenges by using Few-Shot Learning while making the hypothesis it can serve conversational emotion classification for different languages and sparse labels. We contribute by proposing a variation of Prototypical Networks for sequence labeling in conversation that we name ProtoSeq. We test this method on two datasets with different languages: daily conversations in English and customer service chat conversations in French. When applied to emotion classification in conversations, our method proved to be competitive even when compared to other ones. The code for ProtoSeq is available at https://github.com/gguibon/ProtoSeq.",2021
sazzed-2020-cross,https://aclanthology.org/2020.wnut-1.8,0,,,,,,,"Cross-lingual sentiment classification in low-resource Bengali language. Sentiment analysis research in low-resource languages such as Bengali is still unexplored due to the scarcity of annotated data and the lack of text processing tools. Therefore, in this work, we focus on generating resources and showing the applicability of the cross-lingual sentiment analysis approach in Bengali. For benchmarking, we created and annotated a comprehensive corpus of around 12000 Bengali reviews. To address the lack of standard text-processing tools in Bengali, we leverage resources from English utilizing machine translation. We determine the performance of supervised machine learning (ML) classifiers in machine-translated English corpus and compare it with the original Bengali corpus. Besides, we examine sentiment preservation in the machine-translated corpus utilizing Cohen's Kappa and Gwet's AC1. To circumvent the laborious data labeling process, we explore lexicon-based methods and study the applicability of utilizing cross-domain labeled data from the resource-rich language. We find that supervised ML classifiers show comparable performances in Bengali and machine-translated English corpus. By utilizing labeled data, they achieve 15%-20% higher F1 scores compared to both lexicon-based and transfer learning-based methods. Besides, we observe that machine translation does not alter the sentiment polarity of the review for most of the cases. Our experimental results demonstrate that the machine translation based cross-lingual approach can be an effective way for sentiment classification in Bengali.",Cross-lingual sentiment classification in low-resource {B}engali language,"Sentiment analysis research in low-resource languages such as Bengali is still unexplored due to the scarcity of annotated data and the lack of text processing tools. Therefore, in this work, we focus on generating resources and showing the applicability of the cross-lingual sentiment analysis approach in Bengali. For benchmarking, we created and annotated a comprehensive corpus of around 12000 Bengali reviews. To address the lack of standard text-processing tools in Bengali, we leverage resources from English utilizing machine translation. We determine the performance of supervised machine learning (ML) classifiers in machine-translated English corpus and compare it with the original Bengali corpus. Besides, we examine sentiment preservation in the machine-translated corpus utilizing Cohen's Kappa and Gwet's AC1. To circumvent the laborious data labeling process, we explore lexicon-based methods and study the applicability of utilizing cross-domain labeled data from the resource-rich language. We find that supervised ML classifiers show comparable performances in Bengali and machine-translated English corpus. By utilizing labeled data, they achieve 15%-20% higher F1 scores compared to both lexicon-based and transfer learning-based methods. Besides, we observe that machine translation does not alter the sentiment polarity of the review for most of the cases. Our experimental results demonstrate that the machine translation based cross-lingual approach can be an effective way for sentiment classification in Bengali.",Cross-lingual sentiment classification in low-resource Bengali language,"Sentiment analysis research in low-resource languages such as Bengali is still unexplored due to the scarcity of annotated data and the lack of text processing tools. 
Therefore, in this work, we focus on generating resources and showing the applicability of the cross-lingual sentiment analysis approach in Bengali. For benchmarking, we created and annotated a comprehensive corpus of around 12000 Bengali reviews. To address the lack of standard text-processing tools in Bengali, we leverage resources from English utilizing machine translation. We determine the performance of supervised machine learning (ML) classifiers in machine-translated English corpus and compare it with the original Bengali corpus. Besides, we examine sentiment preservation in the machine-translated corpus utilizing Cohen's Kappa and Gwet's AC1. To circumvent the laborious data labeling process, we explore lexicon-based methods and study the applicability of utilizing cross-domain labeled data from the resource-rich language. We find that supervised ML classifiers show comparable performances in Bengali and machine-translated English corpus. By utilizing labeled data, they achieve 15%-20% higher F1 scores compared to both lexicon-based and transfer learning-based methods. Besides, we observe that machine translation does not alter the sentiment polarity of the review for most of the cases. Our experimental results demonstrate that the machine translation based cross-lingual approach can be an effective way for sentiment classification in Bengali.",,"Cross-lingual sentiment classification in low-resource Bengali language. Sentiment analysis research in low-resource languages such as Bengali is still unexplored due to the scarcity of annotated data and the lack of text processing tools. Therefore, in this work, we focus on generating resources and showing the applicability of the cross-lingual sentiment analysis approach in Bengali. For benchmarking, we created and annotated a comprehensive corpus of around 12000 Bengali reviews. To address the lack of standard text-processing tools in Bengali, we leverage resources from English utilizing machine translation. We determine the performance of supervised machine learning (ML) classifiers in machine-translated English corpus and compare it with the original Bengali corpus. Besides, we examine sentiment preservation in the machine-translated corpus utilizing Cohen's Kappa and Gwet's AC1. To circumvent the laborious data labeling process, we explore lexicon-based methods and study the applicability of utilizing cross-domain labeled data from the resource-rich language. We find that supervised ML classifiers show comparable performances in Bengali and machine-translated English corpus. By utilizing labeled data, they achieve 15%-20% higher F1 scores compared to both lexicon-based and transfer learning-based methods. Besides, we observe that machine translation does not alter the sentiment polarity of the review for most of the cases. Our experimental results demonstrate that the machine translation based cross-lingual approach can be an effective way for sentiment classification in Bengali.",2020
suzuki-kumano-2005-learning,https://aclanthology.org/2005.mtsummit-papers.5,0,,,,,,,"Learning Translations from Monolingual Corpora. This paper proposes a method for a machine translation (MT) system to automatically select and learn translation words, which suit the user's tastes or document fields by using a monolingual corpus manually compiled by the user, in order to achieve high-quality translation. We have constructed a system based on this method and carried out experiments to prove the validity of the proposed method. This learning system has been implemented in Toshiba's ""The Honyaku"" series.",Learning Translations from Monolingual Corpora,"This paper proposes a method for a machine translation (MT) system to automatically select and learn translation words, which suit the user's tastes or document fields by using a monolingual corpus manually compiled by the user, in order to achieve high-quality translation. We have constructed a system based on this method and carried out experiments to prove the validity of the proposed method. This learning system has been implemented in Toshiba's ""The Honyaku"" series.",Learning Translations from Monolingual Corpora,"This paper proposes a method for a machine translation (MT) system to automatically select and learn translation words, which suit the user's tastes or document fields by using a monolingual corpus manually compiled by the user, in order to achieve high-quality translation. We have constructed a system based on this method and carried out experiments to prove the validity of the proposed method. This learning system has been implemented in Toshiba's ""The Honyaku"" series.",,"Learning Translations from Monolingual Corpora. This paper proposes a method for a machine translation (MT) system to automatically select and learn translation words, which suit the user's tastes or document fields by using a monolingual corpus manually compiled by the user, in order to achieve high-quality translation. We have constructed a system based on this method and carried out experiments to prove the validity of the proposed method. This learning system has been implemented in Toshiba's ""The Honyaku"" series.",2005
cassell-etal-2001-non,https://aclanthology.org/P01-1016,0,,,,,,,"Non-Verbal Cues for Discourse Structure. This paper addresses the issue of designing embodied conversational agents that exhibit appropriate posture shifts during dialogues with human users. Previous research has noted the importance of hand gestures, eye gaze and head nods in conversations between embodied agents and humans. We present an analysis of human monologues and dialogues that suggests that postural shifts can be predicted as a function of discourse state in monologues, and discourse and conversation state in dialogues. On the basis of these findings, we have implemented an embodied conversational agent that uses Collagen in such a way as to generate postural shifts.",Non-Verbal Cues for Discourse Structure,"This paper addresses the issue of designing embodied conversational agents that exhibit appropriate posture shifts during dialogues with human users. Previous research has noted the importance of hand gestures, eye gaze and head nods in conversations between embodied agents and humans. We present an analysis of human monologues and dialogues that suggests that postural shifts can be predicted as a function of discourse state in monologues, and discourse and conversation state in dialogues. On the basis of these findings, we have implemented an embodied conversational agent that uses Collagen in such a way as to generate postural shifts.",Non-Verbal Cues for Discourse Structure,"This paper addresses the issue of designing embodied conversational agents that exhibit appropriate posture shifts during dialogues with human users. Previous research has noted the importance of hand gestures, eye gaze and head nods in conversations between embodied agents and humans. We present an analysis of human monologues and dialogues that suggests that postural shifts can be predicted as a function of discourse state in monologues, and discourse and conversation state in dialogues. On the basis of these findings, we have implemented an embodied conversational agent that uses Collagen in such a way as to generate postural shifts.","This research was supported by MERL, France Telecom, AT&T, and the other generous sponsors of the MIT Media Lab. Thanks to the other members of the Gesture and Narrative Language Group, in particular Ian Gouldstone and Hannes Vilhjálmsson.","Non-Verbal Cues for Discourse Structure. This paper addresses the issue of designing embodied conversational agents that exhibit appropriate posture shifts during dialogues with human users. Previous research has noted the importance of hand gestures, eye gaze and head nods in conversations between embodied agents and humans. We present an analysis of human monologues and dialogues that suggests that postural shifts can be predicted as a function of discourse state in monologues, and discourse and conversation state in dialogues. On the basis of these findings, we have implemented an embodied conversational agent that uses Collagen in such a way as to generate postural shifts.",2001
klein-etal-2002-robust,https://aclanthology.org/C02-2017,0,,,,,,,"Robust Interpretation of User Requests for Text Retrieval in a Multimodal Environment. We describe a parser for robust and flexible interpretation of user utterances in a multi-modal system for web search in newspaper databases. Users can speak or type, and they can navigate and follow links using mouse clicks. Spoken or written queries may combine search expressions with browser commands and search space restrictions. In interpreting input queries, the system has to be fault-tolerant to account for spontanous speech phenomena as well as typing or speech recognition errors which often distort the meaning of the utterance and are difficult to detect and correct. Our parser integrates shallow parsing techniques with knowledge-based text retrieval to allow for robust processing and coordination of input modes. Parsing relies on a two-layered approach: typical meta-expressions like those concerning search, newspaper types and dates are identified and excluded from the search string to be sent to the search engine. The search terms which are left after preprocessing are then grouped according to co-occurrence statistics which have been derived from a newspaper corpus. These co-occurrence statistics concern typical noun phrases as they appear in newspaper texts.",Robust Interpretation of User Requests for Text Retrieval in a Multimodal Environment,"We describe a parser for robust and flexible interpretation of user utterances in a multi-modal system for web search in newspaper databases. Users can speak or type, and they can navigate and follow links using mouse clicks. Spoken or written queries may combine search expressions with browser commands and search space restrictions. In interpreting input queries, the system has to be fault-tolerant to account for spontanous speech phenomena as well as typing or speech recognition errors which often distort the meaning of the utterance and are difficult to detect and correct. Our parser integrates shallow parsing techniques with knowledge-based text retrieval to allow for robust processing and coordination of input modes. Parsing relies on a two-layered approach: typical meta-expressions like those concerning search, newspaper types and dates are identified and excluded from the search string to be sent to the search engine. The search terms which are left after preprocessing are then grouped according to co-occurrence statistics which have been derived from a newspaper corpus. These co-occurrence statistics concern typical noun phrases as they appear in newspaper texts.",Robust Interpretation of User Requests for Text Retrieval in a Multimodal Environment,"We describe a parser for robust and flexible interpretation of user utterances in a multi-modal system for web search in newspaper databases. Users can speak or type, and they can navigate and follow links using mouse clicks. Spoken or written queries may combine search expressions with browser commands and search space restrictions. In interpreting input queries, the system has to be fault-tolerant to account for spontanous speech phenomena as well as typing or speech recognition errors which often distort the meaning of the utterance and are difficult to detect and correct. Our parser integrates shallow parsing techniques with knowledge-based text retrieval to allow for robust processing and coordination of input modes. 
Parsing relies on a two-layered approach: typical meta-expressions like those concerning search, newspaper types and dates are identified and excluded from the search string to be sent to the search engine. The search terms which are left after preprocessing are then grouped according to co-occurrence statistics which have been derived from a newspaper corpus. These co-occurrence statistics concern typical noun phrases as they appear in newspaper texts.","This work was supported by the Austrian Science Fund (FWF) under project number P-13704. Financial support for ÖFAI is provided by the Austrian Federal Ministry of Education, Science and Culture.","Robust Interpretation of User Requests for Text Retrieval in a Multimodal Environment. We describe a parser for robust and flexible interpretation of user utterances in a multi-modal system for web search in newspaper databases. Users can speak or type, and they can navigate and follow links using mouse clicks. Spoken or written queries may combine search expressions with browser commands and search space restrictions. In interpreting input queries, the system has to be fault-tolerant to account for spontanous speech phenomena as well as typing or speech recognition errors which often distort the meaning of the utterance and are difficult to detect and correct. Our parser integrates shallow parsing techniques with knowledge-based text retrieval to allow for robust processing and coordination of input modes. Parsing relies on a two-layered approach: typical meta-expressions like those concerning search, newspaper types and dates are identified and excluded from the search string to be sent to the search engine. The search terms which are left after preprocessing are then grouped according to co-occurrence statistics which have been derived from a newspaper corpus. These co-occurrence statistics concern typical noun phrases as they appear in newspaper texts.",2002
nikoulina-etal-2012-hybrid,https://aclanthology.org/W12-5701,0,,,,,,,"Hybrid Adaptation of Named Entity Recognition for Statistical Machine Translation. Appropriate Named Entity handling is important for Statistical Machine Translation. In this work we address the challenging issues of generalization and sparsity of NEs in the context of SMT. Our approach uses the source NE Recognition (NER) system to generalize the training data by replacing the recognized Named Entities with place-holders, thus allowing a Phrase-Based Statistical Machine Translation (PBMT) system to learn more general patterns. At translation time, the recognized Named Entities are handled through a specifically adapted translation model, which improves the quality of their translation. We add a post-processing step to a standard NER system in order to make it more suitable for integration with SMT and we also learn a prediction model for deciding between options for translating the Named Entities, based on their context and on their impact on the translation of the entire sentence. We show important improvements in terms of BLEU and TER scores already after integration of NER into SMT, but especially after applying the SMT-adapted post-processing step to the NER component.",Hybrid Adaptation of Named Entity Recognition for Statistical Machine Translation,"Appropriate Named Entity handling is important for Statistical Machine Translation. In this work we address the challenging issues of generalization and sparsity of NEs in the context of SMT. Our approach uses the source NE Recognition (NER) system to generalize the training data by replacing the recognized Named Entities with place-holders, thus allowing a Phrase-Based Statistical Machine Translation (PBMT) system to learn more general patterns. At translation time, the recognized Named Entities are handled through a specifically adapted translation model, which improves the quality of their translation. We add a post-processing step to a standard NER system in order to make it more suitable for integration with SMT and we also learn a prediction model for deciding between options for translating the Named Entities, based on their context and on their impact on the translation of the entire sentence. We show important improvements in terms of BLEU and TER scores already after integration of NER into SMT, but especially after applying the SMT-adapted post-processing step to the NER component.",Hybrid Adaptation of Named Entity Recognition for Statistical Machine Translation,"Appropriate Named Entity handling is important for Statistical Machine Translation. In this work we address the challenging issues of generalization and sparsity of NEs in the context of SMT. Our approach uses the source NE Recognition (NER) system to generalize the training data by replacing the recognized Named Entities with place-holders, thus allowing a Phrase-Based Statistical Machine Translation (PBMT) system to learn more general patterns. At translation time, the recognized Named Entities are handled through a specifically adapted translation model, which improves the quality of their translation. We add a post-processing step to a standard NER system in order to make it more suitable for integration with SMT and we also learn a prediction model for deciding between options for translating the Named Entities, based on their context and on their impact on the translation of the entire sentence. 
We show important improvements in terms of BLEU and TER scores already after integration of NER into SMT, but especially after applying the SMT-adapted post-processing step to the NER component.","This work was partially supported by the Organic.Lingua project (http://www.organiclingua.eu/), funded by the European Commission under the ICT Policy Support Programme (ICT PSP).","Hybrid Adaptation of Named Entity Recognition for Statistical Machine Translation. Appropriate Named Entity handling is important for Statistical Machine Translation. In this work we address the challenging issues of generalization and sparsity of NEs in the context of SMT. Our approach uses the source NE Recognition (NER) system to generalize the training data by replacing the recognized Named Entities with place-holders, thus allowing a Phrase-Based Statistical Machine Translation (PBMT) system to learn more general patterns. At translation time, the recognized Named Entities are handled through a specifically adapted translation model, which improves the quality of their translation. We add a post-processing step to a standard NER system in order to make it more suitable for integration with SMT and we also learn a prediction model for deciding between options for translating the Named Entities, based on their context and on their impact on the translation of the entire sentence. We show important improvements in terms of BLEU and TER scores already after integration of NER into SMT, but especially after applying the SMT-adapted post-processing step to the NER component.",2012
griesshaber-etal-2020-fine,https://aclanthology.org/2020.coling-main.100,0,,,,,,,"Fine-tuning BERT for Low-Resource Natural Language Understanding via Active Learning. Recently, leveraging pre-trained Transformer based language models in down stream, task specific models has advanced state of the art results in natural language understanding tasks. However, only a little research has explored the suitability of this approach in low resource settings with less than 1,000 training data points. In this work, we explore fine-tuning methods of BERT-a pre-trained Transformer based language model-by utilizing pool-based active learning to speed up training while keeping the cost of labeling new data constant. Our experimental results on the GLUE data set show an advantage in model performance by maximizing the approximate knowledge gain of the model when querying from the pool of unlabeled data. Finally, we demonstrate and analyze the benefits of freezing layers of the language model during fine-tuning to reduce the number of trainable parameters, making it more suitable for low-resource settings.",Fine-tuning {BERT} for Low-Resource Natural Language Understanding via Active Learning,"Recently, leveraging pre-trained Transformer based language models in down stream, task specific models has advanced state of the art results in natural language understanding tasks. However, only a little research has explored the suitability of this approach in low resource settings with less than 1,000 training data points. In this work, we explore fine-tuning methods of BERT-a pre-trained Transformer based language model-by utilizing pool-based active learning to speed up training while keeping the cost of labeling new data constant. Our experimental results on the GLUE data set show an advantage in model performance by maximizing the approximate knowledge gain of the model when querying from the pool of unlabeled data. Finally, we demonstrate and analyze the benefits of freezing layers of the language model during fine-tuning to reduce the number of trainable parameters, making it more suitable for low-resource settings.",Fine-tuning BERT for Low-Resource Natural Language Understanding via Active Learning,"Recently, leveraging pre-trained Transformer based language models in down stream, task specific models has advanced state of the art results in natural language understanding tasks. However, only a little research has explored the suitability of this approach in low resource settings with less than 1,000 training data points. In this work, we explore fine-tuning methods of BERT-a pre-trained Transformer based language model-by utilizing pool-based active learning to speed up training while keeping the cost of labeling new data constant. Our experimental results on the GLUE data set show an advantage in model performance by maximizing the approximate knowledge gain of the model when querying from the pool of unlabeled data. Finally, we demonstrate and analyze the benefits of freezing layers of the language model during fine-tuning to reduce the number of trainable parameters, making it more suitable for low-resource settings.","This research and development project is funded within the ""Future of Work"" Program by the German Federal Ministry of Education and Research (BMBF) and the European Social Fund in Germany. It is implemented by the Project Management Agency Karlsruhe (PTKA). 
The authors are responsible for the content of this publication.","Fine-tuning BERT for Low-Resource Natural Language Understanding via Active Learning. Recently, leveraging pre-trained Transformer based language models in down stream, task specific models has advanced state of the art results in natural language understanding tasks. However, only a little research has explored the suitability of this approach in low resource settings with less than 1,000 training data points. In this work, we explore fine-tuning methods of BERT-a pre-trained Transformer based language model-by utilizing pool-based active learning to speed up training while keeping the cost of labeling new data constant. Our experimental results on the GLUE data set show an advantage in model performance by maximizing the approximate knowledge gain of the model when querying from the pool of unlabeled data. Finally, we demonstrate and analyze the benefits of freezing layers of the language model during fine-tuning to reduce the number of trainable parameters, making it more suitable for low-resource settings.",2020
prazak-konopik-2019-ulsana,https://aclanthology.org/R19-1112,0,,,,,,,"ULSAna: Universal Language Semantic Analyzer. We present a live cross-lingual system capable of producing shallow semantic annotations of natural language sentences for 51 languages at this time. The domain of the input sentences is in principle unconstrained. The system uses single training data (in English) for all the languages. The resulting semantic annotations are therefore consistent across different languages. We use CoNLL Semantic Role Labeling training data and Universal dependencies as the basis for the system. The system is publicly available and supports processing data in batches; therefore, it can be easily used by the community for research tasks.",{ULSA}na: Universal Language Semantic Analyzer,"We present a live cross-lingual system capable of producing shallow semantic annotations of natural language sentences for 51 languages at this time. The domain of the input sentences is in principle unconstrained. The system uses single training data (in English) for all the languages. The resulting semantic annotations are therefore consistent across different languages. We use CoNLL Semantic Role Labeling training data and Universal dependencies as the basis for the system. The system is publicly available and supports processing data in batches; therefore, it can be easily used by the community for research tasks.",ULSAna: Universal Language Semantic Analyzer,"We present a live cross-lingual system capable of producing shallow semantic annotations of natural language sentences for 51 languages at this time. The domain of the input sentences is in principle unconstrained. The system uses single training data (in English) for all the languages. The resulting semantic annotations are therefore consistent across different languages. We use CoNLL Semantic Role Labeling training data and Universal dependencies as the basis for the system. The system is publicly available and supports processing data in batches; therefore, it can be easily used by the community for research tasks.","This publication was supported by the project LO1506 of the Czech Ministry of Education, Youth and Sports and by Grant No. SGS-2019-018 Processing of heterogeneous data and its specialized applications. Computational resources were provided by the CESNET LM2015042 and","ULSAna: Universal Language Semantic Analyzer. We present a live cross-lingual system capable of producing shallow semantic annotations of natural language sentences for 51 languages at this time. The domain of the input sentences is in principle unconstrained. The system uses single training data (in English) for all the languages. The resulting semantic annotations are therefore consistent across different languages. We use CoNLL Semantic Role Labeling training data and Universal dependencies as the basis for the system. The system is publicly available and supports processing data in batches; therefore, it can be easily used by the community for research tasks.",2019
hsieh-etal-2017-monpa,https://aclanthology.org/I17-2014,0,,,,,,,"MONPA: Multi-objective Named-entity and Part-of-speech Annotator for Chinese using Recurrent Neural Network. Part-of-speech (POS) tagging and named entity recognition (NER) are crucial steps in natural language processing. In addition, the difficulty of word segmentation places extra burden on those who deal with languages such as Chinese, and pipelined systems often suffer from error propagation. This work proposes an end-to-end model using character-based recurrent neural network (RNN) to jointly accomplish segmentation, POS tagging and NER of a Chinese sentence. Experiments on previous word segmentation and NER competition datasets show that a single joint model using the proposed architecture is comparable to those trained specifically for each task, and outperforms freely-available softwares. Moreover, we provide a web-based interface for the public to easily access this resource.",{MONPA}: Multi-objective Named-entity and Part-of-speech Annotator for {C}hinese using Recurrent Neural Network,"Part-of-speech (POS) tagging and named entity recognition (NER) are crucial steps in natural language processing. In addition, the difficulty of word segmentation places extra burden on those who deal with languages such as Chinese, and pipelined systems often suffer from error propagation. This work proposes an end-to-end model using character-based recurrent neural network (RNN) to jointly accomplish segmentation, POS tagging and NER of a Chinese sentence. Experiments on previous word segmentation and NER competition datasets show that a single joint model using the proposed architecture is comparable to those trained specifically for each task, and outperforms freely-available softwares. Moreover, we provide a web-based interface for the public to easily access this resource.",MONPA: Multi-objective Named-entity and Part-of-speech Annotator for Chinese using Recurrent Neural Network,"Part-of-speech (POS) tagging and named entity recognition (NER) are crucial steps in natural language processing. In addition, the difficulty of word segmentation places extra burden on those who deal with languages such as Chinese, and pipelined systems often suffer from error propagation. This work proposes an end-to-end model using character-based recurrent neural network (RNN) to jointly accomplish segmentation, POS tagging and NER of a Chinese sentence. Experiments on previous word segmentation and NER competition datasets show that a single joint model using the proposed architecture is comparable to those trained specifically for each task, and outperforms freely-available softwares. Moreover, we provide a web-based interface for the public to easily access this resource.","We are grateful for the constructive comments from three anonymous reviewers. This work was supported by grant MOST106-3114-E-001-002 from the Ministry of Science and Technology, Taiwan.","MONPA: Multi-objective Named-entity and Part-of-speech Annotator for Chinese using Recurrent Neural Network. Part-of-speech (POS) tagging and named entity recognition (NER) are crucial steps in natural language processing. In addition, the difficulty of word segmentation places extra burden on those who deal with languages such as Chinese, and pipelined systems often suffer from error propagation. This work proposes an end-to-end model using character-based recurrent neural network (RNN) to jointly accomplish segmentation, POS tagging and NER of a Chinese sentence.
Experiments on previous word segmentation and NER competition datasets show that a single joint model using the proposed architecture is comparable to those trained specifically for each task, and outperforms freely-available softwares. Moreover, we provide a web-based interface for the public to easily access this resource.",2017
liu-2003-word,https://aclanthology.org/N03-3007,0,,,,,,,"Word Fragments Identification Using Acoustic-Prosodic Features in Conversational Speech. Word fragments pose serious problems for speech recognizers. Accurate identification of word fragments will not only improve recognition accuracy, but also be very helpful for disfluency detection algorithm because the occurrence of word fragments is a good indicator of speech disfluencies. Different from the previous effort of including word fragments in the acoustic model, in this paper, we investigate the problem of word fragment identification from another approach, i.e. building classifiers using acoustic-prosodic features. Our experiments show that, by combining a few voice quality measures and prosodic features extracted from the forced alignments with the human transcriptions, we obtain a precision rate of 74.3% and a recall rate of 70.1% on the downsampled data of spontaneous speech. The overall accuracy is 72.9%, which is significantly better than chance performance of 50%.",Word Fragments Identification Using Acoustic-Prosodic Features in Conversational Speech,"Word fragments pose serious problems for speech recognizers. Accurate identification of word fragments will not only improve recognition accuracy, but also be very helpful for disfluency detection algorithm because the occurrence of word fragments is a good indicator of speech disfluencies. Different from the previous effort of including word fragments in the acoustic model, in this paper, we investigate the problem of word fragment identification from another approach, i.e. building classifiers using acoustic-prosodic features. Our experiments show that, by combining a few voice quality measures and prosodic features extracted from the forced alignments with the human transcriptions, we obtain a precision rate of 74.3% and a recall rate of 70.1% on the downsampled data of spontaneous speech. The overall accuracy is 72.9%, which is significantly better than chance performance of 50%.",Word Fragments Identification Using Acoustic-Prosodic Features in Conversational Speech,"Word fragments pose serious problems for speech recognizers. Accurate identification of word fragments will not only improve recognition accuracy, but also be very helpful for disfluency detection algorithm because the occurrence of word fragments is a good indicator of speech disfluencies. Different from the previous effort of including word fragments in the acoustic model, in this paper, we investigate the problem of word fragment identification from another approach, i.e. building classifiers using acoustic-prosodic features. Our experiments show that, by combining a few voice quality measures and prosodic features extracted from the forced alignments with the human transcriptions, we obtain a precision rate of 74.3% and a recall rate of 70.1% on the downsampled data of spontaneous speech. The overall accuracy is 72.9%, which is significantly better than chance performance of 50%.","The author gratefully acknowledges Mary Harper for her comments on this work. Part of this work was conducted at Purdue University and continued at ICSI where the author is supported by DARPA under contract MDA972-02-C-0038. Thank Elizabeth Shriberg, Andreas Stolcke and Luciana Ferrer at SRI for their advice and help with the extraction of the prosodic features. They are supported by NSF IRI-9619921 and NASA Award NCC 2 1256. 
Any opinions expressed in this paper are those of the authors and do not necessarily reflect the view of DARPA, NSF, or NASA.","Word Fragments Identification Using Acoustic-Prosodic Features in Conversational Speech. Word fragments pose serious problems for speech recognizers. Accurate identification of word fragments will not only improve recognition accuracy, but also be very helpful for disfluency detection algorithm because the occurrence of word fragments is a good indicator of speech disfluencies. Different from the previous effort of including word fragments in the acoustic model, in this paper, we investigate the problem of word fragment identification from another approach, i.e. building classifiers using acoustic-prosodic features. Our experiments show that, by combining a few voice quality measures and prosodic features extracted from the forced alignments with the human transcriptions, we obtain a precision rate of 74.3% and a recall rate of 70.1% on the downsampled data of spontaneous speech. The overall accuracy is 72.9%, which is significantly better than chance performance of 50%.",2003
kuhlmann-oepen-2016-squibs,https://aclanthology.org/J16-4009,0,,,,,,,"Squibs: Towards a Catalogue of Linguistic Graph Banks. Graphs exceeding the formal complexity of rooted trees are of growing relevance to much NLP research. Although formally well understood in graph theory, there is substantial variation in the types of linguistic graphs, as well as in the interpretation of various structural properties. To provide a common terminology and transparent statistics across different collections of graphs in NLP, we propose to establish a shared community resource with an open-source reference implementation for common statistics.",{S}quibs: Towards a Catalogue of Linguistic Graph {B}anks,"Graphs exceeding the formal complexity of rooted trees are of growing relevance to much NLP research. Although formally well understood in graph theory, there is substantial variation in the types of linguistic graphs, as well as in the interpretation of various structural properties. To provide a common terminology and transparent statistics across different collections of graphs in NLP, we propose to establish a shared community resource with an open-source reference implementation for common statistics.",Squibs: Towards a Catalogue of Linguistic Graph Banks,"Graphs exceeding the formal complexity of rooted trees are of growing relevance to much NLP research. Although formally well understood in graph theory, there is substantial variation in the types of linguistic graphs, as well as in the interpretation of various structural properties. To provide a common terminology and transparent statistics across different collections of graphs in NLP, we propose to establish a shared community resource with an open-source reference implementation for common statistics.",,"Squibs: Towards a Catalogue of Linguistic Graph Banks. Graphs exceeding the formal complexity of rooted trees are of growing relevance to much NLP research. Although formally well understood in graph theory, there is substantial variation in the types of linguistic graphs, as well as in the interpretation of various structural properties. To provide a common terminology and transparent statistics across different collections of graphs in NLP, we propose to establish a shared community resource with an open-source reference implementation for common statistics.",2016
madhyastha-etal-2016-mapping,https://aclanthology.org/W16-1612,0,,,,,,,"Mapping Unseen Words to Task-Trained Embedding Spaces. We consider the supervised training setting in which we learn task-specific word embeddings. We assume that we start with initial embeddings learned from unlabelled data and update them to learn taskspecific embeddings for words in the supervised training data. However, for new words in the test set, we must use either their initial embeddings or a single unknown embedding, which often leads to errors. We address this by learning a neural network to map from initial embeddings to the task-specific embedding space, via a multi-loss objective function. The technique is general, but here we demonstrate its use for improved dependency parsing (especially for sentences with out-of-vocabulary words), as well as for downstream improvements on sentiment analysis.",Mapping Unseen Words to Task-Trained Embedding Spaces,"We consider the supervised training setting in which we learn task-specific word embeddings. We assume that we start with initial embeddings learned from unlabelled data and update them to learn taskspecific embeddings for words in the supervised training data. However, for new words in the test set, we must use either their initial embeddings or a single unknown embedding, which often leads to errors. We address this by learning a neural network to map from initial embeddings to the task-specific embedding space, via a multi-loss objective function. The technique is general, but here we demonstrate its use for improved dependency parsing (especially for sentences with out-of-vocabulary words), as well as for downstream improvements on sentiment analysis.",Mapping Unseen Words to Task-Trained Embedding Spaces,"We consider the supervised training setting in which we learn task-specific word embeddings. We assume that we start with initial embeddings learned from unlabelled data and update them to learn taskspecific embeddings for words in the supervised training data. However, for new words in the test set, we must use either their initial embeddings or a single unknown embedding, which often leads to errors. We address this by learning a neural network to map from initial embeddings to the task-specific embedding space, via a multi-loss objective function. The technique is general, but here we demonstrate its use for improved dependency parsing (especially for sentences with out-of-vocabulary words), as well as for downstream improvements on sentiment analysis.","We would like to thank the anonymous reviewers for their useful comments. This research was supported by a Google Faculty Research Award to Mohit Bansal, Karen Livescu and Kevin Gimpel.","Mapping Unseen Words to Task-Trained Embedding Spaces. We consider the supervised training setting in which we learn task-specific word embeddings. We assume that we start with initial embeddings learned from unlabelled data and update them to learn taskspecific embeddings for words in the supervised training data. However, for new words in the test set, we must use either their initial embeddings or a single unknown embedding, which often leads to errors. We address this by learning a neural network to map from initial embeddings to the task-specific embedding space, via a multi-loss objective function. The technique is general, but here we demonstrate its use for improved dependency parsing (especially for sentences with out-of-vocabulary words), as well as for downstream improvements on sentiment analysis.",2016
tomita-1989-parsing,https://aclanthology.org/W89-0243,0,,,,,,,"Parsing 2-Dimensional Language. 2-Dimensional Context-Free Grammar (2D-CFG) for 2-dimensional input text is introduced and efficient parsing algorithms for 2D-CFG are presented. In 2D-CFG, a grammar rule's right hand side symbols can be placed not only horizontally but also vertically. Terminal symbols in a 2-dimensional input text are combined to form a rectangular region, and regions are combined to form a larger region using a 2-dimensional phrase structure rule. The parsing algorithms presented in this paper are the 2D-Earley algorithm and 2D-LR algorithm, which are 2-dimensionally extended versions of Earley's algorithm and the LR(0) algorithm, respectively. 1. Introduction. Existing grammar formalisms and formal language theories, as well as parsing algorithms, deal only with one-dimensional strings. However, 2-dimensional layout information plays an important role in understanding a text. It is especially crucial for such texts as title pages of articles, business cards, announcements and formal letters to be read by an optical character reader (OCR). A number of projects [11, 6, 7, 2], most notably by Fujisawa et al. [4], try to analyze and utilize the 2-dimensional layout information. Fujisawa et al., unlike others, uses a procedural language called Form Definition Language (FDL) [5, 12] to specify layout rules. On the other hand, in the area of image understanding, several attempts have been also made to define a language to describe 2-dimensional images [3, 10]. This paper presents a formalism called 2-Dimensional Context-Free Grammar (2D-CFG), and two parsing algorithms to parse 2-dimensional language with 2D-CFG. Unlike all the previous attempts mentioned above, our approach is to extend existing well-studied (one dimensional) grammar formalisms and parsing techniques to handle 2-dimensional language. In the rest of this section, we informally describe the 2-dimensional context-free grammar (2D-CFG) in comparison with the 1-dimensional traditional context-free grammar. Input to the traditional context-free grammar is a string, or sentence; namely a one-dimensional array of terminal symbols. Input to the 2-dimensional context-free grammar, on the other hand, is a rectangular block of symbols, or text, namely, a 2-dimensional array of terminal symbols. In the traditional context-free grammar, a non-terminal symbol represents a phrase, which is a substring of the original input string. A grammar rule is applied to combine adjoining phrases to form a larger phrase. In the 2-dimensional context-free grammar, on the other hand, a non-terminal represents a region, which is a rectangular sub-block of the input text. A grammar rule is applied to combine two adjoining regions to form a larger region. This research was supported by the National Science Foundation under contract IRI-8858085.",Parsing 2-Dimensional Language,"2-Dimensional Context-Free Grammar (2D-CFG) for 2-dimensional input text is introduced and efficient parsing algorithms for 2D-CFG are presented. In 2D-CFG, a grammar rule's right hand side symbols can be placed not only horizontally but also vertically. Terminal symbols in a 2-dimensional input text are combined to form a rectangular region, and regions are combined to form a larger region using a 2-dimensional phrase structure rule. The parsing algorithms presented in this paper are the 2D-Earley algorithm and 2D-LR algorithm, which are 2-dimensionally extended versions of Earley's algorithm and the LR(0) algorithm, respectively. 1. Introduction. Existing grammar formalisms and formal language theories, as well as parsing algorithms, deal only with one-dimensional strings. However, 2-dimensional layout information plays an important role in understanding a text. It is especially crucial for such texts as title pages of articles, business cards, announcements and formal letters to be read by an optical character reader (OCR). A number of projects [11, 6, 7, 2], most notably by Fujisawa et al. [4], try to analyze and utilize the 2-dimensional layout information. Fujisawa et al., unlike others, uses a procedural language called Form Definition Language (FDL) [5, 12] to specify layout rules. On the other hand, in the area of image understanding, several attempts have been also made to define a language to describe 2-dimensional images [3, 10]. This paper presents a formalism called 2-Dimensional Context-Free Grammar (2D-CFG), and two parsing algorithms to parse 2-dimensional language with 2D-CFG. Unlike all the previous attempts mentioned above, our approach is to extend existing well-studied (one dimensional) grammar formalisms and parsing techniques to handle 2-dimensional language. In the rest of this section, we informally describe the 2-dimensional context-free grammar (2D-CFG) in comparison with the 1-dimensional traditional context-free grammar. Input to the traditional context-free grammar is a string, or sentence; namely a one-dimensional array of terminal symbols. Input to the 2-dimensional context-free grammar, on the other hand, is a rectangular block of symbols, or text, namely, a 2-dimensional array of terminal symbols. In the traditional context-free grammar, a non-terminal symbol represents a phrase, which is a substring of the original input string. A grammar rule is applied to combine adjoining phrases to form a larger phrase. In the 2-dimensional context-free grammar, on the other hand, a non-terminal represents a region, which is a rectangular sub-block of the input text. A grammar rule is applied to combine two adjoining regions to form a larger region. This research was supported by the National Science Foundation under contract IRI-8858085.",Parsing 2-Dimensional Language,"2-Dimensional Context-Free Grammar (2D-CFG) for 2-dimensional input text is introduced and efficient parsing algorithms for 2D-CFG are presented. In 2D-CFG, a grammar rule's right hand side symbols can be placed not only horizontally but also vertically. Terminal symbols in a 2-dimensional input text are combined to form a rectangular region, and regions are combined to form a larger region using a 2-dimensional phrase structure rule. The parsing algorithms presented in this paper are the 2D-Earley algorithm and 2D-LR algorithm, which are 2-dimensionally extended versions of Earley's algorithm and the LR(0) algorithm, respectively. 1. Introduction. Existing grammar formalisms and formal language theories, as well as parsing algorithms, deal only with one-dimensional strings. However, 2-dimensional layout information plays an important role in understanding a text. It is especially crucial for such texts as title pages of articles, business cards, announcements and formal letters to be read by an optical character reader (OCR). A number of projects [11, 6, 7, 2], most notably by Fujisawa et al. [4], try to analyze and utilize the 2-dimensional layout information. Fujisawa et al., unlike others, uses a procedural language called Form Definition Language (FDL) [5, 12] to specify layout rules. On the other hand, in the area of image understanding, several attempts have been also made to define a language to describe 2-dimensional images [3, 10]. This paper presents a formalism called 2-Dimensional Context-Free Grammar (2D-CFG), and two parsing algorithms to parse 2-dimensional language with 2D-CFG. Unlike all the previous attempts mentioned above, our approach is to extend existing well-studied (one dimensional) grammar formalisms and parsing techniques to handle 2-dimensional language. In the rest of this section, we informally describe the 2-dimensional context-free grammar (2D-CFG) in comparison with the 1-dimensional traditional context-free grammar. Input to the traditional context-free grammar is a string, or sentence; namely a one-dimensional array of terminal symbols. Input to the 2-dimensional context-free grammar, on the other hand, is a rectangular block of symbols, or text, namely, a 2-dimensional array of terminal symbols. In the traditional context-free grammar, a non-terminal symbol represents a phrase, which is a substring of the original input string. A grammar rule is applied to combine adjoining phrases to form a larger phrase. In the 2-dimensional context-free grammar, on the other hand, a non-terminal represents a region, which is a rectangular sub-block of the input text. A grammar rule is applied to combine two adjoining regions to form a larger region. This research was supported by the National Science Foundation under contract IRI-8858085.",,"Parsing 2-Dimensional Language. 2-Dimensional Context-Free Grammar (2D-CFG) for 2-dimensional input text is introduced and efficient parsing algorithms for 2D-CFG are presented. In 2D-CFG, a grammar rule's right hand side symbols can be placed not only horizontally but also vertically. Terminal symbols in a 2-dimensional input text are combined to form a rectangular region, and regions are combined to form a larger region using a 2-dimensional phrase structure rule. The parsing algorithms presented in this paper are the 2D-Earley algorithm and 2D-LR algorithm, which are 2-dimensionally extended versions of Earley's algorithm and the LR(0) algorithm, respectively. 1. Introduction. Existing grammar formalisms and formal language theories, as well as parsing algorithms, deal only with one-dimensional strings. However, 2-dimensional layout information plays an important role in understanding a text. It is especially crucial for such texts as title pages of articles, business cards, announcements and formal letters to be read by an optical character reader (OCR). A number of projects [11, 6, 7, 2], most notably by Fujisawa et al. [4], try to analyze and utilize the 2-dimensional layout information. Fujisawa et al., unlike others, uses a procedural language called Form Definition Language (FDL) [5, 12] to specify layout rules. On the other hand, in the area of image understanding, several attempts have been also made to define a language to describe 2-dimensional images [3, 10]. This paper presents a formalism called 2-Dimensional Context-Free Grammar (2D-CFG), and two parsing algorithms to parse 2-dimensional language with 2D-CFG. Unlike all the previous attempts mentioned above, our approach is to extend existing well-studied (one dimensional) grammar formalisms and parsing techniques to handle 2-dimensional language. In the rest of this section, we informally describe the 2-dimensional context-free grammar (2D-CFG) in comparison with the 1-dimensional traditional context-free grammar. Input to the traditional context-free grammar is a string, or sentence; namely a one-dimensional array of terminal symbols. Input to the 2-dimensional context-free grammar, on the other hand, is a rectangular block of symbols, or text, namely, a 2-dimensional array of terminal symbols. In the traditional context-free grammar, a non-terminal symbol represents a phrase, which is a substring of the original input string. A grammar rule is applied to combine adjoining phrases to form a larger phrase. In the 2-dimensional context-free grammar, on the other hand, a non-terminal represents a region, which is a rectangular sub-block of the input text. A grammar rule is applied to combine two adjoining regions to form a larger region. This research was supported by the National Science Foundation under contract IRI-8858085.",1989
li-etal-2020-event,https://aclanthology.org/2020.findings-emnlp.73,0,,,,,,,"Event Extraction as Multi-turn Question Answering. Event extraction, which aims to identify event triggers of pre-defined event types and their arguments of specific roles, is a challenging task in NLP. Most traditional approaches formulate this task as classification problems, with event types or argument roles taken as golden labels. Such approaches fail to model rich interactions among event types and arguments of different roles, and cannot generalize to new types or roles. This work proposes a new paradigm that formulates event extraction as multi-turn question answering. Our approach, MQAEE, casts the extraction task into a series of reading comprehension problems, by which it extracts triggers and arguments successively from a given sentence. A history answer embedding strategy is further adopted to model question answering history in the multi-turn process. By this new formulation, MQAEE makes full use of dependency among arguments and event types, and generalizes well to new types with new argument roles. Empirical results on ACE 2005 shows that MQAEE outperforms current state-of-the-art, pushing the final F1 of argument extraction to 53.4% (+2.0%). And it also has a good generalization ability, achieving competitive performance on 13 new event types even if trained only with a few samples of them.",Event Extraction as Multi-turn Question Answering,"Event extraction, which aims to identify event triggers of pre-defined event types and their arguments of specific roles, is a challenging task in NLP. Most traditional approaches formulate this task as classification problems, with event types or argument roles taken as golden labels. Such approaches fail to model rich interactions among event types and arguments of different roles, and cannot generalize to new types or roles. This work proposes a new paradigm that formulates event extraction as multi-turn question answering. Our approach, MQAEE, casts the extraction task into a series of reading comprehension problems, by which it extracts triggers and arguments successively from a given sentence. A history answer embedding strategy is further adopted to model question answering history in the multi-turn process. By this new formulation, MQAEE makes full use of dependency among arguments and event types, and generalizes well to new types with new argument roles. Empirical results on ACE 2005 shows that MQAEE outperforms current state-of-the-art, pushing the final F1 of argument extraction to 53.4% (+2.0%). And it also has a good generalization ability, achieving competitive performance on 13 new event types even if trained only with a few samples of them.",Event Extraction as Multi-turn Question Answering,"Event extraction, which aims to identify event triggers of pre-defined event types and their arguments of specific roles, is a challenging task in NLP. Most traditional approaches formulate this task as classification problems, with event types or argument roles taken as golden labels. Such approaches fail to model rich interactions among event types and arguments of different roles, and cannot generalize to new types or roles. This work proposes a new paradigm that formulates event extraction as multi-turn question answering. Our approach, MQAEE, casts the extraction task into a series of reading comprehension problems, by which it extracts triggers and arguments successively from a given sentence. 
A history answer embedding strategy is further adopted to model question answering history in the multi-turn process. By this new formulation, MQAEE makes full use of dependency among arguments and event types, and generalizes well to new types with new argument roles. Empirical results on ACE 2005 shows that MQAEE outperforms current state-of-the-art, pushing the final F1 of argument extraction to 53.4% (+2.0%). And it also has a good generalization ability, achieving competitive performance on 13 new event types even if trained only with a few samples of them.",,"Event Extraction as Multi-turn Question Answering. Event extraction, which aims to identify event triggers of pre-defined event types and their arguments of specific roles, is a challenging task in NLP. Most traditional approaches formulate this task as classification problems, with event types or argument roles taken as golden labels. Such approaches fail to model rich interactions among event types and arguments of different roles, and cannot generalize to new types or roles. This work proposes a new paradigm that formulates event extraction as multi-turn question answering. Our approach, MQAEE, casts the extraction task into a series of reading comprehension problems, by which it extracts triggers and arguments successively from a given sentence. A history answer embedding strategy is further adopted to model question answering history in the multi-turn process. By this new formulation, MQAEE makes full use of dependency among arguments and event types, and generalizes well to new types with new argument roles. Empirical results on ACE 2005 shows that MQAEE outperforms current state-of-the-art, pushing the final F1 of argument extraction to 53.4% (+2.0%). And it also has a good generalization ability, achieving competitive performance on 13 new event types even if trained only with a few samples of them.",2020
munoz-etal-2000-semantic,https://aclanthology.org/2000.bcs-1.17,0,,,,,,,Semantic approach to bridging reference resolution. ,Semantic approach to bridging reference resolution,,Semantic approach to bridging reference resolution,,,Semantic approach to bridging reference resolution. ,2000
maguino-valencia-etal-2018-wordnet,https://aclanthology.org/L18-1697,0,,,,,,,"WordNet-Shp: Towards the Building of a Lexical Database for a Peruvian Minority Language. WordNet-like resources are lexical databases with highly relevance information and data which could be exploited in more complex computational linguistics research and applications. The building process requires manual and automatic tasks, that could be more arduous if the language is a minority one with fewer digital resources. This study focuses in the construction of an initial WordNet database for a low-resourced and indigenous language in Peru: Shipibo-Konibo (shp). First, the stages of development from a scarce scenario (a bilingual dictionary shp-es) are described. Then, it is proposed a synset alignment method by comparing the definition glosses in the dictionary (written in Spanish) with the content of a Spanish WordNet. In this sense, word2vec similarity was the chosen metric for the proximity measure. Finally, an evaluation process is performed for the synsets, using a manually annotated Gold Standard in Shipibo-Konibo. The obtained results are promising, and this resource is expected to serve well in further applications, such as word sense disambiguation and even machine translation in the shp-es language pair.",{W}ord{N}et-Shp: Towards the Building of a Lexical Database for a {P}eruvian Minority Language,"WordNet-like resources are lexical databases with highly relevance information and data which could be exploited in more complex computational linguistics research and applications. The building process requires manual and automatic tasks, that could be more arduous if the language is a minority one with fewer digital resources. This study focuses in the construction of an initial WordNet database for a low-resourced and indigenous language in Peru: Shipibo-Konibo (shp). First, the stages of development from a scarce scenario (a bilingual dictionary shp-es) are described. Then, it is proposed a synset alignment method by comparing the definition glosses in the dictionary (written in Spanish) with the content of a Spanish WordNet. In this sense, word2vec similarity was the chosen metric for the proximity measure. Finally, an evaluation process is performed for the synsets, using a manually annotated Gold Standard in Shipibo-Konibo. The obtained results are promising, and this resource is expected to serve well in further applications, such as word sense disambiguation and even machine translation in the shp-es language pair.",WordNet-Shp: Towards the Building of a Lexical Database for a Peruvian Minority Language,"WordNet-like resources are lexical databases with highly relevance information and data which could be exploited in more complex computational linguistics research and applications. The building process requires manual and automatic tasks, that could be more arduous if the language is a minority one with fewer digital resources. This study focuses in the construction of an initial WordNet database for a low-resourced and indigenous language in Peru: Shipibo-Konibo (shp). First, the stages of development from a scarce scenario (a bilingual dictionary shp-es) are described. Then, it is proposed a synset alignment method by comparing the definition glosses in the dictionary (written in Spanish) with the content of a Spanish WordNet. In this sense, word2vec similarity was the chosen metric for the proximity measure. 
Finally, an evaluation process is performed for the synsets, using a manually annotated Gold Standard in Shipibo-Konibo. The obtained results are promising, and this resource is expected to serve well in further applications, such as word sense disambiguation and even machine translation in the shp-es language pair.","We highly appreciate the linguistic team effort that made possible the creation of this resource: Dr. Roberto Zariquiey, Alonso Vásquez, Gabriela Tello, Renzo Ego-Aguirre, Lea Reinhardt and Marcela Castro. We are also thankful to our native speakers (Shipibo-Konibo) collaborators: Juan Agustín, Carlos Guimaraes, Ronald Suárez and Miguel Gomez. Finally, we gratefully acknowledge the support of the ""Consejo Nacional de Ciencia, Tecnología e Innovación Tecnológica"" (CONCYTEC, Peru) under the contract 225-2015-FONDECYT.","WordNet-Shp: Towards the Building of a Lexical Database for a Peruvian Minority Language. WordNet-like resources are lexical databases with highly relevance information and data which could be exploited in more complex computational linguistics research and applications. The building process requires manual and automatic tasks, that could be more arduous if the language is a minority one with fewer digital resources. This study focuses in the construction of an initial WordNet database for a low-resourced and indigenous language in Peru: Shipibo-Konibo (shp). First, the stages of development from a scarce scenario (a bilingual dictionary shp-es) are described. Then, it is proposed a synset alignment method by comparing the definition glosses in the dictionary (written in Spanish) with the content of a Spanish WordNet. In this sense, word2vec similarity was the chosen metric for the proximity measure. Finally, an evaluation process is performed for the synsets, using a manually annotated Gold Standard in Shipibo-Konibo. The obtained results are promising, and this resource is expected to serve well in further applications, such as word sense disambiguation and even machine translation in the shp-es language pair.",2018
pu-etal-2017-sense,https://aclanthology.org/W17-4701,0,,,,,,,"Sense-Aware Statistical Machine Translation using Adaptive Context-Dependent Clustering. Statistical machine translation (SMT) systems use local cues from n-gram translation and language models to select the translation of each source word. Such systems do not explicitly perform word sense disambiguation (WSD), although this would enable them to select translations depending on the hypothesized sense of each word. Previous attempts to constrain word translations based on the results of generic WSD systems have suffered from their limited accuracy. We demonstrate that WSD systems can be adapted to help SMT, thanks to three key achievements: (1) we consider a larger context for WSD than SMT can afford to consider; (2) we adapt the number of senses per word to the ones observed in the training data using clustering-based WSD with K-means; and (3) we initialize senseclustering with definitions or examples extracted from WordNet. Our WSD system is competitive, and in combination with a factored SMT system improves noun and verb translation from English to Chinese, Dutch, French, German, and Spanish.",Sense-Aware Statistical Machine Translation using Adaptive Context-Dependent Clustering,"Statistical machine translation (SMT) systems use local cues from n-gram translation and language models to select the translation of each source word. Such systems do not explicitly perform word sense disambiguation (WSD), although this would enable them to select translations depending on the hypothesized sense of each word. Previous attempts to constrain word translations based on the results of generic WSD systems have suffered from their limited accuracy. We demonstrate that WSD systems can be adapted to help SMT, thanks to three key achievements: (1) we consider a larger context for WSD than SMT can afford to consider; (2) we adapt the number of senses per word to the ones observed in the training data using clustering-based WSD with K-means; and (3) we initialize senseclustering with definitions or examples extracted from WordNet. Our WSD system is competitive, and in combination with a factored SMT system improves noun and verb translation from English to Chinese, Dutch, French, German, and Spanish.",Sense-Aware Statistical Machine Translation using Adaptive Context-Dependent Clustering,"Statistical machine translation (SMT) systems use local cues from n-gram translation and language models to select the translation of each source word. Such systems do not explicitly perform word sense disambiguation (WSD), although this would enable them to select translations depending on the hypothesized sense of each word. Previous attempts to constrain word translations based on the results of generic WSD systems have suffered from their limited accuracy. We demonstrate that WSD systems can be adapted to help SMT, thanks to three key achievements: (1) we consider a larger context for WSD than SMT can afford to consider; (2) we adapt the number of senses per word to the ones observed in the training data using clustering-based WSD with K-means; and (3) we initialize senseclustering with definitions or examples extracted from WordNet. Our WSD system is competitive, and in combination with a factored SMT system improves noun and verb translation from English to Chinese, Dutch, French, German, and Spanish.","We are grateful for their support to the Swiss National Science Foundation (SNSF) under the Sinergia MODERN project (grant n. 
147653, see www.idiap.ch/project/modern/) and to the European Union under the Horizon 2020 SUMMA project (grant n. 688139, see www.summaproject.eu). We thank the reviewers for their helpful suggestions.","Sense-Aware Statistical Machine Translation using Adaptive Context-Dependent Clustering. Statistical machine translation (SMT) systems use local cues from n-gram translation and language models to select the translation of each source word. Such systems do not explicitly perform word sense disambiguation (WSD), although this would enable them to select translations depending on the hypothesized sense of each word. Previous attempts to constrain word translations based on the results of generic WSD systems have suffered from their limited accuracy. We demonstrate that WSD systems can be adapted to help SMT, thanks to three key achievements: (1) we consider a larger context for WSD than SMT can afford to consider; (2) we adapt the number of senses per word to the ones observed in the training data using clustering-based WSD with K-means; and (3) we initialize senseclustering with definitions or examples extracted from WordNet. Our WSD system is competitive, and in combination with a factored SMT system improves noun and verb translation from English to Chinese, Dutch, French, German, and Spanish.",2017
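A minimal sketch of the clustering-based WSD ingredient in the record above: context vectors for occurrences of one ambiguous word are grouped with K-means into pseudo-senses. The random toy features and the fixed k are assumptions; the paper adapts the number of senses per word and initializes clusters from WordNet definitions or examples, which this sketch omits.

import numpy as np
from sklearn.cluster import KMeans

# Toy context vectors for occurrences of one ambiguous source word; in the
# paper's setting these come from wider WSD context features than SMT can use.
rng = np.random.default_rng(0)
contexts = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(20, 5)),   # occurrences behaving like sense A
    rng.normal(loc=2.0, scale=0.3, size=(15, 5)),   # occurrences behaving like sense B
])

k = 2  # illustrative; the paper adapts this per word rather than fixing it
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(contexts)

# Each occurrence now carries a pseudo-sense label that a factored SMT system
# could condition on (here we just report the cluster sizes).
print({int(c): int((labels == c).sum()) for c in set(labels)})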
brun-2011-detecting,https://aclanthology.org/R11-1054,0,,,,,,,"Detecting Opinions Using Deep Syntactic Analysis. In this paper, we present an opinion detection system built on top of a robust syntactic parser. The goal of this system is to extract opinions associated with products but also with characteristics of these products, i.e. to perform feature-based opinion extraction. To carry out this task, and following a target corpus study, the robust syntactic parser is enriched by associating polarities to pertinent lexical elements and by developing generic rules to extract relations of opinions together with their polarity, i.e. positive or negative. These relations are used to feed an opinion representation model. A first evaluation shows very encouraging results, but numerous perspectives and developments remain to be investigated.",Detecting Opinions Using Deep Syntactic Analysis,"In this paper, we present an opinion detection system built on top of a robust syntactic parser. The goal of this system is to extract opinions associated with products but also with characteristics of these products, i.e. to perform feature-based opinion extraction. To carry out this task, and following a target corpus study, the robust syntactic parser is enriched by associating polarities to pertinent lexical elements and by developing generic rules to extract relations of opinions together with their polarity, i.e. positive or negative. These relations are used to feed an opinion representation model. A first evaluation shows very encouraging results, but numerous perspectives and developments remain to be investigated.",Detecting Opinions Using Deep Syntactic Analysis,"In this paper, we present an opinion detection system built on top of a robust syntactic parser. The goal of this system is to extract opinions associated with products but also with characteristics of these products, i.e. to perform feature-based opinion extraction. To carry out this task, and following a target corpus study, the robust syntactic parser is enriched by associating polarities to pertinent lexical elements and by developing generic rules to extract relations of opinions together with their polarity, i.e. positive or negative. These relations are used to feed an opinion representation model. A first evaluation shows very encouraging results, but numerous perspectives and developments remain to be investigated.",,"Detecting Opinions Using Deep Syntactic Analysis. In this paper, we present an opinion detection system built on top of a robust syntactic parser. The goal of this system is to extract opinions associated with products but also with characteristics of these products, i.e. to perform feature-based opinion extraction. To carry out this task, and following a target corpus study, the robust syntactic parser is enriched by associating polarities to pertinent lexical elements and by developing generic rules to extract relations of opinions together with their polarity, i.e. positive or negative. These relations are used to feed an opinion representation model. A first evaluation shows very encouraging results, but numerous perspectives and developments remain to be investigated.",2011
burlot-etal-2016-limsi,https://aclanthology.org/2016.iwslt-1.19,0,,,,,,,"LIMSI@IWSLT'16: MT Track. This paper describes LIMSI's submission to the MT track of IWSLT 2016. We report results for translation from English into Czech. Our submission is an attempt to address the difficulties of translating into a morphologically rich language by paying special attention to the morphology generation on the target side. To this end, we propose two ways of improving the morphological fluency of the output: 1. by performing translation and inflection of the target language in two separate steps, and 2. by using a neural language model with character-based word representation. We finally present the combination of both methods used for our primary system submission.",{LIMSI}@{IWSLT}{'}16: {MT} Track,"This paper describes LIMSI's submission to the MT track of IWSLT 2016. We report results for translation from English into Czech. Our submission is an attempt to address the difficulties of translating into a morphologically rich language by paying special attention to the morphology generation on the target side. To this end, we propose two ways of improving the morphological fluency of the output: 1. by performing translation and inflection of the target language in two separate steps, and 2. by using a neural language model with character-based word representation. We finally present the combination of both methods used for our primary system submission.",LIMSI@IWSLT'16: MT Track,"This paper describes LIMSI's submission to the MT track of IWSLT 2016. We report results for translation from English into Czech. Our submission is an attempt to address the difficulties of translating into a morphologically rich language by paying special attention to the morphology generation on the target side. To this end, we propose two ways of improving the morphological fluency of the output: 1. by performing translation and inflection of the target language in two separate steps, and 2. by using a neural language model with character-based word representation. We finally present the combination of both methods used for our primary system submission.",This work has been partly funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 645452 (QT21).,"LIMSI@IWSLT'16: MT Track. This paper describes LIMSI's submission to the MT track of IWSLT 2016. We report results for translation from English into Czech. Our submission is an attempt to address the difficulties of translating into a morphologically rich language by paying special attention to the morphology generation on the target side. To this end, we propose two ways of improving the morphological fluency of the output: 1. by performing translation and inflection of the target language in two separate steps, and 2. by using a neural language model with character-based word representation. We finally present the combination of both methods used for our primary system submission.",2016
dubbin-blunsom-2014-modelling,https://aclanthology.org/E14-1013,0,,,,,,,"Modelling the Lexicon in Unsupervised Part of Speech Induction. Automatically inducing the syntactic part-of-speech categories for words in text is a fundamental task in Computational Linguistics. While the performance of unsupervised tagging models has been slowly improving, current state-of-the-art systems make the obviously incorrect assumption that all tokens of a given word type must share a single part-of-speech tag. This one-tag-per-type heuristic counters the tendency of Hidden Markov Model based taggers to over-generate tags for a given word type. However, it is clearly incompatible with basic syntactic theory. In this paper we extend a state-of-the-art Pitman-Yor Hidden Markov Model tagger with an explicit model of the lexicon. In doing so we are able to incorporate a soft bias towards inducing few tags per type. We develop a particle filter for drawing samples from the posterior of our model and present empirical results that show that our model is competitive with and faster than the state-of-the-art without making any unrealistic restrictions.",Modelling the Lexicon in Unsupervised Part of Speech Induction,"Automatically inducing the syntactic part-of-speech categories for words in text is a fundamental task in Computational Linguistics. While the performance of unsupervised tagging models has been slowly improving, current state-of-the-art systems make the obviously incorrect assumption that all tokens of a given word type must share a single part-of-speech tag. This one-tag-per-type heuristic counters the tendency of Hidden Markov Model based taggers to over-generate tags for a given word type. However, it is clearly incompatible with basic syntactic theory. In this paper we extend a state-of-the-art Pitman-Yor Hidden Markov Model tagger with an explicit model of the lexicon. In doing so we are able to incorporate a soft bias towards inducing few tags per type. We develop a particle filter for drawing samples from the posterior of our model and present empirical results that show that our model is competitive with and faster than the state-of-the-art without making any unrealistic restrictions.",Modelling the Lexicon in Unsupervised Part of Speech Induction,"Automatically inducing the syntactic part-of-speech categories for words in text is a fundamental task in Computational Linguistics. While the performance of unsupervised tagging models has been slowly improving, current state-of-the-art systems make the obviously incorrect assumption that all tokens of a given word type must share a single part-of-speech tag. This one-tag-per-type heuristic counters the tendency of Hidden Markov Model based taggers to over-generate tags for a given word type. However, it is clearly incompatible with basic syntactic theory. In this paper we extend a state-of-the-art Pitman-Yor Hidden Markov Model tagger with an explicit model of the lexicon. In doing so we are able to incorporate a soft bias towards inducing few tags per type. We develop a particle filter for drawing samples from the posterior of our model and present empirical results that show that our model is competitive with and faster than the state-of-the-art without making any unrealistic restrictions.",,"Modelling the Lexicon in Unsupervised Part of Speech Induction. Automatically inducing the syntactic part-of-speech categories for words in text is a fundamental task in Computational Linguistics. 
While the performance of unsupervised tagging models has been slowly improving, current state-of-the-art systems make the obviously incorrect assumption that all tokens of a given word type must share a single part-of-speech tag. This one-tag-per-type heuristic counters the tendency of Hidden Markov Model based taggers to over-generate tags for a given word type. However, it is clearly incompatible with basic syntactic theory. In this paper we extend a state-of-the-art Pitman-Yor Hidden Markov Model tagger with an explicit model of the lexicon. In doing so we are able to incorporate a soft bias towards inducing few tags per type. We develop a particle filter for drawing samples from the posterior of our model and present empirical results that show that our model is competitive with and faster than the state-of-the-art without making any unrealistic restrictions.",2014
rogers-2004-wrapping,https://aclanthology.org/P04-1071,0,,,,,,,"Wrapping of Trees. We explore the descriptive power, in terms of syntactic phenomena, of a formalism that extends Tree-Adjoining Grammar (TAG) by adding a fourth level of hierarchical decomposition to the three levels TAG already employs. While extending the descriptive power minimally, the additional level of decomposition allows us to obtain a uniform account of a range of phenomena that has heretofore been difficult to encompass, an account that employs unitary elementary structures and eschews synchronized derivation operations, and which is, in many respects, closer to the spirit of the intuitions underlying TAG-based linguistic theory than previously considered extensions to TAG.",Wrapping of Trees,"We explore the descriptive power, in terms of syntactic phenomena, of a formalism that extends Tree-Adjoining Grammar (TAG) by adding a fourth level of hierarchical decomposition to the three levels TAG already employs. While extending the descriptive power minimally, the additional level of decomposition allows us to obtain a uniform account of a range of phenomena that has heretofore been difficult to encompass, an account that employs unitary elementary structures and eschews synchronized derivation operations, and which is, in many respects, closer to the spirit of the intuitions underlying TAG-based linguistic theory than previously considered extensions to TAG.",Wrapping of Trees,"We explore the descriptive power, in terms of syntactic phenomena, of a formalism that extends Tree-Adjoining Grammar (TAG) by adding a fourth level of hierarchical decomposition to the three levels TAG already employs. While extending the descriptive power minimally, the additional level of decomposition allows us to obtain a uniform account of a range of phenomena that has heretofore been difficult to encompass, an account that employs unitary elementary structures and eschews synchronized derivation operations, and which is, in many respects, closer to the spirit of the intuitions underlying TAG-based linguistic theory than previously considered extensions to TAG.",,"Wrapping of Trees. We explore the descriptive power, in terms of syntactic phenomena, of a formalism that extends Tree-Adjoining Grammar (TAG) by adding a fourth level of hierarchical decomposition to the three levels TAG already employs. While extending the descriptive power minimally, the additional level of decomposition allows us to obtain a uniform account of a range of phenomena that has heretofore been difficult to encompass, an account that employs unitary elementary structures and eschews synchronized derivation operations, and which is, in many respects, closer to the spirit of the intuitions underlying TAG-based linguistic theory than previously considered extensions to TAG.",2004
khairunnisa-etal-2020-towards,https://aclanthology.org/2020.aacl-srw.10,0,,,,,,,"Towards a Standardized Dataset on Indonesian Named Entity Recognition. In recent years, named entity recognition (NER) tasks in the Indonesian language have undergone extensive development. There are only a few corpora for Indonesian NER; hence, recent Indonesian NER studies have used diverse datasets. Although an open dataset is available, it includes only approximately 2,000 sentences and contains inconsistent annotations, thereby preventing accurate training of NER models without reliance on pre-trained models. Therefore, we re-annotated the dataset and compared the two annotations' performance using the Bidirectional Long Short-Term Memory and Conditional Random Field (BiLSTM-CRF) approach. Fixing the annotation yielded a more consistent result for the organization tag and improved the prediction score by a large margin. Moreover, to take full advantage of pre-trained models, we compared different feature embeddings to determine their impact on the NER task for the Indonesian language.",Towards a Standardized Dataset on {I}ndonesian Named Entity Recognition,"In recent years, named entity recognition (NER) tasks in the Indonesian language have undergone extensive development. There are only a few corpora for Indonesian NER; hence, recent Indonesian NER studies have used diverse datasets. Although an open dataset is available, it includes only approximately 2,000 sentences and contains inconsistent annotations, thereby preventing accurate training of NER models without reliance on pre-trained models. Therefore, we re-annotated the dataset and compared the two annotations' performance using the Bidirectional Long Short-Term Memory and Conditional Random Field (BiLSTM-CRF) approach. Fixing the annotation yielded a more consistent result for the organization tag and improved the prediction score by a large margin. Moreover, to take full advantage of pre-trained models, we compared different feature embeddings to determine their impact on the NER task for the Indonesian language.",Towards a Standardized Dataset on Indonesian Named Entity Recognition,"In recent years, named entity recognition (NER) tasks in the Indonesian language have undergone extensive development. There are only a few corpora for Indonesian NER; hence, recent Indonesian NER studies have used diverse datasets. Although an open dataset is available, it includes only approximately 2,000 sentences and contains inconsistent annotations, thereby preventing accurate training of NER models without reliance on pre-trained models. Therefore, we re-annotated the dataset and compared the two annotations' performance using the Bidirectional Long Short-Term Memory and Conditional Random Field (BiLSTM-CRF) approach. Fixing the annotation yielded a more consistent result for the organization tag and improved the prediction score by a large margin. Moreover, to take full advantage of pre-trained models, we compared different feature embeddings to determine their impact on the NER task for the Indonesian language.",,"Towards a Standardized Dataset on Indonesian Named Entity Recognition. In recent years, named entity recognition (NER) tasks in the Indonesian language have undergone extensive development. There are only a few corpora for Indonesian NER; hence, recent Indonesian NER studies have used diverse datasets. 
Although an open dataset is available, it includes only approximately 2,000 sentences and contains inconsistent annotations, thereby preventing accurate training of NER models without reliance on pre-trained models. Therefore, we re-annotated the dataset and compared the two annotations' performance using the Bidirectional Long Short-Term Memory and Conditional Random Field (BiLSTM-CRF) approach. Fixing the annotation yielded a more consistent result for the organization tag and improved the prediction score by a large margin. Moreover, to take full advantage of pre-trained models, we compared different feature embeddings to determine their impact on the NER task for the Indonesian language.",2020
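A minimal sketch related to the re-annotation comparison in the Indonesian NER record above: it only measures token-level agreement between two annotation versions. The toy IOB tags are invented, and the BiLSTM-CRF model itself is not reproduced here.

from collections import Counter

# Hypothetical token-level tags for the same sentences under the original
# annotation and the re-annotation (the corpus itself is much larger).
original    = ["B-ORG", "O", "B-PER", "I-PER", "O", "B-ORG"]
reannotated = ["B-ORG", "O", "B-PER", "I-PER", "O", "O"]

disagreements = Counter(
    (a, b) for a, b in zip(original, reannotated) if a != b
)
agreement = 1 - sum(disagreements.values()) / len(original)

print(f"token-level agreement: {agreement:.2f}")
print("most frequent disagreements:", disagreements.most_common(3))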
wilcock-jokinen-2015-multilingual,https://aclanthology.org/W15-4623,0,,,,,,,"Multilingual WikiTalk: Wikipedia-based talking robots that switch languages.. At SIGDIAL-2013 our talking robot demonstrated Wikipedia-based spoken information access in English. Our new demo shows a robot speaking different languages, getting content from different language Wikipedias, and switching languages to meet the linguistic capabilities of different dialogue partners.",Multilingual {W}iki{T}alk: {W}ikipedia-based talking robots that switch languages.,"At SIGDIAL-2013 our talking robot demonstrated Wikipedia-based spoken information access in English. Our new demo shows a robot speaking different languages, getting content from different language Wikipedias, and switching languages to meet the linguistic capabilities of different dialogue partners.",Multilingual WikiTalk: Wikipedia-based talking robots that switch languages.,"At SIGDIAL-2013 our talking robot demonstrated Wikipedia-based spoken information access in English. Our new demo shows a robot speaking different languages, getting content from different language Wikipedias, and switching languages to meet the linguistic capabilities of different dialogue partners.",The second author gratefully acknowledges the financial support of Estonian Science Foundation project IUT20-56 (Eesti keele arvutimudelid; computational models for Estonian)We thank Niklas Laxström for his work on the internationalization of WikiTalk and the localized Finnish version. We also thank Kenichi Okonogi and Seiichi Yamamoto for their collaboration on the localized Japanese version.,"Multilingual WikiTalk: Wikipedia-based talking robots that switch languages.. At SIGDIAL-2013 our talking robot demonstrated Wikipedia-based spoken information access in English. Our new demo shows a robot speaking different languages, getting content from different language Wikipedias, and switching languages to meet the linguistic capabilities of different dialogue partners.",2015
schwenk-etal-2009-smt,https://aclanthology.org/W09-0423,0,,,,,,,SMT and SPE Machine Translation Systems for WMT`09. This paper describes the development of several machine translation systems for the 2009 WMT shared task evaluation. We only consider the translation between French and English. We describe a statistical system based on the Moses decoder and a statistical post-editing system using SYSTRAN's rule-based system. We also investigated techniques to automatically extract additional bilingual texts from comparable corpora.,{SMT} and {SPE} Machine Translation Systems for {WMT}{`}09,This paper describes the development of several machine translation systems for the 2009 WMT shared task evaluation. We only consider the translation between French and English. We describe a statistical system based on the Moses decoder and a statistical post-editing system using SYSTRAN's rule-based system. We also investigated techniques to automatically extract additional bilingual texts from comparable corpora.,SMT and SPE Machine Translation Systems for WMT`09,This paper describes the development of several machine translation systems for the 2009 WMT shared task evaluation. We only consider the translation between French and English. We describe a statistical system based on the Moses decoder and a statistical post-editing system using SYSTRAN's rule-based system. We also investigated techniques to automatically extract additional bilingual texts from comparable corpora.,"This work has been partially funded by the French Government under the project INSTAR (ANR JCJC06 143038) and the by the Higher Education Commission, Pakistan through the HEC Overseas Scholarship 2005.",SMT and SPE Machine Translation Systems for WMT`09. This paper describes the development of several machine translation systems for the 2009 WMT shared task evaluation. We only consider the translation between French and English. We describe a statistical system based on the Moses decoder and a statistical post-editing system using SYSTRAN's rule-based system. We also investigated techniques to automatically extract additional bilingual texts from comparable corpora.,2009
ren-etal-2014-positive,https://aclanthology.org/D14-1055,1,,,,deception_detection,,,"Positive Unlabeled Learning for Deceptive Reviews Detection. Deceptive reviews detection has attracted significant attention from both business and research communities. However, due to the difficulty of human labeling needed for supervised learning, the problem remains to be highly challenging. This paper proposed a novel angle to the problem by modeling PU (positive unlabeled) learning. A semi-supervised model, called mixing population and individual property PU learning (MPIPUL), is proposed. Firstly, some reliable negative examples are identified from the unlabeled dataset. Secondly, some representative positive examples and negative examples are generated based on LDA (Latent Dirichlet Allocation). Thirdly, for the remaining unlabeled examples (we call them spy examples), which can not be explicitly identified as positive and negative, two similarity weights are assigned, by which the probability of a spy example belonging to the positive class and the negative class are displayed. Finally, spy examples and their similarity weights are incorporated into SVM (Support Vector Machine) to build an accurate classifier. Experiments on gold-standard dataset demonstrate the effectiveness of MPIPUL which outperforms the state-of-the-art baselines.",Positive Unlabeled Learning for Deceptive Reviews Detection,"Deceptive reviews detection has attracted significant attention from both business and research communities. However, due to the difficulty of human labeling needed for supervised learning, the problem remains to be highly challenging. This paper proposed a novel angle to the problem by modeling PU (positive unlabeled) learning. A semi-supervised model, called mixing population and individual property PU learning (MPIPUL), is proposed. Firstly, some reliable negative examples are identified from the unlabeled dataset. Secondly, some representative positive examples and negative examples are generated based on LDA (Latent Dirichlet Allocation). Thirdly, for the remaining unlabeled examples (we call them spy examples), which can not be explicitly identified as positive and negative, two similarity weights are assigned, by which the probability of a spy example belonging to the positive class and the negative class are displayed. Finally, spy examples and their similarity weights are incorporated into SVM (Support Vector Machine) to build an accurate classifier. Experiments on gold-standard dataset demonstrate the effectiveness of MPIPUL which outperforms the state-of-the-art baselines.",Positive Unlabeled Learning for Deceptive Reviews Detection,"Deceptive reviews detection has attracted significant attention from both business and research communities. However, due to the difficulty of human labeling needed for supervised learning, the problem remains to be highly challenging. This paper proposed a novel angle to the problem by modeling PU (positive unlabeled) learning. A semi-supervised model, called mixing population and individual property PU learning (MPIPUL), is proposed. Firstly, some reliable negative examples are identified from the unlabeled dataset. Secondly, some representative positive examples and negative examples are generated based on LDA (Latent Dirichlet Allocation). 
Thirdly, for the remaining unlabeled examples (we call them spy examples), which can not be explicitly identified as positive and negative, two similarity weights are assigned, by which the probability of a spy example belonging to the positive class and the negative class are displayed. Finally, spy examples and their similarity weights are incorporated into SVM (Support Vector Machine) to build an accurate classifier. Experiments on gold-standard dataset demonstrate the effectiveness of MPIPUL which outperforms the state-of-the-art baselines.",We are grateful to the anonymous reviewers for their thoughtful comments. ,"Positive Unlabeled Learning for Deceptive Reviews Detection. Deceptive reviews detection has attracted significant attention from both business and research communities. However, due to the difficulty of human labeling needed for supervised learning, the problem remains to be highly challenging. This paper proposed a novel angle to the problem by modeling PU (positive unlabeled) learning. A semi-supervised model, called mixing population and individual property PU learning (MPIPUL), is proposed. Firstly, some reliable negative examples are identified from the unlabeled dataset. Secondly, some representative positive examples and negative examples are generated based on LDA (Latent Dirichlet Allocation). Thirdly, for the remaining unlabeled examples (we call them spy examples), which can not be explicitly identified as positive and negative, two similarity weights are assigned, by which the probability of a spy example belonging to the positive class and the negative class are displayed. Finally, spy examples and their similarity weights are incorporated into SVM (Support Vector Machine) to build an accurate classifier. Experiments on gold-standard dataset demonstrate the effectiveness of MPIPUL which outperforms the state-of-the-art baselines.",2014
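A minimal sketch of the "spy" step behind PU learning as summarized in the record above: part of the positive set is hidden inside the unlabeled set, a classifier is trained, and the spies' scores set a threshold for reliable negatives. The toy features, the logistic-regression classifier, and the threshold rule are illustrative stand-ins, not the paper's LDA/SVM pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy features: P = labeled deceptive reviews, U = unlabeled reviews.
rng = np.random.default_rng(1)
P = rng.normal(1.0, 0.5, size=(30, 4))
U = np.vstack([rng.normal(1.0, 0.5, size=(20, 4)),    # hidden positives
               rng.normal(-1.0, 0.5, size=(40, 4))])  # hidden negatives

# Spy step: move a slice of P into U, train P-vs-U, and use the spies'
# predicted probabilities to set a threshold for "reliable negatives".
n_spy = 5
spies, P_rest = P[:n_spy], P[n_spy:]
X = np.vstack([P_rest, U, spies])
y = np.array([1] * len(P_rest) + [0] * (len(U) + n_spy))

clf = LogisticRegression(max_iter=1000).fit(X, y)
spy_scores = clf.predict_proba(spies)[:, 1]
threshold = spy_scores.min()           # illustrative choice of threshold

u_scores = clf.predict_proba(U)[:, 1]
reliable_negatives = U[u_scores < threshold]
print("reliable negatives found:", len(reliable_negatives))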
leinonen-etal-2018-new,https://aclanthology.org/W18-0208,0,,,,,,,"New Baseline in Automatic Speech Recognition for Northern S\'ami. Automatic speech recognition has gone through many changes in recent years. Advances both in computer hardware and machine learning have made it possible to develop systems far more capable and complex than the previous state-of-theart. However, almost all of these improvements have been tested in major wellresourced languages. In this paper, we show that these techniques are capable of yielding improvements even in a small data scenario. We experiment with different deep neural network architectures for acoustic modeling for Northern Sámi and report up to 50% relative error rate reductions. We also run experiments to compare the performance of subwords as language modeling units in Northern Sámi. Tiivistelmä Automaattinen puheentunnistus on kehittynyt viime vuosina merkittävästi. Uudet innovaatiot sekä laitteistossa että koneoppimisessa ovat mahdollistaneet entistä paljon tehokkaammat ja monimutkaisemmat järjestelmät. Suurin osa näistä parannuksista on kuitenkin testattu vain valtakielillä, joiden kehittämiseen on tarjolla runsaasti aineistoja. Tässä paperissa näytämme että nämä tekniikat tuottavat parannuksia myös kielillä, joista aineistoa on vähän. Kokeilemme ja vertailemme erilaisia syviä neuroverkkoja pohjoissaamen akustisina malleina ja onnistumme vähentämään tunnistusvirheitä jopa 50%:lla. Tutkimme myös tapoja pilkkoa sanoja pienempiin osiin pohjoissaamen kielimalleissa.",New Baseline in Automatic Speech Recognition for {N}orthern {S}{\'a}mi,"Automatic speech recognition has gone through many changes in recent years. Advances both in computer hardware and machine learning have made it possible to develop systems far more capable and complex than the previous state-of-theart. However, almost all of these improvements have been tested in major wellresourced languages. In this paper, we show that these techniques are capable of yielding improvements even in a small data scenario. We experiment with different deep neural network architectures for acoustic modeling for Northern Sámi and report up to 50% relative error rate reductions. We also run experiments to compare the performance of subwords as language modeling units in Northern Sámi. Tiivistelmä Automaattinen puheentunnistus on kehittynyt viime vuosina merkittävästi. Uudet innovaatiot sekä laitteistossa että koneoppimisessa ovat mahdollistaneet entistä paljon tehokkaammat ja monimutkaisemmat järjestelmät. Suurin osa näistä parannuksista on kuitenkin testattu vain valtakielillä, joiden kehittämiseen on tarjolla runsaasti aineistoja. Tässä paperissa näytämme että nämä tekniikat tuottavat parannuksia myös kielillä, joista aineistoa on vähän. Kokeilemme ja vertailemme erilaisia syviä neuroverkkoja pohjoissaamen akustisina malleina ja onnistumme vähentämään tunnistusvirheitä jopa 50%:lla. Tutkimme myös tapoja pilkkoa sanoja pienempiin osiin pohjoissaamen kielimalleissa.",New Baseline in Automatic Speech Recognition for Northern S\'ami,"Automatic speech recognition has gone through many changes in recent years. Advances both in computer hardware and machine learning have made it possible to develop systems far more capable and complex than the previous state-of-theart. However, almost all of these improvements have been tested in major wellresourced languages. In this paper, we show that these techniques are capable of yielding improvements even in a small data scenario. 
We experiment with different deep neural network architectures for acoustic modeling for Northern Sámi and report up to 50% relative error rate reductions. We also run experiments to compare the performance of subwords as language modeling units in Northern Sámi. Tiivistelmä Automaattinen puheentunnistus on kehittynyt viime vuosina merkittävästi. Uudet innovaatiot sekä laitteistossa että koneoppimisessa ovat mahdollistaneet entistä paljon tehokkaammat ja monimutkaisemmat järjestelmät. Suurin osa näistä parannuksista on kuitenkin testattu vain valtakielillä, joiden kehittämiseen on tarjolla runsaasti aineistoja. Tässä paperissa näytämme että nämä tekniikat tuottavat parannuksia myös kielillä, joista aineistoa on vähän. Kokeilemme ja vertailemme erilaisia syviä neuroverkkoja pohjoissaamen akustisina malleina ja onnistumme vähentämään tunnistusvirheitä jopa 50%:lla. Tutkimme myös tapoja pilkkoa sanoja pienempiin osiin pohjoissaamen kielimalleissa.","We thank the University of Tromsø for the access to their Northern Sámi datasets and acknowledge the computational resources provided by the Aalto Science-IT project.This work was financially supported by the Tekes Challenge Finland project TELLme, Academy of Finland under the grant number 251170, and Kone foundation.","New Baseline in Automatic Speech Recognition for Northern S\'ami. Automatic speech recognition has gone through many changes in recent years. Advances both in computer hardware and machine learning have made it possible to develop systems far more capable and complex than the previous state-of-theart. However, almost all of these improvements have been tested in major wellresourced languages. In this paper, we show that these techniques are capable of yielding improvements even in a small data scenario. We experiment with different deep neural network architectures for acoustic modeling for Northern Sámi and report up to 50% relative error rate reductions. We also run experiments to compare the performance of subwords as language modeling units in Northern Sámi. Tiivistelmä Automaattinen puheentunnistus on kehittynyt viime vuosina merkittävästi. Uudet innovaatiot sekä laitteistossa että koneoppimisessa ovat mahdollistaneet entistä paljon tehokkaammat ja monimutkaisemmat järjestelmät. Suurin osa näistä parannuksista on kuitenkin testattu vain valtakielillä, joiden kehittämiseen on tarjolla runsaasti aineistoja. Tässä paperissa näytämme että nämä tekniikat tuottavat parannuksia myös kielillä, joista aineistoa on vähän. Kokeilemme ja vertailemme erilaisia syviä neuroverkkoja pohjoissaamen akustisina malleina ja onnistumme vähentämään tunnistusvirheitä jopa 50%:lla. Tutkimme myös tapoja pilkkoa sanoja pienempiin osiin pohjoissaamen kielimalleissa.",2018
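A minimal sketch of the relative error-rate-reduction arithmetic reported in the Northern Sámi ASR record above, with a small word-error-rate function; the reference and hypothesis strings are invented.

def wer(reference, hypothesis):
    """Word error rate via Levenshtein distance over whitespace tokens."""
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(r)][len(h)] / max(len(r), 1)

# Hypothetical outputs from a baseline and an improved acoustic model.
ref = "mun lean sápmelaš"
baseline_wer = wer(ref, "mun lean sapmi")
improved_wer = wer(ref, "mun lean sápmelaš")
relative_reduction = (baseline_wer - improved_wer) / baseline_wer
print(f"relative WER reduction: {relative_reduction:.0%}")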
randria-etal-2020-subjective,https://aclanthology.org/2020.lrec-1.286,0,,,,,,,"Subjective Evaluation of Comprehensibility in Movie Interactions. Various research works have dealt with the comprehensibility of textual, audio, or audiovisual documents, and showed that factors related to text (e.g., linguistic complexity), sound (e.g., speech intelligibility), image (e.g., presence of visual context), or even to cognition and emotion can play a major role in the ability of humans to understand the semantic and pragmatic contents of a given document. However, to date, no reference human data is available that could help investigating the role of the linguistic and extralinguistic information present at these different levels (i.e., linguistic, audio/phonetic, and visual) in multimodal documents (e.g., movies). The present work aimed at building a corpus of human annotations that would help to study further how much and in which way the human perception of comprehensibility (i.e., of the difficulty of comprehension, referred to in this paper as overall difficulty) of audiovisual documents is affected (1) by lexical complexity, grammatical complexity, and speech intelligibility, and (2) by the modality/ies (text, audio, video) available to the human recipient. To this end, a corpus of 55 short movie clips was created. Fifteen experts (language teachers) assessed the overall difficulty, the lexical difficulty, the grammatical difficulty and the speech intelligibility of the clips under different conditions in which one or more modality/ies was/were available. A study of the distribution of the experts' ratings showed that the perceived difficulty of the 55 clips range from very easy to very difficult, in all the aspects studied except for the grammatical complexity, for which most of the clips were considered as easy or moderately difficult. The study reflected the relationship existing between lexical complexity and difficulty, grammatical complexity and difficulty and speech intelligibility and difficulty, as lexical complexity and speech intelligibility are strongly and positively correlated to difficulty and the grammatical difficulty is moderately and positively correlated to difficulty. A multiple linear regression with difficulty as the dependent variable and lexical complexity, grammatical complexity and intelligibility as the independent variable achieved an adjusted R 2 of 0.82, indicating that these three variables explain most of the variance associated with the overall perceived difficulty. The results also suggest that documents were considered as most difficult when only the audio modality was available, and that adding text and/or video modalities allowed to decrease the difficulty, the difficulty scores being minimized by the combination of text, audio and video modalities.",Subjective Evaluation of Comprehensibility in Movie Interactions,"Various research works have dealt with the comprehensibility of textual, audio, or audiovisual documents, and showed that factors related to text (e.g., linguistic complexity), sound (e.g., speech intelligibility), image (e.g., presence of visual context), or even to cognition and emotion can play a major role in the ability of humans to understand the semantic and pragmatic contents of a given document. 
However, to date, no reference human data is available that could help investigating the role of the linguistic and extralinguistic information present at these different levels (i.e., linguistic, audio/phonetic, and visual) in multimodal documents (e.g., movies). The present work aimed at building a corpus of human annotations that would help to study further how much and in which way the human perception of comprehensibility (i.e., of the difficulty of comprehension, referred to in this paper as overall difficulty) of audiovisual documents is affected (1) by lexical complexity, grammatical complexity, and speech intelligibility, and (2) by the modality/ies (text, audio, video) available to the human recipient. To this end, a corpus of 55 short movie clips was created. Fifteen experts (language teachers) assessed the overall difficulty, the lexical difficulty, the grammatical difficulty and the speech intelligibility of the clips under different conditions in which one or more modality/ies was/were available. A study of the distribution of the experts' ratings showed that the perceived difficulty of the 55 clips range from very easy to very difficult, in all the aspects studied except for the grammatical complexity, for which most of the clips were considered as easy or moderately difficult. The study reflected the relationship existing between lexical complexity and difficulty, grammatical complexity and difficulty and speech intelligibility and difficulty, as lexical complexity and speech intelligibility are strongly and positively correlated to difficulty and the grammatical difficulty is moderately and positively correlated to difficulty. A multiple linear regression with difficulty as the dependent variable and lexical complexity, grammatical complexity and intelligibility as the independent variable achieved an adjusted R 2 of 0.82, indicating that these three variables explain most of the variance associated with the overall perceived difficulty. The results also suggest that documents were considered as most difficult when only the audio modality was available, and that adding text and/or video modalities allowed to decrease the difficulty, the difficulty scores being minimized by the combination of text, audio and video modalities.",Subjective Evaluation of Comprehensibility in Movie Interactions,"Various research works have dealt with the comprehensibility of textual, audio, or audiovisual documents, and showed that factors related to text (e.g., linguistic complexity), sound (e.g., speech intelligibility), image (e.g., presence of visual context), or even to cognition and emotion can play a major role in the ability of humans to understand the semantic and pragmatic contents of a given document. However, to date, no reference human data is available that could help investigating the role of the linguistic and extralinguistic information present at these different levels (i.e., linguistic, audio/phonetic, and visual) in multimodal documents (e.g., movies). The present work aimed at building a corpus of human annotations that would help to study further how much and in which way the human perception of comprehensibility (i.e., of the difficulty of comprehension, referred to in this paper as overall difficulty) of audiovisual documents is affected (1) by lexical complexity, grammatical complexity, and speech intelligibility, and (2) by the modality/ies (text, audio, video) available to the human recipient. To this end, a corpus of 55 short movie clips was created. 
Fifteen experts (language teachers) assessed the overall difficulty, the lexical difficulty, the grammatical difficulty and the speech intelligibility of the clips under different conditions in which one or more modality/ies was/were available. A study of the distribution of the experts' ratings showed that the perceived difficulty of the 55 clips range from very easy to very difficult, in all the aspects studied except for the grammatical complexity, for which most of the clips were considered as easy or moderately difficult. The study reflected the relationship existing between lexical complexity and difficulty, grammatical complexity and difficulty and speech intelligibility and difficulty, as lexical complexity and speech intelligibility are strongly and positively correlated to difficulty and the grammatical difficulty is moderately and positively correlated to difficulty. A multiple linear regression with difficulty as the dependent variable and lexical complexity, grammatical complexity and intelligibility as the independent variable achieved an adjusted R 2 of 0.82, indicating that these three variables explain most of the variance associated with the overall perceived difficulty. The results also suggest that documents were considered as most difficult when only the audio modality was available, and that adding text and/or video modalities allowed to decrease the difficulty, the difficulty scores being minimized by the combination of text, audio and video modalities.",,"Subjective Evaluation of Comprehensibility in Movie Interactions. Various research works have dealt with the comprehensibility of textual, audio, or audiovisual documents, and showed that factors related to text (e.g., linguistic complexity), sound (e.g., speech intelligibility), image (e.g., presence of visual context), or even to cognition and emotion can play a major role in the ability of humans to understand the semantic and pragmatic contents of a given document. However, to date, no reference human data is available that could help investigating the role of the linguistic and extralinguistic information present at these different levels (i.e., linguistic, audio/phonetic, and visual) in multimodal documents (e.g., movies). The present work aimed at building a corpus of human annotations that would help to study further how much and in which way the human perception of comprehensibility (i.e., of the difficulty of comprehension, referred to in this paper as overall difficulty) of audiovisual documents is affected (1) by lexical complexity, grammatical complexity, and speech intelligibility, and (2) by the modality/ies (text, audio, video) available to the human recipient. To this end, a corpus of 55 short movie clips was created. Fifteen experts (language teachers) assessed the overall difficulty, the lexical difficulty, the grammatical difficulty and the speech intelligibility of the clips under different conditions in which one or more modality/ies was/were available. A study of the distribution of the experts' ratings showed that the perceived difficulty of the 55 clips range from very easy to very difficult, in all the aspects studied except for the grammatical complexity, for which most of the clips were considered as easy or moderately difficult. 
The study reflected the relationship existing between lexical complexity and difficulty, grammatical complexity and difficulty and speech intelligibility and difficulty, as lexical complexity and speech intelligibility are strongly and positively correlated to difficulty and the grammatical difficulty is moderately and positively correlated to difficulty. A multiple linear regression with difficulty as the dependent variable and lexical complexity, grammatical complexity and intelligibility as the independent variable achieved an adjusted R 2 of 0.82, indicating that these three variables explain most of the variance associated with the overall perceived difficulty. The results also suggest that documents were considered as most difficult when only the audio modality was available, and that adding text and/or video modalities allowed to decrease the difficulty, the difficulty scores being minimized by the combination of text, audio and video modalities.",2020
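A minimal sketch of the analysis described in the record above: a multiple linear regression of overall difficulty on three predictors, followed by the adjusted R² formula. The ratings are simulated; only the computation mirrors the described setup.

import numpy as np
from sklearn.linear_model import LinearRegression

# Simulated ratings standing in for the 55 clips: columns play the role of
# lexical complexity, grammatical complexity, and speech (un)intelligibility.
rng = np.random.default_rng(2)
X = rng.uniform(1, 5, size=(55, 3))
noise = rng.normal(0, 0.3, size=55)
y = 0.9 * X[:, 0] + 0.3 * X[:, 1] + 0.8 * X[:, 2] + noise   # overall difficulty

model = LinearRegression().fit(X, y)
r2 = model.score(X, y)
n, p = X.shape
adjusted_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(f"R^2 = {r2:.2f}, adjusted R^2 = {adjusted_r2:.2f}")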
li-etal-2011-engtube,https://aclanthology.org/2011.mtsummit-systems.2,1,,,,education,,,"ENGtube: an Integrated Subtitle Environment for ESL. Movies and TV shows are probably the most attractive media of language learning, and the associated subtitle is an important resource in the learning process. Despite its significance, subtitle has never been exploited effectively as it could be. In this paper we present ENGtube, which is a video service for ESL (English as Second Language) learners. The key component of this service is an integrated environment for displaying the video clips, the source subtitle and the translated subtitle with rich information at users' disposal. The rich information of subtitle is produced by various speech and language technologies.",{ENG}tube: an Integrated Subtitle Environment for {ESL},"Movies and TV shows are probably the most attractive media of language learning, and the associated subtitle is an important resource in the learning process. Despite its significance, subtitle has never been exploited effectively as it could be. In this paper we present ENGtube, which is a video service for ESL (English as Second Language) learners. The key component of this service is an integrated environment for displaying the video clips, the source subtitle and the translated subtitle with rich information at users' disposal. The rich information of subtitle is produced by various speech and language technologies.",ENGtube: an Integrated Subtitle Environment for ESL,"Movies and TV shows are probably the most attractive media of language learning, and the associated subtitle is an important resource in the learning process. Despite its significance, subtitle has never been exploited effectively as it could be. In this paper we present ENGtube, which is a video service for ESL (English as Second Language) learners. The key component of this service is an integrated environment for displaying the video clips, the source subtitle and the translated subtitle with rich information at users' disposal. The rich information of subtitle is produced by various speech and language technologies.",,"ENGtube: an Integrated Subtitle Environment for ESL. Movies and TV shows are probably the most attractive media of language learning, and the associated subtitle is an important resource in the learning process. Despite its significance, subtitle has never been exploited effectively as it could be. In this paper we present ENGtube, which is a video service for ESL (English as Second Language) learners. The key component of this service is an integrated environment for displaying the video clips, the source subtitle and the translated subtitle with rich information at users' disposal. The rich information of subtitle is produced by various speech and language technologies.",2011
akama-etal-2018-unsupervised,https://aclanthology.org/P18-2091,0,,,,,,,"Unsupervised Learning of Style-sensitive Word Vectors. This paper presents the first study aimed at capturing stylistic similarity between words in an unsupervised manner. We propose extending the continuous bag of words (CBOW) model (Mikolov et al., 2013a) to learn style-sensitive word vectors using a wider context window under the assumption that the style of all the words in an utterance is consistent. In addition, we introduce a novel task to predict lexical stylistic similarity and to create a benchmark dataset for this task. Our experiment with this dataset supports our assumption and demonstrates that the proposed extensions contribute to the acquisition of style-sensitive word embeddings.",Unsupervised Learning of Style-sensitive Word Vectors,"This paper presents the first study aimed at capturing stylistic similarity between words in an unsupervised manner. We propose extending the continuous bag of words (CBOW) model (Mikolov et al., 2013a) to learn style-sensitive word vectors using a wider context window under the assumption that the style of all the words in an utterance is consistent. In addition, we introduce a novel task to predict lexical stylistic similarity and to create a benchmark dataset for this task. Our experiment with this dataset supports our assumption and demonstrates that the proposed extensions contribute to the acquisition of style-sensitive word embeddings.",Unsupervised Learning of Style-sensitive Word Vectors,"This paper presents the first study aimed at capturing stylistic similarity between words in an unsupervised manner. We propose extending the continuous bag of words (CBOW) model (Mikolov et al., 2013a) to learn style-sensitive word vectors using a wider context window under the assumption that the style of all the words in an utterance is consistent. In addition, we introduce a novel task to predict lexical stylistic similarity and to create a benchmark dataset for this task. Our experiment with this dataset supports our assumption and demonstrates that the proposed extensions contribute to the acquisition of style-sensitive word embeddings.",This work was supported by JSPS KAKENHI Grant Number 15H01702. We thank our anonymous reviewers for their helpful comments and suggestions.,"Unsupervised Learning of Style-sensitive Word Vectors. This paper presents the first study aimed at capturing stylistic similarity between words in an unsupervised manner. We propose extending the continuous bag of words (CBOW) model (Mikolov et al., 2013a) to learn style-sensitive word vectors using a wider context window under the assumption that the style of all the words in an utterance is consistent. In addition, we introduce a novel task to predict lexical stylistic similarity and to create a benchmark dataset for this task. Our experiment with this dataset supports our assumption and demonstrates that the proposed extensions contribute to the acquisition of style-sensitive word embeddings.",2018
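A minimal sketch of the wider-context-window ingredient described in the record above, using off-the-shelf gensim CBOW (sg=0) with a narrow versus a wide window; the toy utterances are invented, and the paper's separate style vectors are not reproduced here.

from gensim.models import Word2Vec  # assumes gensim >= 4

# Toy utterances; the paper trains on large dialogue data and additionally
# learns style-sensitive vectors, which this off-the-shelf sketch omits.
utterances = [
    "hey wanna grab some food".split(),
    "would you care to join me for dinner".split(),
    "yo that movie was awesome".split(),
    "the film was thoroughly enjoyable".split(),
]

# Narrow window ~ local syntactic/semantic neighbours; wide window ~
# utterance-level co-occurrence, the assumption exploited in the abstract.
semantic_model = Word2Vec(utterances, vector_size=16, window=2, sg=0, min_count=1, seed=0)
style_model = Word2Vec(utterances, vector_size=16, window=10, sg=0, min_count=1, seed=0)

print(style_model.wv.most_similar("awesome", topn=3))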
li-etal-2020-using,https://aclanthology.org/2020.coling-main.132,0,,,,,,,"Using a Penalty-based Loss Re-estimation Method to Improve Implicit Discourse Relation Classification. We tackle implicit discourse relation classification, a task of automatically determining semantic relationships between arguments. The attention-worthy words in arguments are crucial clues for classifying the discourse relations. Attention mechanisms have been proven effective in highlighting the attention-worthy words during encoding. However, our survey shows that some inessential words are unintentionally misjudged as the attention-worthy words and, therefore, assigned heavier attention weights than should be. We propose a penalty-based loss re-estimation method to regulate the attention learning process, integrating penalty coefficients into the computation of loss by means of overstability of attention weight distributions. We conduct experiments on the Penn Discourse TreeBank (PDTB) corpus. The test results show that our loss re-estimation method leads to substantial improvements for a variety of attention mechanisms.",Using a Penalty-based Loss Re-estimation Method to Improve Implicit Discourse Relation Classification,"We tackle implicit discourse relation classification, a task of automatically determining semantic relationships between arguments. The attention-worthy words in arguments are crucial clues for classifying the discourse relations. Attention mechanisms have been proven effective in highlighting the attention-worthy words during encoding. However, our survey shows that some inessential words are unintentionally misjudged as the attention-worthy words and, therefore, assigned heavier attention weights than should be. We propose a penalty-based loss re-estimation method to regulate the attention learning process, integrating penalty coefficients into the computation of loss by means of overstability of attention weight distributions. We conduct experiments on the Penn Discourse TreeBank (PDTB) corpus. The test results show that our loss re-estimation method leads to substantial improvements for a variety of attention mechanisms.",Using a Penalty-based Loss Re-estimation Method to Improve Implicit Discourse Relation Classification,"We tackle implicit discourse relation classification, a task of automatically determining semantic relationships between arguments. The attention-worthy words in arguments are crucial clues for classifying the discourse relations. Attention mechanisms have been proven effective in highlighting the attention-worthy words during encoding. However, our survey shows that some inessential words are unintentionally misjudged as the attention-worthy words and, therefore, assigned heavier attention weights than should be. We propose a penalty-based loss re-estimation method to regulate the attention learning process, integrating penalty coefficients into the computation of loss by means of overstability of attention weight distributions. We conduct experiments on the Penn Discourse TreeBank (PDTB) corpus. The test results show that our loss re-estimation method leads to substantial improvements for a variety of attention mechanisms.","We are grateful for the insightful comments of reviewers. This work is supported by the national NSF of China via Grant Nos. 62076174, 61672368, 61751206 and 61672367, as well as the Stability Support Program of National Defense Key Laboratory of Science and Technology via Grant No. 
61421100407.","Using a Penalty-based Loss Re-estimation Method to Improve Implicit Discourse Relation Classification. We tackle implicit discourse relation classification, a task of automatically determining semantic relationships between arguments. The attention-worthy words in arguments are crucial clues for classifying the discourse relations. Attention mechanisms have been proven effective in highlighting the attention-worthy words during encoding. However, our survey shows that some inessential words are unintentionally misjudged as the attention-worthy words and, therefore, assigned heavier attention weights than should be. We propose a penalty-based loss re-estimation method to regulate the attention learning process, integrating penalty coefficients into the computation of loss by means of overstability of attention weight distributions. We conduct experiments on the Penn Discourse TreeBank (PDTB) corpus. The test results show that our loss re-estimation method leads to substantial improvements for a variety of attention mechanisms.",2020
zhang-wallace-2017-sensitivity,https://aclanthology.org/I17-1026,0,,,,,,,"A Sensitivity Analysis of (and Practitioners' Guide to) Convolutional Neural Networks for Sentence Classification. Convolutional Neural Networks (CNNs) have recently achieved remarkably strong performance on the practically important task of sentence classification (Kim, 2014; Kalchbrenner et al., 2014; Johnson and Zhang, 2014; Zhang et al., 2016). However, these models require practitioners to specify an exact model architecture and set accompanying hyperparameters, including the filter region size, regularization parameters, and so on. It is currently unknown how sensitive model performance is to changes in these configurations for the task of sentence classification. We thus conduct a sensitivity analysis of one-layer CNNs to explore the effect of architecture components on model performance; our aim is to distinguish between important and comparatively inconsequential design decisions for sentence classification. We focus on one-layer CNNs (to the exclusion of more complex models) due to their comparative simplicity and strong empirical performance, which makes it a modern standard baseline method akin to Support Vector Machine (SVMs) and logistic regression. We derive practical advice from our extensive empirical results for those interested in getting the most out of CNNs for sentence classification in real world settings.",A Sensitivity Analysis of (and Practitioners{'} Guide to) Convolutional Neural Networks for Sentence Classification,"Convolutional Neural Networks (CNNs) have recently achieved remarkably strong performance on the practically important task of sentence classification (Kim, 2014; Kalchbrenner et al., 2014; Johnson and Zhang, 2014; Zhang et al., 2016). However, these models require practitioners to specify an exact model architecture and set accompanying hyperparameters, including the filter region size, regularization parameters, and so on. It is currently unknown how sensitive model performance is to changes in these configurations for the task of sentence classification. We thus conduct a sensitivity analysis of one-layer CNNs to explore the effect of architecture components on model performance; our aim is to distinguish between important and comparatively inconsequential design decisions for sentence classification. We focus on one-layer CNNs (to the exclusion of more complex models) due to their comparative simplicity and strong empirical performance, which makes it a modern standard baseline method akin to Support Vector Machine (SVMs) and logistic regression. We derive practical advice from our extensive empirical results for those interested in getting the most out of CNNs for sentence classification in real world settings.",A Sensitivity Analysis of (and Practitioners' Guide to) Convolutional Neural Networks for Sentence Classification,"Convolutional Neural Networks (CNNs) have recently achieved remarkably strong performance on the practically important task of sentence classification (Kim, 2014; Kalchbrenner et al., 2014; Johnson and Zhang, 2014; Zhang et al., 2016). However, these models require practitioners to specify an exact model architecture and set accompanying hyperparameters, including the filter region size, regularization parameters, and so on. It is currently unknown how sensitive model performance is to changes in these configurations for the task of sentence classification. 
We thus conduct a sensitivity analysis of one-layer CNNs to explore the effect of architecture components on model performance; our aim is to distinguish between important and comparatively inconsequential design decisions for sentence classification. We focus on one-layer CNNs (to the exclusion of more complex models) due to their comparative simplicity and strong empirical performance, which makes it a modern standard baseline method akin to Support Vector Machine (SVMs) and logistic regression. We derive practical advice from our extensive empirical results for those interested in getting the most out of CNNs for sentence classification in real world settings.",,"A Sensitivity Analysis of (and Practitioners' Guide to) Convolutional Neural Networks for Sentence Classification. Convolutional Neural Networks (CNNs) have recently achieved remarkably strong performance on the practically important task of sentence classification (Kim, 2014; Kalchbrenner et al., 2014; Johnson and Zhang, 2014; Zhang et al., 2016). However, these models require practitioners to specify an exact model architecture and set accompanying hyperparameters, including the filter region size, regularization parameters, and so on. It is currently unknown how sensitive model performance is to changes in these configurations for the task of sentence classification. We thus conduct a sensitivity analysis of one-layer CNNs to explore the effect of architecture components on model performance; our aim is to distinguish between important and comparatively inconsequential design decisions for sentence classification. We focus on one-layer CNNs (to the exclusion of more complex models) due to their comparative simplicity and strong empirical performance, which makes it a modern standard baseline method akin to Support Vector Machine (SVMs) and logistic regression. We derive practical advice from our extensive empirical results for those interested in getting the most out of CNNs for sentence classification in real world settings.",2017
widdows-dorow-2002-graph,https://aclanthology.org/C02-1114,0,,,,,,,"A Graph Model for Unsupervised Lexical Acquisition. This paper presents an unsupervised method for assembling semantic knowledge from a part-of-speech tagged corpus using graph algorithms. The graph model is built by linking pairs of words which participate in particular syntactic relationships. We focus on the symmetric relationship between pairs of nouns which occur together in lists. An incremental cluster-building algorithm using this part of the graph achieves 82% accuracy at a lexical acquisition task, evaluated against WordNet classes. The model naturally realises domain and corpus specific ambiguities as distinct components in the graph surrounding an ambiguous word.",A Graph Model for Unsupervised Lexical Acquisition,"This paper presents an unsupervised method for assembling semantic knowledge from a part-of-speech tagged corpus using graph algorithms. The graph model is built by linking pairs of words which participate in particular syntactic relationships. We focus on the symmetric relationship between pairs of nouns which occur together in lists. An incremental cluster-building algorithm using this part of the graph achieves 82% accuracy at a lexical acquisition task, evaluated against WordNet classes. The model naturally realises domain and corpus specific ambiguities as distinct components in the graph surrounding an ambiguous word.",A Graph Model for Unsupervised Lexical Acquisition,"This paper presents an unsupervised method for assembling semantic knowledge from a part-of-speech tagged corpus using graph algorithms. The graph model is built by linking pairs of words which participate in particular syntactic relationships. We focus on the symmetric relationship between pairs of nouns which occur together in lists. An incremental cluster-building algorithm using this part of the graph achieves 82% accuracy at a lexical acquisition task, evaluated against WordNet classes. The model naturally realises domain and corpus specific ambiguities as distinct components in the graph surrounding an ambiguous word.","The authors would like to thank the anonymous reviewers whose comments were a great help in making this paper more focussed: any shortcomings remain entirely our own responsibility. This research was supported in part by the Research Collaboration between the NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation and CSLI, Stanford University, and by EC/NSF grant IST-1999-11438 for the MUCHMORE project.","A Graph Model for Unsupervised Lexical Acquisition. This paper presents an unsupervised method for assembling semantic knowledge from a part-of-speech tagged corpus using graph algorithms. The graph model is built by linking pairs of words which participate in particular syntactic relationships. We focus on the symmetric relationship between pairs of nouns which occur together in lists. An incremental cluster-building algorithm using this part of the graph achieves 82% accuracy at a lexical acquisition task, evaluated against WordNet classes. The model naturally realises domain and corpus specific ambiguities as distinct components in the graph surrounding an ambiguous word.",2002
sogaard-etal-2015-inverted,https://aclanthology.org/P15-1165,0,,,,,,,"Inverted indexing for cross-lingual NLP. We present a novel, count-based approach to obtaining inter-lingual word representations based on inverted indexing of Wikipedia. We present experiments applying these representations to 17 datasets in document classification, POS tagging, dependency parsing, and word alignment. Our approach has the advantage that it is simple, computationally efficient and almost parameter-free, and, more importantly, it enables multi-source crosslingual learning. In 14/17 cases, we improve over using state-of-the-art bilingual embeddings.",Inverted indexing for cross-lingual {NLP},"We present a novel, count-based approach to obtaining inter-lingual word representations based on inverted indexing of Wikipedia. We present experiments applying these representations to 17 datasets in document classification, POS tagging, dependency parsing, and word alignment. Our approach has the advantage that it is simple, computationally efficient and almost parameter-free, and, more importantly, it enables multi-source crosslingual learning. In 14/17 cases, we improve over using state-of-the-art bilingual embeddings.",Inverted indexing for cross-lingual NLP,"We present a novel, count-based approach to obtaining inter-lingual word representations based on inverted indexing of Wikipedia. We present experiments applying these representations to 17 datasets in document classification, POS tagging, dependency parsing, and word alignment. Our approach has the advantage that it is simple, computationally efficient and almost parameter-free, and, more importantly, it enables multi-source crosslingual learning. In 14/17 cases, we improve over using state-of-the-art bilingual embeddings.",,"Inverted indexing for cross-lingual NLP. We present a novel, count-based approach to obtaining inter-lingual word representations based on inverted indexing of Wikipedia. We present experiments applying these representations to 17 datasets in document classification, POS tagging, dependency parsing, and word alignment. Our approach has the advantage that it is simple, computationally efficient and almost parameter-free, and, more importantly, it enables multi-source crosslingual learning. In 14/17 cases, we improve over using state-of-the-art bilingual embeddings.",2015
he-etal-2021-fast,https://aclanthology.org/2021.acl-long.246,0,,,,,,,"Fast and Accurate Neural Machine Translation with Translation Memory. It is generally believed that a translation memory (TM) should be beneficial for machine translation tasks. Unfortunately, existing wisdom demonstrates the superiority of TMbased neural machine translation (NMT) only on the TM-specialized translation tasks rather than general tasks, with a non-negligible computational overhead. In this paper, we propose a fast and accurate approach to TM-based NMT within the Transformer framework: the model architecture is simple and employs a single bilingual sentence as its TM, leading to efficient training and inference; and its parameters are effectively optimized through a novel training criterion. Extensive experiments on six TM-specialized tasks show that the proposed approach substantially surpasses several strong baselines that use multiple TMs, in terms of BLEU and running time. In particular, the proposed approach also advances the strong baselines on two general tasks (WMT news Zh→En and En→De).",Fast and Accurate Neural Machine Translation with Translation Memory,"It is generally believed that a translation memory (TM) should be beneficial for machine translation tasks. Unfortunately, existing wisdom demonstrates the superiority of TMbased neural machine translation (NMT) only on the TM-specialized translation tasks rather than general tasks, with a non-negligible computational overhead. In this paper, we propose a fast and accurate approach to TM-based NMT within the Transformer framework: the model architecture is simple and employs a single bilingual sentence as its TM, leading to efficient training and inference; and its parameters are effectively optimized through a novel training criterion. Extensive experiments on six TM-specialized tasks show that the proposed approach substantially surpasses several strong baselines that use multiple TMs, in terms of BLEU and running time. In particular, the proposed approach also advances the strong baselines on two general tasks (WMT news Zh→En and En→De).",Fast and Accurate Neural Machine Translation with Translation Memory,"It is generally believed that a translation memory (TM) should be beneficial for machine translation tasks. Unfortunately, existing wisdom demonstrates the superiority of TMbased neural machine translation (NMT) only on the TM-specialized translation tasks rather than general tasks, with a non-negligible computational overhead. In this paper, we propose a fast and accurate approach to TM-based NMT within the Transformer framework: the model architecture is simple and employs a single bilingual sentence as its TM, leading to efficient training and inference; and its parameters are effectively optimized through a novel training criterion. Extensive experiments on six TM-specialized tasks show that the proposed approach substantially surpasses several strong baselines that use multiple TMs, in terms of BLEU and running time. In particular, the proposed approach also advances the strong baselines on two general tasks (WMT news Zh→En and En→De).",This work is supported by NSFC (grant No. 61877051). We thank Jiatao Gu and Mengzhou Xia for providing their preprocessed datasets. We also thank the anonymous reviewers for providing valuable suggestions and feedbacks.,"Fast and Accurate Neural Machine Translation with Translation Memory. It is generally believed that a translation memory (TM) should be beneficial for machine translation tasks. 
Unfortunately, existing wisdom demonstrates the superiority of TMbased neural machine translation (NMT) only on the TM-specialized translation tasks rather than general tasks, with a non-negligible computational overhead. In this paper, we propose a fast and accurate approach to TM-based NMT within the Transformer framework: the model architecture is simple and employs a single bilingual sentence as its TM, leading to efficient training and inference; and its parameters are effectively optimized through a novel training criterion. Extensive experiments on six TM-specialized tasks show that the proposed approach substantially surpasses several strong baselines that use multiple TMs, in terms of BLEU and running time. In particular, the proposed approach also advances the strong baselines on two general tasks (WMT news Zh→En and En→De).",2021
daniels-2005-parsing,https://aclanthology.org/W05-1523,0,,,,,,,"Parsing Generalized ID/LP Grammars. The Generalized ID/LP (GIDLP) grammar formalism (Daniels and Meurers 2004a,b; Daniels 2005) was developed to serve as a processing backbone for linearization-HPSG grammars, separating the declaration of the recursive constituent structure from the declaration of word order domains. This paper shows that the key aspects of this formalism - the ability for grammar writers to explicitly declare word order domains and to arrange the right-hand side of each grammar rule to minimize the parser's search space - lead directly to improvements in parsing efficiency.",Parsing Generalized {ID}/{LP} Grammars,"The Generalized ID/LP (GIDLP) grammar formalism (Daniels and Meurers 2004a,b; Daniels 2005) was developed to serve as a processing backbone for linearization-HPSG grammars, separating the declaration of the recursive constituent structure from the declaration of word order domains. This paper shows that the key aspects of this formalism - the ability for grammar writers to explicitly declare word order domains and to arrange the right-hand side of each grammar rule to minimize the parser's search space - lead directly to improvements in parsing efficiency.",Parsing Generalized ID/LP Grammars,"The Generalized ID/LP (GIDLP) grammar formalism (Daniels and Meurers 2004a,b; Daniels 2005) was developed to serve as a processing backbone for linearization-HPSG grammars, separating the declaration of the recursive constituent structure from the declaration of word order domains. This paper shows that the key aspects of this formalism - the ability for grammar writers to explicitly declare word order domains and to arrange the right-hand side of each grammar rule to minimize the parser's search space - lead directly to improvements in parsing efficiency.",,"Parsing Generalized ID/LP Grammars. The Generalized ID/LP (GIDLP) grammar formalism (Daniels and Meurers 2004a,b; Daniels 2005) was developed to serve as a processing backbone for linearization-HPSG grammars, separating the declaration of the recursive constituent structure from the declaration of word order domains. This paper shows that the key aspects of this formalism - the ability for grammar writers to explicitly declare word order domains and to arrange the right-hand side of each grammar rule to minimize the parser's search space - lead directly to improvements in parsing efficiency.",2005
bramsen-etal-2011-extracting,https://aclanthology.org/P11-1078,1,,,,peace_justice_and_strong_institutions,partnership,,"Extracting Social Power Relationships from Natural Language. Sociolinguists have long argued that social context influences language use in all manner of ways, resulting in lects 1. This paper explores a text classification problem we will call lect modeling, an example of what has been termed computational sociolinguistics. In particular, we use machine learning techniques to identify social power relationships between members of a social network, based purely on the content of their interpersonal communication. We rely on statistical methods, as opposed to language-specific engineering, to extract features which represent vocabulary and grammar usage indicative of social power lect. We then apply support vector machines to model the social power lects representing superior-subordinate communication in the Enron email corpus. Our results validate the treatment of lect modeling as a text classification problem-albeit a hard one-and constitute a case for future research in computational sociolinguistics. * This work was done while these authors were at SET Corporation, an SAIC Company. 1 Fields that deal with society and language have inconsistent terminology; ""lect"" is chosen here because ""lect"" has no other English definitions and the etymology of the word gives it the sense we consider most relevant.",Extracting Social Power Relationships from Natural Language,"Sociolinguists have long argued that social context influences language use in all manner of ways, resulting in lects 1. This paper explores a text classification problem we will call lect modeling, an example of what has been termed computational sociolinguistics. In particular, we use machine learning techniques to identify social power relationships between members of a social network, based purely on the content of their interpersonal communication. We rely on statistical methods, as opposed to language-specific engineering, to extract features which represent vocabulary and grammar usage indicative of social power lect. We then apply support vector machines to model the social power lects representing superior-subordinate communication in the Enron email corpus. Our results validate the treatment of lect modeling as a text classification problem-albeit a hard one-and constitute a case for future research in computational sociolinguistics. * This work was done while these authors were at SET Corporation, an SAIC Company. 1 Fields that deal with society and language have inconsistent terminology; ""lect"" is chosen here because ""lect"" has no other English definitions and the etymology of the word gives it the sense we consider most relevant.",Extracting Social Power Relationships from Natural Language,"Sociolinguists have long argued that social context influences language use in all manner of ways, resulting in lects 1. This paper explores a text classification problem we will call lect modeling, an example of what has been termed computational sociolinguistics. In particular, we use machine learning techniques to identify social power relationships between members of a social network, based purely on the content of their interpersonal communication. We rely on statistical methods, as opposed to language-specific engineering, to extract features which represent vocabulary and grammar usage indicative of social power lect. 
We then apply support vector machines to model the social power lects representing superior-subordinate communication in the Enron email corpus. Our results validate the treatment of lect modeling as a text classification problem-albeit a hard one-and constitute a case for future research in computational sociolinguistics. * This work was done while these authors were at SET Corporation, an SAIC Company. 1 Fields that deal with society and language have inconsistent terminology; ""lect"" is chosen here because ""lect"" has no other English definitions and the etymology of the word gives it the sense we consider most relevant.","Dr. Richard Sproat contributed time, valuable insights, and wise counsel on several occasions during the course of the research. Dr. Lillian Lee and her students in Natural Language Processing and Social Interaction reviewed the paper, offering valuable feedback and helpful leads.Our colleague, Diane Bramsen, created an excellent graphical interface for probing and understanding the results. Jeff Lau guided and advised throughout the project.We thank our anonymous reviewers for prudent advice.This work was funded by the Army Studies Board and sponsored by Col. Timothy Hill of the United Stated Army Intelligence and Security Command (INSCOM) Futures Directorate under contract W911W4-08-D-0011.","Extracting Social Power Relationships from Natural Language. Sociolinguists have long argued that social context influences language use in all manner of ways, resulting in lects 1. This paper explores a text classification problem we will call lect modeling, an example of what has been termed computational sociolinguistics. In particular, we use machine learning techniques to identify social power relationships between members of a social network, based purely on the content of their interpersonal communication. We rely on statistical methods, as opposed to language-specific engineering, to extract features which represent vocabulary and grammar usage indicative of social power lect. We then apply support vector machines to model the social power lects representing superior-subordinate communication in the Enron email corpus. Our results validate the treatment of lect modeling as a text classification problem-albeit a hard one-and constitute a case for future research in computational sociolinguistics. * This work was done while these authors were at SET Corporation, an SAIC Company. 1 Fields that deal with society and language have inconsistent terminology; ""lect"" is chosen here because ""lect"" has no other English definitions and the etymology of the word gives it the sense we consider most relevant.",2011
lynn-etal-2017-human,https://aclanthology.org/D17-1119,0,,,,,,,"Human Centered NLP with User-Factor Adaptation. We pose the general task of user-factor adaptation-adapting supervised learning models to real-valued user factors inferred from a background of their language, reflecting the idea that a piece of text should be understood within the context of the user that wrote it. We introduce a continuous adaptation technique, suited for real-valued user factors that are common in social science and bringing us closer to personalized NLP, adapting to each user uniquely. We apply this technique with known user factors including age, gender, and personality traits, as well as latent factors, evaluating over five tasks:",Human Centered {NLP} with User-Factor Adaptation,"We pose the general task of user-factor adaptation-adapting supervised learning models to real-valued user factors inferred from a background of their language, reflecting the idea that a piece of text should be understood within the context of the user that wrote it. We introduce a continuous adaptation technique, suited for real-valued user factors that are common in social science and bringing us closer to personalized NLP, adapting to each user uniquely. We apply this technique with known user factors including age, gender, and personality traits, as well as latent factors, evaluating over five tasks:",Human Centered NLP with User-Factor Adaptation,"We pose the general task of user-factor adaptation-adapting supervised learning models to real-valued user factors inferred from a background of their language, reflecting the idea that a piece of text should be understood within the context of the user that wrote it. We introduce a continuous adaptation technique, suited for real-valued user factors that are common in social science and bringing us closer to personalized NLP, adapting to each user uniquely. We apply this technique with known user factors including age, gender, and personality traits, as well as latent factors, evaluating over five tasks:","This publication was made possible, in part, through the support of a grant from the Templeton Religion Trust -TRT0048. We wish to thank the following colleagues for their annotation help for the PP-attachment task: Chetan Naik, Heeyoung Kwon, Ibrahim Hammoud, Jun Kang, Masoud Rouhizadeh, Mohammadzaman Zamani, and Samuel Louvan.","Human Centered NLP with User-Factor Adaptation. We pose the general task of user-factor adaptation-adapting supervised learning models to real-valued user factors inferred from a background of their language, reflecting the idea that a piece of text should be understood within the context of the user that wrote it. We introduce a continuous adaptation technique, suited for real-valued user factors that are common in social science and bringing us closer to personalized NLP, adapting to each user uniquely. We apply this technique with known user factors including age, gender, and personality traits, as well as latent factors, evaluating over five tasks:",2017
angelidis-lapata-2018-multiple,https://aclanthology.org/Q18-1002,0,,,,,,,"Multiple Instance Learning Networks for Fine-Grained Sentiment Analysis. We consider the task of fine-grained sentiment analysis from the perspective of multiple instance learning (MIL). Our neural model is trained on document sentiment labels, and learns to predict the sentiment of text segments, i.e. sentences or elementary discourse units (EDUs), without segment-level supervision. We introduce an attention-based polarity scoring method for identifying positive and negative text snippets and a new dataset which we call SPOT (as shorthand for Segment-level POlariTy annotations) for evaluating MILstyle sentiment models like ours. Experimental results demonstrate superior performance against multiple baselines, whereas a judgement elicitation study shows that EDU-level opinion extraction produces more informative summaries than sentence-based alternatives.",Multiple Instance Learning Networks for Fine-Grained Sentiment Analysis,"We consider the task of fine-grained sentiment analysis from the perspective of multiple instance learning (MIL). Our neural model is trained on document sentiment labels, and learns to predict the sentiment of text segments, i.e. sentences or elementary discourse units (EDUs), without segment-level supervision. We introduce an attention-based polarity scoring method for identifying positive and negative text snippets and a new dataset which we call SPOT (as shorthand for Segment-level POlariTy annotations) for evaluating MILstyle sentiment models like ours. Experimental results demonstrate superior performance against multiple baselines, whereas a judgement elicitation study shows that EDU-level opinion extraction produces more informative summaries than sentence-based alternatives.",Multiple Instance Learning Networks for Fine-Grained Sentiment Analysis,"We consider the task of fine-grained sentiment analysis from the perspective of multiple instance learning (MIL). Our neural model is trained on document sentiment labels, and learns to predict the sentiment of text segments, i.e. sentences or elementary discourse units (EDUs), without segment-level supervision. We introduce an attention-based polarity scoring method for identifying positive and negative text snippets and a new dataset which we call SPOT (as shorthand for Segment-level POlariTy annotations) for evaluating MILstyle sentiment models like ours. Experimental results demonstrate superior performance against multiple baselines, whereas a judgement elicitation study shows that EDU-level opinion extraction produces more informative summaries than sentence-based alternatives.","The authors gratefully acknowledge the support of the European Research Council (award number 681760). We thank TACL action editor Ani Nenkova and the anonymous reviewers whose feedback helped improve the present paper, as well as Charles Sutton, Timothy Hospedales, and members of EdinburghNLP for helpful discussions and suggestions.","Multiple Instance Learning Networks for Fine-Grained Sentiment Analysis. We consider the task of fine-grained sentiment analysis from the perspective of multiple instance learning (MIL). Our neural model is trained on document sentiment labels, and learns to predict the sentiment of text segments, i.e. sentences or elementary discourse units (EDUs), without segment-level supervision. 
We introduce an attention-based polarity scoring method for identifying positive and negative text snippets and a new dataset which we call SPOT (as shorthand for Segment-level POlariTy annotations) for evaluating MILstyle sentiment models like ours. Experimental results demonstrate superior performance against multiple baselines, whereas a judgement elicitation study shows that EDU-level opinion extraction produces more informative summaries than sentence-based alternatives.",2018
teixeira-etal-2004-acoustic,http://www.lrec-conf.org/proceedings/lrec2004/pdf/610.pdf,0,,,,,,,"An Acoustic Corpus Contemplating Regional Variation for Studies of European Portuguese Nasals. Portuguese is one of the two standard Romance varieties having nasal vowels as independent phonemes. These are complex sounds that have a dynamic nature and present several problems for a complete description. In this paper we present a new corpus especially recorded to allow studies of European Portuguese nasal vowels. The main purpose of these studies is the improvement of knowledge about these sounds so that it can be applied to language teaching, speech therapy materials and the articulatory speech synthesizer that is being developed at the University of Aveiro. The corpus described is a valuable resource for such studies due to the regional and contextual coverage and the simultaneous availability of speech and EGG signal. Details about corpus definition, recording, annotation and availability are given.",An Acoustic Corpus Contemplating Regional Variation for Studies of {E}uropean {P}ortuguese Nasals,"Portuguese is one of the two standard Romance varieties having nasal vowels as independent phonemes. These are complex sounds that have a dynamic nature and present several problems for a complete description. In this paper we present a new corpus especially recorded to allow studies of European Portuguese nasal vowels. The main purpose of these studies is the improvement of knowledge about these sounds so that it can be applied to language teaching, speech therapy materials and the articulatory speech synthesizer that is being developed at the University of Aveiro. The corpus described is a valuable resource for such studies due to the regional and contextual coverage and the simultaneous availability of speech and EGG signal. Details about corpus definition, recording, annotation and availability are given.",An Acoustic Corpus Contemplating Regional Variation for Studies of European Portuguese Nasals,"Portuguese is one of the two standard Romance varieties having nasal vowels as independent phonemes. These are complex sounds that have a dynamic nature and present several problems for a complete description. In this paper we present a new corpus especially recorded to allow studies of European Portuguese nasal vowels. The main purpose of these studies is the improvement of knowledge about these sounds so that it can be applied to language teaching, speech therapy materials and the articulatory speech synthesizer that is being developed at the University of Aveiro. The corpus described is a valuable resource for such studies due to the regional and contextual coverage and the simultaneous availability of speech and EGG signal. Details about corpus definition, recording, annotation and availability are given.","We thank all the informants participating in corpora recordings. Without their patience and cooperation this work wouldn't be possible. We also thank FCT for the funding of Project POSI/36427/PLP/2000, Phonetics Applied to Speech Processing: The Portuguese Nasals.","An Acoustic Corpus Contemplating Regional Variation for Studies of European Portuguese Nasals. Portuguese is one of the two standard Romance varieties having nasal vowels as independent phonemes. These are complex sounds that have a dynamic nature and present several problems for a complete description. In this paper we present a new corpus especially recorded to allow studies of European Portuguese nasal vowels. 
The main purpose of these studies is the improvement of knowledge about these sounds so that it can be applied to language teaching, speech therapy materials and the articulatory speech synthesizer that is being developed at the University of Aveiro. The corpus described is a valuable resource for such studies due to the regional and contextual coverage and the simultaneous availability of speech and EGG signal. Details about corpus definition, recording, annotation and availability are given.",2004
watson-etal-2005-efficient,https://aclanthology.org/W05-1517,0,,,,,,,"Efficient Extraction of Grammatical Relations. We present a novel approach for applying the Inside-Outside Algorithm to a packed parse forest produced by a unificationbased parser. The approach allows a node in the forest to be assigned multiple inside and outside probabilities, enabling a set of 'weighted GRs' to be computed directly from the forest. The approach improves on previous work which either loses efficiency by unpacking the parse forest before extracting weighted GRs, or places extra constraints on which nodes can be packed, leading to less compact forests. Our experiments demonstrate substantial increases in parser accuracy and throughput for weighted GR output.",Efficient Extraction of Grammatical Relations,"We present a novel approach for applying the Inside-Outside Algorithm to a packed parse forest produced by a unificationbased parser. The approach allows a node in the forest to be assigned multiple inside and outside probabilities, enabling a set of 'weighted GRs' to be computed directly from the forest. The approach improves on previous work which either loses efficiency by unpacking the parse forest before extracting weighted GRs, or places extra constraints on which nodes can be packed, leading to less compact forests. Our experiments demonstrate substantial increases in parser accuracy and throughput for weighted GR output.",Efficient Extraction of Grammatical Relations,"We present a novel approach for applying the Inside-Outside Algorithm to a packed parse forest produced by a unificationbased parser. The approach allows a node in the forest to be assigned multiple inside and outside probabilities, enabling a set of 'weighted GRs' to be computed directly from the forest. The approach improves on previous work which either loses efficiency by unpacking the parse forest before extracting weighted GRs, or places extra constraints on which nodes can be packed, leading to less compact forests. Our experiments demonstrate substantial increases in parser accuracy and throughput for weighted GR output.",This work is in part funded by the Overseas Research Students Awards Scheme and the Poynton Scholarship appointed by the Cambridge Australia Trust in collaboration with the Cambridge Commonwealth Trust. We would like to thank four anonymous reviewers who provided many useful suggestions for improvement.,"Efficient Extraction of Grammatical Relations. We present a novel approach for applying the Inside-Outside Algorithm to a packed parse forest produced by a unificationbased parser. The approach allows a node in the forest to be assigned multiple inside and outside probabilities, enabling a set of 'weighted GRs' to be computed directly from the forest. The approach improves on previous work which either loses efficiency by unpacking the parse forest before extracting weighted GRs, or places extra constraints on which nodes can be packed, leading to less compact forests. Our experiments demonstrate substantial increases in parser accuracy and throughput for weighted GR output.",2005
niebuhr-etal-2013-speech,https://aclanthology.org/W13-4040,0,,,,,,,"Speech Reduction, Intensity, and F0 Shape are Cues to Turn-Taking. Based on German production data from the 'Kiel Corpus of Spontaneous Speech', we conducted two perception experiments, using an innovative interactive task in which participants gave real oral responses to resynthesized question stimuli. Differences in the time interval between stimulus question and response show that segmental reduction, intensity level, and the shape of the phrase-final rise all function as cues to turn-taking in conversation. Thus, the phonetics of turntaking goes beyond the traditional triad of duration, voice quality, and F0 level.","Speech Reduction, Intensity, and F0 Shape are Cues to Turn-Taking","Based on German production data from the 'Kiel Corpus of Spontaneous Speech', we conducted two perception experiments, using an innovative interactive task in which participants gave real oral responses to resynthesized question stimuli. Differences in the time interval between stimulus question and response show that segmental reduction, intensity level, and the shape of the phrase-final rise all function as cues to turn-taking in conversation. Thus, the phonetics of turntaking goes beyond the traditional triad of duration, voice quality, and F0 level.","Speech Reduction, Intensity, and F0 Shape are Cues to Turn-Taking","Based on German production data from the 'Kiel Corpus of Spontaneous Speech', we conducted two perception experiments, using an innovative interactive task in which participants gave real oral responses to resynthesized question stimuli. Differences in the time interval between stimulus question and response show that segmental reduction, intensity level, and the shape of the phrase-final rise all function as cues to turn-taking in conversation. Thus, the phonetics of turntaking goes beyond the traditional triad of duration, voice quality, and F0 level.",,"Speech Reduction, Intensity, and F0 Shape are Cues to Turn-Taking. Based on German production data from the 'Kiel Corpus of Spontaneous Speech', we conducted two perception experiments, using an innovative interactive task in which participants gave real oral responses to resynthesized question stimuli. Differences in the time interval between stimulus question and response show that segmental reduction, intensity level, and the shape of the phrase-final rise all function as cues to turn-taking in conversation. Thus, the phonetics of turntaking goes beyond the traditional triad of duration, voice quality, and F0 level.",2013
gupta-etal-2020-human,https://aclanthology.org/2020.sigdial-1.30,1,,,,health,,,"Human-Human Health Coaching via Text Messages: Corpus, Annotation, and Analysis. Our goal is to develop and deploy a virtual assistant health coach that can help patients set realistic physical activity goals and live a more active lifestyle. Since there is no publicly shared dataset of health coaching dialogues, the first phase of our research focused on data collection. We hired a certified health coach and 28 patients to collect the first round of human-human health coaching interaction which took place via text messages. This resulted in 2853 messages. The data collection phase was followed by conversation analysis to gain insight into the way information exchange takes place between a health coach and a patient. This was formalized using two annotation schemas: one that focuses on the goals the patient is setting and another that models the higher-level structure of the interactions. In this paper, we discuss these schemas and briefly talk about their application for automatically extracting activity goals and annotating the second round of data, collected with different health coaches and patients. Given the resource-intensive nature of data annotation, successfully annotating a new dataset automatically is key to answer the need for high quality, large datasets.","Human-Human Health Coaching via Text Messages: Corpus, Annotation, and Analysis","Our goal is to develop and deploy a virtual assistant health coach that can help patients set realistic physical activity goals and live a more active lifestyle. Since there is no publicly shared dataset of health coaching dialogues, the first phase of our research focused on data collection. We hired a certified health coach and 28 patients to collect the first round of human-human health coaching interaction which took place via text messages. This resulted in 2853 messages. The data collection phase was followed by conversation analysis to gain insight into the way information exchange takes place between a health coach and a patient. This was formalized using two annotation schemas: one that focuses on the goals the patient is setting and another that models the higher-level structure of the interactions. In this paper, we discuss these schemas and briefly talk about their application for automatically extracting activity goals and annotating the second round of data, collected with different health coaches and patients. Given the resource-intensive nature of data annotation, successfully annotating a new dataset automatically is key to answer the need for high quality, large datasets.","Human-Human Health Coaching via Text Messages: Corpus, Annotation, and Analysis","Our goal is to develop and deploy a virtual assistant health coach that can help patients set realistic physical activity goals and live a more active lifestyle. Since there is no publicly shared dataset of health coaching dialogues, the first phase of our research focused on data collection. We hired a certified health coach and 28 patients to collect the first round of human-human health coaching interaction which took place via text messages. This resulted in 2853 messages. The data collection phase was followed by conversation analysis to gain insight into the way information exchange takes place between a health coach and a patient. 
This was formalized using two annotation schemas: one that focuses on the goals the patient is setting and another that models the higher-level structure of the interactions. In this paper, we discuss these schemas and briefly talk about their application for automatically extracting activity goals and annotating the second round of data, collected with different health coaches and patients. Given the resource-intensive nature of data annotation, successfully annotating a new dataset automatically is key to answer the need for high quality, large datasets.",This work is supported by the National Science Foundation through awards IIS 1650900 and 1838770.,"Human-Human Health Coaching via Text Messages: Corpus, Annotation, and Analysis. Our goal is to develop and deploy a virtual assistant health coach that can help patients set realistic physical activity goals and live a more active lifestyle. Since there is no publicly shared dataset of health coaching dialogues, the first phase of our research focused on data collection. We hired a certified health coach and 28 patients to collect the first round of human-human health coaching interaction which took place via text messages. This resulted in 2853 messages. The data collection phase was followed by conversation analysis to gain insight into the way information exchange takes place between a health coach and a patient. This was formalized using two annotation schemas: one that focuses on the goals the patient is setting and another that models the higher-level structure of the interactions. In this paper, we discuss these schemas and briefly talk about their application for automatically extracting activity goals and annotating the second round of data, collected with different health coaches and patients. Given the resource-intensive nature of data annotation, successfully annotating a new dataset automatically is key to answer the need for high quality, large datasets.",2020
hegde-etal-2022-mucs,https://aclanthology.org/2022.dravidianlangtech-1.23,0,,,,,,,"MUCS@DravidianLangTech@ACL2022: Ensemble of Logistic Regression Penalties to Identify Emotions in Tamil Text. Emotion Analysis (EA) is the process of automatically analyzing and categorizing the input text into one of the predefined sets of emotions. In recent years, people have turned to social media to express their emotions, opinions or feelings about news, movies, products, services, and so on. These users' emotions may help the public, governments, business organizations, film producers, and others in devising strategies, making decisions, and so on. The increasing number of social media users and the increasing amount of user generated text containing emotions on social media demands automated tools for the analysis of such data as handling this data manually is labor intensive and error prone. Further, the characteristics of social media data makes the EA challenging. Most of the EA research works have focused on English language leaving several Indian languages including Tamil unexplored for this task. To address the challenges of EA in Tamil texts, in this paper, we-team MUCS, describe the model submitted to the shared task on Emotion Analysis in Tamil at DravidianLangTech@ACL 2022. Out of the two subtasks in this shared task, our team submitted the model only for Task a. The proposed model comprises of an Ensemble of Logistic Regression (LR) classifiers with three penalties, namely: L1, L2, and Elasticnet. This Ensemble model trained with Term Frequency-Inverse Document Frequency (TF-IDF) of character bigrams and trigrams secured 4 th rank in Task a with a macro averaged F1-score of 0.04. The code to reproduce the proposed models is available in github 1 .",{MUCS}@{D}ravidian{L}ang{T}ech@{ACL}2022: Ensemble of Logistic Regression Penalties to Identify Emotions in {T}amil Text,"Emotion Analysis (EA) is the process of automatically analyzing and categorizing the input text into one of the predefined sets of emotions. In recent years, people have turned to social media to express their emotions, opinions or feelings about news, movies, products, services, and so on. These users' emotions may help the public, governments, business organizations, film producers, and others in devising strategies, making decisions, and so on. The increasing number of social media users and the increasing amount of user generated text containing emotions on social media demands automated tools for the analysis of such data as handling this data manually is labor intensive and error prone. Further, the characteristics of social media data makes the EA challenging. Most of the EA research works have focused on English language leaving several Indian languages including Tamil unexplored for this task. To address the challenges of EA in Tamil texts, in this paper, we-team MUCS, describe the model submitted to the shared task on Emotion Analysis in Tamil at DravidianLangTech@ACL 2022. Out of the two subtasks in this shared task, our team submitted the model only for Task a. The proposed model comprises of an Ensemble of Logistic Regression (LR) classifiers with three penalties, namely: L1, L2, and Elasticnet. This Ensemble model trained with Term Frequency-Inverse Document Frequency (TF-IDF) of character bigrams and trigrams secured 4 th rank in Task a with a macro averaged F1-score of 0.04. 
The code to reproduce the proposed models is available in github 1 .",MUCS@DravidianLangTech@ACL2022: Ensemble of Logistic Regression Penalties to Identify Emotions in Tamil Text,"Emotion Analysis (EA) is the process of automatically analyzing and categorizing the input text into one of the predefined sets of emotions. In recent years, people have turned to social media to express their emotions, opinions or feelings about news, movies, products, services, and so on. These users' emotions may help the public, governments, business organizations, film producers, and others in devising strategies, making decisions, and so on. The increasing number of social media users and the increasing amount of user generated text containing emotions on social media demands automated tools for the analysis of such data as handling this data manually is labor intensive and error prone. Further, the characteristics of social media data makes the EA challenging. Most of the EA research works have focused on English language leaving several Indian languages including Tamil unexplored for this task. To address the challenges of EA in Tamil texts, in this paper, we-team MUCS, describe the model submitted to the shared task on Emotion Analysis in Tamil at DravidianLangTech@ACL 2022. Out of the two subtasks in this shared task, our team submitted the model only for Task a. The proposed model comprises of an Ensemble of Logistic Regression (LR) classifiers with three penalties, namely: L1, L2, and Elasticnet. This Ensemble model trained with Term Frequency-Inverse Document Frequency (TF-IDF) of character bigrams and trigrams secured 4 th rank in Task a with a macro averaged F1-score of 0.04. The code to reproduce the proposed models is available in github 1 .",,"MUCS@DravidianLangTech@ACL2022: Ensemble of Logistic Regression Penalties to Identify Emotions in Tamil Text. Emotion Analysis (EA) is the process of automatically analyzing and categorizing the input text into one of the predefined sets of emotions. In recent years, people have turned to social media to express their emotions, opinions or feelings about news, movies, products, services, and so on. These users' emotions may help the public, governments, business organizations, film producers, and others in devising strategies, making decisions, and so on. The increasing number of social media users and the increasing amount of user generated text containing emotions on social media demands automated tools for the analysis of such data as handling this data manually is labor intensive and error prone. Further, the characteristics of social media data makes the EA challenging. Most of the EA research works have focused on English language leaving several Indian languages including Tamil unexplored for this task. To address the challenges of EA in Tamil texts, in this paper, we-team MUCS, describe the model submitted to the shared task on Emotion Analysis in Tamil at DravidianLangTech@ACL 2022. Out of the two subtasks in this shared task, our team submitted the model only for Task a. The proposed model comprises of an Ensemble of Logistic Regression (LR) classifiers with three penalties, namely: L1, L2, and Elasticnet. This Ensemble model trained with Term Frequency-Inverse Document Frequency (TF-IDF) of character bigrams and trigrams secured 4 th rank in Task a with a macro averaged F1-score of 0.04. The code to reproduce the proposed models is available in github 1 .",2022
chen-ng-2012-chinese,https://aclanthology.org/C12-2019,0,,,,,,,"Chinese Noun Phrase Coreference Resolution: Insights into the State of the Art. Compared to the amount of research on English coreference resolution, relatively little work has been done on Chinese coreference resolution. Worse still, it has been difficult to determine the state of the art in Chinese coreference resolution, owing in part to the lack of a standard evaluation dataset. The organizers of the CoNLL-2012 shared task, Modeling Unrestricted Multilingual Coreference in OntoNotes, have recently addressed this issue by providing standard training and test sets for developing and evaluating Chinese coreference resolvers. We aim to gain insights into the state of the art via extensive experimentation with our Chinese resolver, which is ranked first in the shared task on the Chinese test data.",{C}hinese Noun Phrase Coreference Resolution: Insights into the State of the Art,"Compared to the amount of research on English coreference resolution, relatively little work has been done on Chinese coreference resolution. Worse still, it has been difficult to determine the state of the art in Chinese coreference resolution, owing in part to the lack of a standard evaluation dataset. The organizers of the CoNLL-2012 shared task, Modeling Unrestricted Multilingual Coreference in OntoNotes, have recently addressed this issue by providing standard training and test sets for developing and evaluating Chinese coreference resolvers. We aim to gain insights into the state of the art via extensive experimentation with our Chinese resolver, which is ranked first in the shared task on the Chinese test data.",Chinese Noun Phrase Coreference Resolution: Insights into the State of the Art,"Compared to the amount of research on English coreference resolution, relatively little work has been done on Chinese coreference resolution. Worse still, it has been difficult to determine the state of the art in Chinese coreference resolution, owing in part to the lack of a standard evaluation dataset. The organizers of the CoNLL-2012 shared task, Modeling Unrestricted Multilingual Coreference in OntoNotes, have recently addressed this issue by providing standard training and test sets for developing and evaluating Chinese coreference resolvers. We aim to gain insights into the state of the art via extensive experimentation with our Chinese resolver, which is ranked first in the shared task on the Chinese test data.",We thank the three anonymous reviewers for their invaluable comments on an earlier draft of the paper. This work was supported in part by NSF Grants IIS-1147644 and IIS-1219142.,"Chinese Noun Phrase Coreference Resolution: Insights into the State of the Art. Compared to the amount of research on English coreference resolution, relatively little work has been done on Chinese coreference resolution. Worse still, it has been difficult to determine the state of the art in Chinese coreference resolution, owing in part to the lack of a standard evaluation dataset. The organizers of the CoNLL-2012 shared task, Modeling Unrestricted Multilingual Coreference in OntoNotes, have recently addressed this issue by providing standard training and test sets for developing and evaluating Chinese coreference resolvers. We aim to gain insights into the state of the art via extensive experimentation with our Chinese resolver, which is ranked first in the shared task on the Chinese test data.",2012
milward-1992-dynamics,https://aclanthology.org/C92-4171,0,,,,,,,"Dynamics, Dependency Grammar and Incremental Interpretation. The paper describes two equivalent grammatical formalisms. The first is a lexicalised version of dependency grammar, and this can be used to provide tree-structured analyses of sentences (though somewhat flatter than those usually provided by phrase structure grammars). The second is a new formalism, 'Dynamic Dependency Grammar', which uses axioms and deduction rules to provide analyses of sentences in terms of transitions between states.
A reformulation of dependency grammar using state transitions is of interest on several grounds. Firstly, it can be used to show that incremental interpretation is possible without requiring notions of overlapping, or flexible constituency (as in some versions of categorial grammar), and without destroying a transparent link between syntax and semantics. Secondly, the reformulation provides a level of description which can act as an intermediate stage between the original grammar and a parsing algorithm. Thirdly, it is possible to extend the reformulated grammars with further axioms and deduction rules to provide coverage of syntactic constructions such as coordination which are difficult to encode lexically.","Dynamics, Dependency Grammar and Incremental Interpretation","The paper describes two equivalent grammatical formalisms. The first is a lexicalised version of dependency grammar, and this can be used to provide tree-structured analyses of sentences (though somewhat flatter than those usually provided by phrase structure grammars). The second is a new formalism, 'Dynamic Dependency Grammar', which uses axioms and deduction rules to provide analyses of sentences in terms of transitions between states.
A reformulation of dependency grammar using state transitions is of interest on several grounds. Firstly, it can be used to show that incremental interpretation is possible without requiring notions of overlapping, or flexible constituency (as in some versions of categorial grammar), and without destroying a transparent link between syntax and semantics. Secondly, the reformulation provides a level of description which can act as an intermediate stage between the original grammar and a parsing algorithm. Thirdly, it is possible to extend the reformulated grammars with further axioms and deduction rules to provide coverage of syntactic constructions such as coordination which are difficult to encode lexically.","Dynamics, Dependency Grammar and Incremental Interpretation","The paper describes two equivalent grammatical formalisms. The first is a lexicalised version of dependency grammar, and this can be used to provide tree-structured analyses of sentences (though somewhat flatter than those usually provided by phrase structure grammars). The second is a new formalism, 'Dynamic Dependency Grammar', which uses axioms and deduction rules to provide analyses of sentences in terms of transitions between states.
A reformulation of dependency grammar using state transitions is of interest on several grounds. Firstly, it can be used to show that incremental interpretation is possible without requiring notions of overlapping, or flexible constituency (as in some versions of categorial grammar), and without destroying a transparent link between syntax and semantics. Secondly, the reformulation provides a level of description which can act as an intermediate stage between the original grammar and a parsing algorithm. Thirdly, it is possible to extend the reformulated grammars with further axioms and deduction rules to provide coverage of syntactic constructions such as coordination which are difficult to encode lexically.",,"Dynamics, Dependency Grammar and Incremental Interpretation. The paper describes two equivalent grammatical formalisms. The first is a lexicalised version of dependency grammar, and this can be used to provide tree-structured analyses of sentences (though somewhat flatter than those usually provided by phrase structure grammars). The second is a new formalism, 'Dynamic Dependency Grammar', which uses axioms and deduction rules to provide analyses of sentences in terms of transitions between states.
A reformulation of dependency grammar using state transitions is of interest on several grounds. Firstly, it can be used to show that incremental interpretation is possible without requiring notions of overlapping, or flexible constituency (as in some versions of categorial grammar), and without destroying a transparent link between syntax and semantics. Secondly, the reformulation provides a level of description which can act as an intermediate stage between the original grammar and a parsing algorithm. Thirdly, it is possible to extend the reformulated grammars with further axioms and deduction rules to provide coverage of syntactic constructions such as coordination which are difficult to encode lexically.",1992
kelly-etal-2012-semi,https://aclanthology.org/W12-1702,0,,,,,,,"Semi-supervised learning for automatic conceptual property extraction. For a given concrete noun concept, humans are usually able to cite properties (e.g., elephant is animal, car has wheels) of that concept; cognitive psychologists have theorised that such properties are fundamental to understanding the abstract mental representation of concepts in the brain. Consequently, the ability to automatically extract such properties would be of enormous benefit to the field of experimental psychology. This paper investigates the use of semi-supervised learning and support vector machines to automatically extract concept-relation-feature triples from two large corpora (Wikipedia and UKWAC) for concrete noun concepts. Previous approaches have relied on manually-generated rules and hand-crafted resources such as WordNet; our method requires neither yet achieves better performance than these prior approaches, measured both by comparison with a property norm-derived gold standard as well as direct human evaluation. Our technique performs particularly well on extracting features relevant to a given concept, and suggests a number of promising areas for future focus.",Semi-supervised learning for automatic conceptual property extraction,"For a given concrete noun concept, humans are usually able to cite properties (e.g., elephant is animal, car has wheels) of that concept; cognitive psychologists have theorised that such properties are fundamental to understanding the abstract mental representation of concepts in the brain. Consequently, the ability to automatically extract such properties would be of enormous benefit to the field of experimental psychology. This paper investigates the use of semi-supervised learning and support vector machines to automatically extract concept-relation-feature triples from two large corpora (Wikipedia and UKWAC) for concrete noun concepts. Previous approaches have relied on manually-generated rules and hand-crafted resources such as WordNet; our method requires neither yet achieves better performance than these prior approaches, measured both by comparison with a property norm-derived gold standard as well as direct human evaluation. Our technique performs particularly well on extracting features relevant to a given concept, and suggests a number of promising areas for future focus.",Semi-supervised learning for automatic conceptual property extraction,"For a given concrete noun concept, humans are usually able to cite properties (e.g., elephant is animal, car has wheels) of that concept; cognitive psychologists have theorised that such properties are fundamental to understanding the abstract mental representation of concepts in the brain. Consequently, the ability to automatically extract such properties would be of enormous benefit to the field of experimental psychology. This paper investigates the use of semi-supervised learning and support vector machines to automatically extract concept-relation-feature triples from two large corpora (Wikipedia and UKWAC) for concrete noun concepts. Previous approaches have relied on manually-generated rules and hand-crafted resources such as WordNet; our method requires neither yet achieves better performance than these prior approaches, measured both by comparison with a property norm-derived gold standard as well as direct human evaluation. 
Our technique performs particularly well on extracting features relevant to a given concept, and suggests a number of promising areas for future focus.","This research was supported by EPSRC grant EP/F030061/1. We are grateful to McRae and colleagues for making their norms publicly available, and to the anonymous reviewers for their helpful input.","Semi-supervised learning for automatic conceptual property extraction. For a given concrete noun concept, humans are usually able to cite properties (e.g., elephant is animal, car has wheels) of that concept; cognitive psychologists have theorised that such properties are fundamental to understanding the abstract mental representation of concepts in the brain. Consequently, the ability to automatically extract such properties would be of enormous benefit to the field of experimental psychology. This paper investigates the use of semi-supervised learning and support vector machines to automatically extract concept-relation-feature triples from two large corpora (Wikipedia and UKWAC) for concrete noun concepts. Previous approaches have relied on manually-generated rules and hand-crafted resources such as WordNet; our method requires neither yet achieves better performance than these prior approaches, measured both by comparison with a property norm-derived gold standard as well as direct human evaluation. Our technique performs particularly well on extracting features relevant to a given concept, and suggests a number of promising areas for future focus.",2012
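The abstract above mentions support vector machines for extracting concept-relation-feature triples. The snippet below is only a loose illustration of scoring candidate triples with a linear SVM; the `featurize` helper, its features, and the training examples are invented placeholders, not the paper's corpus-derived features.

```python
# Illustrative sketch only: classifying candidate concept-relation-feature
# triples as valid/invalid with a linear SVM, loosely in the spirit of the
# paper. Features and data here are toy placeholders.
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

def featurize(concept, relation, feature):
    # Toy features; the real system uses corpus-derived evidence.
    return {
        f"rel={relation}": 1,
        f"feat_head={feature.split()[-1]}": 1,
        f"pair={concept}|{feature}": 1,
    }

candidates = [
    ("elephant", "is", "animal", 1),
    ("car", "has", "wheels", 1),
    ("car", "is", "wheels", 0),
]
X = [featurize(c, r, f) for c, r, f, _ in candidates]
y = [lab for (_c, _r, _f, lab) in candidates]

clf = make_pipeline(DictVectorizer(), LinearSVC())
clf.fit(X, y)
print(clf.predict([featurize("elephant", "has", "trunk")]))
```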
pal-etal-2010-handling,https://aclanthology.org/W10-3707,0,,,,,,,"Handling Named Entities and Compound Verbs in Phrase-Based Statistical Machine Translation. Data preprocessing plays a crucial role in phrase-based statistical machine translation (PB-SMT). In this paper, we show how single-tokenization of two types of multi-word expressions (MWE), namely named entities (NE) and compound verbs, as well as their prior alignment can boost the performance of PB-SMT. Single-tokenization of compound verbs and named entities (NE) provides significant gains over the baseline PB-SMT system. Automatic alignment of NEs substantially improves the overall MT performance, and thereby the word alignment quality indirectly. For establishing NE alignments, we transliterate source NEs into the target language and then compare them with the target NEs. Target language NEs are first converted into a canonical form before the comparison takes place. Our best system achieves statistically significant improvements (4.59 BLEU points absolute, 52.5% relative improvement) on an English-Bangla translation task.",Handling Named Entities and Compound Verbs in Phrase-Based Statistical Machine Translation,"Data preprocessing plays a crucial role in phrase-based statistical machine translation (PB-SMT). In this paper, we show how single-tokenization of two types of multi-word expressions (MWE), namely named entities (NE) and compound verbs, as well as their prior alignment can boost the performance of PB-SMT. Single-tokenization of compound verbs and named entities (NE) provides significant gains over the baseline PB-SMT system. Automatic alignment of NEs substantially improves the overall MT performance, and thereby the word alignment quality indirectly. For establishing NE alignments, we transliterate source NEs into the target language and then compare them with the target NEs. Target language NEs are first converted into a canonical form before the comparison takes place. Our best system achieves statistically significant improvements (4.59 BLEU points absolute, 52.5% relative improvement) on an English-Bangla translation task.",Handling Named Entities and Compound Verbs in Phrase-Based Statistical Machine Translation,"Data preprocessing plays a crucial role in phrase-based statistical machine translation (PB-SMT). In this paper, we show how single-tokenization of two types of multi-word expressions (MWE), namely named entities (NE) and compound verbs, as well as their prior alignment can boost the performance of PB-SMT. Single-tokenization of compound verbs and named entities (NE) provides significant gains over the baseline PB-SMT system. Automatic alignment of NEs substantially improves the overall MT performance, and thereby the word alignment quality indirectly. For establishing NE alignments, we transliterate source NEs into the target language and then compare them with the target NEs. Target language NEs are first converted into a canonical form before the comparison takes place. Our best system achieves statistically significant improvements (4.59 BLEU points absolute, 52.5% relative improvement) on an English-Bangla translation task.","This research is partially supported by the Science Foundation Ireland (Grant 07/CE/I1142) as part of the Centre for Next Generation Localisation (www.cngl.ie) at Dublin City University, and EU projects PANACEA (Grant 7FP-ITC-248064) and META-NET (Grant FP7-ICT-249119).","Handling Named Entities and Compound Verbs in Phrase-Based Statistical Machine Translation. 
Data preprocessing plays a crucial role in phrase-based statistical machine translation (PB-SMT). In this paper, we show how single-tokenization of two types of multi-word expressions (MWE), namely named entities (NE) and compound verbs, as well as their prior alignment can boost the performance of PB-SMT. Single-tokenization of compound verbs and named entities (NE) provides significant gains over the baseline PB-SMT system. Automatic alignment of NEs substantially improves the overall MT performance, and thereby the word alignment quality indirectly. For establishing NE alignments, we transliterate source NEs into the target language and then compare them with the target NEs. Target language NEs are first converted into a canonical form before the comparison takes place. Our best system achieves statistically significant improvements (4.59 BLEU points absolute, 52.5% relative improvement) on an English-Bangla translation task.",2010
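Single-tokenization of multi-word expressions, as described above, amounts to joining the tokens of a named entity (or compound verb) into one token before word alignment and phrase extraction. A minimal sketch, assuming a pre-extracted NE list and an invented example sentence:

```python
# Minimal sketch of the single-tokenization preprocessing idea: multi-word
# named entities (and, analogously, compound verbs) become single tokens
# before the SMT pipeline sees them. The NE list and sentence are toy inputs.
def single_tokenize(sentence, multiword_units):
    out = sentence
    for mwe in multiword_units:
        out = out.replace(mwe, mwe.replace(" ", "_"))
    return out

nes = ["New Delhi", "Rabindranath Tagore"]
src = "the delegation met Rabindranath Tagore in New Delhi"  # toy sentence
print(single_tokenize(src, nes))
# -> "the delegation met Rabindranath_Tagore in New_Delhi"
```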
molino-etal-2019-parallax,https://aclanthology.org/P19-3028,0,,,,,,,"Parallax: Visualizing and Understanding the Semantics of Embedding Spaces via Algebraic Formulae. Embeddings are a fundamental component of many modern machine learning and natural language processing models. Understanding them and visualizing them is essential for gathering insights about the information they capture and the behavior of the models. In this paper, we introduce Parallax 1 , a tool explicitly designed for this task. Parallax allows the user to use both state-of-the-art embedding analysis methods (PCA and t-SNE) and a simple yet effective task-oriented approach where users can explicitly define the axes of the projection through algebraic formulae. In this approach, embeddings are projected into a semantically meaningful subspace, which enhances interpretability and allows for more fine-grained analysis. We demonstrate 2 the power of the tool and the proposed methodology through a series of case studies and a user study.",{P}arallax: Visualizing and Understanding the Semantics of Embedding Spaces via Algebraic Formulae,"Embeddings are a fundamental component of many modern machine learning and natural language processing models. Understanding them and visualizing them is essential for gathering insights about the information they capture and the behavior of the models. In this paper, we introduce Parallax 1 , a tool explicitly designed for this task. Parallax allows the user to use both state-of-the-art embedding analysis methods (PCA and t-SNE) and a simple yet effective task-oriented approach where users can explicitly define the axes of the projection through algebraic formulae. In this approach, embeddings are projected into a semantically meaningful subspace, which enhances interpretability and allows for more fine-grained analysis. We demonstrate 2 the power of the tool and the proposed methodology through a series of case studies and a user study.",Parallax: Visualizing and Understanding the Semantics of Embedding Spaces via Algebraic Formulae,"Embeddings are a fundamental component of many modern machine learning and natural language processing models. Understanding them and visualizing them is essential for gathering insights about the information they capture and the behavior of the models. In this paper, we introduce Parallax 1 , a tool explicitly designed for this task. Parallax allows the user to use both state-of-the-art embedding analysis methods (PCA and t-SNE) and a simple yet effective task-oriented approach where users can explicitly define the axes of the projection through algebraic formulae. In this approach, embeddings are projected into a semantically meaningful subspace, which enhances interpretability and allows for more fine-grained analysis. We demonstrate 2 the power of the tool and the proposed methodology through a series of case studies and a user study.",,"Parallax: Visualizing and Understanding the Semantics of Embedding Spaces via Algebraic Formulae. Embeddings are a fundamental component of many modern machine learning and natural language processing models. Understanding them and visualizing them is essential for gathering insights about the information they capture and the behavior of the models. In this paper, we introduce Parallax 1 , a tool explicitly designed for this task. 
Parallax allows the user to use both state-of-the-art embedding analysis methods (PCA and t-SNE) and a simple yet effective task-oriented approach where users can explicitly define the axes of the projection through algebraic formulae. In this approach, embeddings are projected into a semantically meaningful subspace, which enhances interpretability and allows for more fine-grained analysis. We demonstrate 2 the power of the tool and the proposed methodology through a series of case studies and a user study.",2019
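The distinguishing idea in the abstract above is defining scatter-plot axes through algebraic formulae over embedding vectors. Below is a schematic NumPy sketch of that projection step, using tiny random placeholder embeddings rather than a real model or Parallax's actual API.

```python
# Schematic illustration of "explicit axes via algebraic formulae": each axis
# is a formula over embedding vectors, and every word is projected onto the
# axes by dot product. The random embeddings are placeholders.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["king", "queen", "man", "woman", "apple", "banana"]
emb = {w: rng.normal(size=50) for w in vocab}

def axis(terms):
    # terms: list of (sign, word), e.g. [(+1, "king"), (-1, "queen")]
    v = sum(sign * emb[w] for sign, w in terms)
    return v / np.linalg.norm(v)

x_axis = axis([(+1, "king"), (-1, "queen")])     # difference-based axis
y_axis = axis([(+1, "apple"), (+1, "banana")])   # sum-based axis

for w, v in emb.items():
    x, y = v @ x_axis, v @ y_axis
    print(f"{w:>7}: x={x:+.2f}  y={y:+.2f}")
```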
litkowski-2005-cl,https://aclanthology.org/P05-3004,0,,,,,,,"CL Research's Knowledge Management System. CL Research began experimenting with massive XML tagging of texts to answer questions in TREC 2002. In DUC 2003, the experiments were extended into text summarization. Based on these experiments, The Knowledge Management System (KMS) was developed to combine these two capabilities and to serve as a unified basis for other types of document exploration. KMS has been extended to include web question answering, both general and topic-based summarization, information extraction, and document exploration. The document exploration functionality includes identification of semantically similar concepts and dynamic ontology creation. As development of KMS has continued, user modeling has become a key research issue: how will different users want to use the information they identify.",{CL} Research{'}s Knowledge Management System,"CL Research began experimenting with massive XML tagging of texts to answer questions in TREC 2002. In DUC 2003, the experiments were extended into text summarization. Based on these experiments, The Knowledge Management System (KMS) was developed to combine these two capabilities and to serve as a unified basis for other types of document exploration. KMS has been extended to include web question answering, both general and topic-based summarization, information extraction, and document exploration. The document exploration functionality includes identification of semantically similar concepts and dynamic ontology creation. As development of KMS has continued, user modeling has become a key research issue: how will different users want to use the information they identify.",CL Research's Knowledge Management System,"CL Research began experimenting with massive XML tagging of texts to answer questions in TREC 2002. In DUC 2003, the experiments were extended into text summarization. Based on these experiments, The Knowledge Management System (KMS) was developed to combine these two capabilities and to serve as a unified basis for other types of document exploration. KMS has been extended to include web question answering, both general and topic-based summarization, information extraction, and document exploration. The document exploration functionality includes identification of semantically similar concepts and dynamic ontology creation. As development of KMS has continued, user modeling has become a key research issue: how will different users want to use the information they identify.",,"CL Research's Knowledge Management System. CL Research began experimenting with massive XML tagging of texts to answer questions in TREC 2002. In DUC 2003, the experiments were extended into text summarization. Based on these experiments, The Knowledge Management System (KMS) was developed to combine these two capabilities and to serve as a unified basis for other types of document exploration. KMS has been extended to include web question answering, both general and topic-based summarization, information extraction, and document exploration. The document exploration functionality includes identification of semantically similar concepts and dynamic ontology creation. As development of KMS has continued, user modeling has become a key research issue: how will different users want to use the information they identify.",2005
loukanova-2019-computational,https://aclanthology.org/W19-1005,0,,,,,,,"Computational Syntax-Semantics Interface with Type-Theory of Acyclic Recursion for Underspecified Semantics. The paper provides a technique for algorithmic syntax-semantics interface in computational grammar with underspecified semantic representations of human language. The technique is introduced for expressions that contain NP quantifiers, by using computational, generalised Constraint-Based Lexicalised Grammar (GCBLG) that represents major, common syntactic characteristics of a variety of approaches to formal grammar and natural language processing (NLP). Our solution can be realised by any of the grammar formalisms in the CBLG class, e.g., Head-Driven Phrase Structure Grammar (HPSG), Lexical Functional Grammar (LFG), Categorial Grammar (CG). The type-theory of acyclic recursion L λ ar , provides facility for representing major semantic ambiguities, as underspecification, at the object level of the formal language of L λ ar , without recourse of metalanguage variables. Specific semantic representations can be obtained by instantiations of underspecified L λ arterms, in context. These are subject to constraints provided by a newly introduced feature-structure description of syntax-semantics interface in GCBLG. 1 Introduction Ambiguity permeates human language, in all of its manifestations, by interdependences, across lexicon, syntax, semantics, discourse, context, etc. Alternative interpretations may persist even when specific context and discourse resolve or discard some specific instances in syntax and semantics. We present computational grammar that integrates lexicon, syntax, types, constraints, and semantics. The formal facilities of the grammar have components that integrate syntactic constructions with semantic representations. The syntax-semantic interface, internally in the grammar, handles some ambiguities as phenomena of underspecification in human language. We employ a computational grammar, which we call Generalised Constraint-Based Lexicalised Grammar (GCBLG). The formal system GCBLG uses feature-value descriptions and constraints in a grammar with a hierarchy of dependent types, which covers lexicon, phrasal structures, and semantic representations. In GCBLG, for the syntax, we use feature-value descriptions, similar to that in Sag et al. (2003), which are presented formally in Loukanova (2017a) as a class of formal languages designating mathematical structures of functional domains of linguistics information. GCBLG is a generalisation from major lexical and syntactic facilities of frameworks in the class of Constraint-Based Lexicalist Grammar (CBLG) approaches. To some extend, this is reminiscence of Vijay-Shanker and Weir (1994). We lift the idea of extending classic formal grammars to cover semantic representations with semantic underspecification via syntax-semantics interface within computational grammar. We introduce the technique here for varieties of grammar formalisms from the CBLG approach, in particular:",Computational Syntax-Semantics Interface with Type-Theory of Acyclic Recursion for Underspecified Semantics,"The paper provides a technique for algorithmic syntax-semantics interface in computational grammar with underspecified semantic representations of human language. 
The technique is introduced for expressions that contain NP quantifiers, by using computational, generalised Constraint-Based Lexicalised Grammar (GCBLG) that represents major, common syntactic characteristics of a variety of approaches to formal grammar and natural language processing (NLP). Our solution can be realised by any of the grammar formalisms in the CBLG class, e.g., Head-Driven Phrase Structure Grammar (HPSG), Lexical Functional Grammar (LFG), Categorial Grammar (CG). The type-theory of acyclic recursion L λ ar , provides facility for representing major semantic ambiguities, as underspecification, at the object level of the formal language of L λ ar , without recourse of metalanguage variables. Specific semantic representations can be obtained by instantiations of underspecified L λ arterms, in context. These are subject to constraints provided by a newly introduced feature-structure description of syntax-semantics interface in GCBLG. 1 Introduction Ambiguity permeates human language, in all of its manifestations, by interdependences, across lexicon, syntax, semantics, discourse, context, etc. Alternative interpretations may persist even when specific context and discourse resolve or discard some specific instances in syntax and semantics. We present computational grammar that integrates lexicon, syntax, types, constraints, and semantics. The formal facilities of the grammar have components that integrate syntactic constructions with semantic representations. The syntax-semantic interface, internally in the grammar, handles some ambiguities as phenomena of underspecification in human language. We employ a computational grammar, which we call Generalised Constraint-Based Lexicalised Grammar (GCBLG). The formal system GCBLG uses feature-value descriptions and constraints in a grammar with a hierarchy of dependent types, which covers lexicon, phrasal structures, and semantic representations. In GCBLG, for the syntax, we use feature-value descriptions, similar to that in Sag et al. (2003), which are presented formally in Loukanova (2017a) as a class of formal languages designating mathematical structures of functional domains of linguistics information. GCBLG is a generalisation from major lexical and syntactic facilities of frameworks in the class of Constraint-Based Lexicalist Grammar (CBLG) approaches. To some extend, this is reminiscence of Vijay-Shanker and Weir (1994). We lift the idea of extending classic formal grammars to cover semantic representations with semantic underspecification via syntax-semantics interface within computational grammar. We introduce the technique here for varieties of grammar formalisms from the CBLG approach, in particular:",Computational Syntax-Semantics Interface with Type-Theory of Acyclic Recursion for Underspecified Semantics,"The paper provides a technique for algorithmic syntax-semantics interface in computational grammar with underspecified semantic representations of human language. The technique is introduced for expressions that contain NP quantifiers, by using computational, generalised Constraint-Based Lexicalised Grammar (GCBLG) that represents major, common syntactic characteristics of a variety of approaches to formal grammar and natural language processing (NLP). Our solution can be realised by any of the grammar formalisms in the CBLG class, e.g., Head-Driven Phrase Structure Grammar (HPSG), Lexical Functional Grammar (LFG), Categorial Grammar (CG). 
The type-theory of acyclic recursion L λ ar , provides facility for representing major semantic ambiguities, as underspecification, at the object level of the formal language of L λ ar , without recourse of metalanguage variables. Specific semantic representations can be obtained by instantiations of underspecified L λ arterms, in context. These are subject to constraints provided by a newly introduced feature-structure description of syntax-semantics interface in GCBLG. 1 Introduction Ambiguity permeates human language, in all of its manifestations, by interdependences, across lexicon, syntax, semantics, discourse, context, etc. Alternative interpretations may persist even when specific context and discourse resolve or discard some specific instances in syntax and semantics. We present computational grammar that integrates lexicon, syntax, types, constraints, and semantics. The formal facilities of the grammar have components that integrate syntactic constructions with semantic representations. The syntax-semantic interface, internally in the grammar, handles some ambiguities as phenomena of underspecification in human language. We employ a computational grammar, which we call Generalised Constraint-Based Lexicalised Grammar (GCBLG). The formal system GCBLG uses feature-value descriptions and constraints in a grammar with a hierarchy of dependent types, which covers lexicon, phrasal structures, and semantic representations. In GCBLG, for the syntax, we use feature-value descriptions, similar to that in Sag et al. (2003), which are presented formally in Loukanova (2017a) as a class of formal languages designating mathematical structures of functional domains of linguistics information. GCBLG is a generalisation from major lexical and syntactic facilities of frameworks in the class of Constraint-Based Lexicalist Grammar (CBLG) approaches. To some extend, this is reminiscence of Vijay-Shanker and Weir (1994). We lift the idea of extending classic formal grammars to cover semantic representations with semantic underspecification via syntax-semantics interface within computational grammar. We introduce the technique here for varieties of grammar formalisms from the CBLG approach, in particular:",,"Computational Syntax-Semantics Interface with Type-Theory of Acyclic Recursion for Underspecified Semantics. The paper provides a technique for algorithmic syntax-semantics interface in computational grammar with underspecified semantic representations of human language. The technique is introduced for expressions that contain NP quantifiers, by using computational, generalised Constraint-Based Lexicalised Grammar (GCBLG) that represents major, common syntactic characteristics of a variety of approaches to formal grammar and natural language processing (NLP). Our solution can be realised by any of the grammar formalisms in the CBLG class, e.g., Head-Driven Phrase Structure Grammar (HPSG), Lexical Functional Grammar (LFG), Categorial Grammar (CG). The type-theory of acyclic recursion L λ ar , provides facility for representing major semantic ambiguities, as underspecification, at the object level of the formal language of L λ ar , without recourse of metalanguage variables. Specific semantic representations can be obtained by instantiations of underspecified L λ arterms, in context. These are subject to constraints provided by a newly introduced feature-structure description of syntax-semantics interface in GCBLG. 
1 Introduction Ambiguity permeates human language, in all of its manifestations, by interdependences, across lexicon, syntax, semantics, discourse, context, etc. Alternative interpretations may persist even when specific context and discourse resolve or discard some specific instances in syntax and semantics. We present computational grammar that integrates lexicon, syntax, types, constraints, and semantics. The formal facilities of the grammar have components that integrate syntactic constructions with semantic representations. The syntax-semantic interface, internally in the grammar, handles some ambiguities as phenomena of underspecification in human language. We employ a computational grammar, which we call Generalised Constraint-Based Lexicalised Grammar (GCBLG). The formal system GCBLG uses feature-value descriptions and constraints in a grammar with a hierarchy of dependent types, which covers lexicon, phrasal structures, and semantic representations. In GCBLG, for the syntax, we use feature-value descriptions, similar to that in Sag et al. (2003), which are presented formally in Loukanova (2017a) as a class of formal languages designating mathematical structures of functional domains of linguistics information. GCBLG is a generalisation from major lexical and syntactic facilities of frameworks in the class of Constraint-Based Lexicalist Grammar (CBLG) approaches. To some extend, this is reminiscence of Vijay-Shanker and Weir (1994). We lift the idea of extending classic formal grammars to cover semantic representations with semantic underspecification via syntax-semantics interface within computational grammar. We introduce the technique here for varieties of grammar formalisms from the CBLG approach, in particular:",2019
ciaramita-johnson-2003-supersense,https://aclanthology.org/W03-1022,0,,,,,,,"Supersense Tagging of Unknown Nouns in WordNet. We present a new framework for classifying common nouns that extends named-entity classification. We used a fixed set of 26 semantic labels, which we called supersenses. These are the labels used by lexicographers developing WordNet. This framework has a number of practical advantages. We show how information contained in the dictionary can be used as additional training data that improves accuracy in learning new nouns. We also define a more realistic evaluation procedure than cross-validation.",Supersense Tagging of Unknown Nouns in {W}ord{N}et,"We present a new framework for classifying common nouns that extends named-entity classification. We used a fixed set of 26 semantic labels, which we called supersenses. These are the labels used by lexicographers developing WordNet. This framework has a number of practical advantages. We show how information contained in the dictionary can be used as additional training data that improves accuracy in learning new nouns. We also define a more realistic evaluation procedure than cross-validation.",Supersense Tagging of Unknown Nouns in WordNet,"We present a new framework for classifying common nouns that extends named-entity classification. We used a fixed set of 26 semantic labels, which we called supersenses. These are the labels used by lexicographers developing WordNet. This framework has a number of practical advantages. We show how information contained in the dictionary can be used as additional training data that improves accuracy in learning new nouns. We also define a more realistic evaluation procedure than cross-validation.",,"Supersense Tagging of Unknown Nouns in WordNet. We present a new framework for classifying common nouns that extends named-entity classification. We used a fixed set of 26 semantic labels, which we called supersenses. These are the labels used by lexicographers developing WordNet. This framework has a number of practical advantages. We show how information contained in the dictionary can be used as additional training data that improves accuracy in learning new nouns. We also define a more realistic evaluation procedure than cross-validation.",2003
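The supersense labels referred to above are WordNet's lexicographer file names. The short NLTK lookup below (assuming `nltk` and its `wordnet` corpus are installed) shows the label a known noun already carries; the paper's contribution is predicting such labels for unknown nouns.

```python
# Look up the lexicographer file name (supersense) of a noun's first sense.
# Requires nltk and a downloaded 'wordnet' corpus.
from nltk.corpus import wordnet as wn

for noun in ["elephant", "bicycle", "happiness"]:
    synset = wn.synsets(noun, pos=wn.NOUN)[0]      # first noun sense
    print(noun, "->", synset.lexname())            # e.g. elephant -> noun.animal
```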
xu-etal-2020-tero,https://aclanthology.org/2020.coling-main.139,0,,,,,,,"TeRo: A Time-aware Knowledge Graph Embedding via Temporal Rotation. In the last few years, there has been a surge of interest in learning representations of entities and relations in knowledge graph (KG). However, the recent availability of temporal knowledge graphs (TKGs) that contain time information for each fact created the need for reasoning over time in such TKGs. In this regard, we present a new approach of TKG embedding, TeRo, which defines the temporal evolution of entity embedding as a rotation from the initial time to the current time in the complex vector space. Specially, for facts involving time intervals, each relation is represented as a pair of dual complex embeddings to handle the beginning and the end of the relation, respectively. We show our proposed model overcomes the limitations of the existing KG embedding models and TKG embedding models and has the ability of learning and inferring various relation patterns over time. Experimental results on four different TKGs show that TeRo significantly outperforms existing state-of-the-art models for link prediction. In addition, we analyze the effect of time granularity on link prediction over TKGs, which as far as we know has not been investigated in previous literature.",{T}e{R}o: A Time-aware Knowledge Graph Embedding via Temporal Rotation,"In the last few years, there has been a surge of interest in learning representations of entities and relations in knowledge graph (KG). However, the recent availability of temporal knowledge graphs (TKGs) that contain time information for each fact created the need for reasoning over time in such TKGs. In this regard, we present a new approach of TKG embedding, TeRo, which defines the temporal evolution of entity embedding as a rotation from the initial time to the current time in the complex vector space. Specially, for facts involving time intervals, each relation is represented as a pair of dual complex embeddings to handle the beginning and the end of the relation, respectively. We show our proposed model overcomes the limitations of the existing KG embedding models and TKG embedding models and has the ability of learning and inferring various relation patterns over time. Experimental results on four different TKGs show that TeRo significantly outperforms existing state-of-the-art models for link prediction. In addition, we analyze the effect of time granularity on link prediction over TKGs, which as far as we know has not been investigated in previous literature.",TeRo: A Time-aware Knowledge Graph Embedding via Temporal Rotation,"In the last few years, there has been a surge of interest in learning representations of entities and relations in knowledge graph (KG). However, the recent availability of temporal knowledge graphs (TKGs) that contain time information for each fact created the need for reasoning over time in such TKGs. In this regard, we present a new approach of TKG embedding, TeRo, which defines the temporal evolution of entity embedding as a rotation from the initial time to the current time in the complex vector space. Specially, for facts involving time intervals, each relation is represented as a pair of dual complex embeddings to handle the beginning and the end of the relation, respectively. We show our proposed model overcomes the limitations of the existing KG embedding models and TKG embedding models and has the ability of learning and inferring various relation patterns over time. 
Experimental results on four different TKGs show that TeRo significantly outperforms existing state-of-the-art models for link prediction. In addition, we analyze the effect of time granularity on link prediction over TKGs, which as far as we know has not been investigated in previous literature.","This work is supported by the CLEOPATRA project (GA no. 812997), the German national funded BmBF project MLwin and the BOOST project.","TeRo: A Time-aware Knowledge Graph Embedding via Temporal Rotation. In the last few years, there has been a surge of interest in learning representations of entities and relations in knowledge graph (KG). However, the recent availability of temporal knowledge graphs (TKGs) that contain time information for each fact created the need for reasoning over time in such TKGs. In this regard, we present a new approach of TKG embedding, TeRo, which defines the temporal evolution of entity embedding as a rotation from the initial time to the current time in the complex vector space. Specially, for facts involving time intervals, each relation is represented as a pair of dual complex embeddings to handle the beginning and the end of the relation, respectively. We show our proposed model overcomes the limitations of the existing KG embedding models and TKG embedding models and has the ability of learning and inferring various relation patterns over time. Experimental results on four different TKGs show that TeRo significantly outperforms existing state-of-the-art models for link prediction. In addition, we analyze the effect of time granularity on link prediction over TKGs, which as far as we know has not been investigated in previous literature.",2020
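The core idea above, temporal evolution as an element-wise rotation in complex space, can be sketched numerically. The exact TeRo scoring function is not reproduced here; the distance below (rotated head plus relation versus the conjugated, rotated tail) is one plausible reading under stated assumptions and should be checked against the paper.

```python
# Schematic sketch: a unit-modulus temporal embedding rotates entity
# embeddings, and a translation-style distance scores the fact. The use of
# the conjugate tail and the L1 norm are assumptions made for illustration.
import numpy as np

d = 8
rng = np.random.default_rng(1)

def complex_vec():                  # random complex embedding
    return rng.normal(size=d) + 1j * rng.normal(size=d)

def rotation():                     # unit-modulus temporal rotation e^{i*theta}
    return np.exp(1j * rng.uniform(0, 2 * np.pi, size=d))

h, t, r, tau = complex_vec(), complex_vec(), complex_vec(), rotation()

h_tau = h * tau                     # rotate head to the query timestamp
t_tau = t * tau                     # rotate tail to the query timestamp
score = np.linalg.norm(h_tau + r - np.conj(t_tau), ord=1)  # lower = more plausible
print(score)
```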
nikolova-ma-2008-assistive,https://aclanthology.org/W08-0806,1,,,,industry_innovation_infrastructure,,,Assistive Mobile Communication Support. This paper reflects on our work in providing communication support for people with speech and language disabilities. We discuss the role of mobile technologies in assistive systems and share ongoing research efforts.,Assistive Mobile Communication Support,This paper reflects on our work in providing communication support for people with speech and language disabilities. We discuss the role of mobile technologies in assistive systems and share ongoing research efforts.,Assistive Mobile Communication Support,This paper reflects on our work in providing communication support for people with speech and language disabilities. We discuss the role of mobile technologies in assistive systems and share ongoing research efforts.,,Assistive Mobile Communication Support. This paper reflects on our work in providing communication support for people with speech and language disabilities. We discuss the role of mobile technologies in assistive systems and share ongoing research efforts.,2008
roy-2016-perception,https://aclanthology.org/W16-6340,0,,,,,,,"Perception of Phi-Phrase boundaries in Hindi.. This paper proposes an algorithm for finding phonological phrase boundaries in sentences with neutral focus spoken in both normal and fast tempos. A perceptual experiment is designed using Praat's experiment MFC program to investigate the phonological phrase boundaries. Phonological phrasing and its relation to syntactic structure in the framework of the endbased rules proposed by (Selkirk, 1986), and relation to purely phonological rules, i.e., the principle of increasing units proposed by (Ghini, 1993) are investigated. In addition to that, this paper explores the acoustic cues signalling phonological phrase boundaries in both normal and fast tempos speech. It is found that phonological phrasing in Hindi follows both endbased rule (Selkirk, 1986) and the principle of increasing units (Ghini, 1993). The end-based rules are used for phonological phrasing and the principle of increasing units is used for phonological phrase restructuring.",Perception of Phi-Phrase boundaries in {H}indi.,"This paper proposes an algorithm for finding phonological phrase boundaries in sentences with neutral focus spoken in both normal and fast tempos. A perceptual experiment is designed using Praat's experiment MFC program to investigate the phonological phrase boundaries. Phonological phrasing and its relation to syntactic structure in the framework of the endbased rules proposed by (Selkirk, 1986), and relation to purely phonological rules, i.e., the principle of increasing units proposed by (Ghini, 1993) are investigated. In addition to that, this paper explores the acoustic cues signalling phonological phrase boundaries in both normal and fast tempos speech. It is found that phonological phrasing in Hindi follows both endbased rule (Selkirk, 1986) and the principle of increasing units (Ghini, 1993). The end-based rules are used for phonological phrasing and the principle of increasing units is used for phonological phrase restructuring.",Perception of Phi-Phrase boundaries in Hindi.,"This paper proposes an algorithm for finding phonological phrase boundaries in sentences with neutral focus spoken in both normal and fast tempos. A perceptual experiment is designed using Praat's experiment MFC program to investigate the phonological phrase boundaries. Phonological phrasing and its relation to syntactic structure in the framework of the endbased rules proposed by (Selkirk, 1986), and relation to purely phonological rules, i.e., the principle of increasing units proposed by (Ghini, 1993) are investigated. In addition to that, this paper explores the acoustic cues signalling phonological phrase boundaries in both normal and fast tempos speech. It is found that phonological phrasing in Hindi follows both endbased rule (Selkirk, 1986) and the principle of increasing units (Ghini, 1993). The end-based rules are used for phonological phrasing and the principle of increasing units is used for phonological phrase restructuring.",,"Perception of Phi-Phrase boundaries in Hindi.. This paper proposes an algorithm for finding phonological phrase boundaries in sentences with neutral focus spoken in both normal and fast tempos. A perceptual experiment is designed using Praat's experiment MFC program to investigate the phonological phrase boundaries. 
Phonological phrasing and its relation to syntactic structure in the framework of the endbased rules proposed by (Selkirk, 1986), and relation to purely phonological rules, i.e., the principle of increasing units proposed by (Ghini, 1993) are investigated. In addition to that, this paper explores the acoustic cues signalling phonological phrase boundaries in both normal and fast tempos speech. It is found that phonological phrasing in Hindi follows both endbased rule (Selkirk, 1986) and the principle of increasing units (Ghini, 1993). The end-based rules are used for phonological phrasing and the principle of increasing units is used for phonological phrase restructuring.",2016
jayez-rossari-1998-discourse,https://aclanthology.org/W98-0313,0,,,,,,,"Discourse Relations versus Discourse Marker Relations. While it seems intuitively obvious that many discourse markers (DMs) are able to express discourse relations (DRs) which exist independently, the specific contribution of DMs-if any-is not clear. In this paper, we investigate the status of some consequence DMs in French. We observe that it is difficult to construct a clear and simple definition based on DRs for these DMs. Next, we show that the lexical constraints associated with such DMs extend far beyond simple compatibility with DRs. This suggests that the view of DMs as signaling general allpurpose DRs is to be seriously amended in favor of more precise descriptions of DMs, in which the compatibility with DRs is derived from a lexical semantic profile.",Discourse Relations versus Discourse Marker Relations,"While it seems intuitively obvious that many discourse markers (DMs) are able to express discourse relations (DRs) which exist independently, the specific contribution of DMs-if any-is not clear. In this paper, we investigate the status of some consequence DMs in French. We observe that it is difficult to construct a clear and simple definition based on DRs for these DMs. Next, we show that the lexical constraints associated with such DMs extend far beyond simple compatibility with DRs. This suggests that the view of DMs as signaling general allpurpose DRs is to be seriously amended in favor of more precise descriptions of DMs, in which the compatibility with DRs is derived from a lexical semantic profile.",Discourse Relations versus Discourse Marker Relations,"While it seems intuitively obvious that many discourse markers (DMs) are able to express discourse relations (DRs) which exist independently, the specific contribution of DMs-if any-is not clear. In this paper, we investigate the status of some consequence DMs in French. We observe that it is difficult to construct a clear and simple definition based on DRs for these DMs. Next, we show that the lexical constraints associated with such DMs extend far beyond simple compatibility with DRs. This suggests that the view of DMs as signaling general allpurpose DRs is to be seriously amended in favor of more precise descriptions of DMs, in which the compatibility with DRs is derived from a lexical semantic profile.",,"Discourse Relations versus Discourse Marker Relations. While it seems intuitively obvious that many discourse markers (DMs) are able to express discourse relations (DRs) which exist independently, the specific contribution of DMs-if any-is not clear. In this paper, we investigate the status of some consequence DMs in French. We observe that it is difficult to construct a clear and simple definition based on DRs for these DMs. Next, we show that the lexical constraints associated with such DMs extend far beyond simple compatibility with DRs. This suggests that the view of DMs as signaling general allpurpose DRs is to be seriously amended in favor of more precise descriptions of DMs, in which the compatibility with DRs is derived from a lexical semantic profile.",1998
liu-etal-2021-progressively,https://aclanthology.org/2021.emnlp-main.733,0,,,,,,,"Progressively Guide to Attend: An Iterative Alignment Framework for Temporal Sentence Grounding. A key solution to temporal sentence grounding (TSG) exists in how to learn effective alignment between vision and language features extracted from an untrimmed video and a sentence description. Existing methods mainly leverage vanilla soft attention to perform the alignment in a single-step process. However, such single-step attention is insufficient in practice, since complicated relations between inter-and intra-modality are usually obtained through multi-step reasoning. In this paper, we propose an Iterative Alignment Network (IA-Net) for TSG task, which iteratively interacts inter-and intra-modal features within multiple steps for more accurate grounding. Specifically, during the iterative reasoning process, we pad multi-modal features with learnable parameters to alleviate the nowhere-toattend problem of non-matched frame-word pairs, and enhance the basic co-attention mechanism in a parallel manner. To further calibrate the misaligned attention caused by each reasoning step, we also devise a calibration module following each attention module to refine the alignment knowledge. With such iterative alignment scheme, our IA-Net can robustly capture the fine-grained relations between vision and language domains step-bystep for progressively reasoning the temporal boundaries. Extensive experiments conducted on three challenging benchmarks demonstrate that our proposed model performs better than the state-of-the-arts.",Progressively Guide to Attend: An Iterative Alignment Framework for Temporal Sentence Grounding,"A key solution to temporal sentence grounding (TSG) exists in how to learn effective alignment between vision and language features extracted from an untrimmed video and a sentence description. Existing methods mainly leverage vanilla soft attention to perform the alignment in a single-step process. However, such single-step attention is insufficient in practice, since complicated relations between inter-and intra-modality are usually obtained through multi-step reasoning. In this paper, we propose an Iterative Alignment Network (IA-Net) for TSG task, which iteratively interacts inter-and intra-modal features within multiple steps for more accurate grounding. Specifically, during the iterative reasoning process, we pad multi-modal features with learnable parameters to alleviate the nowhere-toattend problem of non-matched frame-word pairs, and enhance the basic co-attention mechanism in a parallel manner. To further calibrate the misaligned attention caused by each reasoning step, we also devise a calibration module following each attention module to refine the alignment knowledge. With such iterative alignment scheme, our IA-Net can robustly capture the fine-grained relations between vision and language domains step-bystep for progressively reasoning the temporal boundaries. Extensive experiments conducted on three challenging benchmarks demonstrate that our proposed model performs better than the state-of-the-arts.",Progressively Guide to Attend: An Iterative Alignment Framework for Temporal Sentence Grounding,"A key solution to temporal sentence grounding (TSG) exists in how to learn effective alignment between vision and language features extracted from an untrimmed video and a sentence description. 
Existing methods mainly leverage vanilla soft attention to perform the alignment in a single-step process. However, such single-step attention is insufficient in practice, since complicated relations between inter-and intra-modality are usually obtained through multi-step reasoning. In this paper, we propose an Iterative Alignment Network (IA-Net) for TSG task, which iteratively interacts inter-and intra-modal features within multiple steps for more accurate grounding. Specifically, during the iterative reasoning process, we pad multi-modal features with learnable parameters to alleviate the nowhere-toattend problem of non-matched frame-word pairs, and enhance the basic co-attention mechanism in a parallel manner. To further calibrate the misaligned attention caused by each reasoning step, we also devise a calibration module following each attention module to refine the alignment knowledge. With such iterative alignment scheme, our IA-Net can robustly capture the fine-grained relations between vision and language domains step-bystep for progressively reasoning the temporal boundaries. Extensive experiments conducted on three challenging benchmarks demonstrate that our proposed model performs better than the state-of-the-arts.",This work was supported in part by the National Natural Science Foundation of China under grant No. 61972448.,"Progressively Guide to Attend: An Iterative Alignment Framework for Temporal Sentence Grounding. A key solution to temporal sentence grounding (TSG) exists in how to learn effective alignment between vision and language features extracted from an untrimmed video and a sentence description. Existing methods mainly leverage vanilla soft attention to perform the alignment in a single-step process. However, such single-step attention is insufficient in practice, since complicated relations between inter-and intra-modality are usually obtained through multi-step reasoning. In this paper, we propose an Iterative Alignment Network (IA-Net) for TSG task, which iteratively interacts inter-and intra-modal features within multiple steps for more accurate grounding. Specifically, during the iterative reasoning process, we pad multi-modal features with learnable parameters to alleviate the nowhere-toattend problem of non-matched frame-word pairs, and enhance the basic co-attention mechanism in a parallel manner. To further calibrate the misaligned attention caused by each reasoning step, we also devise a calibration module following each attention module to refine the alignment knowledge. With such iterative alignment scheme, our IA-Net can robustly capture the fine-grained relations between vision and language domains step-bystep for progressively reasoning the temporal boundaries. Extensive experiments conducted on three challenging benchmarks demonstrate that our proposed model performs better than the state-of-the-arts.",2021
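For readers unfamiliar with the cross-modal alignment vocabulary above, the snippet below shows one generic co-attention step between frame and word features in PyTorch. It is not a reimplementation of IA-Net, which iterates the interaction, pads with learnable slots for non-matched frame-word pairs, and adds calibration modules.

```python
# Generic single co-attention step between video-frame and word features,
# included only to make the alignment terminology concrete. Shapes and
# features are random placeholders.
import torch

B, T, N, D = 2, 16, 12, 256                  # batch, frames, words, feature dim
frames = torch.randn(B, T, D)
words = torch.randn(B, N, D)

sim = torch.einsum("btd,bnd->btn", frames, words) / D ** 0.5   # frame-word similarity
frame2word = torch.softmax(sim, dim=-1) @ words                # words attended per frame
word2frame = torch.softmax(sim.transpose(1, 2), dim=-1) @ frames

print(frame2word.shape, word2frame.shape)    # (B, T, D), (B, N, D)
```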
zhang-etal-2021-point,https://aclanthology.org/2021.acl-long.307,0,,,,,,,"Point, Disambiguate and Copy: Incorporating Bilingual Dictionaries for Neural Machine Translation. This paper proposes a sophisticated neural architecture to incorporate bilingual dictionaries into Neural Machine Translation (NMT) models. By introducing three novel components: Pointer, Disambiguator, and Copier, our method PDC achieves the following merits inherently compared with previous efforts: (1) Pointer leverages the semantic information from bilingual dictionaries, for the first time, to better locate source words whose translation in dictionaries can potentially be used; (2) Disambiguator synthesizes contextual information from the source view and the target view, both of which contribute to distinguishing the proper translation of a specific source word from multiple candidates in dictionaries; (3) Copier systematically connects Pointer and Disambiguator based on a hierarchical copy mechanism seamlessly integrated with Transformer, thereby building an end-to-end architecture that could avoid error propagation problems in alternative pipeline methods. The experimental results on Chinese-English and English-Japanese benchmarks demonstrate the PDC's overall superiority and effectiveness of each component.","Point, Disambiguate and Copy: Incorporating Bilingual Dictionaries for Neural Machine Translation","This paper proposes a sophisticated neural architecture to incorporate bilingual dictionaries into Neural Machine Translation (NMT) models. By introducing three novel components: Pointer, Disambiguator, and Copier, our method PDC achieves the following merits inherently compared with previous efforts: (1) Pointer leverages the semantic information from bilingual dictionaries, for the first time, to better locate source words whose translation in dictionaries can potentially be used; (2) Disambiguator synthesizes contextual information from the source view and the target view, both of which contribute to distinguishing the proper translation of a specific source word from multiple candidates in dictionaries; (3) Copier systematically connects Pointer and Disambiguator based on a hierarchical copy mechanism seamlessly integrated with Transformer, thereby building an end-to-end architecture that could avoid error propagation problems in alternative pipeline methods. The experimental results on Chinese-English and English-Japanese benchmarks demonstrate the PDC's overall superiority and effectiveness of each component.","Point, Disambiguate and Copy: Incorporating Bilingual Dictionaries for Neural Machine Translation","This paper proposes a sophisticated neural architecture to incorporate bilingual dictionaries into Neural Machine Translation (NMT) models. 
By introducing three novel components: Pointer, Disambiguator, and Copier, our method PDC achieves the following merits inherently compared with previous efforts: (1) Pointer leverages the semantic information from bilingual dictionaries, for the first time, to better locate source words whose translation in dictionaries can potentially be used; (2) Disambiguator synthesizes contextual information from the source view and the target view, both of which contribute to distinguishing the proper translation of a specific source word from multiple candidates in dictionaries; (3) Copier systematically connects Pointer and Disambiguator based on a hierarchical copy mechanism seamlessly integrated with Transformer, thereby building an end-to-end architecture that could avoid error propagation problems in alternative pipeline methods. The experimental results on Chinese-English and English-Japanese benchmarks demonstrate the PDC's overall superiority and effectiveness of each component.",We thank anonymous reviewers for valuable comments. This research was supported by the Na- ,"Point, Disambiguate and Copy: Incorporating Bilingual Dictionaries for Neural Machine Translation. This paper proposes a sophisticated neural architecture to incorporate bilingual dictionaries into Neural Machine Translation (NMT) models. By introducing three novel components: Pointer, Disambiguator, and Copier, our method PDC achieves the following merits inherently compared with previous efforts: (1) Pointer leverages the semantic information from bilingual dictionaries, for the first time, to better locate source words whose translation in dictionaries can potentially be used; (2) Disambiguator synthesizes contextual information from the source view and the target view, both of which contribute to distinguishing the proper translation of a specific source word from multiple candidates in dictionaries; (3) Copier systematically connects Pointer and Disambiguator based on a hierarchical copy mechanism seamlessly integrated with Transformer, thereby building an end-to-end architecture that could avoid error propagation problems in alternative pipeline methods. The experimental results on Chinese-English and English-Japanese benchmarks demonstrate the PDC's overall superiority and effectiveness of each component.",2021
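The Pointer/Disambiguator/Copier pipeline described in the entry above can be pictured with a generic copy-gate mixture in the style of pointer-generator decoders. The function below is a hedged sketch with invented names and inputs, not the paper's PDC implementation.

```python
import numpy as np

def copy_mix(p_vocab, p_pointer, dict_candidates, p_gate, vocab_size):
    """Blend a generation distribution with a dictionary-copy distribution.

    p_vocab: (V,) decoder softmax over the target vocabulary.
    p_pointer: (S,) attention over source positions (the "Pointer" role).
    dict_candidates: for each source position, the target id chosen by a
    disambiguation step (the "Disambiguator" role); -1 = no dictionary entry.
    p_gate: scalar in [0, 1], probability of copying (the "Copier" role).
    """
    p_copy = np.zeros(vocab_size)
    for pos, tgt_id in enumerate(dict_candidates):
        if tgt_id >= 0:
            p_copy[tgt_id] += p_pointer[pos]
    if p_copy.sum() > 0:
        p_copy /= p_copy.sum()
    return (1.0 - p_gate) * p_vocab + p_gate * p_copy
```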
bolanos-etal-2009-multi,https://aclanthology.org/N09-2026,0,,,,,,,"Multi-scale Personalization for Voice Search Applications. Voice Search applications provide a very convenient and direct access to a broad variety of services and information. However, due to the vast amount of information available and the open nature of the spoken queries, these applications still suffer from recognition errors. This paper explores the utilization of personalization features for the post-processing of recognition results in the form of n-best lists. Personalization is carried out from three different angles: short-term, long-term and Web-based, and a large variety of features are proposed for use in a log-linear classification framework. Experimental results on data obtained from a commercially deployed Voice Search system show that the combination of the proposed features leads to a substantial sentence error rate reduction. In addition, it is shown that personalization features which are very different in nature can successfully complement each other.",Multi-scale Personalization for Voice Search Applications,"Voice Search applications provide a very convenient and direct access to a broad variety of services and information. However, due to the vast amount of information available and the open nature of the spoken queries, these applications still suffer from recognition errors. This paper explores the utilization of personalization features for the post-processing of recognition results in the form of n-best lists. Personalization is carried out from three different angles: short-term, long-term and Web-based, and a large variety of features are proposed for use in a log-linear classification framework. Experimental results on data obtained from a commercially deployed Voice Search system show that the combination of the proposed features leads to a substantial sentence error rate reduction. In addition, it is shown that personalization features which are very different in nature can successfully complement each other.",Multi-scale Personalization for Voice Search Applications,"Voice Search applications provide a very convenient and direct access to a broad variety of services and information. However, due to the vast amount of information available and the open nature of the spoken queries, these applications still suffer from recognition errors. This paper explores the utilization of personalization features for the post-processing of recognition results in the form of n-best lists. Personalization is carried out from three different angles: short-term, long-term and Web-based, and a large variety of features are proposed for use in a log-linear classification framework. Experimental results on data obtained from a commercially deployed Voice Search system show that the combination of the proposed features leads to a substantial sentence error rate reduction. In addition, it is shown that personalization features which are very different in nature can successfully complement each other.",,"Multi-scale Personalization for Voice Search Applications. Voice Search applications provide a very convenient and direct access to a broad variety of services and information. However, due to the vast amount of information available and the open nature of the spoken queries, these applications still suffer from recognition errors. This paper explores the utilization of personalization features for the post-processing of recognition results in the form of n-best lists. 
Personalization is carried out from three different angles: short-term, long-term and Web-based, and a large variety of features are proposed for use in a log-linear classification framework. Experimental results on data obtained from a commercially deployed Voice Search system show that the combination of the proposed features leads to a substantial sentence error rate reduction. In addition, it is shown that personalization features which are very different in nature can successfully complement each other.",2009
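The post-processing step described in the entry above amounts to rescoring an n-best list with a log-linear model over personalization features. The sketch below illustrates that reranking step under assumed feature and weight vectors; it is not the deployed system.

```python
import numpy as np

def rerank_nbest(feature_matrix, weights):
    """Score each n-best hypothesis with a log-linear model and rerank.

    feature_matrix: (n, k) features per hypothesis (e.g. short-term history,
    long-term history, Web evidence, recognizer score); weights: (k,) learned
    log-linear weights. Feature names are illustrative, not the paper's set."""
    scores = feature_matrix @ weights
    order = np.argsort(-scores)            # best hypothesis first
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                   # normalized posteriors
    return order, probs

features = np.random.rand(10, 4)           # 10 hypotheses, 4 toy features
order, probs = rerank_nbest(features, np.array([1.0, 0.5, 0.5, 2.0]))
```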
peters-etal-2019-knowledge,https://aclanthology.org/D19-1005,0,,,,,,,"Knowledge Enhanced Contextual Word Representations. Contextual word representations, typically trained on unstructured, unlabeled text, do not contain any explicit grounding to real world entities and are often unable to remember facts about those entities. We propose a general method to embed multiple knowledge bases (KBs) into large scale models, and thereby enhance their representations with structured, human-curated knowledge. For each KB, we first use an integrated entity linker to retrieve relevant entity embeddings, then update contextual word representations via a form of word-to-entity attention. In contrast to previous approaches, the entity linkers and selfsupervised language modeling objective are jointly trained end-to-end in a multitask setting that combines a small amount of entity linking supervision with a large amount of raw text. After integrating WordNet and a subset of Wikipedia into BERT, the knowledge enhanced BERT (KnowBert) demonstrates improved perplexity, ability to recall facts as measured in a probing task and downstream performance on relationship extraction, entity typing, and word sense disambiguation. KnowBert's runtime is comparable to BERT's and it scales to large KBs.",Knowledge Enhanced Contextual Word Representations,"Contextual word representations, typically trained on unstructured, unlabeled text, do not contain any explicit grounding to real world entities and are often unable to remember facts about those entities. We propose a general method to embed multiple knowledge bases (KBs) into large scale models, and thereby enhance their representations with structured, human-curated knowledge. For each KB, we first use an integrated entity linker to retrieve relevant entity embeddings, then update contextual word representations via a form of word-to-entity attention. In contrast to previous approaches, the entity linkers and selfsupervised language modeling objective are jointly trained end-to-end in a multitask setting that combines a small amount of entity linking supervision with a large amount of raw text. After integrating WordNet and a subset of Wikipedia into BERT, the knowledge enhanced BERT (KnowBert) demonstrates improved perplexity, ability to recall facts as measured in a probing task and downstream performance on relationship extraction, entity typing, and word sense disambiguation. KnowBert's runtime is comparable to BERT's and it scales to large KBs.",Knowledge Enhanced Contextual Word Representations,"Contextual word representations, typically trained on unstructured, unlabeled text, do not contain any explicit grounding to real world entities and are often unable to remember facts about those entities. We propose a general method to embed multiple knowledge bases (KBs) into large scale models, and thereby enhance their representations with structured, human-curated knowledge. For each KB, we first use an integrated entity linker to retrieve relevant entity embeddings, then update contextual word representations via a form of word-to-entity attention. In contrast to previous approaches, the entity linkers and selfsupervised language modeling objective are jointly trained end-to-end in a multitask setting that combines a small amount of entity linking supervision with a large amount of raw text. 
After integrating WordNet and a subset of Wikipedia into BERT, the knowledge enhanced BERT (KnowBert) demonstrates improved perplexity, ability to recall facts as measured in a probing task and downstream performance on relationship extraction, entity typing, and word sense disambiguation. KnowBert's runtime is comparable to BERT's and it scales to large KBs.",The authors acknowledge helpful feedback from anonymous reviewers and the AllenNLP team. This research was funded in part by the NSF under awards IIS-1817183 and CNS-1730158.,"Knowledge Enhanced Contextual Word Representations. Contextual word representations, typically trained on unstructured, unlabeled text, do not contain any explicit grounding to real world entities and are often unable to remember facts about those entities. We propose a general method to embed multiple knowledge bases (KBs) into large scale models, and thereby enhance their representations with structured, human-curated knowledge. For each KB, we first use an integrated entity linker to retrieve relevant entity embeddings, then update contextual word representations via a form of word-to-entity attention. In contrast to previous approaches, the entity linkers and selfsupervised language modeling objective are jointly trained end-to-end in a multitask setting that combines a small amount of entity linking supervision with a large amount of raw text. After integrating WordNet and a subset of Wikipedia into BERT, the knowledge enhanced BERT (KnowBert) demonstrates improved perplexity, ability to recall facts as measured in a probing task and downstream performance on relationship extraction, entity typing, and word sense disambiguation. KnowBert's runtime is comparable to BERT's and it scales to large KBs.",2019
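The word-to-entity attention at the core of the entry above can be sketched as scaled dot-product attention from token vectors to candidate entity embeddings. The snippet below is a simplified stand-in with assumed shapes; it omits the entity linker, residual re-contextualization, and joint training described in the abstract.

```python
import numpy as np

def word_to_entity_attention(H, E, prior=None):
    """Hedged sketch of word-to-entity attention in the spirit of KnowBert.

    H: (n_tokens, d) projected contextual word vectors.
    E: (n_candidates, d) entity embeddings retrieved for the sentence.
    prior: optional (n_candidates,) linker prior scores added to the logits.
    Returns word vectors updated with attended entity information."""
    logits = H @ E.T / np.sqrt(H.shape[1])
    if prior is not None:
        logits = logits + prior
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    return H + attn @ E
```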
dimitroff-etal-2013-weighted,https://aclanthology.org/R13-1027,0,,,,,,,"Weighted maximum likelihood loss as a convenient shortcut to optimizing the F-measure of maximum entropy classifiers. We link the weighted maximum entropy and the optimization of the expected F βmeasure, by viewing them in the framework of a general common multi-criteria optimization problem. As a result, each solution of the expected F β-measure maximization can be realized as a weighted maximum likelihood solution-a well understood and behaved problem. The specific structure of maximum entropy models allows us to approximate this characterization via the much simpler class-wise weighted maximum likelihood. Our approach reveals any probabilistic learning scheme as a specific trade-off between different objectives and provides the framework to link it to the expected F β-measure.",Weighted maximum likelihood loss as a convenient shortcut to optimizing the {F}-measure of maximum entropy classifiers,"We link the weighted maximum entropy and the optimization of the expected F βmeasure, by viewing them in the framework of a general common multi-criteria optimization problem. As a result, each solution of the expected F β-measure maximization can be realized as a weighted maximum likelihood solution-a well understood and behaved problem. The specific structure of maximum entropy models allows us to approximate this characterization via the much simpler class-wise weighted maximum likelihood. Our approach reveals any probabilistic learning scheme as a specific trade-off between different objectives and provides the framework to link it to the expected F β-measure.",Weighted maximum likelihood loss as a convenient shortcut to optimizing the F-measure of maximum entropy classifiers,"We link the weighted maximum entropy and the optimization of the expected F βmeasure, by viewing them in the framework of a general common multi-criteria optimization problem. As a result, each solution of the expected F β-measure maximization can be realized as a weighted maximum likelihood solution-a well understood and behaved problem. The specific structure of maximum entropy models allows us to approximate this characterization via the much simpler class-wise weighted maximum likelihood. Our approach reveals any probabilistic learning scheme as a specific trade-off between different objectives and provides the framework to link it to the expected F β-measure.",,"Weighted maximum likelihood loss as a convenient shortcut to optimizing the F-measure of maximum entropy classifiers. We link the weighted maximum entropy and the optimization of the expected F βmeasure, by viewing them in the framework of a general common multi-criteria optimization problem. As a result, each solution of the expected F β-measure maximization can be realized as a weighted maximum likelihood solution-a well understood and behaved problem. The specific structure of maximum entropy models allows us to approximate this characterization via the much simpler class-wise weighted maximum likelihood. Our approach reveals any probabilistic learning scheme as a specific trade-off between different objectives and provides the framework to link it to the expected F β-measure.",2013
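A simple empirical stand-in for the link described in the entry above is to tune class weights in a weighted maximum-likelihood classifier and keep the weighting that maximizes F-beta. The scikit-learn sketch below does this on toy data (selecting on the training set for brevity, which a real experiment would do on held-out data); it illustrates the trade-off, not the paper's theoretical construction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import fbeta_score

# Toy imbalanced binary problem (~15-20% positives).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 1.0).astype(int)

# Sweep the positive-class weight (the trade-off knob) and keep the best F2.
best = max(
    (fbeta_score(y, LogisticRegression(class_weight={0: 1.0, 1: w})
                 .fit(X, y).predict(X), beta=2.0), w)
    for w in [1, 2, 4, 8, 16]
)
print("best F2 = %.3f at positive-class weight %s" % best)
```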
jang-mostow-2012-inferring,https://aclanthology.org/E12-1038,0,,,,,,,"Inferring Selectional Preferences from Part-Of-Speech N-grams. We present the PONG method to compute selectional preferences using part-of-speech (POS) N-grams. From a corpus labeled with grammatical dependencies, PONG learns the distribution of word relations for each POS N-gram. From the much larger but unlabeled Google N-grams corpus, PONG learns the distribution of POS N-grams for a given pair of words. We derive the probability that one word has a given grammatical relation to the other. PONG estimates this probability by combining both distributions, whether or not either word occurs in the labeled corpus. PONG achieves higher average precision on 16 relations than a state-of-the-art baseline in a pseudo-disambiguation task, but lower coverage and recall.",Inferring Selectional Preferences from Part-Of-Speech N-grams,"We present the PONG method to compute selectional preferences using part-of-speech (POS) N-grams. From a corpus labeled with grammatical dependencies, PONG learns the distribution of word relations for each POS N-gram. From the much larger but unlabeled Google N-grams corpus, PONG learns the distribution of POS N-grams for a given pair of words. We derive the probability that one word has a given grammatical relation to the other. PONG estimates this probability by combining both distributions, whether or not either word occurs in the labeled corpus. PONG achieves higher average precision on 16 relations than a state-of-the-art baseline in a pseudo-disambiguation task, but lower coverage and recall.",Inferring Selectional Preferences from Part-Of-Speech N-grams,"We present the PONG method to compute selectional preferences using part-of-speech (POS) N-grams. From a corpus labeled with grammatical dependencies, PONG learns the distribution of word relations for each POS N-gram. From the much larger but unlabeled Google N-grams corpus, PONG learns the distribution of POS N-grams for a given pair of words. We derive the probability that one word has a given grammatical relation to the other. PONG estimates this probability by combining both distributions, whether or not either word occurs in the labeled corpus. PONG achieves higher average precision on 16 relations than a state-of-the-art baseline in a pseudo-disambiguation task, but lower coverage and recall.","The research reported here was supported by the Institute of Education Sciences, U.S. Department of Education, through Grant R305A080157. The opinions expressed are those of the authors and do not necessarily represent the views of the Institute or the U.S. Department of Education. We thank the helpful reviewers and Katrin Erk for her generous assistance.","Inferring Selectional Preferences from Part-Of-Speech N-grams. We present the PONG method to compute selectional preferences using part-of-speech (POS) N-grams. From a corpus labeled with grammatical dependencies, PONG learns the distribution of word relations for each POS N-gram. From the much larger but unlabeled Google N-grams corpus, PONG learns the distribution of POS N-grams for a given pair of words. We derive the probability that one word has a given grammatical relation to the other. PONG estimates this probability by combining both distributions, whether or not either word occurs in the labeled corpus. PONG achieves higher average precision on 16 relations than a state-of-the-art baseline in a pseudo-disambiguation task, but lower coverage and recall.",2012
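The combination step described in the entry above, marginalizing over POS N-grams to score a grammatical relation for a word pair, can be written directly. The dictionary layout below is an assumption of this sketch, not PONG's data structures.

```python
from collections import defaultdict

def relation_probability(w1, w2, p_rel_given_gram, p_gram_given_pair):
    """Hedged sketch of combining the two distributions PONG learns.

    p_rel_given_gram[g][r]        ~ from the dependency-labeled corpus
    p_gram_given_pair[(w1, w2)][g] ~ from the unlabeled Google N-grams corpus
    Marginalizing over POS n-grams g gives P(relation r | w1, w2)."""
    scores = defaultdict(float)
    for gram, p_gram in p_gram_given_pair.get((w1, w2), {}).items():
        for rel, p_rel in p_rel_given_gram.get(gram, {}).items():
            scores[rel] += p_gram * p_rel
    return dict(scores)
```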
bian-etal-2021-attention,https://aclanthology.org/2021.naacl-main.72,0,,,,,,,"On Attention Redundancy: A Comprehensive Study. Multi-layer multi-head self-attention mechanism is widely applied in modern neural language models. Attention redundancy has been observed among attention heads but has not been deeply studied in the literature. Using BERT-base model as an example, this paper provides a comprehensive study on attention redundancy which is helpful for model interpretation and model compression. We analyze the attention redundancy with Five-Ws and How. (What) We define and focus the study on redundancy matrices generated from pre-trained and fine-tuned BERT-base model for GLUE datasets. (How) We use both token-based and sentence-based distance functions to measure the redundancy. (Where) Clear and similar redundancy patterns (cluster structure) are observed among attention heads. (When) Redundancy patterns are similar in both pre-training and fine-tuning phases. (Who) We discover that redundancy patterns are task-agnostic. Similar redundancy patterns even exist for randomly generated token sequences. (""Why"") We also evaluate influences of the pre-training dropout ratios on attention redundancy. Based on the phaseindependent and task-agnostic attention redundancy patterns, we propose a simple zero-shot pruning method as a case study. Experiments on fine-tuning GLUE tasks verify its effectiveness. The comprehensive analyses on attention redundancy make model understanding and zero-shot model pruning promising.",On Attention Redundancy: A Comprehensive Study,"Multi-layer multi-head self-attention mechanism is widely applied in modern neural language models. Attention redundancy has been observed among attention heads but has not been deeply studied in the literature. Using BERT-base model as an example, this paper provides a comprehensive study on attention redundancy which is helpful for model interpretation and model compression. We analyze the attention redundancy with Five-Ws and How. (What) We define and focus the study on redundancy matrices generated from pre-trained and fine-tuned BERT-base model for GLUE datasets. (How) We use both token-based and sentence-based distance functions to measure the redundancy. (Where) Clear and similar redundancy patterns (cluster structure) are observed among attention heads. (When) Redundancy patterns are similar in both pre-training and fine-tuning phases. (Who) We discover that redundancy patterns are task-agnostic. Similar redundancy patterns even exist for randomly generated token sequences. (""Why"") We also evaluate influences of the pre-training dropout ratios on attention redundancy. Based on the phaseindependent and task-agnostic attention redundancy patterns, we propose a simple zero-shot pruning method as a case study. Experiments on fine-tuning GLUE tasks verify its effectiveness. The comprehensive analyses on attention redundancy make model understanding and zero-shot model pruning promising.",On Attention Redundancy: A Comprehensive Study,"Multi-layer multi-head self-attention mechanism is widely applied in modern neural language models. Attention redundancy has been observed among attention heads but has not been deeply studied in the literature. Using BERT-base model as an example, this paper provides a comprehensive study on attention redundancy which is helpful for model interpretation and model compression. We analyze the attention redundancy with Five-Ws and How. 
(What) We define and focus the study on redundancy matrices generated from pre-trained and fine-tuned BERT-base model for GLUE datasets. (How) We use both token-based and sentence-based distance functions to measure the redundancy. (Where) Clear and similar redundancy patterns (cluster structure) are observed among attention heads. (When) Redundancy patterns are similar in both pre-training and fine-tuning phases. (Who) We discover that redundancy patterns are task-agnostic. Similar redundancy patterns even exist for randomly generated token sequences. (""Why"") We also evaluate influences of the pre-training dropout ratios on attention redundancy. Based on the phaseindependent and task-agnostic attention redundancy patterns, we propose a simple zero-shot pruning method as a case study. Experiments on fine-tuning GLUE tasks verify its effectiveness. The comprehensive analyses on attention redundancy make model understanding and zero-shot model pruning promising.",,"On Attention Redundancy: A Comprehensive Study. Multi-layer multi-head self-attention mechanism is widely applied in modern neural language models. Attention redundancy has been observed among attention heads but has not been deeply studied in the literature. Using BERT-base model as an example, this paper provides a comprehensive study on attention redundancy which is helpful for model interpretation and model compression. We analyze the attention redundancy with Five-Ws and How. (What) We define and focus the study on redundancy matrices generated from pre-trained and fine-tuned BERT-base model for GLUE datasets. (How) We use both token-based and sentence-based distance functions to measure the redundancy. (Where) Clear and similar redundancy patterns (cluster structure) are observed among attention heads. (When) Redundancy patterns are similar in both pre-training and fine-tuning phases. (Who) We discover that redundancy patterns are task-agnostic. Similar redundancy patterns even exist for randomly generated token sequences. (""Why"") We also evaluate influences of the pre-training dropout ratios on attention redundancy. Based on the phaseindependent and task-agnostic attention redundancy patterns, we propose a simple zero-shot pruning method as a case study. Experiments on fine-tuning GLUE tasks verify its effectiveness. The comprehensive analyses on attention redundancy make model understanding and zero-shot model pruning promising.",2021
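One plausible token-based redundancy measure in the spirit of the study above is the mean Jensen-Shannon distance between the attention distributions of two heads. The sketch below computes a head-by-head redundancy matrix for a single input; it is not necessarily the paper's exact distance function.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def head_redundancy(attentions):
    """Pairwise redundancy between attention heads for one input.

    attentions: (n_heads, seq_len, seq_len) row-stochastic attention maps,
    e.g. the 144 heads of BERT-base flattened onto one axis. Returns an
    (n_heads, n_heads) matrix of mean Jensen-Shannon distances; small
    distance = highly redundant heads."""
    n, seq_len, _ = attentions.shape
    red = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = np.mean([jensenshannon(attentions[i, t], attentions[j, t])
                         for t in range(seq_len)])
            red[i, j] = red[j, i] = d
    return red
```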
guo-kok-2021-bique,https://aclanthology.org/2021.emnlp-main.657,0,,,,,,,"BiQUE: Biquaternionic Embeddings of Knowledge Graphs. Knowledge graph embeddings (KGEs) compactly encode multi-relational knowledge graphs (KGs). Existing KGE models rely on geometric operations to model relational patterns. Euclidean (circular) rotation is useful for modeling patterns such as symmetry, but cannot represent hierarchical semantics. In contrast, hyperbolic models are effective at modeling hierarchical relations, but do not perform as well on patterns on which circular rotation excels. It is crucial for KGE models to unify multiple geometric transformations so as to fully cover the multifarious relations in KGs. To do so, we propose BiQUE, a novel model that employs biquaternions to integrate multiple geometric transformations, viz., scaling, translation, Euclidean rotation, and hyperbolic rotation. BiQUE makes the best tradeoffs among geometric operators during training, picking the best one (or their best combination) for each relation. Experiments on five datasets show BiQUE's effectiveness.",{BiQUE}: {B}iquaternionic Embeddings of Knowledge Graphs,"Knowledge graph embeddings (KGEs) compactly encode multi-relational knowledge graphs (KGs). Existing KGE models rely on geometric operations to model relational patterns. Euclidean (circular) rotation is useful for modeling patterns such as symmetry, but cannot represent hierarchical semantics. In contrast, hyperbolic models are effective at modeling hierarchical relations, but do not perform as well on patterns on which circular rotation excels. It is crucial for KGE models to unify multiple geometric transformations so as to fully cover the multifarious relations in KGs. To do so, we propose BiQUE, a novel model that employs biquaternions to integrate multiple geometric transformations, viz., scaling, translation, Euclidean rotation, and hyperbolic rotation. BiQUE makes the best tradeoffs among geometric operators during training, picking the best one (or their best combination) for each relation. Experiments on five datasets show BiQUE's effectiveness.",BiQUE: Biquaternionic Embeddings of Knowledge Graphs,"Knowledge graph embeddings (KGEs) compactly encode multi-relational knowledge graphs (KGs). Existing KGE models rely on geometric operations to model relational patterns. Euclidean (circular) rotation is useful for modeling patterns such as symmetry, but cannot represent hierarchical semantics. In contrast, hyperbolic models are effective at modeling hierarchical relations, but do not perform as well on patterns on which circular rotation excels. It is crucial for KGE models to unify multiple geometric transformations so as to fully cover the multifarious relations in KGs. To do so, we propose BiQUE, a novel model that employs biquaternions to integrate multiple geometric transformations, viz., scaling, translation, Euclidean rotation, and hyperbolic rotation. BiQUE makes the best tradeoffs among geometric operators during training, picking the best one (or their best combination) for each relation. Experiments on five datasets show BiQUE's effectiveness.","This research is partly supported by MOE's AcRF Tier 1 Grant to Stanley Kok. Any opinions, findings, conclusions, or recommendations expressed herein are solely those of the authors.","BiQUE: Biquaternionic Embeddings of Knowledge Graphs. Knowledge graph embeddings (KGEs) compactly encode multi-relational knowledge graphs (KGs). 
Existing KGE models rely on geometric operations to model relational patterns. Euclidean (circular) rotation is useful for modeling patterns such as symmetry, but cannot represent hierarchical semantics. In contrast, hyperbolic models are effective at modeling hierarchical relations, but do not perform as well on patterns on which circular rotation excels. It is crucial for KGE models to unify multiple geometric transformations so as to fully cover the multifarious relations in KGs. To do so, we propose BiQUE, a novel model that employs biquaternions to integrate multiple geometric transformations, viz., scaling, translation, Euclidean rotation, and hyperbolic rotation. BiQUE makes the best tradeoffs among geometric operators during training, picking the best one (or their best combination) for each relation. Experiments on five datasets show BiQUE's effectiveness.",2021
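Quaternion rotation is one of the geometric operations the entry above builds on. The snippet below shows only the ordinary Hamilton product as a building block; it does not reproduce BiQUE's biquaternion (complex-coefficient) formulation or its learned combination of scaling, translation, and hyperbolic rotation.

```python
import numpy as np

def hamilton_product(q, r):
    """Hamilton product of two quaternions given as (w, x, y, z) arrays."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

head = np.array([0.9, 0.1, 0.2, 0.3])      # toy entity embedding (one quaternion)
rel = np.array([0.7, 0.0, 0.5, 0.1])
rel = rel / np.linalg.norm(rel)            # unit quaternion = pure rotation
rotated_head = hamilton_product(head, rel)
```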
ghosh-srivastava-2022-epic,https://aclanthology.org/2022.acl-long.276,0,,,,,,,"ePiC: Employing Proverbs in Context as a Benchmark for Abstract Language Understanding. While large language models have shown exciting progress on several NLP benchmarks, evaluating their ability for complex analogical reasoning remains under-explored. Here, we introduce a high-quality crowdsourced dataset of narratives for employing proverbs in context as a benchmark for abstract language understanding. The dataset provides fine-grained annotation of aligned spans between proverbs and narratives, and contains minimal lexical overlaps between narratives and proverbs, ensuring that models need to go beyond surfacelevel reasoning to succeed. We explore three tasks: (1) proverb recommendation and alignment prediction, (2) narrative generation for a given proverb and topic, and (3) identifying narratives with similar motifs. Our experiments show that neural language models struggle on these tasks compared to humans, and these tasks pose multiple learning challenges.",e{P}i{C}: Employing Proverbs in Context as a Benchmark for Abstract Language Understanding,"While large language models have shown exciting progress on several NLP benchmarks, evaluating their ability for complex analogical reasoning remains under-explored. Here, we introduce a high-quality crowdsourced dataset of narratives for employing proverbs in context as a benchmark for abstract language understanding. The dataset provides fine-grained annotation of aligned spans between proverbs and narratives, and contains minimal lexical overlaps between narratives and proverbs, ensuring that models need to go beyond surfacelevel reasoning to succeed. We explore three tasks: (1) proverb recommendation and alignment prediction, (2) narrative generation for a given proverb and topic, and (3) identifying narratives with similar motifs. Our experiments show that neural language models struggle on these tasks compared to humans, and these tasks pose multiple learning challenges.",ePiC: Employing Proverbs in Context as a Benchmark for Abstract Language Understanding,"While large language models have shown exciting progress on several NLP benchmarks, evaluating their ability for complex analogical reasoning remains under-explored. Here, we introduce a high-quality crowdsourced dataset of narratives for employing proverbs in context as a benchmark for abstract language understanding. The dataset provides fine-grained annotation of aligned spans between proverbs and narratives, and contains minimal lexical overlaps between narratives and proverbs, ensuring that models need to go beyond surfacelevel reasoning to succeed. We explore three tasks: (1) proverb recommendation and alignment prediction, (2) narrative generation for a given proverb and topic, and (3) identifying narratives with similar motifs. Our experiments show that neural language models struggle on these tasks compared to humans, and these tasks pose multiple learning challenges.",,"ePiC: Employing Proverbs in Context as a Benchmark for Abstract Language Understanding. While large language models have shown exciting progress on several NLP benchmarks, evaluating their ability for complex analogical reasoning remains under-explored. Here, we introduce a high-quality crowdsourced dataset of narratives for employing proverbs in context as a benchmark for abstract language understanding. 
The dataset provides fine-grained annotation of aligned spans between proverbs and narratives, and contains minimal lexical overlaps between narratives and proverbs, ensuring that models need to go beyond surfacelevel reasoning to succeed. We explore three tasks: (1) proverb recommendation and alignment prediction, (2) narrative generation for a given proverb and topic, and (3) identifying narratives with similar motifs. Our experiments show that neural language models struggle on these tasks compared to humans, and these tasks pose multiple learning challenges.",2022
ousidhoum-etal-2021-probing,https://aclanthology.org/2021.acl-long.329,1,,,,hate_speech,,,"Probing Toxic Content in Large Pre-Trained Language Models. Large pre-trained language models (PTLMs) have been shown to carry biases towards different social groups which leads to the reproduction of stereotypical and toxic content by major NLP systems. We propose a method based on logistic regression classifiers to probe English, French, and Arabic PTLMs and quantify the potentially harmful content that they convey with respect to a set of templates. The templates are prompted by a name of a social group followed by a cause-effect relation. We use PTLMs to predict masked tokens at the end of a sentence in order to examine how likely they enable toxicity towards specific communities. We shed the light on how such negative content can be triggered within unrelated and benign contexts based on evidence from a large-scale study, then we explain how to take advantage of our methodology to assess and mitigate the toxicity transmitted by PTLMs.",Probing Toxic Content in Large Pre-Trained Language Models,"Large pre-trained language models (PTLMs) have been shown to carry biases towards different social groups which leads to the reproduction of stereotypical and toxic content by major NLP systems. We propose a method based on logistic regression classifiers to probe English, French, and Arabic PTLMs and quantify the potentially harmful content that they convey with respect to a set of templates. The templates are prompted by a name of a social group followed by a cause-effect relation. We use PTLMs to predict masked tokens at the end of a sentence in order to examine how likely they enable toxicity towards specific communities. We shed the light on how such negative content can be triggered within unrelated and benign contexts based on evidence from a large-scale study, then we explain how to take advantage of our methodology to assess and mitigate the toxicity transmitted by PTLMs.",Probing Toxic Content in Large Pre-Trained Language Models,"Large pre-trained language models (PTLMs) have been shown to carry biases towards different social groups which leads to the reproduction of stereotypical and toxic content by major NLP systems. We propose a method based on logistic regression classifiers to probe English, French, and Arabic PTLMs and quantify the potentially harmful content that they convey with respect to a set of templates. The templates are prompted by a name of a social group followed by a cause-effect relation. We use PTLMs to predict masked tokens at the end of a sentence in order to examine how likely they enable toxicity towards specific communities. We shed the light on how such negative content can be triggered within unrelated and benign contexts based on evidence from a large-scale study, then we explain how to take advantage of our methodology to assess and mitigate the toxicity transmitted by PTLMs.","We thank the annotators and anonymous reviewers and meta-reviewer for their valuable feedback.This paper was supported by the Theme-based Research Scheme Project (T31-604/18-N), the NSFC Grant (No. U20B2053) from China, the Early Career Scheme (ECS, No. 26206717), the General Research Fund (GRF, No. 16211520), and the Research Impact Fund (RIF, from the Research Grants Council (RGC) of Hong Kong.","Probing Toxic Content in Large Pre-Trained Language Models. 
Large pre-trained language models (PTLMs) have been shown to carry biases towards different social groups which leads to the reproduction of stereotypical and toxic content by major NLP systems. We propose a method based on logistic regression classifiers to probe English, French, and Arabic PTLMs and quantify the potentially harmful content that they convey with respect to a set of templates. The templates are prompted by a name of a social group followed by a cause-effect relation. We use PTLMs to predict masked tokens at the end of a sentence in order to examine how likely they enable toxicity towards specific communities. We shed the light on how such negative content can be triggered within unrelated and benign contexts based on evidence from a large-scale study, then we explain how to take advantage of our methodology to assess and mitigate the toxicity transmitted by PTLMs.",2021
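Template-based probing of a masked LM, as described in the entry above, can be illustrated with the Hugging Face fill-mask pipeline. The template below is a neutral placeholder invented for this sketch, not one of the paper's prompts, and the downstream toxicity scoring is only indicated in a comment.

```python
from transformers import pipeline

# Predict the masked token at the end of a cause-effect template.
unmasker = pipeline("fill-mask", model="bert-base-uncased")
template = "people from this group are angry because they [MASK]."
for pred in unmasker(template, top_k=5):
    print(f"{pred['token_str']:>12s}  {pred['score']:.4f}")
# The predicted fillers would then be scored by a toxicity classifier
# (the paper trains logistic regression probes) to quantify harmful content.
```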
och-ney-2001-statistical,https://aclanthology.org/2001.mtsummit-papers.46,0,,,,,,,"Statistical multi-source translation. We describe methods for translating a text given in multiple source languages into a single target language. The goal is to improve translation quality in applications where the ultimate goal is to translate the same document into many languages. We describe a statistical approach and two specific statistical models to deal with this problem. Our method is generally applicable as it is independent of specific models, languages or application domains. We evaluate the approach on a multilingual corpus covering all eleven official European Union languages that was collected automatically from the Internet. In various tests we show that these methods can significantly improve translation quality. As a side effect, we also compare the quality of statistical machine translation systems for many European languages in the same domain.",Statistical multi-source translation,"We describe methods for translating a text given in multiple source languages into a single target language. The goal is to improve translation quality in applications where the ultimate goal is to translate the same document into many languages. We describe a statistical approach and two specific statistical models to deal with this problem. Our method is generally applicable as it is independent of specific models, languages or application domains. We evaluate the approach on a multilingual corpus covering all eleven official European Union languages that was collected automatically from the Internet. In various tests we show that these methods can significantly improve translation quality. As a side effect, we also compare the quality of statistical machine translation systems for many European languages in the same domain.",Statistical multi-source translation,"We describe methods for translating a text given in multiple source languages into a single target language. The goal is to improve translation quality in applications where the ultimate goal is to translate the same document into many languages. We describe a statistical approach and two specific statistical models to deal with this problem. Our method is generally applicable as it is independent of specific models, languages or application domains. We evaluate the approach on a multilingual corpus covering all eleven official European Union languages that was collected automatically from the Internet. In various tests we show that these methods can significantly improve translation quality. As a side effect, we also compare the quality of statistical machine translation systems for many European languages in the same domain.",,"Statistical multi-source translation. We describe methods for translating a text given in multiple source languages into a single target language. The goal is to improve translation quality in applications where the ultimate goal is to translate the same document into many languages. We describe a statistical approach and two specific statistical models to deal with this problem. Our method is generally applicable as it is independent of specific models, languages or application domains. We evaluate the approach on a multilingual corpus covering all eleven official European Union languages that was collected automatically from the Internet. In various tests we show that these methods can significantly improve translation quality. 
As a side effect, we also compare the quality of statistical machine translation systems for many European languages in the same domain.",2001
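One simple multi-source combination scheme in the spirit of the entry above is to pick the target hypothesis best supported jointly by all source-language sentences. The helper below assumes a generic log_score(e, f) interface rather than the paper's specific statistical models.

```python
def best_multisource_translation(hypotheses, sources, log_score):
    """hypotheses: candidate target sentences (e.g. pooled from per-language
    systems); sources: the same sentence in several source languages;
    log_score(e, f) -> log P(e | f). Summing the per-language log scores
    (multiplying posteriors) is one simple combination, not the paper's exact model."""
    return max(hypotheses, key=lambda e: sum(log_score(e, f) for f in sources))
```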
bin-wasi-etal-2014-cmuq,https://aclanthology.org/S14-2029,0,,,,,,,"CMUQ@Qatar:Using Rich Lexical Features for Sentiment Analysis on Twitter. In this paper, we describe our system for the Sentiment Analysis of Twitter shared task in SemEval 2014. Our system uses an SVM classifier along with rich set of lexical features to detect the sentiment of a phrase within a tweet (Task-A) and also the sentiment of the whole tweet (Task-B). We start from the lexical features that were used in the 2013 shared tasks, we enhance the underlying lexicon and also introduce new features. We focus our feature engineering effort mainly on Task-A. Moreover, we adapt our initial framework and introduce new features for Task-B. Our system reaches weighted score of 87.11% in Task-A and 64.52% in Task-B. This places us in the 4th rank in the Task-A and 15th in the Task-B.",{CMUQ}@{Q}atar:Using Rich Lexical Features for Sentiment Analysis on {T}witter,"In this paper, we describe our system for the Sentiment Analysis of Twitter shared task in SemEval 2014. Our system uses an SVM classifier along with rich set of lexical features to detect the sentiment of a phrase within a tweet (Task-A) and also the sentiment of the whole tweet (Task-B). We start from the lexical features that were used in the 2013 shared tasks, we enhance the underlying lexicon and also introduce new features. We focus our feature engineering effort mainly on Task-A. Moreover, we adapt our initial framework and introduce new features for Task-B. Our system reaches weighted score of 87.11% in Task-A and 64.52% in Task-B. This places us in the 4th rank in the Task-A and 15th in the Task-B.",CMUQ@Qatar:Using Rich Lexical Features for Sentiment Analysis on Twitter,"In this paper, we describe our system for the Sentiment Analysis of Twitter shared task in SemEval 2014. Our system uses an SVM classifier along with rich set of lexical features to detect the sentiment of a phrase within a tweet (Task-A) and also the sentiment of the whole tweet (Task-B). We start from the lexical features that were used in the 2013 shared tasks, we enhance the underlying lexicon and also introduce new features. We focus our feature engineering effort mainly on Task-A. Moreover, we adapt our initial framework and introduce new features for Task-B. Our system reaches weighted score of 87.11% in Task-A and 64.52% in Task-B. This places us in the 4th rank in the Task-A and 15th in the Task-B.",We would like to thank Kemal Oflazer and the shared task organizers for their support throughout this work.,"CMUQ@Qatar:Using Rich Lexical Features for Sentiment Analysis on Twitter. In this paper, we describe our system for the Sentiment Analysis of Twitter shared task in SemEval 2014. Our system uses an SVM classifier along with rich set of lexical features to detect the sentiment of a phrase within a tweet (Task-A) and also the sentiment of the whole tweet (Task-B). We start from the lexical features that were used in the 2013 shared tasks, we enhance the underlying lexicon and also introduce new features. We focus our feature engineering effort mainly on Task-A. Moreover, we adapt our initial framework and introduce new features for Task-B. Our system reaches weighted score of 87.11% in Task-A and 64.52% in Task-B. This places us in the 4th rank in the Task-A and 15th in the Task-B.",2014
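Combining n-gram features with lexicon counts in front of an SVM, as the system above does, can be sketched with scikit-learn. The tiny lexicon and toy tweets below are placeholders, not the enhanced lexicon or feature set from the paper.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.svm import LinearSVC

POSITIVE = {"good", "great", "love", "happy"}      # toy lexicon entries
NEGATIVE = {"bad", "terrible", "hate", "sad"}

def lexicon_counts(texts):
    """Two features per tweet: counts of positive and negative lexicon hits."""
    return np.array([[sum(w in POSITIVE for w in t.lower().split()),
                      sum(w in NEGATIVE for w in t.lower().split())]
                     for t in texts])

model = Pipeline([
    ("features", FeatureUnion([
        ("ngrams", TfidfVectorizer(ngram_range=(1, 2))),
        ("lexicon", FunctionTransformer(lexicon_counts)),
    ])),
    ("svm", LinearSVC()),
])

tweets = ["I love this phone", "terrible battery, hate it",
          "pretty good overall", "so sad and bad"]
labels = ["positive", "negative", "positive", "negative"]
model.fit(tweets, labels)
print(model.predict(["what a great day"]))
```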
ngo-ho-yvon-2020-generative,https://aclanthology.org/2020.amta-research.6,0,,,,,,,"Generative latent neural models for automatic word alignment. Word alignments identify translational correspondences between words in a parallel sentence pair and are used, for instance, to learn bilingual dictionaries, to train statistical machine translation systems or to perform quality estimation. Variational autoencoders have been recently used in various of natural language processing to learn in an unsupervised way latent representations that are useful for language generation tasks. In this paper, we study these models for the task of word alignment and propose and assess several evolutions of a vanilla variational autoencoders. We demonstrate that these techniques can yield competitive results as compared to Giza++ and to a strong neural network alignment system for two language pairs.",Generative latent neural models for automatic word alignment,"Word alignments identify translational correspondences between words in a parallel sentence pair and are used, for instance, to learn bilingual dictionaries, to train statistical machine translation systems or to perform quality estimation. Variational autoencoders have been recently used in various of natural language processing to learn in an unsupervised way latent representations that are useful for language generation tasks. In this paper, we study these models for the task of word alignment and propose and assess several evolutions of a vanilla variational autoencoders. We demonstrate that these techniques can yield competitive results as compared to Giza++ and to a strong neural network alignment system for two language pairs.",Generative latent neural models for automatic word alignment,"Word alignments identify translational correspondences between words in a parallel sentence pair and are used, for instance, to learn bilingual dictionaries, to train statistical machine translation systems or to perform quality estimation. Variational autoencoders have been recently used in various of natural language processing to learn in an unsupervised way latent representations that are useful for language generation tasks. In this paper, we study these models for the task of word alignment and propose and assess several evolutions of a vanilla variational autoencoders. We demonstrate that these techniques can yield competitive results as compared to Giza++ and to a strong neural network alignment system for two language pairs."," 2 We omit the initial step, consisting in sampling the lengths I and J and the dependencies wrt. these variables.","Generative latent neural models for automatic word alignment. Word alignments identify translational correspondences between words in a parallel sentence pair and are used, for instance, to learn bilingual dictionaries, to train statistical machine translation systems or to perform quality estimation. Variational autoencoders have been recently used in various of natural language processing to learn in an unsupervised way latent representations that are useful for language generation tasks. In this paper, we study these models for the task of word alignment and propose and assess several evolutions of a vanilla variational autoencoders. We demonstrate that these techniques can yield competitive results as compared to Giza++ and to a strong neural network alignment system for two language pairs.",2020
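The variational-autoencoder machinery the entry above relies on reduces to the reparameterization trick plus an ELBO with a closed-form Gaussian KL term. The sketch below shows those two generic pieces only; it is not the paper's alignment model.

```python
import numpy as np

def reparameterize(mu, log_var, rng=None):
    """Sample z ~ N(mu, diag(sigma^2)) via the reparameterization trick."""
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.normal(size=np.shape(mu))
    return np.asarray(mu) + np.exp(0.5 * np.asarray(log_var)) * eps

def elbo(log_px_given_z, mu, log_var):
    """ELBO = reconstruction log-likelihood minus KL(N(mu, sigma^2) || N(0, I))."""
    mu, log_var = np.asarray(mu), np.asarray(log_var)
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
    return log_px_given_z - kl
```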
ochitani-etal-1997-goal,https://aclanthology.org/W97-0708,0,,,,,,,"Goal-Directed Approach for Text Summarization. The information to include in a summary varies depending on the author's intention and the use of the summary. To create the best summaries, the appropriate goals of the extracting process should be set and a guide should be outlined that instructs the system how to meet the tasks. The approach described in this report is intended to be a basic architecture to extract a set of concise sentences that are indicated or predicted by goals and contexts. To evaluate a sentence, the sentence selection algorithm simply measures the informativeness of each sentence by comparing with the determined goals, and the algorithm extracts a set of the highest scored sentences by repeated application of this comparison. This approach is applied in the summary of newspaper articles. The headlines are used as the goals. Also the method to extract characteristic sentences by using property information of text is shown. In this experiment in which Japanese news articles are summarized, the summaries consist of about 30% of the original text. On average, this method extracts 50% less text than the simple title-keyword method.",Goal-Directed Approach for Text Summarization,"The information to include in a summary varies depending on the author's intention and the use of the summary. To create the best summaries, the appropriate goals of the extracting process should be set and a guide should be outlined that instructs the system how to meet the tasks. The approach described in this report is intended to be a basic architecture to extract a set of concise sentences that are indicated or predicted by goals and contexts. To evaluate a sentence, the sentence selection algorithm simply measures the informativeness of each sentence by comparing with the determined goals, and the algorithm extracts a set of the highest scored sentences by repeated application of this comparison. This approach is applied in the summary of newspaper articles. The headlines are used as the goals. Also the method to extract characteristic sentences by using property information of text is shown. In this experiment in which Japanese news articles are summarized, the summaries consist of about 30% of the original text. On average, this method extracts 50% less text than the simple title-keyword method.",Goal-Directed Approach for Text Summarization,"The information to include in a summary varies depending on the author's intention and the use of the summary. To create the best summaries, the appropriate goals of the extracting process should be set and a guide should be outlined that instructs the system how to meet the tasks. The approach described in this report is intended to be a basic architecture to extract a set of concise sentences that are indicated or predicted by goals and contexts. To evaluate a sentence, the sentence selection algorithm simply measures the informativeness of each sentence by comparing with the determined goals, and the algorithm extracts a set of the highest scored sentences by repeated application of this comparison. This approach is applied in the summary of newspaper articles. The headlines are used as the goals. Also the method to extract characteristic sentences by using property information of text is shown. In this experiment in which Japanese news articles are summarized, the summaries consist of about 30% of the original text. On average, this method extracts 50% less text than the simple title-keyword method.",,"Goal-Directed Approach for Text Summarization. 
The information to include in a summary varies depending on the author's intention and the use of the summary. To create the best summaries, the appropriate goals of the extracting process should be set and a guide should be outlined that instructs the system how to meet the tasks. The approach described in this report is intended to be a basic architecture to extract a set of concise sentences that are indicated or predicted by goals and contexts. To evaluate a sentence, the sentence selection algorithm simply measures the informativeness of each sentence by comparing with the determined goals, and the algorithm extracts a set of the highest scored sentences by repeated application of this comparison. This approach is applied in the summary of newspaper articles. The headlines are used as the goals. Also the method to extract characteristic sentences by using property information of text is shown. In this experiment in which Japanese news articles are summarized, the summaries consist of about 30% of the original text. On average, this method extracts 50% less text than the simple title-keyword method.",1997
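The goal-directed selection described in the entry above can be approximated by scoring each sentence against headline terms and keeping the top share. The function below is a minimal sketch with an assumed 30% ratio and word-overlap scoring, not the original system.

```python
def summarize(headline, sentences, ratio=0.3):
    """Score sentences by overlap with headline terms (the "goal") and keep
    the top share, returned in article order."""
    goal = set(headline.lower().split())
    scored = sorted(
        ((len(goal & set(s.lower().split())), i, s) for i, s in enumerate(sentences)),
        reverse=True,
    )
    keep = max(1, int(len(sentences) * ratio))
    chosen = sorted(scored[:keep], key=lambda t: t[1])   # restore article order
    return [s for _, _, s in chosen]
```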
kuhn-2004-experiments,https://aclanthology.org/P04-1060,0,,,,,,,"Experiments in parallel-text based grammar induction. This paper discusses the use of statistical word alignment over multiple parallel texts for the identification of string spans that cannot be constituents in one of the languages. This information is exploited in monolingual PCFG grammar induction for that language, within an augmented version of the inside-outside algorithm. Besides the aligned corpus, no other resources are required. We discuss an implemented system and present experimental results with an evaluation against the Penn Treebank.",Experiments in parallel-text based grammar induction,"This paper discusses the use of statistical word alignment over multiple parallel texts for the identification of string spans that cannot be constituents in one of the languages. This information is exploited in monolingual PCFG grammar induction for that language, within an augmented version of the inside-outside algorithm. Besides the aligned corpus, no other resources are required. We discuss an implemented system and present experimental results with an evaluation against the Penn Treebank.",Experiments in parallel-text based grammar induction,"This paper discusses the use of statistical word alignment over multiple parallel texts for the identification of string spans that cannot be constituents in one of the languages. This information is exploited in monolingual PCFG grammar induction for that language, within an augmented version of the inside-outside algorithm. Besides the aligned corpus, no other resources are required. We discuss an implemented system and present experimental results with an evaluation against the Penn Treebank.",,"Experiments in parallel-text based grammar induction. This paper discusses the use of statistical word alignment over multiple parallel texts for the identification of string spans that cannot be constituents in one of the languages. This information is exploited in monolingual PCFG grammar induction for that language, within an augmented version of the inside-outside algorithm. Besides the aligned corpus, no other resources are required. We discuss an implemented system and present experimental results with an evaluation against the Penn Treebank.",2004
edmonds-1997-choosing,https://aclanthology.org/P97-1067,0,,,,,,,"Choosing the Word Most Typical in Context Using a Lexical Co-occurrence Network. This paper presents a partial solution to a component of the problem of lexical choice: choosing the synonym most typical, or expected, in context. We apply a new statistical approach to representing the context of a word through lexical co-occurrence networks. The implementation was trained and evaluated on a large corpus, and results show that the inclusion of second-order co-occurrence relations improves the performance of our implemented lexical choice program.",Choosing the Word Most Typical in Context Using a Lexical Co-occurrence Network,"This paper presents a partial solution to a component of the problem of lexical choice: choosing the synonym most typical, or expected, in context. We apply a new statistical approach to representing the context of a word through lexical co-occurrence networks. The implementation was trained and evaluated on a large corpus, and results show that the inclusion of second-order co-occurrence relations improves the performance of our implemented lexical choice program.",Choosing the Word Most Typical in Context Using a Lexical Co-occurrence Network,"This paper presents a partial solution to a component of the problem of lexical choice: choosing the synonym most typical, or expected, in context. We apply a new statistical approach to representing the context of a word through lexical co-occurrence networks. The implementation was trained and evaluated on a large corpus, and results show that the inclusion of second-order co-occurrence relations improves the performance of our implemented lexical choice program.","For comments and advice, I thank Graeme Hirst, Eduard Hovy, and Stephen Green. This work is financially supported by the Natural Sciences and Engineering Council of Canada.","Choosing the Word Most Typical in Context Using a Lexical Co-occurrence Network. This paper presents a partial solution to a component of the problem of lexical choice: choosing the synonym most typical, or expected, in context. We apply a new statistical approach to representing the context of a word through lexical co-occurrence networks. The implementation was trained and evaluated on a large corpus, and results show that the inclusion of second-order co-occurrence relations improves the performance of our implemented lexical choice program.",1997
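A lexical co-occurrence network of the kind used in the entry above can be approximated with windowed counts, scoring a synonym candidate by its first-order links to the context plus a down-weighted second-order term (shared neighbours). The window size and weights are assumptions of this sketch.

```python
from collections import Counter, defaultdict

def build_cooccurrence(sentences, window=5):
    """First-order co-occurrence counts from a corpus (toy sketch)."""
    counts = defaultdict(Counter)
    for sent in sentences:
        words = sent.lower().split()
        for i, w in enumerate(words):
            for v in words[max(0, i - window): i + window + 1]:
                if v != w:
                    counts[w][v] += 1
    return counts

def typicality(candidate, context_words, counts):
    """Score a candidate by direct co-occurrence with the context plus a
    weaker second-order contribution through shared neighbours."""
    first = sum(counts[candidate][c] for c in context_words)
    second = sum(counts[candidate][n] * counts[n][c]
                 for c in context_words for n in counts[candidate])
    return first + 0.1 * second
```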
mackinlay-2005-using,https://aclanthology.org/U05-1011,0,,,,,,,"Using Diverse Information Sources to Retrieve Samples of Low Density Languages. Language samples are useful as an object of study for a diverse range of people. Samples of low-density languages in particular are often valuable in their own right, yet it is these samples which are most difficult to locate, especially in a vast repository of information such as the World Wide Web. We identify here some shortcomings to the more obvious approaches to locating such samples and present an alternative technique based on a search query using publicly available wordlists augmented with geospatial evidence, and show that the technique is successful for a number of languages.",Using Diverse Information Sources to Retrieve Samples of Low Density Languages,"Language samples are useful as an object of study for a diverse range of people. Samples of low-density languages in particular are often valuable in their own right, yet it is these samples which are most difficult to locate, especially in a vast repository of information such as the World Wide Web. We identify here some shortcomings to the more obvious approaches to locating such samples and present an alternative technique based on a search query using publicly available wordlists augmented with geospatial evidence, and show that the technique is successful for a number of languages.",Using Diverse Information Sources to Retrieve Samples of Low Density Languages,"Language samples are useful as an object of study for a diverse range of people. Samples of low-density languages in particular are often valuable in their own right, yet it is these samples which are most difficult to locate, especially in a vast repository of information such as the World Wide Web. We identify here some shortcomings to the more obvious approaches to locating such samples and present an alternative technique based on a search query using publicly available wordlists augmented with geospatial evidence, and show that the technique is successful for a number of languages.",,"Using Diverse Information Sources to Retrieve Samples of Low Density Languages. Language samples are useful as an object of study for a diverse range of people. Samples of low-density languages in particular are often valuable in their own right, yet it is these samples which are most difficult to locate, especially in a vast repository of information such as the World Wide Web. We identify here some shortcomings to the more obvious approaches to locating such samples and present an alternative technique based on a search query using publicly available wordlists augmented with geospatial evidence, and show that the technique is successful for a number of languages.",2005
chung-etal-2014-sampling,https://aclanthology.org/J14-1007,0,,,,,,,"Sampling Tree Fragments from Forests. We study the problem of sampling trees from forests, in the setting where probabilities for each tree may be a function of arbitrarily large tree fragments. This setting extends recent work for sampling to learn Tree Substitution Grammars to the case where the tree structure (TSG derived tree) is not fixed. We develop a Markov chain Monte Carlo algorithm which corrects for the bias introduced by unbalanced forests, and we present experiments using the algorithm to learn Synchronous Context-Free Grammar rules for machine translation. In this application, the forests being sampled represent the set of Hiero-style rules that are consistent with fixed input word-level alignments. We demonstrate equivalent machine translation performance to standard techniques but with much smaller grammars.",Sampling Tree Fragments from Forests,"We study the problem of sampling trees from forests, in the setting where probabilities for each tree may be a function of arbitrarily large tree fragments. This setting extends recent work for sampling to learn Tree Substitution Grammars to the case where the tree structure (TSG derived tree) is not fixed. We develop a Markov chain Monte Carlo algorithm which corrects for the bias introduced by unbalanced forests, and we present experiments using the algorithm to learn Synchronous Context-Free Grammar rules for machine translation. In this application, the forests being sampled represent the set of Hiero-style rules that are consistent with fixed input word-level alignments. We demonstrate equivalent machine translation performance to standard techniques but with much smaller grammars.",Sampling Tree Fragments from Forests,"We study the problem of sampling trees from forests, in the setting where probabilities for each tree may be a function of arbitrarily large tree fragments. This setting extends recent work for sampling to learn Tree Substitution Grammars to the case where the tree structure (TSG derived tree) is not fixed. We develop a Markov chain Monte Carlo algorithm which corrects for the bias introduced by unbalanced forests, and we present experiments using the algorithm to learn Synchronous Context-Free Grammar rules for machine translation. In this application, the forests being sampled represent the set of Hiero-style rules that are consistent with fixed input word-level alignments. We demonstrate equivalent machine translation performance to standard techniques but with much smaller grammars.","1 We randomly sampled our data from various different sources (LDC2006E86, LDC2006E93, LDC2002E18, LDC2002L27, LDC2003E07, LDC2003E14, LDC2004T08, LDC2005T06, LDC2005T10, LDC2005T34, LDC2006E26, LDC2005E83, LDC2006E34, LDC2006E85, LDC2006E92, LDC2006E24, LDC2006E92, LDC2006E24). The language model is trained on the English side of entire data (1.65M sentences, which is 39.3M words).","Sampling Tree Fragments from Forests. We study the problem of sampling trees from forests, in the setting where probabilities for each tree may be a function of arbitrarily large tree fragments. This setting extends recent work for sampling to learn Tree Substitution Grammars to the case where the tree structure (TSG derived tree) is not fixed. We develop a Markov chain Monte Carlo algorithm which corrects for the bias introduced by unbalanced forests, and we present experiments using the algorithm to learn Synchronous Context-Free Grammar rules for machine translation. 
In this application, the forests being sampled represent the set of Hiero-style rules that are consistent with fixed input word-level alignments. We demonstrate equivalent machine translation performance to standard techniques but with much smaller grammars.",2014
antona-tsujii-1993-treatment,https://aclanthology.org/1993.tmi-1.11,0,,,,,,,"Treatment of Tense and Aspect in Translation from Italian to Greek --- An Example of Treatment of Implicit Information in Knowledge-based Transfer MT ---. Treatment of tense and aspect is one of the well-known difficulties in MT, since individual languages differ as to their temporal and aspectual systems and do not allow simple correspondence of verbal forms of two languages. An approach to time suitable for MT has been elaborated in the EUROTRA project (e.g. [van Eynde 1988] ) which avoids a direct mapping of forms by:",Treatment of Tense and Aspect in Translation from {I}talian to {G}reek {---} An Example of Treatment of Implicit Information in Knowledge-based Transfer {MT} {---},"Treatment of tense and aspect is one of the well-known difficulties in MT, since individual languages differ as to their temporal and aspectual systems and do not allow simple correspondence of verbal forms of two languages. An approach to time suitable for MT has been elaborated in the EUROTRA project (e.g. [van Eynde 1988] ) which avoids a direct mapping of forms by:",Treatment of Tense and Aspect in Translation from Italian to Greek --- An Example of Treatment of Implicit Information in Knowledge-based Transfer MT ---,"Treatment of tense and aspect is one of the well-known difficulties in MT, since individual languages differ as to their temporal and aspectual systems and do not allow simple correspondence of verbal forms of two languages. An approach to time suitable for MT has been elaborated in the EUROTRA project (e.g. [van Eynde 1988] ) which avoids a direct mapping of forms by:",We are grateful to Sophia Ananiadou for her comments on an earlier draft of the paper and for examples of Greek translations.,"Treatment of Tense and Aspect in Translation from Italian to Greek --- An Example of Treatment of Implicit Information in Knowledge-based Transfer MT ---. Treatment of tense and aspect is one of the well-known difficulties in MT, since individual languages differ as to their temporal and aspectual systems and do not allow simple correspondence of verbal forms of two languages. An approach to time suitable for MT has been elaborated in the EUROTRA project (e.g. [van Eynde 1988] ) which avoids a direct mapping of forms by:",1993
granfeldt-etal-2006-cefle,http://www.lrec-conf.org/proceedings/lrec2006/pdf/246_pdf.pdf,1,,,,education,,,"CEFLE and Direkt Profil: a New Computer Learner Corpus in French L2 and a System for Grammatical Profiling. The importance of computer learner corpora for research in both second language acquisition and foreign language teaching is rapidly increasing. Computer learner corpora can provide us with data to describe the learner's interlanguage system at different points of its development and they can be used to create pedagogical tools. In this paper, we first present a new computer learner corpora in French. We then describe an analyzer called Direkt Profil, that we have developed using this corpus. The system carries out a sentence analysis based on developmental sequences, i.e. local morphosyntactic phenomena linked to a development in the acquisition of French as a foreign language. We present a brief introduction to developmental sequences and some examples in French. In the final section, we introduce and evaluate a method to optimize the definition and detection of learner profiles using machine-learning techniques.",{CEFLE} and Direkt Profil: a New Computer Learner Corpus in {F}rench {L}2 and a System for Grammatical Profiling,"The importance of computer learner corpora for research in both second language acquisition and foreign language teaching is rapidly increasing. Computer learner corpora can provide us with data to describe the learner's interlanguage system at different points of its development and they can be used to create pedagogical tools. In this paper, we first present a new computer learner corpora in French. We then describe an analyzer called Direkt Profil, that we have developed using this corpus. The system carries out a sentence analysis based on developmental sequences, i.e. local morphosyntactic phenomena linked to a development in the acquisition of French as a foreign language. We present a brief introduction to developmental sequences and some examples in French. In the final section, we introduce and evaluate a method to optimize the definition and detection of learner profiles using machine-learning techniques.",CEFLE and Direkt Profil: a New Computer Learner Corpus in French L2 and a System for Grammatical Profiling,"The importance of computer learner corpora for research in both second language acquisition and foreign language teaching is rapidly increasing. Computer learner corpora can provide us with data to describe the learner's interlanguage system at different points of its development and they can be used to create pedagogical tools. In this paper, we first present a new computer learner corpora in French. We then describe an analyzer called Direkt Profil, that we have developed using this corpus. The system carries out a sentence analysis based on developmental sequences, i.e. local morphosyntactic phenomena linked to a development in the acquisition of French as a foreign language. We present a brief introduction to developmental sequences and some examples in French. 
In the final section, we introduce and evaluate a method to optimize the definition and detection of learner profiles using machine-learning techniques.","The research presented here is supported by a grant from the Swedish Research Council, grant number 2004-1674 to the first author and by grants from the Elisabeth Rausing foundation for research in the Humanities and from Erik Philip-Sörenssens foundation for research.","CEFLE and Direkt Profil: a New Computer Learner Corpus in French L2 and a System for Grammatical Profiling. The importance of computer learner corpora for research in both second language acquisition and foreign language teaching is rapidly increasing. Computer learner corpora can provide us with data to describe the learner's interlanguage system at different points of its development and they can be used to create pedagogical tools. In this paper, we first present a new computer learner corpora in French. We then describe an analyzer called Direkt Profil, that we have developed using this corpus. The system carries out a sentence analysis based on developmental sequences, i.e. local morphosyntactic phenomena linked to a development in the acquisition of French as a foreign language. We present a brief introduction to developmental sequences and some examples in French. In the final section, we introduce and evaluate a method to optimize the definition and detection of learner profiles using machine-learning techniques.",2006
zhao-etal-2016-textual,https://aclanthology.org/C16-1212,0,,,,,,,"Textual Entailment with Structured Attentions and Composition. Deep learning techniques are increasingly popular in the textual entailment task, overcoming the fragility of traditional discrete models with hard alignments and logics. In particular, the recently proposed attention models (Rocktäschel et al., 2015; Wang and Jiang, 2015) achieves state-of-the-art accuracy by computing soft word alignments between the premise and hypothesis sentences. However, there remains a major limitation: this line of work completely ignores syntax and recursion, which is helpful in many traditional efforts. We show that it is beneficial to extend the attention model to tree nodes between premise and hypothesis. More importantly, this subtree-level attention reveals information about entailment relation. We study the recursive composition of this subtree-level entailment relation, which can be viewed as a soft version of the Natural Logic framework (MacCartney and Manning, 2009). Experiments show that our structured attention and entailment composition model can correctly identify and infer entailment relations from the bottom up, and bring significant improvements in accuracy.",Textual Entailment with Structured Attentions and Composition,"Deep learning techniques are increasingly popular in the textual entailment task, overcoming the fragility of traditional discrete models with hard alignments and logics. In particular, the recently proposed attention models (Rocktäschel et al., 2015; Wang and Jiang, 2015) achieves state-of-the-art accuracy by computing soft word alignments between the premise and hypothesis sentences. However, there remains a major limitation: this line of work completely ignores syntax and recursion, which is helpful in many traditional efforts. We show that it is beneficial to extend the attention model to tree nodes between premise and hypothesis. More importantly, this subtree-level attention reveals information about entailment relation. We study the recursive composition of this subtree-level entailment relation, which can be viewed as a soft version of the Natural Logic framework (MacCartney and Manning, 2009). Experiments show that our structured attention and entailment composition model can correctly identify and infer entailment relations from the bottom up, and bring significant improvements in accuracy.",Textual Entailment with Structured Attentions and Composition,"Deep learning techniques are increasingly popular in the textual entailment task, overcoming the fragility of traditional discrete models with hard alignments and logics. In particular, the recently proposed attention models (Rocktäschel et al., 2015; Wang and Jiang, 2015) achieves state-of-the-art accuracy by computing soft word alignments between the premise and hypothesis sentences. However, there remains a major limitation: this line of work completely ignores syntax and recursion, which is helpful in many traditional efforts. We show that it is beneficial to extend the attention model to tree nodes between premise and hypothesis. More importantly, this subtree-level attention reveals information about entailment relation. We study the recursive composition of this subtree-level entailment relation, which can be viewed as a soft version of the Natural Logic framework (MacCartney and Manning, 2009). 
Experiments show that our structured attention and entailment composition model can correctly identify and infer entailment relations from the bottom up, and bring significant improvements in accuracy.","We thank the anonymous reviewers for helpful comments. We are also grateful to James Cross, Dezhong Deng, and Lemao Liu for suggestions. This project was supported in part by NSF IIS-1656051, DARPA FA8750-13-2-0041 (DEFT), and a Google Faculty Research Award.","Textual Entailment with Structured Attentions and Composition. Deep learning techniques are increasingly popular in the textual entailment task, overcoming the fragility of traditional discrete models with hard alignments and logics. In particular, the recently proposed attention models (Rocktäschel et al., 2015; Wang and Jiang, 2015) achieves state-of-the-art accuracy by computing soft word alignments between the premise and hypothesis sentences. However, there remains a major limitation: this line of work completely ignores syntax and recursion, which is helpful in many traditional efforts. We show that it is beneficial to extend the attention model to tree nodes between premise and hypothesis. More importantly, this subtree-level attention reveals information about entailment relation. We study the recursive composition of this subtree-level entailment relation, which can be viewed as a soft version of the Natural Logic framework (MacCartney and Manning, 2009). Experiments show that our structured attention and entailment composition model can correctly identify and infer entailment relations from the bottom up, and bring significant improvements in accuracy.",2016
jin-etal-2022-good,https://aclanthology.org/2022.acl-long.197,0,,,,,,,"A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models. Large pre-trained vision-language (VL) models can learn a new task with a handful of examples and generalize to a new task without fine-tuning. However, these VL models are hard to deploy for real-world applications due to their impractically huge sizes and slow inference speed. To solve this limitation, we study prompt-based low-resource learning of VL tasks with our proposed method, FEWVLM, relatively smaller than recent fewshot learners. For FEWVLM, we pre-train a sequence-to-sequence transformer model with prefix language modeling (PrefixLM) and masked language modeling (MaskedLM). Furthermore, we analyze the effect of diverse prompts for few-shot tasks. Experimental results on VQA show that FEWVLM with prompt-based learning outperforms Frozen (Tsimpoukelli et al., 2021) which is 31× larger than FEWVLM by 18.2% point and achieves comparable results to a 246× larger model, PICa (Yang et al., 2021). In our analysis, we observe that (1) prompts significantly affect zero-shot performance but marginally affect few-shot performance, (2) models with noisy prompts learn as quickly as hand-crafted prompts given larger training data, and (3) MaskedLM helps VQA tasks while PrefixLM boosts captioning performance. Our code is publicly available at https://github. com/woojeongjin/FewVLM * Work was mainly done while interning at Microsoft Azure AI.",A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models,"Large pre-trained vision-language (VL) models can learn a new task with a handful of examples and generalize to a new task without fine-tuning. However, these VL models are hard to deploy for real-world applications due to their impractically huge sizes and slow inference speed. To solve this limitation, we study prompt-based low-resource learning of VL tasks with our proposed method, FEWVLM, relatively smaller than recent fewshot learners. For FEWVLM, we pre-train a sequence-to-sequence transformer model with prefix language modeling (PrefixLM) and masked language modeling (MaskedLM). Furthermore, we analyze the effect of diverse prompts for few-shot tasks. Experimental results on VQA show that FEWVLM with prompt-based learning outperforms Frozen (Tsimpoukelli et al., 2021) which is 31× larger than FEWVLM by 18.2% point and achieves comparable results to a 246× larger model, PICa (Yang et al., 2021). In our analysis, we observe that (1) prompts significantly affect zero-shot performance but marginally affect few-shot performance, (2) models with noisy prompts learn as quickly as hand-crafted prompts given larger training data, and (3) MaskedLM helps VQA tasks while PrefixLM boosts captioning performance. Our code is publicly available at https://github. com/woojeongjin/FewVLM * Work was mainly done while interning at Microsoft Azure AI.",A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models,"Large pre-trained vision-language (VL) models can learn a new task with a handful of examples and generalize to a new task without fine-tuning. However, these VL models are hard to deploy for real-world applications due to their impractically huge sizes and slow inference speed. 
To solve this limitation, we study prompt-based low-resource learning of VL tasks with our proposed method, FEWVLM, relatively smaller than recent fewshot learners. For FEWVLM, we pre-train a sequence-to-sequence transformer model with prefix language modeling (PrefixLM) and masked language modeling (MaskedLM). Furthermore, we analyze the effect of diverse prompts for few-shot tasks. Experimental results on VQA show that FEWVLM with prompt-based learning outperforms Frozen (Tsimpoukelli et al., 2021) which is 31× larger than FEWVLM by 18.2% point and achieves comparable results to a 246× larger model, PICa (Yang et al., 2021). In our analysis, we observe that (1) prompts significantly affect zero-shot performance but marginally affect few-shot performance, (2) models with noisy prompts learn as quickly as hand-crafted prompts given larger training data, and (3) MaskedLM helps VQA tasks while PrefixLM boosts captioning performance. Our code is publicly available at https://github. com/woojeongjin/FewVLM * Work was mainly done while interning at Microsoft Azure AI.",,"A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models. Large pre-trained vision-language (VL) models can learn a new task with a handful of examples and generalize to a new task without fine-tuning. However, these VL models are hard to deploy for real-world applications due to their impractically huge sizes and slow inference speed. To solve this limitation, we study prompt-based low-resource learning of VL tasks with our proposed method, FEWVLM, relatively smaller than recent fewshot learners. For FEWVLM, we pre-train a sequence-to-sequence transformer model with prefix language modeling (PrefixLM) and masked language modeling (MaskedLM). Furthermore, we analyze the effect of diverse prompts for few-shot tasks. Experimental results on VQA show that FEWVLM with prompt-based learning outperforms Frozen (Tsimpoukelli et al., 2021) which is 31× larger than FEWVLM by 18.2% point and achieves comparable results to a 246× larger model, PICa (Yang et al., 2021). In our analysis, we observe that (1) prompts significantly affect zero-shot performance but marginally affect few-shot performance, (2) models with noisy prompts learn as quickly as hand-crafted prompts given larger training data, and (3) MaskedLM helps VQA tasks while PrefixLM boosts captioning performance. Our code is publicly available at https://github. com/woojeongjin/FewVLM * Work was mainly done while interning at Microsoft Azure AI.",2022
cheng-etal-2020-exploiting,https://aclanthology.org/2020.rocling-1.27,0,,,,,,,"Exploiting Text Prompts for the Development of an End-to-End Computer-Assisted Pronunciation Training System. More recently, there is a growing demand for the development of computer assisted pronunciation training (CAPT) systems, which can be capitalized to automatically assess the pronunciation quality of L2 learners. However, current CAPT systems that build on end-to-end (E2E) neural network architectures still fall short of expectation for the detection of",Exploiting Text Prompts for the Development of an End-to-End Computer-Assisted Pronunciation Training System,"More recently, there is a growing demand for the development of computer assisted pronunciation training (CAPT) systems, which can be capitalized to automatically assess the pronunciation quality of L2 learners. However, current CAPT systems that build on end-to-end (E2E) neural network architectures still fall short of expectation for the detection of",Exploiting Text Prompts for the Development of an End-to-End Computer-Assisted Pronunciation Training System,"More recently, there is a growing demand for the development of computer assisted pronunciation training (CAPT) systems, which can be capitalized to automatically assess the pronunciation quality of L2 learners. However, current CAPT systems that build on end-to-end (E2E) neural network architectures still fall short of expectation for the detection of",,"Exploiting Text Prompts for the Development of an End-to-End Computer-Assisted Pronunciation Training System. More recently, there is a growing demand for the development of computer assisted pronunciation training (CAPT) systems, which can be capitalized to automatically assess the pronunciation quality of L2 learners. However, current CAPT systems that build on end-to-end (E2E) neural network architectures still fall short of expectation for the detection of",2020
mohammad-2018-word,https://aclanthology.org/L18-1027,0,,,,,,,"Word Affect Intensities. Words often convey affect-emotions, feelings, and attitudes. Further, different words can convey affect to various degrees (intensities). However, existing manually created lexicons for basic emotions (such as anger and fear) indicate only coarse categories of affect association (for example, associated with anger or not associated with anger). Automatic lexicons of affect provide fine degrees of association, but they tend not to be accurate as human-created lexicons. Here, for the first time, we present a manually created affect intensity lexicon with real-valued scores of intensity for four basic emotions: anger, fear, joy, and sadness. (We will subsequently add entries for more emotions such as disgust, anticipation, trust, and surprise.) We refer to this dataset as the NRC Affect Intensity Lexicon, or AIL for short. AIL has entries for close to 6,000 English words. We used a technique called best-worst scaling (BWS) to create the lexicon. BWS improves annotation consistency and obtains reliable fine-grained scores (split-half reliability > 0.91). We also compare the entries in AIL with the entries in the NRC VAD Lexicon, which has valence, arousal, and dominance (VAD) scores for 20K English words. We find that anger, fear, and sadness words, on average, have very similar VAD scores. However, sadness words tend to have slightly lower dominance scores than fear and anger words. The Affect Intensity Lexicon has applications in automatic emotion analysis in a number of domains such as commerce, education, intelligence, and public health. AIL is also useful in the building of natural language generation systems.",Word Affect Intensities,"Words often convey affect-emotions, feelings, and attitudes. Further, different words can convey affect to various degrees (intensities). However, existing manually created lexicons for basic emotions (such as anger and fear) indicate only coarse categories of affect association (for example, associated with anger or not associated with anger). Automatic lexicons of affect provide fine degrees of association, but they tend not to be accurate as human-created lexicons. Here, for the first time, we present a manually created affect intensity lexicon with real-valued scores of intensity for four basic emotions: anger, fear, joy, and sadness. (We will subsequently add entries for more emotions such as disgust, anticipation, trust, and surprise.) We refer to this dataset as the NRC Affect Intensity Lexicon, or AIL for short. AIL has entries for close to 6,000 English words. We used a technique called best-worst scaling (BWS) to create the lexicon. BWS improves annotation consistency and obtains reliable fine-grained scores (split-half reliability > 0.91). We also compare the entries in AIL with the entries in the NRC VAD Lexicon, which has valence, arousal, and dominance (VAD) scores for 20K English words. We find that anger, fear, and sadness words, on average, have very similar VAD scores. However, sadness words tend to have slightly lower dominance scores than fear and anger words. The Affect Intensity Lexicon has applications in automatic emotion analysis in a number of domains such as commerce, education, intelligence, and public health. AIL is also useful in the building of natural language generation systems.",Word Affect Intensities,"Words often convey affect-emotions, feelings, and attitudes. Further, different words can convey affect to various degrees (intensities). 
However, existing manually created lexicons for basic emotions (such as anger and fear) indicate only coarse categories of affect association (for example, associated with anger or not associated with anger). Automatic lexicons of affect provide fine degrees of association, but they tend not to be accurate as human-created lexicons. Here, for the first time, we present a manually created affect intensity lexicon with real-valued scores of intensity for four basic emotions: anger, fear, joy, and sadness. (We will subsequently add entries for more emotions such as disgust, anticipation, trust, and surprise.) We refer to this dataset as the NRC Affect Intensity Lexicon, or AIL for short. AIL has entries for close to 6,000 English words. We used a technique called best-worst scaling (BWS) to create the lexicon. BWS improves annotation consistency and obtains reliable fine-grained scores (split-half reliability > 0.91). We also compare the entries in AIL with the entries in the NRC VAD Lexicon, which has valence, arousal, and dominance (VAD) scores for 20K English words. We find that anger, fear, and sadness words, on average, have very similar VAD scores. However, sadness words tend to have slightly lower dominance scores than fear and anger words. The Affect Intensity Lexicon has applications in automatic emotion analysis in a number of domains such as commerce, education, intelligence, and public health. AIL is also useful in the building of natural language generation systems.",Many thanks to Svetlana Kiritchenko and Tara Small for helpful discussions.,"Word Affect Intensities. Words often convey affect-emotions, feelings, and attitudes. Further, different words can convey affect to various degrees (intensities). However, existing manually created lexicons for basic emotions (such as anger and fear) indicate only coarse categories of affect association (for example, associated with anger or not associated with anger). Automatic lexicons of affect provide fine degrees of association, but they tend not to be accurate as human-created lexicons. Here, for the first time, we present a manually created affect intensity lexicon with real-valued scores of intensity for four basic emotions: anger, fear, joy, and sadness. (We will subsequently add entries for more emotions such as disgust, anticipation, trust, and surprise.) We refer to this dataset as the NRC Affect Intensity Lexicon, or AIL for short. AIL has entries for close to 6,000 English words. We used a technique called best-worst scaling (BWS) to create the lexicon. BWS improves annotation consistency and obtains reliable fine-grained scores (split-half reliability > 0.91). We also compare the entries in AIL with the entries in the NRC VAD Lexicon, which has valence, arousal, and dominance (VAD) scores for 20K English words. We find that anger, fear, and sadness words, on average, have very similar VAD scores. However, sadness words tend to have slightly lower dominance scores than fear and anger words. The Affect Intensity Lexicon has applications in automatic emotion analysis in a number of domains such as commerce, education, intelligence, and public health. AIL is also useful in the building of natural language generation systems.",2018
leavitt-1992-morphe,https://aclanthology.org/A92-1034,0,,,,,,,"MORPHE: A Practical Compiler for Reversible Morphology Rules. Morphe is a Common Lisp compiler for reversible inflectional morphology rules developed at the Center for Machine Translation at Carnegie Mellon University. This paper describes the Morphe processing model, its implementation, and how it handles some common morphological processes.",{MORPHE}: A Practical Compiler for Reversible Morphology Rules,"Morphe is a Common Lisp compiler for reversible inflectional morphology rules developed at the Center for Machine Translation at Carnegie Mellon University. This paper describes the Morphe processing model, its implementation, and how it handles some common morphological processes.",MORPHE: A Practical Compiler for Reversible Morphology Rules,"Morphe is a Common Lisp compiler for reversible inflectional morphology rules developed at the Center for Machine Translation at Carnegie Mellon University. This paper describes the Morphe processing model, its implementation, and how it handles some common morphological processes.","I would like to thank Alex Franz, Nicholas Brownlow, and Deryle Lonsdale for their comments on drafts of this paper.","MORPHE: A Practical Compiler for Reversible Morphology Rules. Morphe is a Common Lisp compiler for reversible inflectional morphology rules developed at the Center for Machine Translation at Carnegie Mellon University. This paper describes the Morphe processing model, its implementation, and how it handles some common morphological processes.",1992
sun-iyyer-2021-revisiting,https://aclanthology.org/2021.naacl-main.407,0,,,,,,,"Revisiting Simple Neural Probabilistic Language Models. Recent progress in language modeling has been driven not only by advances in neural architectures, but also through hardware and optimization improvements. In this paper, we revisit the neural probabilistic language model (NPLM) of Bengio et al. (2003), which simply concatenates word embeddings within a fixed window and passes the result through a feed-forward network to predict the next word. When scaled up to modern hardware, this model (despite its many limitations) performs much better than expected on word-level language model benchmarks. Our analysis reveals that the NPLM achieves lower perplexity than a baseline Transformer with short input contexts but struggles to handle long-term dependencies. Inspired by this result, we modify the Transformer by replacing its first selfattention layer with the NPLM's local concatenation layer, which results in small but consistent perplexity decreases across three wordlevel language modeling datasets.",Revisiting Simple Neural Probabilistic Language Models,"Recent progress in language modeling has been driven not only by advances in neural architectures, but also through hardware and optimization improvements. In this paper, we revisit the neural probabilistic language model (NPLM) of Bengio et al. (2003), which simply concatenates word embeddings within a fixed window and passes the result through a feed-forward network to predict the next word. When scaled up to modern hardware, this model (despite its many limitations) performs much better than expected on word-level language model benchmarks. Our analysis reveals that the NPLM achieves lower perplexity than a baseline Transformer with short input contexts but struggles to handle long-term dependencies. Inspired by this result, we modify the Transformer by replacing its first selfattention layer with the NPLM's local concatenation layer, which results in small but consistent perplexity decreases across three wordlevel language modeling datasets.",Revisiting Simple Neural Probabilistic Language Models,"Recent progress in language modeling has been driven not only by advances in neural architectures, but also through hardware and optimization improvements. In this paper, we revisit the neural probabilistic language model (NPLM) of Bengio et al. (2003), which simply concatenates word embeddings within a fixed window and passes the result through a feed-forward network to predict the next word. When scaled up to modern hardware, this model (despite its many limitations) performs much better than expected on word-level language model benchmarks. Our analysis reveals that the NPLM achieves lower perplexity than a baseline Transformer with short input contexts but struggles to handle long-term dependencies. Inspired by this result, we modify the Transformer by replacing its first selfattention layer with the NPLM's local concatenation layer, which results in small but consistent perplexity decreases across three wordlevel language modeling datasets.","We thank Nader Akoury, Andrew Drozdov, Shufan Wang, and the rest of UMass NLP group for their constructive suggestions on the draft of this paper. We also thank the anonymous reviewers for their helpful comments. This work was supported by award IIS-1955567 from the National Science Foundation (NSF).","Revisiting Simple Neural Probabilistic Language Models. 
Recent progress in language modeling has been driven not only by advances in neural architectures, but also through hardware and optimization improvements. In this paper, we revisit the neural probabilistic language model (NPLM) of Bengio et al. (2003), which simply concatenates word embeddings within a fixed window and passes the result through a feed-forward network to predict the next word. When scaled up to modern hardware, this model (despite its many limitations) performs much better than expected on word-level language model benchmarks. Our analysis reveals that the NPLM achieves lower perplexity than a baseline Transformer with short input contexts but struggles to handle long-term dependencies. Inspired by this result, we modify the Transformer by replacing its first selfattention layer with the NPLM's local concatenation layer, which results in small but consistent perplexity decreases across three wordlevel language modeling datasets.",2021
huang-kurohashi-2017-improving,https://aclanthology.org/W17-2704,0,,,,,,,"Improving Shared Argument Identification in Japanese Event Knowledge Acquisition. Event relation knowledge represents the knowledge of causal and temporal relations between events. Shared arguments of event relation knowledge encode patterns of role shifting in successive events. A two-stage framework was proposed for the task of Japanese event relation knowledge acquisition, in which related event pairs are first extracted, and shared arguments are then identified to form the complete event relation knowledge. This paper focuses on the second stage of this framework, and proposes a method to improve the shared argument identification of related event pairs. We constructed a gold dataset for shared argument learning. By evaluating our system on this gold dataset, we found that our proposed model outperformed the baseline models by a large margin.",Improving Shared Argument Identification in {J}apanese Event Knowledge Acquisition,"Event relation knowledge represents the knowledge of causal and temporal relations between events. Shared arguments of event relation knowledge encode patterns of role shifting in successive events. A two-stage framework was proposed for the task of Japanese event relation knowledge acquisition, in which related event pairs are first extracted, and shared arguments are then identified to form the complete event relation knowledge. This paper focuses on the second stage of this framework, and proposes a method to improve the shared argument identification of related event pairs. We constructed a gold dataset for shared argument learning. By evaluating our system on this gold dataset, we found that our proposed model outperformed the baseline models by a large margin.",Improving Shared Argument Identification in Japanese Event Knowledge Acquisition,"Event relation knowledge represents the knowledge of causal and temporal relations between events. Shared arguments of event relation knowledge encode patterns of role shifting in successive events. A two-stage framework was proposed for the task of Japanese event relation knowledge acquisition, in which related event pairs are first extracted, and shared arguments are then identified to form the complete event relation knowledge. This paper focuses on the second stage of this framework, and proposes a method to improve the shared argument identification of related event pairs. We constructed a gold dataset for shared argument learning. By evaluating our system on this gold dataset, we found that our proposed model outperformed the baseline models by a large margin.",,"Improving Shared Argument Identification in Japanese Event Knowledge Acquisition. Event relation knowledge represents the knowledge of causal and temporal relations between events. Shared arguments of event relation knowledge encode patterns of role shifting in successive events. A two-stage framework was proposed for the task of Japanese event relation knowledge acquisition, in which related event pairs are first extracted, and shared arguments are then identified to form the complete event relation knowledge. This paper focuses on the second stage of this framework, and proposes a method to improve the shared argument identification of related event pairs. We constructed a gold dataset for shared argument learning. By evaluating our system on this gold dataset, we found that our proposed model outperformed the baseline models by a large margin.",2017
abbes-etal-2004-architecture,https://aclanthology.org/W04-1604,0,,,,,,,"The Architecture of a Standard Arabic Lexical Database. Some Figures, Ratios and Categories from the DIINAR.1 Source Program. This paper is a contribution to the issuewhich has, in the course of the last decade, become critical-of the basic requirements and validation criteria for lexical language resources in Standard Arabic. The work is based on a critical analysis of the architecture of the DIINAR.1 lexical database, the entries of which are associated with grammar-lexis relations operating at word-form level (i.e. in morphological analysis). Investigation shows a crucial difference, in the concept of 'lexical database', between source program and generated lexica. The source program underlying DIINAR.1 is analysed, and some figures and ratios are presented. The original categorisations are, in the course of scrutiny, partly revisited. Results and ratios given here for basic entries on the one hand, and for generated lexica of inflected word-forms on the other. They aim at giving a first answer to the question of the ratios between the number of lemma-entries and inflected word-forms that can be expected to be included in, or generated by, a Standard Arabic lexical dB. These ratios can be considered as one overall language-specific criterion for the analysis, evaluation and validation of lexical dB-s in Arabic.","The Architecture of a {S}tandard {A}rabic Lexical Database. Some Figures, Ratios and Categories from the {DIINAR}.1 Source Program","This paper is a contribution to the issuewhich has, in the course of the last decade, become critical-of the basic requirements and validation criteria for lexical language resources in Standard Arabic. The work is based on a critical analysis of the architecture of the DIINAR.1 lexical database, the entries of which are associated with grammar-lexis relations operating at word-form level (i.e. in morphological analysis). Investigation shows a crucial difference, in the concept of 'lexical database', between source program and generated lexica. The source program underlying DIINAR.1 is analysed, and some figures and ratios are presented. The original categorisations are, in the course of scrutiny, partly revisited. Results and ratios given here for basic entries on the one hand, and for generated lexica of inflected word-forms on the other. They aim at giving a first answer to the question of the ratios between the number of lemma-entries and inflected word-forms that can be expected to be included in, or generated by, a Standard Arabic lexical dB. These ratios can be considered as one overall language-specific criterion for the analysis, evaluation and validation of lexical dB-s in Arabic.","The Architecture of a Standard Arabic Lexical Database. Some Figures, Ratios and Categories from the DIINAR.1 Source Program","This paper is a contribution to the issuewhich has, in the course of the last decade, become critical-of the basic requirements and validation criteria for lexical language resources in Standard Arabic. The work is based on a critical analysis of the architecture of the DIINAR.1 lexical database, the entries of which are associated with grammar-lexis relations operating at word-form level (i.e. in morphological analysis). Investigation shows a crucial difference, in the concept of 'lexical database', between source program and generated lexica. The source program underlying DIINAR.1 is analysed, and some figures and ratios are presented. 
The original categorisations are, in the course of scrutiny, partly revisited. Results and ratios given here for basic entries on the one hand, and for generated lexica of inflected word-forms on the other. They aim at giving a first answer to the question of the ratios between the number of lemma-entries and inflected word-forms that can be expected to be included in, or generated by, a Standard Arabic lexical dB. These ratios can be considered as one overall language-specific criterion for the analysis, evaluation and validation of lexical dB-s in Arabic.",,"The Architecture of a Standard Arabic Lexical Database. Some Figures, Ratios and Categories from the DIINAR.1 Source Program. This paper is a contribution to the issuewhich has, in the course of the last decade, become critical-of the basic requirements and validation criteria for lexical language resources in Standard Arabic. The work is based on a critical analysis of the architecture of the DIINAR.1 lexical database, the entries of which are associated with grammar-lexis relations operating at word-form level (i.e. in morphological analysis). Investigation shows a crucial difference, in the concept of 'lexical database', between source program and generated lexica. The source program underlying DIINAR.1 is analysed, and some figures and ratios are presented. The original categorisations are, in the course of scrutiny, partly revisited. Results and ratios given here for basic entries on the one hand, and for generated lexica of inflected word-forms on the other. They aim at giving a first answer to the question of the ratios between the number of lemma-entries and inflected word-forms that can be expected to be included in, or generated by, a Standard Arabic lexical dB. These ratios can be considered as one overall language-specific criterion for the analysis, evaluation and validation of lexical dB-s in Arabic.",2004
pust-etal-2015-parsing,https://aclanthology.org/D15-1136,0,,,,,,,"Parsing English into Abstract Meaning Representation Using Syntax-Based Machine Translation. We present a parser for Abstract Meaning Representation (AMR). We treat English-to-AMR conversion within the framework of string-to-tree, syntax-based machine translation (SBMT). To make this work, we transform the AMR structure into a form suitable for the mechanics of SBMT and useful for modeling. We introduce an AMR-specific language model and add data and features drawn from semantic resources. Our resulting AMR parser significantly improves upon state-of-the-art results.",Parsing {E}nglish into {A}bstract {M}eaning {R}epresentation Using Syntax-Based Machine Translation,"We present a parser for Abstract Meaning Representation (AMR). We treat English-to-AMR conversion within the framework of string-to-tree, syntax-based machine translation (SBMT). To make this work, we transform the AMR structure into a form suitable for the mechanics of SBMT and useful for modeling. We introduce an AMR-specific language model and add data and features drawn from semantic resources. Our resulting AMR parser significantly improves upon state-of-the-art results.",Parsing English into Abstract Meaning Representation Using Syntax-Based Machine Translation,"We present a parser for Abstract Meaning Representation (AMR). We treat English-to-AMR conversion within the framework of string-to-tree, syntax-based machine translation (SBMT). To make this work, we transform the AMR structure into a form suitable for the mechanics of SBMT and useful for modeling. We introduce an AMR-specific language model and add data and features drawn from semantic resources. Our resulting AMR parser significantly improves upon state-of-the-art results.","Thanks to Julian Schamper and Allen Schmaltz for early attempts at this problem. This work was sponsored by DARPA DEFT (FA8750-13-2-0045), DARPA BOLT (HR0011-12-C-0014), and DARPA Big Mechanism (W911NF-14-1-0364).","Parsing English into Abstract Meaning Representation Using Syntax-Based Machine Translation. We present a parser for Abstract Meaning Representation (AMR). We treat English-to-AMR conversion within the framework of string-to-tree, syntax-based machine translation (SBMT). To make this work, we transform the AMR structure into a form suitable for the mechanics of SBMT and useful for modeling. We introduce an AMR-specific language model and add data and features drawn from semantic resources. Our resulting AMR parser significantly improves upon state-of-the-art results.",2015
sogaard-2010-inversion,https://aclanthology.org/2010.eamt-1.5,0,,,,,,,"Can inversion transduction grammars generate hand alignments. The adequacy of inversion transduction grammars (ITGs) has been widely debated, and the discussion's crux seems to be whether the search space is inclusive enough (Zens and Ney, 2003; Wellington et al., 2006; Søgaard and Wu, 2009). Parse failure rate when parses are constrained by word alignments is one metric that has been used, but no one has studied parse failure rates of the full class of ITGs on representative hand aligned corpora. It has also been noted that ITGs in Chomsky normal form induce strictly less alignments than ITGs (Søgaard and Wu, 2009). This study is the first study that directly compares parse failure rates for this subclass and the full class of ITGs.",Can inversion transduction grammars generate hand alignments,"The adequacy of inversion transduction grammars (ITGs) has been widely debated, and the discussion's crux seems to be whether the search space is inclusive enough (Zens and Ney, 2003; Wellington et al., 2006; Søgaard and Wu, 2009). Parse failure rate when parses are constrained by word alignments is one metric that has been used, but no one has studied parse failure rates of the full class of ITGs on representative hand aligned corpora. It has also been noted that ITGs in Chomsky normal form induce strictly less alignments than ITGs (Søgaard and Wu, 2009). This study is the first study that directly compares parse failure rates for this subclass and the full class of ITGs.",Can inversion transduction grammars generate hand alignments,"The adequacy of inversion transduction grammars (ITGs) has been widely debated, and the discussion's crux seems to be whether the search space is inclusive enough (Zens and Ney, 2003; Wellington et al., 2006; Søgaard and Wu, 2009). Parse failure rate when parses are constrained by word alignments is one metric that has been used, but no one has studied parse failure rates of the full class of ITGs on representative hand aligned corpora. It has also been noted that ITGs in Chomsky normal form induce strictly less alignments than ITGs (Søgaard and Wu, 2009). This study is the first study that directly compares parse failure rates for this subclass and the full class of ITGs.",,"Can inversion transduction grammars generate hand alignments. The adequacy of inversion transduction grammars (ITGs) has been widely debated, and the discussion's crux seems to be whether the search space is inclusive enough (Zens and Ney, 2003; Wellington et al., 2006; Søgaard and Wu, 2009). Parse failure rate when parses are constrained by word alignments is one metric that has been used, but no one has studied parse failure rates of the full class of ITGs on representative hand aligned corpora. It has also been noted that ITGs in Chomsky normal form induce strictly less alignments than ITGs (Søgaard and Wu, 2009). This study is the first study that directly compares parse failure rates for this subclass and the full class of ITGs.",2010
klie-etal-2018-inception,https://aclanthology.org/C18-2002,0,,,,,,,"The INCEpTION Platform: Machine-Assisted and Knowledge-Oriented Interactive Annotation. We introduce INCEpTION, a new annotation platform for tasks including interactive and semantic annotation (e.g., concept linking, fact linking, knowledge base population, semantic frame annotation). These tasks are very time consuming and demanding for annotators, especially when knowledge bases are used. We address these issues by developing an annotation platform that incorporates machine learning capabilities which actively assist and guide annotators. The platform is both generic and modular. It targets a range of research domains in need of semantic annotation, such as digital humanities, bioinformatics, or linguistics. INCEpTION is publicly available as open-source software.",The {INCE}p{TION} Platform: Machine-Assisted and Knowledge-Oriented Interactive Annotation,"We introduce INCEpTION, a new annotation platform for tasks including interactive and semantic annotation (e.g., concept linking, fact linking, knowledge base population, semantic frame annotation). These tasks are very time consuming and demanding for annotators, especially when knowledge bases are used. We address these issues by developing an annotation platform that incorporates machine learning capabilities which actively assist and guide annotators. The platform is both generic and modular. It targets a range of research domains in need of semantic annotation, such as digital humanities, bioinformatics, or linguistics. INCEpTION is publicly available as open-source software.",The INCEpTION Platform: Machine-Assisted and Knowledge-Oriented Interactive Annotation,"We introduce INCEpTION, a new annotation platform for tasks including interactive and semantic annotation (e.g., concept linking, fact linking, knowledge base population, semantic frame annotation). These tasks are very time consuming and demanding for annotators, especially when knowledge bases are used. We address these issues by developing an annotation platform that incorporates machine learning capabilities which actively assist and guide annotators. The platform is both generic and modular. It targets a range of research domains in need of semantic annotation, such as digital humanities, bioinformatics, or linguistics. INCEpTION is publicly available as open-source software.","We thank Wei Ding, Peter Jiang and Marcel de Boer and Naveen Kumar for their valuable contributions and Teresa Botschen and Yevgeniy Puzikov for their helpful comments. This work was supported by the German Research Foundation under grant No. EC 503/1-1 and GU 798/21-1 (INCEpTION).","The INCEpTION Platform: Machine-Assisted and Knowledge-Oriented Interactive Annotation. We introduce INCEpTION, a new annotation platform for tasks including interactive and semantic annotation (e.g., concept linking, fact linking, knowledge base population, semantic frame annotation). These tasks are very time consuming and demanding for annotators, especially when knowledge bases are used. We address these issues by developing an annotation platform that incorporates machine learning capabilities which actively assist and guide annotators. The platform is both generic and modular. It targets a range of research domains in need of semantic annotation, such as digital humanities, bioinformatics, or linguistics. INCEpTION is publicly available as open-source software.",2018
shain-2021-cdrnn,https://aclanthology.org/2021.acl-long.288,0,,,,,,,"CDRNN: Discovering Complex Dynamics in Human Language Processing. The human mind is a dynamical system, yet many analysis techniques used to study it are limited in their ability to capture the complex dynamics that may characterize mental processes. This study proposes the continuous-time deconvolutional regressive neural network (CDRNN), a deep neural extension of continuous-time deconvolutional regression (CDR, Shain and Schuler, 2021) that jointly captures time-varying, non-linear, and delayed influences of predictors (e.g. word surprisal) on the response (e.g. reading time). Despite this flexibility, CDRNN is interpretable and able to illuminate patterns in human cognition that are otherwise difficult to study. Behavioral and fMRI experiments reveal detailed and plausible estimates of human language processing dynamics that generalize better than CDR and other baselines, supporting a potential role for CDRNN in studying human language processing.",{CDRNN}: Discovering Complex Dynamics in Human Language Processing,"The human mind is a dynamical system, yet many analysis techniques used to study it are limited in their ability to capture the complex dynamics that may characterize mental processes. This study proposes the continuous-time deconvolutional regressive neural network (CDRNN), a deep neural extension of continuous-time deconvolutional regression (CDR, Shain and Schuler, 2021) that jointly captures time-varying, non-linear, and delayed influences of predictors (e.g. word surprisal) on the response (e.g. reading time). Despite this flexibility, CDRNN is interpretable and able to illuminate patterns in human cognition that are otherwise difficult to study. Behavioral and fMRI experiments reveal detailed and plausible estimates of human language processing dynamics that generalize better than CDR and other baselines, supporting a potential role for CDRNN in studying human language processing.",CDRNN: Discovering Complex Dynamics in Human Language Processing,"The human mind is a dynamical system, yet many analysis techniques used to study it are limited in their ability to capture the complex dynamics that may characterize mental processes. This study proposes the continuous-time deconvolutional regressive neural network (CDRNN), a deep neural extension of continuous-time deconvolutional regression (CDR, Shain and Schuler, 2021) that jointly captures time-varying, non-linear, and delayed influences of predictors (e.g. word surprisal) on the response (e.g. reading time). Despite this flexibility, CDRNN is interpretable and able to illuminate patterns in human cognition that are otherwise difficult to study. Behavioral and fMRI experiments reveal detailed and plausible estimates of human language processing dynamics that generalize better than CDR and other baselines, supporting a potential role for CDRNN in studying human language processing.",,"CDRNN: Discovering Complex Dynamics in Human Language Processing. The human mind is a dynamical system, yet many analysis techniques used to study it are limited in their ability to capture the complex dynamics that may characterize mental processes. This study proposes the continuous-time deconvolutional regressive neural network (CDRNN), a deep neural extension of continuous-time deconvolutional regression (CDR, Shain and Schuler, 2021) that jointly captures time-varying, non-linear, and delayed influences of predictors (e.g. word surprisal) on the response (e.g. 
reading time). Despite this flexibility, CDRNN is interpretable and able to illuminate patterns in human cognition that are otherwise difficult to study. Behavioral and fMRI experiments reveal detailed and plausible estimates of human language processing dynamics that generalize better than CDR and other baselines, supporting a potential role for CDRNN in studying human language processing.",2021
hovy-etal-2002-computer,http://www.lrec-conf.org/proceedings/lrec2002/pdf/5.pdf,0,,,,,,,"Computer-Aided Specification of Quality Models for Machine Translation Evaluation. This article describes the principles and mechanism of an integrative effort in machine translation (MT) evaluation. Building upon previous standardization initiatives, above all ISO/IEC 9126, 14598 and EAGLES, we attempt to classify into a coherent taxonomy most of the characteristics, attributes and metrics that have been proposed for MT evaluation. The main articulation of this flexible framework is the link between a taxonomy that helps evaluators define a context of use for the evaluated software, and a taxonomy of the quality characteristics and associated metrics. The article explains the theoretical grounds of this articulation, along with an overview of the taxonomies in their present state, and a perspective on ongoing work in MT evaluation standardization.",Computer-Aided Specification of Quality Models for Machine Translation Evaluation,"This article describes the principles and mechanism of an integrative effort in machine translation (MT) evaluation. Building upon previous standardization initiatives, above all ISO/IEC 9126, 14598 and EAGLES, we attempt to classify into a coherent taxonomy most of the characteristics, attributes and metrics that have been proposed for MT evaluation. The main articulation of this flexible framework is the link between a taxonomy that helps evaluators define a context of use for the evaluated software, and a taxonomy of the quality characteristics and associated metrics. The article explains the theoretical grounds of this articulation, along with an overview of the taxonomies in their present state, and a perspective on ongoing work in MT evaluation standardization.",Computer-Aided Specification of Quality Models for Machine Translation Evaluation,"This article describes the principles and mechanism of an integrative effort in machine translation (MT) evaluation. Building upon previous standardization initiatives, above all ISO/IEC 9126, 14598 and EAGLES, we attempt to classify into a coherent taxonomy most of the characteristics, attributes and metrics that have been proposed for MT evaluation. The main articulation of this flexible framework is the link between a taxonomy that helps evaluators define a context of use for the evaluated software, and a taxonomy of the quality characteristics and associated metrics. The article explains the theoretical grounds of this articulation, along with an overview of the taxonomies in their present state, and a perspective on ongoing work in MT evaluation standardization.",,"Computer-Aided Specification of Quality Models for Machine Translation Evaluation. This article describes the principles and mechanism of an integrative effort in machine translation (MT) evaluation. Building upon previous standardization initiatives, above all ISO/IEC 9126, 14598 and EAGLES, we attempt to classify into a coherent taxonomy most of the characteristics, attributes and metrics that have been proposed for MT evaluation. The main articulation of this flexible framework is the link between a taxonomy that helps evaluators define a context of use for the evaluated software, and a taxonomy of the quality characteristics and associated metrics. The article explains the theoretical grounds of this articulation, along with an overview of the taxonomies in their present state, and a perspective on ongoing work in MT evaluation standardization.",2002
trujillo-1992-locations,https://aclanthology.org/1992.tmi-1.2,0,,,,,,,"Locations in the machine translation of prepositional phrases. An approach to the machine translation of locative prepositional phrases (PP) is presented. The technique has been implemented for use in an experimental transfer-based, multilingual machine translation system. Previous approaches to this problem are described and they are compared to the solution presented.",Locations in the machine translation of prepositional phrases,"An approach to the machine translation of locative prepositional phrases (PP) is presented. The technique has been implemented for use in an experimental transfer-based, multilingual machine translation system. Previous approaches to this problem are described and they are compared to the solution presented.",Locations in the machine translation of prepositional phrases,"An approach to the machine translation of locative prepositional phrases (PP) is presented. The technique has been implemented for use in an experimental transfer-based, multilingual machine translation system. Previous approaches to this problem are described and they are compared to the solution presented.","This work was funded by the UK Science and Engineering Research Council. Many thanks to Ted Briscoe, Antonio Sanfilippo, John Beaven, Ann Copestake, Valeria de Paiva, and three anonymous reviewers. Thanks also to Trinity Hall, Cambridge, for a travel grant. All remaining errors are mine.","Locations in the machine translation of prepositional phrases. An approach to the machine translation of locative prepositional phrases (PP) is presented. The technique has been implemented for use in an experimental transfer-based, multilingual machine translation system. Previous approaches to this problem are described and they are compared to the solution presented.",1992
zupon-etal-2019-lightly,https://aclanthology.org/W19-1504,0,,,,,,,"Lightly-supervised Representation Learning with Global Interpretability. We propose a lightly-supervised approach for information extraction, in particular named entity classification, which combines the benefits of traditional bootstrapping, i.e., use of limited annotations and interpretability of extraction patterns, with the robust learning approaches proposed in representation learning. Our algorithm iteratively learns custom embeddings for both the multi-word entities to be extracted and the patterns that match them from a few example entities per category. We demonstrate that this representation-based approach outperforms three other state-of-theart bootstrapping approaches on two datasets: CoNLL-2003 and OntoNotes. Additionally, using these embeddings, our approach outputs a globally-interpretable model consisting of a decision list, by ranking patterns based on their proximity to the average entity embedding in a given class. We show that this interpretable model performs close to our complete bootstrapping model, proving that representation learning can be used to produce interpretable models with small loss in performance. This decision list can be edited by human experts to mitigate some of that loss and in some cases outperform the original model.",Lightly-supervised Representation Learning with Global Interpretability,"We propose a lightly-supervised approach for information extraction, in particular named entity classification, which combines the benefits of traditional bootstrapping, i.e., use of limited annotations and interpretability of extraction patterns, with the robust learning approaches proposed in representation learning. Our algorithm iteratively learns custom embeddings for both the multi-word entities to be extracted and the patterns that match them from a few example entities per category. We demonstrate that this representation-based approach outperforms three other state-of-theart bootstrapping approaches on two datasets: CoNLL-2003 and OntoNotes. Additionally, using these embeddings, our approach outputs a globally-interpretable model consisting of a decision list, by ranking patterns based on their proximity to the average entity embedding in a given class. We show that this interpretable model performs close to our complete bootstrapping model, proving that representation learning can be used to produce interpretable models with small loss in performance. This decision list can be edited by human experts to mitigate some of that loss and in some cases outperform the original model.",Lightly-supervised Representation Learning with Global Interpretability,"We propose a lightly-supervised approach for information extraction, in particular named entity classification, which combines the benefits of traditional bootstrapping, i.e., use of limited annotations and interpretability of extraction patterns, with the robust learning approaches proposed in representation learning. Our algorithm iteratively learns custom embeddings for both the multi-word entities to be extracted and the patterns that match them from a few example entities per category. We demonstrate that this representation-based approach outperforms three other state-of-theart bootstrapping approaches on two datasets: CoNLL-2003 and OntoNotes. 
Additionally, using these embeddings, our approach outputs a globally-interpretable model consisting of a decision list, by ranking patterns based on their proximity to the average entity embedding in a given class. We show that this interpretable model performs close to our complete bootstrapping model, proving that representation learning can be used to produce interpretable models with small loss in performance. This decision list can be edited by human experts to mitigate some of that loss and in some cases outperform the original model.","We gratefully thank Yoav Goldberg for his suggestions for the manual curation experiments.This work was supported by the Defense Advanced Research Projects Agency (DARPA) under the Big Mechanism program, grant W911NF-14-1-0395, and by the Bill and Melinda Gates Foundation HBGDki Initiative. Marco Valenzuela-Escárcega and Mihai Surdeanu declare a financial interest in lum.ai. This interest has been properly disclosed to the University of Arizona Institutional Review Committee and is managed in accordance with its conflict of interest policies.","Lightly-supervised Representation Learning with Global Interpretability. We propose a lightly-supervised approach for information extraction, in particular named entity classification, which combines the benefits of traditional bootstrapping, i.e., use of limited annotations and interpretability of extraction patterns, with the robust learning approaches proposed in representation learning. Our algorithm iteratively learns custom embeddings for both the multi-word entities to be extracted and the patterns that match them from a few example entities per category. We demonstrate that this representation-based approach outperforms three other state-of-theart bootstrapping approaches on two datasets: CoNLL-2003 and OntoNotes. Additionally, using these embeddings, our approach outputs a globally-interpretable model consisting of a decision list, by ranking patterns based on their proximity to the average entity embedding in a given class. We show that this interpretable model performs close to our complete bootstrapping model, proving that representation learning can be used to produce interpretable models with small loss in performance. This decision list can be edited by human experts to mitigate some of that loss and in some cases outperform the original model.",2019
biesialska-etal-2019-talp,https://aclanthology.org/W19-5424,0,,,,,,,"The TALP-UPC System for the WMT Similar Language Task: Statistical vs Neural Machine Translation. Although the problem of similar language translation has been an area of research interest for many years, yet it is still far from being solved. In this paper, we study the performance of two popular approaches: statistical and neural. We conclude that both methods yield similar results; however, the performance varies depending on the language pair. While the statistical approach outperforms the neural one by a difference of 6 BLEU points for the Spanish-Portuguese language pair, the proposed neural model surpasses the statistical one by a difference of 2 BLEU points for Czech-Polish. In the former case, the language similarity (based on perplexity) is much higher than in the latter case. Additionally, we report negative results for the system combination with back-translation. Our TALP-UPC system submission won 1st place for Czech→Polish and 2nd place for Spanish→Portuguese in the official evaluation of the 1st WMT Similar Language Translation task.",The {TALP}-{UPC} System for the {WMT} Similar Language Task: Statistical vs Neural Machine Translation,"Although the problem of similar language translation has been an area of research interest for many years, yet it is still far from being solved. In this paper, we study the performance of two popular approaches: statistical and neural. We conclude that both methods yield similar results; however, the performance varies depending on the language pair. While the statistical approach outperforms the neural one by a difference of 6 BLEU points for the Spanish-Portuguese language pair, the proposed neural model surpasses the statistical one by a difference of 2 BLEU points for Czech-Polish. In the former case, the language similarity (based on perplexity) is much higher than in the latter case. Additionally, we report negative results for the system combination with back-translation. Our TALP-UPC system submission won 1st place for Czech→Polish and 2nd place for Spanish→Portuguese in the official evaluation of the 1st WMT Similar Language Translation task.",The TALP-UPC System for the WMT Similar Language Task: Statistical vs Neural Machine Translation,"Although the problem of similar language translation has been an area of research interest for many years, yet it is still far from being solved. In this paper, we study the performance of two popular approaches: statistical and neural. We conclude that both methods yield similar results; however, the performance varies depending on the language pair. While the statistical approach outperforms the neural one by a difference of 6 BLEU points for the Spanish-Portuguese language pair, the proposed neural model surpasses the statistical one by a difference of 2 BLEU points for Czech-Polish. In the former case, the language similarity (based on perplexity) is much higher than in the latter case. Additionally, we report negative results for the system combination with back-translation. 
Our TALP-UPC system submission won 1st place for Czech→Polish and 2nd place for Spanish→Portuguese in the official evaluation of the 1st WMT Similar Language Translation task.","The authors want to thank Pablo Gamallo, José Ramom Pichel Campos and Iñaki Alegria for sharing their valuable insights on their language distance studies.This work is supported in part by the Spanish Ministerio de Economía y Competitividad, the European Regional Development Fund and the Agencia Estatal de Investigación, through the postdoctoral senior grant Ramón y Cajal, the contract TEC2015-69266-P (MINECO/FEDER,EU) and the contract PCIN-2017-079 (AEI/MINECO).","The TALP-UPC System for the WMT Similar Language Task: Statistical vs Neural Machine Translation. Although the problem of similar language translation has been an area of research interest for many years, yet it is still far from being solved. In this paper, we study the performance of two popular approaches: statistical and neural. We conclude that both methods yield similar results; however, the performance varies depending on the language pair. While the statistical approach outperforms the neural one by a difference of 6 BLEU points for the Spanish-Portuguese language pair, the proposed neural model surpasses the statistical one by a difference of 2 BLEU points for Czech-Polish. In the former case, the language similarity (based on perplexity) is much higher than in the latter case. Additionally, we report negative results for the system combination with back-translation. Our TALP-UPC system submission won 1st place for Czech→Polish and 2nd place for Spanish→Portuguese in the official evaluation of the 1st WMT Similar Language Translation task.",2019
bhat-sharma-2013-animacy,https://aclanthology.org/I13-1008,0,,,,,,,"Animacy Acquisition Using Morphological Case. Animacy is an inherent property of entities that nominals refer to in the physical world. This semantic property of a nominal has received much attention in both linguistics and computational linguistics. In this paper, we present a robust unsupervised technique to infer the animacy of nominals in languages with rich morphological case. The intuition behind our method is that the control/agency of a noun depicted by case marking can approximate its animacy. A higher control over an action implies higher animacy. Our experiments on Hindi show promising results with Fβ and Purity scores of 89 and 86 respectively.",{A}nimacy Acquisition Using Morphological Case,"Animacy is an inherent property of entities that nominals refer to in the physical world. This semantic property of a nominal has received much attention in both linguistics and computational linguistics. In this paper, we present a robust unsupervised technique to infer the animacy of nominals in languages with rich morphological case. The intuition behind our method is that the control/agency of a noun depicted by case marking can approximate its animacy. A higher control over an action implies higher animacy. Our experiments on Hindi show promising results with Fβ and Purity scores of 89 and 86 respectively.",Animacy Acquisition Using Morphological Case,"Animacy is an inherent property of entities that nominals refer to in the physical world. This semantic property of a nominal has received much attention in both linguistics and computational linguistics. In this paper, we present a robust unsupervised technique to infer the animacy of nominals in languages with rich morphological case. The intuition behind our method is that the control/agency of a noun depicted by case marking can approximate its animacy. A higher control over an action implies higher animacy. Our experiments on Hindi show promising results with Fβ and Purity scores of 89 and 86 respectively.",We would like to thank the anonymous reviewers for their useful comments which helped to improve this paper. We furthermore thank Sambhav Jain for his help and useful feedback.,"Animacy Acquisition Using Morphological Case. Animacy is an inherent property of entities that nominals refer to in the physical world. This semantic property of a nominal has received much attention in both linguistics and computational linguistics. In this paper, we present a robust unsupervised technique to infer the animacy of nominals in languages with rich morphological case. The intuition behind our method is that the control/agency of a noun depicted by case marking can approximate its animacy. A higher control over an action implies higher animacy. Our experiments on Hindi show promising results with Fβ and Purity scores of 89 and 86 respectively.",2013
ahuja-desai-2020-accelerating,https://aclanthology.org/2020.nlp4convai-1.6,0,,,,,,,"Accelerating Natural Language Understanding in Task-Oriented Dialog. Task-oriented dialog models typically leverage complex neural architectures and large-scale, pre-trained Transformers to achieve state-of-the-art performance on popular natural language understanding benchmarks. However, these models frequently have in excess of tens of millions of parameters, making them impossible to deploy on-device where resource-efficiency is a major concern. In this work, we show that a simple convolutional model compressed with structured pruning achieves largely comparable results to BERT (Devlin et al., 2019) on ATIS and Snips, with under 100K parameters. Moreover, we perform acceleration experiments on CPUs, where we observe our multi-task model predicts intents and slots nearly 63× faster than even DistilBERT (Sanh et al., 2019).",Accelerating Natural Language Understanding in Task-Oriented Dialog,"Task-oriented dialog models typically leverage complex neural architectures and large-scale, pre-trained Transformers to achieve state-of-the-art performance on popular natural language understanding benchmarks. However, these models frequently have in excess of tens of millions of parameters, making them impossible to deploy on-device where resource-efficiency is a major concern. In this work, we show that a simple convolutional model compressed with structured pruning achieves largely comparable results to BERT (Devlin et al., 2019) on ATIS and Snips, with under 100K parameters. Moreover, we perform acceleration experiments on CPUs, where we observe our multi-task model predicts intents and slots nearly 63× faster than even DistilBERT (Sanh et al., 2019).",Accelerating Natural Language Understanding in Task-Oriented Dialog,"Task-oriented dialog models typically leverage complex neural architectures and large-scale, pre-trained Transformers to achieve state-of-the-art performance on popular natural language understanding benchmarks. However, these models frequently have in excess of tens of millions of parameters, making them impossible to deploy on-device where resource-efficiency is a major concern. In this work, we show that a simple convolutional model compressed with structured pruning achieves largely comparable results to BERT (Devlin et al., 2019) on ATIS and Snips, with under 100K parameters. Moreover, we perform acceleration experiments on CPUs, where we observe our multi-task model predicts intents and slots nearly 63× faster than even DistilBERT (Sanh et al., 2019).",Thanks to our anonymous reviewers for their helpful comments and feedback.,"Accelerating Natural Language Understanding in Task-Oriented Dialog. Task-oriented dialog models typically leverage complex neural architectures and large-scale, pre-trained Transformers to achieve state-of-the-art performance on popular natural language understanding benchmarks. However, these models frequently have in excess of tens of millions of parameters, making them impossible to deploy on-device where resource-efficiency is a major concern. In this work, we show that a simple convolutional model compressed with structured pruning achieves largely comparable results to BERT (Devlin et al., 2019) on ATIS and Snips, with under 100K parameters. Moreover, we perform acceleration experiments on CPUs, where we observe our multi-task model predicts intents and slots nearly 63× faster than even DistilBERT (Sanh et al., 2019).",2020
jha-etal-2010-corpus,https://aclanthology.org/W10-0702,0,,,,,,,"Corpus Creation for New Genres: A Crowdsourced Approach to PP Attachment. This paper explores the task of building an accurate prepositional phrase attachment corpus for new genres while avoiding a large investment in terms of time and money by crowdsourcing judgments. We develop and present a system to extract prepositional phrases and their potential attachments from ungrammatical and informal sentences and pose the subsequent disambiguation tasks as multiple choice questions to workers from Amazon's Mechanical Turk service. Our analysis shows that this two-step approach is capable of producing reliable annotations on informal and potentially noisy blog text, and this semi-automated strategy holds promise for similar annotation projects in new genres.",Corpus Creation for New Genres: A Crowdsourced Approach to {PP} Attachment,"This paper explores the task of building an accurate prepositional phrase attachment corpus for new genres while avoiding a large investment in terms of time and money by crowdsourcing judgments. We develop and present a system to extract prepositional phrases and their potential attachments from ungrammatical and informal sentences and pose the subsequent disambiguation tasks as multiple choice questions to workers from Amazon's Mechanical Turk service. Our analysis shows that this two-step approach is capable of producing reliable annotations on informal and potentially noisy blog text, and this semi-automated strategy holds promise for similar annotation projects in new genres.",Corpus Creation for New Genres: A Crowdsourced Approach to PP Attachment,"This paper explores the task of building an accurate prepositional phrase attachment corpus for new genres while avoiding a large investment in terms of time and money by crowdsourcing judgments. We develop and present a system to extract prepositional phrases and their potential attachments from ungrammatical and informal sentences and pose the subsequent disambiguation tasks as multiple choice questions to workers from Amazon's Mechanical Turk service. Our analysis shows that this two-step approach is capable of producing reliable annotations on informal and potentially noisy blog text, and this semi-automated strategy holds promise for similar annotation projects in new genres.","The authors would like to thank Kevin Lerman for his help in formulating the original ideas for this work. This material is based on research supported in part by the U.S. National Science Foundation (NSF) under IIS-05-34871. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.","Corpus Creation for New Genres: A Crowdsourced Approach to PP Attachment. This paper explores the task of building an accurate prepositional phrase attachment corpus for new genres while avoiding a large investment in terms of time and money by crowdsourcing judgments. We develop and present a system to extract prepositional phrases and their potential attachments from ungrammatical and informal sentences and pose the subsequent disambiguation tasks as multiple choice questions to workers from Amazon's Mechanical Turk service. Our analysis shows that this two-step approach is capable of producing reliable annotations on informal and potentially noisy blog text, and this semi-automated strategy holds promise for similar annotation projects in new genres.",2010
st-jacques-barriere-2006-similarity,https://aclanthology.org/W06-1103,0,,,,,,,"Similarity Judgments: Philosophical, Psychological and Mathematical Investigations. This study investigates similarity judgments from two angles. First, we look at models suggested in the psychology and philosophy literature which capture the essence of concept similarity evaluation for humans. Second, we analyze the properties of many metrics which simulate such evaluation capabilities. The first angle reveals that non-experts can judge similarity and that their judgments need not be based on predefined traits. We use such conclusions to inform us on how gold standards for word sense disambiguation tasks could be established. From the second angle, we conclude that more attention should be paid to metric properties before assigning them to perform a particular task.","Similarity Judgments: Philosophical, Psychological and Mathematical Investigations","This study investigates similarity judgments from two angles. First, we look at models suggested in the psychology and philosophy literature which capture the essence of concept similarity evaluation for humans. Second, we analyze the properties of many metrics which simulate such evaluation capabilities. The first angle reveals that non-experts can judge similarity and that their judgments need not be based on predefined traits. We use such conclusions to inform us on how gold standards for word sense disambiguation tasks could be established. From the second angle, we conclude that more attention should be paid to metric properties before assigning them to perform a particular task.","Similarity Judgments: Philosophical, Psychological and Mathematical Investigations","This study investigates similarity judgments from two angles. First, we look at models suggested in the psychology and philosophy literature which capture the essence of concept similarity evaluation for humans. Second, we analyze the properties of many metrics which simulate such evaluation capabilities. The first angle reveals that non-experts can judge similarity and that their judgments need not be based on predefined traits. We use such conclusions to inform us on how gold standards for word sense disambiguation tasks could be established. From the second angle, we conclude that more attention should be paid to metric properties before assigning them to perform a particular task.",,"Similarity Judgments: Philosophical, Psychological and Mathematical Investigations. This study investigates similarity judgments from two angles. First, we look at models suggested in the psychology and philosophy literature which capture the essence of concept similarity evaluation for humans. Second, we analyze the properties of many metrics which simulate such evaluation capabilities. The first angle reveals that non-experts can judge similarity and that their judgments need not be based on predefined traits. We use such conclusions to inform us on how gold standards for word sense disambiguation tasks could be established. From the second angle, we conclude that more attention should be paid to metric properties before assigning them to perform a particular task.",2006
tomuro-1998-semi,https://aclanthology.org/W98-0715,0,,,,,,,"Semi-automatic Induction of Systematic Polysemy from WordNet. This paper describes a semi-automatic method of inducing underspecified semantic classes from WordNet verbs and nouns. An underspecified semantic class is an abstract semantic class which encodes systematic polysemy, a set of word senses that are related in systematic and predictable ways. We show the usefulness of the induced classes in the semantic interpretations and contextual inferences of real-world texts by applying them to the predicate-argument structures in Brown corpus.",Semi-automatic Induction of Systematic Polysemy from {W}ord{N}et,"This paper describes a semi-automatic method of inducing underspecified semantic classes from WordNet verbs and nouns. An underspecified semantic class is an abstract semantic class which encodes systematic polysemy, a set of word senses that are related in systematic and predictable ways. We show the usefulness of the induced classes in the semantic interpretations and contextual inferences of real-world texts by applying them to the predicate-argument structures in Brown corpus.",Semi-automatic Induction of Systematic Polysemy from WordNet,"This paper describes a semi-automatic method of inducing underspecified semantic classes from WordNet verbs and nouns. An underspecified semantic class is an abstract semantic class which encodes systematic polysemy, a set of word senses that are related in systematic and predictable ways. We show the usefulness of the induced classes in the semantic interpretations and contextual inferences of real-world texts by applying them to the predicate-argument structures in Brown corpus.","The author would like to thank Paul Buitelaar for helpful discussions, insights and encouragement.","Semi-automatic Induction of Systematic Polysemy from WordNet. This paper describes a semi-automatic method of inducing underspecified semantic classes from WordNet verbs and nouns. An underspecified semantic class is an abstract semantic class which encodes systematic polysemy, a set of word senses that are related in systematic and predictable ways. We show the usefulness of the induced classes in the semantic interpretations and contextual inferences of real-world texts by applying them to the predicate-argument structures in Brown corpus.",1998
yazdani-etal-2015-learning,https://aclanthology.org/D15-1201,0,,,,,,,"Learning Semantic Composition to Detect Non-compositionality of Multiword Expressions. Non-compositionality of multiword expressions is an intriguing problem that can be the source of error in a variety of NLP tasks such as language generation, machine translation and word sense disambiguation. We present methods of non-compositionality detection for English noun compounds using the unsupervised learning of a semantic composition function. Compounds which are not well modeled by the learned semantic composition function are considered noncompositional. We explore a range of distributional vector-space models for semantic composition, empirically evaluate these models, and propose additional methods which improve results further. We show that a complex function such as polynomial projection can learn semantic composition and identify non-compositionality in an unsupervised way, beating all other baselines ranging from simple to complex. We show that enforcing sparsity is a useful regularizer in learning complex composition functions. We show further improvements by training a decomposition function in addition to the composition function. Finally, we propose an EM algorithm over latent compositionality annotations that also improves the performance.",Learning Semantic Composition to Detect Non-compositionality of Multiword Expressions,"Non-compositionality of multiword expressions is an intriguing problem that can be the source of error in a variety of NLP tasks such as language generation, machine translation and word sense disambiguation. We present methods of non-compositionality detection for English noun compounds using the unsupervised learning of a semantic composition function. Compounds which are not well modeled by the learned semantic composition function are considered noncompositional. We explore a range of distributional vector-space models for semantic composition, empirically evaluate these models, and propose additional methods which improve results further. We show that a complex function such as polynomial projection can learn semantic composition and identify non-compositionality in an unsupervised way, beating all other baselines ranging from simple to complex. We show that enforcing sparsity is a useful regularizer in learning complex composition functions. We show further improvements by training a decomposition function in addition to the composition function. Finally, we propose an EM algorithm over latent compositionality annotations that also improves the performance.",Learning Semantic Composition to Detect Non-compositionality of Multiword Expressions,"Non-compositionality of multiword expressions is an intriguing problem that can be the source of error in a variety of NLP tasks such as language generation, machine translation and word sense disambiguation. We present methods of non-compositionality detection for English noun compounds using the unsupervised learning of a semantic composition function. Compounds which are not well modeled by the learned semantic composition function are considered noncompositional. We explore a range of distributional vector-space models for semantic composition, empirically evaluate these models, and propose additional methods which improve results further. 
We show that a complex function such as polynomial projection can learn semantic composition and identify non-compositionality in an unsupervised way, beating all other baselines ranging from simple to complex. We show that enforcing sparsity is a useful regularizer in learning complex composition functions. We show further improvements by training a decomposition function in addition to the composition function. Finally, we propose an EM algorithm over latent compositionality annotations that also improves the performance.","This research was partially funded by Hasler foundation project no. 15019, ""Deep Neural Network Dependency Parser for Context-aware Representation Learning"".","Learning Semantic Composition to Detect Non-compositionality of Multiword Expressions. Non-compositionality of multiword expressions is an intriguing problem that can be the source of error in a variety of NLP tasks such as language generation, machine translation and word sense disambiguation. We present methods of non-compositionality detection for English noun compounds using the unsupervised learning of a semantic composition function. Compounds which are not well modeled by the learned semantic composition function are considered noncompositional. We explore a range of distributional vector-space models for semantic composition, empirically evaluate these models, and propose additional methods which improve results further. We show that a complex function such as polynomial projection can learn semantic composition and identify non-compositionality in an unsupervised way, beating all other baselines ranging from simple to complex. We show that enforcing sparsity is a useful regularizer in learning complex composition functions. We show further improvements by training a decomposition function in addition to the composition function. Finally, we propose an EM algorithm over latent compositionality annotations that also improves the performance.",2015
liu-etal-2009-capturing,https://aclanthology.org/P09-2007,0,,,,,,,"Capturing Errors in Written Chinese Words. A collection of 3208 reported errors of Chinese words were analyzed. Among which, 7.2% involved rarely used character, and 98.4% were assigned common classifications of their causes by human subjects. In particular, 80% of the errors observed in writings of middle school students were related to the pronunciations and 30% were related to the compositions of words. Experimental results show that using intuitive Web-based statistics helped us capture only about 75% of these errors. In a related task, the Web-based statistics are useful for recommending incorrect characters for composing test items for ""incorrect character identification"" tests about 93% of the time.",Capturing Errors in Written {C}hinese Words,"A collection of 3208 reported errors of Chinese words were analyzed. Among which, 7.2% involved rarely used character, and 98.4% were assigned common classifications of their causes by human subjects. In particular, 80% of the errors observed in writings of middle school students were related to the pronunciations and 30% were related to the compositions of words. Experimental results show that using intuitive Web-based statistics helped us capture only about 75% of these errors. In a related task, the Web-based statistics are useful for recommending incorrect characters for composing test items for ""incorrect character identification"" tests about 93% of the time.",Capturing Errors in Written Chinese Words,"A collection of 3208 reported errors of Chinese words were analyzed. Among which, 7.2% involved rarely used character, and 98.4% were assigned common classifications of their causes by human subjects. In particular, 80% of the errors observed in writings of middle school students were related to the pronunciations and 30% were related to the compositions of words. Experimental results show that using intuitive Web-based statistics helped us capture only about 75% of these errors. In a related task, the Web-based statistics are useful for recommending incorrect characters for composing test items for ""incorrect character identification"" tests about 93% of the time.","This research has been funded in part by the National Science Council of Taiwan under the grant NSC-97-2221-E-004-007-MY2. We thank the anonymous reviewers for invaluable comments, and more responses to the comments are available in (Liu et al. 2009) .","Capturing Errors in Written Chinese Words. A collection of 3208 reported errors of Chinese words were analyzed. Among which, 7.2% involved rarely used character, and 98.4% were assigned common classifications of their causes by human subjects. In particular, 80% of the errors observed in writings of middle school students were related to the pronunciations and 30% were related to the compositions of words. Experimental results show that using intuitive Web-based statistics helped us capture only about 75% of these errors. In a related task, the Web-based statistics are useful for recommending incorrect characters for composing test items for ""incorrect character identification"" tests about 93% of the time.",2009
saumya-etal-2021-offensive,https://aclanthology.org/2021.dravidianlangtech-1.5,1,,,,hate_speech,,,"Offensive language identification in Dravidian code mixed social media text. Hate speech and offensive language recognition in social media platforms have been an active field of research over recent years. In non-native English spoken countries, social media texts are mostly in code mixed or script mixed/switched form. The current study presents extensive experiments using multiple machine learning, deep learning, and transfer learning models to detect offensive content on Twitter. The data set used for this study are in Tanglish (Tamil and English), Manglish (Malayalam and English) code-mixed, and Malayalam script-mixed. The experimental results showed that 1 to 6-gram character TF-IDF features are better for the said task. The best performing models were naive bayes, logistic regression, and vanilla neural network for the dataset Tamil code-mix, Malayalam code-mixed, and Malayalam script-mixed, respectively instead of more popular transfer learning models such as BERT and ULMFiT and hybrid deep models.",Offensive language identification in {D}ravidian code mixed social media text,"Hate speech and offensive language recognition in social media platforms have been an active field of research over recent years. In non-native English spoken countries, social media texts are mostly in code mixed or script mixed/switched form. The current study presents extensive experiments using multiple machine learning, deep learning, and transfer learning models to detect offensive content on Twitter. The data set used for this study are in Tanglish (Tamil and English), Manglish (Malayalam and English) code-mixed, and Malayalam script-mixed. The experimental results showed that 1 to 6-gram character TF-IDF features are better for the said task. The best performing models were naive bayes, logistic regression, and vanilla neural network for the dataset Tamil code-mix, Malayalam code-mixed, and Malayalam script-mixed, respectively instead of more popular transfer learning models such as BERT and ULMFiT and hybrid deep models.",Offensive language identification in Dravidian code mixed social media text,"Hate speech and offensive language recognition in social media platforms have been an active field of research over recent years. In non-native English spoken countries, social media texts are mostly in code mixed or script mixed/switched form. The current study presents extensive experiments using multiple machine learning, deep learning, and transfer learning models to detect offensive content on Twitter. The data set used for this study are in Tanglish (Tamil and English), Manglish (Malayalam and English) code-mixed, and Malayalam script-mixed. The experimental results showed that 1 to 6-gram character TF-IDF features are better for the said task. The best performing models were naive bayes, logistic regression, and vanilla neural network for the dataset Tamil code-mix, Malayalam code-mixed, and Malayalam script-mixed, respectively instead of more popular transfer learning models such as BERT and ULMFiT and hybrid deep models.",,"Offensive language identification in Dravidian code mixed social media text. Hate speech and offensive language recognition in social media platforms have been an active field of research over recent years. In non-native English spoken countries, social media texts are mostly in code mixed or script mixed/switched form. 
The current study presents extensive experiments using multiple machine learning, deep learning, and transfer learning models to detect offensive content on Twitter. The data set used for this study are in Tanglish (Tamil and English), Manglish (Malayalam and English) code-mixed, and Malayalam script-mixed. The experimental results showed that 1 to 6-gram character TF-IDF features are better for the said task. The best performing models were naive bayes, logistic regression, and vanilla neural network for the dataset Tamil code-mix, Malayalam code-mixed, and Malayalam script-mixed, respectively instead of more popular transfer learning models such as BERT and ULMFiT and hybrid deep models.",2021
lialin-etal-2022-life,https://aclanthology.org/2022.acl-long.227,0,,,,,,,"Life after BERT: What do Other Muppets Understand about Language?. Existing pre-trained transformer analysis works usually focus only on one or two model families at a time, overlooking the variability of the architecture and pre-training objectives. In our work, we utilize the oLMpics benchmark and psycholinguistic probing datasets for a diverse set of 29 models including T5, BART, and ALBERT. Additionally, we adapt the oLMpics zero-shot setup for autoregressive models and evaluate GPT networks of different sizes. Our findings show that none of these models can resolve compositional questions in a zero-shot fashion, suggesting that this skill is not learnable using existing pre-training objectives. Furthermore, we find that global model decisions such as architecture, directionality, size of the dataset, and pre-training objective are not predictive of a model's linguistic capabilities. The code for this study is available on GitHub 1 .",Life after {BERT}: What do Other Muppets Understand about Language?,"Existing pre-trained transformer analysis works usually focus only on one or two model families at a time, overlooking the variability of the architecture and pre-training objectives. In our work, we utilize the oLMpics benchmark and psycholinguistic probing datasets for a diverse set of 29 models including T5, BART, and ALBERT. Additionally, we adapt the oLMpics zero-shot setup for autoregressive models and evaluate GPT networks of different sizes. Our findings show that none of these models can resolve compositional questions in a zero-shot fashion, suggesting that this skill is not learnable using existing pre-training objectives. Furthermore, we find that global model decisions such as architecture, directionality, size of the dataset, and pre-training objective are not predictive of a model's linguistic capabilities. The code for this study is available on GitHub 1 .",Life after BERT: What do Other Muppets Understand about Language?,"Existing pre-trained transformer analysis works usually focus only on one or two model families at a time, overlooking the variability of the architecture and pre-training objectives. In our work, we utilize the oLMpics benchmark and psycholinguistic probing datasets for a diverse set of 29 models including T5, BART, and ALBERT. Additionally, we adapt the oLMpics zero-shot setup for autoregressive models and evaluate GPT networks of different sizes. Our findings show that none of these models can resolve compositional questions in a zero-shot fashion, suggesting that this skill is not learnable using existing pre-training objectives. Furthermore, we find that global model decisions such as architecture, directionality, size of the dataset, and pre-training objective are not predictive of a model's linguistic capabilities. The code for this study is available on GitHub 1 .",This work is funded in part by the NSF award number IIS-1844740.,"Life after BERT: What do Other Muppets Understand about Language?. Existing pre-trained transformer analysis works usually focus only on one or two model families at a time, overlooking the variability of the architecture and pre-training objectives. In our work, we utilize the oLMpics benchmark and psycholinguistic probing datasets for a diverse set of 29 models including T5, BART, and ALBERT. Additionally, we adapt the oLMpics zero-shot setup for autoregressive models and evaluate GPT networks of different sizes. 
Our findings show that none of these models can resolve compositional questions in a zero-shot fashion, suggesting that this skill is not learnable using existing pre-training objectives. Furthermore, we find that global model decisions such as architecture, directionality, size of the dataset, and pre-training objective are not predictive of a model's linguistic capabilities. The code for this study is available on GitHub 1 .",2022
yan-etal-2021-unified-generative,https://aclanthology.org/2021.acl-long.451,0,,,,,,,"A Unified Generative Framework for Various NER Subtasks. Named Entity Recognition (NER) is the task of identifying spans that represent entities in sentences. Whether the entity spans are nested or discontinuous, the NER task can be categorized into the flat NER, nested NER, and discontinuous NER subtasks. These subtasks have been mainly solved by the token-level sequence labelling or span-level classification. However, these solutions can hardly tackle the three kinds of NER subtasks concurrently. To that end, we propose to formulate the NER subtasks as an entity span sequence generation task, which can be solved by a unified sequence-to-sequence (Seq2Seq) framework. Based on our unified framework, we can leverage the pre-trained Seq2Seq model to solve all three kinds of NER subtasks without the special design of the tagging schema or ways to enumerate spans. We exploit three types of entity representations to linearize entities into a sequence. Our proposed framework is easy-to-implement and achieves state-of-theart (SoTA) or near SoTA performance on eight English NER datasets, including two flat NER datasets, three nested NER datasets, and three discontinuous NER datasets 1 .",A Unified Generative Framework for Various {NER} Subtasks,"Named Entity Recognition (NER) is the task of identifying spans that represent entities in sentences. Whether the entity spans are nested or discontinuous, the NER task can be categorized into the flat NER, nested NER, and discontinuous NER subtasks. These subtasks have been mainly solved by the token-level sequence labelling or span-level classification. However, these solutions can hardly tackle the three kinds of NER subtasks concurrently. To that end, we propose to formulate the NER subtasks as an entity span sequence generation task, which can be solved by a unified sequence-to-sequence (Seq2Seq) framework. Based on our unified framework, we can leverage the pre-trained Seq2Seq model to solve all three kinds of NER subtasks without the special design of the tagging schema or ways to enumerate spans. We exploit three types of entity representations to linearize entities into a sequence. Our proposed framework is easy-to-implement and achieves state-of-theart (SoTA) or near SoTA performance on eight English NER datasets, including two flat NER datasets, three nested NER datasets, and three discontinuous NER datasets 1 .",A Unified Generative Framework for Various NER Subtasks,"Named Entity Recognition (NER) is the task of identifying spans that represent entities in sentences. Whether the entity spans are nested or discontinuous, the NER task can be categorized into the flat NER, nested NER, and discontinuous NER subtasks. These subtasks have been mainly solved by the token-level sequence labelling or span-level classification. However, these solutions can hardly tackle the three kinds of NER subtasks concurrently. To that end, we propose to formulate the NER subtasks as an entity span sequence generation task, which can be solved by a unified sequence-to-sequence (Seq2Seq) framework. Based on our unified framework, we can leverage the pre-trained Seq2Seq model to solve all three kinds of NER subtasks without the special design of the tagging schema or ways to enumerate spans. We exploit three types of entity representations to linearize entities into a sequence. 
Our proposed framework is easy-to-implement and achieves state-of-theart (SoTA) or near SoTA performance on eight English NER datasets, including two flat NER datasets, three nested NER datasets, and three discontinuous NER datasets 1 .",We would like to thank the anonymous reviewers for their insightful comments. The discussion with colleagues in AWS Shanghai AI Lab was quite fruitful. We also thank the developers of fastNLP 10 and fitlog 11 . We thank Juntao Yu for helpful discussion about dataset processing. This work was supported by the National Key Research and Development Program of China (No. 2020AAA0106700) and National Natural Science Foundation of China (No. 62022027).,"A Unified Generative Framework for Various NER Subtasks. Named Entity Recognition (NER) is the task of identifying spans that represent entities in sentences. Whether the entity spans are nested or discontinuous, the NER task can be categorized into the flat NER, nested NER, and discontinuous NER subtasks. These subtasks have been mainly solved by the token-level sequence labelling or span-level classification. However, these solutions can hardly tackle the three kinds of NER subtasks concurrently. To that end, we propose to formulate the NER subtasks as an entity span sequence generation task, which can be solved by a unified sequence-to-sequence (Seq2Seq) framework. Based on our unified framework, we can leverage the pre-trained Seq2Seq model to solve all three kinds of NER subtasks without the special design of the tagging schema or ways to enumerate spans. We exploit three types of entity representations to linearize entities into a sequence. Our proposed framework is easy-to-implement and achieves state-of-theart (SoTA) or near SoTA performance on eight English NER datasets, including two flat NER datasets, three nested NER datasets, and three discontinuous NER datasets 1 .",2021
siddharthan-2003-preserving,https://aclanthology.org/W03-2314,0,,,,,,,Preserving Discourse Structure when Simplifying Text. ,Preserving Discourse Structure when Simplifying Text,,Preserving Discourse Structure when Simplifying Text,,,Preserving Discourse Structure when Simplifying Text. ,2003
kilbury-etal-1991-datr,https://aclanthology.org/E91-1024,0,,,,,,,"Datr as a Lexical Component for PATR. means that associated information is represented together or bundled. One advantage of this bundled information is its reusability, which allows redundancy to be reduced. The representation of lexical information should enable us to express a further kind of generalization, namely the relations between regularity, subregularity, and irregularity. Furthermore, the representation has to be computationally tractable and -- possibly with the addition of ""syntactic sugar"" -- more or less readable for human users.
In the project ""Simulation of Lexical Acquisition"" (SIMLEX) unification is used to create new lexical entries through the monotonic accumulation of contextual grammatical information during parsing. The system which we implemented for this purpose is a variant of PATR as described in (Shieber, 1986).",Datr as a Lexical Component for {PATR},"means that associated information is represented together or bundled. One advantage of this bundled information is its reusability, which allows redundancy to be reduced. The representation of lexical information should enable us to express a further kind of generalization, namely the relations between regularity, subregularity, and irregularity. Furthermore, the representation has to be computationally tractable and -- possibly with the addition of ""syntactic sugar"" -- more or less readable for human users.
In the project ""Simulation of Lexical Acquisition"" (SIMLEX) unification is used to create new lexical entries through the monotonic accumulation of contextual grammatical information during parsing. The system which we implemented for this purpose is a variant of PATR as described in (Shieber, 1986).",Datr as a Lexical Component for PATR,"means that associated information is represented together or bundled. One advantage of this bundled information is its reusability, which allows redundancy to be reduced. The representation of lexical information should enable us to express a further kind of generalization, namely the relations between regularity, subregularity, and irregularity. Furthermore, the representation has to be computationally tractable and -- possibly with the addition of ""syntactic sugar"" -- more or less readable for human users.
In the project ""Simulation of Lexical Acquisition"" (SIMLEX) unification is used to create new lexical entries through the monotonic accumulation of contextual grammatical information during parsing. The system which we implemented for this purpose is a variant of PATR as described in (Shieber, 1986).","The research project SIMLEX is supported by the DFG under grant number Ki 374/1. The authors are indebted to the participants of the Workshop on Inheritance, Tilburg 1990. ","Datr as a Lexical Component for PATR. means that associated information is represented together or bundled. One advantage of this bundled information is its reusability, which allows redundancy to be reduced. The representation of lexical information should enable us to express a further kind of generalization, namely the relations between regularity, subregularity, and irregularity. Furthermore, the representation has to be computationally tractable and -- possibly with the addition of ""syntactic sugar"" -- more or less readable for human users.
In the project ""Simulation of Lexical Acquisition"" (SIMLEX) unification is used to create new lexical entries through the monotonic accumulation of contextual grammatical information during parsing. The system which we implemented for this purpose is a variant of PATR as described in (Shieber, 1986).",1991
thorne-etal-2013-automated,https://aclanthology.org/I13-1160,1,,,,health,,,"Automated Activity Recognition in Clinical Documents. We describe a first experiment on the identification and extraction of computer-interpretable guideline (CIG) components (activities, actors and consumed artifacts) from clinical documents, based on clinical entity recognition techniques. We rely on MetaMap and the UMLS Metathesaurus to provide lexical information, and study the impact of clinical document syntax and semantics on activity recognition.",Automated Activity Recognition in Clinical Documents,"We describe a first experiment on the identification and extraction of computer-interpretable guideline (CIG) components (activities, actors and consumed artifacts) from clinical documents, based on clinical entity recognition techniques. We rely on MetaMap and the UMLS Metathesaurus to provide lexical information, and study the impact of clinical document syntax and semantics on activity recognition.",Automated Activity Recognition in Clinical Documents,"We describe a first experiment on the identification and extraction of computer-interpretable guideline (CIG) components (activities, actors and consumed artifacts) from clinical documents, based on clinical entity recognition techniques. We rely on MetaMap and the UMLS Metathesaurus to provide lexical information, and study the impact of clinical document syntax and semantics on activity recognition.",,"Automated Activity Recognition in Clinical Documents. We describe a first experiment on the identification and extraction of computer-interpretable guideline (CIG) components (activities, actors and consumed artifacts) from clinical documents, based on clinical entity recognition techniques. We rely on MetaMap and the UMLS Metathesaurus to provide lexical information, and study the impact of clinical document syntax and semantics on activity recognition.",2013
farahmand-henderson-2016-modeling,https://aclanthology.org/W16-1809,0,,,,,,,"Modeling the Non-Substitutability of Multiword Expressions with Distributional Semantics and a Log-Linear Model. Non-substitutability is a property of Multiword Expressions (MWEs) that often causes lexical rigidity and is relevant for most types of MWEs. Efficient identification of this property can result in the efficient identification of MWEs. In this work we propose using distributional semantics, in the form of word embeddings, to identify candidate substitutions for a candidate MWE and model its substitutability. We use our models to rank MWEs based on their lexical rigidity and study their performance in comparison with association measures. We also study the interaction between our models and association measures. We show that one of our models can significantly improve over the association measure baselines, identifying collocations.",Modeling the Non-Substitutability of Multiword Expressions with Distributional Semantics and a Log-Linear Model,"Non-substitutability is a property of Multiword Expressions (MWEs) that often causes lexical rigidity and is relevant for most types of MWEs. Efficient identification of this property can result in the efficient identification of MWEs. In this work we propose using distributional semantics, in the form of word embeddings, to identify candidate substitutions for a candidate MWE and model its substitutability. We use our models to rank MWEs based on their lexical rigidity and study their performance in comparison with association measures. We also study the interaction between our models and association measures. We show that one of our models can significantly improve over the association measure baselines, identifying collocations.",Modeling the Non-Substitutability of Multiword Expressions with Distributional Semantics and a Log-Linear Model,"Non-substitutability is a property of Multiword Expressions (MWEs) that often causes lexical rigidity and is relevant for most types of MWEs. Efficient identification of this property can result in the efficient identification of MWEs. In this work we propose using distributional semantics, in the form of word embeddings, to identify candidate substitutions for a candidate MWE and model its substitutability. We use our models to rank MWEs based on their lexical rigidity and study their performance in comparison with association measures. We also study the interaction between our models and association measures. We show that one of our models can significantly improve over the association measure baselines, identifying collocations.",,"Modeling the Non-Substitutability of Multiword Expressions with Distributional Semantics and a Log-Linear Model. Non-substitutability is a property of Multiword Expressions (MWEs) that often causes lexical rigidity and is relevant for most types of MWEs. Efficient identification of this property can result in the efficient identification of MWEs. In this work we propose using distributional semantics, in the form of word embeddings, to identify candidate substitutions for a candidate MWE and model its substitutability. We use our models to rank MWEs based on their lexical rigidity and study their performance in comparison with association measures. We also study the interaction between our models and association measures. We show that one of our models can significantly improve over the association measure baselines, identifying collocations.",2016
vandeghinste-schuurman-2014-linking,http://www.lrec-conf.org/proceedings/lrec2014/pdf/189_Paper.pdf,0,,,,,,,"Linking Pictographs to Synsets: Sclera2Cornetto. Social inclusion of people with Intellectual and Developmental Disabilities can be promoted by offering them ways to independently use the internet. People with reading or writing disabilities can use pictographs instead of text. We present a resource in which we have linked a set of 5710 pictographs to lexical-semantic concepts in Cornetto, a Wordnet-like database for Dutch. We show that, by using this resource in a text-to-pictograph translation system, we can greatly improve the coverage comparing with a baseline where words are converted into pictographs only if the word equals the filename.",Linking Pictographs to Synsets: {S}clera2{C}ornetto,"Social inclusion of people with Intellectual and Developmental Disabilities can be promoted by offering them ways to independently use the internet. People with reading or writing disabilities can use pictographs instead of text. We present a resource in which we have linked a set of 5710 pictographs to lexical-semantic concepts in Cornetto, a Wordnet-like database for Dutch. We show that, by using this resource in a text-to-pictograph translation system, we can greatly improve the coverage comparing with a baseline where words are converted into pictographs only if the word equals the filename.",Linking Pictographs to Synsets: Sclera2Cornetto,"Social inclusion of people with Intellectual and Developmental Disabilities can be promoted by offering them ways to independently use the internet. People with reading or writing disabilities can use pictographs instead of text. We present a resource in which we have linked a set of 5710 pictographs to lexical-semantic concepts in Cornetto, a Wordnet-like database for Dutch. We show that, by using this resource in a text-to-pictograph translation system, we can greatly improve the coverage comparing with a baseline where words are converted into pictographs only if the word equals the filename.","This research is done in the Picto project, funded by the Support Fund Marguerite-Marie Delacroix. 17 Follow up work on the localisation of the text to pictograph translator is funded by the European Commission CIP-621055 in the Able-to-Include project.","Linking Pictographs to Synsets: Sclera2Cornetto. Social inclusion of people with Intellectual and Developmental Disabilities can be promoted by offering them ways to independently use the internet. People with reading or writing disabilities can use pictographs instead of text. We present a resource in which we have linked a set of 5710 pictographs to lexical-semantic concepts in Cornetto, a Wordnet-like database for Dutch. We show that, by using this resource in a text-to-pictograph translation system, we can greatly improve the coverage comparing with a baseline where words are converted into pictographs only if the word equals the filename.",2014
fu-etal-2014-improving,https://aclanthology.org/W14-6807,0,,,,,,,"Improving Chinese Sentence Polarity Classification via Opinion Paraphrasing. While substantial studies have been achieved on sentiment polarity classification to date, lacking enough opinion-annotated corpora for reliable training is still a challenge. In this paper we propose to improve a support vector machines based polarity classifier by enriching both training data and test data via opinion paraphrasing. In particular, we first extract an equivalent set of attribute-evaluation pairs from the training data and then exploit it to generate opinion paraphrases in order to expand the training corpus or enrich opinionated sentences for polarity classification. We tested our system over two sets of online product reviews in car and mobile-phone domains. The experimental results show that using opinion paraphrases results in significant performance improvement in polarity classification.",Improving {C}hinese Sentence Polarity Classification via Opinion Paraphrasing,"While substantial studies have been achieved on sentiment polarity classification to date, lacking enough opinion-annotated corpora for reliable training is still a challenge. In this paper we propose to improve a support vector machines based polarity classifier by enriching both training data and test data via opinion paraphrasing. In particular, we first extract an equivalent set of attribute-evaluation pairs from the training data and then exploit it to generate opinion paraphrases in order to expand the training corpus or enrich opinionated sentences for polarity classification. We tested our system over two sets of online product reviews in car and mobile-phone domains. The experimental results show that using opinion paraphrases results in significant performance improvement in polarity classification.",Improving Chinese Sentence Polarity Classification via Opinion Paraphrasing,"While substantial studies have been achieved on sentiment polarity classification to date, lacking enough opinion-annotated corpora for reliable training is still a challenge. In this paper we propose to improve a support vector machines based polarity classifier by enriching both training data and test data via opinion paraphrasing. In particular, we first extract an equivalent set of attribute-evaluation pairs from the training data and then exploit it to generate opinion paraphrases in order to expand the training corpus or enrich opinionated sentences for polarity classification. We tested our system over two sets of online product reviews in car and mobile-phone domains. The experimental results show that using opinion paraphrases results in significant performance improvement in polarity classification.","This study was supported by National Natural Science Foundation of China under Grant No.61170148 and No.60973081, the Returned Scholar Foundation of Heilongjiang Province, and Harbin Innovative Foundation for Returnees under Grant No.2009RFLXG007, respectively. ","Improving Chinese Sentence Polarity Classification via Opinion Paraphrasing. While substantial studies have been achieved on sentiment polarity classification to date, lacking enough opinion-annotated corpora for reliable training is still a challenge. In this paper we propose to improve a support vector machines based polarity classifier by enriching both training data and test data via opinion paraphrasing. 
In particular, we first extract an equivalent set of attribute-evaluation pairs from the training data and then exploit it to generate opinion paraphrases in order to expand the training corpus or enrich opinionated sentences for polarity classification. We tested our system over two sets of online product reviews in car and mobile-phone domains. The experimental results show that using opinion paraphrases results in significant performance improvement in polarity classification.",2014
liu-etal-2020-unsupervised,https://aclanthology.org/2020.acl-main.28,0,,,,,,,"Unsupervised Paraphrasing by Simulated Annealing. We propose UPSA, a novel approach that accomplishes Unsupervised Paraphrasing by Simulated Annealing. We model paraphrase generation as an optimization problem and propose a sophisticated objective function, involving semantic similarity, expression diversity, and language fluency of paraphrases. UPSA searches the sentence space towards this objective by performing a sequence of local edits. We evaluate our approach on various datasets, namely, Quora, Wikianswers, MSCOCO, and Twitter. Extensive results show that UPSA achieves the state-of-the-art performance compared with previous unsupervised methods in terms of both automatic and human evaluations. Further, our approach outperforms most existing domain-adapted supervised models, showing the generalizability of UPSA.",Unsupervised Paraphrasing by Simulated Annealing,"We propose UPSA, a novel approach that accomplishes Unsupervised Paraphrasing by Simulated Annealing. We model paraphrase generation as an optimization problem and propose a sophisticated objective function, involving semantic similarity, expression diversity, and language fluency of paraphrases. UPSA searches the sentence space towards this objective by performing a sequence of local edits. We evaluate our approach on various datasets, namely, Quora, Wikianswers, MSCOCO, and Twitter. Extensive results show that UPSA achieves the state-of-the-art performance compared with previous unsupervised methods in terms of both automatic and human evaluations. Further, our approach outperforms most existing domain-adapted supervised models, showing the generalizability of UPSA.",Unsupervised Paraphrasing by Simulated Annealing,"We propose UPSA, a novel approach that accomplishes Unsupervised Paraphrasing by Simulated Annealing. We model paraphrase generation as an optimization problem and propose a sophisticated objective function, involving semantic similarity, expression diversity, and language fluency of paraphrases. UPSA searches the sentence space towards this objective by performing a sequence of local edits. We evaluate our approach on various datasets, namely, Quora, Wikianswers, MSCOCO, and Twitter. Extensive results show that UPSA achieves the state-of-the-art performance compared with previous unsupervised methods in terms of both automatic and human evaluations. Further, our approach outperforms most existing domain-adapted supervised models, showing the generalizability of UPSA.","We thank the anonymous reviewers for their insightful suggestions. This work was supported in part by the Beijing Innovation Center for Future Chip. Lili Mou is supported by AltaML, the Amii Fellow Program, and the Canadian CIFAR AI Chair Program; he also acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), RGPIN-2020-04465. Sen Song is the corresponding author of this paper.","Unsupervised Paraphrasing by Simulated Annealing. We propose UPSA, a novel approach that accomplishes Unsupervised Paraphrasing by Simulated Annealing. We model paraphrase generation as an optimization problem and propose a sophisticated objective function, involving semantic similarity, expression diversity, and language fluency of paraphrases. UPSA searches the sentence space towards this objective by performing a sequence of local edits. We evaluate our approach on various datasets, namely, Quora, Wikianswers, MSCOCO, and Twitter. 
Extensive results show that UPSA achieves the state-of-the-art performance compared with previous unsupervised methods in terms of both automatic and human evaluations. Further, our approach outperforms most existing domain-adapted supervised models, showing the generalizability of UPSA.",2020
dohsaka-etal-2010-user,https://aclanthology.org/W10-4358,1,,,,industry_innovation_infrastructure,partnership,,"User-adaptive Coordination of Agent Communicative Behavior in Spoken Dialogue. In this paper, which addresses smooth spoken interaction between human users and conversational agents, we present an experimental study that evaluates a method for user-adaptive coordination of agent communicative behavior. Our method adapts the pause duration preceding agent utterances and the agent gaze duration to reduce the discomfort perceived by individual users during interaction. The experimental results showed a statistically significant tendency: the duration of the agent pause and the gaze converged during interaction with the method. The method also significantly improved the perceived relevance of the agent communicative behavior.",User-adaptive Coordination of Agent Communicative Behavior in Spoken Dialogue,"In this paper, which addresses smooth spoken interaction between human users and conversational agents, we present an experimental study that evaluates a method for user-adaptive coordination of agent communicative behavior. Our method adapts the pause duration preceding agent utterances and the agent gaze duration to reduce the discomfort perceived by individual users during interaction. The experimental results showed a statistically significant tendency: the duration of the agent pause and the gaze converged during interaction with the method. The method also significantly improved the perceived relevance of the agent communicative behavior.",User-adaptive Coordination of Agent Communicative Behavior in Spoken Dialogue,"In this paper, which addresses smooth spoken interaction between human users and conversational agents, we present an experimental study that evaluates a method for user-adaptive coordination of agent communicative behavior. Our method adapts the pause duration preceding agent utterances and the agent gaze duration to reduce the discomfort perceived by individual users during interaction. The experimental results showed a statistically significant tendency: the duration of the agent pause and the gaze converged during interaction with the method. The method also significantly improved the perceived relevance of the agent communicative behavior.","This work was partially supported by a Grant-in-Aid for Scientific Research on Innovative Areas, ""Founding a creative society via collaboration between humans and robots"" (21118004), from the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan.","User-adaptive Coordination of Agent Communicative Behavior in Spoken Dialogue. In this paper, which addresses smooth spoken interaction between human users and conversational agents, we present an experimental study that evaluates a method for user-adaptive coordination of agent communicative behavior. Our method adapts the pause duration preceding agent utterances and the agent gaze duration to reduce the discomfort perceived by individual users during interaction. The experimental results showed a statistically significant tendency: the duration of the agent pause and the gaze converged during interaction with the method. The method also significantly improved the perceived relevance of the agent communicative behavior.",2010
ikeda-etal-1998-information,https://aclanthology.org/C98-1090,0,,,,,,,"Information Classification and Navigation Based on 5W1H of the Target Information. This paper proposes a method by which 5W1H (who, when, where, what, why, how, and predicate) information is used to classify and navigate Japanese-language texts. 5W1H information, extracted from text data, has an access platform with three functions: episodic retrieval, multi-dimensional classification, and overall classification. In a six-month trial, the platform was used by 50 people to access 6400 newspaper articles. The three functions proved to be effective for office documentation work and the precision of extraction was approximately 82%.",Information Classification and Navigation Based on 5{W}1{H} of the Target Information,"This paper proposes a method by which 5W1H (who, when, where, what, why, how, and predicate) information is used to classify and navigate Japanese-language texts. 5W1H information, extracted from text data, has an access platform with three functions: episodic retrieval, multi-dimensional classification, and overall classification. In a six-month trial, the platform was used by 50 people to access 6400 newspaper articles. The three functions proved to be effective for office documentation work and the precision of extraction was approximately 82%.",Information Classification and Navigation Based on 5W1H of the Target Information,"This paper proposes a method by which 5W1H (who, when, where, what, why, how, and predicate) information is used to classify and navigate Japanese-language texts. 5W1H information, extracted from text data, has an access platform with three functions: episodic retrieval, multi-dimensional classification, and overall classification. In a six-month trial, the platform was used by 50 people to access 6400 newspaper articles. The three functions proved to be effective for office documentation work and the precision of extraction was approximately 82%.","We would like to thank Dr. Satoshi Goto and Dr. Takao Watanabe for their encouragement and continued support throughout this work. We also appreciate the contribution of Mr. Kenji Satoh, Mr. Takayoshi Ochiai, Mr. Satoshi Shimokawara, and Mr. Masahito Abe to this work.","Information Classification and Navigation Based on 5W1H of the Target Information. This paper proposes a method by which 5W1H (who, when, where, what, why, how, and predicate) information is used to classify and navigate Japanese-language texts. 5W1H information, extracted from text data, has an access platform with three functions: episodic retrieval, multi-dimensional classification, and overall classification. In a six-month trial, the platform was used by 50 people to access 6400 newspaper articles. The three functions proved to be effective for office documentation work and the precision of extraction was approximately 82%.",1998
klein-nabi-2020-contrastive,https://aclanthology.org/2020.acl-main.671,0,,,,,,,"Contrastive Self-Supervised Learning for Commonsense Reasoning. We propose a self-supervised method to solve Pronoun Disambiguation and Winograd Schema Challenge problems. Our approach exploits the characteristic structure of training corpora related to so-called ""trigger"" words, which are responsible for flipping the answer in pronoun disambiguation. We achieve such commonsense reasoning by constructing pairwise contrastive auxiliary predictions. To this end, we leverage a mutual exclusive loss regularized by a contrastive margin. Our architecture is based on the recently introduced transformer networks, BERT, that exhibits strong performance on many NLP benchmarks. Empirical results show that our method alleviates the limitation of current supervised approaches for commonsense reasoning. This study opens up avenues for exploiting inexpensive self-supervision to achieve performance gain in commonsense reasoning tasks.",Contrastive Self-Supervised Learning for Commonsense Reasoning,"We propose a self-supervised method to solve Pronoun Disambiguation and Winograd Schema Challenge problems. Our approach exploits the characteristic structure of training corpora related to so-called ""trigger"" words, which are responsible for flipping the answer in pronoun disambiguation. We achieve such commonsense reasoning by constructing pairwise contrastive auxiliary predictions. To this end, we leverage a mutual exclusive loss regularized by a contrastive margin. Our architecture is based on the recently introduced transformer networks, BERT, that exhibits strong performance on many NLP benchmarks. Empirical results show that our method alleviates the limitation of current supervised approaches for commonsense reasoning. This study opens up avenues for exploiting inexpensive self-supervision to achieve performance gain in commonsense reasoning tasks.",Contrastive Self-Supervised Learning for Commonsense Reasoning,"We propose a self-supervised method to solve Pronoun Disambiguation and Winograd Schema Challenge problems. Our approach exploits the characteristic structure of training corpora related to so-called ""trigger"" words, which are responsible for flipping the answer in pronoun disambiguation. We achieve such commonsense reasoning by constructing pairwise contrastive auxiliary predictions. To this end, we leverage a mutual exclusive loss regularized by a contrastive margin. Our architecture is based on the recently introduced transformer networks, BERT, that exhibits strong performance on many NLP benchmarks. Empirical results show that our method alleviates the limitation of current supervised approaches for commonsense reasoning. This study opens up avenues for exploiting inexpensive self-supervision to achieve performance gain in commonsense reasoning tasks.",,"Contrastive Self-Supervised Learning for Commonsense Reasoning. We propose a self-supervised method to solve Pronoun Disambiguation and Winograd Schema Challenge problems. Our approach exploits the characteristic structure of training corpora related to so-called ""trigger"" words, which are responsible for flipping the answer in pronoun disambiguation. We achieve such commonsense reasoning by constructing pairwise contrastive auxiliary predictions. To this end, we leverage a mutual exclusive loss regularized by a contrastive margin. 
Our architecture is based on the recently introduced transformer networks, BERT, that exhibits strong performance on many NLP benchmarks. Empirical results show that our method alleviates the limitation of current supervised approaches for commonsense reasoning. This study opens up avenues for exploiting inexpensive self-supervision to achieve performance gain in commonsense reasoning tasks.",2020
kumar-etal-2020-vocabulary,https://aclanthology.org/2020.aacl-main.78,0,,,,,,,"Vocabulary Matters: A Simple yet Effective Approach to Paragraph-level Question Generation. Question generation (QG) has recently attracted considerable attention. Most of the current neural models take as input only one or two sentences and perform poorly when multiple sentences or complete paragraphs are given as input. However, in real-world scenarios, it is very important to be able to generate high-quality questions from complete paragraphs. In this paper, we present a simple yet effective technique for answer-aware question generation from paragraphs. We augment a basic sequence-to-sequence QG model with dynamic, paragraph-specific dictionary and copy attention that is persistent across the corpus, without requiring features generated by sophisticated NLP pipelines or handcrafted rules. Our evaluation on SQuAD shows that our model significantly outperforms current state-of-the-art systems in question generation from paragraphs in both automatic and human evaluation. We achieve a 6-point improvement over the best system on BLEU-4, from 16.38 to 22.62.",Vocabulary Matters: A Simple yet Effective Approach to Paragraph-level Question Generation,"Question generation (QG) has recently attracted considerable attention. Most of the current neural models take as input only one or two sentences and perform poorly when multiple sentences or complete paragraphs are given as input. However, in real-world scenarios, it is very important to be able to generate high-quality questions from complete paragraphs. In this paper, we present a simple yet effective technique for answer-aware question generation from paragraphs. We augment a basic sequence-to-sequence QG model with dynamic, paragraph-specific dictionary and copy attention that is persistent across the corpus, without requiring features generated by sophisticated NLP pipelines or handcrafted rules. Our evaluation on SQuAD shows that our model significantly outperforms current state-of-the-art systems in question generation from paragraphs in both automatic and human evaluation. We achieve a 6-point improvement over the best system on BLEU-4, from 16.38 to 22.62.",Vocabulary Matters: A Simple yet Effective Approach to Paragraph-level Question Generation,"Question generation (QG) has recently attracted considerable attention. Most of the current neural models take as input only one or two sentences and perform poorly when multiple sentences or complete paragraphs are given as input. However, in real-world scenarios, it is very important to be able to generate high-quality questions from complete paragraphs. In this paper, we present a simple yet effective technique for answer-aware question generation from paragraphs. We augment a basic sequence-to-sequence QG model with dynamic, paragraph-specific dictionary and copy attention that is persistent across the corpus, without requiring features generated by sophisticated NLP pipelines or handcrafted rules. Our evaluation on SQuAD shows that our model significantly outperforms current state-of-the-art systems in question generation from paragraphs in both automatic and human evaluation. We achieve a 6-point improvement over the best system on BLEU-4, from 16.38 to 22.62.",,"Vocabulary Matters: A Simple yet Effective Approach to Paragraph-level Question Generation. Question generation (QG) has recently attracted considerable attention. 
Most of the current neural models take as input only one or two sentences and perform poorly when multiple sentences or complete paragraphs are given as input. However, in real-world scenarios, it is very important to be able to generate high-quality questions from complete paragraphs. In this paper, we present a simple yet effective technique for answer-aware question generation from paragraphs. We augment a basic sequence-to-sequence QG model with dynamic, paragraph-specific dictionary and copy attention that is persistent across the corpus, without requiring features generated by sophisticated NLP pipelines or handcrafted rules. Our evaluation on SQuAD shows that our model significantly outperforms current state-of-the-art systems in question generation from paragraphs in both automatic and human evaluation. We achieve a 6-point improvement over the best system on BLEU-4, from 16.38 to 22.62.",2020
effenberger-etal-2021-analysis-language,https://aclanthology.org/2021.findings-emnlp.239,0,,,,,,,"Analysis of Language Change in Collaborative Instruction Following. We analyze language change over time in a collaborative, goal-oriented instructional task, where utility-maximizing participants form conventions and increase their expertise. Prior work studied such scenarios mostly in the context of reference games, and consistently found that language complexity is reduced along multiple dimensions, such as utterance length, as conventions are formed. In contrast, we find that, given the ability to increase instruction utility, instructors increase language complexity along these previously studied dimensions to better collaborate with increasingly skilled instruction followers.",Analysis of Language Change in Collaborative Instruction Following,"We analyze language change over time in a collaborative, goal-oriented instructional task, where utility-maximizing participants form conventions and increase their expertise. Prior work studied such scenarios mostly in the context of reference games, and consistently found that language complexity is reduced along multiple dimensions, such as utterance length, as conventions are formed. In contrast, we find that, given the ability to increase instruction utility, instructors increase language complexity along these previously studied dimensions to better collaborate with increasingly skilled instruction followers.",Analysis of Language Change in Collaborative Instruction Following,"We analyze language change over time in a collaborative, goal-oriented instructional task, where utility-maximizing participants form conventions and increase their expertise. Prior work studied such scenarios mostly in the context of reference games, and consistently found that language complexity is reduced along multiple dimensions, such as utterance length, as conventions are formed. In contrast, we find that, given the ability to increase instruction utility, instructors increase language complexity along these previously studied dimensions to better collaborate with increasingly skilled instruction followers.","This research was supported by NSF under grants No. 1750499, 1750499-REU, and DGE-1650441. It also received support from a Google Focused Award, the Break Through Tech summer internship program, and a Facebook PhD Fellowship. We thank Chris Potts and Robert Hawkins for early discussions that initiated this analysis; and Ge Gao and Forrest Davis for their comments.","Analysis of Language Change in Collaborative Instruction Following. 
We analyze language change over time in a collaborative, goal-oriented instructional task, where utility-maximizing participants form conventions and increase their expertise. Prior work studied such scenarios mostly in the context of reference games, and consistently found that language complexity is reduced along multiple dimensions, such as utterance length, as conventions are formed. In contrast, we find that, given the ability to increase instruction utility, instructors increase language complexity along these previously studied dimensions to better collaborate with increasingly skilled instruction followers.",2021
bird-2022-local,https://aclanthology.org/2022.acl-long.539,0,,,,,,,"Local Languages, Third Spaces, and other High-Resource Scenarios. How can language technology address the diverse situations of the world's languages? In one view, languages exist on a resource continuum and the challenge is to scale existing solutions, bringing under-resourced languages into the high-resource world. In another view, presented here, the world's language ecology includes standardised languages, local languages, and contact languages. These are often subsumed under the label of 'under-resourced languages' even though they have distinct functions and prospects. I explore this position and propose some ecologically-aware language technology agendas.","Local Languages, Third Spaces, and other High-Resource Scenarios","How can language technology address the diverse situations of the world's languages? In one view, languages exist on a resource continuum and the challenge is to scale existing solutions, bringing under-resourced languages into the high-resource world. In another view, presented here, the world's language ecology includes standardised languages, local languages, and contact languages. These are often subsumed under the label of 'under-resourced languages' even though they have distinct functions and prospects. I explore this position and propose some ecologically-aware language technology agendas.","Local Languages, Third Spaces, and other High-Resource Scenarios","How can language technology address the diverse situations of the world's languages? In one view, languages exist on a resource continuum and the challenge is to scale existing solutions, bringing under-resourced languages into the high-resource world. In another view, presented here, the world's language ecology includes standardised languages, local languages, and contact languages. These are often subsumed under the label of 'under-resourced languages' even though they have distinct functions and prospects. I explore this position and propose some ecologically-aware language technology agendas.",,"Local Languages, Third Spaces, and other High-Resource Scenarios. How can language technology address the diverse situations of the world's languages? In one view, languages exist on a resource continuum and the challenge is to scale existing solutions, bringing under-resourced languages into the high-resource world. In another view, presented here, the world's language ecology includes standardised languages, local languages, and contact languages. These are often subsumed under the label of 'under-resourced languages' even though they have distinct functions and prospects. I explore this position and propose some ecologically-aware language technology agendas.",2022
mehdad-etal-2010-towards,https://aclanthology.org/N10-1045,0,,,,,,,"Towards Cross-Lingual Textual Entailment. This paper investigates cross-lingual textual entailment as a semantic relation between two text portions in different languages, and proposes a prospective research direction. We argue that cross-lingual textual entailment (CLTE) can be a core technology for several cross-lingual NLP applications and tasks. Through preliminary experiments, we aim at proving the feasibility of the task, and providing a reliable baseline. We also introduce new applications for CLTE that will be explored in future work.",Towards Cross-Lingual Textual Entailment,"This paper investigates cross-lingual textual entailment as a semantic relation between two text portions in different languages, and proposes a prospective research direction. We argue that cross-lingual textual entailment (CLTE) can be a core technology for several cross-lingual NLP applications and tasks. Through preliminary experiments, we aim at proving the feasibility of the task, and providing a reliable baseline. We also introduce new applications for CLTE that will be explored in future work.",Towards Cross-Lingual Textual Entailment,"This paper investigates cross-lingual textual entailment as a semantic relation between two text portions in different languages, and proposes a prospective research direction. We argue that cross-lingual textual entailment (CLTE) can be a core technology for several cross-lingual NLP applications and tasks. Through preliminary experiments, we aim at proving the feasibility of the task, and providing a reliable baseline. We also introduce new applications for CLTE that will be explored in future work.",This work has been partially supported by the EC-funded project CoSyne (FP7-ICT-4-24853),"Towards Cross-Lingual Textual Entailment. This paper investigates cross-lingual textual entailment as a semantic relation between two text portions in different languages, and proposes a prospective research direction. We argue that cross-lingual textual entailment (CLTE) can be a core technology for several cross-lingual NLP applications and tasks. Through preliminary experiments, we aim at proving the feasibility of the task, and providing a reliable baseline. We also introduce new applications for CLTE that will be explored in future work.",2010
he-etal-2020-syntactic,https://aclanthology.org/2020.coling-main.246,0,,,,,,,"Syntactic Graph Convolutional Network for Spoken Language Understanding. Slot filling and intent detection are two major tasks for spoken language understanding. In most existing work, these two tasks are built as joint models with multi-task learning with no consideration of prior linguistic knowledge. In this paper, we propose a novel joint model that applies a graph convolutional network over dependency trees to integrate the syntactic structure for learning slot filling and intent detection jointly. Experimental results show that our proposed model achieves state-of-the-art performance on two public benchmark datasets and outperforms existing work. At last, we apply the BERT model to further improve the performance on both slot filling and intent detection. * The work was done when the first author was an intern at Meituan Group. The first two authors contribute equally.",Syntactic Graph Convolutional Network for Spoken Language Understanding,"Slot filling and intent detection are two major tasks for spoken language understanding. In most existing work, these two tasks are built as joint models with multi-task learning with no consideration of prior linguistic knowledge. In this paper, we propose a novel joint model that applies a graph convolutional network over dependency trees to integrate the syntactic structure for learning slot filling and intent detection jointly. Experimental results show that our proposed model achieves state-of-the-art performance on two public benchmark datasets and outperforms existing work. At last, we apply the BERT model to further improve the performance on both slot filling and intent detection. * The work was done when the first author was an intern at Meituan Group. The first two authors contribute equally.",Syntactic Graph Convolutional Network for Spoken Language Understanding,"Slot filling and intent detection are two major tasks for spoken language understanding. In most existing work, these two tasks are built as joint models with multi-task learning with no consideration of prior linguistic knowledge. In this paper, we propose a novel joint model that applies a graph convolutional network over dependency trees to integrate the syntactic structure for learning slot filling and intent detection jointly. Experimental results show that our proposed model achieves state-of-the-art performance on two public benchmark datasets and outperforms existing work. At last, we apply the BERT model to further improve the performance on both slot filling and intent detection. * The work was done when the first author was an intern at Meituan Group. The first two authors contribute equally.","The work was done when the first author was an intern at Meituan Dialogue Group. We thank Xiaojie Wang, Jiangnan Xia and Hengtong Lu for the discussion. We thank all anonymous reviewers for their constructive feedback.","Syntactic Graph Convolutional Network for Spoken Language Understanding. Slot filling and intent detection are two major tasks for spoken language understanding. In most existing work, these two tasks are built as joint models with multi-task learning with no consideration of prior linguistic knowledge. In this paper, we propose a novel joint model that applies a graph convolutional network over dependency trees to integrate the syntactic structure for learning slot filling and intent detection jointly. 
Experimental results show that our proposed model achieves state-of-the-art performance on two public benchmark datasets and outperforms existing work. At last, we apply the BERT model to further improve the performance on both slot filling and intent detection. * The work was done when the first author was an intern at Meituan Group. The first two authors contribute equally.",2020
nigmatulina-etal-2020-asr,https://aclanthology.org/2020.vardial-1.2,0,,,,,,,"ASR for Non-standardised Languages with Dialectal Variation: the case of Swiss German. Strong regional variation, together with the lack of standard orthography, makes Swiss German automatic speech recognition (ASR) particularly difficult in a multi-dialectal setting. This paper focuses on one of the many challenges, namely, the choice of the output text to represent non-standardised Swiss German. We investigate two potential options: a) dialectal writing-approximate phonemic transcriptions that provide close correspondence between grapheme labels and the acoustic signal but are highly inconsistent and b) normalised writing-transcriptions resembling standard German that are relatively consistent but distant from the acoustic signal. To find out which writing facilitates Swiss German ASR, we build several systems using the Kaldi toolkit and a dataset covering 14 regional varieties. A formal comparison shows that the system trained on the normalised transcriptions achieves better results in word error rate (WER) (29.39%) but underperforms at the character level, suggesting dialectal transcriptions offer a viable solution for downstream applications where dialectal differences are important. To better assess word-level performance for dialectal transcriptions, we use a flexible WER measure (FlexWER). When evaluated with this metric, the system trained on dialectal transcriptions outperforms that trained on the normalised writing. Besides establishing a benchmark for Swiss German multi-dialectal ASR, our findings can be helpful in designing ASR systems for other languages without standard orthography.",{ASR} for Non-standardised Languages with Dialectal Variation: the case of {S}wiss {G}erman,"Strong regional variation, together with the lack of standard orthography, makes Swiss German automatic speech recognition (ASR) particularly difficult in a multi-dialectal setting. This paper focuses on one of the many challenges, namely, the choice of the output text to represent non-standardised Swiss German. We investigate two potential options: a) dialectal writing-approximate phonemic transcriptions that provide close correspondence between grapheme labels and the acoustic signal but are highly inconsistent and b) normalised writing-transcriptions resembling standard German that are relatively consistent but distant from the acoustic signal. To find out which writing facilitates Swiss German ASR, we build several systems using the Kaldi toolkit and a dataset covering 14 regional varieties. A formal comparison shows that the system trained on the normalised transcriptions achieves better results in word error rate (WER) (29.39%) but underperforms at the character level, suggesting dialectal transcriptions offer a viable solution for downstream applications where dialectal differences are important. To better assess word-level performance for dialectal transcriptions, we use a flexible WER measure (FlexWER). When evaluated with this metric, the system trained on dialectal transcriptions outperforms that trained on the normalised writing. 
Besides establishing a benchmark for Swiss German multi-dialectal ASR, our findings can be helpful in designing ASR systems for other languages without standard orthography.",ASR for Non-standardised Languages with Dialectal Variation: the case of Swiss German,"Strong regional variation, together with the lack of standard orthography, makes Swiss German automatic speech recognition (ASR) particularly difficult in a multi-dialectal setting. This paper focuses on one of the many challenges, namely, the choice of the output text to represent non-standardised Swiss German. We investigate two potential options: a) dialectal writing-approximate phonemic transcriptions that provide close correspondence between grapheme labels and the acoustic signal but are highly inconsistent and b) normalised writing-transcriptions resembling standard German that are relatively consistent but distant from the acoustic signal. To find out which writing facilitates Swiss German ASR, we build several systems using the Kaldi toolkit and a dataset covering 14 regional varieties. A formal comparison shows that the system trained on the normalised transcriptions achieves better results in word error rate (WER) (29.39%) but underperforms at the character level, suggesting dialectal transcriptions offer a viable solution for downstream applications where dialectal differences are important. To better assess word-level performance for dialectal transcriptions, we use a flexible WER measure (FlexWER). When evaluated with this metric, the system trained on dialectal transcriptions outperforms that trained on the normalised writing. Besides establishing a benchmark for Swiss German multi-dialectal ASR, our findings can be helpful in designing ASR systems for other languages without standard orthography.",,"ASR for Non-standardised Languages with Dialectal Variation: the case of Swiss German. Strong regional variation, together with the lack of standard orthography, makes Swiss German automatic speech recognition (ASR) particularly difficult in a multi-dialectal setting. This paper focuses on one of the many challenges, namely, the choice of the output text to represent non-standardised Swiss German. We investigate two potential options: a) dialectal writing-approximate phonemic transcriptions that provide close correspondence between grapheme labels and the acoustic signal but are highly inconsistent and b) normalised writing-transcriptions resembling standard German that are relatively consistent but distant from the acoustic signal. To find out which writing facilitates Swiss German ASR, we build several systems using the Kaldi toolkit and a dataset covering 14 regional varieties. A formal comparison shows that the system trained on the normalised transcriptions achieves better results in word error rate (WER) (29.39%) but underperforms at the character level, suggesting dialectal transcriptions offer a viable solution for downstream applications where dialectal differences are important. To better assess word-level performance for dialectal transcriptions, we use a flexible WER measure (FlexWER). When evaluated with this metric, the system trained on dialectal transcriptions outperforms that trained on the normalised writing. Besides establishing a benchmark for Swiss German multi-dialectal ASR, our findings can be helpful in designing ASR systems for other languages without standard orthography.",2020
yin-etal-2016-neural-generative,https://aclanthology.org/W16-0106,0,,,,,,,"Neural Generative Question Answering. This paper presents an end-to-end neural network model, named Neural Generative Question Answering (GENQA), that can generate answers to simple factoid questions, based on the facts in a knowledge-base. More specifically, the model is built on the encoder-decoder framework for sequence-to-sequence learning, while equipped with the ability to enquire the knowledge-base, and is trained on a corpus of question-answer pairs, with their associated triples in the knowledge-base. Empirical study shows the proposed model can effectively deal with the variations of questions and answers, and generate right and natural answers by referring to the facts in the knowledge-base. The experiment on question answering demonstrates that the proposed model can outperform an embedding-based QA model as well as a neural dialogue model trained on the same data.",Neural Generative Question Answering,"This paper presents an end-to-end neural network model, named Neural Generative Question Answering (GENQA), that can generate answers to simple factoid questions, based on the facts in a knowledge-base. More specifically, the model is built on the encoder-decoder framework for sequence-to-sequence learning, while equipped with the ability to enquire the knowledge-base, and is trained on a corpus of question-answer pairs, with their associated triples in the knowledge-base. Empirical study shows the proposed model can effectively deal with the variations of questions and answers, and generate right and natural answers by referring to the facts in the knowledge-base. The experiment on question answering demonstrates that the proposed model can outperform an embedding-based QA model as well as a neural dialogue model trained on the same data.",Neural Generative Question Answering,"This paper presents an end-to-end neural network model, named Neural Generative Question Answering (GENQA), that can generate answers to simple factoid questions, based on the facts in a knowledge-base. More specifically, the model is built on the encoder-decoder framework for sequence-to-sequence learning, while equipped with the ability to enquire the knowledge-base, and is trained on a corpus of question-answer pairs, with their associated triples in the knowledge-base. Empirical study shows the proposed model can effectively deal with the variations of questions and answers, and generate right and natural answers by referring to the facts in the knowledge-base. The experiment on question answering demonstrates that the proposed model can outperform an embedding-based QA model as well as a neural dialogue model trained on the same data.",,"Neural Generative Question Answering. This paper presents an end-to-end neural network model, named Neural Generative Question Answering (GENQA), that can generate answers to simple factoid questions, based on the facts in a knowledge-base. More specifically, the model is built on the encoder-decoder framework for sequence-to-sequence learning, while equipped with the ability to enquire the knowledge-base, and is trained on a corpus of question-answer pairs, with their associated triples in the knowledge-base. Empirical study shows the proposed model can effectively deal with the variations of questions and answers, and generate right and natural answers by referring to the facts in the knowledge-base. 
The experiment on question answering demonstrates that the proposed model can outperform an embedding-based QA model as well as a neural dialogue model trained on the same data.",2016
stahlberg-etal-2016-edit,https://aclanthology.org/W16-2324,0,,,,,,,"The Edit Distance Transducer in Action: The University of Cambridge English-German System at WMT16. This paper presents the University of Cambridge submission to WMT16. Motivated by the complementary nature of syntactical machine translation and neural machine translation (NMT), we exploit the synergies of Hiero and NMT in different combination schemes. Starting out with a simple neural lattice rescoring approach, we show that the Hiero lattices are often too narrow for NMT ensembles. Therefore, instead of a hard restriction of the NMT search space to the lattice, we propose to loosely couple NMT and Hiero by composition with a modified version of the edit distance transducer. The loose combination outperforms lattice rescoring, especially when using multiple NMT systems in an ensemble.",The Edit Distance Transducer in Action: The {U}niversity of {C}ambridge {E}nglish-{G}erman System at {WMT}16,"This paper presents the University of Cambridge submission to WMT16. Motivated by the complementary nature of syntactical machine translation and neural machine translation (NMT), we exploit the synergies of Hiero and NMT in different combination schemes. Starting out with a simple neural lattice rescoring approach, we show that the Hiero lattices are often too narrow for NMT ensembles. Therefore, instead of a hard restriction of the NMT search space to the lattice, we propose to loosely couple NMT and Hiero by composition with a modified version of the edit distance transducer. The loose combination outperforms lattice rescoring, especially when using multiple NMT systems in an ensemble.",The Edit Distance Transducer in Action: The University of Cambridge English-German System at WMT16,"This paper presents the University of Cambridge submission to WMT16. Motivated by the complementary nature of syntactical machine translation and neural machine translation (NMT), we exploit the synergies of Hiero and NMT in different combination schemes. Starting out with a simple neural lattice rescoring approach, we show that the Hiero lattices are often too narrow for NMT ensembles. Therefore, instead of a hard restriction of the NMT search space to the lattice, we propose to loosely couple NMT and Hiero by composition with a modified version of the edit distance transducer. The loose combination outperforms lattice rescoring, especially when using multiple NMT systems in an ensemble.",This work was supported by the U.K. Engineering and Physical Sciences Research Council (EPSRC grant EP/L027623/1).,"The Edit Distance Transducer in Action: The University of Cambridge English-German System at WMT16. This paper presents the University of Cambridge submission to WMT16. Motivated by the complementary nature of syntactical machine translation and neural machine translation (NMT), we exploit the synergies of Hiero and NMT in different combination schemes. Starting out with a simple neural lattice rescoring approach, we show that the Hiero lattices are often too narrow for NMT ensembles. Therefore, instead of a hard restriction of the NMT search space to the lattice, we propose to loosely couple NMT and Hiero by composition with a modified version of the edit distance transducer. The loose combination outperforms lattice rescoring, especially when using multiple NMT systems in an ensemble.",2016
applegate-1960-syntax,https://aclanthology.org/1960.earlymt-nsmt.33,0,,,,,,,"Syntax of the German Noun Phrase. It is generally agreed that a successful mechanical translation routine must be based on an accurate grammatical description of both the source and target languages.
Furthermore, the description should be presented in a form that can easily be adapted for computer programming.",Syntax of the {G}erman Noun Phrase,"It is generally agreed that a successful mechanical translation routine must be based on an accurate grammatical description of both the source and target languages.
Furthermore, the description should be presented in a form that can easily be adapted for computer programming.",Syntax of the German Noun Phrase,"It is generally agreed that a successful mechanical translation routine must be based on an accurate grammatical description of both the source and target languages.
Furthermore, the description should be presented in a form that can easily be adapted for computer programming.",,"Syntax of the German Noun Phrase. It is generally agreed that a successful mechanical translation routine must be based on an accurate grammatical description of both the source and target languages.
Furthermore, the description should be presented in a form that can easily be adapted for computer programming.",1960
gautam-bhattacharyya-2014-layered,https://aclanthology.org/W14-3350,0,,,,,,,"LAYERED: Metric for Machine Translation Evaluation. This paper describes the LAYERED metric which is used for the shared WMT'14 metrics task. Various metrics exist for MT evaluation: BLEU (Papineni, 2002), METEOR (Alon Lavie, 2007), TER (Snover, 2006) etc., but are found inadequate in quite a few language settings like, for example, in case of free word order languages. In this paper, we propose an MT evaluation scheme that is based on the NLP layers: lexical, syntactic and semantic. We contend that higher layer metrics are after all needed. Results are presented on the corpora of ACL-WMT, 2013 and 2014. We end with a metric which is composed of weighted metrics at individual layers, which correlates very well with human judgment.",{LAYERED}: Metric for Machine Translation Evaluation,"This paper describes the LAYERED metric which is used for the shared WMT'14 metrics task. Various metrics exist for MT evaluation: BLEU (Papineni, 2002), METEOR (Alon Lavie, 2007), TER (Snover, 2006) etc., but are found inadequate in quite a few language settings like, for example, in case of free word order languages. In this paper, we propose an MT evaluation scheme that is based on the NLP layers: lexical, syntactic and semantic. We contend that higher layer metrics are after all needed. Results are presented on the corpora of ACL-WMT, 2013 and 2014. We end with a metric which is composed of weighted metrics at individual layers, which correlates very well with human judgment.",LAYERED: Metric for Machine Translation Evaluation,"This paper describes the LAYERED metric which is used for the shared WMT'14 metrics task. Various metrics exist for MT evaluation: BLEU (Papineni, 2002), METEOR (Alon Lavie, 2007), TER (Snover, 2006) etc., but are found inadequate in quite a few language settings like, for example, in case of free word order languages. In this paper, we propose an MT evaluation scheme that is based on the NLP layers: lexical, syntactic and semantic. We contend that higher layer metrics are after all needed. Results are presented on the corpora of ACL-WMT, 2013 and 2014. We end with a metric which is composed of weighted metrics at individual layers, which correlates very well with human judgment.",,"LAYERED: Metric for Machine Translation Evaluation. This paper describes the LAYERED metric which is used for the shared WMT'14 metrics task. Various metrics exist for MT evaluation: BLEU (Papineni, 2002), METEOR (Alon Lavie, 2007), TER (Snover, 2006) etc., but are found inadequate in quite a few language settings like, for example, in case of free word order languages. In this paper, we propose an MT evaluation scheme that is based on the NLP layers: lexical, syntactic and semantic. We contend that higher layer metrics are after all needed. Results are presented on the corpora of ACL-WMT, 2013 and 2014. We end with a metric which is composed of weighted metrics at individual layers, which correlates very well with human judgment.",2014
arase-etal-2020-annotation,https://aclanthology.org/2020.lrec-1.836,1,,,,health,,,"Annotation of Adverse Drug Reactions in Patients' Weblogs. Adverse drug reactions are a severe problem that significantly degrade quality of life, or even threaten the life of patients. Patientgenerated texts available on the web have been gaining attention as a promising source of information in this regard. While previous studies annotated such patient-generated content, they only reported on limited information, such as whether a text described an adverse drug reaction or not. Further, they only annotated short texts of a few sentences crawled from online forums and social networking services. The dataset we present in this paper is unique for the richness of annotated information, including detailed descriptions of drug reactions with full context. We crawled patient's weblog articles shared on an online patient-networking platform and annotated the effects of drugs therein reported. We identified spans describing drug reactions and assigned labels for related drug names, standard codes for the symptoms of the reactions, and types of effects. As a first dataset, we annotated 677 drug reactions with these detailed labels based on 169 weblog articles by Japanese lung cancer patients. Our annotation dataset is made publicly available for further research on the detection of adverse drug reactions and more broadly, on patient-generated text processing.",Annotation of Adverse Drug Reactions in Patients{'} Weblogs,"Adverse drug reactions are a severe problem that significantly degrade quality of life, or even threaten the life of patients. Patientgenerated texts available on the web have been gaining attention as a promising source of information in this regard. While previous studies annotated such patient-generated content, they only reported on limited information, such as whether a text described an adverse drug reaction or not. Further, they only annotated short texts of a few sentences crawled from online forums and social networking services. The dataset we present in this paper is unique for the richness of annotated information, including detailed descriptions of drug reactions with full context. We crawled patient's weblog articles shared on an online patient-networking platform and annotated the effects of drugs therein reported. We identified spans describing drug reactions and assigned labels for related drug names, standard codes for the symptoms of the reactions, and types of effects. As a first dataset, we annotated 677 drug reactions with these detailed labels based on 169 weblog articles by Japanese lung cancer patients. Our annotation dataset is made publicly available for further research on the detection of adverse drug reactions and more broadly, on patient-generated text processing.",Annotation of Adverse Drug Reactions in Patients' Weblogs,"Adverse drug reactions are a severe problem that significantly degrade quality of life, or even threaten the life of patients. Patientgenerated texts available on the web have been gaining attention as a promising source of information in this regard. While previous studies annotated such patient-generated content, they only reported on limited information, such as whether a text described an adverse drug reaction or not. Further, they only annotated short texts of a few sentences crawled from online forums and social networking services. 
The dataset we present in this paper is unique for the richness of annotated information, including detailed descriptions of drug reactions with full context. We crawled patient's weblog articles shared on an online patient-networking platform and annotated the effects of drugs therein reported. We identified spans describing drug reactions and assigned labels for related drug names, standard codes for the symptoms of the reactions, and types of effects. As a first dataset, we annotated 677 drug reactions with these detailed labels based on 169 weblog articles by Japanese lung cancer patients. Our annotation dataset is made publicly available for further research on the detection of adverse drug reactions and more broadly, on patient-generated text processing.","We thank Kazuki Ashihara for his contribution to annotation as well as valuable discussions with us. This work was supported by JST AIP-PRISM Grant Number JP-MJCR18Y1, Japan.","Annotation of Adverse Drug Reactions in Patients' Weblogs. Adverse drug reactions are a severe problem that significantly degrade quality of life, or even threaten the life of patients. Patientgenerated texts available on the web have been gaining attention as a promising source of information in this regard. While previous studies annotated such patient-generated content, they only reported on limited information, such as whether a text described an adverse drug reaction or not. Further, they only annotated short texts of a few sentences crawled from online forums and social networking services. The dataset we present in this paper is unique for the richness of annotated information, including detailed descriptions of drug reactions with full context. We crawled patient's weblog articles shared on an online patient-networking platform and annotated the effects of drugs therein reported. We identified spans describing drug reactions and assigned labels for related drug names, standard codes for the symptoms of the reactions, and types of effects. As a first dataset, we annotated 677 drug reactions with these detailed labels based on 169 weblog articles by Japanese lung cancer patients. Our annotation dataset is made publicly available for further research on the detection of adverse drug reactions and more broadly, on patient-generated text processing.",2020
kopotev-etal-2013-automatic,https://aclanthology.org/W13-1011,0,,,,,,,"Automatic Detection of Stable Grammatical Features in N-Grams. This paper presents an algorithm that allows the user to issue a query pattern, collects multi-word expressions (MWEs) that match the pattern, and then ranks them in a uniform fashion. This is achieved by quantifying the strength of all possible relations between the tokens and their features in the MWEs. The algorithm collects the frequency of morphological categories of the given pattern on a unified scale in order to choose the stable categories and their values. For every part of speech, and for all of its categories, we calculate a normalized Kullback-Leibler divergence between the category's distribution in the pattern and its distribution in the corpus overall. Categories with the largest divergence are considered to be the most significant. The particular values of the categories are sorted according to a frequency ratio. As a result, we obtain morphosyntactic profiles of a given pattern, which includes the most stable category of the pattern, and their values.",Automatic Detection of Stable Grammatical Features in N-Grams,"This paper presents an algorithm that allows the user to issue a query pattern, collects multi-word expressions (MWEs) that match the pattern, and then ranks them in a uniform fashion. This is achieved by quantifying the strength of all possible relations between the tokens and their features in the MWEs. The algorithm collects the frequency of morphological categories of the given pattern on a unified scale in order to choose the stable categories and their values. For every part of speech, and for all of its categories, we calculate a normalized Kullback-Leibler divergence between the category's distribution in the pattern and its distribution in the corpus overall. Categories with the largest divergence are considered to be the most significant. The particular values of the categories are sorted according to a frequency ratio. As a result, we obtain morphosyntactic profiles of a given pattern, which includes the most stable category of the pattern, and their values.",Automatic Detection of Stable Grammatical Features in N-Grams,"This paper presents an algorithm that allows the user to issue a query pattern, collects multi-word expressions (MWEs) that match the pattern, and then ranks them in a uniform fashion. This is achieved by quantifying the strength of all possible relations between the tokens and their features in the MWEs. The algorithm collects the frequency of morphological categories of the given pattern on a unified scale in order to choose the stable categories and their values. For every part of speech, and for all of its categories, we calculate a normalized Kullback-Leibler divergence between the category's distribution in the pattern and its distribution in the corpus overall. Categories with the largest divergence are considered to be the most significant. The particular values of the categories are sorted according to a frequency ratio. As a result, we obtain morphosyntactic profiles of a given pattern, which includes the most stable category of the pattern, and their values.","We are very grateful to the Russian National Corpus developers, especially E. Rakhilina and O. Lyashevskaya, for providing us with the data.","Automatic Detection of Stable Grammatical Features in N-Grams. 
This paper presents an algorithm that allows the user to issue a query pattern, collects multi-word expressions (MWEs) that match the pattern, and then ranks them in a uniform fashion. This is achieved by quantifying the strength of all possible relations between the tokens and their features in the MWEs. The algorithm collects the frequency of morphological categories of the given pattern on a unified scale in order to choose the stable categories and their values. For every part of speech, and for all of its categories, we calculate a normalized Kullback-Leibler divergence between the category's distribution in the pattern and its distribution in the corpus overall. Categories with the largest divergence are considered to be the most significant. The particular values of the categories are sorted according to a frequency ratio. As a result, we obtain morphosyntactic profiles of a given pattern, which includes the most stable category of the pattern, and their values.",2013
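A rough sketch of the scoring idea summarized in the entry above: for one morphological category, compare its value distribution inside a matched pattern with its distribution in the corpus overall using a normalized Kullback-Leibler divergence. The Python below is only an approximation under invented category values and counts, not the authors' implementation.

import math

def kl_divergence(p, q):
    # KL(p || q) over a shared set of category values; 0 * log 0 is treated as 0.
    return sum(p[v] * math.log(p[v] / q[v]) for v in p if p[v] > 0)

def normalize(counts):
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Invented value counts for the category "case" inside the matched pattern ...
pattern_counts = {"nom": 120, "gen": 15, "acc": 10, "dat": 5}
# ... and in the corpus overall (all values attested, so q is never zero).
corpus_counts = {"nom": 4000, "gen": 3500, "acc": 3000, "dat": 1500}

p, q = normalize(pattern_counts), normalize(corpus_counts)
# Dividing by log of the number of values keeps scores comparable across
# categories with different numbers of possible values.
stability = kl_divergence(p, q) / math.log(len(corpus_counts))
print(f"stability score for 'case': {stability:.3f}")

Categories with the largest such scores would then be treated as the most stable for the pattern, mirroring the selection step described in the abstract.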
elsner-charniak-2008-coreference,https://aclanthology.org/P08-2011,0,,,,,,,"Coreference-inspired Coherence Modeling. Research on coreference resolution and summarization has modeled the way entities are realized as concrete phrases in discourse. In particular there exist models of the noun phrase syntax used for discourse-new versus discourse-old referents, and models describing the likely distance between a pronoun and its antecedent. However, models of discourse coherence, as applied to information ordering tasks, have ignored these kinds of information. We apply a discourse-new classifier and pronoun coreference algorithm to the information ordering task, and show significant improvements in performance over the entity grid, a popular model of local coherence.",Coreference-inspired Coherence Modeling,"Research on coreference resolution and summarization has modeled the way entities are realized as concrete phrases in discourse. In particular there exist models of the noun phrase syntax used for discourse-new versus discourse-old referents, and models describing the likely distance between a pronoun and its antecedent. However, models of discourse coherence, as applied to information ordering tasks, have ignored these kinds of information. We apply a discourse-new classifier and pronoun coreference algorithm to the information ordering task, and show significant improvements in performance over the entity grid, a popular model of local coherence.",Coreference-inspired Coherence Modeling,"Research on coreference resolution and summarization has modeled the way entities are realized as concrete phrases in discourse. In particular there exist models of the noun phrase syntax used for discourse-new versus discourse-old referents, and models describing the likely distance between a pronoun and its antecedent. However, models of discourse coherence, as applied to information ordering tasks, have ignored these kinds of information. We apply a discourse-new classifier and pronoun coreference algorithm to the information ordering task, and show significant improvements in performance over the entity grid, a popular model of local coherence.","Chen and Barzilay, reviewers, DARPA, et al. ","Coreference-inspired Coherence Modeling. Research on coreference resolution and summarization has modeled the way entities are realized as concrete phrases in discourse. In particular there exist models of the noun phrase syntax used for discourse-new versus discourse-old referents, and models describing the likely distance between a pronoun and its antecedent. However, models of discourse coherence, as applied to information ordering tasks, have ignored these kinds of information. We apply a discourse-new classifier and pronoun coreference algorithm to the information ordering task, and show significant improvements in performance over the entity grid, a popular model of local coherence.",2008
gahl-1998-automatic,https://aclanthology.org/C98-1068,0,,,,,,,"Automatic extraction of subcorpora based on subcategorization frames from a part-of-speech tagged corpus. This paper presents a method for extracting subcorpora documenting different subcategorization frames for verbs, nouns, and adjectives in the 100 mio. word British National Corpus. The extraction tool consists of a set of batch files for use with the Corpus Query Processor (CQP), which is part of the IMS corpus workbench (cf. Christ 1994a,b). A macroprocessor has been developed that allows the user to specify in a simple input file which subcorpora are to be created for a given lemma. The resulting subcorpora can be used (1) to provide evidence for the subcategorization properties of a given lemma, and to facilitate the selection of corpus lines for lexicographic research, and (2) to determine the frequencies of different syntactic contexts of each lemma.",Automatic extraction of subcorpora based on subcategorization frames from a part-of-speech tagged corpus,"This paper presents a method for extracting subcorpora documenting different subcategorization frames for verbs, nouns, and adjectives in the 100 mio. word British National Corpus. The extraction tool consists of a set of batch files for use with the Corpus Query Processor (CQP), which is part of the IMS corpus workbench (cf. Christ 1994a,b). A macroprocessor has been developed that allows the user to specify in a simple input file which subcorpora are to be created for a given lemma. The resulting subcorpora can be used (1) to provide evidence for the subcategorization properties of a given lemma, and to facilitate the selection of corpus lines for lexicographic research, and (2) to determine the frequencies of different syntactic contexts of each lemma.",Automatic extraction of subcorpora based on subcategorization frames from a part-of-speech tagged corpus,"This paper presents a method for extracting subcorpora documenting different subcategorization frames for verbs, nouns, and adjectives in the 100 mio. word British National Corpus. The extraction tool consists of a set of batch files for use with the Corpus Query Processor (CQP), which is part of the IMS corpus workbench (cf. Christ 1994a,b). A macroprocessor has been developed that allows the user to specify in a simple input file which subcorpora are to be created for a given lemma. The resulting subcorpora can be used (1) to provide evidence for the subcategorization properties of a given lemma, and to facilitate the selection of corpus lines for lexicographic research, and (2) to determine the frequencies of different syntactic contexts of each lemma.","This work grew out of an extremely enjoyable collaborative effort with Dr. Ulrich Heid of IMS Stuttgart and Dan Jurafsky of the University of Boulder, Colorado. I would like to thank Doug Roland and especially the untiring Collin Baker for their work on the macroprocessor. I would also like to thank the members of the FrameNet project for their comments and suggestions. I thank Judith Eckle-Kohler of IMS-Stuttgart, JB Lowe of ICSI-Berkeley and Dan Jurafsky for comments on an earlier draft of this paper.","Automatic extraction of subcorpora based on subcategorization frames from a part-of-speech tagged corpus. This paper presents a method for extracting subcorpora documenting different subcategorization frames for verbs, nouns, and adjectives in the 100 mio. word British National Corpus. 
The extraction tool consists of a set of batch files for use with the Corpus Query Processor (CQP), which is part of the IMS corpus workbench (cf. Christ 1994a,b). A macroprocessor has been developed that allows the user to specify in a simple input file which subcorpora are to be created for a given lemma. The resulting subcorpora can be used (1) to provide evidence for the subcategorization properties of a given lemma, and to facilitate the selection of corpus lines for lexicographic research, and (2) to determine the frequencies of different syntactic contexts of each lemma.",1998
budin-etal-1999-integrating,https://aclanthology.org/1999.tc-1.15,0,,,,,,,"Integrating Translation Technologies Using SALT. The acronym SALT stands for Standards-based Access to multilingual Lexicons and Terminologies. The objective of the SALT project is to develop and promote a range of tools that will be made available on the World Wide Web to various user groups, in particular translators, terminology managers, localizers, technical communicators, but also tools developers, database managers, and language engineers. The resulting toolkit will facilitate access and re-use of heterogeneous multilingual resources derived from both NLP lexicons and human-oriented terminology databases.",Integrating Translation Technologies Using {SALT},"The acronym SALT stands for Standards-based Access to multilingual Lexicons and Terminologies. The objective of the SALT project is to develop and promote a range of tools that will be made available on the World Wide Web to various user groups, in particular translators, terminology managers, localizers, technical communicators, but also tools developers, database managers, and language engineers. The resulting toolkit will facilitate access and re-use of heterogeneous multilingual resources derived from both NLP lexicons and human-oriented terminology databases.",Integrating Translation Technologies Using SALT,"The acronym SALT stands for Standards-based Access to multilingual Lexicons and Terminologies. The objective of the SALT project is to develop and promote a range of tools that will be made available on the World Wide Web to various user groups, in particular translators, terminology managers, localizers, technical communicators, but also tools developers, database managers, and language engineers. The resulting toolkit will facilitate access and re-use of heterogeneous multilingual resources derived from both NLP lexicons and human-oriented terminology databases.",,"Integrating Translation Technologies Using SALT. The acronym SALT stands for Standards-based Access to multilingual Lexicons and Terminologies. The objective of the SALT project is to develop and promote a range of tools that will be made available on the World Wide Web to various user groups, in particular translators, terminology managers, localizers, technical communicators, but also tools developers, database managers, and language engineers. The resulting toolkit will facilitate access and re-use of heterogeneous multilingual resources derived from both NLP lexicons and human-oriented terminology databases.",1999
chen-etal-2020-mpdd,https://aclanthology.org/2020.lrec-1.76,0,,,,,,,"MPDD: A Multi-Party Dialogue Dataset for Analysis of Emotions and Interpersonal Relationships. A dialogue dataset is an indispensable resource for building a dialogue system. Additional information like emotions and interpersonal relationships labeled on conversations enables the system to capture the emotion flow of the participants in the dialogue. However, there is no publicly available Chinese dialogue dataset with emotion and relation labels. In this paper, we collect the conversions from TV series scripts, and annotate emotion and interpersonal relationship labels on each utterance. This dataset contains 25,548 utterances from 4,142 dialogues. We also set up some experiments to observe the effects of the responded utterance on the current utterance, and the correlation between emotion and relation types in emotion and relation classification tasks.",{MPDD}: A Multi-Party Dialogue Dataset for Analysis of Emotions and Interpersonal Relationships,"A dialogue dataset is an indispensable resource for building a dialogue system. Additional information like emotions and interpersonal relationships labeled on conversations enables the system to capture the emotion flow of the participants in the dialogue. However, there is no publicly available Chinese dialogue dataset with emotion and relation labels. In this paper, we collect the conversions from TV series scripts, and annotate emotion and interpersonal relationship labels on each utterance. This dataset contains 25,548 utterances from 4,142 dialogues. We also set up some experiments to observe the effects of the responded utterance on the current utterance, and the correlation between emotion and relation types in emotion and relation classification tasks.",MPDD: A Multi-Party Dialogue Dataset for Analysis of Emotions and Interpersonal Relationships,"A dialogue dataset is an indispensable resource for building a dialogue system. Additional information like emotions and interpersonal relationships labeled on conversations enables the system to capture the emotion flow of the participants in the dialogue. However, there is no publicly available Chinese dialogue dataset with emotion and relation labels. In this paper, we collect the conversions from TV series scripts, and annotate emotion and interpersonal relationship labels on each utterance. This dataset contains 25,548 utterances from 4,142 dialogues. We also set up some experiments to observe the effects of the responded utterance on the current utterance, and the correlation between emotion and relation types in emotion and relation classification tasks.","This research was partially supported by the Ministry of Science and Technology, Taiwan, under grants MOST-106-2923-E-002-012-MY3, MOST-108-2634-F-002-008-, MOST-108-2218-E-009-051-, and MOST-109-2634-F-002-034 and by Academia Sinica, Taiwan, under grant AS-TP-107-M05.","MPDD: A Multi-Party Dialogue Dataset for Analysis of Emotions and Interpersonal Relationships. A dialogue dataset is an indispensable resource for building a dialogue system. Additional information like emotions and interpersonal relationships labeled on conversations enables the system to capture the emotion flow of the participants in the dialogue. However, there is no publicly available Chinese dialogue dataset with emotion and relation labels. In this paper, we collect the conversions from TV series scripts, and annotate emotion and interpersonal relationship labels on each utterance. 
This dataset contains 25,548 utterances from 4,142 dialogues. We also set up some experiments to observe the effects of the responded utterance on the current utterance, and the correlation between emotion and relation types in emotion and relation classification tasks.",2020
jelinek-lafferty-1991-computation,https://aclanthology.org/J91-3004,0,,,,,,,"Computation of the Probability of Initial Substring Generation by Stochastic Context-Free Grammars. Speech recognition language models are based on probabilities P(W_{k+1} = v | w_1, w_2, ..., w_k) that the next word W_{k+1} will be any particular word v of the vocabulary, given that the word sequence w_1, w_2, ..., w_k is hypothesized to have been uttered in the past. If probabilistic context-free grammars are to be used as the basis of the language model, it will be necessary to compute the probability that successive application of the grammar rewrite rules (beginning with the sentence start symbol S) produces a word string whose initial substring is an arbitrary sequence w_1, w_2, ..., w_{k+1}. In this paper we describe a new algorithm that achieves the required computation in at most a constant times k^3 steps.",Computation of the Probability of Initial Substring Generation by Stochastic Context-Free Grammars,"Speech recognition language models are based on probabilities P(W_{k+1} = v | w_1, w_2, ..., w_k) that the next word W_{k+1} will be any particular word v of the vocabulary, given that the word sequence w_1, w_2, ..., w_k is hypothesized to have been uttered in the past. If probabilistic context-free grammars are to be used as the basis of the language model, it will be necessary to compute the probability that successive application of the grammar rewrite rules (beginning with the sentence start symbol S) produces a word string whose initial substring is an arbitrary sequence w_1, w_2, ..., w_{k+1}. In this paper we describe a new algorithm that achieves the required computation in at most a constant times k^3 steps.",Computation of the Probability of Initial Substring Generation by Stochastic Context-Free Grammars,"Speech recognition language models are based on probabilities P(W_{k+1} = v | w_1, w_2, ..., w_k) that the next word W_{k+1} will be any particular word v of the vocabulary, given that the word sequence w_1, w_2, ..., w_k is hypothesized to have been uttered in the past. If probabilistic context-free grammars are to be used as the basis of the language model, it will be necessary to compute the probability that successive application of the grammar rewrite rules (beginning with the sentence start symbol S) produces a word string whose initial substring is an arbitrary sequence w_1, w_2, ..., w_{k+1}. In this paper we describe a new algorithm that achieves the required computation in at most a constant times k^3 steps.",,"Computation of the Probability of Initial Substring Generation by Stochastic Context-Free Grammars. Speech recognition language models are based on probabilities P(W_{k+1} = v | w_1, w_2, ..., w_k) that the next word W_{k+1} will be any particular word v of the vocabulary, given that the word sequence w_1, w_2, ..., w_k is hypothesized to have been uttered in the past. If probabilistic context-free grammars are to be used as the basis of the language model, it will be necessary to compute the probability that successive application of the grammar rewrite rules (beginning with the sentence start symbol S) produces a word string whose initial substring is an arbitrary sequence w_1, w_2, ..., w_{k+1}. In this paper we describe a new algorithm that achieves the required computation in at most a constant times k^3 steps.",1991
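In the notation of the entry above, the quantity being computed is the prefix (initial-substring) probability under a stochastic context-free grammar; a standard way to write it, given here only as a hedged illustration of the abstract's description, is:

P_{\mathrm{prefix}}(w_1 \dots w_k) = \sum_{u \in V^{*}} P\left(S \Rightarrow^{*} w_1 \dots w_k\, u\right),
\qquad
P(W_{k+1} = v \mid w_1, \dots, w_k) = \frac{P_{\mathrm{prefix}}(w_1 \dots w_k\, v)}{P_{\mathrm{prefix}}(w_1 \dots w_k)},

where V is the terminal vocabulary and S the sentence start symbol; the paper's contribution is an algorithm that evaluates these sums in at most a constant times k^3 steps.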
mihalcea-tarau-2004-textrank,https://aclanthology.org/W04-3252,0,,,,,,,"TextRank: Bringing Order into Text. In this paper, we introduce TextRank-a graph-based ranking model for text processing, and show how this model can be successfully used in natural language applications. In particular, we propose two innovative unsupervised methods for keyword and sentence extraction, and show that the results obtained compare favorably with previously published results on established benchmarks.",{T}ext{R}ank: Bringing Order into Text,"In this paper, we introduce TextRank-a graph-based ranking model for text processing, and show how this model can be successfully used in natural language applications. In particular, we propose two innovative unsupervised methods for keyword and sentence extraction, and show that the results obtained compare favorably with previously published results on established benchmarks.",TextRank: Bringing Order into Text,"In this paper, we introduce TextRank-a graph-based ranking model for text processing, and show how this model can be successfully used in natural language applications. In particular, we propose two innovative unsupervised methods for keyword and sentence extraction, and show that the results obtained compare favorably with previously published results on established benchmarks.",,"TextRank: Bringing Order into Text. In this paper, we introduce TextRank-a graph-based ranking model for text processing, and show how this model can be successfully used in natural language applications. In particular, we propose two innovative unsupervised methods for keyword and sentence extraction, and show that the results obtained compare favorably with previously published results on established benchmarks.",2004
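The graph-based ranking described in the entry above is close in spirit to running PageRank over a word co-occurrence graph. The sketch below (Python with networkx, a toy sentence, a window of 2, and no part-of-speech filtering or post-processing) is a simplified approximation, not the authors' exact keyword-extraction procedure.

import networkx as nx

# Toy input; a real system would tokenize and filter candidate words first.
tokens = ("compatibility of systems of linear constraints over the set "
          "of natural numbers").split()
window = 2  # co-occurrence window size, an illustrative choice

graph = nx.Graph()
for i, word in enumerate(tokens):
    for j in range(i + 1, min(i + 1 + window, len(tokens))):
        if word != tokens[j]:
            graph.add_edge(word, tokens[j])

# PageRank scores over the co-occurrence graph serve as importance scores.
scores = nx.pagerank(graph, alpha=0.85)
for word, score in sorted(scores.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{word:15s} {score:.3f}")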
white-1993-delimitedness,https://aclanthology.org/E93-1048,0,,,,,,,"Delimitedness and Trajectory-of-Motion Events. The first part of the paper develops a novel, sortally-based approach to the problem of aspectual composition. The account is argued to be superior on both empirical and computational grounds to previous semantic approaches relying on referential homogeneity tests. While the account is restricted to manner-of-motion verbs, it does cover their interaction with mass terms, amount phrases, locative PPs, and distance, frequency, and temporal modifiers. The second part of the paper describes an implemented system based on the theoretical treatment which determines whether a specified sequence of events is or is not possible under varying situationally supplied constraints, given certain restrictive and simplifying assumptions. Briefly, the system extracts a set of constraint equations from the derived logical forms and solves them according to a best-value metric. Three particular limitations of the system and possible ways of addressing them are discussed in the conclusion.",Delimitedness and Trajectory-of-Motion Events,"The first part of the paper develops a novel, sortally-based approach to the problem of aspectual composition. The account is argued to be superior on both empirical and computational grounds to previous semantic approaches relying on referential homogeneity tests. While the account is restricted to manner-of-motion verbs, it does cover their interaction with mass terms, amount phrases, locative PPs, and distance, frequency, and temporal modifiers. The second part of the paper describes an implemented system based on the theoretical treatment which determines whether a specified sequence of events is or is not possible under varying situationally supplied constraints, given certain restrictive and simplifying assumptions. Briefly, the system extracts a set of constraint equations from the derived logical forms and solves them according to a best-value metric. Three particular limitations of the system and possible ways of addressing them are discussed in the conclusion.",Delimitedness and Trajectory-of-Motion Events,"The first part of the paper develops a novel, sortally-based approach to the problem of aspectual composition. The account is argued to be superior on both empirical and computational grounds to previous semantic approaches relying on referential homogeneity tests. While the account is restricted to manner-of-motion verbs, it does cover their interaction with mass terms, amount phrases, locative PPs, and distance, frequency, and temporal modifiers. The second part of the paper describes an implemented system based on the theoretical treatment which determines whether a specified sequence of events is or is not possible under varying situationally supplied constraints, given certain restrictive and simplifying assumptions. Briefly, the system extracts a set of constraint equations from the derived logical forms and solves them according to a best-value metric. Three particular limitations of the system and possible ways of addressing them are discussed in the conclusion.",,"Delimitedness and Trajectory-of-Motion Events. The first part of the paper develops a novel, sortally-based approach to the problem of aspectual composition. The account is argued to be superior on both empirical and computational grounds to previous semantic approaches relying on referential homogeneity tests. 
While the account is restricted to manner-of-motion verbs, it does cover their interaction with mass terms, amount phrases, locative PPs, and distance, frequency, and temporal modifiers. The second part of the paper describes an implemented system based on the theoretical treatment which determines whether a specified sequence of events is or is not possible under varying situationally supplied constraints, given certain restrictive and simplifying assumptions. Briefly, the system extracts a set of constraint equations from the derived logical forms and solves them according to a best-value metric. Three particular limitations of the system and possible ways of addressing them are discussed in the conclusion.",1993
perkoff-etal-2021-orthographic,https://aclanthology.org/2021.sigmorphon-1.10,0,,,,,,,"Orthographic vs. Semantic Representations for Unsupervised Morphological Paradigm Clustering. This paper presents two different systems for unsupervised clustering of morphological paradigms, in the context of the SIGMORPHON 2021 Shared Task 2. The goal of this task is to correctly cluster words in a given language by their inflectional paradigm, without any previous knowledge of the language and without supervision from labeled data of any sort. The words in a single morphological paradigm are different inflectional variants of an underlying lemma, meaning that the words share a common core meaning. They also usually show a high degree of orthographical similarity. Following these intuitions, we investigate KMeans clustering using two different types of word representations: one focusing on orthographical similarity and the other focusing on semantic similarity. Additionally, we discuss the merits of randomly initialized centroids versus pre-defined centroids for clustering. Pre-defined centroids are identified based on either a standard longest common substring algorithm or a connected graph method built off of longest common substring. For all development languages, the character-based embeddings perform similarly to the baseline, and the semantic embeddings perform well below the baseline. Analysis of the systems' errors suggests that clustering based on orthographic representations is suitable for a wide range of morphological mechanisms, particularly as part of a larger system.",Orthographic vs. Semantic Representations for Unsupervised Morphological Paradigm Clustering,"This paper presents two different systems for unsupervised clustering of morphological paradigms, in the context of the SIGMORPHON 2021 Shared Task 2. The goal of this task is to correctly cluster words in a given language by their inflectional paradigm, without any previous knowledge of the language and without supervision from labeled data of any sort. The words in a single morphological paradigm are different inflectional variants of an underlying lemma, meaning that the words share a common core meaning. They also usually show a high degree of orthographical similarity. Following these intuitions, we investigate KMeans clustering using two different types of word representations: one focusing on orthographical similarity and the other focusing on semantic similarity. Additionally, we discuss the merits of randomly initialized centroids versus pre-defined centroids for clustering. Pre-defined centroids are identified based on either a standard longest common substring algorithm or a connected graph method built off of longest common substring. For all development languages, the character-based embeddings perform similarly to the baseline, and the semantic embeddings perform well below the baseline. Analysis of the systems' errors suggests that clustering based on orthographic representations is suitable for a wide range of morphological mechanisms, particularly as part of a larger system.",Orthographic vs. Semantic Representations for Unsupervised Morphological Paradigm Clustering,"This paper presents two different systems for unsupervised clustering of morphological paradigms, in the context of the SIGMORPHON 2021 Shared Task 2. 
The goal of this task is to correctly cluster words in a given language by their inflectional paradigm, without any previous knowledge of the language and without supervision from labeled data of any sort. The words in a single morphological paradigm are different inflectional variants of an underlying lemma, meaning that the words share a common core meaning. They also usually show a high degree of orthographical similarity. Following these intuitions, we investigate KMeans clustering using two different types of word representations: one focusing on orthographical similarity and the other focusing on semantic similarity. Additionally, we discuss the merits of randomly initialized centroids versus pre-defined centroids for clustering. Pre-defined centroids are identified based on either a standard longest common substring algorithm or a connected graph method built off of longest common substring. For all development languages, the character-based embeddings perform similarly to the baseline, and the semantic embeddings perform well below the baseline. Analysis of the systems' errors suggests that clustering based on orthographic representations is suitable for a wide range of morphological mechanisms, particularly as part of a larger system.",,"Orthographic vs. Semantic Representations for Unsupervised Morphological Paradigm Clustering. This paper presents two different systems for unsupervised clustering of morphological paradigms, in the context of the SIGMORPHON 2021 Shared Task 2. The goal of this task is to correctly cluster words in a given language by their inflectional paradigm, without any previous knowledge of the language and without supervision from labeled data of any sort. The words in a single morphological paradigm are different inflectional variants of an underlying lemma, meaning that the words share a common core meaning. They also usually show a high degree of orthographical similarity. Following these intuitions, we investigate KMeans clustering using two different types of word representations: one focusing on orthographical similarity and the other focusing on semantic similarity. Additionally, we discuss the merits of randomly initialized centroids versus pre-defined centroids for clustering. Pre-defined centroids are identified based on either a standard longest common substring algorithm or a connected graph method built off of longest common substring. For all development languages, the character-based embeddings perform similarly to the baseline, and the semantic embeddings perform well below the baseline. Analysis of the systems' errors suggests that clustering based on orthographic representations is suitable for a wide range of morphological mechanisms, particularly as part of a larger system.",2021
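To make the orthographic route in the entry above concrete, the sketch below clusters word forms with KMeans over character n-gram TF-IDF vectors. The toy vocabulary, the n-gram range, and the use of scikit-learn are assumptions for illustration; this is not the submitted system, which also considers pre-defined centroids and semantic embeddings.

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented toy vocabulary: two small groups of inflected forms.
words = ["walk", "walks", "walked", "walking", "talk", "talks", "talked"]

# Character n-grams capture shared stems and affixes (orthographic similarity).
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
X = vectorizer.fit_transform(words)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)
for word, label in zip(words, labels):
    print(f"{word:10s} cluster {label}")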
chandran-nair-etal-2021-enough,https://aclanthology.org/2021.dravidianlangtech-1.13,0,,,,,,,"Is this Enough?-Evaluation of Malayalam Wordnet. The quality of a product is the degree to which a product meets the Customer's expectations, which must also be valid in the case of lexical-semantic resources. Conducting a periodic evaluation of resources is essential to ensure that they meet a native speaker's expectations and are free from errors. This paper defines the possible errors that a lexical-semantic resource can contain, how they may impact downstream applications and explains the steps applied to evaluate and quantify the quality of Malayalam WordNet. Malayalam is one of the classical languages of India. We propose an approach allowing to subset the part of the WordNet tied to the lowest quality scores. We aim to work on this subset in a crowdsourcing context to improve the quality of the resource.",Is this Enough?-Evaluation of {M}alayalam {W}ordnet,"The quality of a product is the degree to which a product meets the Customer's expectations, which must also be valid in the case of lexical-semantic resources. Conducting a periodic evaluation of resources is essential to ensure that they meet a native speaker's expectations and are free from errors. This paper defines the possible errors that a lexical-semantic resource can contain, how they may impact downstream applications and explains the steps applied to evaluate and quantify the quality of Malayalam WordNet. Malayalam is one of the classical languages of India. We propose an approach allowing to subset the part of the WordNet tied to the lowest quality scores. We aim to work on this subset in a crowdsourcing context to improve the quality of the resource.",Is this Enough?-Evaluation of Malayalam Wordnet,"The quality of a product is the degree to which a product meets the Customer's expectations, which must also be valid in the case of lexical-semantic resources. Conducting a periodic evaluation of resources is essential to ensure that they meet a native speaker's expectations and are free from errors. This paper defines the possible errors that a lexical-semantic resource can contain, how they may impact downstream applications and explains the steps applied to evaluate and quantify the quality of Malayalam WordNet. Malayalam is one of the classical languages of India. We propose an approach allowing to subset the part of the WordNet tied to the lowest quality scores. We aim to work on this subset in a crowdsourcing context to improve the quality of the resource.","We are not taking these values as the final deciding factor. We will be using this low score synsets as a candidate set for our crowdsourcing application. This application will have different tasks like define gloss, provide Synset, validate the gloss, and so on.","Is this Enough?-Evaluation of Malayalam Wordnet. The quality of a product is the degree to which a product meets the Customer's expectations, which must also be valid in the case of lexical-semantic resources. Conducting a periodic evaluation of resources is essential to ensure that they meet a native speaker's expectations and are free from errors. This paper defines the possible errors that a lexical-semantic resource can contain, how they may impact downstream applications and explains the steps applied to evaluate and quantify the quality of Malayalam WordNet. Malayalam is one of the classical languages of India. 
We propose an approach allowing to subset the part of the WordNet tied to the lowest quality scores. We aim to work on this subset in a crowdsourcing context to improve the quality of the resource.",2021
bamman-etal-2020-annotated,https://aclanthology.org/2020.lrec-1.6,0,,,,,,,"An Annotated Dataset of Coreference in English Literature. We present in this work a new dataset of coreference annotations for works of literature in English, covering 29,103 mentions in 210,532 tokens from 100 works of fiction. This dataset differs from previous coreference datasets in containing documents whose average length (2,105.3 words) is four times longer than other benchmark datasets (463.7 for OntoNotes), and contains examples of difficult coreference problems common in literature. This dataset allows for an evaluation of cross-domain performance for the task of coreference resolution, and analysis into the characteristics of long-distance within-document coreference.",An Annotated Dataset of Coreference in {E}nglish Literature,"We present in this work a new dataset of coreference annotations for works of literature in English, covering 29,103 mentions in 210,532 tokens from 100 works of fiction. This dataset differs from previous coreference datasets in containing documents whose average length (2,105.3 words) is four times longer than other benchmark datasets (463.7 for OntoNotes), and contains examples of difficult coreference problems common in literature. This dataset allows for an evaluation of cross-domain performance for the task of coreference resolution, and analysis into the characteristics of long-distance within-document coreference.",An Annotated Dataset of Coreference in English Literature,"We present in this work a new dataset of coreference annotations for works of literature in English, covering 29,103 mentions in 210,532 tokens from 100 works of fiction. This dataset differs from previous coreference datasets in containing documents whose average length (2,105.3 words) is four times longer than other benchmark datasets (463.7 for OntoNotes), and contains examples of difficult coreference problems common in literature. This dataset allows for an evaluation of cross-domain performance for the task of coreference resolution, and analysis into the characteristics of long-distance within-document coreference.",The research reported in this article was supported by an Amazon Research Award and by resources provided by NVIDIA and Berkeley Research Computing.,"An Annotated Dataset of Coreference in English Literature. We present in this work a new dataset of coreference annotations for works of literature in English, covering 29,103 mentions in 210,532 tokens from 100 works of fiction. This dataset differs from previous coreference datasets in containing documents whose average length (2,105.3 words) is four times longer than other benchmark datasets (463.7 for OntoNotes), and contains examples of difficult coreference problems common in literature. This dataset allows for an evaluation of cross-domain performance for the task of coreference resolution, and analysis into the characteristics of long-distance within-document coreference.",2020
wu-dredze-2020-explicit,https://aclanthology.org/2020.emnlp-main.362,0,,,,,,,"Do Explicit Alignments Robustly Improve Multilingual Encoders?. Multilingual BERT (Devlin et al., 2019, mBERT), XLM-RoBERTa (Conneau et al., 2019, XLMR) and other unsupervised multilingual encoders can effectively learn crosslingual representation. Explicit alignment objectives based on bitexts like Europarl or MultiUN have been shown to further improve these representations. However, word-level alignments are often suboptimal and such bitexts are unavailable for many languages. In this paper, we propose a new contrastive alignment objective that can better utilize such signal, and examine whether these previous alignment methods can be adapted to noisier sources of aligned data: a randomly sampled 1 million pair subset of the OPUS collection. Additionally, rather than report results on a single dataset with a single model run, we report the mean and standard deviation of multiple runs with different seeds, on four datasets and tasks. Our more extensive analysis finds that, while our new objective outperforms previous work, overall these methods do not improve performance with a more robust evaluation framework. Furthermore, the gains from using a better underlying model eclipse any benefits from alignment training. These negative results dictate more care in evaluating these methods and suggest limitations in applying explicit alignment objectives.",Do Explicit Alignments Robustly Improve Multilingual Encoders?,"Multilingual BERT (Devlin et al., 2019, mBERT), XLM-RoBERTa (Conneau et al., 2019, XLMR) and other unsupervised multilingual encoders can effectively learn crosslingual representation. Explicit alignment objectives based on bitexts like Europarl or MultiUN have been shown to further improve these representations. However, word-level alignments are often suboptimal and such bitexts are unavailable for many languages. In this paper, we propose a new contrastive alignment objective that can better utilize such signal, and examine whether these previous alignment methods can be adapted to noisier sources of aligned data: a randomly sampled 1 million pair subset of the OPUS collection. Additionally, rather than report results on a single dataset with a single model run, we report the mean and standard deviation of multiple runs with different seeds, on four datasets and tasks. Our more extensive analysis finds that, while our new objective outperforms previous work, overall these methods do not improve performance with a more robust evaluation framework. Furthermore, the gains from using a better underlying model eclipse any benefits from alignment training. These negative results dictate more care in evaluating these methods and suggest limitations in applying explicit alignment objectives.",Do Explicit Alignments Robustly Improve Multilingual Encoders?,"Multilingual BERT (Devlin et al., 2019, mBERT), XLM-RoBERTa (Conneau et al., 2019, XLMR) and other unsupervised multilingual encoders can effectively learn crosslingual representation. Explicit alignment objectives based on bitexts like Europarl or MultiUN have been shown to further improve these representations. However, word-level alignments are often suboptimal and such bitexts are unavailable for many languages. 
In this paper, we propose a new contrastive alignment objective that can better utilize such signal, and examine whether these previous alignment methods can be adapted to noisier sources of aligned data: a randomly sampled 1 million pair subset of the OPUS collection. Additionally, rather than report results on a single dataset with a single model run, we report the mean and standard deviation of multiple runs with different seeds, on four datasets and tasks. Our more extensive analysis finds that, while our new objective outperforms previous work, overall these methods do not improve performance with a more robust evaluation framework. Furthermore, the gains from using a better underlying model eclipse any benefits from alignment training. These negative results dictate more care in evaluating these methods and suggest limitations in applying explicit alignment objectives.","This research is supported in part by ODNI, IARPA, via the BETTER Program contract #2019-","Do Explicit Alignments Robustly Improve Multilingual Encoders?. Multilingual BERT (Devlin et al., 2019, mBERT), XLM-RoBERTa (Conneau et al., 2019, XLMR) and other unsupervised multilingual encoders can effectively learn crosslingual representation. Explicit alignment objectives based on bitexts like Europarl or MultiUN have been shown to further improve these representations. However, word-level alignments are often suboptimal and such bitexts are unavailable for many languages. In this paper, we propose a new contrastive alignment objective that can better utilize such signal, and examine whether these previous alignment methods can be adapted to noisier sources of aligned data: a randomly sampled 1 million pair subset of the OPUS collection. Additionally, rather than report results on a single dataset with a single model run, we report the mean and standard deviation of multiple runs with different seeds, on four datasets and tasks. Our more extensive analysis finds that, while our new objective outperforms previous work, overall these methods do not improve performance with a more robust evaluation framework. Furthermore, the gains from using a better underlying model eclipse any benefits from alignment training. These negative results dictate more care in evaluating these methods and suggest limitations in applying explicit alignment objectives.",2020
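The entry above does not spell out its contrastive alignment objective, so the following PyTorch snippet is only a generic, hedged sketch of an InfoNCE-style loss over paired source/target word vectors; the function name, tensor shapes, and temperature are invented for illustration and are not the paper's exact formulation.

import torch
import torch.nn.functional as F

def contrastive_alignment_loss(src, tgt, temperature=0.1):
    # src, tgt: (n_pairs, dim) embeddings of aligned source/target words.
    src = F.normalize(src, dim=-1)
    tgt = F.normalize(tgt, dim=-1)
    logits = src @ tgt.t() / temperature           # pairwise similarities
    labels = torch.arange(src.size(0))             # i-th source aligns to i-th target
    # Symmetric cross-entropy: other words in the batch act as negatives.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

# Invented toy batch: 8 aligned word pairs with 32-dimensional vectors.
print(contrastive_alignment_loss(torch.randn(8, 32), torch.randn(8, 32)).item())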
qian-etal-2021-lifelong,https://aclanthology.org/2021.naacl-main.183,1,,,,hate_speech,,,"Lifelong Learning of Hate Speech Classification on Social Media. Existing work on automated hate speech classification assumes that the dataset is fixed and the classes are pre-defined. However, the amount of data in social media increases every day, and the hot topics change rapidly, requiring the classifiers to be able to continuously adapt to new data without forgetting the previously learned knowledge. This ability, referred to as lifelong learning, is crucial for the real-world application of hate speech classifiers in social media. In this work, we propose lifelong learning of hate speech classification on social media. To alleviate catastrophic forgetting, we propose to use Variational Representation Learning (VRL) along with a memory module based on LB-SOINN (Load-Balancing Self-Organizing Incremental Neural Network). Experimentally, we show that combining variational representation learning and the LB-SOINN memory module achieves better performance than the commonly-used lifelong learning techniques.",Lifelong Learning of Hate Speech Classification on Social Media,"Existing work on automated hate speech classification assumes that the dataset is fixed and the classes are pre-defined. However, the amount of data in social media increases every day, and the hot topics change rapidly, requiring the classifiers to be able to continuously adapt to new data without forgetting the previously learned knowledge. This ability, referred to as lifelong learning, is crucial for the real-world application of hate speech classifiers in social media. In this work, we propose lifelong learning of hate speech classification on social media. To alleviate catastrophic forgetting, we propose to use Variational Representation Learning (VRL) along with a memory module based on LB-SOINN (Load-Balancing Self-Organizing Incremental Neural Network). Experimentally, we show that combining variational representation learning and the LB-SOINN memory module achieves better performance than the commonly-used lifelong learning techniques.",Lifelong Learning of Hate Speech Classification on Social Media,"Existing work on automated hate speech classification assumes that the dataset is fixed and the classes are pre-defined. However, the amount of data in social media increases every day, and the hot topics change rapidly, requiring the classifiers to be able to continuously adapt to new data without forgetting the previously learned knowledge. This ability, referred to as lifelong learning, is crucial for the real-world application of hate speech classifiers in social media. In this work, we propose lifelong learning of hate speech classification on social media. To alleviate catastrophic forgetting, we propose to use Variational Representation Learning (VRL) along with a memory module based on LB-SOINN (Load-Balancing Self-Organizing Incremental Neural Network). Experimentally, we show that combining variational representation learning and the LB-SOINN memory module achieves better performance than the commonly-used lifelong learning techniques.",,"Lifelong Learning of Hate Speech Classification on Social Media. Existing work on automated hate speech classification assumes that the dataset is fixed and the classes are pre-defined. 
However, the amount of data in social media increases every day, and the hot topics change rapidly, requiring the classifiers to be able to continuously adapt to new data without forgetting the previously learned knowledge. This ability, referred to as lifelong learning, is crucial for the real-world application of hate speech classifiers in social media. In this work, we propose lifelong learning of hate speech classification on social media. To alleviate catastrophic forgetting, we propose to use Variational Representation Learning (VRL) along with a memory module based on LB-SOINN (Load-Balancing Self-Organizing Incremental Neural Network). Experimentally, we show that combining variational representation learning and the LB-SOINN memory module achieves better performance than the commonly-used lifelong learning techniques.",2021
yuan-bryant-2021-document,https://aclanthology.org/2021.bea-1.8,0,,,,,,,"Document-level grammatical error correction. Document-level context can provide valuable information in grammatical error correction (GEC), which is crucial for correcting certain errors and resolving inconsistencies. In this paper, we investigate context-aware approaches and propose document-level GEC systems. Additionally, we employ a three-step training strategy to benefit from both sentence-level and document-level data. Our system outperforms previous document-level and all other NMT-based single-model systems, achieving state of the art on a common test set.",Document-level grammatical error correction,"Document-level context can provide valuable information in grammatical error correction (GEC), which is crucial for correcting certain errors and resolving inconsistencies. In this paper, we investigate context-aware approaches and propose document-level GEC systems. Additionally, we employ a three-step training strategy to benefit from both sentence-level and document-level data. Our system outperforms previous document-level and all other NMT-based single-model systems, achieving state of the art on a common test set.",Document-level grammatical error correction,"Document-level context can provide valuable information in grammatical error correction (GEC), which is crucial for correcting certain errors and resolving inconsistencies. In this paper, we investigate context-aware approaches and propose document-level GEC systems. Additionally, we employ a three-step training strategy to benefit from both sentence-level and document-level data. Our system outperforms previous document-level and all other NMT-based single-model systems, achieving state of the art on a common test set.","We would like to thank Cambridge Assessment for supporting this research, and the anonymous re-","Document-level grammatical error correction. Document-level context can provide valuable information in grammatical error correction (GEC), which is crucial for correcting certain errors and resolving inconsistencies. In this paper, we investigate context-aware approaches and propose document-level GEC systems. Additionally, we employ a three-step training strategy to benefit from both sentence-level and document-level data. Our system outperforms previous document-level and all other NMT-based single-model systems, achieving state of the art on a common test set.",2021
lascarides-oberlander-1993-temporal,https://aclanthology.org/E93-1031,0,,,,,,,"Temporal Connectives in a Discourse Context. We examine the role of temporal connectives in multi-sentence discourse. In certain contexts, sentences containing temporal connectives that are equivalent in temporal structure can fail to be equivalent in terms of discourse coherence. We account for this by offering a novel, formal mechanism for accommodating the presuppositions in temporal subordinate clauses. This mechanism encompasses both accommodation by discourse attachment and accommodation by temporal addition. As such, it offers a precise and systematic model of interactions between presupposed material, discourse context, and the reader's background knowledge. We show how the results of accommodation help to determine a discourse's coherence.",Temporal Connectives in a Discourse Context,"We examine the role of temporal connectives in multi-sentence discourse. In certain contexts, sentences containing temporal connectives that are equivalent in temporal structure can fail to be equivalent in terms of discourse coherence. We account for this by offering a novel, formal mechanism for accommodating the presuppositions in temporal subordinate clauses. This mechanism encompasses both accommodation by discourse attachment and accommodation by temporal addition. As such, it offers a precise and systematic model of interactions between presupposed material, discourse context, and the reader's background knowledge. We show how the results of accommodation help to determine a discourse's coherence.",Temporal Connectives in a Discourse Context,"We examine the role of temporal connectives in multi-sentence discourse. In certain contexts, sentences containing temporal connectives that are equivalent in temporal structure can fail to be equivalent in terms of discourse coherence. We account for this by offering a novel, formal mechanism for accommodating the presuppositions in temporal subordinate clauses. This mechanism encompasses both accommodation by discourse attachment and accommodation by temporal addition. As such, it offers a precise and systematic model of interactions between presupposed material, discourse context, and the reader's background knowledge. We show how the results of accommodation help to determine a discourse's coherence.",,"Temporal Connectives in a Discourse Context. We examine the role of temporal connectives in multi-sentence discourse. In certain contexts, sentences containing temporal connectives that are equivalent in temporal structure can fail to be equivalent in terms of discourse coherence. We account for this by offering a novel, formal mechanism for accommodating the presuppositions in temporal subordinate clauses. This mechanism encompasses both accommodation by discourse attachment and accommodation by temporal addition. As such, it offers a precise and systematic model of interactions between presupposed material, discourse context, and the reader's background knowledge. We show how the results of accommodation help to determine a discourse's coherence.",1993
benton-etal-2019-deep,https://aclanthology.org/W19-4301,0,,,,,,,"Deep Generalized Canonical Correlation Analysis. We present Deep Generalized Canonical Correlation Analysis (DGCCA)-a method for learning nonlinear transformations of arbitrarily many views of data, such that the resulting transformations are maximally informative of each other. While methods for nonlinear twoview representation learning (Deep CCA, (Andrew et al., 2013)) and linear many-view representation learning (Generalized CCA (Horst, 1961)) exist, DGCCA combines the flexibility of nonlinear (deep) representation learning with the statistical power of incorporating information from many sources, or views. We present the DGCCA formulation as well as an efficient stochastic optimization algorithm for solving it. We learn and evaluate DGCCA representations for three downstream tasks: phonetic transcription from acoustic & articulatory measurements, recommending hashtags, and recommending friends on a dataset of Twitter users.",Deep Generalized Canonical Correlation Analysis,"We present Deep Generalized Canonical Correlation Analysis (DGCCA)-a method for learning nonlinear transformations of arbitrarily many views of data, such that the resulting transformations are maximally informative of each other. While methods for nonlinear twoview representation learning (Deep CCA, (Andrew et al., 2013)) and linear many-view representation learning (Generalized CCA (Horst, 1961)) exist, DGCCA combines the flexibility of nonlinear (deep) representation learning with the statistical power of incorporating information from many sources, or views. We present the DGCCA formulation as well as an efficient stochastic optimization algorithm for solving it. We learn and evaluate DGCCA representations for three downstream tasks: phonetic transcription from acoustic & articulatory measurements, recommending hashtags, and recommending friends on a dataset of Twitter users.",Deep Generalized Canonical Correlation Analysis,"We present Deep Generalized Canonical Correlation Analysis (DGCCA)-a method for learning nonlinear transformations of arbitrarily many views of data, such that the resulting transformations are maximally informative of each other. While methods for nonlinear twoview representation learning (Deep CCA, (Andrew et al., 2013)) and linear many-view representation learning (Generalized CCA (Horst, 1961)) exist, DGCCA combines the flexibility of nonlinear (deep) representation learning with the statistical power of incorporating information from many sources, or views. We present the DGCCA formulation as well as an efficient stochastic optimization algorithm for solving it. We learn and evaluate DGCCA representations for three downstream tasks: phonetic transcription from acoustic & articulatory measurements, recommending hashtags, and recommending friends on a dataset of Twitter users.",,"Deep Generalized Canonical Correlation Analysis. We present Deep Generalized Canonical Correlation Analysis (DGCCA)-a method for learning nonlinear transformations of arbitrarily many views of data, such that the resulting transformations are maximally informative of each other. While methods for nonlinear twoview representation learning (Deep CCA, (Andrew et al., 2013)) and linear many-view representation learning (Generalized CCA (Horst, 1961)) exist, DGCCA combines the flexibility of nonlinear (deep) representation learning with the statistical power of incorporating information from many sources, or views. 
We present the DGCCA formulation as well as an efficient stochastic optimization algorithm for solving it. We learn and evaluate DGCCA representations for three downstream tasks: phonetic transcription from acoustic & articulatory measurements, recommending hashtags, and recommending friends on a dataset of Twitter users.",2019
forbes-webber-2002-semantic,https://aclanthology.org/W02-0204,0,,,,,,,A Semantic Account of Adverbials as Discourse Connectives. ,A Semantic Account of Adverbials as Discourse Connectives,,A Semantic Account of Adverbials as Discourse Connectives,,,A Semantic Account of Adverbials as Discourse Connectives. ,2002
seker-etal-2018-universal,https://aclanthology.org/K18-2021,0,,,,,,,"Universal Morpho-Syntactic Parsing and the Contribution of Lexica: Analyzing the ONLP Lab Submission to the CoNLL 2018 Shared Task. We present the contribution of the ONLP lab at the Open University of Israel to the CONLL 2018 UD SHARED TASK on MULTILINGUAL PARSING FROM RAW TEXT TO UNIVERSAL DEPENDENCIES. Our contribution is based on a transitionbased parser called yap: yet another parser which includes a standalone morphological model, a standalone dependency model, and a joint morphosyntactic model. In the task we used yap's standalone dependency parser to parse input morphologically disambiguated by UD-Pipe, and obtained the official score of 58.35 LAS. In a follow up investigation we use yap to show how the incorporation of morphological and lexical resources may improve the performance of end-to-end raw-to-dependencies parsing in the case of a morphologically-rich and low-resource language, Modern Hebrew. Our results on Hebrew underscore the importance of CoNLL-UL, a UD-compatible standard for accessing external lexical resources, for enhancing end-to-end UD parsing, in particular for morphologically rich and low-resource languages. We thus encourage the community to create, convert, or make available more such lexica.",Universal Morpho-Syntactic Parsing and the Contribution of Lexica: Analyzing the {ONLP} Lab Submission to the {C}o{NLL} 2018 Shared Task,"We present the contribution of the ONLP lab at the Open University of Israel to the CONLL 2018 UD SHARED TASK on MULTILINGUAL PARSING FROM RAW TEXT TO UNIVERSAL DEPENDENCIES. Our contribution is based on a transitionbased parser called yap: yet another parser which includes a standalone morphological model, a standalone dependency model, and a joint morphosyntactic model. In the task we used yap's standalone dependency parser to parse input morphologically disambiguated by UD-Pipe, and obtained the official score of 58.35 LAS. In a follow up investigation we use yap to show how the incorporation of morphological and lexical resources may improve the performance of end-to-end raw-to-dependencies parsing in the case of a morphologically-rich and low-resource language, Modern Hebrew. Our results on Hebrew underscore the importance of CoNLL-UL, a UD-compatible standard for accessing external lexical resources, for enhancing end-to-end UD parsing, in particular for morphologically rich and low-resource languages. We thus encourage the community to create, convert, or make available more such lexica.",Universal Morpho-Syntactic Parsing and the Contribution of Lexica: Analyzing the ONLP Lab Submission to the CoNLL 2018 Shared Task,"We present the contribution of the ONLP lab at the Open University of Israel to the CONLL 2018 UD SHARED TASK on MULTILINGUAL PARSING FROM RAW TEXT TO UNIVERSAL DEPENDENCIES. Our contribution is based on a transitionbased parser called yap: yet another parser which includes a standalone morphological model, a standalone dependency model, and a joint morphosyntactic model. In the task we used yap's standalone dependency parser to parse input morphologically disambiguated by UD-Pipe, and obtained the official score of 58.35 LAS. In a follow up investigation we use yap to show how the incorporation of morphological and lexical resources may improve the performance of end-to-end raw-to-dependencies parsing in the case of a morphologically-rich and low-resource language, Modern Hebrew. 
Our results on Hebrew underscore the importance of CoNLL-UL, a UD-compatible standard for accessing external lexical resources, for enhancing end-to-end UD parsing, in particular for morphologically rich and low-resource languages. We thus encourage the community to create, convert, or make available more such lexica.","We thank the CoNLL Shared Task Organizing Committee for their hard work and their timely support. We also thank the TIRA platform team (Potthast et al., 2014) for providing a system that facilitates competition and reproducible research. The research towards this shared task submission has been supported by the European Research Council, ERC-StG-2015 Grant 677352, and by an Israel Science Foundation (ISF) Grant 1739/26, for which we are grateful.","Universal Morpho-Syntactic Parsing and the Contribution of Lexica: Analyzing the ONLP Lab Submission to the CoNLL 2018 Shared Task. We present the contribution of the ONLP lab at the Open University of Israel to the CONLL 2018 UD SHARED TASK on MULTILINGUAL PARSING FROM RAW TEXT TO UNIVERSAL DEPENDENCIES. Our contribution is based on a transitionbased parser called yap: yet another parser which includes a standalone morphological model, a standalone dependency model, and a joint morphosyntactic model. In the task we used yap's standalone dependency parser to parse input morphologically disambiguated by UD-Pipe, and obtained the official score of 58.35 LAS. In a follow up investigation we use yap to show how the incorporation of morphological and lexical resources may improve the performance of end-to-end raw-to-dependencies parsing in the case of a morphologically-rich and low-resource language, Modern Hebrew. Our results on Hebrew underscore the importance of CoNLL-UL, a UD-compatible standard for accessing external lexical resources, for enhancing end-to-end UD parsing, in particular for morphologically rich and low-resource languages. We thus encourage the community to create, convert, or make available more such lexica.",2018
aberdeen-etal-2001-finding,https://aclanthology.org/H01-1028,0,,,,,,,"Finding Errors Automatically in Semantically Tagged Dialogues. We describe a novel method for detecting errors in task-based human-computer (HC) dialogues by automatically deriving them from semantic tags. We examined 27 HC dialogues from the DARPA Communicator air travel domain, comparing user inputs to system responses to look for slot value discrepancies, both automatically and manually. For the automatic method, we labeled the dialogues with semantic tags corresponding to ""slots"" that would be filled in ""frames"" in the course of the travel task. We then applied an automatic algorithm to detect errors in the dialogues. The same dialogues were also manually tagged (by a different annotator) to label errors directly. An analysis of the results of the two tagging methods indicates that it may be possible to detect errors automatically in this way, but our method needs further work to reduce the number of false errors detected. Finally, we present a discussion of the differing results from the two tagging methods.",Finding Errors Automatically in Semantically Tagged Dialogues,"We describe a novel method for detecting errors in task-based human-computer (HC) dialogues by automatically deriving them from semantic tags. We examined 27 HC dialogues from the DARPA Communicator air travel domain, comparing user inputs to system responses to look for slot value discrepancies, both automatically and manually. For the automatic method, we labeled the dialogues with semantic tags corresponding to ""slots"" that would be filled in ""frames"" in the course of the travel task. We then applied an automatic algorithm to detect errors in the dialogues. The same dialogues were also manually tagged (by a different annotator) to label errors directly. An analysis of the results of the two tagging methods indicates that it may be possible to detect errors automatically in this way, but our method needs further work to reduce the number of false errors detected. Finally, we present a discussion of the differing results from the two tagging methods.",Finding Errors Automatically in Semantically Tagged Dialogues,"We describe a novel method for detecting errors in task-based human-computer (HC) dialogues by automatically deriving them from semantic tags. We examined 27 HC dialogues from the DARPA Communicator air travel domain, comparing user inputs to system responses to look for slot value discrepancies, both automatically and manually. For the automatic method, we labeled the dialogues with semantic tags corresponding to ""slots"" that would be filled in ""frames"" in the course of the travel task. We then applied an automatic algorithm to detect errors in the dialogues. The same dialogues were also manually tagged (by a different annotator) to label errors directly. An analysis of the results of the two tagging methods indicates that it may be possible to detect errors automatically in this way, but our method needs further work to reduce the number of false errors detected. Finally, we present a discussion of the differing results from the two tagging methods.",,"Finding Errors Automatically in Semantically Tagged Dialogues. We describe a novel method for detecting errors in task-based human-computer (HC) dialogues by automatically deriving them from semantic tags. 
We examined 27 HC dialogues from the DARPA Communicator air travel domain, comparing user inputs to system responses to look for slot value discrepancies, both automatically and manually. For the automatic method, we labeled the dialogues with semantic tags corresponding to ""slots"" that would be filled in ""frames"" in the course of the travel task. We then applied an automatic algorithm to detect errors in the dialogues. The same dialogues were also manually tagged (by a different annotator) to label errors directly. An analysis of the results of the two tagging methods indicates that it may be possible to detect errors automatically in this way, but our method needs further work to reduce the number of false errors detected. Finally, we present a discussion of the differing results from the two tagging methods.",2001
vazquez-etal-2020-systematic,https://aclanthology.org/2020.cl-2.5,0,,,,,,,"A Systematic Study of Inner-Attention-Based Sentence Representations in Multilingual Neural Machine Translation. Neural machine translation has considerably improved the quality of automatic translations by learning good representations of input sentences. In this article, we explore a multilingual translation model capable of producing fixed-size sentence representations by incorporating an intermediate crosslingual shared layer, which we refer to as attention bridge. This layer exploits the semantics from each language and develops into a language-agnostic meaning representation that can be efficiently used for transfer learning. We systematically study the impact of the size of the attention bridge and the effect of including additional languages in the model. In contrast to related previous work, we demonstrate that there is no conflict between translation performance and the use of sentence representations in downstream tasks. In particular, we show that larger intermediate layers not only improve translation quality, especially for long sentences, but also push the accuracy of trainable classification tasks. Nevertheless, shorter representations lead to increased compression that is beneficial in non-trainable similarity tasks. Similarly, we show that trainable downstream tasks benefit from multilingual models, whereas additional language signals do not improve performance in non-trainable benchmarks. This is an important insight Submission",A Systematic Study of Inner-Attention-Based Sentence Representations in Multilingual Neural Machine Translation,"Neural machine translation has considerably improved the quality of automatic translations by learning good representations of input sentences. In this article, we explore a multilingual translation model capable of producing fixed-size sentence representations by incorporating an intermediate crosslingual shared layer, which we refer to as attention bridge. This layer exploits the semantics from each language and develops into a language-agnostic meaning representation that can be efficiently used for transfer learning. We systematically study the impact of the size of the attention bridge and the effect of including additional languages in the model. In contrast to related previous work, we demonstrate that there is no conflict between translation performance and the use of sentence representations in downstream tasks. In particular, we show that larger intermediate layers not only improve translation quality, especially for long sentences, but also push the accuracy of trainable classification tasks. Nevertheless, shorter representations lead to increased compression that is beneficial in non-trainable similarity tasks. Similarly, we show that trainable downstream tasks benefit from multilingual models, whereas additional language signals do not improve performance in non-trainable benchmarks. This is an important insight Submission",A Systematic Study of Inner-Attention-Based Sentence Representations in Multilingual Neural Machine Translation,"Neural machine translation has considerably improved the quality of automatic translations by learning good representations of input sentences. In this article, we explore a multilingual translation model capable of producing fixed-size sentence representations by incorporating an intermediate crosslingual shared layer, which we refer to as attention bridge. 
This layer exploits the semantics from each language and develops into a language-agnostic meaning representation that can be efficiently used for transfer learning. We systematically study the impact of the size of the attention bridge and the effect of including additional languages in the model. In contrast to related previous work, we demonstrate that there is no conflict between translation performance and the use of sentence representations in downstream tasks. In particular, we show that larger intermediate layers not only improve translation quality, especially for long sentences, but also push the accuracy of trainable classification tasks. Nevertheless, shorter representations lead to increased compression that is beneficial in non-trainable similarity tasks. Similarly, we show that trainable downstream tasks benefit from multilingual models, whereas additional language signals do not improve performance in non-trainable benchmarks. This is an important insight Submission","This work is part of the FoTran project, funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement no 771113). The authors gratefully acknowledge the support of the Academy of Finland through project 314062 from the ICT 2023 call on Computation, Machine Learning and Artificial Intelligence and projects 270354 and 273457. Finally, we would also like to acknowledge CSC -IT Center for Science, Finland, for computational resources, as well as NVIDIA and their GPU grant.","A Systematic Study of Inner-Attention-Based Sentence Representations in Multilingual Neural Machine Translation. Neural machine translation has considerably improved the quality of automatic translations by learning good representations of input sentences. In this article, we explore a multilingual translation model capable of producing fixed-size sentence representations by incorporating an intermediate crosslingual shared layer, which we refer to as attention bridge. This layer exploits the semantics from each language and develops into a language-agnostic meaning representation that can be efficiently used for transfer learning. We systematically study the impact of the size of the attention bridge and the effect of including additional languages in the model. In contrast to related previous work, we demonstrate that there is no conflict between translation performance and the use of sentence representations in downstream tasks. In particular, we show that larger intermediate layers not only improve translation quality, especially for long sentences, but also push the accuracy of trainable classification tasks. Nevertheless, shorter representations lead to increased compression that is beneficial in non-trainable similarity tasks. Similarly, we show that trainable downstream tasks benefit from multilingual models, whereas additional language signals do not improve performance in non-trainable benchmarks. This is an important insight Submission",2020
nirenburg-etal-1986-knowledge,https://aclanthology.org/C86-1148,0,,,,,,,"On Knowledge-Based Machine Translation. This paper describes the design of the knowledge representation medium used for representing concepts and assertions, respectively, in a subworld chosen for a knowledge-based machine translation system. This design is used in the TRANSLATOR machine translation project. The knowledge representation language, or interlingua, has two components, DIL and TIL. DIL stands for 'dictionary of interlingua' and describes the semantics of a subworld. TIL stands for 'text of interlingua' and is responsible for producing an interlingua text, which represents the meaning of an input text in the terms of the interlingua. We maintain that involved analysis of various types of linguistic and encyclopaedic meaning is necessary for the task of automatic translation. The mechanisms for extracting, manipulating and reproducing the meaning of texts will be reported in detail elsewhere. The linguistic (including the syntactic) knowledge about source and target languages is used by the mechanisms that translate texts into and from the interlingua. Since the interlingua is an artificial language, we can (and do, through TIL) control the syntax and semantics of the allowed interlingua elements. The interlingua suggested for TRANSLATOR has a broader coverage than other knowledge representation schemata for natural language. It involves knowledge about discourse, speech acts, focus, time, space and other facets of the overall meaning of texts.",On Knowledge-Based Machine Translation,"This paper describes the design of the knowledge representation medium used for representing concepts and assertions, respectively, in a subworld chosen for a knowledge-based machine translation system. This design is used in the TRANSLATOR machine translation project. The knowledge representation language, or interlingua, has two components, DIL and TIL. DIL stands for 'dictionary of interlingua' and describes the semantics of a subworld. TIL stands for 'text of interlingua' and is responsible for producing an interlingua text, which represents the meaning of an input text in the terms of the interlingua. We maintain that involved analysis of various types of linguistic and encyclopaedic meaning is necessary for the task of automatic translation. The mechanisms for extracting, manipulating and reproducing the meaning of texts will be reported in detail elsewhere. The linguistic (including the syntactic) knowledge about source and target languages is used by the mechanisms that translate texts into and from the interlingua. Since the interlingua is an artificial language, we can (and do, through TIL) control the syntax and semantics of the allowed interlingua elements. The interlingua suggested for TRANSLATOR has a broader coverage than other knowledge representation schemata for natural language. It involves knowledge about discourse, speech acts, focus, time, space and other facets of the overall meaning of texts.",On Knowledge-Based Machine Translation,"This paper describes the design of the knowledge representation medium used for representing concepts and assertions, respectively, in a subworld chosen for a knowledge-based machine translation system. This design is used in the TRANSLATOR machine translation project. The knowledge representation language, or interlingua, has two components, DIL and TIL. 
DIL stands for 'dictionary of interlingua' and describes the semantics of a subworld. TIL stands for 'text of interlingua' and is responsible for producing an interlingua text, which represents the meaning of an input text in the terms of the interlingua. We maintain that involved analysis of various types of linguistic and encyclopaedic meaning is necessary for the task of automatic translation. The mechanisms for extracting, manipulating and reproducing the meaning of texts will be reported in detail elsewhere. The linguistic (including the syntactic) knowledge about source and target languages is used by the mechanisms that translate texts into and from the interlingua. Since the interlingua is an artificial language, we can (and do, through TIL) control the syntax and semantics of the allowed interlingua elements. The interlingua suggested for TRANSLATOR has a broader coverage than other knowledge representation schemata for natural language. It involves knowledge about discourse, speech acts, focus, time, space and other facets of the overall meaning of texts.","Acknowledgement. The authors wish to thank Irene Nirenburg for reading, discussing and criticizing the numerous successive versions of the manuscript. Needless to say, it's we who are to blame for the remaining errors.","On Knowledge-Based Machine Translation. This paper describes the design of the knowledge representation medium used for representing concepts and assertions, respectively, in a subworld chosen for a knowledge-based machine translation system. This design is used in the TRANSLATOR machine translation project. The knowledge representation language, or interlingua, has two components, DIL and TIL. DIL stands for 'dictionary of interlingua' and describes the semantics of a subworld. TIL stands for 'text of interlingua' and is responsible for producing an interlingua text, which represents the meaning of an input text in the terms of the interlingua. We maintain that involved analysis of various types of linguistic and encyclopaedic meaning is necessary for the task of automatic translation. The mechanisms for extracting, manipulating and reproducing the meaning of texts will be reported in detail elsewhere. The linguistic (including the syntactic) knowledge about source and target languages is used by the mechanisms that translate texts into and from the interlingua. Since the interlingua is an artificial language, we can (and do, through TIL) control the syntax and semantics of the allowed interlingua elements. The interlingua suggested for TRANSLATOR has a broader coverage than other knowledge representation schemata for natural language. It involves knowledge about discourse, speech acts, focus, time, space and other facets of the overall meaning of texts.",1986
aarts-1995-acyclic,https://aclanthology.org/1995.iwpt-1.2,0,,,,,,,"Acyclic Context-sensitive Grammars. A grammar formalism is introduced that generates parse trees with crossing branches. The uniform recognition problem is NP-complete, but for any fixed grammar the recognition problem is polynomial.",Acyclic Context-sensitive Grammars,"A grammar formalism is introduced that generates parse trees with crossing branches. The uniform recognition problem is NP-complete, but for any fixed grammar the recognition problem is polynomial.",Acyclic Context-sensitive Grammars,"A grammar formalism is introduced that generates parse trees with crossing branches. The uniform recognition problem is NP-complete, but for any fixed grammar the recognition problem is polynomial.",,"Acyclic Context-sensitive Grammars. A grammar formalism is introduced that generates parse trees with crossing branches. The uniform recognition problem is NP-complete, but for any fixed grammar the recognition problem is polynomial.",1995
zhang-etal-2015-binarized,https://aclanthology.org/D15-1250,0,,,,,,,"A Binarized Neural Network Joint Model for Machine Translation. The neural network joint model (NNJM), which augments the neural network language model (NNLM) with an m-word source context window, has achieved large gains in machine translation accuracy, but also has problems with high normalization cost when using large vocabularies. Training the NNJM with noise-contrastive estimation (NCE), instead of standard maximum likelihood estimation (MLE), can reduce computation cost. In this paper, we propose an alternative to NCE, the binarized NNJM (BNNJM), which learns a binary classifier that takes both the context and target words as input, and can be efficiently trained using MLE. We compare the BNNJM and NNJM trained by NCE on various translation tasks.",A Binarized Neural Network Joint Model for Machine Translation,"The neural network joint model (NNJM), which augments the neural network language model (NNLM) with an m-word source context window, has achieved large gains in machine translation accuracy, but also has problems with high normalization cost when using large vocabularies. Training the NNJM with noise-contrastive estimation (NCE), instead of standard maximum likelihood estimation (MLE), can reduce computation cost. In this paper, we propose an alternative to NCE, the binarized NNJM (BNNJM), which learns a binary classifier that takes both the context and target words as input, and can be efficiently trained using MLE. We compare the BNNJM and NNJM trained by NCE on various translation tasks.",A Binarized Neural Network Joint Model for Machine Translation,"The neural network joint model (NNJM), which augments the neural network language model (NNLM) with an m-word source context window, has achieved large gains in machine translation accuracy, but also has problems with high normalization cost when using large vocabularies. Training the NNJM with noise-contrastive estimation (NCE), instead of standard maximum likelihood estimation (MLE), can reduce computation cost. In this paper, we propose an alternative to NCE, the binarized NNJM (BNNJM), which learns a binary classifier that takes both the context and target words as input, and can be efficiently trained using MLE. We compare the BNNJM and NNJM trained by NCE on various translation tasks.",,"A Binarized Neural Network Joint Model for Machine Translation. The neural network joint model (NNJM), which augments the neural network language model (NNLM) with an m-word source context window, has achieved large gains in machine translation accuracy, but also has problems with high normalization cost when using large vocabularies. Training the NNJM with noise-contrastive estimation (NCE), instead of standard maximum likelihood estimation (MLE), can reduce computation cost. In this paper, we propose an alternative to NCE, the binarized NNJM (BNNJM), which learns a binary classifier that takes both the context and target words as input, and can be efficiently trained using MLE. We compare the BNNJM and NNJM trained by NCE on various translation tasks.",2015
mariani-etal-2016-study,https://aclanthology.org/W16-1509,1,,,,industry_innovation_infrastructure,peace_justice_and_strong_institutions,,"A Study of Reuse and Plagiarism in Speech and Natural Language Processing papers. The aim of this experiment is to present an easy way to compare fragments of texts in order to detect (supposed) results of copy & paste operations between articles in the domain of Natural Language Processing, including Speech Processing (NLP). The search space of the comparisons is a corpus labelled as NLP4NLP, which includes 34 different sources and gathers a large part of the publications in the NLP field over the past 50 years. This study considers the similarity between the papers of each individual source and the complete set of papers in the whole corpus, according to four different types of relationship (self-reuse, self-plagiarism, reuse and plagiarism) and in both directions: a source paper borrowing a fragment of text from another paper of the collection, or in the reverse direction, fragments of text from the source paper being borrowed and inserted in another paper of the collection.",A Study of Reuse and Plagiarism in Speech and Natural Language Processing papers,"The aim of this experiment is to present an easy way to compare fragments of texts in order to detect (supposed) results of copy & paste operations between articles in the domain of Natural Language Processing, including Speech Processing (NLP). The search space of the comparisons is a corpus labelled as NLP4NLP, which includes 34 different sources and gathers a large part of the publications in the NLP field over the past 50 years. This study considers the similarity between the papers of each individual source and the complete set of papers in the whole corpus, according to four different types of relationship (self-reuse, self-plagiarism, reuse and plagiarism) and in both directions: a source paper borrowing a fragment of text from another paper of the collection, or in the reverse direction, fragments of text from the source paper being borrowed and inserted in another paper of the collection.",A Study of Reuse and Plagiarism in Speech and Natural Language Processing papers,"The aim of this experiment is to present an easy way to compare fragments of texts in order to detect (supposed) results of copy & paste operations between articles in the domain of Natural Language Processing, including Speech Processing (NLP). The search space of the comparisons is a corpus labelled as NLP4NLP, which includes 34 different sources and gathers a large part of the publications in the NLP field over the past 50 years. This study considers the similarity between the papers of each individual source and the complete set of papers in the whole corpus, according to four different types of relationship (self-reuse, self-plagiarism, reuse and plagiarism) and in both directions: a source paper borrowing a fragment of text from another paper of the collection, or in the reverse direction, fragments of text from the source paper being borrowed and inserted in another paper of the collection.",,"A Study of Reuse and Plagiarism in Speech and Natural Language Processing papers. The aim of this experiment is to present an easy way to compare fragments of texts in order to detect (supposed) results of copy & paste operations between articles in the domain of Natural Language Processing, including Speech Processing (NLP). 
The search space of the comparisons is a corpus labelled as NLP4NLP, which includes 34 different sources and gathers a large part of the publications in the NLP field over the past 50 years. This study considers the similarity between the papers of each individual source and the complete set of papers in the whole corpus, according to four different types of relationship (self-reuse, self-plagiarism, reuse and plagiarism) and in both directions: a source paper borrowing a fragment of text from another paper of the collection, or in the reverse direction, fragments of text from the source paper being borrowed and inserted in another paper of the collection.",2016
cunningham-etal-1997-gate,https://aclanthology.org/A97-2017,0,,,,,,,"GATE - a General Architecture for Text Engineering. yorick@dcs, shef. ac. uk For a variety of reasons NLP has recently spawned a related engineering discipline called language engineering (LE), whose orientation is towards the application of NLP techniques to solving large-scale, real-world language processing problems in a robust and predictable way. Aside from the host of fundamental theoretical problems that remain to be answered in NLP, language engineering faces a variety of problems of its own. First, there is no theory of language which is universally accepted, and no computational model of even a part of the process of language understanding which stands uncontested. Second, building intelligent application systems, systems which model or reproduce enough human language processing capability to be useful, is a largescale engineering effort which, given political and economic realities, must rely on the efforts of many small groups of researchers, spatially and temporally distributed. The first point means that any attempt to push researchers into a theoretical or representational straight-jacket is premature, unhealthy and doomed to failure. The second means that no research team alone is likely to have the resources to build from scratch an entire state-of-the-art LE application system. Given this state of affairs, what is the best practical support that can be given to advance the field? Clearly, the pressure to build on the efforts of others demands that LE tools or component technologies be readily available for experimentation and reuse. But the pressure towards theoretical diversity means that there is no point attempting to gain agreement, in the short term, on what set of component technologies should be developed or on the informational content or syntax of representations that these components should require or produce.
Our response has been to design and implement a software environment called GATE (Cunninham et al., 1997) , which we will demonstrate at ANLP. GATE attempts to meet the following objectives:",{GATE} - a General Architecture for Text Engineering,"yorick@dcs, shef. ac. uk For a variety of reasons NLP has recently spawned a related engineering discipline called language engineering (LE), whose orientation is towards the application of NLP techniques to solving large-scale, real-world language processing problems in a robust and predictable way. Aside from the host of fundamental theoretical problems that remain to be answered in NLP, language engineering faces a variety of problems of its own. First, there is no theory of language which is universally accepted, and no computational model of even a part of the process of language understanding which stands uncontested. Second, building intelligent application systems, systems which model or reproduce enough human language processing capability to be useful, is a largescale engineering effort which, given political and economic realities, must rely on the efforts of many small groups of researchers, spatially and temporally distributed. The first point means that any attempt to push researchers into a theoretical or representational straight-jacket is premature, unhealthy and doomed to failure. The second means that no research team alone is likely to have the resources to build from scratch an entire state-of-the-art LE application system. Given this state of affairs, what is the best practical support that can be given to advance the field? Clearly, the pressure to build on the efforts of others demands that LE tools or component technologies be readily available for experimentation and reuse. But the pressure towards theoretical diversity means that there is no point attempting to gain agreement, in the short term, on what set of component technologies should be developed or on the informational content or syntax of representations that these components should require or produce.
Our response has been to design and implement a software environment called GATE (Cunninham et al., 1997) , which we will demonstrate at ANLP. GATE attempts to meet the following objectives:",GATE - a General Architecture for Text Engineering,"yorick@dcs, shef. ac. uk For a variety of reasons NLP has recently spawned a related engineering discipline called language engineering (LE), whose orientation is towards the application of NLP techniques to solving large-scale, real-world language processing problems in a robust and predictable way. Aside from the host of fundamental theoretical problems that remain to be answered in NLP, language engineering faces a variety of problems of its own. First, there is no theory of language which is universally accepted, and no computational model of even a part of the process of language understanding which stands uncontested. Second, building intelligent application systems, systems which model or reproduce enough human language processing capability to be useful, is a largescale engineering effort which, given political and economic realities, must rely on the efforts of many small groups of researchers, spatially and temporally distributed. The first point means that any attempt to push researchers into a theoretical or representational straight-jacket is premature, unhealthy and doomed to failure. The second means that no research team alone is likely to have the resources to build from scratch an entire state-of-the-art LE application system. Given this state of affairs, what is the best practical support that can be given to advance the field? Clearly, the pressure to build on the efforts of others demands that LE tools or component technologies be readily available for experimentation and reuse. But the pressure towards theoretical diversity means that there is no point attempting to gain agreement, in the short term, on what set of component technologies should be developed or on the informational content or syntax of representations that these components should require or produce.
Our response has been to design and implement a software environment called GATE (Cunninham et al., 1997) , which we will demonstrate at ANLP. GATE attempts to meet the following objectives:",,"GATE - a General Architecture for Text Engineering. yorick@dcs, shef. ac. uk For a variety of reasons NLP has recently spawned a related engineering discipline called language engineering (LE), whose orientation is towards the application of NLP techniques to solving large-scale, real-world language processing problems in a robust and predictable way. Aside from the host of fundamental theoretical problems that remain to be answered in NLP, language engineering faces a variety of problems of its own. First, there is no theory of language which is universally accepted, and no computational model of even a part of the process of language understanding which stands uncontested. Second, building intelligent application systems, systems which model or reproduce enough human language processing capability to be useful, is a largescale engineering effort which, given political and economic realities, must rely on the efforts of many small groups of researchers, spatially and temporally distributed. The first point means that any attempt to push researchers into a theoretical or representational straight-jacket is premature, unhealthy and doomed to failure. The second means that no research team alone is likely to have the resources to build from scratch an entire state-of-the-art LE application system. Given this state of affairs, what is the best practical support that can be given to advance the field? Clearly, the pressure to build on the efforts of others demands that LE tools or component technologies be readily available for experimentation and reuse. But the pressure towards theoretical diversity means that there is no point attempting to gain agreement, in the short term, on what set of component technologies should be developed or on the informational content or syntax of representations that these components should require or produce.
Our response has been to design and implement a software environment called GATE (Cunninham et al., 1997) , which we will demonstrate at ANLP. GATE attempts to meet the following objectives:",1997
ahmed-nurnberger-2008-arabic,https://aclanthology.org/2008.eamt-1.3,0,,,,,,,"Arabic/English word translation disambiguation using parallel corpora and matching schemes. The limited coverage of available Arabic language lexicons causes a serious challenge in Arabic cross language information retrieval. Translation in cross language information retrieval consists of assigning one of the semantic representation terms in the target language to the intended query. Despite the problem of the completeness of the dictionary, we also face the problem of which one of the translations proposed by the dictionary for each query term should be included in the query translations. In this paper, we describe the implementation and evaluation of an Arabic/English word translation disambiguation approach that is based on exploiting a large bilingual corpus and statistical co-occurrence to find the correct sense for the query translations terms. The correct word translations of the given query term are determined based on their cohesion with words in the training corpus and a special similarity score measure. The specific properties of the Arabic language that frequently hinder the correct match are taken into account.",{A}rabic/{E}nglish word translation disambiguation using parallel corpora and matching schemes,"The limited coverage of available Arabic language lexicons causes a serious challenge in Arabic cross language information retrieval. Translation in cross language information retrieval consists of assigning one of the semantic representation terms in the target language to the intended query. Despite the problem of the completeness of the dictionary, we also face the problem of which one of the translations proposed by the dictionary for each query term should be included in the query translations. In this paper, we describe the implementation and evaluation of an Arabic/English word translation disambiguation approach that is based on exploiting a large bilingual corpus and statistical co-occurrence to find the correct sense for the query translations terms. The correct word translations of the given query term are determined based on their cohesion with words in the training corpus and a special similarity score measure. The specific properties of the Arabic language that frequently hinder the correct match are taken into account.",Arabic/English word translation disambiguation using parallel corpora and matching schemes,"The limited coverage of available Arabic language lexicons causes a serious challenge in Arabic cross language information retrieval. Translation in cross language information retrieval consists of assigning one of the semantic representation terms in the target language to the intended query. Despite the problem of the completeness of the dictionary, we also face the problem of which one of the translations proposed by the dictionary for each query term should be included in the query translations. In this paper, we describe the implementation and evaluation of an Arabic/English word translation disambiguation approach that is based on exploiting a large bilingual corpus and statistical co-occurrence to find the correct sense for the query translations terms. The correct word translations of the given query term are determined based on their cohesion with words in the training corpus and a special similarity score measure. 
The specific properties of the Arabic language that frequently hinder the correct match are taken into account.",,"Arabic/English word translation disambiguation using parallel corpora and matching schemes. The limited coverage of available Arabic language lexicons causes a serious challenge in Arabic cross language information retrieval. Translation in cross language information retrieval consists of assigning one of the semantic representation terms in the target language to the intended query. Despite the problem of the completeness of the dictionary, we also face the problem of which one of the translations proposed by the dictionary for each query term should be included in the query translations. In this paper, we describe the implementation and evaluation of an Arabic/English word translation disambiguation approach that is based on exploiting a large bilingual corpus and statistical co-occurrence to find the correct sense for the query translations terms. The correct word translations of the given query term are determined based on their cohesion with words in the training corpus and a special similarity score measure. The specific properties of the Arabic language that frequently hinder the correct match are taken into account.",2008
gorz-paulus-1988-finite,https://aclanthology.org/C88-1043,0,,,,,,,"A Finite State Approach to German Verb Morphology. This paper presents a new, language independent model for analysis and generation of word forms based on Finite State Transducers (FSTs). It has been completely implemented on a PC and successfully tested with lexicons and rules covering all of German verb morphology and the most interesting subsets of French and Spanish verbs as well. The linguistic databases consist of a letter-tree structured lexicon with annotated feature lists and an FST which is constructed from a set of morphophonological rules. These rewriting rules operate on complete words, unlike other FST-based systems.",A Finite State Approach to {G}erman Verb Morphology,"This paper presents a new, language independent model for analysis and generation of word forms based on Finite State Transducers (FSTs). It has been completely implemented on a PC and successfully tested with lexicons and rules covering all of German verb morphology and the most interesting subsets of French and Spanish verbs as well. The linguistic databases consist of a letter-tree structured lexicon with annotated feature lists and an FST which is constructed from a set of morphophonological rules. These rewriting rules operate on complete words, unlike other FST-based systems.",A Finite State Approach to German Verb Morphology,"This paper presents a new, language independent model for analysis and generation of word forms based on Finite State Transducers (FSTs). It has been completely implemented on a PC and successfully tested with lexicons and rules covering all of German verb morphology and the most interesting subsets of French and Spanish verbs as well. The linguistic databases consist of a letter-tree structured lexicon with annotated feature lists and an FST which is constructed from a set of morphophonological rules. These rewriting rules operate on complete words, unlike other FST-based systems.",,"A Finite State Approach to German Verb Morphology. This paper presents a new, language independent model for analysis and generation of word forms based on Finite State Transducers (FSTs). It has been completely implemented on a PC and successfully tested with lexicons and rules covering all of German verb morphology and the most interesting subsets of French and Spanish verbs as well. The linguistic databases consist of a letter-tree structured lexicon with annotated feature lists and an FST which is constructed from a set of morphophonological rules. These rewriting rules operate on complete words, unlike other FST-based systems.",1988
lamont-2018-decomposing,https://aclanthology.org/W18-0310,0,,,,,,,"Decomposing phonological transformations in serial derivations. While most phonological transformations have been shown to be subsequential, there are tonal processes that do not belong to any subregular class, thereby making it difficult to identify a tighter bound on the complexity of phonological processes than the regular languages. This paper argues that a tighter bound obtains from examining the way transformations are computed: when derived in serial, phonological processes can be decomposed into iterated subsequential maps.",Decomposing phonological transformations in serial derivations,"While most phonological transformations have been shown to be subsequential, there are tonal processes that do not belong to any subregular class, thereby making it difficult to identify a tighter bound on the complexity of phonological processes than the regular languages. This paper argues that a tighter bound obtains from examining the way transformations are computed: when derived in serial, phonological processes can be decomposed into iterated subsequential maps.",Decomposing phonological transformations in serial derivations,"While most phonological transformations have been shown to be subsequential, there are tonal processes that do not belong to any subregular class, thereby making it difficult to identify a tighter bound on the complexity of phonological processes than the regular languages. This paper argues that a tighter bound obtains from examining the way transformations are computed: when derived in serial, phonological processes can be decomposed into iterated subsequential maps.","This work has greatly benefited from discussions with Carolyn Anderson, Thomas Graf, Jeff Heinz, Adam Jardine, Gaja Jarosz, John McCarthy, Joe Pater, Brandon Prickett, Kristine Yu, participants in the Phonology Reading Group and Sound Workshop at the University of Massachusetts, Amherst, and the audience at NECPHON 11, as well as comments from three anonymous reviewers for SCiL 2018. This work was supported by the National Science Foundation through grant BCS-424077. All remaining errors are of course my own.","Decomposing phonological transformations in serial derivations. While most phonological transformations have been shown to be subsequential, there are tonal processes that do not belong to any subregular class, thereby making it difficult to identify a tighter bound on the complexity of phonological processes than the regular languages. This paper argues that a tighter bound obtains from examining the way transformations are computed: when derived in serial, phonological processes can be decomposed into iterated subsequential maps.",2018
traum-etal-2004-evaluation,http://www.lrec-conf.org/proceedings/lrec2004/pdf/768.pdf,0,,,,,,,"Evaluation of Multi-party Virtual Reality Dialogue Interaction. We describe a dialogue evaluation plan for a multi-character virtual reality training simulation. A multi-component evaluation plan is presented, including user satisfaction, intended task completion, recognition rate, and a new annotation scheme for appropriateness. Preliminary results for formative tests are also presented.",Evaluation of Multi-party Virtual Reality Dialogue Interaction,"We describe a dialogue evaluation plan for a multi-character virtual reality training simulation. A multi-component evaluation plan is presented, including user satisfaction, intended task completion, recognition rate, and a new annotation scheme for appropriateness. Preliminary results for formative tests are also presented.",Evaluation of Multi-party Virtual Reality Dialogue Interaction,"We describe a dialogue evaluation plan for a multi-character virtual reality training simulation. A multi-component evaluation plan is presented, including user satisfaction, intended task completion, recognition rate, and a new annotation scheme for appropriateness. Preliminary results for formative tests are also presented.","We would like to thank the many members of the MRE project team for help in this work. First, those who helped build parts of the system. Also Sheryl Kwak, Lori Weiss, Bryan Kramer, Dave Miraglia, Rob Groome, Jon Gratch, and Kate Labore for helping with the data collection, and Captain Roland Miraco and Sergeant Dan Johnson for helping find cadet trainees. Eduard Hovy, Shri Narayanan, Kevin Knight, and Anton Leuski have given useful advice on evaluation. The work described in this paper was supported by the Department of the Army under contract number DAAD 19-99-D-0046. Any opinions, findings and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the Department of the Army.","Evaluation of Multi-party Virtual Reality Dialogue Interaction. We describe a dialogue evaluation plan for a multi-character virtual reality training simulation. A multi-component evaluation plan is presented, including user satisfaction, intended task completion, recognition rate, and a new annotation scheme for appropriateness. Preliminary results for formative tests are also presented.",2004
passonneau-etal-2010-learning,https://aclanthology.org/N10-1126,0,,,,,,,"Learning about Voice Search for Spoken Dialogue Systems. In a Wizard-of-Oz experiment with multiple wizard subjects, each wizard viewed automated speech recognition (ASR) results for utterances whose interpretation is critical to task success: requests for books by title from a library database. To avoid non-understandings, the wizard directly queried the application database with the ASR hypothesis (voice search). To learn how to avoid misunderstandings, we investigated how wizards dealt with uncertainty in voice search results. Wizards were quite successful at selecting the correct title from query results that included a match. The most successful wizard could also tell when the query results did not contain the requested title. Our learned models of the best wizard's behavior combine features available to wizards with some that are not, such as recognition confidence and acoustic model scores.",Learning about Voice Search for Spoken Dialogue Systems,"In a Wizard-of-Oz experiment with multiple wizard subjects, each wizard viewed automated speech recognition (ASR) results for utterances whose interpretation is critical to task success: requests for books by title from a library database. To avoid non-understandings, the wizard directly queried the application database with the ASR hypothesis (voice search). To learn how to avoid misunderstandings, we investigated how wizards dealt with uncertainty in voice search results. Wizards were quite successful at selecting the correct title from query results that included a match. The most successful wizard could also tell when the query results did not contain the requested title. Our learned models of the best wizard's behavior combine features available to wizards with some that are not, such as recognition confidence and acoustic model scores.",Learning about Voice Search for Spoken Dialogue Systems,"In a Wizard-of-Oz experiment with multiple wizard subjects, each wizard viewed automated speech recognition (ASR) results for utterances whose interpretation is critical to task success: requests for books by title from a library database. To avoid non-understandings, the wizard directly queried the application database with the ASR hypothesis (voice search). To learn how to avoid misunderstandings, we investigated how wizards dealt with uncertainty in voice search results. Wizards were quite successful at selecting the correct title from query results that included a match. The most successful wizard could also tell when the query results did not contain the requested title. Our learned models of the best wizard's behavior combine features available to wizards with some that are not, such as recognition confidence and acoustic model scores.","This research was supported by the National Science Foundation under IIS-0745369, IIS-084966, and IIS-0744904. We thank the anonymous reviewers, the Heiskell Library, our CMU collaborators, our statistical wizard Liana Epstein, and our enthusiastic undergraduate research assistants.","Learning about Voice Search for Spoken Dialogue Systems. In a Wizard-of-Oz experiment with multiple wizard subjects, each wizard viewed automated speech recognition (ASR) results for utterances whose interpretation is critical to task success: requests for books by title from a library database. To avoid non-understandings, the wizard directly queried the application database with the ASR hypothesis (voice search). 
To learn how to avoid misunderstandings, we investigated how wizards dealt with uncertainty in voice search results. Wizards were quite successful at selecting the correct title from query results that included a match. The most successful wizard could also tell when the query results did not contain the requested title. Our learned models of the best wizard's behavior combine features available to wizards with some that are not, such as recognition confidence and acoustic model scores.",2010
aly-etal-2021-fact,https://aclanthology.org/2021.fever-1.1,1,,,,disinformation_and_fake_news,,,"The Fact Extraction and VERification Over Unstructured and Structured information (FEVEROUS) Shared Task. The Fact Extraction and VERification Over Unstructured and Structured information (FEVEROUS) shared task, asks participating systems to determine whether human-authored claims are SUPPORTED or REFUTED based on evidence retrieved from Wikipedia (or NOTENOUGHINFO if the claim cannot be verified). Compared to the FEVER 2018 shared task, the main challenge is the addition of structured data (tables and lists) as a source of evidence. The claims in the FEVEROUS dataset can be verified using only structured evidence, only unstructured evidence, or a mixture of both. Submissions are evaluated using the FEVEROUS score that combines label accuracy and evidence retrieval. Unlike FEVER 2018 (Thorne et al., 2018a), FEVEROUS requires partial evidence to be returned for NOTENOUGHINFO claims, and the claims are longer and thus more complex. The shared task received 13 entries, six of which were able to beat the baseline system. The winning team was ""Bust a move!"", achieving a FEVEROUS score of 27% (+9% compared to the baseline). In this paper we describe the shared task, present the full results and highlight commonalities and innovations among the participating systems. Claim: In the 2018 Naples general election, Roberto Fico, an Italian politician and member of the Five Star Movement, received 57,119 votes with 57.6 percent of the total votes.",The Fact Extraction and {VER}ification Over Unstructured and Structured information ({FEVEROUS}) Shared Task,"The Fact Extraction and VERification Over Unstructured and Structured information (FEVEROUS) shared task, asks participating systems to determine whether human-authored claims are SUPPORTED or REFUTED based on evidence retrieved from Wikipedia (or NOTENOUGHINFO if the claim cannot be verified). Compared to the FEVER 2018 shared task, the main challenge is the addition of structured data (tables and lists) as a source of evidence. The claims in the FEVEROUS dataset can be verified using only structured evidence, only unstructured evidence, or a mixture of both. Submissions are evaluated using the FEVEROUS score that combines label accuracy and evidence retrieval. Unlike FEVER 2018 (Thorne et al., 2018a), FEVEROUS requires partial evidence to be returned for NOTENOUGHINFO claims, and the claims are longer and thus more complex. The shared task received 13 entries, six of which were able to beat the baseline system. The winning team was ""Bust a move!"", achieving a FEVEROUS score of 27% (+9% compared to the baseline). In this paper we describe the shared task, present the full results and highlight commonalities and innovations among the participating systems. Claim: In the 2018 Naples general election, Roberto Fico, an Italian politician and member of the Five Star Movement, received 57,119 votes with 57.6 percent of the total votes.",The Fact Extraction and VERification Over Unstructured and Structured information (FEVEROUS) Shared Task,"The Fact Extraction and VERification Over Unstructured and Structured information (FEVEROUS) shared task, asks participating systems to determine whether human-authored claims are SUPPORTED or REFUTED based on evidence retrieved from Wikipedia (or NOTENOUGHINFO if the claim cannot be verified). 
Compared to the FEVER 2018 shared task, the main challenge is the addition of structured data (tables and lists) as a source of evidence. The claims in the FEVEROUS dataset can be verified using only structured evidence, only unstructured evidence, or a mixture of both. Submissions are evaluated using the FEVEROUS score that combines label accuracy and evidence retrieval. Unlike FEVER 2018 (Thorne et al., 2018a), FEVEROUS requires partial evidence to be returned for NOTENOUGHINFO claims, and the claims are longer and thus more complex. The shared task received 13 entries, six of which were able to beat the baseline system. The winning team was ""Bust a move!"", achieving a FEVEROUS score of 27% (+9% compared to the baseline). In this paper we describe the shared task, present the full results and highlight commonalities and innovations among the participating systems. Claim: In the 2018 Naples general election, Roberto Fico, an Italian politician and member of the Five Star Movement, received 57,119 votes with 57.6 percent of the total votes.","We would like to thank Amazon for sponsoring the dataset generation and supporting the FEVER workshop and the FEVEROUS shared task. Rami Aly is supported by the Engineering and Physical Sciences Research Council Doctoral Training Partnership (EPSRC). James Thorne is supported by an Amazon Alexa Graduate Research Fellowship. Zhijiang Guo, Michael Schlichtkrull and Andreas Vlachos are supported by the ERC grant AVeriTeC (GA 865958).","The Fact Extraction and VERification Over Unstructured and Structured information (FEVEROUS) Shared Task. The Fact Extraction and VERification Over Unstructured and Structured information (FEVEROUS) shared task, asks participating systems to determine whether human-authored claims are SUPPORTED or REFUTED based on evidence retrieved from Wikipedia (or NOTENOUGHINFO if the claim cannot be verified). Compared to the FEVER 2018 shared task, the main challenge is the addition of structured data (tables and lists) as a source of evidence. The claims in the FEVEROUS dataset can be verified using only structured evidence, only unstructured evidence, or a mixture of both. Submissions are evaluated using the FEVEROUS score that combines label accuracy and evidence retrieval. Unlike FEVER 2018 (Thorne et al., 2018a), FEVEROUS requires partial evidence to be returned for NOTENOUGHINFO claims, and the claims are longer and thus more complex. The shared task received 13 entries, six of which were able to beat the baseline system. The winning team was ""Bust a move!"", achieving a FEVEROUS score of 27% (+9% compared to the baseline). In this paper we describe the shared task, present the full results and highlight commonalities and innovations among the participating systems. Claim: In the 2018 Naples general election, Roberto Fico, an Italian politician and member of the Five Star Movement, received 57,119 votes with 57.6 percent of the total votes.",2021
utiyama-etal-2009-mining,https://aclanthology.org/2009.mtsummit-papers.18,0,,,,,,,Mining Parallel Texts from Mixed-Language Web Pages. We propose to mine parallel texts from mixed-language web pages. We define a mixed-language web page as a web page consisting of (at least) two languages. We mined Japanese-English parallel texts from mixed-language web pages. We presented the statistics for extracted parallel texts and conducted machine translation experiments. These statistics and experiments showed that mixed-language web pages are rich sources of parallel texts.,Mining Parallel Texts from Mixed-Language Web Pages,We propose to mine parallel texts from mixed-language web pages. We define a mixed-language web page as a web page consisting of (at least) two languages. We mined Japanese-English parallel texts from mixed-language web pages. We presented the statistics for extracted parallel texts and conducted machine translation experiments. These statistics and experiments showed that mixed-language web pages are rich sources of parallel texts.,Mining Parallel Texts from Mixed-Language Web Pages,We propose to mine parallel texts from mixed-language web pages. We define a mixed-language web page as a web page consisting of (at least) two languages. We mined Japanese-English parallel texts from mixed-language web pages. We presented the statistics for extracted parallel texts and conducted machine translation experiments. These statistics and experiments showed that mixed-language web pages are rich sources of parallel texts.,,Mining Parallel Texts from Mixed-Language Web Pages. We propose to mine parallel texts from mixed-language web pages. We define a mixed-language web page as a web page consisting of (at least) two languages. We mined Japanese-English parallel texts from mixed-language web pages. We presented the statistics for extracted parallel texts and conducted machine translation experiments. These statistics and experiments showed that mixed-language web pages are rich sources of parallel texts.,2009
zhang-etal-2021-crafting,https://aclanthology.org/2021.acl-long.153,0,,,,,,,"Crafting Adversarial Examples for Neural Machine Translation. Effective adversary generation for neural machine translation (NMT) is a crucial prerequisite for building robust machine translation systems. In this work, we investigate veritable evaluations of NMT adversarial attacks, and propose a novel method to craft NMT adversarial examples. We first show the current NMT adversarial attacks may be improperly estimated by the commonly used monodirectional translation, and we propose to leverage the round-trip translation technique to build valid metrics for evaluating NMT adversarial attacks. Our intuition is that an effective NMT adversarial example, which imposes minor shifting on the source and degrades the translation dramatically, would naturally lead to a semantic-destroyed round-trip translation result. We then propose a promising black-box attack method called Word Saliency speedup Local Search (WSLS) that could effectively attack the mainstream NMT architectures. Comprehensive experiments demonstrate that the proposed metrics could accurately evaluate the attack effectiveness, and the proposed WSLS could significantly break the state-of-art NMT models with small perturbation. Besides, WSLS exhibits strong transferability on attacking Baidu and Bing online translators.",Crafting Adversarial Examples for Neural Machine Translation,"Effective adversary generation for neural machine translation (NMT) is a crucial prerequisite for building robust machine translation systems. In this work, we investigate veritable evaluations of NMT adversarial attacks, and propose a novel method to craft NMT adversarial examples. We first show the current NMT adversarial attacks may be improperly estimated by the commonly used monodirectional translation, and we propose to leverage the round-trip translation technique to build valid metrics for evaluating NMT adversarial attacks. Our intuition is that an effective NMT adversarial example, which imposes minor shifting on the source and degrades the translation dramatically, would naturally lead to a semantic-destroyed round-trip translation result. We then propose a promising black-box attack method called Word Saliency speedup Local Search (WSLS) that could effectively attack the mainstream NMT architectures. Comprehensive experiments demonstrate that the proposed metrics could accurately evaluate the attack effectiveness, and the proposed WSLS could significantly break the state-of-art NMT models with small perturbation. Besides, WSLS exhibits strong transferability on attacking Baidu and Bing online translators.",Crafting Adversarial Examples for Neural Machine Translation,"Effective adversary generation for neural machine translation (NMT) is a crucial prerequisite for building robust machine translation systems. In this work, we investigate veritable evaluations of NMT adversarial attacks, and propose a novel method to craft NMT adversarial examples. We first show the current NMT adversarial attacks may be improperly estimated by the commonly used monodirectional translation, and we propose to leverage the round-trip translation technique to build valid metrics for evaluating NMT adversarial attacks. Our intuition is that an effective NMT adversarial example, which imposes minor shifting on the source and degrades the translation dramatically, would naturally lead to a semantic-destroyed round-trip translation result. 
We then propose a promising black-box attack method called Word Saliency speedup Local Search (WSLS) that could effectively attack the mainstream NMT architectures. Comprehensive experiments demonstrate that the proposed metrics could accurately evaluate the attack effectiveness, and the proposed WSLS could significantly break the state-of-the-art NMT models with small perturbation. Besides, WSLS exhibits strong transferability on attacking Baidu and Bing online translators.",This work is supported by National Natural Science Foundation (62076105) and Microsoft Research Asia Collaborative Research Fund (99245180). We thank Xiaosen Wang for helpful suggestions on our work.,"Crafting Adversarial Examples for Neural Machine Translation. Effective adversary generation for neural machine translation (NMT) is a crucial prerequisite for building robust machine translation systems. In this work, we investigate veritable evaluations of NMT adversarial attacks, and propose a novel method to craft NMT adversarial examples. We first show the current NMT adversarial attacks may be improperly estimated by the commonly used monodirectional translation, and we propose to leverage the round-trip translation technique to build valid metrics for evaluating NMT adversarial attacks. Our intuition is that an effective NMT adversarial example, which imposes minor shifting on the source and degrades the translation dramatically, would naturally lead to a semantic-destroyed round-trip translation result. We then propose a promising black-box attack method called Word Saliency speedup Local Search (WSLS) that could effectively attack the mainstream NMT architectures. Comprehensive experiments demonstrate that the proposed metrics could accurately evaluate the attack effectiveness, and the proposed WSLS could significantly break the state-of-the-art NMT models with small perturbation. Besides, WSLS exhibits strong transferability on attacking Baidu and Bing online translators.",2021
malmasi-etal-2015-oracle,https://aclanthology.org/W15-0620,0,,,,,,,"Oracle and Human Baselines for Native Language Identification. We examine different ensemble methods, including an oracle, to estimate the upper-limit of classification accuracy for Native Language Identification (NLI). The oracle outperforms state-of-the-art systems by over 10% and results indicate that for many misclassified texts the correct class label receives a significant portion of the ensemble votes, often being the runner-up. We also present a pilot study of human performance for NLI, the first such experiment. While some participants achieve modest results on our simplified setup with 5 L1s, they did not outperform our NLI system, and this performance gap is likely to widen on the standard NLI setup.",Oracle and Human Baselines for Native Language Identification,"We examine different ensemble methods, including an oracle, to estimate the upper-limit of classification accuracy for Native Language Identification (NLI). The oracle outperforms state-of-the-art systems by over 10% and results indicate that for many misclassified texts the correct class label receives a significant portion of the ensemble votes, often being the runner-up. We also present a pilot study of human performance for NLI, the first such experiment. While some participants achieve modest results on our simplified setup with 5 L1s, they did not outperform our NLI system, and this performance gap is likely to widen on the standard NLI setup.",Oracle and Human Baselines for Native Language Identification,"We examine different ensemble methods, including an oracle, to estimate the upper-limit of classification accuracy for Native Language Identification (NLI). The oracle outperforms state-of-the-art systems by over 10% and results indicate that for many misclassified texts the correct class label receives a significant portion of the ensemble votes, often being the runner-up. We also present a pilot study of human performance for NLI, the first such experiment. While some participants achieve modest results on our simplified setup with 5 L1s, they did not outperform our NLI system, and this performance gap is likely to widen on the standard NLI setup.","We would like to thank the three anonymous reviewers as well as our raters: Martin Chodorow, Carla Parra Escartin, Marte Kvamme, Aasish Pappu, Dragomir Radev, Patti Spinner, Robert Stine, Kapil Thadani, Alissa Vik and Gloria Zen.","Oracle and Human Baselines for Native Language Identification. We examine different ensemble methods, including an oracle, to estimate the upper-limit of classification accuracy for Native Language Identification (NLI). The oracle outperforms state-of-the-art systems by over 10% and results indicate that for many misclassified texts the correct class label receives a significant portion of the ensemble votes, often being the runner-up. We also present a pilot study of human performance for NLI, the first such experiment. While some participants achieve modest results on our simplified setup with 5 L1s, they did not outperform our NLI system, and this performance gap is likely to widen on the standard NLI setup.",2015
wang-lee-2018-learning,https://aclanthology.org/D18-1451,0,,,,,,,"Learning to Encode Text as Human-Readable Summaries using Generative Adversarial Networks. Auto-encoders compress input data into a latent-space representation and reconstruct the original data from the representation. This latent representation is not easily interpreted by humans. In this paper, we propose training an auto-encoder that encodes input text into human-readable sentences, and unpaired abstractive summarization is thereby achieved. The auto-encoder is composed of a generator and a reconstructor. The generator encodes the input text into a shorter word sequence, and the reconstructor recovers the generator input from the generator output. To make the generator output human-readable, a discriminator restricts the output of the generator to resemble human-written sentences. By taking the generator output as the summary of the input text, abstractive summarization is achieved without document-summary pairs as training data. Promising results are shown on both English and Chinese corpora.",Learning to Encode Text as Human-Readable Summaries using Generative Adversarial Networks,"Auto-encoders compress input data into a latent-space representation and reconstruct the original data from the representation. This latent representation is not easily interpreted by humans. In this paper, we propose training an auto-encoder that encodes input text into human-readable sentences, and unpaired abstractive summarization is thereby achieved. The auto-encoder is composed of a generator and a reconstructor. The generator encodes the input text into a shorter word sequence, and the reconstructor recovers the generator input from the generator output. To make the generator output human-readable, a discriminator restricts the output of the generator to resemble human-written sentences. By taking the generator output as the summary of the input text, abstractive summarization is achieved without document-summary pairs as training data. Promising results are shown on both English and Chinese corpora.",Learning to Encode Text as Human-Readable Summaries using Generative Adversarial Networks,"Auto-encoders compress input data into a latent-space representation and reconstruct the original data from the representation. This latent representation is not easily interpreted by humans. In this paper, we propose training an auto-encoder that encodes input text into human-readable sentences, and unpaired abstractive summarization is thereby achieved. The auto-encoder is composed of a generator and a reconstructor. The generator encodes the input text into a shorter word sequence, and the reconstructor recovers the generator input from the generator output. To make the generator output human-readable, a discriminator restricts the output of the generator to resemble human-written sentences. By taking the generator output as the summary of the input text, abstractive summarization is achieved without document-summary pairs as training data. Promising results are shown on both English and Chinese corpora.",,"Learning to Encode Text as Human-Readable Summaries using Generative Adversarial Networks. Auto-encoders compress input data into a latent-space representation and reconstruct the original data from the representation. This latent representation is not easily interpreted by humans. 
In this paper, we propose training an auto-encoder that encodes input text into human-readable sentences, and unpaired abstractive summarization is thereby achieved. The auto-encoder is composed of a generator and a reconstructor. The generator encodes the input text into a shorter word sequence, and the reconstructor recovers the generator input from the generator output. To make the generator output human-readable, a discriminator restricts the output of the generator to resemble human-written sentences. By taking the generator output as the summary of the input text, abstractive summarization is achieved without document-summary pairs as training data. Promising results are shown on both English and Chinese corpora.",2018
li-etal-2021-conversations,https://aclanthology.org/2021.acl-long.11,0,,,,,,,"Conversations Are Not Flat: Modeling the Dynamic Information Flow across Dialogue Utterances. Nowadays, open-domain dialogue models can generate acceptable responses according to the historical context based on the large-scale pretrained language models. However, they generally concatenate the dialogue history directly as the model input to predict the response, which we named as the flat pattern and ignores the dynamic information flow across dialogue utterances. In this work, we propose the DialoFlow model, in which we introduce a dynamic flow mechanism to model the context flow, and design three training objectives to capture the information dynamics across dialogue utterances by addressing the semantic influence brought about by each utterance in large-scale pre-training. Experiments on the multi-reference Reddit Dataset and DailyDialog Dataset demonstrate that our DialoFlow significantly outperforms the DialoGPT on the dialogue generation task. Besides, we propose the Flow score, an effective automatic metric for evaluating interactive human-bot conversation quality based on the pre-trained DialoFlow, which presents high chatbot-level correlation (r = 0.9) with human ratings among 11 chatbots. Code and pre-trained models will be public. 1",Conversations Are Not Flat: Modeling the Dynamic Information Flow across Dialogue Utterances,"Nowadays, open-domain dialogue models can generate acceptable responses according to the historical context based on the large-scale pretrained language models. However, they generally concatenate the dialogue history directly as the model input to predict the response, which we named as the flat pattern and ignores the dynamic information flow across dialogue utterances. In this work, we propose the DialoFlow model, in which we introduce a dynamic flow mechanism to model the context flow, and design three training objectives to capture the information dynamics across dialogue utterances by addressing the semantic influence brought about by each utterance in large-scale pre-training. Experiments on the multi-reference Reddit Dataset and DailyDialog Dataset demonstrate that our DialoFlow significantly outperforms the DialoGPT on the dialogue generation task. Besides, we propose the Flow score, an effective automatic metric for evaluating interactive human-bot conversation quality based on the pre-trained DialoFlow, which presents high chatbot-level correlation (r = 0.9) with human ratings among 11 chatbots. Code and pre-trained models will be public. 1",Conversations Are Not Flat: Modeling the Dynamic Information Flow across Dialogue Utterances,"Nowadays, open-domain dialogue models can generate acceptable responses according to the historical context based on the large-scale pretrained language models. However, they generally concatenate the dialogue history directly as the model input to predict the response, which we named as the flat pattern and ignores the dynamic information flow across dialogue utterances. In this work, we propose the DialoFlow model, in which we introduce a dynamic flow mechanism to model the context flow, and design three training objectives to capture the information dynamics across dialogue utterances by addressing the semantic influence brought about by each utterance in large-scale pre-training. 
Experiments on the multi-reference Reddit Dataset and DailyDialog Dataset demonstrate that our DialoFlow significantly outperforms the DialoGPT on the dialogue generation task. Besides, we propose the Flow score, an effective automatic metric for evaluating interactive human-bot conversation quality based on the pre-trained DialoFlow, which presents high chatbot-level correlation (r = 0.9) with human ratings among 11 chatbots. Code and pre-trained models will be public. 1",We sincerely thank the anonymous reviewers for their thorough reviewing and valuable suggestions. This work is supported by National Key R&D Program of China (NO. 2018AAA0102502).,"Conversations Are Not Flat: Modeling the Dynamic Information Flow across Dialogue Utterances. Nowadays, open-domain dialogue models can generate acceptable responses according to the historical context based on the large-scale pretrained language models. However, they generally concatenate the dialogue history directly as the model input to predict the response, which we named as the flat pattern and ignores the dynamic information flow across dialogue utterances. In this work, we propose the DialoFlow model, in which we introduce a dynamic flow mechanism to model the context flow, and design three training objectives to capture the information dynamics across dialogue utterances by addressing the semantic influence brought about by each utterance in large-scale pre-training. Experiments on the multi-reference Reddit Dataset and DailyDialog Dataset demonstrate that our DialoFlow significantly outperforms the DialoGPT on the dialogue generation task. Besides, we propose the Flow score, an effective automatic metric for evaluating interactive human-bot conversation quality based on the pre-trained DialoFlow, which presents high chatbot-level correlation (r = 0.9) with human ratings among 11 chatbots. Code and pre-trained models will be public. 1",2021
tomabechi-1991-quasi,https://aclanthology.org/1991.iwpt-1.19,0,,,,,,,Quasi-Destructive Graph Unification. Graph unification is the most expensive part of unification-based grammar parsing. It often takes over 90% of the total parsing time of a sentence. We focus on two speed-up,Quasi-Destructive Graph Unification,Graph unification is the most expensive part of unification-based grammar parsing. It often takes over 90% of the total parsing time of a sentence. We focus on two speed-up,Quasi-Destructive Graph Unification,Graph unification is the most expensive part of unification-based grammar parsing. It often takes over 90% of the total parsing time of a sentence. We focus on two speed-up,,Quasi-Destructive Graph Unification. Graph unification is the most expensive part of unification-based grammar parsing. It often takes over 90% of the total parsing time of a sentence. We focus on two speed-up,1991
avramidis-etal-2020-fine,https://aclanthology.org/2020.wmt-1.38,0,,,,,,,"Fine-grained linguistic evaluation for state-of-the-art Machine Translation. This paper describes a test suite submission providing detailed statistics of linguistic performance for the state-of-the-art German-English systems of the Fifth Conference of Machine Translation (WMT20). The analysis covers 107 phenomena organized in 14 categories based on about 5,500 test items, including a manual annotation effort of 45 person hours. Two systems (Tohoku and VolcanTrans) appear to have significantly better test suite accuracy than the others, although the best system of WMT20 is not significantly better than the one from WMT19 in a macro-average. Additionally, we identify some linguistic phenomena where all systems suffer (such as idioms, resultative predicates and pluperfect), but we are also able to identify particular weaknesses for individual systems (such as quotation marks, lexical ambiguity and sluicing). Most of the systems of WMT19 which submitted new versions this year show improvements.",Fine-grained linguistic evaluation for state-of-the-art Machine Translation,"This paper describes a test suite submission providing detailed statistics of linguistic performance for the state-of-the-art German-English systems of the Fifth Conference of Machine Translation (WMT20). The analysis covers 107 phenomena organized in 14 categories based on about 5,500 test items, including a manual annotation effort of 45 person hours. Two systems (Tohoku and VolcanTrans) appear to have significantly better test suite accuracy than the others, although the best system of WMT20 is not significantly better than the one from WMT19 in a macro-average. Additionally, we identify some linguistic phenomena where all systems suffer (such as idioms, resultative predicates and pluperfect), but we are also able to identify particular weaknesses for individual systems (such as quotation marks, lexical ambiguity and sluicing). Most of the systems of WMT19 which submitted new versions this year show improvements.",Fine-grained linguistic evaluation for state-of-the-art Machine Translation,"This paper describes a test suite submission providing detailed statistics of linguistic performance for the state-of-the-art German-English systems of the Fifth Conference of Machine Translation (WMT20). The analysis covers 107 phenomena organized in 14 categories based on about 5,500 test items, including a manual annotation effort of 45 person hours. Two systems (Tohoku and VolcanTrans) appear to have significantly better test suite accuracy than the others, although the best system of WMT20 is not significantly better than the one from WMT19 in a macro-average. Additionally, we identify some linguistic phenomena where all systems suffer (such as idioms, resultative predicates and pluperfect), but we are also able to identify particular weaknesses for individual systems (such as quotation marks, lexical ambiguity and sluicing). Most of the systems of WMT19 which submitted new versions this year show improvements.",This research was supported by the German Research Foundation through the project TextQ and by the German Federal Ministry of Education through the project SocialWear.,"Fine-grained linguistic evaluation for state-of-the-art Machine Translation. This paper describes a test suite submission providing detailed statistics of linguistic performance for the state-of-the-art German-English systems of the Fifth Conference of Machine Translation (WMT20). 
The analysis covers 107 phenomena organized in 14 categories based on about 5,500 test items, including a manual annotation effort of 45 person hours. Two systems (Tohoku and VolcanTrans) appear to have significantly better test suite accuracy than the others, although the best system of WMT20 is not significantly better than the one from WMT19 in a macro-average. Additionally, we identify some linguistic phenomena where all systems suffer (such as idioms, resultative predicates and pluperfect), but we are also able to identify particular weaknesses for individual systems (such as quotation marks, lexical ambiguity and sluicing). Most of the systems of WMT19 which submitted new versions this year show improvements.",2020
chakravarthi-etal-2020-corpus,https://aclanthology.org/2020.sltu-1.28,0,,,,,,,"Corpus Creation for Sentiment Analysis in Code-Mixed Tamil-English Text. Understanding the sentiment of a comment from a video or an image is an essential task in many applications. Sentiment analysis of a text can be useful for various decision-making processes. One such application is to analyse the popular sentiments of videos on social media based on viewer comments. However, comments from social media do not follow strict rules of grammar, and they contain mixing of more than one language, often written in non-native scripts. Non-availability of annotated code-mixed data for a low-resourced language like Tamil also adds difficulty to this problem. To overcome this, we created a gold standard Tamil-English code-switched, sentiment-annotated corpus containing 15,744 comment posts from YouTube. In this paper, we describe the process of creating the corpus and assigning polarities. We present inter-annotator agreement and show the results of sentiment analysis trained on this corpus as a benchmark.",Corpus Creation for Sentiment Analysis in Code-Mixed {T}amil-{E}nglish Text,"Understanding the sentiment of a comment from a video or an image is an essential task in many applications. Sentiment analysis of a text can be useful for various decision-making processes. One such application is to analyse the popular sentiments of videos on social media based on viewer comments. However, comments from social media do not follow strict rules of grammar, and they contain mixing of more than one language, often written in non-native scripts. Non-availability of annotated code-mixed data for a low-resourced language like Tamil also adds difficulty to this problem. To overcome this, we created a gold standard Tamil-English code-switched, sentiment-annotated corpus containing 15,744 comment posts from YouTube. In this paper, we describe the process of creating the corpus and assigning polarities. We present inter-annotator agreement and show the results of sentiment analysis trained on this corpus as a benchmark.",Corpus Creation for Sentiment Analysis in Code-Mixed Tamil-English Text,"Understanding the sentiment of a comment from a video or an image is an essential task in many applications. Sentiment analysis of a text can be useful for various decision-making processes. One such application is to analyse the popular sentiments of videos on social media based on viewer comments. However, comments from social media do not follow strict rules of grammar, and they contain mixing of more than one language, often written in non-native scripts. Non-availability of annotated code-mixed data for a low-resourced language like Tamil also adds difficulty to this problem. To overcome this, we created a gold standard Tamil-English code-switched, sentiment-annotated corpus containing 15,744 comment posts from YouTube. In this paper, we describe the process of creating the corpus and assigning polarities. We present inter-annotator agreement and show the results of sentiment analysis trained on this corpus as a benchmark.",,"Corpus Creation for Sentiment Analysis in Code-Mixed Tamil-English Text. Understanding the sentiment of a comment from a video or an image is an essential task in many applications. Sentiment analysis of a text can be useful for various decision-making processes. One such application is to analyse the popular sentiments of videos on social media based on viewer comments. 
However, comments from social media do not follow strict rules of grammar, and they contain mixing of more than one language, often written in non-native scripts. Non-availability of annotated code-mixed data for a low-resourced language like Tamil also adds difficulty to this problem. To overcome this, we created a gold standard Tamil-English code-switched, sentiment-annotated corpus containing 15,744 comment posts from YouTube. In this paper, we describe the process of creating the corpus and assigning polarities. We present inter-annotator agreement and show the results of sentiment analysis trained on this corpus as a benchmark.",2020
schmid-schulte-im-walde-2000-robust,https://aclanthology.org/C00-2105,0,,,,,,,"Robust German Noun Chunking With a Probabilistic Context-Free Grammar. We present a noun chunker for German which is based on a head-lexicalised probabilistic context-free grammar. A manually developed grammar was semi-automatically extended with robustness rules in order to allow parsing of unrestricted text. The model parameters were learned from unlabelled training data by a probabilistic context-free parser. For extracting noun chunks, the parser generates all possible noun chunk analyses, scores them with a novel algorithm which maximizes the best chunk sequence criterion, and chooses the most probable chunk sequence. An evaluation of the chunker on 2,140 hand-annotated noun chunks yielded 92% recall and 93% precision.",Robust {G}erman Noun Chunking With a Probabilistic Context-Free Grammar,"We present a noun chunker for German which is based on a head-lexicalised probabilistic context-free grammar. A manually developed grammar was semi-automatically extended with robustness rules in order to allow parsing of unrestricted text. The model parameters were learned from unlabelled training data by a probabilistic context-free parser. For extracting noun chunks, the parser generates all possible noun chunk analyses, scores them with a novel algorithm which maximizes the best chunk sequence criterion, and chooses the most probable chunk sequence. An evaluation of the chunker on 2,140 hand-annotated noun chunks yielded 92% recall and 93% precision.",Robust German Noun Chunking With a Probabilistic Context-Free Grammar,"We present a noun chunker for German which is based on a head-lexicalised probabilistic context-free grammar. A manually developed grammar was semi-automatically extended with robustness rules in order to allow parsing of unrestricted text. The model parameters were learned from unlabelled training data by a probabilistic context-free parser. For extracting noun chunks, the parser generates all possible noun chunk analyses, scores them with a novel algorithm which maximizes the best chunk sequence criterion, and chooses the most probable chunk sequence. An evaluation of the chunker on 2,140 hand-annotated noun chunks yielded 92% recall and 93% precision.",,"Robust German Noun Chunking With a Probabilistic Context-Free Grammar. We present a noun chunker for German which is based on a head-lexicalised probabilistic context-free grammar. A manually developed grammar was semi-automatically extended with robustness rules in order to allow parsing of unrestricted text. The model parameters were learned from unlabelled training data by a probabilistic context-free parser. For extracting noun chunks, the parser generates all possible noun chunk analyses, scores them with a novel algorithm which maximizes the best chunk sequence criterion, and chooses the most probable chunk sequence. An evaluation of the chunker on 2,140 hand-annotated noun chunks yielded 92% recall and 93% precision.",2000
bredenkamp-etal-2000-looking,http://www.lrec-conf.org/proceedings/lrec2000/pdf/299.pdf,0,,,,,,,"Looking for Errors: A Declarative Formalism for Resource-adaptive Language Checking. The paper describes a phenomenon-based approach to grammar checking, which draws on the integration of different shallow NLP technologies, including morphological and POS taggers, as well as probabilistic and rule-based partial parsers. We present a declarative specification formalism for grammar checking and controlled language applications which greatly facilitates the development of checking components.",Looking for Errors: A Declarative Formalism for Resource-adaptive Language Checking,"The paper describes a phenomenon-based approach to grammar checking, which draws on the integration of different shallow NLP technologies, including morphological and POS taggers, as well as probabilistic and rule-based partial parsers. We present a declarative specification formalism for grammar checking and controlled language applications which greatly facilitates the development of checking components.",Looking for Errors: A Declarative Formalism for Resource-adaptive Language Checking,"The paper describes a phenomenon-based approach to grammar checking, which draws on the integration of different shallow NLP technologies, including morphological and POS taggers, as well as probabilistic and rule-based partial parsers. We present a declarative specification formalism for grammar checking and controlled language applications which greatly facilitates the development of checking components.",,"Looking for Errors: A Declarative Formalism for Resource-adaptive Language Checking. The paper describes a phenomenon-based approach to grammar checking, which draws on the integration of different shallow NLP technologies, including morphological and POS taggers, as well as probabilistic and rule-based partial parsers. We present a declarative specification formalism for grammar checking and controlled language applications which greatly facilitates the development of checking components.",2000
senellart-1999-semi,https://aclanthology.org/1999.eamt-1.5,0,,,,,,,Semi-automatic acquisition of lexical resources for new languages or new domains. ,Semi-automatic acquisition of lexical resources for new languages or new domains,,Semi-automatic acquisition of lexical resources for new languages or new domains,,,Semi-automatic acquisition of lexical resources for new languages or new domains. ,1999
el-mekki-etal-2021-domain,https://aclanthology.org/2021.naacl-main.226,0,,,,,,,"Domain Adaptation for Arabic Cross-Domain and Cross-Dialect Sentiment Analysis from Contextualized Word Embedding. Finetuning deep pre-trained language models has shown state-of-the-art performances on a wide range of Natural Language Processing (NLP) applications. Nevertheless, their generalization performance drops under domain shift. In the case of Arabic language, diglossia makes building and annotating corpora for each dialect and/or domain a more challenging task. Unsupervised Domain Adaptation tackles this issue by transferring the learned knowledge from labeled source domain data to unlabeled target domain data. In this paper, we propose a new unsupervised domain adaptation method for Arabic cross-domain and crossdialect sentiment analysis from Contextualized Word Embedding. Several experiments are performed adopting the coarse-grained and the fine-grained taxonomies of Arabic dialects. The obtained results show that our method yields very promising results and outperforms several domain adaptation methods for most of the evaluated datasets. On average, our method increases the performance by an improvement rate of 20.8% over the zero-shot transfer learning from BERT.",Domain Adaptation for {A}rabic Cross-Domain and Cross-Dialect Sentiment Analysis from Contextualized Word Embedding,"Finetuning deep pre-trained language models has shown state-of-the-art performances on a wide range of Natural Language Processing (NLP) applications. Nevertheless, their generalization performance drops under domain shift. In the case of Arabic language, diglossia makes building and annotating corpora for each dialect and/or domain a more challenging task. Unsupervised Domain Adaptation tackles this issue by transferring the learned knowledge from labeled source domain data to unlabeled target domain data. In this paper, we propose a new unsupervised domain adaptation method for Arabic cross-domain and crossdialect sentiment analysis from Contextualized Word Embedding. Several experiments are performed adopting the coarse-grained and the fine-grained taxonomies of Arabic dialects. The obtained results show that our method yields very promising results and outperforms several domain adaptation methods for most of the evaluated datasets. On average, our method increases the performance by an improvement rate of 20.8% over the zero-shot transfer learning from BERT.",Domain Adaptation for Arabic Cross-Domain and Cross-Dialect Sentiment Analysis from Contextualized Word Embedding,"Finetuning deep pre-trained language models has shown state-of-the-art performances on a wide range of Natural Language Processing (NLP) applications. Nevertheless, their generalization performance drops under domain shift. In the case of Arabic language, diglossia makes building and annotating corpora for each dialect and/or domain a more challenging task. Unsupervised Domain Adaptation tackles this issue by transferring the learned knowledge from labeled source domain data to unlabeled target domain data. In this paper, we propose a new unsupervised domain adaptation method for Arabic cross-domain and crossdialect sentiment analysis from Contextualized Word Embedding. Several experiments are performed adopting the coarse-grained and the fine-grained taxonomies of Arabic dialects. The obtained results show that our method yields very promising results and outperforms several domain adaptation methods for most of the evaluated datasets. 
On average, our method increases the performance by an improvement rate of 20.8% over the zero-shot transfer learning from BERT.",,"Domain Adaptation for Arabic Cross-Domain and Cross-Dialect Sentiment Analysis from Contextualized Word Embedding. Finetuning deep pre-trained language models has shown state-of-the-art performances on a wide range of Natural Language Processing (NLP) applications. Nevertheless, their generalization performance drops under domain shift. In the case of Arabic language, diglossia makes building and annotating corpora for each dialect and/or domain a more challenging task. Unsupervised Domain Adaptation tackles this issue by transferring the learned knowledge from labeled source domain data to unlabeled target domain data. In this paper, we propose a new unsupervised domain adaptation method for Arabic cross-domain and crossdialect sentiment analysis from Contextualized Word Embedding. Several experiments are performed adopting the coarse-grained and the fine-grained taxonomies of Arabic dialects. The obtained results show that our method yields very promising results and outperforms several domain adaptation methods for most of the evaluated datasets. On average, our method increases the performance by an improvement rate of 20.8% over the zero-shot transfer learning from BERT.",2021
fujiyoshi-2004-restrictions,https://aclanthology.org/C04-1012,0,,,,,,,"Restrictions on Monadic Context-Free Tree Grammars. In this paper, subclasses of monadic context-free tree grammars (CFTGs) are compared. Since linear, nondeleting, monadic CFTGs generate the same class of string languages as tree adjoining grammars (TAGs), it is examined whether the restrictions of linearity and nondeletion on monadic CFTGs are necessary to generate the same class of languages. Epsilon-freeness on linear, nondeleting, monadic CFTG is also examined. The string language generated by G is L_S(G) = {yield(α) | α ∈ L(G)}. Note that L_S(G) ⊆ (Σ_0 − {ε})*.",Restrictions on Monadic Context-Free Tree Grammars,"In this paper, subclasses of monadic context-free tree grammars (CFTGs) are compared. Since linear, nondeleting, monadic CFTGs generate the same class of string languages as tree adjoining grammars (TAGs), it is examined whether the restrictions of linearity and nondeletion on monadic CFTGs are necessary to generate the same class of languages. Epsilon-freeness on linear, nondeleting, monadic CFTG is also examined. The string language generated by G is L_S(G) = {yield(α) | α ∈ L(G)}. Note that L_S(G) ⊆ (Σ_0 − {ε})*.",Restrictions on Monadic Context-Free Tree Grammars,"In this paper, subclasses of monadic context-free tree grammars (CFTGs) are compared. Since linear, nondeleting, monadic CFTGs generate the same class of string languages as tree adjoining grammars (TAGs), it is examined whether the restrictions of linearity and nondeletion on monadic CFTGs are necessary to generate the same class of languages. Epsilon-freeness on linear, nondeleting, monadic CFTG is also examined. The string language generated by G is L_S(G) = {yield(α) | α ∈ L(G)}. Note that L_S(G) ⊆ (Σ_0 − {ε})*.",,"Restrictions on Monadic Context-Free Tree Grammars. In this paper, subclasses of monadic context-free tree grammars (CFTGs) are compared. Since linear, nondeleting, monadic CFTGs generate the same class of string languages as tree adjoining grammars (TAGs), it is examined whether the restrictions of linearity and nondeletion on monadic CFTGs are necessary to generate the same class of languages. Epsilon-freeness on linear, nondeleting, monadic CFTG is also examined. The string language generated by G is L_S(G) = {yield(α) | α ∈ L(G)}. Note that L_S(G) ⊆ (Σ_0 − {ε})*.",2004
bertagna-etal-2004-content,http://www.lrec-conf.org/proceedings/lrec2004/pdf/743.pdf,0,,,,,,,"Content Interoperability of Lexical Resources: Open Issues and ``MILE'' Perspectives. The paper tackles the issue of content interoperability among lexical resources, by presenting an experiment of mapping differently conceived lexicons, FrameNet and NOMLEX, onto MILE (Multilingual ISLE Lexical Entry), a meta-entry for the encoding of multilingual lexical information, acting as a general schema of shared and common lexical objects. The aim is to (i) raise problems and (ii) test the expressive potentialities of MILE as a standard environment for Computational Lexicons.",Content Interoperability of Lexical Resources: Open Issues and {``}{MILE}{''} Perspectives,"The paper tackles the issue of content interoperability among lexical resources, by presenting an experiment of mapping differently conceived lexicons, FrameNet and NOMLEX, onto MILE (Multilingual ISLE Lexical Entry), a meta-entry for the encoding of multilingual lexical information, acting as a general schema of shared and common lexical objects. The aim is to (i) raise problems and (ii) test the expressive potentialities of MILE as a standard environment for Computational Lexicons.",Content Interoperability of Lexical Resources: Open Issues and ``MILE'' Perspectives,"The paper tackles the issue of content interoperability among lexical resources, by presenting an experiment of mapping differently conceived lexicons, FrameNet and NOMLEX, onto MILE (Multilingual ISLE Lexical Entry), a meta-entry for the encoding of multilingual lexical information, acting as a general schema of shared and common lexical objects. The aim is to (i) raise problems and (ii) test the expressive potentialities of MILE as a standard environment for Computational Lexicons.","We want to dedicate this contribution to the memory of Antonio Zampolli, who has been the pioneer of standardization initiatives in Europe.","Content Interoperability of Lexical Resources: Open Issues and ``MILE'' Perspectives. The paper tackles the issue of content interoperability among lexical resources, by presenting an experiment of mapping differently conceived lexicons, FrameNet and NOMLEX, onto MILE (Multilingual ISLE Lexical Entry), a meta-entry for the encoding of multilingual lexical information, acting as a general schema of shared and common lexical objects. The aim is to (i) raise problems and (ii) test the expressive potentialities of MILE as a standard environment for Computational Lexicons.",2004
schlichtkrull-martinez-alonso-2016-msejrku,https://aclanthology.org/S16-1209,0,,,,,,,"MSejrKu at SemEval-2016 Task 14: Taxonomy Enrichment by Evidence Ranking. Automatic enrichment of semantic taxonomies with novel data is a relatively unexplored task with potential benefits in a broad array of natural language processing problems. Task 14 of SemEval 2016 poses the challenge of designing systems for this task. In this paper, we describe and evaluate several machine learning systems constructed for our participation in the competition. We demonstrate an f1-score of 0.680 for our submitted systems-a small improvement over the 0.679 produced by the hard baseline.",{MS}ejr{K}u at {S}em{E}val-2016 Task 14: Taxonomy Enrichment by Evidence Ranking,"Automatic enrichment of semantic taxonomies with novel data is a relatively unexplored task with potential benefits in a broad array of natural language processing problems. Task 14 of SemEval 2016 poses the challenge of designing systems for this task. In this paper, we describe and evaluate several machine learning systems constructed for our participation in the competition. We demonstrate an f1-score of 0.680 for our submitted systems-a small improvement over the 0.679 produced by the hard baseline.",MSejrKu at SemEval-2016 Task 14: Taxonomy Enrichment by Evidence Ranking,"Automatic enrichment of semantic taxonomies with novel data is a relatively unexplored task with potential benefits in a broad array of natural language processing problems. Task 14 of SemEval 2016 poses the challenge of designing systems for this task. In this paper, we describe and evaluate several machine learning systems constructed for our participation in the competition. We demonstrate an f1-score of 0.680 for our submitted systems-a small improvement over the 0.679 produced by the hard baseline.",,"MSejrKu at SemEval-2016 Task 14: Taxonomy Enrichment by Evidence Ranking. Automatic enrichment of semantic taxonomies with novel data is a relatively unexplored task with potential benefits in a broad array of natural language processing problems. Task 14 of SemEval 2016 poses the challenge of designing systems for this task. In this paper, we describe and evaluate several machine learning systems constructed for our participation in the competition. We demonstrate an f1-score of 0.680 for our submitted systems-a small improvement over the 0.679 produced by the hard baseline.",2016
gey-etal-2008-japanese,http://www.lrec-conf.org/proceedings/lrec2008/pdf/363_paper.pdf,0,,,,,,,"A Japanese-English Technical Lexicon for Translation and Language Research. In this paper we present a Japanese-English Bilingual lexicon of technical terms. The lexicon was derived from the first and second NTCIR evaluation collections for research into cross-language information retrieval for Asian languages. While it can be utilized for translation between Japanese and English, the lexicon is also suitable for language research and language engineering. Since it is collection-derived, it contains instances of word variants and miss-spellings which make it eminently suitable for further research. For a subset of the lexicon we make available the collection statistics. In addition we make available a Katakana subset suitable for transliteration research.",A {J}apanese-{E}nglish Technical Lexicon for Translation and Language Research,"In this paper we present a Japanese-English Bilingual lexicon of technical terms. The lexicon was derived from the first and second NTCIR evaluation collections for research into cross-language information retrieval for Asian languages. While it can be utilized for translation between Japanese and English, the lexicon is also suitable for language research and language engineering. Since it is collection-derived, it contains instances of word variants and miss-spellings which make it eminently suitable for further research. For a subset of the lexicon we make available the collection statistics. In addition we make available a Katakana subset suitable for transliteration research.",A Japanese-English Technical Lexicon for Translation and Language Research,"In this paper we present a Japanese-English Bilingual lexicon of technical terms. The lexicon was derived from the first and second NTCIR evaluation collections for research into cross-language information retrieval for Asian languages. While it can be utilized for translation between Japanese and English, the lexicon is also suitable for language research and language engineering. Since it is collection-derived, it contains instances of word variants and miss-spellings which make it eminently suitable for further research. For a subset of the lexicon we make available the collection statistics. In addition we make available a Katakana subset suitable for transliteration research.",,"A Japanese-English Technical Lexicon for Translation and Language Research. In this paper we present a Japanese-English Bilingual lexicon of technical terms. The lexicon was derived from the first and second NTCIR evaluation collections for research into cross-language information retrieval for Asian languages. While it can be utilized for translation between Japanese and English, the lexicon is also suitable for language research and language engineering. Since it is collection-derived, it contains instances of word variants and miss-spellings which make it eminently suitable for further research. For a subset of the lexicon we make available the collection statistics. In addition we make available a Katakana subset suitable for transliteration research.",2008
oprea-magdy-2019-exploring,https://aclanthology.org/P19-1275,0,,,,,,,"Exploring Author Context for Detecting Intended vs Perceived Sarcasm. We investigate the impact of using author context on textual sarcasm detection. We define author context as the embedded representation of their historical posts on Twitter and suggest neural models that extract these representations. We experiment with two tweet datasets, one labelled manually for sarcasm, and the other via tag-based distant supervision. We achieve state-of-the-art performance on the second dataset, but not on the one labelled manually, indicating a difference between intended sarcasm, captured by distant supervision, and perceived sarcasm, captured by manual labelling.",Exploring Author Context for Detecting Intended vs Perceived Sarcasm,"We investigate the impact of using author context on textual sarcasm detection. We define author context as the embedded representation of their historical posts on Twitter and suggest neural models that extract these representations. We experiment with two tweet datasets, one labelled manually for sarcasm, and the other via tag-based distant supervision. We achieve state-of-the-art performance on the second dataset, but not on the one labelled manually, indicating a difference between intended sarcasm, captured by distant supervision, and perceived sarcasm, captured by manual labelling.",Exploring Author Context for Detecting Intended vs Perceived Sarcasm,"We investigate the impact of using author context on textual sarcasm detection. We define author context as the embedded representation of their historical posts on Twitter and suggest neural models that extract these representations. We experiment with two tweet datasets, one labelled manually for sarcasm, and the other via tag-based distant supervision. We achieve state-of-the-art performance on the second dataset, but not on the one labelled manually, indicating a difference between intended sarcasm, captured by distant supervision, and perceived sarcasm, captured by manual labelling.","This work was supported in part by the EPSRC Centre for Doctoral Training in Data Science, funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016427/1); the University of Edinburgh; and The Financial Times.","Exploring Author Context for Detecting Intended vs Perceived Sarcasm. We investigate the impact of using author context on textual sarcasm detection. We define author context as the embedded representation of their historical posts on Twitter and suggest neural models that extract these representations. We experiment with two tweet datasets, one labelled manually for sarcasm, and the other via tag-based distant supervision. We achieve state-of-the-art performance on the second dataset, but not on the one labelled manually, indicating a difference between intended sarcasm, captured by distant supervision, and perceived sarcasm, captured by manual labelling.",2019
almeida-etal-2015-aligning,https://aclanthology.org/P15-1040,0,,,,,,,"Aligning Opinions: Cross-Lingual Opinion Mining with Dependencies. We propose a cross-lingual framework for fine-grained opinion mining using bitext projection. The only requirements are a running system in a source language and word-aligned parallel data. Our method projects opinion frames from the source to the target language, and then trains a system on the target language using the automatic annotations. Key to our approach is a novel dependency-based model for opinion mining, which we show, as a byproduct, to be on par with the current state of the art for English, while avoiding the need for integer programming or reranking. In cross-lingual mode (English to Portuguese), our approach compares favorably to a supervised system (with scarce labeled data), and to a delexicalized model trained using universal tags and bilingual word embeddings.",Aligning Opinions: Cross-Lingual Opinion Mining with Dependencies,"We propose a cross-lingual framework for fine-grained opinion mining using bitext projection. The only requirements are a running system in a source language and word-aligned parallel data. Our method projects opinion frames from the source to the target language, and then trains a system on the target language using the automatic annotations. Key to our approach is a novel dependency-based model for opinion mining, which we show, as a byproduct, to be on par with the current state of the art for English, while avoiding the need for integer programming or reranking. In cross-lingual mode (English to Portuguese), our approach compares favorably to a supervised system (with scarce labeled data), and to a delexicalized model trained using universal tags and bilingual word embeddings.",Aligning Opinions: Cross-Lingual Opinion Mining with Dependencies,"We propose a cross-lingual framework for fine-grained opinion mining using bitext projection. The only requirements are a running system in a source language and word-aligned parallel data. Our method projects opinion frames from the source to the target language, and then trains a system on the target language using the automatic annotations. Key to our approach is a novel dependency-based model for opinion mining, which we show, as a byproduct, to be on par with the current state of the art for English, while avoiding the need for integer programming or reranking. In cross-lingual mode (English to Portuguese), our approach compares favorably to a supervised system (with scarce labeled data), and to a delexicalized model trained using universal tags and bilingual word embeddings.","We would like to thank the anonymous reviewers for their insightful comments, and Richard Johansson for sharing his code and for answering several questions.This work was partially supported by the EU/FEDER programme, QREN/POR Lisboa (Portugal), under the Intelligo project (contract 2012/24803) and by a FCT grants UID/EEA/50008/2013 and PTDC/EEI-SII/2312/2012.","Aligning Opinions: Cross-Lingual Opinion Mining with Dependencies. We propose a cross-lingual framework for fine-grained opinion mining using bitext projection. The only requirements are a running system in a source language and word-aligned parallel data. Our method projects opinion frames from the source to the target language, and then trains a system on the target language using the automatic annotations. 
Key to our approach is a novel dependency-based model for opinion mining, which we show, as a byproduct, to be on par with the current state of the art for English, while avoiding the need for integer programming or reranking. In cross-lingual mode (English to Portuguese), our approach compares favorably to a supervised system (with scarce labeled data), and to a delexicalized model trained using universal tags and bilingual word embeddings.",2015
aloraini-poesio-2020-cross,https://aclanthology.org/2020.lrec-1.11,0,,,,,,,"Cross-lingual Zero Pronoun Resolution. In languages like Arabic, Chinese, Italian, Japanese, Korean, Portuguese, Spanish, and many others, predicate arguments in certain syntactic positions are not realized instead of being realized as overt pronouns, and are thus called zero-or null-pronouns. Identifying and resolving such omitted arguments is crucial to machine translation, information extraction and other NLP tasks, but depends heavily on semantic coherence and lexical relationships. We propose a BERT-based cross-lingual model for zero pronoun resolution, and evaluate it on the Arabic and Chinese portions of OntoNotes 5.0. As far as we know, ours is the first neural model of zero-pronoun resolution for Arabic; and our model also outperforms the state-of-the-art for Chinese. In the paper we also evaluate BERT feature extraction and fine-tune models on the task, and compare them with our model. We also report on an investigation of BERT layers indicating which layer encodes the most suitable representation for the task. Our code is available at https://github.com/amaloraini/cross-lingual-ZP.",Cross-lingual Zero Pronoun Resolution,"In languages like Arabic, Chinese, Italian, Japanese, Korean, Portuguese, Spanish, and many others, predicate arguments in certain syntactic positions are not realized instead of being realized as overt pronouns, and are thus called zero-or null-pronouns. Identifying and resolving such omitted arguments is crucial to machine translation, information extraction and other NLP tasks, but depends heavily on semantic coherence and lexical relationships. We propose a BERT-based cross-lingual model for zero pronoun resolution, and evaluate it on the Arabic and Chinese portions of OntoNotes 5.0. As far as we know, ours is the first neural model of zero-pronoun resolution for Arabic; and our model also outperforms the state-of-the-art for Chinese. In the paper we also evaluate BERT feature extraction and fine-tune models on the task, and compare them with our model. We also report on an investigation of BERT layers indicating which layer encodes the most suitable representation for the task. Our code is available at https://github.com/amaloraini/cross-lingual-ZP.",Cross-lingual Zero Pronoun Resolution,"In languages like Arabic, Chinese, Italian, Japanese, Korean, Portuguese, Spanish, and many others, predicate arguments in certain syntactic positions are not realized instead of being realized as overt pronouns, and are thus called zero-or null-pronouns. Identifying and resolving such omitted arguments is crucial to machine translation, information extraction and other NLP tasks, but depends heavily on semantic coherence and lexical relationships. We propose a BERT-based cross-lingual model for zero pronoun resolution, and evaluate it on the Arabic and Chinese portions of OntoNotes 5.0. As far as we know, ours is the first neural model of zero-pronoun resolution for Arabic; and our model also outperforms the state-of-the-art for Chinese. In the paper we also evaluate BERT feature extraction and fine-tune models on the task, and compare them with our model. We also report on an investigation of BERT layers indicating which layer encodes the most suitable representation for the task. 
Our code is available at https://github.com/amaloraini/cross-lingual-ZP.",We would like to thank the anonymous reviewers for their insightful comments and suggestions which helped to improve the quality of the paper.,"Cross-lingual Zero Pronoun Resolution. In languages like Arabic, Chinese, Italian, Japanese, Korean, Portuguese, Spanish, and many others, predicate arguments in certain syntactic positions are not realized instead of being realized as overt pronouns, and are thus called zero-or null-pronouns. Identifying and resolving such omitted arguments is crucial to machine translation, information extraction and other NLP tasks, but depends heavily on semantic coherence and lexical relationships. We propose a BERT-based cross-lingual model for zero pronoun resolution, and evaluate it on the Arabic and Chinese portions of OntoNotes 5.0. As far as we know, ours is the first neural model of zero-pronoun resolution for Arabic; and our model also outperforms the state-of-the-art for Chinese. In the paper we also evaluate BERT feature extraction and fine-tune models on the task, and compare them with our model. We also report on an investigation of BERT layers indicating which layer encodes the most suitable representation for the task. Our code is available at https://github.com/amaloraini/cross-lingual-ZP.",2020
jwalapuram-2017-evaluating,https://doi.org/10.26615/issn.1314-9156.2017_003,0,,,,,,,Evaluating Dialogs based on Grice's Maxims. ,Evaluating Dialogs based on {G}rice{'}s Maxims,,Evaluating Dialogs based on Grice's Maxims,,,Evaluating Dialogs based on Grice's Maxims. ,2017
heeman-2007-combining,https://aclanthology.org/N07-1034,0,,,,,,,"Combining Reinforcement Learning with Information-State Update Rules. Reinforcement learning gives a way to learn under what circumstances to perform which actions. However, this approach lacks a formal framework for specifying hand-crafted restrictions, for specifying the effects of the system actions, or for specifying the user simulation. The information state approach, in contrast, allows system and user behavior to be specified as update rules, with preconditions and effects. This approach can be used to specify complex dialogue behavior in a systematic way. We propose combining these two approaches, thus allowing a formal specification of the dialogue behavior, and allowing hand-crafted preconditions, with remaining ones determined via reinforcement learning so as to minimize dialogue cost.",Combining Reinforcement Learning with Information-State Update Rules,"Reinforcement learning gives a way to learn under what circumstances to perform which actions. However, this approach lacks a formal framework for specifying hand-crafted restrictions, for specifying the effects of the system actions, or for specifying the user simulation. The information state approach, in contrast, allows system and user behavior to be specified as update rules, with preconditions and effects. This approach can be used to specify complex dialogue behavior in a systematic way. We propose combining these two approaches, thus allowing a formal specification of the dialogue behavior, and allowing hand-crafted preconditions, with remaining ones determined via reinforcement learning so as to minimize dialogue cost.",Combining Reinforcement Learning with Information-State Update Rules,"Reinforcement learning gives a way to learn under what circumstances to perform which actions. However, this approach lacks a formal framework for specifying hand-crafted restrictions, for specifying the effects of the system actions, or for specifying the user simulation. The information state approach, in contrast, allows system and user behavior to be specified as update rules, with preconditions and effects. This approach can be used to specify complex dialogue behavior in a systematic way. We propose combining these two approaches, thus allowing a formal specification of the dialogue behavior, and allowing hand-crafted preconditions, with remaining ones determined via reinforcement learning so as to minimize dialogue cost.",,"Combining Reinforcement Learning with Information-State Update Rules. Reinforcement learning gives a way to learn under what circumstances to perform which actions. However, this approach lacks a formal framework for specifying hand-crafted restrictions, for specifying the effects of the system actions, or for specifying the user simulation. The information state approach, in contrast, allows system and user behavior to be specified as update rules, with preconditions and effects. This approach can be used to specify complex dialogue behavior in a systematic way. We propose combining these two approaches, thus allowing a formal specification of the dialogue behavior, and allowing hand-crafted preconditions, with remaining ones determined via reinforcement learning so as to minimize dialogue cost.",2007
wiebe-1993-issues,https://aclanthology.org/W93-0239,0,,,,,,,"Issues in Linguistic Segmentation. This paper addresses discourse structure from the perspective of understanding. It would perhaps help us understand the nature of discourse relations if we better understood what units of a text can be related to one another. In one major theory of discourse structure, Rhetorical Structure Theory (Mann & Thompson 1988; hereafter simply RST), the smallest possible linguistic units that can participate in a rhetorical relation are called units,",Issues in Linguistic Segmentation,"This paper addresses discourse structure from the perspective of understanding. It would perhaps help us understand the nature of discourse relations if we better understood what units of a text can be related to one another. In one major theory of discourse structure, Rhetorical Structure Theory (Mann & Thompson 1988; hereafter simply RST), the smallest possible linguistic units that can participate in a rhetorical relation are called units,",Issues in Linguistic Segmentation,"This paper addresses discourse structure from the perspective of understanding. It would perhaps help us understand the nature of discourse relations if we better understood what units of a text can be related to one another. In one major theory of discourse structure, Rhetorical Structure Theory (Mann & Thompson 1988; hereafter simply RST), the smallest possible linguistic units that can participate in a rhetorical relation are called units,",,"Issues in Linguistic Segmentation. This paper addresses discourse structure from the perspective of understanding. It would perhaps help us understand the nature of discourse relations if we better understood what units of a text can be related to one another. In one major theory of discourse structure, Rhetorical Structure Theory (Mann & Thompson 1988; hereafter simply RST), the smallest possible linguistic units that can participate in a rhetorical relation are called units,",1993
mihaylov-etal-2015-exposing,https://aclanthology.org/R15-1058,1,,,,peace_justice_and_strong_institutions,,,"Exposing Paid Opinion Manipulation Trolls. Recently, Web forums have been invaded by opinion manipulation trolls. Some trolls try to influence the other users driven by their own convictions, while in other cases they can be organized and paid, e.g., by a political party or a PR agency that gives them specific instructions what to write. Finding paid trolls automatically using machine learning is a hard task, as there is no enough training data to train a classifier; yet some test data is possible to obtain, as these trolls are sometimes caught and widely exposed. In this paper, we solve the training data problem by assuming that a user who is called a troll by several different people is likely to be such, and one who has never been called a troll is unlikely to be such. We compare the profiles of (i) paid trolls vs. (ii) ""mentioned"" trolls vs. (iii) non-trolls, and we further show that a classifier trained to distinguish (ii) from (iii) does quite well also at telling apart (i) from (iii).",Exposing Paid Opinion Manipulation Trolls,"Recently, Web forums have been invaded by opinion manipulation trolls. Some trolls try to influence the other users driven by their own convictions, while in other cases they can be organized and paid, e.g., by a political party or a PR agency that gives them specific instructions what to write. Finding paid trolls automatically using machine learning is a hard task, as there is no enough training data to train a classifier; yet some test data is possible to obtain, as these trolls are sometimes caught and widely exposed. In this paper, we solve the training data problem by assuming that a user who is called a troll by several different people is likely to be such, and one who has never been called a troll is unlikely to be such. We compare the profiles of (i) paid trolls vs. (ii) ""mentioned"" trolls vs. (iii) non-trolls, and we further show that a classifier trained to distinguish (ii) from (iii) does quite well also at telling apart (i) from (iii).",Exposing Paid Opinion Manipulation Trolls,"Recently, Web forums have been invaded by opinion manipulation trolls. Some trolls try to influence the other users driven by their own convictions, while in other cases they can be organized and paid, e.g., by a political party or a PR agency that gives them specific instructions what to write. Finding paid trolls automatically using machine learning is a hard task, as there is no enough training data to train a classifier; yet some test data is possible to obtain, as these trolls are sometimes caught and widely exposed. In this paper, we solve the training data problem by assuming that a user who is called a troll by several different people is likely to be such, and one who has never been called a troll is unlikely to be such. We compare the profiles of (i) paid trolls vs. (ii) ""mentioned"" trolls vs. (iii) non-trolls, and we further show that a classifier trained to distinguish (ii) from (iii) does quite well also at telling apart (i) from (iii).",,"Exposing Paid Opinion Manipulation Trolls. Recently, Web forums have been invaded by opinion manipulation trolls. Some trolls try to influence the other users driven by their own convictions, while in other cases they can be organized and paid, e.g., by a political party or a PR agency that gives them specific instructions what to write. 
Finding paid trolls automatically using machine learning is a hard task, as there is no enough training data to train a classifier; yet some test data is possible to obtain, as these trolls are sometimes caught and widely exposed. In this paper, we solve the training data problem by assuming that a user who is called a troll by several different people is likely to be such, and one who has never been called a troll is unlikely to be such. We compare the profiles of (i) paid trolls vs. (ii) ""mentioned"" trolls vs. (iii) non-trolls, and we further show that a classifier trained to distinguish (ii) from (iii) does quite well also at telling apart (i) from (iii).",2015
tarmom-etal-2020-automatic,https://aclanthology.org/2020.icon-main.4,0,,,,,,,"Automatic Hadith Segmentation using PPM Compression. In this paper we explore the use of Prediction by partial matching (PPM) compression based to segment Hadith into its two main components (Isnad and Matan). The experiments utilized the PPMD variant of the PPM, showing that PPMD is effective in Hadith segmentation. It was also tested on Hadith corpora of different structures. In the first experiment we used the nonauthentic Hadith (NAH) corpus for training models and testing, and in the second experiment we used the NAH corpus for training models and the Leeds University and King Saud University (LK) Hadith corpus for testing PPMD segmenter. PPMD of order 7 achieved an accuracy of 92.76% and 90.10% in the first and second experiments, respectively.",Automatic Hadith Segmentation using {PPM} Compression,"In this paper we explore the use of Prediction by partial matching (PPM) compression based to segment Hadith into its two main components (Isnad and Matan). The experiments utilized the PPMD variant of the PPM, showing that PPMD is effective in Hadith segmentation. It was also tested on Hadith corpora of different structures. In the first experiment we used the nonauthentic Hadith (NAH) corpus for training models and testing, and in the second experiment we used the NAH corpus for training models and the Leeds University and King Saud University (LK) Hadith corpus for testing PPMD segmenter. PPMD of order 7 achieved an accuracy of 92.76% and 90.10% in the first and second experiments, respectively.",Automatic Hadith Segmentation using PPM Compression,"In this paper we explore the use of Prediction by partial matching (PPM) compression based to segment Hadith into its two main components (Isnad and Matan). The experiments utilized the PPMD variant of the PPM, showing that PPMD is effective in Hadith segmentation. It was also tested on Hadith corpora of different structures. In the first experiment we used the nonauthentic Hadith (NAH) corpus for training models and testing, and in the second experiment we used the NAH corpus for training models and the Leeds University and King Saud University (LK) Hadith corpus for testing PPMD segmenter. PPMD of order 7 achieved an accuracy of 92.76% and 90.10% in the first and second experiments, respectively.",The first author is grateful to the Saudi government for their support.,"Automatic Hadith Segmentation using PPM Compression. In this paper we explore the use of Prediction by partial matching (PPM) compression based to segment Hadith into its two main components (Isnad and Matan). The experiments utilized the PPMD variant of the PPM, showing that PPMD is effective in Hadith segmentation. It was also tested on Hadith corpora of different structures. In the first experiment we used the nonauthentic Hadith (NAH) corpus for training models and testing, and in the second experiment we used the NAH corpus for training models and the Leeds University and King Saud University (LK) Hadith corpus for testing PPMD segmenter. PPMD of order 7 achieved an accuracy of 92.76% and 90.10% in the first and second experiments, respectively.",2020
basile-etal-2021-probabilistic,https://aclanthology.org/2021.ranlp-1.16,0,,,,,,,"Probabilistic Ensembles of Zero- and Few-Shot Learning Models for Emotion Classification. Emotion Classification is the task of automatically associating a text with a human emotion. State-of-the-art models are usually learned using annotated corpora or rely on hand-crafted affective lexicons. We present an emotion classification model that does not require a large annotated corpus to be competitive. We experiment with pretrained language models in both a zero-shot and few-shot configuration. We build several of such models and consider them as biased, noisy annotators, whose individual performance is poor. We aggregate the predictions of these models using a Bayesian method originally developed for modelling crowdsourced annotations. Next, we show that the resulting system performs better than the strongest individual model. Finally, we show that when trained on few labelled data, our systems outperform fully-supervised models.",Probabilistic Ensembles of Zero- and Few-Shot Learning Models for Emotion Classification,"Emotion Classification is the task of automatically associating a text with a human emotion. State-of-the-art models are usually learned using annotated corpora or rely on hand-crafted affective lexicons. We present an emotion classification model that does not require a large annotated corpus to be competitive. We experiment with pretrained language models in both a zero-shot and few-shot configuration. We build several of such models and consider them as biased, noisy annotators, whose individual performance is poor. We aggregate the predictions of these models using a Bayesian method originally developed for modelling crowdsourced annotations. Next, we show that the resulting system performs better than the strongest individual model. Finally, we show that when trained on few labelled data, our systems outperform fully-supervised models.",Probabilistic Ensembles of Zero- and Few-Shot Learning Models for Emotion Classification,"Emotion Classification is the task of automatically associating a text with a human emotion. State-of-the-art models are usually learned using annotated corpora or rely on hand-crafted affective lexicons. We present an emotion classification model that does not require a large annotated corpus to be competitive. We experiment with pretrained language models in both a zero-shot and few-shot configuration. We build several of such models and consider them as biased, noisy annotators, whose individual performance is poor. We aggregate the predictions of these models using a Bayesian method originally developed for modelling crowdsourced annotations. Next, we show that the resulting system performs better than the strongest individual model. Finally, we show that when trained on few labelled data, our systems outperform fully-supervised models.",,"Probabilistic Ensembles of Zero- and Few-Shot Learning Models for Emotion Classification. Emotion Classification is the task of automatically associating a text with a human emotion. State-of-the-art models are usually learned using annotated corpora or rely on hand-crafted affective lexicons. We present an emotion classification model that does not require a large annotated corpus to be competitive. We experiment with pretrained language models in both a zero-shot and few-shot configuration. We build several of such models and consider them as biased, noisy annotators, whose individual performance is poor. 
We aggregate the predictions of these models using a Bayesian method originally developed for modelling crowdsourced annotations. Next, we show that the resulting system performs better than the strongest individual model. Finally, we show that when trained on few labelled data, our systems outperform fully-supervised models.",2021
li-etal-2019-information,https://aclanthology.org/N19-1359,0,,,,,,,"Information Aggregation for Multi-Head Attention with Routing-by-Agreement. Multi-head attention is appealing for its ability to jointly extract different types of information from multiple representation subspaces. Concerning the information aggregation, a common practice is to use a concatenation followed by a linear transformation, which may not fully exploit the expressiveness of multi-head attention. In this work, we propose to improve the information aggregation for multi-head attention with a more powerful routing-by-agreement algorithm. Specifically, the routing algorithm iteratively updates the proportion of how much a part (i.e. the distinct information learned from a specific subspace) should be assigned to a whole (i.e. the final output representation), based on the agreement between parts and wholes. Experimental results on linguistic probing tasks and machine translation tasks prove the superiority of the advanced information aggregation over the standard linear transformation.",Information Aggregation for Multi-Head Attention with Routing-by-Agreement,"Multi-head attention is appealing for its ability to jointly extract different types of information from multiple representation subspaces. Concerning the information aggregation, a common practice is to use a concatenation followed by a linear transformation, which may not fully exploit the expressiveness of multi-head attention. In this work, we propose to improve the information aggregation for multi-head attention with a more powerful routing-by-agreement algorithm. Specifically, the routing algorithm iteratively updates the proportion of how much a part (i.e. the distinct information learned from a specific subspace) should be assigned to a whole (i.e. the final output representation), based on the agreement between parts and wholes. Experimental results on linguistic probing tasks and machine translation tasks prove the superiority of the advanced information aggregation over the standard linear transformation.",Information Aggregation for Multi-Head Attention with Routing-by-Agreement,"Multi-head attention is appealing for its ability to jointly extract different types of information from multiple representation subspaces. Concerning the information aggregation, a common practice is to use a concatenation followed by a linear transformation, which may not fully exploit the expressiveness of multi-head attention. In this work, we propose to improve the information aggregation for multi-head attention with a more powerful routing-by-agreement algorithm. Specifically, the routing algorithm iteratively updates the proportion of how much a part (i.e. the distinct information learned from a specific subspace) should be assigned to a whole (i.e. the final output representation), based on the agreement between parts and wholes. Experimental results on linguistic probing tasks and machine translation tasks prove the superiority of the advanced information aggregation over the standard linear transformation.","Jian Li and Michael R. Lyu were supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (No. CUHK 14210717 of the General Research Fund), and Microsoft Research Asia (2018 Microsoft Research Asia Collaborative Research Award). We thank the anonymous reviewers for their insightful comments and suggestions.","Information Aggregation for Multi-Head Attention with Routing-by-Agreement. 
Multi-head attention is appealing for its ability to jointly extract different types of information from multiple representation subspaces. Concerning the information aggregation, a common practice is to use a concatenation followed by a linear transformation, which may not fully exploit the expressiveness of multi-head attention. In this work, we propose to improve the information aggregation for multi-head attention with a more powerful routing-by-agreement algorithm. Specifically, the routing algorithm iteratively updates the proportion of how much a part (i.e. the distinct information learned from a specific subspace) should be assigned to a whole (i.e. the final output representation), based on the agreement between parts and wholes. Experimental results on linguistic probing tasks and machine translation tasks prove the superiority of the advanced information aggregation over the standard linear transformation.",2019
wang-etal-2020-negative,https://aclanthology.org/2020.emnlp-main.359,0,,,,,,,"On Negative Interference in Multilingual Models: Findings and A Meta-Learning Treatment. Modern multilingual models are trained on concatenated text from multiple languages in hopes of conferring benefits to each (positive transfer), with the most pronounced benefits accruing to low-resource languages. However, recent work has shown that this approach can degrade performance on high-resource languages, a phenomenon known as negative interference. In this paper, we present the first systematic study of negative interference. We show that, contrary to previous belief, negative interference also impacts low-resource languages. While parameters are maximally shared to learn language-universal structures, we demonstrate that language-specific parameters do exist in multilingual models and they are a potential cause of negative interference. Motivated by these observations, we also present a meta-learning algorithm that obtains better cross-lingual transferability and alleviates negative interference, by adding languagespecific layers as meta-parameters and training them in a manner that explicitly improves shared layers' generalization on all languages. Overall, our results show that negative interference is more common than previously known, suggesting new directions for improving multilingual representations. 1",On Negative Interference in Multilingual Models: Findings and A Meta-Learning Treatment,"Modern multilingual models are trained on concatenated text from multiple languages in hopes of conferring benefits to each (positive transfer), with the most pronounced benefits accruing to low-resource languages. However, recent work has shown that this approach can degrade performance on high-resource languages, a phenomenon known as negative interference. In this paper, we present the first systematic study of negative interference. We show that, contrary to previous belief, negative interference also impacts low-resource languages. While parameters are maximally shared to learn language-universal structures, we demonstrate that language-specific parameters do exist in multilingual models and they are a potential cause of negative interference. Motivated by these observations, we also present a meta-learning algorithm that obtains better cross-lingual transferability and alleviates negative interference, by adding languagespecific layers as meta-parameters and training them in a manner that explicitly improves shared layers' generalization on all languages. Overall, our results show that negative interference is more common than previously known, suggesting new directions for improving multilingual representations. 1",On Negative Interference in Multilingual Models: Findings and A Meta-Learning Treatment,"Modern multilingual models are trained on concatenated text from multiple languages in hopes of conferring benefits to each (positive transfer), with the most pronounced benefits accruing to low-resource languages. However, recent work has shown that this approach can degrade performance on high-resource languages, a phenomenon known as negative interference. In this paper, we present the first systematic study of negative interference. We show that, contrary to previous belief, negative interference also impacts low-resource languages. 
While parameters are maximally shared to learn language-universal structures, we demonstrate that language-specific parameters do exist in multilingual models and they are a potential cause of negative interference. Motivated by these observations, we also present a meta-learning algorithm that obtains better cross-lingual transferability and alleviates negative interference, by adding languagespecific layers as meta-parameters and training them in a manner that explicitly improves shared layers' generalization on all languages. Overall, our results show that negative interference is more common than previously known, suggesting new directions for improving multilingual representations. 1","We want to thank Jaime Carbonell for his support on the early stage of this project. We also would like to thank Zihang Dai, Graham Neubig, Orhan Firat, Yuan Cao, Jiateng Xie, Xinyi Wang, Ruochen Xu and Yiheng Zhou for insightful discussions. Lastly, we thank anonymous reviewers for their valueable feedbacks.","On Negative Interference in Multilingual Models: Findings and A Meta-Learning Treatment. Modern multilingual models are trained on concatenated text from multiple languages in hopes of conferring benefits to each (positive transfer), with the most pronounced benefits accruing to low-resource languages. However, recent work has shown that this approach can degrade performance on high-resource languages, a phenomenon known as negative interference. In this paper, we present the first systematic study of negative interference. We show that, contrary to previous belief, negative interference also impacts low-resource languages. While parameters are maximally shared to learn language-universal structures, we demonstrate that language-specific parameters do exist in multilingual models and they are a potential cause of negative interference. Motivated by these observations, we also present a meta-learning algorithm that obtains better cross-lingual transferability and alleviates negative interference, by adding languagespecific layers as meta-parameters and training them in a manner that explicitly improves shared layers' generalization on all languages. Overall, our results show that negative interference is more common than previously known, suggesting new directions for improving multilingual representations. 1",2020
besacier-etal-2010-lig,https://aclanthology.org/2010.iwslt-evaluation.12,0,,,,,,,LIG statistical machine translation systems for IWSLT 2010. ,{LIG} statistical machine translation systems for {IWSLT} 2010,,LIG statistical machine translation systems for IWSLT 2010,,,LIG statistical machine translation systems for IWSLT 2010. ,2010
dou-etal-2019-unsupervised,https://aclanthology.org/D19-1147,0,,,,,,,"Unsupervised Domain Adaptation for Neural Machine Translation with Domain-Aware Feature Embeddings. The recent success of neural machine translation models relies on the availability of high quality, in-domain data. Domain adaptation is required when domain-specific data is scarce or nonexistent. Previous unsupervised domain adaptation strategies include training the model with in-domain copied monolingual or back-translated data. However, these methods use generic representations for text regardless of domain shift, which makes it infeasible for translation models to control outputs conditional on a specific domain. In this work, we propose an approach that adapts models with domain-aware feature embeddings, which are learned via an auxiliary language modeling task. Our approach allows the model to assign domain-specific representations to words and output sentences in the desired domain. Our empirical results demonstrate the effectiveness of the proposed strategy, achieving consistent improvements in multiple experimental settings. In addition, we show that combining our method with back translation can further improve the performance of the model. 1",Unsupervised Domain Adaptation for Neural Machine Translation with Domain-Aware Feature Embeddings,"The recent success of neural machine translation models relies on the availability of high quality, in-domain data. Domain adaptation is required when domain-specific data is scarce or nonexistent. Previous unsupervised domain adaptation strategies include training the model with in-domain copied monolingual or back-translated data. However, these methods use generic representations for text regardless of domain shift, which makes it infeasible for translation models to control outputs conditional on a specific domain. In this work, we propose an approach that adapts models with domain-aware feature embeddings, which are learned via an auxiliary language modeling task. Our approach allows the model to assign domain-specific representations to words and output sentences in the desired domain. Our empirical results demonstrate the effectiveness of the proposed strategy, achieving consistent improvements in multiple experimental settings. In addition, we show that combining our method with back translation can further improve the performance of the model. 1",Unsupervised Domain Adaptation for Neural Machine Translation with Domain-Aware Feature Embeddings,"The recent success of neural machine translation models relies on the availability of high quality, in-domain data. Domain adaptation is required when domain-specific data is scarce or nonexistent. Previous unsupervised domain adaptation strategies include training the model with in-domain copied monolingual or back-translated data. However, these methods use generic representations for text regardless of domain shift, which makes it infeasible for translation models to control outputs conditional on a specific domain. In this work, we propose an approach that adapts models with domain-aware feature embeddings, which are learned via an auxiliary language modeling task. Our approach allows the model to assign domain-specific representations to words and output sentences in the desired domain. Our empirical results demonstrate the effectiveness of the proposed strategy, achieving consistent improvements in multiple experimental settings. 
In addition, we show that combining our method with back translation can further improve the performance of the model. 1","We are grateful to Xinyi Wang and anonymous reviewers for their helpful suggestions and insightful comments. We also thank Zhi-Hao Zhou, Shuyan Zhou and Anna Belova for proofreading the paper.This material is based upon work generously supported partly by the National Science Foundation under grant 1761548 and the Defense Advanced Research Projects Agency Information In-novation Office (I2O) Low Resource Languages for Emergent Incidents (LORELEI) program under Contract No. HR0011-15-C0114. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on.","Unsupervised Domain Adaptation for Neural Machine Translation with Domain-Aware Feature Embeddings. The recent success of neural machine translation models relies on the availability of high quality, in-domain data. Domain adaptation is required when domain-specific data is scarce or nonexistent. Previous unsupervised domain adaptation strategies include training the model with in-domain copied monolingual or back-translated data. However, these methods use generic representations for text regardless of domain shift, which makes it infeasible for translation models to control outputs conditional on a specific domain. In this work, we propose an approach that adapts models with domain-aware feature embeddings, which are learned via an auxiliary language modeling task. Our approach allows the model to assign domain-specific representations to words and output sentences in the desired domain. Our empirical results demonstrate the effectiveness of the proposed strategy, achieving consistent improvements in multiple experimental settings. In addition, we show that combining our method with back translation can further improve the performance of the model. 1",2019
read-etal-2012-sentence,https://aclanthology.org/C12-2096,0,,,,,,,"Sentence Boundary Detection: A Long Solved Problem?. We review the state of the art in automated sentence boundary detection (SBD) for English and call for a renewed research interest in this foundational first step in natural language processing. We observe severe limitations in comparability and reproducibility of earlier work and a general lack of knowledge about genre-and domain-specific variations. To overcome these barriers, we conduct a systematic empirical survey of a large number of extant approaches, across a broad range of diverse corpora. We further observe that much previous work interpreted the SBD task too narrowly, leading to overly optimistic estimates of SBD performance on running text. To better relate SBD to practical NLP use cases, we thus propose a generalized definition of the task, eliminating text-or language-specific assumptions about candidate boundary points. More specifically, we quantify degrees of variation across 'standard' corpora of edited, relatively formal language, as well as performance degradation when moving to less formal language, viz. various samples of user-generated Web content. For these latter types of text, we demonstrate how moderate interpretation of document structure (as is now often available more or less explicitly through markup) can substantially contribute to overall SBD performance.",Sentence Boundary Detection: A Long Solved Problem?,"We review the state of the art in automated sentence boundary detection (SBD) for English and call for a renewed research interest in this foundational first step in natural language processing. We observe severe limitations in comparability and reproducibility of earlier work and a general lack of knowledge about genre-and domain-specific variations. To overcome these barriers, we conduct a systematic empirical survey of a large number of extant approaches, across a broad range of diverse corpora. We further observe that much previous work interpreted the SBD task too narrowly, leading to overly optimistic estimates of SBD performance on running text. To better relate SBD to practical NLP use cases, we thus propose a generalized definition of the task, eliminating text-or language-specific assumptions about candidate boundary points. More specifically, we quantify degrees of variation across 'standard' corpora of edited, relatively formal language, as well as performance degradation when moving to less formal language, viz. various samples of user-generated Web content. For these latter types of text, we demonstrate how moderate interpretation of document structure (as is now often available more or less explicitly through markup) can substantially contribute to overall SBD performance.",Sentence Boundary Detection: A Long Solved Problem?,"We review the state of the art in automated sentence boundary detection (SBD) for English and call for a renewed research interest in this foundational first step in natural language processing. We observe severe limitations in comparability and reproducibility of earlier work and a general lack of knowledge about genre-and domain-specific variations. To overcome these barriers, we conduct a systematic empirical survey of a large number of extant approaches, across a broad range of diverse corpora. We further observe that much previous work interpreted the SBD task too narrowly, leading to overly optimistic estimates of SBD performance on running text. 
To better relate SBD to practical NLP use cases, we thus propose a generalized definition of the task, eliminating text-or language-specific assumptions about candidate boundary points. More specifically, we quantify degrees of variation across 'standard' corpora of edited, relatively formal language, as well as performance degradation when moving to less formal language, viz. various samples of user-generated Web content. For these latter types of text, we demonstrate how moderate interpretation of document structure (as is now often available more or less explicitly through markup) can substantially contribute to overall SBD performance.",,"Sentence Boundary Detection: A Long Solved Problem?. We review the state of the art in automated sentence boundary detection (SBD) for English and call for a renewed research interest in this foundational first step in natural language processing. We observe severe limitations in comparability and reproducibility of earlier work and a general lack of knowledge about genre-and domain-specific variations. To overcome these barriers, we conduct a systematic empirical survey of a large number of extant approaches, across a broad range of diverse corpora. We further observe that much previous work interpreted the SBD task too narrowly, leading to overly optimistic estimates of SBD performance on running text. To better relate SBD to practical NLP use cases, we thus propose a generalized definition of the task, eliminating text-or language-specific assumptions about candidate boundary points. More specifically, we quantify degrees of variation across 'standard' corpora of edited, relatively formal language, as well as performance degradation when moving to less formal language, viz. various samples of user-generated Web content. For these latter types of text, we demonstrate how moderate interpretation of document structure (as is now often available more or less explicitly through markup) can substantially contribute to overall SBD performance.",2012
muller-etal-2021-unseen,https://aclanthology.org/2021.naacl-main.38,0,,,,,,,"When Being Unseen from mBERT is just the Beginning: Handling New Languages With Multilingual Language Models. Transfer learning based on pretraining language models on a large amount of raw data has become a new norm to reach state-of-the-art performance in NLP. Still, it remains unclear how this approach should be applied for unseen languages that are not covered by any available large-scale multilingual language model and for which only a small amount of raw data is generally available. In this work, by comparing multilingual and monolingual models, we show that such models behave in multiple ways on unseen languages. Some languages greatly benefit from transfer learning and behave similarly to closely related high resource languages whereas others apparently do not. Focusing on the latter, we show that this failure to transfer is largely related to the impact of the script used to write such languages. We show that transliterating those languages significantly improves the potential of large-scale multilingual language models on downstream tasks. This result provides a promising direction towards making these massively multilingual models useful for a new set of unseen languages.",When Being Unseen from m{BERT} is just the Beginning: Handling New Languages With Multilingual Language Models,"Transfer learning based on pretraining language models on a large amount of raw data has become a new norm to reach state-of-the-art performance in NLP. Still, it remains unclear how this approach should be applied for unseen languages that are not covered by any available large-scale multilingual language model and for which only a small amount of raw data is generally available. In this work, by comparing multilingual and monolingual models, we show that such models behave in multiple ways on unseen languages. Some languages greatly benefit from transfer learning and behave similarly to closely related high resource languages whereas others apparently do not. Focusing on the latter, we show that this failure to transfer is largely related to the impact of the script used to write such languages. We show that transliterating those languages significantly improves the potential of large-scale multilingual language models on downstream tasks. This result provides a promising direction towards making these massively multilingual models useful for a new set of unseen languages.",When Being Unseen from mBERT is just the Beginning: Handling New Languages With Multilingual Language Models,"Transfer learning based on pretraining language models on a large amount of raw data has become a new norm to reach state-of-the-art performance in NLP. Still, it remains unclear how this approach should be applied for unseen languages that are not covered by any available large-scale multilingual language model and for which only a small amount of raw data is generally available. In this work, by comparing multilingual and monolingual models, we show that such models behave in multiple ways on unseen languages. Some languages greatly benefit from transfer learning and behave similarly to closely related high resource languages whereas others apparently do not. Focusing on the latter, we show that this failure to transfer is largely related to the impact of the script used to write such languages. We show that transliterating those languages significantly improves the potential of large-scale multilingual language models on downstream tasks. 
This result provides a promising direction towards making these massively multilingual models useful for a new set of unseen languages.","The Inria authors were partly funded by two French Research National agency projects, namely projects PARSITI (ANR-16-CE33-0021) and SoSweet (ANR-15-CE38-0011), as well as by Benoit Sagot's chair in the PRAIRIE institute as part of the ""Investissements d'avenir"" programme under the reference ANR-19-P3IA-0001. Antonios Anastasopoulos is generously supported by NSF Award 2040926 and is also thankful to Graham Neubig for very insightful initial discussions on this research direction.","When Being Unseen from mBERT is just the Beginning: Handling New Languages With Multilingual Language Models. Transfer learning based on pretraining language models on a large amount of raw data has become a new norm to reach state-of-the-art performance in NLP. Still, it remains unclear how this approach should be applied for unseen languages that are not covered by any available large-scale multilingual language model and for which only a small amount of raw data is generally available. In this work, by comparing multilingual and monolingual models, we show that such models behave in multiple ways on unseen languages. Some languages greatly benefit from transfer learning and behave similarly to closely related high resource languages whereas others apparently do not. Focusing on the latter, we show that this failure to transfer is largely related to the impact of the script used to write such languages. We show that transliterating those languages significantly improves the potential of large-scale multilingual language models on downstream tasks. This result provides a promising direction towards making these massively multilingual models useful for a new set of unseen languages.",2021
tomokiyo-ries-1997-makes,https://aclanthology.org/W97-1008,0,,,,,,,"What makes a word: Learning base units in Japanese for speech recognition. We describe an automatic process for learning word units in Japanese. Since the Japanese orthography has no spaces delimiting words, the first step in building a Japanese speech recognition system is to define the units that will be recognized. Our method applies a compound-finding algorithm, previously used to find word sequences in English, to learning syllable sequences in Japanese. We report that we were able not only to extract meaningful units, eliminating the need for possibly inconsistent manual segmentation, but also to decrease perplexity using this automatic procedure, which relies on a statistical, not syntactic, measure of relevance. Our algorithm also uncovers the kinds of environments that help the recognizer predict phonological alternations, which are often hidden by morphologically-motivated tokenization.",What makes a word: Learning base units in {J}apanese for speech recognition,"We describe an automatic process for learning word units in Japanese. Since the Japanese orthography has no spaces delimiting words, the first step in building a Japanese speech recognition system is to define the units that will be recognized. Our method applies a compound-finding algorithm, previously used to find word sequences in English, to learning syllable sequences in Japanese. We report that we were able not only to extract meaningful units, eliminating the need for possibly inconsistent manual segmentation, but also to decrease perplexity using this automatic procedure, which relies on a statistical, not syntactic, measure of relevance. Our algorithm also uncovers the kinds of environments that help the recognizer predict phonological alternations, which are often hidden by morphologically-motivated tokenization.",What makes a word: Learning base units in Japanese for speech recognition,"We describe an automatic process for learning word units in Japanese. Since the Japanese orthography has no spaces delimiting words, the first step in building a Japanese speech recognition system is to define the units that will be recognized. Our method applies a compound-finding algorithm, previously used to find word sequences in English, to learning syllable sequences in Japanese. We report that we were able not only to extract meaningful units, eliminating the need for possibly inconsistent manual segmentation, but also to decrease perplexity using this automatic procedure, which relies on a statistical, not syntactic, measure of relevance. Our algorithm also uncovers the kinds of environments that help the recognizer predict phonological alternations, which are often hidden by morphologically-motivated tokenization.",,"What makes a word: Learning base units in Japanese for speech recognition. We describe an automatic process for learning word units in Japanese. Since the Japanese orthography has no spaces delimiting words, the first step in building a Japanese speech recognition system is to define the units that will be recognized. Our method applies a compound-finding algorithm, previously used to find word sequences in English, to learning syllable sequences in Japanese. We report that we were able not only to extract meaningful units, eliminating the need for possibly inconsistent manual segmentation, but also to decrease perplexity using this automatic procedure, which relies on a statistical, not syntactic, measure of relevance. 
Our algorithm also uncovers the kinds of environments that help the recognizer predict phonological alternations, which are often hidden by morphologically-motivated tokenization.",1997
takmaz-etal-2020-generating,https://aclanthology.org/2020.emnlp-main.377,0,,,,,,,"Generating Image Descriptions via Sequential Cross-Modal Alignment Guided by Human Gaze. When speakers describe an image, they tend to look at objects before mentioning them. In this paper, we investigate such sequential cross-modal alignment by modelling the image description generation process computationally. We take as our starting point a state-of-the-art image captioning system and develop several model variants that exploit information from human gaze patterns recorded during language production. In particular, we propose the first approach to image description generation where visual processing is modelled sequentially. Our experiments and analyses confirm that better descriptions can be obtained by exploiting gaze-driven attention and shed light on human cognitive processes by comparing different ways of aligning the gaze modality with language production. We find that processing gaze data sequentially leads to descriptions that are better aligned to those produced by speakers, more diverse, and more natural, particularly when gaze is encoded with a dedicated recurrent component.",{G}enerating {I}mage {D}escriptions via {S}equential {C}ross-{M}odal {A}lignment {G}uided by {H}uman {G}aze,"When speakers describe an image, they tend to look at objects before mentioning them. In this paper, we investigate such sequential cross-modal alignment by modelling the image description generation process computationally. We take as our starting point a state-of-the-art image captioning system and develop several model variants that exploit information from human gaze patterns recorded during language production. In particular, we propose the first approach to image description generation where visual processing is modelled sequentially. Our experiments and analyses confirm that better descriptions can be obtained by exploiting gaze-driven attention and shed light on human cognitive processes by comparing different ways of aligning the gaze modality with language production. We find that processing gaze data sequentially leads to descriptions that are better aligned to those produced by speakers, more diverse, and more natural, particularly when gaze is encoded with a dedicated recurrent component.",Generating Image Descriptions via Sequential Cross-Modal Alignment Guided by Human Gaze,"When speakers describe an image, they tend to look at objects before mentioning them. In this paper, we investigate such sequential cross-modal alignment by modelling the image description generation process computationally. We take as our starting point a state-of-the-art image captioning system and develop several model variants that exploit information from human gaze patterns recorded during language production. In particular, we propose the first approach to image description generation where visual processing is modelled sequentially. Our experiments and analyses confirm that better descriptions can be obtained by exploiting gaze-driven attention and shed light on human cognitive processes by comparing different ways of aligning the gaze modality with language production. 
We find that processing gaze data sequentially leads to descriptions that are better aligned to those produced by speakers, more diverse, and more natural, particularly when gaze is encoded with a dedicated recurrent component.","We are grateful to Lieke Gelderloos for her help with the Dutch transcriptions, and to Jelle Zuidema and the participants of EurNLP 2019 for their feedback on a preliminary version of the work. Lisa Beinborn worked on the project mostly when being employed at the University of Amsterdam. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 819455 awarded to Raquel Fernández).","Generating Image Descriptions via Sequential Cross-Modal Alignment Guided by Human Gaze. When speakers describe an image, they tend to look at objects before mentioning them. In this paper, we investigate such sequential cross-modal alignment by modelling the image description generation process computationally. We take as our starting point a state-of-the-art image captioning system and develop several model variants that exploit information from human gaze patterns recorded during language production. In particular, we propose the first approach to image description generation where visual processing is modelled sequentially. Our experiments and analyses confirm that better descriptions can be obtained by exploiting gaze-driven attention and shed light on human cognitive processes by comparing different ways of aligning the gaze modality with language production. We find that processing gaze data sequentially leads to descriptions that are better aligned to those produced by speakers, more diverse, and more natural, particularly when gaze is encoded with a dedicated recurrent component.",2020
scharffe-2017-class,https://aclanthology.org/W17-7303,0,,,,,,,"Class Disjointness Constraints as Specific Objective Functions in Neural Network Classifiers. Increasing performance of deep learning techniques on computer vision tasks like object detection has led to systems able to detect a large number of classes of objects. Most deep learning models use simple unstructured labels and assume that any domain knowledge will be learned from the data. However when the domain is complex and the data limited, it may be useful to use domain knowledge encoded in an ontology to guide the learning process. In this paper, we conduct experiments to introduce constraints into the training process of a neural network. We show that explicitly modeling a disjointness axiom between a set of classes as a specific objective function leads to reducing violations for this constraint, while also reducing the overall classification error. This opens a way to import domain knowledge modeled in an ontology into a deep learning process.",Class Disjointness Constraints as Specific Objective Functions in Neural Network Classifiers,"Increasing performance of deep learning techniques on computer vision tasks like object detection has led to systems able to detect a large number of classes of objects. Most deep learning models use simple unstructured labels and assume that any domain knowledge will be learned from the data. However when the domain is complex and the data limited, it may be useful to use domain knowledge encoded in an ontology to guide the learning process. In this paper, we conduct experiments to introduce constraints into the training process of a neural network. We show that explicitly modeling a disjointness axiom between a set of classes as a specific objective function leads to reducing violations for this constraint, while also reducing the overall classification error. This opens a way to import domain knowledge modeled in an ontology into a deep learning process.",Class Disjointness Constraints as Specific Objective Functions in Neural Network Classifiers,"Increasing performance of deep learning techniques on computer vision tasks like object detection has led to systems able to detect a large number of classes of objects. Most deep learning models use simple unstructured labels and assume that any domain knowledge will be learned from the data. However when the domain is complex and the data limited, it may be useful to use domain knowledge encoded in an ontology to guide the learning process. In this paper, we conduct experiments to introduce constraints into the training process of a neural network. We show that explicitly modeling a disjointness axiom between a set of classes as a specific objective function leads to reducing violations for this constraint, while also reducing the overall classification error. This opens a way to import domain knowledge modeled in an ontology into a deep learning process.",,"Class Disjointness Constraints as Specific Objective Functions in Neural Network Classifiers. Increasing performance of deep learning techniques on computer vision tasks like object detection has led to systems able to detect a large number of classes of objects. Most deep learning models use simple unstructured labels and assume that any domain knowledge will be learned from the data. However when the domain is complex and the data limited, it may be useful to use domain knowledge encoded in an ontology to guide the learning process. 
In this paper, we conduct experiments to introduce constraints into the training process of a neural network. We show that explicitly modeling a disjointness axiom between a set of classes as a specific objective function leads to reducing violations for this constraint, while also reducing the overall classification error. This opens a way to import domain knowledge modeled in an ontology into a deep learning process.",2017
bowman-zhu-2019-deep,https://aclanthology.org/N19-5002,0,,,,,,,"Deep Learning for Natural Language Inference. The task of natural language inference (NLI; also known as recognizing textual entailment, or RTE) asks a system to evaluate the relationships between the truth-conditional meanings of two sentences or, in other words, decide whether one sentence follows from another. This task neatly isolates the core NLP problem of sentence understanding as a classification problem, and also offers promise as an intermediate step in the building of complex systems (Dagan et al., 2005; MacCartney, 2009; Bowman et al., 2015) .",Deep Learning for Natural Language Inference,"The task of natural language inference (NLI; also known as recognizing textual entailment, or RTE) asks a system to evaluate the relationships between the truth-conditional meanings of two sentences or, in other words, decide whether one sentence follows from another. This task neatly isolates the core NLP problem of sentence understanding as a classification problem, and also offers promise as an intermediate step in the building of complex systems (Dagan et al., 2005; MacCartney, 2009; Bowman et al., 2015) .",Deep Learning for Natural Language Inference,"The task of natural language inference (NLI; also known as recognizing textual entailment, or RTE) asks a system to evaluate the relationships between the truth-conditional meanings of two sentences or, in other words, decide whether one sentence follows from another. This task neatly isolates the core NLP problem of sentence understanding as a classification problem, and also offers promise as an intermediate step in the building of complex systems (Dagan et al., 2005; MacCartney, 2009; Bowman et al., 2015) .",,"Deep Learning for Natural Language Inference. The task of natural language inference (NLI; also known as recognizing textual entailment, or RTE) asks a system to evaluate the relationships between the truth-conditional meanings of two sentences or, in other words, decide whether one sentence follows from another. This task neatly isolates the core NLP problem of sentence understanding as a classification problem, and also offers promise as an intermediate step in the building of complex systems (Dagan et al., 2005; MacCartney, 2009; Bowman et al., 2015) .",2019
kang-etal-2018-dataset,https://aclanthology.org/N18-1149,1,,,,industry_innovation_infrastructure,,,"A Dataset of Peer Reviews (PeerRead): Collection, Insights and NLP Applications. Peer reviewing is a central component in the scientific publishing process. We present the first public dataset of scientific peer reviews available for research purposes (PeerRead v1), 1 providing an opportunity to study this important artifact. The dataset consists of 14.7K paper drafts and the corresponding accept/reject decisions in top-tier venues including ACL, NIPS and ICLR. The dataset also includes 10.7K textual peer reviews written by experts for a subset of the papers. We describe the data collection process and report interesting observed phenomena in the peer reviews. We also propose two novel NLP tasks based on this dataset and provide simple baseline models. In the first task, we show that simple models can predict whether a paper is accepted with up to 21% error reduction compared to the majority baseline. In the second task, we predict the numerical scores of review aspects and show that simple models can outperform the mean baseline for aspects with high variance such as 'originality' and 'impact'.","A Dataset of Peer Reviews ({P}eer{R}ead): Collection, Insights and {NLP} Applications","Peer reviewing is a central component in the scientific publishing process. We present the first public dataset of scientific peer reviews available for research purposes (PeerRead v1), 1 providing an opportunity to study this important artifact. The dataset consists of 14.7K paper drafts and the corresponding accept/reject decisions in top-tier venues including ACL, NIPS and ICLR. The dataset also includes 10.7K textual peer reviews written by experts for a subset of the papers. We describe the data collection process and report interesting observed phenomena in the peer reviews. We also propose two novel NLP tasks based on this dataset and provide simple baseline models. In the first task, we show that simple models can predict whether a paper is accepted with up to 21% error reduction compared to the majority baseline. In the second task, we predict the numerical scores of review aspects and show that simple models can outperform the mean baseline for aspects with high variance such as 'originality' and 'impact'.","A Dataset of Peer Reviews (PeerRead): Collection, Insights and NLP Applications","Peer reviewing is a central component in the scientific publishing process. We present the first public dataset of scientific peer reviews available for research purposes (PeerRead v1), 1 providing an opportunity to study this important artifact. The dataset consists of 14.7K paper drafts and the corresponding accept/reject decisions in top-tier venues including ACL, NIPS and ICLR. The dataset also includes 10.7K textual peer reviews written by experts for a subset of the papers. We describe the data collection process and report interesting observed phenomena in the peer reviews. We also propose two novel NLP tasks based on this dataset and provide simple baseline models. In the first task, we show that simple models can predict whether a paper is accepted with up to 21% error reduction compared to the majority baseline. 
In the second task, we predict the numerical scores of review aspects and show that simple models can outperform the mean baseline for aspects with high variance such as 'originality' and 'impact'.","This work would not have been possible without the efforts of Rich Gerber and Paolo Gai (developers of the softconf.com conference management system), Stefan Riezler, Yoav Goldberg (chairs of CoNLL 2016), Min-Yen Kan, Regina Barzilay (chairs of ACL 2017) for allowing authors and reviewers to opt-in for this dataset during the official review process. We thank the openreview.net, arxiv.org and semanticscholar.org teams for their commitment to promoting transparency and openness in scientific communication. We also thank Peter Clark, Chris Dyer, Oren Etzioni, Matt Gardner, Nicholas FitzGerald, Dan Jurafsky, Hao Peng, Minjoon Seo, Noah A. Smith, Swabha Swayamdipta, Sam Thomson, Trang Tran, Vicki Zayats and Luke Zettlemoyer for their helpful comments.","A Dataset of Peer Reviews (PeerRead): Collection, Insights and NLP Applications. Peer reviewing is a central component in the scientific publishing process. We present the first public dataset of scientific peer reviews available for research purposes (PeerRead v1), 1 providing an opportunity to study this important artifact. The dataset consists of 14.7K paper drafts and the corresponding accept/reject decisions in top-tier venues including ACL, NIPS and ICLR. The dataset also includes 10.7K textual peer reviews written by experts for a subset of the papers. We describe the data collection process and report interesting observed phenomena in the peer reviews. We also propose two novel NLP tasks based on this dataset and provide simple baseline models. In the first task, we show that simple models can predict whether a paper is accepted with up to 21% error reduction compared to the majority baseline. In the second task, we predict the numerical scores of review aspects and show that simple models can outperform the mean baseline for aspects with high variance such as 'originality' and 'impact'.",2018
kim-lee-2000-decision,https://aclanthology.org/C00-2156,0,,,,,,,"Decision-Tree based Error Correction for Statistical Phrase Break Prediction in Korean. In this paper, we present a new phrase break prediction architecture that integrates probabilistic approach with decision-tree based error correction. The probabilistic method alone usually suffers from performance degradation due to inherent data sparseness problems and it only covers a limited range of contextual information. Moreover, the module can not utilize the selective morpheme tag and relative distance to the other phrase breaks. The decision-tree based error correction was tightly integrated to overcome these limitations. The initially phrase break tagged morpheme sequence is corrected with the error correcting decision tree which was induced by C4.5 from the correctly tagged corpus with the output of the probabilistic predictor. The decision tree-based post error correction provided improved results even with the phrase break predictor that has poor initial performance. Moreover, the system can be flexibly tuned to new corpus without massive retraining.",Decision-Tree based Error Correction for Statistical Phrase Break Prediction in {K}orean,"In this paper, we present a new phrase break prediction architecture that integrates probabilistic approach with decision-tree based error correction. The probabilistic method alone usually suffers from performance degradation due to inherent data sparseness problems and it only covers a limited range of contextual information. Moreover, the module can not utilize the selective morpheme tag and relative distance to the other phrase breaks. The decision-tree based error correction was tightly integrated to overcome these limitations. The initially phrase break tagged morpheme sequence is corrected with the error correcting decision tree which was induced by C4.5 from the correctly tagged corpus with the output of the probabilistic predictor. The decision tree-based post error correction provided improved results even with the phrase break predictor that has poor initial performance. Moreover, the system can be flexibly tuned to new corpus without massive retraining.",Decision-Tree based Error Correction for Statistical Phrase Break Prediction in Korean,"In this paper, we present a new phrase break prediction architecture that integrates probabilistic approach with decision-tree based error correction. The probabilistic method alone usually suffers from performance degradation due to inherent data sparseness problems and it only covers a limited range of contextual information. Moreover, the module can not utilize the selective morpheme tag and relative distance to the other phrase breaks. The decision-tree based error correction was tightly integrated to overcome these limitations. The initially phrase break tagged morpheme sequence is corrected with the error correcting decision tree which was induced by C4.5 from the correctly tagged corpus with the output of the probabilistic predictor. The decision tree-based post error correction provided improved results even with the phrase break predictor that has poor initial performance. Moreover, the system can be flexibly tuned to new corpus without massive retraining.",,"Decision-Tree based Error Correction for Statistical Phrase Break Prediction in Korean. In this paper, we present a new phrase break prediction architecture that integrates probabilistic approach with decision-tree based error correction. 
The probabilistic method alone usually suffers from performance degradation due to inherent data sparseness problems and it only covers a limited range of contextual information. Moreover, the module can not utilize the selective morpheme tag and relative distance to the other phrase breaks. The decision-tree based error correction was tightly integrated to overcome these limitations. The initially phrase break tagged morpheme sequence is corrected with the error correcting decision tree which was induced by C4.5 from the correctly tagged corpus with the output of the probabilistic predictor. The decision tree-based post error correction provided improved results even with the phrase break predictor that has poor initial performance. Moreover, the system can be flexibly tuned to new corpus without massive retraining.",2000
aulamo-tiedemann-2019-opus,https://aclanthology.org/W19-6146,0,,,,,,,"The OPUS Resource Repository: An Open Package for Creating Parallel Corpora and Machine Translation Services. This paper presents a flexible and powerful system for creating parallel corpora and for running neural machine translation services. Our package provides a scalable data repository backend that offers transparent data pre-processing pipelines and automatic alignment procedures that facilitate the compilation of extensive parallel data sets from a variety of sources. Moreover, we develop a web-based interface that constitutes an intuitive frontend for end-users of the platform. The whole system can easily be distributed over virtual machines and implements a sophisticated permission system with secure connections and a flexible database for storing arbitrary metadata. Furthermore, we also provide an interface for neural machine translation that can run as a service on virtual machines, which also incorporates a connection to the data repository software.",The {OPUS} Resource Repository: An Open Package for Creating Parallel Corpora and Machine Translation Services,"This paper presents a flexible and powerful system for creating parallel corpora and for running neural machine translation services. Our package provides a scalable data repository backend that offers transparent data pre-processing pipelines and automatic alignment procedures that facilitate the compilation of extensive parallel data sets from a variety of sources. Moreover, we develop a web-based interface that constitutes an intuitive frontend for end-users of the platform. The whole system can easily be distributed over virtual machines and implements a sophisticated permission system with secure connections and a flexible database for storing arbitrary metadata. Furthermore, we also provide an interface for neural machine translation that can run as a service on virtual machines, which also incorporates a connection to the data repository software.",The OPUS Resource Repository: An Open Package for Creating Parallel Corpora and Machine Translation Services,"This paper presents a flexible and powerful system for creating parallel corpora and for running neural machine translation services. Our package provides a scalable data repository backend that offers transparent data pre-processing pipelines and automatic alignment procedures that facilitate the compilation of extensive parallel data sets from a variety of sources. Moreover, we develop a web-based interface that constitutes an intuitive frontend for end-users of the platform. The whole system can easily be distributed over virtual machines and implements a sophisticated permission system with secure connections and a flexible database for storing arbitrary metadata. Furthermore, we also provide an interface for neural machine translation that can run as a service on virtual machines, which also incorporates a connection to the data repository software.","The work was supported by the Swedish Culture Foundation and we are grateful for the resources provided by the Finnish IT Center for Science, CSC.","The OPUS Resource Repository: An Open Package for Creating Parallel Corpora and Machine Translation Services. This paper presents a flexible and powerful system for creating parallel corpora and for running neural machine translation services. 
Our package provides a scalable data repository backend that offers transparent data pre-processing pipelines and automatic alignment procedures that facilitate the compilation of extensive parallel data sets from a variety of sources. Moreover, we develop a web-based interface that constitutes an intuitive frontend for end-users of the platform. The whole system can easily be distributed over virtual machines and implements a sophisticated permission system with secure connections and a flexible database for storing arbitrary metadata. Furthermore, we also provide an interface for neural machine translation that can run as a service on virtual machines, which also incorporates a connection to the data repository software.",2019
shim-kim-1993-towards,https://aclanthology.org/1993.tmi-1.24,0,,,,,,,Towards a Machine Translation System with Self-Critiquing Capability. ,Towards a Machine Translation System with Self-Critiquing Capability,,Towards a Machine Translation System with Self-Critiquing Capability,,,Towards a Machine Translation System with Self-Critiquing Capability. ,1993
kaeshammer-demberg-2012-german,http://www.lrec-conf.org/proceedings/lrec2012/pdf/398_Paper.pdf,0,,,,,,,"German and English Treebanks and Lexica for Tree-Adjoining Grammars. We present a treebank and lexicon for German and English, which have been developed for PLTAG parsing. PLTAG is a psycholinguistically motivated, incremental version of tree-adjoining grammar (TAG). The resources are however also applicable to parsing with other variants of TAG. The German PLTAG resources are based on the TIGER corpus and, to the best of our knowledge, constitute the first scalable German TAG grammar. The English PLTAG resources go beyond existing resources in that they include the NP annotation by (Vadas and Curran, 2007), and include the prediction lexicon necessary for PLTAG.",{G}erman and {E}nglish Treebanks and Lexica for {T}ree-{A}djoining {G}rammars,"We present a treebank and lexicon for German and English, which have been developed for PLTAG parsing. PLTAG is a psycholinguistically motivated, incremental version of tree-adjoining grammar (TAG). The resources are however also applicable to parsing with other variants of TAG. The German PLTAG resources are based on the TIGER corpus and, to the best of our knowledge, constitute the first scalable German TAG grammar. The English PLTAG resources go beyond existing resources in that they include the NP annotation by (Vadas and Curran, 2007), and include the prediction lexicon necessary for PLTAG.",German and English Treebanks and Lexica for Tree-Adjoining Grammars,"We present a treebank and lexicon for German and English, which have been developed for PLTAG parsing. PLTAG is a psycholinguistically motivated, incremental version of tree-adjoining grammar (TAG). The resources are however also applicable to parsing with other variants of TAG. The German PLTAG resources are based on the TIGER corpus and, to the best of our knowledge, constitute the first scalable German TAG grammar. The English PLTAG resources go beyond existing resources in that they include the NP annotation by (Vadas and Curran, 2007), and include the prediction lexicon necessary for PLTAG.",,"German and English Treebanks and Lexica for Tree-Adjoining Grammars. We present a treebank and lexicon for German and English, which have been developed for PLTAG parsing. PLTAG is a psycholinguistically motivated, incremental version of tree-adjoining grammar (TAG). The resources are however also applicable to parsing with other variants of TAG. The German PLTAG resources are based on the TIGER corpus and, to the best of our knowledge, constitute the first scalable German TAG grammar. The English PLTAG resources go beyond existing resources in that they include the NP annotation by (Vadas and Curran, 2007), and include the prediction lexicon necessary for PLTAG.",2012
bick-2004-named,http://www.lrec-conf.org/proceedings/lrec2004/pdf/99.pdf,0,,,,,,,"A Named Entity Recognizer for Danish. This paper describes how a preexisting Constraint Grammar based parser for Danish (DanGram, Bick 2002) has been adapted and semantically enhanced in order to accommodate for named entity recognition (NER), using rule based and lexical, rather than probabilistic methodology. The project is part of a multilingual Nordic initiative, Nomen Nescio, which targets 6 primary name types (human, organisation, place, event, title/semantic product and brand/object). Training data, examples and statistical text data specifics were taken from the Korpus90/2000 annotation initiative (Bick 2003-1). The NER task is addressed following the progressive multi-level parsing architecture of DanGram, delegating different NER-subtasks to different specialised levels. Thus named entities are successively treated as first strings, words, types, and then as contextual units at the morphological, syntactic and semantic levels, consecutively. While lower levels mainly use pattern matching tools, the higher levels make increasing use of context based Constraint Grammar rules on the one hand, and lexical information, both morphological and semantic, on the other hand. Levels are implemented as a sequential chain of Perl-programs and CG-grammars. Two evaluation runs on Korpus90/2000 data showed about 2% chunking errors and false positive/false negative proper noun readings (originating at the lower levels), while the NER-typer as such had a 5% error rate with 0.1-0.5% remaining ambiguity, if measured only for correctly chunked proper nouns.",A Named Entity Recognizer for {D}anish,"This paper describes how a preexisting Constraint Grammar based parser for Danish (DanGram, Bick 2002) has been adapted and semantically enhanced in order to accommodate for named entity recognition (NER), using rule based and lexical, rather than probabilistic methodology. The project is part of a multilingual Nordic initiative, Nomen Nescio, which targets 6 primary name types (human, organisation, place, event, title/semantic product and brand/object). Training data, examples and statistical text data specifics were taken from the Korpus90/2000 annotation initiative (Bick 2003-1). The NER task is addressed following the progressive multi-level parsing architecture of DanGram, delegating different NER-subtasks to different specialised levels. Thus named entities are successively treated as first strings, words, types, and then as contextual units at the morphological, syntactic and semantic levels, consecutively. While lower levels mainly use pattern matching tools, the higher levels make increasing use of context based Constraint Grammar rules on the one hand, and lexical information, both morphological and semantic, on the other hand. Levels are implemented as a sequential chain of Perl-programs and CG-grammars. Two evaluation runs on Korpus90/2000 data showed about 2% chunking errors and false positive/false negative proper noun readings (originating at the lower levels), while the NER-typer as such had a 5% error rate with 0.1-0.5% remaining ambiguity, if measured only for correctly chunked proper nouns.",A Named Entity Recognizer for Danish,"This paper describes how a preexisting Constraint Grammar based parser for Danish (DanGram, Bick 2002) has been adapted and semantically enhanced in order to accommodate for named entity recognition (NER), using rule based and lexical, rather than probabilistic methodology. 
The project is part of a multilingual Nordic initiative, Nomen Nescio, which targets 6 primary name types (human, organisation, place, event, title/semantic product and brand/object). Training data, examples and statistical text data specifics were taken from the Korpus90/2000 annotation initiative (Bick 2003-1). The NER task is addressed following the progressive multi-level parsing architecture of DanGram, delegating different NER-subtasks to different specialised levels. Thus named entities are successively treated as first strings, words, types, and then as contextual units at the morphological, syntactic and semantic levels, consecutively. While lower levels mainly use pattern matching tools, the higher levels make increasing use of context based Constraint Grammar rules on the one hand, and lexical information, both morphological and semantic, on the other hand. Levels are implemented as a sequential chain of Perl-programs and CG-grammars. Two evaluation runs on Korpus90/2000 data showed about 2% chunking errors and false positive/false negative proper noun readings (originating at the lower levels), while the NER-typer as such had a 5% error rate with 0.1-0.5% remaining ambiguity, if measured only for correctly chunked proper nouns.",,"A Named Entity Recognizer for Danish. This paper describes how a preexisting Constraint Grammar based parser for Danish (DanGram, Bick 2002) has been adapted and semantically enhanced in order to accommodate for named entity recognition (NER), using rule based and lexical, rather than probabilistic methodology. The project is part of a multilingual Nordic initiative, Nomen Nescio, which targets 6 primary name types (human, organisation, place, event, title/semantic product and brand/object). Training data, examples and statistical text data specifics were taken from the Korpus90/2000 annotation initiative (Bick 2003-1). The NER task is addressed following the progressive multi-level parsing architecture of DanGram, delegating different NER-subtasks to different specialised levels. Thus named entities are successively treated as first strings, words, types, and then as contextual units at the morphological, syntactic and semantic levels, consecutively. While lower levels mainly use pattern matching tools, the higher levels make increasing use of context based Constraint Grammar rules on the one hand, and lexical information, both morphological and semantic, on the other hand. Levels are implemented as a sequential chain of Perl-programs and CG-grammars. Two evaluation runs on Korpus90/2000 data showed about 2% chunking errors and false positive/false negative proper noun readings (originating at the lower levels), while the NER-typer as such had a 5% error rate with 0.1-0.5% remaining ambiguity, if measured only for correctly chunked proper nouns.",2004
yang-etal-2019-tokyotech,https://aclanthology.org/S19-2061,0,,,,,,,"TokyoTech\_NLP at SemEval-2019 Task 3: Emotion-related Symbols in Emotion Detection. This paper presents our contextual emotion detection system in approaching the SemEval-2019 shared task 3: EmoContext: Contextual Emotion Detection in Text. This system cooperates with an emotion detection neural network method (Poria et al., 2017), emoji2vec (Eisner et al., 2016) embedding, word2vec embedding (Mikolov et al., 2013), and our proposed emoticon and emoji preprocessing method. The experimental results demonstrate the usefulness of our emoticon and emoji preprocessing method, and representations of emoticons and emoji contribute to the model's emotion detection.",{T}okyo{T}ech{\_}{NLP} at {S}em{E}val-2019 Task 3: Emotion-related Symbols in Emotion Detection,"This paper presents our contextual emotion detection system in approaching the SemEval-2019 shared task 3: EmoContext: Contextual Emotion Detection in Text. This system cooperates with an emotion detection neural network method (Poria et al., 2017), emoji2vec (Eisner et al., 2016) embedding, word2vec embedding (Mikolov et al., 2013), and our proposed emoticon and emoji preprocessing method. The experimental results demonstrate the usefulness of our emoticon and emoji preprocessing method, and representations of emoticons and emoji contribute to the model's emotion detection.",TokyoTech\_NLP at SemEval-2019 Task 3: Emotion-related Symbols in Emotion Detection,"This paper presents our contextual emotion detection system in approaching the SemEval-2019 shared task 3: EmoContext: Contextual Emotion Detection in Text. This system cooperates with an emotion detection neural network method (Poria et al., 2017), emoji2vec (Eisner et al., 2016) embedding, word2vec embedding (Mikolov et al., 2013), and our proposed emoticon and emoji preprocessing method. The experimental results demonstrate the usefulness of our emoticon and emoji preprocessing method, and representations of emoticons and emoji contribute to the model's emotion detection.","The research results have been achieved by ""Research and Development of Deep Learning Technology for Advanced Multilingual Speech Translation"", the Commissioned Research of National Institute of Information and Communications Technology (NICT), Japan.","TokyoTech\_NLP at SemEval-2019 Task 3: Emotion-related Symbols in Emotion Detection. This paper presents our contextual emotion detection system in approaching the SemEval-2019 shared task 3: EmoContext: Contextual Emotion Detection in Text. This system cooperates with an emotion detection neural network method (Poria et al., 2017), emoji2vec (Eisner et al., 2016) embedding, word2vec embedding (Mikolov et al., 2013), and our proposed emoticon and emoji preprocessing method. The experimental results demonstrate the usefulness of our emoticon and emoji preprocessing method, and representations of emoticons and emoji contribute to the model's emotion detection.",2019
freibott-1992-computer,https://aclanthology.org/1992.tc-1.5,1,,,,industry_innovation_infrastructure,,,"Computer aided translation in an integrated document production process: Tools and applications. The internationalisation of markets, the ever shortening life cycles of products as well as the increasing importance of information technology all demand a change in technical equipment, the software used on it and the organisational structures and processes in our working environment. Translation as a whole, but in particular as an integral part of the document production process, has to cope with these changes and with new and additional requirements. This paper describes the organisational and technical solutions developed and implemented in an industrial company for a number of computer aided translation applications integrated in the document production process to meet these requirements and to ensure high-quality mono and multilingual documentation on restricted budgetary grounds.",Computer aided translation in an integrated document production process: Tools and applications,"The internationalisation of markets, the ever shortening life cycles of products as well as the increasing importance of information technology all demand a change in technical equipment, the software used on it and the organisational structures and processes in our working environment. Translation as a whole, but in particular as an integral part of the document production process, has to cope with these changes and with new and additional requirements. This paper describes the organisational and technical solutions developed and implemented in an industrial company for a number of computer aided translation applications integrated in the document production process to meet these requirements and to ensure high-quality mono and multilingual documentation on restricted budgetary grounds.",Computer aided translation in an integrated document production process: Tools and applications,"The internationalisation of markets, the ever shortening life cycles of products as well as the increasing importance of information technology all demand a change in technical equipment, the software used on it and the organisational structures and processes in our working environment. Translation as a whole, but in particular as an integral part of the document production process, has to cope with these changes and with new and additional requirements. This paper describes the organisational and technical solutions developed and implemented in an industrial company for a number of computer aided translation applications integrated in the document production process to meet these requirements and to ensure high-quality mono and multilingual documentation on restricted budgetary grounds.",,"Computer aided translation in an integrated document production process: Tools and applications. The internationalisation of markets, the ever shortening life cycles of products as well as the increasing importance of information technology all demand a change in technical equipment, the software used on it and the organisational structures and processes in our working environment. Translation as a whole, but in particular as an integral part of the document production process, has to cope with these changes and with new and additional requirements. 
This paper describes the organisational and technical solutions developed and implemented in an industrial company for a number of computer aided translation applications integrated in the document production process to meet these requirements and to ensure high-quality mono and multilingual documentation on restricted budgetary grounds.",1992
kozareva-hovy-2011-insights,https://aclanthology.org/P11-1162,0,,,,,,,"Insights from Network Structure for Text Mining. Text mining and data harvesting algorithms have become popular in the computational linguistics community. They employ patterns that specify the kind of information to be harvested, and usually bootstrap either the pattern learning or the term harvesting process (or both) in a recursive cycle, using data learned in one step to generate more seeds for the next. They therefore treat the source text corpus as a network, in which words are the nodes and relations linking them are the edges. The results of computational network analysis, especially from the world wide web, are thus applicable. Surprisingly, these results have not yet been broadly introduced into the computational linguistics community. In this paper we show how various results apply to text mining, how they explain some previously observed phenomena, and how they can be helpful for computational linguistics applications.",Insights from Network Structure for Text Mining,"Text mining and data harvesting algorithms have become popular in the computational linguistics community. They employ patterns that specify the kind of information to be harvested, and usually bootstrap either the pattern learning or the term harvesting process (or both) in a recursive cycle, using data learned in one step to generate more seeds for the next. They therefore treat the source text corpus as a network, in which words are the nodes and relations linking them are the edges. The results of computational network analysis, especially from the world wide web, are thus applicable. Surprisingly, these results have not yet been broadly introduced into the computational linguistics community. In this paper we show how various results apply to text mining, how they explain some previously observed phenomena, and how they can be helpful for computational linguistics applications.",Insights from Network Structure for Text Mining,"Text mining and data harvesting algorithms have become popular in the computational linguistics community. They employ patterns that specify the kind of information to be harvested, and usually bootstrap either the pattern learning or the term harvesting process (or both) in a recursive cycle, using data learned in one step to generate more seeds for the next. They therefore treat the source text corpus as a network, in which words are the nodes and relations linking them are the edges. The results of computational network analysis, especially from the world wide web, are thus applicable. Surprisingly, these results have not yet been broadly introduced into the computational linguistics community. In this paper we show how various results apply to text mining, how they explain some previously observed phenomena, and how they can be helpful for computational linguistics applications.",We acknowledge the support of DARPA contract number FA8750-09-C-3705 and NSF grant IIS-0429360. We would like to thank Sujith Ravi for his useful comments and suggestions.,"Insights from Network Structure for Text Mining. Text mining and data harvesting algorithms have become popular in the computational linguistics community. They employ patterns that specify the kind of information to be harvested, and usually bootstrap either the pattern learning or the term harvesting process (or both) in a recursive cycle, using data learned in one step to generate more seeds for the next. 
They therefore treat the source text corpus as a network, in which words are the nodes and relations linking them are the edges. The results of computational network analysis, especially from the world wide web, are thus applicable. Surprisingly, these results have not yet been broadly introduced into the computational linguistics community. In this paper we show how various results apply to text mining, how they explain some previously observed phenomena, and how they can be helpful for computational linguistics applications.",2011
etchegoyhen-etal-2016-exploiting,https://aclanthology.org/L16-1560,0,,,,,,,"Exploiting a Large Strongly Comparable Corpus. This article describes a large comparable corpus for Basque and Spanish and the methods employed to build a parallel resource from the original data. The EITB corpus, a strongly comparable corpus in the news domain, is to be shared with the research community, as an aid for the development and testing of methods in comparable corpora exploitation, and as basis for the improvement of data-driven machine translation systems for this language pair. Competing approaches were explored for the alignment of comparable segments in the corpus, resulting in the design of a simple method which outperformed a state-of-the-art method on the corpus test sets. The method we present is highly portable, computationally efficient, and significantly reduces deployment work, a welcome result for the exploitation of comparable corpora.",Exploiting a Large Strongly Comparable Corpus,"This article describes a large comparable corpus for Basque and Spanish and the methods employed to build a parallel resource from the original data. The EITB corpus, a strongly comparable corpus in the news domain, is to be shared with the research community, as an aid for the development and testing of methods in comparable corpora exploitation, and as basis for the improvement of data-driven machine translation systems for this language pair. Competing approaches were explored for the alignment of comparable segments in the corpus, resulting in the design of a simple method which outperformed a state-of-the-art method on the corpus test sets. The method we present is highly portable, computationally efficient, and significantly reduces deployment work, a welcome result for the exploitation of comparable corpora.",Exploiting a Large Strongly Comparable Corpus,"This article describes a large comparable corpus for Basque and Spanish and the methods employed to build a parallel resource from the original data. The EITB corpus, a strongly comparable corpus in the news domain, is to be shared with the research community, as an aid for the development and testing of methods in comparable corpora exploitation, and as basis for the improvement of data-driven machine translation systems for this language pair. Competing approaches were explored for the alignment of comparable segments in the corpus, resulting in the design of a simple method which outperformed a state-of-the-art method on the corpus test sets. The method we present is highly portable, computationally efficient, and significantly reduces deployment work, a welcome result for the exploitation of comparable corpora.","Acknowledgements The authors wish to thank Euskal Irrati Telebista, for providing the resources and agreeing to share them with the research community, and the three anonymous LREC reviewers for their constructive feedback. This work was partially supported by the Basque Government through its funding of project PLATA (Gaitek Programme, 2012-2014).","Exploiting a Large Strongly Comparable Corpus. This article describes a large comparable corpus for Basque and Spanish and the methods employed to build a parallel resource from the original data. The EITB corpus, a strongly comparable corpus in the news domain, is to be shared with the research community, as an aid for the development and testing of methods in comparable corpora exploitation, and as basis for the improvement of data-driven machine translation systems for this language pair. 
Competing approaches were explored for the alignment of comparable segments in the corpus, resulting in the design of a simple method which outperformed a state-of-the-art method on the corpus test sets. The method we present is highly portable, computationally efficient, and significantly reduces deployment work, a welcome result for the exploitation of comparable corpora.",2016
guo-etal-2021-bertweetfr,https://aclanthology.org/2021.wnut-1.49,0,,,,,,,"BERTweetFR : Domain Adaptation of Pre-Trained Language Models for French Tweets. We introduce BERTweetFR, the first large-scale pre-trained language model for French tweets. Our model is initialized using the general-domain French language model CamemBERT (Martin et al., 2020) which follows the base architecture of BERT. Experiments show that BERTweetFR outperforms all previous general-domain French language models on two downstream Twitter NLP tasks of offensiveness identification and named entity recognition. The dataset used in the offensiveness detection task is first created and annotated by our team, filling in the gap of such analytic datasets in French. We make our model publicly available in the transformers library with the aim of promoting future research in analytic tasks for French tweets.",{BERT}weet{FR} : Domain Adaptation of Pre-Trained Language Models for {F}rench Tweets,"We introduce BERTweetFR, the first large-scale pre-trained language model for French tweets. Our model is initialized using the general-domain French language model CamemBERT (Martin et al., 2020) which follows the base architecture of BERT. Experiments show that BERTweetFR outperforms all previous general-domain French language models on two downstream Twitter NLP tasks of offensiveness identification and named entity recognition. The dataset used in the offensiveness detection task is first created and annotated by our team, filling in the gap of such analytic datasets in French. We make our model publicly available in the transformers library with the aim of promoting future research in analytic tasks for French tweets.",BERTweetFR : Domain Adaptation of Pre-Trained Language Models for French Tweets,"We introduce BERTweetFR, the first large-scale pre-trained language model for French tweets. Our model is initialized using the general-domain French language model CamemBERT (Martin et al., 2020) which follows the base architecture of BERT. Experiments show that BERTweetFR outperforms all previous general-domain French language models on two downstream Twitter NLP tasks of offensiveness identification and named entity recognition. The dataset used in the offensiveness detection task is first created and annotated by our team, filling in the gap of such analytic datasets in French. We make our model publicly available in the transformers library with the aim of promoting future research in analytic tasks for French tweets.",This research is supported by the French National research agency (ANR) via the ANR XCOVIF (AAP RA-COVID-19 V6) project. We would also like to thank the National Center for Scientific Research (CNRS) for giving us access to their Jean Zay supercomputer.,"BERTweetFR : Domain Adaptation of Pre-Trained Language Models for French Tweets. We introduce BERTweetFR, the first large-scale pre-trained language model for French tweets. Our model is initialized using the general-domain French language model CamemBERT (Martin et al., 2020) which follows the base architecture of BERT. Experiments show that BERTweetFR outperforms all previous general-domain French language models on two downstream Twitter NLP tasks of offensiveness identification and named entity recognition. The dataset used in the offensiveness detection task is first created and annotated by our team, filling in the gap of such analytic datasets in French. 
We make our model publicly available in the transformers library with the aim of promoting future research in analytic tasks for French tweets.",2021
schockaert-2018-knowledge,https://aclanthology.org/W18-4006,0,,,,,,,"Knowledge Representation with Conceptual Spaces. is a professor at Cardiff University. His current research interests include commonsense reasoning, interpretable machine learning, vagueness and uncertainty modelling, representation learning, and information retrieval. He holds an ERC Starting Grant, and has previously been supported by funding from the Leverhulme Trust, EPSRC, and FWO, among others. He was the recipient of the 2008 ECCAI Doctoral Dissertation Award and the IBM Belgium Prize for Computer Science. He is on the board of directors of EurAI, on the editorial board of Artificial Intelligence and an area editor for Fuzzy Sets and Systems. He was PC co-chair of SUM 2016 and the general chair of UKCI 2017.",Knowledge Representation with Conceptual Spaces,"is a professor at Cardiff University. His current research interests include commonsense reasoning, interpretable machine learning, vagueness and uncertainty modelling, representation learning, and information retrieval. He holds an ERC Starting Grant, and has previously been supported by funding from the Leverhulme Trust, EPSRC, and FWO, among others. He was the recipient of the 2008 ECCAI Doctoral Dissertation Award and the IBM Belgium Prize for Computer Science. He is on the board of directors of EurAI, on the editorial board of Artificial Intelligence and an area editor for Fuzzy Sets and Systems. He was PC co-chair of SUM 2016 and the general chair of UKCI 2017.",Knowledge Representation with Conceptual Spaces,"is a professor at Cardiff University. His current research interests include commonsense reasoning, interpretable machine learning, vagueness and uncertainty modelling, representation learning, and information retrieval. He holds an ERC Starting Grant, and has previously been supported by funding from the Leverhulme Trust, EPSRC, and FWO, among others. He was the recipient of the 2008 ECCAI Doctoral Dissertation Award and the IBM Belgium Prize for Computer Science. He is on the board of directors of EurAI, on the editorial board of Artificial Intelligence and an area editor for Fuzzy Sets and Systems. He was PC co-chair of SUM 2016 and the general chair of UKCI 2017.",,"Knowledge Representation with Conceptual Spaces. is a professor at Cardiff University. His current research interests include commonsense reasoning, interpretable machine learning, vagueness and uncertainty modelling, representation learning, and information retrieval. He holds an ERC Starting Grant, and has previously been supported by funding from the Leverhulme Trust, EPSRC, and FWO, among others. He was the recipient of the 2008 ECCAI Doctoral Dissertation Award and the IBM Belgium Prize for Computer Science. He is on the board of directors of EurAI, on the editorial board of Artificial Intelligence and an area editor for Fuzzy Sets and Systems. He was PC co-chair of SUM 2016 and the general chair of UKCI 2017.",2018
gasperin-briscoe-2008-statistical,https://aclanthology.org/C08-1033,1,,,,health,,,"Statistical Anaphora Resolution in Biomedical Texts. This paper presents a probabilistic model for resolution of non-pronominal anaphora in biomedical texts. The model seeks to find the antecedents of anaphoric expressions, both coreferent and associative ones, and also to identify discourse-new expressions. We consider only the noun phrases referring to biomedical entities. The model reaches state-of-the-art performance: 56-69% precision and 54-67% recall on coreferent cases, and reasonable performance on different classes of associative cases.",Statistical Anaphora Resolution in Biomedical Texts,"This paper presents a probabilistic model for resolution of non-pronominal anaphora in biomedical texts. The model seeks to find the antecedents of anaphoric expressions, both coreferent and associative ones, and also to identify discourse-new expressions. We consider only the noun phrases referring to biomedical entities. The model reaches state-of-the-art performance: 56-69% precision and 54-67% recall on coreferent cases, and reasonable performance on different classes of associative cases.",Statistical Anaphora Resolution in Biomedical Texts,"This paper presents a probabilistic model for resolution of non-pronominal anaphora in biomedical texts. The model seeks to find the antecedents of anaphoric expressions, both coreferent and associative ones, and also to identify discourse-new expressions. We consider only the noun phrases referring to biomedical entities. The model reaches state-of-the-art performance: 56-69% precision and 54-67% recall on coreferent cases, and reasonable performance on different classes of associative cases.",This work is part of the BBSRC-funded FlySlip project. Caroline Gasperin is funded by a CAPES award from the Brazilian government.,"Statistical Anaphora Resolution in Biomedical Texts. This paper presents a probabilistic model for resolution of non-pronominal anaphora in biomedical texts. The model seeks to find the antecedents of anaphoric expressions, both coreferent and associative ones, and also to identify discourse-new expressions. We consider only the noun phrases referring to biomedical entities. The model reaches state-of-the-art performance: 56-69% precision and 54-67% recall on coreferent cases, and reasonable performance on different classes of associative cases.",2008
bond-etal-1994-countability,https://aclanthology.org/C94-1002,0,,,,,,,"Countability and Number in Japanese to English Machine Translation. This paper presents a heuristic method that uses information in the Japanese text along with knowledge of English countability and number stored in transfer dictionaries to determine the countability and number of English noun phrases. Incorporating this method into the machine translation system ALTJ/E helped to raise the percentage of noun phrases generated with correct use of articles and number from 65% to 73%.",Countability and Number in {J}apanese to {E}nglish Machine Translation,"This paper presents a heuristic method that uses information in the Japanese text along with knowledge of English countability and number stored in transfer dictionaries to determine the countability and number of English noun phrases. Incorporating this method into the machine translation system ALTJ/E helped to raise the percentage of noun phrases generated with correct use of articles and number from 65% to 73%.",Countability and Number in Japanese to English Machine Translation,"This paper presents a heuristic method that uses information in the Japanese text along with knowledge of English countability and number stored in transfer dictionaries to determine the countability and number of English noun phrases. Incorporating this method into the machine translation system ALTJ/E helped to raise the percentage of noun phrases generated with correct use of articles and number from 65% to 73%.",,"Countability and Number in Japanese to English Machine Translation. This paper presents a heuristic method that uses information in the Japanese text along with knowledge of English countability and number stored in transfer dictionaries to determine the countability and number of English noun phrases. Incorporating this method into the machine translation system ALTJ/E helped to raise the percentage of noun phrases generated with correct use of articles and number from 65% to 73%.",1994
bender-2012-100,https://aclanthology.org/N12-4001,0,,,,,,,"100 Things You Always Wanted to Know about Linguistics But Were Afraid to Ask*. Many NLP tasks have at their core a subtask of extracting the dependencies-who did what to whom-from natural language sentences. This task can be understood as the inverse of the problem solved in different ways by diverse human languages, namely, how to indicate the relationship between different parts of a sentence. Understanding how languages solve the problem can be extremely useful in both feature design and error analysis in the application of machine learning to NLP. Likewise, understanding cross-linguistic variation can be important for the design of MT systems and other multilingual applications. The purpose of this tutorial is to present in a succinct and accessible fashion information about the structure of human languages that can be useful in creating more linguistically sophisticated, more language independent, and thus more successful NLP systems.",100 Things You Always Wanted to Know about Linguistics But Were Afraid to Ask*,"Many NLP tasks have at their core a subtask of extracting the dependencies-who did what to whom-from natural language sentences. This task can be understood as the inverse of the problem solved in different ways by diverse human languages, namely, how to indicate the relationship between different parts of a sentence. Understanding how languages solve the problem can be extremely useful in both feature design and error analysis in the application of machine learning to NLP. Likewise, understanding cross-linguistic variation can be important for the design of MT systems and other multilingual applications. The purpose of this tutorial is to present in a succinct and accessible fashion information about the structure of human languages that can be useful in creating more linguistically sophisticated, more language independent, and thus more successful NLP systems.",100 Things You Always Wanted to Know about Linguistics But Were Afraid to Ask*,"Many NLP tasks have at their core a subtask of extracting the dependencies-who did what to whom-from natural language sentences. This task can be understood as the inverse of the problem solved in different ways by diverse human languages, namely, how to indicate the relationship between different parts of a sentence. Understanding how languages solve the problem can be extremely useful in both feature design and error analysis in the application of machine learning to NLP. Likewise, understanding cross-linguistic variation can be important for the design of MT systems and other multilingual applications. The purpose of this tutorial is to present in a succinct and accessible fashion information about the structure of human languages that can be useful in creating more linguistically sophisticated, more language independent, and thus more successful NLP systems.",,"100 Things You Always Wanted to Know about Linguistics But Were Afraid to Ask*. Many NLP tasks have at their core a subtask of extracting the dependencies-who did what to whom-from natural language sentences. This task can be understood as the inverse of the problem solved in different ways by diverse human languages, namely, how to indicate the relationship between different parts of a sentence. Understanding how languages solve the problem can be extremely useful in both feature design and error analysis in the application of machine learning to NLP. 
Likewise, understanding cross-linguistic variation can be important for the design of MT systems and other multilingual applications. The purpose of this tutorial is to present in a succinct and accessible fashion information about the structure of human languages that can be useful in creating more linguistically sophisticated, more language independent, and thus more successful NLP systems.",2012
bojar-etal-2010-data,http://www.lrec-conf.org/proceedings/lrec2010/pdf/756_Paper.pdf,0,,,,,,,"Data Issues in English-to-Hindi Machine Translation. Statistical machine translation to morphologically richer languages is a challenging task and more so if the source and target languages differ in word order. Current state-of-the-art MT systems thus deliver mediocre results. Adding more parallel data often helps improve the results; if it doesn't, it may be caused by various problems such as different domains, bad alignment or noise in the new data. In this paper we evaluate the English-to-Hindi MT task from this data perspective. We discuss several available parallel data sources and provide cross-evaluation results on their combinations using two freely available statistical MT systems. We demonstrate various problems encountered in the data and describe automatic methods of data cleaning and normalization. We also show that the contents of two independently distributed data sets can unexpectedly overlap, which negatively affects translation quality. Together with the error analysis, we also present a new tool for viewing aligned corpora, which makes it easier to detect difficult parts in the data even for a developer not speaking the target language.",Data Issues in {E}nglish-to-{H}indi Machine Translation,"Statistical machine translation to morphologically richer languages is a challenging task and more so if the source and target languages differ in word order. Current state-of-the-art MT systems thus deliver mediocre results. Adding more parallel data often helps improve the results; if it doesn't, it may be caused by various problems such as different domains, bad alignment or noise in the new data. In this paper we evaluate the English-to-Hindi MT task from this data perspective. We discuss several available parallel data sources and provide cross-evaluation results on their combinations using two freely available statistical MT systems. We demonstrate various problems encountered in the data and describe automatic methods of data cleaning and normalization. We also show that the contents of two independently distributed data sets can unexpectedly overlap, which negatively affects translation quality. Together with the error analysis, we also present a new tool for viewing aligned corpora, which makes it easier to detect difficult parts in the data even for a developer not speaking the target language.",Data Issues in English-to-Hindi Machine Translation,"Statistical machine translation to morphologically richer languages is a challenging task and more so if the source and target languages differ in word order. Current state-of-the-art MT systems thus deliver mediocre results. Adding more parallel data often helps improve the results; if it doesn't, it may be caused by various problems such as different domains, bad alignment or noise in the new data. In this paper we evaluate the English-to-Hindi MT task from this data perspective. We discuss several available parallel data sources and provide cross-evaluation results on their combinations using two freely available statistical MT systems. We demonstrate various problems encountered in the data and describe automatic methods of data cleaning and normalization. We also show that the contents of two independently distributed data sets can unexpectedly overlap, which negatively affects translation quality. 
Together with the error analysis, we also present a new tool for viewing aligned corpora, which makes it easier to detect difficult parts in the data even for a developer not speaking the target language.",The research has been supported by the grants MSM0021620838 (Czech Ministry of Education) and EuromatrixPlus (FP7-ICT-2007-3-231720 of the EU and 7E09003 of the Czech Republic).,"Data Issues in English-to-Hindi Machine Translation. Statistical machine translation to morphologically richer languages is a challenging task and more so if the source and target languages differ in word order. Current state-of-the-art MT systems thus deliver mediocre results. Adding more parallel data often helps improve the results; if it doesn't, it may be caused by various problems such as different domains, bad alignment or noise in the new data. In this paper we evaluate the English-to-Hindi MT task from this data perspective. We discuss several available parallel data sources and provide cross-evaluation results on their combinations using two freely available statistical MT systems. We demonstrate various problems encountered in the data and describe automatic methods of data cleaning and normalization. We also show that the contents of two independently distributed data sets can unexpectedly overlap, which negatively affects translation quality. Together with the error analysis, we also present a new tool for viewing aligned corpora, which makes it easier to detect difficult parts in the data even for a developer not speaking the target language.",2010
tenfjord-etal-2006-ask,http://www.lrec-conf.org/proceedings/lrec2006/pdf/573_pdf.pdf,1,,,,education,,,"The ASK Corpus - a Language Learner Corpus of Norwegian as a Second Language. In our paper we present the design and interface of ASK, a language learner corpus of Norwegian as a second language which contains essays collected from language tests on two different proficiency levels as well as personal data from the test takers. In addition, the corpus also contains texts and relevant personal data from native Norwegians as control data. The texts as well as the personal data are marked up in XML according to the TEI Guidelines. In order to be able to classify ""errors"" in the texts, we have introduced new attributes to the TEI corr and sic tags. For each error tag, a correct form is also in the text annotation. Finally, we employ an automatic tagger developed for standard Norwegian, the ""Oslo-Bergen Tagger"", together with a facility for manual tag correction. As corpus query system, we are using the Corpus Workbench developed at the University of Stuttgart together with a web search interface developed at Aksis, University of Bergen. The system allows for searching for combinations of words, error types, grammatical annotation and personal data.",The {ASK} Corpus - a Language Learner Corpus of {N}orwegian as a Second Language,"In our paper we present the design and interface of ASK, a language learner corpus of Norwegian as a second language which contains essays collected from language tests on two different proficiency levels as well as personal data from the test takers. In addition, the corpus also contains texts and relevant personal data from native Norwegians as control data. The texts as well as the personal data are marked up in XML according to the TEI Guidelines. In order to be able to classify ""errors"" in the texts, we have introduced new attributes to the TEI corr and sic tags. For each error tag, a correct form is also in the text annotation. Finally, we employ an automatic tagger developed for standard Norwegian, the ""Oslo-Bergen Tagger"", together with a facility for manual tag correction. As corpus query system, we are using the Corpus Workbench developed at the University of Stuttgart together with a web search interface developed at Aksis, University of Bergen. The system allows for searching for combinations of words, error types, grammatical annotation and personal data.",The ASK Corpus - a Language Learner Corpus of Norwegian as a Second Language,"In our paper we present the design and interface of ASK, a language learner corpus of Norwegian as a second language which contains essays collected from language tests on two different proficiency levels as well as personal data from the test takers. In addition, the corpus also contains texts and relevant personal data from native Norwegians as control data. The texts as well as the personal data are marked up in XML according to the TEI Guidelines. In order to be able to classify ""errors"" in the texts, we have introduced new attributes to the TEI corr and sic tags. For each error tag, a correct form is also in the text annotation. Finally, we employ an automatic tagger developed for standard Norwegian, the ""Oslo-Bergen Tagger"", together with a facility for manual tag correction. As corpus query system, we are using the Corpus Workbench developed at the University of Stuttgart together with a web search interface developed at Aksis, University of Bergen. 
The system allows for searching for combinations of words, error types, grammatical annotation and personal data.",,"The ASK Corpus - a Language Learner Corpus of Norwegian as a Second Language. In our paper we present the design and interface of ASK, a language learner corpus of Norwegian as a second language which contains essays collected from language tests on two different proficiency levels as well as personal data from the test takers. In addition, the corpus also contains texts and relevant personal data from native Norwegians as control data. The texts as well as the personal data are marked up in XML according to the TEI Guidelines. In order to be able to classify ""errors"" in the texts, we have introduced new attributes to the TEI corr and sic tags. For each error tag, a correct form is also in the text annotation. Finally, we employ an automatic tagger developed for standard Norwegian, the ""Oslo-Bergen Tagger"", together with a facility for manual tag correction. As corpus query system, we are using the Corpus Workbench developed at the University of Stuttgart together with a web search interface developed at Aksis, University of Bergen. The system allows for searching for combinations of words, error types, grammatical annotation and personal data.",2006
garain-basu-2019-titans-semeval,https://aclanthology.org/S19-2133,1,,,,hate_speech,,,"The Titans at SemEval-2019 Task 6: Offensive Language Identification, Categorization and Target Identification. This system paper is a description of the system submitted to ""SemEval-2019 Task 6"", where we had to detect offensive language in Twitter. There were two specific target audiences, immigrants and women. The language of the tweets was English. We were required to first detect whether a tweet contains offensive content, and then we had to find out whether the tweet was targeted against some individual, group or other entity. Finally we were required to classify the targeted audience.","The Titans at {S}em{E}val-2019 Task 6: Offensive Language Identification, Categorization and Target Identification","This system paper is a description of the system submitted to ""SemEval-2019 Task 6"", where we had to detect offensive language in Twitter. There were two specific target audiences, immigrants and women. The language of the tweets was English. We were required to first detect whether a tweet contains offensive content, and then we had to find out whether the tweet was targeted against some individual, group or other entity. Finally we were required to classify the targeted audience.","The Titans at SemEval-2019 Task 6: Offensive Language Identification, Categorization and Target Identification","This system paper is a description of the system submitted to ""SemEval-2019 Task 6"", where we had to detect offensive language in Twitter. There were two specific target audiences, immigrants and women. The language of the tweets was English. We were required to first detect whether a tweet contains offensive content, and then we had to find out whether the tweet was targeted against some individual, group or other entity. Finally we were required to classify the targeted audience.",,"The Titans at SemEval-2019 Task 6: Offensive Language Identification, Categorization and Target Identification. This system paper is a description of the system submitted to ""SemEval-2019 Task 6"", where we had to detect offensive language in Twitter. There were two specific target audiences, immigrants and women. The language of the tweets was English. We were required to first detect whether a tweet contains offensive content, and then we had to find out whether the tweet was targeted against some individual, group or other entity. Finally we were required to classify the targeted audience.",2019
lin-etal-2019-ji,https://aclanthology.org/2019.rocling-1.13,0,,,,,,,基於深度學習之簡答題問答系統初步探討(A Preliminary Study on Deep Learning-based Short Answer Question Answering System). ,基於深度學習之簡答題問答系統初步探討(A Preliminary Study on Deep Learning-based Short Answer Question Answering System),,基於深度學習之簡答題問答系統初步探討(A Preliminary Study on Deep Learning-based Short Answer Question Answering System),,,基於深度學習之簡答題問答系統初步探討(A Preliminary Study on Deep Learning-based Short Answer Question Answering System). ,2019
quochi-2004-representing,http://www.lrec-conf.org/proceedings/lrec2004/pdf/463.pdf,0,,,,,,,"Representing Italian Complex Nominals: A Pilot Study. A corpus-based investigation of Italian Complex Nominals (CNs), of the form N+PP, which aims at clarifying their syntactic and semantic constitution, is presented. The main goal is to find out useful parameters for their representation in a computational lexicon. As a reference model we have taken an implementation of Pustejovsky's Generative Lexicon Theory (1995), the SIMPLE Italian Lexicon, and in particular the Extended Qualia Structure. Italian CN formation mainly exploits post-modification; of particular interest here are CNs of the kind N+PP since this syntactic pattern is highly productive in Italian and such CNs very often translate compound nouns of other languages. One of the major problems posed by CNs for interpretation is the retrieval or identification of the semantic relation linking their components, which is (at least partially) implicit on the surface. Studying a small sample, we observed some interesting facts that could be useful when setting up a larger experiment to identify semantic relations and/or automatically learn the syntactic peculiarities of given semantic paradigms. Finally, a set of representational features exploiting the results from our corpus is proposed.",Representing {I}talian Complex Nominals: A Pilot Study,"A corpus-based investigation of Italian Complex Nominals (CNs), of the form N+PP, which aims at clarifying their syntactic and semantic constitution, is presented. The main goal is to find out useful parameters for their representation in a computational lexicon. As a reference model we have taken an implementation of Pustejovsky's Generative Lexicon Theory (1995), the SIMPLE Italian Lexicon, and in particular the Extended Qualia Structure. Italian CN formation mainly exploits post-modification; of particular interest here are CNs of the kind N+PP since this syntactic pattern is highly productive in Italian and such CNs very often translate compound nouns of other languages. One of the major problems posed by CNs for interpretation is the retrieval or identification of the semantic relation linking their components, which is (at least partially) implicit on the surface. Studying a small sample, we observed some interesting facts that could be useful when setting up a larger experiment to identify semantic relations and/or automatically learn the syntactic peculiarities of given semantic paradigms. Finally, a set of representational features exploiting the results from our corpus is proposed.",Representing Italian Complex Nominals: A Pilot Study,"A corpus-based investigation of Italian Complex Nominals (CNs), of the form N+PP, which aims at clarifying their syntactic and semantic constitution, is presented. The main goal is to find out useful parameters for their representation in a computational lexicon. As a reference model we have taken an implementation of Pustejovsky's Generative Lexicon Theory (1995), the SIMPLE Italian Lexicon, and in particular the Extended Qualia Structure. Italian CN formation mainly exploits post-modification; of particular interest here are CNs of the kind N+PP since this syntactic pattern is highly productive in Italian and such CNs very often translate compound nouns of other languages. 
One of the major problems posed by CNs for interpretation is the retrieval or identification of the semantic relation linking their components, which is (at least partially) implicit on the surface. Studying a small sample, we observed some interesting facts that could be useful when setting up a larger experiment to identify semantic relations and/or automatically learn the syntactic peculiarities of given semantic paradigms. Finally, a set of representational features exploiting the results from our corpus is proposed.",,"Representing Italian Complex Nominals: A Pilot Study. A corpus-based investigation of Italian Complex Nominals (CNs), of the form N+PP, which aims at clarifying their syntactic and semantic constitution, is presented. The main goal is to find out useful parameters for their representation in a computational lexicon. As a reference model we have taken an implementation of Pustejovsky's Generative Lexicon Theory (1995), the SIMPLE Italian Lexicon, and in particular the Extended Qualia Structure. Italian CN formation mainly exploits post-modification; of particular interest here are CNs of the kind N+PP since this syntactic pattern is highly productive in Italian and such CNs very often translate compound nouns of other languages. One of the major problems posed by CNs for interpretation is the retrieval or identification of the semantic relation linking their components, which is (at least partially) implicit on the surface. Studying a small sample, we observed some interesting facts that could be useful when setting up a larger experiment to identify semantic relations and/or automatically learn the syntactic peculiarities of given semantic paradigms. Finally, a set of representational features exploiting the results from our corpus is proposed.",2004
bajcsy-joshi-1978-problem,https://aclanthology.org/J78-3028,0,,,,,,,"The Problem of Naming Shapes: Vision-Language Interface. In this paper, we will pose more questions than present solutions. We want to raise some questions in the context of the representation of shapes of 3-D objects. One way to get a handle on this problem is to investigate whether labels of shapes and their acquisition reveal any structure of attributes or components of shapes that might be used for representation purposes. Another aspect of the puzzle of representation is the question whether the information is to be stored in analog or propositional form, and at what level this transformation from analog to propositional form takes place.
In general, the shape of a 3-D compact object has two aspects: the surface aspect, and the volume aspect. The surface aspect includes properties like concavity, convexity, planarity of surfaces, edges, and corners. The volume aspect distinguishes objects with holes from those without (topological properties), and describes objects with respect to their symmetry planes and axes, relative proportions, etc.",The Problem of Naming Shapes: Vision-Language Interface,"In this paper, we will pose more questions than present solutions. We want to raise some questions in the context of the representation of shapes of 3-D objects. One way to get a handle on this problem is to investigate whether labels of shapes and their acquisition reveal any structure of attributes or components of shapes that might be used for representation purposes. Another aspect of the puzzle of representation is the question whether the information is to be stored in analog or propositional form, and at what level this transformation from analog to propositional form takes place.
In general, the shape of a 3-D compact object has two aspects: the surface aspect, and the volume aspect. The surface aspect includes properties like concavity, convexity, planarity of surfaces, edges, and corners. The volume aspect distinguishes objects with holes from those without (topological properties), and describes objects with respect to their symmetry planes and axes, relative proportions, etc.",The Problem of Naming Shapes: Vision-Language Interface,"In this paper, we will pose more questions than present solutions. We want to raise some questions in the context of the representation of shapes of 3-D objects. One way to get a handle on this problem is to investigate whether labels of shapes and their acquisition reveal any structure of attributes or components of shapes that might be used for representation purposes. Another aspect of the puzzle of representation is the question whether the information is to be stored in analog or propositional form, and at what level this transformation from analog to propositional form takes place.
In general, the shape of a 3-D compact object has two aspects: the surface aspect, and the volume aspect. The surface aspect includes properties like concavity, convexity, planarity of surfaces, edges, and corners. The volume aspect distinguishes objects with holes from those without (topological properties), and describes objects with respect to their symmetry planes and axes, relative proportions, etc.",,"The Problem of Naming Shapes: Vision-Language Interface. In this paper, we will pose more questions than present solutions. We want to raise some questions in the context of the representation of shapes of 3-D objects. One way to get a handle on this problem is to investigate whether labels of shapes and their acquisition reveal any structure of attributes or components of shapes that might be used for representation purposes. Another aspect of the puzzle of representation is the question whether the information is to be stored in analog or propositional form, and at what level this transformation from analog to propositional form takes place.
In general, the shape of a 3-D compact object has two aspects: the surface aspect, and the volume aspect. The surface aspect includes properties like concavity, convexity, planarity of surfaces, edges, and corners. The volume aspect distinguishes objects with holes from those without (topological properties), and describes objects with respect to their symmetry planes and axes, relative proportions, etc.",1978
dasgupta-ng-2007-high,https://aclanthology.org/N07-1020,0,,,,,,,"High-Performance, Language-Independent Morphological Segmentation. This paper introduces an unsupervised morphological segmentation algorithm that shows robust performance for four languages with different levels of morphological complexity. In particular, our algorithm outperforms Goldsmith's Linguistica and Creutz and Lagus's Morphessor for English and Bengali, and achieves performance that is comparable to the best results for all three PASCAL evaluation datasets. Improvements arise from (1) the use of relative corpus frequency and suffix level similarity for detecting incorrect morpheme attachments and (2) the induction of orthographic rules and allomorphs for segmenting words where roots exhibit spelling changes during morpheme attachments.","High-Performance, Language-Independent Morphological Segmentation","This paper introduces an unsupervised morphological segmentation algorithm that shows robust performance for four languages with different levels of morphological complexity. In particular, our algorithm outperforms Goldsmith's Linguistica and Creutz and Lagus's Morphessor for English and Bengali, and achieves performance that is comparable to the best results for all three PASCAL evaluation datasets. Improvements arise from (1) the use of relative corpus frequency and suffix level similarity for detecting incorrect morpheme attachments and (2) the induction of orthographic rules and allomorphs for segmenting words where roots exhibit spelling changes during morpheme attachments.","High-Performance, Language-Independent Morphological Segmentation","This paper introduces an unsupervised morphological segmentation algorithm that shows robust performance for four languages with different levels of morphological complexity. In particular, our algorithm outperforms Goldsmith's Linguistica and Creutz and Lagus's Morphessor for English and Bengali, and achieves performance that is comparable to the best results for all three PASCAL evaluation datasets. Improvements arise from (1) the use of relative corpus frequency and suffix level similarity for detecting incorrect morpheme attachments and (2) the induction of orthographic rules and allomorphs for segmenting words where roots exhibit spelling changes during morpheme attachments.",,"High-Performance, Language-Independent Morphological Segmentation. This paper introduces an unsupervised morphological segmentation algorithm that shows robust performance for four languages with different levels of morphological complexity. In particular, our algorithm outperforms Goldsmith's Linguistica and Creutz and Lagus's Morphessor for English and Bengali, and achieves performance that is comparable to the best results for all three PASCAL evaluation datasets. Improvements arise from (1) the use of relative corpus frequency and suffix level similarity for detecting incorrect morpheme attachments and (2) the induction of orthographic rules and allomorphs for segmenting words where roots exhibit spelling changes during morpheme attachments.",2007
mordido-meinel-2020-mark,https://aclanthology.org/2020.coling-main.178,0,,,,,,,"Mark-Evaluate: Assessing Language Generation using Population Estimation Methods. We propose a family of metrics to assess language generation derived from population estimation methods widely used in ecology. More specifically, we use mark-recapture and maximum-likelihood methods that have been applied over the past several decades to estimate the size of closed populations in the wild. We propose three novel metrics: ME Petersen and ME CAPTURE, which retrieve a single-valued assessment, and ME Schnabel which returns a double-valued metric to assess the evaluation set in terms of quality and diversity, separately. In synthetic experiments, our family of methods is sensitive to drops in quality and diversity. Moreover, our methods show a higher correlation to human evaluation than existing metrics on several challenging tasks, namely unconditional language generation, machine translation, and text summarization.",Mark-Evaluate: Assessing Language Generation using Population Estimation Methods,"We propose a family of metrics to assess language generation derived from population estimation methods widely used in ecology. More specifically, we use mark-recapture and maximum-likelihood methods that have been applied over the past several decades to estimate the size of closed populations in the wild. We propose three novel metrics: ME Petersen and ME CAPTURE, which retrieve a single-valued assessment, and ME Schnabel which returns a double-valued metric to assess the evaluation set in terms of quality and diversity, separately. In synthetic experiments, our family of methods is sensitive to drops in quality and diversity. Moreover, our methods show a higher correlation to human evaluation than existing metrics on several challenging tasks, namely unconditional language generation, machine translation, and text summarization.",Mark-Evaluate: Assessing Language Generation using Population Estimation Methods,"We propose a family of metrics to assess language generation derived from population estimation methods widely used in ecology. More specifically, we use mark-recapture and maximum-likelihood methods that have been applied over the past several decades to estimate the size of closed populations in the wild. We propose three novel metrics: ME Petersen and ME CAPTURE, which retrieve a single-valued assessment, and ME Schnabel which returns a double-valued metric to assess the evaluation set in terms of quality and diversity, separately. In synthetic experiments, our family of methods is sensitive to drops in quality and diversity. Moreover, our methods show a higher correlation to human evaluation than existing metrics on several challenging tasks, namely unconditional language generation, machine translation, and text summarization.",,"Mark-Evaluate: Assessing Language Generation using Population Estimation Methods. We propose a family of metrics to assess language generation derived from population estimation methods widely used in ecology. More specifically, we use mark-recapture and maximum-likelihood methods that have been applied over the past several decades to estimate the size of closed populations in the wild. We propose three novel metrics: ME Petersen and ME CAPTURE, which retrieve a single-valued assessment, and ME Schnabel which returns a double-valued metric to assess the evaluation set in terms of quality and diversity, separately. 
In synthetic experiments, our family of methods is sensitive to drops in quality and diversity. Moreover, our methods show a higher correlation to human evaluation than existing metrics on several challenging tasks, namely unconditional language generation, machine translation, and text summarization.",2020
popescu-2009-name,https://aclanthology.org/N09-2039,0,,,,,,,"Name Perplexity. The accuracy of a Cross Document Coreference system depends on the amount of context available, which is a parameter that varies greatly from corpora to corpora. This paper presents a statistical model for computing name perplexity classes. For each perplexity class, the prior probability of coreference is estimated. The amount of context required for coreference is controlled by the prior coreference probability. We show that the prior probability coreference is an important factor for maintaining a good balance between precision and recall for cross document coreference systems.",Name Perplexity,"The accuracy of a Cross Document Coreference system depends on the amount of context available, which is a parameter that varies greatly from corpora to corpora. This paper presents a statistical model for computing name perplexity classes. For each perplexity class, the prior probability of coreference is estimated. The amount of context required for coreference is controlled by the prior coreference probability. We show that the prior probability coreference is an important factor for maintaining a good balance between precision and recall for cross document coreference systems.",Name Perplexity,"The accuracy of a Cross Document Coreference system depends on the amount of context available, which is a parameter that varies greatly from corpora to corpora. This paper presents a statistical model for computing name perplexity classes. For each perplexity class, the prior probability of coreference is estimated. The amount of context required for coreference is controlled by the prior coreference probability. We show that the prior probability coreference is an important factor for maintaining a good balance between precision and recall for cross document coreference systems.",,"Name Perplexity. The accuracy of a Cross Document Coreference system depends on the amount of context available, which is a parameter that varies greatly from corpora to corpora. This paper presents a statistical model for computing name perplexity classes. For each perplexity class, the prior probability of coreference is estimated. The amount of context required for coreference is controlled by the prior coreference probability. We show that the prior probability coreference is an important factor for maintaining a good balance between precision and recall for cross document coreference systems.",2009
pogodalla-2000-generation-lambek,https://aclanthology.org/C00-2091,0,,,,,,,"Generation, Lambek Calculus, Montague's Semantics and Semantic Proof Nets. Most of the studies in the framework of Lambek calculus have considered the parsing process and ignored the generation process. This paper wants to rely on the close link between Lambek calculus and linear logic to present a method for the generation process with semantic proof nets. We express the process as a proof search procedure based on a graph calculus and the solutions appear as a matrix computation preserving the decidability properties, and we characterize a polynomial time case.","Generation, {L}ambek Calculus, {M}ontague{'}s Semantics and Semantic Proof Nets","Most of the studies in the framework of Lambek calculus have considered the parsing process and ignored the generation process. This paper wants to rely on the close link between Lambek calculus and linear logic to present a method for the generation process with semantic proof nets. We express the process as a proof search procedure based on a graph calculus and the solutions appear as a matrix computation preserving the decidability properties, and we characterize a polynomial time case.","Generation, Lambek Calculus, Montague's Semantics and Semantic Proof Nets","Most of the studies in the framework of Lambek calculus have considered the parsing process and ignored the generation process. This paper wants to rely on the close link between Lambek calculus and linear logic to present a method for the generation process with semantic proof nets. We express the process as a proof search procedure based on a graph calculus and the solutions appear as a matrix computation preserving the decidability properties, and we characterize a polynomial time case.","I would like to thank Christian Retoré who pointed out to me Girard's algebraic interpretation of the cut elimination, and the anonymous reviewers for their helpful comments.","Generation, Lambek Calculus, Montague's Semantics and Semantic Proof Nets. Most of the studies in the framework of Lambek calculus have considered the parsing process and ignored the generation process. This paper wants to rely on the close link between Lambek calculus and linear logic to present a method for the generation process with semantic proof nets. We express the process as a proof search procedure based on a graph calculus and the solutions appear as a matrix computation preserving the decidability properties, and we characterize a polynomial time case.",2000
yang-etal-2021-multilingual,https://aclanthology.org/2021.acl-short.31,0,,,,,,,"Multilingual Agreement for Multilingual Neural Machine Translation. Although multilingual neural machine translation (MNMT) enables multiple language translations, the training process is based on independent multilingual objectives. Most multilingual models can not explicitly exploit different language pairs to assist each other, ignoring the relationships among them. In this work, we propose a novel agreement-based method to encourage multilingual agreement among different translation directions, which minimizes the differences among them. We combine the multilingual training objectives with the agreement term by randomly substituting some fragments of the source language with their counterpart translations of auxiliary languages. To examine the effectiveness of our method, we conduct experiments on the multilingual translation task of 10 language pairs. Experimental results show that our method achieves significant improvements over the previous multilingual baselines.",Multilingual Agreement for Multilingual Neural Machine Translation,"Although multilingual neural machine translation (MNMT) enables multiple language translations, the training process is based on independent multilingual objectives. Most multilingual models can not explicitly exploit different language pairs to assist each other, ignoring the relationships among them. In this work, we propose a novel agreement-based method to encourage multilingual agreement among different translation directions, which minimizes the differences among them. We combine the multilingual training objectives with the agreement term by randomly substituting some fragments of the source language with their counterpart translations of auxiliary languages. To examine the effectiveness of our method, we conduct experiments on the multilingual translation task of 10 language pairs. Experimental results show that our method achieves significant improvements over the previous multilingual baselines.",Multilingual Agreement for Multilingual Neural Machine Translation,"Although multilingual neural machine translation (MNMT) enables multiple language translations, the training process is based on independent multilingual objectives. Most multilingual models can not explicitly exploit different language pairs to assist each other, ignoring the relationships among them. In this work, we propose a novel agreement-based method to encourage multilingual agreement among different translation directions, which minimizes the differences among them. We combine the multilingual training objectives with the agreement term by randomly substituting some fragments of the source language with their counterpart translations of auxiliary languages. To examine the effectiveness of our method, we conduct experiments on the multilingual translation task of 10 language pairs. Experimental results show that our method achieves significant improvements over the previous multilingual baselines.","This work was supported in part by the National Natural Science Foundation of China (Grant Nos.U1636211, 61672081, 61370126), the 2020 Tencent WeChat Rhino-Bird Focused Research Program, and the Fund of the State Key Laboratory of Software Development Environment (Grant No.SKLSDE2019ZX-17).","Multilingual Agreement for Multilingual Neural Machine Translation. 
Although multilingual neural machine translation (MNMT) enables multiple language translations, the training process is based on independent multilingual objectives. Most multilingual models can not explicitly exploit different language pairs to assist each other, ignoring the relationships among them. In this work, we propose a novel agreement-based method to encourage multilingual agreement among different translation directions, which minimizes the differences among them. We combine the multilingual training objectives with the agreement term by randomly substituting some fragments of the source language with their counterpart translations of auxiliary languages. To examine the effectiveness of our method, we conduct experiments on the multilingual translation task of 10 language pairs. Experimental results show that our method achieves significant improvements over the previous multilingual baselines.",2021
nguyen-etal-2020-vietnamese,https://aclanthology.org/2020.coling-main.233,0,,,,,,,"A Vietnamese Dataset for Evaluating Machine Reading Comprehension. Over 97 million people speak Vietnamese as their native language in the world. However, there are few research studies on machine reading comprehension (MRC) for Vietnamese, the task of understanding a text and answering questions related to it. Due to the lack of benchmark datasets for Vietnamese, we present the Vietnamese Question Answering Dataset (UIT-ViQuAD), a new dataset for the low-resource language as Vietnamese to evaluate MRC models. This dataset comprises over 23,000 human-generated question-answer pairs based on 5,109 passages of 174 Vietnamese articles from Wikipedia. In particular, we propose a new process of dataset creation for Vietnamese MRC. Our in-depth analyses illustrate that our dataset requires abilities beyond simple reasoning like word matching and demands single-sentence and multiple-sentence inferences. Besides, we conduct experiments on state-of-the-art MRC methods for English and Chinese as the first experimental models on UIT-ViQuAD. We also estimate human performance on the dataset and compare it to the experimental results of powerful machine learning models. As a result, the substantial differences between human performance and the best model performance on the dataset indicate that improvements can be made on UIT-ViQuAD in future research. Our dataset is freely available on our website 1 to encourage the research community to overcome challenges in Vietnamese MRC.",A {V}ietnamese Dataset for Evaluating Machine Reading Comprehension,"Over 97 million people speak Vietnamese as their native language in the world. However, there are few research studies on machine reading comprehension (MRC) for Vietnamese, the task of understanding a text and answering questions related to it. Due to the lack of benchmark datasets for Vietnamese, we present the Vietnamese Question Answering Dataset (UIT-ViQuAD), a new dataset for the low-resource language as Vietnamese to evaluate MRC models. This dataset comprises over 23,000 human-generated question-answer pairs based on 5,109 passages of 174 Vietnamese articles from Wikipedia. In particular, we propose a new process of dataset creation for Vietnamese MRC. Our in-depth analyses illustrate that our dataset requires abilities beyond simple reasoning like word matching and demands single-sentence and multiple-sentence inferences. Besides, we conduct experiments on state-of-the-art MRC methods for English and Chinese as the first experimental models on UIT-ViQuAD. We also estimate human performance on the dataset and compare it to the experimental results of powerful machine learning models. As a result, the substantial differences between human performance and the best model performance on the dataset indicate that improvements can be made on UIT-ViQuAD in future research. Our dataset is freely available on our website 1 to encourage the research community to overcome challenges in Vietnamese MRC.",A Vietnamese Dataset for Evaluating Machine Reading Comprehension,"Over 97 million people speak Vietnamese as their native language in the world. However, there are few research studies on machine reading comprehension (MRC) for Vietnamese, the task of understanding a text and answering questions related to it. 
Due to the lack of benchmark datasets for Vietnamese, we present the Vietnamese Question Answering Dataset (UIT-ViQuAD), a new dataset for the low-resource language as Vietnamese to evaluate MRC models. This dataset comprises over 23,000 human-generated question-answer pairs based on 5,109 passages of 174 Vietnamese articles from Wikipedia. In particular, we propose a new process of dataset creation for Vietnamese MRC. Our in-depth analyses illustrate that our dataset requires abilities beyond simple reasoning like word matching and demands single-sentence and multiple-sentence inferences. Besides, we conduct experiments on state-of-the-art MRC methods for English and Chinese as the first experimental models on UIT-ViQuAD. We also estimate human performance on the dataset and compare it to the experimental results of powerful machine learning models. As a result, the substantial differences between human performance and the best model performance on the dataset indicate that improvements can be made on UIT-ViQuAD in future research. Our dataset is freely available on our website 1 to encourage the research community to overcome challenges in Vietnamese MRC.","We would like to thank the reviewers' comments which are helpful for improving the quality of our work. In addition, we would like to thank our workers for their cooperation.","A Vietnamese Dataset for Evaluating Machine Reading Comprehension. Over 97 million people speak Vietnamese as their native language in the world. However, there are few research studies on machine reading comprehension (MRC) for Vietnamese, the task of understanding a text and answering questions related to it. Due to the lack of benchmark datasets for Vietnamese, we present the Vietnamese Question Answering Dataset (UIT-ViQuAD), a new dataset for the low-resource language as Vietnamese to evaluate MRC models. This dataset comprises over 23,000 human-generated question-answer pairs based on 5,109 passages of 174 Vietnamese articles from Wikipedia. In particular, we propose a new process of dataset creation for Vietnamese MRC. Our in-depth analyses illustrate that our dataset requires abilities beyond simple reasoning like word matching and demands single-sentence and multiple-sentence inferences. Besides, we conduct experiments on state-of-the-art MRC methods for English and Chinese as the first experimental models on UIT-ViQuAD. We also estimate human performance on the dataset and compare it to the experimental results of powerful machine learning models. As a result, the substantial differences between human performance and the best model performance on the dataset indicate that improvements can be made on UIT-ViQuAD in future research. Our dataset is freely available on our website 1 to encourage the research community to overcome challenges in Vietnamese MRC.",2020
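UIT-ViQuAD follows the extractive span-answer format, so systems are typically scored with exact match and token-level F1 against the human answers. The sketch below shows those two standard metrics; the paper's own scorer is not reproduced here and may normalise Vietnamese text differently.

```python
import re
from collections import Counter

def normalize(text):
    """Lowercase, strip punctuation and extra whitespace (a rough analogue of
    SQuAD-style answer normalization; the exact Vietnamese handling may differ)."""
    text = text.lower()
    text = re.sub(r"[^\w\s]", " ", text)
    return " ".join(text.split())

def exact_match(pred, gold):
    return float(normalize(pred) == normalize(gold))

def f1(pred, gold):
    p_toks, g_toks = normalize(pred).split(), normalize(gold).split()
    overlap = sum((Counter(p_toks) & Counter(g_toks)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p_toks)
    recall = overlap / len(g_toks)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Hà Nội", "hà nội"), round(f1("thành phố Hà Nội", "Hà Nội"), 3))
```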
chudy-etal-2013-tmuse,https://aclanthology.org/I13-2011,0,,,,,,,"Tmuse: Lexical Network Exploration. We demonstrate an online application to explore lexical networks. Tmuse displays a 3D interactive graph of similar words, whose layout is based on the proxemy between vertices of synonymy and translation networks. Semantic themes of words related to a query are outlined, and projected across languages. The application is useful as, for example, a writing assistance. It is available, online, for Mandarin Chinese, English and French, as well as the corresponding language pairs, and can easily be fitted to new resources.",{T}muse: Lexical Network Exploration,"We demonstrate an online application to explore lexical networks. Tmuse displays a 3D interactive graph of similar words, whose layout is based on the proxemy between vertices of synonymy and translation networks. Semantic themes of words related to a query are outlined, and projected across languages. The application is useful as, for example, a writing assistance. It is available, online, for Mandarin Chinese, English and French, as well as the corresponding language pairs, and can easily be fitted to new resources.",Tmuse: Lexical Network Exploration,"We demonstrate an online application to explore lexical networks. Tmuse displays a 3D interactive graph of similar words, whose layout is based on the proxemy between vertices of synonymy and translation networks. Semantic themes of words related to a query are outlined, and projected across languages. The application is useful as, for example, a writing assistance. It is available, online, for Mandarin Chinese, English and French, as well as the corresponding language pairs, and can easily be fitted to new resources.",,"Tmuse: Lexical Network Exploration. We demonstrate an online application to explore lexical networks. Tmuse displays a 3D interactive graph of similar words, whose layout is based on the proxemy between vertices of synonymy and translation networks. Semantic themes of words related to a query are outlined, and projected across languages. The application is useful as, for example, a writing assistance. It is available, online, for Mandarin Chinese, English and French, as well as the corresponding language pairs, and can easily be fitted to new resources.",2013
chu-etal-2020-solving,https://aclanthology.org/2020.emnlp-main.471,0,,,,,,,"Solving Historical Dictionary Codes with a Neural Language Model. We solve difficult word-based substitution codes by constructing a decoding lattice and searching that lattice with a neural language model. We apply our method to a set of enciphered letters exchanged between US Army General James Wilkinson and agents of the Spanish Crown in the late 1700s and early 1800s, obtained from the US Library of Congress. We are able to decipher 75.1% of the cipher-word tokens correctly. [Table: character ciphers and word codes grouped by table-based vs. book-based keys, covering the Caesar cipher, simple substitution, Zodiac 408, the Beale cipher, the Copiale cipher, Rossignols' Grand Chiffre, the Scovell code, the Mexico-Nauen code, and the Wilkinson code.]",{S}olving {H}istorical {D}ictionary {C}odes with a {N}eural {L}anguage {M}odel,"We solve difficult word-based substitution codes by constructing a decoding lattice and searching that lattice with a neural language model. We apply our method to a set of enciphered letters exchanged between US Army General James Wilkinson and agents of the Spanish Crown in the late 1700s and early 1800s, obtained from the US Library of Congress. We are able to decipher 75.1% of the cipher-word tokens correctly. [Table: character ciphers and word codes grouped by table-based vs. book-based keys, covering the Caesar cipher, simple substitution, Zodiac 408, the Beale cipher, the Copiale cipher, Rossignols' Grand Chiffre, the Scovell code, the Mexico-Nauen code, and the Wilkinson code.]",Solving Historical Dictionary Codes with a Neural Language Model,"We solve difficult word-based substitution codes by constructing a decoding lattice and searching that lattice with a neural language model. We apply our method to a set of enciphered letters exchanged between US Army General James Wilkinson and agents of the Spanish Crown in the late 1700s and early 1800s, obtained from the US Library of Congress. We are able to decipher 75.1% of the cipher-word tokens correctly. [Table: character ciphers and word codes grouped by table-based vs. book-based keys, covering the Caesar cipher, simple substitution, Zodiac 408, the Beale cipher, the Copiale cipher, Rossignols' Grand Chiffre, the Scovell code, the Mexico-Nauen code, and the Wilkinson code.]","We would like to thank Johnny Fountain and Kevin Chatupornpitak of Karga7, and the staff who transcribed data from the Library of Congress, who provided scans of the original documents. We would also like to thank the anonymous reviewers for many helpful suggestions.","Solving Historical Dictionary Codes with a Neural Language Model. We solve difficult word-based substitution codes by constructing a decoding lattice and searching that lattice with a neural language model. We apply our method to a set of enciphered letters exchanged between US Army General James Wilkinson and agents of the Spanish Crown in the late 1700s and early 1800s, obtained from the US Library of Congress. We are able to decipher 75.1% of the cipher-word tokens correctly. [Table: character ciphers and word codes grouped by table-based vs. book-based keys, covering the Caesar cipher, simple substitution, Zodiac 408, the Beale cipher, the Copiale cipher, Rossignols' Grand Chiffre, the Scovell code, the Mexico-Nauen code, and the Wilkinson code.]",2020
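The decipherment approach above builds a lattice of candidate plaintext words for each cipher token and searches it with a language model. The sketch below shows the generic beam search over such a lattice; a toy bigram table stands in for the neural language model, and the lattice contents are invented for illustration, not taken from the Wilkinson letters.

```python
def beam_search_lattice(lattice, score_bigram, beam_size=4):
    """Search a word lattice (one candidate list per cipher token) for the
    highest-scoring plaintext hypothesis. A neural LM would normally supply
    the scores; score_bigram is only a stand-in interface."""
    beams = [((), 0.0)]  # (hypothesis tuple, log-prob)
    for candidates in lattice:
        expanded = []
        for hyp, lp in beams:
            prev = hyp[-1] if hyp else "<s>"
            for word, channel_lp in candidates:
                expanded.append((hyp + (word,), lp + channel_lp + score_bigram(prev, word)))
        beams = sorted(expanded, key=lambda x: x[1], reverse=True)[:beam_size]
    return beams[0]

# Toy stand-in LM scores: favour "the spanish crown" over alternatives.
BIGRAMS = {("<s>", "the"): -0.1, ("the", "spanish"): -0.2, ("spanish", "crown"): -0.1}
score_bigram = lambda a, b: BIGRAMS.get((a, b), -3.0)

# Each cipher token maps to candidate plaintext words with channel log-probs.
lattice = [[("the", -0.5), ("a", -0.7)],
           [("spanish", -0.6), ("french", -0.6)],
           [("crown", -0.4), ("court", -0.9)]]
print(beam_search_lattice(lattice, score_bigram))
```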
macklovitch-1992-tagger,https://aclanthology.org/1992.tmi-1.10,0,,,,,,,"Where the tagger falters. Statistical n-gram taggers like that of [Church 1988] or [Foster 1991] assign a part-ofspeech label to each word in a text on the basis of probability estimates that are automatically derived from a large, already tagged training corpus. This paper examines the grammatical constructions which cause such taggers to falter most frequently. As one would expect, certain of these errors are due to linguistic dependencies that extend beyond the limited scope of statistical taggers, while others can be seen to derive from the composition of the tag set; many can only be corrected through a full syntactic or semantic analysis of the sentence. The paper goes on to consider two very different approaches to the problem of automatically detecting tagging errors. The first uses statistical information that is already at the tagger's disposal; the second attempts to isolate error-prone contexts by formulating linguistic diagnostics in terms of regular expressions over tag sequences. In a small experiment focussing on the preterite/past participle ambiguity, the linguistic technique turns out to be more efficient, while the statistical technique is more effective.",Where the tagger falters,"Statistical n-gram taggers like that of [Church 1988] or [Foster 1991] assign a part-ofspeech label to each word in a text on the basis of probability estimates that are automatically derived from a large, already tagged training corpus. This paper examines the grammatical constructions which cause such taggers to falter most frequently. As one would expect, certain of these errors are due to linguistic dependencies that extend beyond the limited scope of statistical taggers, while others can be seen to derive from the composition of the tag set; many can only be corrected through a full syntactic or semantic analysis of the sentence. The paper goes on to consider two very different approaches to the problem of automatically detecting tagging errors. The first uses statistical information that is already at the tagger's disposal; the second attempts to isolate error-prone contexts by formulating linguistic diagnostics in terms of regular expressions over tag sequences. In a small experiment focussing on the preterite/past participle ambiguity, the linguistic technique turns out to be more efficient, while the statistical technique is more effective.",Where the tagger falters,"Statistical n-gram taggers like that of [Church 1988] or [Foster 1991] assign a part-ofspeech label to each word in a text on the basis of probability estimates that are automatically derived from a large, already tagged training corpus. This paper examines the grammatical constructions which cause such taggers to falter most frequently. As one would expect, certain of these errors are due to linguistic dependencies that extend beyond the limited scope of statistical taggers, while others can be seen to derive from the composition of the tag set; many can only be corrected through a full syntactic or semantic analysis of the sentence. The paper goes on to consider two very different approaches to the problem of automatically detecting tagging errors. The first uses statistical information that is already at the tagger's disposal; the second attempts to isolate error-prone contexts by formulating linguistic diagnostics in terms of regular expressions over tag sequences. 
In a small experiment focussing on the preterite/past participle ambiguity, the linguistic technique turns out to be more efficient, while the statistical technique is more effective.","This paper is based on George Foster's excellent Master's thesis. I am indebted to him, both for his explanations of points in the thesis and for kindly providing me with the supplementary data on error frequencies. All responsibility for errors of interpretation is mine alone. Pierre Isabelle, Michel Simard, Marc Dymetman and Marie-Louise Hannan all provided comments on an earlier version of this paper, for which I also express my gratitude.Notes","Where the tagger falters. Statistical n-gram taggers like that of [Church 1988] or [Foster 1991] assign a part-ofspeech label to each word in a text on the basis of probability estimates that are automatically derived from a large, already tagged training corpus. This paper examines the grammatical constructions which cause such taggers to falter most frequently. As one would expect, certain of these errors are due to linguistic dependencies that extend beyond the limited scope of statistical taggers, while others can be seen to derive from the composition of the tag set; many can only be corrected through a full syntactic or semantic analysis of the sentence. The paper goes on to consider two very different approaches to the problem of automatically detecting tagging errors. The first uses statistical information that is already at the tagger's disposal; the second attempts to isolate error-prone contexts by formulating linguistic diagnostics in terms of regular expressions over tag sequences. In a small experiment focussing on the preterite/past participle ambiguity, the linguistic technique turns out to be more efficient, while the statistical technique is more effective.",1992
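The second error-detection approach mentioned above formulates linguistic diagnostics as regular expressions over tag sequences. Here is a small illustrative sketch for the preterite/past-participle ambiguity, using Penn-style tags and a made-up diagnostic as stand-ins for the paper's own tag set and rules.

```python
import re

# Hypothetical diagnostic: a token tagged as preterite (VBD) immediately after a
# form of "have"/"be" is an error-prone context, since a past participle (VBN)
# is usually expected there.
ERROR_PRONE = re.compile(r"\b(?:has|have|had|was|were|been)/VB[A-Z]? (\w+)/VBD\b")

def flag_suspicious(tagged_sentence):
    """tagged_sentence: string of word/TAG pairs separated by spaces."""
    return [m.group(1) for m in ERROR_PRONE.finditer(tagged_sentence)]

sent = "the/DT report/NN was/VBD completed/VBD by/IN noon/NN"
print(flag_suspicious(sent))   # ['completed'] -- likely a mis-tagged preterite
```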
yang-etal-2008-resolving,https://aclanthology.org/I08-2098,0,,,,,,,"Resolving Ambiguities of Chinese Conjunctive Structures by Divide-and-conquer Approaches. This paper presents a method to enhance a Chinese parser in parsing conjunctive structures. Long conjunctive structures cause long-distance dependencies and tremendous syntactic ambiguities. Pure syntactic approaches hardly can determine boundaries of conjunctive phrases properly. In this paper, we propose a divide-andconquer approach which overcomes the difficulty of data-sparseness of the training data and uses both syntactic symmetry and semantic reasonableness to evaluate ambiguous conjunctive structures. In comparing with the performances of the PCFG parser without using the divide-andconquer approach, the precision of the conjunctive boundary detection is improved from 53.47% to 83.17%, and the bracketing f-score of sentences with conjunctive structures is raised up about 11 %.",Resolving Ambiguities of {C}hinese Conjunctive Structures by Divide-and-conquer Approaches,"This paper presents a method to enhance a Chinese parser in parsing conjunctive structures. Long conjunctive structures cause long-distance dependencies and tremendous syntactic ambiguities. Pure syntactic approaches hardly can determine boundaries of conjunctive phrases properly. In this paper, we propose a divide-andconquer approach which overcomes the difficulty of data-sparseness of the training data and uses both syntactic symmetry and semantic reasonableness to evaluate ambiguous conjunctive structures. In comparing with the performances of the PCFG parser without using the divide-andconquer approach, the precision of the conjunctive boundary detection is improved from 53.47% to 83.17%, and the bracketing f-score of sentences with conjunctive structures is raised up about 11 %.",Resolving Ambiguities of Chinese Conjunctive Structures by Divide-and-conquer Approaches,"This paper presents a method to enhance a Chinese parser in parsing conjunctive structures. Long conjunctive structures cause long-distance dependencies and tremendous syntactic ambiguities. Pure syntactic approaches hardly can determine boundaries of conjunctive phrases properly. In this paper, we propose a divide-andconquer approach which overcomes the difficulty of data-sparseness of the training data and uses both syntactic symmetry and semantic reasonableness to evaluate ambiguous conjunctive structures. In comparing with the performances of the PCFG parser without using the divide-andconquer approach, the precision of the conjunctive boundary detection is improved from 53.47% to 83.17%, and the bracketing f-score of sentences with conjunctive structures is raised up about 11 %.","This research was supported in part by National Digital Archives Program (NDAP, Taiwan) sponsored by the National Science Council of Taiwan under NSC Grants: NSC95-2422-H-001-031-.","Resolving Ambiguities of Chinese Conjunctive Structures by Divide-and-conquer Approaches. This paper presents a method to enhance a Chinese parser in parsing conjunctive structures. Long conjunctive structures cause long-distance dependencies and tremendous syntactic ambiguities. Pure syntactic approaches hardly can determine boundaries of conjunctive phrases properly. In this paper, we propose a divide-andconquer approach which overcomes the difficulty of data-sparseness of the training data and uses both syntactic symmetry and semantic reasonableness to evaluate ambiguous conjunctive structures. 
In comparing with the performances of the PCFG parser without using the divide-andconquer approach, the precision of the conjunctive boundary detection is improved from 53.47% to 83.17%, and the bracketing f-score of sentences with conjunctive structures is raised up about 11 %.",2008
blom-1998-statistical,https://aclanthology.org/W98-1617,0,,,,,,,A statistical and structural approach to extracting collocations likely to be of relevance in relation to an LSP sub-domain text. Department of Lexicography and Computational Linguistics. The Aarhus Business School. bb@lng.hha.dk,A statistical and structural approach to extracting collocations likely to be of relevance in relation to an {LSP} sub-domain text,Department of Lexicography and Computational Linguistics. The Aarhus Business School. bb@lng.hha.dk,A statistical and structural approach to extracting collocations likely to be of relevance in relation to an LSP sub-domain text,Department of Lexicography and Computational Linguistics. The Aarhus Business School. bb@lng.hha.dk,,A statistical and structural approach to extracting collocations likely to be of relevance in relation to an LSP sub-domain text. Department of Lexicography and Computational Linguistics. The Aarhus Business School. bb@lng.hha.dk,1998
hieber-riezler-2015-bag,https://aclanthology.org/N15-1123,0,,,,,,,"Bag-of-Words Forced Decoding for Cross-Lingual Information Retrieval. Current approaches to cross-lingual information retrieval (CLIR) rely on standard retrieval models into which query translations by statistical machine translation (SMT) are integrated at varying degree. In this paper, we present an attempt to turn this situation on its head: Instead of the retrieval aspect, we emphasize the translation component in CLIR. We perform search by using an SMT decoder in forced decoding mode to produce a bag-ofwords representation of the target documents to be ranked. The SMT model is extended by retrieval-specific features that are optimized jointly with standard translation features for a ranking objective. We find significant gains over the state-of-the-art in a large-scale evaluation on cross-lingual search in the domains patents and Wikipedia.",Bag-of-Words Forced Decoding for Cross-Lingual Information Retrieval,"Current approaches to cross-lingual information retrieval (CLIR) rely on standard retrieval models into which query translations by statistical machine translation (SMT) are integrated at varying degree. In this paper, we present an attempt to turn this situation on its head: Instead of the retrieval aspect, we emphasize the translation component in CLIR. We perform search by using an SMT decoder in forced decoding mode to produce a bag-ofwords representation of the target documents to be ranked. The SMT model is extended by retrieval-specific features that are optimized jointly with standard translation features for a ranking objective. We find significant gains over the state-of-the-art in a large-scale evaluation on cross-lingual search in the domains patents and Wikipedia.",Bag-of-Words Forced Decoding for Cross-Lingual Information Retrieval,"Current approaches to cross-lingual information retrieval (CLIR) rely on standard retrieval models into which query translations by statistical machine translation (SMT) are integrated at varying degree. In this paper, we present an attempt to turn this situation on its head: Instead of the retrieval aspect, we emphasize the translation component in CLIR. We perform search by using an SMT decoder in forced decoding mode to produce a bag-ofwords representation of the target documents to be ranked. The SMT model is extended by retrieval-specific features that are optimized jointly with standard translation features for a ranking objective. We find significant gains over the state-of-the-art in a large-scale evaluation on cross-lingual search in the domains patents and Wikipedia.","This research was supported in part by DFG grant RI-2221/1-2 ""Weakly Supervised Learning of Cross-Lingual Systems"".","Bag-of-Words Forced Decoding for Cross-Lingual Information Retrieval. Current approaches to cross-lingual information retrieval (CLIR) rely on standard retrieval models into which query translations by statistical machine translation (SMT) are integrated at varying degree. In this paper, we present an attempt to turn this situation on its head: Instead of the retrieval aspect, we emphasize the translation component in CLIR. We perform search by using an SMT decoder in forced decoding mode to produce a bag-ofwords representation of the target documents to be ranked. The SMT model is extended by retrieval-specific features that are optimized jointly with standard translation features for a ranking objective. 
We find significant gains over the state-of-the-art in a large-scale evaluation on cross-lingual search in the domains patents and Wikipedia.",2015
andreevskaia-bergler-2008-specialists,https://aclanthology.org/P08-1034,0,,,,,,,"When Specialists and Generalists Work Together: Overcoming Domain Dependence in Sentiment Tagging. This study presents a novel approach to the problem of system portability across different domains: a sentiment annotation system that integrates a corpus-based classifier trained on a small set of annotated in-domain data and a lexicon-based system trained on Word-Net. The paper explores the challenges of system portability across domains and text genres (movie reviews, news, blogs, and product reviews), highlights the factors affecting system performance on out-of-domain and smallset in-domain data, and presents a new system consisting of the ensemble of two classifiers with precision-based vote weighting, that provides significant gains in accuracy and recall over the corpus-based classifier and the lexicon-based system taken individually.",When Specialists and Generalists Work Together: Overcoming Domain Dependence in Sentiment Tagging,"This study presents a novel approach to the problem of system portability across different domains: a sentiment annotation system that integrates a corpus-based classifier trained on a small set of annotated in-domain data and a lexicon-based system trained on Word-Net. The paper explores the challenges of system portability across domains and text genres (movie reviews, news, blogs, and product reviews), highlights the factors affecting system performance on out-of-domain and smallset in-domain data, and presents a new system consisting of the ensemble of two classifiers with precision-based vote weighting, that provides significant gains in accuracy and recall over the corpus-based classifier and the lexicon-based system taken individually.",When Specialists and Generalists Work Together: Overcoming Domain Dependence in Sentiment Tagging,"This study presents a novel approach to the problem of system portability across different domains: a sentiment annotation system that integrates a corpus-based classifier trained on a small set of annotated in-domain data and a lexicon-based system trained on Word-Net. The paper explores the challenges of system portability across domains and text genres (movie reviews, news, blogs, and product reviews), highlights the factors affecting system performance on out-of-domain and smallset in-domain data, and presents a new system consisting of the ensemble of two classifiers with precision-based vote weighting, that provides significant gains in accuracy and recall over the corpus-based classifier and the lexicon-based system taken individually.",,"When Specialists and Generalists Work Together: Overcoming Domain Dependence in Sentiment Tagging. This study presents a novel approach to the problem of system portability across different domains: a sentiment annotation system that integrates a corpus-based classifier trained on a small set of annotated in-domain data and a lexicon-based system trained on Word-Net. The paper explores the challenges of system portability across domains and text genres (movie reviews, news, blogs, and product reviews), highlights the factors affecting system performance on out-of-domain and smallset in-domain data, and presents a new system consisting of the ensemble of two classifiers with precision-based vote weighting, that provides significant gains in accuracy and recall over the corpus-based classifier and the lexicon-based system taken individually.",2008
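The ensemble described above weights each classifier's vote by its precision for the label it predicts. A minimal sketch of that combination rule follows, with hypothetical precision estimates for a corpus-based and a lexicon-based system; the numbers and system names are invented for illustration.

```python
def precision_weighted_vote(predictions, precisions):
    """predictions: {system_name: predicted_label};
    precisions: {system_name: {label: precision}} measured on held-out data.
    Each system's vote counts in proportion to its precision for the label it
    predicts (a sketch of precision-based vote weighting)."""
    scores = {}
    for system, label in predictions.items():
        weight = precisions[system].get(label, 0.0)
        scores[label] = scores.get(label, 0.0) + weight
    return max(scores, key=scores.get)

# Hypothetical figures: the lexicon-based system is more precise on "negative",
# the in-domain corpus-based classifier on "positive".
precisions = {"corpus": {"positive": 0.80, "negative": 0.65},
              "lexicon": {"positive": 0.70, "negative": 0.85}}
print(precision_weighted_vote({"corpus": "positive", "lexicon": "negative"}, precisions))
```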
prickett-etal-2018-seq2seq,https://aclanthology.org/W18-5810,0,,,,,,,"Seq2Seq Models with Dropout can Learn Generalizable Reduplication. Natural language reduplication can pose a challenge to neural models of language, and has been argued to require variables (Marcus et al., 1999). Sequence-to-sequence neural networks have been shown to perform well at a number of other morphological tasks (Cotterell et al., 2016), and produce results that highly correlate with human behavior (Kirov, 2017; Kirov & Cotterell, 2018) but do not include any explicit variables in their architecture. We find that they can learn a reduplicative pattern that generalizes to novel segments if they are trained with dropout (Srivastava et al., 2014). We argue that this matches the scope of generalization observed in human reduplication.",{S}eq2{S}eq Models with Dropout can Learn Generalizable Reduplication,"Natural language reduplication can pose a challenge to neural models of language, and has been argued to require variables (Marcus et al., 1999). Sequence-to-sequence neural networks have been shown to perform well at a number of other morphological tasks (Cotterell et al., 2016), and produce results that highly correlate with human behavior (Kirov, 2017; Kirov & Cotterell, 2018) but do not include any explicit variables in their architecture. We find that they can learn a reduplicative pattern that generalizes to novel segments if they are trained with dropout (Srivastava et al., 2014). We argue that this matches the scope of generalization observed in human reduplication.",Seq2Seq Models with Dropout can Learn Generalizable Reduplication,"Natural language reduplication can pose a challenge to neural models of language, and has been argued to require variables (Marcus et al., 1999). Sequence-to-sequence neural networks have been shown to perform well at a number of other morphological tasks (Cotterell et al., 2016), and produce results that highly correlate with human behavior (Kirov, 2017; Kirov & Cotterell, 2018) but do not include any explicit variables in their architecture. We find that they can learn a reduplicative pattern that generalizes to novel segments if they are trained with dropout (Srivastava et al., 2014). We argue that this matches the scope of generalization observed in human reduplication.","The authors would like to thank the members of the UMass Sound Workshop, the members of the UMass NLP Reading Group, Tal Linzen, and Ryan Cotterell for helpful feedback and discussion. Additionally, we would like to thank the SIGMORPHON reviewers for their comments. This work was supported by NSF Grant #1650957.","Seq2Seq Models with Dropout can Learn Generalizable Reduplication. Natural language reduplication can pose a challenge to neural models of language, and has been argued to require variables (Marcus et al., 1999). Sequence-to-sequence neural networks have been shown to perform well at a number of other morphological tasks (Cotterell et al., 2016), and produce results that highly correlate with human behavior (Kirov, 2017; Kirov & Cotterell, 2018) but do not include any explicit variables in their architecture. We find that they can learn a reduplicative pattern that generalizes to novel segments if they are trained with dropout (Srivastava et al., 2014). We argue that this matches the scope of generalization observed in human reduplication.",2018
simoes-etal-2016-enriching,https://aclanthology.org/L16-1426,0,,,,,,,"Enriching a Portuguese WordNet using Synonyms from a Monolingual Dictionary. In this article we present an exploratory approach to enrich a WordNet-like lexical ontology with the synonyms present in a standard monolingual Portuguese dictionary. The dictionary was converted from PDF into XML and senses were automatically identified and annotated. This allowed us to extract them, independently of definitions, and to create sets of synonyms (synsets). These synsets were then aligned with WordNet synsets, both in the same language (Portuguese) and projecting the Portuguese terms into English, Spanish and Galician. This process allowed both the addition of new term variants to existing synsets, as to create new synsets for Portuguese.",Enriching a {P}ortuguese {W}ord{N}et using Synonyms from a Monolingual Dictionary,"In this article we present an exploratory approach to enrich a WordNet-like lexical ontology with the synonyms present in a standard monolingual Portuguese dictionary. The dictionary was converted from PDF into XML and senses were automatically identified and annotated. This allowed us to extract them, independently of definitions, and to create sets of synonyms (synsets). These synsets were then aligned with WordNet synsets, both in the same language (Portuguese) and projecting the Portuguese terms into English, Spanish and Galician. This process allowed both the addition of new term variants to existing synsets, as to create new synsets for Portuguese.",Enriching a Portuguese WordNet using Synonyms from a Monolingual Dictionary,"In this article we present an exploratory approach to enrich a WordNet-like lexical ontology with the synonyms present in a standard monolingual Portuguese dictionary. The dictionary was converted from PDF into XML and senses were automatically identified and annotated. This allowed us to extract them, independently of definitions, and to create sets of synonyms (synsets). These synsets were then aligned with WordNet synsets, both in the same language (Portuguese) and projecting the Portuguese terms into English, Spanish and Galician. This process allowed both the addition of new term variants to existing synsets, as to create new synsets for Portuguese.",,"Enriching a Portuguese WordNet using Synonyms from a Monolingual Dictionary. In this article we present an exploratory approach to enrich a WordNet-like lexical ontology with the synonyms present in a standard monolingual Portuguese dictionary. The dictionary was converted from PDF into XML and senses were automatically identified and annotated. This allowed us to extract them, independently of definitions, and to create sets of synonyms (synsets). These synsets were then aligned with WordNet synsets, both in the same language (Portuguese) and projecting the Portuguese terms into English, Spanish and Galician. This process allowed both the addition of new term variants to existing synsets, as to create new synsets for Portuguese.",2016
levin-2018-annotation,https://aclanthology.org/W18-4901,0,,,,,,,"Annotation Schemes for Surface Construction Labeling. In this talk I will describe the interaction of linguistics and language technologies in Surface Construction Labeling (SCL) from the perspective of corpus annotation tasks such as definiteness, modality, and causality. Linguistically, following Construction Grammar, SCL recognizes that meaning may be carried by morphemes, words, or arbitrary constellations of morpho-lexical elements. SCL is like Shallow Semantic Parsing in that it does not attempt a full compositional analysis of meaning, but rather identifies only the main elements of a semantic frame, where the frames may be invoked by constructions as well as lexical items. Computationally, SCL is different from tasks such as information extraction in that it deals only with meanings that are expressed in a conventional, grammaticalized way and does not address inferred meanings. I review the work of Dunietz (2018) on the labeling of causal frames including causal connectives and cause and effect arguments. I will describe how to design an annotation scheme for SCL, including isolating basic units of form and meaning and building a ""constructicon"". I will conclude with remarks about the nature of universal categories and universal meaning representations in language technologies. This talk describes joint work with",Annotation Schemes for Surface Construction Labeling,"In this talk I will describe the interaction of linguistics and language technologies in Surface Construction Labeling (SCL) from the perspective of corpus annotation tasks such as definiteness, modality, and causality. Linguistically, following Construction Grammar, SCL recognizes that meaning may be carried by morphemes, words, or arbitrary constellations of morpho-lexical elements. SCL is like Shallow Semantic Parsing in that it does not attempt a full compositional analysis of meaning, but rather identifies only the main elements of a semantic frame, where the frames may be invoked by constructions as well as lexical items. Computationally, SCL is different from tasks such as information extraction in that it deals only with meanings that are expressed in a conventional, grammaticalized way and does not address inferred meanings. I review the work of Dunietz (2018) on the labeling of causal frames including causal connectives and cause and effect arguments. I will describe how to design an annotation scheme for SCL, including isolating basic units of form and meaning and building a ""constructicon"". I will conclude with remarks about the nature of universal categories and universal meaning representations in language technologies. This talk describes joint work with",Annotation Schemes for Surface Construction Labeling,"In this talk I will describe the interaction of linguistics and language technologies in Surface Construction Labeling (SCL) from the perspective of corpus annotation tasks such as definiteness, modality, and causality. Linguistically, following Construction Grammar, SCL recognizes that meaning may be carried by morphemes, words, or arbitrary constellations of morpho-lexical elements. SCL is like Shallow Semantic Parsing in that it does not attempt a full compositional analysis of meaning, but rather identifies only the main elements of a semantic frame, where the frames may be invoked by constructions as well as lexical items. 
Computationally, SCL is different from tasks such as information extraction in that it deals only with meanings that are expressed in a conventional, grammaticalized way and does not address inferred meanings. I review the work of Dunietz (2018) on the labeling of causal frames including causal connectives and cause and effect arguments. I will describe how to design an annotation scheme for SCL, including isolating basic units of form and meaning and building a ""constructicon"". I will conclude with remarks about the nature of universal categories and universal meaning representations in language technologies. This talk describes joint work with",,"Annotation Schemes for Surface Construction Labeling. In this talk I will describe the interaction of linguistics and language technologies in Surface Construction Labeling (SCL) from the perspective of corpus annotation tasks such as definiteness, modality, and causality. Linguistically, following Construction Grammar, SCL recognizes that meaning may be carried by morphemes, words, or arbitrary constellations of morpho-lexical elements. SCL is like Shallow Semantic Parsing in that it does not attempt a full compositional analysis of meaning, but rather identifies only the main elements of a semantic frame, where the frames may be invoked by constructions as well as lexical items. Computationally, SCL is different from tasks such as information extraction in that it deals only with meanings that are expressed in a conventional, grammaticalized way and does not address inferred meanings. I review the work of Dunietz (2018) on the labeling of causal frames including causal connectives and cause and effect arguments. I will describe how to design an annotation scheme for SCL, including isolating basic units of form and meaning and building a ""constructicon"". I will conclude with remarks about the nature of universal categories and universal meaning representations in language technologies. This talk describes joint work with",2018
iwamoto-yukawa-2020-rijp,https://aclanthology.org/2020.semeval-1.10,0,,,,,,,"RIJP at SemEval-2020 Task 1: Gaussian-based Embeddings for Semantic Change Detection. This paper describes the model proposed and submitted by our RIJP team to SemEval 2020 Task1: Unsupervised Lexical Semantic Change Detection. In the model, words are represented by Gaussian distributions. For Subtask 1, the model achieved average scores of 0.51 and 0.70 in the evaluation and post-evaluation processes, respectively. The higher score in the post-evaluation process than that in the evaluation process was achieved owing to appropriate parameter tuning. The results indicate that the proposed Gaussian-based embedding model is able to express semantic shifts while having a low computational complexity.",{RIJP} at {S}em{E}val-2020 Task 1: {G}aussian-based Embeddings for Semantic Change Detection,"This paper describes the model proposed and submitted by our RIJP team to SemEval 2020 Task1: Unsupervised Lexical Semantic Change Detection. In the model, words are represented by Gaussian distributions. For Subtask 1, the model achieved average scores of 0.51 and 0.70 in the evaluation and post-evaluation processes, respectively. The higher score in the post-evaluation process than that in the evaluation process was achieved owing to appropriate parameter tuning. The results indicate that the proposed Gaussian-based embedding model is able to express semantic shifts while having a low computational complexity.",RIJP at SemEval-2020 Task 1: Gaussian-based Embeddings for Semantic Change Detection,"This paper describes the model proposed and submitted by our RIJP team to SemEval 2020 Task1: Unsupervised Lexical Semantic Change Detection. In the model, words are represented by Gaussian distributions. For Subtask 1, the model achieved average scores of 0.51 and 0.70 in the evaluation and post-evaluation processes, respectively. The higher score in the post-evaluation process than that in the evaluation process was achieved owing to appropriate parameter tuning. The results indicate that the proposed Gaussian-based embedding model is able to express semantic shifts while having a low computational complexity.",We gratefully acknowledge Kwangjin Jeong for valuable discussions and the anonymous reviewers for useful comments.,"RIJP at SemEval-2020 Task 1: Gaussian-based Embeddings for Semantic Change Detection. This paper describes the model proposed and submitted by our RIJP team to SemEval 2020 Task1: Unsupervised Lexical Semantic Change Detection. In the model, words are represented by Gaussian distributions. For Subtask 1, the model achieved average scores of 0.51 and 0.70 in the evaluation and post-evaluation processes, respectively. The higher score in the post-evaluation process than that in the evaluation process was achieved owing to appropriate parameter tuning. The results indicate that the proposed Gaussian-based embedding model is able to express semantic shifts while having a low computational complexity.",2020
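Representing each word as a Gaussian suggests measuring semantic change by how far the distribution estimated on an earlier corpus diverges from the one estimated on a later corpus. One common choice is the KL divergence between diagonal Gaussians, sketched below with invented 3-dimensional embeddings; the team's actual change score is not specified here and may differ.

```python
import numpy as np

def kl_diag_gaussians(m0, v0, m1, v1):
    """KL( N(m0, diag v0) || N(m1, diag v1) ) for diagonal Gaussians.
    One way (among several) to compare Gaussian word representations trained on
    two time periods; a larger divergence suggests more semantic change."""
    m0, v0, m1, v1 = map(np.asarray, (m0, v0, m1, v1))
    return 0.5 * np.sum(np.log(v1 / v0) + (v0 + (m0 - m1) ** 2) / v1 - 1.0)

# Hypothetical 3-d Gaussian embeddings of the same word in two corpora.
early = (np.array([0.2, -0.1, 0.5]), np.array([0.3, 0.3, 0.2]))
later = (np.array([0.9,  0.4, 0.1]), np.array([0.2, 0.4, 0.3]))
print(float(kl_diag_gaussians(*early, *later)))
```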
jones-1994-exploring,https://aclanthology.org/C94-1069,0,,,,,,,"Exploring the Role of Punctuation in Parsing Natural Text. Few, if any, current NLP systems make any significant use of punctuation. Intuitively, a treatment of punctuation seems necessary to the analysis and production of text. Whilst this has been suggested in the fields of discourse structure, it is still unclear whether punctuation can help in the syntactic field. This investigation attempts to answer this question by parsing some corpus-based material with two similar grammars: one including rules for punctuation, the other ignoring it. The punctuated grammar significantly outperforms the unpunctuated one, and so the conclusion is that punctuation can play a useful role in syntactic processing.",Exploring the Role of Punctuation in Parsing Natural Text,"Few, if any, current NLP systems make any significant use of punctuation. Intuitively, a treatment of punctuation seems necessary to the analysis and production of text. Whilst this has been suggested in the fields of discourse structure, it is still unclear whether punctuation can help in the syntactic field. This investigation attempts to answer this question by parsing some corpus-based material with two similar grammars: one including rules for punctuation, the other ignoring it. The punctuated grammar significantly outperforms the unpunctuated one, and so the conclusion is that punctuation can play a useful role in syntactic processing.",Exploring the Role of Punctuation in Parsing Natural Text,"Few, if any, current NLP systems make any significant use of punctuation. Intuitively, a treatment of punctuation seems necessary to the analysis and production of text. Whilst this has been suggested in the fields of discourse structure, it is still unclear whether punctuation can help in the syntactic field. This investigation attempts to answer this question by parsing some corpus-based material with two similar grammars: one including rules for punctuation, the other ignoring it. The punctuated grammar significantly outperforms the unpunctuated one, and so the conclusion is that punctuation can play a useful role in syntactic processing.","This work was carried out under Esprit Acquilex-II, BRA 7315, and an ESRC Research Studentship, R004293:1,1171. Thanks for instructive and helpful comments to Ted Briscoe, John Carroll, Robert Dale, Henry Thompson and anonymous CoLing reviewers.","Exploring the Role of Punctuation in Parsing Natural Text. Few, if any, current NLP systems make any significant use of punctuation. Intuitively, a treatment of punctuation seems necessary to the analysis and production of text. Whilst this has been suggested in the fields of discourse structure, it is still unclear whether punctuation can help in the syntactic field. This investigation attempts to answer this question by parsing some corpus-based material with two similar grammars: one including rules for punctuation, the other ignoring it. The punctuated grammar significantly outperforms the unpunctuated one, and so the conclusion is that punctuation can play a useful role in syntactic processing.",1994
wachsmuth-etal-2018-argumentation,https://aclanthology.org/C18-1318,0,,,,,,,"Argumentation Synthesis following Rhetorical Strategies. Persuasion is rarely achieved through a loose set of arguments alone. Rather, an effective delivery of arguments follows a rhetorical strategy, combining logical reasoning with appeals to ethics and emotion. We argue that such a strategy means to select, arrange, and phrase a set of argumentative discourse units. In this paper, we model rhetorical strategies for the computational synthesis of effective argumentation. In a study, we let 26 experts synthesize argumentative texts with different strategies for 10 topics. We find that the experts agree in the selection significantly more when following the same strategy. While the texts notably vary for different strategies, especially their arrangement remains stable. The results suggest that our model enables a strategical synthesis.",Argumentation Synthesis following Rhetorical Strategies,"Persuasion is rarely achieved through a loose set of arguments alone. Rather, an effective delivery of arguments follows a rhetorical strategy, combining logical reasoning with appeals to ethics and emotion. We argue that such a strategy means to select, arrange, and phrase a set of argumentative discourse units. In this paper, we model rhetorical strategies for the computational synthesis of effective argumentation. In a study, we let 26 experts synthesize argumentative texts with different strategies for 10 topics. We find that the experts agree in the selection significantly more when following the same strategy. While the texts notably vary for different strategies, especially their arrangement remains stable. The results suggest that our model enables a strategical synthesis.",Argumentation Synthesis following Rhetorical Strategies,"Persuasion is rarely achieved through a loose set of arguments alone. Rather, an effective delivery of arguments follows a rhetorical strategy, combining logical reasoning with appeals to ethics and emotion. We argue that such a strategy means to select, arrange, and phrase a set of argumentative discourse units. In this paper, we model rhetorical strategies for the computational synthesis of effective argumentation. In a study, we let 26 experts synthesize argumentative texts with different strategies for 10 topics. We find that the experts agree in the selection significantly more when following the same strategy. While the texts notably vary for different strategies, especially their arrangement remains stable. The results suggest that our model enables a strategical synthesis.","Acknowledgments Thanks to Yamen Ajjour, Wei-Fan Chen, Yulia Clausen, Debopam Das, Erdan Genc, Tim Gollub, Yulia Grishina, Erik Hägert, Johannes Kiesel, Lukas Paschen, Martin Potthast, Robin Schäfer, Constanze Schmitt, Uladzimir Sidarenka, Shahbaz Syed, and Michael Völske for taking part in our study.","Argumentation Synthesis following Rhetorical Strategies. Persuasion is rarely achieved through a loose set of arguments alone. Rather, an effective delivery of arguments follows a rhetorical strategy, combining logical reasoning with appeals to ethics and emotion. We argue that such a strategy means to select, arrange, and phrase a set of argumentative discourse units. In this paper, we model rhetorical strategies for the computational synthesis of effective argumentation. In a study, we let 26 experts synthesize argumentative texts with different strategies for 10 topics. 
We find that the experts agree in the selection significantly more when following the same strategy. While the texts notably vary for different strategies, especially their arrangement remains stable. The results suggest that our model enables a strategical synthesis.",2018
guo-etal-2017-effective,https://aclanthology.org/E17-1011,1,,,,education,,,"Which is the Effective Way for Gaokao: Information Retrieval or Neural Networks?. As one of the most important test of China, Gaokao is designed to be difficult enough to distinguish the excellent high school students. In this work, we detailed the Gaokao History Multiple Choice Questions(GKHMC) and proposed two different approaches to address them using various resources. One approach is based on entity search technique (IR approach), the other is based on text entailment approach where we specifically employ deep neural networks(NN approach). The result of experiment on our collected real Gaokao questions showed that they are good at different categories of questions, i.e. IR approach performs much better at entity questions(EQs) while NN approach shows its advantage on sentence questions(SQs). Our new method achieves state-of-the-art performance and show that it's indispensable to apply hybrid method when participating in the real-world tests.",Which is the Effective Way for {G}aokao: Information Retrieval or Neural Networks?,"As one of the most important test of China, Gaokao is designed to be difficult enough to distinguish the excellent high school students. In this work, we detailed the Gaokao History Multiple Choice Questions(GKHMC) and proposed two different approaches to address them using various resources. One approach is based on entity search technique (IR approach), the other is based on text entailment approach where we specifically employ deep neural networks(NN approach). The result of experiment on our collected real Gaokao questions showed that they are good at different categories of questions, i.e. IR approach performs much better at entity questions(EQs) while NN approach shows its advantage on sentence questions(SQs). Our new method achieves state-of-the-art performance and show that it's indispensable to apply hybrid method when participating in the real-world tests.",Which is the Effective Way for Gaokao: Information Retrieval or Neural Networks?,"As one of the most important test of China, Gaokao is designed to be difficult enough to distinguish the excellent high school students. In this work, we detailed the Gaokao History Multiple Choice Questions(GKHMC) and proposed two different approaches to address them using various resources. One approach is based on entity search technique (IR approach), the other is based on text entailment approach where we specifically employ deep neural networks(NN approach). The result of experiment on our collected real Gaokao questions showed that they are good at different categories of questions, i.e. IR approach performs much better at entity questions(EQs) while NN approach shows its advantage on sentence questions(SQs). Our new method achieves state-of-the-art performance and show that it's indispensable to apply hybrid method when participating in the real-world tests.",We thank for the anonymous reviewers for helpful comments. This work was supported by the National High Technology Development 863 Program of China (No.2015AA015405) and the Natural Science Foundation of China (No.61533018). And this research work was also supported by Google through focused research awards program.,"Which is the Effective Way for Gaokao: Information Retrieval or Neural Networks?. As one of the most important test of China, Gaokao is designed to be difficult enough to distinguish the excellent high school students. 
In this work, we detailed the Gaokao History Multiple Choice Questions(GKHMC) and proposed two different approaches to address them using various resources. One approach is based on entity search technique (IR approach), the other is based on text entailment approach where we specifically employ deep neural networks(NN approach). The result of experiment on our collected real Gaokao questions showed that they are good at different categories of questions, i.e. IR approach performs much better at entity questions(EQs) while NN approach shows its advantage on sentence questions(SQs). Our new method achieves state-of-the-art performance and show that it's indispensable to apply hybrid method when participating in the real-world tests.",2017
blekhman-etal-1997-pars,https://aclanthology.org/1997.mtsummit-papers.16,0,,,,business_use,,,"PARS/U for Windows: The World's First Commercial English-Ukrainian and Ukrainian-English Machine Translation System. The paper describes the PARS/U Ukrainian-English bidirectional MT system by Lingvistica '93 Co. PARS/U translates MS Word and HTML files as well as screen Helps. It features an easy-to-master dictionary updating program, which permits the user to customize the system by means of running subject-area oriented texts through the MT engine. PARS/U is marketed in Ukraine and North America.",{PARS}/{U} for Windows: The World{'}s First Commercial {E}nglish-{U}krainian and {U}krainian-{E}nglish Machine Translation System,"The paper describes the PARS/U Ukrainian-English bidirectional MT system by Lingvistica '93 Co. PARS/U translates MS Word and HTML files as well as screen Helps. It features an easy-to-master dictionary updating program, which permits the user to customize the system by means of running subject-area oriented texts through the MT engine. PARS/U is marketed in Ukraine and North America.",PARS/U for Windows: The World's First Commercial English-Ukrainian and Ukrainian-English Machine Translation System,"The paper describes the PARS/U Ukrainian-English bidirectional MT system by Lingvistica '93 Co. PARS/U translates MS Word and HTML files as well as screen Helps. It features an easy-to-master dictionary updating program, which permits the user to customize the system by means of running subject-area oriented texts through the MT engine. PARS/U is marketed in Ukraine and North America.",,"PARS/U for Windows: The World's First Commercial English-Ukrainian and Ukrainian-English Machine Translation System. The paper describes the PARS/U Ukrainian-English bidirectional MT system by Lingvistica '93 Co. PARS/U translates MS Word and HTML files as well as screen Helps. It features an easy-to-master dictionary updating program, which permits the user to customize the system by means of running subject-area oriented texts through the MT engine. PARS/U is marketed in Ukraine and North America.",1997
ding-etal-2014-using,https://aclanthology.org/D14-1148,0,,,,finance,,,"Using Structured Events to Predict Stock Price Movement: An Empirical Investigation. It has been shown that news events influence the trends of stock price movements. However, previous work on news-driven stock market prediction rely on shallow features (such as bags-of-words, named entities and noun phrases), which do not capture structured entity-relation information, and hence cannot represent complete and exact events. Recent advances in Open Information Extraction (Open IE) techniques enable the extraction of structured events from web-scale data. We propose to adapt Open IE technology for event-based stock price movement prediction, extracting structured events from large-scale public news without manual efforts. Both linear and nonlinear models are employed to empirically investigate the hidden and complex relationships between events and the stock market. Largescale experiments show that the accuracy of S&P 500 index prediction is 60%, and that of individual stock prediction can be over 70%. Our event-based system outperforms bags-of-words-based baselines, and previously reported systems trained on S&P 500 stock historical data.",Using Structured Events to Predict Stock Price Movement: An Empirical Investigation,"It has been shown that news events influence the trends of stock price movements. However, previous work on news-driven stock market prediction rely on shallow features (such as bags-of-words, named entities and noun phrases), which do not capture structured entity-relation information, and hence cannot represent complete and exact events. Recent advances in Open Information Extraction (Open IE) techniques enable the extraction of structured events from web-scale data. We propose to adapt Open IE technology for event-based stock price movement prediction, extracting structured events from large-scale public news without manual efforts. Both linear and nonlinear models are employed to empirically investigate the hidden and complex relationships between events and the stock market. Largescale experiments show that the accuracy of S&P 500 index prediction is 60%, and that of individual stock prediction can be over 70%. Our event-based system outperforms bags-of-words-based baselines, and previously reported systems trained on S&P 500 stock historical data.",Using Structured Events to Predict Stock Price Movement: An Empirical Investigation,"It has been shown that news events influence the trends of stock price movements. However, previous work on news-driven stock market prediction rely on shallow features (such as bags-of-words, named entities and noun phrases), which do not capture structured entity-relation information, and hence cannot represent complete and exact events. Recent advances in Open Information Extraction (Open IE) techniques enable the extraction of structured events from web-scale data. We propose to adapt Open IE technology for event-based stock price movement prediction, extracting structured events from large-scale public news without manual efforts. Both linear and nonlinear models are employed to empirically investigate the hidden and complex relationships between events and the stock market. Largescale experiments show that the accuracy of S&P 500 index prediction is 60%, and that of individual stock prediction can be over 70%. 
Our event-based system outperforms bags-of-words-based baselines, and previously reported systems trained on S&P 500 stock historical data.","We thank the anonymous reviewers for their constructive comments, and gratefully acknowledge the support of the National Basic Research Program (973 Program) of China via Grant 2014CB340503, the National Natural Science Foundation of China (NSFC) via Grant 61133012 and 61202277, the Singapore Ministry of Education (MOE) AcRF Tier 2 grant T2MOE201301 and SRG ISTD 2012 038 from Singapore University of Technology and Design. We are very grateful to Ji Ma for providing an implementation of the neural network algorithm.","Using Structured Events to Predict Stock Price Movement: An Empirical Investigation. It has been shown that news events influence the trends of stock price movements. However, previous work on news-driven stock market prediction rely on shallow features (such as bags-of-words, named entities and noun phrases), which do not capture structured entity-relation information, and hence cannot represent complete and exact events. Recent advances in Open Information Extraction (Open IE) techniques enable the extraction of structured events from web-scale data. We propose to adapt Open IE technology for event-based stock price movement prediction, extracting structured events from large-scale public news without manual efforts. Both linear and nonlinear models are employed to empirically investigate the hidden and complex relationships between events and the stock market. Largescale experiments show that the accuracy of S&P 500 index prediction is 60%, and that of individual stock prediction can be over 70%. Our event-based system outperforms bags-of-words-based baselines, and previously reported systems trained on S&P 500 stock historical data.",2014
deleger-etal-2014-annotation,http://www.lrec-conf.org/proceedings/lrec2014/pdf/552_Paper.pdf,0,,,,,,,"Annotation of specialized corpora using a comprehensive entity and relation scheme. Annotated corpora are essential resources for many applications in Natural Language Processing. They provide insight on the linguistic and semantic characteristics of the genre and domain covered, and can be used for the training and evaluation of automatic tools. In the biomedical domain, annotated corpora of English texts have become available for several genres and subfields. However, very few similar resources are available for languages other than English. In this paper we present an effort to produce a high-quality corpus of clinical documents in French, annotated with a comprehensive scheme of entities and relations. We present the annotation scheme as well as the results of a pilot annotation study covering 35 clinical documents in a variety of subfields and genres. We show that high inter-annotator agreement can be achieved using a complex annotation scheme.",Annotation of specialized corpora using a comprehensive entity and relation scheme,"Annotated corpora are essential resources for many applications in Natural Language Processing. They provide insight on the linguistic and semantic characteristics of the genre and domain covered, and can be used for the training and evaluation of automatic tools. In the biomedical domain, annotated corpora of English texts have become available for several genres and subfields. However, very few similar resources are available for languages other than English. In this paper we present an effort to produce a high-quality corpus of clinical documents in French, annotated with a comprehensive scheme of entities and relations. We present the annotation scheme as well as the results of a pilot annotation study covering 35 clinical documents in a variety of subfields and genres. We show that high inter-annotator agreement can be achieved using a complex annotation scheme.",Annotation of specialized corpora using a comprehensive entity and relation scheme,"Annotated corpora are essential resources for many applications in Natural Language Processing. They provide insight on the linguistic and semantic characteristics of the genre and domain covered, and can be used for the training and evaluation of automatic tools. In the biomedical domain, annotated corpora of English texts have become available for several genres and subfields. However, very few similar resources are available for languages other than English. In this paper we present an effort to produce a high-quality corpus of clinical documents in French, annotated with a comprehensive scheme of entities and relations. We present the annotation scheme as well as the results of a pilot annotation study covering 35 clinical documents in a variety of subfields and genres. We show that high inter-annotator agreement can be achieved using a complex annotation scheme.",This work was supported by the French National Agency for Research under grants CABeRneT 3 (ANR-13-JCJC) and Accordys 4 (ANR-12-CORD-007-03).,"Annotation of specialized corpora using a comprehensive entity and relation scheme. Annotated corpora are essential resources for many applications in Natural Language Processing. They provide insight on the linguistic and semantic characteristics of the genre and domain covered, and can be used for the training and evaluation of automatic tools. 
In the biomedical domain, annotated corpora of English texts have become available for several genres and subfields. However, very few similar resources are available for languages other than English. In this paper we present an effort to produce a high-quality corpus of clinical documents in French, annotated with a comprehensive scheme of entities and relations. We present the annotation scheme as well as the results of a pilot annotation study covering 35 clinical documents in a variety of subfields and genres. We show that high inter-annotator agreement can be achieved using a complex annotation scheme.",2014
hinrichs-etal-2010-weblicht-web,https://aclanthology.org/P10-4005,0,,,,,,,"WebLicht: Web-Based LRT Services for German. This software demonstration presents WebLicht (short for: Web-Based Linguistic Chaining Tool), a webbased service environment for the integration and use of language resources and tools (LRT). WebLicht is being developed as part of the D-SPIN project 1. We-bLicht is implemented as a web application so that there is no need for users to install any software on their own computers or to concern themselves with the technical details involved in building tool chains. The integrated web services are part of a prototypical infrastructure that was developed to facilitate chaining of LRT services. WebLicht allows the integration and use of distributed web services with standardized APIs. The nature of these open and standardized APIs makes it possible to access the web services from nearly any programming language, shell script or workflow engine (UIMA, Gate etc.) Additionally, an application for integration of additional services is available, allowing anyone to contribute his own web service.",{W}eb{L}icht: Web-Based {LRT} Services for {G}erman,"This software demonstration presents WebLicht (short for: Web-Based Linguistic Chaining Tool), a webbased service environment for the integration and use of language resources and tools (LRT). WebLicht is being developed as part of the D-SPIN project 1. We-bLicht is implemented as a web application so that there is no need for users to install any software on their own computers or to concern themselves with the technical details involved in building tool chains. The integrated web services are part of a prototypical infrastructure that was developed to facilitate chaining of LRT services. WebLicht allows the integration and use of distributed web services with standardized APIs. The nature of these open and standardized APIs makes it possible to access the web services from nearly any programming language, shell script or workflow engine (UIMA, Gate etc.) Additionally, an application for integration of additional services is available, allowing anyone to contribute his own web service.",WebLicht: Web-Based LRT Services for German,"This software demonstration presents WebLicht (short for: Web-Based Linguistic Chaining Tool), a webbased service environment for the integration and use of language resources and tools (LRT). WebLicht is being developed as part of the D-SPIN project 1. We-bLicht is implemented as a web application so that there is no need for users to install any software on their own computers or to concern themselves with the technical details involved in building tool chains. The integrated web services are part of a prototypical infrastructure that was developed to facilitate chaining of LRT services. WebLicht allows the integration and use of distributed web services with standardized APIs. The nature of these open and standardized APIs makes it possible to access the web services from nearly any programming language, shell script or workflow engine (UIMA, Gate etc.) Additionally, an application for integration of additional services is available, allowing anyone to contribute his own web service.","WebLicht is the product of a combined effort within the D-SPIN projects (www.d-spin.org). 
Currently, partners include: Seminar für Sprachwissenschaft/Computerlinguistik, Universität Tübingen, Abteilung für Automatische Sprachverarbeitung, Universität Leipzig, Institut für Maschinelle Sprachverarbeitung, Universität Stuttgart and Berlin Brandenburgische Akademie der Wissenschaften.","WebLicht: Web-Based LRT Services for German. This software demonstration presents WebLicht (short for: Web-Based Linguistic Chaining Tool), a web-based service environment for the integration and use of language resources and tools (LRT). WebLicht is being developed as part of the D-SPIN project 1. WebLicht is implemented as a web application so that there is no need for users to install any software on their own computers or to concern themselves with the technical details involved in building tool chains. The integrated web services are part of a prototypical infrastructure that was developed to facilitate chaining of LRT services. WebLicht allows the integration and use of distributed web services with standardized APIs. The nature of these open and standardized APIs makes it possible to access the web services from nearly any programming language, shell script or workflow engine (UIMA, Gate etc.). Additionally, an application for integration of additional services is available, allowing anyone to contribute his own web service.",2010
huang-etal-2014-sentence,http://www.lrec-conf.org/proceedings/lrec2014/pdf/60_Paper.pdf,0,,,,,,,"Sentence Rephrasing for Parsing Sentences with OOV Words. This paper addresses the problems of out-of-vocabulary (OOV) words, named entities in particular, in dependency parsing. The OOV words, whose word forms are unknown to the learning-based parser, in a sentence may decrease the parsing performance. To deal with this problem, we propose a sentence rephrasing approach to replace each OOV word in a sentence with a popular word of the same named entity type in the training set, so that the knowledge of the word forms can be used for parsing. The highest-frequency-based rephrasing strategy and the information-retrieval-based rephrasing strategy are explored to select the word to replace, and the Chinese Treebank 6.0 (CTB6) corpus is adopted to evaluate the feasibility of the proposed sentence rephrasing strategies. Experimental results show that rephrasing some specific types of OOV words such as Corporation, Organization, and Competition increases the parsing performances. This methodology can be applied to domain adaptation to deal with OOV problems.",Sentence Rephrasing for Parsing Sentences with {OOV} Words,"This paper addresses the problems of out-of-vocabulary (OOV) words, named entities in particular, in dependency parsing. The OOV words, whose word forms are unknown to the learning-based parser, in a sentence may decrease the parsing performance. To deal with this problem, we propose a sentence rephrasing approach to replace each OOV word in a sentence with a popular word of the same named entity type in the training set, so that the knowledge of the word forms can be used for parsing. The highest-frequency-based rephrasing strategy and the information-retrieval-based rephrasing strategy are explored to select the word to replace, and the Chinese Treebank 6.0 (CTB6) corpus is adopted to evaluate the feasibility of the proposed sentence rephrasing strategies. Experimental results show that rephrasing some specific types of OOV words such as Corporation, Organization, and Competition increases the parsing performances. This methodology can be applied to domain adaptation to deal with OOV problems.",Sentence Rephrasing for Parsing Sentences with OOV Words,"This paper addresses the problems of out-of-vocabulary (OOV) words, named entities in particular, in dependency parsing. The OOV words, whose word forms are unknown to the learning-based parser, in a sentence may decrease the parsing performance. To deal with this problem, we propose a sentence rephrasing approach to replace each OOV word in a sentence with a popular word of the same named entity type in the training set, so that the knowledge of the word forms can be used for parsing. The highest-frequency-based rephrasing strategy and the information-retrieval-based rephrasing strategy are explored to select the word to replace, and the Chinese Treebank 6.0 (CTB6) corpus is adopted to evaluate the feasibility of the proposed sentence rephrasing strategies. Experimental results show that rephrasing some specific types of OOV words such as Corporation, Organization, and Competition increases the parsing performances. This methodology can be applied to domain adaptation to deal with OOV problems.","This research was partially supported by National Science Council, Taiwan under NSC101-2221-E-002-195-MY3.","Sentence Rephrasing for Parsing Sentences with OOV Words. 
This paper addresses the problems of out-of-vocabulary (OOV) words, named entities in particular, in dependency parsing. The OOV words, whose word forms are unknown to the learning-based parser, in a sentence may decrease the parsing performance. To deal with this problem, we propose a sentence rephrasing approach to replace each OOV word in a sentence with a popular word of the same named entity type in the training set, so that the knowledge of the word forms can be used for parsing. The highest-frequency-based rephrasing strategy and the information-retrieval-based rephrasing strategy are explored to select the word to replace, and the Chinese Treebank 6.0 (CTB6) corpus is adopted to evaluate the feasibility of the proposed sentence rephrasing strategies. Experimental results show that rephrasing some specific types of OOV words such as Corporation, Organization, and Competition increases the parsing performances. This methodology can be applied to domain adaptation to deal with OOV problems.",2014
himmel-1998-visualization,https://aclanthology.org/W98-0205,0,,,,,,,"Visualization for Large Collections of Multimedia Information. Organizations that make use of large amounts of multimedia material (especially images and video) require easy access to such information. Recent developments in computer hardware and algorithm design have made possible content indexing of digital video information and efficient display of 3D data representations. This paper describes collaborative work between Boeing Applied Research & Technology (AR&T), Carnegie Mellon University (CMU), and the Battelle Pacific Northwest National Laboratories (PNNL) to integrate media indexing with computer visualization to achieve effective content-based access to video information. Text metadata, representing video content, was extracted from the CMU Informedia system, processed by AR&T's text analysis software, and presented to users via PNNL's Starlight 3D visualization system. This approach shows how to make multimedia information accessible to a text-based visualization system, facilitating a global view of large collections of such data. We evaluated our approach by making several experimental queries against a library of eight hours of video segmented into several hundred ""video paragraphs."" We conclude that search performance of Informedia was enhanced in terms of ease of exploration by the integration with Starlight.",Visualization for Large Collections of Multimedia Information,"Organizations that make use of large amounts of multimedia material (especially images and video) require easy access to such information. Recent developments in computer hardware and algorithm design have made possible content indexing of digital video information and efficient display of 3D data representations. This paper describes collaborative work between Boeing Applied Research & Technology (AR&T), Carnegie Mellon University (CMU), and the Battelle Pacific Northwest National Laboratories (PNNL) to integrate media indexing with computer visualization to achieve effective content-based access to video information. Text metadata, representing video content, was extracted from the CMU Informedia system, processed by AR&T's text analysis software, and presented to users via PNNL's Starlight 3D visualization system. This approach shows how to make multimedia information accessible to a text-based visualization system, facilitating a global view of large collections of such data. We evaluated our approach by making several experimental queries against a library of eight hours of video segmented into several hundred ""video paragraphs."" We conclude that search performance of Informedia was enhanced in terms of ease of exploration by the integration with Starlight.",Visualization for Large Collections of Multimedia Information,"Organizations that make use of large amounts of multimedia material (especially images and video) require easy access to such information. Recent developments in computer hardware and algorithm design have made possible content indexing of digital video information and efficient display of 3D data representations. This paper describes collaborative work between Boeing Applied Research & Technology (AR&T), Carnegie Mellon University (CMU), and the Battelle Pacific Northwest National Laboratories (PNNL) to integrate media indexing with computer visualization to achieve effective content-based access to video information. 
Text metadata, representing video content, was extracted from the CMU Informedia system, processed by AR&T's text analysis software, and presented to users via PNNL's Starlight 3D visualization system. This approach shows how to make multimedia information accessible to a text-based visualization system, facilitating a global view of large collections of such data. We evaluated our approach by making several experimental queries against a library of eight hours of video segmented into several hundred ""video paragraphs."" We conclude that search performance of Informedia was enhanced in terms of ease of exploration by the integration with Starlight.","Our thanks go to Ricky Houghton and Bryan Maher at the Carnegie Mellon University Informedia project, and to John Risch, Scott Dowson, Brian Moon, and Bruce Rex at the Battelle Pacific Northwest National Laboratories Starlight project for their excellent work leading to this result. The Boeing team also includes Dean Billheimer, Andrew Booker, Fred Holt, Michelle Keim, Dan Pierce, and Jason Wu.","Visualization for Large Collections of Multimedia Information. Organizations that make use of large amounts of multimedia material (especially images and video) require easy access to such information. Recent developments in computer hardware and algorithm design have made possible content indexing of digital video information and efficient display of 3D data representations. This paper describes collaborative work between Boeing Applied Research & Technology (AR&T), Carnegie Mellon University (CMU), and the Battelle Pacific Northwest National Laboratories (PNNL) to integrate media indexing with computer visualization to achieve effective content-based access to video information. Text metadata, representing video content, was extracted from the CMU Informedia system, processed by AR&T's text analysis software, and presented to users via PNNL's Starlight 3D visualization system. This approach shows how to make multimedia information accessible to a text-based visualization system, facilitating a global view of large collections of such data. We evaluated our approach by making several experimental queries against a library of eight hours of video segmented into several hundred ""video paragraphs."" We conclude that search performance of Informedia was enhanced in terms of ease of exploration by the integration with Starlight.",1998
amoia-martinez-2013-using,https://aclanthology.org/W13-2711,0,,,,,,,"Using Comparable Collections of Historical Texts for Building a Diachronic Dictionary for Spelling Normalization. In this paper, we argue that comparable collections of historical written resources can help overcoming typical challenges posed by heritage texts enhancing spelling normalization, POS-tagging and subsequent diachronic linguistic analyses. Thus, we present a comparable corpus of historical German recipes and show how such a comparable text collection together with the application of innovative MT inspired strategies allow us (i) to address the word form normalization problem and (ii) to automatically generate a diachronic dictionary of spelling variants. Such a diachronic dictionary can be used both for spelling normalization and for extracting new ""translation"" (word formation/change) rules for diachronic spelling variants. Moreover, our approach can be applied virtually to any diachronic collection of texts regardless of the time span they represent. A first evaluation shows that our approach compares well with state-of-art approaches.",Using Comparable Collections of Historical Texts for Building a Diachronic Dictionary for Spelling Normalization,"In this paper, we argue that comparable collections of historical written resources can help overcoming typical challenges posed by heritage texts enhancing spelling normalization, POS-tagging and subsequent diachronic linguistic analyses. Thus, we present a comparable corpus of historical German recipes and show how such a comparable text collection together with the application of innovative MT inspired strategies allow us (i) to address the word form normalization problem and (ii) to automatically generate a diachronic dictionary of spelling variants. Such a diachronic dictionary can be used both for spelling normalization and for extracting new ""translation"" (word formation/change) rules for diachronic spelling variants. Moreover, our approach can be applied virtually to any diachronic collection of texts regardless of the time span they represent. A first evaluation shows that our approach compares well with state-of-art approaches.",Using Comparable Collections of Historical Texts for Building a Diachronic Dictionary for Spelling Normalization,"In this paper, we argue that comparable collections of historical written resources can help overcoming typical challenges posed by heritage texts enhancing spelling normalization, POS-tagging and subsequent diachronic linguistic analyses. Thus, we present a comparable corpus of historical German recipes and show how such a comparable text collection together with the application of innovative MT inspired strategies allow us (i) to address the word form normalization problem and (ii) to automatically generate a diachronic dictionary of spelling variants. Such a diachronic dictionary can be used both for spelling normalization and for extracting new ""translation"" (word formation/change) rules for diachronic spelling variants. Moreover, our approach can be applied virtually to any diachronic collection of texts regardless of the time span they represent. A first evaluation shows that our approach compares well with state-of-art approaches.",,"Using Comparable Collections of Historical Texts for Building a Diachronic Dictionary for Spelling Normalization. 
In this paper, we argue that comparable collections of historical written resources can help overcome typical challenges posed by heritage texts, enhancing spelling normalization, POS-tagging and subsequent diachronic linguistic analyses. Thus, we present a comparable corpus of historical German recipes and show how such a comparable text collection together with the application of innovative MT-inspired strategies allows us (i) to address the word form normalization problem and (ii) to automatically generate a diachronic dictionary of spelling variants. Such a diachronic dictionary can be used both for spelling normalization and for extracting new ""translation"" (word formation/change) rules for diachronic spelling variants. Moreover, our approach can be applied virtually to any diachronic collection of texts regardless of the time span they represent. A first evaluation shows that our approach compares well with state-of-the-art approaches.",2013
vlachos-2006-active,https://aclanthology.org/W06-2209,0,,,,,,,"Active Annotation. This paper introduces a semi-supervised learning framework for creating training material, namely active annotation. The main intuition is that an unsupervised method is used to initially annotate imperfectly the data and then the errors made are detected automatically and corrected by a human annotator. We applied active annotation to named entity recognition in the biomedical domain and encouraging results were obtained. The main advantages over the popular active learning framework are that no seed annotated data is needed and that the reusability of the data is maintained. In addition to the framework, an efficient uncertainty estimation for Hidden Markov Models is presented.",Active Annotation,"This paper introduces a semi-supervised learning framework for creating training material, namely active annotation. The main intuition is that an unsupervised method is used to initially annotate imperfectly the data and then the errors made are detected automatically and corrected by a human annotator. We applied active annotation to named entity recognition in the biomedical domain and encouraging results were obtained. The main advantages over the popular active learning framework are that no seed annotated data is needed and that the reusability of the data is maintained. In addition to the framework, an efficient uncertainty estimation for Hidden Markov Models is presented.",Active Annotation,"This paper introduces a semi-supervised learning framework for creating training material, namely active annotation. The main intuition is that an unsupervised method is used to initially annotate imperfectly the data and then the errors made are detected automatically and corrected by a human annotator. We applied active annotation to named entity recognition in the biomedical domain and encouraging results were obtained. The main advantages over the popular active learning framework are that no seed annotated data is needed and that the reusability of the data is maintained. In addition to the framework, an efficient uncertainty estimation for Hidden Markov Models is presented.","The author was funded by BBSRC, grant number 38688. I would like to thank Ted Briscoe and Bob Carpenter for their feedback and comments.","Active Annotation. This paper introduces a semi-supervised learning framework for creating training material, namely active annotation. The main intuition is that an unsupervised method is used to initially annotate imperfectly the data and then the errors made are detected automatically and corrected by a human annotator. We applied active annotation to named entity recognition in the biomedical domain and encouraging results were obtained. The main advantages over the popular active learning framework are that no seed annotated data is needed and that the reusability of the data is maintained. In addition to the framework, an efficient uncertainty estimation for Hidden Markov Models is presented.",2006
patry-etal-2006-mood,http://www.lrec-conf.org/proceedings/lrec2006/pdf/542_pdf.pdf,0,,,,,,,"MOOD: A Modular Object-Oriented Decoder for Statistical Machine Translation. We present an Open Source framework called MOOD developed in order to facilitate the development of a Statistical Machine Translation Decoder. MOOD has been modularized using an object-oriented approach which makes it especially suitable for the fast development of state-of-the-art decoders. As a proof of concept, a clone of the PHARAOH decoder has been implemented and evaluated. This clone named RAMSES is part of the current distribution of MOOD.",{MOOD}: A Modular Object-Oriented Decoder for Statistical Machine Translation,"We present an Open Source framework called MOOD developed in order to facilitate the development of a Statistical Machine Translation Decoder. MOOD has been modularized using an object-oriented approach which makes it especially suitable for the fast development of state-of-the-art decoders. As a proof of concept, a clone of the PHARAOH decoder has been implemented and evaluated. This clone named RAMSES is part of the current distribution of MOOD.",MOOD: A Modular Object-Oriented Decoder for Statistical Machine Translation,"We present an Open Source framework called MOOD developed in order to facilitate the development of a Statistical Machine Translation Decoder. MOOD has been modularized using an object-oriented approach which makes it especially suitable for the fast development of state-of-the-art decoders. As a proof of concept, a clone of the PHARAOH decoder has been implemented and evaluated. This clone named RAMSES is part of the current distribution of MOOD.",,"MOOD: A Modular Object-Oriented Decoder for Statistical Machine Translation. We present an Open Source framework called MOOD developed in order to facilitate the development of a Statistical Machine Translation Decoder. MOOD has been modularized using an object-oriented approach which makes it especially suitable for the fast development of state-of-the-art decoders. As a proof of concept, a clone of the PHARAOH decoder has been implemented and evaluated. This clone named RAMSES is part of the current distribution of MOOD.",2006
feng-etal-2015-blcunlp,https://aclanthology.org/S15-2054,0,,,,,,,"BLCUNLP: Corpus Pattern Analysis for Verbs Based on Dependency Chain. We implemented a syntactic and semantic tagging system for SemEval 2015 Task 15: Corpus Pattern Analysis. For syntactic tagging, we present a Dependency Chain Search Algorithm that is found to be effective at identifying structurally distant subjects and objects. Other syntactic labels are identified using rules defined over dependency parse structures and the output of a verb classification module. Semantic tagging is performed using a simple lexical mapping table combined with postprocessing rules written over phrase structure constituent types and named entity information. The final score of our system is 0.530 F1, ranking second in this task.",{BLCUNLP}: Corpus Pattern Analysis for Verbs Based on Dependency Chain,"We implemented a syntactic and semantic tagging system for SemEval 2015 Task 15: Corpus Pattern Analysis. For syntactic tagging, we present a Dependency Chain Search Algorithm that is found to be effective at identifying structurally distant subjects and objects. Other syntactic labels are identified using rules defined over dependency parse structures and the output of a verb classification module. Semantic tagging is performed using a simple lexical mapping table combined with postprocessing rules written over phrase structure constituent types and named entity information. The final score of our system is 0.530 F1, ranking second in this task.",BLCUNLP: Corpus Pattern Analysis for Verbs Based on Dependency Chain,"We implemented a syntactic and semantic tagging system for SemEval 2015 Task 15: Corpus Pattern Analysis. For syntactic tagging, we present a Dependency Chain Search Algorithm that is found to be effective at identifying structurally distant subjects and objects. Other syntactic labels are identified using rules defined over dependency parse structures and the output of a verb classification module. Semantic tagging is performed using a simple lexical mapping table combined with postprocessing rules written over phrase structure constituent types and named entity information. The final score of our system is 0.530 F1, ranking second in this task.","We would like to thank the anonymous reviewers for their helpful suggestions and comments. The research work is funded by the Natural Science Foundation of China (No.61300081, 61170162), and the Fundamental Research Funds for the Central Universities in BLCU (No. 15YJ030006).","BLCUNLP: Corpus Pattern Analysis for Verbs Based on Dependency Chain. We implemented a syntactic and semantic tagging system for SemEval 2015 Task 15: Corpus Pattern Analysis. For syntactic tagging, we present a Dependency Chain Search Algorithm that is found to be effective at identifying structurally distant subjects and objects. Other syntactic labels are identified using rules defined over dependency parse structures and the output of a verb classification module. Semantic tagging is performed using a simple lexical mapping table combined with postprocessing rules written over phrase structure constituent types and named entity information. The final score of our system is 0.530 F1, ranking second in this task.",2015
wang-etal-2018-semi-autoregressive,https://aclanthology.org/D18-1044,0,,,,,,,"Semi-Autoregressive Neural Machine Translation. Existing approaches to neural machine translation are typically autoregressive models. While these models attain state-of-the-art translation quality, they are suffering from low parallelizability and thus slow at decoding long sequences. In this paper, we propose a novel model for fast sequence generationthe semi-autoregressive Transformer (SAT). The SAT keeps the autoregressive property in global but relieves in local and thus are able to produce multiple successive words in parallel at each time step. Experiments conducted on English-German and Chinese-English translation tasks show that the SAT achieves a good balance between translation quality and decoding speed. On WMT'14 English-German translation, the SAT achieves 5.58× speedup while maintaining 88% translation quality, significantly better than the previous non-autoregressive methods. When produces two words at each time step, the SAT is almost lossless (only 1% degeneration in BLEU score).",Semi-Autoregressive Neural Machine Translation,"Existing approaches to neural machine translation are typically autoregressive models. While these models attain state-of-the-art translation quality, they are suffering from low parallelizability and thus slow at decoding long sequences. In this paper, we propose a novel model for fast sequence generationthe semi-autoregressive Transformer (SAT). The SAT keeps the autoregressive property in global but relieves in local and thus are able to produce multiple successive words in parallel at each time step. Experiments conducted on English-German and Chinese-English translation tasks show that the SAT achieves a good balance between translation quality and decoding speed. On WMT'14 English-German translation, the SAT achieves 5.58× speedup while maintaining 88% translation quality, significantly better than the previous non-autoregressive methods. When produces two words at each time step, the SAT is almost lossless (only 1% degeneration in BLEU score).",Semi-Autoregressive Neural Machine Translation,"Existing approaches to neural machine translation are typically autoregressive models. While these models attain state-of-the-art translation quality, they are suffering from low parallelizability and thus slow at decoding long sequences. In this paper, we propose a novel model for fast sequence generationthe semi-autoregressive Transformer (SAT). The SAT keeps the autoregressive property in global but relieves in local and thus are able to produce multiple successive words in parallel at each time step. Experiments conducted on English-German and Chinese-English translation tasks show that the SAT achieves a good balance between translation quality and decoding speed. On WMT'14 English-German translation, the SAT achieves 5.58× speedup while maintaining 88% translation quality, significantly better than the previous non-autoregressive methods. When produces two words at each time step, the SAT is almost lossless (only 1% degeneration in BLEU score).","We would like to thank the anonymous reviewers for their valuable comments. We also thank Wenfu Wang, Hao Wang for helpful discussion and Linhao Dong, Jinghao Niu for their help in paper writting.","Semi-Autoregressive Neural Machine Translation. Existing approaches to neural machine translation are typically autoregressive models. 
While these models attain state-of-the-art translation quality, they suffer from low parallelizability and are thus slow at decoding long sequences. In this paper, we propose a novel model for fast sequence generation: the semi-autoregressive Transformer (SAT). The SAT keeps the autoregressive property globally but relaxes it locally, and is thus able to produce multiple successive words in parallel at each time step. Experiments conducted on English-German and Chinese-English translation tasks show that the SAT achieves a good balance between translation quality and decoding speed. On WMT'14 English-German translation, the SAT achieves 5.58× speedup while maintaining 88% translation quality, significantly better than the previous non-autoregressive methods. When producing two words at each time step, the SAT is almost lossless (only 1% degradation in BLEU score).",2018
townsend-etal-2014-university,https://aclanthology.org/S14-2136,0,,,,,,,"University\_of\_Warwick: SENTIADAPTRON - A Domain Adaptable Sentiment Analyser for Tweets - Meets SemEval. We give a brief overview of our system, SentiAdaptron, a domain-sensitive and domain adaptable system for twitter analysis in tweets, and discuss performance on SemEval (in both the constrained and unconstrained scenarios), as well as implications arising from comparing the intra-and inter-domain performance on our twitter corpus.",{U}niversity{\_}of{\_}{W}arwick: {SENTIADAPTRON} - A Domain Adaptable Sentiment Analyser for Tweets - Meets {S}em{E}val,"We give a brief overview of our system, SentiAdaptron, a domain-sensitive and domain adaptable system for twitter analysis in tweets, and discuss performance on SemEval (in both the constrained and unconstrained scenarios), as well as implications arising from comparing the intra-and inter-domain performance on our twitter corpus.",University\_of\_Warwick: SENTIADAPTRON - A Domain Adaptable Sentiment Analyser for Tweets - Meets SemEval,"We give a brief overview of our system, SentiAdaptron, a domain-sensitive and domain adaptable system for twitter analysis in tweets, and discuss performance on SemEval (in both the constrained and unconstrained scenarios), as well as implications arising from comparing the intra-and inter-domain performance on our twitter corpus.","Warwick Research Development Fund grant RD13129 provided funding for crowdsourced annotations. We thank our partners at CUSP, NYU for enabling us to use Amazon Mechanical Turk for this process.","University\_of\_Warwick: SENTIADAPTRON - A Domain Adaptable Sentiment Analyser for Tweets - Meets SemEval. We give a brief overview of our system, SentiAdaptron, a domain-sensitive and domain adaptable system for twitter analysis in tweets, and discuss performance on SemEval (in both the constrained and unconstrained scenarios), as well as implications arising from comparing the intra-and inter-domain performance on our twitter corpus.",2014
rim-etal-2020-interchange,https://aclanthology.org/2020.lrec-1.893,0,,,,,,,"Interchange Formats for Visualization: LIF and MMIF. Promoting interoperable computational linguistics (CL) and natural language processing (NLP) application platforms and interchangeable data formats has contributed to improving the discoverability and accessibility of the openly available NLP software. In this paper, we discuss the enhanced data visualization capabilities that are also enabled by inter-operating NLP pipelines and interchange formats. For adding openly available visualization tools and graphical annotation tools to the Language Applications Grid (LAPPS Grid) and Computational Linguistics Applications for Multimedia Services (CLAMS) toolboxes, we have developed interchange formats that can carry annotations and metadata for text and audiovisual source data. We describe those data formats and present case studies where we successfully adopt open-source visualization tools and combine them with CL tools.",Interchange Formats for Visualization: {LIF} and {MMIF},"Promoting interoperable computational linguistics (CL) and natural language processing (NLP) application platforms and interchangeable data formats has contributed to improving the discoverability and accessibility of the openly available NLP software. In this paper, we discuss the enhanced data visualization capabilities that are also enabled by inter-operating NLP pipelines and interchange formats. For adding openly available visualization tools and graphical annotation tools to the Language Applications Grid (LAPPS Grid) and Computational Linguistics Applications for Multimedia Services (CLAMS) toolboxes, we have developed interchange formats that can carry annotations and metadata for text and audiovisual source data. We describe those data formats and present case studies where we successfully adopt open-source visualization tools and combine them with CL tools.",Interchange Formats for Visualization: LIF and MMIF,"Promoting interoperable computational linguistics (CL) and natural language processing (NLP) application platforms and interchangeable data formats has contributed to improving the discoverability and accessibility of the openly available NLP software. In this paper, we discuss the enhanced data visualization capabilities that are also enabled by inter-operating NLP pipelines and interchange formats. For adding openly available visualization tools and graphical annotation tools to the Language Applications Grid (LAPPS Grid) and Computational Linguistics Applications for Multimedia Services (CLAMS) toolboxes, we have developed interchange formats that can carry annotations and metadata for text and audiovisual source data. We describe those data formats and present case studies where we successfully adopt open-source visualization tools and combine them with CL tools.","We would like to thank the reviewers for their helpful comments. This work was supported by a grant from the National Science Foundation to Brandeis University and Vassar University, and by a grant from the Andrew W. Mellon Foundation to Brandeis University. The points of view expressed herein are solely those of the authors and do not represent the views of the NSF or the Andrew W. Mellon Foundation. Any errors or omissions are, of course, the responsibility of the authors.","Interchange Formats for Visualization: LIF and MMIF. 
Promoting interoperable computational linguistics (CL) and natural language processing (NLP) application platforms and interchangeable data formats has contributed to improving the discoverability and accessibility of the openly available NLP software. In this paper, we discuss the enhanced data visualization capabilities that are also enabled by inter-operating NLP pipelines and interchange formats. For adding openly available visualization tools and graphical annotation tools to the Language Applications Grid (LAPPS Grid) and Computational Linguistics Applications for Multimedia Services (CLAMS) toolboxes, we have developed interchange formats that can carry annotations and metadata for text and audiovisual source data. We describe those data formats and present case studies where we successfully adopt open-source visualization tools and combine them with CL tools.",2020
michou-seretan-2009-tool,https://aclanthology.org/E09-2012,0,,,,,,,"A Tool for Multi-Word Expression Extraction in Modern Greek Using Syntactic Parsing. This paper presents a tool for extracting multi-word expressions from corpora in Modern Greek, which is used together with a parallel concordancer to augment the lexicon of a rule-based machine translation system. The tool is part of a larger extraction system that relies, in turn, on a multilingual parser developed over the past decade in our laboratory. The paper reviews the various NLP modules and resources which enable the retrieval of Greek multi-word expressions and their translations: the Greek parser, its lexical database, the extraction and concordancing system.",A Tool for Multi-Word Expression Extraction in {M}odern {G}reek Using Syntactic Parsing,"This paper presents a tool for extracting multi-word expressions from corpora in Modern Greek, which is used together with a parallel concordancer to augment the lexicon of a rule-based machine translation system. The tool is part of a larger extraction system that relies, in turn, on a multilingual parser developed over the past decade in our laboratory. The paper reviews the various NLP modules and resources which enable the retrieval of Greek multi-word expressions and their translations: the Greek parser, its lexical database, the extraction and concordancing system.",A Tool for Multi-Word Expression Extraction in Modern Greek Using Syntactic Parsing,"This paper presents a tool for extracting multi-word expressions from corpora in Modern Greek, which is used together with a parallel concordancer to augment the lexicon of a rule-based machine translation system. The tool is part of a larger extraction system that relies, in turn, on a multilingual parser developed over the past decade in our laboratory. The paper reviews the various NLP modules and resources which enable the retrieval of Greek multi-word expressions and their translations: the Greek parser, its lexical database, the extraction and concordancing system.",This work has been supported by the Swiss National Science Foundation (grant 100012-117944). The authors would like to thank Eric Wehrli for his support and useful comments.,"A Tool for Multi-Word Expression Extraction in Modern Greek Using Syntactic Parsing. This paper presents a tool for extracting multi-word expressions from corpora in Modern Greek, which is used together with a parallel concordancer to augment the lexicon of a rule-based machine translation system. The tool is part of a larger extraction system that relies, in turn, on a multilingual parser developed over the past decade in our laboratory. The paper reviews the various NLP modules and resources which enable the retrieval of Greek multi-word expressions and their translations: the Greek parser, its lexical database, the extraction and concordancing system.",2009
liu-etal-2013-novel-classifier,https://aclanthology.org/P13-2086,0,,,,,,,"A Novel Classifier Based on Quantum Computation. In this article, we propose a novel classifier based on quantum computation theory. Different from existing methods, we consider the classification as an evolutionary process of a physical system and build the classifier by using the basic quantum mechanics equation. The performance of the experiments on two datasets indicates feasibility and potentiality of the quantum classifier.",A Novel Classifier Based on Quantum Computation,"In this article, we propose a novel classifier based on quantum computation theory. Different from existing methods, we consider the classification as an evolutionary process of a physical system and build the classifier by using the basic quantum mechanics equation. The performance of the experiments on two datasets indicates feasibility and potentiality of the quantum classifier.",A Novel Classifier Based on Quantum Computation,"In this article, we propose a novel classifier based on quantum computation theory. Different from existing methods, we consider the classification as an evolutionary process of a physical system and build the classifier by using the basic quantum mechanics equation. The performance of the experiments on two datasets indicates feasibility and potentiality of the quantum classifier.",This work was supported by the National Natural Science Foundation in China 61171114 ,"A Novel Classifier Based on Quantum Computation. In this article, we propose a novel classifier based on quantum computation theory. Different from existing methods, we consider the classification as an evolutionary process of a physical system and build the classifier by using the basic quantum mechanics equation. The performance of the experiments on two datasets indicates feasibility and potentiality of the quantum classifier.",2013
aktas-etal-2020-adapting,https://aclanthology.org/2020.findings-emnlp.222,0,,,,,,,"Adapting Coreference Resolution to Twitter Conversations. The performance of standard coreference resolution is known to drop significantly on Twitter texts. We improve the performance of the (Lee et al., 2018) system, which is originally trained on OntoNotes, by retraining on manually-annotated Twitter conversation data. Further experiments by combining different portions of OntoNotes with Twitter data show that selecting text genres for the training data can beat the mere maximization of training data amount. In addition, we inspect several phenomena such as the role of deictic pronouns in conversational data, and present additional results for variant settings. Our best configuration improves the performance of the ""out of the box"" system by 21.6%.",Adapting Coreference Resolution to {T}witter Conversations,"The performance of standard coreference resolution is known to drop significantly on Twitter texts. We improve the performance of the (Lee et al., 2018) system, which is originally trained on OntoNotes, by retraining on manually-annotated Twitter conversation data. Further experiments by combining different portions of OntoNotes with Twitter data show that selecting text genres for the training data can beat the mere maximization of training data amount. In addition, we inspect several phenomena such as the role of deictic pronouns in conversational data, and present additional results for variant settings. Our best configuration improves the performance of the ""out of the box"" system by 21.6%.",Adapting Coreference Resolution to Twitter Conversations,"The performance of standard coreference resolution is known to drop significantly on Twitter texts. We improve the performance of the (Lee et al., 2018) system, which is originally trained on OntoNotes, by retraining on manually-annotated Twitter conversation data. Further experiments by combining different portions of OntoNotes with Twitter data show that selecting text genres for the training data can beat the mere maximization of training data amount. In addition, we inspect several phenomena such as the role of deictic pronouns in conversational data, and present additional results for variant settings. Our best configuration improves the performance of the ""out of the box"" system by 21.6%.","We thank the anonymous reviewers for their helpful comments and suggestions. This work is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -Projektnummer 317633480 -SFB 1287, Project A03.","Adapting Coreference Resolution to Twitter Conversations. The performance of standard coreference resolution is known to drop significantly on Twitter texts. We improve the performance of the (Lee et al., 2018) system, which is originally trained on OntoNotes, by retraining on manually-annotated Twitter conversation data. Further experiments by combining different portions of OntoNotes with Twitter data show that selecting text genres for the training data can beat the mere maximization of training data amount. In addition, we inspect several phenomena such as the role of deictic pronouns in conversational data, and present additional results for variant settings. Our best configuration improves the performance of the ""out of the box"" system by 21.6%.",2020
mukund-srihari-2009-ne,https://aclanthology.org/W09-1609,0,,,,,,,"NE Tagging for Urdu based on Bootstrap POS Learning. Part of Speech (POS) tagging and Named Entity (NE) tagging have become important components of effective text analysis. In this paper, we propose a bootstrapped model that involves four levels of text processing for Urdu. We show that increasing the training data for POS learning by applying bootstrapping techniques improves NE tagging results. Our model overcomes the limitation imposed by the availability of limited ground truth data required for training a learning model. Both our POS tagging and NE tagging models are based on the Conditional Random Field (CRF) learning approach. To further enhance the performance, grammar rules and lexicon lookups are applied on the final output to correct any spurious tag assignments. We also propose a model for word boundary segmentation where a bigram HMM model is trained for character transitions among all positions in each word. The generated words are further processed using a probabilistic language model. All models use a hybrid approach that combines statistical models with hand crafted grammar rules.",{NE} Tagging for {U}rdu based on Bootstrap {POS} Learning,"Part of Speech (POS) tagging and Named Entity (NE) tagging have become important components of effective text analysis. In this paper, we propose a bootstrapped model that involves four levels of text processing for Urdu. We show that increasing the training data for POS learning by applying bootstrapping techniques improves NE tagging results. Our model overcomes the limitation imposed by the availability of limited ground truth data required for training a learning model. Both our POS tagging and NE tagging models are based on the Conditional Random Field (CRF) learning approach. To further enhance the performance, grammar rules and lexicon lookups are applied on the final output to correct any spurious tag assignments. We also propose a model for word boundary segmentation where a bigram HMM model is trained for character transitions among all positions in each word. The generated words are further processed using a probabilistic language model. All models use a hybrid approach that combines statistical models with hand crafted grammar rules.",NE Tagging for Urdu based on Bootstrap POS Learning,"Part of Speech (POS) tagging and Named Entity (NE) tagging have become important components of effective text analysis. In this paper, we propose a bootstrapped model that involves four levels of text processing for Urdu. We show that increasing the training data for POS learning by applying bootstrapping techniques improves NE tagging results. Our model overcomes the limitation imposed by the availability of limited ground truth data required for training a learning model. Both our POS tagging and NE tagging models are based on the Conditional Random Field (CRF) learning approach. To further enhance the performance, grammar rules and lexicon lookups are applied on the final output to correct any spurious tag assignments. We also propose a model for word boundary segmentation where a bigram HMM model is trained for character transitions among all positions in each word. The generated words are further processed using a probabilistic language model. All models use a hybrid approach that combines statistical models with hand crafted grammar rules.",,"NE Tagging for Urdu based on Bootstrap POS Learning. 
Part of Speech (POS) tagging and Named Entity (NE) tagging have become important components of effective text analysis. In this paper, we propose a bootstrapped model that involves four levels of text processing for Urdu. We show that increasing the training data for POS learning by applying bootstrapping techniques improves NE tagging results. Our model overcomes the limitation imposed by the availability of limited ground truth data required for training a learning model. Both our POS tagging and NE tagging models are based on the Conditional Random Field (CRF) learning approach. To further enhance the performance, grammar rules and lexicon lookups are applied on the final output to correct any spurious tag assignments. We also propose a model for word boundary segmentation where a bigram HMM model is trained for character transitions among all positions in each word. The generated words are further processed using a probabilistic language model. All models use a hybrid approach that combines statistical models with hand crafted grammar rules.",2009
gotz-meurers-1995-compiling,https://aclanthology.org/P95-1012,0,,,,,,,"Compiling HPSG type constraints into definite clause programs. We present a new approach to HPSG processing: compiling HPSG grammars expressed as type constraints into definite clause programs. This provides a clear and computationally useful correspondence between linguistic theories and their implementation. The compiler performs offline constraint inheritance and code optimization. As a result, we are able to efficiently process with HPSG grammars without having to hand-translate them into definite clause or phrase structure based systems.",Compiling {HPSG} type constraints into definite clause programs,"We present a new approach to HPSG processing: compiling HPSG grammars expressed as type constraints into definite clause programs. This provides a clear and computationally useful correspondence between linguistic theories and their implementation. The compiler performs offline constraint inheritance and code optimization. As a result, we are able to efficiently process with HPSG grammars without having to hand-translate them into definite clause or phrase structure based systems.",Compiling HPSG type constraints into definite clause programs,"We present a new approach to HPSG processing: compiling HPSG grammars expressed as type constraints into definite clause programs. This provides a clear and computationally useful correspondence between linguistic theories and their implementation. The compiler performs offline constraint inheritance and code optimization. As a result, we are able to efficiently process with HPSG grammars without having to hand-translate them into definite clause or phrase structure based systems.","The research reported here was carried out in the context of SFB 340, project B4, funded by the Deutsche Forschungsgemeinschaft. We would like to thank Dale Gerdemann, Paul John King and two anonymous referees for helpful discussion and comments.","Compiling HPSG type constraints into definite clause programs. We present a new approach to HPSG processing: compiling HPSG grammars expressed as type constraints into definite clause programs. This provides a clear and computationally useful correspondence between linguistic theories and their implementation. The compiler performs offline constraint inheritance and code optimization. As a result, we are able to efficiently process with HPSG grammars without having to hand-translate them into definite clause or phrase structure based systems.",1995
ahn-frampton-2006-automatic,https://aclanthology.org/W06-2006,0,,,,,,,"Automatic Generation of Translation Dictionaries Using Intermediary Languages. We describe a method which uses one or more intermediary languages in order to automatically generate translation dictionaries. Such a method could potentially be used to efficiently create translation dictionaries for language groups which have as yet had little interaction. For any given word in the source language, our method involves first translating into the intermediary language(s), then into the target language, back into the intermediary language(s) and finally back into the source language. The relationship between a word and the number of possible translations in another language is most often 1-to-many, and so at each stage, the number of possible translations grows exponentially. If we arrive back at the same starting point i.e. the same word in the source language, then we hypothesise that the meanings of the words in the chain have not diverged significantly. Hence we backtrack through the link structure to the target language word and accept this as a suitable translation. We have tested our method by using English as an intermediary language to automatically generate a Spanish-to-German dictionary, and the results are encouraging.",Automatic Generation of Translation Dictionaries Using Intermediary Languages,"We describe a method which uses one or more intermediary languages in order to automatically generate translation dictionaries. Such a method could potentially be used to efficiently create translation dictionaries for language groups which have as yet had little interaction. For any given word in the source language, our method involves first translating into the intermediary language(s), then into the target language, back into the intermediary language(s) and finally back into the source language. The relationship between a word and the number of possible translations in another language is most often 1-to-many, and so at each stage, the number of possible translations grows exponentially. If we arrive back at the same starting point i.e. the same word in the source language, then we hypothesise that the meanings of the words in the chain have not diverged significantly. Hence we backtrack through the link structure to the target language word and accept this as a suitable translation. We have tested our method by using English as an intermediary language to automatically generate a Spanish-to-German dictionary, and the results are encouraging.",Automatic Generation of Translation Dictionaries Using Intermediary Languages,"We describe a method which uses one or more intermediary languages in order to automatically generate translation dictionaries. Such a method could potentially be used to efficiently create translation dictionaries for language groups which have as yet had little interaction. For any given word in the source language, our method involves first translating into the intermediary language(s), then into the target language, back into the intermediary language(s) and finally back into the source language. The relationship between a word and the number of possible translations in another language is most often 1-to-many, and so at each stage, the number of possible translations grows exponentially. If we arrive back at the same starting point i.e. the same word in the source language, then we hypothesise that the meanings of the words in the chain have not diverged significantly. 
Hence we backtrack through the link structure to the target language word and accept this as a suitable translation. We have tested our method by using English as an intermediary language to automatically generate a Spanish-to-German dictionary, and the results are encouraging.",,"Automatic Generation of Translation Dictionaries Using Intermediary Languages. We describe a method which uses one or more intermediary languages in order to automatically generate translation dictionaries. Such a method could potentially be used to efficiently create translation dictionaries for language groups which have as yet had little interaction. For any given word in the source language, our method involves first translating into the intermediary language(s), then into the target language, back into the intermediary language(s) and finally back into the source language. The relationship between a word and the number of possible translations in another language is most often 1-to-many, and so at each stage, the number of possible translations grows exponentially. If we arrive back at the same starting point i.e. the same word in the source language, then we hypothesise that the meanings of the words in the chain have not diverged significantly. Hence we backtrack through the link structure to the target language word and accept this as a suitable translation. We have tested our method by using English as an intermediary language to automatically generate a Spanish-to-German dictionary, and the results are encouraging.",2006
zheng-etal-2021-low,https://aclanthology.org/2021.americasnlp-1.26,0,,,,,,,"Low-Resource Machine Translation Using Cross-Lingual Language Model Pretraining. This paper describes UTokyo's submission to the AmericasNLP 2021 Shared Task on machine translation systems for indigenous languages of the Americas. We present a low-resource machine translation system that improves translation accuracy using cross-lingual language model pretraining. Our system uses an mBART implementation of FAIRSEQ to pretrain on a large set of monolingual data from a diverse set of high-resource languages before finetuning on 10 low-resource indigenous American languages: Aymara, Bribri, Asháninka, Guaraní, Wixarika, Náhuatl, Hñähñu, Quechua, Shipibo-Konibo, and Rarámuri. On average, our system achieved BLEU scores that were 1.64 higher and CHRF scores that were 0.0749 higher than the baseline.",Low-Resource Machine Translation Using Cross-Lingual Language Model Pretraining,"This paper describes UTokyo's submission to the AmericasNLP 2021 Shared Task on machine translation systems for indigenous languages of the Americas. We present a low-resource machine translation system that improves translation accuracy using cross-lingual language model pretraining. Our system uses an mBART implementation of FAIRSEQ to pretrain on a large set of monolingual data from a diverse set of high-resource languages before finetuning on 10 low-resource indigenous American languages: Aymara, Bribri, Asháninka, Guaraní, Wixarika, Náhuatl, Hñähñu, Quechua, Shipibo-Konibo, and Rarámuri. On average, our system achieved BLEU scores that were 1.64 higher and CHRF scores that were 0.0749 higher than the baseline.",Low-Resource Machine Translation Using Cross-Lingual Language Model Pretraining,"This paper describes UTokyo's submission to the AmericasNLP 2021 Shared Task on machine translation systems for indigenous languages of the Americas. We present a low-resource machine translation system that improves translation accuracy using cross-lingual language model pretraining. Our system uses an mBART implementation of FAIRSEQ to pretrain on a large set of monolingual data from a diverse set of high-resource languages before finetuning on 10 low-resource indigenous American languages: Aymara, Bribri, Asháninka, Guaraní, Wixarika, Náhuatl, Hñähñu, Quechua, Shipibo-Konibo, and Rarámuri. On average, our system achieved BLEU scores that were 1.64 higher and CHRF scores that were 0.0749 higher than the baseline.",,"Low-Resource Machine Translation Using Cross-Lingual Language Model Pretraining. This paper describes UTokyo's submission to the AmericasNLP 2021 Shared Task on machine translation systems for indigenous languages of the Americas. We present a low-resource machine translation system that improves translation accuracy using cross-lingual language model pretraining. Our system uses an mBART implementation of FAIRSEQ to pretrain on a large set of monolingual data from a diverse set of high-resource languages before finetuning on 10 low-resource indigenous American languages: Aymara, Bribri, Asháninka, Guaraní, Wixarika, Náhuatl, Hñähñu, Quechua, Shipibo-Konibo, and Rarámuri. On average, our system achieved BLEU scores that were 1.64 higher and CHRF scores that were 0.0749 higher than the baseline.",2021
vilar-etal-2011-dfkis,https://aclanthology.org/2011.iwslt-evaluation.13,0,,,,,,,"DFKI's SC and MT submissions to IWSLT 2011. We describe DFKI's submission to the System Combination and Machine Translation tracks of the 2011 IWSLT Evaluation Campaign. We focus on a sentence selection mechanism which chooses the (hopefully) best sentence among a set of candidates. The rationale behind it is to take advantage of the strengths of each system, especially given a heterogeneous dataset like the one in this evaluation campaign, composed of TED Talks of very different topics. We focus on using features that correlate well with human judgement and, while our primary system still focuses on optimizing the BLEU score on the development set, our goal is to move towards optimizing directly the correlation with human judgement. This kind of system is still under development and was used as a secondary submission.",{DFKI}{'}s {SC} and {MT} submissions to {IWSLT} 2011,"We describe DFKI's submission to the System Combination and Machine Translation tracks of the 2011 IWSLT Evaluation Campaign. We focus on a sentence selection mechanism which chooses the (hopefully) best sentence among a set of candidates. The rationale behind it is to take advantage of the strengths of each system, especially given a heterogeneous dataset like the one in this evaluation campaign, composed of TED Talks of very different topics. We focus on using features that correlate well with human judgement and, while our primary system still focuses on optimizing the BLEU score on the development set, our goal is to move towards optimizing directly the correlation with human judgement. This kind of system is still under development and was used as a secondary submission.",DFKI's SC and MT submissions to IWSLT 2011,"We describe DFKI's submission to the System Combination and Machine Translation tracks of the 2011 IWSLT Evaluation Campaign. We focus on a sentence selection mechanism which chooses the (hopefully) best sentence among a set of candidates. The rationale behind it is to take advantage of the strengths of each system, especially given a heterogeneous dataset like the one in this evaluation campaign, composed of TED Talks of very different topics. We focus on using features that correlate well with human judgement and, while our primary system still focuses on optimizing the BLEU score on the development set, our goal is to move towards optimizing directly the correlation with human judgement. This kind of system is still under development and was used as a secondary submission.","This work was done with the support of the TaraXŰ Project, financed by TSB Technologiestiftung Berlin-Zukunftsfonds Berlin, co-financed by the European Union-European fund for regional development.","DFKI's SC and MT submissions to IWSLT 2011. We describe DFKI's submission to the System Combination and Machine Translation tracks of the 2011 IWSLT Evaluation Campaign. We focus on a sentence selection mechanism which chooses the (hopefully) best sentence among a set of candidates. The rationale behind it is to take advantage of the strengths of each system, especially given a heterogeneous dataset like the one in this evaluation campaign, composed of TED Talks of very different topics. We focus on using features that correlate well with human judgement and, while our primary system still focuses on optimizing the BLEU score on the development set, our goal is to move towards optimizing directly the correlation with human judgement. 
This kind of system is still under development and was used as a secondary submission.",2011
erk-pado-2009-paraphrase,https://aclanthology.org/W09-0208,0,,,,,,,"Paraphrase Assessment in Structured Vector Space: Exploring Parameters and Datasets. The appropriateness of paraphrases for words depends often on context: ""grab"" can replace ""catch"" in ""catch a ball"", but not in ""catch a cold"". Structured Vector Space (SVS) (Erk and Padó, 2008) is a model that computes word meaning in context in order to assess the appropriateness of such paraphrases. This paper investigates ""best-practice"" parameter settings for SVS, and it presents a method to obtain large datasets for paraphrase assessment from corpora with WSD annotation.",Paraphrase Assessment in Structured Vector Space: Exploring Parameters and Datasets,"The appropriateness of paraphrases for words depends often on context: ""grab"" can replace ""catch"" in ""catch a ball"", but not in ""catch a cold"". Structured Vector Space (SVS) (Erk and Padó, 2008) is a model that computes word meaning in context in order to assess the appropriateness of such paraphrases. This paper investigates ""best-practice"" parameter settings for SVS, and it presents a method to obtain large datasets for paraphrase assessment from corpora with WSD annotation.",Paraphrase Assessment in Structured Vector Space: Exploring Parameters and Datasets,"The appropriateness of paraphrases for words depends often on context: ""grab"" can replace ""catch"" in ""catch a ball"", but not in ""catch a cold"". Structured Vector Space (SVS) (Erk and Padó, 2008) is a model that computes word meaning in context in order to assess the appropriateness of such paraphrases. This paper investigates ""best-practice"" parameter settings for SVS, and it presents a method to obtain large datasets for paraphrase assessment from corpora with WSD annotation.",,"Paraphrase Assessment in Structured Vector Space: Exploring Parameters and Datasets. The appropriateness of paraphrases for words depends often on context: ""grab"" can replace ""catch"" in ""catch a ball"", but not in ""catch a cold"". Structured Vector Space (SVS) (Erk and Padó, 2008) is a model that computes word meaning in context in order to assess the appropriateness of such paraphrases. This paper investigates ""best-practice"" parameter settings for SVS, and it presents a method to obtain large datasets for paraphrase assessment from corpora with WSD annotation.",2009
elhoseiny-elgammal-2015-visual,https://aclanthology.org/W15-2809,0,,,,,,,"Visual Classifier Prediction by Distributional Semantic Embedding of Text Descriptions. One of the main challenges for scaling up object recognition systems is the lack of annotated images for real-world categories. It is estimated that humans can recognize and discriminate among about 30,000 categories (Biederman and others, 1987). Typically there are few images available for training classifiers from most of these categories. This is reflected in the number of images per category available for training in most object categorization datasets, which, as pointed out in (Salakhutdinov et al., 2011), shows a Zipf distribution.
The problem of lack of training images becomes even more severe when we target recognition problems within a general category, i.e., subordinate categorization, for example building classifiers for different bird species or flower types (estimated over 10000 living bird species, similar for flowers).",Visual Classifier Prediction by Distributional Semantic Embedding of Text Descriptions,"One of the main challenges for scaling up object recognition systems is the lack of annotated images for real-world categories. It is estimated that humans can recognize and discriminate among about 30,000 categories (Biederman and others, 1987). Typically there are few images available for training classifiers from most of these categories. This is reflected in the number of images per category available for training in most object categorization datasets, which, as pointed out in (Salakhutdinov et al., 2011), shows a Zipf distribution.
The problem of lack of training images becomes even more severe when we target recognition problems within a general category, i.e., subordinate categorization, for example building classifiers for different bird species or flower types (estimated over 10000 living bird species, similar for flowers).",Visual Classifier Prediction by Distributional Semantic Embedding of Text Descriptions,"One of the main challenges for scaling up object recognition systems is the lack of annotated images for real-world categories. It is estimated that humans can recognize and discriminate among about 30,000 categories (Biederman and others, 1987). Typically there are few images available for training classifiers from most of these categories. This is reflected in the number of images per category available for training in most object categorization datasets, which, as pointed out in (Salakhutdinov et al., 2011), shows a Zipf distribution.
The problem of lack of training images becomes even more severe when we target recognition problems within a general category, i.e., subordinate categorization, for example building classifiers for different bird species or flower types (estimated over 10000 living bird species, similar for flowers).",,"Visual Classifier Prediction by Distributional Semantic Embedding of Text Descriptions. One of the main challenges for scaling up object recognition systems is the lack of annotated images for real-world categories. It is estimated that humans can recognize and discriminate among about 30,000 categories (Biederman and others, 1987). Typically there are few images available for training classifiers from most of these categories. This is reflected in the number of images per category available for training in most object categorization datasets, which, as pointed out in (Salakhutdinov et al., 2011), shows a Zipf distribution.
The problem of lack of training images becomes even more severe when we target recognition problems within a general category, i.e., subordinate categorization, for example building classifiers for different bird species or flower types (estimated over 10000 living bird species, similar for flowers).",2015
newman-griffis-etal-2019-classifying,https://aclanthology.org/W19-5001,1,,,,health,,,"Classifying the reported ability in clinical mobility descriptions. Assessing how individuals perform different activities is key information for modeling health states of individuals and populations. Descriptions of activity performance in clinical free text are complex, including syntactic negation and similarities to textual entailment tasks. We explore a variety of methods for the novel task of classifying four types of assertions about activity performance: Able, Unable, Unclear, and None (no information). We find that ensembling an SVM trained with lexical features and a CNN achieves 77.9% macro F1 score on our task, and yields nearly 80% recall on the rare Unclear and Unable samples. Finally, we highlight several challenges in classifying performance assertions, including capturing information about sources of assistance, incorporating syntactic structure and negation scope, and handling new modalities at test time. Our findings establish a strong baseline for this novel task, and identify intriguing areas for further research.",Classifying the reported ability in clinical mobility descriptions,"Assessing how individuals perform different activities is key information for modeling health states of individuals and populations. Descriptions of activity performance in clinical free text are complex, including syntactic negation and similarities to textual entailment tasks. We explore a variety of methods for the novel task of classifying four types of assertions about activity performance: Able, Unable, Unclear, and None (no information). We find that ensembling an SVM trained with lexical features and a CNN achieves 77.9% macro F1 score on our task, and yields nearly 80% recall on the rare Unclear and Unable samples. Finally, we highlight several challenges in classifying performance assertions, including capturing information about sources of assistance, incorporating syntactic structure and negation scope, and handling new modalities at test time. Our findings establish a strong baseline for this novel task, and identify intriguing areas for further research.",Classifying the reported ability in clinical mobility descriptions,"Assessing how individuals perform different activities is key information for modeling health states of individuals and populations. Descriptions of activity performance in clinical free text are complex, including syntactic negation and similarities to textual entailment tasks. We explore a variety of methods for the novel task of classifying four types of assertions about activity performance: Able, Unable, Unclear, and None (no information). We find that ensembling an SVM trained with lexical features and a CNN achieves 77.9% macro F1 score on our task, and yields nearly 80% recall on the rare Unclear and Unable samples. Finally, we highlight several challenges in classifying performance assertions, including capturing information about sources of assistance, incorporating syntactic structure and negation scope, and handling new modalities at test time. Our findings establish a strong baseline for this novel task, and identify intriguing areas for further research.","The authors would like to thank Pei-Shu Ho, Jonathan Camacho Maldonado, and Maryanne Sacco for discussions about error analysis, and our anonymous reviewers for their helpful comments. 
This research was supported in part by the Intramural Research Program of the National Institutes of Health, Clinical Research Center and through an Inter-Agency Agreement with the US Social Security Administration.","Classifying the reported ability in clinical mobility descriptions. Assessing how individuals perform different activities is key information for modeling health states of individuals and populations. Descriptions of activity performance in clinical free text are complex, including syntactic negation and similarities to textual entailment tasks. We explore a variety of methods for the novel task of classifying four types of assertions about activity performance: Able, Unable, Unclear, and None (no information). We find that ensembling an SVM trained with lexical features and a CNN achieves 77.9% macro F1 score on our task, and yields nearly 80% recall on the rare Unclear and Unable samples. Finally, we highlight several challenges in classifying performance assertions, including capturing information about sources of assistance, incorporating syntactic structure and negation scope, and handling new modalities at test time. Our findings establish a strong baseline for this novel task, and identify intriguing areas for further research.",2019
wolf-gibson-2004-paragraph,https://aclanthology.org/P04-1049,0,,,,,,,"Paragraph-, Word-, and Coherence-based Approaches to Sentence Ranking: A Comparison of Algorithm and Human Performance. Sentence ranking is a crucial part of generating text summaries. We compared human sentence rankings obtained in a psycholinguistic experiment to three different approaches to sentence ranking: A simple paragraph-based approach intended as a baseline, two word-based approaches, and two coherence-based approaches. In the paragraph-based approach, sentences in the beginning of paragraphs received higher importance ratings than other sentences. The word-based approaches determined sentence rankings based on relative word frequencies (Luhn (1958); Salton & Buckley (1988)). Coherence-based approaches determined sentence rankings based on some property of the coherence structure of a text (Marcu (2000); Page et al. (1998)). Our results suggest poor performance for the simple paragraph-based approach, whereas word-based approaches perform remarkably well. The best performance was achieved by a coherence-based approach where coherence structures are represented in a non-tree structure. Most approaches also outperformed the commercially available MSWord summarizer.","Paragraph-, Word-, and Coherence-based Approaches to Sentence Ranking: A Comparison of Algorithm and Human Performance","Sentence ranking is a crucial part of generating text summaries. We compared human sentence rankings obtained in a psycholinguistic experiment to three different approaches to sentence ranking: A simple paragraph-based approach intended as a baseline, two word-based approaches, and two coherence-based approaches. In the paragraph-based approach, sentences in the beginning of paragraphs received higher importance ratings than other sentences. The word-based approaches determined sentence rankings based on relative word frequencies (Luhn (1958); Salton & Buckley (1988)). Coherence-based approaches determined sentence rankings based on some property of the coherence structure of a text (Marcu (2000); Page et al. (1998)). Our results suggest poor performance for the simple paragraph-based approach, whereas word-based approaches perform remarkably well. The best performance was achieved by a coherence-based approach where coherence structures are represented in a non-tree structure. Most approaches also outperformed the commercially available MSWord summarizer.","Paragraph-, Word-, and Coherence-based Approaches to Sentence Ranking: A Comparison of Algorithm and Human Performance","Sentence ranking is a crucial part of generating text summaries. We compared human sentence rankings obtained in a psycholinguistic experiment to three different approaches to sentence ranking: A simple paragraph-based approach intended as a baseline, two word-based approaches, and two coherence-based approaches. In the paragraph-based approach, sentences in the beginning of paragraphs received higher importance ratings than other sentences. The word-based approaches determined sentence rankings based on relative word frequencies (Luhn (1958); Salton & Buckley (1988)). Coherence-based approaches determined sentence rankings based on some property of the coherence structure of a text (Marcu (2000); Page et al. (1998)). Our results suggest poor performance for the simple paragraph-based approach, whereas word-based approaches perform remarkably well. 
The best performance was achieved by a coherence-based approach where coherence structures are represented in a non-tree structure. Most approaches also outperformed the commercially available MSWord summarizer.",,"Paragraph-, Word-, and Coherence-based Approaches to Sentence Ranking: A Comparison of Algorithm and Human Performance. Sentence ranking is a crucial part of generating text summaries. We compared human sentence rankings obtained in a psycholinguistic experiment to three different approaches to sentence ranking: A simple paragraph-based approach intended as a baseline, two word-based approaches, and two coherence-based approaches. In the paragraph-based approach, sentences in the beginning of paragraphs received higher importance ratings than other sentences. The word-based approaches determined sentence rankings based on relative word frequencies (Luhn (1958); Salton & Buckley (1988)). Coherence-based approaches determined sentence rankings based on some property of the coherence structure of a text (Marcu (2000); Page et al. (1998)). Our results suggest poor performance for the simple paragraph-based approach, whereas word-based approaches perform remarkably well. The best performance was achieved by a coherence-based approach where coherence structures are represented in a non-tree structure. Most approaches also outperformed the commercially available MSWord summarizer.",2004
penton-bird-2004-representing,https://aclanthology.org/U04-1017,0,,,,,,,"Representing and Rendering Linguistic Paradigms. Linguistic forms are inherently multi-dimensional. They exhibit a variety of phonological, orthographic, morphosyntactic, semantic and pragmatic properties. Accordingly, linguistic analysis involves multidimensional exploration, a process in which the same collection of forms is laid out in many ways until clear patterns emerge. Equally, language documentation usually contains tabulations of linguistic forms to illustrate systematic patterns and variations. In all such cases, multi-dimensional data is projected onto a two-dimensional table known as a linguistic paradigm, the most widespread format for linguistic data presentation. In this paper we develop an XML data model for linguistic paradigms, and show how XSL transforms can render them. We describe a high-level interface which gives linguists flexible, high-level control of paradigm layout. The work provides a simple, general, and extensible model for the preservation and access of linguistic data.",Representing and Rendering Linguistic Paradigms,"Linguistic forms are inherently multi-dimensional. They exhibit a variety of phonological, orthographic, morphosyntactic, semantic and pragmatic properties. Accordingly, linguistic analysis involves multidimensional exploration, a process in which the same collection of forms is laid out in many ways until clear patterns emerge. Equally, language documentation usually contains tabulations of linguistic forms to illustrate systematic patterns and variations. In all such cases, multi-dimensional data is projected onto a two-dimensional table known as a linguistic paradigm, the most widespread format for linguistic data presentation. In this paper we develop an XML data model for linguistic paradigms, and show how XSL transforms can render them. We describe a high-level interface which gives linguists flexible, high-level control of paradigm layout. The work provides a simple, general, and extensible model for the preservation and access of linguistic data.",Representing and Rendering Linguistic Paradigms,"Linguistic forms are inherently multi-dimensional. They exhibit a variety of phonological, orthographic, morphosyntactic, semantic and pragmatic properties. Accordingly, linguistic analysis involves multidimensional exploration, a process in which the same collection of forms is laid out in many ways until clear patterns emerge. Equally, language documentation usually contains tabulations of linguistic forms to illustrate systematic patterns and variations. In all such cases, multi-dimensional data is projected onto a two-dimensional table known as a linguistic paradigm, the most widespread format for linguistic data presentation. In this paper we develop an XML data model for linguistic paradigms, and show how XSL transforms can render them. We describe a high-level interface which gives linguists flexible, high-level control of paradigm layout. The work provides a simple, general, and extensible model for the preservation and access of linguistic data.","This paper extends earlier work by (Penton et al., 2004) . This research has been supported by the National Science Foundation grant number 0317826 Querying Linguistic Databases.","Representing and Rendering Linguistic Paradigms. Linguistic forms are inherently multi-dimensional. They exhibit a variety of phonological, orthographic, morphosyntactic, semantic and pragmatic properties. 
Accordingly, linguistic analysis involves multidimensional exploration, a process in which the same collection of forms is laid out in many ways until clear patterns emerge. Equally, language documentation usually contains tabulations of linguistic forms to illustrate systematic patterns and variations. In all such cases, multi-dimensional data is projected onto a two-dimensional table known as a linguistic paradigm, the most widespread format for linguistic data presentation. In this paper we develop an XML data model for linguistic paradigms, and show how XSL transforms can render them. We describe a high-level interface which gives linguists flexible, high-level control of paradigm layout. The work provides a simple, general, and extensible model for the preservation and access of linguistic data.",2004
tanasijevic-etal-2012-multimedia,http://www.lrec-conf.org/proceedings/lrec2012/pdf/637_Paper.pdf,0,,,,,,,"Multimedia database of the cultural heritage of the Balkans. This paper presents a system that is designed to make possible the organization and search within the collected digitized material of intangible cultural heritage. The motivation for building the system was a vast quantity of multimedia documents collected by a team from the Institute for Balkan Studies in Belgrade. The main topic of their research was linguistic properties of speeches that are used in various places in the Balkans by different groups of people. This paper deals with a prototype system that enables the annotation of the collected material and its organization into a native XML database through a graphical interface. The system enables the search of the database and the presentation of digitized multimedia documents and spatial as well as non-spatial information of the queried data. The multimedia content can be read, listened to or watched while spatial properties are presented on the graphics that consists of geographic regions in the Balkans. The system also enables spatial queries by consulting the graph of geographic regions.",Multimedia database of the cultural heritage of the Balkans,"This paper presents a system that is designed to make possible the organization and search within the collected digitized material of intangible cultural heritage. The motivation for building the system was a vast quantity of multimedia documents collected by a team from the Institute for Balkan Studies in Belgrade. The main topic of their research was linguistic properties of speeches that are used in various places in the Balkans by different groups of people. This paper deals with a prototype system that enables the annotation of the collected material and its organization into a native XML database through a graphical interface. The system enables the search of the database and the presentation of digitized multimedia documents and spatial as well as non-spatial information of the queried data. The multimedia content can be read, listened to or watched while spatial properties are presented on the graphics that consists of geographic regions in the Balkans. The system also enables spatial queries by consulting the graph of geographic regions.",Multimedia database of the cultural heritage of the Balkans,"This paper presents a system that is designed to make possible the organization and search within the collected digitized material of intangible cultural heritage. The motivation for building the system was a vast quantity of multimedia documents collected by a team from the Institute for Balkan Studies in Belgrade. The main topic of their research was linguistic properties of speeches that are used in various places in the Balkans by different groups of people. This paper deals with a prototype system that enables the annotation of the collected material and its organization into a native XML database through a graphical interface. The system enables the search of the database and the presentation of digitized multimedia documents and spatial as well as non-spatial information of the queried data. The multimedia content can be read, listened to or watched while spatial properties are presented on the graphics that consists of geographic regions in the Balkans. The system also enables spatial queries by consulting the graph of geographic regions.",,"Multimedia database of the cultural heritage of the Balkans. 
This paper presents a system that is designed to make possible the organization and search within the collected digitized material of intangible cultural heritage. The motivation for building the system was a vast quantity of multimedia documents collected by a team from the Institute for Balkan Studies in Belgrade. The main topic of their research was linguistic properties of speeches that are used in various places in the Balkans by different groups of people. This paper deals with a prototype system that enables the annotation of the collected material and its organization into a native XML database through a graphical interface. The system enables the search of the database and the presentation of digitized multimedia documents and spatial as well as non-spatial information of the queried data. The multimedia content can be read, listened to or watched while spatial properties are presented on the graphics that consists of geographic regions in the Balkans. The system also enables spatial queries by consulting the graph of geographic regions.",2012
rehm-etal-2013-matecat,https://aclanthology.org/2013.mtsummit-european.10,0,,,,,,,MATECAT: Machine Translation Enhanced Computer Assisted Translation META - Multilingual Europe Technology Alliance. MateCat is an EU-funded research project (FP7-ICT-2011-7 grant 287688) that aims at improving the integration of machine translation (MT) and human translation within the so-called computer aided translation (CAT) framework.,{MATECAT}: Machine Translation Enhanced Computer Assisted Translation {META} - Multilingual {E}urope Technology Alliance,MateCat is an EU-funded research project (FP7-ICT-2011-7 grant 287688) that aims at improving the integration of machine translation (MT) and human translation within the so-called computer aided translation (CAT) framework.,MATECAT: Machine Translation Enhanced Computer Assisted Translation META - Multilingual Europe Technology Alliance,MateCat is an EU-funded research project (FP7-ICT-2011-7 grant 287688) that aims at improving the integration of machine translation (MT) and human translation within the so-called computer aided translation (CAT) framework.,,MATECAT: Machine Translation Enhanced Computer Assisted Translation META - Multilingual Europe Technology Alliance. MateCat is an EU-funded research project (FP7-ICT-2011-7 grant 287688) that aims at improving the integration of machine translation (MT) and human translation within the so-called computer aided translation (CAT) framework.,2013
taylor-1990-multimedia,https://aclanthology.org/1990.tc-1.18,0,,,,,,,"Multimedia/Multilanguage publishing for the 1990s. AT&T is a global information management and movement enterprise, providing computer and telecommunications products and services all over the world. The global nature of the business, combined with rapidly changing technology, calls for innovative approaches to publishing and distributing support documentation. AT&T's Document Development Organisation (DDO) is implementing new systems and processes to meet these needs.
Headquartered in Winston-Salem, North Carolina, DDO created over 750,000 original pages of product/service documentation in 1990. While this is already a huge number, the volume of information we produce is increasing at a rate of more than 25 per cent per year! As the volume increases, the demand for information in electronic form also increases rapidly; and more and more of this information must be translated each year to other languages. And it is unlikely that this 'information explosion', or the demand for more efficient methods of distributing and using it, will abate in the near future. In fact, all indications are that current trends will only accelerate. DDO responded to these demands by implementing an 'object oriented' publishing process. Writers focus on documentation content and structure, using generalised markup, to develop neutral form content models. Form, format and functionality are added to the content in our production systems, via electronic 'style sheets'. Different production systems and style sheets produce a variety of traditional paper documents and server-based, PC-based and CD-ROM-based 'electronic documents'.",Multimedia/Multilanguage publishing for the 1990s,"AT&T is a global information management and movement enterprise, providing computer and telecommunications products and services all over the world. The global nature of the business, combined with rapidly changing technology, calls for innovative approaches to publishing and distributing support documentation. AT&T's Document Development Organisation (DDO) is implementing new systems and processes to meet these needs.
Headquartered in Winston-Salem, North Carolina, DDO created over 750,000 original pages of product/service documentation in 1990. While this is already a huge number, the volume of information we produce is increasing at a rate of more than 25 per cent per year! As the volume increases, the demand for information in electronic form also increases rapidly; and more and more of this information must be translated each year to other languages. And it is unlikely that this 'information explosion', or the demand for more efficient methods of distributing and using it, will abate in the near future. In fact, all indications are that current trends will only accelerate. DDO responded to these demands by implementing an 'object oriented' publishing process. Writers focus on documentation content and structure, using generalised markup, to develop neutral form content models. Form, format and functionality are added to the content in our production systems, via electronic 'style sheets'. Different production systems and style sheets produce a variety of traditional paper documents and server-based, PC-based and CD-ROM-based 'electronic documents'.",Multimedia/Multilanguage publishing for the 1990s,"AT&T is a global information management and movement enterprise, providing computer and telecommunications products and services all over the world. The global nature of the business, combined with rapidly changing technology, calls for innovative approaches to publishing and distributing support documentation. AT&T's Document Development Organisation (DDO) is implementing new systems and processes to meet these needs.
Headquartered in Winston-Salem, North Carolina, DDO created over 750,000 original pages of product/service documentation in 1990. While this is already a huge number, the volume of information we produce is increasing at a rate of more than 25 per cent per year! As the volume increases, the demand for information in electronic form also increases rapidly; and more and more of this information must be translated each year to other languages. And it is unlikely that this 'information explosion', or the demand for more efficient methods of distributing and using it, will abate in the near future. In fact, all indications are that current trends will only accelerate. DDO responded to these demands by implementing an 'object oriented' publishing process. Writers focus on documentation content and structure, using generalised markup, to develop neutral form content models. Form, format and functionality are added to the content in our production systems, via electronic 'style sheets'. Different production systems and style sheets produce a variety of traditional paper documents and server-based, PC-based and CD-ROM-based 'electronic documents'.",,"Multimedia/Multilanguage publishing for the 1990s. AT&T is a global information management and movement enterprise, providing computer and telecommunications products and services all over the world. The global nature of the business, combined with rapidly changing technology, calls for innovative approaches to publishing and distributing support documentation. AT&T's Document Development Organisation (DDO) is implementing new systems and processes to meet these needs.
Headquartered in Winston-Salem, North Carolina, DDO created over 750,000 original pages of product/service documentation in 1990. While this is already a huge number, the volume of information we produce is increasing at a rate of more than 25 per cent per year! As the volume increases, the demand for information in electronic form also increases rapidly; and more and more of this information must be translated each year to other languages. And it is unlikely that this 'information explosion', or the demand for more efficient methods of distributing and using it, will abate in the near future. In fact, all indications are that current trends will only accelerate. DDO responded to these demands by implementing an 'object oriented' publishing process. Writers focus on documentation content and structure, using generalised markup, to develop neutral form content models. Form, format and functionality are added to the content in our production systems, via electronic 'style sheets'. Different production systems and style sheets produce a variety of traditional paper documents and server-based, PC-based and CD-ROM-based 'electronic documents'.",1990
zhao-etal-2021-good,https://aclanthology.org/2021.emnlp-main.537,0,,,,,,,"It Is Not As Good As You Think! Evaluating Simultaneous Machine Translation on Interpretation Data. Most existing simultaneous machine translation (SiMT) systems are trained and evaluated on offline translation corpora. We argue that SiMT systems should be trained and tested on real interpretation data. To illustrate this argument, we propose an interpretation test set and conduct a realistic evaluation of SiMT trained on offline translations. Our results, on our test set along with 3 existing smaller scale language pairs, highlight the difference of up to 13.83 BLEU score when SiMT models are evaluated on translation vs interpretation data. In the absence of interpretation training data, we propose a translation-to-interpretation (T2I) style transfer method which allows converting existing offline translations into interpretation-style data, leading to up to 2.8 BLEU improvement. However, the evaluation gap remains notable, calling for constructing large-scale interpretation corpora better suited for evaluating and developing SiMT systems.",It Is Not As Good As You Think! Evaluating Simultaneous Machine Translation on Interpretation Data,"Most existing simultaneous machine translation (SiMT) systems are trained and evaluated on offline translation corpora. We argue that SiMT systems should be trained and tested on real interpretation data. To illustrate this argument, we propose an interpretation test set and conduct a realistic evaluation of SiMT trained on offline translations. Our results, on our test set along with 3 existing smaller scale language pairs, highlight the difference of up to 13.83 BLEU score when SiMT models are evaluated on translation vs interpretation data. In the absence of interpretation training data, we propose a translation-to-interpretation (T2I) style transfer method which allows converting existing offline translations into interpretation-style data, leading to up to 2.8 BLEU improvement. However, the evaluation gap remains notable, calling for constructing large-scale interpretation corpora better suited for evaluating and developing SiMT systems.",It Is Not As Good As You Think! Evaluating Simultaneous Machine Translation on Interpretation Data,"Most existing simultaneous machine translation (SiMT) systems are trained and evaluated on offline translation corpora. We argue that SiMT systems should be trained and tested on real interpretation data. To illustrate this argument, we propose an interpretation test set and conduct a realistic evaluation of SiMT trained on offline translations. Our results, on our test set along with 3 existing smaller scale language pairs, highlight the difference of up to 13.83 BLEU score when SiMT models are evaluated on translation vs interpretation data. In the absence of interpretation training data, we propose a translation-to-interpretation (T2I) style transfer method which allows converting existing offline translations into interpretation-style data, leading to up to 2.8 BLEU improvement. However, the evaluation gap remains notable, calling for constructing large-scale interpretation corpora better suited for evaluating and developing SiMT systems.",,"It Is Not As Good As You Think! Evaluating Simultaneous Machine Translation on Interpretation Data. Most existing simultaneous machine translation (SiMT) systems are trained and evaluated on offline translation corpora. 
We argue that SiMT systems should be trained and tested on real interpretation data. To illustrate this argument, we propose an interpretation test set and conduct a realistic evaluation of SiMT trained on offline translations. Our results, on our test set along with 3 existing smaller scale language pairs, highlight the difference of up to 13.83 BLEU score when SiMT models are evaluated on translation vs interpretation data. In the absence of interpretation training data, we propose a translation-to-interpretation (T2I) style transfer method which allows converting existing offline translations into interpretation-style data, leading to up to 2.8 BLEU improvement. However, the evaluation gap remains notable, calling for constructing large-scale interpretation corpora better suited for evaluating and developing SiMT systems.",2021
ladhak-etal-2020-wikilingua,https://aclanthology.org/2020.findings-emnlp.360,0,,,,,,,"WikiLingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization. We introduce WikiLingua, a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems. We extract article and summary pairs in 18 languages from WikiHow, a high quality, collaborative resource of how-to guides on a diverse set of topics written by human authors. We create gold-standard article-summary alignments across languages by aligning the images that are used to describe each how-to step in an article. As a set of baselines for further studies, we evaluate the performance of existing cross-lingual abstractive summarization methods on our dataset. We further propose a method for direct cross-lingual summarization (i.e., without requiring translation at inference time) by leveraging synthetic data and Neural Machine Translation as a pre-training step. Our method significantly outperforms the baseline approaches, while being more cost efficient during inference.",{W}iki{L}ingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization,"We introduce WikiLingua, a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems. We extract article and summary pairs in 18 languages from WikiHow, a high quality, collaborative resource of how-to guides on a diverse set of topics written by human authors. We create gold-standard article-summary alignments across languages by aligning the images that are used to describe each how-to step in an article. As a set of baselines for further studies, we evaluate the performance of existing cross-lingual abstractive summarization methods on our dataset. We further propose a method for direct cross-lingual summarization (i.e., without requiring translation at inference time) by leveraging synthetic data and Neural Machine Translation as a pre-training step. Our method significantly outperforms the baseline approaches, while being more cost efficient during inference.",WikiLingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization,"We introduce WikiLingua, a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems. We extract article and summary pairs in 18 languages from WikiHow, a high quality, collaborative resource of how-to guides on a diverse set of topics written by human authors. We create gold-standard article-summary alignments across languages by aligning the images that are used to describe each how-to step in an article. As a set of baselines for further studies, we evaluate the performance of existing cross-lingual abstractive summarization methods on our dataset. We further propose a method for direct cross-lingual summarization (i.e., without requiring translation at inference time) by leveraging synthetic data and Neural Machine Translation as a pre-training step. Our method significantly outperforms the baseline approaches, while being more cost efficient during inference.","We would like to thank Chris Kedzie and the anonymous reviewers for their feedback. This research is based on work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract FA8650-17-C-9117. 
This work is also supported in part by National Science Foundation (NSF) grant 1815455 and Defense Advanced Research Projects Agency (DARPA) LwLL FA8750-19-2-0039. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of ODNI, IARPA, NSF, DARPA or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.","WikiLingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization. We introduce WikiLingua, a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems. We extract article and summary pairs in 18 languages from WikiHow, a high quality, collaborative resource of how-to guides on a diverse set of topics written by human authors. We create gold-standard article-summary alignments across languages by aligning the images that are used to describe each how-to step in an article. As a set of baselines for further studies, we evaluate the performance of existing cross-lingual abstractive summarization methods on our dataset. We further propose a method for direct cross-lingual summarization (i.e., without requiring translation at inference time) by leveraging synthetic data and Neural Machine Translation as a pre-training step. Our method significantly outperforms the baseline approaches, while being more cost efficient during inference.",2020
lebanoff-etal-2021-semantic,https://aclanthology.org/2021.adaptnlp-1.25,0,,,,,,,"Semantic Parsing of Brief and Multi-Intent Natural Language Utterances. Many military communication domains involve rapidly conveying situation awareness with few words. Converting natural language utterances to logical forms in these domains is challenging, as these utterances are brief and contain multiple intents. In this paper, we present a first effort toward building a weakly-supervised semantic parser to transform brief, multi-intent natural utterances into logical forms. Our findings suggest a new ""projection and reduction"" method that iteratively performs projection from natural to canonical utterances followed by reduction of natural utterances is the most effective. We conduct extensive experiments on two military and a general-domain dataset and provide a new baseline for future research toward accurate parsing of multi-intent utterances.",Semantic Parsing of Brief and Multi-Intent Natural Language Utterances,"Many military communication domains involve rapidly conveying situation awareness with few words. Converting natural language utterances to logical forms in these domains is challenging, as these utterances are brief and contain multiple intents. In this paper, we present a first effort toward building a weakly-supervised semantic parser to transform brief, multi-intent natural utterances into logical forms. Our findings suggest a new ""projection and reduction"" method that iteratively performs projection from natural to canonical utterances followed by reduction of natural utterances is the most effective. We conduct extensive experiments on two military and a general-domain dataset and provide a new baseline for future research toward accurate parsing of multi-intent utterances.",Semantic Parsing of Brief and Multi-Intent Natural Language Utterances,"Many military communication domains involve rapidly conveying situation awareness with few words. Converting natural language utterances to logical forms in these domains is challenging, as these utterances are brief and contain multiple intents. In this paper, we present a first effort toward building a weakly-supervised semantic parser to transform brief, multi-intent natural utterances into logical forms. Our findings suggest a new ""projection and reduction"" method that iteratively performs projection from natural to canonical utterances followed by reduction of natural utterances is the most effective. We conduct extensive experiments on two military and a general-domain dataset and provide a new baseline for future research toward accurate parsing of multi-intent utterances.","This research is based upon work supported by the Naval Air Warfare Center Training Systems Division and the Department of the Navy's Small Business Innovation Research (SBIR) Program, contract N68335-19-C-0052. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Department of the Navy or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.","Semantic Parsing of Brief and Multi-Intent Natural Language Utterances. Many military communication domains involve rapidly conveying situation awareness with few words. 
Converting natural language utterances to logical forms in these domains is challenging, as these utterances are brief and contain multiple intents. In this paper, we present a first effort toward building a weakly-supervised semantic parser to transform brief, multi-intent natural utterances into logical forms. Our findings suggest a new ""projection and reduction"" method that iteratively performs projection from natural to canonical utterances followed by reduction of natural utterances is the most effective. We conduct extensive experiments on two military and a general-domain dataset and provide a new baseline for future research toward accurate parsing of multi-intent utterances.",2021
zheng-etal-2010-growing,https://aclanthology.org/P10-3009,0,,,,,,,"Growing Related Words from Seed via User Behaviors: A Re-Ranking Based Approach. Motivated by Google Sets, we study the problem of growing related words from a single seed word by leveraging user behaviors hiding in user records of Chinese input method. Our proposed method is motivated by the observation that the more frequently two words cooccur in user records, the more related they are. First, we utilize user behaviors to generate candidate words. Then, we utilize search engine to enrich candidate words with adequate semantic features. Finally, we reorder candidate words according to their semantic relatedness to the seed word. Experimental results on a Chinese input method dataset show that our method gains better performance.",Growing Related Words from Seed via User Behaviors: A Re-Ranking Based Approach,"Motivated by Google Sets, we study the problem of growing related words from a single seed word by leveraging user behaviors hiding in user records of Chinese input method. Our proposed method is motivated by the observation that the more frequently two words cooccur in user records, the more related they are. First, we utilize user behaviors to generate candidate words. Then, we utilize search engine to enrich candidate words with adequate semantic features. Finally, we reorder candidate words according to their semantic relatedness to the seed word. Experimental results on a Chinese input method dataset show that our method gains better performance.",Growing Related Words from Seed via User Behaviors: A Re-Ranking Based Approach,"Motivated by Google Sets, we study the problem of growing related words from a single seed word by leveraging user behaviors hiding in user records of Chinese input method. Our proposed method is motivated by the observation that the more frequently two words cooccur in user records, the more related they are. First, we utilize user behaviors to generate candidate words. Then, we utilize search engine to enrich candidate words with adequate semantic features. Finally, we reorder candidate words according to their semantic relatedness to the seed word. Experimental results on a Chinese input method dataset show that our method gains better performance.",We thank Xiance Si and Wufeng Ke for providing the Baidu encyclopedia corpus for evaluation. We also thank the anonymous reviewers for their helpful comments and suggestions. This work is supported by a Tsinghua-Sogou joint research project.,"Growing Related Words from Seed via User Behaviors: A Re-Ranking Based Approach. Motivated by Google Sets, we study the problem of growing related words from a single seed word by leveraging user behaviors hiding in user records of Chinese input method. Our proposed method is motivated by the observation that the more frequently two words cooccur in user records, the more related they are. First, we utilize user behaviors to generate candidate words. Then, we utilize search engine to enrich candidate words with adequate semantic features. Finally, we reorder candidate words according to their semantic relatedness to the seed word. Experimental results on a Chinese input method dataset show that our method gains better performance.",2010
chang-kou-1988-new,https://aclanthology.org/O88-1005,0,,,,,,,A New Approach to Quality Text Generation. ,A New Approach to Quality Text Generation,,A New Approach to Quality Text Generation,,,A New Approach to Quality Text Generation. ,1988
gosangi-etal-2021-use,https://aclanthology.org/2021.naacl-main.359,1,,,,industry_innovation_infrastructure,,,"On the Use of Context for Predicting Citation Worthiness of Sentences in Scholarly Articles. In this paper, we study the importance of context in predicting the citation worthiness of sentences in scholarly articles. We formulate this problem as a sequence labeling task solved using a hierarchical BiLSTM model. We contribute a new benchmark dataset containing over two million sentences and their corresponding labels. We preserve the sentence order in this dataset and perform document-level train/test splits, which importantly allows incorporating contextual information in the modeling process. We evaluate the proposed approach on three benchmark datasets. Our results quantify the benefits of using context and contextual embeddings for citation worthiness. Lastly, through error analysis, we provide insights into cases where context plays an essential role in predicting citation worthiness.",On the Use of Context for Predicting Citation Worthiness of Sentences in Scholarly Articles,"In this paper, we study the importance of context in predicting the citation worthiness of sentences in scholarly articles. We formulate this problem as a sequence labeling task solved using a hierarchical BiLSTM model. We contribute a new benchmark dataset containing over two million sentences and their corresponding labels. We preserve the sentence order in this dataset and perform document-level train/test splits, which importantly allows incorporating contextual information in the modeling process. We evaluate the proposed approach on three benchmark datasets. Our results quantify the benefits of using context and contextual embeddings for citation worthiness. Lastly, through error analysis, we provide insights into cases where context plays an essential role in predicting citation worthiness.",On the Use of Context for Predicting Citation Worthiness of Sentences in Scholarly Articles,"In this paper, we study the importance of context in predicting the citation worthiness of sentences in scholarly articles. We formulate this problem as a sequence labeling task solved using a hierarchical BiLSTM model. We contribute a new benchmark dataset containing over two million sentences and their corresponding labels. We preserve the sentence order in this dataset and perform document-level train/test splits, which importantly allows incorporating contextual information in the modeling process. We evaluate the proposed approach on three benchmark datasets. Our results quantify the benefits of using context and contextual embeddings for citation worthiness. Lastly, through error analysis, we provide insights into cases where context plays an essential role in predicting citation worthiness.",,"On the Use of Context for Predicting Citation Worthiness of Sentences in Scholarly Articles. In this paper, we study the importance of context in predicting the citation worthiness of sentences in scholarly articles. We formulate this problem as a sequence labeling task solved using a hierarchical BiLSTM model. We contribute a new benchmark dataset containing over two million sentences and their corresponding labels. We preserve the sentence order in this dataset and perform document-level train/test splits, which importantly allows incorporating contextual information in the modeling process. We evaluate the proposed approach on three benchmark datasets. 
Our results quantify the benefits of using context and contextual embeddings for citation worthiness. Lastly, through error analysis, we provide insights into cases where context plays an essential role in predicting citation worthiness.",2021
tubay-costa-jussa-2018-neural,https://aclanthology.org/W18-6449,1,,,,health,,,"Neural Machine Translation with the Transformer and Multi-Source Romance Languages for the Biomedical WMT 2018 task. The Transformer architecture has become the state-of-the-art in Machine Translation. This model, which relies on attention-based mechanisms, has outperformed previous neural machine translation architectures in several tasks. In this system description paper, we report details of training neural machine translation with multi-source Romance languages with the Transformer model and in the evaluation frame of the biomedical WMT 2018 task. Using multi-source languages from the same family allows improvements of over 6 BLEU points.",Neural Machine Translation with the Transformer and Multi-Source {R}omance Languages for the Biomedical {WMT} 2018 task,"The Transformer architecture has become the state-of-the-art in Machine Translation. This model, which relies on attention-based mechanisms, has outperformed previous neural machine translation architectures in several tasks. In this system description paper, we report details of training neural machine translation with multi-source Romance languages with the Transformer model and in the evaluation frame of the biomedical WMT 2018 task. Using multi-source languages from the same family allows improvements of over 6 BLEU points.",Neural Machine Translation with the Transformer and Multi-Source Romance Languages for the Biomedical WMT 2018 task,"The Transformer architecture has become the state-of-the-art in Machine Translation. This model, which relies on attention-based mechanisms, has outperformed previous neural machine translation architectures in several tasks. In this system description paper, we report details of training neural machine translation with multi-source Romance languages with the Transformer model and in the evaluation frame of the biomedical WMT 2018 task. Using multi-source languages from the same family allows improvements of over 6 BLEU points.",Authors would like to thank Noe Casas for his valuable comments. This work is supported in,"Neural Machine Translation with the Transformer and Multi-Source Romance Languages for the Biomedical WMT 2018 task. The Transformer architecture has become the state-of-the-art in Machine Translation. This model, which relies on attention-based mechanisms, has outperformed previous neural machine translation architectures in several tasks. In this system description paper, we report details of training neural machine translation with multi-source Romance languages with the Transformer model and in the evaluation frame of the biomedical WMT 2018 task. Using multi-source languages from the same family allows improvements of over 6 BLEU points.",2018
yi-etal-2007-semantic,https://aclanthology.org/N07-1069,0,,,,,,,"Can Semantic Roles Generalize Across Genres?. PropBank has been widely used as training data for Semantic Role Labeling. However, because this training data is taken from the WSJ, the resulting machine learning models tend to overfit on idiosyncrasies of that text's style, and do not port well to other genres. In addition, since PropBank was designed on a verb-by-verb basis, the argument labels Arg2-Arg5 get used for very diverse argument roles with inconsistent training instances. For example, the verb ""make"" uses Arg2 for the ""Material"" argument; but the verb ""multiply"" uses Arg2 for the ""Extent"" argument. As a result, it can be difficult for automatic classifiers to learn to distinguish arguments Arg2-Arg5. We have created a mapping between PropBank and VerbNet that provides a VerbNet thematic role label for each verb-specific PropBank label. Since VerbNet uses argument labels that are more consistent across verbs, we are able to demonstrate that these new labels are easier to learn.",Can Semantic Roles Generalize Across Genres?,"PropBank has been widely used as training data for Semantic Role Labeling. However, because this training data is taken from the WSJ, the resulting machine learning models tend to overfit on idiosyncrasies of that text's style, and do not port well to other genres. In addition, since PropBank was designed on a verb-by-verb basis, the argument labels Arg2-Arg5 get used for very diverse argument roles with inconsistent training instances. For example, the verb ""make"" uses Arg2 for the ""Material"" argument; but the verb ""multiply"" uses Arg2 for the ""Extent"" argument. As a result, it can be difficult for automatic classifiers to learn to distinguish arguments Arg2-Arg5. We have created a mapping between PropBank and VerbNet that provides a VerbNet thematic role label for each verb-specific PropBank label. Since VerbNet uses argument labels that are more consistent across verbs, we are able to demonstrate that these new labels are easier to learn.",Can Semantic Roles Generalize Across Genres?,"PropBank has been widely used as training data for Semantic Role Labeling. However, because this training data is taken from the WSJ, the resulting machine learning models tend to overfit on idiosyncrasies of that text's style, and do not port well to other genres. In addition, since PropBank was designed on a verb-by-verb basis, the argument labels Arg2-Arg5 get used for very diverse argument roles with inconsistent training instances. For example, the verb ""make"" uses Arg2 for the ""Material"" argument; but the verb ""multiply"" uses Arg2 for the ""Extent"" argument. As a result, it can be difficult for automatic classifiers to learn to distinguish arguments Arg2-Arg5. We have created a mapping between PropBank and VerbNet that provides a VerbNet thematic role label for each verb-specific PropBank label. Since VerbNet uses argument labels that are more consistent across verbs, we are able to demonstrate that these new labels are easier to learn.",,"Can Semantic Roles Generalize Across Genres?. PropBank has been widely used as training data for Semantic Role Labeling. However, because this training data is taken from the WSJ, the resulting machine learning models tend to overfit on idiosyncrasies of that text's style, and do not port well to other genres. 
In addition, since PropBank was designed on a verb-by-verb basis, the argument labels Arg2-Arg5 get used for very diverse argument roles with inconsistent training instances. For example, the verb ""make"" uses Arg2 for the ""Material"" argument; but the verb ""multiply"" uses Arg2 for the ""Extent"" argument. As a result, it can be difficult for automatic classifiers to learn to distinguish arguments Arg2-Arg5. We have created a mapping between PropBank and VerbNet that provides a VerbNet thematic role label for each verb-specific PropBank label. Since VerbNet uses argument labels that are more consistent across verbs, we are able to demonstrate that these new labels are easier to learn.",2007
herbelot-vecchi-2015-building,https://aclanthology.org/D15-1003,0,,,,,,,"Building a shared world: mapping distributional to model-theoretic semantic spaces. In this paper, we introduce an approach to automatically map a standard distributional semantic space onto a set-theoretic model. We predict that there is a functional relationship between distributional information and vectorial concept representations in which dimensions are predicates and weights are generalised quantifiers. In order to test our prediction, we learn a model of such relationship over a publicly available dataset of feature norms annotated with natural language quantifiers. Our initial experimental results show that, at least for domain-specific data, we can indeed map between formalisms, and generate high-quality vector representations which encapsulate set overlap information. We further investigate the generation of natural language quantifiers from such vectors.",Building a shared world: mapping distributional to model-theoretic semantic spaces,"In this paper, we introduce an approach to automatically map a standard distributional semantic space onto a set-theoretic model. We predict that there is a functional relationship between distributional information and vectorial concept representations in which dimensions are predicates and weights are generalised quantifiers. In order to test our prediction, we learn a model of such relationship over a publicly available dataset of feature norms annotated with natural language quantifiers. Our initial experimental results show that, at least for domain-specific data, we can indeed map between formalisms, and generate high-quality vector representations which encapsulate set overlap information. We further investigate the generation of natural language quantifiers from such vectors.",Building a shared world: mapping distributional to model-theoretic semantic spaces,"In this paper, we introduce an approach to automatically map a standard distributional semantic space onto a set-theoretic model. We predict that there is a functional relationship between distributional information and vectorial concept representations in which dimensions are predicates and weights are generalised quantifiers. In order to test our prediction, we learn a model of such relationship over a publicly available dataset of feature norms annotated with natural language quantifiers. Our initial experimental results show that, at least for domain-specific data, we can indeed map between formalisms, and generate high-quality vector representations which encapsulate set overlap information. We further investigate the generation of natural language quantifiers from such vectors.","We thank Marco Baroni, Stephen Clark, Ann Copestake and Katrin Erk for their helpful comments on a previous version of this paper, and the three anonymous reviewers for their thorough feedback on this work. Eva Maria Vecchi is supported by ERC Starting Grant DisCoTex (306920).","Building a shared world: mapping distributional to model-theoretic semantic spaces. In this paper, we introduce an approach to automatically map a standard distributional semantic space onto a set-theoretic model. We predict that there is a functional relationship between distributional information and vectorial concept representations in which dimensions are predicates and weights are generalised quantifiers. 
In order to test our prediction, we learn a model of such relationship over a publicly available dataset of feature norms annotated with natural language quantifiers. Our initial experimental results show that, at least for domain-specific data, we can indeed map between formalisms, and generate high-quality vector representations which encapsulate set overlap information. We further investigate the generation of natural language quantifiers from such vectors.",2015
wang-etal-2019-aspect,https://aclanthology.org/P19-1345,0,,,,,,,"Aspect Sentiment Classification Towards Question-Answering with Reinforced Bidirectional Attention Network. In the literature, existing studies on aspect sentiment classification (ASC) focus on individual non-interactive reviews. This paper extends the research to interactive reviews and proposes a new research task, namely Aspect Sentiment Classification towards Question-Answering (ASC-QA), for real-world applications. This new task aims to predict sentiment polarities for specific aspects from interactive QA style reviews. In particular, a high-quality annotated corpus is constructed for ASC-QA to facilitate corresponding research. On this basis, a Reinforced Bidirectional Attention Network (RBAN) approach is proposed to address two inherent challenges in ASC-QA, i.e., semantic matching between question and answer, and data noise. Experimental results demonstrate the great advantage of the proposed approach to ASC-QA against several state-of-the-art baselines.",Aspect Sentiment Classification Towards Question-Answering with Reinforced Bidirectional Attention Network,"In the literature, existing studies on aspect sentiment classification (ASC) focus on individual non-interactive reviews. This paper extends the research to interactive reviews and proposes a new research task, namely Aspect Sentiment Classification towards Question-Answering (ASC-QA), for real-world applications. This new task aims to predict sentiment polarities for specific aspects from interactive QA style reviews. In particular, a high-quality annotated corpus is constructed for ASC-QA to facilitate corresponding research. On this basis, a Reinforced Bidirectional Attention Network (RBAN) approach is proposed to address two inherent challenges in ASC-QA, i.e., semantic matching between question and answer, and data noise. Experimental results demonstrate the great advantage of the proposed approach to ASC-QA against several state-of-the-art baselines.",Aspect Sentiment Classification Towards Question-Answering with Reinforced Bidirectional Attention Network,"In the literature, existing studies on aspect sentiment classification (ASC) focus on individual non-interactive reviews. This paper extends the research to interactive reviews and proposes a new research task, namely Aspect Sentiment Classification towards Question-Answering (ASC-QA), for real-world applications. This new task aims to predict sentiment polarities for specific aspects from interactive QA style reviews. In particular, a high-quality annotated corpus is constructed for ASC-QA to facilitate corresponding research. On this basis, a Reinforced Bidirectional Attention Network (RBAN) approach is proposed to address two inherent challenges in ASC-QA, i.e., semantic matching between question and answer, and data noise. Experimental results demonstrate the great advantage of the proposed approach to ASC-QA against several state-of-the-art baselines.","We thank our anonymous reviewers for their helpful comments. This work was supported by three NSFC grants, i.e., No.61672366, No.61702149 and No.61525205. This work was also supported by the joint research project of Alibaba Group and Soochow University.","Aspect Sentiment Classification Towards Question-Answering with Reinforced Bidirectional Attention Network. In the literature, existing studies on aspect sentiment classification (ASC) focus on individual non-interactive reviews. 
This paper extends the research to interactive reviews and proposes a new research task, namely Aspect Sentiment Classification towards Question-Answering (ASC-QA), for real-world applications. This new task aims to predict sentiment polarities for specific aspects from interactive QA style reviews. In particular, a high-quality annotated corpus is constructed for ASC-QA to facilitate corresponding research. On this basis, a Reinforced Bidirectional Attention Network (RBAN) approach is proposed to address two inherent challenges in ASC-QA, i.e., semantic matching between question and answer, and data noise. Experimental results demonstrate the great advantage of the proposed approach to ASC-QA against several state-of-the-art baselines.",2019
ghaeini-etal-2018-dependent,https://aclanthology.org/C18-1282,0,,,,,,,"Dependent Gated Reading for Cloze-Style Question Answering. We present a novel deep learning architecture to address the cloze-style question answering task. Existing approaches employ reading mechanisms that do not fully exploit the interdependency between the document and the query. In this paper, we propose a novel dependent gated reading bidirectional GRU network (DGR) to efficiently model the relationship between the document and the query during encoding and decision making. Our evaluation shows that DGR obtains highly competitive performance on well-known machine comprehension benchmarks such as the Children's Book Test (CBT-NE and CBT-CN) and Who DiD What (WDW, Strict and Relaxed). Finally, we extensively analyze and validate our model by ablation and attention studies.",Dependent Gated Reading for Cloze-Style Question Answering,"We present a novel deep learning architecture to address the cloze-style question answering task. Existing approaches employ reading mechanisms that do not fully exploit the interdependency between the document and the query. In this paper, we propose a novel dependent gated reading bidirectional GRU network (DGR) to efficiently model the relationship between the document and the query during encoding and decision making. Our evaluation shows that DGR obtains highly competitive performance on well-known machine comprehension benchmarks such as the Children's Book Test (CBT-NE and CBT-CN) and Who DiD What (WDW, Strict and Relaxed). Finally, we extensively analyze and validate our model by ablation and attention studies.",Dependent Gated Reading for Cloze-Style Question Answering,"We present a novel deep learning architecture to address the cloze-style question answering task. Existing approaches employ reading mechanisms that do not fully exploit the interdependency between the document and the query. In this paper, we propose a novel dependent gated reading bidirectional GRU network (DGR) to efficiently model the relationship between the document and the query during encoding and decision making. Our evaluation shows that DGR obtains highly competitive performance on well-known machine comprehension benchmarks such as the Children's Book Test (CBT-NE and CBT-CN) and Who DiD What (WDW, Strict and Relaxed). Finally, we extensively analyze and validate our model by ablation and attention studies.",,"Dependent Gated Reading for Cloze-Style Question Answering. We present a novel deep learning architecture to address the cloze-style question answering task. Existing approaches employ reading mechanisms that do not fully exploit the interdependency between the document and the query. In this paper, we propose a novel dependent gated reading bidirectional GRU network (DGR) to efficiently model the relationship between the document and the query during encoding and decision making. Our evaluation shows that DGR obtains highly competitive performance on well-known machine comprehension benchmarks such as the Children's Book Test (CBT-NE and CBT-CN) and Who DiD What (WDW, Strict and Relaxed). Finally, we extensively analyze and validate our model by ablation and attention studies.",2018
chen-etal-2017-leveraging,https://aclanthology.org/K17-1006,0,,,,,,,"Leveraging Eventive Information for Better Metaphor Detection and Classification. Metaphor detection has been both challenging and rewarding in natural language processing applications. This study offers a new approach based on eventive information in detecting metaphors by leveraging the Chinese writing system, which is a culturally bound ontological system organized according to the basic concepts represented by radicals. As such, the information represented is available in all Chinese text without pre-processing. Since metaphor detection is another culturally based conceptual representation, we hypothesize that sub-textual information can facilitate the identification and classification of the types of metaphoric events denoted in Chinese text. We propose a set of syntactic conditions crucial to event structures to improve the model based on the classification of radical groups. With the proposed syntactic conditions, the model achieves a performance of 0.8859 in terms of F-scores, making a 1.7% improvement over the same classifier with only Bag-of-word features. Results show that eventive information can improve the effectiveness of metaphor detection. Event information is rooted in every language, and thus this approach has a high potential to be applied to metaphor detection in other languages.",Leveraging Eventive Information for Better Metaphor Detection and Classification,"Metaphor detection has been both challenging and rewarding in natural language processing applications. This study offers a new approach based on eventive information in detecting metaphors by leveraging the Chinese writing system, which is a culturally bound ontological system organized according to the basic concepts represented by radicals. As such, the information represented is available in all Chinese text without pre-processing. Since metaphor detection is another culturally based conceptual representation, we hypothesize that sub-textual information can facilitate the identification and classification of the types of metaphoric events denoted in Chinese text. We propose a set of syntactic conditions crucial to event structures to improve the model based on the classification of radical groups. With the proposed syntactic conditions, the model achieves a performance of 0.8859 in terms of F-scores, making a 1.7% improvement over the same classifier with only Bag-of-word features. Results show that eventive information can improve the effectiveness of metaphor detection. Event information is rooted in every language, and thus this approach has a high potential to be applied to metaphor detection in other languages.",Leveraging Eventive Information for Better Metaphor Detection and Classification,"Metaphor detection has been both challenging and rewarding in natural language processing applications. This study offers a new approach based on eventive information in detecting metaphors by leveraging the Chinese writing system, which is a culturally bound ontological system organized according to the basic concepts represented by radicals. As such, the information represented is available in all Chinese text without pre-processing. Since metaphor detection is another culturally based conceptual representation, we hypothesize that sub-textual information can facilitate the identification and classification of the types of metaphoric events denoted in Chinese text. 
We propose a set of syntactic conditions crucial to event structures to improve the model based on the classification of radical groups. With the proposed syntactic conditions, the model achieves a performance of 0.8859 in terms of F-scores, making a 1.7% improvement over the same classifier with only Bag-of-word features. Results show that eventive information can improve the effectiveness of metaphor detection. Event information is rooted in every language, and thus this approach has a high potential to be applied to metaphor detection in other languages.","The work is partially supported by the following research grants from Hong Kong Polytechnic University: 1-YW1V, 4-ZZFE and RTVU; as well as GRF grants (PolyU 15211/14E and PolyU 152006/16E).","Leveraging Eventive Information for Better Metaphor Detection and Classification. Metaphor detection has been both challenging and rewarding in natural language processing applications. This study offers a new approach based on eventive information in detecting metaphors by leveraging the Chinese writing system, which is a culturally bound ontological system organized according to the basic concepts represented by radicals. As such, the information represented is available in all Chinese text without pre-processing. Since metaphor detection is another culturally based conceptual representation, we hypothesize that sub-textual information can facilitate the identification and classification of the types of metaphoric events denoted in Chinese text. We propose a set of syntactic conditions crucial to event structures to improve the model based on the classification of radical groups. With the proposed syntactic conditions, the model achieves a performance of 0.8859 in terms of F-scores, making a 1.7% improvement over the same classifier with only Bag-of-word features. Results show that eventive information can improve the effectiveness of metaphor detection. Event information is rooted in every language, and thus this approach has a high potential to be applied to metaphor detection in other languages.",2017
song-etal-2010-enhanced,http://www.lrec-conf.org/proceedings/lrec2010/pdf/798_Paper.pdf,0,,,,,,,"Enhanced Infrastructure for Creation and Collection of Translation Resources. manual translation, parallel text harvesting, acquisition of existing manual translations Chinese > English 100M + BN, BC, NW, WB manual translation, parallel text harvesting, acquisition of existing manual translations English > Chinese 250K + BN, BC, NW, WB manual translation English>Arabic 250K + BN, BC, NW, WB manual translation Bengali>English Pashto>English Punjabi>English Tagalog>English Tamil>English Thai>English Urdu>English Uzbek>English 250-500K + per language pair manual translation, parallel text harvesting NW, WB",Enhanced Infrastructure for Creation and Collection of Translation Resources,"manual translation, parallel text harvesting, acquisition of existing manual translations Chinese > English 100M + BN, BC, NW, WB manual translation, parallel text harvesting, acquisition of existing manual translations English > Chinese 250K + BN, BC, NW, WB manual translation English>Arabic 250K + BN, BC, NW, WB manual translation Bengali>English Pashto>English Punjabi>English Tagalog>English Tamil>English Thai>English Urdu>English Uzbek>English 250-500K + per language pair manual translation, parallel text harvesting NW, WB",Enhanced Infrastructure for Creation and Collection of Translation Resources,"manual translation, parallel text harvesting, acquisition of existing manual translations Chinese > English 100M + BN, BC, NW, WB manual translation, parallel text harvesting, acquisition of existing manual translations English > Chinese 250K + BN, BC, NW, WB manual translation English>Arabic 250K + BN, BC, NW, WB manual translation Bengali>English Pashto>English Punjabi>English Tagalog>English Tamil>English Thai>English Urdu>English Uzbek>English 250-500K + per language pair manual translation, parallel text harvesting NW, WB","This work was supported in part by the Defense Advanced Research Projects Agency, GALE Program Grant No. HR0011-06-1-0003. The content of this paper does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.","Enhanced Infrastructure for Creation and Collection of Translation Resources. manual translation, parallel text harvesting, acquisition of existing manual translations Chinese > English 100M + BN, BC, NW, WB manual translation, parallel text harvesting, acquisition of existing manual translations English > Chinese 250K + BN, BC, NW, WB manual translation English>Arabic 250K + BN, BC, NW, WB manual translation Bengali>English Pashto>English Punjabi>English Tagalog>English Tamil>English Thai>English Urdu>English Uzbek>English 250-500K + per language pair manual translation, parallel text harvesting NW, WB",2010
delpech-saint-dizier-2008-investigating,http://www.lrec-conf.org/proceedings/lrec2008/pdf/20_paper.pdf,0,,,,,,,"Investigating the Structure of Procedural Texts for Answering How-to Questions. This paper presents ongoing work dedicated to parsing the textual structure of procedural texts. We propose here a model for the instructional structure and criteria to identify its main components: titles, instructions, warnings and prerequisites. The main aim of this project, besides a contribution to text processing, is to be able to answer procedural questions (How-to? questions), where the answer is a well-formed portion of a text, not a small set of words as for factoid questions.",Investigating the Structure of Procedural Texts for Answering How-to Questions,"This paper presents ongoing work dedicated to parsing the textual structure of procedural texts. We propose here a model for the instructional structure and criteria to identify its main components: titles, instructions, warnings and prerequisites. The main aim of this project, besides a contribution to text processing, is to be able to answer procedural questions (How-to? questions), where the answer is a well-formed portion of a text, not a small set of words as for factoid questions.",Investigating the Structure of Procedural Texts for Answering How-to Questions,"This paper presents ongoing work dedicated to parsing the textual structure of procedural texts. We propose here a model for the instructional structure and criteria to identify its main components: titles, instructions, warnings and prerequisites. The main aim of this project, besides a contribution to text processing, is to be able to answer procedural questions (How-to? questions), where the answer is a well-formed portion of a text, not a small set of words as for factoid questions.",Acknowledgements This paper relates work realized within the French ANR project TextCoop. We thank its partners for stimulating discussions.,"Investigating the Structure of Procedural Texts for Answering How-to Questions. This paper presents ongoing work dedicated to parsing the textual structure of procedural texts. We propose here a model for the instructional structure and criteria to identify its main components: titles, instructions, warnings and prerequisites. The main aim of this project, besides a contribution to text processing, is to be able to answer procedural questions (How-to? questions), where the answer is a well-formed portion of a text, not a small set of words as for factoid questions.",2008
srivastava-etal-2018-identifying,https://aclanthology.org/W18-4412,1,,,,hate_speech,,,"Identifying Aggression and Toxicity in Comments using Capsule Network. Aggression and related activities like trolling, hate speech etc. involve toxic comments in various forms. These are common scenarios in today's time and websites react by shutting down their comment sections. To tackle this, an algorithmic solution is preferred to human moderation which is slow and expensive. In this paper, we propose a single model capsule network with focal loss to achieve this task which is suitable for production environment. Our model achieves competitive results over other strong baseline methods, which show its effectiveness and that focal loss exhibits significant improvement in such cases where class imbalance is a regular issue. Additionally, we show that the problem of extensive data preprocessing, data augmentation can be tackled by capsule networks implicitly. We achieve an overall ROC AUC of 98.46 on the Kaggle toxic comment dataset and show that it beats other architectures by a good margin. As comments tend to be written in more than one language, and transliteration is a common problem, we further show that our model handles this effectively by applying our model on TRAC shared task dataset which contains comments in code-mixed Hindi-English.",Identifying Aggression and Toxicity in Comments using Capsule Network,"Aggression and related activities like trolling, hate speech etc. involve toxic comments in various forms. These are common scenarios in today's time and websites react by shutting down their comment sections. To tackle this, an algorithmic solution is preferred to human moderation which is slow and expensive. In this paper, we propose a single model capsule network with focal loss to achieve this task which is suitable for production environment. Our model achieves competitive results over other strong baseline methods, which show its effectiveness and that focal loss exhibits significant improvement in such cases where class imbalance is a regular issue. Additionally, we show that the problem of extensive data preprocessing, data augmentation can be tackled by capsule networks implicitly. We achieve an overall ROC AUC of 98.46 on the Kaggle toxic comment dataset and show that it beats other architectures by a good margin. As comments tend to be written in more than one language, and transliteration is a common problem, we further show that our model handles this effectively by applying our model on TRAC shared task dataset which contains comments in code-mixed Hindi-English.",Identifying Aggression and Toxicity in Comments using Capsule Network,"Aggression and related activities like trolling, hate speech etc. involve toxic comments in various forms. These are common scenarios in today's time and websites react by shutting down their comment sections. To tackle this, an algorithmic solution is preferred to human moderation which is slow and expensive. In this paper, we propose a single model capsule network with focal loss to achieve this task which is suitable for production environment. Our model achieves competitive results over other strong baseline methods, which show its effectiveness and that focal loss exhibits significant improvement in such cases where class imbalance is a regular issue. Additionally, we show that the problem of extensive data preprocessing, data augmentation can be tackled by capsule networks implicitly. 
We achieve an overall ROC AUC of 98.46 on the Kaggle toxic comment dataset and show that it beats other architectures by a good margin. As comments tend to be written in more than one language, and transliteration is a common problem, we further show that our model handles this effectively by applying our model on TRAC shared task dataset which contains comments in code-mixed Hindi-English.",,"Identifying Aggression and Toxicity in Comments using Capsule Network. Aggression and related activities like trolling, hate speech etc. involve toxic comments in various forms. These are common scenarios in today's time and websites react by shutting down their comment sections. To tackle this, an algorithmic solution is preferred to human moderation which is slow and expensive. In this paper, we propose a single model capsule network with focal loss to achieve this task which is suitable for production environment. Our model achieves competitive results over other strong baseline methods, which show its effectiveness and that focal loss exhibits significant improvement in such cases where class imbalance is a regular issue. Additionally, we show that the problem of extensive data preprocessing, data augmentation can be tackled by capsule networks implicitly. We achieve an overall ROC AUC of 98.46 on the Kaggle toxic comment dataset and show that it beats other architectures by a good margin. As comments tend to be written in more than one language, and transliteration is a common problem, we further show that our model handles this effectively by applying our model on TRAC shared task dataset which contains comments in code-mixed Hindi-English.",2018
wich-etal-2020-investigating,https://aclanthology.org/2020.alw-1.22,0,,,,,,,"Investigating Annotator Bias with a Graph-Based Approach. A challenge that many online platforms face is hate speech or any other form of online abuse. To cope with this, hate speech detection systems are developed based on machine learning to reduce manual work for monitoring these platforms. Unfortunately, machine learning is vulnerable to unintended bias in training data, which could have severe consequences, such as a decrease in classification performance or unfair behavior (e.g., discriminating minorities). In the scope of this study, we want to investigate annotator bias-a form of bias that annotators cause due to different knowledge in regards to the task and their subjective perception. Our goal is to identify annotation bias based on similarities in the annotation behavior from annotators. To do so, we build a graph based on the annotations from the different annotators, apply a community detection algorithm to group the annotators, and train for each group classifiers whose performances we compare. By doing so, we are able to identify annotator bias within a data set. The proposed method and collected insights can contribute to developing fairer and more reliable hate speech classification models.",Investigating Annotator Bias with a Graph-Based Approach,"A challenge that many online platforms face is hate speech or any other form of online abuse. To cope with this, hate speech detection systems are developed based on machine learning to reduce manual work for monitoring these platforms. Unfortunately, machine learning is vulnerable to unintended bias in training data, which could have severe consequences, such as a decrease in classification performance or unfair behavior (e.g., discriminating minorities). In the scope of this study, we want to investigate annotator bias-a form of bias that annotators cause due to different knowledge in regards to the task and their subjective perception. Our goal is to identify annotation bias based on similarities in the annotation behavior from annotators. To do so, we build a graph based on the annotations from the different annotators, apply a community detection algorithm to group the annotators, and train for each group classifiers whose performances we compare. By doing so, we are able to identify annotator bias within a data set. The proposed method and collected insights can contribute to developing fairer and more reliable hate speech classification models.",Investigating Annotator Bias with a Graph-Based Approach,"A challenge that many online platforms face is hate speech or any other form of online abuse. To cope with this, hate speech detection systems are developed based on machine learning to reduce manual work for monitoring these platforms. Unfortunately, machine learning is vulnerable to unintended bias in training data, which could have severe consequences, such as a decrease in classification performance or unfair behavior (e.g., discriminating minorities). In the scope of this study, we want to investigate annotator bias-a form of bias that annotators cause due to different knowledge in regards to the task and their subjective perception. Our goal is to identify annotation bias based on similarities in the annotation behavior from annotators. 
To do so, we build a graph based on the annotations from the different annotators, apply a community detection algorithm to group the annotators, and train for each group classifiers whose performances we compare. By doing so, we are able to identify annotator bias within a data set. The proposed method and collected insights can contribute to developing fairer and more reliable hate speech classification models.",This research has been partially funded by a scholarship from the Hanns Seidel Foundation financed by the German Federal Ministry of Education and Research.,"Investigating Annotator Bias with a Graph-Based Approach. A challenge that many online platforms face is hate speech or any other form of online abuse. To cope with this, hate speech detection systems are developed based on machine learning to reduce manual work for monitoring these platforms. Unfortunately, machine learning is vulnerable to unintended bias in training data, which could have severe consequences, such as a decrease in classification performance or unfair behavior (e.g., discriminating minorities). In the scope of this study, we want to investigate annotator bias-a form of bias that annotators cause due to different knowledge in regards to the task and their subjective perception. Our goal is to identify annotation bias based on similarities in the annotation behavior from annotators. To do so, we build a graph based on the annotations from the different annotators, apply a community detection algorithm to group the annotators, and train for each group classifiers whose performances we compare. By doing so, we are able to identify annotator bias within a data set. The proposed method and collected insights can contribute to developing fairer and more reliable hate speech classification models.",2020
hobbs-etal-1992-robust,https://aclanthology.org/A92-1026,0,,,,,,,"Robust Processing of Real-World Natural-Language Texts. It is often assumed that when natural language processing meets the real world, the ideal of aiming for complete and correct interpretations has to be abandoned. However, our experience with TACITUS, especially in the MUC-3 evaluation, has shown that principled techniques for syntactic and pragmatic analysis can be bolstered with methods for achieving robustness. We describe three techniques for making syntactic analysis more robust: an agenda-based scheduling parser, a recovery technique for failed parses, and a new technique called terminal substring parsing. For pragmatics processing, we describe how the method of abductive inference is inherently robust, in that an interpretation is always possible, so that in the absence of the required world knowledge, performance degrades gracefully. Each of these techniques has been evaluated and the results of the evaluations are presented.",Robust Processing of Real-World Natural-Language Texts,"It is often assumed that when natural language processing meets the real world, the ideal of aiming for complete and correct interpretations has to be abandoned. However, our experience with TACITUS, especially in the MUC-3 evaluation, has shown that principled techniques for syntactic and pragmatic analysis can be bolstered with methods for achieving robustness. We describe three techniques for making syntactic analysis more robust: an agenda-based scheduling parser, a recovery technique for failed parses, and a new technique called terminal substring parsing. For pragmatics processing, we describe how the method of abductive inference is inherently robust, in that an interpretation is always possible, so that in the absence of the required world knowledge, performance degrades gracefully. Each of these techniques has been evaluated and the results of the evaluations are presented.",Robust Processing of Real-World Natural-Language Texts,"It is often assumed that when natural language processing meets the real world, the ideal of aiming for complete and correct interpretations has to be abandoned. However, our experience with TACITUS, especially in the MUC-3 evaluation, has shown that principled techniques for syntactic and pragmatic analysis can be bolstered with methods for achieving robustness. We describe three techniques for making syntactic analysis more robust: an agenda-based scheduling parser, a recovery technique for failed parses, and a new technique called terminal substring parsing. For pragmatics processing, we describe how the method of abductive inference is inherently robust, in that an interpretation is always possible, so that in the absence of the required world knowledge, performance degrades gracefully. Each of these techniques has been evaluated and the results of the evaluations are presented.",This research has been funded by the Defense Advanced Research Projects Agency under Office of Naval Research contracts N00014-85-C-0013 and N00014-90-C-0220.,"Robust Processing of Real-World Natural-Language Texts. It is often assumed that when natural language processing meets the real world, the ideal of aiming for complete and correct interpretations has to be abandoned. However, our experience with TACITUS, especially in the MUC-3 evaluation, has shown that principled techniques for syntactic and pragmatic analysis can be bolstered with methods for achieving robustness. 
We describe three techniques for making syntactic analysis more robust: an agenda-based scheduling parser, a recovery technique for failed parses, and a new technique called terminal substring parsing. For pragmatics processing, we describe how the method of abductive inference is inherently robust, in that an interpretation is always possible, so that in the absence of the required world knowledge, performance degrades gracefully. Each of these techniques has been evaluated and the results of the evaluations are presented.",1992
miura-etal-2014-teamx,https://aclanthology.org/S14-2111,0,,,,,,,"TeamX: A Sentiment Analyzer with Enhanced Lexicon Mapping and Weighting Scheme for Unbalanced Data. This paper describes the system that has been used by TeamX in SemEval-2014 Task 9 Subtask B. The system is a sentiment analyzer based on a supervised text categorization approach designed with following two concepts. Firstly, since lexicon features were shown to be effective in SemEval-2013 Task 2, various lexicons and pre-processors for them are introduced to enhance lexical information. Secondly, since a distribution of sentiment on tweets is known to be unbalanced, an weighting scheme is introduced to bias an output of a machine learner. For the test run, the system was tuned towards Twitter texts and successfully achieved high scoring results on Twitter data, average F 1 70.96 on Twit-ter2014 and average F 1 56.50 on Twit-ter2014Sarcasm.",{T}eam{X}: A Sentiment Analyzer with Enhanced Lexicon Mapping and Weighting Scheme for Unbalanced Data,"This paper describes the system that has been used by TeamX in SemEval-2014 Task 9 Subtask B. The system is a sentiment analyzer based on a supervised text categorization approach designed with following two concepts. Firstly, since lexicon features were shown to be effective in SemEval-2013 Task 2, various lexicons and pre-processors for them are introduced to enhance lexical information. Secondly, since a distribution of sentiment on tweets is known to be unbalanced, an weighting scheme is introduced to bias an output of a machine learner. For the test run, the system was tuned towards Twitter texts and successfully achieved high scoring results on Twitter data, average F 1 70.96 on Twit-ter2014 and average F 1 56.50 on Twit-ter2014Sarcasm.",TeamX: A Sentiment Analyzer with Enhanced Lexicon Mapping and Weighting Scheme for Unbalanced Data,"This paper describes the system that has been used by TeamX in SemEval-2014 Task 9 Subtask B. The system is a sentiment analyzer based on a supervised text categorization approach designed with following two concepts. Firstly, since lexicon features were shown to be effective in SemEval-2013 Task 2, various lexicons and pre-processors for them are introduced to enhance lexical information. Secondly, since a distribution of sentiment on tweets is known to be unbalanced, an weighting scheme is introduced to bias an output of a machine learner. For the test run, the system was tuned towards Twitter texts and successfully achieved high scoring results on Twitter data, average F 1 70.96 on Twit-ter2014 and average F 1 56.50 on Twit-ter2014Sarcasm.",We would like to thank the anonymous reviewers for their valuable comments to improve this paper.,"TeamX: A Sentiment Analyzer with Enhanced Lexicon Mapping and Weighting Scheme for Unbalanced Data. This paper describes the system that has been used by TeamX in SemEval-2014 Task 9 Subtask B. The system is a sentiment analyzer based on a supervised text categorization approach designed with following two concepts. Firstly, since lexicon features were shown to be effective in SemEval-2013 Task 2, various lexicons and pre-processors for them are introduced to enhance lexical information. Secondly, since a distribution of sentiment on tweets is known to be unbalanced, an weighting scheme is introduced to bias an output of a machine learner. 
For the test run, the system was tuned towards Twitter texts and successfully achieved high-scoring results on Twitter data, an average F1 of 70.96 on Twitter2014 and an average F1 of 56.50 on Twitter2014Sarcasm.",2014
kennington-schlangen-2021-incremental,https://aclanthology.org/2021.mmsr-1.8,0,,,,,,,"Incremental Unit Networks for Multimodal, Fine-grained Information State Representation. We offer a sketch of a fine-grained information state annotation scheme that follows directly from the Incremental Unit abstract model of dialogue processing when used within a multimodal, co-located, interactive setting. We explain the Incremental Unit model and give an example application using the Localized Narratives dataset, then offer avenues for future research.","Incremental Unit Networks for Multimodal, Fine-grained Information State Representation","We offer a sketch of a fine-grained information state annotation scheme that follows directly from the Incremental Unit abstract model of dialogue processing when used within a multimodal, co-located, interactive setting. We explain the Incremental Unit model and give an example application using the Localized Narratives dataset, then offer avenues for future research.","Incremental Unit Networks for Multimodal, Fine-grained Information State Representation","We offer a sketch of a fine-grained information state annotation scheme that follows directly from the Incremental Unit abstract model of dialogue processing when used within a multimodal, co-located, interactive setting. We explain the Incremental Unit model and give an example application using the Localized Narratives dataset, then offer avenues for future research.",Acknowledgements We appreciate the feedback from the anonymous reviewers.,"Incremental Unit Networks for Multimodal, Fine-grained Information State Representation. We offer a sketch of a fine-grained information state annotation scheme that follows directly from the Incremental Unit abstract model of dialogue processing when used within a multimodal, co-located, interactive setting. We explain the Incremental Unit model and give an example application using the Localized Narratives dataset, then offer avenues for future research.",2021
grimm-etal-2015-towards,https://aclanthology.org/W15-2405,0,,,,,,,"Towards a Model of Prediction-based Syntactic Category Acquisition: First Steps with Word Embeddings. We present a prototype model, based on a combination of count-based distributional semantics and prediction-based neural word embeddings, which learns about syntactic categories as a function of (1) writing contextual, phonological, and lexical-stress-related information to memory and (2) predicting upcoming context words based on memorized information. The system is a first step towards utilizing recently popular methods from Natural Language Processing for exploring the role of prediction in childrens' acquisition of syntactic categories. 1 Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv:1301.3781 [Computation and Language (cs.CL)]",Towards a Model of Prediction-based Syntactic Category Acquisition: First Steps with Word Embeddings,"We present a prototype model, based on a combination of count-based distributional semantics and prediction-based neural word embeddings, which learns about syntactic categories as a function of (1) writing contextual, phonological, and lexical-stress-related information to memory and (2) predicting upcoming context words based on memorized information. The system is a first step towards utilizing recently popular methods from Natural Language Processing for exploring the role of prediction in childrens' acquisition of syntactic categories. 1 Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv:1301.3781 [Computation and Language (cs.CL)]",Towards a Model of Prediction-based Syntactic Category Acquisition: First Steps with Word Embeddings,"We present a prototype model, based on a combination of count-based distributional semantics and prediction-based neural word embeddings, which learns about syntactic categories as a function of (1) writing contextual, phonological, and lexical-stress-related information to memory and (2) predicting upcoming context words based on memorized information. The system is a first step towards utilizing recently popular methods from Natural Language Processing for exploring the role of prediction in childrens' acquisition of syntactic categories. 1 Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv:1301.3781 [Computation and Language (cs.CL)]",The present research was supported by a BOF/TOP grant (ID 29072) of the Research Council of the University of Antwerp.,"Towards a Model of Prediction-based Syntactic Category Acquisition: First Steps with Word Embeddings. We present a prototype model, based on a combination of count-based distributional semantics and prediction-based neural word embeddings, which learns about syntactic categories as a function of (1) writing contextual, phonological, and lexical-stress-related information to memory and (2) predicting upcoming context words based on memorized information. The system is a first step towards utilizing recently popular methods from Natural Language Processing for exploring the role of prediction in childrens' acquisition of syntactic categories. 1 Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv:1301.3781 [Computation and Language (cs.CL)]",2015
noro-tokuda-2008-ranking,https://aclanthology.org/I08-2092,0,,,,,,,"Ranking Words for Building a Japanese Defining Vocabulary. Defining all words in a Japanese dictionary by using a limited number of words (defining vocabulary) is helpful for Japanese children and second-language learners of Japanese. Although some English dictionaries have their own defining vocabulary, no Japanese dictionary has such vocabulary as of yet. As the first step toward building a Japanese defining vocabulary, we ranked Japanese words based on a graph-based method. In this paper, we introduce the method, and show some evaluation results of applying the method to an existing Japanese dictionary.",Ranking Words for Building a {J}apanese Defining Vocabulary,"Defining all words in a Japanese dictionary by using a limited number of words (defining vocabulary) is helpful for Japanese children and second-language learners of Japanese. Although some English dictionaries have their own defining vocabulary, no Japanese dictionary has such vocabulary as of yet. As the first step toward building a Japanese defining vocabulary, we ranked Japanese words based on a graph-based method. In this paper, we introduce the method, and show some evaluation results of applying the method to an existing Japanese dictionary.",Ranking Words for Building a Japanese Defining Vocabulary,"Defining all words in a Japanese dictionary by using a limited number of words (defining vocabulary) is helpful for Japanese children and second-language learners of Japanese. Although some English dictionaries have their own defining vocabulary, no Japanese dictionary has such vocabulary as of yet. As the first step toward building a Japanese defining vocabulary, we ranked Japanese words based on a graph-based method. In this paper, we introduce the method, and show some evaluation results of applying the method to an existing Japanese dictionary.",,"Ranking Words for Building a Japanese Defining Vocabulary. Defining all words in a Japanese dictionary by using a limited number of words (defining vocabulary) is helpful for Japanese children and second-language learners of Japanese. Although some English dictionaries have their own defining vocabulary, no Japanese dictionary has such vocabulary as of yet. As the first step toward building a Japanese defining vocabulary, we ranked Japanese words based on a graph-based method. In this paper, we introduce the method, and show some evaluation results of applying the method to an existing Japanese dictionary.",2008
kleiweg-van-noord-2020-alpinograph,https://aclanthology.org/2020.tlt-1.13,0,,,,,,,"AlpinoGraph: A Graph-based Search Engine for Flexible and Efficient Treebank Search. AlpinoGraph is a graph-based search engine which provides treebank search using SQL database technology coupled with the Cypher query language for graphs. In the paper, we show that AlpinoGraph is a very powerful and very flexible approach towards treebank search. At the same time, AlpinoGraph is efficient. Currently, AlpinoGraph is applicable for all standard Dutch treebanks. We compare the Cypher queries in AlpinoGraph with the XPath queries used in earlier treebank search applications for the same treebanks. We also present a pre-processing technique which speeds up query processing dramatically in some cases, and is applicable beyond AlpinoGraph.",{A}lpino{G}raph: A Graph-based Search Engine for Flexible and Efficient Treebank Search,"AlpinoGraph is a graph-based search engine which provides treebank search using SQL database technology coupled with the Cypher query language for graphs. In the paper, we show that AlpinoGraph is a very powerful and very flexible approach towards treebank search. At the same time, AlpinoGraph is efficient. Currently, AlpinoGraph is applicable for all standard Dutch treebanks. We compare the Cypher queries in AlpinoGraph with the XPath queries used in earlier treebank search applications for the same treebanks. We also present a pre-processing technique which speeds up query processing dramatically in some cases, and is applicable beyond AlpinoGraph.",AlpinoGraph: A Graph-based Search Engine for Flexible and Efficient Treebank Search,"AlpinoGraph is a graph-based search engine which provides treebank search using SQL database technology coupled with the Cypher query language for graphs. In the paper, we show that AlpinoGraph is a very powerful and very flexible approach towards treebank search. At the same time, AlpinoGraph is efficient. Currently, AlpinoGraph is applicable for all standard Dutch treebanks. We compare the Cypher queries in AlpinoGraph with the XPath queries used in earlier treebank search applications for the same treebanks. We also present a pre-processing technique which speeds up query processing dramatically in some cases, and is applicable beyond AlpinoGraph.",,"AlpinoGraph: A Graph-based Search Engine for Flexible and Efficient Treebank Search. AlpinoGraph is a graph-based search engine which provides treebank search using SQL database technology coupled with the Cypher query language for graphs. In the paper, we show that AlpinoGraph is a very powerful and very flexible approach towards treebank search. At the same time, AlpinoGraph is efficient. Currently, AlpinoGraph is applicable for all standard Dutch treebanks. We compare the Cypher queries in AlpinoGraph with the XPath queries used in earlier treebank search applications for the same treebanks. We also present a pre-processing technique which speeds up query processing dramatically in some cases, and is applicable beyond AlpinoGraph.",2020
geng-etal-2022-improving,https://aclanthology.org/2022.acl-long.20,0,,,,,,,"Improving Personalized Explanation Generation through Visualization. In modern recommender systems, there are usually comments or reviews from users that justify their ratings for different items. Trained on such textual corpus, explainable recommendation models learn to discover user interests and generate personalized explanations. Though able to provide plausible explanations, existing models tend to generate repeated sentences for different items or empty sentences with insufficient details. This begs an interesting question: can we immerse the models in a multimodal environment to gain proper awareness of real-world concepts and alleviate above shortcomings? To this end, we propose a visuallyenhanced approach named METER with the help of visualization generation and text-image matching discrimination: the explainable recommendation model is encouraged to visualize what it refers to while incurring a penalty if the visualization is incongruent with the textual explanation. Experimental results and a manual assessment demonstrate that our approach can improve not only the text quality but also the diversity and explainability of the generated explanations.",Improving Personalized Explanation Generation through Visualization,"In modern recommender systems, there are usually comments or reviews from users that justify their ratings for different items. Trained on such textual corpus, explainable recommendation models learn to discover user interests and generate personalized explanations. Though able to provide plausible explanations, existing models tend to generate repeated sentences for different items or empty sentences with insufficient details. This begs an interesting question: can we immerse the models in a multimodal environment to gain proper awareness of real-world concepts and alleviate above shortcomings? To this end, we propose a visuallyenhanced approach named METER with the help of visualization generation and text-image matching discrimination: the explainable recommendation model is encouraged to visualize what it refers to while incurring a penalty if the visualization is incongruent with the textual explanation. Experimental results and a manual assessment demonstrate that our approach can improve not only the text quality but also the diversity and explainability of the generated explanations.",Improving Personalized Explanation Generation through Visualization,"In modern recommender systems, there are usually comments or reviews from users that justify their ratings for different items. Trained on such textual corpus, explainable recommendation models learn to discover user interests and generate personalized explanations. Though able to provide plausible explanations, existing models tend to generate repeated sentences for different items or empty sentences with insufficient details. This begs an interesting question: can we immerse the models in a multimodal environment to gain proper awareness of real-world concepts and alleviate above shortcomings? To this end, we propose a visuallyenhanced approach named METER with the help of visualization generation and text-image matching discrimination: the explainable recommendation model is encouraged to visualize what it refers to while incurring a penalty if the visualization is incongruent with the textual explanation. 
Experimental results and a manual assessment demonstrate that our approach can improve not only the text quality but also the diversity and explainability of the generated explanations.","We appreciate the valuable feedback and suggestions of the reviewers. This work was supported in part by NSF IIS 1910154, 2007907, and 2046457. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsors.","Improving Personalized Explanation Generation through Visualization. In modern recommender systems, there are usually comments or reviews from users that justify their ratings for different items. Trained on such textual corpus, explainable recommendation models learn to discover user interests and generate personalized explanations. Though able to provide plausible explanations, existing models tend to generate repeated sentences for different items or empty sentences with insufficient details. This begs an interesting question: can we immerse the models in a multimodal environment to gain proper awareness of real-world concepts and alleviate above shortcomings? To this end, we propose a visuallyenhanced approach named METER with the help of visualization generation and text-image matching discrimination: the explainable recommendation model is encouraged to visualize what it refers to while incurring a penalty if the visualization is incongruent with the textual explanation. Experimental results and a manual assessment demonstrate that our approach can improve not only the text quality but also the diversity and explainability of the generated explanations.",2022
s-r-etal-2022-sentiment,https://aclanthology.org/2022.dravidianlangtech-1.29,0,,,,,,,Sentiment Analysis on Code-Switched Dravidian Languages with Kernel Based Extreme Learning Machines. Code-switching refers to the textual or spoken data containing multiple languages. Application of natural language processing (NLP) tasks like sentiment analysis is a harder problem on code-switched languages due to the irregularities in the sentence structuring and ordering. This paper shows the experiment results of building a Kernel based Extreme Learning Machines(ELM) for sentiment analysis for code-switched Dravidian languages with English. Our results show that ELM performs better than traditional machine learning classifiers on various metrics as well as trains faster than deep learning models. We also show that Polynomial kernels perform better than others in the ELM architecture. We were able to achieve a median AUC of 0.79 with a polynomial kernel.,Sentiment Analysis on Code-Switched {D}ravidian Languages with Kernel Based Extreme Learning Machines,Code-switching refers to the textual or spoken data containing multiple languages. Application of natural language processing (NLP) tasks like sentiment analysis is a harder problem on code-switched languages due to the irregularities in the sentence structuring and ordering. This paper shows the experiment results of building a Kernel based Extreme Learning Machines(ELM) for sentiment analysis for code-switched Dravidian languages with English. Our results show that ELM performs better than traditional machine learning classifiers on various metrics as well as trains faster than deep learning models. We also show that Polynomial kernels perform better than others in the ELM architecture. We were able to achieve a median AUC of 0.79 with a polynomial kernel.,Sentiment Analysis on Code-Switched Dravidian Languages with Kernel Based Extreme Learning Machines,Code-switching refers to the textual or spoken data containing multiple languages. Application of natural language processing (NLP) tasks like sentiment analysis is a harder problem on code-switched languages due to the irregularities in the sentence structuring and ordering. This paper shows the experiment results of building a Kernel based Extreme Learning Machines(ELM) for sentiment analysis for code-switched Dravidian languages with English. Our results show that ELM performs better than traditional machine learning classifiers on various metrics as well as trains faster than deep learning models. We also show that Polynomial kernels perform better than others in the ELM architecture. We were able to achieve a median AUC of 0.79 with a polynomial kernel.,,Sentiment Analysis on Code-Switched Dravidian Languages with Kernel Based Extreme Learning Machines. Code-switching refers to the textual or spoken data containing multiple languages. Application of natural language processing (NLP) tasks like sentiment analysis is a harder problem on code-switched languages due to the irregularities in the sentence structuring and ordering. This paper shows the experiment results of building a Kernel based Extreme Learning Machines(ELM) for sentiment analysis for code-switched Dravidian languages with English. Our results show that ELM performs better than traditional machine learning classifiers on various metrics as well as trains faster than deep learning models. We also show that Polynomial kernels perform better than others in the ELM architecture. 
We were able to achieve a median AUC of 0.79 with a polynomial kernel.,2022
beckley-roark-2011-asynchronous,https://aclanthology.org/W11-2305,0,,,,,,,"Asynchronous fixed-grid scanning with dynamic codes. In this paper, we examine several methods for including dynamic, contextually-sensitive binary codes within indirect selection typing methods using a grid with fixed symbol positions. Using Huffman codes derived from a character n-gram model, we investigate both synchronous (fixed latency highlighting) and asynchronous (self-paced using long versus short press) scanning. Additionally, we look at methods that allow for scanning past a target and returning to it versus methods that remove unselected items from consideration. Finally, we investigate a novel method for displaying the binary codes for each symbol to the user, rather than using cell highlighting, as the means for identifying the required input sequence for the target symbol. We demonstrate that dynamic coding methods for fixed position grids can be tailored for very diverse user requirements.",Asynchronous fixed-grid scanning with dynamic codes,"In this paper, we examine several methods for including dynamic, contextually-sensitive binary codes within indirect selection typing methods using a grid with fixed symbol positions. Using Huffman codes derived from a character n-gram model, we investigate both synchronous (fixed latency highlighting) and asynchronous (self-paced using long versus short press) scanning. Additionally, we look at methods that allow for scanning past a target and returning to it versus methods that remove unselected items from consideration. Finally, we investigate a novel method for displaying the binary codes for each symbol to the user, rather than using cell highlighting, as the means for identifying the required input sequence for the target symbol. We demonstrate that dynamic coding methods for fixed position grids can be tailored for very diverse user requirements.",Asynchronous fixed-grid scanning with dynamic codes,"In this paper, we examine several methods for including dynamic, contextually-sensitive binary codes within indirect selection typing methods using a grid with fixed symbol positions. Using Huffman codes derived from a character n-gram model, we investigate both synchronous (fixed latency highlighting) and asynchronous (self-paced using long versus short press) scanning. Additionally, we look at methods that allow for scanning past a target and returning to it versus methods that remove unselected items from consideration. Finally, we investigate a novel method for displaying the binary codes for each symbol to the user, rather than using cell highlighting, as the means for identifying the required input sequence for the target symbol. We demonstrate that dynamic coding methods for fixed position grids can be tailored for very diverse user requirements.",,"Asynchronous fixed-grid scanning with dynamic codes. In this paper, we examine several methods for including dynamic, contextually-sensitive binary codes within indirect selection typing methods using a grid with fixed symbol positions. Using Huffman codes derived from a character n-gram model, we investigate both synchronous (fixed latency highlighting) and asynchronous (self-paced using long versus short press) scanning. Additionally, we look at methods that allow for scanning past a target and returning to it versus methods that remove unselected items from consideration. 
Finally, we investigate a novel method for displaying the binary codes for each symbol to the user, rather than using cell highlighting, as the means for identifying the required input sequence for the target symbol. We demonstrate that dynamic coding methods for fixed position grids can be tailored for very diverse user requirements.",2011
muischnek-muurisep-2017-estonian,https://aclanthology.org/W17-0410,0,,,,,,,"Estonian Copular and Existential Constructions as an UD Annotation Problem. This article is about annotating clauses with nonverbal predication in version 2 of Estonian UD treebank. Three possible annotation schemas are discussed, among which separating existential clauses from copular clauses would be theoretically most sound but would need too much manual labor and could possibly yield inconsistent annotation. Therefore, a solution has been adapted which separates existential clauses consisting only of subject and (copular) verb olema be from all other olema-clauses.",{E}stonian Copular and Existential Constructions as an {UD} Annotation Problem,"This article is about annotating clauses with nonverbal predication in version 2 of Estonian UD treebank. Three possible annotation schemas are discussed, among which separating existential clauses from copular clauses would be theoretically most sound but would need too much manual labor and could possibly yield inconsistent annotation. Therefore, a solution has been adapted which separates existential clauses consisting only of subject and (copular) verb olema be from all other olema-clauses.",Estonian Copular and Existential Constructions as an UD Annotation Problem,"This article is about annotating clauses with nonverbal predication in version 2 of Estonian UD treebank. Three possible annotation schemas are discussed, among which separating existential clauses from copular clauses would be theoretically most sound but would need too much manual labor and could possibly yield inconsistent annotation. Therefore, a solution has been adapted which separates existential clauses consisting only of subject and (copular) verb olema be from all other olema-clauses.","This study was supported by the Estonian Ministry of Education and Research (IUT20-56), and by the European Union through the European Regional Development Fund (Centre of Excellence in Estonian Studies).","Estonian Copular and Existential Constructions as an UD Annotation Problem. This article is about annotating clauses with nonverbal predication in version 2 of Estonian UD treebank. Three possible annotation schemas are discussed, among which separating existential clauses from copular clauses would be theoretically most sound but would need too much manual labor and could possibly yield inconsistent annotation. Therefore, a solution has been adapted which separates existential clauses consisting only of subject and (copular) verb olema be from all other olema-clauses.",2017
wang-etal-2020-pretrain,https://aclanthology.org/2020.acl-main.200,0,,,,,,,"To Pretrain or Not to Pretrain: Examining the Benefits of Pretrainng on Resource Rich Tasks. Pretraining NLP models with variants of Masked Language Model (MLM) objectives has recently led to a significant improvements on many tasks. This paper examines the benefits of pretrained models as a function of the number of training samples used in the downstream task. On several text classification tasks, we show that as the number of training examples grow into the millions, the accuracy gap between finetuning BERT-based model and training vanilla LSTM from scratch narrows to within 1%. Our findings indicate that MLM-based models might reach a diminishing return point as the supervised data size increases significantly.",To Pretrain or Not to Pretrain: Examining the Benefits of Pretrainng on Resource Rich Tasks,"Pretraining NLP models with variants of Masked Language Model (MLM) objectives has recently led to a significant improvements on many tasks. This paper examines the benefits of pretrained models as a function of the number of training samples used in the downstream task. On several text classification tasks, we show that as the number of training examples grow into the millions, the accuracy gap between finetuning BERT-based model and training vanilla LSTM from scratch narrows to within 1%. Our findings indicate that MLM-based models might reach a diminishing return point as the supervised data size increases significantly.",To Pretrain or Not to Pretrain: Examining the Benefits of Pretrainng on Resource Rich Tasks,"Pretraining NLP models with variants of Masked Language Model (MLM) objectives has recently led to a significant improvements on many tasks. This paper examines the benefits of pretrained models as a function of the number of training samples used in the downstream task. On several text classification tasks, we show that as the number of training examples grow into the millions, the accuracy gap between finetuning BERT-based model and training vanilla LSTM from scratch narrows to within 1%. Our findings indicate that MLM-based models might reach a diminishing return point as the supervised data size increases significantly.",,"To Pretrain or Not to Pretrain: Examining the Benefits of Pretrainng on Resource Rich Tasks. Pretraining NLP models with variants of Masked Language Model (MLM) objectives has recently led to a significant improvements on many tasks. This paper examines the benefits of pretrained models as a function of the number of training samples used in the downstream task. On several text classification tasks, we show that as the number of training examples grow into the millions, the accuracy gap between finetuning BERT-based model and training vanilla LSTM from scratch narrows to within 1%. Our findings indicate that MLM-based models might reach a diminishing return point as the supervised data size increases significantly.",2020
navarretta-2000-semantic,https://aclanthology.org/W99-1013,0,,,,,,,"Semantic Clustering of Adjectives and Verbs Based on Syntactic Patterns. In this paper we show that some of the syntactic patterns in an NLP lexicon can be used to identify semantically ""similar"" adjectives and verbs. We define semantic similarity on the basis of parameters used in the literature to classify adjectives and verbs semantically. The semantic clusters obtained from the syntactic encodings in the lexicon are evaluated by comparing them with semantic groups in existing taxonomies. The relation between adjectival syntactic patterns and their meaning is particularly interesting, because it has not been explored in the literature as much as it is the case for the relation between verbal complements and arguments. The identification of semantic groups on the basis of the syntactic encodings in the considered NLP lexicon can also be extended to other word classes and, maybe, to other languages for which the same type of lexicon exists.",Semantic Clustering of Adjectives and Verbs Based on Syntactic Patterns,"In this paper we show that some of the syntactic patterns in an NLP lexicon can be used to identify semantically ""similar"" adjectives and verbs. We define semantic similarity on the basis of parameters used in the literature to classify adjectives and verbs semantically. The semantic clusters obtained from the syntactic encodings in the lexicon are evaluated by comparing them with semantic groups in existing taxonomies. The relation between adjectival syntactic patterns and their meaning is particularly interesting, because it has not been explored in the literature as much as it is the case for the relation between verbal complements and arguments. The identification of semantic groups on the basis of the syntactic encodings in the considered NLP lexicon can also be extended to other word classes and, maybe, to other languages for which the same type of lexicon exists.",Semantic Clustering of Adjectives and Verbs Based on Syntactic Patterns,"In this paper we show that some of the syntactic patterns in an NLP lexicon can be used to identify semantically ""similar"" adjectives and verbs. We define semantic similarity on the basis of parameters used in the literature to classify adjectives and verbs semantically. The semantic clusters obtained from the syntactic encodings in the lexicon are evaluated by comparing them with semantic groups in existing taxonomies. The relation between adjectival syntactic patterns and their meaning is particularly interesting, because it has not been explored in the literature as much as it is the case for the relation between verbal complements and arguments. The identification of semantic groups on the basis of the syntactic encodings in the considered NLP lexicon can also be extended to other word classes and, maybe, to other languages for which the same type of lexicon exists.",,"Semantic Clustering of Adjectives and Verbs Based on Syntactic Patterns. In this paper we show that some of the syntactic patterns in an NLP lexicon can be used to identify semantically ""similar"" adjectives and verbs. We define semantic similarity on the basis of parameters used in the literature to classify adjectives and verbs semantically. The semantic clusters obtained from the syntactic encodings in the lexicon are evaluated by comparing them with semantic groups in existing taxonomies. 
The relation between adjectival syntactic patterns and their meaning is particularly interesting, because it has not been explored in the literature as much as it is the case for the relation between verbal complements and arguments. The identification of semantic groups on the basis of the syntactic encodings in the considered NLP lexicon can also be extended to other word classes and, maybe, to other languages for which the same type of lexicon exists.",2000
wan-etal-2020-self,https://aclanthology.org/2020.emnlp-main.80,0,,,,,,,"Self-Paced Learning for Neural Machine Translation. Recent studies have proven that the training of neural machine translation (NMT) can be facilitated by mimicking the learning process of humans. Nevertheless, achievements of such kind of curriculum learning rely on the quality of artificial schedule drawn up with the handcrafted features, e.g. sentence length or word rarity. We ameliorate this procedure with a more flexible manner by proposing self-paced learning, where NMT model is allowed to 1) automatically quantify the learning confidence over training examples; and 2) flexibly govern its learning via regulating the loss in each iteration step. Experimental results over multiple translation tasks demonstrate that the proposed model yields better performance than strong baselines and those models trained with human-designed curricula on both translation quality and convergence speed. 1",Self-Paced Learning for Neural Machine Translation,"Recent studies have proven that the training of neural machine translation (NMT) can be facilitated by mimicking the learning process of humans. Nevertheless, achievements of such kind of curriculum learning rely on the quality of artificial schedule drawn up with the handcrafted features, e.g. sentence length or word rarity. We ameliorate this procedure with a more flexible manner by proposing self-paced learning, where NMT model is allowed to 1) automatically quantify the learning confidence over training examples; and 2) flexibly govern its learning via regulating the loss in each iteration step. Experimental results over multiple translation tasks demonstrate that the proposed model yields better performance than strong baselines and those models trained with human-designed curricula on both translation quality and convergence speed. 1",Self-Paced Learning for Neural Machine Translation,"Recent studies have proven that the training of neural machine translation (NMT) can be facilitated by mimicking the learning process of humans. Nevertheless, achievements of such kind of curriculum learning rely on the quality of artificial schedule drawn up with the handcrafted features, e.g. sentence length or word rarity. We ameliorate this procedure with a more flexible manner by proposing self-paced learning, where NMT model is allowed to 1) automatically quantify the learning confidence over training examples; and 2) flexibly govern its learning via regulating the loss in each iteration step. Experimental results over multiple translation tasks demonstrate that the proposed model yields better performance than strong baselines and those models trained with human-designed curricula on both translation quality and convergence speed. 1",,"Self-Paced Learning for Neural Machine Translation. Recent studies have proven that the training of neural machine translation (NMT) can be facilitated by mimicking the learning process of humans. Nevertheless, achievements of such kind of curriculum learning rely on the quality of artificial schedule drawn up with the handcrafted features, e.g. sentence length or word rarity. We ameliorate this procedure with a more flexible manner by proposing self-paced learning, where NMT model is allowed to 1) automatically quantify the learning confidence over training examples; and 2) flexibly govern its learning via regulating the loss in each iteration step. 
Experimental results over multiple translation tasks demonstrate that the proposed model yields better performance than strong baselines and those models trained with human-designed curricula on both translation quality and convergence speed. 1",2020
smith-eisner-2006-annealing,https://aclanthology.org/P06-1072,0,,,,,,,"Annealing Structural Bias in Multilingual Weighted Grammar Induction. We first show how a structural locality bias can improve the accuracy of state-of-the-art dependency grammar induction models trained by EM from unannotated examples (Klein and Manning, 2004). Next, by annealing the free parameter that controls this bias, we achieve further improvements. We then describe an alternative kind of structural bias, toward ""broken"" hypotheses consisting of partial structures over segmented sentences, and show a similar pattern of improvement. We relate this approach to contrastive estimation (Smith and Eisner, 2005a), apply the latter to grammar induction in six languages, and show that our new approach improves accuracy by 1-17% (absolute) over CE (and 8-30% over EM), achieving to our knowledge the best results on this task to date. Our method, structural annealing, is a general technique with broad applicability to hidden-structure discovery problems.",Annealing Structural Bias in Multilingual Weighted Grammar Induction,"We first show how a structural locality bias can improve the accuracy of state-of-the-art dependency grammar induction models trained by EM from unannotated examples (Klein and Manning, 2004). Next, by annealing the free parameter that controls this bias, we achieve further improvements. We then describe an alternative kind of structural bias, toward ""broken"" hypotheses consisting of partial structures over segmented sentences, and show a similar pattern of improvement. We relate this approach to contrastive estimation (Smith and Eisner, 2005a), apply the latter to grammar induction in six languages, and show that our new approach improves accuracy by 1-17% (absolute) over CE (and 8-30% over EM), achieving to our knowledge the best results on this task to date. Our method, structural annealing, is a general technique with broad applicability to hidden-structure discovery problems.",Annealing Structural Bias in Multilingual Weighted Grammar Induction,"We first show how a structural locality bias can improve the accuracy of state-of-the-art dependency grammar induction models trained by EM from unannotated examples (Klein and Manning, 2004). Next, by annealing the free parameter that controls this bias, we achieve further improvements. We then describe an alternative kind of structural bias, toward ""broken"" hypotheses consisting of partial structures over segmented sentences, and show a similar pattern of improvement. We relate this approach to contrastive estimation (Smith and Eisner, 2005a), apply the latter to grammar induction in six languages, and show that our new approach improves accuracy by 1-17% (absolute) over CE (and 8-30% over EM), achieving to our knowledge the best results on this task to date. Our method, structural annealing, is a general technique with broad applicability to hidden-structure discovery problems.",,"Annealing Structural Bias in Multilingual Weighted Grammar Induction. We first show how a structural locality bias can improve the accuracy of state-of-the-art dependency grammar induction models trained by EM from unannotated examples (Klein and Manning, 2004). Next, by annealing the free parameter that controls this bias, we achieve further improvements. We then describe an alternative kind of structural bias, toward ""broken"" hypotheses consisting of partial structures over segmented sentences, and show a similar pattern of improvement. 
We relate this approach to contrastive estimation (Smith and Eisner, 2005a), apply the latter to grammar induction in six languages, and show that our new approach improves accuracy by 1-17% (absolute) over CE (and 8-30% over EM), achieving to our knowledge the best results on this task to date. Our method, structural annealing, is a general technique with broad applicability to hidden-structure discovery problems.",2006
shen-etal-2022-parallel,https://aclanthology.org/2022.acl-long.67,0,,,,,,,"Parallel Instance Query Network for Named Entity Recognition. Named entity recognition (NER) is a fundamental task in natural language processing. Recent works treat named entity recognition as a reading comprehension task, constructing typespecific queries manually to extract entities. This paradigm suffers from three issues. First, type-specific queries can only extract one type of entities per inference, which is inefficient. Second, the extraction for different types of entities is isolated, ignoring the dependencies between them. Third, query construction relies on external knowledge and is difficult to apply to realistic scenarios with hundreds of entity types. To deal with them, we propose Parallel Instance Query Network (PIQN), which sets up global and learnable instance queries to extract entities from a sentence in a parallel manner. Each instance query predicts one entity, and by feeding all instance queries simultaneously, we can query all entities in parallel. Instead of being constructed from external knowledge, instance queries can learn their different query semantics during training. For training the model, we treat label assignment as a one-to-many Linear Assignment Problem (LAP) and dynamically assign gold entities to instance queries with minimal assignment cost. Experiments on both nested and flat NER datasets demonstrate that our proposed method outperforms previous state-of-the-art models 1 .",Parallel Instance Query Network for Named Entity Recognition,"Named entity recognition (NER) is a fundamental task in natural language processing. Recent works treat named entity recognition as a reading comprehension task, constructing typespecific queries manually to extract entities. This paradigm suffers from three issues. First, type-specific queries can only extract one type of entities per inference, which is inefficient. Second, the extraction for different types of entities is isolated, ignoring the dependencies between them. Third, query construction relies on external knowledge and is difficult to apply to realistic scenarios with hundreds of entity types. To deal with them, we propose Parallel Instance Query Network (PIQN), which sets up global and learnable instance queries to extract entities from a sentence in a parallel manner. Each instance query predicts one entity, and by feeding all instance queries simultaneously, we can query all entities in parallel. Instead of being constructed from external knowledge, instance queries can learn their different query semantics during training. For training the model, we treat label assignment as a one-to-many Linear Assignment Problem (LAP) and dynamically assign gold entities to instance queries with minimal assignment cost. Experiments on both nested and flat NER datasets demonstrate that our proposed method outperforms previous state-of-the-art models 1 .",Parallel Instance Query Network for Named Entity Recognition,"Named entity recognition (NER) is a fundamental task in natural language processing. Recent works treat named entity recognition as a reading comprehension task, constructing typespecific queries manually to extract entities. This paradigm suffers from three issues. First, type-specific queries can only extract one type of entities per inference, which is inefficient. Second, the extraction for different types of entities is isolated, ignoring the dependencies between them. 
Third, query construction relies on external knowledge and is difficult to apply to realistic scenarios with hundreds of entity types. To deal with them, we propose Parallel Instance Query Network (PIQN), which sets up global and learnable instance queries to extract entities from a sentence in a parallel manner. Each instance query predicts one entity, and by feeding all instance queries simultaneously, we can query all entities in parallel. Instead of being constructed from external knowledge, instance queries can learn their different query semantics during training. For training the model, we treat label assignment as a one-to-many Linear Assignment Problem (LAP) and dynamically assign gold entities to instance queries with minimal assignment cost. Experiments on both nested and flat NER datasets demonstrate that our proposed method outperforms previous state-of-the-art models 1 .",,"Parallel Instance Query Network for Named Entity Recognition. Named entity recognition (NER) is a fundamental task in natural language processing. Recent works treat named entity recognition as a reading comprehension task, constructing typespecific queries manually to extract entities. This paradigm suffers from three issues. First, type-specific queries can only extract one type of entities per inference, which is inefficient. Second, the extraction for different types of entities is isolated, ignoring the dependencies between them. Third, query construction relies on external knowledge and is difficult to apply to realistic scenarios with hundreds of entity types. To deal with them, we propose Parallel Instance Query Network (PIQN), which sets up global and learnable instance queries to extract entities from a sentence in a parallel manner. Each instance query predicts one entity, and by feeding all instance queries simultaneously, we can query all entities in parallel. Instead of being constructed from external knowledge, instance queries can learn their different query semantics during training. For training the model, we treat label assignment as a one-to-many Linear Assignment Problem (LAP) and dynamically assign gold entities to instance queries with minimal assignment cost. Experiments on both nested and flat NER datasets demonstrate that our proposed method outperforms previous state-of-the-art models 1 .",2022
hoenen-2016-wikipedia,https://aclanthology.org/L16-1335,0,,,,,,,"Wikipedia Titles As Noun Tag Predictors. In this paper, we investigate a covert labeling cue, namely the probability that a title (by example of the Wikipedia titles) is a noun. If this probability is very large, any list such as or comparable to the Wikipedia titles can be used as a reliable word-class (or part-of-speech tag) predictor or noun lexicon. This may be especially useful in the case of Low Resource Languages (LRL) where labeled data is lacking and putatively for Natural Language Processing (NLP) tasks such as Word Sense Disambiguation, Sentiment Analysis and Machine Translation. Profitting from the ease of digital publication on the web as opposed to print, LRL speaker communities produce resources such as Wikipedia and Wiktionary, which can be used for an assessment. We provide statistical evidence for a strong noun bias for the Wikipedia titles from 2 corpora (English, Persian) and a dictionary (Japanese) and for a typologically balanced set of 17 languages including LRLs. Additionally, we conduct a small experiment on predicting noun tags for out-of-vocabulary items in part-of-speech tagging for English.",{W}ikipedia Titles As Noun Tag Predictors,"In this paper, we investigate a covert labeling cue, namely the probability that a title (by example of the Wikipedia titles) is a noun. If this probability is very large, any list such as or comparable to the Wikipedia titles can be used as a reliable word-class (or part-of-speech tag) predictor or noun lexicon. This may be especially useful in the case of Low Resource Languages (LRL) where labeled data is lacking and putatively for Natural Language Processing (NLP) tasks such as Word Sense Disambiguation, Sentiment Analysis and Machine Translation. Profitting from the ease of digital publication on the web as opposed to print, LRL speaker communities produce resources such as Wikipedia and Wiktionary, which can be used for an assessment. We provide statistical evidence for a strong noun bias for the Wikipedia titles from 2 corpora (English, Persian) and a dictionary (Japanese) and for a typologically balanced set of 17 languages including LRLs. Additionally, we conduct a small experiment on predicting noun tags for out-of-vocabulary items in part-of-speech tagging for English.",Wikipedia Titles As Noun Tag Predictors,"In this paper, we investigate a covert labeling cue, namely the probability that a title (by example of the Wikipedia titles) is a noun. If this probability is very large, any list such as or comparable to the Wikipedia titles can be used as a reliable word-class (or part-of-speech tag) predictor or noun lexicon. This may be especially useful in the case of Low Resource Languages (LRL) where labeled data is lacking and putatively for Natural Language Processing (NLP) tasks such as Word Sense Disambiguation, Sentiment Analysis and Machine Translation. Profitting from the ease of digital publication on the web as opposed to print, LRL speaker communities produce resources such as Wikipedia and Wiktionary, which can be used for an assessment. We provide statistical evidence for a strong noun bias for the Wikipedia titles from 2 corpora (English, Persian) and a dictionary (Japanese) and for a typologically balanced set of 17 languages including LRLs. 
Additionally, we conduct a small experiment on predicting noun tags for out-of-vocabulary items in part-of-speech tagging for English.","We greatfully acknowledge the support arising from the collaboration of the empirical linguistics department and computer science manifesting in the Centre for the Digital Foundation of Research in the Humanities, Social, and Educational Sciences (CEDIFOR: https://www. cedifor.de/en/cedifor/).","Wikipedia Titles As Noun Tag Predictors. In this paper, we investigate a covert labeling cue, namely the probability that a title (by example of the Wikipedia titles) is a noun. If this probability is very large, any list such as or comparable to the Wikipedia titles can be used as a reliable word-class (or part-of-speech tag) predictor or noun lexicon. This may be especially useful in the case of Low Resource Languages (LRL) where labeled data is lacking and putatively for Natural Language Processing (NLP) tasks such as Word Sense Disambiguation, Sentiment Analysis and Machine Translation. Profitting from the ease of digital publication on the web as opposed to print, LRL speaker communities produce resources such as Wikipedia and Wiktionary, which can be used for an assessment. We provide statistical evidence for a strong noun bias for the Wikipedia titles from 2 corpora (English, Persian) and a dictionary (Japanese) and for a typologically balanced set of 17 languages including LRLs. Additionally, we conduct a small experiment on predicting noun tags for out-of-vocabulary items in part-of-speech tagging for English.",2016
johannessen-etal-2009-nordic,https://aclanthology.org/W09-4612,0,,,,,,,"The Nordic Dialect Corpus--an advanced research tool. The paper describes the first part of the Nordic Dialect Corpus. This is a tool that combines a number of useful features that together makes it a unique and very advanced resource for researchers of many fields of language search. The corpus is web-based and features full audiovisual representation linked to transcripts. 1 Credits The Nordic Dialect Corpus is the result of close collaboration between the partners in the research networks Scandinavian Dialect Syntax and Nordic Centre of Excellence in Microcomparative Syntax. The researchers in the network have contributed in everything from decisions to actual work ranging from methodology to recordings, transcription, and annotation. Some of the corpus (in particular, recordings of informants) has been financed by the national research councils in the individual countries, while the technical development has been financed by the University of Oslo and the Norwegian Research Council, plus the Nordic research funds NOS-HS and NordForsk.",The Nordic Dialect Corpus{--}an advanced research tool,"The paper describes the first part of the Nordic Dialect Corpus. This is a tool that combines a number of useful features that together makes it a unique and very advanced resource for researchers of many fields of language search. The corpus is web-based and features full audiovisual representation linked to transcripts. 1 Credits The Nordic Dialect Corpus is the result of close collaboration between the partners in the research networks Scandinavian Dialect Syntax and Nordic Centre of Excellence in Microcomparative Syntax. The researchers in the network have contributed in everything from decisions to actual work ranging from methodology to recordings, transcription, and annotation. Some of the corpus (in particular, recordings of informants) has been financed by the national research councils in the individual countries, while the technical development has been financed by the University of Oslo and the Norwegian Research Council, plus the Nordic research funds NOS-HS and NordForsk.",The Nordic Dialect Corpus--an advanced research tool,"The paper describes the first part of the Nordic Dialect Corpus. This is a tool that combines a number of useful features that together makes it a unique and very advanced resource for researchers of many fields of language search. The corpus is web-based and features full audiovisual representation linked to transcripts. 1 Credits The Nordic Dialect Corpus is the result of close collaboration between the partners in the research networks Scandinavian Dialect Syntax and Nordic Centre of Excellence in Microcomparative Syntax. The researchers in the network have contributed in everything from decisions to actual work ranging from methodology to recordings, transcription, and annotation. Some of the corpus (in particular, recordings of informants) has been financed by the national research councils in the individual countries, while the technical development has been financed by the University of Oslo and the Norwegian Research Council, plus the Nordic research funds NOS-HS and NordForsk.","In addition to participants in the ScanDiaSyn and NORMS networks, we would like to thank three anonymous NODALIDA-09 reviewers for valuable comments.","The Nordic Dialect Corpus--an advanced research tool. The paper describes the first part of the Nordic Dialect Corpus. 
This is a tool that combines a number of useful features that together makes it a unique and very advanced resource for researchers of many fields of language search. The corpus is web-based and features full audiovisual representation linked to transcripts. 1 Credits The Nordic Dialect Corpus is the result of close collaboration between the partners in the research networks Scandinavian Dialect Syntax and Nordic Centre of Excellence in Microcomparative Syntax. The researchers in the network have contributed in everything from decisions to actual work ranging from methodology to recordings, transcription, and annotation. Some of the corpus (in particular, recordings of informants) has been financed by the national research councils in the individual countries, while the technical development has been financed by the University of Oslo and the Norwegian Research Council, plus the Nordic research funds NOS-HS and NordForsk.",2009
liu-etal-2018-itnlp,https://aclanthology.org/S18-1183,0,,,,,,,"ITNLP-ARC at SemEval-2018 Task 12: Argument Reasoning Comprehension with Attention. Reasoning is a very important topic and has many important applications in the field of natural language processing. Semantic Evaluation (SemEval) 2018 Task 12 ""The Argument Reasoning Comprehension"" committed to research natural language reasoning. In this task, we proposed a novel argument reasoning comprehension system, ITNLP-ARC, which use Neural Networks technology to solve this problem. In our system, the LSTM model is involved to encode both the premise sentences and the warrant sentences. The attention model is used to merge the two premise sentence vectors. Through comparing the similarity between the attention vector and each of the two warrant vectors, we choose the one with higher similarity as our system's final answer.",{ITNLP}-{ARC} at {S}em{E}val-2018 Task 12: Argument Reasoning Comprehension with Attention,"Reasoning is a very important topic and has many important applications in the field of natural language processing. Semantic Evaluation (SemEval) 2018 Task 12 ""The Argument Reasoning Comprehension"" committed to research natural language reasoning. In this task, we proposed a novel argument reasoning comprehension system, ITNLP-ARC, which use Neural Networks technology to solve this problem. In our system, the LSTM model is involved to encode both the premise sentences and the warrant sentences. The attention model is used to merge the two premise sentence vectors. Through comparing the similarity between the attention vector and each of the two warrant vectors, we choose the one with higher similarity as our system's final answer.",ITNLP-ARC at SemEval-2018 Task 12: Argument Reasoning Comprehension with Attention,"Reasoning is a very important topic and has many important applications in the field of natural language processing. Semantic Evaluation (SemEval) 2018 Task 12 ""The Argument Reasoning Comprehension"" committed to research natural language reasoning. In this task, we proposed a novel argument reasoning comprehension system, ITNLP-ARC, which use Neural Networks technology to solve this problem. In our system, the LSTM model is involved to encode both the premise sentences and the warrant sentences. The attention model is used to merge the two premise sentence vectors. Through comparing the similarity between the attention vector and each of the two warrant vectors, we choose the one with higher similarity as our system's final answer.",This work is sponsored by the National High Technology Research and Development Program of China (2015AA015405) and National Natural Science Foundation of China (61572151 and 61602131).,"ITNLP-ARC at SemEval-2018 Task 12: Argument Reasoning Comprehension with Attention. Reasoning is a very important topic and has many important applications in the field of natural language processing. Semantic Evaluation (SemEval) 2018 Task 12 ""The Argument Reasoning Comprehension"" committed to research natural language reasoning. In this task, we proposed a novel argument reasoning comprehension system, ITNLP-ARC, which use Neural Networks technology to solve this problem. In our system, the LSTM model is involved to encode both the premise sentences and the warrant sentences. The attention model is used to merge the two premise sentence vectors. 
By comparing the similarity between the attention vector and each of the two warrant vectors, we choose the one with the higher similarity as our system's final answer.",2018
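The ITNLP-ARC abstract above only sketches the selection step (encode premises and warrants, merge the premise vectors with attention, pick the warrant closest to the merged vector). The following is a minimal illustrative sketch of that step, assuming sentence encoders have already produced fixed-size vectors; the attention form (query taken as the mean warrant vector), the dimensions, and all names are placeholder assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def choose_warrant(premise_vecs, warrant_vecs):
    """Merge the two premise vectors with attention, then pick the warrant
    whose vector is most similar to the merged representation."""
    # Assumption: attention weights come from similarity to the mean warrant
    # vector used as a query (the paper does not specify its attention form).
    query = np.mean(warrant_vecs, axis=0)
    weights = softmax(np.array([cosine(query, p) for p in premise_vecs]))
    merged = sum(w * p for w, p in zip(weights, premise_vecs))
    sims = [cosine(merged, w) for w in warrant_vecs]
    return int(np.argmax(sims)), sims

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    premises = [rng.normal(size=64) for _ in range(2)]  # stand-ins for LSTM sentence encodings
    warrants = [rng.normal(size=64) for _ in range(2)]
    idx, sims = choose_warrant(premises, warrants)
    print(f"chosen warrant: {idx}, similarities: {[round(s, 3) for s in sims]}")
```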
araki-etal-1994-evaluation-detect,https://aclanthology.org/C94-1030,0,,,,,,,"An Evaluation to Detect and Correct Erroneous Characters Wrongly Substituted, Deleted and Inserted in Japanese and English Sentences Using Markov Models. In optical character recognition and continuous speech recognition of a natural language, it has been difficult to detect error characters which are wrongly deleted and inserted. In order to judge three types of the errors, which are characters wrongly substituted, deleted or inserted in a Japanese ""bunsetsu"" and an English word, and to correct these errors, this paper proposes new methods using an m-th order Markov chain model for Japanese ""kanji-kana"" characters and English alphabets, assuming that the Markov probability of a correct chain of syllables or ""kanji-kana"" characters is greater than that of erroneous chains. From the results of the experiments, it is concluded that the methods are useful for detecting as well as correcting these errors in Japanese ""bunsetsu"" and English words.","An Evaluation to Detect and Correct Erroneous Characters Wrongly Substituted, Deleted and Inserted in {J}apanese and {E}nglish Sentences Using {M}arkov Models","In optical character recognition and continuous speech recognition of a natural language, it has been difficult to detect error characters which are wrongly deleted and inserted. In order to judge three types of the errors, which are characters wrongly substituted, deleted or inserted in a Japanese ""bunsetsu"" and an English word, and to correct these errors, this paper proposes new methods using an m-th order Markov chain model for Japanese ""kanji-kana"" characters and English alphabets, assuming that the Markov probability of a correct chain of syllables or ""kanji-kana"" characters is greater than that of erroneous chains. From the results of the experiments, it is concluded that the methods are useful for detecting as well as correcting these errors in Japanese ""bunsetsu"" and English words.","An Evaluation to Detect and Correct Erroneous Characters Wrongly Substituted, Deleted and Inserted in Japanese and English Sentences Using Markov Models","In optical character recognition and continuous speech recognition of a natural language, it has been difficult to detect error characters which are wrongly deleted and inserted. In order to judge three types of the errors, which are characters wrongly substituted, deleted or inserted in a Japanese ""bunsetsu"" and an English word, and to correct these errors, this paper proposes new methods using an m-th order Markov chain model for Japanese ""kanji-kana"" characters and English alphabets, assuming that the Markov probability of a correct chain of syllables or ""kanji-kana"" characters is greater than that of erroneous chains. From the results of the experiments, it is concluded that the methods are useful for detecting as well as correcting these errors in Japanese ""bunsetsu"" and English words.",,"An Evaluation to Detect and Correct Erroneous Characters Wrongly Substituted, Deleted and Inserted in Japanese and English Sentences Using Markov Models. In optical character recognition and continuous speech recognition of a natural language, it has been difficult to detect error characters which are wrongly deleted and inserted. 
In order to judge three types of the errors, which are characters wrongly substituted, deleted or inserted in a Japanese ""bunsetsu"" and an English word, and to correct these errors, this paper proposes new methods using an m-th order Markov chain model for Japanese ""kanji-kana"" characters and English alphabets, assuming that the Markov probability of a correct chain of syllables or ""kanji-kana"" characters is greater than that of erroneous chains. From the results of the experiments, it is concluded that the methods are useful for detecting as well as correcting these errors in Japanese ""bunsetsu"" and English words.",1994
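As a concrete illustration of the kind of scoring the araki-etal-1994 abstract describes (an m-th order character Markov chain whose probability is assumed to drop on erroneous chains), here is a small self-contained sketch. The toy training text, the order m=2, and the add-one smoothing are illustrative choices only, not the paper's actual models or data.

```python
from collections import defaultdict
import math

def train_char_markov(text, m=2):
    """Count character m-gram transitions for P(c | previous m characters)."""
    context_counts = defaultdict(int)
    transition_counts = defaultdict(int)
    padded = "^" * m + text
    for i in range(m, len(padded)):
        ctx, ch = padded[i - m:i], padded[i]
        context_counts[ctx] += 1
        transition_counts[(ctx, ch)] += 1
    return context_counts, transition_counts

def avg_log_prob(text, context_counts, transition_counts, m=2, vocab_size=64):
    """Average log-probability of a string under the character Markov model
    (add-one smoothing so unseen transitions are penalised, not zeroed)."""
    padded = "^" * m + text
    total = 0.0
    for i in range(m, len(padded)):
        ctx, ch = padded[i - m:i], padded[i]
        num = transition_counts[(ctx, ch)] + 1
        den = context_counts[ctx] + vocab_size
        total += math.log(num / den)
    return total / (len(padded) - m)

if __name__ == "__main__":
    corpus = "the quick brown fox jumps over the lazy dog " * 50  # toy training text
    cc, tc = train_char_markov(corpus)
    # The correct string should score higher than strings with substitution errors.
    for candidate in ["the lazy dog", "thw lazy dog", "the lzay dog"]:
        print(candidate, round(avg_log_prob(candidate, cc, tc), 3))
```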
callaway-2008-textcap,https://aclanthology.org/W08-2226,0,,,,,,,"The TextCap Semantic Interpreter. The lack of large amounts of readily available, explicitly represented knowledge has long been recognized as a barrier to applications requiring semantic knowledge such as machine translation and question answering. This problem is analogous to that facing machine translation decades ago, where one proposed solution was to use human translators to post-edit automatically produced, low quality translations rather than expect a computer to independently create high-quality translations. This paper describes an attempt at implementing a semantic parser that takes unrestricted English text, uses publically available computational linguistics tools and lexical resources and as output produces semantic triples which can be used in a variety of tasks such as generating knowledge bases, providing raw material for question answering systems, or creating RDF structures. We describe the TEXTCAP system, detail the semantic triple representation it produces, illustrate step by step how TEXTCAP processes a short text, and use its results on unseen texts to discuss the amount of post-editing that might be realistically required.",The {T}ext{C}ap Semantic Interpreter,"The lack of large amounts of readily available, explicitly represented knowledge has long been recognized as a barrier to applications requiring semantic knowledge such as machine translation and question answering. This problem is analogous to that facing machine translation decades ago, where one proposed solution was to use human translators to post-edit automatically produced, low quality translations rather than expect a computer to independently create high-quality translations. This paper describes an attempt at implementing a semantic parser that takes unrestricted English text, uses publically available computational linguistics tools and lexical resources and as output produces semantic triples which can be used in a variety of tasks such as generating knowledge bases, providing raw material for question answering systems, or creating RDF structures. We describe the TEXTCAP system, detail the semantic triple representation it produces, illustrate step by step how TEXTCAP processes a short text, and use its results on unseen texts to discuss the amount of post-editing that might be realistically required.",The TextCap Semantic Interpreter,"The lack of large amounts of readily available, explicitly represented knowledge has long been recognized as a barrier to applications requiring semantic knowledge such as machine translation and question answering. This problem is analogous to that facing machine translation decades ago, where one proposed solution was to use human translators to post-edit automatically produced, low quality translations rather than expect a computer to independently create high-quality translations. This paper describes an attempt at implementing a semantic parser that takes unrestricted English text, uses publically available computational linguistics tools and lexical resources and as output produces semantic triples which can be used in a variety of tasks such as generating knowledge bases, providing raw material for question answering systems, or creating RDF structures. 
We describe the TEXTCAP system, detail the semantic triple representation it produces, illustrate step by step how TEXTCAP processes a short text, and use its results on unseen texts to discuss the amount of post-editing that might be realistically required.",,"The TextCap Semantic Interpreter. The lack of large amounts of readily available, explicitly represented knowledge has long been recognized as a barrier to applications requiring semantic knowledge such as machine translation and question answering. This problem is analogous to that facing machine translation decades ago, where one proposed solution was to use human translators to post-edit automatically produced, low quality translations rather than expect a computer to independently create high-quality translations. This paper describes an attempt at implementing a semantic parser that takes unrestricted English text, uses publically available computational linguistics tools and lexical resources and as output produces semantic triples which can be used in a variety of tasks such as generating knowledge bases, providing raw material for question answering systems, or creating RDF structures. We describe the TEXTCAP system, detail the semantic triple representation it produces, illustrate step by step how TEXTCAP processes a short text, and use its results on unseen texts to discuss the amount of post-editing that might be realistically required.",2008
xu-etal-2003-training,https://aclanthology.org/W03-1021,0,,,,,,,"Training Connectionist Models for the Structured Language Model. We investigate the performance of the Structured Language Model (SLM) in terms of perplexity (PPL) when its components are modeled by connectionist models. The connectionist models use a distributed representation of the items in the history and make much better use of contexts than currently used interpolated or back-off models, not only because of the inherent capability of the connectionist model in fighting the data sparseness problem, but also because of the sublinear growth in the model size when the context length is increased. The connectionist models can be further trained by an EM procedure, similar to the previously used procedure for training the SLM. Our experiments show that the connectionist models can significantly improve the PPL over the interpolated and back-off models on the UPENN Treebank corpora, after interpolating with a baseline trigram language model. The EM training procedure can improve the connectionist models further, by using hidden events obtained by the SLM parser.",Training Connectionist Models for the {S}tructured {L}anguage {M}odel,"We investigate the performance of the Structured Language Model (SLM) in terms of perplexity (PPL) when its components are modeled by connectionist models. The connectionist models use a distributed representation of the items in the history and make much better use of contexts than currently used interpolated or back-off models, not only because of the inherent capability of the connectionist model in fighting the data sparseness problem, but also because of the sublinear growth in the model size when the context length is increased. The connectionist models can be further trained by an EM procedure, similar to the previously used procedure for training the SLM. Our experiments show that the connectionist models can significantly improve the PPL over the interpolated and back-off models on the UPENN Treebank corpora, after interpolating with a baseline trigram language model. The EM training procedure can improve the connectionist models further, by using hidden events obtained by the SLM parser.",Training Connectionist Models for the Structured Language Model,"We investigate the performance of the Structured Language Model (SLM) in terms of perplexity (PPL) when its components are modeled by connectionist models. The connectionist models use a distributed representation of the items in the history and make much better use of contexts than currently used interpolated or back-off models, not only because of the inherent capability of the connectionist model in fighting the data sparseness problem, but also because of the sublinear growth in the model size when the context length is increased. The connectionist models can be further trained by an EM procedure, similar to the previously used procedure for training the SLM. Our experiments show that the connectionist models can significantly improve the PPL over the interpolated and back-off models on the UPENN Treebank corpora, after interpolating with a baseline trigram language model. The EM training procedure can improve the connectionist models further, by using hidden events obtained by the SLM parser.",,"Training Connectionist Models for the Structured Language Model. We investigate the performance of the Structured Language Model (SLM) in terms of perplexity (PPL) when its components are modeled by connectionist models. 
The connectionist models use a distributed representation of the items in the history and make much better use of contexts than currently used interpolated or back-off models, not only because of the inherent capability of the connectionist model in fighting the data sparseness problem, but also because of the sublinear growth in the model size when the context length is increased. The connectionist models can be further trained by an EM procedure, similar to the previously used procedure for training the SLM. Our experiments show that the connectionist models can significantly improve the PPL over the interpolated and back-off models on the UPENN Treebank corpora, after interpolating with a baseline trigram language model. The EM training procedure can improve the connectionist models further, by using hidden events obtained by the SLM parser.",2003
terragni-etal-2020-matters,https://aclanthology.org/2020.insights-1.5,0,,,,,,,"Which Matters Most? Comparing the Impact of Concept and Document Relationships in Topic Models. Topic models have been widely used to discover hidden topics in a collection of documents. In this paper, we propose to investigate the role of two different types of relational information, i.e. document relationships and concept relationships. While exploiting the document network significantly improves topic coherence, the introduction of concepts and their relationships does not influence the results both quantitatively and qualitatively.",Which Matters Most? Comparing the Impact of Concept and Document Relationships in Topic Models,"Topic models have been widely used to discover hidden topics in a collection of documents. In this paper, we propose to investigate the role of two different types of relational information, i.e. document relationships and concept relationships. While exploiting the document network significantly improves topic coherence, the introduction of concepts and their relationships does not influence the results both quantitatively and qualitatively.",Which Matters Most? Comparing the Impact of Concept and Document Relationships in Topic Models,"Topic models have been widely used to discover hidden topics in a collection of documents. In this paper, we propose to investigate the role of two different types of relational information, i.e. document relationships and concept relationships. While exploiting the document network significantly improves topic coherence, the introduction of concepts and their relationships does not influence the results both quantitatively and qualitatively.",,"Which Matters Most? Comparing the Impact of Concept and Document Relationships in Topic Models. Topic models have been widely used to discover hidden topics in a collection of documents. In this paper, we propose to investigate the role of two different types of relational information, i.e. document relationships and concept relationships. While exploiting the document network significantly improves topic coherence, the introduction of concepts and their relationships does not influence the results both quantitatively and qualitatively.",2020
charniak-1978-spoon-hand,https://aclanthology.org/T78-1027,0,,,,,,,"With a Spoon in Hand This Must Be the Eating Frame. A language comprehension program using ""frames"", ""scripts"", etc. must be able to decide which frames are appropriate to the text. Often there will be explicit indication (""Fred was playing tennis"" suggests the TENNIS frame) but it is not always so easy.(""The woman waved while the man on the stage sawed her in half"" suggests MAGICIAN but how?) This paper will examine how a program might go about determining the appropriate frame in such cases. At a sufficiently vague level the model presented here will resemble that of Minsky (1975) in it's assumption that one usually has available one or more context frames. Hence one only needs worry if information comes in which does not fit them. As opposed to Minsky however the suggestions for new context frames will not come from the old onesi but rather from the conflicting information. The problem them becomes how potential frames are indexed under the information which ""suggests"" them.",With a Spoon in Hand This Must Be the Eating Frame,"A language comprehension program using ""frames"", ""scripts"", etc. must be able to decide which frames are appropriate to the text. Often there will be explicit indication (""Fred was playing tennis"" suggests the TENNIS frame) but it is not always so easy.(""The woman waved while the man on the stage sawed her in half"" suggests MAGICIAN but how?) This paper will examine how a program might go about determining the appropriate frame in such cases. At a sufficiently vague level the model presented here will resemble that of Minsky (1975) in it's assumption that one usually has available one or more context frames. Hence one only needs worry if information comes in which does not fit them. As opposed to Minsky however the suggestions for new context frames will not come from the old onesi but rather from the conflicting information. The problem them becomes how potential frames are indexed under the information which ""suggests"" them.",With a Spoon in Hand This Must Be the Eating Frame,"A language comprehension program using ""frames"", ""scripts"", etc. must be able to decide which frames are appropriate to the text. Often there will be explicit indication (""Fred was playing tennis"" suggests the TENNIS frame) but it is not always so easy.(""The woman waved while the man on the stage sawed her in half"" suggests MAGICIAN but how?) This paper will examine how a program might go about determining the appropriate frame in such cases. At a sufficiently vague level the model presented here will resemble that of Minsky (1975) in it's assumption that one usually has available one or more context frames. Hence one only needs worry if information comes in which does not fit them. As opposed to Minsky however the suggestions for new context frames will not come from the old onesi but rather from the conflicting information. The problem them becomes how potential frames are indexed under the information which ""suggests"" them.",I have benefited from conversations with J.,"With a Spoon in Hand This Must Be the Eating Frame. A language comprehension program using ""frames"", ""scripts"", etc. must be able to decide which frames are appropriate to the text. Often there will be explicit indication (""Fred was playing tennis"" suggests the TENNIS frame) but it is not always so easy.(""The woman waved while the man on the stage sawed her in half"" suggests MAGICIAN but how?) 
This paper will examine how a program might go about determining the appropriate frame in such cases. At a sufficiently vague level the model presented here will resemble that of Minsky (1975) in its assumption that one usually has available one or more context frames. Hence one only needs to worry if information comes in which does not fit them. As opposed to Minsky, however, the suggestions for new context frames will not come from the old ones but rather from the conflicting information. The problem then becomes how potential frames are indexed under the information which ""suggests"" them.",1978
poignant-etal-2016-camomile,https://aclanthology.org/L16-1226,0,,,,,,,"The CAMOMILE Collaborative Annotation Platform for Multi-modal, Multi-lingual and Multi-media Documents. In this paper, we describe the organization and the implementation of the CAMOMILE collaborative annotation framework for multimodal, multimedia, multilingual (3M) data. Given the versatile nature of the analysis which can be performed on 3M data, the structure of the server was kept intentionally simple in order to preserve its genericity, relying on standard Web technologies. Layers of annotations, defined as data associated to a media fragment from the corpus, are stored in a database and can be managed through standard interfaces with authentication. Interfaces tailored specifically to the needed task can then be developed in an agile way, relying on simple but reliable services for the management of the centralized annotations. We then present our implementation of an active learning scenario for person annotation in video, relying on the CAMOMILE server; during a dry run experiment, the manual annotation of 716 speech segments was thus propagated to 3504 labeled tracks. The code of the CAMOMILE framework is distributed in open source.","The {CAMOMILE} Collaborative Annotation Platform for Multi-modal, Multi-lingual and Multi-media Documents","In this paper, we describe the organization and the implementation of the CAMOMILE collaborative annotation framework for multimodal, multimedia, multilingual (3M) data. Given the versatile nature of the analysis which can be performed on 3M data, the structure of the server was kept intentionally simple in order to preserve its genericity, relying on standard Web technologies. Layers of annotations, defined as data associated to a media fragment from the corpus, are stored in a database and can be managed through standard interfaces with authentication. Interfaces tailored specifically to the needed task can then be developed in an agile way, relying on simple but reliable services for the management of the centralized annotations. We then present our implementation of an active learning scenario for person annotation in video, relying on the CAMOMILE server; during a dry run experiment, the manual annotation of 716 speech segments was thus propagated to 3504 labeled tracks. The code of the CAMOMILE framework is distributed in open source.","The CAMOMILE Collaborative Annotation Platform for Multi-modal, Multi-lingual and Multi-media Documents","In this paper, we describe the organization and the implementation of the CAMOMILE collaborative annotation framework for multimodal, multimedia, multilingual (3M) data. Given the versatile nature of the analysis which can be performed on 3M data, the structure of the server was kept intentionally simple in order to preserve its genericity, relying on standard Web technologies. Layers of annotations, defined as data associated to a media fragment from the corpus, are stored in a database and can be managed through standard interfaces with authentication. Interfaces tailored specifically to the needed task can then be developed in an agile way, relying on simple but reliable services for the management of the centralized annotations. We then present our implementation of an active learning scenario for person annotation in video, relying on the CAMOMILE server; during a dry run experiment, the manual annotation of 716 speech segments was thus propagated to 3504 labeled tracks. 
The code of the CAMOMILE framework is distributed in open source.","We thank the members of the CAMOMILE international advisory committee for their time and their precious advices and proposals. This work was done in the context of the CHIST-ERA CAMOMILE project funded by the ANR (Agence Nationale de la Recherche, France) under grant ANR-12-CHRI-0006-01, the FNR (Fonds National de La Recherche, Luxembourg), Tübitak (scientific and technological research council of Turkey) and Mineco (Ministerio de Economía y Competitividad, Spain).","The CAMOMILE Collaborative Annotation Platform for Multi-modal, Multi-lingual and Multi-media Documents. In this paper, we describe the organization and the implementation of the CAMOMILE collaborative annotation framework for multimodal, multimedia, multilingual (3M) data. Given the versatile nature of the analysis which can be performed on 3M data, the structure of the server was kept intentionally simple in order to preserve its genericity, relying on standard Web technologies. Layers of annotations, defined as data associated to a media fragment from the corpus, are stored in a database and can be managed through standard interfaces with authentication. Interfaces tailored specifically to the needed task can then be developed in an agile way, relying on simple but reliable services for the management of the centralized annotations. We then present our implementation of an active learning scenario for person annotation in video, relying on the CAMOMILE server; during a dry run experiment, the manual annotation of 716 speech segments was thus propagated to 3504 labeled tracks. The code of the CAMOMILE framework is distributed in open source.",2016
hwang-lee-2021-semi,https://aclanthology.org/2021.ranlp-1.67,0,,,,,,,"Semi-Supervised Learning Based on Auto-generated Lexicon Using XAI in Sentiment Analysis. In this study, we proposed a novel Lexicon-based pseudo-labeling method utilizing explainable AI(XAI) approach. Existing approach have a fundamental limitation in their robustness because poor classifier leads to inaccurate soft-labeling, and it lead to poor classifier repetitively. Meanwhile, we generate the lexicon consists of sentiment word based on the explainability score. Then we calculate the confidence of unlabeled data with lexicon and add them into labeled dataset for the robust pseudo-labeling approach. Our proposed method has three contributions. First, the proposed methodology automatically generates a lexicon based on XAI and performs independent pseudolabeling, thereby guaranteeing higher performance and robustness compared to the existing one. Second, since lexiconbased pseudo-labeling is performed without re-learning in most of models, time efficiency is considerably increased, and third, the generated high-quality lexicon can be available for sentiment analysis of data from similar domains. The effectiveness and efficiency of our proposed method were verified through quantitative comparison with the existing pseudo-labeling method and qualitative review of the generated lexicon.",Semi-Supervised Learning Based on Auto-generated Lexicon Using {XAI} in Sentiment Analysis,"In this study, we proposed a novel Lexicon-based pseudo-labeling method utilizing explainable AI(XAI) approach. Existing approach have a fundamental limitation in their robustness because poor classifier leads to inaccurate soft-labeling, and it lead to poor classifier repetitively. Meanwhile, we generate the lexicon consists of sentiment word based on the explainability score. Then we calculate the confidence of unlabeled data with lexicon and add them into labeled dataset for the robust pseudo-labeling approach. Our proposed method has three contributions. First, the proposed methodology automatically generates a lexicon based on XAI and performs independent pseudolabeling, thereby guaranteeing higher performance and robustness compared to the existing one. Second, since lexiconbased pseudo-labeling is performed without re-learning in most of models, time efficiency is considerably increased, and third, the generated high-quality lexicon can be available for sentiment analysis of data from similar domains. The effectiveness and efficiency of our proposed method were verified through quantitative comparison with the existing pseudo-labeling method and qualitative review of the generated lexicon.",Semi-Supervised Learning Based on Auto-generated Lexicon Using XAI in Sentiment Analysis,"In this study, we proposed a novel Lexicon-based pseudo-labeling method utilizing explainable AI(XAI) approach. Existing approach have a fundamental limitation in their robustness because poor classifier leads to inaccurate soft-labeling, and it lead to poor classifier repetitively. Meanwhile, we generate the lexicon consists of sentiment word based on the explainability score. Then we calculate the confidence of unlabeled data with lexicon and add them into labeled dataset for the robust pseudo-labeling approach. Our proposed method has three contributions. First, the proposed methodology automatically generates a lexicon based on XAI and performs independent pseudolabeling, thereby guaranteeing higher performance and robustness compared to the existing one. 
Second, since lexicon-based pseudo-labeling is performed without re-training in most models, time efficiency is considerably increased; third, the generated high-quality lexicon can be reused for sentiment analysis of data from similar domains. The effectiveness and efficiency of our proposed method were verified through quantitative comparison with the existing pseudo-labeling method and a qualitative review of the generated lexicon.",,"Semi-Supervised Learning Based on Auto-generated Lexicon Using XAI in Sentiment Analysis. In this study, we propose a novel lexicon-based pseudo-labeling method utilizing an explainable AI (XAI) approach. Existing approaches have a fundamental limitation in their robustness because a poor classifier leads to inaccurate soft labels, which in turn lead to a poor classifier repeatedly. Instead, we generate a lexicon consisting of sentiment words based on the explainability score. Then we calculate the confidence of unlabeled data with the lexicon and add them to the labeled dataset for a robust pseudo-labeling approach. Our proposed method has three contributions. First, the proposed methodology automatically generates a lexicon based on XAI and performs independent pseudo-labeling, thereby guaranteeing higher performance and robustness compared to the existing one. Second, since lexicon-based pseudo-labeling is performed without re-training in most models, time efficiency is considerably increased; third, the generated high-quality lexicon can be reused for sentiment analysis of data from similar domains. The effectiveness and efficiency of our proposed method were verified through quantitative comparison with the existing pseudo-labeling method and a qualitative review of the generated lexicon.",2021
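The hwang-lee-2021 abstract describes its loop only verbally: score words with an XAI method, keep the highest-scoring ones as a sentiment lexicon, and pseudo-label unlabeled texts whose lexicon evidence is confident enough. The sketch below mirrors that loop under strong simplifying assumptions; the per-word scores are passed in directly (standing in for SHAP/LIME-style attributions), and the threshold and scoring rule are illustrative, not the paper's.

```python
def build_lexicon(word_scores, top_k=3):
    """Keep the top_k most positive and most negative words by attribution score."""
    ranked = sorted(word_scores.items(), key=lambda kv: kv[1])
    negative = {w for w, _ in ranked[:top_k]}
    positive = {w for w, _ in ranked[-top_k:]}
    return positive, negative

def pseudo_label(text, positive, negative, threshold=2):
    """Label a text from lexicon hits only if the evidence margin is large enough."""
    tokens = text.lower().split()
    score = sum(t in positive for t in tokens) - sum(t in negative for t in tokens)
    if score >= threshold:
        return "positive", score
    if score <= -threshold:
        return "negative", score
    return None, score  # not confident: leave unlabeled

if __name__ == "__main__":
    # Hypothetical word-level attribution scores from an XAI method.
    word_scores = {"great": 0.9, "love": 0.8, "fine": 0.2,
                   "boring": -0.7, "awful": -0.9, "slow": -0.5}
    pos, neg = build_lexicon(word_scores)
    unlabeled = ["great acting and I love the soundtrack",
                 "awful pacing and a boring slow plot",
                 "it was fine I guess"]
    for text in unlabeled:
        print(pseudo_label(text, pos, neg), "<-", text)
```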
van-halteren-oostdijk-2018-identification,https://aclanthology.org/W18-3923,0,,,,,,,"Identification of Differences between Dutch Language Varieties with the VarDial2018 Dutch-Flemish Subtitle Data. With the goal of discovering differences between Belgian and Netherlandic Dutch, we participated as Team Taurus in the Dutch-Flemish Subtitles task of VarDial2018. We used a rather simple marker-based method, but with a wide range of features, including lexical, lexico-syntactic and syntactic ones, and achieved a second position in the ranking. Inspection of highly distinguishing features did point towards differences between the two language varieties, but because of the nature of the experimental data, we have to treat our observations as very tentative and in need of further investigation.",Identification of Differences between {D}utch Language Varieties with the {V}ar{D}ial2018 {D}utch-{F}lemish Subtitle Data,"With the goal of discovering differences between Belgian and Netherlandic Dutch, we participated as Team Taurus in the Dutch-Flemish Subtitles task of VarDial2018. We used a rather simple marker-based method, but with a wide range of features, including lexical, lexico-syntactic and syntactic ones, and achieved a second position in the ranking. Inspection of highly distinguishing features did point towards differences between the two language varieties, but because of the nature of the experimental data, we have to treat our observations as very tentative and in need of further investigation.",Identification of Differences between Dutch Language Varieties with the VarDial2018 Dutch-Flemish Subtitle Data,"With the goal of discovering differences between Belgian and Netherlandic Dutch, we participated as Team Taurus in the Dutch-Flemish Subtitles task of VarDial2018. We used a rather simple marker-based method, but with a wide range of features, including lexical, lexico-syntactic and syntactic ones, and achieved a second position in the ranking. Inspection of highly distinguishing features did point towards differences between the two language varieties, but because of the nature of the experimental data, we have to treat our observations as very tentative and in need of further investigation.","We thank Erwin Komen and Micha Hulsbosch for preparing a script for the analysis of the text with Frog, Alpino and the surfacing software.","Identification of Differences between Dutch Language Varieties with the VarDial2018 Dutch-Flemish Subtitle Data. With the goal of discovering differences between Belgian and Netherlandic Dutch, we participated as Team Taurus in the Dutch-Flemish Subtitles task of VarDial2018. We used a rather simple marker-based method, but with a wide range of features, including lexical, lexico-syntactic and syntactic ones, and achieved a second position in the ranking. Inspection of highly distinguishing features did point towards differences between the two language varieties, but because of the nature of the experimental data, we have to treat our observations as very tentative and in need of further investigation.",2018
brew-schulte-im-walde-2002-spectral,https://aclanthology.org/W02-1016,0,,,,,,,"Spectral Clustering for German Verbs. We describe and evaluate the application of a spectral clustering technique (Ng et al., 2002) to the unsupervised clustering of German verbs. Our previous work has shown that standard clustering techniques succeed in inducing Levinstyle semantic classes from verb subcategorisation information. But clustering in the very high dimensional spaces that we use is fraught with technical and conceptual difficulties. Spectral clustering performs a dimensionality reduction on the verb frame patterns, and provides a robustness and efficiency that standard clustering methods do not display in direct use. The clustering results are evaluated according to the alignment (Christianini et al., 2002) between the Gram matrix defined by the cluster output and the corresponding matrix defined by a gold standard.",Spectral Clustering for {G}erman Verbs,"We describe and evaluate the application of a spectral clustering technique (Ng et al., 2002) to the unsupervised clustering of German verbs. Our previous work has shown that standard clustering techniques succeed in inducing Levinstyle semantic classes from verb subcategorisation information. But clustering in the very high dimensional spaces that we use is fraught with technical and conceptual difficulties. Spectral clustering performs a dimensionality reduction on the verb frame patterns, and provides a robustness and efficiency that standard clustering methods do not display in direct use. The clustering results are evaluated according to the alignment (Christianini et al., 2002) between the Gram matrix defined by the cluster output and the corresponding matrix defined by a gold standard.",Spectral Clustering for German Verbs,"We describe and evaluate the application of a spectral clustering technique (Ng et al., 2002) to the unsupervised clustering of German verbs. Our previous work has shown that standard clustering techniques succeed in inducing Levinstyle semantic classes from verb subcategorisation information. But clustering in the very high dimensional spaces that we use is fraught with technical and conceptual difficulties. Spectral clustering performs a dimensionality reduction on the verb frame patterns, and provides a robustness and efficiency that standard clustering methods do not display in direct use. The clustering results are evaluated according to the alignment (Christianini et al., 2002) between the Gram matrix defined by the cluster output and the corresponding matrix defined by a gold standard.",,"Spectral Clustering for German Verbs. We describe and evaluate the application of a spectral clustering technique (Ng et al., 2002) to the unsupervised clustering of German verbs. Our previous work has shown that standard clustering techniques succeed in inducing Levinstyle semantic classes from verb subcategorisation information. But clustering in the very high dimensional spaces that we use is fraught with technical and conceptual difficulties. Spectral clustering performs a dimensionality reduction on the verb frame patterns, and provides a robustness and efficiency that standard clustering methods do not display in direct use. The clustering results are evaluated according to the alignment (Christianini et al., 2002) between the Gram matrix defined by the cluster output and the corresponding matrix defined by a gold standard.",2002
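For readers unfamiliar with the spectral clustering recipe of Ng et al. (2002) that the brew-schulte-im-walde-2002 entry builds on, this is a compact sketch of that general algorithm (RBF affinities, symmetrically normalised affinity matrix, top-k eigenvectors, row normalisation, then k-means). The toy data, the RBF bandwidth, and the use of scikit-learn's KMeans are illustrative choices and say nothing about the paper's verb-frame features or evaluation.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(X, k, sigma=1.0):
    """Ng-Jordan-Weiss style spectral clustering on the rows of X."""
    # Affinity matrix with an RBF kernel, zero diagonal.
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.exp(-sq_dists / (2 * sigma ** 2))
    np.fill_diagonal(A, 0.0)
    # Symmetrically normalised affinity D^{-1/2} A D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1) + 1e-12)
    L = (A * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    # Top-k eigenvectors (eigh returns eigenvalues in ascending order).
    _, vecs = np.linalg.eigh(L)
    U = vecs[:, -k:]
    # Row-normalise and run k-means in the embedded space.
    U /= np.linalg.norm(U, axis=1, keepdims=True) + 1e-12
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.3, (20, 5)), rng.normal(3, 0.3, (20, 5))])
    print(spectral_clustering(X, k=2))  # two well-separated toy clusters
```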
lim-liew-2022-english,https://aclanthology.org/2022.acl-srw.16,0,,,,,,,"English-Malay Cross-Lingual Embedding Alignment using Bilingual Lexicon Augmentation. As high-quality Malay language resources are still a scarcity, cross lingual word embeddings make it possible for richer English resources to be leveraged for downstream Malay text classification tasks. This paper focuses on creating an English-Malay cross-lingual word embeddings using embedding alignment by exploiting existing language resources. We augmented the training bilingual lexicons using machine translation with the goal to improve the alignment precision of our cross-lingual word embeddings. We investigated the quality of the current stateof-the-art English-Malay bilingual lexicon and worked on improving its quality using Google Translate. We also examined the effect of Malay word coverage on the quality of cross-lingual word embeddings. Experimental results with a precision up till 28.17% show that the alignment precision of the cross-lingual word embeddings would inevitably degrade after 1-NN but a better seed lexicon and cleaner nearest neighbours can reduce the number of word pairs required to achieve satisfactory performance. As the English and Malay monolingual embeddings are pre-trained on informal language corpora, our proposed English-Malay embeddings alignment approach is also able to map non-standard Malay translations in the English nearest neighbours.",{E}nglish-{M}alay Cross-Lingual Embedding Alignment using Bilingual Lexicon Augmentation,"As high-quality Malay language resources are still a scarcity, cross lingual word embeddings make it possible for richer English resources to be leveraged for downstream Malay text classification tasks. This paper focuses on creating an English-Malay cross-lingual word embeddings using embedding alignment by exploiting existing language resources. We augmented the training bilingual lexicons using machine translation with the goal to improve the alignment precision of our cross-lingual word embeddings. We investigated the quality of the current stateof-the-art English-Malay bilingual lexicon and worked on improving its quality using Google Translate. We also examined the effect of Malay word coverage on the quality of cross-lingual word embeddings. Experimental results with a precision up till 28.17% show that the alignment precision of the cross-lingual word embeddings would inevitably degrade after 1-NN but a better seed lexicon and cleaner nearest neighbours can reduce the number of word pairs required to achieve satisfactory performance. As the English and Malay monolingual embeddings are pre-trained on informal language corpora, our proposed English-Malay embeddings alignment approach is also able to map non-standard Malay translations in the English nearest neighbours.",English-Malay Cross-Lingual Embedding Alignment using Bilingual Lexicon Augmentation,"As high-quality Malay language resources are still a scarcity, cross lingual word embeddings make it possible for richer English resources to be leveraged for downstream Malay text classification tasks. This paper focuses on creating an English-Malay cross-lingual word embeddings using embedding alignment by exploiting existing language resources. We augmented the training bilingual lexicons using machine translation with the goal to improve the alignment precision of our cross-lingual word embeddings. 
We investigated the quality of the current state-of-the-art English-Malay bilingual lexicon and worked on improving its quality using Google Translate. We also examined the effect of Malay word coverage on the quality of cross-lingual word embeddings. Experimental results with a precision of up to 28.17% show that the alignment precision of the cross-lingual word embeddings would inevitably degrade after 1-NN, but a better seed lexicon and cleaner nearest neighbours can reduce the number of word pairs required to achieve satisfactory performance. As the English and Malay monolingual embeddings are pre-trained on informal language corpora, our proposed English-Malay embedding alignment approach is also able to map non-standard Malay translations in the English nearest neighbours.",This study was supported by the Ministry of Higher Education Malaysia for Fundamental Research Grant Scheme with Project Code: FRGS/1/2020/ICT02/USM/02/3.,"English-Malay Cross-Lingual Embedding Alignment using Bilingual Lexicon Augmentation. As high-quality Malay language resources are still scarce, cross-lingual word embeddings make it possible for richer English resources to be leveraged for downstream Malay text classification tasks. This paper focuses on creating English-Malay cross-lingual word embeddings using embedding alignment by exploiting existing language resources. We augmented the training bilingual lexicons using machine translation with the goal of improving the alignment precision of our cross-lingual word embeddings. We investigated the quality of the current state-of-the-art English-Malay bilingual lexicon and worked on improving its quality using Google Translate. We also examined the effect of Malay word coverage on the quality of cross-lingual word embeddings. Experimental results with a precision of up to 28.17% show that the alignment precision of the cross-lingual word embeddings would inevitably degrade after 1-NN, but a better seed lexicon and cleaner nearest neighbours can reduce the number of word pairs required to achieve satisfactory performance. As the English and Malay monolingual embeddings are pre-trained on informal language corpora, our proposed English-Malay embedding alignment approach is also able to map non-standard Malay translations in the English nearest neighbours.",2022
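The lim-liew-2022 abstract describes aligning English and Malay embedding spaces with a bilingual seed lexicon but does not spell out the mapping. A common choice for this kind of supervised alignment is an orthogonal Procrustes solution; the sketch below shows that standard recipe on random toy matrices. Treat it as an assumed, generic alignment method rather than the paper's actual procedure, with the 1-NN lookup at the end only illustrating the precision@1 style of evaluation the abstract mentions.

```python
import numpy as np

def procrustes_align(X_src, Y_tgt):
    """Orthogonal mapping W minimising ||X_src @ W - Y_tgt||_F,
    given row-aligned embeddings for seed-lexicon pairs."""
    U, _, Vt = np.linalg.svd(X_src.T @ Y_tgt)
    return U @ Vt

def translate(word_vec, W, tgt_matrix):
    """Nearest-neighbour (1-NN) lookup of a mapped source vector."""
    mapped = word_vec @ W
    sims = tgt_matrix @ mapped / (
        np.linalg.norm(tgt_matrix, axis=1) * np.linalg.norm(mapped) + 1e-8)
    return int(np.argmax(sims))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n_pairs = 50, 200
    tgt = rng.normal(size=(n_pairs, d))                 # toy "Malay" vectors
    true_W = np.linalg.qr(rng.normal(size=(d, d)))[0]   # hidden rotation
    src = tgt @ true_W.T + 0.01 * rng.normal(size=(n_pairs, d))  # toy "English" vectors
    W = procrustes_align(src, tgt)
    hits = sum(translate(src[i], W, tgt) == i for i in range(n_pairs))
    print(f"precision@1 on the toy lexicon: {hits / n_pairs:.2f}")
```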
lockard-etal-2020-zeroshotceres,https://aclanthology.org/2020.acl-main.721,0,,,,,,,"ZeroShotCeres: Zero-Shot Relation Extraction from Semi-Structured Webpages. In many documents, such as semi-structured webpages, textual semantics are augmented with additional information conveyed using visual elements including layout, font size, and color. Prior work on information extraction from semi-structured websites has required learning an extraction model specific to a given template via either manually labeled or distantly supervised data from that template. In this work, we propose a solution for ""zero-shot"" open-domain relation extraction from webpages with a previously unseen template, including from websites with little overlap with existing sources of knowledge for distant supervision and websites in entirely new subject verticals. Our model uses a graph neural network-based approach to build a rich representation of text fields on a webpage and the relationships between them, enabling generalization to new templates. Experiments show this approach provides a 31% F1 gain over a baseline for zero-shot extraction in a new subject vertical.",{Z}ero{S}hot{C}eres: Zero-Shot Relation Extraction from Semi-Structured Webpages,"In many documents, such as semi-structured webpages, textual semantics are augmented with additional information conveyed using visual elements including layout, font size, and color. Prior work on information extraction from semi-structured websites has required learning an extraction model specific to a given template via either manually labeled or distantly supervised data from that template. In this work, we propose a solution for ""zero-shot"" open-domain relation extraction from webpages with a previously unseen template, including from websites with little overlap with existing sources of knowledge for distant supervision and websites in entirely new subject verticals. Our model uses a graph neural network-based approach to build a rich representation of text fields on a webpage and the relationships between them, enabling generalization to new templates. Experiments show this approach provides a 31% F1 gain over a baseline for zero-shot extraction in a new subject vertical.",ZeroShotCeres: Zero-Shot Relation Extraction from Semi-Structured Webpages,"In many documents, such as semi-structured webpages, textual semantics are augmented with additional information conveyed using visual elements including layout, font size, and color. Prior work on information extraction from semi-structured websites has required learning an extraction model specific to a given template via either manually labeled or distantly supervised data from that template. In this work, we propose a solution for ""zero-shot"" open-domain relation extraction from webpages with a previously unseen template, including from websites with little overlap with existing sources of knowledge for distant supervision and websites in entirely new subject verticals. Our model uses a graph neural network-based approach to build a rich representation of text fields on a webpage and the relationships between them, enabling generalization to new templates. 
Experiments show this approach provides a 31% F1 gain over a baseline for zero-shot extraction in a new subject vertical.","We would like to acknowledge grants from ONR N00014-18-1-2826, DARPA N66001-19-2-403, NSF (IIS1616112, IIS1252835), Allen Distinguished Investigator Award, and Sloan Fellowship.","ZeroShotCeres: Zero-Shot Relation Extraction from Semi-Structured Webpages. In many documents, such as semi-structured webpages, textual semantics are augmented with additional information conveyed using visual elements including layout, font size, and color. Prior work on information extraction from semi-structured websites has required learning an extraction model specific to a given template via either manually labeled or distantly supervised data from that template. In this work, we propose a solution for ""zero-shot"" open-domain relation extraction from webpages with a previously unseen template, including from websites with little overlap with existing sources of knowledge for distant supervision and websites in entirely new subject verticals. Our model uses a graph neural network-based approach to build a rich representation of text fields on a webpage and the relationships between them, enabling generalization to new templates. Experiments show this approach provides a 31% F1 gain over a baseline for zero-shot extraction in a new subject vertical.",2020
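The lockard-etal-2020 abstract says text fields on a webpage are represented with a graph neural network but, being an abstract, omits the update rule. Purely as a generic illustration of what a graph-based representation of text fields and their relationships can look like, here is a single round of normalised graph-convolution message passing over a toy layout graph; the adjacency, feature sizes, and random weights are invented and do not reflect the paper's architecture.

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution layer: ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy layout graph: 4 text fields on a page, edges for spatial adjacency.
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 0, 1],
                  [1, 0, 0, 1],
                  [0, 1, 1, 0]], dtype=float)
    H = rng.normal(size=(4, 8))                     # stand-in text-field features
    W = rng.normal(size=(8, 8)) * 0.1
    print(gcn_layer(H, A, W).shape)                 # contextualised field representations
```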
och-etal-2001-efficient,https://aclanthology.org/W01-1408,0,,,,,,,"An Efficient A* Search Algorithm for Statistical Machine Translation. In this paper, we describe an efficient A* search algorithm for statistical machine translation. In contrast to beam-search or greedy approaches, it is possible to guarantee the avoidance of search errors with A*. We develop various sophisticated admissible and almost admissible heuristic functions. In particular, our newly developed method to perform a multi-pass A* search with an iteratively improved heuristic function allows us to translate even long sentences. We compare the A* search algorithm with a beam-search approach on the Hansards task.","An Efficient {A}* Search Algorithm for Statistical Machine Translation","In this paper, we describe an efficient A* search algorithm for statistical machine translation. In contrast to beam-search or greedy approaches, it is possible to guarantee the avoidance of search errors with A*. We develop various sophisticated admissible and almost admissible heuristic functions. In particular, our newly developed method to perform a multi-pass A* search with an iteratively improved heuristic function allows us to translate even long sentences. We compare the A* search algorithm with a beam-search approach on the Hansards task.","An Efficient A* Search Algorithm for Statistical Machine Translation","In this paper, we describe an efficient A* search algorithm for statistical machine translation. In contrast to beam-search or greedy approaches, it is possible to guarantee the avoidance of search errors with A*. We develop various sophisticated admissible and almost admissible heuristic functions. In particular, our newly developed method to perform a multi-pass A* search with an iteratively improved heuristic function allows us to translate even long sentences. We compare the A* search algorithm with a beam-search approach on the Hansards task.","This paper is based on work supported partly by the VERBMOBIL project (contract number 01 IV 701 T4) by the German Federal Ministry of Education, Science, Research and Technology. In addition, this work was supported by the National Science Foundation under Grant No. IIS-9820687 through the 1999 Workshop on Language Engineering, Center for Language and Speech Processing, Johns Hopkins University.","An Efficient A* Search Algorithm for Statistical Machine Translation. In this paper, we describe an efficient A* search algorithm for statistical machine translation. In contrast to beam-search or greedy approaches, it is possible to guarantee the avoidance of search errors with A*. We develop various sophisticated admissible and almost admissible heuristic functions. In particular, our newly developed method to perform a multi-pass A* search with an iteratively improved heuristic function allows us to translate even long sentences. We compare the A* search algorithm with a beam-search approach on the Hansards task.",2001
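Since the och-etal-2001 entry hinges on A* search with admissible heuristics, a generic A* implementation may help readers who only know beam search. This is the textbook algorithm on an explicit toy graph with a precomputed heuristic table; the SMT-specific search space and the paper's multi-pass, iteratively improved heuristics are not reproduced here.

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Textbook A*: returns (cost, path) or None.
    `neighbors(n)` yields (next_node, edge_cost); `heuristic(n)` must not
    overestimate the remaining cost (admissibility) for the result to be optimal."""
    frontier = [(heuristic(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + heuristic(nxt), g2, nxt, path + [nxt]))
    return None

if __name__ == "__main__":
    graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 2), ("d", 5)],
             "c": [("d", 1)], "d": []}
    h = {"a": 3, "b": 2, "c": 1, "d": 0}   # admissible toy heuristic
    print(a_star("a", "d", lambda n: graph[n], lambda n: h[n]))
```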
nakano-kato-1998-cue,https://aclanthology.org/W98-0317,0,,,,,,,"Cue Phrase Selection in Instruction Dialogue Using Machine Learning. The purpose of this paper is to identify effective factors for selecting discourse organization cue phrases in instruction dialogue that signal changes in discourse structure such as topic shifts and attentional state changes. By using a machine learning technique, a variety of features concerning discourse structure, task structure, and dialogue context are examined in terms of their effectiveness and the best set of learning patterns from the corpus as collocation candidates, in order to compare results to an experiment on Dutch V + PP patterns (Villada, 2004). Statistical processing on this dataset provided interesting observations, briefly explained in the evaluation section. We conclude by providing a summary of further steps required to improve the extraction process. This is not restricted to improvements in the resource preparation for statistical processing, but a proposal to use nonstatistical means as well, thus acquiring an efficient blend of different methods.",A New Approach to the Corpus-based Statistical Investigation of {H}ungarian Multi-word Lexemes,"We apply statistical methods to perform automatic extraction of Hungarian collocations from corpora. Due to the complexity of Hungarian morphology, a complex resource preparation tool chain has been developed. This tool chain implements a reusable and, in principle, language independent framework. In the first part, the paper describes the tool chain itself, then, in the second part, an experiment using this framework. The experiment deals with the extraction of patterns from the corpus as collocation candidates, in order to compare results to an experiment on Dutch V + PP patterns (Villada, 2004). Statistical processing on this dataset provided interesting observations, briefly explained in the evaluation section. We conclude by providing a summary of further steps required to improve the extraction process. This is not restricted to improvements in the resource preparation for statistical processing, but a proposal to use nonstatistical means as well, thus acquiring an efficient blend of different methods.",A New Approach to the Corpus-based Statistical Investigation of Hungarian Multi-word Lexemes,"We apply statistical methods to perform automatic extraction of Hungarian collocations from corpora. Due to the complexity of Hungarian morphology, a complex resource preparation tool chain has been developed. This tool chain implements a reusable and, in principle, language independent framework. In the first part, the paper describes the tool chain itself, then, in the second part, an experiment using this framework. The experiment deals with the extraction of patterns from the corpus as collocation candidates, in order to compare results to an experiment on Dutch V + PP patterns (Villada, 2004). Statistical processing on this dataset provided interesting observations, briefly explained in the evaluation section. We conclude by providing a summary of further steps required to improve the extraction process. 
This is not restricted to improvements in the resource preparation for statistical processing, but a proposal to use nonstatistical means as well, thus acquiring an efficient blend of different methods.",This work has been carried out in parallel with similar work on Dutch corpora by a joint Dutch-Hungarian research group supported by NWO-OTKA under grant number 048.011.040.,"A New Approach to the Corpus-based Statistical Investigation of Hungarian Multi-word Lexemes. We apply statistical methods to perform automatic extraction of Hungarian collocations from corpora. Due to the complexity of Hungarian morphology, a complex resource preparation tool chain has been developed. This tool chain implements a reusable and, in principle, language independent framework. In the first part, the paper describes the tool chain itself, then, in the second part, an experiment using this framework. The experiment deals with the extraction of patterns from the corpus as collocation candidates, in order to compare results to an experiment on Dutch V + PP patterns (Villada, 2004). Statistical processing on this dataset provided interesting observations, briefly explained in the evaluation section. We conclude by providing a summary of further steps required to improve the extraction process. This is not restricted to improvements in the resource preparation for statistical processing, but a proposal to use nonstatistical means as well, thus acquiring an efficient blend of different methods.",2004
babych-etal-2007-dynamic,https://aclanthology.org/2007.tc-1.3,0,,,,,,,A dynamic dictionary for discovering indirect translation equivalents. We present the design and evaluation of a novel software application intended to help translators with rendering problematic expressions from the general lexicon. It does this dynamically by first generalising the problem expression in the source language and then searching for possible translations in a large comparable corpus. These candidate solutions are ranked and presented to the user. The method relies on measures of distributional similarity and on bilingual dictionaries. It outperforms established techniques for extracting translation equivalents from parallel corpora.,A dynamic dictionary for discovering indirect translation equivalents,We present the design and evaluation of a novel software application intended to help translators with rendering problematic expressions from the general lexicon. It does this dynamically by first generalising the problem expression in the source language and then searching for possible translations in a large comparable corpus. These candidate solutions are ranked and presented to the user. The method relies on measures of distributional similarity and on bilingual dictionaries. It outperforms established techniques for extracting translation equivalents from parallel corpora.,A dynamic dictionary for discovering indirect translation equivalents,We present the design and evaluation of a novel software application intended to help translators with rendering problematic expressions from the general lexicon. It does this dynamically by first generalising the problem expression in the source language and then searching for possible translations in a large comparable corpus. These candidate solutions are ranked and presented to the user. The method relies on measures of distributional similarity and on bilingual dictionaries. It outperforms established techniques for extracting translation equivalents from parallel corpora.,We would like to thank the professional translators who kindly participated in our evaluation trials. This work was supported by EPSRC grant EP/C005902/1 and was conducted jointly with Paul Rayson. Olga Moudraya and Scott Piao of Lancaster University InfoLab.,A dynamic dictionary for discovering indirect translation equivalents. We present the design and evaluation of a novel software application intended to help translators with rendering problematic expressions from the general lexicon. It does this dynamically by first generalising the problem expression in the source language and then searching for possible translations in a large comparable corpus. These candidate solutions are ranked and presented to the user. The method relies on measures of distributional similarity and on bilingual dictionaries. It outperforms established techniques for extracting translation equivalents from parallel corpora.,2007
gorrell-etal-2013-finding,https://aclanthology.org/W13-5102,1,,,,health,,,"Finding Negative Symptoms of Schizophrenia in Patient Records. This paper reports the automatic extraction of eleven negative symptoms of schizophrenia from patient medical records. The task offers a range of difficulties depending on the consistency and complexity with which mental health professionals describe each. In order to reduce the cost of system development, rapid prototypes are built with minimal adaptation and configuration of existing software, and additional training data is obtained by annotating automatically extracted symptoms for which the system has low confidence. The system was further improved by the addition of a manually engineered rule based approach. Rule-based and machine learning approaches are combined in various ways to achieve the optimal result for each symptom. Precisions in the range of 0.8 to 0.99 have been obtained.",Finding Negative Symptoms of Schizophrenia in Patient Records,"This paper reports the automatic extraction of eleven negative symptoms of schizophrenia from patient medical records. The task offers a range of difficulties depending on the consistency and complexity with which mental health professionals describe each. In order to reduce the cost of system development, rapid prototypes are built with minimal adaptation and configuration of existing software, and additional training data is obtained by annotating automatically extracted symptoms for which the system has low confidence. The system was further improved by the addition of a manually engineered rule based approach. Rule-based and machine learning approaches are combined in various ways to achieve the optimal result for each symptom. Precisions in the range of 0.8 to 0.99 have been obtained.",Finding Negative Symptoms of Schizophrenia in Patient Records,"This paper reports the automatic extraction of eleven negative symptoms of schizophrenia from patient medical records. The task offers a range of difficulties depending on the consistency and complexity with which mental health professionals describe each. In order to reduce the cost of system development, rapid prototypes are built with minimal adaptation and configuration of existing software, and additional training data is obtained by annotating automatically extracted symptoms for which the system has low confidence. The system was further improved by the addition of a manually engineered rule based approach. Rule-based and machine learning approaches are combined in various ways to achieve the optimal result for each symptom. Precisions in the range of 0.8 to 0.99 have been obtained.",,"Finding Negative Symptoms of Schizophrenia in Patient Records. This paper reports the automatic extraction of eleven negative symptoms of schizophrenia from patient medical records. The task offers a range of difficulties depending on the consistency and complexity with which mental health professionals describe each. In order to reduce the cost of system development, rapid prototypes are built with minimal adaptation and configuration of existing software, and additional training data is obtained by annotating automatically extracted symptoms for which the system has low confidence. The system was further improved by the addition of a manually engineered rule based approach. Rule-based and machine learning approaches are combined in various ways to achieve the optimal result for each symptom. Precisions in the range of 0.8 to 0.99 have been obtained.",2013
akiba-etal-2008-statistical,https://aclanthology.org/I08-2104,0,,,,,,,"Statistical Machine Translation based Passage Retrieval for Cross-Lingual Question Answering. In this paper, we propose a novel approach for Cross-Lingual Question Answering (CLQA). In the proposed method, the statistical machine translation (SMT) is deeply incorporated into the question answering process, instead of using it as the pre-processing of the mono-lingual QA process as in the previous work. The proposed method can be considered as exploiting the SMT-based passage retrieval for CLQA task. We applied our method to the English-to-Japanese CLQA system and evaluated the performance by using NTCIR CLQA 1 and 2 test collections. The result showed that the proposed method outperformed the previous pre-translation approach.",Statistical Machine Translation based Passage Retrieval for Cross-Lingual Question Answering,"In this paper, we propose a novel approach for Cross-Lingual Question Answering (CLQA). In the proposed method, the statistical machine translation (SMT) is deeply incorporated into the question answering process, instead of using it as the pre-processing of the mono-lingual QA process as in the previous work. The proposed method can be considered as exploiting the SMT-based passage retrieval for CLQA task. We applied our method to the English-to-Japanese CLQA system and evaluated the performance by using NTCIR CLQA 1 and 2 test collections. The result showed that the proposed method outperformed the previous pre-translation approach.",Statistical Machine Translation based Passage Retrieval for Cross-Lingual Question Answering,"In this paper, we propose a novel approach for Cross-Lingual Question Answering (CLQA). In the proposed method, the statistical machine translation (SMT) is deeply incorporated into the question answering process, instead of using it as the pre-processing of the mono-lingual QA process as in the previous work. The proposed method can be considered as exploiting the SMT-based passage retrieval for CLQA task. We applied our method to the English-to-Japanese CLQA system and evaluated the performance by using NTCIR CLQA 1 and 2 test collections. The result showed that the proposed method outperformed the previous pre-translation approach.",,"Statistical Machine Translation based Passage Retrieval for Cross-Lingual Question Answering. In this paper, we propose a novel approach for Cross-Lingual Question Answering (CLQA). In the proposed method, the statistical machine translation (SMT) is deeply incorporated into the question answering process, instead of using it as the pre-processing of the mono-lingual QA process as in the previous work. The proposed method can be considered as exploiting the SMT-based passage retrieval for CLQA task. We applied our method to the English-to-Japanese CLQA system and evaluated the performance by using NTCIR CLQA 1 and 2 test collections. The result showed that the proposed method outperformed the previous pre-translation approach.",2008
pasca-harabagiu-2001-answer,https://aclanthology.org/W01-1206,0,,,,,,,"Answer Mining from On-Line Documents. Mining the answer of a natural language open-domain question in a large collection of on-line documents is made possible by the recognition of the expected answer type in relevant text passages. If the technology of retrieving texts where the answer might be found is well developed, few studies have been devoted to the recognition of the answer type. This paper presents a unified model of answer types for open-domain Question/Answering that enables the discovery of exact answers. The evaluation of the model, performed on real-world questions, considers both the correctness and the coverage of the answer types as well as their contribution to answer precision.",Answer Mining from On-Line Documents,"Mining the answer of a natural language open-domain question in a large collection of on-line documents is made possible by the recognition of the expected answer type in relevant text passages. If the technology of retrieving texts where the answer might be found is well developed, few studies have been devoted to the recognition of the answer type. This paper presents a unified model of answer types for open-domain Question/Answering that enables the discovery of exact answers. The evaluation of the model, performed on real-world questions, considers both the correctness and the coverage of the answer types as well as their contribution to answer precision.",Answer Mining from On-Line Documents,"Mining the answer of a natural language open-domain question in a large collection of on-line documents is made possible by the recognition of the expected answer type in relevant text passages. If the technology of retrieving texts where the answer might be found is well developed, few studies have been devoted to the recognition of the answer type. This paper presents a unified model of answer types for open-domain Question/Answering that enables the discovery of exact answers. The evaluation of the model, performed on real-world questions, considers both the correctness and the coverage of the answer types as well as their contribution to answer precision.",This research was supported in part by the Advanced Research and Development Activity (ARDA) grant 2001*H238400*000 and by the National Science Foundation CAREER grant CCR-9983600.,"Answer Mining from On-Line Documents. Mining the answer of a natural language open-domain question in a large collection of on-line documents is made possible by the recognition of the expected answer type in relevant text passages. If the technology of retrieving texts where the answer might be found is well developed, few studies have been devoted to the recognition of the answer type. This paper presents a unified model of answer types for open-domain Question/Answering that enables the discovery of exact answers. The evaluation of the model, performed on real-world questions, considers both the correctness and the coverage of the answer types as well as their contribution to answer precision.",2001
chang-etal-1992-statistical,https://aclanthology.org/C92-3139,0,,,,,,,"A Statistical Approach to Machine Aided Translation of Terminology Banks. This paper reports on a new statistical approach to machine aided translation of a terminology bank. The text in the bank is hyphenated and then dissected into roots of 1 to 3 syllables. Both hyphenation and dissection are done with a set of initial probabilities of syllables and roots. The probabilities are repeatedly revised using an EM algorithm. After each iteration of hyphenation or dissection, the resulting syllables and roots are counted to yield a more precise estimation of probability. The set of roots rapidly converges to a set of most likely roots. Preliminary experiments have shown promising results. From a terminology bank of more than 4,000 terms, the algorithm extracts 223 general and chemical roots, of which 91% are actually roots. The algorithm dissects a word into roots with around an 86% hit rate. The set of roots and their hand-translations are then used in a compositional translation of the terminology bank. One can expect the translation of a terminology bank using this approach to be more cost-effective, consistent, and with better closure.",A Statistical Approach to Machine Aided Translation of Terminology {B}anks,"This paper reports on a new statistical approach to machine aided translation of a terminology bank. The text in the bank is hyphenated and then dissected into roots of 1 to 3 syllables. Both hyphenation and dissection are done with a set of initial probabilities of syllables and roots. The probabilities are repeatedly revised using an EM algorithm. After each iteration of hyphenation or dissection, the resulting syllables and roots are counted to yield a more precise estimation of probability. The set of roots rapidly converges to a set of most likely roots. Preliminary experiments have shown promising results. From a terminology bank of more than 4,000 terms, the algorithm extracts 223 general and chemical roots, of which 91% are actually roots. The algorithm dissects a word into roots with around an 86% hit rate. The set of roots and their hand-translations are then used in a compositional translation of the terminology bank. One can expect the translation of a terminology bank using this approach to be more cost-effective, consistent, and with better closure.",A Statistical Approach to Machine Aided Translation of Terminology Banks,"This paper reports on a new statistical approach to machine aided translation of a terminology bank. The text in the bank is hyphenated and then dissected into roots of 1 to 3 syllables. Both hyphenation and dissection are done with a set of initial probabilities of syllables and roots. The probabilities are repeatedly revised using an EM algorithm. After each iteration of hyphenation or dissection, the resulting syllables and roots are counted to yield a more precise estimation of probability. The set of roots rapidly converges to a set of most likely roots. Preliminary experiments have shown promising results. From a terminology bank of more than 4,000 terms, the algorithm extracts 223 general and chemical roots, of which 91% are actually roots. The algorithm dissects a word into roots with around an 86% hit rate. The set of roots and their hand-translations are then used in a compositional translation of the terminology bank. 
One can expect the translation of a terminology bank using this approach to be more cost-effective, consistent, and with better closure.","This research was supported by the National Science Council, Taiwan, under Contracts NSC 81-0408-E007-13 and -529.","A Statistical Approach to Machine Aided Translation of Terminology Banks. This paper reports on a new statistical approach to machine aided translation of a terminology bank. The text in the bank is hyphenated and then dissected into roots of 1 to 3 syllables. Both hyphenation and dissection are done with a set of initial probabilities of syllables and roots. The probabilities are repeatedly revised using an EM algorithm. After each iteration of hyphenation or dissection, the resulting syllables and roots are counted to yield a more precise estimation of probability. The set of roots rapidly converges to a set of most likely roots. Preliminary experiments have shown promising results. From a terminology bank of more than 4,000 terms, the algorithm extracts 223 general and chemical roots, of which 91% are actually roots. The algorithm dissects a word into roots with around an 86% hit rate. The set of roots and their hand-translations are then used in a compositional translation of the terminology bank. One can expect the translation of a terminology bank using this approach to be more cost-effective, consistent, and with better closure.",1992
handler-oconnor-2019-query,https://aclanthology.org/D19-1612,0,,,,,,,"Query-focused Sentence Compression in Linear Time. Search applications often display shortened sentences which must contain certain query terms and must fit within the space constraints of a user interface. This work introduces a new transition-based sentence compression technique developed for such settings. Our query-focused method constructs length and lexically constrained compressions in linear time, by growing a subgraph in the dependency parse of a sentence. This theoretically efficient approach achieves an 11x empirical speedup over baseline ILP methods, while better reconstructing gold constrained shortenings. Such speedups help query-focused applications, because users are measurably hindered by interface lags. Additionally, our technique does not require an ILP solver or a GPU.",Query-focused Sentence Compression in Linear Time,"Search applications often display shortened sentences which must contain certain query terms and must fit within the space constraints of a user interface. This work introduces a new transition-based sentence compression technique developed for such settings. Our query-focused method constructs length and lexically constrained compressions in linear time, by growing a subgraph in the dependency parse of a sentence. This theoretically efficient approach achieves an 11x empirical speedup over baseline ILP methods, while better reconstructing gold constrained shortenings. Such speedups help query-focused applications, because users are measurably hindered by interface lags. Additionally, our technique does not require an ILP solver or a GPU.",Query-focused Sentence Compression in Linear Time,"Search applications often display shortened sentences which must contain certain query terms and must fit within the space constraints of a user interface. This work introduces a new transition-based sentence compression technique developed for such settings. Our query-focused method constructs length and lexically constrained compressions in linear time, by growing a subgraph in the dependency parse of a sentence. This theoretically efficient approach achieves an 11x empirical speedup over baseline ILP methods, while better reconstructing gold constrained shortenings. Such speedups help query-focused applications, because users are measurably hindered by interface lags. Additionally, our technique does not require an ILP solver or a GPU.","Thanks to Javier Burroni and Nick Eubank for suggesting ways to optimize and measure performance of Python code. Thanks to Jeffrey Flanigan, Katie Keith and the UMass NLP reading group for feedback. This work was partially supported by IIS-1814955.","Query-focused Sentence Compression in Linear Time. Search applications often display shortened sentences which must contain certain query terms and must fit within the space constraints of a user interface. This work introduces a new transition-based sentence compression technique developed for such settings. Our query-focused method constructs length and lexically constrained compressions in linear time, by growing a subgraph in the dependency parse of a sentence. This theoretically efficient approach achieves an 11x empirical speedup over baseline ILP methods, while better reconstructing gold constrained shortenings. Such speedups help query-focused applications, because users are measurably hindered by interface lags. Additionally, our technique does not require an ILP solver or a GPU.",2019
viegas-etal-1998-computational-lexical,https://aclanthology.org/P98-2216,0,,,,,,,"The Computational Lexical Semantics of Syntagmatic Expressions. In this paper, we address the issue of syntagmatic expressions from a computational lexical semantic perspective. From a representational viewpoint, we argue for a hybrid approach combining linguistic and conceptual paradigms, in order to account for the continuum we find in natural languages from free combining words to frozen expressions. In particular, we focus on the place of lexical and semantic restricted co-occurrences. From a processing viewpoint, we show how to generate/analyze syntagmatic expressions by using an efficient constraint-based processor, well fitted for a knowledge-driven approach.",The Computational Lexical Semantics of Syntagmatic Expressions,"In this paper, we address the issue of syntagmatic expressions from a computational lexical semantic perspective. From a representational viewpoint, we argue for a hybrid approach combining linguistic and conceptual paradigms, in order to account for the continuum we find in natural languages from free combining words to frozen expressions. In particular, we focus on the place of lexical and semantic restricted co-occurrences. From a processing viewpoint, we show how to generate/analyze syntagmatic expressions by using an efficient constraint-based processor, well fitted for a knowledge-driven approach.",The Computational Lexical Semantics of Syntagmatic Expressions,"In this paper, we address the issue of syntagmatic expressions from a computational lexical semantic perspective. From a representational viewpoint, we argue for a hybrid approach combining linguistic and conceptual paradigms, in order to account for the continuum we find in natural languages from free combining words to frozen expressions. In particular, we focus on the place of lexical and semantic restricted co-occurrences. From a processing viewpoint, we show how to generate/analyze syntagmatic expressions by using an efficient constraint-based processor, well fitted for a knowledge-driven approach.","This work has been supported in part by DoD under contract number MDA-904-92-C-5189. We would like to thank Pierrette Bouillon, Léo Wanner and Rémi Zajac for helpful discussions and the anonymous reviewers for their useful comments.","The Computational Lexical Semantics of Syntagmatic Expressions. In this paper, we address the issue of syntagmatic expressions from a computational lexical semantic perspective. From a representational viewpoint, we argue for a hybrid approach combining linguistic and conceptual paradigms, in order to account for the continuum we find in natural languages from free combining words to frozen expressions. In particular, we focus on the place of lexical and semantic restricted co-occurrences. From a processing viewpoint, we show how to generate/analyze syntagmatic expressions by using an efficient constraint-based processor, well fitted for a knowledge-driven approach.",1998
tanveer-ture-2018-syntaviz,https://aclanthology.org/D18-2001,0,,,,,,,"SyntaViz: Visualizing Voice Queries through a Syntax-Driven Hierarchical Ontology. This paper describes SYNTAVIZ, a visualization interface specifically designed for analyzing natural-language queries that were created by users of a voice-enabled product. SYNTAVIZ provides a platform for browsing the ontology of user queries from a syntax-driven perspective, providing quick access to high-impact failure points of the existing intent understanding system and evidence for data-driven decisions in the development cycle. A case study on Xfinity X1 (a voice-enabled entertainment platform from Comcast) reveals that SYNTAVIZ helps developers identify multiple action items in a short amount of time without any special training. SYNTAVIZ has been open-sourced for the benefit of the community.",{S}ynta{V}iz: Visualizing Voice Queries through a Syntax-Driven Hierarchical Ontology,"This paper describes SYNTAVIZ, a visualization interface specifically designed for analyzing natural-language queries that were created by users of a voice-enabled product. SYNTAVIZ provides a platform for browsing the ontology of user queries from a syntax-driven perspective, providing quick access to high-impact failure points of the existing intent understanding system and evidence for data-driven decisions in the development cycle. A case study on Xfinity X1 (a voice-enabled entertainment platform from Comcast) reveals that SYNTAVIZ helps developers identify multiple action items in a short amount of time without any special training. SYNTAVIZ has been open-sourced for the benefit of the community.",SyntaViz: Visualizing Voice Queries through a Syntax-Driven Hierarchical Ontology,"This paper describes SYNTAVIZ, a visualization interface specifically designed for analyzing natural-language queries that were created by users of a voice-enabled product. SYNTAVIZ provides a platform for browsing the ontology of user queries from a syntax-driven perspective, providing quick access to high-impact failure points of the existing intent understanding system and evidence for data-driven decisions in the development cycle. A case study on Xfinity X1 (a voice-enabled entertainment platform from Comcast) reveals that SYNTAVIZ helps developers identify multiple action items in a short amount of time without any special training. SYNTAVIZ has been open-sourced for the benefit of the community.",,"SyntaViz: Visualizing Voice Queries through a Syntax-Driven Hierarchical Ontology. This paper describes SYNTAVIZ, a visualization interface specifically designed for analyzing natural-language queries that were created by users of a voice-enabled product. SYNTAVIZ provides a platform for browsing the ontology of user queries from a syntax-driven perspective, providing quick access to high-impact failure points of the existing intent understanding system and evidence for data-driven decisions in the development cycle. A case study on Xfinity X1 (a voice-enabled entertainment platform from Comcast) reveals that SYNTAVIZ helps developers identify multiple action items in a short amount of time without any special training. SYNTAVIZ has been open-sourced for the benefit of the community.",2018
pajas-stepanek-2009-system,https://aclanthology.org/P09-4009,0,,,,,,,"System for Querying Syntactically Annotated Corpora. This paper presents a system for querying treebanks. The system consists of a powerful query language with natural support for cross-layer queries, a client interface with a graphical query builder and visualizer of the results, a command-line client interface, and two substitutable query engines: a very efficient engine using a relational database (suitable for large static data), and a slower, but parallel-computing enabled, engine operating on treebank files (suitable for ""live"" data).",System for Querying Syntactically Annotated Corpora,"This paper presents a system for querying treebanks. The system consists of a powerful query language with natural support for cross-layer queries, a client interface with a graphical query builder and visualizer of the results, a command-line client interface, and two substitutable query engines: a very efficient engine using a relational database (suitable for large static data), and a slower, but parallel-computing enabled, engine operating on treebank files (suitable for ""live"" data).",System for Querying Syntactically Annotated Corpora,"This paper presents a system for querying treebanks. The system consists of a powerful query language with natural support for cross-layer queries, a client interface with a graphical query builder and visualizer of the results, a command-line client interface, and two substitutable query engines: a very efficient engine using a relational database (suitable for large static data), and a slower, but parallel-computing enabled, engine operating on treebank files (suitable for ""live"" data).",This paper as well as the development of the system is supported by the grant Information Society of GA AVČR under contract 1ET101120503 and by the grant GAUK No. 22908.,"System for Querying Syntactically Annotated Corpora. This paper presents a system for querying treebanks. The system consists of a powerful query language with natural support for cross-layer queries, a client interface with a graphical query builder and visualizer of the results, a command-line client interface, and two substitutable query engines: a very efficient engine using a relational database (suitable for large static data), and a slower, but parallel-computing enabled, engine operating on treebank files (suitable for ""live"" data).",2009
shwartz-etal-2017-hypernyms,https://aclanthology.org/E17-1007,0,,,,,,,"Hypernyms under Siege: Linguistically-motivated Artillery for Hypernymy Detection. The fundamental role of hypernymy in NLP has motivated the development of many methods for the automatic identification of this relation, most of which rely on word distribution. We investigate an extensive number of such unsupervised measures, using several distributional semantic models that differ by context type and feature weighting. We analyze the performance of the different methods based on their linguistic motivation. Comparison to the state-of-the-art supervised methods shows that while supervised methods generally outperform the unsupervised ones, the former are sensitive to the distribution of training instances, hurting their reliability. Being based on general linguistic hypotheses and independent from training data, unsupervised measures are more robust, and therefore are still useful artillery for hypernymy detection.",Hypernyms under Siege: Linguistically-motivated Artillery for Hypernymy Detection,"The fundamental role of hypernymy in NLP has motivated the development of many methods for the automatic identification of this relation, most of which rely on word distribution. We investigate an extensive number of such unsupervised measures, using several distributional semantic models that differ by context type and feature weighting. We analyze the performance of the different methods based on their linguistic motivation. Comparison to the state-of-the-art supervised methods shows that while supervised methods generally outperform the unsupervised ones, the former are sensitive to the distribution of training instances, hurting their reliability. Being based on general linguistic hypotheses and independent from training data, unsupervised measures are more robust, and therefore are still useful artillery for hypernymy detection.",Hypernyms under Siege: Linguistically-motivated Artillery for Hypernymy Detection,"The fundamental role of hypernymy in NLP has motivated the development of many methods for the automatic identification of this relation, most of which rely on word distribution. We investigate an extensive number of such unsupervised measures, using several distributional semantic models that differ by context type and feature weighting. We analyze the performance of the different methods based on their linguistic motivation. Comparison to the state-of-the-art supervised methods shows that while supervised methods generally outperform the unsupervised ones, the former are sensitive to the distribution of training instances, hurting their reliability. Being based on general linguistic hypotheses and independent from training data, unsupervised measures are more robust, and therefore are still useful artillery for hypernymy detection.","The authors would like to thank Ido Dagan, Alessandro Lenci, and Yuji Matsumoto for their help and advice. Vered Shwartz is partially supported by an Intel ICRI-CI grant, the Israel Science Foundation grant 880/12, and the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1). Enrico Santus is partially supported by HK PhD Fellowship Scheme under PF12-13656.","Hypernyms under Siege: Linguistically-motivated Artillery for Hypernymy Detection. The fundamental role of hypernymy in NLP has motivated the development of many methods for the automatic identification of this relation, most of which rely on word distribution. 
We investigate an extensive number of such unsupervised measures, using several distributional semantic models that differ by context type and feature weighting. We analyze the performance of the different methods based on their linguistic motivation. Comparison to the state-of-the-art supervised methods shows that while supervised methods generally outperform the unsupervised ones, the former are sensitive to the distribution of training instances, hurting their reliability. Being based on general linguistic hypotheses and independent from training data, unsupervised measures are more robust, and therefore are still useful artillery for hypernymy detection.",2017
mendes-etal-2010-named,http://www.lrec-conf.org/proceedings/lrec2010/pdf/97_Paper.pdf,0,,,,,,,"Named Entity Recognition in Questions: Towards a Golden Collection. Named Entity Recognition (NER) plays a relevant role in several Natural Language Processing tasks. Question-Answering (QA) is an example of such, since answers are frequently named entities in agreement with the semantic category expected by a given question. In this context, the recognition of named entities is usually applied in free text data. NER in natural language questions can also aid QA and, thus, should not be disregarded. Nevertheless, it has not yet been given the necessary importance. In this paper, we approach the identification and classification of named entities in natural language questions. We hypothesize that NER results can benefit with the inclusion of previously labeled questions in the training corpus. We present a broad study addressing that hypothesis and focusing, among others, on the balance to be achieved between the amount of free text and questions in order to build a suitable training corpus. This work also contributes by providing a set of nearly 5,500 annotated questions with their named entities, freely available for research purposes.",Named Entity Recognition in Questions: Towards a Golden Collection,"Named Entity Recognition (NER) plays a relevant role in several Natural Language Processing tasks. Question-Answering (QA) is an example of such, since answers are frequently named entities in agreement with the semantic category expected by a given question. In this context, the recognition of named entities is usually applied in free text data. NER in natural language questions can also aid QA and, thus, should not be disregarded. Nevertheless, it has not yet been given the necessary importance. In this paper, we approach the identification and classification of named entities in natural language questions. We hypothesize that NER results can benefit with the inclusion of previously labeled questions in the training corpus. We present a broad study addressing that hypothesis and focusing, among others, on the balance to be achieved between the amount of free text and questions in order to build a suitable training corpus. This work also contributes by providing a set of nearly 5,500 annotated questions with their named entities, freely available for research purposes.",Named Entity Recognition in Questions: Towards a Golden Collection,"Named Entity Recognition (NER) plays a relevant role in several Natural Language Processing tasks. Question-Answering (QA) is an example of such, since answers are frequently named entities in agreement with the semantic category expected by a given question. In this context, the recognition of named entities is usually applied in free text data. NER in natural language questions can also aid QA and, thus, should not be disregarded. Nevertheless, it has not yet been given the necessary importance. In this paper, we approach the identification and classification of named entities in natural language questions. We hypothesize that NER results can benefit with the inclusion of previously labeled questions in the training corpus. We present a broad study addressing that hypothesis and focusing, among others, on the balance to be achieved between the amount of free text and questions in order to build a suitable training corpus. 
This work also contributes by providing a set of nearly 5,500 annotated questions with their named entities, freely available for research purposes.",,"Named Entity Recognition in Questions: Towards a Golden Collection. Named Entity Recognition (NER) plays a relevant role in several Natural Language Processing tasks. Question-Answering (QA) is an example of such, since answers are frequently named entities in agreement with the semantic category expected by a given question. In this context, the recognition of named entities is usually applied in free text data. NER in natural language questions can also aid QA and, thus, should not be disregarded. Nevertheless, it has not yet been given the necessary importance. In this paper, we approach the identification and classification of named entities in natural language questions. We hypothesize that NER results can benefit with the inclusion of previously labeled questions in the training corpus. We present a broad study addressing that hypothesis and focusing, among others, on the balance to be achieved between the amount of free text and questions in order to build a suitable training corpus. This work also contributes by providing a set of nearly 5,500 annotated questions with their named entities, freely available for research purposes.",2010
li-etal-2010-transferring,https://aclanthology.org/2010.amta-papers.26,0,,,,,,,"Transferring Syntactic Relations of Subject-Verb-Object Pattern in Chinese-to-Korean SMT. Since most Korean postpositions signal grammatical functions such as syntactic relations, generation of incorrect Korean postpositions results in producing ungrammatical outputs in machine translations targeting Korean. Chinese and Korean belong to morphosyntactically divergent language pairs, and usually Korean postpositions do not have their counterparts in Chinese. In this paper, we propose a preprocessing method for a statistical MT system that generates more adequate Korean postpositions. We transfer syntactic relations of subject-verb-object patterns in Chinese sentences and enrich them with transferred syntactic relations in order to reduce the morpho-syntactic differences. The effectiveness of our proposed method is measured with lexical units of various granularities. Human evaluation also suggest improvements over previous methods, which are consistent with the result of the automatic evaluation.",Transferring Syntactic Relations of Subject-Verb-Object Pattern in {C}hinese-to-{K}orean {SMT},"Since most Korean postpositions signal grammatical functions such as syntactic relations, generation of incorrect Korean postpositions results in producing ungrammatical outputs in machine translations targeting Korean. Chinese and Korean belong to morphosyntactically divergent language pairs, and usually Korean postpositions do not have their counterparts in Chinese. In this paper, we propose a preprocessing method for a statistical MT system that generates more adequate Korean postpositions. We transfer syntactic relations of subject-verb-object patterns in Chinese sentences and enrich them with transferred syntactic relations in order to reduce the morpho-syntactic differences. The effectiveness of our proposed method is measured with lexical units of various granularities. Human evaluation also suggest improvements over previous methods, which are consistent with the result of the automatic evaluation.",Transferring Syntactic Relations of Subject-Verb-Object Pattern in Chinese-to-Korean SMT,"Since most Korean postpositions signal grammatical functions such as syntactic relations, generation of incorrect Korean postpositions results in producing ungrammatical outputs in machine translations targeting Korean. Chinese and Korean belong to morphosyntactically divergent language pairs, and usually Korean postpositions do not have their counterparts in Chinese. In this paper, we propose a preprocessing method for a statistical MT system that generates more adequate Korean postpositions. We transfer syntactic relations of subject-verb-object patterns in Chinese sentences and enrich them with transferred syntactic relations in order to reduce the morpho-syntactic differences. The effectiveness of our proposed method is measured with lexical units of various granularities. Human evaluation also suggest improvements over previous methods, which are consistent with the result of the automatic evaluation.","This work is supported in part by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (MEST) (2009-0075211), in part by the BK 21 project in 2010, and in part by the POSTECH Information Research Laboratories (PIRL) project.","Transferring Syntactic Relations of Subject-Verb-Object Pattern in Chinese-to-Korean SMT. 
Since most Korean postpositions signal grammatical functions such as syntactic relations, generation of incorrect Korean postpositions results in producing ungrammatical outputs in machine translations targeting Korean. Chinese and Korean belong to morphosyntactically divergent language pairs, and usually Korean postpositions do not have their counterparts in Chinese. In this paper, we propose a preprocessing method for a statistical MT system that generates more adequate Korean postpositions. We transfer syntactic relations of subject-verb-object patterns in Chinese sentences and enrich them with transferred syntactic relations in order to reduce the morpho-syntactic differences. The effectiveness of our proposed method is measured with lexical units of various granularities. Human evaluation also suggest improvements over previous methods, which are consistent with the result of the automatic evaluation.",2010
zhang-etal-2021-textoir,https://aclanthology.org/2021.acl-demo.20,0,,,,,,,"TEXTOIR: An Integrated and Visualized Platform for Text Open Intent Recognition. TEXTOIR is the first integrated and visualized platform for text open intent recognition. It is composed of two main modules: open intent detection and open intent discovery. Each module integrates most of the state-of-the-art algorithms and benchmark intent datasets. It also contains an overall framework connecting the two modules in a pipeline scheme. In addition, this platform has visualized tools for data and model management, training, evaluation and analysis of the performance from different aspects. TEXTOIR provides useful toolkits and convenient visualized interfaces for each sub-module, and designs a framework to implement a complete process to both identify known intents and discover open intents.",{TEXTOIR}: An Integrated and Visualized Platform for Text Open Intent Recognition,"TEXTOIR is the first integrated and visualized platform for text open intent recognition. It is composed of two main modules: open intent detection and open intent discovery. Each module integrates most of the state-of-the-art algorithms and benchmark intent datasets. It also contains an overall framework connecting the two modules in a pipeline scheme. In addition, this platform has visualized tools for data and model management, training, evaluation and analysis of the performance from different aspects. TEXTOIR provides useful toolkits and convenient visualized interfaces for each sub-module, and designs a framework to implement a complete process to both identify known intents and discover open intents.",TEXTOIR: An Integrated and Visualized Platform for Text Open Intent Recognition,"TEXTOIR is the first integrated and visualized platform for text open intent recognition. It is composed of two main modules: open intent detection and open intent discovery. Each module integrates most of the state-of-the-art algorithms and benchmark intent datasets. It also contains an overall framework connecting the two modules in a pipeline scheme. In addition, this platform has visualized tools for data and model management, training, evaluation and analysis of the performance from different aspects. TEXTOIR provides useful toolkits and convenient visualized interfaces for each sub-module, and designs a framework to implement a complete process to both identify known intents and discover open intents.","This work is funded by National Key R&D Program Projects of China (Grant No: 2018YFC1707605). This work is also supported by seed fund of Tsinghua University (Department of Computer Science and Technology)-Siemens Ltd., China Joint Research Center for Industrial Intelligence and Internet of Things. We would like to thank the help from Xin Wang and Huisheng Mao, and constructive feedback from Ting-En Lin on this work.","TEXTOIR: An Integrated and Visualized Platform for Text Open Intent Recognition. TEXTOIR is the first integrated and visualized platform for text open intent recognition. It is composed of two main modules: open intent detection and open intent discovery. Each module integrates most of the state-of-the-art algorithms and benchmark intent datasets. It also contains an overall framework connecting the two modules in a pipeline scheme. In addition, this platform has visualized tools for data and model management, training, evaluation and analysis of the performance from different aspects. 
TEXTOIR provides useful toolkits and convenient visualized interfaces for each sub-module, and designs a framework to implement a complete process to both identify known intents and discover open intents.",2021
yang-etal-2020-streaming,https://aclanthology.org/2020.emnlp-main.366,0,,,,,,,"A Streaming Approach For Efficient Batched Beam Search. We propose an efficient batching strategy for variable-length decoding on GPU architectures. During decoding, when candidates terminate or are pruned according to heuristics, our streaming approach periodically ""refills"" the batch before proceeding with a selected subset of candidates. We apply our method to variable-width beam search on a state-of-the-art machine translation model. Our method decreases runtime by up to 71% compared to a fixed-width beam search baseline and 17% compared to a variable-width baseline, while matching baselines' BLEU. Finally, experiments show that our method can speed up decoding in other domains, such as semantic and syntactic parsing.",A Streaming Approach For Efficient Batched Beam Search,"We propose an efficient batching strategy for variable-length decoding on GPU architectures. During decoding, when candidates terminate or are pruned according to heuristics, our streaming approach periodically ""refills"" the batch before proceeding with a selected subset of candidates. We apply our method to variable-width beam search on a state-of-the-art machine translation model. Our method decreases runtime by up to 71% compared to a fixed-width beam search baseline and 17% compared to a variable-width baseline, while matching baselines' BLEU. Finally, experiments show that our method can speed up decoding in other domains, such as semantic and syntactic parsing.",A Streaming Approach For Efficient Batched Beam Search,"We propose an efficient batching strategy for variable-length decoding on GPU architectures. During decoding, when candidates terminate or are pruned according to heuristics, our streaming approach periodically ""refills"" the batch before proceeding with a selected subset of candidates. We apply our method to variable-width beam search on a state-of-the-art machine translation model. Our method decreases runtime by up to 71% compared to a fixed-width beam search baseline and 17% compared to a variable-width baseline, while matching baselines' BLEU. Finally, experiments show that our method can speed up decoding in other domains, such as semantic and syntactic parsing.","We thank Steven Cao, Daniel Fried, Nikita Kitaev, Kevin Lin, Mitchell Stern, Kyle Swanson, Ruiqi Zhong, and the three anonymous reviewers for their helpful comments and feedback, which helped us to greatly improve the paper. This work was supported by Berkeley AI Research, DARPA through the Learning with Less Labeling (LwLL) grant, and the NSF through a fellowship to the first author.","A Streaming Approach For Efficient Batched Beam Search. We propose an efficient batching strategy for variable-length decoding on GPU architectures. During decoding, when candidates terminate or are pruned according to heuristics, our streaming approach periodically ""refills"" the batch before proceeding with a selected subset of candidates. We apply our method to variable-width beam search on a state-of-the-art machine translation model. Our method decreases runtime by up to 71% compared to a fixed-width beam search baseline and 17% compared to a variable-width baseline, while matching baselines' BLEU. Finally, experiments show that our method can speed up decoding in other domains, such as semantic and syntactic parsing.",2020
yeung-kartsaklis-2021-ccg,https://aclanthology.org/2021.semspace-1.3,0,,,,,,,"A CCG-Based Version of the DisCoCat Framework. While the DisCoCat model (Coecke et al., 2010) has been proved a valuable tool for studying compositional aspects of language at the level of semantics, its strong dependency on pregroup grammars poses important restrictions: first, it prevents large-scale experimentation due to the absence of a pregroup parser; and second, it limits the expressibility of the model to context-free grammars. In this paper we solve these problems by reformulating DisCoCat as a passage from Combinatory Categorial Grammar (CCG) to a category of semantics. We start by showing that standard categorial grammars can be expressed as a biclosed category, where all rules emerge as currying/uncurrying the identity; we then proceed to model permutation-inducing rules by exploiting the symmetry of the compact closed category encoding the word meaning. We provide a proof of concept for our method, converting ""Alice in Wonderland"" into DisCoCat form, a corpus that we make available to the community.",A {CCG}-Based Version of the {D}is{C}o{C}at Framework,"While the DisCoCat model (Coecke et al., 2010) has been proved a valuable tool for studying compositional aspects of language at the level of semantics, its strong dependency on pregroup grammars poses important restrictions: first, it prevents large-scale experimentation due to the absence of a pregroup parser; and second, it limits the expressibility of the model to context-free grammars. In this paper we solve these problems by reformulating DisCoCat as a passage from Combinatory Categorial Grammar (CCG) to a category of semantics. We start by showing that standard categorial grammars can be expressed as a biclosed category, where all rules emerge as currying/uncurrying the identity; we then proceed to model permutation-inducing rules by exploiting the symmetry of the compact closed category encoding the word meaning. We provide a proof of concept for our method, converting ""Alice in Wonderland"" into DisCoCat form, a corpus that we make available to the community.",A CCG-Based Version of the DisCoCat Framework,"While the DisCoCat model (Coecke et al., 2010) has been proved a valuable tool for studying compositional aspects of language at the level of semantics, its strong dependency on pregroup grammars poses important restrictions: first, it prevents large-scale experimentation due to the absence of a pregroup parser; and second, it limits the expressibility of the model to context-free grammars. In this paper we solve these problems by reformulating DisCoCat as a passage from Combinatory Categorial Grammar (CCG) to a category of semantics. We start by showing that standard categorial grammars can be expressed as a biclosed category, where all rules emerge as currying/uncurrying the identity; we then proceed to model permutation-inducing rules by exploiting the symmetry of the compact closed category encoding the word meaning. We provide a proof of concept for our method, converting ""Alice in Wonderland"" into DisCoCat form, a corpus that we make available to the community.","We would like to thank the anonymous reviewers for their useful comments. We are grateful to Steve Clark for his comments on CCG and the useful discussions on the generative power of the formalism. 
The paper has also greatly benefited from discussions with Alexis Toumi, Vincent Wang, Ian Fan, Harny Wang, Giovanni de Felice, Will Simmons, Konstantinos Meichanetzidis and Bob Coecke, who all have our sincere thanks.","A CCG-Based Version of the DisCoCat Framework. While the DisCoCat model (Coecke et al., 2010) has been proved a valuable tool for studying compositional aspects of language at the level of semantics, its strong dependency on pregroup grammars poses important restrictions: first, it prevents large-scale experimentation due to the absence of a pregroup parser; and second, it limits the expressibility of the model to context-free grammars. In this paper we solve these problems by reformulating DisCoCat as a passage from Combinatory Categorial Grammar (CCG) to a category of semantics. We start by showing that standard categorial grammars can be expressed as a biclosed category, where all rules emerge as currying/uncurrying the identity; we then proceed to model permutation-inducing rules by exploiting the symmetry of the compact closed category encoding the word meaning. We provide a proof of concept for our method, converting ""Alice in Wonderland"" into DisCoCat form, a corpus that we make available to the community.",2021
pericliev-1984-handling,https://aclanthology.org/P84-1111,0,,,,,,,Handling Syntactical Ambiguity in Machine Translation. The difficulties to be met with the resolution of syntactical ambiguity in MT can be at least partially overcome by means of preserving the syntactical ambiguity of the source language into the target language. An extensive study of the correspondences between the syntactically ambiguous structures in English and Bulgarian has provided a solid empirical basis in favor of such an approach. Similar results could be expected for other sufficiently related languages as well. The paper concentrates on the linguistic grounds for adopting the approach proposed.,Handling Syntactical Ambiguity in Machine Translation,The difficulties to be met with the resolution of syntactical ambiguity in MT can be at least partially overcome by means of preserving the syntactical ambiguity of the source language into the target language. An extensive study of the correspondences between the syntactically ambiguous structures in English and Bulgarian has provided a solid empirical basis in favor of such an approach. Similar results could be expected for other sufficiently related languages as well. The paper concentrates on the linguistic grounds for adopting the approach proposed.,Handling Syntactical Ambiguity in Machine Translation,The difficulties to be met with the resolution of syntactical ambiguity in MT can be at least partially overcome by means of preserving the syntactical ambiguity of the source language into the target language. An extensive study of the correspondences between the syntactically ambiguous structures in English and Bulgarian has provided a solid empirical basis in favor of such an approach. Similar results could be expected for other sufficiently related languages as well. The paper concentrates on the linguistic grounds for adopting the approach proposed.,,Handling Syntactical Ambiguity in Machine Translation. The difficulties to be met with the resolution of syntactical ambiguity in MT can be at least partially overcome by means of preserving the syntactical ambiguity of the source language into the target language. An extensive study of the correspondences between the syntactically ambiguous structures in English and Bulgarian has provided a solid empirical basis in favor of such an approach. Similar results could be expected for other sufficiently related languages as well. The paper concentrates on the linguistic grounds for adopting the approach proposed.,1984
wiren-1987-comparison,https://aclanthology.org/E87-1037,0,,,,,,,"A Comparison of Rule-Invocation Strategies in Context-Free Chart Parsing. Currently several grammatical formalisms converge towards being declarative and towards utilizing context-free phrase-structure grammar as a backbone, e.g. LFG and PATR-II. Typically the processing of these formalisms is organized within a chart-parsing framework. The declarative character of the formalisms makes it important to decide upon an overall optimal control strategy on the part of the processor. In particular, this brings the rule-invocation strategy into critical focus: to gain maximal processing efficiency, one has to determine the best way of putting the rules to use. The aim of this paper is to provide a survey and a practical comparison of fundamental rule-invocation strategies within context-free chart parsing.",A Comparison of Rule-Invocation Strategies in Context-Free Chart Parsing,"Currently several grammatical formalisms converge towards being declarative and towards utilizing context-free phrase-structure grammar as a backbone, e.g. LFG and PATR-II. Typically the processing of these formalisms is organized within a chart-parsing framework. The declarative character of the formalisms makes it important to decide upon an overall optimal control strategy on the part of the processor. In particular, this brings the rule-invocation strategy into critical focus: to gain maximal processing efficiency, one has to determine the best way of putting the rules to use. The aim of this paper is to provide a survey and a practical comparison of fundamental rule-invocation strategies within context-free chart parsing.",A Comparison of Rule-Invocation Strategies in Context-Free Chart Parsing,"Currently several grammatical formalisms converge towards being declarative and towards utilizing context-free phrase-structure grammar as a backbone, e.g. LFG and PATR-II. Typically the processing of these formalisms is organized within a chart-parsing framework. The declarative character of the formalisms makes it important to decide upon an overall optimal control strategy on the part of the processor. In particular, this brings the rule-invocation strategy into critical focus: to gain maximal processing efficiency, one has to determine the best way of putting the rules to use. The aim of this paper is to provide a survey and a practical comparison of fundamental rule-invocation strategies within context-free chart parsing.","I would like to thank Lars Ahrenberg, Nils Dahlbäck, Arne Jönsson, Magnus Merkel, Ivan Rankin, and an anonymous referee for the very helpful comments they have made on various drafts of this paper. In addition I am indebted to Masaru Tomita for providing me with his test grammars and sentences, and to Martin Kay for comments in connection with my presentation.","A Comparison of Rule-Invocation Strategies in Context-Free Chart Parsing. Currently several grammatical formalisms converge towards being declarative and towards utilizing context-free phrase-structure grammar as a backbone, e.g. LFG and PATR-II. Typically the processing of these formalisms is organized within a chart-parsing framework. The declarative character of the formalisms makes it important to decide upon an overall optimal control strategy on the part of the processor. In particular, this brings the rule-invocation strategy into critical focus: to gain maximal processing efficiency, one has to determine the best way of putting the rules to use. 
The aim of this paper is to provide a survey and a practical comparison of fundamental rule-invocation strategies within context-free chart parsing.",1987
weller-di-marco-fraser-2020-modeling,https://aclanthology.org/2020.acl-main.389,0,,,,,,,"Modeling Word Formation in English--German Neural Machine Translation. This paper studies strategies to model word formation in NMT using rich linguistic information, namely a word segmentation approach that goes beyond splitting into substrings by considering fusional morphology. Our linguistically sound segmentation is combined with a method for target-side inflection to accommodate modeling word formation. The best system variants employ source-side morphological analysis and model complex target-side words, improving over a standard system.",Modeling Word Formation in {E}nglish{--}{G}erman Neural Machine Translation,"This paper studies strategies to model word formation in NMT using rich linguistic information, namely a word segmentation approach that goes beyond splitting into substrings by considering fusional morphology. Our linguistically sound segmentation is combined with a method for target-side inflection to accommodate modeling word formation. The best system variants employ source-side morphological analysis and model complex target-side words, improving over a standard system.",Modeling Word Formation in English--German Neural Machine Translation,"This paper studies strategies to model word formation in NMT using rich linguistic information, namely a word segmentation approach that goes beyond splitting into substrings by considering fusional morphology. Our linguistically sound segmentation is combined with a method for target-side inflection to accommodate modeling word formation. The best system variants employ source-side morphological analysis and model complex target-side words, improving over a standard system.",This research was partially funded by LMU Munich's Institutional Strategy LMUexcellent within the framework of the German Excellence Initiative. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement № 640550). This work was supported by the Dutch Organization for Scientific Research (NWO) VICI Grant nr. 277-89-002.,"Modeling Word Formation in English--German Neural Machine Translation. This paper studies strategies to model word formation in NMT using rich linguistic information, namely a word segmentation approach that goes beyond splitting into substrings by considering fusional morphology. Our linguistically sound segmentation is combined with a method for target-side inflection to accommodate modeling word formation. The best system variants employ source-side morphological analysis and model complex target-side words, improving over a standard system.",2020
dai-etal-2021-ultra,https://aclanthology.org/2021.acl-long.141,0,,,,,,,"Ultra-Fine Entity Typing with Weak Supervision from a Masked Language Model. Recently, there is an effort to extend fine-grained entity typing by using a richer and ultra-fine set of types, and labeling noun phrases including pronouns and nominal nouns instead of just named entity mentions. A key challenge for this ultra-fine entity typing task is that human annotated data are extremely scarce, and the annotation ability of existing distant or weak supervision approaches is very limited. To remedy this problem, in this paper, we propose to obtain training data for ultra-fine entity typing by using a BERT Masked Language Model (MLM). Given a mention in a sentence, our approach constructs an input for the BERT MLM so that it predicts context dependent hypernyms of the mention, which can be used as type labels. Experimental results demonstrate that, with the help of these automatically generated labels, the performance of an ultra-fine entity typing model can be improved substantially. We also show that our approach can be applied to improve traditional fine-grained entity typing after performing simple type mapping.",Ultra-Fine Entity Typing with Weak Supervision from a Masked Language Model,"Recently, there is an effort to extend fine-grained entity typing by using a richer and ultra-fine set of types, and labeling noun phrases including pronouns and nominal nouns instead of just named entity mentions. A key challenge for this ultra-fine entity typing task is that human annotated data are extremely scarce, and the annotation ability of existing distant or weak supervision approaches is very limited. To remedy this problem, in this paper, we propose to obtain training data for ultra-fine entity typing by using a BERT Masked Language Model (MLM). Given a mention in a sentence, our approach constructs an input for the BERT MLM so that it predicts context dependent hypernyms of the mention, which can be used as type labels. Experimental results demonstrate that, with the help of these automatically generated labels, the performance of an ultra-fine entity typing model can be improved substantially. We also show that our approach can be applied to improve traditional fine-grained entity typing after performing simple type mapping.",Ultra-Fine Entity Typing with Weak Supervision from a Masked Language Model,"Recently, there is an effort to extend fine-grained entity typing by using a richer and ultra-fine set of types, and labeling noun phrases including pronouns and nominal nouns instead of just named entity mentions. A key challenge for this ultra-fine entity typing task is that human annotated data are extremely scarce, and the annotation ability of existing distant or weak supervision approaches is very limited. To remedy this problem, in this paper, we propose to obtain training data for ultra-fine entity typing by using a BERT Masked Language Model (MLM). Given a mention in a sentence, our approach constructs an input for the BERT MLM so that it predicts context dependent hypernyms of the mention, which can be used as type labels. Experimental results demonstrate that, with the help of these automatically generated labels, the performance of an ultra-fine entity typing model can be improved substantially. We also show that our approach can be applied to improve traditional fine-grained entity typing after performing simple type mapping.","This paper was supported by the NSFC Grant (No. 
U20B2053) from China, the Early Career Scheme (ECS, No. 26206717), the General Research Fund (GRF, No. 16211520), and the Research Impact Fund (RIF) from the Research Grants Council (RGC) of Hong Kong, with special thanks to the WeChat-HKUST WHAT Lab on Artificial Intelligence Technology.","Ultra-Fine Entity Typing with Weak Supervision from a Masked Language Model. Recently, there is an effort to extend fine-grained entity typing by using a richer and ultra-fine set of types, and labeling noun phrases including pronouns and nominal nouns instead of just named entity mentions. A key challenge for this ultra-fine entity typing task is that human annotated data are extremely scarce, and the annotation ability of existing distant or weak supervision approaches is very limited. To remedy this problem, in this paper, we propose to obtain training data for ultra-fine entity typing by using a BERT Masked Language Model (MLM). Given a mention in a sentence, our approach constructs an input for the BERT MLM so that it predicts context dependent hypernyms of the mention, which can be used as type labels. Experimental results demonstrate that, with the help of these automatically generated labels, the performance of an ultra-fine entity typing model can be improved substantially. We also show that our approach can be applied to improve traditional fine-grained entity typing after performing simple type mapping.",2021
weller-heid-2012-analyzing,http://www.lrec-conf.org/proceedings/lrec2012/pdf/817_Paper.pdf,0,,,,,,,"Analyzing and Aligning German compound nouns. In this paper, we present and evaluate an approach for the compositional alignment of compound nouns using comparable corpora from technical domains. The task of term alignment consists in relating a source language term to its translation in a list of target language terms with the help of a bilingual dictionary. Compound splitting allows to transform a compound into a sequence of components which can be translated separately and then related to multi-word target language terms. We present and evaluate a method for compound splitting, and compare two strategies for term alignment (bag-of-word vs. pattern-based). The simple word-based approach leads to a considerable amount of erroneous alignments, whereas the pattern-based approach reaches a decent precision. We also assess the reasons for alignment failures: in the comparable corpora used for our experiments, a substantial number of terms has no translation in the target language data; furthermore, the non-isomorphic structures of source and target language terms cause alignment failures in many cases.",Analyzing and Aligning {G}erman compound nouns,"In this paper, we present and evaluate an approach for the compositional alignment of compound nouns using comparable corpora from technical domains. The task of term alignment consists in relating a source language term to its translation in a list of target language terms with the help of a bilingual dictionary. Compound splitting allows to transform a compound into a sequence of components which can be translated separately and then related to multi-word target language terms. We present and evaluate a method for compound splitting, and compare two strategies for term alignment (bag-of-word vs. pattern-based). The simple word-based approach leads to a considerable amount of erroneous alignments, whereas the pattern-based approach reaches a decent precision. We also assess the reasons for alignment failures: in the comparable corpora used for our experiments, a substantial number of terms has no translation in the target language data; furthermore, the non-isomorphic structures of source and target language terms cause alignment failures in many cases.",Analyzing and Aligning German compound nouns,"In this paper, we present and evaluate an approach for the compositional alignment of compound nouns using comparable corpora from technical domains. The task of term alignment consists in relating a source language term to its translation in a list of target language terms with the help of a bilingual dictionary. Compound splitting allows to transform a compound into a sequence of components which can be translated separately and then related to multi-word target language terms. We present and evaluate a method for compound splitting, and compare two strategies for term alignment (bag-of-word vs. pattern-based). The simple word-based approach leads to a considerable amount of erroneous alignments, whereas the pattern-based approach reaches a decent precision. 
We also assess the reasons for alignment failures: in the comparable corpora used for our experiments, a substantial number of terms has no translation in the target language data; furthermore, the non-isomorphic structures of source and target language terms cause alignment failures in many cases.",The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement n. 248005.,"Analyzing and Aligning German compound nouns. In this paper, we present and evaluate an approach for the compositional alignment of compound nouns using comparable corpora from technical domains. The task of term alignment consists in relating a source language term to its translation in a list of target language terms with the help of a bilingual dictionary. Compound splitting allows to transform a compound into a sequence of components which can be translated separately and then related to multi-word target language terms. We present and evaluate a method for compound splitting, and compare two strategies for term alignment (bag-of-word vs. pattern-based). The simple word-based approach leads to a considerable amount of erroneous alignments, whereas the pattern-based approach reaches a decent precision. We also assess the reasons for alignment failures: in the comparable corpora used for our experiments, a substantial number of terms has no translation in the target language data; furthermore, the non-isomorphic structures of source and target language terms cause alignment failures in many cases.",2012
poesio-vieira-1998-corpus,https://aclanthology.org/J98-2001,0,,,,,,,"A Corpus-based Investigation of Definite Description Use. We present the results of a study of the use of definite descriptions in written texts aimed at assessing the feasibility of annotating corpora with information about definite description interpretation. We ran two experiments, in which subjects were asked to classify the uses of definite descriptions in a corpus of 33 newspaper articles, containing a total of 1,412 definite descriptions. We measured the agreement among annotators about the classes assigned to definite descriptions, as well as the agreement about the antecedent assigned to those definites that the annotators classified as being related to an antecedent in the text. The most interesting result of this study from a corpus annotation perspective was the rather low agreement (K = 0.63) that we obtained using versions of Hawkins's and Prince's classification schemes; better results (K = 0.76) were obtained using the simplified scheme proposed by Fraurud that includes only two classes, first-mention and subsequent-mention. The agreement about antecedents was also not complete. These findings raise questions concerning the strategy of evaluating systems for definite description interpretation by comparing their results with a standardized annotation. From a linguistic point of view, the most interesting observations were the great number of discourse-new definites in our corpus (in one of our experiments, about 50% of the definites in the collection were classified as discourse-new, 30% as anaphoric, and 18% as associative/bridging) and the presence of definites that did not seem to require a complete disambiguation. all-or-nothing affair (Bard, Robertson, and Sorace 1996).",A Corpus-based Investigation of Definite Description Use,"We present the results of a study of the use of definite descriptions in written texts aimed at assessing the feasibility of annotating corpora with information about definite description interpretation. We ran two experiments, in which subjects were asked to classify the uses of definite descriptions in a corpus of 33 newspaper articles, containing a total of 1,412 definite descriptions. We measured the agreement among annotators about the classes assigned to definite descriptions, as well as the agreement about the antecedent assigned to those definites that the annotators classified as being related to an antecedent in the text. The most interesting result of this study from a corpus annotation perspective was the rather low agreement (K = 0.63) that we obtained using versions of Hawkins's and Prince's classification schemes; better results (K = 0.76) were obtained using the simplified scheme proposed by Fraurud that includes only two classes, first-mention and subsequent-mention. The agreement about antecedents was also not complete. These findings raise questions concerning the strategy of evaluating systems for definite description interpretation by comparing their results with a standardized annotation. From a linguistic point of view, the most interesting observations were the great number of discourse-new definites in our corpus (in one of our experiments, about 50% of the definites in the collection were classified as discourse-new, 30% as anaphoric, and 18% as associative/bridging) and the presence of definites that did not seem to require a complete disambiguation. 
all-or-nothing affair (Bard, Robertson, and Sorace 1996).",A Corpus-based Investigation of Definite Description Use,"We present the results of a study of the use of definite descriptions in written texts aimed at assessing the feasibility of annotating corpora with information about definite description interpretation. We ran two experiments, in which subjects were asked to classify the uses of definite descriptions in a corpus of 33 newspaper articles, containing a total of 1,412 definite descriptions. We measured the agreement among annotators about the classes assigned to definite descriptions, as well as the agreement about the antecedent assigned to those definites that the annotators classified as being related to an antecedent in the text. The most interesting result of this study from a corpus annotation perspective was the rather low agreement (K = 0.63) that we obtained using versions of Hawkins's and Prince's classification schemes; better results (K = 0.76) were obtained using the simplified scheme proposed by Fraurud that includes only two classes, first-mention and subsequent-mention. The agreement about antecedents was also not complete. These findings raise questions concerning the strategy of evaluating systems for definite description interpretation by comparing their results with a standardized annotation. From a linguistic point of view, the most interesting observations were the great number of discourse-new definites in our corpus (in one of our experiments, about 50% of the definites in the collection were classified as discourse-new, 30% as anaphoric, and 18% as associative/bridging) and the presence of definites that did not seem to require a complete disambiguation. all-or-nothing affair (Bard, Robertson, and Sorace 1996).","We wish to thank Jean Carletta for much help both with designing the experiments and with the analysis of the results. We are also grateful to Ellen Bard, Robin Cooper, Kari Fraurud, Janet Hitzeman, Kjetil Strand, and our anonymous reviewers for many helpful comments. Massimo Poesio holds an Advanced Research Fellowship from EPSRC, UK; Renata Vieira is supported by a fellowship from CNPq, Brazil.","A Corpus-based Investigation of Definite Description Use. We present the results of a study of the use of definite descriptions in written texts aimed at assessing the feasibility of annotating corpora with information about definite description interpretation. We ran two experiments, in which subjects were asked to classify the uses of definite descriptions in a corpus of 33 newspaper articles, containing a total of 1,412 definite descriptions. We measured the agreement among annotators about the classes assigned to definite descriptions, as well as the agreement about the antecedent assigned to those definites that the annotators classified as being related to an antecedent in the text. The most interesting result of this study from a corpus annotation perspective was the rather low agreement (K = 0.63) that we obtained using versions of Hawkins's and Prince's classification schemes; better results (K = 0.76) were obtained using the simplified scheme proposed by Fraurud that includes only two classes, first-mention and subsequent-mention. The agreement about antecedents was also not complete. These findings raise questions concerning the strategy of evaluating systems for definite description interpretation by comparing their results with a standardized annotation. 
From a linguistic point of view, the most interesting observations were the great number of discourse-new definites in our corpus (in one of our experiments, about 50% of the definites in the collection were classified as discourse-new, 30% as anaphoric, and 18% as associative/bridging) and the presence of definites that did not seem to require a complete disambiguation. all-or-nothing affair (Bard, Robertson, and Sorace 1996).",1998
seonwoo-etal-2021-weakly,https://aclanthology.org/2021.findings-acl.62,0,,,,,,,"Weakly Supervised Pre-Training for Multi-Hop Retriever. In multi-hop QA, answering complex questions entails iterative document retrieval for finding the missing entity of the question. The main steps of this process are sub-question detection, document retrieval for the subquestion, and generation of a new query for the final document retrieval. However, building a dataset that contains complex questions with sub-questions and their corresponding documents requires costly human annotation. To address the issue, we propose a new method for weakly supervised multi-hop retriever pretraining without human efforts. Our method includes 1) a pre-training task for generating vector representations of complex questions, 2) a scalable data generation method that produces the nested structure of question and subquestion as weak supervision for pre-training, and 3) a pre-training model structure based on dense encoders. We conduct experiments to compare the performance of our pre-trained retriever with several state-of-the-art models on end-to-end multi-hop QA as well as document retrieval. The experimental results show that our pre-trained retriever is effective and also robust on limited data and computational resources.",Weakly Supervised Pre-Training for Multi-Hop Retriever,"In multi-hop QA, answering complex questions entails iterative document retrieval for finding the missing entity of the question. The main steps of this process are sub-question detection, document retrieval for the subquestion, and generation of a new query for the final document retrieval. However, building a dataset that contains complex questions with sub-questions and their corresponding documents requires costly human annotation. To address the issue, we propose a new method for weakly supervised multi-hop retriever pretraining without human efforts. Our method includes 1) a pre-training task for generating vector representations of complex questions, 2) a scalable data generation method that produces the nested structure of question and subquestion as weak supervision for pre-training, and 3) a pre-training model structure based on dense encoders. We conduct experiments to compare the performance of our pre-trained retriever with several state-of-the-art models on end-to-end multi-hop QA as well as document retrieval. The experimental results show that our pre-trained retriever is effective and also robust on limited data and computational resources.",Weakly Supervised Pre-Training for Multi-Hop Retriever,"In multi-hop QA, answering complex questions entails iterative document retrieval for finding the missing entity of the question. The main steps of this process are sub-question detection, document retrieval for the subquestion, and generation of a new query for the final document retrieval. However, building a dataset that contains complex questions with sub-questions and their corresponding documents requires costly human annotation. To address the issue, we propose a new method for weakly supervised multi-hop retriever pretraining without human efforts. Our method includes 1) a pre-training task for generating vector representations of complex questions, 2) a scalable data generation method that produces the nested structure of question and subquestion as weak supervision for pre-training, and 3) a pre-training model structure based on dense encoders. 
We conduct experiments to compare the performance of our pre-trained retriever with several state-of-the-art models on end-to-end multi-hop QA as well as document retrieval. The experimental results show that our pre-trained retriever is effective and also robust on limited data and computational resources.","This work was partly supported by NAVER Corp. and Institute for Information & communications Technology Planning & Evaluation(IITP) grant funded by the Korean government(MSIT) (No. 2017-0-01780, The technology development for event recognition/relational reasoning and learning knowledge based system for video understanding).","Weakly Supervised Pre-Training for Multi-Hop Retriever. In multi-hop QA, answering complex questions entails iterative document retrieval for finding the missing entity of the question. The main steps of this process are sub-question detection, document retrieval for the subquestion, and generation of a new query for the final document retrieval. However, building a dataset that contains complex questions with sub-questions and their corresponding documents requires costly human annotation. To address the issue, we propose a new method for weakly supervised multi-hop retriever pretraining without human efforts. Our method includes 1) a pre-training task for generating vector representations of complex questions, 2) a scalable data generation method that produces the nested structure of question and subquestion as weak supervision for pre-training, and 3) a pre-training model structure based on dense encoders. We conduct experiments to compare the performance of our pre-trained retriever with several state-of-the-art models on end-to-end multi-hop QA as well as document retrieval. The experimental results show that our pre-trained retriever is effective and also robust on limited data and computational resources.",2021
barnes-etal-2016-exploring,https://aclanthology.org/C16-1152,0,,,,,,,"Exploring Distributional Representations and Machine Translation for Aspect-based Cross-lingual Sentiment Classification.. Cross-lingual sentiment classification (CLSC) seeks to use resources from a source language in order to detect sentiment and classify text in a target language. Almost all research into CLSC has been carried out at sentence and document level, although this level of granularity is often less useful. This paper explores methods for performing aspect-based cross-lingual sentiment classification (aspect-based CLSC) for under-resourced languages. Given the limited nature of parallel data for under-resourced languages, we would like to make the most of this resource for our task. We compare zero-shot learning, bilingual word embeddings, stacked denoising autoencoder representations and machine translation techniques for aspect-based CLSC. We show that models based on distributed semantics can achieve comparable results to machine translation on aspect-based CLSC. Finally, we give an analysis of the errors found for each method.",Exploring Distributional Representations and Machine Translation for Aspect-based Cross-lingual Sentiment Classification.,"Cross-lingual sentiment classification (CLSC) seeks to use resources from a source language in order to detect sentiment and classify text in a target language. Almost all research into CLSC has been carried out at sentence and document level, although this level of granularity is often less useful. This paper explores methods for performing aspect-based cross-lingual sentiment classification (aspect-based CLSC) for under-resourced languages. Given the limited nature of parallel data for under-resourced languages, we would like to make the most of this resource for our task. We compare zero-shot learning, bilingual word embeddings, stacked denoising autoencoder representations and machine translation techniques for aspect-based CLSC. We show that models based on distributed semantics can achieve comparable results to machine translation on aspect-based CLSC. Finally, we give an analysis of the errors found for each method.",Exploring Distributional Representations and Machine Translation for Aspect-based Cross-lingual Sentiment Classification.,"Cross-lingual sentiment classification (CLSC) seeks to use resources from a source language in order to detect sentiment and classify text in a target language. Almost all research into CLSC has been carried out at sentence and document level, although this level of granularity is often less useful. This paper explores methods for performing aspect-based cross-lingual sentiment classification (aspect-based CLSC) for under-resourced languages. Given the limited nature of parallel data for under-resourced languages, we would like to make the most of this resource for our task. We compare zero-shot learning, bilingual word embeddings, stacked denoising autoencoder representations and machine translation techniques for aspect-based CLSC. We show that models based on distributed semantics can achieve comparable results to machine translation on aspect-based CLSC. Finally, we give an analysis of the errors found for each method.",,"Exploring Distributional Representations and Machine Translation for Aspect-based Cross-lingual Sentiment Classification.. Cross-lingual sentiment classification (CLSC) seeks to use resources from a source language in order to detect sentiment and classify text in a target language. 
Almost all research into CLSC has been carried out at sentence and document level, although this level of granularity is often less useful. This paper explores methods for performing aspect-based cross-lingual sentiment classification (aspect-based CLSC) for under-resourced languages. Given the limited nature of parallel data for under-resourced languages, we would like to make the most of this resource for our task. We compare zero-shot learning, bilingual word embeddings, stacked denoising autoencoder representations and machine translation techniques for aspect-based CLSC. We show that models based on distributed semantics can achieve comparable results to machine translation on aspect-based CLSC. Finally, we give an analysis of the errors found for each method.",2016
van-kuppevelt-1993-intentionality,https://aclanthology.org/W93-0236,0,,,,,,,"Intentionality in a Topical Approach of Discourse Structure. Position paper The alternative to be outlined provides a proposal to solve a central problem in research on discourse structure and discourse coherence, namely, as pointed out by many authors, that of the relationship between linguistic and intentional structure, or, in other words, between subject matter and presentational relations (Mann and Thompson 1988) or informational and intentional relations (Moore and Pollack 1992). As is argued for in Van Kuppevelt (1993), this alternative not only implies uniformity on the structural levels involved, i.e. the linguistic and intentional level, but also on the level of attentional states (Grosz and Sidner 1986). 2 The latter is ruled by the dynamics of topic constitution and topic termination, determining which discourse units are in focus of attention during the development of the discourse. 3 We will see that both linguistic relations and intentions are defined in a uniform way by topic-forming questions in discourse, thereby automatically satisfying the need for a multi-level analysis as is argued for in Moore and Paris (1992), and as is signalled by Dale (this volume), avoiding differences in discourse segmentation between RST analyses and intentional approaches. The central hypothesis underlying this alternative is that the structural coherence in discourse is governed by the discourse-internal process of questioning, consisting of the contextual induction of explicit and/or implicit topic-forming questions. This process gives rise to the phenomenon that the organization of discourse segments (as well as the associated isomorphic structure of intentions) agrees with the internal topic-comment structure, and that in the following specific way: (i) every discourse unit u(D)Tp has associated with it a topic Tp (or, a discourse topic DTp) which is provided by the (set of) topic-forming question(s) Qp that UTp has answered, and (ii), the relation between discourse units u(D) Ti is determined by the relation between the topic-forming questions Qi answered by these discourse units u(D)Ti. 4 Topics are thus context-dependently characterized in terms of questions arising from the preceding discourse. As is elaborated upon in Van Kuppevelt (1991/92) every contextually induced explicit .... or implicit (sub)question Qp that ...... is answered in discourse constitutes a (sub)topic Tp. Tp ts that which ts questioned; an undetermined set of (possibly non-existent) discourse entitles (or a set of ordered n-tuples of such entities in the case of an n-fold question) which needs further",Intentionality in a Topical Approach of Discourse Structure,"Position paper The alternative to be outlined provides a proposal to solve a central problem in research on discourse structure and discourse coherence, namely, as pointed out by many authors, that of the relationship between linguistic and intentional structure, or, in other words, between subject matter and presentational relations (Mann and Thompson 1988) or informational and intentional relations (Moore and Pollack 1992). As is argued for in Van Kuppevelt (1993), this alternative not only implies uniformity on the structural levels involved, i.e. the linguistic and intentional level, but also on the level of attentional states (Grosz and Sidner 1986). 
2 The latter is ruled by the dynamics of topic constitution and topic termination, determining which discourse units are in focus of attention during the development of the discourse. 3 We will see that both linguistic relations and intentions are defined in a uniform way by topic-forming questions in discourse, thereby automatically satisfying the need for a multi-level analysis as is argued for in Moore and Paris (1992), and as is signalled by Dale (this volume), avoiding differences in discourse segmentation between RST analyses and intentional approaches. The central hypothesis underlying this alternative is that the structural coherence in discourse is governed by the discourse-internal process of questioning, consisting of the contextual induction of explicit and/or implicit topic-forming questions. This process gives rise to the phenomenon that the organization of discourse segments (as well as the associated isomorphic structure of intentions) agrees with the internal topic-comment structure, and that in the following specific way: (i) every discourse unit u(D)Tp has associated with it a topic Tp (or, a discourse topic DTp) which is provided by the (set of) topic-forming question(s) Qp that UTp has answered, and (ii), the relation between discourse units u(D) Ti is determined by the relation between the topic-forming questions Qi answered by these discourse units u(D)Ti. 4 Topics are thus context-dependently characterized in terms of questions arising from the preceding discourse. As is elaborated upon in Van Kuppevelt (1991/92) every contextually induced explicit .... or implicit (sub)question Qp that ...... is answered in discourse constitutes a (sub)topic Tp. Tp ts that which ts questioned; an undetermined set of (possibly non-existent) discourse entitles (or a set of ordered n-tuples of such entities in the case of an n-fold question) which needs further",Intentionality in a Topical Approach of Discourse Structure,"Position paper The alternative to be outlined provides a proposal to solve a central problem in research on discourse structure and discourse coherence, namely, as pointed out by many authors, that of the relationship between linguistic and intentional structure, or, in other words, between subject matter and presentational relations (Mann and Thompson 1988) or informational and intentional relations (Moore and Pollack 1992). As is argued for in Van Kuppevelt (1993), this alternative not only implies uniformity on the structural levels involved, i.e. the linguistic and intentional level, but also on the level of attentional states (Grosz and Sidner 1986). 2 The latter is ruled by the dynamics of topic constitution and topic termination, determining which discourse units are in focus of attention during the development of the discourse. 3 We will see that both linguistic relations and intentions are defined in a uniform way by topic-forming questions in discourse, thereby automatically satisfying the need for a multi-level analysis as is argued for in Moore and Paris (1992), and as is signalled by Dale (this volume), avoiding differences in discourse segmentation between RST analyses and intentional approaches. The central hypothesis underlying this alternative is that the structural coherence in discourse is governed by the discourse-internal process of questioning, consisting of the contextual induction of explicit and/or implicit topic-forming questions. 
This process gives rise to the phenomenon that the organization of discourse segments (as well as the associated isomorphic structure of intentions) agrees with the internal topic-comment structure, and that in the following specific way: (i) every discourse unit u(D)Tp has associated with it a topic Tp (or, a discourse topic DTp) which is provided by the (set of) topic-forming question(s) Qp that UTp has answered, and (ii), the relation between discourse units u(D) Ti is determined by the relation between the topic-forming questions Qi answered by these discourse units u(D)Ti. 4 Topics are thus context-dependently characterized in terms of questions arising from the preceding discourse. As is elaborated upon in Van Kuppevelt (1991/92) every contextually induced explicit .... or implicit (sub)question Qp that ...... is answered in discourse constitutes a (sub)topic Tp. Tp ts that which ts questioned; an undetermined set of (possibly non-existent) discourse entitles (or a set of ordered n-tuples of such entities in the case of an n-fold question) which needs further",,"Intentionality in a Topical Approach of Discourse Structure. Position paper The alternative to be outlined provides a proposal to solve a central problem in research on discourse structure and discourse coherence, namely, as pointed out by many authors, that of the relationship between linguistic and intentional structure, or, in other words, between subject matter and presentational relations (Mann and Thompson 1988) or informational and intentional relations (Moore and Pollack 1992). As is argued for in Van Kuppevelt (1993), this alternative not only implies uniformity on the structural levels involved, i.e. the linguistic and intentional level, but also on the level of attentional states (Grosz and Sidner 1986). 2 The latter is ruled by the dynamics of topic constitution and topic termination, determining which discourse units are in focus of attention during the development of the discourse. 3 We will see that both linguistic relations and intentions are defined in a uniform way by topic-forming questions in discourse, thereby automatically satisfying the need for a multi-level analysis as is argued for in Moore and Paris (1992), and as is signalled by Dale (this volume), avoiding differences in discourse segmentation between RST analyses and intentional approaches. The central hypothesis underlying this alternative is that the structural coherence in discourse is governed by the discourse-internal process of questioning, consisting of the contextual induction of explicit and/or implicit topic-forming questions. This process gives rise to the phenomenon that the organization of discourse segments (as well as the associated isomorphic structure of intentions) agrees with the internal topic-comment structure, and that in the following specific way: (i) every discourse unit u(D)Tp has associated with it a topic Tp (or, a discourse topic DTp) which is provided by the (set of) topic-forming question(s) Qp that UTp has answered, and (ii), the relation between discourse units u(D) Ti is determined by the relation between the topic-forming questions Qi answered by these discourse units u(D)Ti. 4 Topics are thus context-dependently characterized in terms of questions arising from the preceding discourse. As is elaborated upon in Van Kuppevelt (1991/92) every contextually induced explicit .... or implicit (sub)question Qp that ...... is answered in discourse constitutes a (sub)topic Tp. 
Tp is that which is questioned; an undetermined set of (possibly non-existent) discourse entities (or a set of ordered n-tuples of such entities in the case of an n-fold question) which needs further",1993
liu-etal-2021-improving-factual,https://aclanthology.org/2021.ecnlp-1.19,0,,,,business_use,,,"Improving Factual Consistency of Abstractive Summarization on Customer Feedback. E-commerce stores collect customer feedback to let sellers learn about customer concerns and enhance customer order experience. Because customer feedback often contains redundant information, a concise summary of the feedback can be generated to help sellers better understand the issues causing customer dissatisfaction. Previous state-of-the-art abstractive text summarization models make two major types of factual errors when producing summaries from customer feedback, which are wrong entity detection (WED) and incorrect product-defect description (IPD). In this work, we introduce a set of methods to enhance the factual consistency of abstractive summarization on customer feedback. We augment the training data with artificially corrupted summaries, and use them as counterparts of the target summaries. We add a contrastive loss term into the training objective so that the model learns to avoid certain factual errors. Evaluation results show that a large portion of WED and IPD errors are alleviated for BART and T5. Furthermore, our approaches do not depend on the structure of the summarization model and thus are generalizable to any abstractive summarization systems.",Improving Factual Consistency of Abstractive Summarization on Customer Feedback,"E-commerce stores collect customer feedback to let sellers learn about customer concerns and enhance customer order experience. Because customer feedback often contains redundant information, a concise summary of the feedback can be generated to help sellers better understand the issues causing customer dissatisfaction. Previous state-of-the-art abstractive text summarization models make two major types of factual errors when producing summaries from customer feedback, which are wrong entity detection (WED) and incorrect product-defect description (IPD). In this work, we introduce a set of methods to enhance the factual consistency of abstractive summarization on customer feedback. We augment the training data with artificially corrupted summaries, and use them as counterparts of the target summaries. We add a contrastive loss term into the training objective so that the model learns to avoid certain factual errors. Evaluation results show that a large portion of WED and IPD errors are alleviated for BART and T5. Furthermore, our approaches do not depend on the structure of the summarization model and thus are generalizable to any abstractive summarization systems.",Improving Factual Consistency of Abstractive Summarization on Customer Feedback,"E-commerce stores collect customer feedback to let sellers learn about customer concerns and enhance customer order experience. Because customer feedback often contains redundant information, a concise summary of the feedback can be generated to help sellers better understand the issues causing customer dissatisfaction. Previous state-of-the-art abstractive text summarization models make two major types of factual errors when producing summaries from customer feedback, which are wrong entity detection (WED) and incorrect product-defect description (IPD). In this work, we introduce a set of methods to enhance the factual consistency of abstractive summarization on customer feedback. We augment the training data with artificially corrupted summaries, and use them as counterparts of the target summaries. 
We add a contrastive loss term into the training objective so that the model learns to avoid certain factual errors. Evaluation results show that a large portion of WED and IPD errors are alleviated for BART and T5. Furthermore, our approaches do not depend on the structure of the summarization model and thus are generalizable to any abstractive summarization systems.",,"Improving Factual Consistency of Abstractive Summarization on Customer Feedback. E-commerce stores collect customer feedback to let sellers learn about customer concerns and enhance customer order experience. Because customer feedback often contains redundant information, a concise summary of the feedback can be generated to help sellers better understand the issues causing customer dissatisfaction. Previous state-of-the-art abstractive text summarization models make two major types of factual errors when producing summaries from customer feedback, which are wrong entity detection (WED) and incorrect product-defect description (IPD). In this work, we introduce a set of methods to enhance the factual consistency of abstractive summarization on customer feedback. We augment the training data with artificially corrupted summaries, and use them as counterparts of the target summaries. We add a contrastive loss term into the training objective so that the model learns to avoid certain factual errors. Evaluation results show that a large portion of WED and IPD errors are alleviated for BART and T5. Furthermore, our approaches do not depend on the structure of the summarization model and thus are generalizable to any abstractive summarization systems.",2021
abnar-etal-2018-experiential,https://aclanthology.org/W18-0107,1,,,,health,,,"Experiential, Distributional and Dependency-based Word Embeddings have Complementary Roles in Decoding Brain Activity. We evaluate 8 different word embedding models on their usefulness for predicting the neural activation patterns associated with concrete nouns. The models we consider include an experiential model, based on crowd-sourced association data, several popular neural and distributional models, and a model that reflects the syntactic context of words (based on dependency parses). Our goal is to assess the cognitive plausibility of these various embedding models, and understand how we can further improve our methods for interpreting brain imaging data.
We show that neural word embedding models exhibit superior performance on the tasks we consider, beating the experiential word representation model. The syntactically informed model gives the overall best performance when predicting brain activation patterns from word embeddings; whereas the GloVe distributional method gives the overall best performance when predicting in the reverse direction (word vectors from brain images). Interestingly, however, the error patterns of these different models are markedly different. This may support the idea that the brain uses different systems for processing different kinds of words. Moreover, we suggest that taking the relative strengths of different embedding models into account will lead to better models of the brain activity associated with words.","Experiential, Distributional and Dependency-based Word Embeddings have Complementary Roles in Decoding Brain Activity","We evaluate 8 different word embedding models on their usefulness for predicting the neural activation patterns associated with concrete nouns. The models we consider include an experiential model, based on crowd-sourced association data, several popular neural and distributional models, and a model that reflects the syntactic context of words (based on dependency parses). Our goal is to assess the cognitive plausibility of these various embedding models, and understand how we can further improve our methods for interpreting brain imaging data.
We show that neural word embedding models exhibit superior performance on the tasks we consider, beating the experiential word representation model. The syntactically informed model gives the overall best performance when predicting brain activation patterns from word embeddings; whereas the GloVe distributional method gives the overall best performance when predicting in the reverse direction (word vectors from brain images). Interestingly, however, the error patterns of these different models are markedly different. This may support the idea that the brain uses different systems for processing different kinds of words. Moreover, we suggest that taking the relative strengths of different embedding models into account will lead to better models of the brain activity associated with words.","Experiential, Distributional and Dependency-based Word Embeddings have Complementary Roles in Decoding Brain Activity","We evaluate 8 different word embedding models on their usefulness for predicting the neural activation patterns associated with concrete nouns. The models we consider include an experiential model, based on crowd-sourced association data, several popular neural and distributional models, and a model that reflects the syntactic context of words (based on dependency parses). Our goal is to assess the cognitive plausibility of these various embedding models, and understand how we can further improve our methods for interpreting brain imaging data.
We show that neural word embedding models exhibit superior performance on the tasks we consider, beating the experiential word representation model. The syntactically informed model gives the overall best performance when predicting brain activation patterns from word embeddings; whereas the GloVe distributional method gives the overall best performance when predicting in the reverse direction (word vectors from brain images). Interestingly, however, the error patterns of these different models are markedly different. This may support the idea that the brain uses different systems for processing different kinds of words. Moreover, we suggest that taking the relative strengths of different embedding models into account will lead to better models of the brain activity associated with words.",,"Experiential, Distributional and Dependency-based Word Embeddings have Complementary Roles in Decoding Brain Activity. We evaluate 8 different word embedding models on their usefulness for predicting the neural activation patterns associated with concrete nouns. The models we consider include an experiential model, based on crowd-sourced association data, several popular neural and distributional models, and a model that reflects the syntactic context of words (based on dependency parses). Our goal is to assess the cognitive plausibility of these various embedding models, and understand how we can further improve our methods for interpreting brain imaging data.
We show that neural word embedding models exhibit superior performance on the tasks we consider, beating the experiential word representation model. The syntactically informed model gives the overall best performance when predicting brain activation patterns from word embeddings; whereas the GloVe distributional method gives the overall best performance when predicting in the reverse direction (word vectors from brain images). Interestingly, however, the error patterns of these different models are markedly different. This may support the idea that the brain uses different systems for processing different kinds of words. Moreover, we suggest that taking the relative strengths of different embedding models into account will lead to better models of the brain activity associated with words.",2018
buechel-etal-2019-time,https://aclanthology.org/D19-5103,0,,,,,,,"A Time Series Analysis of Emotional Loading in Central Bank Statements. We examine the affective content of central bank press statements using emotion analysis. Our focus is on two major international players, the European Central Bank (ECB) and the US Federal Reserve Bank (Fed), covering a time span from 1998 through 2019. We reveal characteristic patterns in the emotional dimensions of valence, arousal, and dominance and find-despite the commonly established attitude that emotional wording in central bank communication should be avoided-a correlation between the state of the economy and particularly the dominance dimension in the press releases under scrutiny and, overall, an impact of the president in office.",A Time Series Analysis of Emotional Loading in Central Bank Statements,"We examine the affective content of central bank press statements using emotion analysis. Our focus is on two major international players, the European Central Bank (ECB) and the US Federal Reserve Bank (Fed), covering a time span from 1998 through 2019. We reveal characteristic patterns in the emotional dimensions of valence, arousal, and dominance and find-despite the commonly established attitude that emotional wording in central bank communication should be avoided-a correlation between the state of the economy and particularly the dominance dimension in the press releases under scrutiny and, overall, an impact of the president in office.",A Time Series Analysis of Emotional Loading in Central Bank Statements,"We examine the affective content of central bank press statements using emotion analysis. Our focus is on two major international players, the European Central Bank (ECB) and the US Federal Reserve Bank (Fed), covering a time span from 1998 through 2019. We reveal characteristic patterns in the emotional dimensions of valence, arousal, and dominance and find-despite the commonly established attitude that emotional wording in central bank communication should be avoided-a correlation between the state of the economy and particularly the dominance dimension in the press releases under scrutiny and, overall, an impact of the president in office.",We would like to thank the anonymous reviewers for their detailed and constructive comments.,"A Time Series Analysis of Emotional Loading in Central Bank Statements. We examine the affective content of central bank press statements using emotion analysis. Our focus is on two major international players, the European Central Bank (ECB) and the US Federal Reserve Bank (Fed), covering a time span from 1998 through 2019. We reveal characteristic patterns in the emotional dimensions of valence, arousal, and dominance and find-despite the commonly established attitude that emotional wording in central bank communication should be avoided-a correlation between the state of the economy and particularly the dominance dimension in the press releases under scrutiny and, overall, an impact of the president in office.",2019
le-nagard-koehn-2010-aiding,https://aclanthology.org/W10-1737,0,,,,,,,Aiding Pronoun Translation with Co-Reference Resolution. We propose a method to improve the translation of pronouns by resolving their coreference to prior mentions. We report results using two different co-reference resolution methods and point to remaining challenges.,Aiding Pronoun Translation with Co-Reference Resolution,We propose a method to improve the translation of pronouns by resolving their coreference to prior mentions. We report results using two different co-reference resolution methods and point to remaining challenges.,Aiding Pronoun Translation with Co-Reference Resolution,We propose a method to improve the translation of pronouns by resolving their coreference to prior mentions. We report results using two different co-reference resolution methods and point to remaining challenges.,This work was supported by the EuroMatrixPlus project funded by the European Commission (7th Framework Programme).,Aiding Pronoun Translation with Co-Reference Resolution. We propose a method to improve the translation of pronouns by resolving their coreference to prior mentions. We report results using two different co-reference resolution methods and point to remaining challenges.,2010
chen-etal-2019-expanding,https://aclanthology.org/W19-7415,0,,,,,,,Expanding English and Chinese Dictionaries by Wikipedia Titles. This paper introduces our preliminary work in dictionary expansion by adding English and Chinese Wikipedia titles along with their linguistic features. Parts-of-speech of Chinese titles are determined by the majority of heads of their Wikipedia categories. Proper noun detection in English Wikipedia is done by checking the capitalization of the titles in the content of the articles. Title alternatives will be detected beforehand. Chinese proper noun detection is done via interlanguage links and POS. The estimated accuracy of POS determination is 71.67% and the accuracy of proper noun detection is about 83.32%.,Expanding {E}nglish and {C}hinese Dictionaries by {W}ikipedia Titles,This paper introduces our preliminary work in dictionary expansion by adding English and Chinese Wikipedia titles along with their linguistic features. Parts-of-speech of Chinese titles are determined by the majority of heads of their Wikipedia categories. Proper noun detection in English Wikipedia is done by checking the capitalization of the titles in the content of the articles. Title alternatives will be detected beforehand. Chinese proper noun detection is done via interlanguage links and POS. The estimated accuracy of POS determination is 71.67% and the accuracy of proper noun detection is about 83.32%.,Expanding English and Chinese Dictionaries by Wikipedia Titles,This paper introduces our preliminary work in dictionary expansion by adding English and Chinese Wikipedia titles along with their linguistic features. Parts-of-speech of Chinese titles are determined by the majority of heads of their Wikipedia categories. Proper noun detection in English Wikipedia is done by checking the capitalization of the titles in the content of the articles. Title alternatives will be detected beforehand. Chinese proper noun detection is done via interlanguage links and POS. The estimated accuracy of POS determination is 71.67% and the accuracy of proper noun detection is about 83.32%.,This research was funded by the Taiwan Ministry of Science and Technology (grant: MOST 106-2221-E-019-072.),Expanding English and Chinese Dictionaries by Wikipedia Titles. This paper introduces our preliminary work in dictionary expansion by adding English and Chinese Wikipedia titles along with their linguistic features. Parts-of-speech of Chinese titles are determined by the majority of heads of their Wikipedia categories. Proper noun detection in English Wikipedia is done by checking the capitalization of the titles in the content of the articles. Title alternatives will be detected beforehand. Chinese proper noun detection is done via interlanguage links and POS. The estimated accuracy of POS determination is 71.67% and the accuracy of proper noun detection is about 83.32%.,2019
bashier-etal-2021-disk,https://aclanthology.org/2021.eacl-main.263,0,,,,,,,"DISK-CSV: Distilling Interpretable Semantic Knowledge with a Class Semantic Vector. Neural networks (NN) applied to natural language processing (NLP) are becoming deeper and more complex, making them increasingly difficult to understand and interpret. Even in applications of limited scope on fixed data, the creation of these complex ""black-boxes"" creates substantial challenges for debugging, understanding, and generalization. But rapid development in this field has now led to building more straightforward and interpretable models. We propose a new technique (DISK-CSV) to distill knowledge concurrently from any neural network architecture for text classification, captured as a lightweight interpretable/explainable classifier. Across multiple datasets, our approach achieves better performance than the target black-box. In addition, our approach provides better explanations than existing techniques.",{DISK}-{CSV}: Distilling Interpretable Semantic Knowledge with a Class Semantic Vector,"Neural networks (NN) applied to natural language processing (NLP) are becoming deeper and more complex, making them increasingly difficult to understand and interpret. Even in applications of limited scope on fixed data, the creation of these complex ""black-boxes"" creates substantial challenges for debugging, understanding, and generalization. But rapid development in this field has now led to building more straightforward and interpretable models. We propose a new technique (DISK-CSV) to distill knowledge concurrently from any neural network architecture for text classification, captured as a lightweight interpretable/explainable classifier. Across multiple datasets, our approach achieves better performance than the target black-box. In addition, our approach provides better explanations than existing techniques.",DISK-CSV: Distilling Interpretable Semantic Knowledge with a Class Semantic Vector,"Neural networks (NN) applied to natural language processing (NLP) are becoming deeper and more complex, making them increasingly difficult to understand and interpret. Even in applications of limited scope on fixed data, the creation of these complex ""black-boxes"" creates substantial challenges for debugging, understanding, and generalization. But rapid development in this field has now led to building more straightforward and interpretable models. We propose a new technique (DISK-CSV) to distill knowledge concurrently from any neural network architecture for text classification, captured as a lightweight interpretable/explainable classifier. Across multiple datasets, our approach achieves better performance than the target black-box. In addition, our approach provides better explanations than existing techniques.","We acknowledge support from the Alberta Machine Intelligence Institute (AMII), from the Computing Science Department of the University of Alberta, and the Natural Sciences and Engineering Research Council of Canada (NSERC).","DISK-CSV: Distilling Interpretable Semantic Knowledge with a Class Semantic Vector. Neural networks (NN) applied to natural language processing (NLP) are becoming deeper and more complex, making them increasingly difficult to understand and interpret. Even in applications of limited scope on fixed data, the creation of these complex ""black-boxes"" creates substantial challenges for debugging, understanding, and generalization. 
But rapid development in this field has now led to building more straightforward and interpretable models. We propose a new technique (DISK-CSV) to distill knowledge concurrently from any neural network architecture for text classification, captured as a lightweight interpretable/explainable classifier. Across multiple datasets, our approach achieves better performance than the target black-box. In addition, our approach provides better explanations than existing techniques.",2021
belyaev-etal-2021-digitizing,https://aclanthology.org/2021.iwclul-1.7,0,,,,,,,"Digitizing print dictionaries using TEI: The Abaev Dictionary Project. We present the results of a year-long effort to create an electronic version of V. I. Abaev's Historical-etymological dictionary of Ossetic. The aim of the project is twofold: first, to create an English translation of the dictionary; second, to provide it (in both its Russian and English version) with a semantic markup that would make it searchable across multiple types of data and accessible for machine-based processing. Volume 1, whose preliminary version was completed in 2020, used the TshwaneLex (TLex) platform, which is perfectly adequate for dictionaries with a low to medium level of complexity, and which allows for almost WYSIWYG formatting and simple export into a publishable format. However, due to a number of limitations of TLex, it was necessary to transition to a more flexible and more powerful format. We settled on the Text Encoding Initiative, an XML-based format for the computational representation of published texts, used in a number of digital humanities projects. Using TEI also allowed the project to transition from the proprietary, closed system of TLex to the full range of tools available for XML and related technologies. We discuss the challenges that are faced by such large-scale dictionary projects, and the practices that we have adopted in order to avoid common pitfalls.",Digitizing print dictionaries using {TEI}: The Abaev Dictionary Project,"We present the results of a year-long effort to create an electronic version of V. I. Abaev's Historical-etymological dictionary of Ossetic. The aim of the project is twofold: first, to create an English translation of the dictionary; second, to provide it (in both its Russian and English version) with a semantic markup that would make it searchable across multiple types of data and accessible for machine-based processing. Volume 1, whose preliminary version was completed in 2020, used the TshwaneLex (TLex) platform, which is perfectly adequate for dictionaries with a low to medium level of complexity, and which allows for almost WYSIWYG formatting and simple export into a publishable format. However, due to a number of limitations of TLex, it was necessary to transition to a more flexible and more powerful format. We settled on the Text Encoding Initiative, an XML-based format for the computational representation of published texts, used in a number of digital humanities projects. Using TEI also allowed the project to transition from the proprietary, closed system of TLex to the full range of tools available for XML and related technologies. We discuss the challenges that are faced by such large-scale dictionary projects, and the practices that we have adopted in order to avoid common pitfalls.",Digitizing print dictionaries using TEI: The Abaev Dictionary Project,"We present the results of a year-long effort to create an electronic version of V. I. Abaev's Historical-etymological dictionary of Ossetic. The aim of the project is twofold: first, to create an English translation of the dictionary; second, to provide it (in both its Russian and English version) with a semantic markup that would make it searchable across multiple types of data and accessible for machine-based processing. 
Volume 1, whose preliminary version was completed in 2020, used the TshwaneLex (TLex) platform, which is perfectly adequate for dictionaries with a low to medium level of complexity, and which allows for almost WYSIWYG formatting and simple export into a publishable format. However, due to a number of limitations of TLex, it was necessary to transition to a more flexible and more powerful format. We settled on the Text Encoding Initiative, an XML-based format for the computational representation of published texts, used in a number of digital humanities projects. Using TEI also allowed the project to transition from the proprietary, closed system of TLex to the full range of tools available for XML and related technologies. We discuss the challenges that are faced by such large-scale dictionary projects, and the practices that we have adopted in order to avoid common pitfalls.",,"Digitizing print dictionaries using TEI: The Abaev Dictionary Project. We present the results of a year-long effort to create an electronic version of V. I. Abaev's Historical-etymological dictionary of Ossetic. The aim of the project is twofold: first, to create an English translation of the dictionary; second, to provide it (in both its Russian and English version) with a semantic markup that would make it searchable across multiple types of data and accessible for machine-based processing. Volume 1, whose preliminary version was completed in 2020, used the TshwaneLex (TLex) platform, which is perfectly adequate for dictionaries with a low to medium level of complexity, and which allows for almost WYSIWYG formatting and simple export into a publishable format. However, due to a number of limitations of TLex, it was necessary to transition to a more flexible and more powerful format. We settled on the Text Encoding Initiative, an XML-based format for the computational representation of published texts, used in a number of digital humanities projects. Using TEI also allowed the project to transition from the proprietary, closed system of TLex to the full range of tools available for XML and related technologies. We discuss the challenges that are faced by such large-scale dictionary projects, and the practices that we have adopted in order to avoid common pitfalls.",2021
dybkjaer-dybkjaer-2006-act,http://www.lrec-conf.org/proceedings/lrec2006/pdf/471_pdf.pdf,0,,,,,,,"Act-Topic Patterns for Automatically Checking Dialogue Models. When dialogue models are evaluated today, this is normally done by using some evaluation method to collect data, often involving users interacting with the system model, and then subsequently analysing the collected data. We present a tool called DialogDesigner that enables automatic evaluation performed directly on the dialogue model and that does not require any data collection first. DialogDesigner is a tool in support of rapid design and evaluation of dialogue models. The first version was developed in 2005 and enabled developers to create an electronic dialogue model, get various graphical views of the model, run a Wizard-of-Oz (WOZ) simulation session, and extract different presentations in HTML. The second version includes extensions in terms of support for automatic dialogue model evaluation. Various aspects of dialogue model well-formedness can be automatically checked. Some of the automatic analyses simply perform checks based on the state and transition structure of the dialogue model while the core part are based on act-topic annotation of prompts and transitions in the dialogue model and specification of act-topic patterns. This paper focuses on the version 2 extensions.",Act-Topic Patterns for Automatically Checking Dialogue Models,"When dialogue models are evaluated today, this is normally done by using some evaluation method to collect data, often involving users interacting with the system model, and then subsequently analysing the collected data. We present a tool called DialogDesigner that enables automatic evaluation performed directly on the dialogue model and that does not require any data collection first. DialogDesigner is a tool in support of rapid design and evaluation of dialogue models. The first version was developed in 2005 and enabled developers to create an electronic dialogue model, get various graphical views of the model, run a Wizard-of-Oz (WOZ) simulation session, and extract different presentations in HTML. The second version includes extensions in terms of support for automatic dialogue model evaluation. Various aspects of dialogue model well-formedness can be automatically checked. Some of the automatic analyses simply perform checks based on the state and transition structure of the dialogue model while the core part are based on act-topic annotation of prompts and transitions in the dialogue model and specification of act-topic patterns. This paper focuses on the version 2 extensions.",Act-Topic Patterns for Automatically Checking Dialogue Models,"When dialogue models are evaluated today, this is normally done by using some evaluation method to collect data, often involving users interacting with the system model, and then subsequently analysing the collected data. We present a tool called DialogDesigner that enables automatic evaluation performed directly on the dialogue model and that does not require any data collection first. DialogDesigner is a tool in support of rapid design and evaluation of dialogue models. The first version was developed in 2005 and enabled developers to create an electronic dialogue model, get various graphical views of the model, run a Wizard-of-Oz (WOZ) simulation session, and extract different presentations in HTML. The second version includes extensions in terms of support for automatic dialogue model evaluation. 
Various aspects of dialogue model well-formedness can be automatically checked. Some of the automatic analyses simply perform checks based on the state and transition structure of the dialogue model while the core part are based on act-topic annotation of prompts and transitions in the dialogue model and specification of act-topic patterns. This paper focuses on the version 2 extensions.",,"Act-Topic Patterns for Automatically Checking Dialogue Models. When dialogue models are evaluated today, this is normally done by using some evaluation method to collect data, often involving users interacting with the system model, and then subsequently analysing the collected data. We present a tool called DialogDesigner that enables automatic evaluation performed directly on the dialogue model and that does not require any data collection first. DialogDesigner is a tool in support of rapid design and evaluation of dialogue models. The first version was developed in 2005 and enabled developers to create an electronic dialogue model, get various graphical views of the model, run a Wizard-of-Oz (WOZ) simulation session, and extract different presentations in HTML. The second version includes extensions in terms of support for automatic dialogue model evaluation. Various aspects of dialogue model well-formedness can be automatically checked. Some of the automatic analyses simply perform checks based on the state and transition structure of the dialogue model while the core part are based on act-topic annotation of prompts and transitions in the dialogue model and specification of act-topic patterns. This paper focuses on the version 2 extensions.",2006
zhou-etal-2016-evaluating,https://aclanthology.org/L16-1104,0,,,,,,,"Evaluating a Deterministic Shift-Reduce Neural Parser for Constituent Parsing. Greedy transition-based parsers are appealing for their very fast speed, with reasonably high accuracies. In this paper, we build a fast shift-reduce neural constituent parser by using a neural network to make local decisions. One challenge to the parsing speed is the large hidden and output layer sizes caused by the number of constituent labels and branching options. We speed up the parser by using a hierarchical output layer, inspired by the hierarchical log-bilinear neural language model. In standard WSJ experiments, the neural parser achieves an almost 2.4 time speed up (320 sen/sec) compared to a non-hierarchical baseline without significant accuracy loss (89.06 vs 89.13 F-score).",Evaluating a Deterministic Shift-Reduce Neural Parser for Constituent Parsing,"Greedy transition-based parsers are appealing for their very fast speed, with reasonably high accuracies. In this paper, we build a fast shift-reduce neural constituent parser by using a neural network to make local decisions. One challenge to the parsing speed is the large hidden and output layer sizes caused by the number of constituent labels and branching options. We speed up the parser by using a hierarchical output layer, inspired by the hierarchical log-bilinear neural language model. In standard WSJ experiments, the neural parser achieves an almost 2.4 time speed up (320 sen/sec) compared to a non-hierarchical baseline without significant accuracy loss (89.06 vs 89.13 F-score).",Evaluating a Deterministic Shift-Reduce Neural Parser for Constituent Parsing,"Greedy transition-based parsers are appealing for their very fast speed, with reasonably high accuracies. In this paper, we build a fast shift-reduce neural constituent parser by using a neural network to make local decisions. One challenge to the parsing speed is the large hidden and output layer sizes caused by the number of constituent labels and branching options. We speed up the parser by using a hierarchical output layer, inspired by the hierarchical log-bilinear neural language model. In standard WSJ experiments, the neural parser achieves an almost 2.4 time speed up (320 sen/sec) compared to a non-hierarchical baseline without significant accuracy loss (89.06 vs 89.13 F-score).",,"Evaluating a Deterministic Shift-Reduce Neural Parser for Constituent Parsing. Greedy transition-based parsers are appealing for their very fast speed, with reasonably high accuracies. In this paper, we build a fast shift-reduce neural constituent parser by using a neural network to make local decisions. One challenge to the parsing speed is the large hidden and output layer sizes caused by the number of constituent labels and branching options. We speed up the parser by using a hierarchical output layer, inspired by the hierarchical log-bilinear neural language model. In standard WSJ experiments, the neural parser achieves an almost 2.4 time speed up (320 sen/sec) compared to a non-hierarchical baseline without significant accuracy loss (89.06 vs 89.13 F-score).",2016
krahmer-van-der-sluis-2003-new,https://aclanthology.org/W03-2307,0,,,,,,,A New Model for Generating Multimodal Referring Expressions. ,A New Model for Generating Multimodal Referring Expressions,,A New Model for Generating Multimodal Referring Expressions,,,A New Model for Generating Multimodal Referring Expressions. ,2003
yang-etal-2014-towards,https://aclanthology.org/W14-4104,1,,,,education,,,"Towards Identifying the Resolvability of Threads in MOOCs. One important function of the discussion forums of Massive Open Online Courses (MOOCs) is for students to post problems they are unable to resolve and receive help from their peers and instructors. There are a large proportion of threads that are not resolved to the satisfaction of the students for various reasons. In this paper, we attack this problem by firstly constructing a conceptual model validated using a Structural Equation Modeling technique, which enables us to understand the factors that influence whether a problem thread is satisfactorily resolved. We then demonstrate the robustness of these findings using a predictive model that illustrates how accurately those factors can be used to predict whether a thread is resolved or unresolved. Experiments conducted on one MOOC show that thread resolvability connects closely to our proposed five dimensions and that the predictive ensemble model gives better performance over several baselines.",Towards Identifying the Resolvability of Threads in {MOOC}s,"One important function of the discussion forums of Massive Open Online Courses (MOOCs) is for students to post problems they are unable to resolve and receive help from their peers and instructors. There are a large proportion of threads that are not resolved to the satisfaction of the students for various reasons. In this paper, we attack this problem by firstly constructing a conceptual model validated using a Structural Equation Modeling technique, which enables us to understand the factors that influence whether a problem thread is satisfactorily resolved. We then demonstrate the robustness of these findings using a predictive model that illustrates how accurately those factors can be used to predict whether a thread is resolved or unresolved. Experiments conducted on one MOOC show that thread resolvability connects closely to our proposed five dimensions and that the predictive ensemble model gives better performance over several baselines.",Towards Identifying the Resolvability of Threads in MOOCs,"One important function of the discussion forums of Massive Open Online Courses (MOOCs) is for students to post problems they are unable to resolve and receive help from their peers and instructors. There are a large proportion of threads that are not resolved to the satisfaction of the students for various reasons. In this paper, we attack this problem by firstly constructing a conceptual model validated using a Structural Equation Modeling technique, which enables us to understand the factors that influence whether a problem thread is satisfactorily resolved. We then demonstrate the robustness of these findings using a predictive model that illustrates how accurately those factors can be used to predict whether a thread is resolved or unresolved. Experiments conducted on one MOOC show that thread resolvability connects closely to our proposed five dimensions and that the predictive ensemble model gives better performance over several baselines.",This research was funded in part by NSF grants IIS-1320064 and OMA-0836012 and funding from Google.,"Towards Identifying the Resolvability of Threads in MOOCs. One important function of the discussion forums of Massive Open Online Courses (MOOCs) is for students to post problems they are unable to resolve and receive help from their peers and instructors. 
There are a large proportion of threads that are not resolved to the satisfaction of the students for various reasons. In this paper, we attack this problem by firstly constructing a conceptual model validated using a Structural Equation Modeling technique, which enables us to understand the factors that influence whether a problem thread is satisfactorily resolved. We then demonstrate the robustness of these findings using a predictive model that illustrates how accurately those factors can be used to predict whether a thread is resolved or unresolved. Experiments conducted on one MOOC show that thread resolvability connects closely to our proposed five dimensions and that the predictive ensemble model gives better performance over several baselines.",2014
li-nenkova-2014-reducing,https://aclanthology.org/W14-4327,0,,,,,,,"Reducing Sparsity Improves the Recognition of Implicit Discourse Relations. The earliest work on automatic detection of implicit discourse relations relied on lexical features. More recently, researchers have demonstrated that syntactic features are superior to lexical features for the task. In this paper we reexamine the two classes of state of the art representations: syntactic production rules and word pair features. In particular, we focus on the need to reduce sparsity in instance representation, demonstrating that different representation choices even for the same class of features may exacerbate sparsity issues and reduce performance. We present results that clearly reveal that lexicalization of the syntactic features is necessary for good performance. We introduce a novel, less sparse, syntactic representation which leads to improvement in discourse relation recognition. Finally, we demonstrate that classifiers trained on different representations, especially lexical ones, behave rather differently and thus could likely be combined in future systems.",Reducing Sparsity Improves the Recognition of Implicit Discourse Relations,"The earliest work on automatic detection of implicit discourse relations relied on lexical features. More recently, researchers have demonstrated that syntactic features are superior to lexical features for the task. In this paper we reexamine the two classes of state of the art representations: syntactic production rules and word pair features. In particular, we focus on the need to reduce sparsity in instance representation, demonstrating that different representation choices even for the same class of features may exacerbate sparsity issues and reduce performance. We present results that clearly reveal that lexicalization of the syntactic features is necessary for good performance. We introduce a novel, less sparse, syntactic representation which leads to improvement in discourse relation recognition. Finally, we demonstrate that classifiers trained on different representations, especially lexical ones, behave rather differently and thus could likely be combined in future systems.",Reducing Sparsity Improves the Recognition of Implicit Discourse Relations,"The earliest work on automatic detection of implicit discourse relations relied on lexical features. More recently, researchers have demonstrated that syntactic features are superior to lexical features for the task. In this paper we reexamine the two classes of state of the art representations: syntactic production rules and word pair features. In particular, we focus on the need to reduce sparsity in instance representation, demonstrating that different representation choices even for the same class of features may exacerbate sparsity issues and reduce performance. We present results that clearly reveal that lexicalization of the syntactic features is necessary for good performance. We introduce a novel, less sparse, syntactic representation which leads to improvement in discourse relation recognition. Finally, we demonstrate that classifiers trained on different representations, especially lexical ones, behave rather differently and thus could likely be combined in future systems.",,"Reducing Sparsity Improves the Recognition of Implicit Discourse Relations. The earliest work on automatic detection of implicit discourse relations relied on lexical features. 
More recently, researchers have demonstrated that syntactic features are superior to lexical features for the task. In this paper we reexamine the two classes of state of the art representations: syntactic production rules and word pair features. In particular, we focus on the need to reduce sparsity in instance representation, demonstrating that different representation choices even for the same class of features may exacerbate sparsity issues and reduce performance. We present results that clearly reveal that lexicalization of the syntactic features is necessary for good performance. We introduce a novel, less sparse, syntactic representation which leads to improvement in discourse relation recognition. Finally, we demonstrate that classifiers trained on different representations, especially lexical ones, behave rather differently and thus could likely be combined in future systems.",2014
sido-etal-2021-czert,https://aclanthology.org/2021.ranlp-1.149,0,,,,,,,"Czert -- Czech BERT-like Model for Language Representation. This paper describes the training process of the first Czech monolingual language representation models based on BERT and ALBERT architectures. We pre-train our models on more than 340K of sentences, which is 50 times more than multilingual models that include Czech data. We outperform the multilingual models on 9 out of 11 datasets. In addition, we establish the new state-of-the-art results on nine datasets. At the end, we discuss properties of monolingual and multilingual models based upon our results. We publish all the pretrained and fine-tuned models freely for the research community.",Czert {--} {C}zech {BERT}-like Model for Language Representation,"This paper describes the training process of the first Czech monolingual language representation models based on BERT and ALBERT architectures. We pre-train our models on more than 340K of sentences, which is 50 times more than multilingual models that include Czech data. We outperform the multilingual models on 9 out of 11 datasets. In addition, we establish the new state-of-the-art results on nine datasets. At the end, we discuss properties of monolingual and multilingual models based upon our results. We publish all the pretrained and fine-tuned models freely for the research community.",Czert -- Czech BERT-like Model for Language Representation,"This paper describes the training process of the first Czech monolingual language representation models based on BERT and ALBERT architectures. We pre-train our models on more than 340K of sentences, which is 50 times more than multilingual models that include Czech data. We outperform the multilingual models on 9 out of 11 datasets. In addition, we establish the new state-of-the-art results on nine datasets. At the end, we discuss properties of monolingual and multilingual models based upon our results. We publish all the pretrained and fine-tuned models freely for the research community.","This work has been partly supported by ERDF ""Research and Development of Intelligent Components of Advanced Technologies for the Pilsen Metropolitan Area (InteCom)"" (no.: CZ.02.1.01/0.0/0.0/17 048/0007267); and by Grant No. SGS-2019-018 Processing of heterogeneous data and its specialized applications. Computational resources were supplied by the project ""e-Infrastruktura CZ"" (e-INFRA LM2018140) provided within the program Projects of Large Research, Development and Innovations Infrastructures.","Czert -- Czech BERT-like Model for Language Representation. This paper describes the training process of the first Czech monolingual language representation models based on BERT and ALBERT architectures. We pre-train our models on more than 340K of sentences, which is 50 times more than multilingual models that include Czech data. We outperform the multilingual models on 9 out of 11 datasets. In addition, we establish the new state-of-the-art results on nine datasets. At the end, we discuss properties of monolingual and multilingual models based upon our results. We publish all the pretrained and fine-tuned models freely for the research community.",2021
gehrmann-etal-2021-gem,https://aclanthology.org/2021.gem-1.10,0,,,,,,,"The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics. We introduce GEM, a living benchmark for natural language Generation (NLG), its Evaluation, and Metrics. Measuring progress in NLG relies on a constantly evolving ecosystem of automated metrics, datasets, and human evaluation standards. Due to this moving target, new models often still evaluate on divergent anglo-centric corpora with well-established, but flawed, metrics. This disconnect makes it challenging to identify the limitations of current models and opportunities for progress. Addressing this limitation, GEM provides an environment in which models can easily be applied to a wide set of tasks and in which evaluation strategies can be tested. Regular updates to the benchmark will help NLG research become more multilingual and evolve the challenge alongside models. This paper serves as the description of the data for which we are organizing a shared task at our ACL 2021 Workshop and to which we invite the entire NLG community to participate.","The {GEM} Benchmark: Natural Language Generation, its Evaluation and Metrics","We introduce GEM, a living benchmark for natural language Generation (NLG), its Evaluation, and Metrics. Measuring progress in NLG relies on a constantly evolving ecosystem of automated metrics, datasets, and human evaluation standards. Due to this moving target, new models often still evaluate on divergent anglo-centric corpora with well-established, but flawed, metrics. This disconnect makes it challenging to identify the limitations of current models and opportunities for progress. Addressing this limitation, GEM provides an environment in which models can easily be applied to a wide set of tasks and in which evaluation strategies can be tested. Regular updates to the benchmark will help NLG research become more multilingual and evolve the challenge alongside models. This paper serves as the description of the data for which we are organizing a shared task at our ACL 2021 Workshop and to which we invite the entire NLG community to participate.","The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics","We introduce GEM, a living benchmark for natural language Generation (NLG), its Evaluation, and Metrics. Measuring progress in NLG relies on a constantly evolving ecosystem of automated metrics, datasets, and human evaluation standards. Due to this moving target, new models often still evaluate on divergent anglo-centric corpora with well-established, but flawed, metrics. This disconnect makes it challenging to identify the limitations of current models and opportunities for progress. Addressing this limitation, GEM provides an environment in which models can easily be applied to a wide set of tasks and in which evaluation strategies can be tested. Regular updates to the benchmark will help NLG research become more multilingual and evolve the challenge alongside models. This paper serves as the description of the data for which we are organizing a shared task at our ACL 2021 Workshop and to which we invite the entire NLG community to participate.","The authors of this paper not named in the groups participated in initial discussions, participated in the surveys, and provided regular feedback and guidance. Many participants commented on and helped write this paper. 
We additionally thank all participants of INLG 2019, the Generation Birds-of-a-Feather meeting at ACL 2020, the EvalNLGEval Workshop at INLG 2020, and members of the generation challenge mailing list of SIGGEN for their participation in the discussions that inspired and influenced the creation of GEM.","The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics. We introduce GEM, a living benchmark for natural language Generation (NLG), its Evaluation, and Metrics. Measuring progress in NLG relies on a constantly evolving ecosystem of automated metrics, datasets, and human evaluation standards. Due to this moving target, new models often still evaluate on divergent anglo-centric corpora with well-established, but flawed, metrics. This disconnect makes it challenging to identify the limitations of current models and opportunities for progress. Addressing this limitation, GEM provides an environment in which models can easily be applied to a wide set of tasks and in which evaluation strategies can be tested. Regular updates to the benchmark will help NLG research become more multilingual and evolve the challenge alongside models. This paper serves as the description of the data for which we are organizing a shared task at our ACL 2021 Workshop and to which we invite the entire NLG community to participate.",2021
novak-novak-2021-transfer,https://aclanthology.org/2021.ranlp-1.119,0,,,,,,,"Transfer-based Enrichment of a Hungarian Named Entity Dataset. In this paper, we present a major update to the first Hungarian named entity dataset, the Szeged NER corpus. We used zero-shot crosslingual transfer to initialize the enrichment of entity types annotated in the corpus using three neural NER models: two of them based on the English OntoNotes corpus and one based on the Czech Named Entity Corpus fine-tuned from multilingual neural language models. The output of the models was automatically merged with the original NER annotation, and automatically and manually corrected and further enriched with additional annotation, like qualifiers for various entity types. We present the evaluation of the zero-shot performance of the two OntoNotes-based models and a transformer-based new NER model trained on the training part of the final corpus. We release the corpus and the trained model.",Transfer-based Enrichment of a {H}ungarian Named Entity Dataset,"In this paper, we present a major update to the first Hungarian named entity dataset, the Szeged NER corpus. We used zero-shot crosslingual transfer to initialize the enrichment of entity types annotated in the corpus using three neural NER models: two of them based on the English OntoNotes corpus and one based on the Czech Named Entity Corpus fine-tuned from multilingual neural language models. The output of the models was automatically merged with the original NER annotation, and automatically and manually corrected and further enriched with additional annotation, like qualifiers for various entity types. We present the evaluation of the zero-shot performance of the two OntoNotes-based models and a transformer-based new NER model trained on the training part of the final corpus. We release the corpus and the trained model.",Transfer-based Enrichment of a Hungarian Named Entity Dataset,"In this paper, we present a major update to the first Hungarian named entity dataset, the Szeged NER corpus. We used zero-shot crosslingual transfer to initialize the enrichment of entity types annotated in the corpus using three neural NER models: two of them based on the English OntoNotes corpus and one based on the Czech Named Entity Corpus fine-tuned from multilingual neural language models. The output of the models was automatically merged with the original NER annotation, and automatically and manually corrected and further enriched with additional annotation, like qualifiers for various entity types. We present the evaluation of the zero-shot performance of the two OntoNotes-based models and a transformer-based new NER model trained on the training part of the final corpus. We release the corpus and the trained model.","This research was implemented with support provided by grants FK 125217 and PD 125216 of the National Research, Development and Innovation Office of Hungary financed under the FK 17 and PD 17 funding schemes as well as through the Artificial Intelligence National Excellence Program (grant no.: 2018-1.2.1-NKP-2018-00008).","Transfer-based Enrichment of a Hungarian Named Entity Dataset. In this paper, we present a major update to the first Hungarian named entity dataset, the Szeged NER corpus. 
We used zero-shot crosslingual transfer to initialize the enrichment of entity types annotated in the corpus using three neural NER models: two of them based on the English OntoNotes corpus and one based on the Czech Named Entity Corpus fine-tuned from multilingual neural language models. The output of the models was automatically merged with the original NER annotation, and automatically and manually corrected and further enriched with additional annotation, like qualifiers for various entity types. We present the evaluation of the zero-shot performance of the two OntoNotes-based models and a transformer-based new NER model trained on the training part of the final corpus. We release the corpus and the trained model.",2021
hu-etal-2019-texar,https://aclanthology.org/P19-3027,0,,,,,,,"Texar: A Modularized, Versatile, and Extensible Toolkit for Text Generation. We introduce Texar, an open-source toolkit aiming to support the broad set of text generation tasks that transform any inputs into natural language, such as machine translation, summarization, dialog, content manipulation, and so forth. With the design goals of modularity, versatility, and extensibility in mind, Texar extracts common patterns underlying the diverse tasks and methodologies, creates a library of highly reusable modules and functionalities, and allows arbitrary model architectures and algorithmic paradigms. In Texar, model architecture, inference, and learning processes are properly decomposed. Modules at a high concept level can be freely assembled or plugged in/swapped out. Texar is thus particularly suitable for researchers and practitioners to do fast prototyping and experimentation. The versatile toolkit also fosters technique sharing across different text generation tasks. Texar supports both TensorFlow and PyTorch, and is released under Apache License 2.0 at https://www.texar.io.","{T}exar: A Modularized, Versatile, and Extensible Toolkit for Text Generation","We introduce Texar, an open-source toolkit aiming to support the broad set of text generation tasks that transform any inputs into natural language, such as machine translation, summarization, dialog, content manipulation, and so forth. With the design goals of modularity, versatility, and extensibility in mind, Texar extracts common patterns underlying the diverse tasks and methodologies, creates a library of highly reusable modules and functionalities, and allows arbitrary model architectures and algorithmic paradigms. In Texar, model architecture, inference, and learning processes are properly decomposed. Modules at a high concept level can be freely assembled or plugged in/swapped out. Texar is thus particularly suitable for researchers and practitioners to do fast prototyping and experimentation. The versatile toolkit also fosters technique sharing across different text generation tasks. Texar supports both TensorFlow and PyTorch, and is released under Apache License 2.0 at https://www.texar.io.","Texar: A Modularized, Versatile, and Extensible Toolkit for Text Generation","We introduce Texar, an open-source toolkit aiming to support the broad set of text generation tasks that transform any inputs into natural language, such as machine translation, summarization, dialog, content manipulation, and so forth. With the design goals of modularity, versatility, and extensibility in mind, Texar extracts common patterns underlying the diverse tasks and methodologies, creates a library of highly reusable modules and functionalities, and allows arbitrary model architectures and algorithmic paradigms. In Texar, model architecture, inference, and learning processes are properly decomposed. Modules at a high concept level can be freely assembled or plugged in/swapped out. Texar is thus particularly suitable for researchers and practitioners to do fast prototyping and experimentation. The versatile toolkit also fosters technique sharing across different text generation tasks. Texar supports both TensorFlow and PyTorch, and is released under Apache License 2.0 at https://www.texar.io.",,"Texar: A Modularized, Versatile, and Extensible Toolkit for Text Generation. 
We introduce Texar, an open-source toolkit aiming to support the broad set of text generation tasks that transform any inputs into natural language, such as machine translation, summarization, dialog, content manipulation, and so forth. With the design goals of modularity, versatility, and extensibility in mind, Texar extracts common patterns underlying the diverse tasks and methodologies, creates a library of highly reusable modules and functionalities, and allows arbitrary model architectures and algorithmic paradigms. In Texar, model architecture, inference, and learning processes are properly decomposed. Modules at a high concept level can be freely assembled or plugged in/swapped out. Texar is thus particularly suitable for researchers and practitioners to do fast prototyping and experimentation. The versatile toolkit also fosters technique sharing across different text generation tasks. Texar supports both TensorFlow and PyTorch, and is released under Apache License 2.0 at https://www.texar.io.",2019
zhao-huang-1998-quasi,https://aclanthology.org/C98-1001,0,,,,,,,A Quasi-Dependency Model for Structural Analysis of Chinese BaseNPs. The paper puts forward a quasi-dependency model for structural analysis of Chinese baseNPs and an MDL-based algorithm for quasi-dependency-strength acquisition. The experiments show that the proposed model is more suitable for Chinese baseNP analysis and the proposed MDL-based algorithm is superior to the traditional ML-based algorithm. The paper also discusses the problem of incorporating the linguistic knowledge into the above statistical model.,A Quasi-Dependency Model for Structural Analysis of {C}hinese {B}ase{NP}s,The paper puts forward a quasi-dependency model for structural analysis of Chinese baseNPs and an MDL-based algorithm for quasi-dependency-strength acquisition. The experiments show that the proposed model is more suitable for Chinese baseNP analysis and the proposed MDL-based algorithm is superior to the traditional ML-based algorithm. The paper also discusses the problem of incorporating the linguistic knowledge into the above statistical model.,A Quasi-Dependency Model for Structural Analysis of Chinese BaseNPs,The paper puts forward a quasi-dependency model for structural analysis of Chinese baseNPs and an MDL-based algorithm for quasi-dependency-strength acquisition. The experiments show that the proposed model is more suitable for Chinese baseNP analysis and the proposed MDL-based algorithm is superior to the traditional ML-based algorithm. The paper also discusses the problem of incorporating the linguistic knowledge into the above statistical model.,,A Quasi-Dependency Model for Structural Analysis of Chinese BaseNPs. The paper puts forward a quasi-dependency model for structural analysis of Chinese baseNPs and an MDL-based algorithm for quasi-dependency-strength acquisition. The experiments show that the proposed model is more suitable for Chinese baseNP analysis and the proposed MDL-based algorithm is superior to the traditional ML-based algorithm. The paper also discusses the problem of incorporating the linguistic knowledge into the above statistical model.,1998
filimonov-harper-2007-recovery,https://aclanthology.org/D07-1065,0,,,,,,,"Recovery of Empty Nodes in Parse Structures. In this paper, we describe a new algorithm for recovering WH-trace empty nodes. Our approach combines a set of handwritten patterns together with a probabilistic model. Because the patterns heavily utilize regular expressions, the pertinent tree structures are covered using a limited number of patterns. The probabilistic model is essentially a probabilistic context-free grammar (PCFG) approach with the patterns acting as the terminals in production rules. We evaluate the algorithm's performance on gold trees and parser output using three different metrics. Our method compares favorably with state-of-the-art algorithms that recover WH-traces.",Recovery of Empty Nodes in Parse Structures,"In this paper, we describe a new algorithm for recovering WH-trace empty nodes. Our approach combines a set of handwritten patterns together with a probabilistic model. Because the patterns heavily utilize regular expressions, the pertinent tree structures are covered using a limited number of patterns. The probabilistic model is essentially a probabilistic context-free grammar (PCFG) approach with the patterns acting as the terminals in production rules. We evaluate the algorithm's performance on gold trees and parser output using three different metrics. Our method compares favorably with state-of-the-art algorithms that recover WH-traces.",Recovery of Empty Nodes in Parse Structures,"In this paper, we describe a new algorithm for recovering WH-trace empty nodes. Our approach combines a set of handwritten patterns together with a probabilistic model. Because the patterns heavily utilize regular expressions, the pertinent tree structures are covered using a limited number of patterns. The probabilistic model is essentially a probabilistic context-free grammar (PCFG) approach with the patterns acting as the terminals in production rules. We evaluate the algorithm's performance on gold trees and parser output using three different metrics. Our method compares favorably with state-of-the-art algorithms that recover WH-traces.","We would like to thank Ryan Gabbard for providing us output from his algorithm for evaluation. We would also like to thank the anonymous reviewers for invaluable comments. This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR0011-06-C-0023. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.","Recovery of Empty Nodes in Parse Structures. In this paper, we describe a new algorithm for recovering WH-trace empty nodes. Our approach combines a set of handwritten patterns together with a probabilistic model. Because the patterns heavily utilize regular expressions, the pertinent tree structures are covered using a limited number of patterns. The probabilistic model is essentially a probabilistic context-free grammar (PCFG) approach with the patterns acting as the terminals in production rules. We evaluate the algorithm's performance on gold trees and parser output using three different metrics. Our method compares favorably with state-of-the-art algorithms that recover WH-traces.",2007
linzen-2020-accelerate,https://aclanthology.org/2020.acl-main.465,0,,,,,,,"How Can We Accelerate Progress Towards Human-like Linguistic Generalization?. This position paper describes and critiques the Pretraining-Agnostic Identically Distributed (PAID) evaluation paradigm, which has become a central tool for measuring progress in natural language understanding. This paradigm consists of three stages: (1) pretraining of a word prediction model on a corpus of arbitrary size; (2) fine-tuning (transfer learning) on a training set representing a classification task; (3) evaluation on a test set drawn from the same distribution as that training set. This paradigm favors simple, low-bias architectures, which, first, can be scaled to process vast amounts of data, and second, can capture the fine-grained statistical properties of a particular data set, regardless of whether those properties are likely to generalize to examples of the task outside the data set. This contrasts with humans, who learn language from several orders of magnitude less data than the systems favored by this evaluation paradigm, and generalize to new tasks in a consistent way. We advocate for supplementing or replacing PAID with paradigms that reward architectures that generalize as quickly and robustly as humans.",How Can We Accelerate Progress Towards Human-like Linguistic Generalization?,"This position paper describes and critiques the Pretraining-Agnostic Identically Distributed (PAID) evaluation paradigm, which has become a central tool for measuring progress in natural language understanding. This paradigm consists of three stages: (1) pretraining of a word prediction model on a corpus of arbitrary size; (2) fine-tuning (transfer learning) on a training set representing a classification task; (3) evaluation on a test set drawn from the same distribution as that training set. This paradigm favors simple, low-bias architectures, which, first, can be scaled to process vast amounts of data, and second, can capture the fine-grained statistical properties of a particular data set, regardless of whether those properties are likely to generalize to examples of the task outside the data set. This contrasts with humans, who learn language from several orders of magnitude less data than the systems favored by this evaluation paradigm, and generalize to new tasks in a consistent way. We advocate for supplementing or replacing PAID with paradigms that reward architectures that generalize as quickly and robustly as humans.",How Can We Accelerate Progress Towards Human-like Linguistic Generalization?,"This position paper describes and critiques the Pretraining-Agnostic Identically Distributed (PAID) evaluation paradigm, which has become a central tool for measuring progress in natural language understanding. This paradigm consists of three stages: (1) pretraining of a word prediction model on a corpus of arbitrary size; (2) fine-tuning (transfer learning) on a training set representing a classification task; (3) evaluation on a test set drawn from the same distribution as that training set. This paradigm favors simple, low-bias architectures, which, first, can be scaled to process vast amounts of data, and second, can capture the fine-grained statistical properties of a particular data set, regardless of whether those properties are likely to generalize to examples of the task outside the data set. 
This contrasts with humans, who learn language from several orders of magnitude less data than the systems favored by this evaluation paradigm, and generalize to new tasks in a consistent way. We advocate for supplementing or replacing PAID with paradigms that reward architectures that generalize as quickly and robustly as humans.",,"How Can We Accelerate Progress Towards Human-like Linguistic Generalization?. This position paper describes and critiques the Pretraining-Agnostic Identically Distributed (PAID) evaluation paradigm, which has become a central tool for measuring progress in natural language understanding. This paradigm consists of three stages: (1) pretraining of a word prediction model on a corpus of arbitrary size; (2) fine-tuning (transfer learning) on a training set representing a classification task; (3) evaluation on a test set drawn from the same distribution as that training set. This paradigm favors simple, low-bias architectures, which, first, can be scaled to process vast amounts of data, and second, can capture the fine-grained statistical properties of a particular data set, regardless of whether those properties are likely to generalize to examples of the task outside the data set. This contrasts with humans, who learn language from several orders of magnitude less data than the systems favored by this evaluation paradigm, and generalize to new tasks in a consistent way. We advocate for supplementing or replacing PAID with paradigms that reward architectures that generalize as quickly and robustly as humans.",2020
jin-hauptmann-2002-new,https://aclanthology.org/C02-1137,0,,,,,,,"A New Probabilistic Model for Title Generation. Title generation is a complex task involving both natural language understanding and natural language synthesis. In this paper, we propose a new probabilistic model for title generation. Different from the previous statistical models for title generation, which treat title generation as a generation process that converts the 'document representation' of information directly into a 'title representation' of the same information, this model introduces a hidden state called 'information source' and divides title generation into two steps, namely the step of distilling the 'information source' from the observation of a document and the step of generating a title from the estimated 'information source'. In our experiment, the new probabilistic model outperforms the previous model for title generation in terms of both automatic evaluations and human judgments.",A New Probabilistic Model for Title Generation,"Title generation is a complex task involving both natural language understanding and natural language synthesis. In this paper, we propose a new probabilistic model for title generation. Different from the previous statistical models for title generation, which treat title generation as a generation process that converts the 'document representation' of information directly into a 'title representation' of the same information, this model introduces a hidden state called 'information source' and divides title generation into two steps, namely the step of distilling the 'information source' from the observation of a document and the step of generating a title from the estimated 'information source'. In our experiment, the new probabilistic model outperforms the previous model for title generation in terms of both automatic evaluations and human judgments.",A New Probabilistic Model for Title Generation,"Title generation is a complex task involving both natural language understanding and natural language synthesis. In this paper, we propose a new probabilistic model for title generation. Different from the previous statistical models for title generation, which treat title generation as a generation process that converts the 'document representation' of information directly into a 'title representation' of the same information, this model introduces a hidden state called 'information source' and divides title generation into two steps, namely the step of distilling the 'information source' from the observation of a document and the step of generating a title from the estimated 'information source'. In our experiment, the new probabilistic model outperforms the previous model for title generation in terms of both automatic evaluations and human judgments.","The authors are grateful to the anonymous reviewers for their comments, which have helped improve the quality of the paper. This material is based in part on work supported by National Science Foundation under Cooperative Agreement No. IRI-9817496. Partial support for this work was provided by the National Science Foundation's National Science, Mathematics, Engineering, and Technology Education Digital Library Program under grant DUE-0085834. This work was also supported in part by the Advanced Research and Development Activity (ARDA) under contract number MDA908-00-C-0037. 
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or ARDA.","A New Probabilistic Model for Title Generation. Title generation is a complex task involving both natural language understanding and natural language synthesis. In this paper, we propose a new probabilistic model for title generation. Different from the previous statistical models for title generation, which treat title generation as a generation process that converts the 'document representation' of information directly into a 'title representation' of the same information, this model introduces a hidden state called 'information source' and divides title generation into two steps, namely the step of distilling the 'information source' from the observation of a document and the step of generating a title from the estimated 'information source'. In our experiment, the new probabilistic model outperforms the previous model for title generation in terms of both automatic evaluations and human judgments.",2002
kozhevnikov-titov-2014-cross,https://aclanthology.org/P14-2095,0,,,,,,,"Cross-lingual Model Transfer Using Feature Representation Projection. We propose a novel approach to crosslingual model transfer based on feature representation projection. First, a compact feature representation relevant for the task in question is constructed for either language independently and then the mapping between the two representations is determined using parallel data. The target instance can then be mapped into the source-side feature representation using the derived mapping and handled directly by the source-side model. This approach displays competitive performance on model transfer for semantic role labeling when compared to direct model transfer and annotation projection and suggests interesting directions for further research.",Cross-lingual Model Transfer Using Feature Representation Projection,"We propose a novel approach to crosslingual model transfer based on feature representation projection. First, a compact feature representation relevant for the task in question is constructed for either language independently and then the mapping between the two representations is determined using parallel data. The target instance can then be mapped into the source-side feature representation using the derived mapping and handled directly by the source-side model. This approach displays competitive performance on model transfer for semantic role labeling when compared to direct model transfer and annotation projection and suggests interesting directions for further research.",Cross-lingual Model Transfer Using Feature Representation Projection,"We propose a novel approach to crosslingual model transfer based on feature representation projection. First, a compact feature representation relevant for the task in question is constructed for either language independently and then the mapping between the two representations is determined using parallel data. The target instance can then be mapped into the source-side feature representation using the derived mapping and handled directly by the source-side model. This approach displays competitive performance on model transfer for semantic role labeling when compared to direct model transfer and annotation projection and suggests interesting directions for further research.",The authors would like to acknowledge the support of MMCI Cluster of Excellence and Saarbrücken Graduate School of Computer Science and thank the anonymous reviewers for their suggestions.,"Cross-lingual Model Transfer Using Feature Representation Projection. We propose a novel approach to crosslingual model transfer based on feature representation projection. First, a compact feature representation relevant for the task in question is constructed for either language independently and then the mapping between the two representations is determined using parallel data. The target instance can then be mapped into the source-side feature representation using the derived mapping and handled directly by the source-side model. This approach displays competitive performance on model transfer for semantic role labeling when compared to direct model transfer and annotation projection and suggests interesting directions for further research.",2014
knight-sproat-2009-writing,https://aclanthology.org/N09-4008,0,,,,,,,"Writing Systems, Transliteration and Decipherment. Kevin Knight (USC/ISI) Richard Sproat (CSLU/OHSU)
Nearly all of the core data that computational linguists deal with is in the form of text, which is to say that it consists of language data written (usually) in the standard writing system for the language in question. Yet surprisingly little is generally understood about how writing systems work. This tutorial will be divided into three parts. In the first part we discuss the history of writing and introduce a wide variety of writing systems, explaining their structure and how they encode language. We end this section with a brief review of how some of the properties of writing systems are handled in modern encoding systems, such as Unicode, and some of the continued pitfalls that can occur despite the best intentions of standardization. The second section of the tutorial will focus on the problem of transcription between scripts (often termed ""transliteration""), and how this problem-which is important both for machine translation and named entity recognition-has been addressed. The third section is more theoretical and, at the same time we hope, more fun. We will discuss the problem of decipherment and how computational methods might be brought to bear on the problem of unlocking the mysteries of as yet undeciphered ancient scripts. We start with a brief review of three famous cases of decipherment. We then discuss how techniques that have been used in speech recognition and machine translation might be applied to the problem of decipherment. We end with a survey of the as-yet undeciphered ancient scripts and give some sense of the prospects of deciphering them given currently available data.","Writing Systems, Transliteration and Decipherment","Kevin Knight (USC/ISI) Richard Sproat (CSLU/OHSU)
Nearly all of the core data that computational linguists deal with is in the form of text, which is to say that it consists of language data written (usually) in the standard writing system for the language in question. Yet surprisingly little is generally understood about how writing systems work. This tutorial will be divided into three parts. In the first part we discuss the history of writing and introduce a wide variety of writing systems, explaining their structure and how they encode language. We end this section with a brief review of how some of the properties of writing systems are handled in modern encoding systems, such as Unicode, and some of the continued pitfalls that can occur despite the best intentions of standardization. The second section of the tutorial will focus on the problem of transcription between scripts (often termed ""transliteration""), and how this problem-which is important both for machine translation and named entity recognition-has been addressed. The third section is more theoretical and, at the same time we hope, more fun. We will discuss the problem of decipherment and how computational methods might be brought to bear on the problem of unlocking the mysteries of as yet undeciphered ancient scripts. We start with a brief review of three famous cases of decipherment. We then discuss how techniques that have been used in speech recognition and machine translation might be applied to the problem of decipherment. We end with a survey of the as-yet undeciphered ancient scripts and give some sense of the prospects of deciphering them given currently available data.","Writing Systems, Transliteration and Decipherment","Kevin Knight (USC/ISI) Richard Sproat (CSLU/OHSU)
Nearly all of the core data that computational linguists deal with is in the form of text, which is to say that it consists of language data written (usually) in the standard writing system for the language in question. Yet surprisingly little is generally understood about how writing systems work. This tutorial will be divided into three parts. In the first part we discuss the history of writing and introduce a wide variety of writing systems, explaining their structure and how they encode language. We end this section with a brief review of how some of the properties of writing systems are handled in modern encoding systems, such as Unicode, and some of the continued pitfalls that can occur despite the best intentions of standardization. The second section of the tutorial will focus on the problem of transcription between scripts (often termed ""transliteration""), and how this problem-which is important both for machine translation and named entity recognition-has been addressed. The third section is more theoretical and, at the same time we hope, more fun. We will discuss the problem of decipherment and how computational methods might be brought to bear on the problem of unlocking the mysteries of as yet undeciphered ancient scripts. We start with a brief review of three famous cases of decipherment. We then discuss how techniques that have been used in speech recognition and machine translation might be applied to the problem of decipherment. We end with a survey of the as-yet undeciphered ancient scripts and give some sense of the prospects of deciphering them given currently available data.",,"Writing Systems, Transliteration and Decipherment. Kevin Knight (USC/ISI) Richard Sproat (CSLU/OHSU)
Nearly all of the core data that computational linguists deal with is in the form of text, which is to say that it consists of language data written (usually) in the standard writing system for the language in question. Yet surprisingly little is generally understood about how writing systems work. This tutorial will be divided into three parts. In the first part we discuss the history of writing and introduce a wide variety of writing systems, explaining their structure and how they encode language. We end this section with a brief review of how some of the properties of writing systems are handled in modern encoding systems, such as Unicode, and some of the continued pitfalls that can occur despite the best intentions of standardization. The second section of the tutorial will focus on the problem of transcription between scripts (often termed ""transliteration""), and how this problem-which is important both for machine translation and named entity recognition-has been addressed. The third section is more theoretical and, at the same time we hope, more fun. We will discuss the problem of decipherment and how computational methods might be brought to bear on the problem of unlocking the mysteries of as yet undeciphered ancient scripts. We start with a brief review of three famous cases of decipherment. We then discuss how techniques that have been used in speech recognition and machine translation might be applied to the problem of decipherment. We end with a survey of the as-yet undeciphered ancient scripts and give some sense of the prospects of deciphering them given currently available data.",2009
holan-etal-1998-two,https://aclanthology.org/W98-0503,0,,,,,,,"Two Useful Measures of Word Order Complexity. This paper presents a class of dependency-based formal grammars (FODG) which can be parametrized by two different but similar measures of nonprojectivity. The measures allow one to formulate constraints on the degree of word-order freedom in a language described by a FODG. We discuss the problem of the degree of word-order freedom which should be allowed by a FODG describing the (surface) syntax of Czech.",Two Useful Measures of Word Order Complexity,"This paper presents a class of dependency-based formal grammars (FODG) which can be parametrized by two different but similar measures of nonprojectivity. The measures allow one to formulate constraints on the degree of word-order freedom in a language described by a FODG. We discuss the problem of the degree of word-order freedom which should be allowed by a FODG describing the (surface) syntax of Czech.",Two Useful Measures of Word Order Complexity,"This paper presents a class of dependency-based formal grammars (FODG) which can be parametrized by two different but similar measures of nonprojectivity. The measures allow one to formulate constraints on the degree of word-order freedom in a language described by a FODG. We discuss the problem of the degree of word-order freedom which should be allowed by a FODG describing the (surface) syntax of Czech.",,"Two Useful Measures of Word Order Complexity. This paper presents a class of dependency-based formal grammars (FODG) which can be parametrized by two different but similar measures of nonprojectivity. The measures allow one to formulate constraints on the degree of word-order freedom in a language described by a FODG. We discuss the problem of the degree of word-order freedom which should be allowed by a FODG describing the (surface) syntax of Czech.",1998
mohammad-etal-2018-semeval,https://aclanthology.org/S18-1001,0,,,,,,,"SemEval-2018 Task 1: Affect in Tweets. We present the SemEval-2018 Task 1: Affect in Tweets, which includes an array of subtasks on inferring the affectual state of a person from their tweet. For each task, we created labeled data from English, Arabic, and Spanish tweets. The individual tasks are: 1. emotion intensity regression, 2. emotion intensity ordinal classification, 3. valence (sentiment) regression, 4. valence ordinal classification, and 5. emotion classification. Seventy-five teams (about 200 team members) participated in the shared task. We summarize the methods, resources, and tools used by the participating teams, with a focus on the techniques and resources that are particularly useful. We also analyze systems for consistent bias towards a particular race or gender. The data is made freely available to further improve our understanding of how people convey emotions through language.",{S}em{E}val-2018 Task 1: Affect in Tweets,"We present the SemEval-2018 Task 1: Affect in Tweets, which includes an array of subtasks on inferring the affectual state of a person from their tweet. For each task, we created labeled data from English, Arabic, and Spanish tweets. The individual tasks are: 1. emotion intensity regression, 2. emotion intensity ordinal classification, 3. valence (sentiment) regression, 4. valence ordinal classification, and 5. emotion classification. Seventy-five teams (about 200 team members) participated in the shared task. We summarize the methods, resources, and tools used by the participating teams, with a focus on the techniques and resources that are particularly useful. We also analyze systems for consistent bias towards a particular race or gender. The data is made freely available to further improve our understanding of how people convey emotions through language.",SemEval-2018 Task 1: Affect in Tweets,"We present the SemEval-2018 Task 1: Affect in Tweets, which includes an array of subtasks on inferring the affectual state of a person from their tweet. For each task, we created labeled data from English, Arabic, and Spanish tweets. The individual tasks are: 1. emotion intensity regression, 2. emotion intensity ordinal classification, 3. valence (sentiment) regression, 4. valence ordinal classification, and 5. emotion classification. Seventy-five teams (about 200 team members) participated in the shared task. We summarize the methods, resources, and tools used by the participating teams, with a focus on the techniques and resources that are particularly useful. We also analyze systems for consistent bias towards a particular race or gender. The data is made freely available to further improve our understanding of how people convey emotions through language.",,"SemEval-2018 Task 1: Affect in Tweets. We present the SemEval-2018 Task 1: Affect in Tweets, which includes an array of subtasks on inferring the affectual state of a person from their tweet. For each task, we created labeled data from English, Arabic, and Spanish tweets. The individual tasks are: 1. emotion intensity regression, 2. emotion intensity ordinal classification, 3. valence (sentiment) regression, 4. valence ordinal classification, and 5. emotion classification. Seventy-five teams (about 200 team members) participated in the shared task. We summarize the methods, resources, and tools used by the participating teams, with a focus on the techniques and resources that are particularly useful. 
We also analyze systems for consistent bias towards a particular race or gender. The data is made freely available to further improve our understanding of how people convey emotions through language.",2018
miculicich-henderson-2022-graph,https://aclanthology.org/2022.findings-acl.215,0,,,,,,,"Graph Refinement for Coreference Resolution. The state-of-the-art models for coreference resolution are based on independent mention pairwise decisions. We propose a modelling approach that learns coreference at the document-level and takes global decisions. For this purpose, we model coreference links in a graph structure where the nodes are tokens in the text, and the edges represent the relationship between them. Our model predicts the graph in a non-autoregressive manner, then iteratively refines it based on previous predictions, allowing global dependencies between decisions. The experimental results show improvements over various baselines, reinforcing the hypothesis that document-level information improves coreference resolution.",Graph Refinement for Coreference Resolution,"The state-of-the-art models for coreference resolution are based on independent mention pairwise decisions. We propose a modelling approach that learns coreference at the document-level and takes global decisions. For this purpose, we model coreference links in a graph structure where the nodes are tokens in the text, and the edges represent the relationship between them. Our model predicts the graph in a non-autoregressive manner, then iteratively refines it based on previous predictions, allowing global dependencies between decisions. The experimental results show improvements over various baselines, reinforcing the hypothesis that document-level information improves coreference resolution.",Graph Refinement for Coreference Resolution,"The state-of-the-art models for coreference resolution are based on independent mention pairwise decisions. We propose a modelling approach that learns coreference at the document-level and takes global decisions. For this purpose, we model coreference links in a graph structure where the nodes are tokens in the text, and the edges represent the relationship between them. Our model predicts the graph in a non-autoregressive manner, then iteratively refines it based on previous predictions, allowing global dependencies between decisions. The experimental results show improvements over various baselines, reinforcing the hypothesis that document-level information improves coreference resolution.","This work was supported in part by the Swiss National Science Foundation, under grants 200021_178862 and CRSII5_180320.","Graph Refinement for Coreference Resolution. The state-of-the-art models for coreference resolution are based on independent mention pairwise decisions. We propose a modelling approach that learns coreference at the document-level and takes global decisions. For this purpose, we model coreference links in a graph structure where the nodes are tokens in the text, and the edges represent the relationship between them. Our model predicts the graph in a non-autoregressive manner, then iteratively refines it based on previous predictions, allowing global dependencies between decisions. The experimental results show improvements over various baselines, reinforcing the hypothesis that document-level information improves coreference resolution.",2022
prasad-etal-2008-towards,https://aclanthology.org/I08-7010,0,,,,,,,"Towards an Annotated Corpus of Discourse Relations in Hindi. We describe our initial efforts towards developing a large-scale corpus of Hindi texts annotated with discourse relations. Adopting the lexically grounded approach of the Penn Discourse Treebank (PDTB), we present a preliminary analysis of discourse connectives in a small corpus. We describe how discourse connectives are represented in the sentence-level dependency annotation in Hindi, and discuss how the discourse annotation can enrich this level for research and applications. The ultimate goal of our work is to build a Hindi Discourse Relation Bank along the lines of the PDTB. Our work will also contribute to the cross-linguistic understanding of discourse connectives.",Towards an Annotated Corpus of Discourse Relations in {H}indi,"We describe our initial efforts towards developing a large-scale corpus of Hindi texts annotated with discourse relations. Adopting the lexically grounded approach of the Penn Discourse Treebank (PDTB), we present a preliminary analysis of discourse connectives in a small corpus. We describe how discourse connectives are represented in the sentence-level dependency annotation in Hindi, and discuss how the discourse annotation can enrich this level for research and applications. The ultimate goal of our work is to build a Hindi Discourse Relation Bank along the lines of the PDTB. Our work will also contribute to the cross-linguistic understanding of discourse connectives.",Towards an Annotated Corpus of Discourse Relations in Hindi,"We describe our initial efforts towards developing a large-scale corpus of Hindi texts annotated with discourse relations. Adopting the lexically grounded approach of the Penn Discourse Treebank (PDTB), we present a preliminary analysis of discourse connectives in a small corpus. We describe how discourse connectives are represented in the sentence-level dependency annotation in Hindi, and discuss how the discourse annotation can enrich this level for research and applications. The ultimate goal of our work is to build a Hindi Discourse Relation Bank along the lines of the PDTB. Our work will also contribute to the cross-linguistic understanding of discourse connectives.",,"Towards an Annotated Corpus of Discourse Relations in Hindi. We describe our initial efforts towards developing a large-scale corpus of Hindi texts annotated with discourse relations. Adopting the lexically grounded approach of the Penn Discourse Treebank (PDTB), we present a preliminary analysis of discourse connectives in a small corpus. We describe how discourse connectives are represented in the sentence-level dependency annotation in Hindi, and discuss how the discourse annotation can enrich this level for research and applications. The ultimate goal of our work is to build a Hindi Discourse Relation Bank along the lines of the PDTB. Our work will also contribute to the cross-linguistic understanding of discourse connectives.",2008
kunz-etal-2021-heicic,https://aclanthology.org/2021.motra-1.2,0,,,,,,,"HeiCiC: A simultaneous interpreting corpus combining product and pre-process data. This paper presents HeiCIC, a simultaneous interpreting corpus that comprises audio files, time-aligned transcripts and corresponding preparation material complemented by annotation layers. The corpus serves the pursuit of a range of research questions focusing on strategic cognitive load management and its effects on the interpreting output. One research objective is the analysis of semantic transfer as a function of problem triggers in the source text which represent potential cognitive load peaks. Another research approach correlates problem triggers with solution cues in the visual support material used by interpreters in the booth. Interpreting strategies based on this priming reduce cognitive load during SI.",{H}ei{C}i{C}: A simultaneous interpreting corpus combining product and pre-process data,"This paper presents HeiCIC, a simultaneous interpreting corpus that comprises audio files, time-aligned transcripts and corresponding preparation material complemented by annotation layers. The corpus serves the pursuit of a range of research questions focusing on strategic cognitive load management and its effects on the interpreting output. One research objective is the analysis of semantic transfer as a function of problem triggers in the source text which represent potential cognitive load peaks. Another research approach correlates problem triggers with solution cues in the visual support material used by interpreters in the booth. Interpreting strategies based on this priming reduce cognitive load during SI.",HeiCiC: A simultaneous interpreting corpus combining product and pre-process data,"This paper presents HeiCIC, a simultaneous interpreting corpus that comprises audio files, time-aligned transcripts and corresponding preparation material complemented by annotation layers. The corpus serves the pursuit of a range of research questions focusing on strategic cognitive load management and its effects on the interpreting output. One research objective is the analysis of semantic transfer as a function of problem triggers in the source text which represent potential cognitive load peaks. Another research approach correlates problem triggers with solution cues in the visual support material used by interpreters in the booth. Interpreting strategies based on this priming reduce cognitive load during SI.",,"HeiCiC: A simultaneous interpreting corpus combining product and pre-process data. This paper presents HeiCIC, a simultaneous interpreting corpus that comprises audio files, time-aligned transcripts and corresponding preparation material complemented by annotation layers. The corpus serves the pursuit of a range of research questions focusing on strategic cognitive load management and its effects on the interpreting output. One research objective is the analysis of semantic transfer as a function of problem triggers in the source text which represent potential cognitive load peaks. Another research approach correlates problem triggers with solution cues in the visual support material used by interpreters in the booth. Interpreting strategies based on this priming reduce cognitive load during SI.",2021
chang-etal-2021-nao,https://aclanthology.org/2021.ccl-1.57,1,,,,health,,,"脑卒中疾病电子病历实体及实体关系标注语料库构建(Corpus Construction for Named-Entity and Entity Relations for Electronic Medical Records of Stroke Disease). This paper discussed the labeling of Named-Entity and Entity Relations in Chinese electronic medical records of stroke disease, and proposes a system and norms for labeling entity and entity relations that are suitable for content and characteristics of electronic medical records of stroke disease. Based on the guidance of the labeling system and norms, this carried out several rounds of manual tagging and proofreading and completed the labeling of entities and relationships more than 1.5 million words. The entity and entity relationship tagging corpus of stroke electronic medical record(Stroke Electronic Medical Record entity and entity related Corpus, SEMRC)is fromed. The constructed corpus contains 10,594 named entities and 14,457 entity relationships. The consistency of named entity reached 85.16%, and that of entity relationship reached 94.16%.",脑卒中疾病电子病历实体及实体关系标注语料库构建(Corpus Construction for Named-Entity and Entity Relations for Electronic Medical Records of Stroke Disease),"This paper discussed the labeling of Named-Entity and Entity Relations in Chinese electronic medical records of stroke disease, and proposes a system and norms for labeling entity and entity relations that are suitable for content and characteristics of electronic medical records of stroke disease. Based on the guidance of the labeling system and norms, this carried out several rounds of manual tagging and proofreading and completed the labeling of entities and relationships more than 1.5 million words. The entity and entity relationship tagging corpus of stroke electronic medical record(Stroke Electronic Medical Record entity and entity related Corpus, SEMRC)is fromed. The constructed corpus contains 10,594 named entities and 14,457 entity relationships. The consistency of named entity reached 85.16%, and that of entity relationship reached 94.16%.",脑卒中疾病电子病历实体及实体关系标注语料库构建(Corpus Construction for Named-Entity and Entity Relations for Electronic Medical Records of Stroke Disease),"This paper discussed the labeling of Named-Entity and Entity Relations in Chinese electronic medical records of stroke disease, and proposes a system and norms for labeling entity and entity relations that are suitable for content and characteristics of electronic medical records of stroke disease. Based on the guidance of the labeling system and norms, this carried out several rounds of manual tagging and proofreading and completed the labeling of entities and relationships more than 1.5 million words. The entity and entity relationship tagging corpus of stroke electronic medical record(Stroke Electronic Medical Record entity and entity related Corpus, SEMRC)is fromed. The constructed corpus contains 10,594 named entities and 14,457 entity relationships. The consistency of named entity reached 85.16%, and that of entity relationship reached 94.16%.",,"脑卒中疾病电子病历实体及实体关系标注语料库构建(Corpus Construction for Named-Entity and Entity Relations for Electronic Medical Records of Stroke Disease). This paper discussed the labeling of Named-Entity and Entity Relations in Chinese electronic medical records of stroke disease, and proposes a system and norms for labeling entity and entity relations that are suitable for content and characteristics of electronic medical records of stroke disease. 
Based on the guidance of the labeling system and norms, this work carried out several rounds of manual tagging and proofreading and completed the labeling of entities and relationships in more than 1.5 million words. The entity and entity relationship tagging corpus of stroke electronic medical records (Stroke Electronic Medical Record entity and entity related Corpus, SEMRC) is formed. The constructed corpus contains 10,594 named entities and 14,457 entity relationships. The annotation consistency of named entities reached 85.16%, and that of entity relationships reached 94.16%.",2021
lee-chang-2003-acquisition,https://aclanthology.org/W03-0317,0,,,,,,,"Acquisition of English-Chinese Transliterated Word Pairs from Parallel-Aligned Texts using a Statistical Machine Transliteration Model. This paper presents a framework for extracting English and Chinese transliterated word pairs from parallel texts. The approach is based on the statistical machine transliteration model to exploit the phonetic similarities between English words and corresponding Chinese transliterations. For a given proper noun in English, the proposed method extracts the corresponding transliterated word from the aligned text in Chinese. Under the proposed approach, the parameters of the model are automatically learned from a bilingual proper name list. Experimental results show that the average rates of word and character precision are 86.0% and 94.4%, respectively. The rates can be further improved with the addition of simple linguistic processing.",Acquisition of {E}nglish-{C}hinese Transliterated Word Pairs from Parallel-Aligned Texts using a Statistical Machine Transliteration Model,"This paper presents a framework for extracting English and Chinese transliterated word pairs from parallel texts. The approach is based on the statistical machine transliteration model to exploit the phonetic similarities between English words and corresponding Chinese transliterations. For a given proper noun in English, the proposed method extracts the corresponding transliterated word from the aligned text in Chinese. Under the proposed approach, the parameters of the model are automatically learned from a bilingual proper name list. Experimental results show that the average rates of word and character precision are 86.0% and 94.4%, respectively. The rates can be further improved with the addition of simple linguistic processing.",Acquisition of English-Chinese Transliterated Word Pairs from Parallel-Aligned Texts using a Statistical Machine Transliteration Model,"This paper presents a framework for extracting English and Chinese transliterated word pairs from parallel texts. The approach is based on the statistical machine transliteration model to exploit the phonetic similarities between English words and corresponding Chinese transliterations. For a given proper noun in English, the proposed method extracts the corresponding transliterated word from the aligned text in Chinese. Under the proposed approach, the parameters of the model are automatically learned from a bilingual proper name list. Experimental results show that the average rates of word and character precision are 86.0% and 94.4%, respectively. The rates can be further improved with the addition of simple linguistic processing.",,"Acquisition of English-Chinese Transliterated Word Pairs from Parallel-Aligned Texts using a Statistical Machine Transliteration Model. This paper presents a framework for extracting English and Chinese transliterated word pairs from parallel texts. The approach is based on the statistical machine transliteration model to exploit the phonetic similarities between English words and corresponding Chinese transliterations. For a given proper noun in English, the proposed method extracts the corresponding transliterated word from the aligned text in Chinese. Under the proposed approach, the parameters of the model are automatically learned from a bilingual proper name list. Experimental results show that the average rates of word and character precision are 86.0% and 94.4%, respectively. 
The rates can be further improved with the addition of simple linguistic processing.",2003
zilio-etal-2017-using,https://doi.org/10.26615/978-954-452-049-6_107,1,,,,education,,,Using NLP for Enhancing Second Language Acquisition. ,Using {NLP} for Enhancing Second Language Acquisition,,Using NLP for Enhancing Second Language Acquisition,,,Using NLP for Enhancing Second Language Acquisition. ,2017
nasr-rambow-2004-simple,https://aclanthology.org/W04-1503,0,,,,,,,"A Simple String-Rewriting Formalism for Dependency Grammar. Recently, dependency grammar has gained renewed attention as empirical methods in parsing have emphasized the importance of relations between words, which is what dependency grammars model explicitly, but context-free phrase-structure grammars do not. While there has been much work on formalizing dependency grammar and on parsing algorithms for dependency grammars in the past, there is not a complete generative formalization of dependency grammar based on string-rewriting in which the derivation structure is the desired dependency structure. Such a system allows for the definition of a compact parse forest in a straightforward manner. In this paper, we present a simple generative formalism for dependency grammars based on Extended Context-Free Grammar, along with a parser; the formalism captures the intuitions of previous formalizations while deviating minimally from the much-used Context-Free Grammar.",A Simple String-Rewriting Formalism for Dependency Grammar,"Recently, dependency grammar has gained renewed attention as empirical methods in parsing have emphasized the importance of relations between words, which is what dependency grammars model explicitly, but context-free phrase-structure grammars do not. While there has been much work on formalizing dependency grammar and on parsing algorithms for dependency grammars in the past, there is not a complete generative formalization of dependency grammar based on string-rewriting in which the derivation structure is the desired dependency structure. Such a system allows for the definition of a compact parse forest in a straightforward manner. In this paper, we present a simple generative formalism for dependency grammars based on Extended Context-Free Grammar, along with a parser; the formalism captures the intuitions of previous formalizations while deviating minimally from the much-used Context-Free Grammar.",A Simple String-Rewriting Formalism for Dependency Grammar,"Recently, dependency grammar has gained renewed attention as empirical methods in parsing have emphasized the importance of relations between words, which is what dependency grammars model explicitly, but context-free phrase-structure grammars do not. While there has been much work on formalizing dependency grammar and on parsing algorithms for dependency grammars in the past, there is not a complete generative formalization of dependency grammar based on string-rewriting in which the derivation structure is the desired dependency structure. Such a system allows for the definition of a compact parse forest in a straightforward manner. In this paper, we present a simple generative formalism for dependency grammars based on Extended Context-Free Grammar, along with a parser; the formalism captures the intuitions of previous formalizations while deviating minimally from the much-used Context-Free Grammar.",,"A Simple String-Rewriting Formalism for Dependency Grammar. Recently, dependency grammar has gained renewed attention as empirical methods in parsing have emphasized the importance of relations between words, which is what dependency grammars model explicitly, but context-free phrase-structure grammars do not. 
While there has been much work on formalizing dependency grammar and on parsing algorithms for dependency grammars in the past, there is not a complete generative formalization of dependency grammar based on string-rewriting in which the derivation structure is the desired dependency structure. Such a system allows for the definition of a compact parse forest in a straightforward manner. In this paper, we present a simple generative formalism for dependency grammars based on Extended Context-Free Grammar, along with a parser; the formalism captures the intuitions of previous formalizations while deviating minimally from the much-used Context-Free Grammar.",2004
zhao-etal-2010-automatic,https://aclanthology.org/C10-2171,0,,,,,,,"Automatic Temporal Expression Normalization with Reference Time Dynamic-Choosing. Temporal expressions in texts contain significant temporal information. Understanding temporal information is very useful in many NLP applications, such as information extraction, documents summarization and question answering. Therefore, the temporal expression normalization which is used for transforming temporal expressions to temporal information has absorbed many researchers' attentions. But previous works, whatever the hand-crafted rules-based or the machine-learnt rules-based, all can not address the actual problem about temporal reference in real texts effectively. More specifically, the reference time choosing mechanism employed by these works is not adaptable to the universal implicit times in normalization. Aiming at this issue, we introduce a new reference time choosing mechanism for temporal expression normalization, called reference time dynamic-choosing, which assigns the appropriate reference times to different classes of implicit temporal expressions dynamically when normalizing. And then, the solution to temporal expression defuzzification by scenario dependences among temporal expressions is discussed. Finally, we evaluate the system on a substantial corpus collected by Chinese news articles and obtained more promising results than compared methods.",Automatic Temporal Expression Normalization with Reference Time Dynamic-Choosing,"Temporal expressions in texts contain significant temporal information. Understanding temporal information is very useful in many NLP applications, such as information extraction, documents summarization and question answering. Therefore, the temporal expression normalization which is used for transforming temporal expressions to temporal information has absorbed many researchers' attentions. But previous works, whatever the hand-crafted rules-based or the machine-learnt rules-based, all can not address the actual problem about temporal reference in real texts effectively. More specifically, the reference time choosing mechanism employed by these works is not adaptable to the universal implicit times in normalization. Aiming at this issue, we introduce a new reference time choosing mechanism for temporal expression normalization, called reference time dynamic-choosing, which assigns the appropriate reference times to different classes of implicit temporal expressions dynamically when normalizing. And then, the solution to temporal expression defuzzification by scenario dependences among temporal expressions is discussed. Finally, we evaluate the system on a substantial corpus collected by Chinese news articles and obtained more promising results than compared methods.",Automatic Temporal Expression Normalization with Reference Time Dynamic-Choosing,"Temporal expressions in texts contain significant temporal information. Understanding temporal information is very useful in many NLP applications, such as information extraction, documents summarization and question answering. Therefore, the temporal expression normalization which is used for transforming temporal expressions to temporal information has absorbed many researchers' attentions. But previous works, whatever the hand-crafted rules-based or the machine-learnt rules-based, all can not address the actual problem about temporal reference in real texts effectively. 
More specifically, the reference time choosing mechanism employed by these works is not adaptable to the universal implicit times in normalization. Aiming at this issue, we introduce a new reference time choosing mechanism for temporal expression normalization, called reference time dynamic-choosing, which assigns the appropriate reference times to different classes of implicit temporal expressions dynamically when normalizing. And then, the solution to temporal expression defuzzification by scenario dependences among temporal expressions is discussed. Finally, we evaluate the system on a substantial corpus collected by Chinese news articles and obtained more promising results than compared methods.",,"Automatic Temporal Expression Normalization with Reference Time Dynamic-Choosing. Temporal expressions in texts contain significant temporal information. Understanding temporal information is very useful in many NLP applications, such as information extraction, documents summarization and question answering. Therefore, the temporal expression normalization which is used for transforming temporal expressions to temporal information has absorbed many researchers' attentions. But previous works, whatever the hand-crafted rules-based or the machine-learnt rules-based, all can not address the actual problem about temporal reference in real texts effectively. More specifically, the reference time choosing mechanism employed by these works is not adaptable to the universal implicit times in normalization. Aiming at this issue, we introduce a new reference time choosing mechanism for temporal expression normalization, called reference time dynamic-choosing, which assigns the appropriate reference times to different classes of implicit temporal expressions dynamically when normalizing. And then, the solution to temporal expression defuzzification by scenario dependences among temporal expressions is discussed. Finally, we evaluate the system on a substantial corpus collected by Chinese news articles and obtained more promising results than compared methods.",2010
dolan-etal-2004-unsupervised,https://aclanthology.org/C04-1051,0,,,,,,,"Unsupervised Construction of Large Paraphrase Corpora: Exploiting Massively Parallel News Sources. We investigate unsupervised techniques for acquiring monolingual sentence-level paraphrases from a corpus of temporally and topically clustered news articles collected from thousands of web-based news sources. Two techniques are employed: (1) simple string edit distance, and (2) a heuristic strategy that pairs initial (presumably summary) sentences from different news stories in the same cluster. We evaluate both datasets using a word alignment algorithm and a metric borrowed from machine translation. Results show that edit distance data is cleaner and more easily-aligned than the heuristic data, with an overall alignment error rate (AER) of 11.58% on a similarly-extracted test set. On test data extracted by the heuristic strategy, however, performance of the two training sets is similar, with AERs of 13.2% and 14.7% respectively. Analysis of 100 pairs of sentences from each set reveals that the edit distance data lacks many of the complex lexical and syntactic alternations that characterize monolingual paraphrase. The summary sentences, while less readily alignable, retain more of the non-trivial alternations that are of greatest interest learning paraphrase relationships.",Unsupervised Construction of Large Paraphrase Corpora: Exploiting Massively Parallel News Sources,"We investigate unsupervised techniques for acquiring monolingual sentence-level paraphrases from a corpus of temporally and topically clustered news articles collected from thousands of web-based news sources. Two techniques are employed: (1) simple string edit distance, and (2) a heuristic strategy that pairs initial (presumably summary) sentences from different news stories in the same cluster. We evaluate both datasets using a word alignment algorithm and a metric borrowed from machine translation. Results show that edit distance data is cleaner and more easily-aligned than the heuristic data, with an overall alignment error rate (AER) of 11.58% on a similarly-extracted test set. On test data extracted by the heuristic strategy, however, performance of the two training sets is similar, with AERs of 13.2% and 14.7% respectively. Analysis of 100 pairs of sentences from each set reveals that the edit distance data lacks many of the complex lexical and syntactic alternations that characterize monolingual paraphrase. The summary sentences, while less readily alignable, retain more of the non-trivial alternations that are of greatest interest learning paraphrase relationships.",Unsupervised Construction of Large Paraphrase Corpora: Exploiting Massively Parallel News Sources,"We investigate unsupervised techniques for acquiring monolingual sentence-level paraphrases from a corpus of temporally and topically clustered news articles collected from thousands of web-based news sources. Two techniques are employed: (1) simple string edit distance, and (2) a heuristic strategy that pairs initial (presumably summary) sentences from different news stories in the same cluster. We evaluate both datasets using a word alignment algorithm and a metric borrowed from machine translation. Results show that edit distance data is cleaner and more easily-aligned than the heuristic data, with an overall alignment error rate (AER) of 11.58% on a similarly-extracted test set. 
On test data extracted by the heuristic strategy, however, performance of the two training sets is similar, with AERs of 13.2% and 14.7% respectively. Analysis of 100 pairs of sentences from each set reveals that the edit distance data lacks many of the complex lexical and syntactic alternations that characterize monolingual paraphrase. The summary sentences, while less readily alignable, retain more of the non-trivial alternations that are of greatest interest learning paraphrase relationships.","We are grateful to the Mo Corston-Oliver, Jeff Stevenson and Amy Muia of the Butler Hill Group for their work in annotating the data used in the experiments. We have also benefited from discussions with Ken Church, Mark Johnson, Daniel Marcu and Franz Och. We remain, however, responsible for all content.","Unsupervised Construction of Large Paraphrase Corpora: Exploiting Massively Parallel News Sources. We investigate unsupervised techniques for acquiring monolingual sentence-level paraphrases from a corpus of temporally and topically clustered news articles collected from thousands of web-based news sources. Two techniques are employed: (1) simple string edit distance, and (2) a heuristic strategy that pairs initial (presumably summary) sentences from different news stories in the same cluster. We evaluate both datasets using a word alignment algorithm and a metric borrowed from machine translation. Results show that edit distance data is cleaner and more easily-aligned than the heuristic data, with an overall alignment error rate (AER) of 11.58% on a similarly-extracted test set. On test data extracted by the heuristic strategy, however, performance of the two training sets is similar, with AERs of 13.2% and 14.7% respectively. Analysis of 100 pairs of sentences from each set reveals that the edit distance data lacks many of the complex lexical and syntactic alternations that characterize monolingual paraphrase. The summary sentences, while less readily alignable, retain more of the non-trivial alternations that are of greatest interest learning paraphrase relationships.",2004
ghosh-etal-2020-cease,https://aclanthology.org/2020.lrec-1.201,1,,,,health,,,"CEASE, a Corpus of Emotion Annotated Suicide notes in English. A suicide note is usually written shortly before the suicide, and it provides a chance to comprehend the self-destructive state of mind of the deceased. From a psychological point of view, suicide notes have been utilized for recognizing the motive behind the suicide. To the best of our knowledge, there are no openly accessible suicide note corpus at present, making it challenging for the researchers and developers to deep dive into the area of mental health assessment and suicide prevention. In this paper, we create a fine-grained emotion annotated corpus (CEASE) of suicide notes in English, and develop various deep learning models to perform emotion detection on the curated dataset. The corpus consists of 2393 sentences from around 205 suicide notes collected from various sources. Each sentence is annotated with a particular emotion class from a set of 15 fine-grained emotion labels, namely (forgiveness, happiness peacefulness, love, pride, hopefulness, thankfulness, blame, anger, fear, abuse, sorrow, hopelessness, guilt, information, instructions). For the evaluation, we develop an ensemble architecture, where the base models correspond to three supervised deep learning models, namely Convolutional Neural Network (CNN), Gated Recurrent Unit (GRU) and Long Short Term Memory (LSTM).We obtain the highest test accuracy of 60.17%, and cross-validation accuracy of 60.32%.","{CEASE}, a Corpus of Emotion Annotated Suicide notes in {E}nglish","A suicide note is usually written shortly before the suicide, and it provides a chance to comprehend the self-destructive state of mind of the deceased. From a psychological point of view, suicide notes have been utilized for recognizing the motive behind the suicide. To the best of our knowledge, there are no openly accessible suicide note corpus at present, making it challenging for the researchers and developers to deep dive into the area of mental health assessment and suicide prevention. In this paper, we create a fine-grained emotion annotated corpus (CEASE) of suicide notes in English, and develop various deep learning models to perform emotion detection on the curated dataset. The corpus consists of 2393 sentences from around 205 suicide notes collected from various sources. Each sentence is annotated with a particular emotion class from a set of 15 fine-grained emotion labels, namely (forgiveness, happiness peacefulness, love, pride, hopefulness, thankfulness, blame, anger, fear, abuse, sorrow, hopelessness, guilt, information, instructions). For the evaluation, we develop an ensemble architecture, where the base models correspond to three supervised deep learning models, namely Convolutional Neural Network (CNN), Gated Recurrent Unit (GRU) and Long Short Term Memory (LSTM).We obtain the highest test accuracy of 60.17%, and cross-validation accuracy of 60.32%.","CEASE, a Corpus of Emotion Annotated Suicide notes in English","A suicide note is usually written shortly before the suicide, and it provides a chance to comprehend the self-destructive state of mind of the deceased. From a psychological point of view, suicide notes have been utilized for recognizing the motive behind the suicide. 
To the best of our knowledge, there are no openly accessible suicide note corpus at present, making it challenging for the researchers and developers to deep dive into the area of mental health assessment and suicide prevention. In this paper, we create a fine-grained emotion annotated corpus (CEASE) of suicide notes in English, and develop various deep learning models to perform emotion detection on the curated dataset. The corpus consists of 2393 sentences from around 205 suicide notes collected from various sources. Each sentence is annotated with a particular emotion class from a set of 15 fine-grained emotion labels, namely (forgiveness, happiness peacefulness, love, pride, hopefulness, thankfulness, blame, anger, fear, abuse, sorrow, hopelessness, guilt, information, instructions). For the evaluation, we develop an ensemble architecture, where the base models correspond to three supervised deep learning models, namely Convolutional Neural Network (CNN), Gated Recurrent Unit (GRU) and Long Short Term Memory (LSTM).We obtain the highest test accuracy of 60.17%, and cross-validation accuracy of 60.32%.","Authors gratefully acknowledge the support from the project titled 'Development of C-DAC Digital Forensic Centre with AI based Knowledge Support Tools', supported by MeitY, Govt. of India and Govt. of Bihar. The authors would also like to thank the linguists: Akash Bhagat, Suman Shekhar (IIT Patna) and Danish Armaan (IIEST Shibpur) for their valuable efforts in labelling the tweets.","CEASE, a Corpus of Emotion Annotated Suicide notes in English. A suicide note is usually written shortly before the suicide, and it provides a chance to comprehend the self-destructive state of mind of the deceased. From a psychological point of view, suicide notes have been utilized for recognizing the motive behind the suicide. To the best of our knowledge, there are no openly accessible suicide note corpus at present, making it challenging for the researchers and developers to deep dive into the area of mental health assessment and suicide prevention. In this paper, we create a fine-grained emotion annotated corpus (CEASE) of suicide notes in English, and develop various deep learning models to perform emotion detection on the curated dataset. The corpus consists of 2393 sentences from around 205 suicide notes collected from various sources. Each sentence is annotated with a particular emotion class from a set of 15 fine-grained emotion labels, namely (forgiveness, happiness peacefulness, love, pride, hopefulness, thankfulness, blame, anger, fear, abuse, sorrow, hopelessness, guilt, information, instructions). For the evaluation, we develop an ensemble architecture, where the base models correspond to three supervised deep learning models, namely Convolutional Neural Network (CNN), Gated Recurrent Unit (GRU) and Long Short Term Memory (LSTM).We obtain the highest test accuracy of 60.17%, and cross-validation accuracy of 60.32%.",2020
horacek-2013-justifying,https://aclanthology.org/R13-1040,0,,,,,,,"Justifying Corpus-Based Choices in Referring Expression Generation. Most empirically-based approaches to NL generation elaborate on co-occurrences and frequencies observed over a corpus, which are then accommodated by learning algorithms. This method fails to capture generalities in generation subtasks, such as generating referring expressions, so that results obtained for some corpus cannot be transferred with confidence to similar environments or even to other domains. In order to obtain a more general basis for choices in referring expression generation, we formulate situational and task-specific properties, and we test to what degree they hold in a specific corpus. As a novelty, we incorporate features of the role of the underlying task, object identification, into these property specifications; these features are inherently domain-independent. Our method has the potential to enable the development of a repertoire of regularities that express generalities and differences across situations and domains, which supports the development of generic algorithms and also leads to a better understanding of underlying dependencies.",Justifying Corpus-Based Choices in Referring Expression Generation,"Most empirically-based approaches to NL generation elaborate on co-occurrences and frequencies observed over a corpus, which are then accommodated by learning algorithms. This method fails to capture generalities in generation subtasks, such as generating referring expressions, so that results obtained for some corpus cannot be transferred with confidence to similar environments or even to other domains. In order to obtain a more general basis for choices in referring expression generation, we formulate situational and task-specific properties, and we test to what degree they hold in a specific corpus. As a novelty, we incorporate features of the role of the underlying task, object identification, into these property specifications; these features are inherently domain-independent. Our method has the potential to enable the development of a repertoire of regularities that express generalities and differences across situations and domains, which supports the development of generic algorithms and also leads to a better understanding of underlying dependencies.",Justifying Corpus-Based Choices in Referring Expression Generation,"Most empirically-based approaches to NL generation elaborate on co-occurrences and frequencies observed over a corpus, which are then accommodated by learning algorithms. This method fails to capture generalities in generation subtasks, such as generating referring expressions, so that results obtained for some corpus cannot be transferred with confidence to similar environments or even to other domains. In order to obtain a more general basis for choices in referring expression generation, we formulate situational and task-specific properties, and we test to what degree they hold in a specific corpus. As a novelty, we incorporate features of the role of the underlying task, object identification, into these property specifications; these features are inherently domain-independent. 
Our method has the potential to enable the development of a repertoire of regularities that express generalities and differences across situations and domains, which supports the development of generic algorithms and also leads to a better understanding of underlying dependencies.",,"Justifying Corpus-Based Choices in Referring Expression Generation. Most empirically-based approaches to NL generation elaborate on co-occurrences and frequencies observed over a corpus, which are then accommodated by learning algorithms. This method fails to capture generalities in generation subtasks, such as generating referring expressions, so that results obtained for some corpus cannot be transferred with confidence to similar environments or even to other domains. In order to obtain a more general basis for choices in referring expression generation, we formulate situational and task-specific properties, and we test to what degree they hold in a specific corpus. As a novelty, we incorporate features of the role of the underlying task, object identification, into these property specifications; these features are inherently domain-independent. Our method has the potential to enable the development of a repertoire of regularities that express generalities and differences across situations and domains, which supports the development of generic algorithms and also leads to a better understanding of underlying dependencies.",2013
carpenter-qu-1995-abstract,https://aclanthology.org/1995.iwpt-1.9,0,,,,,,,"An Abstract Machine for Attribute-Value Logics. A direct abstract machine implementation of the core attribute-value logic operations is shown to decrease the number of operations and conserve the amount of storage required when compared to interpreters or indirect compilers. In this paper, we describe the fundamental data structures and compilation techniques that we have employed to develop a unification and constraint-resolution engine capable of performance rivaling that of directly compiled Prolog terms while greatly exceeding Prolog in flexibility, expressiveness and modularity. In this paper, we will discuss the core architecture of our machine. We begin with a survey of the data structures supporting the small set of attribute-value logic instructions. These instructions manipulate feature structures by means of features, equality, and typing, and manipulate the program state by search and sequencing operations. We further show how these core operations can be integrated with a broad range of standard parsing techniques. Feature structures improve upon Prolog terms by allowing data to be organized by feature rather than by position. This encourages modular program development through the use of sparse structural descriptions which can be logically conjoined into larger units and directly executed. Standard linguistic representations, even of relatively simple local syntactic and semantic structures, typically run to hundreds of substructures. The type discipline we impose organizes information in an object-oriented manner by the multiple inheritance of classes and their associated features and type value constraints. In practice, this allows the construction of large-scale grammars in a relatively short period of time. At run-time, eager copying and structure-sharing is replaced with lazy, incremental, and localized branch and write operations. In order to allow for applications with parallel search, incremental backtracking can be localized to disjunctive choice points within the description of a single structure, thus supporting the kind of conditional mutual consistency checks used in modern grammatical theories such as HPSG, GB, and LFG. Further attention is paid to the byte-coding of instructions and their efficient indexing and subsequent retrieval, all of which is keyed on type information. 1 Motivation Modern attribute-value constraint-based grammars share their primary operational structure with logic programs. In the past decade, Prolog compilers, such as Warren's Abstract Machine (Aït-Kaci 1990), have supplanted interpreters as the execution method of choice for logic programs. This is in large part due to a 50-fold speed up in execution times and a reduction by an order of magnitude in terms of space required. In addition to efficiency, compilation also brings the opportunity for static error detection. The vast majority of the time and space used by traditional unification-based grammar interpreters is spent on copying and unifying feature structures. For example, in a bottom-up chart parser, the standard process would be first to build a feature structure for a lexical entry, then to build the feature structures for the relevant rules, and then to unify the matching structures. The principal drawback to this approach is that complete feature structures have to be constructed, even though unification may result in failure. 
In the case of failure, this can amount to a substantial amount of wasted time and space. By adopting an incremental compiled approach, a description is compiled into a set of abstract machine instructions. At run-time a description is evaluated incrementally, one instruction at a time. In this way, conflicts can be detected as early as possible, before any irrelevant structure has been introduced. In practice, this often means that the inconsistency of a rule with a category can often be detected very",An Abstract Machine for Attribute-Value Logics,"A direct abstract machine implementation of the core attribute-value logic operations is shown to decrease the number of operations and conserve the amount of storage required when compared to interpreters or indirect compilers. In this paper, we describe the fundamental data structures and compilation techniques that we have employed to develop a unification and constraint-resolution engine capable of performance rivaling that of directly compiled Prolog terms while greatly exceeding Prolog in flexibility, expressiveness and modularity. In this paper, we will discuss the core architecture of our machine. We begin with a survey of the data structures supporting the small set of attribute-value logic instructions. These instructions manipulate feature structures by means of features, equality, and typing, and manipulate the program state by search and sequencing operations. We further show how these core operations can be integrated with a broad range of standard parsing techniques. Feature structures improve upon Prolog terms by allowing data to be organized by feature rather than by position. This encourages modular program development through the use of sparse structural descriptions which can be logically conjoined into larger units and directly executed. Standard linguistic representations, even of relatively simple local syntactic and semantic structures, typically run to hundreds of substructures. The type discipline we impose organizes information in an object-oriented manner by the multiple inheritance of classes and their associated features and type value constraints. In practice, this allows the construction of large-scale grammars in a relatively short period of time. At run-time, eager copying and structure-sharing is replaced with lazy, incremental, and localized branch and write operations. In order to allow for applications with parallel search, incremental backtracking can be localized to disjunctive choice points within the description of a single structure, thus supporting the kind of conditional mutual consistency checks used in modern grammatical theories such as HPSG, GB, and LFG. Further attention is paid to the byte-coding of instructions and their efficient indexing and subsequent retrieval, all of which is keyed on type information. 1 Motivation Modern attribute-value constraint-based grammars share their primary operational structure with logic programs. In the past decade, Prolog compilers, such as Warren's Abstract Machine (Aït-Kaci 1990), have supplanted interpreters as the execution method of choice for logic programs. This is in large part due to a 50-fold speed up in execution times and a reduction by an order of magnitude in terms of space required. In addition to efficiency, compilation also brings the opportunity for static error detection. The vast majority of the time and space used by traditional unification-based grammar interpreters is spent on copying and unifying feature structures. 
For example, in a bottom-up chart parser, the standard process would be first to build a feature structure for a lexical entry, then to build the feature structures for the relevant rules, and then to unify the matching structures. The principal drawback to this approach is that complete feature structures have to be constructed, even though unification may result in failure. In the case of failure, this can amount to a substantial amount of wasted time and space. By adopting an incremental compiled approach, a description is compiled into a set of abstract machine instructions. At run-time a description is evaluated incrementally, one instruction at a time. In this way, conflicts can be detected as early as possible, before any irrelevant structure has been introduced. In practice, this often means that the inconsistency of a rule with a category can often be detected very",An Abstract Machine for Attribute-Value Logics,"A direct abstract machine implementation of the core attribute-value logic operations is shown to decrease the number of operations and conserve the amount of storage required when compared to interpreters or indirect compilers. In this paper, we describe the fundamental data structures and compilation techniques that we have employed to develop a unification and constraint-resolution engine capable of performance rivaling that of directly compiled Prolog terms while greatly exceeding Prolog in flexibility, expressiveness and modularity. In this paper, we will discuss the core architecture of our machine. We begin with a survey of the data structures supporting the small set of attribute-value logic instructions. These instructions manipulate feature structures by means of features, equality, and typing, and manipulate the program state by search and sequencing operations. We further show how these core operations can be integrated with a broad range of standard parsing techniques. Feature structures improve upon Prolog terms by allowing data to be organized by feature rather than by position. This encourages modular program development through the use of sparse structural descriptions which can be logically conjoined into larger units and directly executed. Standard linguistic representations, even of relatively simple local syntactic and semantic structures, typically run to hundreds of substructures. The type discipline we impose organizes information in an object-oriented manner by the multiple inheritance of classes and their associated features and type value constraints. In practice, this allows the construction of large-scale grammars in a relatively short period of time. At run-time, eager copying and structure-sharing is replaced with lazy, incremental, and localized branch and write operations. In order to allow for applications with parallel search, incremental backtracking can be localized to disjunctive choice points within the description of a single structure, thus supporting the kind of conditional mutual consistency checks used in modern grammatical theories such as HPSG, GB, and LFG. Further attention is paid to the byte-coding of instructions and their efficient indexing and subsequent retrieval, all of which is keyed on type information. 1 Motivation Modern attribute-value constraint-based grammars share their primary operational structure with logic programs. In the past decade, Prolog compilers, such as Warren's Abstract Machine (Aït-Kaci 1990), have supplanted interpreters as the execution method of choice for logic programs. 
This is in large part due to a 50-fold speed up in execution times and a reduction by an order of magnitude in terms of space required. In addition to efficiency, compilation also brings the opportunity for static error detection. The vast majority of the time and space used by traditional unification-based grammar interpreters is spent on copying and unifying feature structures. For example, in a bottom-up chart parser, the standard process would be first to build a feature structure for a lexical entry, then to build the feature structures for the relevant rules, and then to unify the matching structures. The principal drawback to this approach is that complete feature structures have to be constructed, even though unification may result in failure. In the case of failure, this can amount to a substantial amount of wasted time and space. By adopting an incremental compiled approach, a description is compiled into a set of abstract machine instructions. At run-time a description is evaluated incrementally, one instruction at a time. In this way, conflicts can be detected as early as possible, before any irrelevant structure has been introduced. In practice, this often means that the inconsistency of a rule with a category can often be detected very",,"An Abstract Machine for Attribute-Value Logics. A direct abstract machine implementation of the core attribute-value logic operations is shown to decrease the number of operations and conserve the amount of storage required when compared to interpreters or indirect compilers. In this paper, we describe the fundamental data structures and compilation techniques that we have employed to develop a unification and constraint-resolution engine capable of performance rivaling that of directly compiled Prolog terms while greatly exceeding Prolog in flexibility, expressiveness and modularity. In this paper, we will discuss the core architecture of our machine. We begin with a survey of the data structures supporting the small set of attribute-value logic instructions. These instructions manipulate feature structures by means of features, equality, and typing, and manipulate the program state by search and sequencing operations. We further show how these core operations can be integrated with a broad range of standard parsing techniques. Feature structures improve upon Prolog terms by allowing data to be organized by feature rather than by position. This encourages modular program development through the use of sparse structural descriptions which can be logically conjoined into larger units and directly executed. Standard linguistic representations, even of relatively simple local syntactic and semantic structures, typically run to hundreds of substructures. The type discipline we impose organizes information in an object-oriented manner by the multiple inheritance of classes and their associated features and type value constraints. In practice, this allows the construction of large-scale grammars in a relatively short period of time. At run-time, eager copying and structure-sharing is replaced with lazy, incremental, and localized branch and write operations. In order to allow for applications with parallel search, incremental backtracking can be localized to disjunctive choice points within the description of a single structure, thus supporting the kind of conditional mutual consistency checks used in modern grammatical theories such as HPSG, GB, and LFG. 
Further attention is paid to the byte-coding of instructions and their efficient indexing and subsequent retrieval, all of which is keyed on type information. 1 Motivation Modern attribute-value constraint-based grammars share their primary operational structure with logic programs. In the past decade, Prolog compilers, such as Warren's Abstract Machine (Aït-Kaci 1990), have supplanted interpreters as the execution method of choice for logic programs. This is in large part due to a 50-fold speed up in execution times and a reduction by an order of magnitude in terms of space required. In addition to efficiency, compilation also brings the opportunity for static error detection. The vast majority of the time and space used by traditional unification-based grammar interpreters is spent on copying and unifying feature structures. For example, in a bottom-up chart parser, the standard process would be first to build a feature structure for a lexical entry, then to build the feature structures for the relevant rules, and then to unify the matching structures. The principal drawback to this approach is that complete feature structures have to be constructed, even though unification may result in failure. In the case of failure, this can amount to a substantial amount of wasted time and space. By adopting an incremental compiled approach, a description is compiled into a set of abstract machine instructions. At run-time a description is evaluated incrementally, one instruction at a time. In this way, conflicts can be detected as early as possible, before any irrelevant structure has been introduced. In practice, this often means that the inconsistency of a rule with a category can often be detected very",1995
batra-etal-2021-building,https://aclanthology.org/2021.emnlp-main.53,0,,,,,,,"Building Adaptive Acceptability Classifiers for Neural NLG. We propose a novel framework to train models to classify acceptability of responses generated by natural language generation (NLG) models, improving upon existing sentence transformation and model-based approaches. An NLG response is considered acceptable if it is both semantically correct and grammatical. We don't make use of any human references making the classifiers suitable for runtime deployment. Training data for the classifiers is obtained using a 2-stage approach of first generating synthetic data using a combination of existing and new model-based approaches followed by a novel validation framework to filter and sort the synthetic data into acceptable and unacceptable classes. Our 2-stage approach adapts to a wide range of data representations and does not require additional data beyond what the NLG models are trained on. It is also independent of the underlying NLG model architecture, and is able to generate more realistic samples close to the distribution of the NLG model-generated responses. We present results on 5 datasets (WebNLG, Cleaned E2E, ViGGO, Alarm, and Weather) with varying data representations. We compare our framework with existing techniques that involve synthetic data generation using simple sentence transformations and/or model-based techniques, and show that building acceptability classifiers using data that resembles the generation model outputs followed by a validation framework outperforms the existing techniques, achieving state-of-the-art results. We also show that our techniques can be used in few-shot settings using self-training.",Building Adaptive Acceptability Classifiers for Neural {NLG},"We propose a novel framework to train models to classify acceptability of responses generated by natural language generation (NLG) models, improving upon existing sentence transformation and model-based approaches. An NLG response is considered acceptable if it is both semantically correct and grammatical. We don't make use of any human references making the classifiers suitable for runtime deployment. Training data for the classifiers is obtained using a 2-stage approach of first generating synthetic data using a combination of existing and new model-based approaches followed by a novel validation framework to filter and sort the synthetic data into acceptable and unacceptable classes. Our 2-stage approach adapts to a wide range of data representations and does not require additional data beyond what the NLG models are trained on. It is also independent of the underlying NLG model architecture, and is able to generate more realistic samples close to the distribution of the NLG model-generated responses. We present results on 5 datasets (WebNLG, Cleaned E2E, ViGGO, Alarm, and Weather) with varying data representations. We compare our framework with existing techniques that involve synthetic data generation using simple sentence transformations and/or model-based techniques, and show that building acceptability classifiers using data that resembles the generation model outputs followed by a validation framework outperforms the existing techniques, achieving state-of-the-art results. 
We also show that our techniques can be used in few-shot settings using self-training.",Building Adaptive Acceptability Classifiers for Neural NLG,"We propose a novel framework to train models to classify acceptability of responses generated by natural language generation (NLG) models, improving upon existing sentence transformation and model-based approaches. An NLG response is considered acceptable if it is both semantically correct and grammatical. We don't make use of any human references making the classifiers suitable for runtime deployment. Training data for the classifiers is obtained using a 2-stage approach of first generating synthetic data using a combination of existing and new model-based approaches followed by a novel validation framework to filter and sort the synthetic data into acceptable and unacceptable classes. Our 2-stage approach adapts to a wide range of data representations and does not require additional data beyond what the NLG models are trained on. It is also independent of the underlying NLG model architecture, and is able to generate more realistic samples close to the distribution of the NLG model-generated responses. We present results on 5 datasets (WebNLG, Cleaned E2E, ViGGO, Alarm, and Weather) with varying data representations. We compare our framework with existing techniques that involve synthetic data generation using simple sentence transformations and/or model-based techniques, and show that building acceptability classifiers using data that resembles the generation model outputs followed by a validation framework outperforms the existing techniques, achieving state-of-the-art results. We also show that our techniques can be used in few-shot settings using self-training.",,"Building Adaptive Acceptability Classifiers for Neural NLG. We propose a novel framework to train models to classify acceptability of responses generated by natural language generation (NLG) models, improving upon existing sentence transformation and model-based approaches. An NLG response is considered acceptable if it is both semantically correct and grammatical. We don't make use of any human references making the classifiers suitable for runtime deployment. Training data for the classifiers is obtained using a 2-stage approach of first generating synthetic data using a combination of existing and new model-based approaches followed by a novel validation framework to filter and sort the synthetic data into acceptable and unacceptable classes. Our 2-stage approach adapts to a wide range of data representations and does not require additional data beyond what the NLG models are trained on. It is also independent of the underlying NLG model architecture, and is able to generate more realistic samples close to the distribution of the NLG model-generated responses. We present results on 5 datasets (WebNLG, Cleaned E2E, ViGGO, Alarm, and Weather) with varying data representations. We compare our framework with existing techniques that involve synthetic data generation using simple sentence transformations and/or model-based techniques, and show that building acceptability classifiers using data that resembles the generation model outputs followed by a validation framework outperforms the existing techniques, achieving state-of-the-art results. We also show that our techniques can be used in few-shot settings using self-training.",2021
husain-etal-2011-clausal,https://aclanthology.org/I11-1143,0,,,,,,,"Clausal parsing helps data-driven dependency parsing: Experiments with Hindi. This paper investigates clausal data-driven dependency parsing. We first motivate a clause as the minimal parsing unit by correlating inter- and intra-clausal relations with relation type, depth, arc length and non-projectivity. This insight leads to a two-stage formulation of parsing where intra-clausal relations are identified in the 1st stage and inter-clausal relations are identified in the 2nd stage. We compare two ways of implementing this idea, one based on hard constraints (similar to the one used in constraint-based parsing) and one based on soft constraints (using a kind of parser stacking). Our results show that the approach using hard constraints seems most promising and performs significantly better than single-stage parsing. Our best result gives significant increase in LAS and UAS, respectively, over the previous best result using single-stage parsing.",Clausal parsing helps data-driven dependency parsing: Experiments with {H}indi,"This paper investigates clausal data-driven dependency parsing. We first motivate a clause as the minimal parsing unit by correlating inter- and intra-clausal relations with relation type, depth, arc length and non-projectivity. This insight leads to a two-stage formulation of parsing where intra-clausal relations are identified in the 1st stage and inter-clausal relations are identified in the 2nd stage. We compare two ways of implementing this idea, one based on hard constraints (similar to the one used in constraint-based parsing) and one based on soft constraints (using a kind of parser stacking). Our results show that the approach using hard constraints seems most promising and performs significantly better than single-stage parsing. Our best result gives significant increase in LAS and UAS, respectively, over the previous best result using single-stage parsing.",Clausal parsing helps data-driven dependency parsing: Experiments with Hindi,"This paper investigates clausal data-driven dependency parsing. We first motivate a clause as the minimal parsing unit by correlating inter- and intra-clausal relations with relation type, depth, arc length and non-projectivity. This insight leads to a two-stage formulation of parsing where intra-clausal relations are identified in the 1st stage and inter-clausal relations are identified in the 2nd stage. We compare two ways of implementing this idea, one based on hard constraints (similar to the one used in constraint-based parsing) and one based on soft constraints (using a kind of parser stacking). Our results show that the approach using hard constraints seems most promising and performs significantly better than single-stage parsing. Our best result gives significant increase in LAS and UAS, respectively, over the previous best result using single-stage parsing.",,"Clausal parsing helps data-driven dependency parsing: Experiments with Hindi. This paper investigates clausal data-driven dependency parsing. We first motivate a clause as the minimal parsing unit by correlating inter- and intra-clausal relations with relation type, depth, arc length and non-projectivity. This insight leads to a two-stage formulation of parsing where intra-clausal relations are identified in the 1st stage and inter-clausal relations are identified in the 2nd stage. 
We compare two ways of implementing this idea, one based on hard constraints (similar to the one used in constraint-based parsing) and one based on soft constraints (using a kind of parser stacking). Our results show that the approach using hard constraints seems most promising and performs significantly better than single-stage parsing. Our best result gives significant increase in LAS and UAS, respectively, over the previous best result using single-stage parsing.",2011
ws-2001-adaptation,https://aclanthology.org/W01-0300,0,,,,,,,Adaptation in Dialog Systems. ,Adaptation in Dialog Systems,,Adaptation in Dialog Systems,,,Adaptation in Dialog Systems. ,2001
maier-kallmeyer-2010-discontinuity,https://aclanthology.org/W10-4415,0,,,,,,,"Discontinuity and Non-Projectivity: Using Mildly Context-Sensitive Formalisms for Data-Driven Parsing. We present a parser for probabilistic Linear Context-Free Rewriting Systems and use it for constituency and dependency treebank parsing. The choice of LCFRS, a formalism with an extended domain of locality, enables us to model discontinuous constituents and non-projective dependencies in a straightforward way. The parsing results show that, firstly, our parser is efficient enough to be used for data-driven parsing and, secondly, its result quality for constituency parsing is comparable to the output quality of other state-of-the-art results, all while yielding structures that display discontinuous dependencies.",Discontinuity and Non-Projectivity: Using Mildly Context-Sensitive Formalisms for Data-Driven Parsing,"We present a parser for probabilistic Linear Context-Free Rewriting Systems and use it for constituency and dependency treebank parsing. The choice of LCFRS, a formalism with an extended domain of locality, enables us to model discontinuous constituents and non-projective dependencies in a straightforward way. The parsing results show that, firstly, our parser is efficient enough to be used for data-driven parsing and, secondly, its result quality for constituency parsing is comparable to the output quality of other state-of-the-art results, all while yielding structures that display discontinuous dependencies.",Discontinuity and Non-Projectivity: Using Mildly Context-Sensitive Formalisms for Data-Driven Parsing,"We present a parser for probabilistic Linear Context-Free Rewriting Systems and use it for constituency and dependency treebank parsing. The choice of LCFRS, a formalism with an extended domain of locality, enables us to model discontinuous constituents and non-projective dependencies in a straightforward way. The parsing results show that, firstly, our parser is efficient enough to be used for data-driven parsing and, secondly, its result quality for constituency parsing is comparable to the output quality of other state-of-the-art results, all while yielding structures that display discontinuous dependencies.",,"Discontinuity and Non-Projectivity: Using Mildly Context-Sensitive Formalisms for Data-Driven Parsing. We present a parser for probabilistic Linear Context-Free Rewriting Systems and use it for constituency and dependency treebank parsing. The choice of LCFRS, a formalism with an extended domain of locality, enables us to model discontinuous constituents and non-projective dependencies in a straightforward way. The parsing results show that, firstly, our parser is efficient enough to be used for data-driven parsing and, secondly, its result quality for constituency parsing is comparable to the output quality of other state-of-the-art results, all while yielding structures that display discontinuous dependencies.",2010
kiela-bottou-2014-learning,https://aclanthology.org/D14-1005,0,,,,,,,Learning Image Embeddings using Convolutional Neural Networks for Improved Multi-Modal Semantics. We construct multi-modal concept representations by concatenating a skip-gram linguistic representation vector with a visual concept representation vector computed using the feature extraction layers of a deep convolutional neural network (CNN) trained on a large labeled object recognition dataset. This transfer learning approach brings a clear performance gain over features based on the traditional bag-of-visual-word approach. Experimental results are reported on the WordSim353 and MEN semantic relatedness evaluation tasks. We use visual features computed using either ImageNet or ESP Game images.,Learning Image Embeddings using Convolutional Neural Networks for Improved Multi-Modal Semantics,We construct multi-modal concept representations by concatenating a skip-gram linguistic representation vector with a visual concept representation vector computed using the feature extraction layers of a deep convolutional neural network (CNN) trained on a large labeled object recognition dataset. This transfer learning approach brings a clear performance gain over features based on the traditional bag-of-visual-word approach. Experimental results are reported on the WordSim353 and MEN semantic relatedness evaluation tasks. We use visual features computed using either ImageNet or ESP Game images.,Learning Image Embeddings using Convolutional Neural Networks for Improved Multi-Modal Semantics,We construct multi-modal concept representations by concatenating a skip-gram linguistic representation vector with a visual concept representation vector computed using the feature extraction layers of a deep convolutional neural network (CNN) trained on a large labeled object recognition dataset. This transfer learning approach brings a clear performance gain over features based on the traditional bag-of-visual-word approach. Experimental results are reported on the WordSim353 and MEN semantic relatedness evaluation tasks. We use visual features computed using either ImageNet or ESP Game images.,We would like to thank Maxime Oquab for providing the feature extraction code.,Learning Image Embeddings using Convolutional Neural Networks for Improved Multi-Modal Semantics. We construct multi-modal concept representations by concatenating a skip-gram linguistic representation vector with a visual concept representation vector computed using the feature extraction layers of a deep convolutional neural network (CNN) trained on a large labeled object recognition dataset. This transfer learning approach brings a clear performance gain over features based on the traditional bag-of-visual-word approach. Experimental results are reported on the WordSim353 and MEN semantic relatedness evaluation tasks. We use visual features computed using either ImageNet or ESP Game images.,2014
kyle-etal-2013-native,https://aclanthology.org/W13-1731,0,,,,,,,"Native Language Identification: A Key N-gram Category Approach. This study explores the efficacy of an approach to native language identification that utilizes grammatical, rhetorical, semantic, syntactic, and cohesive function categories comprised of key n-grams. The study found that a model based on these categories of key n-grams was able to successfully predict the L1 of essays written in English by L2 learners from 11 different L1 backgrounds with an accuracy of 59%. Preliminary findings concerning instances of crosslinguistic influence are discussed, along with evidence of language similarities based on patterns of language misclassification.",Native Language Identification: A Key N-gram Category Approach,"This study explores the efficacy of an approach to native language identification that utilizes grammatical, rhetorical, semantic, syntactic, and cohesive function categories comprised of key n-grams. The study found that a model based on these categories of key n-grams was able to successfully predict the L1 of essays written in English by L2 learners from 11 different L1 backgrounds with an accuracy of 59%. Preliminary findings concerning instances of crosslinguistic influence are discussed, along with evidence of language similarities based on patterns of language misclassification.",Native Language Identification: A Key N-gram Category Approach,"This study explores the efficacy of an approach to native language identification that utilizes grammatical, rhetorical, semantic, syntactic, and cohesive function categories comprised of key n-grams. The study found that a model based on these categories of key n-grams was able to successfully predict the L1 of essays written in English by L2 learners from 11 different L1 backgrounds with an accuracy of 59%. Preliminary findings concerning instances of crosslinguistic influence are discussed, along with evidence of language similarities based on patterns of language misclassification.","We thank ETS for compiling and providing the TOEFL11 corpus, and we also thank the organizers of the NLI Shared Task 2013.","Native Language Identification: A Key N-gram Category Approach. This study explores the efficacy of an approach to native language identification that utilizes grammatical, rhetorical, semantic, syntactic, and cohesive function categories comprised of key n-grams. The study found that a model based on these categories of key n-grams was able to successfully predict the L1 of essays written in English by L2 learners from 11 different L1 backgrounds with an accuracy of 59%. Preliminary findings concerning instances of crosslinguistic influence are discussed, along with evidence of language similarities based on patterns of language misclassification.",2013
bingel-etal-2016-extracting,https://aclanthology.org/P16-1071,1,,,,health,,,"Extracting token-level signals of syntactic processing from fMRI - with an application to PoS induction. Neuro-imaging studies on reading different parts of speech (PoS) report somewhat mixed results, yet some of them indicate different activations with different PoS. This paper addresses the difficulty of using fMRI to discriminate between linguistic tokens in reading of running text because of low temporal resolution. We show that once we solve this problem, fMRI data contains a signal of PoS distinctions to the extent that it improves PoS induction with error reductions of more than 4%.",Extracting token-level signals of syntactic processing from f{MRI} - with an application to {P}o{S} induction,"Neuro-imaging studies on reading different parts of speech (PoS) report somewhat mixed results, yet some of them indicate different activations with different PoS. This paper addresses the difficulty of using fMRI to discriminate between linguistic tokens in reading of running text because of low temporal resolution. We show that once we solve this problem, fMRI data contains a signal of PoS distinctions to the extent that it improves PoS induction with error reductions of more than 4%.",Extracting token-level signals of syntactic processing from fMRI - with an application to PoS induction,"Neuro-imaging studies on reading different parts of speech (PoS) report somewhat mixed results, yet some of them indicate different activations with different PoS. This paper addresses the difficulty of using fMRI to discriminate between linguistic tokens in reading of running text because of low temporal resolution. We show that once we solve this problem, fMRI data contains a signal of PoS distinctions to the extent that it improves PoS induction with error reductions of more than 4%.","This research was partially funded by the ERC Starting Grant LOWLANDS No. 313695, as well as by Trygfonden.","Extracting token-level signals of syntactic processing from fMRI - with an application to PoS induction. Neuro-imaging studies on reading different parts of speech (PoS) report somewhat mixed results, yet some of them indicate different activations with different PoS. This paper addresses the difficulty of using fMRI to discriminate between linguistic tokens in reading of running text because of low temporal resolution. We show that once we solve this problem, fMRI data contains a signal of PoS distinctions to the extent that it improves PoS induction with error reductions of more than 4%.",2016
long-etal-2017-xjnlp,https://aclanthology.org/S17-2178,1,,,,health,,,"XJNLP at SemEval-2017 Task 12: Clinical temporal information extraction with a Hybrid Model. Temporality is crucial in understanding the course of clinical events from a patient's electronic health records and temporal processing is becoming more and more important for improving access to content. SemEval 2017 Task 12 (Clinical TempEval) addressed this challenge using the THYME corpus, a corpus of clinical narratives annotated with a schema based on TimeML2 guidelines. We developed and evaluated approaches for: extraction of temporal expressions (TIMEX3) and EVENTs; EVENT attributes; document-time relations. Our approach is a hybrid model which is based on rule based methods, semi-supervised learning, and semantic features with addition of manually crafted rules.",{XJNLP} at {S}em{E}val-2017 Task 12: Clinical temporal information extraction with a Hybrid Model,"Temporality is crucial in understanding the course of clinical events from a patient's electronic health records and temporal processing is becoming more and more important for improving access to content. SemEval 2017 Task 12 (Clinical TempEval) addressed this challenge using the THYME corpus, a corpus of clinical narratives annotated with a schema based on TimeML2 guidelines. We developed and evaluated approaches for: extraction of temporal expressions (TIMEX3) and EVENTs; EVENT attributes; document-time relations. Our approach is a hybrid model which is based on rule based methods, semi-supervised learning, and semantic features with addition of manually crafted rules.",XJNLP at SemEval-2017 Task 12: Clinical temporal information extraction with a Hybrid Model,"Temporality is crucial in understanding the course of clinical events from a patient's electronic health records and temporal processing is becoming more and more important for improving access to content. SemEval 2017 Task 12 (Clinical TempEval) addressed this challenge using the THYME corpus, a corpus of clinical narratives annotated with a schema based on TimeML2 guidelines. We developed and evaluated approaches for: extraction of temporal expressions (TIMEX3) and EVENTs; EVENT attributes; document-time relations. Our approach is a hybrid model which is based on rule based methods, semi-supervised learning, and semantic features with addition of manually crafted rules.","This work has been supported by ""The Fundamental Theory and Applications of Big Data with ","XJNLP at SemEval-2017 Task 12: Clinical temporal information extraction with a Hybrid Model. Temporality is crucial in understanding the course of clinical events from a patient's electronic health records and temporal processing is becoming more and more important for improving access to content. SemEval 2017 Task 12 (Clinical TempEval) addressed this challenge using the THYME corpus, a corpus of clinical narratives annotated with a schema based on TimeML2 guidelines. We developed and evaluated approaches for: extraction of temporal expressions (TIMEX3) and EVENTs; EVENT attributes; document-time relations. Our approach is a hybrid model which is based on rule based methods, semi-supervised learning, and semantic features with addition of manually crafted rules.",2017
evans-1996-legitimate,https://aclanthology.org/Y96-1033,0,,,,,,,"Legitimate Termination of Nonlocal Features in HPSG. This paper reviews the treatment of wh-question facts offered by Lappin and Johnson 1996, and suggests that their account of certain island phenomena should be adapted by assuming that certain phrase structures license binding of inherited features. In Japanese, Lappin and Johnson's INHERILQUE feature appears to be dependent on INHERIQUE in order to terminate with a functional C head's TO-BINDIQUE. For certain languages, C's TO-BINDILQUE feature must be null if TO-BINDIQUE is null. In the spirit of Sag 1996 and Pollard and Yoo 1996, the facts can be handled by saying that TO-BINDILQUE is licensed on a wh-clause (wh-cl). As a wh-cl requires TO-BINDIQUE, the dependence of the less robust INHERILQUE on INHERIQUE is thus explained.",Legitimate Termination of Nonlocal Features in {HPSG},"This paper reviews the treatment of wh-question facts offered by Lappin and Johnson 1996, and suggests that their account of certain island phenomena should be adapted by assuming that certain phrase structures license binding of inherited features. In Japanese, Lappin and Johnson's INHERILQUE feature appears to be dependent on INHERIQUE in order to terminate with a functional C head's TO-BINDIQUE. For certain languages, C's TO-BINDILQUE feature must be null if TO-BINDIQUE is null. In the spirit of Sag 1996 and Pollard and Yoo 1996, the facts can be handled by saying that TO-BINDILQUE is licensed on a wh-clause (wh-cl). As a wh-cl requires TO-BINDIQUE, the dependence of the less robust INHERILQUE on INHERIQUE is thus explained.",Legitimate Termination of Nonlocal Features in HPSG,"This paper reviews the treatment of wh-question facts offered by Lappin and Johnson 1996, and suggests that their account of certain island phenomena should be adapted by assuming that certain phrase structures license binding of inherited features. In Japanese, Lappin and Johnson's INHERILQUE feature appears to be dependent on INHERIQUE in order to terminate with a functional C head's TO-BINDIQUE. For certain languages, C's TO-BINDILQUE feature must be null if TO-BINDIQUE is null. In the spirit of Sag 1996 and Pollard and Yoo 1996, the facts can be handled by saying that TO-BINDILQUE is licensed on a wh-clause (wh-cl). As a wh-cl requires TO-BINDIQUE, the dependence of the less robust INHERILQUE on INHERIQUE is thus explained.",,"Legitimate Termination of Nonlocal Features in HPSG. This paper reviews the treatment of wh-question facts offered by Lappin and Johnson 1996, and suggests that their account of certain island phenomena should be adapted by assuming that certain phrase structures license binding of inherited features. In Japanese, Lappin and Johnson's INHERILQUE feature appears to be dependent on INHERIQUE in order to terminate with a functional C head's TO-BINDIQUE. For certain languages, C's TO-BINDILQUE feature must be null if TO-BINDIQUE is null. In the spirit of Sag 1996 and Pollard and Yoo 1996, the facts can be handled by saying that TO-BINDILQUE is licensed on a wh-clause (wh-cl). As a wh-cl requires TO-BINDIQUE, the dependence of the less robust INHERILQUE on INHERIQUE is thus explained.",1996
suzuki-2012-classifying,https://aclanthology.org/W12-5307,0,,,,,,,"Classifying Hotel Reviews into Criteria for Review Summarization. Recently, we can refer to user reviews in the shopping or hotel reservation sites. However, with the exponential growth of information of the Internet, it is becoming increasingly difficult for a user to read and understand all the materials from a large-scale reviews. In this paper, we propose a method for classifying hotel reviews written in Japanese into criteria, e.g., location and facilities. Our system firstly extracts words which represent criteria from hotel reviews. The extracted words are classified into 12 criteria classes. Then, for each hotel, each sentence of the guest reviews is classified into criterion classes by using two different types of Naive Bayes classifiers. We performed experiments for estimating accuracy of classifying hotel review into 12 criteria. The results showed the effectiveness of our method and indicated that it can be used for review summarization by guest's criteria.",Classifying Hotel Reviews into Criteria for Review Summarization,"Recently, we can refer to user reviews in the shopping or hotel reservation sites. However, with the exponential growth of information of the Internet, it is becoming increasingly difficult for a user to read and understand all the materials from a large-scale reviews. In this paper, we propose a method for classifying hotel reviews written in Japanese into criteria, e.g., location and facilities. Our system firstly extracts words which represent criteria from hotel reviews. The extracted words are classified into 12 criteria classes. Then, for each hotel, each sentence of the guest reviews is classified into criterion classes by using two different types of Naive Bayes classifiers. We performed experiments for estimating accuracy of classifying hotel review into 12 criteria. The results showed the effectiveness of our method and indicated that it can be used for review summarization by guest's criteria.",Classifying Hotel Reviews into Criteria for Review Summarization,"Recently, we can refer to user reviews in the shopping or hotel reservation sites. However, with the exponential growth of information of the Internet, it is becoming increasingly difficult for a user to read and understand all the materials from a large-scale reviews. In this paper, we propose a method for classifying hotel reviews written in Japanese into criteria, e.g., location and facilities. Our system firstly extracts words which represent criteria from hotel reviews. The extracted words are classified into 12 criteria classes. Then, for each hotel, each sentence of the guest reviews is classified into criterion classes by using two different types of Naive Bayes classifiers. We performed experiments for estimating accuracy of classifying hotel review into 12 criteria. The results showed the effectiveness of our method and indicated that it can be used for review summarization by guest's criteria.",The authors would like to thank the referees for their comments on the earlier version of this paper. This work was partially supported by The Telecommunications Advancement Foundation.,"Classifying Hotel Reviews into Criteria for Review Summarization. Recently, we can refer to user reviews in the shopping or hotel reservation sites. However, with the exponential growth of information of the Internet, it is becoming increasingly difficult for a user to read and understand all the materials from a large-scale reviews. 
In this paper, we propose a method for classifying hotel reviews written in Japanese into criteria, e.g., location and facilities. Our system firstly extracts words which represent criteria from hotel reviews. The extracted words are classified into 12 criteria classes. Then, for each hotel, each sentence of the guest reviews is classified into criterion classes by using two different types of Naive Bayes classifiers. We performed experiments for estimating accuracy of classifying hotel review into 12 criteria. The results showed the effectiveness of our method and indicated that it can be used for review summarization by guest's criteria.",2012
inaba-1996-computational,https://aclanthology.org/Y96-1029,0,,,,,,,"A Computational Expression of Initial Binary Feet and Surface Ternary Feet in Metrical Theory. Under the strict binary foot parsing (Kager 1993), stray elements may occur between bimoraic feet. The stray element may be associated to the preceding foot or following foot at surface level. Stray element adjunction is the mechanism for achieving surface exhaustivity. Each language has its own unique mechanism of stray element adjunction in order to achieve surface exhaustivity. In Japanese loanwords, the strict binary initial foot parsing creates stray moras. Inaba's (1996) phonetic experiment shows that the word-medial stray moras associate to preceding feet, and provides evidence for the initial unaccented mora as extrametrical. Since the theoretical points I advance are deeply embedded in other languages, I present a set of possible parameters. Based on the set of parameters, I create a computer program which derives the surface foot structures of input loanwords in Japanese, Fijian, and Ponapean.",A Computational Expression of Initial Binary Feet and Surface Ternary Feet in Metrical Theory,"Under the strict binary foot parsing (Kager 1993), stray elements may occur between bimoraic feet. The stray element may be associated to the preceding foot or following foot at surface level. Stray element adjunction is the mechanism for achieving surface exhaustivity. Each language has its own unique mechanism of stray element adjunction in order to achieve surface exhaustivity. In Japanese loanwords, the strict binary initial foot parsing creates stray moras. Inaba's (1996) phonetic experiment shows that the word-medial stray moras associate to preceding feet, and provides evidence for the initial unaccented mora as extrametrical. Since the theoretical points I advance are deeply embedded in other languages, I present a set of possible parameters. Based on the set of parameters, I create a computer program which derives the surface foot structures of input loanwords in Japanese, Fijian, and Ponapean.",A Computational Expression of Initial Binary Feet and Surface Ternary Feet in Metrical Theory,"Under the strict binary foot parsing (Kager 1993), stray elements may occur between bimoraic feet. The stray element may be associated to the preceding foot or following foot at surface level. Stray element adjunction is the mechanism for achieving surface exhaustivity. Each language has its own unique mechanism of stray element adjunction in order to achieve surface exhaustivity. In Japanese loanwords, the strict binary initial foot parsing creates stray moras. Inaba's (1996) phonetic experiment shows that the word-medial stray moras associate to preceding feet, and provides evidence for the initial unaccented mora as extrametrical. Since the theoretical points I advance are deeply embedded in other languages, I present a set of possible parameters. Based on the set of parameters, I create a computer program which derives the surface foot structures of input loanwords in Japanese, Fijian, and Ponapean.",,"A Computational Expression of Initial Binary Feet and Surface Ternary Feet in Metrical Theory. Under the strict binary foot parsing (Kager 1993), stray elements may occur between bimoraic feet. The stray element may be associated to the preceding foot or following foot at surface level. Stray element adjunction is the mechanism for achieving surface exhaustivity. 
Each language has its own unique mechanism of stray element adjunction in order to achieve surface exhaustivity. In Japanese loanwords, the strict binary initial foot parsing creates stray moras. Inaba's (1996) phonetic experiment shows that the word-medial stray moras associate to preceding feet, and provides evidence for the initial unaccented mora as extrametrical. Since the theoretical points I advance are deeply embedded in other languages, I present a set of possible parameters. Based on the set of parameters, I create a computer program which derives the surface foot structures of input loanwords in Japanese, Fijian, and Ponapean.",1996
silva-etal-2010-top,http://www.lrec-conf.org/proceedings/lrec2010/pdf/136_Paper.pdf,0,,,,,,,"Top-Performing Robust Constituency Parsing of Portuguese: Freely Available in as Many Ways as you Can Get it. In this paper we present LX-Parser, a probabilistic, robust constituency parser for Portuguese. This parser achieves ca. 88% f-score in the labeled bracketing task, thus reaching a state-of-the-art performance score that is in line with those that are currently obtained by top-ranking parsers for English, the most studied natural language. To the best of our knowledge, LX-Parser is the first state-of-the-art, robust constituency parser for Portuguese that is made freely available. This parser is being distributed in a variety of ways, each suited for a different type of usage. More specifically, LX-Parser is being made available (i) as a downloadable, stand-alone parsing tool that can be run locally by its users; (ii) as a Web service that exposes an interface that can be invoked remotely and transparently by client applications; and finally (iii) as an on-line parsing service, aimed at human users, that can be accessed through any common Web browser.",Top-Performing Robust Constituency Parsing of {P}ortuguese: Freely Available in as Many Ways as you Can Get it,"In this paper we present LX-Parser, a probabilistic, robust constituency parser for Portuguese. This parser achieves ca. 88% f-score in the labeled bracketing task, thus reaching a state-of-the-art performance score that is in line with those that are currently obtained by top-ranking parsers for English, the most studied natural language. To the best of our knowledge, LX-Parser is the first state-of-the-art, robust constituency parser for Portuguese that is made freely available. This parser is being distributed in a variety of ways, each suited for a different type of usage. More specifically, LX-Parser is being made available (i) as a downloadable, stand-alone parsing tool that can be run locally by its users; (ii) as a Web service that exposes an interface that can be invoked remotely and transparently by client applications; and finally (iii) as an on-line parsing service, aimed at human users, that can be accessed through any common Web browser.",Top-Performing Robust Constituency Parsing of Portuguese: Freely Available in as Many Ways as you Can Get it,"In this paper we present LX-Parser, a probabilistic, robust constituency parser for Portuguese. This parser achieves ca. 88% f-score in the labeled bracketing task, thus reaching a state-of-the-art performance score that is in line with those that are currently obtained by top-ranking parsers for English, the most studied natural language. To the best of our knowledge, LX-Parser is the first state-of-the-art, robust constituency parser for Portuguese that is made freely available. This parser is being distributed in a variety of ways, each suited for a different type of usage. More specifically, LX-Parser is being made available (i) as a downloadable, stand-alone parsing tool that can be run locally by its users; (ii) as a Web service that exposes an interface that can be invoked remotely and transparently by client applications; and finally (iii) as an on-line parsing service, aimed at human users, that can be accessed through any common Web browser.",,"Top-Performing Robust Constituency Parsing of Portuguese: Freely Available in as Many Ways as you Can Get it. In this paper we present LX-Parser, a probabilistic, robust constituency parser for Portuguese. 
This parser achieves ca. 88% f-score in the labeled bracketing task, thus reaching a state-of-the-art performance score that is in line with those that are currently obtained by top-ranking parsers for English, the most studied natural language. To the best of our knowledge, LX-Parser is the first state-of-the-art, robust constituency parser for Portuguese that is made freely available. This parser is being distributed in a variety of ways, each suited for a different type of usage. More specifically, LX-Parser is being made available (i) as a downloadable, stand-alone parsing tool that can be run locally by its users; (ii) as a Web service that exposes an interface that can be invoked remotely and transparently by client applications; and finally (iii) as an on-line parsing service, aimed at human users, that can be accessed through any common Web browser.",2010
perez-beltrachini-lapata-2021-models,https://aclanthology.org/2021.emnlp-main.742,0,,,,,,,"Models and Datasets for Cross-Lingual Summarisation. We present a cross-lingual summarisation corpus with long documents in a source language associated with multi-sentence summaries in a target language. The corpus covers twelve language pairs and directions for four European languages, namely Czech, English, French and German, and the methodology for its creation can be applied to several other languages. We derive cross-lingual document-summary instances from Wikipedia by combining lead paragraphs and articles' bodies from language aligned Wikipedia titles. We analyse the proposed cross-lingual summarisation task with automatic metrics and validate it with a human study. To illustrate the utility of our dataset we report experiments with multilingual pretrained models in supervised, zero-and fewshot, and out-of-domain scenarios.",Models and Datasets for Cross-Lingual Summarisation,"We present a cross-lingual summarisation corpus with long documents in a source language associated with multi-sentence summaries in a target language. The corpus covers twelve language pairs and directions for four European languages, namely Czech, English, French and German, and the methodology for its creation can be applied to several other languages. We derive cross-lingual document-summary instances from Wikipedia by combining lead paragraphs and articles' bodies from language aligned Wikipedia titles. We analyse the proposed cross-lingual summarisation task with automatic metrics and validate it with a human study. To illustrate the utility of our dataset we report experiments with multilingual pretrained models in supervised, zero-and fewshot, and out-of-domain scenarios.",Models and Datasets for Cross-Lingual Summarisation,"We present a cross-lingual summarisation corpus with long documents in a source language associated with multi-sentence summaries in a target language. The corpus covers twelve language pairs and directions for four European languages, namely Czech, English, French and German, and the methodology for its creation can be applied to several other languages. We derive cross-lingual document-summary instances from Wikipedia by combining lead paragraphs and articles' bodies from language aligned Wikipedia titles. We analyse the proposed cross-lingual summarisation task with automatic metrics and validate it with a human study. To illustrate the utility of our dataset we report experiments with multilingual pretrained models in supervised, zero-and fewshot, and out-of-domain scenarios.",We thank the anonymous reviewers for their feedback. We also thank Yumo Xu for useful discussions about the models. We are extremely grateful to our bilingual annotators and to Voxeurop SCE publishers. We gratefully acknowledge the support of the European Research Council (award number 681760).,"Models and Datasets for Cross-Lingual Summarisation. We present a cross-lingual summarisation corpus with long documents in a source language associated with multi-sentence summaries in a target language. The corpus covers twelve language pairs and directions for four European languages, namely Czech, English, French and German, and the methodology for its creation can be applied to several other languages. We derive cross-lingual document-summary instances from Wikipedia by combining lead paragraphs and articles' bodies from language aligned Wikipedia titles. 
We analyse the proposed cross-lingual summarisation task with automatic metrics and validate it with a human study. To illustrate the utility of our dataset we report experiments with multilingual pretrained models in supervised, zero- and few-shot, and out-of-domain scenarios.",2021
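The record above (perez-beltrachini-lapata-2021-models) describes building cross-lingual document-summary pairs by combining lead paragraphs and article bodies across language-aligned Wikipedia titles. The following is a minimal illustrative sketch of that pairing idea, not the authors' pipeline; the `articles` structure and all values are invented placeholders standing in for parsed Wikipedia dumps.

```python
# Illustrative sketch (not the authors' code): pairing cross-lingual
# document-summary instances from language-aligned Wikipedia articles.
# `articles` is a hypothetical toy structure; in practice the data would come
# from Wikipedia dumps aligned via interlanguage title links.

articles = {
    "Machine translation": {  # one aligned topic across language editions
        "en": {"lead": "Machine translation is ...", "body": "History of MT ..."},
        "de": {"lead": "Maschinelle Übersetzung ist ...", "body": "Geschichte ..."},
    },
}

def make_pairs(articles, src_lang, tgt_lang):
    """Yield (topic, source-language body, target-language lead) pairs."""
    for topic, versions in articles.items():
        if src_lang in versions and tgt_lang in versions:
            document = versions[src_lang]["body"]   # long source-language article body
            summary = versions[tgt_lang]["lead"]    # multi-sentence target-language lead
            yield topic, document, summary

for topic, doc, summ in make_pairs(articles, src_lang="de", tgt_lang="en"):
    print(topic, "->", summ[:40])
```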
komninos-manandhar-2016-dependency,https://aclanthology.org/N16-1175,0,,,,,,,"Dependency Based Embeddings for Sentence Classification Tasks. We compare different word embeddings from a standard window based skipgram model, a skipgram model trained using dependency context features and a novel skipgram variant that utilizes additional information from dependency graphs. We explore the effectiveness of the different types of word embeddings for word similarity and sentence classification tasks. We consider three common sentence classification tasks: question type classification on the TREC dataset, binary sentiment classification on Stanford's Sentiment Treebank and semantic relation classification on the SemEval 2010 dataset. For each task we use three different classification methods: a Support Vector Machine, a Convolutional Neural Network and a Long Short Term Memory Network. Our experiments show that dependency based embeddings outperform standard window based embeddings in most of the settings, while using dependency context embeddings as additional features improves performance in all tasks regardless of the classification method. Our embeddings and code are available at https://www.cs.york.ac.uk/nlp/ extvec",Dependency Based Embeddings for Sentence Classification Tasks,"We compare different word embeddings from a standard window based skipgram model, a skipgram model trained using dependency context features and a novel skipgram variant that utilizes additional information from dependency graphs. We explore the effectiveness of the different types of word embeddings for word similarity and sentence classification tasks. We consider three common sentence classification tasks: question type classification on the TREC dataset, binary sentiment classification on Stanford's Sentiment Treebank and semantic relation classification on the SemEval 2010 dataset. For each task we use three different classification methods: a Support Vector Machine, a Convolutional Neural Network and a Long Short Term Memory Network. Our experiments show that dependency based embeddings outperform standard window based embeddings in most of the settings, while using dependency context embeddings as additional features improves performance in all tasks regardless of the classification method. Our embeddings and code are available at https://www.cs.york.ac.uk/nlp/ extvec",Dependency Based Embeddings for Sentence Classification Tasks,"We compare different word embeddings from a standard window based skipgram model, a skipgram model trained using dependency context features and a novel skipgram variant that utilizes additional information from dependency graphs. We explore the effectiveness of the different types of word embeddings for word similarity and sentence classification tasks. We consider three common sentence classification tasks: question type classification on the TREC dataset, binary sentiment classification on Stanford's Sentiment Treebank and semantic relation classification on the SemEval 2010 dataset. For each task we use three different classification methods: a Support Vector Machine, a Convolutional Neural Network and a Long Short Term Memory Network. Our experiments show that dependency based embeddings outperform standard window based embeddings in most of the settings, while using dependency context embeddings as additional features improves performance in all tasks regardless of the classification method. 
Our embeddings and code are available at https://www.cs.york.ac.uk/nlp/ extvec","Alexandros Komninos was supported by EP-SRC via an Engineering Doctorate in LSCITS. Suresh Manandhar was supported by EPSRC grant EP/I037512/1, A Unified Model of Compositional & Distributional Semantics: Theory and Application.","Dependency Based Embeddings for Sentence Classification Tasks. We compare different word embeddings from a standard window based skipgram model, a skipgram model trained using dependency context features and a novel skipgram variant that utilizes additional information from dependency graphs. We explore the effectiveness of the different types of word embeddings for word similarity and sentence classification tasks. We consider three common sentence classification tasks: question type classification on the TREC dataset, binary sentiment classification on Stanford's Sentiment Treebank and semantic relation classification on the SemEval 2010 dataset. For each task we use three different classification methods: a Support Vector Machine, a Convolutional Neural Network and a Long Short Term Memory Network. Our experiments show that dependency based embeddings outperform standard window based embeddings in most of the settings, while using dependency context embeddings as additional features improves performance in all tasks regardless of the classification method. Our embeddings and code are available at https://www.cs.york.ac.uk/nlp/ extvec",2016
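The record above (komninos-manandhar-2016-dependency) contrasts window-based skipgram contexts with dependency-based contexts. As a hedged sketch of the general dependency-context idea (not the authors' released code or their exact variant), the snippet below turns hard-coded dependency triples into (target word, syntactic context) training pairs; a real setup would obtain the triples from a parser.

```python
# Minimal sketch of dependency-based contexts for a skipgram-style model.
# The triples are hard-coded for illustration only.

# (head, relation, dependent) triples for "scientist discovers star"
triples = [
    ("discovers", "nsubj", "scientist"),
    ("discovers", "dobj", "star"),
]

def dependency_pairs(triples):
    """Produce (target word, syntactic context) training pairs."""
    pairs = []
    for head, rel, dep in triples:
        pairs.append((head, f"{dep}/{rel}"))        # head sees dependent via rel
        pairs.append((dep, f"{head}/{rel}_inv"))    # dependent sees head via inverse rel
    return pairs

print(dependency_pairs(triples))
# [('discovers', 'scientist/nsubj'), ('scientist', 'discovers/nsubj_inv'),
#  ('discovers', 'star/dobj'), ('star', 'discovers/dobj_inv')]
```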
merlo-van-der-plas-2009-abstraction,https://aclanthology.org/P09-1033,0,,,,,,,"Abstraction and Generalisation in Semantic Role Labels: PropBank, VerbNet or both?. Semantic role labels are the representation of the grammatically relevant aspects of a sentence meaning. Capturing the nature and the number of semantic roles in a sentence is therefore fundamental to correctly describing the interface between grammar and meaning. In this paper, we compare two annotation schemes, Prop-Bank and VerbNet, in a task-independent, general way, analysing how well they fare in capturing the linguistic generalisations that are known to hold for semantic role labels, and consequently how well they grammaticalise aspects of meaning. We show that VerbNet is more verb-specific and better able to generalise to new semantic role instances, while PropBank better captures some of the structural constraints among roles. We conclude that these two resources should be used together, as they are complementary.","Abstraction and Generalisation in Semantic Role Labels: {P}rop{B}ank, {V}erb{N}et or both?","Semantic role labels are the representation of the grammatically relevant aspects of a sentence meaning. Capturing the nature and the number of semantic roles in a sentence is therefore fundamental to correctly describing the interface between grammar and meaning. In this paper, we compare two annotation schemes, Prop-Bank and VerbNet, in a task-independent, general way, analysing how well they fare in capturing the linguistic generalisations that are known to hold for semantic role labels, and consequently how well they grammaticalise aspects of meaning. We show that VerbNet is more verb-specific and better able to generalise to new semantic role instances, while PropBank better captures some of the structural constraints among roles. We conclude that these two resources should be used together, as they are complementary.","Abstraction and Generalisation in Semantic Role Labels: PropBank, VerbNet or both?","Semantic role labels are the representation of the grammatically relevant aspects of a sentence meaning. Capturing the nature and the number of semantic roles in a sentence is therefore fundamental to correctly describing the interface between grammar and meaning. In this paper, we compare two annotation schemes, Prop-Bank and VerbNet, in a task-independent, general way, analysing how well they fare in capturing the linguistic generalisations that are known to hold for semantic role labels, and consequently how well they grammaticalise aspects of meaning. We show that VerbNet is more verb-specific and better able to generalise to new semantic role instances, while PropBank better captures some of the structural constraints among roles. We conclude that these two resources should be used together, as they are complementary.",We thank James Henderson and Ivan Titov for useful comments. The research leading to these results has received partial funding from the EU FP7 programme (FP7/2007-2013) under grant agreement number 216594 (CLASSIC project: www.classic-project.org).,"Abstraction and Generalisation in Semantic Role Labels: PropBank, VerbNet or both?. Semantic role labels are the representation of the grammatically relevant aspects of a sentence meaning. Capturing the nature and the number of semantic roles in a sentence is therefore fundamental to correctly describing the interface between grammar and meaning. 
In this paper, we compare two annotation schemes, PropBank and VerbNet, in a task-independent, general way, analysing how well they fare in capturing the linguistic generalisations that are known to hold for semantic role labels, and consequently how well they grammaticalise aspects of meaning. We show that VerbNet is more verb-specific and better able to generalise to new semantic role instances, while PropBank better captures some of the structural constraints among roles. We conclude that these two resources should be used together, as they are complementary.",2009
green-2011-effects,https://aclanthology.org/P11-3013,0,,,,,,,"Effects of Noun Phrase Bracketing in Dependency Parsing and Machine Translation. Flat noun phrase structure was, up until recently, the standard in annotation for the Penn Treebanks. With the recent addition of internal noun phrase annotation, dependency parsing and applications down the NLP pipeline are likely affected. Some machine translation systems, such as TectoMT, use deep syntax as a language transfer layer. It is proposed that changes to the noun phrase dependency parse will have a cascading effect down the NLP pipeline and in the end, improve machine translation output, even with a reduction in parser accuracy that the noun phrase structure might cause. This paper examines this noun phrase structure's effect on dependency parsing, in English, with a maximum spanning tree parser and shows a 2.43%, 0.23 Bleu score, improvement for English to Czech machine translation.",Effects of Noun Phrase Bracketing in Dependency Parsing and Machine Translation,"Flat noun phrase structure was, up until recently, the standard in annotation for the Penn Treebanks. With the recent addition of internal noun phrase annotation, dependency parsing and applications down the NLP pipeline are likely affected. Some machine translation systems, such as TectoMT, use deep syntax as a language transfer layer. It is proposed that changes to the noun phrase dependency parse will have a cascading effect down the NLP pipeline and in the end, improve machine translation output, even with a reduction in parser accuracy that the noun phrase structure might cause. This paper examines this noun phrase structure's effect on dependency parsing, in English, with a maximum spanning tree parser and shows a 2.43%, 0.23 Bleu score, improvement for English to Czech machine translation.",Effects of Noun Phrase Bracketing in Dependency Parsing and Machine Translation,"Flat noun phrase structure was, up until recently, the standard in annotation for the Penn Treebanks. With the recent addition of internal noun phrase annotation, dependency parsing and applications down the NLP pipeline are likely affected. Some machine translation systems, such as TectoMT, use deep syntax as a language transfer layer. It is proposed that changes to the noun phrase dependency parse will have a cascading effect down the NLP pipeline and in the end, improve machine translation output, even with a reduction in parser accuracy that the noun phrase structure might cause. This paper examines this noun phrase structure's effect on dependency parsing, in English, with a maximum spanning tree parser and shows a 2.43%, 0.23 Bleu score, improvement for English to Czech machine translation.","This research has received funding from the European Commissions 7th Framework Program (FP7) under grant agreement n • 238405 (CLARA), and from grant MSM 0021620838. I would like to thank ZdeněkŽabokrtský for his guidance in this research and also the anonymous reviewers for their comments.","Effects of Noun Phrase Bracketing in Dependency Parsing and Machine Translation. Flat noun phrase structure was, up until recently, the standard in annotation for the Penn Treebanks. With the recent addition of internal noun phrase annotation, dependency parsing and applications down the NLP pipeline are likely affected. Some machine translation systems, such as TectoMT, use deep syntax as a language transfer layer. 
It is proposed that changes to the noun phrase dependency parse will have a cascading effect down the NLP pipeline and in the end, improve machine translation output, even with a reduction in parser accuracy that the noun phrase structure might cause. This paper examines this noun phrase structure's effect on dependency parsing, in English, with a maximum spanning tree parser and shows a 2.43%, 0.23 Bleu score, improvement for English to Czech machine translation.",2011
fei-etal-2020-mimic,https://aclanthology.org/2020.findings-emnlp.18,0,,,,,,,"Mimic and Conquer: Heterogeneous Tree Structure Distillation for Syntactic NLP. Syntax has been shown useful for various NLP tasks, while existing work mostly encodes singleton syntactic tree using one hierarchical neural network. In this paper, we investigate a simple and effective method, Knowledge Distillation, to integrate heterogeneous structure knowledge into a unified sequential LSTM encoder. Experimental results on four typical syntax-dependent tasks show that our method outperforms tree encoders by effectively integrating rich heterogeneous structure syntax, meanwhile reducing error propagation, and also outperforms ensemble methods, in terms of both the efficiency and accuracy.",Mimic and Conquer: Heterogeneous Tree Structure Distillation for Syntactic {NLP},"Syntax has been shown useful for various NLP tasks, while existing work mostly encodes singleton syntactic tree using one hierarchical neural network. In this paper, we investigate a simple and effective method, Knowledge Distillation, to integrate heterogeneous structure knowledge into a unified sequential LSTM encoder. Experimental results on four typical syntax-dependent tasks show that our method outperforms tree encoders by effectively integrating rich heterogeneous structure syntax, meanwhile reducing error propagation, and also outperforms ensemble methods, in terms of both the efficiency and accuracy.",Mimic and Conquer: Heterogeneous Tree Structure Distillation for Syntactic NLP,"Syntax has been shown useful for various NLP tasks, while existing work mostly encodes singleton syntactic tree using one hierarchical neural network. In this paper, we investigate a simple and effective method, Knowledge Distillation, to integrate heterogeneous structure knowledge into a unified sequential LSTM encoder. Experimental results on four typical syntax-dependent tasks show that our method outperforms tree encoders by effectively integrating rich heterogeneous structure syntax, meanwhile reducing error propagation, and also outperforms ensemble methods, in terms of both the efficiency and accuracy.","This work is supported by the National Natural Science Foundation of China (No. 61772378, No. 61702121) ","Mimic and Conquer: Heterogeneous Tree Structure Distillation for Syntactic NLP. Syntax has been shown useful for various NLP tasks, while existing work mostly encodes singleton syntactic tree using one hierarchical neural network. In this paper, we investigate a simple and effective method, Knowledge Distillation, to integrate heterogeneous structure knowledge into a unified sequential LSTM encoder. Experimental results on four typical syntax-dependent tasks show that our method outperforms tree encoders by effectively integrating rich heterogeneous structure syntax, meanwhile reducing error propagation, and also outperforms ensemble methods, in terms of both the efficiency and accuracy.",2020
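The record above (fei-etal-2020-mimic) distils heterogeneous tree-structure knowledge into a sequential LSTM encoder. Below is a generic temperature-scaled knowledge-distillation loss in PyTorch, offered only as a sketch of the kind of teacher-to-student objective involved; the hyperparameters, shapes, and blending scheme are illustrative assumptions, not taken from the paper.

```python
# Generic soft-target distillation loss: the student mimics teacher logits
# while still being supervised by gold labels. Values are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend the teacher->student KL term with the hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

student = torch.randn(8, 5)            # batch of 8 examples, 5 classes
teacher = torch.randn(8, 5)            # stands in for a tree-encoder teacher
labels = torch.randint(0, 5, (8,))
print(distillation_loss(student, teacher, labels).item())
```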
mcintosh-2009-canadian,https://aclanthology.org/2009.mtsummit-government.4,1,,,,decent_work_and_economy,,,"Canadian Job Bank Automated Translation System. § Job Bank (www.jobbank.gc.ca) is a free job-posting service provided by the Federal Government to all Canadians. Employers have the option to create a profile; upon approval, they can then post job offers. § Job seekers are able to access these positions in two ways. Standard Job Search and Job Matching § Several additional tools are available to assist the Job Seeker with their job search; such as resume builder, job alert, job search tips, and career navigator Job Bank for Employers § Employers can post job advertisements 24 hours a day, 7 days a week using the ""Job Bank for Employers"" Web site § Job offers received by fax, e-mail, Internet and telephone must be published simultaneously in both French and English within 24 business hours § 70,356,222 Job Bank Web site visits in 2008-2009 § 1,138,233 Spelling and Grammar Checker § Customized entries are added on a weekly basis to a single file § Has been integrated into the JBFE interface
Oracle Database § Archives offers and their post-edited equivalents in a database § Automatically posts offers that are identical (100% match) along with their translation § Of all offers posted to the JB site, 45% are reproduced by the database ",{C}anadian Job Bank Automated Translation System,"§ Job Bank (www.jobbank.gc.ca) is a free job-posting service provided by the Federal Government to all Canadians. Employers have the option to create a profile; upon approval, they can then post job offers. § Job seekers are able to access these positions in two ways. Standard Job Search and Job Matching § Several additional tools are available to assist the Job Seeker with their job search; such as resume builder, job alert, job search tips, and career navigator Job Bank for Employers § Employers can post job advertisements 24 hours a day, 7 days a week using the ""Job Bank for Employers"" Web site § Job offers received by fax, e-mail, Internet and telephone must be published simultaneously in both French and English within 24 business hours § 70,356,222 Job Bank Web site visits in 2008-2009 § 1,138,233 Spelling and Grammar Checker § Customized entries are added on a weekly basis to a single file § Has been integrated into the JBFE interface
Oracle Database § Archives offers and their post-edited equivalents in a database § Automatically posts offers that are identical (100% match) along with their translation § Of all offers posted to the JB site, 45% are reproduced by the database ",Canadian Job Bank Automated Translation System,"§ Job Bank (www.jobbank.gc.ca) is a free job-posting service provided by the Federal Government to all Canadians. Employers have the option to create a profile; upon approval, they can then post job offers. § Job seekers are able to access these positions in two ways. Standard Job Search and Job Matching § Several additional tools are available to assist the Job Seeker with their job search; such as resume builder, job alert, job search tips, and career navigator Job Bank for Employers § Employers can post job advertisements 24 hours a day, 7 days a week using the ""Job Bank for Employers"" Web site § Job offers received by fax, e-mail, Internet and telephone must be published simultaneously in both French and English within 24 business hours § 70,356,222 Job Bank Web site visits in 2008-2009 § 1,138,233 Spelling and Grammar Checker § Customized entries are added on a weekly basis to a single file § Has been integrated into the JBFE interface
Oracle Database § Archives offers and their post-edited equivalents in a database § Automatically posts offers that are identical (100% match) along with their translation § Of all offers posted to the JB site, 45% are reproduced by the database ",,"Canadian Job Bank Automated Translation System. § Job Bank (www.jobbank.gc.ca) is a free job-posting service provided by the Federal Government to all Canadians. Employers have the option to create a profile; upon approval, they can then post job offers. § Job seekers are able to access these positions in two ways. Standard Job Search and Job Matching § Several additional tools are available to assist the Job Seeker with their job search; such as resume builder, job alert, job search tips, and career navigator Job Bank for Employers § Employers can post job advertisements 24 hours a day, 7 days a week using the ""Job Bank for Employers"" Web site § Job offers received by fax, e-mail, Internet and telephone must be published simultaneously in both French and English within 24 business hours § 70,356,222 Job Bank Web site visits in 2008-2009 § 1,138,233 Spelling and Grammar Checker § Customized entries are added on a weekly basis to a single file § Has been integrated into the JBFE interface
Oracle Database § Archives offers and their post-edited equivalents in a database § Automatically posts offers that are identical (100% match) along with their translation § Of all offers posted to the JB site, 45% are reproduced by the database ",2009
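The record above (mcintosh-2009-canadian) mentions archiving post-edited offer/translation pairs and automatically republishing offers that are a 100% match. The snippet below is only a toy sketch of that exact-match translation-memory reuse; the data, normalisation, and function names are hypothetical and not drawn from the Job Bank system.

```python
# Toy sketch of exact-match ("100% match") translation-memory reuse.

translation_memory = {}  # source text -> post-edited translation

def archive(source_text, post_edited_translation):
    """Store a human post-edited translation for later reuse."""
    translation_memory[source_text.strip()] = post_edited_translation

def lookup_exact(source_text):
    """Return the stored translation only for an exact match, else None."""
    return translation_memory.get(source_text.strip())

archive("Cook required. Full time.", "Cuisinier recherché. Temps plein.")
print(lookup_exact("Cook required. Full time."))   # reused automatically
print(lookup_exact("Cook required. Part time."))   # None -> sent for human post-editing
```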
sornlertlamvanich-etal-2000-automatic,https://aclanthology.org/C00-2116,0,,,,,,,"Automatic Corpus-Based Thai Word Extraction with the C4.5 Learning Algorithm. Word"" is difficult to define in the languages that do not exhibit explicit word boundary, such as Thai. Traditional methods on defining words for this kind of languages have to depend on human judgement which bases on unclear criteria or procedures, and have several limitations. This paper proposes an algorithm for word extraction from Thai texts without borrowing a hand from word segmentation. We employ the c4.5 learning algorithm for this task. Several attributes such as string length, frequency, mutual information and entropy are chosen for word/non-word determination. Our experiment yields high precision results about 85% in both training and test corpus.",Automatic Corpus-Based {T}hai Word Extraction with the {C}4.5 Learning Algorithm,"Word"" is difficult to define in the languages that do not exhibit explicit word boundary, such as Thai. Traditional methods on defining words for this kind of languages have to depend on human judgement which bases on unclear criteria or procedures, and have several limitations. This paper proposes an algorithm for word extraction from Thai texts without borrowing a hand from word segmentation. We employ the c4.5 learning algorithm for this task. Several attributes such as string length, frequency, mutual information and entropy are chosen for word/non-word determination. Our experiment yields high precision results about 85% in both training and test corpus.",Automatic Corpus-Based Thai Word Extraction with the C4.5 Learning Algorithm,"Word"" is difficult to define in the languages that do not exhibit explicit word boundary, such as Thai. Traditional methods on defining words for this kind of languages have to depend on human judgement which bases on unclear criteria or procedures, and have several limitations. This paper proposes an algorithm for word extraction from Thai texts without borrowing a hand from word segmentation. We employ the c4.5 learning algorithm for this task. Several attributes such as string length, frequency, mutual information and entropy are chosen for word/non-word determination. Our experiment yields high precision results about 85% in both training and test corpus.",Special thanks to Assistant Professor Mikio Yamamoto for providing the useful program to extract all substrings from the corpora in linear time.,"Automatic Corpus-Based Thai Word Extraction with the C4.5 Learning Algorithm. Word"" is difficult to define in the languages that do not exhibit explicit word boundary, such as Thai. Traditional methods on defining words for this kind of languages have to depend on human judgement which bases on unclear criteria or procedures, and have several limitations. This paper proposes an algorithm for word extraction from Thai texts without borrowing a hand from word segmentation. We employ the c4.5 learning algorithm for this task. Several attributes such as string length, frequency, mutual information and entropy are chosen for word/non-word determination. Our experiment yields high precision results about 85% in both training and test corpus.",2000
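The record above (sornlertlamvanich-etal-2000-automatic) classifies candidate strings as word or non-word with C4.5 over features such as string length, frequency, mutual information, and entropy. The sketch below uses scikit-learn's decision tree with the entropy criterion as a rough stand-in (scikit-learn implements CART, not C4.5), and the feature values are invented for illustration.

```python
# Approximate sketch of the word/non-word decision step with a decision tree.
from sklearn.tree import DecisionTreeClassifier

# features per candidate string: [length, frequency, mutual_information, entropy]
X = [
    [2, 1500, 4.2, 1.1],   # frequent, cohesive string  -> word
    [3,  900, 3.8, 1.4],   # -> word
    [4,   12, 0.3, 3.2],   # rare, low-MI string        -> non-word
    [5,    7, 0.1, 3.6],   # -> non-word
]
y = [1, 1, 0, 0]

clf = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)
print(clf.predict([[3, 1100, 3.5, 1.3]]))   # likely classified as a word
```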
song-etal-2020-multi,https://aclanthology.org/2020.emnlp-main.546,1,,,,education,,,"Multi-Stage Pre-training for Automated Chinese Essay Scoring. This paper proposes a pre-training based automated Chinese essay scoring method. The method involves three components: weakly supervised pre-training, supervised crossprompt fine-tuning and supervised targetprompt fine-tuning. An essay scorer is first pretrained on a large essay dataset covering diverse topics and with coarse ratings, i.e., good and poor, which are used as a kind of weak supervision. The pre-trained essay scorer would be further fine-tuned on previously rated essays from existing prompts, which have the same score range with the target prompt and provide extra supervision. At last, the scorer is fine-tuned on the target-prompt training data. The evaluation on four prompts shows that this method can improve a state-of-the-art neural essay scorer in terms of effectiveness and domain adaptation ability, while in-depth analysis also reveals its limitations.",Multi-Stage Pre-training for Automated {C}hinese Essay Scoring,"This paper proposes a pre-training based automated Chinese essay scoring method. The method involves three components: weakly supervised pre-training, supervised crossprompt fine-tuning and supervised targetprompt fine-tuning. An essay scorer is first pretrained on a large essay dataset covering diverse topics and with coarse ratings, i.e., good and poor, which are used as a kind of weak supervision. The pre-trained essay scorer would be further fine-tuned on previously rated essays from existing prompts, which have the same score range with the target prompt and provide extra supervision. At last, the scorer is fine-tuned on the target-prompt training data. The evaluation on four prompts shows that this method can improve a state-of-the-art neural essay scorer in terms of effectiveness and domain adaptation ability, while in-depth analysis also reveals its limitations.",Multi-Stage Pre-training for Automated Chinese Essay Scoring,"This paper proposes a pre-training based automated Chinese essay scoring method. The method involves three components: weakly supervised pre-training, supervised crossprompt fine-tuning and supervised targetprompt fine-tuning. An essay scorer is first pretrained on a large essay dataset covering diverse topics and with coarse ratings, i.e., good and poor, which are used as a kind of weak supervision. The pre-trained essay scorer would be further fine-tuned on previously rated essays from existing prompts, which have the same score range with the target prompt and provide extra supervision. At last, the scorer is fine-tuned on the target-prompt training data. The evaluation on four prompts shows that this method can improve a state-of-the-art neural essay scorer in terms of effectiveness and domain adaptation ability, while in-depth analysis also reveals its limitations.","This work is supported by the National Natural Science Foundation of China (Nos. 61876113, 61876112), Beijing Natural Science Foundation (No. 4192017) and Capital Building for Sci-Tech Innovation-Fundamental Scientific Research Funds. Lizhen Liu is the corresponding author.","Multi-Stage Pre-training for Automated Chinese Essay Scoring. This paper proposes a pre-training based automated Chinese essay scoring method. The method involves three components: weakly supervised pre-training, supervised crossprompt fine-tuning and supervised targetprompt fine-tuning. 
An essay scorer is first pretrained on a large essay dataset covering diverse topics and with coarse ratings, i.e., good and poor, which are used as a kind of weak supervision. The pre-trained essay scorer would be further fine-tuned on previously rated essays from existing prompts, which have the same score range with the target prompt and provide extra supervision. At last, the scorer is fine-tuned on the target-prompt training data. The evaluation on four prompts shows that this method can improve a state-of-the-art neural essay scorer in terms of effectiveness and domain adaptation ability, while in-depth analysis also reveals its limitations.",2020
vu-etal-2022-domain,https://aclanthology.org/2022.findings-acl.49,0,,,,,,,"Domain Generalisation of NMT: Fusing Adapters with Leave-One-Domain-Out Training. Generalising to unseen domains is underexplored and remains a challenge in neural machine translation. Inspired by recent research in parameter-efficient transfer learning from pretrained models, this paper proposes a fusionbased generalisation method that learns to combine domain-specific parameters. We propose a leave-one-domain-out training strategy to avoid information leaking to address the challenge of not knowing the test domain during training time. Empirical results on three language pairs show that our proposed fusion method outperforms other baselines up to +0.8 BLEU score on average.",Domain Generalisation of {NMT}: Fusing Adapters with Leave-One-Domain-Out Training,"Generalising to unseen domains is underexplored and remains a challenge in neural machine translation. Inspired by recent research in parameter-efficient transfer learning from pretrained models, this paper proposes a fusionbased generalisation method that learns to combine domain-specific parameters. We propose a leave-one-domain-out training strategy to avoid information leaking to address the challenge of not knowing the test domain during training time. Empirical results on three language pairs show that our proposed fusion method outperforms other baselines up to +0.8 BLEU score on average.",Domain Generalisation of NMT: Fusing Adapters with Leave-One-Domain-Out Training,"Generalising to unseen domains is underexplored and remains a challenge in neural machine translation. Inspired by recent research in parameter-efficient transfer learning from pretrained models, this paper proposes a fusionbased generalisation method that learns to combine domain-specific parameters. We propose a leave-one-domain-out training strategy to avoid information leaking to address the challenge of not knowing the test domain during training time. Empirical results on three language pairs show that our proposed fusion method outperforms other baselines up to +0.8 BLEU score on average.",This research is supported by an eBay Research Award and the ARC Future Fellowship FT190100039. This work is partly sponsored by the Air Force Research Laboratory and DARPA under agreement number FA8750-19-2-0501. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The authors are grateful to the anonymous reviewers for their helpful comments to improve the manuscript.,"Domain Generalisation of NMT: Fusing Adapters with Leave-One-Domain-Out Training. Generalising to unseen domains is underexplored and remains a challenge in neural machine translation. Inspired by recent research in parameter-efficient transfer learning from pretrained models, this paper proposes a fusionbased generalisation method that learns to combine domain-specific parameters. We propose a leave-one-domain-out training strategy to avoid information leaking to address the challenge of not knowing the test domain during training time. Empirical results on three language pairs show that our proposed fusion method outperforms other baselines up to +0.8 BLEU score on average.",2022
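The record above (vu-etal-2022-domain) trains an adapter-fusion module with a leave-one-domain-out strategy so that the fused model never sees the simulated test domain during fusion training. The sketch below only generates such splits; the domain names are placeholders and the surrounding adapter training is omitted.

```python
# Minimal sketch of a leave-one-domain-out schedule for fusion training.

domains = ["news", "medical", "law", "subtitles"]

def leave_one_domain_out(domains):
    """Yield (held_out_domain, training_domains) pairs."""
    for held_out in domains:
        train = [d for d in domains if d != held_out]
        yield held_out, train

for held_out, train in leave_one_domain_out(domains):
    print(f"fusion trained on {train}, evaluated as if {held_out!r} were unseen")
```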
bisk-hockenmaier-2013-hdp,https://aclanthology.org/Q13-1007,0,,,,,,,"An HDP Model for Inducing Combinatory Categorial Grammars. We introduce a novel nonparametric Bayesian model for the induction of Combinatory Categorial Grammars from POS-tagged text. It achieves state of the art performance on a number of languages, and induces linguistically plausible lexicons.",An {HDP} Model for Inducing {C}ombinatory {C}ategorial {G}rammars,"We introduce a novel nonparametric Bayesian model for the induction of Combinatory Categorial Grammars from POS-tagged text. It achieves state of the art performance on a number of languages, and induces linguistically plausible lexicons.",An HDP Model for Inducing Combinatory Categorial Grammars,"We introduce a novel nonparametric Bayesian model for the induction of Combinatory Categorial Grammars from POS-tagged text. It achieves state of the art performance on a number of languages, and induces linguistically plausible lexicons.",This work is supported by NSF CAREER award 1053856 (Bayesian Models for Lexicalized Grammars).,"An HDP Model for Inducing Combinatory Categorial Grammars. We introduce a novel nonparametric Bayesian model for the induction of Combinatory Categorial Grammars from POS-tagged text. It achieves state of the art performance on a number of languages, and induces linguistically plausible lexicons.",2013
jauhiainen-etal-2017-evaluating,https://aclanthology.org/W17-1212,0,,,,,,,"Evaluating HeLI with Non-Linear Mappings. In this paper we describe the non-linear mappings we used with the Helsinki language identification method, HeLI, in the 4 th edition of the Discriminating between Similar Languages (DSL) shared task, which was organized as part of the Var-Dial 2017 workshop. Our SUKI team participated in the closed track together with 10 other teams. Our system reached the 7 th position in the track. We describe the HeLI method and the non-linear mappings in mathematical notation. The HeLI method uses a probabilistic model with character n-grams and word-based backoff. We also describe our trials using the non-linear mappings instead of relative frequencies and we present statistics about the back-off function of the HeLI method.",Evaluating {H}e{LI} with Non-Linear Mappings,"In this paper we describe the non-linear mappings we used with the Helsinki language identification method, HeLI, in the 4 th edition of the Discriminating between Similar Languages (DSL) shared task, which was organized as part of the Var-Dial 2017 workshop. Our SUKI team participated in the closed track together with 10 other teams. Our system reached the 7 th position in the track. We describe the HeLI method and the non-linear mappings in mathematical notation. The HeLI method uses a probabilistic model with character n-grams and word-based backoff. We also describe our trials using the non-linear mappings instead of relative frequencies and we present statistics about the back-off function of the HeLI method.",Evaluating HeLI with Non-Linear Mappings,"In this paper we describe the non-linear mappings we used with the Helsinki language identification method, HeLI, in the 4 th edition of the Discriminating between Similar Languages (DSL) shared task, which was organized as part of the Var-Dial 2017 workshop. Our SUKI team participated in the closed track together with 10 other teams. Our system reached the 7 th position in the track. We describe the HeLI method and the non-linear mappings in mathematical notation. The HeLI method uses a probabilistic model with character n-grams and word-based backoff. We also describe our trials using the non-linear mappings instead of relative frequencies and we present statistics about the back-off function of the HeLI method.",We would like to thank Kimmo Koskenniemi for many valuable discussions and comments. This research was made possible by funding from the Kone Foundation Language Programme.,"Evaluating HeLI with Non-Linear Mappings. In this paper we describe the non-linear mappings we used with the Helsinki language identification method, HeLI, in the 4 th edition of the Discriminating between Similar Languages (DSL) shared task, which was organized as part of the Var-Dial 2017 workshop. Our SUKI team participated in the closed track together with 10 other teams. Our system reached the 7 th position in the track. We describe the HeLI method and the non-linear mappings in mathematical notation. The HeLI method uses a probabilistic model with character n-grams and word-based backoff. We also describe our trials using the non-linear mappings instead of relative frequencies and we present statistics about the back-off function of the HeLI method.",2017
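The record above (jauhiainen-etal-2017-evaluating) builds on the HeLI method, a probabilistic language identifier using word relative frequencies with character n-gram backoff. The snippet below is a heavily simplified sketch of that scoring scheme; the smoothing, penalty value, n-gram ranges, and toy training corpora are all assumptions and do not reproduce HeLI or its non-linear mappings.

```python
# Simplified word-frequency scorer with character n-gram backoff,
# loosely in the spirit of HeLI. Lower average penalty = better match.
import math
from collections import Counter

def char_ngrams(word, n=3):
    padded = f" {word} "
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def train(corpus, n=3):
    words = Counter(corpus.split())
    grams = Counter(g for w in corpus.split() for g in char_ngrams(w, n))
    return {"words": words, "grams": grams,
            "w_total": sum(words.values()), "g_total": sum(grams.values())}

def score(text, model, n=3, penalty=7.0):
    total = 0.0
    for w in text.split():
        if w in model["words"]:                       # word-level estimate
            total += -math.log10(model["words"][w] / model["w_total"])
        else:                                         # back off to character n-grams
            for g in char_ngrams(w, n):
                if g in model["grams"]:
                    total += -math.log10(model["grams"][g] / model["g_total"])
                else:
                    total += penalty                  # unseen n-gram penalty (arbitrary)
    return total / max(len(text.split()), 1)

models = {"en": train("the cat sat on the mat the dog ran"),
          "fi": train("kissa istui matolla koira juoksi pihalla")}
print(min(models, key=lambda lang: score("the dog sat", models[lang])))   # -> 'en'
```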
zhang-etal-2020-fast,https://aclanthology.org/2020.wmt-1.62,0,,,,,,,"Fast Interleaved Bidirectional Sequence Generation. Independence assumptions during sequence generation can speed up inference, but parallel generation of highly interdependent tokens comes at a cost in quality. Instead of assuming independence between neighbouring tokens (semi-autoregressive decoding, SA), we take inspiration from bidirectional sequence generation and introduce a decoder that generates target words from the left-to-right and right-toleft directions simultaneously. We show that we can easily convert a standard architecture for unidirectional decoding into a bidirectional decoder by simply interleaving the two directions and adapting the word positions and selfattention masks. Our interleaved bidirectional decoder (IBDecoder) retains the model simplicity and training efficiency of the standard Transformer, and on five machine translation tasks and two document summarization tasks, achieves a decoding speedup of ∼2× compared to autoregressive decoding with comparable quality. Notably, it outperforms left-toright SA because the independence assumptions in IBDecoder are more felicitous. To achieve even higher speedups, we explore hybrid models where we either simultaneously predict multiple neighbouring tokens per direction, or perform multi-directional decoding by partitioning the target sequence. These methods achieve speedups to 4×-11× across different tasks at the cost of <1 BLEU or <0.5 ROUGE (on average). 1",Fast Interleaved Bidirectional Sequence Generation,"Independence assumptions during sequence generation can speed up inference, but parallel generation of highly interdependent tokens comes at a cost in quality. Instead of assuming independence between neighbouring tokens (semi-autoregressive decoding, SA), we take inspiration from bidirectional sequence generation and introduce a decoder that generates target words from the left-to-right and right-toleft directions simultaneously. We show that we can easily convert a standard architecture for unidirectional decoding into a bidirectional decoder by simply interleaving the two directions and adapting the word positions and selfattention masks. Our interleaved bidirectional decoder (IBDecoder) retains the model simplicity and training efficiency of the standard Transformer, and on five machine translation tasks and two document summarization tasks, achieves a decoding speedup of ∼2× compared to autoregressive decoding with comparable quality. Notably, it outperforms left-toright SA because the independence assumptions in IBDecoder are more felicitous. To achieve even higher speedups, we explore hybrid models where we either simultaneously predict multiple neighbouring tokens per direction, or perform multi-directional decoding by partitioning the target sequence. These methods achieve speedups to 4×-11× across different tasks at the cost of <1 BLEU or <0.5 ROUGE (on average). 1",Fast Interleaved Bidirectional Sequence Generation,"Independence assumptions during sequence generation can speed up inference, but parallel generation of highly interdependent tokens comes at a cost in quality. Instead of assuming independence between neighbouring tokens (semi-autoregressive decoding, SA), we take inspiration from bidirectional sequence generation and introduce a decoder that generates target words from the left-to-right and right-toleft directions simultaneously. 
We show that we can easily convert a standard architecture for unidirectional decoding into a bidirectional decoder by simply interleaving the two directions and adapting the word positions and selfattention masks. Our interleaved bidirectional decoder (IBDecoder) retains the model simplicity and training efficiency of the standard Transformer, and on five machine translation tasks and two document summarization tasks, achieves a decoding speedup of ∼2× compared to autoregressive decoding with comparable quality. Notably, it outperforms left-toright SA because the independence assumptions in IBDecoder are more felicitous. To achieve even higher speedups, we explore hybrid models where we either simultaneously predict multiple neighbouring tokens per direction, or perform multi-directional decoding by partitioning the target sequence. These methods achieve speedups to 4×-11× across different tasks at the cost of <1 BLEU or <0.5 ROUGE (on average). 1","This work was performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3) operated by the University of Cambridge Research Computing Service (http: //www.csd3.cam.ac.uk/), provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/P020259/1), and DiRAC funding from the Science and Technology Facilities Council (www.dirac.ac.uk). Ivan Titov acknowledges support of the European Research Council (ERC Starting grant 678254) and the Dutch National Science Foundation (NWO VIDI 639.022.518). Rico Sennrich acknowledges support of the Swiss National Science Foundation (MUTAMUR; no. 176727).","Fast Interleaved Bidirectional Sequence Generation. Independence assumptions during sequence generation can speed up inference, but parallel generation of highly interdependent tokens comes at a cost in quality. Instead of assuming independence between neighbouring tokens (semi-autoregressive decoding, SA), we take inspiration from bidirectional sequence generation and introduce a decoder that generates target words from the left-to-right and right-toleft directions simultaneously. We show that we can easily convert a standard architecture for unidirectional decoding into a bidirectional decoder by simply interleaving the two directions and adapting the word positions and selfattention masks. Our interleaved bidirectional decoder (IBDecoder) retains the model simplicity and training efficiency of the standard Transformer, and on five machine translation tasks and two document summarization tasks, achieves a decoding speedup of ∼2× compared to autoregressive decoding with comparable quality. Notably, it outperforms left-toright SA because the independence assumptions in IBDecoder are more felicitous. To achieve even higher speedups, we explore hybrid models where we either simultaneously predict multiple neighbouring tokens per direction, or perform multi-directional decoding by partitioning the target sequence. These methods achieve speedups to 4×-11× across different tasks at the cost of <1 BLEU or <0.5 ROUGE (on average). 1",2020
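The record above (zhang-etal-2020-fast) interleaves the left-to-right and right-to-left views of the target sequence so one decoder generates from both ends at once. The sketch below shows only that token reordering and its inverse; the model, position adjustments, and attention masks described in the abstract are not reproduced here.

```python
# Sketch of target-side interleaving for bidirectional generation.

def interleave(tokens):
    """[y1, y2, ..., yn] -> [y1, yn, y2, yn-1, ...]"""
    left, right = 0, len(tokens) - 1
    out = []
    while left <= right:
        out.append(tokens[left])
        if left != right:
            out.append(tokens[right])
        left += 1
        right -= 1
    return out

def deinterleave(tokens):
    """Invert the reordering to recover the natural left-to-right order."""
    left_half = tokens[0::2]            # tokens generated left-to-right
    right_half = tokens[1::2]           # tokens generated right-to-left
    return left_half + right_half[::-1]

seq = ["y1", "y2", "y3", "y4", "y5"]
mixed = interleave(seq)
print(mixed)                 # ['y1', 'y5', 'y2', 'y4', 'y3']
print(deinterleave(mixed))   # ['y1', 'y2', 'y3', 'y4', 'y5']
```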
grefenstette-2015-inriasac,https://aclanthology.org/S15-2152,0,,,,,,,"INRIASAC: Simple Hypernym Extraction Methods. For information retrieval, it is useful to classify documents using a hierarchy of terms from a domain. One problem is that, for many domains, hierarchies of terms are not available. The task 17 of SemEval 2015 addresses the problem of structuring a set of terms from a given domain into a taxonomy without manual intervention. Here we present some simple taxonomy structuring techniques, such as term overlap and document and sentence cooccurrence in large quantities of text (English Wikipedia) to produce hypernym pairs for the eight domain lists supplied by the task organizers. Our submission ranked first in this 2015 benchmark, which suggests that overly complicated methods might need to be adapted to individual domains. We describe our generic techniques and present an initial evaluation of results.",{INRIASAC}: Simple Hypernym Extraction Methods,"For information retrieval, it is useful to classify documents using a hierarchy of terms from a domain. One problem is that, for many domains, hierarchies of terms are not available. The task 17 of SemEval 2015 addresses the problem of structuring a set of terms from a given domain into a taxonomy without manual intervention. Here we present some simple taxonomy structuring techniques, such as term overlap and document and sentence cooccurrence in large quantities of text (English Wikipedia) to produce hypernym pairs for the eight domain lists supplied by the task organizers. Our submission ranked first in this 2015 benchmark, which suggests that overly complicated methods might need to be adapted to individual domains. We describe our generic techniques and present an initial evaluation of results.",INRIASAC: Simple Hypernym Extraction Methods,"For information retrieval, it is useful to classify documents using a hierarchy of terms from a domain. One problem is that, for many domains, hierarchies of terms are not available. The task 17 of SemEval 2015 addresses the problem of structuring a set of terms from a given domain into a taxonomy without manual intervention. Here we present some simple taxonomy structuring techniques, such as term overlap and document and sentence cooccurrence in large quantities of text (English Wikipedia) to produce hypernym pairs for the eight domain lists supplied by the task organizers. Our submission ranked first in this 2015 benchmark, which suggests that overly complicated methods might need to be adapted to individual domains. We describe our generic techniques and present an initial evaluation of results.","This research is partially funded by a research grant from INRIA, and the Paris-Saclay Institut de la Société Numérique funded by the IDEX Paris-Saclay, ANR-11-IDEX-0003-02.","INRIASAC: Simple Hypernym Extraction Methods. For information retrieval, it is useful to classify documents using a hierarchy of terms from a domain. One problem is that, for many domains, hierarchies of terms are not available. The task 17 of SemEval 2015 addresses the problem of structuring a set of terms from a given domain into a taxonomy without manual intervention. Here we present some simple taxonomy structuring techniques, such as term overlap and document and sentence cooccurrence in large quantities of text (English Wikipedia) to produce hypernym pairs for the eight domain lists supplied by the task organizers. 
Our submission ranked first in this 2015 benchmark, which suggests that overly complicated methods might need to be adapted to individual domains. We describe our generic techniques and present an initial evaluation of results.",2015
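The record above (grefenstette-2015-inriasac) lists term overlap as one of its simple hypernym cues. The snippet below sketches a head-word overlap heuristic of that general kind, using an invented toy term list; the paper additionally relies on document- and sentence-level co-occurrence counts, which are not modelled here.

```python
# Term-overlap heuristic: if a longer term ends with a shorter domain term,
# propose the shorter term (the head) as its hypernym.

terms = ["parser", "constituency parser", "dependency parser", "treebank"]

def overlap_hypernyms(terms):
    """Return (hyponym, hypernym) candidate pairs based on shared head words."""
    pairs = []
    for t in terms:
        t_tokens = t.split()
        for candidate in terms:
            c_tokens = candidate.split()
            if (candidate != t and len(c_tokens) < len(t_tokens)
                    and t_tokens[-len(c_tokens):] == c_tokens):
                pairs.append((t, candidate))
    return pairs

print(overlap_hypernyms(terms))
# [('constituency parser', 'parser'), ('dependency parser', 'parser')]
```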
seyffarth-kallmeyer-2020-corpus,https://aclanthology.org/2020.coling-main.357,0,,,,,,,"Corpus-based Identification of Verbs Participating in Verb Alternations Using Classification and Manual Annotation. English verb alternations allow participating verbs to appear in a set of syntactically different constructions whose associated semantic frames are systematically related. We use ENCOW and VerbNet data to train classifiers to predict the instrument subject alternation and the causativeinchoative alternation, relying on count-based and vector-based features as well as perplexitybased language model features, which are intended to reflect each alternation's felicity by simulating it. Beyond the prediction task, we use the classifier results as a source for a manual annotation step in order to identify new, unseen instances of each alternation. This is possible because existing alternation datasets contain positive, but no negative instances and are not comprehensive. Over several sequences of classification-annotation steps, we iteratively extend our sets of alternating verbs. Our hybrid approach to the identification of new alternating verbs reduces the required annotation effort by only presenting annotators with the highest-scoring candidates from the previous classification. Due to the success of semi-supervised and unsupervised features, our approach can easily be transferred to further alternations.",Corpus-based Identification of Verbs Participating in Verb Alternations Using Classification and Manual Annotation,"English verb alternations allow participating verbs to appear in a set of syntactically different constructions whose associated semantic frames are systematically related. We use ENCOW and VerbNet data to train classifiers to predict the instrument subject alternation and the causativeinchoative alternation, relying on count-based and vector-based features as well as perplexitybased language model features, which are intended to reflect each alternation's felicity by simulating it. Beyond the prediction task, we use the classifier results as a source for a manual annotation step in order to identify new, unseen instances of each alternation. This is possible because existing alternation datasets contain positive, but no negative instances and are not comprehensive. Over several sequences of classification-annotation steps, we iteratively extend our sets of alternating verbs. Our hybrid approach to the identification of new alternating verbs reduces the required annotation effort by only presenting annotators with the highest-scoring candidates from the previous classification. Due to the success of semi-supervised and unsupervised features, our approach can easily be transferred to further alternations.",Corpus-based Identification of Verbs Participating in Verb Alternations Using Classification and Manual Annotation,"English verb alternations allow participating verbs to appear in a set of syntactically different constructions whose associated semantic frames are systematically related. We use ENCOW and VerbNet data to train classifiers to predict the instrument subject alternation and the causativeinchoative alternation, relying on count-based and vector-based features as well as perplexitybased language model features, which are intended to reflect each alternation's felicity by simulating it. Beyond the prediction task, we use the classifier results as a source for a manual annotation step in order to identify new, unseen instances of each alternation. 
This is possible because existing alternation datasets contain positive, but no negative instances and are not comprehensive. Over several sequences of classification-annotation steps, we iteratively extend our sets of alternating verbs. Our hybrid approach to the identification of new alternating verbs reduces the required annotation effort by only presenting annotators with the highest-scoring candidates from the previous classification. Due to the success of semi-supervised and unsupervised features, our approach can easily be transferred to further alternations.","The work presented in this paper was financed by the Deutsche Forschungsgemeinschaft (DFG) within the CRC 991 ""The Structure of Representations in Language, Cognition, and Science"" and the individual DFG project ""Unsupervised Frame Induction (FInd)"". We wish to thank the anonymous reviewers for their constructive feedback and helpful comments.","Corpus-based Identification of Verbs Participating in Verb Alternations Using Classification and Manual Annotation. English verb alternations allow participating verbs to appear in a set of syntactically different constructions whose associated semantic frames are systematically related. We use ENCOW and VerbNet data to train classifiers to predict the instrument subject alternation and the causativeinchoative alternation, relying on count-based and vector-based features as well as perplexitybased language model features, which are intended to reflect each alternation's felicity by simulating it. Beyond the prediction task, we use the classifier results as a source for a manual annotation step in order to identify new, unseen instances of each alternation. This is possible because existing alternation datasets contain positive, but no negative instances and are not comprehensive. Over several sequences of classification-annotation steps, we iteratively extend our sets of alternating verbs. Our hybrid approach to the identification of new alternating verbs reduces the required annotation effort by only presenting annotators with the highest-scoring candidates from the previous classification. Due to the success of semi-supervised and unsupervised features, our approach can easily be transferred to further alternations.",2020
shen-etal-2021-sciconceptminer,https://aclanthology.org/2021.acl-demo.6,1,,,,industry_innovation_infrastructure,,,"SciConceptMiner: A system for large-scale scientific concept discovery. Scientific knowledge is evolving at an unprecedented rate of speed, with new concepts constantly being introduced from millions of academic articles published every month. In this paper, we introduce a self-supervised end-to-end system, SciConceptMiner, for the automatic capture of emerging scientific concepts from both independent knowledge sources (semi-structured data) and academic publications (unstructured documents). First, we adopt a BERT-based sequence labeling model to predict candidate concept phrases with self-supervision data. Then, we incorporate rich Web content for synonym detection and concept selection via a web search API. This two-stage approach achieves highly accurate (94.7%) concept identification with more than 740K scientific concepts. These concepts are deployed in the Microsoft Academic production system and are the backbone for its semantic search capability.",{S}ci{C}oncept{M}iner: A system for large-scale scientific concept discovery,"Scientific knowledge is evolving at an unprecedented rate of speed, with new concepts constantly being introduced from millions of academic articles published every month. In this paper, we introduce a self-supervised end-to-end system, SciConceptMiner, for the automatic capture of emerging scientific concepts from both independent knowledge sources (semi-structured data) and academic publications (unstructured documents). First, we adopt a BERT-based sequence labeling model to predict candidate concept phrases with self-supervision data. Then, we incorporate rich Web content for synonym detection and concept selection via a web search API. This two-stage approach achieves highly accurate (94.7%) concept identification with more than 740K scientific concepts. These concepts are deployed in the Microsoft Academic production system and are the backbone for its semantic search capability.",SciConceptMiner: A system for large-scale scientific concept discovery,"Scientific knowledge is evolving at an unprecedented rate of speed, with new concepts constantly being introduced from millions of academic articles published every month. In this paper, we introduce a self-supervised end-to-end system, SciConceptMiner, for the automatic capture of emerging scientific concepts from both independent knowledge sources (semi-structured data) and academic publications (unstructured documents). First, we adopt a BERT-based sequence labeling model to predict candidate concept phrases with self-supervision data. Then, we incorporate rich Web content for synonym detection and concept selection via a web search API. This two-stage approach achieves highly accurate (94.7%) concept identification with more than 740K scientific concepts. These concepts are deployed in the Microsoft Academic production system and are the backbone for its semantic search capability.", 13 We split the sampled data of each category to 3 groups with 100 each and they are evaluated by 3 judges. We report the average of positive label ratios.,"SciConceptMiner: A system for large-scale scientific concept discovery. Scientific knowledge is evolving at an unprecedented rate of speed, with new concepts constantly being introduced from millions of academic articles published every month. 
In this paper, we introduce a self-supervised end-to-end system, SciConceptMiner, for the automatic capture of emerging scientific concepts from both independent knowledge sources (semi-structured data) and academic publications (unstructured documents). First, we adopt a BERT-based sequence labeling model to predict candidate concept phrases with self-supervision data. Then, we incorporate rich Web content for synonym detection and concept selection via a web search API. This two-stage approach achieves highly accurate (94.7%) concept identification with more than 740K scientific concepts. These concepts are deployed in the Microsoft Academic production system and are the backbone for its semantic search capability.",2021
niu-2017-chinese,https://aclanthology.org/W17-6519,0,,,,,,,"Chinese Descriptive and Resultative V-de Constructions. A Dependency-based Analysis. This contribution presents a dependency grammar (DG) analysis of the so-called descriptive and resultative V-de constructions in Mandarin Chinese (VDCs); it focuses, in particular, on the dependency analysis of the noun phrase that intervenes between the two predicates in a VDC. Two methods, namely chunking data collected from informants and two diagnostics specific to Chinese, i.e. bǎ and bèi sentence formation, were used. They were employed to discern which analysis should be preferred, i.e. the ternary-branching analysis, in which the intervening NP (NP2) is a dependent of the first predicate (P1), or the small-clause analysis, in which NP2 depends on the second predicate (P2). The results obtained suggest a flexible structural analysis for VDCs in the form of ""NP1+P1-de+NP2+P2"". The difference in structural assignment is attributed to a semantic property of NP2 and the semantic relations it forms with adjacent predicates.",{C}hinese Descriptive and Resultative {V}-de Constructions. A Dependency-based Analysis,"This contribution presents a dependency grammar (DG) analysis of the so-called descriptive and resultative V-de constructions in Mandarin Chinese (VDCs); it focuses, in particular, on the dependency analysis of the noun phrase that intervenes between the two predicates in a VDC. Two methods, namely chunking data collected from informants and two diagnostics specific to Chinese, i.e. bǎ and bèi sentence formation, were used. They were employed to discern which analysis should be preferred, i.e. the ternary-branching analysis, in which the intervening NP (NP2) is a dependent of the first predicate (P1), or the small-clause analysis, in which NP2 depends on the second predicate (P2). The results obtained suggest a flexible structural analysis for VDCs in the form of ""NP1+P1-de+NP2+P2"". The difference in structural assignment is attributed to a semantic property of NP2 and the semantic relations it forms with adjacent predicates.",Chinese Descriptive and Resultative V-de Constructions. A Dependency-based Analysis,"This contribution presents a dependency grammar (DG) analysis of the so-called descriptive and resultative V-de constructions in Mandarin Chinese (VDCs); it focuses, in particular, on the dependency analysis of the noun phrase that intervenes between the two predicates in a VDC. Two methods, namely chunking data collected from informants and two diagnostics specific to Chinese, i.e. bǎ and bèi sentence formation, were used. They were employed to discern which analysis should be preferred, i.e. the ternary-branching analysis, in which the intervening NP (NP2) is a dependent of the first predicate (P1), or the small-clause analysis, in which NP2 depends on the second predicate (P2). The results obtained suggest a flexible structural analysis for VDCs in the form of ""NP1+P1-de+NP2+P2"". The difference in structural assignment is attributed to a semantic property of NP2 and the semantic relations it forms with adjacent predicates.","The research presented in this article was funded by the Ministry of Education of the People's Republic of China, Grant # 15YJA74001.","Chinese Descriptive and Resultative V-de Constructions. A Dependency-based Analysis. 
This contribution presents a dependency grammar (DG) analysis of the so-called descriptive and resultative V-de constructions in Mandarin Chinese (VDCs); it focuses, in particular, on the dependency analysis of the noun phrase that intervenes between the two predicates in a VDC. Two methods, namely chunking data collected from informants and two diagnostics specific to Chinese, i.e. bǎ and bèi sentence formation, were used. They were employed to discern which analysis should be preferred, i.e. the ternary-branching analysis, in which the intervening NP (NP2) is a dependent of the first predicate (P1), or the small-clause analysis, in which NP2 depends on the second predicate (P2). The results obtained suggest a flexible structural analysis for VDCs in the form of ""NP1+P1-de+NP2+P2"". The difference in structural assignment is attributed to a semantic property of NP2 and the semantic relations it forms with adjacent predicates.",2017
lichouri-abbas-2020-speechtrans,https://aclanthology.org/2020.smm4h-1.19,1,,,,health,,,"SpeechTrans@SMM4H'20: Impact of Preprocessing and N-grams on Automatic Classification of Tweets That Mention Medications. This paper describes our system developed for automatically classifying tweets that mention medications. We used the Decision Tree classifier for this task. We have shown that using some elementary preprocessing steps and TF-IDF n-grams led to acceptable classifier performance. Indeed, the F1-score recorded was 74.58% in the development phase and 63.70% in the test phase.",{S}peech{T}rans@{SMM}4{H}{'}20: Impact of Preprocessing and N-grams on Automatic Classification of Tweets That Mention Medications,"This paper describes our system developed for automatically classifying tweets that mention medications. We used the Decision Tree classifier for this task. We have shown that using some elementary preprocessing steps and TF-IDF n-grams led to acceptable classifier performance. Indeed, the F1-score recorded was 74.58% in the development phase and 63.70% in the test phase.",SpeechTrans@SMM4H'20: Impact of Preprocessing and N-grams on Automatic Classification of Tweets That Mention Medications,"This paper describes our system developed for automatically classifying tweets that mention medications. We used the Decision Tree classifier for this task. We have shown that using some elementary preprocessing steps and TF-IDF n-grams led to acceptable classifier performance. Indeed, the F1-score recorded was 74.58% in the development phase and 63.70% in the test phase.",,"SpeechTrans@SMM4H'20: Impact of Preprocessing and N-grams on Automatic Classification of Tweets That Mention Medications. This paper describes our system developed for automatically classifying tweets that mention medications. We used the Decision Tree classifier for this task. We have shown that using some elementary preprocessing steps and TF-IDF n-grams led to acceptable classifier performance. Indeed, the F1-score recorded was 74.58% in the development phase and 63.70% in the test phase.",2020
alexin-etal-2003-annotated,https://aclanthology.org/E03-1012,0,,,,,,,"Annotated Hungarian National Corpus. The beginning of the work dates back to 1998 when the authors started a research project on the application of ILP (Inductive Logic Programming) learning methods for part-of-speech tagging. This research was done within the framework of a European ESPRIT project (LTR 20237, ""ILP2"") , where first studies were based on the so-called TELRI corpus (Erjavec et al., 1998) . Since the corpus annotation had several deficiencies and its size proved to be small for further research, a national project has been organized with the main goal to create a suitably large training corpus for machine learning applications, primarily for POS (Part-of-speech) tagging.",Annotated {H}ungarian National Corpus,"The beginning of the work dates back to 1998 when the authors started a research project on the application of ILP (Inductive Logic Programming) learning methods for part-of-speech tagging. This research was done within the framework of a European ESPRIT project (LTR 20237, ""ILP2"") , where first studies were based on the so-called TELRI corpus (Erjavec et al., 1998) . Since the corpus annotation had several deficiencies and its size proved to be small for further research, a national project has been organized with the main goal to create a suitably large training corpus for machine learning applications, primarily for POS (Part-of-speech) tagging.",Annotated Hungarian National Corpus,"The beginning of the work dates back to 1998 when the authors started a research project on the application of ILP (Inductive Logic Programming) learning methods for part-of-speech tagging. This research was done within the framework of a European ESPRIT project (LTR 20237, ""ILP2"") , where first studies were based on the so-called TELRI corpus (Erjavec et al., 1998) . Since the corpus annotation had several deficiencies and its size proved to be small for further research, a national project has been organized with the main goal to create a suitably large training corpus for machine learning applications, primarily for POS (Part-of-speech) tagging.",The project was partially supported by the Hungarian Ministry of Education (grant: IKTA 27/2000). The authors also would like to thank researchers of the Research Institute for Linguistics at the Hungarian Academy of Sciences for their kind help and advice.,"Annotated Hungarian National Corpus. The beginning of the work dates back to 1998 when the authors started a research project on the application of ILP (Inductive Logic Programming) learning methods for part-of-speech tagging. This research was done within the framework of a European ESPRIT project (LTR 20237, ""ILP2"") , where first studies were based on the so-called TELRI corpus (Erjavec et al., 1998) . Since the corpus annotation had several deficiencies and its size proved to be small for further research, a national project has been organized with the main goal to create a suitably large training corpus for machine learning applications, primarily for POS (Part-of-speech) tagging.",2003
meyers-etal-2004-cross,http://www.lrec-conf.org/proceedings/lrec2004/pdf/397.pdf,0,,,,,,,"The Cross-Breeding of Dictionaries. Especially for English, the number of hand-coded electronic resources available to the Natural Language Processing Community keeps growing: annotated corpora, treebanks, lexicons, wordnets, etc. Unfortunately, initial funding for such projects is much easier to obtain than the additional funding needed to enlarge or improve upon such resources. Thus once one proves the usefulness of a resource, it is difficult to make that resource reach its full potential. We discuss techniques for combining dictionary resources and producing others by semi-automatic means. The resources we created using these techniques have become an integral part of our work on NomBank, a project with the goal of annotating noun arguments in the Penn Treebank II corpus (PTB).",The Cross-Breeding of Dictionaries,"Especially for English, the number of hand-coded electronic resources available to the Natural Language Processing Community keeps growing: annotated corpora, treebanks, lexicons, wordnets, etc. Unfortunately, initial funding for such projects is much easier to obtain than the additional funding needed to enlarge or improve upon such resources. Thus once one proves the usefulness of a resource, it is difficult to make that resource reach its full potential. We discuss techniques for combining dictionary resources and producing others by semi-automatic means. The resources we created using these techniques have become an integral part of our work on NomBank, a project with the goal of annotating noun arguments in the Penn Treebank II corpus (PTB).",The Cross-Breeding of Dictionaries,"Especially for English, the number of hand-coded electronic resources available to the Natural Language Processing Community keeps growing: annotated corpora, treebanks, lexicons, wordnets, etc. Unfortunately, initial funding for such projects is much easier to obtain than the additional funding needed to enlarge or improve upon such resources. Thus once one proves the usefulness of a resource, it is difficult to make that resource reach its full potential. We discuss techniques for combining dictionary resources and producing others by semi-automatic means. The resources we created using these techniques have become an integral part of our work on NomBank, a project with the goal of annotating noun arguments in the Penn Treebank II corpus (PTB).",Nombank is supported under Grant N66001-001-1-8917 from the Space and Naval Warfare Systems Center San Diego. This paper does not necessarily reflect the position or the policy of the U.S. Government.,"The Cross-Breeding of Dictionaries. Especially for English, the number of hand-coded electronic resources available to the Natural Language Processing Community keeps growing: annotated corpora, treebanks, lexicons, wordnets, etc. Unfortunately, initial funding for such projects is much easier to obtain than the additional funding needed to enlarge or improve upon such resources. Thus once one proves the usefulness of a resource, it is difficult to make that resource reach its full potential. We discuss techniques for combining dictionary resources and producing others by semi-automatic means. The resources we created using these techniques have become an integral part of our work on NomBank, a project with the goal of annotating noun arguments in the Penn Treebank II corpus (PTB).",2004
janarthanam-lemon-2010-adaptive,https://aclanthology.org/W10-4324,0,,,,,,,"Adaptive Referring Expression Generation in Spoken Dialogue Systems: Evaluation with Real Users. We present new results from a real-user evaluation of a data-driven approach to learning user-adaptive referring expression generation (REG) policies for spoken dialogue systems. Referring expressions can be difficult to understand in technical domains where users may not know the technical 'jargon' names of the domain entities. In such cases, dialogue systems must be able to model the user's (lexical) domain knowledge and use appropriate referring expressions. We present a reinforcement learning (RL) framework in which the system learns REG policies which can adapt to unknown users online. For real users of such a system, we show that in comparison to an adaptive hand-coded baseline policy, the learned policy performs significantly better, with a 20.8% average increase in adaptation accuracy, 12.6% decrease in time taken, and a 15.1% increase in task completion rate. The learned policy also has a significantly better subjective rating from users. This is because the learned policies adapt online to changing evidence about the user's domain expertise. We also discuss the issue of evaluation in simulation versus evaluation with real users.",Adaptive Referring Expression Generation in Spoken Dialogue Systems: Evaluation with Real Users,"We present new results from a real-user evaluation of a data-driven approach to learning user-adaptive referring expression generation (REG) policies for spoken dialogue systems. Referring expressions can be difficult to understand in technical domains where users may not know the technical 'jargon' names of the domain entities. In such cases, dialogue systems must be able to model the user's (lexical) domain knowledge and use appropriate referring expressions. We present a reinforcement learning (RL) framework in which the system learns REG policies which can adapt to unknown users online. For real users of such a system, we show that in comparison to an adaptive hand-coded baseline policy, the learned policy performs significantly better, with a 20.8% average increase in adaptation accuracy, 12.6% decrease in time taken, and a 15.1% increase in task completion rate. The learned policy also has a significantly better subjective rating from users. This is because the learned policies adapt online to changing evidence about the user's domain expertise. We also discuss the issue of evaluation in simulation versus evaluation with real users.",Adaptive Referring Expression Generation in Spoken Dialogue Systems: Evaluation with Real Users,"We present new results from a real-user evaluation of a data-driven approach to learning user-adaptive referring expression generation (REG) policies for spoken dialogue systems. Referring expressions can be difficult to understand in technical domains where users may not know the technical 'jargon' names of the domain entities. In such cases, dialogue systems must be able to model the user's (lexical) domain knowledge and use appropriate referring expressions. We present a reinforcement learning (RL) framework in which the system learns REG policies which can adapt to unknown users online. 
For real users of such a system, we show that in comparison to an adaptive hand-coded baseline policy, the learned policy performs significantly better, with a 20.8% average increase in adaptation accuracy, 12.6% decrease in time taken, and a 15.1% increase in task completion rate. The learned policy also has a significantly better subjective rating from users. This is because the learned policies adapt online to changing evidence about the user's domain expertise. We also discuss the issue of evaluation in simulation versus evaluation with real users.","The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 216594 (CLASSiC project www.classic-project.org) and from the EPSRC, project no. EP/G069840/1.","Adaptive Referring Expression Generation in Spoken Dialogue Systems: Evaluation with Real Users. We present new results from a real-user evaluation of a data-driven approach to learning user-adaptive referring expression generation (REG) policies for spoken dialogue systems. Referring expressions can be difficult to understand in technical domains where users may not know the technical 'jargon' names of the domain entities. In such cases, dialogue systems must be able to model the user's (lexical) domain knowledge and use appropriate referring expressions. We present a reinforcement learning (RL) framework in which the system learns REG policies which can adapt to unknown users online. For real users of such a system, we show that in comparison to an adaptive hand-coded baseline policy, the learned policy performs significantly better, with a 20.8% average increase in adaptation accuracy, 12.6% decrease in time taken, and a 15.1% increase in task completion rate. The learned policy also has a significantly better subjective rating from users. This is because the learned policies adapt online to changing evidence about the user's domain expertise. We also discuss the issue of evaluation in simulation versus evaluation with real users.",2010
sitbon-bellot-2006-tools,http://www.lrec-conf.org/proceedings/lrec2006/pdf/410_pdf.pdf,0,,,,,,,"Tools and methods for objective or contextual evaluation of topic segmentation. In this paper we discuss the way of evaluating topic segmentation, from mathematical measures on variously constructed reference corpus to contextual evaluation depending on different topic segmentation usages. We present an overview of the different ways of building reference corpora and of mathematically evaluating segmentation methods, and then we focus on three tasks which may involve a topic segmentation : text extraction, information retrieval and document presentation. We have developed two graphical interfaces, one for an intrinsic comparison, and the other one dedicated to an evaluation in an information retrieval context. These tools will be very soon distributed under GPL licences on the Technolangue project web page.",Tools and methods for objective or contextual evaluation of topic segmentation,"In this paper we discuss the way of evaluating topic segmentation, from mathematical measures on variously constructed reference corpus to contextual evaluation depending on different topic segmentation usages. We present an overview of the different ways of building reference corpora and of mathematically evaluating segmentation methods, and then we focus on three tasks which may involve a topic segmentation : text extraction, information retrieval and document presentation. We have developed two graphical interfaces, one for an intrinsic comparison, and the other one dedicated to an evaluation in an information retrieval context. These tools will be very soon distributed under GPL licences on the Technolangue project web page.",Tools and methods for objective or contextual evaluation of topic segmentation,"In this paper we discuss the way of evaluating topic segmentation, from mathematical measures on variously constructed reference corpus to contextual evaluation depending on different topic segmentation usages. We present an overview of the different ways of building reference corpora and of mathematically evaluating segmentation methods, and then we focus on three tasks which may involve a topic segmentation : text extraction, information retrieval and document presentation. We have developed two graphical interfaces, one for an intrinsic comparison, and the other one dedicated to an evaluation in an information retrieval context. These tools will be very soon distributed under GPL licences on the Technolangue project web page.",,"Tools and methods for objective or contextual evaluation of topic segmentation. In this paper we discuss the way of evaluating topic segmentation, from mathematical measures on variously constructed reference corpus to contextual evaluation depending on different topic segmentation usages. We present an overview of the different ways of building reference corpora and of mathematically evaluating segmentation methods, and then we focus on three tasks which may involve a topic segmentation : text extraction, information retrieval and document presentation. We have developed two graphical interfaces, one for an intrinsic comparison, and the other one dedicated to an evaluation in an information retrieval context. These tools will be very soon distributed under GPL licences on the Technolangue project web page.",2006
foret-nir-2002-rigid,https://aclanthology.org/C02-1111,0,,,,,,,Rigid Lambek Grammars Are Not Learnable from Strings. ,Rigid {L}ambek Grammars Are Not Learnable from Strings,,Rigid Lambek Grammars Are Not Learnable from Strings,,,Rigid Lambek Grammars Are Not Learnable from Strings. ,2002
defauw-etal-2019-collecting,https://aclanthology.org/W19-6733,0,,,,,,,"Collecting domain specific data for MT: an evaluation of the ParaCrawl pipeline. This paper investigates the effectiveness of the ParaCrawl pipeline for collecting domain-specific training data for machine translation. We follow the different steps of the pipeline (document alignment, sentence alignment, cleaning) and add a topic-filtering component. Experiments are performed on the legal domain for the English to French and English to Irish language pairs. We evaluate the pipeline at both intrinsic (alignment quality) and extrinsic (MT performance) levels. Our results show that with this pipeline we obtain high-quality alignments and significant improvements in MT quality.",Collecting domain specific data for {MT}: an evaluation of the {P}ara{C}rawl pipeline,"This paper investigates the effectiveness of the ParaCrawl pipeline for collecting domain-specific training data for machine translation. We follow the different steps of the pipeline (document alignment, sentence alignment, cleaning) and add a topic-filtering component. Experiments are performed on the legal domain for the English to French and English to Irish language pairs. We evaluate the pipeline at both intrinsic (alignment quality) and extrinsic (MT performance) levels. Our results show that with this pipeline we obtain high-quality alignments and significant improvements in MT quality.",Collecting domain specific data for MT: an evaluation of the ParaCrawl pipeline,"This paper investigates the effectiveness of the ParaCrawl pipeline for collecting domain-specific training data for machine translation. We follow the different steps of the pipeline (document alignment, sentence alignment, cleaning) and add a topic-filtering component. Experiments are performed on the legal domain for the English to French and English to Irish language pairs. We evaluate the pipeline at both intrinsic (alignment quality) and extrinsic (MT performance) levels. Our results show that with this pipeline we obtain high-quality alignments and significant improvements in MT quality.","This work was performed in the framework of the SMART 2015/1091 project (""Tools and resources for CEF automated translation""), funded by the CEF Telecom programme (Connecting Europe Facility).","Collecting domain specific data for MT: an evaluation of the ParaCrawl pipeline. This paper investigates the effectiveness of the ParaCrawl pipeline for collecting domain-specific training data for machine translation. We follow the different steps of the pipeline (document alignment, sentence alignment, cleaning) and add a topic-filtering component. Experiments are performed on the legal domain for the English to French and English to Irish language pairs. We evaluate the pipeline at both intrinsic (alignment quality) and extrinsic (MT performance) levels. Our results show that with this pipeline we obtain high-quality alignments and significant improvements in MT quality.",2019
jonnalagadda-etal-2013-evaluating,https://aclanthology.org/W13-0404,0,,,,,,,"Evaluating the Use of Empirically Constructed Lexical Resources for Named Entity Recognition. One of the most time-consuming tasks faced by a Natural Language Processing (NLP) researcher or practitioner trying to adapt a machine-learning-based NER system to a different domain is the creation, compilation, and customization of the needed lexicons. Lexical resources, such as lexicons of concept classes are considered necessary to improve the performance of NER. It is typical for medical informatics researchers to implement modularized systems that cannot be generalized (Stanfill et al. 2010) . As the work of constructing or customizing lexical resources needed for these highly specific systems is human-intensive, automatic generation is a desirable alternative. It might be possible that empirically created lexical resources might incorporate domain knowledge into a machine-learning NER engine and increase its accuracy.
Although many machine learning-based NER techniques require annotated data, semi-supervised and unsupervised techniques for NER have been long been explored due to their value in domain robustness and minimizing labor costs. Some attempts at automatic knowledgebase construction included automatic thesaurus discovery efforts (Grefenstette 1994) , which sought to build lists of similar words without human intervention to aid in query expansion or automatic dictionary construction (Riloff 1996) . More recently, the use of empirically derived semantics for NER is used by Finkel and Manning (Finkel and Manning 2009a) , Turian et al. (Turian et al. 2010) , and ). Finkel's NER tool uses clusters of terms built apriori from the British National corpus (Aston and Burnard 1998) and English gigaword corpus (Graff et al. 2003) for extracting concepts from newswire text and PubMed abstracts for extracting gene mentions from biomedical literature. Turian et al. (Turian et al. 2010 ) also showed that statistically created word clusters (P. F. Brown et al. 1992; Clark 2000) could be used to improve named entity recognition. However, only a single feature (cluster membership) can be derived from the clusters. Semantic vector representations of terms had not been previously used for NER or sequential tagging classification tasks before (Turian et al. 2010) . Although use empirically derived vector representation for extracting concepts defined in the GENIA (Kim, Ohta, and Tsujii 2008) ontology from biomedical literature using rule-based methods, it was not clear whether such methods could be ported to extract other concepts or incrementally improve the performance of an existing system . This work not only demonstrates how such vector representation could improve state-of-the-art NER, but also that they are more useful than statistical clustering in this context.",Evaluating the Use of Empirically Constructed Lexical Resources for Named Entity Recognition,"One of the most time-consuming tasks faced by a Natural Language Processing (NLP) researcher or practitioner trying to adapt a machine-learning-based NER system to a different domain is the creation, compilation, and customization of the needed lexicons. Lexical resources, such as lexicons of concept classes are considered necessary to improve the performance of NER. It is typical for medical informatics researchers to implement modularized systems that cannot be generalized (Stanfill et al. 2010) . As the work of constructing or customizing lexical resources needed for these highly specific systems is human-intensive, automatic generation is a desirable alternative. It might be possible that empirically created lexical resources might incorporate domain knowledge into a machine-learning NER engine and increase its accuracy.
Although many machine learning-based NER techniques require annotated data, semi-supervised and unsupervised techniques for NER have been long been explored due to their value in domain robustness and minimizing labor costs. Some attempts at automatic knowledgebase construction included automatic thesaurus discovery efforts (Grefenstette 1994) , which sought to build lists of similar words without human intervention to aid in query expansion or automatic dictionary construction (Riloff 1996) . More recently, the use of empirically derived semantics for NER is used by Finkel and Manning (Finkel and Manning 2009a) , Turian et al. (Turian et al. 2010) , and ). Finkel's NER tool uses clusters of terms built apriori from the British National corpus (Aston and Burnard 1998) and English gigaword corpus (Graff et al. 2003) for extracting concepts from newswire text and PubMed abstracts for extracting gene mentions from biomedical literature. Turian et al. (Turian et al. 2010 ) also showed that statistically created word clusters (P. F. Brown et al. 1992; Clark 2000) could be used to improve named entity recognition. However, only a single feature (cluster membership) can be derived from the clusters. Semantic vector representations of terms had not been previously used for NER or sequential tagging classification tasks before (Turian et al. 2010) . Although use empirically derived vector representation for extracting concepts defined in the GENIA (Kim, Ohta, and Tsujii 2008) ontology from biomedical literature using rule-based methods, it was not clear whether such methods could be ported to extract other concepts or incrementally improve the performance of an existing system . This work not only demonstrates how such vector representation could improve state-of-the-art NER, but also that they are more useful than statistical clustering in this context.",Evaluating the Use of Empirically Constructed Lexical Resources for Named Entity Recognition,"One of the most time-consuming tasks faced by a Natural Language Processing (NLP) researcher or practitioner trying to adapt a machine-learning-based NER system to a different domain is the creation, compilation, and customization of the needed lexicons. Lexical resources, such as lexicons of concept classes are considered necessary to improve the performance of NER. It is typical for medical informatics researchers to implement modularized systems that cannot be generalized (Stanfill et al. 2010) . As the work of constructing or customizing lexical resources needed for these highly specific systems is human-intensive, automatic generation is a desirable alternative. It might be possible that empirically created lexical resources might incorporate domain knowledge into a machine-learning NER engine and increase its accuracy.
Although many machine learning-based NER techniques require annotated data, semi-supervised and unsupervised techniques for NER have been long been explored due to their value in domain robustness and minimizing labor costs. Some attempts at automatic knowledgebase construction included automatic thesaurus discovery efforts (Grefenstette 1994) , which sought to build lists of similar words without human intervention to aid in query expansion or automatic dictionary construction (Riloff 1996) . More recently, the use of empirically derived semantics for NER is used by Finkel and Manning (Finkel and Manning 2009a) , Turian et al. (Turian et al. 2010) , and ). Finkel's NER tool uses clusters of terms built apriori from the British National corpus (Aston and Burnard 1998) and English gigaword corpus (Graff et al. 2003) for extracting concepts from newswire text and PubMed abstracts for extracting gene mentions from biomedical literature. Turian et al. (Turian et al. 2010 ) also showed that statistically created word clusters (P. F. Brown et al. 1992; Clark 2000) could be used to improve named entity recognition. However, only a single feature (cluster membership) can be derived from the clusters. Semantic vector representations of terms had not been previously used for NER or sequential tagging classification tasks before (Turian et al. 2010) . Although use empirically derived vector representation for extracting concepts defined in the GENIA (Kim, Ohta, and Tsujii 2008) ontology from biomedical literature using rule-based methods, it was not clear whether such methods could be ported to extract other concepts or incrementally improve the performance of an existing system . This work not only demonstrates how such vector representation could improve state-of-the-art NER, but also that they are more useful than statistical clustering in this context.","This work was possible because of funding from possible sources: NLM HHSN276201000031C (PI: Gonzalez), NCRR 3UL1RR024148, NCRR 1RC1RR028254, NSF 0964613 and the Brown Foundation (PI: Bernstam), NSF ABI:0845523, NLM R01LM009959A1 (PI: Liu) and NLM 1K99LM011389 (PI: Jonnalagadda). We also thank the developers of BANNER (http://banner.sourceforge.net/), MALLET (http://mallet.cs.umass.edu/) and Semantic Vectors (http://code.google.com/p/semanticvectors/) for the software packages and the organizers of the i2b2/VA 2010 NLP challenge for sharing the corpus.","Evaluating the Use of Empirically Constructed Lexical Resources for Named Entity Recognition. One of the most time-consuming tasks faced by a Natural Language Processing (NLP) researcher or practitioner trying to adapt a machine-learning-based NER system to a different domain is the creation, compilation, and customization of the needed lexicons. Lexical resources, such as lexicons of concept classes are considered necessary to improve the performance of NER. It is typical for medical informatics researchers to implement modularized systems that cannot be generalized (Stanfill et al. 2010) . As the work of constructing or customizing lexical resources needed for these highly specific systems is human-intensive, automatic generation is a desirable alternative. It might be possible that empirically created lexical resources might incorporate domain knowledge into a machine-learning NER engine and increase its accuracy.
Although many machine learning-based NER techniques require annotated data, semi-supervised and unsupervised techniques for NER have been long been explored due to their value in domain robustness and minimizing labor costs. Some attempts at automatic knowledgebase construction included automatic thesaurus discovery efforts (Grefenstette 1994) , which sought to build lists of similar words without human intervention to aid in query expansion or automatic dictionary construction (Riloff 1996) . More recently, the use of empirically derived semantics for NER is used by Finkel and Manning (Finkel and Manning 2009a) , Turian et al. (Turian et al. 2010) , and ). Finkel's NER tool uses clusters of terms built apriori from the British National corpus (Aston and Burnard 1998) and English gigaword corpus (Graff et al. 2003) for extracting concepts from newswire text and PubMed abstracts for extracting gene mentions from biomedical literature. Turian et al. (Turian et al. 2010 ) also showed that statistically created word clusters (P. F. Brown et al. 1992; Clark 2000) could be used to improve named entity recognition. However, only a single feature (cluster membership) can be derived from the clusters. Semantic vector representations of terms had not been previously used for NER or sequential tagging classification tasks before (Turian et al. 2010) . Although use empirically derived vector representation for extracting concepts defined in the GENIA (Kim, Ohta, and Tsujii 2008) ontology from biomedical literature using rule-based methods, it was not clear whether such methods could be ported to extract other concepts or incrementally improve the performance of an existing system . This work not only demonstrates how such vector representation could improve state-of-the-art NER, but also that they are more useful than statistical clustering in this context.",2013
dai-etal-2020-multi,https://aclanthology.org/2020.emnlp-main.565,0,,,,,,,"A Multi-Task Incremental Learning Framework with Category Name Embedding for Aspect-Category Sentiment Analysis. (T)ACSA tasks, including aspect-category sentiment analysis (ACSA) and targeted aspect-category sentiment analysis (TACSA), aims at identifying sentiment polarity on predefined categories. Incremental learning on new categories is necessary for (T)ACSA real applications. Though current multi-task learning models achieve good performance in (T)ACSA tasks, they suffer from catastrophic forgetting problems in (T)ACSA incremental learning tasks. In this paper, to make multi-task learning feasible for incremental learning, we proposed Category Name Embedding network (CNE-net). We set both encoder and decoder shared among all categories to weaken the catastrophic forgetting problem. Besides the origin input sentence, we applied another input feature, i.e., category name, for task discrimination. Our model achieved state-of-the-art on two (T)ACSA benchmark datasets. Furthermore, we proposed a dataset for (T)ACSA incremental learning and achieved the best performance compared with other strong baselines.",A Multi-Task Incremental Learning Framework with Category Name Embedding for Aspect-Category Sentiment Analysis,"(T)ACSA tasks, including aspect-category sentiment analysis (ACSA) and targeted aspect-category sentiment analysis (TACSA), aims at identifying sentiment polarity on predefined categories. Incremental learning on new categories is necessary for (T)ACSA real applications. Though current multi-task learning models achieve good performance in (T)ACSA tasks, they suffer from catastrophic forgetting problems in (T)ACSA incremental learning tasks. In this paper, to make multi-task learning feasible for incremental learning, we proposed Category Name Embedding network (CNE-net). We set both encoder and decoder shared among all categories to weaken the catastrophic forgetting problem. Besides the origin input sentence, we applied another input feature, i.e., category name, for task discrimination. Our model achieved state-of-the-art on two (T)ACSA benchmark datasets. Furthermore, we proposed a dataset for (T)ACSA incremental learning and achieved the best performance compared with other strong baselines.",A Multi-Task Incremental Learning Framework with Category Name Embedding for Aspect-Category Sentiment Analysis,"(T)ACSA tasks, including aspect-category sentiment analysis (ACSA) and targeted aspect-category sentiment analysis (TACSA), aims at identifying sentiment polarity on predefined categories. Incremental learning on new categories is necessary for (T)ACSA real applications. Though current multi-task learning models achieve good performance in (T)ACSA tasks, they suffer from catastrophic forgetting problems in (T)ACSA incremental learning tasks. In this paper, to make multi-task learning feasible for incremental learning, we proposed Category Name Embedding network (CNE-net). We set both encoder and decoder shared among all categories to weaken the catastrophic forgetting problem. Besides the origin input sentence, we applied another input feature, i.e., category name, for task discrimination. Our model achieved state-of-the-art on two (T)ACSA benchmark datasets. 
Furthermore, we proposed a dataset for (T)ACSA incremental learning and achieved the best performance compared with other strong baselines.",,"A Multi-Task Incremental Learning Framework with Category Name Embedding for Aspect-Category Sentiment Analysis. (T)ACSA tasks, including aspect-category sentiment analysis (ACSA) and targeted aspect-category sentiment analysis (TACSA), aims at identifying sentiment polarity on predefined categories. Incremental learning on new categories is necessary for (T)ACSA real applications. Though current multi-task learning models achieve good performance in (T)ACSA tasks, they suffer from catastrophic forgetting problems in (T)ACSA incremental learning tasks. In this paper, to make multi-task learning feasible for incremental learning, we proposed Category Name Embedding network (CNE-net). We set both encoder and decoder shared among all categories to weaken the catastrophic forgetting problem. Besides the origin input sentence, we applied another input feature, i.e., category name, for task discrimination. Our model achieved state-of-the-art on two (T)ACSA benchmark datasets. Furthermore, we proposed a dataset for (T)ACSA incremental learning and achieved the best performance compared with other strong baselines.",2020
nn-1977-finite-string-volume-14-number-5,https://aclanthology.org/J77-3003,0,,,,,,,"The FINITE STRING, Volume 14, Number 5. AMERICAN JOURNAL OF COMPUTATIONAL LINGUISTICS is published by the Association for Computational Linguistics.","The {F}INITE {S}TRING, Volume 14, Number 5",AMERICAN JOURNAL OF COMPUTATIONAL LINGUISTICS is published by the Association for Computational Linguistics.,"The FINITE STRING, Volume 14, Number 5",AMERICAN JOURNAL OF COMPUTATIONAL LINGUISTICS is published by the Association for Computational Linguistics.,,"The FINITE STRING, Volume 14, Number 5. AMERICAN JOURNAL OF COMPUTATIONAL LINGUISTICS is published by the Association for Computational Linguistics.",1977
tyers-etal-2012-flexible,https://aclanthology.org/2012.eamt-1.54,0,,,,,,,"Flexible finite-state lexical selection for rule-based machine translation. In this paper we describe a module (rule formalism, rule compiler and rule processor) designed to provide flexible support for lexical selection in rule-based machine translation. The motivation and implementation for the system is outlined and an efficient algorithm to compute the best coverage of lexical-selection rules over an ambiguous input sentence is described. We provide a demonstration of the module by learning rules for it on a typical training corpus and evaluating against other possible lexicalselection strategies. The inclusion of the module, along with rules learnt from the parallel corpus provides a small, but consistent and statistically-significant improvement over either using the highest-scoring translation according to a target-language model or using the most frequent aligned translation in the parallel corpus which is also found in the system's bilingual dictionaries.",Flexible finite-state lexical selection for rule-based machine translation,"In this paper we describe a module (rule formalism, rule compiler and rule processor) designed to provide flexible support for lexical selection in rule-based machine translation. The motivation and implementation for the system is outlined and an efficient algorithm to compute the best coverage of lexical-selection rules over an ambiguous input sentence is described. We provide a demonstration of the module by learning rules for it on a typical training corpus and evaluating against other possible lexicalselection strategies. The inclusion of the module, along with rules learnt from the parallel corpus provides a small, but consistent and statistically-significant improvement over either using the highest-scoring translation according to a target-language model or using the most frequent aligned translation in the parallel corpus which is also found in the system's bilingual dictionaries.",Flexible finite-state lexical selection for rule-based machine translation,"In this paper we describe a module (rule formalism, rule compiler and rule processor) designed to provide flexible support for lexical selection in rule-based machine translation. The motivation and implementation for the system is outlined and an efficient algorithm to compute the best coverage of lexical-selection rules over an ambiguous input sentence is described. We provide a demonstration of the module by learning rules for it on a typical training corpus and evaluating against other possible lexicalselection strategies. The inclusion of the module, along with rules learnt from the parallel corpus provides a small, but consistent and statistically-significant improvement over either using the highest-scoring translation according to a target-language model or using the most frequent aligned translation in the parallel corpus which is also found in the system's bilingual dictionaries.","We are thankful for the support of the Spanish Ministry of Science and Innovation through project TIN2009-14009-C02-01, and the Universitat d'Alacant through project GRE11-20. We also thank Sergio Ortiz Rojas for his constructive comments and ideas on the development of the system, and the anonymous reviewers for comments on the manuscript.","Flexible finite-state lexical selection for rule-based machine translation. 
In this paper we describe a module (rule formalism, rule compiler and rule processor) designed to provide flexible support for lexical selection in rule-based machine translation. The motivation and implementation for the system is outlined and an efficient algorithm to compute the best coverage of lexical-selection rules over an ambiguous input sentence is described. We provide a demonstration of the module by learning rules for it on a typical training corpus and evaluating against other possible lexicalselection strategies. The inclusion of the module, along with rules learnt from the parallel corpus provides a small, but consistent and statistically-significant improvement over either using the highest-scoring translation according to a target-language model or using the most frequent aligned translation in the parallel corpus which is also found in the system's bilingual dictionaries.",2012
pucher-2007-wordnet,https://aclanthology.org/P07-2033,0,,,,,,,WordNet-based Semantic Relatedness Measures in Automatic Speech Recognition for Meetings. This paper presents the application of WordNet-based semantic relatedness measures to Automatic Speech Recognition (ASR) in multi-party meetings. Different word-utterance context relatedness measures and utterance-coherence measures are defined and applied to the rescoring of Nbest lists. No significant improvements in terms of Word-Error-Rate (WER) are achieved compared to a large word-based ngram baseline model. We discuss our results and the relation to other work that achieved an improvement with such models for simpler tasks.,{W}ord{N}et-based Semantic Relatedness Measures in Automatic Speech Recognition for Meetings,This paper presents the application of WordNet-based semantic relatedness measures to Automatic Speech Recognition (ASR) in multi-party meetings. Different word-utterance context relatedness measures and utterance-coherence measures are defined and applied to the rescoring of Nbest lists. No significant improvements in terms of Word-Error-Rate (WER) are achieved compared to a large word-based ngram baseline model. We discuss our results and the relation to other work that achieved an improvement with such models for simpler tasks.,WordNet-based Semantic Relatedness Measures in Automatic Speech Recognition for Meetings,This paper presents the application of WordNet-based semantic relatedness measures to Automatic Speech Recognition (ASR) in multi-party meetings. Different word-utterance context relatedness measures and utterance-coherence measures are defined and applied to the rescoring of Nbest lists. No significant improvements in terms of Word-Error-Rate (WER) are achieved compared to a large word-based ngram baseline model. We discuss our results and the relation to other work that achieved an improvement with such models for simpler tasks.,"This work was supported by the European Union 6th FP IST Integrated Project AMI (Augmented Multiparty Interaction, and by Kapsch Carrier-Com AG and Mobilkom Austria AG together with the Austrian competence centre programme Kplus.",WordNet-based Semantic Relatedness Measures in Automatic Speech Recognition for Meetings. This paper presents the application of WordNet-based semantic relatedness measures to Automatic Speech Recognition (ASR) in multi-party meetings. Different word-utterance context relatedness measures and utterance-coherence measures are defined and applied to the rescoring of Nbest lists. No significant improvements in terms of Word-Error-Rate (WER) are achieved compared to a large word-based ngram baseline model. We discuss our results and the relation to other work that achieved an improvement with such models for simpler tasks.,2007
calzolari-etal-2004-enabler,http://www.lrec-conf.org/proceedings/lrec2004/pdf/545.pdf,1,,,,industry_innovation_infrastructure,peace_justice_and_strong_institutions,,"ENABLER Thematic Network of National Projects: Technical, Strategic and Political Issues of LRs. In this paper we present general strategies concerning Language Resources (LRs)-Written, Spoken and, recently, Multimodal-as developed within the ENABLER Thematic Network. LRs are a central component of the so-called ""linguistic infrastructure"" (the other key element being Evaluation), necessary for the development of any Human Language Technology (HLT) application. They play a critical role, as horizontal technology, in different emerging areas of FP6, and have been recognized as a priority within a number of national projects around Europe and worldwide. The availability of LRs is also a ""sensitive"" issue, touching directly the sphere of linguistic and cultural identity, but also with economical, societal and political implications. This is going to be even more true in the new Europe with 25 languages on a par.","{ENABLER} Thematic Network of National Projects: Technical, Strategic and Political Issues of {LR}s","In this paper we present general strategies concerning Language Resources (LRs)-Written, Spoken and, recently, Multimodal-as developed within the ENABLER Thematic Network. LRs are a central component of the so-called ""linguistic infrastructure"" (the other key element being Evaluation), necessary for the development of any Human Language Technology (HLT) application. They play a critical role, as horizontal technology, in different emerging areas of FP6, and have been recognized as a priority within a number of national projects around Europe and worldwide. The availability of LRs is also a ""sensitive"" issue, touching directly the sphere of linguistic and cultural identity, but also with economical, societal and political implications. This is going to be even more true in the new Europe with 25 languages on a par.","ENABLER Thematic Network of National Projects: Technical, Strategic and Political Issues of LRs","In this paper we present general strategies concerning Language Resources (LRs)-Written, Spoken and, recently, Multimodal-as developed within the ENABLER Thematic Network. LRs are a central component of the so-called ""linguistic infrastructure"" (the other key element being Evaluation), necessary for the development of any Human Language Technology (HLT) application. They play a critical role, as horizontal technology, in different emerging areas of FP6, and have been recognized as a priority within a number of national projects around Europe and worldwide. The availability of LRs is also a ""sensitive"" issue, touching directly the sphere of linguistic and cultural identity, but also with economical, societal and political implications. This is going to be even more true in the new Europe with 25 languages on a par.",,"ENABLER Thematic Network of National Projects: Technical, Strategic and Political Issues of LRs. In this paper we present general strategies concerning Language Resources (LRs)-Written, Spoken and, recently, Multimodal-as developed within the ENABLER Thematic Network. LRs are a central component of the so-called ""linguistic infrastructure"" (the other key element being Evaluation), necessary for the development of any Human Language Technology (HLT) application. 
They play a critical role, as horizontal technology, in different emerging areas of FP6, and have been recognized as a priority within a number of national projects around Europe and worldwide. The availability of LRs is also a ""sensitive"" issue, touching directly the sphere of linguistic and cultural identity, but also with economical, societal and political implications. This is going to be even more true in the new Europe with 25 languages on a par.",2004
escoter-etal-2017-grouping,https://aclanthology.org/E17-1103,0,,,,finance,,,"Grouping business news stories based on salience of named entities. In news aggregation systems focused on broad news domains, certain stories may appear in multiple articles. Depending on the relative importance of the story, the number of versions can reach dozens or hundreds within a day. The text in these versions may be nearly identical or quite different. Linking multiple versions of a story into a single group brings several important benefits to the end-user-reducing the cognitive load on the reader, as well as signaling the relative importance of the story. We present a grouping algorithm, and explore several vector-based representations of input documents: from a baseline using keywords, to a method using salience-a measure of importance of named entities in the text. We demonstrate that features beyond keywords yield substantial improvements, verified on a manually-annotated corpus of business news stories.",Grouping business news stories based on salience of named entities,"In news aggregation systems focused on broad news domains, certain stories may appear in multiple articles. Depending on the relative importance of the story, the number of versions can reach dozens or hundreds within a day. The text in these versions may be nearly identical or quite different. Linking multiple versions of a story into a single group brings several important benefits to the end-user-reducing the cognitive load on the reader, as well as signaling the relative importance of the story. We present a grouping algorithm, and explore several vector-based representations of input documents: from a baseline using keywords, to a method using salience-a measure of importance of named entities in the text. We demonstrate that features beyond keywords yield substantial improvements, verified on a manually-annotated corpus of business news stories.",Grouping business news stories based on salience of named entities,"In news aggregation systems focused on broad news domains, certain stories may appear in multiple articles. Depending on the relative importance of the story, the number of versions can reach dozens or hundreds within a day. The text in these versions may be nearly identical or quite different. Linking multiple versions of a story into a single group brings several important benefits to the end-user-reducing the cognitive load on the reader, as well as signaling the relative importance of the story. We present a grouping algorithm, and explore several vector-based representations of input documents: from a baseline using keywords, to a method using salience-a measure of importance of named entities in the text. We demonstrate that features beyond keywords yield substantial improvements, verified on a manually-annotated corpus of business news stories.",,"Grouping business news stories based on salience of named entities. In news aggregation systems focused on broad news domains, certain stories may appear in multiple articles. Depending on the relative importance of the story, the number of versions can reach dozens or hundreds within a day. The text in these versions may be nearly identical or quite different. Linking multiple versions of a story into a single group brings several important benefits to the end-user-reducing the cognitive load on the reader, as well as signaling the relative importance of the story. 
We present a grouping algorithm, and explore several vector-based representations of input documents: from a baseline using keywords, to a method using salience-a measure of importance of named entities in the text. We demonstrate that features beyond keywords yield substantial improvements, verified on a manually-annotated corpus of business news stories.",2017
sartorio-etal-2013-transition,https://aclanthology.org/P13-1014,0,,,,,,,"A Transition-Based Dependency Parser Using a Dynamic Parsing Strategy. We present a novel transition-based, greedy dependency parser which implements a flexible mix of bottom-up and top-down strategies. The new strategy allows the parser to postpone difficult decisions until the relevant information becomes available. The novel parser has a ∼12% error reduction in unlabeled attachment score over an arc-eager parser, with a slowdown factor of 2.8.",A Transition-Based Dependency Parser Using a Dynamic Parsing Strategy,"We present a novel transition-based, greedy dependency parser which implements a flexible mix of bottom-up and top-down strategies. The new strategy allows the parser to postpone difficult decisions until the relevant information becomes available. The novel parser has a ∼12% error reduction in unlabeled attachment score over an arc-eager parser, with a slowdown factor of 2.8.",A Transition-Based Dependency Parser Using a Dynamic Parsing Strategy,"We present a novel transition-based, greedy dependency parser which implements a flexible mix of bottom-up and top-down strategies. The new strategy allows the parser to postpone difficult decisions until the relevant information becomes available. The novel parser has a ∼12% error reduction in unlabeled attachment score over an arc-eager parser, with a slowdown factor of 2.8.","We wish to thank Liang Huang and Marco Kuhlmann for discussion related to the ideas reported in this paper, and the anonymous reviewers for their useful suggestions. The second author has been partially supported by MIUR under project PRIN No. 2010LYA9RH 006.","A Transition-Based Dependency Parser Using a Dynamic Parsing Strategy. We present a novel transition-based, greedy dependency parser which implements a flexible mix of bottom-up and top-down strategies. The new strategy allows the parser to postpone difficult decisions until the relevant information becomes available. The novel parser has a ∼12% error reduction in unlabeled attachment score over an arc-eager parser, with a slowdown factor of 2.8.",2013
bagga-2000-analyzing,https://aclanthology.org/W00-0106,0,,,,,,,"Analyzing the Reading Comprehension Task. In this paper we describe a method for analyzing the reading comprehension task. First, we describe a method of classifying facts (information) into categories or levels; where each level signifies a different degree of difficulty of extracting a fact from a piece of text containing it. We then proceed to show how one can use this model to analyze the complexity of the reading comprehension task. Finally, we analyze five different reading comprehension tasks and present results from this analysis.",Analyzing the Reading Comprehension Task,"In this paper we describe a method for analyzing the reading comprehension task. First, we describe a method of classifying facts (information) into categories or levels; where each level signifies a different degree of difficulty of extracting a fact from a piece of text containing it. We then proceed to show how one can use this model to analyze the complexity of the reading comprehension task. Finally, we analyze five different reading comprehension tasks and present results from this analysis.",Analyzing the Reading Comprehension Task,"In this paper we describe a method for analyzing the reading comprehension task. First, we describe a method of classifying facts (information) into categories or levels; where each level signifies a different degree of difficulty of extracting a fact from a piece of text containing it. We then proceed to show how one can use this model to analyze the complexity of the reading comprehension task. Finally, we analyze five different reading comprehension tasks and present results from this analysis.",,"Analyzing the Reading Comprehension Task. In this paper we describe a method for analyzing the reading comprehension task. First, we describe a method of classifying facts (information) into categories or levels; where each level signifies a different degree of difficulty of extracting a fact from a piece of text containing it. We then proceed to show how one can use this model to analyze the complexity of the reading comprehension task. Finally, we analyze five different reading comprehension tasks and present results from this analysis.",2000
chen-etal-2020-distilling,https://aclanthology.org/2020.acl-main.705,0,,,,,,,"Distilling Knowledge Learned in BERT for Text Generation. Large-scale pre-trained language model such as BERT has achieved great success in language understanding tasks. However, it remains an open question how to utilize BERT for language generation. In this paper, we present a novel approach, Conditional Masked Language Modeling (C-MLM), to enable the finetuning of BERT on target generation tasks. The finetuned BERT (teacher) is exploited as extra supervision to improve conventional Seq2Seq models (student) for better text generation performance. By leveraging BERT's idiosyncratic bidirectional nature, distilling knowledge learned in BERT can encourage auto-regressive Seq2Seq models to plan ahead, imposing global sequence-level supervision for coherent text generation. Experiments show that the proposed approach significantly outperforms strong Transformer baselines on multiple language generation tasks such as machine translation and text summarization. Our proposed model also achieves new state of the art on IWSLT German-English and English-Vietnamese MT datasets. 1",Distilling Knowledge Learned in {BERT} for Text Generation,"Large-scale pre-trained language model such as BERT has achieved great success in language understanding tasks. However, it remains an open question how to utilize BERT for language generation. In this paper, we present a novel approach, Conditional Masked Language Modeling (C-MLM), to enable the finetuning of BERT on target generation tasks. The finetuned BERT (teacher) is exploited as extra supervision to improve conventional Seq2Seq models (student) for better text generation performance. By leveraging BERT's idiosyncratic bidirectional nature, distilling knowledge learned in BERT can encourage auto-regressive Seq2Seq models to plan ahead, imposing global sequence-level supervision for coherent text generation. Experiments show that the proposed approach significantly outperforms strong Transformer baselines on multiple language generation tasks such as machine translation and text summarization. Our proposed model also achieves new state of the art on IWSLT German-English and English-Vietnamese MT datasets. 1",Distilling Knowledge Learned in BERT for Text Generation,"Large-scale pre-trained language model such as BERT has achieved great success in language understanding tasks. However, it remains an open question how to utilize BERT for language generation. In this paper, we present a novel approach, Conditional Masked Language Modeling (C-MLM), to enable the finetuning of BERT on target generation tasks. The finetuned BERT (teacher) is exploited as extra supervision to improve conventional Seq2Seq models (student) for better text generation performance. By leveraging BERT's idiosyncratic bidirectional nature, distilling knowledge learned in BERT can encourage auto-regressive Seq2Seq models to plan ahead, imposing global sequence-level supervision for coherent text generation. Experiments show that the proposed approach significantly outperforms strong Transformer baselines on multiple language generation tasks such as machine translation and text summarization. Our proposed model also achieves new state of the art on IWSLT German-English and English-Vietnamese MT datasets. 1",,"Distilling Knowledge Learned in BERT for Text Generation. Large-scale pre-trained language model such as BERT has achieved great success in language understanding tasks. 
However, it remains an open question how to utilize BERT for language generation. In this paper, we present a novel approach, Conditional Masked Language Modeling (C-MLM), to enable the finetuning of BERT on target generation tasks. The finetuned BERT (teacher) is exploited as extra supervision to improve conventional Seq2Seq models (student) for better text generation performance. By leveraging BERT's idiosyncratic bidirectional nature, distilling knowledge learned in BERT can encourage auto-regressive Seq2Seq models to plan ahead, imposing global sequence-level supervision for coherent text generation. Experiments show that the proposed approach significantly outperforms strong Transformer baselines on multiple language generation tasks such as machine translation and text summarization. Our proposed model also achieves new state of the art on IWSLT German-English and English-Vietnamese MT datasets. 1",2020
castillo-2010-machine,https://aclanthology.org/W10-1609,0,,,,,,,"A Machine Learning Approach for Recognizing Textual Entailment in Spanish. This paper presents a system that uses machine learning algorithms for the task of recognizing textual entailment in Spanish language. The datasets used include SPARTE Corpus and a translated version to Spanish of RTE3, RTE4 and RTE5 datasets. The features chosen quantify lexical, syntactic and semantic level matching between text and hypothesis sentences. We analyze how the different sizes of datasets and classifiers could impact on the final overall performance of the RTE classification of two-way task in Spanish. The RTE system yields 60.83% of accuracy and a competitive result of 66.50% of accuracy is reported by train and test set taken from SPARTE Corpus with 70% split.",A Machine Learning Approach for Recognizing Textual Entailment in {S}panish,"This paper presents a system that uses machine learning algorithms for the task of recognizing textual entailment in Spanish language. The datasets used include SPARTE Corpus and a translated version to Spanish of RTE3, RTE4 and RTE5 datasets. The features chosen quantify lexical, syntactic and semantic level matching between text and hypothesis sentences. We analyze how the different sizes of datasets and classifiers could impact on the final overall performance of the RTE classification of two-way task in Spanish. The RTE system yields 60.83% of accuracy and a competitive result of 66.50% of accuracy is reported by train and test set taken from SPARTE Corpus with 70% split.",A Machine Learning Approach for Recognizing Textual Entailment in Spanish,"This paper presents a system that uses machine learning algorithms for the task of recognizing textual entailment in Spanish language. The datasets used include SPARTE Corpus and a translated version to Spanish of RTE3, RTE4 and RTE5 datasets. The features chosen quantify lexical, syntactic and semantic level matching between text and hypothesis sentences. We analyze how the different sizes of datasets and classifiers could impact on the final overall performance of the RTE classification of two-way task in Spanish. The RTE system yields 60.83% of accuracy and a competitive result of 66.50% of accuracy is reported by train and test set taken from SPARTE Corpus with 70% split.",,"A Machine Learning Approach for Recognizing Textual Entailment in Spanish. This paper presents a system that uses machine learning algorithms for the task of recognizing textual entailment in Spanish language. The datasets used include SPARTE Corpus and a translated version to Spanish of RTE3, RTE4 and RTE5 datasets. The features chosen quantify lexical, syntactic and semantic level matching between text and hypothesis sentences. We analyze how the different sizes of datasets and classifiers could impact on the final overall performance of the RTE classification of two-way task in Spanish. The RTE system yields 60.83% of accuracy and a competitive result of 66.50% of accuracy is reported by train and test set taken from SPARTE Corpus with 70% split.",2010
wich-etal-2020-impact,https://aclanthology.org/2020.alw-1.7,1,,,,hate_speech,,,"Impact of Politically Biased Data on Hate Speech Classification. One challenge that social media platforms are facing nowadays is hate speech. Hence, automatic hate speech detection has been increasingly researched in recent years-in particular with the rise of deep learning. A problem of these models is their vulnerability to undesirable bias in training data. We investigate the impact of political bias on hate speech classification by constructing three politically-biased data sets (left-wing, right-wing, politically neutral) and compare the performance of classifiers trained on them. We show that (1) political bias negatively impairs the performance of hate speech classifiers and (2) an explainable machine learning model can help to visualize such bias within the training data. The results show that political bias in training data has an impact on hate speech classification and can become a serious issue.",Impact of Politically Biased Data on Hate Speech Classification,"One challenge that social media platforms are facing nowadays is hate speech. Hence, automatic hate speech detection has been increasingly researched in recent years-in particular with the rise of deep learning. A problem of these models is their vulnerability to undesirable bias in training data. We investigate the impact of political bias on hate speech classification by constructing three politically-biased data sets (left-wing, right-wing, politically neutral) and compare the performance of classifiers trained on them. We show that (1) political bias negatively impairs the performance of hate speech classifiers and (2) an explainable machine learning model can help to visualize such bias within the training data. The results show that political bias in training data has an impact on hate speech classification and can become a serious issue.",Impact of Politically Biased Data on Hate Speech Classification,"One challenge that social media platforms are facing nowadays is hate speech. Hence, automatic hate speech detection has been increasingly researched in recent years-in particular with the rise of deep learning. A problem of these models is their vulnerability to undesirable bias in training data. We investigate the impact of political bias on hate speech classification by constructing three politically-biased data sets (left-wing, right-wing, politically neutral) and compare the performance of classifiers trained on them. We show that (1) political bias negatively impairs the performance of hate speech classifiers and (2) an explainable machine learning model can help to visualize such bias within the training data. The results show that political bias in training data has an impact on hate speech classification and can become a serious issue.","This paper is based on a joined work in the context of Jan Bauer's master's thesis (Bauer, 2020) . This research has been partially funded by a scholarship from the Hanns Seidel Foundation financed by the German Federal Ministry of Education and Research.","Impact of Politically Biased Data on Hate Speech Classification. One challenge that social media platforms are facing nowadays is hate speech. Hence, automatic hate speech detection has been increasingly researched in recent years-in particular with the rise of deep learning. A problem of these models is their vulnerability to undesirable bias in training data. 
We investigate the impact of political bias on hate speech classification by constructing three politically-biased data sets (left-wing, right-wing, politically neutral) and compare the performance of classifiers trained on them. We show that (1) political bias negatively impairs the performance of hate speech classifiers and (2) an explainable machine learning model can help to visualize such bias within the training data. The results show that political bias in training data has an impact on hate speech classification and can become a serious issue.",2020
temperley-2010-invited,https://aclanthology.org/N10-1114,0,,,,,,,"Invited Talk: Music, Language, and Computational Modeling: Lessons from the Key-Finding Problem. Recent research in computational music research, including my own, has been greatly influenced by methods in computational linguistics. But I believe the influence could also go the other way: Music may offer some interesting lessons for language research, particularly with regard to the modeling of cognition.","Invited Talk: Music, Language, and Computational Modeling: Lessons from the Key-Finding Problem","Recent research in computational music research, including my own, has been greatly influenced by methods in computational linguistics. But I believe the influence could also go the other way: Music may offer some interesting lessons for language research, particularly with regard to the modeling of cognition.","Invited Talk: Music, Language, and Computational Modeling: Lessons from the Key-Finding Problem","Recent research in computational music research, including my own, has been greatly influenced by methods in computational linguistics. But I believe the influence could also go the other way: Music may offer some interesting lessons for language research, particularly with regard to the modeling of cognition.",,"Invited Talk: Music, Language, and Computational Modeling: Lessons from the Key-Finding Problem. Recent research in computational music research, including my own, has been greatly influenced by methods in computational linguistics. But I believe the influence could also go the other way: Music may offer some interesting lessons for language research, particularly with regard to the modeling of cognition.",2010
singh-etal-2011-large,https://aclanthology.org/P11-1080,0,,,,,,,"Large-Scale Cross-Document Coreference Using Distributed Inference and Hierarchical Models. Cross-document coreference, the task of grouping all the mentions of each entity in a document collection, arises in information extraction and automated knowledge base construction. For large collections, it is clearly impractical to consider all possible groupings of mentions into distinct entities. To solve the problem we propose two ideas: (a) a distributed inference technique that uses parallelism to enable large scale processing, and (b) a hierarchical model of coreference that represents uncertainty over multiple granularities of entities to facilitate more effective approximate inference. To evaluate these ideas, we constructed a labeled corpus of 1.5 million disambiguated mentions in Web pages by selecting link anchors referring to Wikipedia entities. We show that the combination of the hierarchical model with distributed inference quickly obtains high accuracy (with error reduction of 38%) on this large dataset, demonstrating the scalability of our approach.",Large-Scale Cross-Document Coreference Using Distributed Inference and Hierarchical Models,"Cross-document coreference, the task of grouping all the mentions of each entity in a document collection, arises in information extraction and automated knowledge base construction. For large collections, it is clearly impractical to consider all possible groupings of mentions into distinct entities. To solve the problem we propose two ideas: (a) a distributed inference technique that uses parallelism to enable large scale processing, and (b) a hierarchical model of coreference that represents uncertainty over multiple granularities of entities to facilitate more effective approximate inference. To evaluate these ideas, we constructed a labeled corpus of 1.5 million disambiguated mentions in Web pages by selecting link anchors referring to Wikipedia entities. We show that the combination of the hierarchical model with distributed inference quickly obtains high accuracy (with error reduction of 38%) on this large dataset, demonstrating the scalability of our approach.",Large-Scale Cross-Document Coreference Using Distributed Inference and Hierarchical Models,"Cross-document coreference, the task of grouping all the mentions of each entity in a document collection, arises in information extraction and automated knowledge base construction. For large collections, it is clearly impractical to consider all possible groupings of mentions into distinct entities. To solve the problem we propose two ideas: (a) a distributed inference technique that uses parallelism to enable large scale processing, and (b) a hierarchical model of coreference that represents uncertainty over multiple granularities of entities to facilitate more effective approximate inference. To evaluate these ideas, we constructed a labeled corpus of 1.5 million disambiguated mentions in Web pages by selecting link anchors referring to Wikipedia entities. We show that the combination of the hierarchical model with distributed inference quickly obtains high accuracy (with error reduction of 38%) on this large dataset, demonstrating the scalability of our approach.","This work was done when the first author was an intern at Google Research. The authors would like to thank Mark Dredze, Sebastian Riedel, and anonymous reviewers for their valuable feedback. 
This work was supported in part by the Center for Intelligent Information Retrieval, the University of Massachusetts gratefully acknowledges the support of Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0181., in part by an award from Google, in part by The Central Intelligence Agency, the National Security Agency and National Science Foundation under NSF grant #IIS-0326249, in part by NSF grant #CNS-0958392, and in part by UPenn NSF medium IIS-0803847. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.","Large-Scale Cross-Document Coreference Using Distributed Inference and Hierarchical Models. Cross-document coreference, the task of grouping all the mentions of each entity in a document collection, arises in information extraction and automated knowledge base construction. For large collections, it is clearly impractical to consider all possible groupings of mentions into distinct entities. To solve the problem we propose two ideas: (a) a distributed inference technique that uses parallelism to enable large scale processing, and (b) a hierarchical model of coreference that represents uncertainty over multiple granularities of entities to facilitate more effective approximate inference. To evaluate these ideas, we constructed a labeled corpus of 1.5 million disambiguated mentions in Web pages by selecting link anchors referring to Wikipedia entities. We show that the combination of the hierarchical model with distributed inference quickly obtains high accuracy (with error reduction of 38%) on this large dataset, demonstrating the scalability of our approach.",2011
dethlefs-etal-2014-cluster,https://aclanthology.org/E14-1074,0,,,,,,,"Cluster-based Prediction of User Ratings for Stylistic Surface Realisation. Surface realisations typically depend on their target style and audience. A challenge in estimating a stylistic realiser from data is that humans vary significantly in their subjective perceptions of linguistic forms and styles, leading to almost no correlation between ratings of the same utterance. We address this problem in two steps. First, we estimate a mapping function between the linguistic features of a corpus of utterances and their human style ratings. Users are partitioned into clusters based on the similarity of their ratings, so that ratings for new utterances can be estimated, even for new, unknown users. In a second step, the estimated model is used to re-rank the outputs of a number of surface realisers to produce stylistically adaptive output. Results confirm that the generated styles are recognisable to human judges and that predictive models based on clusters of users lead to better rating predictions than models based on an average population of users.",Cluster-based Prediction of User Ratings for Stylistic Surface Realisation,"Surface realisations typically depend on their target style and audience. A challenge in estimating a stylistic realiser from data is that humans vary significantly in their subjective perceptions of linguistic forms and styles, leading to almost no correlation between ratings of the same utterance. We address this problem in two steps. First, we estimate a mapping function between the linguistic features of a corpus of utterances and their human style ratings. Users are partitioned into clusters based on the similarity of their ratings, so that ratings for new utterances can be estimated, even for new, unknown users. In a second step, the estimated model is used to re-rank the outputs of a number of surface realisers to produce stylistically adaptive output. Results confirm that the generated styles are recognisable to human judges and that predictive models based on clusters of users lead to better rating predictions than models based on an average population of users.",Cluster-based Prediction of User Ratings for Stylistic Surface Realisation,"Surface realisations typically depend on their target style and audience. A challenge in estimating a stylistic realiser from data is that humans vary significantly in their subjective perceptions of linguistic forms and styles, leading to almost no correlation between ratings of the same utterance. We address this problem in two steps. First, we estimate a mapping function between the linguistic features of a corpus of utterances and their human style ratings. Users are partitioned into clusters based on the similarity of their ratings, so that ratings for new utterances can be estimated, even for new, unknown users. In a second step, the estimated model is used to re-rank the outputs of a number of surface realisers to produce stylistically adaptive output. Results confirm that the generated styles are recognisable to human judges and that predictive models based on clusters of users lead to better rating predictions than models based on an average population of users.",Acknowledgements This research was funded by the EC FP7 programme FP7/2011-14 under grant agreements no. 270019 (SPACEBOOK) and no. 287615 (PARLANCE).,"Cluster-based Prediction of User Ratings for Stylistic Surface Realisation. 
Surface realisations typically depend on their target style and audience. A challenge in estimating a stylistic realiser from data is that humans vary significantly in their subjective perceptions of linguistic forms and styles, leading to almost no correlation between ratings of the same utterance. We address this problem in two steps. First, we estimate a mapping function between the linguistic features of a corpus of utterances and their human style ratings. Users are partitioned into clusters based on the similarity of their ratings, so that ratings for new utterances can be estimated, even for new, unknown users. In a second step, the estimated model is used to re-rank the outputs of a number of surface realisers to produce stylistically adaptive output. Results confirm that the generated styles are recognisable to human judges and that predictive models based on clusters of users lead to better rating predictions than models based on an average population of users.",2014
yang-etal-2016-extraction,https://aclanthology.org/N16-2012,1,,,,peace_justice_and_strong_institutions,industry_innovation_infrastructure,,"Extraction of Bilingual Technical Terms for Chinese-Japanese Patent Translation. The translation of patents or scientific papers is a key issue that should be helped by the use of statistical machine translation (SMT). In this paper, we propose a method to improve Chinese-Japanese patent SMT by premarking the training corpus with aligned bilingual multi-word terms. We automatically extract multi-word terms from monolingual corpora by combining statistical and linguistic filtering methods. We use the sampling-based alignment method to identify aligned terms and set some threshold on translation probabilities to select the most promising bilingual multi-word terms. We pre-mark a Chinese-Japanese training corpus with such selected aligned bilingual multi-word terms. We obtain the performance of over 70% precision in bilingual term extraction and a significant improvement of BLEU scores in our experiments on a Chinese-Japanese patent parallel corpus.",Extraction of Bilingual Technical Terms for {C}hinese-{J}apanese Patent Translation,"The translation of patents or scientific papers is a key issue that should be helped by the use of statistical machine translation (SMT). In this paper, we propose a method to improve Chinese-Japanese patent SMT by premarking the training corpus with aligned bilingual multi-word terms. We automatically extract multi-word terms from monolingual corpora by combining statistical and linguistic filtering methods. We use the sampling-based alignment method to identify aligned terms and set some threshold on translation probabilities to select the most promising bilingual multi-word terms. We pre-mark a Chinese-Japanese training corpus with such selected aligned bilingual multi-word terms. We obtain the performance of over 70% precision in bilingual term extraction and a significant improvement of BLEU scores in our experiments on a Chinese-Japanese patent parallel corpus.",Extraction of Bilingual Technical Terms for Chinese-Japanese Patent Translation,"The translation of patents or scientific papers is a key issue that should be helped by the use of statistical machine translation (SMT). In this paper, we propose a method to improve Chinese-Japanese patent SMT by premarking the training corpus with aligned bilingual multi-word terms. We automatically extract multi-word terms from monolingual corpora by combining statistical and linguistic filtering methods. We use the sampling-based alignment method to identify aligned terms and set some threshold on translation probabilities to select the most promising bilingual multi-word terms. We pre-mark a Chinese-Japanese training corpus with such selected aligned bilingual multi-word terms. We obtain the performance of over 70% precision in bilingual term extraction and a significant improvement of BLEU scores in our experiments on a Chinese-Japanese patent parallel corpus.",,"Extraction of Bilingual Technical Terms for Chinese-Japanese Patent Translation. The translation of patents or scientific papers is a key issue that should be helped by the use of statistical machine translation (SMT). In this paper, we propose a method to improve Chinese-Japanese patent SMT by premarking the training corpus with aligned bilingual multi-word terms. We automatically extract multi-word terms from monolingual corpora by combining statistical and linguistic filtering methods. 
We use the sampling-based alignment method to identify aligned terms and set some threshold on translation probabilities to select the most promising bilingual multi-word terms. We pre-mark a Chinese-Japanese training corpus with such selected aligned bilingual multi-word terms. We obtain the performance of over 70% precision in bilingual term extraction and a significant improvement of BLEU scores in our experiments on a Chinese-Japanese patent parallel corpus.",2016
magri-2014-error,https://aclanthology.org/W14-2802,0,,,,,,,The Error-driven Ranking Model of the Acquisition of Phonotactics: How to Keep the Faithfulness Constraints at Bay. A problem which arises in the theory of the error-driven ranking model of the acquisition of phonotactics is that the faithfulness constraints need to be promoted but should not be promoted too high. This paper motivates this technical problem and shows how to tune the promotion component of the re-ranking rule so as to keep the faithfulness constraints at bay.,The Error-driven Ranking Model of the Acquisition of Phonotactics: How to Keep the Faithfulness Constraints at Bay,A problem which arises in the theory of the error-driven ranking model of the acquisition of phonotactics is that the faithfulness constraints need to be promoted but should not be promoted too high. This paper motivates this technical problem and shows how to tune the promotion component of the re-ranking rule so as to keep the faithfulness constraints at bay.,The Error-driven Ranking Model of the Acquisition of Phonotactics: How to Keep the Faithfulness Constraints at Bay,A problem which arises in the theory of the error-driven ranking model of the acquisition of phonotactics is that the faithfulness constraints need to be promoted but should not be promoted too high. This paper motivates this technical problem and shows how to tune the promotion component of the re-ranking rule so as to keep the faithfulness constraints at bay.,This research was supported by a Marie Curie Intra European Fellowship within the 7th European,The Error-driven Ranking Model of the Acquisition of Phonotactics: How to Keep the Faithfulness Constraints at Bay. A problem which arises in the theory of the error-driven ranking model of the acquisition of phonotactics is that the faithfulness constraints need to be promoted but should not be promoted too high. This paper motivates this technical problem and shows how to tune the promotion component of the re-ranking rule so as to keep the faithfulness constraints at bay.,2014
philpot-etal-2005-omega,https://aclanthology.org/I05-7009,0,,,,,,,The Omega Ontology. ,The Omega Ontology,,The Omega Ontology,,,The Omega Ontology. ,2005
christodoulopoulos-etal-2016-incremental,https://aclanthology.org/W16-1906,0,,,,,,,"An incremental model of syntactic bootstrapping. Syntactic bootstrapping is the hypothesis that learners can use the preliminary syntactic structure of a sentence to identify and characterise the meanings of novel verbs. Previous work has shown that syntactic bootstrapping can begin using only a few seed nouns (Connor et al., 2010; Connor et al., 2012). Here, we relax their key assumption: rather than training the model over the entire corpus at once (batch mode), we train the model incrementally, thus more realistically simulating a human learner. We also improve on the verb prediction method by incorporating the assumption that verb assignments are stable over time. We show that, given a high enough number of seed nouns (around 30), an incremental model achieves similar performance to the batch model. We also find that the number of seed nouns shown to be sufficient in the previous work is not sufficient under the more realistic incremental model. The results demonstrate that adopting more realistic assumptions about the early stages of language acquisition can provide new insights without undermining performance.",An incremental model of syntactic bootstrapping,"Syntactic bootstrapping is the hypothesis that learners can use the preliminary syntactic structure of a sentence to identify and characterise the meanings of novel verbs. Previous work has shown that syntactic bootstrapping can begin using only a few seed nouns (Connor et al., 2010; Connor et al., 2012). Here, we relax their key assumption: rather than training the model over the entire corpus at once (batch mode), we train the model incrementally, thus more realistically simulating a human learner. We also improve on the verb prediction method by incorporating the assumption that verb assignments are stable over time. We show that, given a high enough number of seed nouns (around 30), an incremental model achieves similar performance to the batch model. We also find that the number of seed nouns shown to be sufficient in the previous work is not sufficient under the more realistic incremental model. The results demonstrate that adopting more realistic assumptions about the early stages of language acquisition can provide new insights without undermining performance.",An incremental model of syntactic bootstrapping,"Syntactic bootstrapping is the hypothesis that learners can use the preliminary syntactic structure of a sentence to identify and characterise the meanings of novel verbs. Previous work has shown that syntactic bootstrapping can begin using only a few seed nouns (Connor et al., 2010; Connor et al., 2012). Here, we relax their key assumption: rather than training the model over the entire corpus at once (batch mode), we train the model incrementally, thus more realistically simulating a human learner. We also improve on the verb prediction method by incorporating the assumption that verb assignments are stable over time. We show that, given a high enough number of seed nouns (around 30), an incremental model achieves similar performance to the batch model. We also find that the number of seed nouns shown to be sufficient in the previous work is not sufficient under the more realistic incremental model. 
The results demonstrate that adopting more realistic assumptions about the early stages of language acquisition can provide new insights without undermining performance.",The authors would like to thank the anonymous reviewers for their suggestions. Many thanks also to Catriona Silvey for her help with the manuscript. This research is supported by NIH grant R01-HD054448-07.,"An incremental model of syntactic bootstrapping. Syntactic bootstrapping is the hypothesis that learners can use the preliminary syntactic structure of a sentence to identify and characterise the meanings of novel verbs. Previous work has shown that syntactic bootstrapping can begin using only a few seed nouns (Connor et al., 2010; Connor et al., 2012). Here, we relax their key assumption: rather than training the model over the entire corpus at once (batch mode), we train the model incrementally, thus more realistically simulating a human learner. We also improve on the verb prediction method by incorporating the assumption that verb assignments are stable over time. We show that, given a high enough number of seed nouns (around 30), an incremental model achieves similar performance to the batch model. We also find that the number of seed nouns shown to be sufficient in the previous work is not sufficient under the more realistic incremental model. The results demonstrate that adopting more realistic assumptions about the early stages of language acquisition can provide new insights without undermining performance.",2016
leeuwenberg-moens-2018-word,https://aclanthology.org/C18-1291,0,,,,,,,"Word-Level Loss Extensions for Neural Temporal Relation Classification. Unsupervised pre-trained word embeddings are used effectively for many tasks in natural language processing to leverage unlabeled textual data. Often these embeddings are either used as initializations or as fixed word representations for task-specific classification models. In this work, we extend our classification model's task loss with an unsupervised auxiliary loss on the word-embedding level of the model. This is to ensure that the learned word representations contain both task-specific features, learned from the supervised loss component, and more general features learned from the unsupervised loss component. We evaluate our approach on the task of temporal relation extraction, in particular, narrative containment relation extraction from clinical records, and show that continued training of the embeddings on the unsupervised objective together with the task objective gives better task-specific embeddings, and results in an improvement over the state of the art on the THYME dataset, using only a general-domain part-of-speech tagger as linguistic resource.",Word-Level Loss Extensions for Neural Temporal Relation Classification,"Unsupervised pre-trained word embeddings are used effectively for many tasks in natural language processing to leverage unlabeled textual data. Often these embeddings are either used as initializations or as fixed word representations for task-specific classification models. In this work, we extend our classification model's task loss with an unsupervised auxiliary loss on the word-embedding level of the model. This is to ensure that the learned word representations contain both task-specific features, learned from the supervised loss component, and more general features learned from the unsupervised loss component. We evaluate our approach on the task of temporal relation extraction, in particular, narrative containment relation extraction from clinical records, and show that continued training of the embeddings on the unsupervised objective together with the task objective gives better task-specific embeddings, and results in an improvement over the state of the art on the THYME dataset, using only a general-domain part-of-speech tagger as linguistic resource.",Word-Level Loss Extensions for Neural Temporal Relation Classification,"Unsupervised pre-trained word embeddings are used effectively for many tasks in natural language processing to leverage unlabeled textual data. Often these embeddings are either used as initializations or as fixed word representations for task-specific classification models. In this work, we extend our classification model's task loss with an unsupervised auxiliary loss on the word-embedding level of the model. This is to ensure that the learned word representations contain both task-specific features, learned from the supervised loss component, and more general features learned from the unsupervised loss component. 
We evaluate our approach on the task of temporal relation extraction, in particular, narrative containment relation extraction from clinical records, and show that continued training of the embeddings on the unsupervised objective together with the task objective gives better task-specific embeddings, and results in an improvement over the state of the art on the THYME dataset, using only a general-domain part-of-speech tagger as linguistic resource.","The authors would like to thank the reviewers for their constructive comments which helped us to improve the paper. Also, we would like to thank the Mayo Clinic for permission to use the THYME corpus. This work was funded by the KU Leuven C22/15/16 project ""MAchine Reading of patient recordS (MARS)"", and by the IWT-SBO 150056 project ""ACquiring CrUcial Medical information Using LAnguage TEchnology"" (ACCUMULATE).","Word-Level Loss Extensions for Neural Temporal Relation Classification. Unsupervised pre-trained word embeddings are used effectively for many tasks in natural language processing to leverage unlabeled textual data. Often these embeddings are either used as initializations or as fixed word representations for task-specific classification models. In this work, we extend our classification model's task loss with an unsupervised auxiliary loss on the word-embedding level of the model. This is to ensure that the learned word representations contain both task-specific features, learned from the supervised loss component, and more general features learned from the unsupervised loss component. We evaluate our approach on the task of temporal relation extraction, in particular, narrative containment relation extraction from clinical records, and show that continued training of the embeddings on the unsupervised objective together with the task objective gives better task-specific embeddings, and results in an improvement over the state of the art on the THYME dataset, using only a general-domain part-of-speech tagger as linguistic resource.",2018
pereira-etal-2010-learning,https://aclanthology.org/W10-0601,1,,,,health,,,"Learning semantic features for fMRI data from definitional text. (Mitchell et al., 2008) showed that it was possible to use a text corpus to learn the value of hypothesized semantic features characterizing the meaning of a concrete noun. The authors also demonstrated that those features could be used to decompose the spatial pattern of fMRI-measured brain activation in response to a stimulus containing that noun and a picture of it. In this paper we introduce a method for learning such semantic features automatically from a text corpus, without needing to hypothesize them or provide any proxies for their presence on the text. We show that those features are effective in a more demanding classification task than that in (Mitchell et al., 2008) and describe their qualitative relationship to the features proposed in that paper.",Learning semantic features for f{MRI} data from definitional text,"(Mitchell et al., 2008) showed that it was possible to use a text corpus to learn the value of hypothesized semantic features characterizing the meaning of a concrete noun. The authors also demonstrated that those features could be used to decompose the spatial pattern of fMRI-measured brain activation in response to a stimulus containing that noun and a picture of it. In this paper we introduce a method for learning such semantic features automatically from a text corpus, without needing to hypothesize them or provide any proxies for their presence on the text. We show that those features are effective in a more demanding classification task than that in (Mitchell et al., 2008) and describe their qualitative relationship to the features proposed in that paper.",Learning semantic features for fMRI data from definitional text,"(Mitchell et al., 2008) showed that it was possible to use a text corpus to learn the value of hypothesized semantic features characterizing the meaning of a concrete noun. The authors also demonstrated that those features could be used to decompose the spatial pattern of fMRI-measured brain activation in response to a stimulus containing that noun and a picture of it. In this paper we introduce a method for learning such semantic features automatically from a text corpus, without needing to hypothesize them or provide any proxies for their presence on the text. We show that those features are effective in a more demanding classification task than that in (Mitchell et al., 2008) and describe their qualitative relationship to the features proposed in that paper.",We would like to thank David Blei for discussions about topic modelling in general and of the Wikipedia corpus in particular and Ken Norman for valuable feedback at various stages of the work.,"Learning semantic features for fMRI data from definitional text. (Mitchell et al., 2008) showed that it was possible to use a text corpus to learn the value of hypothesized semantic features characterizing the meaning of a concrete noun. The authors also demonstrated that those features could be used to decompose the spatial pattern of fMRI-measured brain activation in response to a stimulus containing that noun and a picture of it. In this paper we introduce a method for learning such semantic features automatically from a text corpus, without needing to hypothesize them or provide any proxies for their presence on the text. 
We show that those features are effective in a more demanding classification task than that in (Mitchell et al., 2008) and describe their qualitative relationship to the features proposed in that paper.",2010
tsai-lai-2018-functions,https://aclanthology.org/Y18-1078,0,,,,,,,"The Functions of Must-constructions in Spoken Corpus: A Constructionist Perspective. This study investigates must constructions in the Spoken British National Corpus 2014 (Spoken BNC2014). A constructionist perspective is taken to examine the structure and distribution of must constructions in the spoken corpus. Moreover, a conversational analysis is conducted to identify the functions of must constructions as they are used in communication. Adopting corpus analytical procedures, we identified two major must constructions, [must+be] and [must+""ve/have], whose central member [there+must+be+some] conducts the topic extending function while [she+must+""ve/have+been] is related to the speaker""s evaluation of the condition of an individual identified as she. On the other hand, although [must+Verb] does not have a very high type frequency, its central member [I+must+admit+I] performs an important interpersonal function in minimizing possible negative impact brought about by the speaker""s comment. The findings suggest that the central members of must constructions exhibit dynamic and interactive functions in daily conversations.",The Functions of Must-constructions in Spoken Corpus: A Constructionist Perspective,"This study investigates must constructions in the Spoken British National Corpus 2014 (Spoken BNC2014). A constructionist perspective is taken to examine the structure and distribution of must constructions in the spoken corpus. Moreover, a conversational analysis is conducted to identify the functions of must constructions as they are used in communication. Adopting corpus analytical procedures, we identified two major must constructions, [must+be] and [must+""ve/have], whose central member [there+must+be+some] conducts the topic extending function while [she+must+""ve/have+been] is related to the speaker""s evaluation of the condition of an individual identified as she. On the other hand, although [must+Verb] does not have a very high type frequency, its central member [I+must+admit+I] performs an important interpersonal function in minimizing possible negative impact brought about by the speaker""s comment. The findings suggest that the central members of must constructions exhibit dynamic and interactive functions in daily conversations.",The Functions of Must-constructions in Spoken Corpus: A Constructionist Perspective,"This study investigates must constructions in the Spoken British National Corpus 2014 (Spoken BNC2014). A constructionist perspective is taken to examine the structure and distribution of must constructions in the spoken corpus. Moreover, a conversational analysis is conducted to identify the functions of must constructions as they are used in communication. Adopting corpus analytical procedures, we identified two major must constructions, [must+be] and [must+""ve/have], whose central member [there+must+be+some] conducts the topic extending function while [she+must+""ve/have+been] is related to the speaker""s evaluation of the condition of an individual identified as she. On the other hand, although [must+Verb] does not have a very high type frequency, its central member [I+must+admit+I] performs an important interpersonal function in minimizing possible negative impact brought about by the speaker""s comment. 
The findings suggest that the central members of must constructions exhibit dynamic and interactive functions in daily conversations.",This work was supported in part by the Ministry of Education under the Grants 107H121-08.,"The Functions of Must-constructions in Spoken Corpus: A Constructionist Perspective. This study investigates must constructions in the Spoken British National Corpus 2014 (Spoken BNC2014). A constructionist perspective is taken to examine the structure and distribution of must constructions in the spoken corpus. Moreover, a conversational analysis is conducted to identify the functions of must constructions as they are used in communication. Adopting corpus analytical procedures, we identified two major must constructions, [must+be] and [must+""ve/have], whose central member [there+must+be+some] conducts the topic extending function while [she+must+""ve/have+been] is related to the speaker""s evaluation of the condition of an individual identified as she. On the other hand, although [must+Verb] does not have a very high type frequency, its central member [I+must+admit+I] performs an important interpersonal function in minimizing possible negative impact brought about by the speaker""s comment. The findings suggest that the central members of must constructions exhibit dynamic and interactive functions in daily conversations.",2018
power-1999-generating,https://aclanthology.org/E99-1002,0,,,,,,,"Generating referring expressions with a unification grammar. A simple formalism is proposed to represent the contexts in which pronouns, definite/indefinite descriptions, and ordinal descriptions (e.g. 'the second book') can be used, and the way in which these expressions change the context. It is shown that referring expressions can be generated by a unification grammar provided that some phrase-structure rules are specially tailored to express entities in the current knowledge base.",Generating referring expressions with a unification grammar,"A simple formalism is proposed to represent the contexts in which pronouns, definite/indefinite descriptions, and ordinal descriptions (e.g. 'the second book') can be used, and the way in which these expressions change the context. It is shown that referring expressions can be generated by a unification grammar provided that some phrase-structure rules are specially tailored to express entities in the current knowledge base.",Generating referring expressions with a unification grammar,"A simple formalism is proposed to represent the contexts in which pronouns, definite/indefinite descriptions, and ordinal descriptions (e.g. 'the second book') can be used, and the way in which these expressions change the context. It is shown that referring expressions can be generated by a unification grammar provided that some phrase-structure rules are specially tailored to express entities in the current knowledge base.",,"Generating referring expressions with a unification grammar. A simple formalism is proposed to represent the contexts in which pronouns, definite/indefinite descriptions, and ordinal descriptions (e.g. 'the second book') can be used, and the way in which these expressions change the context. It is shown that referring expressions can be generated by a unification grammar provided that some phrase-structure rules are specially tailored to express entities in the current knowledge base.",1999
chandrahas-etal-2020-inducing,https://aclanthology.org/2020.icon-main.9,0,,,,,,,"Inducing Interpretability in Knowledge Graph Embeddings. We study the problem of inducing interpretability in Knowledge Graph (KG) embeddings. Learning KG embeddings has been an active area of research in the past few years, resulting in many different models. However, most of these methods do not address the interpretability (semantics) of individual dimensions of the learned embeddings. In this work, we study this problem and propose a method for inducing interpretability in KG embeddings using entity co-occurrence statistics. The proposed method significantly improves the interpretability, while maintaining comparable performance in other KG tasks.",Inducing Interpretability in Knowledge Graph Embeddings,"We study the problem of inducing interpretability in Knowledge Graph (KG) embeddings. Learning KG embeddings has been an active area of research in the past few years, resulting in many different models. However, most of these methods do not address the interpretability (semantics) of individual dimensions of the learned embeddings. In this work, we study this problem and propose a method for inducing interpretability in KG embeddings using entity co-occurrence statistics. The proposed method significantly improves the interpretability, while maintaining comparable performance in other KG tasks.",Inducing Interpretability in Knowledge Graph Embeddings,"We study the problem of inducing interpretability in Knowledge Graph (KG) embeddings. Learning KG embeddings has been an active area of research in the past few years, resulting in many different models. However, most of these methods do not address the interpretability (semantics) of individual dimensions of the learned embeddings. In this work, we study this problem and propose a method for inducing interpretability in KG embeddings using entity co-occurrence statistics. The proposed method significantly improves the interpretability, while maintaining comparable performance in other KG tasks.",We thank the anonymous reviewers for their constructive comments. This work is supported by the Ministry of Human Resources Development (Government of India).,"Inducing Interpretability in Knowledge Graph Embeddings. We study the problem of inducing interpretability in Knowledge Graph (KG) embeddings. Learning KG embeddings has been an active area of research in the past few years, resulting in many different models. However, most of these methods do not address the interpretability (semantics) of individual dimensions of the learned embeddings. In this work, we study this problem and propose a method for inducing interpretability in KG embeddings using entity co-occurrence statistics. The proposed method significantly improves the interpretability, while maintaining comparable performance in other KG tasks.",2020
coltekin-2010-freely,http://www.lrec-conf.org/proceedings/lrec2010/pdf/109_Paper.pdf,0,,,,,,,"A Freely Available Morphological Analyzer for Turkish. This paper presents TRmorph, a two-level morphological analyzer for Turkish. TRmorph is a fairly complete and accurate morphological analyzer for Turkish. However, strength of TRmorph is neither in its performance, nor in its novelty. The main feature of this analyzer is its availability. It has completely been implemented using freely available tools and resources, and the two-level description is also distributed with a license that allows others to use and modify it freely for different applications. To our knowledge, TRmorph is the first freely available morphological analyzer for Turkish. This makes TRmorph particularly suitable for applications where the analyzer has to be changed in some way, or as a starting point for morphological analyzers for similar languages. TRmorph's specification of Turkish morphology is relatively complete, and it is distributed with a large lexicon. Along with the description of how the analyzer is implemented, this paper provides an evaluation of the analyzer on two large corpora.",A Freely Available Morphological Analyzer for {T}urkish,"This paper presents TRmorph, a two-level morphological analyzer for Turkish. TRmorph is a fairly complete and accurate morphological analyzer for Turkish. However, strength of TRmorph is neither in its performance, nor in its novelty. The main feature of this analyzer is its availability. It has completely been implemented using freely available tools and resources, and the two-level description is also distributed with a license that allows others to use and modify it freely for different applications. To our knowledge, TRmorph is the first freely available morphological analyzer for Turkish. This makes TRmorph particularly suitable for applications where the analyzer has to be changed in some way, or as a starting point for morphological analyzers for similar languages. TRmorph's specification of Turkish morphology is relatively complete, and it is distributed with a large lexicon. Along with the description of how the analyzer is implemented, this paper provides an evaluation of the analyzer on two large corpora.",A Freely Available Morphological Analyzer for Turkish,"This paper presents TRmorph, a two-level morphological analyzer for Turkish. TRmorph is a fairly complete and accurate morphological analyzer for Turkish. However, strength of TRmorph is neither in its performance, nor in its novelty. The main feature of this analyzer is its availability. It has completely been implemented using freely available tools and resources, and the two-level description is also distributed with a license that allows others to use and modify it freely for different applications. To our knowledge, TRmorph is the first freely available morphological analyzer for Turkish. This makes TRmorph particularly suitable for applications where the analyzer has to be changed in some way, or as a starting point for morphological analyzers for similar languages. TRmorph's specification of Turkish morphology is relatively complete, and it is distributed with a large lexicon. Along with the description of how the analyzer is implemented, this paper provides an evaluation of the analyzer on two large corpora.",,"A Freely Available Morphological Analyzer for Turkish. This paper presents TRmorph, a two-level morphological analyzer for Turkish. 
TRmorph is a fairly complete and accurate morphological analyzer for Turkish. However, strength of TRmorph is neither in its performance, nor in its novelty. The main feature of this analyzer is its availability. It has completely been implemented using freely available tools and resources, and the two-level description is also distributed with a license that allows others to use and modify it freely for different applications. To our knowledge, TRmorph is the first freely available morphological analyzer for Turkish. This makes TRmorph particularly suitable for applications where the analyzer has to be changed in some way, or as a starting point for morphological analyzers for similar languages. TRmorph's specification of Turkish morphology is relatively complete, and it is distributed with a large lexicon. Along with the description of how the analyzer is implemented, this paper provides an evaluation of the analyzer on two large corpora.",2010
hu-etal-2018-texar,https://aclanthology.org/W18-2503,0,,,,,,,"Texar: A Modularized, Versatile, and Extensible Toolbox for Text Generation. We introduce Texar, an open-source toolkit aiming to support the broad set of text generation tasks. Different from many existing toolkits that are specialized for specific applications (e.g., neural machine translation), Texar is designed to be highly flexible and versatile. This is achieved by abstracting the common patterns underlying the diverse tasks and methodologies, creating a library of highly reusable modules and functionalities, and enabling arbitrary model architectures and various algorithmic paradigms. The features make Texar particularly suitable for technique sharing and generalization across different text generation applications. The toolkit emphasizes heavily on extensibility and modularized system design, so that components can be freely plugged in or swapped out. We conduct extensive experiments and case studies to demonstrate the use and advantage of the toolkit.","{T}exar: A Modularized, Versatile, and Extensible Toolbox for Text Generation","We introduce Texar, an open-source toolkit aiming to support the broad set of text generation tasks. Different from many existing toolkits that are specialized for specific applications (e.g., neural machine translation), Texar is designed to be highly flexible and versatile. This is achieved by abstracting the common patterns underlying the diverse tasks and methodologies, creating a library of highly reusable modules and functionalities, and enabling arbitrary model architectures and various algorithmic paradigms. The features make Texar particularly suitable for technique sharing and generalization across different text generation applications. The toolkit emphasizes heavily on extensibility and modularized system design, so that components can be freely plugged in or swapped out. We conduct extensive experiments and case studies to demonstrate the use and advantage of the toolkit.","Texar: A Modularized, Versatile, and Extensible Toolbox for Text Generation","We introduce Texar, an open-source toolkit aiming to support the broad set of text generation tasks. Different from many existing toolkits that are specialized for specific applications (e.g., neural machine translation), Texar is designed to be highly flexible and versatile. This is achieved by abstracting the common patterns underlying the diverse tasks and methodologies, creating a library of highly reusable modules and functionalities, and enabling arbitrary model architectures and various algorithmic paradigms. The features make Texar particularly suitable for technique sharing and generalization across different text generation applications. The toolkit emphasizes heavily on extensibility and modularized system design, so that components can be freely plugged in or swapped out. We conduct extensive experiments and case studies to demonstrate the use and advantage of the toolkit.",,"Texar: A Modularized, Versatile, and Extensible Toolbox for Text Generation. We introduce Texar, an open-source toolkit aiming to support the broad set of text generation tasks. Different from many existing toolkits that are specialized for specific applications (e.g., neural machine translation), Texar is designed to be highly flexible and versatile. 
This is achieved by abstracting the common patterns underlying the diverse tasks and methodologies, creating a library of highly reusable modules and functionalities, and enabling arbitrary model architectures and various algorithmic paradigms. The features make Texar particularly suitable for technique sharing and generalization across different text generation applications. The toolkit emphasizes heavily on extensibility and modularized system design, so that components can be freely plugged in or swapped out. We conduct extensive experiments and case studies to demonstrate the use and advantage of the toolkit.",2018
montariol-allauzen-2019-empirical,https://aclanthology.org/R19-1092,0,,,,,,,"Empirical Study of Diachronic Word Embeddings for Scarce Data. Word meaning change can be inferred from drifts of time-varying word embeddings. However, temporal data may be too sparse to build robust word embeddings and to discriminate significant drifts from noise. In this paper, we compare three models to learn diachronic word embeddings on scarce data: incremental updating of a Skip-Gram from Kim et al. (2014), dynamic filtering from Bamler and Mandt (2017), and dynamic Bernoulli embeddings from Rudolph and Blei (2018). In particular, we study the performance of different initialisation schemes and emphasise what characteristics of each model are more suitable to data scarcity, relying on the distribution of detected drifts. Finally, we regularise the loss of these models to better adapt to scarce data.",Empirical Study of Diachronic Word Embeddings for Scarce Data,"Word meaning change can be inferred from drifts of time-varying word embeddings. However, temporal data may be too sparse to build robust word embeddings and to discriminate significant drifts from noise. In this paper, we compare three models to learn diachronic word embeddings on scarce data: incremental updating of a Skip-Gram from Kim et al. (2014), dynamic filtering from Bamler and Mandt (2017), and dynamic Bernoulli embeddings from Rudolph and Blei (2018). In particular, we study the performance of different initialisation schemes and emphasise what characteristics of each model are more suitable to data scarcity, relying on the distribution of detected drifts. Finally, we regularise the loss of these models to better adapt to scarce data.",Empirical Study of Diachronic Word Embeddings for Scarce Data,"Word meaning change can be inferred from drifts of time-varying word embeddings. However, temporal data may be too sparse to build robust word embeddings and to discriminate significant drifts from noise. In this paper, we compare three models to learn diachronic word embeddings on scarce data: incremental updating of a Skip-Gram from Kim et al. (2014), dynamic filtering from Bamler and Mandt (2017), and dynamic Bernoulli embeddings from Rudolph and Blei (2018). In particular, we study the performance of different initialisation schemes and emphasise what characteristics of each model are more suitable to data scarcity, relying on the distribution of detected drifts. Finally, we regularise the loss of these models to better adapt to scarce data.",,"Empirical Study of Diachronic Word Embeddings for Scarce Data. Word meaning change can be inferred from drifts of time-varying word embeddings. However, temporal data may be too sparse to build robust word embeddings and to discriminate significant drifts from noise. In this paper, we compare three models to learn diachronic word embeddings on scarce data: incremental updating of a Skip-Gram from Kim et al. (2014), dynamic filtering from Bamler and Mandt (2017), and dynamic Bernoulli embeddings from Rudolph and Blei (2018). In particular, we study the performance of different initialisation schemes and emphasise what characteristics of each model are more suitable to data scarcity, relying on the distribution of detected drifts. Finally, we regularise the loss of these models to better adapt to scarce data.",2019
mostafazadeh-davani-etal-2021-improving,https://aclanthology.org/2021.woah-1.10,1,,,,hate_speech,,,"Improving Counterfactual Generation for Fair Hate Speech Detection. Bias mitigation approaches reduce models' dependence on sensitive features of data, such as social group tokens (SGTs), resulting in equal predictions across the sensitive features. In hate speech detection, however, equalizing model predictions may ignore important differences among targeted social groups, as hate speech can contain stereotypical language specific to each SGT. Here, to take the specific language about each SGT into account, we rely on counterfactual fairness and equalize predictions among counterfactuals, generated by changing the SGTs. Our method evaluates the similarity in sentence likelihoods (via pretrained language models) among counterfactuals, to treat SGTs equally only within interchangeable contexts. By applying logit pairing to equalize outcomes on the restricted set of counterfactuals for each instance, we improve fairness metrics while preserving model performance on hate speech detection.",Improving Counterfactual Generation for Fair Hate Speech Detection,"Bias mitigation approaches reduce models' dependence on sensitive features of data, such as social group tokens (SGTs), resulting in equal predictions across the sensitive features. In hate speech detection, however, equalizing model predictions may ignore important differences among targeted social groups, as hate speech can contain stereotypical language specific to each SGT. Here, to take the specific language about each SGT into account, we rely on counterfactual fairness and equalize predictions among counterfactuals, generated by changing the SGTs. Our method evaluates the similarity in sentence likelihoods (via pretrained language models) among counterfactuals, to treat SGTs equally only within interchangeable contexts. By applying logit pairing to equalize outcomes on the restricted set of counterfactuals for each instance, we improve fairness metrics while preserving model performance on hate speech detection.",Improving Counterfactual Generation for Fair Hate Speech Detection,"Bias mitigation approaches reduce models' dependence on sensitive features of data, such as social group tokens (SGTs), resulting in equal predictions across the sensitive features. In hate speech detection, however, equalizing model predictions may ignore important differences among targeted social groups, as hate speech can contain stereotypical language specific to each SGT. Here, to take the specific language about each SGT into account, we rely on counterfactual fairness and equalize predictions among counterfactuals, generated by changing the SGTs. Our method evaluates the similarity in sentence likelihoods (via pretrained language models) among counterfactuals, to treat SGTs equally only within interchangeable contexts. By applying logit pairing to equalize outcomes on the restricted set of counterfactuals for each instance, we improve fairness metrics while preserving model performance on hate speech detection.",This research was sponsored in part by NSF CA-REER BCS-1846531 to Morteza Dehghani.,"Improving Counterfactual Generation for Fair Hate Speech Detection. Bias mitigation approaches reduce models' dependence on sensitive features of data, such as social group tokens (SGTs), resulting in equal predictions across the sensitive features. 
In hate speech detection, however, equalizing model predictions may ignore important differences among targeted social groups, as hate speech can contain stereotypical language specific to each SGT. Here, to take the specific language about each SGT into account, we rely on counterfactual fairness and equalize predictions among counterfactuals, generated by changing the SGTs. Our method evaluates the similarity in sentence likelihoods (via pretrained language models) among counterfactuals, to treat SGTs equally only within interchangeable contexts. By applying logit pairing to equalize outcomes on the restricted set of counterfactuals for each instance, we improve fairness metrics while preserving model performance on hate speech detection.",2021
wu-etal-2018-phrase,https://aclanthology.org/D18-1408,0,,,,,,,"Phrase-level Self-Attention Networks for Universal Sentence Encoding. Universal sentence encoding is a hot topic in recent NLP research. Attention mechanism has been an integral part in many sentence encoding models, allowing the models to capture context dependencies regardless of the distance between elements in the sequence. Fully attention-based models have recently attracted enormous interest due to their highly parallelizable computation and significantly less training time. However, the memory consumption of their models grows quadratically with sentence length, and the syntactic information is neglected. To this end, we propose Phrase-level Self-Attention Networks (PSAN) that perform self-attention across words inside a phrase to capture context dependencies at the phrase level, and use the gated memory updating mechanism to refine each word's representation hierarchically with longer-term context dependencies captured in a larger phrase. As a result, the memory consumption can be reduced because the self-attention is performed at the phrase level instead of the sentence level. At the same time, syntactic information can be easily integrated in the model. Experiment results show that PSAN can achieve the state-of-the-art transfer performance across a plethora of NLP tasks including sentence classification, natural language inference and sentence textual similarity.",Phrase-level Self-Attention Networks for Universal Sentence Encoding,"Universal sentence encoding is a hot topic in recent NLP research. Attention mechanism has been an integral part in many sentence encoding models, allowing the models to capture context dependencies regardless of the distance between elements in the sequence. Fully attention-based models have recently attracted enormous interest due to their highly parallelizable computation and significantly less training time. However, the memory consumption of their models grows quadratically with sentence length, and the syntactic information is neglected. To this end, we propose Phrase-level Self-Attention Networks (PSAN) that perform self-attention across words inside a phrase to capture context dependencies at the phrase level, and use the gated memory updating mechanism to refine each word's representation hierarchically with longer-term context dependencies captured in a larger phrase. As a result, the memory consumption can be reduced because the self-attention is performed at the phrase level instead of the sentence level. At the same time, syntactic information can be easily integrated in the model. Experiment results show that PSAN can achieve the state-of-the-art transfer performance across a plethora of NLP tasks including sentence classification, natural language inference and sentence textual similarity.",Phrase-level Self-Attention Networks for Universal Sentence Encoding,"Universal sentence encoding is a hot topic in recent NLP research. Attention mechanism has been an integral part in many sentence encoding models, allowing the models to capture context dependencies regardless of the distance between elements in the sequence. Fully attention-based models have recently attracted enormous interest due to their highly parallelizable computation and significantly less training time. However, the memory consumption of their models grows quadratically with sentence length, and the syntactic information is neglected. 
To this end, we propose Phrase-level Self-Attention Networks (PSAN) that perform self-attention across words inside a phrase to capture context dependencies at the phrase level, and use the gated memory updating mechanism to refine each word's representation hierarchically with longer-term context dependencies captured in a larger phrase. As a result, the memory consumption can be reduced because the self-attention is performed at the phrase level instead of the sentence level. At the same time, syntactic information can be easily integrated in the model. Experiment results show that PSAN can achieve the state-of-the-art transfer performance across a plethora of NLP tasks including sentence classification, natural language inference and sentence textual similarity.",,"Phrase-level Self-Attention Networks for Universal Sentence Encoding. Universal sentence encoding is a hot topic in recent NLP research. Attention mechanism has been an integral part in many sentence encoding models, allowing the models to capture context dependencies regardless of the distance between elements in the sequence. Fully attention-based models have recently attracted enormous interest due to their highly parallelizable computation and significantly less training time. However, the memory consumption of their models grows quadratically with sentence length, and the syntactic information is neglected. To this end, we propose Phrase-level Self-Attention Networks (PSAN) that perform self-attention across words inside a phrase to capture context dependencies at the phrase level, and use the gated memory updating mechanism to refine each word's representation hierarchically with longer-term context dependencies captured in a larger phrase. As a result, the memory consumption can be reduced because the self-attention is performed at the phrase level instead of the sentence level. At the same time, syntactic information can be easily integrated in the model. Experiment results show that PSAN can achieve the state-of-the-art transfer performance across a plethora of NLP tasks including sentence classification, natural language inference and sentence textual similarity.",2018
wei-gulla-2010-sentiment,https://aclanthology.org/P10-1042,0,,,,business_use,,,"Sentiment Learning on Product Reviews via Sentiment Ontology Tree. Existing works on sentiment analysis on product reviews suffer from the following limitations: (1) The knowledge of hierarchical relationships of products attributes is not fully utilized. (2) Reviews or sentences mentioning several attributes associated with complicated sentiments are not dealt with very well. In this paper, we propose a novel HL-SOT approach to labeling a product's attributes and their associated sentiments in product reviews by a Hierarchical Learning (HL) process with a defined Sentiment Ontology Tree (SOT). The empirical analysis against a human-labeled data set demonstrates promising and reasonable performance of the proposed HL-SOT approach. While this paper is mainly on sentiment analysis on reviews of one product, our proposed HL-SOT approach is easily generalized to labeling a mix of reviews of more than one products.",Sentiment Learning on Product Reviews via Sentiment Ontology Tree,"Existing works on sentiment analysis on product reviews suffer from the following limitations: (1) The knowledge of hierarchical relationships of products attributes is not fully utilized. (2) Reviews or sentences mentioning several attributes associated with complicated sentiments are not dealt with very well. In this paper, we propose a novel HL-SOT approach to labeling a product's attributes and their associated sentiments in product reviews by a Hierarchical Learning (HL) process with a defined Sentiment Ontology Tree (SOT). The empirical analysis against a human-labeled data set demonstrates promising and reasonable performance of the proposed HL-SOT approach. While this paper is mainly on sentiment analysis on reviews of one product, our proposed HL-SOT approach is easily generalized to labeling a mix of reviews of more than one products.",Sentiment Learning on Product Reviews via Sentiment Ontology Tree,"Existing works on sentiment analysis on product reviews suffer from the following limitations: (1) The knowledge of hierarchical relationships of products attributes is not fully utilized. (2) Reviews or sentences mentioning several attributes associated with complicated sentiments are not dealt with very well. In this paper, we propose a novel HL-SOT approach to labeling a product's attributes and their associated sentiments in product reviews by a Hierarchical Learning (HL) process with a defined Sentiment Ontology Tree (SOT). The empirical analysis against a human-labeled data set demonstrates promising and reasonable performance of the proposed HL-SOT approach. While this paper is mainly on sentiment analysis on reviews of one product, our proposed HL-SOT approach is easily generalized to labeling a mix of reviews of more than one products.",The authors would like to thank the anonymous reviewers for many helpful comments on the manuscript. This work is funded by the Research Council of Norway under the VERDIKT research programme (Project No.: 183337).,"Sentiment Learning on Product Reviews via Sentiment Ontology Tree. Existing works on sentiment analysis on product reviews suffer from the following limitations: (1) The knowledge of hierarchical relationships of products attributes is not fully utilized. (2) Reviews or sentences mentioning several attributes associated with complicated sentiments are not dealt with very well. 
In this paper, we propose a novel HL-SOT approach to labeling a product's attributes and their associated sentiments in product reviews by a Hierarchical Learning (HL) process with a defined Sentiment Ontology Tree (SOT). The empirical analysis against a human-labeled data set demonstrates promising and reasonable performance of the proposed HL-SOT approach. While this paper is mainly on sentiment analysis on reviews of one product, our proposed HL-SOT approach is easily generalized to labeling a mix of reviews of more than one products.",2010
bloem-etal-2019-modeling,https://aclanthology.org/W19-4733,0,,,,,,,"Modeling a Historical Variety of a Low-Resource Language: Language Contact Effects in the Verbal Cluster of Early-Modern Frisian. Certain phenomena of interest to linguists mainly occur in low-resource languages, such as contact-induced language change. We show that it is possible to study contact-induced language change computationally in a historical variety of a low-resource language, Early-Modern Frisian, by creating a model using features that were established to be relevant in a closely related language, modern Dutch. This allows us to test two hypotheses on two types of language contact that may have taken place between Frisian and Dutch during this time. Our model shows that Frisian verb cluster word orders are associated with different context features than Dutch verb orders, supporting the 'learned borrowing' hypothesis.",Modeling a Historical Variety of a Low-Resource Language: {L}anguage Contact Effects in the Verbal Cluster of {E}arly-{M}odern {F}risian,"Certain phenomena of interest to linguists mainly occur in low-resource languages, such as contact-induced language change. We show that it is possible to study contact-induced language change computationally in a historical variety of a low-resource language, Early-Modern Frisian, by creating a model using features that were established to be relevant in a closely related language, modern Dutch. This allows us to test two hypotheses on two types of language contact that may have taken place between Frisian and Dutch during this time. Our model shows that Frisian verb cluster word orders are associated with different context features than Dutch verb orders, supporting the 'learned borrowing' hypothesis.",Modeling a Historical Variety of a Low-Resource Language: Language Contact Effects in the Verbal Cluster of Early-Modern Frisian,"Certain phenomena of interest to linguists mainly occur in low-resource languages, such as contact-induced language change. We show that it is possible to study contact-induced language change computationally in a historical variety of a low-resource language, Early-Modern Frisian, by creating a model using features that were established to be relevant in a closely related language, modern Dutch. This allows us to test two hypotheses on two types of language contact that may have taken place between Frisian and Dutch during this time. Our model shows that Frisian verb cluster word orders are associated with different context features than Dutch verb orders, supporting the 'learned borrowing' hypothesis.",,"Modeling a Historical Variety of a Low-Resource Language: Language Contact Effects in the Verbal Cluster of Early-Modern Frisian. Certain phenomena of interest to linguists mainly occur in low-resource languages, such as contact-induced language change. We show that it is possible to study contact-induced language change computationally in a historical variety of a low-resource language, Early-Modern Frisian, by creating a model using features that were established to be relevant in a closely related language, modern Dutch. This allows us to test two hypotheses on two types of language contact that may have taken place between Frisian and Dutch during this time. Our model shows that Frisian verb cluster word orders are associated with different context features than Dutch verb orders, supporting the 'learned borrowing' hypothesis.",2019
dahl-mccord-1983-treating,https://aclanthology.org/J83-2002,0,,,,,,,"Treating Coordination in Logic Grammars. Logic grammars are grammars expressible in predicate logic. Implemented in the programming language Prolog, logic grammar systems have proved to be a good basis for natural language processing. One of the most difficult constructions for natural language grammars to treat is coordination (construction with conjunctions like 'and'). This paper describes a logic grammar formalism, modifier structure grammars (MSGs), together with an interpreter written in Prolog, which can handle coordination (and other natural language constructions) in a reasonable and general way. The system produces both syntactic analyses and logical forms, and problems of scoping for coordination and quantifiers are dealt with. The MSG formalism seems of interest in its own right (perhaps even outside natural language processing) because the notions of syntactic structure and semantic interpretation are more constrained than in many previous systems (made more implicit in the formalism itself), so that less burden is put on the grammar writer.",Treating Coordination in Logic Grammars,"Logic grammars are grammars expressible in predicate logic. Implemented in the programming language Prolog, logic grammar systems have proved to be a good basis for natural language processing. One of the most difficult constructions for natural language grammars to treat is coordination (construction with conjunctions like 'and'). This paper describes a logic grammar formalism, modifier structure grammars (MSGs), together with an interpreter written in Prolog, which can handle coordination (and other natural language constructions) in a reasonable and general way. The system produces both syntactic analyses and logical forms, and problems of scoping for coordination and quantifiers are dealt with. The MSG formalism seems of interest in its own right (perhaps even outside natural language processing) because the notions of syntactic structure and semantic interpretation are more constrained than in many previous systems (made more implicit in the formalism itself), so that less burden is put on the grammar writer.",Treating Coordination in Logic Grammars,"Logic grammars are grammars expressible in predicate logic. Implemented in the programming language Prolog, logic grammar systems have proved to be a good basis for natural language processing. One of the most difficult constructions for natural language grammars to treat is coordination (construction with conjunctions like 'and'). This paper describes a logic grammar formalism, modifier structure grammars (MSGs), together with an interpreter written in Prolog, which can handle coordination (and other natural language constructions) in a reasonable and general way. The system produces both syntactic analyses and logical forms, and problems of scoping for coordination and quantifiers are dealt with. The MSG formalism seems of interest in its own right (perhaps even outside natural language processing) because the notions of syntactic structure and semantic interpretation are more constrained than in many previous systems (made more implicit in the formalism itself), so that less burden is put on the grammar writer.",,"Treating Coordination in Logic Grammars. Logic grammars are grammars expressible in predicate logic. Implemented in the programming language Prolog, logic grammar systems have proved to be a good basis for natural language processing. 
One of the most difficult constructions for natural language grammars to treat is coordination (construction with conjunctions like 'and'). This paper describes a logic grammar formalism, modifier structure grammars (MSGs), together with an interpreter written in Prolog, which can handle coordination (and other natural language constructions) in a reasonable and general way. The system produces both syntactic analyses and logical forms, and problems of scoping for coordination and quantifiers are dealt with. The MSG formalism seems of interest in its own right (perhaps even outside natural language processing) because the notions of syntactic structure and semantic interpretation are more constrained than in many previous systems (made more implicit in the formalism itself), so that less burden is put on the grammar writer.",1983
rangarajan-sridhar-etal-2013-segmentation,https://aclanthology.org/N13-1023,0,,,,,,,"Segmentation Strategies for Streaming Speech Translation. The study presented in this work is a first effort at real-time speech translation of TED talks, a compendium of public talks with different speakers addressing a variety of topics. We address the goal of achieving a system that balances translation accuracy and latency. In order to improve ASR performance for our diverse data set, adaptation techniques such as constrained model adaptation and vocal tract length normalization are found to be useful. In order to improve machine translation (MT) performance, techniques that could be employed in real-time such as monotonic and partial translation retention are found to be of use. We also experiment with inserting text segmenters of various types between ASR and MT in a series of real-time translation experiments. Among other results, our experiments demonstrate that a good segmentation is useful, and a novel conjunction-based segmentation strategy improves translation quality nearly as much as other strategies such as comma-based segmentation. It was also found to be important to synchronize various pipeline components in order to minimize latency.",Segmentation Strategies for Streaming Speech Translation,"The study presented in this work is a first effort at real-time speech translation of TED talks, a compendium of public talks with different speakers addressing a variety of topics. We address the goal of achieving a system that balances translation accuracy and latency. In order to improve ASR performance for our diverse data set, adaptation techniques such as constrained model adaptation and vocal tract length normalization are found to be useful. In order to improve machine translation (MT) performance, techniques that could be employed in real-time such as monotonic and partial translation retention are found to be of use. We also experiment with inserting text segmenters of various types between ASR and MT in a series of real-time translation experiments. Among other results, our experiments demonstrate that a good segmentation is useful, and a novel conjunction-based segmentation strategy improves translation quality nearly as much as other strategies such as comma-based segmentation. It was also found to be important to synchronize various pipeline components in order to minimize latency.",Segmentation Strategies for Streaming Speech Translation,"The study presented in this work is a first effort at real-time speech translation of TED talks, a compendium of public talks with different speakers addressing a variety of topics. We address the goal of achieving a system that balances translation accuracy and latency. In order to improve ASR performance for our diverse data set, adaptation techniques such as constrained model adaptation and vocal tract length normalization are found to be useful. In order to improve machine translation (MT) performance, techniques that could be employed in real-time such as monotonic and partial translation retention are found to be of use. We also experiment with inserting text segmenters of various types between ASR and MT in a series of real-time translation experiments. Among other results, our experiments demonstrate that a good segmentation is useful, and a novel conjunction-based segmentation strategy improves translation quality nearly as much as other strategies such as comma-based segmentation. 
It was also found to be important to synchronize various pipeline components in order to minimize latency.",We would like to thank Simon Byers for his help with organizing the TED talks data.,"Segmentation Strategies for Streaming Speech Translation. The study presented in this work is a first effort at real-time speech translation of TED talks, a compendium of public talks with different speakers addressing a variety of topics. We address the goal of achieving a system that balances translation accuracy and latency. In order to improve ASR performance for our diverse data set, adaptation techniques such as constrained model adaptation and vocal tract length normalization are found to be useful. In order to improve machine translation (MT) performance, techniques that could be employed in real-time such as monotonic and partial translation retention are found to be of use. We also experiment with inserting text segmenters of various types between ASR and MT in a series of real-time translation experiments. Among other results, our experiments demonstrate that a good segmentation is useful, and a novel conjunction-based segmentation strategy improves translation quality nearly as much as other strategies such as comma-based segmentation. It was also found to be important to synchronize various pipeline components in order to minimize latency.",2013
gupta-etal-2015-dissecting,https://aclanthology.org/S15-1017,0,,,,,,,"Dissecting the Practical Lexical Function Model for Compositional Distributional Semantics. The Practical Lexical Function model (PLF) is a recently proposed compositional distributional semantic model which provides an elegant account of composition, striking a balance between expressiveness and robustness and performing at the state-of-the-art. In this paper, we identify an inconsistency in PLF between the objective function at training and the prediction at testing which leads to an overcounting of the predicate's contribution to the meaning of the phrase. We investigate two possible solutions of which one (the exclusion of simple lexical vector at test time) improves performance significantly on two out of the three composition datasets.",Dissecting the Practical Lexical Function Model for Compositional Distributional Semantics,"The Practical Lexical Function model (PLF) is a recently proposed compositional distributional semantic model which provides an elegant account of composition, striking a balance between expressiveness and robustness and performing at the state-of-the-art. In this paper, we identify an inconsistency in PLF between the objective function at training and the prediction at testing which leads to an overcounting of the predicate's contribution to the meaning of the phrase. We investigate two possible solutions of which one (the exclusion of simple lexical vector at test time) improves performance significantly on two out of the three composition datasets.",Dissecting the Practical Lexical Function Model for Compositional Distributional Semantics,"The Practical Lexical Function model (PLF) is a recently proposed compositional distributional semantic model which provides an elegant account of composition, striking a balance between expressiveness and robustness and performing at the state-of-the-art. In this paper, we identify an inconsistency in PLF between the objective function at training and the prediction at testing which leads to an overcounting of the predicate's contribution to the meaning of the phrase. We investigate two possible solutions of which one (the exclusion of simple lexical vector at test time) improves performance significantly on two out of the three composition datasets.","We gratefully acknowledge funding of our research by the DFG (SFB 732, Project D10).","Dissecting the Practical Lexical Function Model for Compositional Distributional Semantics. The Practical Lexical Function model (PLF) is a recently proposed compositional distributional semantic model which provides an elegant account of composition, striking a balance between expressiveness and robustness and performing at the state-of-the-art. In this paper, we identify an inconsistency in PLF between the objective function at training and the prediction at testing which leads to an overcounting of the predicate's contribution to the meaning of the phrase. We investigate two possible solutions of which one (the exclusion of simple lexical vector at test time) improves performance significantly on two out of the three composition datasets.",2015
yoon-1996-danger,https://aclanthology.org/Y96-1045,0,,,,,,,"Danger of Partial Universality : In Two Uses of In-adverbials. Not all empirical facts are treated equally in science; in theorizing, some are weighed more heavily than others. It is often unavoidable and it should not necessarily be avoided. We will present a case where a semantic theory is influenced more by a seemingly universal fact, but in fact accidental among related languages, than by a few significant exceptions in the language in question, thereby failing to capture a meaningful generalization. In particular, we argue that in-adverbials are not a test for telic predicates, as they are popularly claimed to be; we will show that this claim is triggered by the accidental fact that there are two homomorphic in-adverbials in English and their cognates in other languages.",Danger of Partial Universality : In Two Uses of In-adverbials,"Not all empirical facts are treated equally in science; in theorizing, some are weighed more heavily than others. It is often unavoidable and it should not necessarily be avoided. We will present a case where a semantic theory is influenced more by a seemingly universal fact, but in fact accidental among related languages, than by a few significant exceptions in the language in question, thereby failing to capture a meaningful generalization. In particular, we argue that in-adverbials are not a test for telic predicates, as they are popularly claimed to be; we will show that this claim is triggered by the accidental fact that there are two homomorphic in-adverbials in English and their cognates in other languages.",Danger of Partial Universality : In Two Uses of In-adverbials,"Not all empirical facts are treated equally in science; in theorizing, some are weighed more heavily than others. It is often unavoidable and it should not necessarily be avoided. We will present a case where a semantic theory is influenced more by a seemingly universal fact, but in fact accidental among related languages, than by a few significant exceptions in the language in question, thereby failing to capture a meaningful generalization. In particular, we argue that in-adverbials are not a test for telic predicates, as they are popularly claimed to be; we will show that this claim is triggered by the accidental fact that there are two homomorphic in-adverbials in English and their cognates in other languages.",,"Danger of Partial Universality : In Two Uses of In-adverbials. Not all empirical facts are treated equally in science; in theorizing, some are weighed more heavily than others. It is often unavoidable and it should not necessarily be avoided. We will present a case where a semantic theory is influenced more by a seemingly universal fact, but in fact accidental among related languages, than by a few significant exceptions in the language in question, thereby failing to capture a meaningful generalization. In particular, we argue that in-adverbials are not a test for telic predicates, as they are popularly claimed to be; we will show that this claim is triggered by the accidental fact that there are two homomorphic in-adverbials in English and their cognates in other languages.",1996
wachsmuth-etal-2017-building,https://aclanthology.org/W17-5106,0,,,,,,,"Building an Argument Search Engine for the Web. Computational argumentation is expected to play a critical role in the future of web search. To make this happen, many search-related questions must be revisited, such as how people query for arguments, how to mine arguments from the web, or how to rank them. In this paper, we develop an argument search framework for studying these and further questions. The framework allows for the composition of approaches to acquiring, mining, assessing, indexing, querying, retrieving, ranking, and presenting arguments while relying on standard infrastructure and interfaces. Based on the framework, we build a prototype search engine, called args, that relies on an initial, freely accessible index of nearly 300k arguments crawled from reliable web resources. The framework and the argument search engine are intended as an environment for collaborative research on computational argumentation and its practical evaluation.",Building an Argument Search Engine for the Web,"Computational argumentation is expected to play a critical role in the future of web search. To make this happen, many search-related questions must be revisited, such as how people query for arguments, how to mine arguments from the web, or how to rank them. In this paper, we develop an argument search framework for studying these and further questions. The framework allows for the composition of approaches to acquiring, mining, assessing, indexing, querying, retrieving, ranking, and presenting arguments while relying on standard infrastructure and interfaces. Based on the framework, we build a prototype search engine, called args, that relies on an initial, freely accessible index of nearly 300k arguments crawled from reliable web resources. The framework and the argument search engine are intended as an environment for collaborative research on computational argumentation and its practical evaluation.",Building an Argument Search Engine for the Web,"Computational argumentation is expected to play a critical role in the future of web search. To make this happen, many search-related questions must be revisited, such as how people query for arguments, how to mine arguments from the web, or how to rank them. In this paper, we develop an argument search framework for studying these and further questions. The framework allows for the composition of approaches to acquiring, mining, assessing, indexing, querying, retrieving, ranking, and presenting arguments while relying on standard infrastructure and interfaces. Based on the framework, we build a prototype search engine, called args, that relies on an initial, freely accessible index of nearly 300k arguments crawled from reliable web resources. The framework and the argument search engine are intended as an environment for collaborative research on computational argumentation and its practical evaluation.",,"Building an Argument Search Engine for the Web. Computational argumentation is expected to play a critical role in the future of web search. To make this happen, many search-related questions must be revisited, such as how people query for arguments, how to mine arguments from the web, or how to rank them. In this paper, we develop an argument search framework for studying these and further questions. 
The framework allows for the composition of approaches to acquiring, mining, assessing, indexing, querying, retrieving, ranking, and presenting arguments while relying on standard infrastructure and interfaces. Based on the framework, we build a prototype search engine, called args, that relies on an initial, freely accessible index of nearly 300k arguments crawled from reliable web resources. The framework and the argument search engine are intended as an environment for collaborative research on computational argumentation and its practical evaluation.",2017
williams-liden-2017-demonstration,https://aclanthology.org/W17-5511,0,,,,,,,"Demonstration of interactive teaching for end-to-end dialog control with hybrid code networks. This is a demonstration of interactive teaching for practical end-to-end dialog systems driven by a recurrent neural network. In this approach, a developer teaches the network by interacting with the system and providing on-the-spot corrections. Once a system is deployed, a developer can also correct mistakes in logged dialogs. This demonstration shows both of these teaching methods applied to dialog systems in three domains: pizza ordering, restaurant information, and weather forecasts.",Demonstration of interactive teaching for end-to-end dialog control with hybrid code networks,"This is a demonstration of interactive teaching for practical end-to-end dialog systems driven by a recurrent neural network. In this approach, a developer teaches the network by interacting with the system and providing on-the-spot corrections. Once a system is deployed, a developer can also correct mistakes in logged dialogs. This demonstration shows both of these teaching methods applied to dialog systems in three domains: pizza ordering, restaurant information, and weather forecasts.",Demonstration of interactive teaching for end-to-end dialog control with hybrid code networks,"This is a demonstration of interactive teaching for practical end-to-end dialog systems driven by a recurrent neural network. In this approach, a developer teaches the network by interacting with the system and providing on-the-spot corrections. Once a system is deployed, a developer can also correct mistakes in logged dialogs. This demonstration shows both of these teaching methods applied to dialog systems in three domains: pizza ordering, restaurant information, and weather forecasts.",,"Demonstration of interactive teaching for end-to-end dialog control with hybrid code networks. This is a demonstration of interactive teaching for practical end-to-end dialog systems driven by a recurrent neural network. In this approach, a developer teaches the network by interacting with the system and providing on-the-spot corrections. Once a system is deployed, a developer can also correct mistakes in logged dialogs. This demonstration shows both of these teaching methods applied to dialog systems in three domains: pizza ordering, restaurant information, and weather forecasts.",2017
abb-etal-1993-incremental,https://aclanthology.org/E93-1002,0,,,,,,,The Incremental Generation of Passive Sentences. This paper sketches some basic features of the SYNPHONICS account of the computational modelling of incremental language production with the example of the generation of passive sentences. The SYNPHONICS approach aims at linking psycholinguistic insights into the nature of the human natural language production process with well-established assumptions in theoretical and computational linguistics concerning the representation and processing of grammatical knowledge. We differentiate between,The Incremental Generation of Passive Sentences,This paper sketches some basic features of the SYNPHONICS account of the computational modelling of incremental language production with the example of the generation of passive sentences. The SYNPHONICS approach aims at linking psycholinguistic insights into the nature of the human natural language production process with well-established assumptions in theoretical and computational linguistics concerning the representation and processing of grammatical knowledge. We differentiate between,The Incremental Generation of Passive Sentences,This paper sketches some basic features of the SYNPHONICS account of the computational modelling of incremental language production with the example of the generation of passive sentences. The SYNPHONICS approach aims at linking psycholinguistic insights into the nature of the human natural language production process with well-established assumptions in theoretical and computational linguistics concerning the representation and processing of grammatical knowledge. We differentiate between,,The Incremental Generation of Passive Sentences. This paper sketches some basic features of the SYNPHONICS account of the computational modelling of incremental language production with the example of the generation of passive sentences. The SYNPHONICS approach aims at linking psycholinguistic insights into the nature of the human natural language production process with well-established assumptions in theoretical and computational linguistics concerning the representation and processing of grammatical knowledge. We differentiate between,1993
kuncham-etal-2015-statistical,https://aclanthology.org/R15-1042,0,,,,,,,"Statistical Sandhi Splitter and its Effect on NLP Applications. This paper revisits the work of (Kuncham et al., 2015) which developed a statistical sandhi splitter (SSS) for agglutinative languages that was tested for Telugu and Malayalam languages. Handling compound words is a major challenge for Natural Language Processing (NLP) applications for agglutinative languages. Hence, in this paper we concentrate on testing the effect of SSS on the NLP applications like Machine Translation, Dialogue System and Anaphora Resolution and show that the accuracy of these applications is consistently improved by using SSS. We shall also discuss in detail the performance of SSS on these applications.",Statistical Sandhi Splitter and its Effect on {NLP} Applications,"This paper revisits the work of (Kuncham et al., 2015) which developed a statistical sandhi splitter (SSS) for agglutinative languages that was tested for Telugu and Malayalam languages. Handling compound words is a major challenge for Natural Language Processing (NLP) applications for agglutinative languages. Hence, in this paper we concentrate on testing the effect of SSS on the NLP applications like Machine Translation, Dialogue System and Anaphora Resolution and show that the accuracy of these applications is consistently improved by using SSS. We shall also discuss in detail the performance of SSS on these applications.",Statistical Sandhi Splitter and its Effect on NLP Applications,"This paper revisits the work of (Kuncham et al., 2015) which developed a statistical sandhi splitter (SSS) for agglutinative languages that was tested for Telugu and Malayalam languages. Handling compound words is a major challenge for Natural Language Processing (NLP) applications for agglutinative languages. Hence, in this paper we concentrate on testing the effect of SSS on the NLP applications like Machine Translation, Dialogue System and Anaphora Resolution and show that the accuracy of these applications is consistently improved by using SSS. We shall also discuss in detail the performance of SSS on these applications.",,"Statistical Sandhi Splitter and its Effect on NLP Applications. This paper revisits the work of (Kuncham et al., 2015) which developed a statistical sandhi splitter (SSS) for agglutinative languages that was tested for Telugu and Malayalam languages. Handling compound words is a major challenge for Natural Language Processing (NLP) applications for agglutinative languages. Hence, in this paper we concentrate on testing the effect of SSS on the NLP applications like Machine Translation, Dialogue System and Anaphora Resolution and show that the accuracy of these applications is consistently improved by using SSS. We shall also discuss in detail the performance of SSS on these applications.",2015
lux-etal-2020-truth,https://aclanthology.org/2020.eval4nlp-1.1,0,,,,,,,"Truth or Error? Towards systematic analysis of factual errors in abstractive summaries. This paper presents a typology of errors produced by automatic summarization systems. The typology was created by manually analyzing the output of four recent neural summarization systems. Our work is motivated by the growing awareness of the need for better summary evaluation methods that go beyond conventional overlap-based metrics. Our typology is structured into two dimensions. First, the Mapping Dimension describes surface-level errors and provides insight into word-sequence transformation issues. Second, the Meaning Dimension describes issues related to interpretation and provides insight into breakdowns in truth, i.e., factual faithfulness to the original text. Comparative analysis revealed that two neural summarization systems leveraging pretrained models have an advantage in decreasing grammaticality errors, but not necessarily factual errors. We also discuss the importance of ensuring that summary length and abstractiveness do not interfere with evaluating summary quality.",Truth or Error? Towards systematic analysis of factual errors in abstractive summaries,"This paper presents a typology of errors produced by automatic summarization systems. The typology was created by manually analyzing the output of four recent neural summarization systems. Our work is motivated by the growing awareness of the need for better summary evaluation methods that go beyond conventional overlap-based metrics. Our typology is structured into two dimensions. First, the Mapping Dimension describes surface-level errors and provides insight into word-sequence transformation issues. Second, the Meaning Dimension describes issues related to interpretation and provides insight into breakdowns in truth, i.e., factual faithfulness to the original text. Comparative analysis revealed that two neural summarization systems leveraging pretrained models have an advantage in decreasing grammaticality errors, but not necessarily factual errors. We also discuss the importance of ensuring that summary length and abstractiveness do not interfere with evaluating summary quality.",Truth or Error? Towards systematic analysis of factual errors in abstractive summaries,"This paper presents a typology of errors produced by automatic summarization systems. The typology was created by manually analyzing the output of four recent neural summarization systems. Our work is motivated by the growing awareness of the need for better summary evaluation methods that go beyond conventional overlap-based metrics. Our typology is structured into two dimensions. First, the Mapping Dimension describes surface-level errors and provides insight into word-sequence transformation issues. Second, the Meaning Dimension describes issues related to interpretation and provides insight into breakdowns in truth, i.e., factual faithfulness to the original text. Comparative analysis revealed that two neural summarization systems leveraging pretrained models have an advantage in decreasing grammaticality errors, but not necessarily factual errors. We also discuss the importance of ensuring that summary length and abstractiveness do not interfere with evaluating summary quality.",Acknowledgments: We thank FD Mediagroep for conducting the Smart Journalism project which allowed us to perform this research.,"Truth or Error? Towards systematic analysis of factual errors in abstractive summaries. 
This paper presents a typology of errors produced by automatic summarization systems. The typology was created by manually analyzing the output of four recent neural summarization systems. Our work is motivated by the growing awareness of the need for better summary evaluation methods that go beyond conventional overlap-based metrics. Our typology is structured into two dimensions. First, the Mapping Dimension describes surface-level errors and provides insight into word-sequence transformation issues. Second, the Meaning Dimension describes issues related to interpretation and provides insight into breakdowns in truth, i.e., factual faithfulness to the original text. Comparative analysis revealed that two neural summarization systems leveraging pretrained models have an advantage in decreasing grammaticality errors, but not necessarily factual errors. We also discuss the importance of ensuring that summary length and abstractiveness do not interfere with evaluating summary quality.",2020
islam-etal-2012-text,https://aclanthology.org/Y12-1059,1,,,,education,,,"Text Readability Classification of Textbooks of a Low-Resource Language. There are many languages considered to be low-density languages, either because the population speaking the language is not very large, or because insufficient digitized text material is available in the language even though millions of people speak the language. Bangla is one of the latter ones. Readability classification is an important Natural Language Processing (NLP) application that can be used to judge the quality of documents and assist writers to locate possible problems. This paper presents a readability classifier of Bangla textbook documents based on information-theoretic and lexical features. The features proposed in this paper result in an F-score that is 50% higher than that for traditional readability formulas.",Text Readability Classification of Textbooks of a Low-Resource Language,"There are many languages considered to be low-density languages, either because the population speaking the language is not very large, or because insufficient digitized text material is available in the language even though millions of people speak the language. Bangla is one of the latter ones. Readability classification is an important Natural Language Processing (NLP) application that can be used to judge the quality of documents and assist writers to locate possible problems. This paper presents a readability classifier of Bangla textbook documents based on information-theoretic and lexical features. The features proposed in this paper result in an F-score that is 50% higher than that for traditional readability formulas.",Text Readability Classification of Textbooks of a Low-Resource Language,"There are many languages considered to be low-density languages, either because the population speaking the language is not very large, or because insufficient digitized text material is available in the language even though millions of people speak the language. Bangla is one of the latter ones. Readability classification is an important Natural Language Processing (NLP) application that can be used to judge the quality of documents and assist writers to locate possible problems. This paper presents a readability classifier of Bangla textbook documents based on information-theoretic and lexical features. The features proposed in this paper result in an F-score that is 50% higher than that for traditional readability formulas.","We would like to thank Mr. Munir Hasan from the Bangladesh Open Source Network (BdOSN) and Mr. Murshid Aktar from the National Curriculum & Textbook Board Authority, Bangladesh for their help on corpus collection. We would also like to thank Andy Lücking, Paul Warner and Armin Hoenen for their fruitful suggestions and comments. Finally, we thank three anonymous reviewers. This work is funded by the LOEWE Digital-Humanities project in the Goethe-Universität Frankfurt.","Text Readability Classification of Textbooks of a Low-Resource Language. There are many languages considered to be low-density languages, either because the population speaking the language is not very large, or because insufficient digitized text material is available in the language even though millions of people speak the language. Bangla is one of the latter ones. Readability classification is an important Natural Language Processing (NLP) application that can be used to judge the quality of documents and assist writers to locate possible problems. 
This paper presents a readability classifier of Bangla textbook documents based on information-theoretic and lexical features. The features proposed in this paper result in an F-score that is 50% higher than that for traditional readability formulas.",2012
wang-etal-2021-enhanced,https://aclanthology.org/2021.iwpt-1.20,0,,,,,,,"Enhanced Universal Dependency Parsing with Automated Concatenation of Embeddings. This paper describes the system used in the submission from the SHANGHAITECH team to the IWPT 2021 Shared Task. Our system is a graph-based parser with the technique of Automated Concatenation of Embeddings (ACE). Because recent work found that better word representations can be obtained by concatenating different types of embeddings, we use ACE to automatically find a better concatenation of embeddings for the task of enhanced universal dependencies. According to official results averaged over 17 languages, our system ranks 2nd among 9 teams.",Enhanced {U}niversal {D}ependency Parsing with Automated Concatenation of Embeddings,"This paper describes the system used in the submission from the SHANGHAITECH team to the IWPT 2021 Shared Task. Our system is a graph-based parser with the technique of Automated Concatenation of Embeddings (ACE). Because recent work found that better word representations can be obtained by concatenating different types of embeddings, we use ACE to automatically find a better concatenation of embeddings for the task of enhanced universal dependencies. According to official results averaged over 17 languages, our system ranks 2nd among 9 teams.",Enhanced Universal Dependency Parsing with Automated Concatenation of Embeddings,"This paper describes the system used in the submission from the SHANGHAITECH team to the IWPT 2021 Shared Task. Our system is a graph-based parser with the technique of Automated Concatenation of Embeddings (ACE). Because recent work found that better word representations can be obtained by concatenating different types of embeddings, we use ACE to automatically find a better concatenation of embeddings for the task of enhanced universal dependencies. According to official results averaged over 17 languages, our system ranks 2nd among 9 teams.",,"Enhanced Universal Dependency Parsing with Automated Concatenation of Embeddings. This paper describes the system used in the submission from the SHANGHAITECH team to the IWPT 2021 Shared Task. Our system is a graph-based parser with the technique of Automated Concatenation of Embeddings (ACE). Because recent work found that better word representations can be obtained by concatenating different types of embeddings, we use ACE to automatically find a better concatenation of embeddings for the task of enhanced universal dependencies. According to official results averaged over 17 languages, our system ranks 2nd among 9 teams.",2021
agirre-etal-2009-use,https://aclanthology.org/2009.eamt-1.9,0,,,,,,,"Use of Rich Linguistic Information to Translate Prepositions and Grammar Cases to Basque. This paper presents three successful techniques to translate prepositions heading verbal complements by means of rich linguistic information, in the context of a rule-based Machine Translation system for an agglutinative language with scarce resources. This information comes in the form of lexicalized syntactic dependency triples, verb subcategorization and manually coded selection rules based on lexical, syntactic and semantic information. The first two resources have been automatically extracted from monolingual corpora. The results obtained using a new evaluation methodology show that all proposed techniques improve precision over the baselines, including a translation dictionary compiled from an aligned corpus, and a state-of-the-art statistical Machine Translation system. The results also show that linguistic information in all three techniques are complementary, and that a combination of them obtains the best F-score results overall.",Use of Rich Linguistic Information to Translate Prepositions and Grammar Cases to {B}asque,"This paper presents three successful techniques to translate prepositions heading verbal complements by means of rich linguistic information, in the context of a rule-based Machine Translation system for an agglutinative language with scarce resources. This information comes in the form of lexicalized syntactic dependency triples, verb subcategorization and manually coded selection rules based on lexical, syntactic and semantic information. The first two resources have been automatically extracted from monolingual corpora. The results obtained using a new evaluation methodology show that all proposed techniques improve precision over the baselines, including a translation dictionary compiled from an aligned corpus, and a state-of-the-art statistical Machine Translation system. The results also show that linguistic information in all three techniques are complementary, and that a combination of them obtains the best F-score results overall.",Use of Rich Linguistic Information to Translate Prepositions and Grammar Cases to Basque,"This paper presents three successful techniques to translate prepositions heading verbal complements by means of rich linguistic information, in the context of a rule-based Machine Translation system for an agglutinative language with scarce resources. This information comes in the form of lexicalized syntactic dependency triples, verb subcategorization and manually coded selection rules based on lexical, syntactic and semantic information. The first two resources have been automatically extracted from monolingual corpora. The results obtained using a new evaluation methodology show that all proposed techniques improve precision over the baselines, including a translation dictionary compiled from an aligned corpus, and a state-of-the-art statistical Machine Translation system. 
The results also show that linguistic information in all three techniques are complementary, and that a combination of them obtains the best F-score results overall.","This research was supported in part by the Spanish Ministry of Education and Science (OpenMT: Open Source Machine Translation using hybrid methods, TIN2006-15307-C03-01; RICOTERM-3, HUM2007-65966.CO2-02) and the Regional Branch of the Basque Government (AnHITZ 2006: Language Technologies for Multilingual Interaction in Intelligent Environments, IE06-185). Gorka Labaka is supported by a PhD grant from the Basque Government (grant code, BFI05.326). Consumer corpus has been kindly supplied by Asier Alcázar from the University of Missouri-Columbia and by Eroski Fundazioa.","Use of Rich Linguistic Information to Translate Prepositions and Grammar Cases to Basque. This paper presents three successful techniques to translate prepositions heading verbal complements by means of rich linguistic information, in the context of a rule-based Machine Translation system for an agglutinative language with scarce resources. This information comes in the form of lexicalized syntactic dependency triples, verb subcategorization and manually coded selection rules based on lexical, syntactic and semantic information. The first two resources have been automatically extracted from monolingual corpora. The results obtained using a new evaluation methodology show that all proposed techniques improve precision over the baselines, including a translation dictionary compiled from an aligned corpus, and a state-of-the-art statistical Machine Translation system. The results also show that linguistic information in all three techniques are complementary, and that a combination of them obtains the best F-score results overall.",2009
lehtola-etal-1985-language,https://aclanthology.org/E85-1015.pdf,0,,,,,,,"Language-Based Environment for Natural Language Parsing. The left constituent stack; the right constituent stack. The syntax of these declarations can be seen in Figure 3.",Language-Based Environment for Natural Language Parsing,"The left constituent stack; the right constituent stack. The syntax of these declarations can be seen in Figure 3.",Language-Based Environment for Natural Language Parsing,"The left constituent stack; the right constituent stack. The syntax of these declarations can be seen in Figure 3.",,"Language-Based Environment for Natural Language Parsing. The left constituent stack; the right constituent stack. The syntax of these declarations can be seen in Figure 3.",1985
frej-etal-2020-wikir,https://aclanthology.org/2020.lrec-1.237.pdf,0,,,,,,,"WIKIR: A Python Toolkit for Building a Large-scale Wikipedia-based English Information Retrieval Dataset. Over the past years, deep learning methods allowed for new state-of-the-art results in ad-hoc information retrieval. However such methods usually require large amounts of annotated data to be effective. Since most standard ad-hoc information retrieval datasets publicly available for academic research (e.g. Robust04, ClueWeb09) have at most 250 annotated queries, the recent deep learning models for information retrieval perform poorly on these datasets. These models (e.g. DUET, Conv-KNRM) are trained and evaluated on data collected from commercial search engines not publicly available for academic research which is a problem for reproducibility and the advancement of research. In this paper, we propose WIKIR: an open-source toolkit to automatically build large-scale English information retrieval datasets based on Wikipedia. WIKIR is publicly available on GitHub. We also provide wikIR78k and wikIRS78k: two large-scale publicly available datasets that both contain 78,628 queries and 3,060,191 (query, relevant documents) pairs.",{WIKIR}: A Python Toolkit for Building a Large-scale {W}ikipedia-based {E}nglish Information Retrieval Dataset,"Over the past years, deep learning methods allowed for new state-of-the-art results in ad-hoc information retrieval. However such methods usually require large amounts of annotated data to be effective. Since most standard ad-hoc information retrieval datasets publicly available for academic research (e.g. Robust04, ClueWeb09) have at most 250 annotated queries, the recent deep learning models for information retrieval perform poorly on these datasets. These models (e.g. DUET, Conv-KNRM) are trained and evaluated on data collected from commercial search engines not publicly available for academic research which is a problem for reproducibility and the advancement of research. In this paper, we propose WIKIR: an open-source toolkit to automatically build large-scale English information retrieval datasets based on Wikipedia. WIKIR is publicly available on GitHub. We also provide wikIR78k and wikIRS78k: two large-scale publicly available datasets that both contain 78,628 queries and 3,060,191 (query, relevant documents) pairs.",WIKIR: A Python Toolkit for Building a Large-scale Wikipedia-based English Information Retrieval Dataset,"Over the past years, deep learning methods allowed for new state-of-the-art results in ad-hoc information retrieval. However such methods usually require large amounts of annotated data to be effective. Since most standard ad-hoc information retrieval datasets publicly available for academic research (e.g. Robust04, ClueWeb09) have at most 250 annotated queries, the recent deep learning models for information retrieval perform poorly on these datasets. These models (e.g. DUET, Conv-KNRM) are trained and evaluated on data collected from commercial search engines not publicly available for academic research which is a problem for reproducibility and the advancement of research. In this paper, we propose WIKIR: an open-source toolkit to automatically build large-scale English information retrieval datasets based on Wikipedia. WIKIR is publicly available on GitHub. 
We also provide wikIR78k and wikIRS78k: two large-scale publicly available datasets that both contain 78,628 queries and 3,060,191 (query, relevant documents) pairs.","The authors would like to thank Maximin Coavoux, 11 Emmanuelle Esperança-Rodier, 11 Lorraine Goeuriot, 11 William N. Havard, 11 Quentin Legros, 12 Fabien Ringeval, 11 and Loïc Vial 11 for their thoughtful comments and efforts towards improving our manuscript.","WIKIR: A Python Toolkit for Building a Large-scale Wikipedia-based English Information Retrieval Dataset. Over the past years, deep learning methods allowed for new state-of-the-art results in ad-hoc information retrieval. However such methods usually require large amounts of annotated data to be effective. Since most standard ad-hoc information retrieval datasets publicly available for academic research (e.g. Robust04, ClueWeb09) have at most 250 annotated queries, the recent deep learning models for information retrieval perform poorly on these datasets. These models (e.g. DUET, Conv-KNRM) are trained and evaluated on data collected from commercial search engines not publicly available for academic research which is a problem for reproducibility and the advancement of research. In this paper, we propose WIKIR: an open-source toolkit to automatically build large-scale English information retrieval datasets based on Wikipedia. WIKIR is publicly available on GitHub. We also provide wikIR78k and wikIRS78k: two large-scale publicly available datasets that both contain 78,628 queries and 3,060,191 (query, relevant documents) pairs.",2020
alkhairy-etal-2020-finite,https://aclanthology.org/2020.lrec-1.473.pdf,0,,,,,,,"Finite State Machine Pattern-Root Arabic Morphological Generator, Analyzer and Diacritizer. We describe and evaluate the Finite-State Arabic Morphologizer (FSAM), a concatenative (prefix-stem-suffix) and templatic (root-pattern) morphologizer that generates and analyzes undiacritized Modern Standard Arabic (MSA) words, and diacritizes them. Our bidirectional unified-architecture finite state machine (FSM) is based on morphotactic MSA grammatical rules. The FSM models the root-pattern structure related to semantics and syntax, making it readily scalable unlike stem-tabulations in prevailing systems. We evaluate the coverage and accuracy of our model, with coverage being the percentage of words in Tashkeela (a large corpus) that can be analyzed. Accuracy is computed against a gold standard, comprising words and properties, created from the intersection of UD PADT treebank and Tashkeela. Coverage of analysis (extraction of root and properties from word) is 82%. Accuracy results are: root computed from a word (92%), word generation from a root (100%), non-root properties of a word (97%), and diacritization (84%). FSAM's non-root results match or surpass MADAMIRA's, and root result comparisons are not made because of the concatenative nature of publicly available morphologizers.","Finite State Machine Pattern-Root {A}rabic Morphological Generator, Analyzer and Diacritizer","We describe and evaluate the Finite-State Arabic Morphologizer (FSAM), a concatenative (prefix-stem-suffix) and templatic (root-pattern) morphologizer that generates and analyzes undiacritized Modern Standard Arabic (MSA) words, and diacritizes them. Our bidirectional unified-architecture finite state machine (FSM) is based on morphotactic MSA grammatical rules. The FSM models the root-pattern structure related to semantics and syntax, making it readily scalable unlike stem-tabulations in prevailing systems. We evaluate the coverage and accuracy of our model, with coverage being the percentage of words in Tashkeela (a large corpus) that can be analyzed. Accuracy is computed against a gold standard, comprising words and properties, created from the intersection of UD PADT treebank and Tashkeela. Coverage of analysis (extraction of root and properties from word) is 82%. Accuracy results are: root computed from a word (92%), word generation from a root (100%), non-root properties of a word (97%), and diacritization (84%). FSAM's non-root results match or surpass MADAMIRA's, and root result comparisons are not made because of the concatenative nature of publicly available morphologizers.","Finite State Machine Pattern-Root Arabic Morphological Generator, Analyzer and Diacritizer","We describe and evaluate the Finite-State Arabic Morphologizer (FSAM), a concatenative (prefix-stem-suffix) and templatic (root-pattern) morphologizer that generates and analyzes undiacritized Modern Standard Arabic (MSA) words, and diacritizes them. Our bidirectional unified-architecture finite state machine (FSM) is based on morphotactic MSA grammatical rules. The FSM models the root-pattern structure related to semantics and syntax, making it readily scalable unlike stem-tabulations in prevailing systems. We evaluate the coverage and accuracy of our model, with coverage being the percentage of words in Tashkeela (a large corpus) that can be analyzed. 
Accuracy is computed against a gold standard, comprising words and properties, created from the intersection of UD PADT treebank and Tashkeela. Coverage of analysis (extraction of root and properties from word) is 82%. Accuracy results are: root computed from a word (92%), word generation from a root (100%), non-root properties of a word (97%), and diacritization (84%). FSAM's non-root results match or surpass MADAMIRA's, and root result comparisons are not made because of the concatenative nature of publicly available morphologizers.",,"Finite State Machine Pattern-Root Arabic Morphological Generator, Analyzer and Diacritizer. We describe and evaluate the Finite-State Arabic Morphologizer (FSAM), a concatenative (prefix-stem-suffix) and templatic (root-pattern) morphologizer that generates and analyzes undiacritized Modern Standard Arabic (MSA) words, and diacritizes them. Our bidirectional unified-architecture finite state machine (FSM) is based on morphotactic MSA grammatical rules. The FSM models the root-pattern structure related to semantics and syntax, making it readily scalable unlike stem-tabulations in prevailing systems. We evaluate the coverage and accuracy of our model, with coverage being the percentage of words in Tashkeela (a large corpus) that can be analyzed. Accuracy is computed against a gold standard, comprising words and properties, created from the intersection of UD PADT treebank and Tashkeela. Coverage of analysis (extraction of root and properties from word) is 82%. Accuracy results are: root computed from a word (92%), word generation from a root (100%), non-root properties of a word (97%), and diacritization (84%). FSAM's non-root results match or surpass MADAMIRA's, and root result comparisons are not made because of the concatenative nature of publicly available morphologizers.",2020
prudhommeaux-etal-2017-vector,https://aclanthology.org/P17-2006.pdf,1,,,,health,,,"Vector space models for evaluating semantic fluency in autism. A common test administered during neurological examination is the semantic fluency test, in which the patient must list as many examples of a given semantic category as possible under timed conditions. Poor performance is associated with neurological conditions characterized by impairments in executive function, such as dementia, schizophrenia, and autism spectrum disorder (ASD). Methods for analyzing semantic fluency responses at the level of detail necessary to uncover these differences have typically relied on subjective manual annotation. In this paper, we explore automated approaches for scoring semantic fluency responses that leverage ontological resources and distributional semantic models to characterize the semantic fluency responses produced by young children with and without ASD. Using these methods, we find significant differences in the semantic fluency responses of children with ASD, demonstrating the utility of using objective methods for clinical language analysis.",Vector space models for evaluating semantic fluency in autism,"A common test administered during neurological examination is the semantic fluency test, in which the patient must list as many examples of a given semantic category as possible under timed conditions. Poor performance is associated with neurological conditions characterized by impairments in executive function, such as dementia, schizophrenia, and autism spectrum disorder (ASD). Methods for analyzing semantic fluency responses at the level of detail necessary to uncover these differences have typically relied on subjective manual annotation. In this paper, we explore automated approaches for scoring semantic fluency responses that leverage ontological resources and distributional semantic models to characterize the semantic fluency responses produced by young children with and without ASD. Using these methods, we find significant differences in the semantic fluency responses of children with ASD, demonstrating the utility of using objective methods for clinical language analysis.",Vector space models for evaluating semantic fluency in autism,"A common test administered during neurological examination is the semantic fluency test, in which the patient must list as many examples of a given semantic category as possible under timed conditions. Poor performance is associated with neurological conditions characterized by impairments in executive function, such as dementia, schizophrenia, and autism spectrum disorder (ASD). Methods for analyzing semantic fluency responses at the level of detail necessary to uncover these differences have typically relied on subjective manual annotation. In this paper, we explore automated approaches for scoring semantic fluency responses that leverage ontological resources and distributional semantic models to characterize the semantic fluency responses produced by young children with and without ASD. Using these methods, we find significant differences in the semantic fluency responses of children with ASD, demonstrating the utility of using objective methods for clinical language analysis.","This work was supported in part by NIH grants R01DC013996, R01DC012033, and R01DC007129. 
Any opinions, findings, conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the NIH.","Vector space models for evaluating semantic fluency in autism. A common test administered during neurological examination is the semantic fluency test, in which the patient must list as many examples of a given semantic category as possible under timed conditions. Poor performance is associated with neurological conditions characterized by impairments in executive function, such as dementia, schizophrenia, and autism spectrum disorder (ASD). Methods for analyzing semantic fluency responses at the level of detail necessary to uncover these differences have typically relied on subjective manual annotation. In this paper, we explore automated approaches for scoring semantic fluency responses that leverage ontological resources and distributional semantic models to characterize the semantic fluency responses produced by young children with and without ASD. Using these methods, we find significant differences in the semantic fluency responses of children with ASD, demonstrating the utility of using objective methods for clinical language analysis.",2017
shwartz-etal-2015-learning,https://aclanthology.org/K15-1018.pdf,0,,,,,,,"Learning to Exploit Structured Resources for Lexical Inference. Massive knowledge resources, such as Wikidata, can provide valuable information for lexical inference, especially for proper-names. Prior resource-based approaches typically select the subset of each resource's relations which are relevant for a particular given task. The selection process is done manually, limiting these approaches to smaller resources such as WordNet, which lacks coverage of proper-names and recent terminology. This paper presents a supervised framework for automatically selecting an optimized subset of resource relations for a given target inference task. Our approach enables the use of large-scale knowledge resources, thus providing a rich source of high-precision inferences over proper-names.",Learning to Exploit Structured Resources for Lexical Inference,"Massive knowledge resources, such as Wikidata, can provide valuable information for lexical inference, especially for proper-names. Prior resource-based approaches typically select the subset of each resource's relations which are relevant for a particular given task. The selection process is done manually, limiting these approaches to smaller resources such as WordNet, which lacks coverage of proper-names and recent terminology. This paper presents a supervised framework for automatically selecting an optimized subset of resource relations for a given target inference task. Our approach enables the use of large-scale knowledge resources, thus providing a rich source of high-precision inferences over proper-names.",Learning to Exploit Structured Resources for Lexical Inference,"Massive knowledge resources, such as Wikidata, can provide valuable information for lexical inference, especially for proper-names. Prior resource-based approaches typically select the subset of each resource's relations which are relevant for a particular given task. The selection process is done manually, limiting these approaches to smaller resources such as WordNet, which lacks coverage of proper-names and recent terminology. This paper presents a supervised framework for automatically selecting an optimized subset of resource relations for a given target inference task. Our approach enables the use of large-scale knowledge resources, thus providing a rich source of high-precision inferences over proper-names.","This work was supported by an Intel ICRI-CI grant, the Google Research Award Program and the German Research Foundation via the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1).","Learning to Exploit Structured Resources for Lexical Inference. Massive knowledge resources, such as Wikidata, can provide valuable information for lexical inference, especially for proper-names. Prior resource-based approaches typically select the subset of each resource's relations which are relevant for a particular given task. The selection process is done manually, limiting these approaches to smaller resources such as WordNet, which lacks coverage of proper-names and recent terminology. This paper presents a supervised framework for automatically selecting an optimized subset of resource relations for a given target inference task. Our approach enables the use of large-scale knowledge resources, thus providing a rich source of high-precision inferences over proper-names.",2015
hokamp-etal-2019-evaluating,https://aclanthology.org/W19-5319.pdf,0,,,,,,,"Evaluating the Supervised and Zero-shot Performance of Multi-lingual Translation Models. We study several methods for full or partial sharing of the decoder parameters of multilingual NMT models. Using only the WMT 2019 shared task parallel datasets for training, we evaluate both fully supervised and zero-shot translation performance in 110 unique translation directions. We use additional test sets and re-purpose evaluation methods recently used for unsupervised MT in order to evaluate zero-shot translation performance for language pairs where no gold-standard parallel data is available. To our knowledge, this is the largest evaluation of multilingual translation yet conducted in terms of the total size of the training data we use, and in terms of the number of zero-shot translation pairs we evaluate. We conduct an in-depth evaluation of the translation performance of different models, highlighting the trade-offs between methods of sharing decoder parameters. We find that models which have task-specific decoder parameters outperform models where decoder parameters are fully shared across all tasks.",Evaluating the Supervised and Zero-shot Performance of Multi-lingual Translation Models,"We study several methods for full or partial sharing of the decoder parameters of multilingual NMT models. Using only the WMT 2019 shared task parallel datasets for training, we evaluate both fully supervised and zero-shot translation performance in 110 unique translation directions. We use additional test sets and re-purpose evaluation methods recently used for unsupervised MT in order to evaluate zero-shot translation performance for language pairs where no gold-standard parallel data is available. To our knowledge, this is the largest evaluation of multilingual translation yet conducted in terms of the total size of the training data we use, and in terms of the number of zero-shot translation pairs we evaluate. We conduct an in-depth evaluation of the translation performance of different models, highlighting the trade-offs between methods of sharing decoder parameters. We find that models which have task-specific decoder parameters outperform models where decoder parameters are fully shared across all tasks.",Evaluating the Supervised and Zero-shot Performance of Multi-lingual Translation Models,"We study several methods for full or partial sharing of the decoder parameters of multilingual NMT models. Using only the WMT 2019 shared task parallel datasets for training, we evaluate both fully supervised and zero-shot translation performance in 110 unique translation directions. We use additional test sets and re-purpose evaluation methods recently used for unsupervised MT in order to evaluate zero-shot translation performance for language pairs where no gold-standard parallel data is available. To our knowledge, this is the largest evaluation of multilingual translation yet conducted in terms of the total size of the training data we use, and in terms of the number of zero-shot translation pairs we evaluate. We conduct an in-depth evaluation of the translation performance of different models, highlighting the trade-offs between methods of sharing decoder parameters. We find that models which have task-specific decoder parameters outperform models where decoder parameters are fully shared across all tasks.",,"Evaluating the Supervised and Zero-shot Performance of Multi-lingual Translation Models. 
We study several methods for full or partial sharing of the decoder parameters of multilingual NMT models. Using only the WMT 2019 shared task parallel datasets for training, we evaluate both fully supervised and zero-shot translation performance in 110 unique translation directions. We use additional test sets and re-purpose evaluation methods recently used for unsupervised MT in order to evaluate zero-shot translation performance for language pairs where no gold-standard parallel data is available. To our knowledge, this is the largest evaluation of multilingual translation yet conducted in terms of the total size of the training data we use, and in terms of the number of zero-shot translation pairs we evaluate. We conduct an in-depth evaluation of the translation performance of different models, highlighting the trade-offs between methods of sharing decoder parameters. We find that models which have task-specific decoder parameters outperform models where decoder parameters are fully shared across all tasks.",2019
reed-etal-2008-linguistic,http://www.lrec-conf.org/proceedings/lrec2008/pdf/755_paper.pdf,0,,,,,,,"The Linguistic Data Consortium Member Survey: Purpose, Execution and Results. The Linguistic Data Consortium (LDC) seeks to provide its members with quality linguistic resources and services. In order to pursue these ideals and to remain current, LDC monitors the needs and sentiments of its communities. One mechanism LDC uses to generate feedback on consortium and resource issues is the LDC Member Survey. The survey allows LDC Members and nonmembers to provide LDC with valuable insight into their own unique circumstances, their current and future data needs and their views on LDC's role in meeting them. When the 2006 Survey was found to be a useful tool for communicating with the Consortium membership, a 2007 Survey was organized and administered. As a result of the surveys, LDC has confirmed that it has made a positive impact on the community and has identified ways to improve the quality of service and the diversity of monthly offerings. Many respondents recommended ways to improve LDC's functions, ordering mechanism and webpage. Some of these comments have inspired changes to LDC's operation and strategy.","The {L}inguistic {D}ata {C}onsortium Member Survey: Purpose, Execution and Results","The Linguistic Data Consortium (LDC) seeks to provide its members with quality linguistic resources and services. In order to pursue these ideals and to remain current, LDC monitors the needs and sentiments of its communities. One mechanism LDC uses to generate feedback on consortium and resource issues is the LDC Member Survey. The survey allows LDC Members and nonmembers to provide LDC with valuable insight into their own unique circumstances, their current and future data needs and their views on LDC's role in meeting them. When the 2006 Survey was found to be a useful tool for communicating with the Consortium membership, a 2007 Survey was organized and administered. As a result of the surveys, LDC has confirmed that it has made a positive impact on the community and has identified ways to improve the quality of service and the diversity of monthly offerings. Many respondents recommended ways to improve LDC's functions, ordering mechanism and webpage. Some of these comments have inspired changes to LDC's operation and strategy.","The Linguistic Data Consortium Member Survey: Purpose, Execution and Results","The Linguistic Data Consortium (LDC) seeks to provide its members with quality linguistic resources and services. In order to pursue these ideals and to remain current, LDC monitors the needs and sentiments of its communities. One mechanism LDC uses to generate feedback on consortium and resource issues is the LDC Member Survey. The survey allows LDC Members and nonmembers to provide LDC with valuable insight into their own unique circumstances, their current and future data needs and their views on LDC's role in meeting them. When the 2006 Survey was found to be a useful tool for communicating with the Consortium membership, a 2007 Survey was organized and administered. As a result of the surveys, LDC has confirmed that it has made a positive impact on the community and has identified ways to improve the quality of service and the diversity of monthly offerings. Many respondents recommended ways to improve LDC's functions, ordering mechanism and webpage. 
Some of these comments have inspired changes to LDC's operation and strategy.",,"The Linguistic Data Consortium Member Survey: Purpose, Execution and Results. The Linguistic Data Consortium (LDC) seeks to provide its members with quality linguistic resources and services. In order to pursue these ideals and to remain current, LDC monitors the needs and sentiments of its communities. One mechanism LDC uses to generate feedback on consortium and resource issues is the LDC Member Survey. The survey allows LDC Members and nonmembers to provide LDC with valuable insight into their own unique circumstances, their current and future data needs and their views on LDC's role in meeting them. When the 2006 Survey was found to be a useful tool for communicating with the Consortium membership, a 2007 Survey was organized and administered. As a result of the surveys, LDC has confirmed that it has made a positive impact on the community and has identified ways to improve the quality of service and the diversity of monthly offerings. Many respondents recommended ways to improve LDC's functions, ordering mechanism and webpage. Some of these comments have inspired changes to LDC's operation and strategy.",2008
headden-iii-etal-2006-learning,https://aclanthology.org/W06-1636.pdf,0,,,,,,,"Learning Phrasal Categories. In this work we learn clusters of contextual annotations for non-terminals in the Penn Treebank. Perhaps the best way to think about this problem is to contrast our work with that of Klein and Manning (2003). That research used tree transformations to create various grammars with different contextual annotations on the non-terminals. These grammars were then used in conjunction with a CKY parser. The authors explored the space of different annotation combinations by hand. Here we try to automate the process: to learn the ""right"" combination automatically. Our results are not quite as good as those carefully created by hand, but they are close (84.8 vs 85.7).",Learning Phrasal Categories,"In this work we learn clusters of contextual annotations for non-terminals in the Penn Treebank. Perhaps the best way to think about this problem is to contrast our work with that of Klein and Manning (2003). That research used tree transformations to create various grammars with different contextual annotations on the non-terminals. These grammars were then used in conjunction with a CKY parser. The authors explored the space of different annotation combinations by hand. Here we try to automate the process: to learn the ""right"" combination automatically. Our results are not quite as good as those carefully created by hand, but they are close (84.8 vs 85.7).",Learning Phrasal Categories,"In this work we learn clusters of contextual annotations for non-terminals in the Penn Treebank. Perhaps the best way to think about this problem is to contrast our work with that of Klein and Manning (2003). That research used tree transformations to create various grammars with different contextual annotations on the non-terminals. These grammars were then used in conjunction with a CKY parser. The authors explored the space of different annotation combinations by hand. Here we try to automate the process: to learn the ""right"" combination automatically. Our results are not quite as good as those carefully created by hand, but they are close (84.8 vs 85.7).",The research presented here was funded in part by DARPA GALE contract HR 0011-06-20001.,"Learning Phrasal Categories. In this work we learn clusters of contextual annotations for non-terminals in the Penn Treebank. Perhaps the best way to think about this problem is to contrast our work with that of Klein and Manning (2003). That research used tree transformations to create various grammars with different contextual annotations on the non-terminals. These grammars were then used in conjunction with a CKY parser. The authors explored the space of different annotation combinations by hand. Here we try to automate the process: to learn the ""right"" combination automatically. Our results are not quite as good as those carefully created by hand, but they are close (84.8 vs 85.7).",2006
ahmed-butt-2011-discovering,https://aclanthology.org/W11-0132.pdf,0,,,,,,,"Discovering Semantic Classes for Urdu N-V Complex Predicates. This paper reports on an exploratory investigation as to whether classes of Urdu N-V complex predicates can be identified on the basis of syntactic patterns and lexical choices associated with the N-V complex predicates. Working with data from a POS annotated corpus, we show that choices with respect to the number of arguments, case marking on subjects and which light verbs are felicitous with which nouns depend heavily on the semantics of the noun in the N-V complex predicate. This initial work represents an important step towards identifying semantic criteria relevant for complex predicate formation. Identifying the semantic criteria and being able to systematically code them in turn represents a first step towards building up a lexical resource for nouns as part of developing natural language processing tools for the underresourced South Asian language Urdu.",Discovering Semantic Classes for {U}rdu N-{V} Complex Predicates,"This paper reports on an exploratory investigation as to whether classes of Urdu N-V complex predicates can be identified on the basis of syntactic patterns and lexical choices associated with the N-V complex predicates. Working with data from a POS annotated corpus, we show that choices with respect to the number of arguments, case marking on subjects and which light verbs are felicitous with which nouns depend heavily on the semantics of the noun in the N-V complex predicate. This initial work represents an important step towards identifying semantic criteria relevant for complex predicate formation. Identifying the semantic criteria and being able to systematically code them in turn represents a first step towards building up a lexical resource for nouns as part of developing natural language processing tools for the underresourced South Asian language Urdu.",Discovering Semantic Classes for Urdu N-V Complex Predicates,"This paper reports on an exploratory investigation as to whether classes of Urdu N-V complex predicates can be identified on the basis of syntactic patterns and lexical choices associated with the N-V complex predicates. Working with data from a POS annotated corpus, we show that choices with respect to the number of arguments, case marking on subjects and which light verbs are felicitous with which nouns depend heavily on the semantics of the noun in the N-V complex predicate. This initial work represents an important step towards identifying semantic criteria relevant for complex predicate formation. Identifying the semantic criteria and being able to systematically code them in turn represents a first step towards building up a lexical resource for nouns as part of developing natural language processing tools for the underresourced South Asian language Urdu.",,"Discovering Semantic Classes for Urdu N-V Complex Predicates. This paper reports on an exploratory investigation as to whether classes of Urdu N-V complex predicates can be identified on the basis of syntactic patterns and lexical choices associated with the N-V complex predicates. Working with data from a POS annotated corpus, we show that choices with respect to the number of arguments, case marking on subjects and which light verbs are felicitous with which nouns depend heavily on the semantics of the noun in the N-V complex predicate. This initial work represents an important step towards identifying semantic criteria relevant for complex predicate formation. 
Identifying the semantic criteria and being able to systematically code them in turn represents a first step towards building up a lexical resource for nouns as part of developing natural language processing tools for the underresourced South Asian language Urdu.",2011
saint-dizier-2008-challenges,https://aclanthology.org/Y08-1006.pdf,0,,,,,,,"Some Challenges of Advanced Question-Answering: an Experiment with How-to Questions. This paper is a contribution to text semantics processing and its application to advanced question-answering where a significant portion of a well-formed text is required as a response. We focus on procedural texts of various domains, and show how titles, instructions, instructional compounds and arguments can be extracted.",Some Challenges of Advanced Question-Answering: an Experiment with How-to Questions,"This paper is a contribution to text semantics processing and its application to advanced question-answering where a significant portion of a well-formed text is required as a response. We focus on procedural texts of various domains, and show how titles, instructions, instructional compounds and arguments can be extracted.",Some Challenges of Advanced Question-Answering: an Experiment with How-to Questions,"This paper is a contribution to text semantics processing and its application to advanced question-answering where a significant portion of a well-formed text is required as a response. We focus on procedural texts of various domains, and show how titles, instructions, instructional compounds and arguments can be extracted.",,"Some Challenges of Advanced Question-Answering: an Experiment with How-to Questions. This paper is a contribution to text semantics processing and its application to advanced question-answering where a significant portion of a well-formed text is required as a response. We focus on procedural texts of various domains, and show how titles, instructions, instructional compounds and arguments can be extracted.",2008
yang-etal-2019-convolutional,https://aclanthology.org/N19-1407.pdf,0,,,,,,,"Convolutional Self-Attention Networks. Self-attention networks (SANs) have drawn increasing interest due to their high parallelization in computation and flexibility in modeling dependencies. SANs can be further enhanced with multi-head attention by allowing the model to attend to information from different representation subspaces. In this work, we propose novel convolutional self-attention networks, which offer SANs the abilities to 1) strengthen dependencies among neighboring elements, and 2) model the interaction between features extracted by multiple attention heads. Experimental results of machine translation on different language pairs and model settings show that our approach outperforms both the strong Transformer baseline and other existing models on enhancing the locality of SANs. Comparing with prior studies, the proposed model is parameter free in terms of introducing no more parameters.",Convolutional Self-Attention Networks,"Self-attention networks (SANs) have drawn increasing interest due to their high parallelization in computation and flexibility in modeling dependencies. SANs can be further enhanced with multi-head attention by allowing the model to attend to information from different representation subspaces. In this work, we propose novel convolutional self-attention networks, which offer SANs the abilities to 1) strengthen dependencies among neighboring elements, and 2) model the interaction between features extracted by multiple attention heads. Experimental results of machine translation on different language pairs and model settings show that our approach outperforms both the strong Transformer baseline and other existing models on enhancing the locality of SANs. Comparing with prior studies, the proposed model is parameter free in terms of introducing no more parameters.",Convolutional Self-Attention Networks,"Self-attention networks (SANs) have drawn increasing interest due to their high parallelization in computation and flexibility in modeling dependencies. SANs can be further enhanced with multi-head attention by allowing the model to attend to information from different representation subspaces. In this work, we propose novel convolutional self-attention networks, which offer SANs the abilities to 1) strengthen dependencies among neighboring elements, and 2) model the interaction between features extracted by multiple attention heads. Experimental results of machine translation on different language pairs and model settings show that our approach outperforms both the strong Transformer baseline and other existing models on enhancing the locality of SANs. Comparing with prior studies, the proposed model is parameter free in terms of introducing no more parameters.","The work was partly supported by the National Natural Science Foundation of China (Grant No. 61672555), the Joint Project of Macao Science and Technology Development Fund and National Natural Science Foundation of China (Grant No. 045/2017/AFJ) and the Multiyear Research Grant from the University of Macau (Grant No. MYRG2017-00087-FST). We thank the anonymous reviewers for their insightful comments.","Convolutional Self-Attention Networks. Self-attention networks (SANs) have drawn increasing interest due to their high parallelization in computation and flexibility in modeling dependencies. 
SANs can be further enhanced with multi-head attention by allowing the model to attend to information from different representation subspaces. In this work, we propose novel convolutional self-attention networks, which offer SANs the abilities to 1) strengthen dependencies among neighboring elements, and 2) model the interaction between features extracted by multiple attention heads. Experimental results of machine translation on different language pairs and model settings show that our approach outperforms both the strong Transformer baseline and other existing models on enhancing the locality of SANs. Comparing with prior studies, the proposed model is parameter free in terms of introducing no more parameters.",2019
kim-lee-2003-clause,https://aclanthology.org/U03-1005.pdf,0,,,,,,,"S-clause segmentation for efficient syntactic analysis using decision trees. In dependency parsing of long sentences with fewer subjects than predicates, it is difficult to recognize which predicate governs which subject. To handle such syntactic ambiguity between subjects and predicates, this paper proposes an ""S-clause"" segmentation method, where an S(ubject)-clause is defined as a group of words containing several predicates and their common subject. We propose an automatic S-clause segmentation method using decision trees. The S-clause information was shown to be very effective in analyzing long sentences, with an improved performance of 5 percent.",{S}-clause segmentation for efficient syntactic analysis using decision trees,"In dependency parsing of long sentences with fewer subjects than predicates, it is difficult to recognize which predicate governs which subject. To handle such syntactic ambiguity between subjects and predicates, this paper proposes an ""S-clause"" segmentation method, where an S(ubject)-clause is defined as a group of words containing several predicates and their common subject. We propose an automatic S-clause segmentation method using decision trees. The S-clause information was shown to be very effective in analyzing long sentences, with an improved performance of 5 percent.",S-clause segmentation for efficient syntactic analysis using decision trees,"In dependency parsing of long sentences with fewer subjects than predicates, it is difficult to recognize which predicate governs which subject. To handle such syntactic ambiguity between subjects and predicates, this paper proposes an ""S-clause"" segmentation method, where an S(ubject)-clause is defined as a group of words containing several predicates and their common subject. We propose an automatic S-clause segmentation method using decision trees. The S-clause information was shown to be very effective in analyzing long sentences, with an improved performance of 5 percent.",This work was supported by the Korea Science and Engineering Foundation (KOSEF) through the Advanced Information Technology Research Center (AITrc) and by the Brain Korea 21 Project in 2003.,"S-clause segmentation for efficient syntactic analysis using decision trees. In dependency parsing of long sentences with fewer subjects than predicates, it is difficult to recognize which predicate governs which subject. To handle such syntactic ambiguity between subjects and predicates, this paper proposes an ""S-clause"" segmentation method, where an S(ubject)-clause is defined as a group of words containing several predicates and their common subject. We propose an automatic S-clause segmentation method using decision trees. The S-clause information was shown to be very effective in analyzing long sentences, with an improved performance of 5 percent.",2003
coster-kauchak-2011-simple,https://aclanthology.org/P11-2117.pdf,0,,,,,,,"Simple English Wikipedia: A New Text Simplification Task. In this paper we examine the task of sentence simplification which aims to reduce the reading complexity of a sentence by incorporating more accessible vocabulary and sentence structure. We introduce a new data set that pairs English Wikipedia with Simple English Wikipedia and is orders of magnitude larger than any previously examined for sentence simplification. The data contains the full range of simplification operations including rewording, reordering, insertion and deletion. We provide an analysis of this corpus as well as preliminary results using a phrase-based translation approach for simplification.",{S}imple {E}nglish {W}ikipedia: A New Text Simplification Task,"In this paper we examine the task of sentence simplification which aims to reduce the reading complexity of a sentence by incorporating more accessible vocabulary and sentence structure. We introduce a new data set that pairs English Wikipedia with Simple English Wikipedia and is orders of magnitude larger than any previously examined for sentence simplification. The data contains the full range of simplification operations including rewording, reordering, insertion and deletion. We provide an analysis of this corpus as well as preliminary results using a phrase-based translation approach for simplification.",Simple English Wikipedia: A New Text Simplification Task,"In this paper we examine the task of sentence simplification which aims to reduce the reading complexity of a sentence by incorporating more accessible vocabulary and sentence structure. We introduce a new data set that pairs English Wikipedia with Simple English Wikipedia and is orders of magnitude larger than any previously examined for sentence simplification. The data contains the full range of simplification operations including rewording, reordering, insertion and deletion. We provide an analysis of this corpus as well as preliminary results using a phrase-based translation approach for simplification.",,"Simple English Wikipedia: A New Text Simplification Task. In this paper we examine the task of sentence simplification which aims to reduce the reading complexity of a sentence by incorporating more accessible vocabulary and sentence structure. We introduce a new data set that pairs English Wikipedia with Simple English Wikipedia and is orders of magnitude larger than any previously examined for sentence simplification. The data contains the full range of simplification operations including rewording, reordering, insertion and deletion. We provide an analysis of this corpus as well as preliminary results using a phrase-based translation approach for simplification.",2011
demollin-etal-2020-argumentation,https://aclanthology.org/2020.nl4xai-1.10.pdf,0,,,,,,,"Argumentation Theoretical Frameworks for Explainable Artificial Intelligence. This paper discusses four major argumentation theoretical frameworks with respect to their use in support of explainable artificial intelligence (XAI). We consider these frameworks as useful tools for both system-centred and user-centred XAI. The former is concerned with the generation of explanations for decisions taken by AI systems, while the latter is concerned with the way explanations are given to users and received by them.",Argumentation Theoretical Frameworks for Explainable Artificial Intelligence,"This paper discusses four major argumentation theoretical frameworks with respect to their use in support of explainable artificial intelligence (XAI). We consider these frameworks as useful tools for both system-centred and user-centred XAI. The former is concerned with the generation of explanations for decisions taken by AI systems, while the latter is concerned with the way explanations are given to users and received by them.",Argumentation Theoretical Frameworks for Explainable Artificial Intelligence,"This paper discusses four major argumentation theoretical frameworks with respect to their use in support of explainable artificial intelligence (XAI). We consider these frameworks as useful tools for both system-centred and user-centred XAI. The former is concerned with the generation of explanations for decisions taken by AI systems, while the latter is concerned with the way explanations are given to users and received by them.",This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 860621.,"Argumentation Theoretical Frameworks for Explainable Artificial Intelligence. This paper discusses four major argumentation theoretical frameworks with respect to their use in support of explainable artificial intelligence (XAI). We consider these frameworks as useful tools for both system-centred and user-centred XAI. The former is concerned with the generation of explanations for decisions taken by AI systems, while the latter is concerned with the way explanations are given to users and received by them.",2020
biesialska-etal-2020-enhancing,https://aclanthology.org/2020.acl-srw.36.pdf,0,,,,,,,"Enhancing Word Embeddings with Knowledge Extracted from Lexical Resources. In this work, we present an effective method for semantic specialization of word vector representations. To this end, we use traditional word embeddings and apply specialization methods to better capture semantic relations between words. In our approach, we leverage external knowledge from rich lexical resources such as BabelNet. We also show that our proposed post-specialization method based on an adversarial neural network with the Wasserstein distance allows to gain improvements over state-of-the-art methods on two tasks: word similarity and dialog state tracking.",Enhancing Word Embeddings with Knowledge Extracted from Lexical Resources,"In this work, we present an effective method for semantic specialization of word vector representations. To this end, we use traditional word embeddings and apply specialization methods to better capture semantic relations between words. In our approach, we leverage external knowledge from rich lexical resources such as BabelNet. We also show that our proposed post-specialization method based on an adversarial neural network with the Wasserstein distance allows to gain improvements over state-of-the-art methods on two tasks: word similarity and dialog state tracking.",Enhancing Word Embeddings with Knowledge Extracted from Lexical Resources,"In this work, we present an effective method for semantic specialization of word vector representations. To this end, we use traditional word embeddings and apply specialization methods to better capture semantic relations between words. In our approach, we leverage external knowledge from rich lexical resources such as BabelNet. We also show that our proposed post-specialization method based on an adversarial neural network with the Wasserstein distance allows to gain improvements over state-of-the-art methods on two tasks: word similarity and dialog state tracking.","We thank the anonymous reviewers for their insightful comments. This work is supported in part by the Spanish Ministerio de Economía y Competitividad, the European Regional Development Fund through the postdoctoral senior grant Ramón y Cajal and by the Agencia Estatal de Investigación through the projects EUR2019-103819 and PCIN-2017-079.","Enhancing Word Embeddings with Knowledge Extracted from Lexical Resources. In this work, we present an effective method for semantic specialization of word vector representations. To this end, we use traditional word embeddings and apply specialization methods to better capture semantic relations between words. In our approach, we leverage external knowledge from rich lexical resources such as BabelNet. We also show that our proposed post-specialization method based on an adversarial neural network with the Wasserstein distance allows to gain improvements over state-of-the-art methods on two tasks: word similarity and dialog state tracking.",2020
mckeown-paris-1987-functional,https://aclanthology.org/P87-1014.pdf,0,,,,,,,"Functional Unification Grammar Revisited. In this paper, we show that one benefit of FUG, the ability to state global constraints on choice separately from syntactic rules, is difficult in generation systems based on augmented context free grammars (e.g., Definite Clause Grammars). They require that such constraints be expressed locally as part of syntactic rules and therefore, duplicated in the grammar. Finally, we discuss a reimplementation of FUG that achieves similar levels of efficiency as Rubinoff's adaptation of MUMBLE, a deterministic language generator.",Functional Unification Grammar Revisited,"In this paper, we show that one benefit of FUG, the ability to state global constraints on choice separately from syntactic rules, is difficult in generation systems based on augmented context free grammars (e.g., Definite Clause Grammars). They require that such constraints be expressed locally as part of syntactic rules and therefore, duplicated in the grammar. Finally, we discuss a reimplementation of FUG that achieves similar levels of efficiency as Rubinoff's adaptation of MUMBLE, a deterministic language generator.",Functional Unification Grammar Revisited,"In this paper, we show that one benefit of FUG, the ability to state global constraints on choice separately from syntactic rules, is difficult in generation systems based on augmented context free grammars (e.g., Definite Clause Grammars). They require that such constraints be expressed locally as part of syntactic rules and therefore, duplicated in the grammar. Finally, we discuss a reimplementation of FUG that achieves similar levels of efficiency as Rubinoff's adaptation of MUMBLE, a deterministic language generator.","The research reported in this paper was partially supported by DARPA grant N00039-84-C-0165, by ONR grant N00014-82-K-0256 and by NSF grant IST-84-51438. We would like to thank Bill Mann for making a portion of NIGEL's grammar available to us for comparisons.","Functional Unification Grammar Revisited. In this paper, we show that one benefit of FUG, the ability to state global constraints on choice separately from syntactic rules, is difficult in generation systems based on augmented context free grammars (e.g., Definite Clause Grammars). They require that such constraints be expressed locally as part of syntactic rules and therefore, duplicated in the grammar. Finally, we discuss a reimplementation of FUG that achieves similar levels of efficiency as Rubinoff's adaptation of MUMBLE, a deterministic language generator.",1987
orav-etal-2018-estonian,https://aclanthology.org/2018.gwc-1.42.pdf,0,,,,,,,"Estonian Wordnet: Current State and Future Prospects. This paper presents Estonian Wordnet (EstWN) with its latest developments. We are focusing on the time period of 2011-2017 because during this time EstWN project was supported by the National Programme for Estonian Language Technology (NPELT 1). We describe which were the goals at the beginning of 2011 and what are the accomplishments today. This paper serves as a summarizing report about the progress of EstWN during this programme. While building EstWN we have been concentrating on the fact, that EstWN as a valuable Estonian resource would also be compatible in a common multilingual framework.",{E}stonian {W}ordnet: Current State and Future Prospects,"This paper presents Estonian Wordnet (EstWN) with its latest developments. We are focusing on the time period of 2011-2017 because during this time EstWN project was supported by the National Programme for Estonian Language Technology (NPELT 1). We describe which were the goals at the beginning of 2011 and what are the accomplishments today. This paper serves as a summarizing report about the progress of EstWN during this programme. While building EstWN we have been concentrating on the fact, that EstWN as a valuable Estonian resource would also be compatible in a common multilingual framework.",Estonian Wordnet: Current State and Future Prospects,"This paper presents Estonian Wordnet (EstWN) with its latest developments. We are focusing on the time period of 2011-2017 because during this time EstWN project was supported by the National Programme for Estonian Language Technology (NPELT 1). We describe which were the goals at the beginning of 2011 and what are the accomplishments today. This paper serves as a summarizing report about the progress of EstWN during this programme. While building EstWN we have been concentrating on the fact, that EstWN as a valuable Estonian resource would also be compatible in a common multilingual framework.",,"Estonian Wordnet: Current State and Future Prospects. This paper presents Estonian Wordnet (EstWN) with its latest developments. We are focusing on the time period of 2011-2017 because during this time EstWN project was supported by the National Programme for Estonian Language Technology (NPELT 1). We describe which were the goals at the beginning of 2011 and what are the accomplishments today. This paper serves as a summarizing report about the progress of EstWN during this programme. While building EstWN we have been concentrating on the fact, that EstWN as a valuable Estonian resource would also be compatible in a common multilingual framework.",2018
zhang-denero-2014-observational,https://aclanthology.org/P14-2132.pdf,0,,,,,,,"Observational Initialization of Type-Supervised Taggers. Recent work has sparked new interest in type-supervised part-of-speech tagging, a data setting in which no labeled sentences are available, but the set of allowed tags is known for each word type. This paper describes observational initialization, a novel technique for initializing EM when training a type-supervised HMM tagger. Our initializer allocates probability mass to unambiguous transitions in an unlabeled corpus, generating token-level observations from type-level supervision. Experimentally, observational initialization gives state-of-the-art type-supervised tagging accuracy, providing an error reduction of 56% over uniform initialization on the Penn English Treebank. * Research conducted during an internship at Google.",Observational Initialization of Type-Supervised Taggers,"Recent work has sparked new interest in type-supervised part-of-speech tagging, a data setting in which no labeled sentences are available, but the set of allowed tags is known for each word type. This paper describes observational initialization, a novel technique for initializing EM when training a type-supervised HMM tagger. Our initializer allocates probability mass to unambiguous transitions in an unlabeled corpus, generating token-level observations from type-level supervision. Experimentally, observational initialization gives state-of-the-art type-supervised tagging accuracy, providing an error reduction of 56% over uniform initialization on the Penn English Treebank. * Research conducted during an internship at Google.",Observational Initialization of Type-Supervised Taggers,"Recent work has sparked new interest in type-supervised part-of-speech tagging, a data setting in which no labeled sentences are available, but the set of allowed tags is known for each word type. This paper describes observational initialization, a novel technique for initializing EM when training a type-supervised HMM tagger. Our initializer allocates probability mass to unambiguous transitions in an unlabeled corpus, generating token-level observations from type-level supervision. Experimentally, observational initialization gives state-of-the-art type-supervised tagging accuracy, providing an error reduction of 56% over uniform initialization on the Penn English Treebank. * Research conducted during an internship at Google.",,"Observational Initialization of Type-Supervised Taggers. Recent work has sparked new interest in type-supervised part-of-speech tagging, a data setting in which no labeled sentences are available, but the set of allowed tags is known for each word type. This paper describes observational initialization, a novel technique for initializing EM when training a type-supervised HMM tagger. Our initializer allocates probability mass to unambiguous transitions in an unlabeled corpus, generating token-level observations from type-level supervision. Experimentally, observational initialization gives state-of-the-art type-supervised tagging accuracy, providing an error reduction of 56% over uniform initialization on the Penn English Treebank. * Research conducted during an internship at Google.",2014
liu-etal-2012-expected,https://aclanthology.org/C12-2071.pdf,0,,,,,,,"Expected Error Minimization with Ultraconservative Update for SMT. Minimum error rate training is a popular method for parameter tuning in statistical machine translation (SMT). However, the optimization objective function may change drastically at each optimization step, which may induce MERT instability. We propose an alternative tuning method based on an ultraconservative update, in which the combination of an expected task loss and the distance from the parameters in the previous round are minimized with a variant of gradient descent. Experiments on test datasets of both Chinese-to-English and Spanish-to-English translation show that our method can achieve improvements over MERT under the Moses system.",Expected Error Minimization with Ultraconservative Update for {SMT},"Minimum error rate training is a popular method for parameter tuning in statistical machine translation (SMT). However, the optimization objective function may change drastically at each optimization step, which may induce MERT instability. We propose an alternative tuning method based on an ultraconservative update, in which the combination of an expected task loss and the distance from the parameters in the previous round are minimized with a variant of gradient descent. Experiments on test datasets of both Chinese-to-English and Spanish-to-English translation show that our method can achieve improvements over MERT under the Moses system.",Expected Error Minimization with Ultraconservative Update for SMT,"Minimum error rate training is a popular method for parameter tuning in statistical machine translation (SMT). However, the optimization objective function may change drastically at each optimization step, which may induce MERT instability. We propose an alternative tuning method based on an ultraconservative update, in which the combination of an expected task loss and the distance from the parameters in the previous round are minimized with a variant of gradient descent. Experiments on test datasets of both Chinese-to-English and Spanish-to-English translation show that our method can achieve improvements over MERT under the Moses system.","We would like to thank Muyun Yang and Hongfei Jiang for many valuable discussions and thank three anonymous reviewers for many valuable comments and helpful suggestions. This work was supported by National Natural Science Foundation of China (61173073,61100093,61073130,61272384), the Key Project of the National High Technology Research and Development Program of China (2011AA01A207), and and the Fundamental Research Funds for Central Universites (HIT.NSRIF.2013065).","Expected Error Minimization with Ultraconservative Update for SMT. Minimum error rate training is a popular method for parameter tuning in statistical machine translation (SMT). However, the optimization objective function may change drastically at each optimization step, which may induce MERT instability. We propose an alternative tuning method based on an ultraconservative update, in which the combination of an expected task loss and the distance from the parameters in the previous round are minimized with a variant of gradient descent. Experiments on test datasets of both Chinese-to-English and Spanish-to-English translation show that our method can achieve improvements over MERT under the Moses system.",2012
guggilla-etal-2016-cnn,https://aclanthology.org/C16-1258.pdf,1,,,,disinformation_and_fake_news,,,"CNN- and LSTM-based Claim Classification in Online User Comments. When processing arguments in online user interactive discourse, it is often necessary to determine their bases of support. In this paper, we describe a supervised approach, based on deep neural networks, for classifying the claims made in online arguments. We conduct experiments using convolutional neural networks (CNNs) and long short-term memory networks (LSTMs) on two claim data sets compiled from online user comments. Using different types of distributional word embeddings, but without incorporating any rich, expensive set of features, we achieve a significant improvement over the state of the art for one data set (which categorizes arguments as factual vs. emotional), and performance comparable to the state of the art on the other data set (which categorizes propositions according to their verifiability). Our approach has the advantages of using a generalized, simple, and effective methodology that works for claim categorization on different data sets and tasks.",{CNN}- and {LSTM}-based Claim Classification in Online User Comments,"When processing arguments in online user interactive discourse, it is often necessary to determine their bases of support. In this paper, we describe a supervised approach, based on deep neural networks, for classifying the claims made in online arguments. We conduct experiments using convolutional neural networks (CNNs) and long short-term memory networks (LSTMs) on two claim data sets compiled from online user comments. Using different types of distributional word embeddings, but without incorporating any rich, expensive set of features, we achieve a significant improvement over the state of the art for one data set (which categorizes arguments as factual vs. emotional), and performance comparable to the state of the art on the other data set (which categorizes propositions according to their verifiability). Our approach has the advantages of using a generalized, simple, and effective methodology that works for claim categorization on different data sets and tasks.",CNN- and LSTM-based Claim Classification in Online User Comments,"When processing arguments in online user interactive discourse, it is often necessary to determine their bases of support. In this paper, we describe a supervised approach, based on deep neural networks, for classifying the claims made in online arguments. We conduct experiments using convolutional neural networks (CNNs) and long short-term memory networks (LSTMs) on two claim data sets compiled from online user comments. Using different types of distributional word embeddings, but without incorporating any rich, expensive set of features, we achieve a significant improvement over the state of the art for one data set (which categorizes arguments as factual vs. emotional), and performance comparable to the state of the art on the other data set (which categorizes propositions according to their verifiability). Our approach has the advantages of using a generalized, simple, and effective methodology that works for claim categorization on different data sets and tasks.","This work was funded through the research training group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES, GRK 1994/1) and through the German Research Foundation (DFG).","CNN- and LSTM-based Claim Classification in Online User Comments. 
When processing arguments in online user interactive discourse, it is often necessary to determine their bases of support. In this paper, we describe a supervised approach, based on deep neural networks, for classifying the claims made in online arguments. We conduct experiments using convolutional neural networks (CNNs) and long short-term memory networks (LSTMs) on two claim data sets compiled from online user comments. Using different types of distributional word embeddings, but without incorporating any rich, expensive set of features, we achieve a significant improvement over the state of the art for one data set (which categorizes arguments as factual vs. emotional), and performance comparable to the state of the art on the other data set (which categorizes propositions according to their verifiability). Our approach has the advantages of using a generalized, simple, and effective methodology that works for claim categorization on different data sets and tasks.",2016
meng-etal-2021-mixture,https://aclanthology.org/2021.emnlp-main.383.pdf,1,,,,health,,,"Mixture-of-Partitions: Infusing Large Biomedical Knowledge Graphs into BERT. Infusing factual knowledge into pretrained models is fundamental for many knowledgeintensive tasks. In this paper, we propose Mixture-of-Partitions (MoP), an infusion approach that can handle a very large knowledge graph (KG) by partitioning it into smaller subgraphs and infusing their specific knowledge into various BERT models using lightweight adapters. To leverage the overall factual knowledge for a target task, these sub-graph adapters are further fine-tuned along with the underlying BERT through a mixture layer. We evaluate our MoP with three biomedical BERTs (SciBERT, BioBERT, PubmedBERT) on six downstream tasks (inc. NLI, QA, Classification), and the results show that our MoP consistently enhances the underlying BERTs in task performance, and achieves new SOTA performances on five evaluated datasets. 1",Mixture-of-Partitions: Infusing Large Biomedical Knowledge Graphs into {BERT},"Infusing factual knowledge into pretrained models is fundamental for many knowledgeintensive tasks. In this paper, we propose Mixture-of-Partitions (MoP), an infusion approach that can handle a very large knowledge graph (KG) by partitioning it into smaller subgraphs and infusing their specific knowledge into various BERT models using lightweight adapters. To leverage the overall factual knowledge for a target task, these sub-graph adapters are further fine-tuned along with the underlying BERT through a mixture layer. We evaluate our MoP with three biomedical BERTs (SciBERT, BioBERT, PubmedBERT) on six downstream tasks (inc. NLI, QA, Classification), and the results show that our MoP consistently enhances the underlying BERTs in task performance, and achieves new SOTA performances on five evaluated datasets. 1",Mixture-of-Partitions: Infusing Large Biomedical Knowledge Graphs into BERT,"Infusing factual knowledge into pretrained models is fundamental for many knowledgeintensive tasks. In this paper, we propose Mixture-of-Partitions (MoP), an infusion approach that can handle a very large knowledge graph (KG) by partitioning it into smaller subgraphs and infusing their specific knowledge into various BERT models using lightweight adapters. To leverage the overall factual knowledge for a target task, these sub-graph adapters are further fine-tuned along with the underlying BERT through a mixture layer. We evaluate our MoP with three biomedical BERTs (SciBERT, BioBERT, PubmedBERT) on six downstream tasks (inc. NLI, QA, Classification), and the results show that our MoP consistently enhances the underlying BERTs in task performance, and achieves new SOTA performances on five evaluated datasets. 1",Nigel Collier and Zaiqiao Meng kindly acknowledge grant-in-aid funding from ESRC (grant number ES/T012277/1).,"Mixture-of-Partitions: Infusing Large Biomedical Knowledge Graphs into BERT. Infusing factual knowledge into pretrained models is fundamental for many knowledgeintensive tasks. In this paper, we propose Mixture-of-Partitions (MoP), an infusion approach that can handle a very large knowledge graph (KG) by partitioning it into smaller subgraphs and infusing their specific knowledge into various BERT models using lightweight adapters. To leverage the overall factual knowledge for a target task, these sub-graph adapters are further fine-tuned along with the underlying BERT through a mixture layer. 
We evaluate our MoP with three biomedical BERTs (SciBERT, BioBERT, PubmedBERT) on six downstream tasks (inc. NLI, QA, Classification), and the results show that our MoP consistently enhances the underlying BERTs in task performance, and achieves new SOTA performances on five evaluated datasets. 1",2021
du-ji-2019-empirical,https://aclanthology.org/D19-1619.pdf,0,,,,,,,"An Empirical Comparison on Imitation Learning and Reinforcement Learning for Paraphrase Generation. Generating paraphrases from given sentences involves decoding words step by step from a large vocabulary. To learn a decoder, supervised learning which maximizes the likelihood of tokens always suffers from the exposure bias. Although both reinforcement learning (RL) and imitation learning (IL) have been widely used to alleviate the bias, the lack of direct comparison leads to only a partial image on their benefits. In this work, we present an empirical study on how RL and IL can help boost the performance of generating paraphrases, with the pointer-generator as a base model 1. Experiments on the benchmark datasets show that (1) imitation learning is constantly better than reinforcement learning; and (2) the pointer-generator models with imitation learning outperform the state-of-theart methods with a large margin.",An Empirical Comparison on Imitation Learning and Reinforcement Learning for Paraphrase Generation,"Generating paraphrases from given sentences involves decoding words step by step from a large vocabulary. To learn a decoder, supervised learning which maximizes the likelihood of tokens always suffers from the exposure bias. Although both reinforcement learning (RL) and imitation learning (IL) have been widely used to alleviate the bias, the lack of direct comparison leads to only a partial image on their benefits. In this work, we present an empirical study on how RL and IL can help boost the performance of generating paraphrases, with the pointer-generator as a base model 1. Experiments on the benchmark datasets show that (1) imitation learning is constantly better than reinforcement learning; and (2) the pointer-generator models with imitation learning outperform the state-of-theart methods with a large margin.",An Empirical Comparison on Imitation Learning and Reinforcement Learning for Paraphrase Generation,"Generating paraphrases from given sentences involves decoding words step by step from a large vocabulary. To learn a decoder, supervised learning which maximizes the likelihood of tokens always suffers from the exposure bias. Although both reinforcement learning (RL) and imitation learning (IL) have been widely used to alleviate the bias, the lack of direct comparison leads to only a partial image on their benefits. In this work, we present an empirical study on how RL and IL can help boost the performance of generating paraphrases, with the pointer-generator as a base model 1. Experiments on the benchmark datasets show that (1) imitation learning is constantly better than reinforcement learning; and (2) the pointer-generator models with imitation learning outperform the state-of-theart methods with a large margin.",The authors thank three anonymous reviewers for their useful comments and the UVa NLP group for helpful discussion. This research was supported in part by a gift from Tencent AI Lab Rhino-Bird Gift Fund.,"An Empirical Comparison on Imitation Learning and Reinforcement Learning for Paraphrase Generation. Generating paraphrases from given sentences involves decoding words step by step from a large vocabulary. To learn a decoder, supervised learning which maximizes the likelihood of tokens always suffers from the exposure bias. 
Although both reinforcement learning (RL) and imitation learning (IL) have been widely used to alleviate the bias, the lack of direct comparison leads to only a partial image on their benefits. In this work, we present an empirical study on how RL and IL can help boost the performance of generating paraphrases, with the pointer-generator as a base model 1. Experiments on the benchmark datasets show that (1) imitation learning is constantly better than reinforcement learning; and (2) the pointer-generator models with imitation learning outperform the state-of-theart methods with a large margin.",2019
le-hong-etal-2009-finite,https://aclanthology.org/W09-3409.pdf,0,,,,,,,"Finite-State Description of Vietnamese Reduplication. We present for the first time a computational model for the reduplication of the Vietnamese language. Reduplication is a popular phenomenon of Vietnamese in which reduplicative words are created by the combination of multiple syllables whose phonics are similar. We first give a systematical study of Vietnamese reduplicative words, bringing into focus clear principles for the formation of a large class of bi-syllabic reduplicative words. We then make use of optimal finite-state devices, in particular minimal sequential string-to string transducers to build a computational model for very efficient recognition and production of those words. Finally, several nice applications of this computational model are discussed.",Finite-State Description of {V}ietnamese Reduplication,"We present for the first time a computational model for the reduplication of the Vietnamese language. Reduplication is a popular phenomenon of Vietnamese in which reduplicative words are created by the combination of multiple syllables whose phonics are similar. We first give a systematical study of Vietnamese reduplicative words, bringing into focus clear principles for the formation of a large class of bi-syllabic reduplicative words. We then make use of optimal finite-state devices, in particular minimal sequential string-to string transducers to build a computational model for very efficient recognition and production of those words. Finally, several nice applications of this computational model are discussed.",Finite-State Description of Vietnamese Reduplication,"We present for the first time a computational model for the reduplication of the Vietnamese language. Reduplication is a popular phenomenon of Vietnamese in which reduplicative words are created by the combination of multiple syllables whose phonics are similar. We first give a systematical study of Vietnamese reduplicative words, bringing into focus clear principles for the formation of a large class of bi-syllabic reduplicative words. We then make use of optimal finite-state devices, in particular minimal sequential string-to string transducers to build a computational model for very efficient recognition and production of those words. Finally, several nice applications of this computational model are discussed.",We gratefully acknowledge helpful comments and valuable suggestions from three anonymous reviewers for improving the paper.,"Finite-State Description of Vietnamese Reduplication. We present for the first time a computational model for the reduplication of the Vietnamese language. Reduplication is a popular phenomenon of Vietnamese in which reduplicative words are created by the combination of multiple syllables whose phonics are similar. We first give a systematical study of Vietnamese reduplicative words, bringing into focus clear principles for the formation of a large class of bi-syllabic reduplicative words. We then make use of optimal finite-state devices, in particular minimal sequential string-to string transducers to build a computational model for very efficient recognition and production of those words. Finally, several nice applications of this computational model are discussed.",2009
jacobs-etal-1991-lexico,https://aclanthology.org/H91-1066.pdf,0,,,,,,,"Lexico-Semantic Pattern Matching as a Companion to Parsing in Text Understanding. Ordinarily, one thinks of the problem of natural language understanding as one of making a single, left-to-right pass through an input, producing a progressively refined and detailed interpretation. In text interpretation, however, the constraints of strict left-to-right processing are an encumbrance. Multi-pass methods, especially by interpreting words using corpus data and associating units of text with possible interpretations, can be more accurate and faster than single-pass methods of data extraction. Quality improves because corpus-based data and global context help to control false interpretations; speed improves because processing focuses on relevant sections. The most useful forms of pre-processing for text interpretation use fairly superficial analysis that complements the style of ordinary parsing but uses much of the same knowledge base. Lexico-semantic pattern matching, with rules that combine lexlocal analysis with ordering and semantic categories, is a good method for this form of analysis. This type of pre-processing is efficient, takes advantage of corpus data, prevents many garden paths and fruitless parses, and helps the parser cope with the complexity and flexibility of real text.",Lexico-Semantic Pattern Matching as a Companion to Parsing in Text Understanding,"Ordinarily, one thinks of the problem of natural language understanding as one of making a single, left-to-right pass through an input, producing a progressively refined and detailed interpretation. In text interpretation, however, the constraints of strict left-to-right processing are an encumbrance. Multi-pass methods, especially by interpreting words using corpus data and associating units of text with possible interpretations, can be more accurate and faster than single-pass methods of data extraction. Quality improves because corpus-based data and global context help to control false interpretations; speed improves because processing focuses on relevant sections. The most useful forms of pre-processing for text interpretation use fairly superficial analysis that complements the style of ordinary parsing but uses much of the same knowledge base. Lexico-semantic pattern matching, with rules that combine lexlocal analysis with ordering and semantic categories, is a good method for this form of analysis. This type of pre-processing is efficient, takes advantage of corpus data, prevents many garden paths and fruitless parses, and helps the parser cope with the complexity and flexibility of real text.",Lexico-Semantic Pattern Matching as a Companion to Parsing in Text Understanding,"Ordinarily, one thinks of the problem of natural language understanding as one of making a single, left-to-right pass through an input, producing a progressively refined and detailed interpretation. In text interpretation, however, the constraints of strict left-to-right processing are an encumbrance. Multi-pass methods, especially by interpreting words using corpus data and associating units of text with possible interpretations, can be more accurate and faster than single-pass methods of data extraction. Quality improves because corpus-based data and global context help to control false interpretations; speed improves because processing focuses on relevant sections. 
The most useful forms of pre-processing for text interpretation use fairly superficial analysis that complements the style of ordinary parsing but uses much of the same knowledge base. Lexico-semantic pattern matching, with rules that combine lexlocal analysis with ordering and semantic categories, is a good method for this form of analysis. This type of pre-processing is efficient, takes advantage of corpus data, prevents many garden paths and fruitless parses, and helps the parser cope with the complexity and flexibility of real text.",,"Lexico-Semantic Pattern Matching as a Companion to Parsing in Text Understanding. Ordinarily, one thinks of the problem of natural language understanding as one of making a single, left-to-right pass through an input, producing a progressively refined and detailed interpretation. In text interpretation, however, the constraints of strict left-to-right processing are an encumbrance. Multi-pass methods, especially by interpreting words using corpus data and associating units of text with possible interpretations, can be more accurate and faster than single-pass methods of data extraction. Quality improves because corpus-based data and global context help to control false interpretations; speed improves because processing focuses on relevant sections. The most useful forms of pre-processing for text interpretation use fairly superficial analysis that complements the style of ordinary parsing but uses much of the same knowledge base. Lexico-semantic pattern matching, with rules that combine lexlocal analysis with ordering and semantic categories, is a good method for this form of analysis. This type of pre-processing is efficient, takes advantage of corpus data, prevents many garden paths and fruitless parses, and helps the parser cope with the complexity and flexibility of real text.",1991
sarkar-haffari-2006-tutorial,https://aclanthology.org/N06-5005.pdf,0,,,,,,,"Tutorial on Inductive Semi-supervised Learning Methods: with Applicability to Natural Language Processing. Supervised machine learning methods which learn from labelled (or annotated) data are now widely used in many different areas of Computational Linguistics and Natural Language Processing. There are widespread data annotation endeavours but they face problems: there are a large number of languages and annotation is expensive, while at the same time raw text data is plentiful. Semi-supervised learning methods aim to close this gap. The last 6-7 years have seen a surge of interest in semi-supervised methods in the machine learning and NLP communities focused on the one hand on analysing the situations in which unlabelled data can be useful, and on the other hand, providing feasible learning algorithms. This recent research has resulted in a wide variety of interesting methods which are different with respect to the assumptions they make about the learning task. In this tutorial, we survey recent semi-supervised learning methods, discuss assumptions behind various approaches, and show how some of these methods have been applied to NLP tasks.",Tutorial on Inductive Semi-supervised Learning Methods: with Applicability to Natural Language Processing,"Supervised machine learning methods which learn from labelled (or annotated) data are now widely used in many different areas of Computational Linguistics and Natural Language Processing. There are widespread data annotation endeavours but they face problems: there are a large number of languages and annotation is expensive, while at the same time raw text data is plentiful. Semi-supervised learning methods aim to close this gap. The last 6-7 years have seen a surge of interest in semi-supervised methods in the machine learning and NLP communities focused on the one hand on analysing the situations in which unlabelled data can be useful, and on the other hand, providing feasible learning algorithms. This recent research has resulted in a wide variety of interesting methods which are different with respect to the assumptions they make about the learning task. In this tutorial, we survey recent semi-supervised learning methods, discuss assumptions behind various approaches, and show how some of these methods have been applied to NLP tasks.",Tutorial on Inductive Semi-supervised Learning Methods: with Applicability to Natural Language Processing,"Supervised machine learning methods which learn from labelled (or annotated) data are now widely used in many different areas of Computational Linguistics and Natural Language Processing. There are widespread data annotation endeavours but they face problems: there are a large number of languages and annotation is expensive, while at the same time raw text data is plentiful. Semi-supervised learning methods aim to close this gap. The last 6-7 years have seen a surge of interest in semi-supervised methods in the machine learning and NLP communities focused on the one hand on analysing the situations in which unlabelled data can be useful, and on the other hand, providing feasible learning algorithms. This recent research has resulted in a wide variety of interesting methods which are different with respect to the assumptions they make about the learning task. 
In this tutorial, we survey recent semi-supervised learning methods, discuss assumptions behind various approaches, and show how some of these methods have been applied to NLP tasks.",,"Tutorial on Inductive Semi-supervised Learning Methods: with Applicability to Natural Language Processing. Supervised machine learning methods which learn from labelled (or annotated) data are now widely used in many different areas of Computational Linguistics and Natural Language Processing. There are widespread data annotation endeavours but they face problems: there are a large number of languages and annotation is expensive, while at the same time raw text data is plentiful. Semi-supervised learning methods aim to close this gap. The last 6-7 years have seen a surge of interest in semi-supervised methods in the machine learning and NLP communities focused on the one hand on analysing the situations in which unlabelled data can be useful, and on the other hand, providing feasible learning algorithms. This recent research has resulted in a wide variety of interesting methods which are different with respect to the assumptions they make about the learning task. In this tutorial, we survey recent semi-supervised learning methods, discuss assumptions behind various approaches, and show how some of these methods have been applied to NLP tasks.",2006
chang-etal-2015-ct,https://aclanthology.org/W15-3125.pdf,0,,,,,,,"CT-SPA: Text sentiment polarity prediction model using semi-automatically expanded sentiment lexicon. In this study, an automatic classification method based on the sentiment polarity of text is proposed. This method uses two sentiment dictionaries from different sources: the Chinese sentiment dictionary CSWN that integrates Chinese WordNet with SentiWordNet, and the sentiment dictionary obtained from a training corpus labeled with sentiment polarities. In this study, the sentiment polarity of text is analyzed using these two dictionaries, a mixed-rule approach, and a statistics-based prediction model. The proposed method is used to analyze a test corpus provided by the Topic-Based Chinese Message Polarity Classification task of SIGHAN-8, and the F1-measure value is tested at 0.62.",{CT}-{SPA}: Text sentiment polarity prediction model using semi-automatically expanded sentiment lexicon,"In this study, an automatic classification method based on the sentiment polarity of text is proposed. This method uses two sentiment dictionaries from different sources: the Chinese sentiment dictionary CSWN that integrates Chinese WordNet with SentiWordNet, and the sentiment dictionary obtained from a training corpus labeled with sentiment polarities. In this study, the sentiment polarity of text is analyzed using these two dictionaries, a mixed-rule approach, and a statistics-based prediction model. The proposed method is used to analyze a test corpus provided by the Topic-Based Chinese Message Polarity Classification task of SIGHAN-8, and the F1-measure value is tested at 0.62.",CT-SPA: Text sentiment polarity prediction model using semi-automatically expanded sentiment lexicon,"In this study, an automatic classification method based on the sentiment polarity of text is proposed. This method uses two sentiment dictionaries from different sources: the Chinese sentiment dictionary CSWN that integrates Chinese WordNet with SentiWordNet, and the sentiment dictionary obtained from a training corpus labeled with sentiment polarities. In this study, the sentiment polarity of text is analyzed using these two dictionaries, a mixed-rule approach, and a statistics-based prediction model. The proposed method is used to analyze a test corpus provided by the Topic-Based Chinese Message Polarity Classification task of SIGHAN-8, and the F1-measure value is tested at 0.62.",,"CT-SPA: Text sentiment polarity prediction model using semi-automatically expanded sentiment lexicon. In this study, an automatic classification method based on the sentiment polarity of text is proposed. This method uses two sentiment dictionaries from different sources: the Chinese sentiment dictionary CSWN that integrates Chinese WordNet with SentiWordNet, and the sentiment dictionary obtained from a training corpus labeled with sentiment polarities. In this study, the sentiment polarity of text is analyzed using these two dictionaries, a mixed-rule approach, and a statistics-based prediction model. The proposed method is used to analyze a test corpus provided by the Topic-Based Chinese Message Polarity Classification task of SIGHAN-8, and the F1-measure value is tested at 0.62.",2015
hammond-2021-data,https://aclanthology.org/2021.sigmorphon-1.14.pdf,0,,,,,,,"Data augmentation for low-resource grapheme-to-phoneme mapping. In this paper we explore a very simple neural approach to mapping orthography to phonetic transcription in a low-resource context. The basic idea is to start from a baseline system and focus all efforts on data augmentation. We will see that some techniques work, but others do not.",Data augmentation for low-resource grapheme-to-phoneme mapping,"In this paper we explore a very simple neural approach to mapping orthography to phonetic transcription in a low-resource context. The basic idea is to start from a baseline system and focus all efforts on data augmentation. We will see that some techniques work, but others do not.",Data augmentation for low-resource grapheme-to-phoneme mapping,"In this paper we explore a very simple neural approach to mapping orthography to phonetic transcription in a low-resource context. The basic idea is to start from a baseline system and focus all efforts on data augmentation. We will see that some techniques work, but others do not.",Thanks to Diane Ohala for useful discussion. Thanks to several anonymous reviewers for very helpful feedback. All errors are my own.,"Data augmentation for low-resource grapheme-to-phoneme mapping. In this paper we explore a very simple neural approach to mapping orthography to phonetic transcription in a low-resource context. The basic idea is to start from a baseline system and focus all efforts on data augmentation. We will see that some techniques work, but others do not.",2021
power-scott-2005-automatic,https://aclanthology.org/I05-5010.pdf,0,,,,,,,"Automatic generation of large-scale paraphrases. Research on paraphrase has mostly focussed on lexical or syntactic variation within individual sentences. Our concern is with larger-scale paraphrases, from multiple sentences or paragraphs to entire documents. In this paper we address the problem of generating paraphrases of large chunks of texts. We ground our discussion through a worked example of extending an existing NLG system to accept as input a source text, and to generate a range of fluent semantically-equivalent alternatives, varying not only at the lexical and syntactic levels, but also in document structure and layout.",Automatic generation of large-scale paraphrases,"Research on paraphrase has mostly focussed on lexical or syntactic variation within individual sentences. Our concern is with larger-scale paraphrases, from multiple sentences or paragraphs to entire documents. In this paper we address the problem of generating paraphrases of large chunks of texts. We ground our discussion through a worked example of extending an existing NLG system to accept as input a source text, and to generate a range of fluent semantically-equivalent alternatives, varying not only at the lexical and syntactic levels, but also in document structure and layout.",Automatic generation of large-scale paraphrases,"Research on paraphrase has mostly focussed on lexical or syntactic variation within individual sentences. Our concern is with larger-scale paraphrases, from multiple sentences or paragraphs to entire documents. In this paper we address the problem of generating paraphrases of large chunks of texts. We ground our discussion through a worked example of extending an existing NLG system to accept as input a source text, and to generate a range of fluent semantically-equivalent alternatives, varying not only at the lexical and syntactic levels, but also in document structure and layout.",,"Automatic generation of large-scale paraphrases. Research on paraphrase has mostly focussed on lexical or syntactic variation within individual sentences. Our concern is with larger-scale paraphrases, from multiple sentences or paragraphs to entire documents. In this paper we address the problem of generating paraphrases of large chunks of texts. We ground our discussion through a worked example of extending an existing NLG system to accept as input a source text, and to generate a range of fluent semantically-equivalent alternatives, varying not only at the lexical and syntactic levels, but also in document structure and layout.",2005
britz-etal-2017-efficient,https://aclanthology.org/D17-1040.pdf,0,,,,,,,"Efficient Attention using a Fixed-Size Memory Representation. The standard content-based attention mechanism typically used in sequence-to-sequence models is computationally expensive as it requires the comparison of large encoder and decoder states at each time step. In this work, we propose an alternative attention mechanism based on a fixed size memory representation that is more efficient. Our technique predicts a compact set of K attention contexts during encoding and lets the decoder compute an efficient lookup that does not need to consult the memory. We show that our approach performs on-par with the standard attention mechanism while yielding inference speedups of 20% for real-world translation tasks and more for tasks with longer sequences. By visualizing attention scores we demonstrate that our models learn distinct, meaningful alignments.",Efficient Attention using a Fixed-Size Memory Representation,"The standard content-based attention mechanism typically used in sequence-to-sequence models is computationally expensive as it requires the comparison of large encoder and decoder states at each time step. In this work, we propose an alternative attention mechanism based on a fixed size memory representation that is more efficient. Our technique predicts a compact set of K attention contexts during encoding and lets the decoder compute an efficient lookup that does not need to consult the memory. We show that our approach performs on-par with the standard attention mechanism while yielding inference speedups of 20% for real-world translation tasks and more for tasks with longer sequences. By visualizing attention scores we demonstrate that our models learn distinct, meaningful alignments.",Efficient Attention using a Fixed-Size Memory Representation,"The standard content-based attention mechanism typically used in sequence-to-sequence models is computationally expensive as it requires the comparison of large encoder and decoder states at each time step. In this work, we propose an alternative attention mechanism based on a fixed size memory representation that is more efficient. Our technique predicts a compact set of K attention contexts during encoding and lets the decoder compute an efficient lookup that does not need to consult the memory. We show that our approach performs on-par with the standard attention mechanism while yielding inference speedups of 20% for real-world translation tasks and more for tasks with longer sequences. By visualizing attention scores we demonstrate that our models learn distinct, meaningful alignments.",,"Efficient Attention using a Fixed-Size Memory Representation. The standard content-based attention mechanism typically used in sequence-to-sequence models is computationally expensive as it requires the comparison of large encoder and decoder states at each time step. In this work, we propose an alternative attention mechanism based on a fixed size memory representation that is more efficient. Our technique predicts a compact set of K attention contexts during encoding and lets the decoder compute an efficient lookup that does not need to consult the memory. We show that our approach performs on-par with the standard attention mechanism while yielding inference speedups of 20% for real-world translation tasks and more for tasks with longer sequences. By visualizing attention scores we demonstrate that our models learn distinct, meaningful alignments.",2017
nguyen-etal-2016-empirical,https://aclanthology.org/U16-1017.pdf,0,,,,,,,"An empirical study for Vietnamese dependency parsing. This paper presents an empirical comparison of different dependency parsers for Vietnamese, which has some unusual characteristics such as copula drop and verb serialization. Experimental results show that the neural network-based parsers perform significantly better than the traditional parsers. We report the highest parsing scores published to date for Vietnamese with the labeled attachment score (LAS) at 73.53% and the unlabeled attachment score (UAS) at 80.66%.",An empirical study for {V}ietnamese dependency parsing,"This paper presents an empirical comparison of different dependency parsers for Vietnamese, which has some unusual characteristics such as copula drop and verb serialization. Experimental results show that the neural network-based parsers perform significantly better than the traditional parsers. We report the highest parsing scores published to date for Vietnamese with the labeled attachment score (LAS) at 73.53% and the unlabeled attachment score (UAS) at 80.66%.",An empirical study for Vietnamese dependency parsing,"This paper presents an empirical comparison of different dependency parsers for Vietnamese, which has some unusual characteristics such as copula drop and verb serialization. Experimental results show that the neural network-based parsers perform significantly better than the traditional parsers. We report the highest parsing scores published to date for Vietnamese with the labeled attachment score (LAS) at 73.53% and the unlabeled attachment score (UAS) at 80.66%.",The first author is supported by an International Postgraduate Research Scholarship and a NICTA NRPA Top-Up Scholarship.,"An empirical study for Vietnamese dependency parsing. This paper presents an empirical comparison of different dependency parsers for Vietnamese, which has some unusual characteristics such as copula drop and verb serialization. Experimental results show that the neural network-based parsers perform significantly better than the traditional parsers. We report the highest parsing scores published to date for Vietnamese with the labeled attachment score (LAS) at 73.53% and the unlabeled attachment score (UAS) at 80.66%.",2016
lin-2004-rouge,https://aclanthology.org/W04-1013.pdf,0,,,,,,,"ROUGE: A Package for Automatic Evaluation of Summaries. ROUGE stands for Recall-Oriented Understudy for Gisting Evaluation. It includes measures to automatically determine the quality of a summary by comparing it to other (ideal) summaries created by humans. The measures count the number of overlapping units such as n-gram, word sequences, and word pairs between the computer-generated summary to be evaluated and the ideal summaries created by humans. This paper introduces four different ROUGE measures: ROUGE-N, ROUGE-L, ROUGE-W, and ROUGE-S included in the ROUGE summarization evaluation package and their evaluations. Three of them have been used in the Document Understanding Conference (DUC) 2004, a large-scale summarization evaluation sponsored by NIST.",{ROUGE}: A Package for Automatic Evaluation of Summaries,"ROUGE stands for Recall-Oriented Understudy for Gisting Evaluation. It includes measures to automatically determine the quality of a summary by comparing it to other (ideal) summaries created by humans. The measures count the number of overlapping units such as n-gram, word sequences, and word pairs between the computer-generated summary to be evaluated and the ideal summaries created by humans. This paper introduces four different ROUGE measures: ROUGE-N, ROUGE-L, ROUGE-W, and ROUGE-S included in the ROUGE summarization evaluation package and their evaluations. Three of them have been used in the Document Understanding Conference (DUC) 2004, a large-scale summarization evaluation sponsored by NIST.",ROUGE: A Package for Automatic Evaluation of Summaries,"ROUGE stands for Recall-Oriented Understudy for Gisting Evaluation. It includes measures to automatically determine the quality of a summary by comparing it to other (ideal) summaries created by humans. The measures count the number of overlapping units such as n-gram, word sequences, and word pairs between the computer-generated summary to be evaluated and the ideal summaries created by humans. This paper introduces four different ROUGE measures: ROUGE-N, ROUGE-L, ROUGE-W, and ROUGE-S included in the ROUGE summarization evaluation package and their evaluations. Three of them have been used in the Document Understanding Conference (DUC) 2004, a large-scale summarization evaluation sponsored by NIST.","The author would like to thank the anonymous reviewers for their constructive comments, Paul Over at NIST, U.S.A, and ROUGE users around the world for testing and providing useful feedback on earlier versions of the ROUGE evaluation package, and the DARPA TIDES project for supporting this research.","ROUGE: A Package for Automatic Evaluation of Summaries. ROUGE stands for Recall-Oriented Understudy for Gisting Evaluation. It includes measures to automatically determine the quality of a summary by comparing it to other (ideal) summaries created by humans. The measures count the number of overlapping units such as n-gram, word sequences, and word pairs between the computer-generated summary to be evaluated and the ideal summaries created by humans. This paper introduces four different ROUGE measures: ROUGE-N, ROUGE-L, ROUGE-W, and ROUGE-S included in the ROUGE summarization evaluation package and their evaluations. Three of them have been used in the Document Understanding Conference (DUC) 2004, a large-scale summarization evaluation sponsored by NIST.",2004
felice-briscoe-2015-towards,https://aclanthology.org/N15-1060.pdf,0,,,,,,,"Towards a standard evaluation method for grammatical error detection and correction. We present a novel evaluation method for grammatical error correction that addresses problems with previous approaches and scores systems in terms of improvement on the original text. Our method evaluates corrections at the token level using a globally optimal alignment between the source, a system hypothesis, and a reference. Unlike the M2 Scorer, our method provides scores for both detection and correction and is sensitive to different types of edit operations.",Towards a standard evaluation method for grammatical error detection and correction,"We present a novel evaluation method for grammatical error correction that addresses problems with previous approaches and scores systems in terms of improvement on the original text. Our method evaluates corrections at the token level using a globally optimal alignment between the source, a system hypothesis, and a reference. Unlike the M2 Scorer, our method provides scores for both detection and correction and is sensitive to different types of edit operations.",Towards a standard evaluation method for grammatical error detection and correction,"We present a novel evaluation method for grammatical error correction that addresses problems with previous approaches and scores systems in terms of improvement on the original text. Our method evaluates corrections at the token level using a globally optimal alignment between the source, a system hypothesis, and a reference. Unlike the M2 Scorer, our method provides scores for both detection and correction and is sensitive to different types of edit operations.","We would like to thank Øistein Andersen and Zheng Yuan for their constructive feedback, as well as the anonymous reviewers for their comments and suggestions. We are also grateful to Cambridge English Language Assessment for supporting this research via the ALTA Institute.","Towards a standard evaluation method for grammatical error detection and correction. We present a novel evaluation method for grammatical error correction that addresses problems with previous approaches and scores systems in terms of improvement on the original text. Our method evaluates corrections at the token level using a globally optimal alignment between the source, a system hypothesis, and a reference. Unlike the M2 Scorer, our method provides scores for both detection and correction and is sensitive to different types of edit operations.",2015
klimek-etal-2016-creating,https://aclanthology.org/L16-1143.pdf,0,,,,,,,"Creating Linked Data Morphological Language Resources with MMoOn - The Hebrew Morpheme Inventory. The development of standard models for describing general lexical resources has led to the emergence of numerous lexical datasets of various languages in the Semantic Web. However, there are no models that describe the domain of morphology in a similar manner. As a result, there are hardly any language resources of morphemic data available in RDF to date. This paper presents the creation of the Hebrew Morpheme Inventory from a manually compiled tabular dataset comprising around 52.000 entries. It is an ongoing effort of representing the lexemes, word-forms and morphological patterns together with their underlying relations based on the newly created Multilingual Morpheme Ontology (MMoOn). It will be shown how segmented Hebrew language data can be granularly described in a Linked Data format, thus, serving as an exemplary case for creating morpheme inventories of any inflectional language with MMoOn. The resulting dataset is described a) according to the structure of the underlying data format, b) with respect to the Hebrew language characteristic of building word-forms directly from roots, c) by exemplifying how inflectional information is realized and d) with regard to its enrichment with external links to sense resources.",Creating Linked Data Morphological Language Resources with {MM}o{O}n - The {H}ebrew Morpheme Inventory,"The development of standard models for describing general lexical resources has led to the emergence of numerous lexical datasets of various languages in the Semantic Web. However, there are no models that describe the domain of morphology in a similar manner. As a result, there are hardly any language resources of morphemic data available in RDF to date. This paper presents the creation of the Hebrew Morpheme Inventory from a manually compiled tabular dataset comprising around 52.000 entries. It is an ongoing effort of representing the lexemes, word-forms and morphological patterns together with their underlying relations based on the newly created Multilingual Morpheme Ontology (MMoOn). It will be shown how segmented Hebrew language data can be granularly described in a Linked Data format, thus, serving as an exemplary case for creating morpheme inventories of any inflectional language with MMoOn. The resulting dataset is described a) according to the structure of the underlying data format, b) with respect to the Hebrew language characteristic of building word-forms directly from roots, c) by exemplifying how inflectional information is realized and d) with regard to its enrichment with external links to sense resources.",Creating Linked Data Morphological Language Resources with MMoOn - The Hebrew Morpheme Inventory,"The development of standard models for describing general lexical resources has led to the emergence of numerous lexical datasets of various languages in the Semantic Web. However, there are no models that describe the domain of morphology in a similar manner. As a result, there are hardly any language resources of morphemic data available in RDF to date. This paper presents the creation of the Hebrew Morpheme Inventory from a manually compiled tabular dataset comprising around 52.000 entries. 
It is an ongoing effort of representing the lexemes, word-forms and morphological patterns together with their underlying relations based on the newly created Multilingual Morpheme Ontology (MMoOn). It will be shown how segmented Hebrew language data can be granularly described in a Linked Data format, thus, serving as an exemplary case for creating morpheme inventories of any inflectional language with MMoOn. The resulting dataset is described a) according to the structure of the underlying data format, b) with respect to the Hebrew language characteristic of building word-forms directly from roots, c) by exemplifying how inflectional information is realized and d) with regard to its enrichment with external links to sense resources.",This paper's research activities were partly supported and funded by grants from the FREME FP7 European project ,"Creating Linked Data Morphological Language Resources with MMoOn - The Hebrew Morpheme Inventory. The development of standard models for describing general lexical resources has led to the emergence of numerous lexical datasets of various languages in the Semantic Web. However, there are no models that describe the domain of morphology in a similar manner. As a result, there are hardly any language resources of morphemic data available in RDF to date. This paper presents the creation of the Hebrew Morpheme Inventory from a manually compiled tabular dataset comprising around 52.000 entries. It is an ongoing effort of representing the lexemes, word-forms and morphological patterns together with their underlying relations based on the newly created Multilingual Morpheme Ontology (MMoOn). It will be shown how segmented Hebrew language data can be granularly described in a Linked Data format, thus, serving as an exemplary case for creating morpheme inventories of any inflectional language with MMoOn. The resulting dataset is described a) according to the structure of the underlying data format, b) with respect to the Hebrew language characteristic of building word-forms directly from roots, c) by exemplifying how inflectional information is realized and d) with regard to its enrichment with external links to sense resources.",2016
lu-roth-2012-automatic,https://aclanthology.org/P12-1088.pdf,0,,,,,,,"Automatic Event Extraction with Structured Preference Modeling. This paper presents a novel sequence labeling model based on the latent-variable semi-Markov conditional random fields for jointly extracting argument roles of events from texts. The model takes in coarse mention and type information and predicts argument roles for a given event template. This paper addresses the event extraction problem in a primarily unsupervised setting, where no labeled training instances are available. Our key contribution is a novel learning framework called structured preference modeling (PM), that allows arbitrary preference to be assigned to certain structures during the learning procedure. We establish and discuss connections between this framework and other existing works. We show empirically that the structured preferences are crucial to the success of our task. Our model, trained without annotated data and with a small number of structured preferences, yields performance competitive to some baseline supervised approaches.",Automatic Event Extraction with Structured Preference Modeling,"This paper presents a novel sequence labeling model based on the latent-variable semi-Markov conditional random fields for jointly extracting argument roles of events from texts. The model takes in coarse mention and type information and predicts argument roles for a given event template. This paper addresses the event extraction problem in a primarily unsupervised setting, where no labeled training instances are available. Our key contribution is a novel learning framework called structured preference modeling (PM), that allows arbitrary preference to be assigned to certain structures during the learning procedure. We establish and discuss connections between this framework and other existing works. We show empirically that the structured preferences are crucial to the success of our task. Our model, trained without annotated data and with a small number of structured preferences, yields performance competitive to some baseline supervised approaches.",Automatic Event Extraction with Structured Preference Modeling,"This paper presents a novel sequence labeling model based on the latent-variable semi-Markov conditional random fields for jointly extracting argument roles of events from texts. The model takes in coarse mention and type information and predicts argument roles for a given event template. This paper addresses the event extraction problem in a primarily unsupervised setting, where no labeled training instances are available. Our key contribution is a novel learning framework called structured preference modeling (PM), that allows arbitrary preference to be assigned to certain structures during the learning procedure. We establish and discuss connections between this framework and other existing works. We show empirically that the structured preferences are crucial to the success of our task. Our model, trained without annotated data and with a small number of structured preferences, yields performance competitive to some baseline supervised approaches.","We would like to thank Yee Seng Chan, Mark Sammons, and Quang Xuan Do for their help with the mention identification and typing system used in this paper. We gratefully acknowledge the support of the Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0181. 
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of DARPA, AFRL, or the US government.","Automatic Event Extraction with Structured Preference Modeling. This paper presents a novel sequence labeling model based on the latent-variable semi-Markov conditional random fields for jointly extracting argument roles of events from texts. The model takes in coarse mention and type information and predicts argument roles for a given event template. This paper addresses the event extraction problem in a primarily unsupervised setting, where no labeled training instances are available. Our key contribution is a novel learning framework called structured preference modeling (PM), that allows arbitrary preference to be assigned to certain structures during the learning procedure. We establish and discuss connections between this framework and other existing works. We show empirically that the structured preferences are crucial to the success of our task. Our model, trained without annotated data and with a small number of structured preferences, yields performance competitive to some baseline supervised approaches.",2012
rama-wichmann-2018-towards,https://aclanthology.org/C18-1134.pdf,0,,,,,,,"Towards identifying the optimal datasize for lexically-based Bayesian inference of linguistic phylogenies. Bayesian linguistic phylogenies are standardly based on cognate matrices for words referring to a fix set of meanings-typically around 100-200. To this day there has not been any empirical investigation into which datasize is optimal. Here we determine, across a set of language families, the optimal number of meanings required for the best performance in Bayesian phylogenetic inference. We rank meanings by stability, infer phylogenetic trees using first the most stable meaning, then the two most stable meanings, and so on, computing the quartet distance of the resulting tree to the tree proposed by language family experts at each step of datasize increase. When a gold standard tree is not available we propose to instead compute the quartet distance between the tree based on the n-most stable meaning and the one based on the n + 1-most stable meanings, increasing n from 1 to N − 1, where N is the total number of meanings. The assumption here is that the value of n for which the quartet distance begins to stabilize is also the value at which the quality of the tree ceases to improve. We show that this assumption is borne out. The results of the two methods vary across families, and the optimal number of meanings appears to correlate with the number of languages under consideration.",Towards identifying the optimal datasize for lexically-based {B}ayesian inference of linguistic phylogenies,"Bayesian linguistic phylogenies are standardly based on cognate matrices for words referring to a fix set of meanings-typically around 100-200. To this day there has not been any empirical investigation into which datasize is optimal. Here we determine, across a set of language families, the optimal number of meanings required for the best performance in Bayesian phylogenetic inference. We rank meanings by stability, infer phylogenetic trees using first the most stable meaning, then the two most stable meanings, and so on, computing the quartet distance of the resulting tree to the tree proposed by language family experts at each step of datasize increase. When a gold standard tree is not available we propose to instead compute the quartet distance between the tree based on the n-most stable meaning and the one based on the n + 1-most stable meanings, increasing n from 1 to N − 1, where N is the total number of meanings. The assumption here is that the value of n for which the quartet distance begins to stabilize is also the value at which the quality of the tree ceases to improve. We show that this assumption is borne out. The results of the two methods vary across families, and the optimal number of meanings appears to correlate with the number of languages under consideration.",Towards identifying the optimal datasize for lexically-based Bayesian inference of linguistic phylogenies,"Bayesian linguistic phylogenies are standardly based on cognate matrices for words referring to a fix set of meanings-typically around 100-200. To this day there has not been any empirical investigation into which datasize is optimal. Here we determine, across a set of language families, the optimal number of meanings required for the best performance in Bayesian phylogenetic inference. 
We rank meanings by stability, infer phylogenetic trees using first the most stable meaning, then the two most stable meanings, and so on, computing the quartet distance of the resulting tree to the tree proposed by language family experts at each step of datasize increase. When a gold standard tree is not available we propose to instead compute the quartet distance between the tree based on the n-most stable meaning and the one based on the n + 1-most stable meanings, increasing n from 1 to N − 1, where N is the total number of meanings. The assumption here is that the value of n for which the quartet distance begins to stabilize is also the value at which the quality of the tree ceases to improve. We show that this assumption is borne out. The results of the two methods vary across families, and the optimal number of meanings appears to correlate with the number of languages under consideration.","The first author is supported by BIGMED project (a Norwegian Research Council LightHouse grant, see bigmed.no). The second author is supported by a subsidy of the Russian Government to support the Programme of Competitive Development of Kazan Federal University. The experiments were performed when both authors took part in the ERC Advanced Grant 324246 EVOLAEMP project led by Gerhard Jäger. All these sources of support are gratefully acknowledged.","Towards identifying the optimal datasize for lexically-based Bayesian inference of linguistic phylogenies. Bayesian linguistic phylogenies are standardly based on cognate matrices for words referring to a fix set of meanings-typically around 100-200. To this day there has not been any empirical investigation into which datasize is optimal. Here we determine, across a set of language families, the optimal number of meanings required for the best performance in Bayesian phylogenetic inference. We rank meanings by stability, infer phylogenetic trees using first the most stable meaning, then the two most stable meanings, and so on, computing the quartet distance of the resulting tree to the tree proposed by language family experts at each step of datasize increase. When a gold standard tree is not available we propose to instead compute the quartet distance between the tree based on the n-most stable meaning and the one based on the n + 1-most stable meanings, increasing n from 1 to N − 1, where N is the total number of meanings. The assumption here is that the value of n for which the quartet distance begins to stabilize is also the value at which the quality of the tree ceases to improve. We show that this assumption is borne out. The results of the two methods vary across families, and the optimal number of meanings appears to correlate with the number of languages under consideration.",2018
agirre-soroa-2009-personalizing,https://aclanthology.org/E09-1005.pdf,0,,,,,,,"Personalizing PageRank for Word Sense Disambiguation. In this paper we propose a new graph-based method that uses the knowledge in a LKB (based on WordNet) in order to perform unsupervised Word Sense Disambiguation. Our algorithm uses the full graph of the LKB efficiently, performing better than previous approaches in English all-words datasets. We also show that the algorithm can be easily ported to other languages with good results, with the only requirement of having a wordnet. In addition, we make an analysis of the performance of the algorithm, showing that it is efficient and that it could be tuned to be faster.",Personalizing {P}age{R}ank for Word Sense Disambiguation,"In this paper we propose a new graph-based method that uses the knowledge in a LKB (based on WordNet) in order to perform unsupervised Word Sense Disambiguation. Our algorithm uses the full graph of the LKB efficiently, performing better than previous approaches in English all-words datasets. We also show that the algorithm can be easily ported to other languages with good results, with the only requirement of having a wordnet. In addition, we make an analysis of the performance of the algorithm, showing that it is efficient and that it could be tuned to be faster.",Personalizing PageRank for Word Sense Disambiguation,"In this paper we propose a new graph-based method that uses the knowledge in a LKB (based on WordNet) in order to perform unsupervised Word Sense Disambiguation. Our algorithm uses the full graph of the LKB efficiently, performing better than previous approaches in English all-words datasets. We also show that the algorithm can be easily ported to other languages with good results, with the only requirement of having a wordnet. In addition, we make an analysis of the performance of the algorithm, showing that it is efficient and that it could be tuned to be faster.",This work has been partially funded by the EU Commission (project KYOTO ICT-2007-211423) and Spanish Research Department (project KNOW TIN2006-15049-C03-01).,"Personalizing PageRank for Word Sense Disambiguation. In this paper we propose a new graph-based method that uses the knowledge in a LKB (based on WordNet) in order to perform unsupervised Word Sense Disambiguation. Our algorithm uses the full graph of the LKB efficiently, performing better than previous approaches in English all-words datasets. We also show that the algorithm can be easily ported to other languages with good results, with the only requirement of having a wordnet. In addition, we make an analysis of the performance of the algorithm, showing that it is efficient and that it could be tuned to be faster.",2009
oseki-etal-2019-inverting,https://aclanthology.org/W19-4220.pdf,0,,,,,,,"Inverting and Modeling Morphological Inflection. Previous ""wug"" tests (Berko, 1958) on Japanese verbal inflection have demonstrated that Japanese speakers, both adults and children, cannot inflect novel present tense forms to ""correct"" past tense forms predicted by rules of existent verbs (",Inverting and Modeling Morphological Inflection,"Previous ""wug"" tests (Berko, 1958) on Japanese verbal inflection have demonstrated that Japanese speakers, both adults and children, cannot inflect novel present tense forms to ""correct"" past tense forms predicted by rules of existent verbs (",Inverting and Modeling Morphological Inflection,"Previous ""wug"" tests (Berko, 1958) on Japanese verbal inflection have demonstrated that Japanese speakers, both adults and children, cannot inflect novel present tense forms to ""correct"" past tense forms predicted by rules of existent verbs (","We would like to thank Takane Ito, Ryo Otoguro, Yoko Sugioka, and SIGMORPHON anonymous reviewers for valuable suggestions. This work was supported by JSPS KAKENHI Grant Number JP18H05589.","Inverting and Modeling Morphological Inflection. Previous ""wug"" tests (Berko, 1958) on Japanese verbal inflection have demonstrated that Japanese speakers, both adults and children, cannot inflect novel present tense forms to ""correct"" past tense forms predicted by rules of existent verbs (",2019
aksenova-deshmukh-2018-formal,https://aclanthology.org/W18-0307.pdf,0,,,,,,,"Formal Restrictions On Multiple Tiers. In this paper, we use harmony systems with multiple feature spreadings as a litmus test for the possible configurations of items involved in certain dependence. The subregular language classes, and the class of tier-based strictly local (TSL) languages in particular, have shown themselves as a good fit for different aspects of natural language. It is also known that there are some patterns that cannot be captured by a single TSL grammar. However, no proposed limitations exist on tier alphabets of several cooperating TSL grammars. While theoretically possible relations among tier alphabets of several TSL grammars are containment, disjunction and intersection, the latter one appears to be unattested. Apart from presenting the typological overview, we discuss formal reasons that might explain such distribution.",Formal Restrictions On Multiple Tiers,"In this paper, we use harmony systems with multiple feature spreadings as a litmus test for the possible configurations of items involved in certain dependence. The subregular language classes, and the class of tier-based strictly local (TSL) languages in particular, have shown themselves as a good fit for different aspects of natural language. It is also known that there are some patterns that cannot be captured by a single TSL grammar. However, no proposed limitations exist on tier alphabets of several cooperating TSL grammars. While theoretically possible relations among tier alphabets of several TSL grammars are containment, disjunction and intersection, the latter one appears to be unattested. Apart from presenting the typological overview, we discuss formal reasons that might explain such distribution.",Formal Restrictions On Multiple Tiers,"In this paper, we use harmony systems with multiple feature spreadings as a litmus test for the possible configurations of items involved in certain dependence. The subregular language classes, and the class of tier-based strictly local (TSL) languages in particular, have shown themselves as a good fit for different aspects of natural language. It is also known that there are some patterns that cannot be captured by a single TSL grammar. However, no proposed limitations exist on tier alphabets of several cooperating TSL grammars. While theoretically possible relations among tier alphabets of several TSL grammars are containment, disjunction and intersection, the latter one appears to be unattested. Apart from presenting the typological overview, we discuss formal reasons that might explain such distribution.","We thank the anonymous referees for their useful comments and suggestions. We are very grateful to our friends and colleagues at Stony Brook University, especially to Thomas Graf, Lori Repetti, Jeffrey Heinz, and Aniello De Santo for their unlimited knowledge and constant help. Also big thanks to Gary Mar, Jonathan Rawski, Sedigheh Moradi, and Yaobin Liu for valuable comments on the paper. All mistakes, of course, are our own.","Formal Restrictions On Multiple Tiers. In this paper, we use harmony systems with multiple feature spreadings as a litmus test for the possible configurations of items involved in certain dependence. The subregular language classes, and the class of tier-based strictly local (TSL) languages in particular, have shown themselves as a good fit for different aspects of natural language. 
It is also known that there are some patterns that cannot be captured by a single TSL grammar. However, no proposed limitations exist on tier alphabets of several cooperating TSL grammars. While theoretically possible relations among tier alphabets of several TSL grammars are containment, disjunction and intersection, the latter one appears to be unattested. Apart from presenting the typological overview, we discuss formal reasons that might explain such distribution.",2018
bollmann-etal-2014-cora,https://aclanthology.org/W14-0612.pdf,0,,,,,,,"CorA: A web-based annotation tool for historical and other non-standard language data. We present CorA, a web-based annotation tool for manual annotation of historical and other non-standard language data. It allows for editing the primary data and modifying token boundaries during the annotation process. Further, it supports immediate retraining of taggers on newly annotated data.",{C}or{A}: A web-based annotation tool for historical and other non-standard language data,"We present CorA, a web-based annotation tool for manual annotation of historical and other non-standard language data. It allows for editing the primary data and modifying token boundaries during the annotation process. Further, it supports immediate retraining of taggers on newly annotated data.",CorA: A web-based annotation tool for historical and other non-standard language data,"We present CorA, a web-based annotation tool for manual annotation of historical and other non-standard language data. It allows for editing the primary data and modifying token boundaries during the annotation process. Further, it supports immediate retraining of taggers on newly annotated data.",,"CorA: A web-based annotation tool for historical and other non-standard language data. We present CorA, a web-based annotation tool for manual annotation of historical and other non-standard language data. It allows for editing the primary data and modifying token boundaries during the annotation process. Further, it supports immediate retraining of taggers on newly annotated data.",2014
hsie-etal-2003-interleaving,https://aclanthology.org/O03-3002.pdf,0,,,,,,,"Interleaving Text and Punctuations for Bilingual Sub-sentential Alignment. We present a new approach to aligning bilingual English and Chinese text at sub-sentential level by interleaving alphabetic texts and punctuations matches. With sub-sentential alignment, we expect to improve the effectiveness of alignment at word, chunk and phrase levels and provide finer grained and more reusable translation memory.",Interleaving Text and Punctuations for Bilingual Sub-sentential Alignment,"We present a new approach to aligning bilingual English and Chinese text at sub-sentential level by interleaving alphabetic texts and punctuations matches. With sub-sentential alignment, we expect to improve the effectiveness of alignment at word, chunk and phrase levels and provide finer grained and more reusable translation memory.",Interleaving Text and Punctuations for Bilingual Sub-sentential Alignment,"We present a new approach to aligning bilingual English and Chinese text at sub-sentential level by interleaving alphabetic texts and punctuations matches. With sub-sentential alignment, we expect to improve the effectiveness of alignment at word, chunk and phrase levels and provide finer grained and more reusable translation memory.","We acknowledge the support for this study through grants from Ministry of Education, Taiwan (MOE EX-91-E-FA06-4-4). Thanks are also due to Jim Chang for preparing the training data and evaluating the experimental results.","Interleaving Text and Punctuations for Bilingual Sub-sentential Alignment. We present a new approach to aligning bilingual English and Chinese text at sub-sentential level by interleaving alphabetic texts and punctuations matches. With sub-sentential alignment, we expect to improve the effectiveness of alignment at word, chunk and phrase levels and provide finer grained and more reusable translation memory.",2003
phi-matsumoto-2016-integrating,https://aclanthology.org/Y16-2015.pdf,0,,,,,,,"Integrating Word Embedding Offsets into the Espresso System for Part-Whole Relation Extraction. Part-whole relation, or meronymy plays an important role in many domains. Among approaches to addressing the part-whole relation extraction task, the Espresso bootstrapping algorithm has proved to be effective by significantly improving recall while keeping high precision. In this paper, we first investigate the effect of using fine-grained subtypes and careful seed selection step on the performance of extracting part-whole relation. Our multitask learning and careful seed selection were major factors for achieving higher precision. Then, we improve the Espresso bootstrapping algorithm for part-whole relation extraction task by integrating word embedding approach into its iterations. The key idea of our approach is utilizing an additional ranker component, namely Similarity Ranker in the Instances Extraction phase of the Espresso system. This ranker component uses embedding offset information between instance pairs of part-whole relation. The experiments show that our proposed system achieved a precision of 84.9% for harvesting instances of the partwhole relation, and outperformed the original Espresso system.",Integrating Word Embedding Offsets into the Espresso System for Part-Whole Relation Extraction,"Part-whole relation, or meronymy plays an important role in many domains. Among approaches to addressing the part-whole relation extraction task, the Espresso bootstrapping algorithm has proved to be effective by significantly improving recall while keeping high precision. In this paper, we first investigate the effect of using fine-grained subtypes and careful seed selection step on the performance of extracting part-whole relation. Our multitask learning and careful seed selection were major factors for achieving higher precision. Then, we improve the Espresso bootstrapping algorithm for part-whole relation extraction task by integrating word embedding approach into its iterations. The key idea of our approach is utilizing an additional ranker component, namely Similarity Ranker in the Instances Extraction phase of the Espresso system. This ranker component uses embedding offset information between instance pairs of part-whole relation. The experiments show that our proposed system achieved a precision of 84.9% for harvesting instances of the partwhole relation, and outperformed the original Espresso system.",Integrating Word Embedding Offsets into the Espresso System for Part-Whole Relation Extraction,"Part-whole relation, or meronymy plays an important role in many domains. Among approaches to addressing the part-whole relation extraction task, the Espresso bootstrapping algorithm has proved to be effective by significantly improving recall while keeping high precision. In this paper, we first investigate the effect of using fine-grained subtypes and careful seed selection step on the performance of extracting part-whole relation. Our multitask learning and careful seed selection were major factors for achieving higher precision. Then, we improve the Espresso bootstrapping algorithm for part-whole relation extraction task by integrating word embedding approach into its iterations. The key idea of our approach is utilizing an additional ranker component, namely Similarity Ranker in the Instances Extraction phase of the Espresso system. 
This ranker component uses embedding offset information between instance pairs of part-whole relation. The experiments show that our proposed system achieved a precision of 84.9% for harvesting instances of the partwhole relation, and outperformed the original Espresso system.",,"Integrating Word Embedding Offsets into the Espresso System for Part-Whole Relation Extraction. Part-whole relation, or meronymy plays an important role in many domains. Among approaches to addressing the part-whole relation extraction task, the Espresso bootstrapping algorithm has proved to be effective by significantly improving recall while keeping high precision. In this paper, we first investigate the effect of using fine-grained subtypes and careful seed selection step on the performance of extracting part-whole relation. Our multitask learning and careful seed selection were major factors for achieving higher precision. Then, we improve the Espresso bootstrapping algorithm for part-whole relation extraction task by integrating word embedding approach into its iterations. The key idea of our approach is utilizing an additional ranker component, namely Similarity Ranker in the Instances Extraction phase of the Espresso system. This ranker component uses embedding offset information between instance pairs of part-whole relation. The experiments show that our proposed system achieved a precision of 84.9% for harvesting instances of the partwhole relation, and outperformed the original Espresso system.",2016
molinero-etal-2009-building,https://aclanthology.org/W09-4619.pdf,0,,,,,,,"Building a morphological and syntactic lexicon by merging various linguistic resources. This paper shows how large-coverage morphological and syntactic NLP lexicons can be developed by interpreting, converting to a common format and merging existing lexical resources. Applied on Spanish, this allowed us to build a morphological and syntactic lexicon, the Leffe. It relies on the Alexina framework, originally developed together with the French lexicon Lefff. We describe how the input resources-two morphological and two syntactic lexicons-were converted into Alexina lexicons and merged. A preliminary evaluation shows that merging different sources of lexical information is indeed a good approach to improve the development speed, the coverage and the precision of linguistic resources.",Building a morphological and syntactic lexicon by merging various linguistic resources,"This paper shows how large-coverage morphological and syntactic NLP lexicons can be developed by interpreting, converting to a common format and merging existing lexical resources. Applied on Spanish, this allowed us to build a morphological and syntactic lexicon, the Leffe. It relies on the Alexina framework, originally developed together with the French lexicon Lefff. We describe how the input resources-two morphological and two syntactic lexicons-were converted into Alexina lexicons and merged. A preliminary evaluation shows that merging different sources of lexical information is indeed a good approach to improve the development speed, the coverage and the precision of linguistic resources.",Building a morphological and syntactic lexicon by merging various linguistic resources,"This paper shows how large-coverage morphological and syntactic NLP lexicons can be developed by interpreting, converting to a common format and merging existing lexical resources. Applied on Spanish, this allowed us to build a morphological and syntactic lexicon, the Leffe. It relies on the Alexina framework, originally developed together with the French lexicon Lefff. We describe how the input resources-two morphological and two syntactic lexicons-were converted into Alexina lexicons and merged. A preliminary evaluation shows that merging different sources of lexical information is indeed a good approach to improve the development speed, the coverage and the precision of linguistic resources.","2006-2009). We would like also to thank group Gramática del Español from USC, and especially to Guillermo Rojo, M.ª Paula Santalla and Susana Sotelo, for granting us access to their lexicon.","Building a morphological and syntactic lexicon by merging various linguistic resources. This paper shows how large-coverage morphological and syntactic NLP lexicons can be developed by interpreting, converting to a common format and merging existing lexical resources. Applied on Spanish, this allowed us to build a morphological and syntactic lexicon, the Leffe. It relies on the Alexina framework, originally developed together with the French lexicon Lefff. We describe how the input resources-two morphological and two syntactic lexicons-were converted into Alexina lexicons and merged. A preliminary evaluation shows that merging different sources of lexical information is indeed a good approach to improve the development speed, the coverage and the precision of linguistic resources.",2009
dligach-palmer-2008-novel,https://aclanthology.org/P08-2008.pdf,0,,,,,,,Novel Semantic Features for Verb Sense Disambiguation. We propose a novel method for extracting semantic information about a verb's arguments and apply it to Verb Sense Disambiguation (VSD). We contrast this method with two popular approaches to retrieving this information and show that it improves the performance of our VSD system and outperforms the other two approaches,Novel Semantic Features for Verb Sense Disambiguation,We propose a novel method for extracting semantic information about a verb's arguments and apply it to Verb Sense Disambiguation (VSD). We contrast this method with two popular approaches to retrieving this information and show that it improves the performance of our VSD system and outperforms the other two approaches,Novel Semantic Features for Verb Sense Disambiguation,We propose a novel method for extracting semantic information about a verb's arguments and apply it to Verb Sense Disambiguation (VSD). We contrast this method with two popular approaches to retrieving this information and show that it improves the performance of our VSD system and outperforms the other two approaches,"We gratefully acknowledge the support of the National Science Foundation Grant NSF-0715078, Consistent Criteria for Word Sense Disambiguation, and the GALE program of the Defense Advanced Research Projects Agency, Contract No. HR0011-06-C-0022, a subcontract from the BBN-AGILE Team. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. We also thank our colleagues Rodney Nielsen and Philipp Wetzler for parsing English Gigaword with MaltParser.",Novel Semantic Features for Verb Sense Disambiguation. We propose a novel method for extracting semantic information about a verb's arguments and apply it to Verb Sense Disambiguation (VSD). We contrast this method with two popular approaches to retrieving this information and show that it improves the performance of our VSD system and outperforms the other two approaches,2008
jin-etal-2021-cogie,https://aclanthology.org/2021.acl-demo.11.pdf,0,,,,,,,"CogIE: An Information Extraction Toolkit for Bridging Texts and CogNet. CogNet is a knowledge base that integrates three types of knowledge: linguistic knowledge, world knowledge and commonsense knowledge. In this paper, we propose an information extraction toolkit, called CogIE, which is a bridge connecting raw texts and CogNet. CogIE has three features: versatile, knowledge-grounded and extensible. First, CogIE is a versatile toolkit with a rich set of functional modules, including named entity recognition, entity typing, entity linking, relation extraction, event extraction and frame-semantic parsing. Second, as a knowledge-grounded toolkit, CogIE can ground the extracted facts to CogNet and leverage different types of knowledge to enrich extracted results. Third, for extensibility, owing to the design of three-tier architecture, CogIE is not only a plug-and-play toolkit for developers but also an extensible programming framework for researchers. We release an open-access online system 1 to visually extract information from texts. Source code, datasets and pre-trained models are publicly available at GitHub 2 , with a short instruction video 3 .",{C}og{IE}: An Information Extraction Toolkit for Bridging Texts and {C}og{N}et,"CogNet is a knowledge base that integrates three types of knowledge: linguistic knowledge, world knowledge and commonsense knowledge. In this paper, we propose an information extraction toolkit, called CogIE, which is a bridge connecting raw texts and CogNet. CogIE has three features: versatile, knowledge-grounded and extensible. First, CogIE is a versatile toolkit with a rich set of functional modules, including named entity recognition, entity typing, entity linking, relation extraction, event extraction and frame-semantic parsing. Second, as a knowledge-grounded toolkit, CogIE can ground the extracted facts to CogNet and leverage different types of knowledge to enrich extracted results. Third, for extensibility, owing to the design of three-tier architecture, CogIE is not only a plug-and-play toolkit for developers but also an extensible programming framework for researchers. We release an open-access online system 1 to visually extract information from texts. Source code, datasets and pre-trained models are publicly available at GitHub 2 , with a short instruction video 3 .",CogIE: An Information Extraction Toolkit for Bridging Texts and CogNet,"CogNet is a knowledge base that integrates three types of knowledge: linguistic knowledge, world knowledge and commonsense knowledge. In this paper, we propose an information extraction toolkit, called CogIE, which is a bridge connecting raw texts and CogNet. CogIE has three features: versatile, knowledge-grounded and extensible. First, CogIE is a versatile toolkit with a rich set of functional modules, including named entity recognition, entity typing, entity linking, relation extraction, event extraction and frame-semantic parsing. Second, as a knowledge-grounded toolkit, CogIE can ground the extracted facts to CogNet and leverage different types of knowledge to enrich extracted results. Third, for extensibility, owing to the design of three-tier architecture, CogIE is not only a plug-and-play toolkit for developers but also an extensible programming framework for researchers. We release an open-access online system 1 to visually extract information from texts. 
Source code, datasets and pre-trained models are publicly available at GitHub 2 , with a short instruction video 3 .","This work is supported by the National Key Research and Development Program of China (No. 2020AAA0106400), the National Natural Science Foundation of China (No.61806201).","CogIE: An Information Extraction Toolkit for Bridging Texts and CogNet. CogNet is a knowledge base that integrates three types of knowledge: linguistic knowledge, world knowledge and commonsense knowledge. In this paper, we propose an information extraction toolkit, called CogIE, which is a bridge connecting raw texts and CogNet. CogIE has three features: versatile, knowledge-grounded and extensible. First, CogIE is a versatile toolkit with a rich set of functional modules, including named entity recognition, entity typing, entity linking, relation extraction, event extraction and frame-semantic parsing. Second, as a knowledge-grounded toolkit, CogIE can ground the extracted facts to CogNet and leverage different types of knowledge to enrich extracted results. Third, for extensibility, owing to the design of three-tier architecture, CogIE is not only a plug-and-play toolkit for developers but also an extensible programming framework for researchers. We release an open-access online system 1 to visually extract information from texts. Source code, datasets and pre-trained models are publicly available at GitHub 2 , with a short instruction video 3 .",2021
cao-zukerman-2012-experimental,https://aclanthology.org/U12-1008.pdf,0,,,,,,,"Experimental Evaluation of a Lexicon- and Corpus-based Ensemble for Multi-way Sentiment Analysis. We describe a probabilistic approach that combines information obtained from a lexicon with information obtained from a Naïve Bayes (NB) classifier for multi-way sentiment analysis. Our approach also employs grammatical structures to perform adjustments for negations, modifiers and sentence connectives. The performance of this method is compared with that of an NB classifier with feature selection, and MCST-a state-of-the-art system. The results of our evaluation show that the performance of our hybrid approach is at least as good as that of these systems. We also examine the influence of three factors on performance: (1) sentiment-ambiguous sentences, (2) probability of the most probable star rating, and (3) coverage of the lexicon and the NB classifier. Our results indicate that the consideration of these factors supports the identification of regions of improved reliability for sentiment analysis.",Experimental Evaluation of a Lexicon- and Corpus-based Ensemble for Multi-way Sentiment Analysis,"We describe a probabilistic approach that combines information obtained from a lexicon with information obtained from a Naïve Bayes (NB) classifier for multi-way sentiment analysis. Our approach also employs grammatical structures to perform adjustments for negations, modifiers and sentence connectives. The performance of this method is compared with that of an NB classifier with feature selection, and MCST-a state-of-the-art system. The results of our evaluation show that the performance of our hybrid approach is at least as good as that of these systems. We also examine the influence of three factors on performance: (1) sentiment-ambiguous sentences, (2) probability of the most probable star rating, and (3) coverage of the lexicon and the NB classifier. Our results indicate that the consideration of these factors supports the identification of regions of improved reliability for sentiment analysis.",Experimental Evaluation of a Lexicon- and Corpus-based Ensemble for Multi-way Sentiment Analysis,"We describe a probabilistic approach that combines information obtained from a lexicon with information obtained from a Naïve Bayes (NB) classifier for multi-way sentiment analysis. Our approach also employs grammatical structures to perform adjustments for negations, modifiers and sentence connectives. The performance of this method is compared with that of an NB classifier with feature selection, and MCST-a state-of-the-art system. The results of our evaluation show that the performance of our hybrid approach is at least as good as that of these systems. We also examine the influence of three factors on performance: (1) sentiment-ambiguous sentences, (2) probability of the most probable star rating, and (3) coverage of the lexicon and the NB classifier. Our results indicate that the consideration of these factors supports the identification of regions of improved reliability for sentiment analysis.",,"Experimental Evaluation of a Lexicon- and Corpus-based Ensemble for Multi-way Sentiment Analysis. We describe a probabilistic approach that combines information obtained from a lexicon with information obtained from a Naïve Bayes (NB) classifier for multi-way sentiment analysis. Our approach also employs grammatical structures to perform adjustments for negations, modifiers and sentence connectives. 
The performance of this method is compared with that of an NB classifier with feature selection, and MCST-a state-of-the-art system. The results of our evaluation show that the performance of our hybrid approach is at least as good as that of these systems. We also examine the influence of three factors on performance: (1) sentiment-ambiguous sentences, (2) probability of the most probable star rating, and (3) coverage of the lexicon and the NB classifier. Our results indicate that the consideration of these factors supports the identification of regions of improved reliability for sentiment analysis.",2012
yang-etal-2021-journalistic,https://aclanthology.org/2021.emnlp-main.419.pdf,0,,,,,,,"Journalistic Guidelines Aware News Image Captioning. The task of news article image captioning aims to generate descriptive and informative captions for news article images. Unlike conventional image captions that simply describe the content of the image in general terms, news image captions follow journalistic guidelines and rely heavily on named entities to describe the image content, often drawing context from the whole article they are associated with. In this work, we propose a new approach to this task, motivated by caption guidelines that journalists follow. Our approach, Journalistic Guidelines Aware News Image Captioning (JoGANIC), leverages the structure of captions to improve the generation quality and guide our representation design. Experimental results, including detailed ablation studies, on two large-scale publicly available datasets show that JoGANIC substantially outperforms state-of-the-art methods both on caption generation and named entity related metrics.",Journalistic Guidelines Aware News Image Captioning,"The task of news article image captioning aims to generate descriptive and informative captions for news article images. Unlike conventional image captions that simply describe the content of the image in general terms, news image captions follow journalistic guidelines and rely heavily on named entities to describe the image content, often drawing context from the whole article they are associated with. In this work, we propose a new approach to this task, motivated by caption guidelines that journalists follow. Our approach, Journalistic Guidelines Aware News Image Captioning (JoGANIC), leverages the structure of captions to improve the generation quality and guide our representation design. Experimental results, including detailed ablation studies, on two large-scale publicly available datasets show that JoGANIC substantially outperforms state-of-the-art methods both on caption generation and named entity related metrics.",Journalistic Guidelines Aware News Image Captioning,"The task of news article image captioning aims to generate descriptive and informative captions for news article images. Unlike conventional image captions that simply describe the content of the image in general terms, news image captions follow journalistic guidelines and rely heavily on named entities to describe the image content, often drawing context from the whole article they are associated with. In this work, we propose a new approach to this task, motivated by caption guidelines that journalists follow. Our approach, Journalistic Guidelines Aware News Image Captioning (JoGANIC), leverages the structure of captions to improve the generation quality and guide our representation design. Experimental results, including detailed ablation studies, on two large-scale publicly available datasets show that JoGANIC substantially outperforms state-of-the-art methods both on caption generation and named entity related metrics.","We thank Mahdi Abavisani, Shengli Hu, and Di Lu for the fruitful discussions during the development of the method, and all the reviewers for their detailed questions, clarification requests, and suggestions on the paper.","Journalistic Guidelines Aware News Image Captioning. The task of news article image captioning aims to generate descriptive and informative captions for news article images. 
Unlike conventional image captions that simply describe the content of the image in general terms, news image captions follow journalistic guidelines and rely heavily on named entities to describe the image content, often drawing context from the whole article they are associated with. In this work, we propose a new approach to this task, motivated by caption guidelines that journalists follow. Our approach, Journalistic Guidelines Aware News Image Captioning (JoGANIC), leverages the structure of captions to improve the generation quality and guide our representation design. Experimental results, including detailed ablation studies, on two large-scale publicly available datasets show that JoGANIC substantially outperforms state-of-the-art methods both on caption generation and named entity related metrics.",2021
ide-romary-2003-outline,https://aclanthology.org/W03-1901.pdf,0,,,,,,,"Outline of the International Standard Linguistic Annotation Framework. This paper describes the outline of a linguistic annotation framework under development by ISO TC37 SC WG1-1. This international standard provides an architecture for the creation, annotation, and manipulation of linguistic resources and processing software. The goal is to provide maximum flexibility for encoders and annotators, while at the same time enabling interchange and re-use of annotated linguistic resources. We describe here the outline of the standard for the purposes of enabling annotators to begin to explore how their schemes may map into the framework.",Outline of the International Standard Linguistic Annotation Framework,"This paper describes the outline of a linguistic annotation framework under development by ISO TC37 SC WG1-1. This international standard provides an architecture for the creation, annotation, and manipulation of linguistic resources and processing software. The goal is to provide maximum flexibility for encoders and annotators, while at the same time enabling interchange and re-use of annotated linguistic resources. We describe here the outline of the standard for the purposes of enabling annotators to begin to explore how their schemes may map into the framework.",Outline of the International Standard Linguistic Annotation Framework,"This paper describes the outline of a linguistic annotation framework under development by ISO TC37 SC WG1-1. This international standard provides an architecture for the creation, annotation, and manipulation of linguistic resources and processing software. The goal is to provide maximum flexibility for encoders and annotators, while at the same time enabling interchange and re-use of annotated linguistic resources. We describe here the outline of the standard for the purposes of enabling annotators to begin to explore how their schemes may map into the framework.",,"Outline of the International Standard Linguistic Annotation Framework. This paper describes the outline of a linguistic annotation framework under development by ISO TC37 SC WG1-1. This international standard provides an architecture for the creation, annotation, and manipulation of linguistic resources and processing software. The goal is to provide maximum flexibility for encoders and annotators, while at the same time enabling interchange and re-use of annotated linguistic resources. We describe here the outline of the standard for the purposes of enabling annotators to begin to explore how their schemes may map into the framework.",2003
levy-etal-2014-ontology,https://aclanthology.org/W14-6003.pdf,1,,,,industry_innovation_infrastructure,,,"Ontology-based Technical Text Annotation. Powerful tools could help users explore and maintain domain specific documentations, provided that documents have been semantically annotated. For that, the annotations must be sufficiently specialized and rich, relying on some explicit semantic model, usually an ontology, that represents the semantics of the target domain. In this paper, we learn to annotate biomedical scientific publications with respect to a Gene Regulation Ontology. We devise a two-step approach to annotate semantic events and relations. The first step is recast as a text segmentation and labeling problem and solved using machine translation tools and a CRF, the second as multi-class classification. We evaluate the approach on the BioNLP-GRO benchmark, achieving an average 61% F-measure on the event detection by itself and 50% F-measure on biological relation annotation. This suggests that human annotators can be supported in domain specific semantic annotation tasks. Under different experimental settings, we also conclude some interesting observations: (1) For event detection and compared to classical time-consuming sequence labeling approach, the newly proposed machine translation based method performed equally well but with much less computation resource required. (2) A highly domain specific part of the task, namely proteins and transcription factors detection, is best performed by domain aware tools, which can be used separately as an initial step of the pipeline.",Ontology-based Technical Text Annotation,"Powerful tools could help users explore and maintain domain specific documentations, provided that documents have been semantically annotated. For that, the annotations must be sufficiently specialized and rich, relying on some explicit semantic model, usually an ontology, that represents the semantics of the target domain. In this paper, we learn to annotate biomedical scientific publications with respect to a Gene Regulation Ontology. We devise a two-step approach to annotate semantic events and relations. The first step is recast as a text segmentation and labeling problem and solved using machine translation tools and a CRF, the second as multi-class classification. We evaluate the approach on the BioNLP-GRO benchmark, achieving an average 61% F-measure on the event detection by itself and 50% F-measure on biological relation annotation. This suggests that human annotators can be supported in domain specific semantic annotation tasks. Under different experimental settings, we also conclude some interesting observations: (1) For event detection and compared to classical time-consuming sequence labeling approach, the newly proposed machine translation based method performed equally well but with much less computation resource required. (2) A highly domain specific part of the task, namely proteins and transcription factors detection, is best performed by domain aware tools, which can be used separately as an initial step of the pipeline.",Ontology-based Technical Text Annotation,"Powerful tools could help users explore and maintain domain specific documentations, provided that documents have been semantically annotated. 
For that, the annotations must be sufficiently specialized and rich, relying on some explicit semantic model, usually an ontology, that represents the semantics of the target domain. In this paper, we learn to annotate biomedical scientific publications with respect to a Gene Regulation Ontology. We devise a two-step approach to annotate semantic events and relations. The first step is recast as a text segmentation and labeling problem and solved using machine translation tools and a CRF, the second as multi-class classification. We evaluate the approach on the BioNLP-GRO benchmark, achieving an average 61% F-measure on the event detection by itself and 50% F-measure on biological relation annotation. This suggests that human annotators can be supported in domain specific semantic annotation tasks. Under different experimental settings, we also conclude some interesting observations: (1) For event detection and compared to classical time-consuming sequence labeling approach, the newly proposed machine translation based method performed equally well but with much less computation resource required. (2) A highly domain specific part of the task, namely proteins and transcription factors detection, is best performed by domain aware tools, which can be used separately as an initial step of the pipeline.","We are thankful to the reviewers for their comments. This work is part of the program Investissements d'Avenir, overseen by the French National Research Agency, ANR-10-LABX-0083, (Labex EFL). We acknowledge financial support by the DFG Research Unit FOR 1513, project B1.","Ontology-based Technical Text Annotation. Powerful tools could help users explore and maintain domain specific documentations, provided that documents have been semantically annotated. For that, the annotations must be sufficiently specialized and rich, relying on some explicit semantic model, usually an ontology, that represents the semantics of the target domain. In this paper, we learn to annotate biomedical scientific publications with respect to a Gene Regulation Ontology. We devise a two-step approach to annotate semantic events and relations. The first step is recast as a text segmentation and labeling problem and solved using machine translation tools and a CRF, the second as multi-class classification. We evaluate the approach on the BioNLP-GRO benchmark, achieving an average 61% F-measure on the event detection by itself and 50% F-measure on biological relation annotation. This suggests that human annotators can be supported in domain specific semantic annotation tasks. Under different experimental settings, we also conclude some interesting observations: (1) For event detection and compared to classical time-consuming sequence labeling approach, the newly proposed machine translation based method performed equally well but with much less computation resource required. (2) A highly domain specific part of the task, namely proteins and transcription factors detection, is best performed by domain aware tools, which can be used separately as an initial step of the pipeline.",2014
huang-etal-2003-unified,https://aclanthology.org/2003.mtsummit-papers.23.pdf,0,,,,,,,"A unified statistical model for generalized translation memory system. We introduced, for Translation Memory System, a statistical framework, which unifies the different phases in a Translation Memory System by letting them constrain each other, and enables Translation Memory System a statistical qualification. Compared to traditional Translation Memory Systems, our model operates at a fine grained sub-sentential level such that it improves the translation coverage. Compared with other approaches that exploit sub-sentential benefits, it unifies the processes of source string segmentation, best example selection, and translation generation by making them constrain each other via the statistical confidence of each step. We realized this framework into a prototype system. Compared with an existing product Translation Memory System, our system exhibits obviously better performance in the ""assistant quality metric"" and gains improvements in the range of 26.3% to 55.1% in the ""translation efficiency metric"".",A unified statistical model for generalized translation memory system,"We introduced, for Translation Memory System, a statistical framework, which unifies the different phases in a Translation Memory System by letting them constrain each other, and enables Translation Memory System a statistical qualification. Compared to traditional Translation Memory Systems, our model operates at a fine grained sub-sentential level such that it improves the translation coverage. Compared with other approaches that exploit sub-sentential benefits, it unifies the processes of source string segmentation, best example selection, and translation generation by making them constrain each other via the statistical confidence of each step. We realized this framework into a prototype system. Compared with an existing product Translation Memory System, our system exhibits obviously better performance in the ""assistant quality metric"" and gains improvements in the range of 26.3% to 55.1% in the ""translation efficiency metric"".",A unified statistical model for generalized translation memory system,"We introduced, for Translation Memory System, a statistical framework, which unifies the different phases in a Translation Memory System by letting them constrain each other, and enables Translation Memory System a statistical qualification. Compared to traditional Translation Memory Systems, our model operates at a fine grained sub-sentential level such that it improves the translation coverage. Compared with other approaches that exploit sub-sentential benefits, it unifies the processes of source string segmentation, best example selection, and translation generation by making them constrain each other via the statistical confidence of each step. We realized this framework into a prototype system. Compared with an existing product Translation Memory System, our system exhibits obviously better performance in the ""assistant quality metric"" and gains improvements in the range of 26.3% to 55.1% in the ""translation efficiency metric"".",,"A unified statistical model for generalized translation memory system. We introduced, for Translation Memory System, a statistical framework, which unifies the different phases in a Translation Memory System by letting them constrain each other, and enables Translation Memory System a statistical qualification. 
Compared to traditional Translation Memory Systems, our model operates at a fine grained sub-sentential level such that it improves the translation coverage. Compared with other approaches that exploit sub-sentential benefits, it unifies the processes of source string segmentation, best example selection, and translation generation by making them constrain each other via the statistical confidence of each step. We realized this framework into a prototype system. Compared with an existing product Translation Memory System, our system exhibits obviously better performance in the ""assistant quality metric"" and gains improvements in the range of 26.3% to 55.1% in the ""translation efficiency metric"".",2003
mendes-etal-2016-modality,https://aclanthology.org/2016.lilt-14.5.pdf,0,,,,,,,"Modality annotation for Portuguese: from manual annotation to automatic labeling. We investigate modality in Portuguese and we combine a linguistic perspective with an application-oriented perspective on modality. We design an annotation scheme reflecting theoretical linguistic concepts and apply this schema to a small corpus sample to show how the scheme deals with real world language usage. We present two schemas for Portuguese, one for spoken Brazilian Portuguese and one for written European Portuguese. Furthermore, we use the annotated data not only to study the linguistic phenomena of modality, but also to train a practical text mining tool to detect modality in text automatically. The modality tagger uses a machine learning classifier trained on automatically extracted features from a syntactic parser. As we only have a small annotated sample available, the tagger was evaluated on 11 modal verbs that are frequent in our corpus and that denote more than one modal meaning. Finally, we discuss several valuable insights into the complexity of the semantic concept of modality that derive from the process of manual annotation of the corpus and from the analysis of the results of the automatic labeling: ambiguity and the semantic and syntactic properties typically associated to one modal meaning in context, and also the interaction of modality with negation and focus. The knowledge gained from the manual annotation task leads us to propose a new unified scheme for modality that applies to the two Portuguese varieties and covers both written and spoken data.",Modality annotation for {P}ortuguese: from manual annotation to automatic labeling,"We investigate modality in Portuguese and we combine a linguistic perspective with an application-oriented perspective on modality. We design an annotation scheme reflecting theoretical linguistic concepts and apply this schema to a small corpus sample to show how the scheme deals with real world language usage. We present two schemas for Portuguese, one for spoken Brazilian Portuguese and one for written European Portuguese. Furthermore, we use the annotated data not only to study the linguistic phenomena of modality, but also to train a practical text mining tool to detect modality in text automatically. The modality tagger uses a machine learning classifier trained on automatically extracted features from a syntactic parser. As we only have a small annotated sample available, the tagger was evaluated on 11 modal verbs that are frequent in our corpus and that denote more than one modal meaning. Finally, we discuss several valuable insights into the complexity of the semantic concept of modality that derive from the process of manual annotation of the corpus and from the analysis of the results of the automatic labeling: ambiguity and the semantic and syntactic properties typically associated to one modal meaning in context, and also the interaction of modality with negation and focus. The knowledge gained from the manual annotation task leads us to propose a new unified scheme for modality that applies to the two Portuguese varieties and covers both written and spoken data.",Modality annotation for Portuguese: from manual annotation to automatic labeling,"We investigate modality in Portuguese and we combine a linguistic perspective with an application-oriented perspective on modality. 
We design an annotation scheme reflecting theoretical linguistic concepts and apply this schema to a small corpus sample to show how the scheme deals with real world language usage. We present two schemas for Portuguese, one for spoken Brazilian Portuguese and one for written European Portuguese. Furthermore, we use the annotated data not only to study the linguistic phenomena of modality, but also to train a practical text mining tool to detect modality in text automatically. The modality tagger uses a machine learning classifier trained on automatically extracted features from a syntactic parser. As we only have a small annotated sample available, the tagger was evaluated on 11 modal verbs that are frequent in our corpus and that denote more than one modal meaning. Finally, we discuss several valuable insights into the complexity of the semantic concept of modality that derive from the process of manual annotation of the corpus and from the analysis of the results of the automatic labeling: ambiguity and the semantic and syntactic properties typically associated to one modal meaning in context, and also the interaction of modality with negation and focus. The knowledge gained from the manual annotation task leads us to propose a new unified scheme for modality that applies to the two Portuguese varieties and covers both written and spoken data.","This work was partially supported by national funds through FCT -Fundação para a Ciência e Tecnologia, under project Pest-OE/EEI/ LA0021/2013 and project PEst-OE/LIN/UI0214/2013, and through FAPEMIG (PEE-00293-15).","Modality annotation for Portuguese: from manual annotation to automatic labeling. We investigate modality in Portuguese and we combine a linguistic perspective with an application-oriented perspective on modality. We design an annotation scheme reflecting theoretical linguistic concepts and apply this schema to a small corpus sample to show how the scheme deals with real world language usage. We present two schemas for Portuguese, one for spoken Brazilian Portuguese and one for written European Portuguese. Furthermore, we use the annotated data not only to study the linguistic phenomena of modality, but also to train a practical text mining tool to detect modality in text automatically. The modality tagger uses a machine learning classifier trained on automatically extracted features from a syntactic parser. As we only have a small annotated sample available, the tagger was evaluated on 11 modal verbs that are frequent in our corpus and that denote more than one modal meaning. Finally, we discuss several valuable insights into the complexity of the semantic concept of modality that derive from the process of manual annotation of the corpus and from the analysis of the results of the automatic labeling: ambiguity and the semantic and syntactic properties typically associated to one modal meaning in context, and also the interaction of modality with negation and focus. The knowledge gained from the manual annotation task leads us to propose a new unified scheme for modality that applies to the two Portuguese varieties and covers both written and spoken data.",2016
jing-etal-2018-automatic,https://aclanthology.org/P18-1240.pdf,1,,,,health,,,"On the Automatic Generation of Medical Imaging Reports. Medical imaging is widely used in clinical practice for diagnosis and treatment. Report-writing can be error-prone for inexperienced physicians, and time-consuming and tedious for experienced physicians. To address these issues, we study the automatic generation of medical imaging reports. This task presents several challenges. First, a complete report contains multiple heterogeneous forms of information, including findings and tags. Second, abnormal regions in medical images are difficult to identify. Third, the reports are typically long, containing multiple sentences. To cope with these challenges, we (1) build a multi-task learning framework which jointly performs the prediction of tags and the generation of paragraphs, (2) propose a co-attention mechanism to localize regions containing abnormalities and generate narrations for them, (3) develop a hierarchical LSTM model to generate long paragraphs. We demonstrate the effectiveness of the proposed methods on two publicly available datasets.",On the Automatic Generation of Medical Imaging Reports,"Medical imaging is widely used in clinical practice for diagnosis and treatment. Report-writing can be error-prone for inexperienced physicians, and time-consuming and tedious for experienced physicians. To address these issues, we study the automatic generation of medical imaging reports. This task presents several challenges. First, a complete report contains multiple heterogeneous forms of information, including findings and tags. Second, abnormal regions in medical images are difficult to identify. Third, the reports are typically long, containing multiple sentences. To cope with these challenges, we (1) build a multi-task learning framework which jointly performs the prediction of tags and the generation of paragraphs, (2) propose a co-attention mechanism to localize regions containing abnormalities and generate narrations for them, (3) develop a hierarchical LSTM model to generate long paragraphs. We demonstrate the effectiveness of the proposed methods on two publicly available datasets.",On the Automatic Generation of Medical Imaging Reports,"Medical imaging is widely used in clinical practice for diagnosis and treatment. Report-writing can be error-prone for inexperienced physicians, and time-consuming and tedious for experienced physicians. To address these issues, we study the automatic generation of medical imaging reports. This task presents several challenges. First, a complete report contains multiple heterogeneous forms of information, including findings and tags. Second, abnormal regions in medical images are difficult to identify. Third, the reports are typically long, containing multiple sentences. To cope with these challenges, we (1) build a multi-task learning framework which jointly performs the prediction of tags and the generation of paragraphs, (2) propose a co-attention mechanism to localize regions containing abnormalities and generate narrations for them, (3) develop a hierarchical LSTM model to generate long paragraphs. We demonstrate the effectiveness of the proposed methods on two publicly available datasets.",,"On the Automatic Generation of Medical Imaging Reports. Medical imaging is widely used in clinical practice for diagnosis and treatment. Report-writing can be error-prone for inexperienced physicians, and time-consuming and tedious for experienced physicians. 
To address these issues, we study the automatic generation of medical imaging reports. This task presents several challenges. First, a complete report contains multiple heterogeneous forms of information, including findings and tags. Second, abnormal regions in medical images are difficult to identify. Third, the reports are typically long, containing multiple sentences. To cope with these challenges, we (1) build a multi-task learning framework which jointly performs the prediction of tags and the generation of paragraphs, (2) propose a co-attention mechanism to localize regions containing abnormalities and generate narrations for them, (3) develop a hierarchical LSTM model to generate long paragraphs. We demonstrate the effectiveness of the proposed methods on two publicly available datasets.",2018
mcdonald-1993-interplay,https://aclanthology.org/1993.iwpt-1.15.pdf,0,,,,,,,"The Interplay of Syntactic and Semantic Node Labels in Partial Parsing. Our natural language comprehension system, ""Sparser"", uses a semantic grammar in conjunction with a domain model that defines the categories and already-known individuals that can be expected in the sublanguages we are studying, the most significant of which to date has been articles from the Wall Street Journal's ""Who's News"" column. In this paper we describe the systematic use of default syntactic rules in this grammar: an alternative set of labels on constituents that are used to capture generalities in the semantic interpretation of constructions like the verbal auxiliaries or many adverbials. Syntactic rules form the basis of a set of schemas in a Tree Adjoining Grammar that are used as templates from which to create the primary, semantically labeled rules of the grammar as part of defining the categories in the domain models. This design permits the semantic grammar to be developed on a linguistically principled basis since all the rules must conform to syntactically sound patterns.",The Interplay of Syntactic and Semantic Node Labels in Partial Parsing,"Our natural language comprehension system, ""Sparser"", uses a semantic grammar in conjunction with a domain model that defines the categories and already-known individuals that can be expected in the sublanguages we are studying, the most significant of which to date has been articles from the Wall Street Journal's ""Who's News"" column. In this paper we describe the systematic use of default syntactic rules in this grammar: an alternative set of labels on constituents that are used to capture generalities in the semantic interpretation of constructions like the verbal auxiliaries or many adverbials. Syntactic rules form the basis of a set of schemas in a Tree Adjoining Grammar that are used as templates from which to create the primary, semantically labeled rules of the grammar as part of defining the categories in the domain models. This design permits the semantic grammar to be developed on a linguistically principled basis since all the rules must conform to syntactically sound patterns.",The Interplay of Syntactic and Semantic Node Labels in Partial Parsing,"Our natural language comprehension system, ""Sparser"", uses a semantic grammar in conjunction with a domain model that defines the categories and already-known individuals that can be expected in the sublanguages we are studying, the most significant of which to date has been articles from the Wall Street Journal's ""Who's News"" column. In this paper we describe the systematic use of default syntactic rules in this grammar: an alternative set of labels on constituents that are used to capture generalities in the semantic interpretation of constructions like the verbal auxiliaries or many adverbials. Syntactic rules form the basis of a set of schemas in a Tree Adjoining Grammar that are used as templates from which to create the primary, semantically labeled rules of the grammar as part of defining the categories in the domain models. This design permits the semantic grammar to be developed on a linguistically principled basis since all the rules must conform to syntactically sound patterns.",,"The Interplay of Syntactic and Semantic Node Labels in Partial Parsing. 
Our natural language comprehension system, ""Sparser"", uses a semantic grammar in conjunction with a domain model that defines the categories and already-known individuals that can be expected in the sublanguages we are studying, the most significant of which to date has been articles from the Wall Street Journal's ""Who's News"" column. In this paper we describe the systematic use of default syntactic rules in this grammar: an alternative set of labels on constituents that are used to capture generalities in the semantic interpretation of constructions like the verbal auxiliaries or many adverbials. Syntactic rules form the basis of a set of schemas in a Tree Adjoining Grammar that are used as templates from which to create the primary, semantically labeled rules of the grammar as part of defining the categories in the domain models. This design permits the semantic grammar to be developed on a linguistically principled basis since all the rules must conform to syntactically sound patterns.",1993
damani-2013-improving,https://aclanthology.org/W13-3503.pdf,0,,,,,,,"Improving Pointwise Mutual Information (PMI) by Incorporating Significant Co-occurrence. We design a new co-occurrence based word association measure by incorporating the concept of significant cooccurrence in the popular word association measure Pointwise Mutual Information (PMI). By extensive experiments with a large number of publicly available datasets we show that the newly introduced measure performs better than other co-occurrence based measures and despite being resource-light, compares well with the best known resource-heavy distributional similarity and knowledge based word association measures. We investigate the source of this performance improvement and find that of the two types of significant co-occurrence-corpus-level and document-level, the concept of corpus level significance combined with the use of document counts in place of word counts is responsible for all the performance gains observed. The concept of document level significance is not helpful for PMI adaptation.",Improving Pointwise Mutual Information ({PMI}) by Incorporating Significant Co-occurrence,"We design a new co-occurrence based word association measure by incorporating the concept of significant cooccurrence in the popular word association measure Pointwise Mutual Information (PMI). By extensive experiments with a large number of publicly available datasets we show that the newly introduced measure performs better than other co-occurrence based measures and despite being resource-light, compares well with the best known resource-heavy distributional similarity and knowledge based word association measures. We investigate the source of this performance improvement and find that of the two types of significant co-occurrence-corpus-level and document-level, the concept of corpus level significance combined with the use of document counts in place of word counts is responsible for all the performance gains observed. The concept of document level significance is not helpful for PMI adaptation.",Improving Pointwise Mutual Information (PMI) by Incorporating Significant Co-occurrence,"We design a new co-occurrence based word association measure by incorporating the concept of significant cooccurrence in the popular word association measure Pointwise Mutual Information (PMI). By extensive experiments with a large number of publicly available datasets we show that the newly introduced measure performs better than other co-occurrence based measures and despite being resource-light, compares well with the best known resource-heavy distributional similarity and knowledge based word association measures. We investigate the source of this performance improvement and find that of the two types of significant co-occurrence-corpus-level and document-level, the concept of corpus level significance combined with the use of document counts in place of word counts is responsible for all the performance gains observed. The concept of document level significance is not helpful for PMI adaptation.",We thank Dipak Chaudhari and Shweta Ghonghe for their help with the implementation.,"Improving Pointwise Mutual Information (PMI) by Incorporating Significant Co-occurrence. We design a new co-occurrence based word association measure by incorporating the concept of significant cooccurrence in the popular word association measure Pointwise Mutual Information (PMI). 
By extensive experiments with a large number of publicly available datasets we show that the newly introduced measure performs better than other co-occurrence based measures and despite being resource-light, compares well with the best known resource-heavy distributional similarity and knowledge based word association measures. We investigate the source of this performance improvement and find that of the two types of significant co-occurrence-corpus-level and document-level, the concept of corpus level significance combined with the use of document counts in place of word counts is responsible for all the performance gains observed. The concept of document level significance is not helpful for PMI adaptation.",2013
bhagat-etal-2005-statistical,https://aclanthology.org/W05-1520.pdf,0,,,,,,,"Statistical Shallow Semantic Parsing despite Little Training Data. Natural language understanding is an essential module in any dialogue system. To obtain satisfactory performance levels, a dialogue system needs a semantic parser/natural language understanding system (NLU) that produces accurate and detailed dialogue oriented semantic output. Recently, a number of semantic parsers trained using either the FrameNet (Baker et al., 1998) or the Prop-Bank (Kingsbury et al., 2002) have been reported. Despite their reasonable performances on general tasks, these parsers do not work so well in specific domains. Also, where these general purpose parsers tend to provide case-frame structures, that include the standard core case roles (Agent, Patient, Instrument, etc.), dialogue oriented domains tend to require additional information about addressees, modality, speech acts, etc. Where general-purpose resources such as PropBank and Framenet provide invaluable training data for general case, it tends to be a problem to obtain enough training data in a specific dialogue oriented domain.",Statistical Shallow Semantic Parsing despite Little Training Data,"Natural language understanding is an essential module in any dialogue system. To obtain satisfactory performance levels, a dialogue system needs a semantic parser/natural language understanding system (NLU) that produces accurate and detailed dialogue oriented semantic output. Recently, a number of semantic parsers trained using either the FrameNet (Baker et al., 1998) or the Prop-Bank (Kingsbury et al., 2002) have been reported. Despite their reasonable performances on general tasks, these parsers do not work so well in specific domains. Also, where these general purpose parsers tend to provide case-frame structures, that include the standard core case roles (Agent, Patient, Instrument, etc.), dialogue oriented domains tend to require additional information about addressees, modality, speech acts, etc. Where general-purpose resources such as PropBank and Framenet provide invaluable training data for general case, it tends to be a problem to obtain enough training data in a specific dialogue oriented domain.",Statistical Shallow Semantic Parsing despite Little Training Data,"Natural language understanding is an essential module in any dialogue system. To obtain satisfactory performance levels, a dialogue system needs a semantic parser/natural language understanding system (NLU) that produces accurate and detailed dialogue oriented semantic output. Recently, a number of semantic parsers trained using either the FrameNet (Baker et al., 1998) or the Prop-Bank (Kingsbury et al., 2002) have been reported. Despite their reasonable performances on general tasks, these parsers do not work so well in specific domains. Also, where these general purpose parsers tend to provide case-frame structures, that include the standard core case roles (Agent, Patient, Instrument, etc.), dialogue oriented domains tend to require additional information about addressees, modality, speech acts, etc. Where general-purpose resources such as PropBank and Framenet provide invaluable training data for general case, it tends to be a problem to obtain enough training data in a specific dialogue oriented domain.",,"Statistical Shallow Semantic Parsing despite Little Training Data. Natural language understanding is an essential module in any dialogue system. 
To obtain satisfactory performance levels, a dialogue system needs a semantic parser/natural language understanding system (NLU) that produces accurate and detailed dialogue oriented semantic output. Recently, a number of semantic parsers trained using either the FrameNet (Baker et al., 1998) or the Prop-Bank (Kingsbury et al., 2002) have been reported. Despite their reasonable performances on general tasks, these parsers do not work so well in specific domains. Also, where these general purpose parsers tend to provide case-frame structures, that include the standard core case roles (Agent, Patient, Instrument, etc.), dialogue oriented domains tend to require additional information about addressees, modality, speech acts, etc. Where general-purpose resources such as PropBank and Framenet provide invaluable training data for general case, it tends to be a problem to obtain enough training data in a specific dialogue oriented domain.",2005
ion-etal-2019-racais,https://aclanthology.org/D19-5714.pdf,0,,,,,,,"RACAI's System at PharmaCoNER 2019. This paper describes the Named Entity Recognition system of the Institute for Artificial Intelligence ""Mihai Drȃgȃnescu"" of the Romanian Academy (RACAI for short). Our best F1 score of 0.84984 was achieved using an ensemble of two systems: a gazetteer-based baseline and a RNN-based NER system, developed specially for PharmaCoNER 2019. We will describe the individual systems and the ensemble algorithm, compare the final system to the current state of the art, as well as discuss our results with respect to the quality of the training data and its annotation strategy. The resulting NER system is language independent, provided that language-dependent resources and preprocessing tools exist, such as tokenizers and POS taggers.",{RACAI}{'}s System at {P}harma{C}o{NER} 2019,"This paper describes the Named Entity Recognition system of the Institute for Artificial Intelligence ""Mihai Drȃgȃnescu"" of the Romanian Academy (RACAI for short). Our best F1 score of 0.84984 was achieved using an ensemble of two systems: a gazetteer-based baseline and a RNN-based NER system, developed specially for PharmaCoNER 2019. We will describe the individual systems and the ensemble algorithm, compare the final system to the current state of the art, as well as discuss our results with respect to the quality of the training data and its annotation strategy. The resulting NER system is language independent, provided that language-dependent resources and preprocessing tools exist, such as tokenizers and POS taggers.",RACAI's System at PharmaCoNER 2019,"This paper describes the Named Entity Recognition system of the Institute for Artificial Intelligence ""Mihai Drȃgȃnescu"" of the Romanian Academy (RACAI for short). Our best F1 score of 0.84984 was achieved using an ensemble of two systems: a gazetteer-based baseline and a RNN-based NER system, developed specially for PharmaCoNER 2019. We will describe the individual systems and the ensemble algorithm, compare the final system to the current state of the art, as well as discuss our results with respect to the quality of the training data and its annotation strategy. The resulting NER system is language independent, provided that language-dependent resources and preprocessing tools exist, such as tokenizers and POS taggers.","The reported research was supported by the EC grant MARCELL (Multilingual Resources for CEF.AT in the Legal Domain), TENtec no. 27798023.","RACAI's System at PharmaCoNER 2019. This paper describes the Named Entity Recognition system of the Institute for Artificial Intelligence ""Mihai Drȃgȃnescu"" of the Romanian Academy (RACAI for short). Our best F1 score of 0.84984 was achieved using an ensemble of two systems: a gazetteer-based baseline and a RNN-based NER system, developed specially for PharmaCoNER 2019. We will describe the individual systems and the ensemble algorithm, compare the final system to the current state of the art, as well as discuss our results with respect to the quality of the training data and its annotation strategy. The resulting NER system is language independent, provided that language-dependent resources and preprocessing tools exist, such as tokenizers and POS taggers.",2019
yang-berwick-1996-principle,https://aclanthology.org/Y96-1038.pdf,0,,,,,,,"Principle-based Parsing for Chinese. This paper describes the implementation of Mandarin Chinese in the Pappi system, a principle-based multilingual parser. We show that substantive linguistic coverage for new and linguistically diverse languages such as Chinese can be achieved, conveniently and efficiently, through parameterization and minimal modifications to a core system. In particular, we focus on two problems that have posed hurdles for Chinese linguistic theories. A novel analysis is proposed for the so-called BA-construction, along with a principled computer implementation. For scoping ambiguity, we developed a simple algorithm based on Jim Huang's Isomorphic Principle. The implementation can parse fairly sophisticated sentences in a couple of seconds, with minimal addition (less than 100 lines of Prolog code) to the core parser. This study suggests that principle-based parsing systems are useful tools for theoretical and computational analysis of linguistic problems.",Principle-based Parsing for {C}hinese,"This paper describes the implementation of Mandarin Chinese in the Pappi system, a principle-based multilingual parser. We show that substantive linguistic coverage for new and linguistically diverse languages such as Chinese can be achieved, conveniently and efficiently, through parameterization and minimal modifications to a core system. In particular, we focus on two problems that have posed hurdles for Chinese linguistic theories. A novel analysis is proposed for the so-called BA-construction, along with a principled computer implementation. For scoping ambiguity, we developed a simple algorithm based on Jim Huang's Isomorphic Principle. The implementation can parse fairly sophisticated sentences in a couple of seconds, with minimal addition (less than 100 lines of Prolog code) to the core parser. This study suggests that principle-based parsing systems are useful tools for theoretical and computational analysis of linguistic problems.",Principle-based Parsing for Chinese,"This paper describes the implementation of Mandarin Chinese in the Pappi system, a principle-based multilingual parser. We show that substantive linguistic coverage for new and linguistically diverse languages such as Chinese can be achieved, conveniently and efficiently, through parameterization and minimal modifications to a core system. In particular, we focus on two problems that have posed hurdles for Chinese linguistic theories. A novel analysis is proposed for the so-called BA-construction, along with a principled computer implementation. For scoping ambiguity, we developed a simple algorithm based on Jim Huang's Isomorphic Principle. The implementation can parse fairly sophisticated sentences in a couple of seconds, with minimal addition (less than 100 lines of Prolog code) to the core parser. This study suggests that principle-based parsing systems are useful tools for theoretical and computational analysis of linguistic problems.",,"Principle-based Parsing for Chinese. This paper describes the implementation of Mandarin Chinese in the Pappi system, a principle-based multilingual parser. We show that substantive linguistic coverage for new and linguistically diverse languages such as Chinese can be achieved, conveniently and efficiently, through parameterization and minimal modifications to a core system. In particular, we focus on two problems that have posed hurdles for Chinese linguistic theories. 
A novel analysis is proposed for the so-called BA-construction, along with a principled computer implementation. For scoping ambiguity, we developed a simple algorithm based on Jim Huang's Isomorphic Principle. The implementation can parse fairly sophisticated sentences in a couple of seconds, with minimal addition (less than 100 lines of Prolog code) to the core parser. This study suggests that principle-based parsing systems are useful tools for theoretical and computational analysis of linguistic problems.",1996
kocisky-etal-2014-learning,https://aclanthology.org/P14-2037.pdf,0,,,,,,,"Learning Bilingual Word Representations by Marginalizing Alignments. We present a probabilistic model that simultaneously learns alignments and distributed representations for bilingual data. By marginalizing over word alignments the model captures a larger semantic context than prior work relying on hard alignments. The advantage of this approach is demonstrated in a cross-lingual classification task, where we outperform the prior published state of the art.",Learning Bilingual Word Representations by Marginalizing Alignments,"We present a probabilistic model that simultaneously learns alignments and distributed representations for bilingual data. By marginalizing over word alignments the model captures a larger semantic context than prior work relying on hard alignments. The advantage of this approach is demonstrated in a cross-lingual classification task, where we outperform the prior published state of the art.",Learning Bilingual Word Representations by Marginalizing Alignments,"We present a probabilistic model that simultaneously learns alignments and distributed representations for bilingual data. By marginalizing over word alignments the model captures a larger semantic context than prior work relying on hard alignments. The advantage of this approach is demonstrated in a cross-lingual classification task, where we outperform the prior published state of the art.",This work was supported by a Xerox Foundation Award and EPSRC grant number EP/K036580/1. We acknowledge the use of the Oxford ARC.,"Learning Bilingual Word Representations by Marginalizing Alignments. We present a probabilistic model that simultaneously learns alignments and distributed representations for bilingual data. By marginalizing over word alignments the model captures a larger semantic context than prior work relying on hard alignments. The advantage of this approach is demonstrated in a cross-lingual classification task, where we outperform the prior published state of the art.",2014
fehri-etal-2011-new,https://aclanthology.org/R11-1076.pdf,0,,,,,,,"A New Representation Model for the Automatic Recognition and Translation of Arabic Named Entities with NooJ. Recognition and translation of named entities (NEs) are two current research topics with regard to the proliferation of electronic documents exchanged through the Internet. The need to assimilate these documents through NLP tools has become necessary and interesting. Moreover, the formal or semiformal modeling of these NEs may intervene in both processes of recognition and translation. Indeed, the modeling makes more reliable the constitution of linguistic resources, limits the impact of linguistic specificities and facilitates transformations from one representation to another. In this context, we propose an approach of recognition and translation based on a representation model of Arabic NEs and a set of transducers resolving morphological and syntactical phenomena.",A New Representation Model for the Automatic Recognition and Translation of {A}rabic Named Entities with {N}oo{J},"Recognition and translation of named entities (NEs) are two current research topics with regard to the proliferation of electronic documents exchanged through the Internet. The need to assimilate these documents through NLP tools has become necessary and interesting. Moreover, the formal or semiformal modeling of these NEs may intervene in both processes of recognition and translation. Indeed, the modeling makes more reliable the constitution of linguistic resources, limits the impact of linguistic specificities and facilitates transformations from one representation to another. In this context, we propose an approach of recognition and translation based on a representation model of Arabic NEs and a set of transducers resolving morphological and syntactical phenomena.",A New Representation Model for the Automatic Recognition and Translation of Arabic Named Entities with NooJ,"Recognition and translation of named entities (NEs) are two current research topics with regard to the proliferation of electronic documents exchanged through the Internet. The need to assimilate these documents through NLP tools has become necessary and interesting. Moreover, the formal or semiformal modeling of these NEs may intervene in both processes of recognition and translation. Indeed, the modeling makes more reliable the constitution of linguistic resources, limits the impact of linguistic specificities and facilitates transformations from one representation to another. In this context, we propose an approach of recognition and translation based on a representation model of Arabic NEs and a set of transducers resolving morphological and syntactical phenomena.",,"A New Representation Model for the Automatic Recognition and Translation of Arabic Named Entities with NooJ. Recognition and translation of named entities (NEs) are two current research topics with regard to the proliferation of electronic documents exchanged through the Internet. The need to assimilate these documents through NLP tools has become necessary and interesting. Moreover, the formal or semiformal modeling of these NEs may intervene in both processes of recognition and translation. Indeed, the modeling makes more reliable the constitution of linguistic resources, limits the impact of linguistic specificities and facilitates transformations from one representation to another. 
In this context, we propose an approach of recognition and translation based on a representation model of Arabic NEs and a set of transducers resolving morphological and syntactical phenomena.",2011
bates-1989-summary,https://aclanthology.org/H89-2029.pdf,0,,,,,,,"Summary of Session 7 -- Natural Language (Part 2). In this session, Ralph Weischedel of BBN reported on work advancing the state of the art in multiple underlying systems, i.e., translating an understood query or command into a program to produce an answer from one or more application systems.
This work addresses one of the key bottlenecks to making NL (and speech) systems truly applicable. Systematic translation techniques from logical form of an English input to commands to carry out the request have previously been worked out only for relational databases, but are extended here in both number of underlying systems and their type.",Summary of Session 7 {--} Natural Language (Part 2),"In this session, Ralph Weischedel of BBN reported on work advancing the state of the art in multiple underlying systems, i.e., translating an understood query or command into a program to produce an answer from one or more application systems.
This work addresses one of the key bottlenecks to making NL (and speech) systems truly applicable. Systematic translation techniques from logical form of an English input to commands to carry out the request have previously been worked out only for relational databases, but are extended here in both number of underlying systems and their type.",Summary of Session 7 -- Natural Language (Part 2),"In this session, Ralph Weischedel of BBN reported on work advancing the state of the art in multiple underlying systems, i.e., translating an understood query or command into a program to produce an answer from one or more application systems.
This work addresses one of the key bottlenecks to making NL (and speech) systems truly applicable. Systematic translation techniques from logical form of an English input to commands to carry out the request have previously been worked out only for relational databases, but are extended here in both number of underlying systems and their type.",,"Summary of Session 7 -- Natural Language (Part 2). In this session, Ralph Weischedel of BBN reported on work advancing the state of the art in multiple underlying systems, i.e., translating an understood query or command into a program to produce an answer from one or more application systems.
This work addresses one of the key bottlenecks to making NL (and speech) systems truly applicable. Systematic translation techniques from logical form of an English input to commands to carry out the request have previously been worked out only for relational databases, but are extended here in both number of underlying systems and their type.",1989
tattar-fishel-2017-bleu2vec,https://aclanthology.org/W17-4771.pdf,0,,,,,,,bleu2vec: the Painfully Familiar Metric on Continuous Vector Space Steroids. In this participation in the WMT'2017 metrics shared task we implement a fuzzy match score for n-gram precisions in the BLEU metric. To do this we learn ngram embeddings; we describe two ways of extending the WORD2VEC approach to do so. Evaluation results show that the introduced score beats the original BLEU metric on system and segment level.,bleu2vec: the Painfully Familiar Metric on Continuous Vector Space Steroids,In this participation in the WMT'2017 metrics shared task we implement a fuzzy match score for n-gram precisions in the BLEU metric. To do this we learn ngram embeddings; we describe two ways of extending the WORD2VEC approach to do so. Evaluation results show that the introduced score beats the original BLEU metric on system and segment level.,bleu2vec: the Painfully Familiar Metric on Continuous Vector Space Steroids,In this participation in the WMT'2017 metrics shared task we implement a fuzzy match score for n-gram precisions in the BLEU metric. To do this we learn ngram embeddings; we describe two ways of extending the WORD2VEC approach to do so. Evaluation results show that the introduced score beats the original BLEU metric on system and segment level.,,bleu2vec: the Painfully Familiar Metric on Continuous Vector Space Steroids. In this participation in the WMT'2017 metrics shared task we implement a fuzzy match score for n-gram precisions in the BLEU metric. To do this we learn ngram embeddings; we describe two ways of extending the WORD2VEC approach to do so. Evaluation results show that the introduced score beats the original BLEU metric on system and segment level.,2017
agrawal-an-2014-kea,https://aclanthology.org/S14-2065.pdf,0,,,,,,,"Kea: Sentiment Analysis of Phrases Within Short Texts. Sentiment Analysis has become an increasingly important research topic. This paper describes our approach to building a system for the Sentiment Analysis in Twitter task of the SemEval-2014 evaluation. The goal is to classify a phrase within a short piece of text as positive, negative or neutral. In the evaluation, classifiers trained on Twitter data are tested on data from other domains such as SMS, blogs as well as sarcasm. The results indicate that apart from sarcasm, classifiers built for sentiment analysis of phrases from tweets can be generalized to other short text domains quite effectively. However, in cross-domain experiments, SMS data is found to generalize even better than Twitter data.",{K}ea: Sentiment Analysis of Phrases Within Short Texts,"Sentiment Analysis has become an increasingly important research topic. This paper describes our approach to building a system for the Sentiment Analysis in Twitter task of the SemEval-2014 evaluation. The goal is to classify a phrase within a short piece of text as positive, negative or neutral. In the evaluation, classifiers trained on Twitter data are tested on data from other domains such as SMS, blogs as well as sarcasm. The results indicate that apart from sarcasm, classifiers built for sentiment analysis of phrases from tweets can be generalized to other short text domains quite effectively. However, in cross-domain experiments, SMS data is found to generalize even better than Twitter data.",Kea: Sentiment Analysis of Phrases Within Short Texts,"Sentiment Analysis has become an increasingly important research topic. This paper describes our approach to building a system for the Sentiment Analysis in Twitter task of the SemEval-2014 evaluation. The goal is to classify a phrase within a short piece of text as positive, negative or neutral. In the evaluation, classifiers trained on Twitter data are tested on data from other domains such as SMS, blogs as well as sarcasm. The results indicate that apart from sarcasm, classifiers built for sentiment analysis of phrases from tweets can be generalized to other short text domains quite effectively. However, in cross-domain experiments, SMS data is found to generalize even better than Twitter data.",We would like to thank the organizers of this task for their effort and the reviewers for their useful feedback. This research is funded in part by the Centre for Information Visualization and Data Driven Design (CIV/DDD) established by the Ontario Research Fund.,"Kea: Sentiment Analysis of Phrases Within Short Texts. Sentiment Analysis has become an increasingly important research topic. This paper describes our approach to building a system for the Sentiment Analysis in Twitter task of the SemEval-2014 evaluation. The goal is to classify a phrase within a short piece of text as positive, negative or neutral. In the evaluation, classifiers trained on Twitter data are tested on data from other domains such as SMS, blogs as well as sarcasm. The results indicate that apart from sarcasm, classifiers built for sentiment analysis of phrases from tweets can be generalized to other short text domains quite effectively. However, in cross-domain experiments, SMS data is found to generalize even better than Twitter data.",2014
lochbaum-1991-algorithm,https://aclanthology.org/P91-1005.pdf,1,,,,partnership,,,"An Algorithm for Plan Recognition in Collaborative Discourse. A model of plan recognition in discourse must be based on intended recognition, distinguish each agent's beliefs and intentions from the other's, and avoid assumptions about the correctness or completeness of the agents' beliefs. In this paper, we present an algorithm for plan recognition that is based on the Shared-Plan model of collaboration (Grosz and Sidner, 1990; Lochbaum et al., 1990) and that satisfies these constraints.",An Algorithm for Plan Recognition in Collaborative Discourse,"A model of plan recognition in discourse must be based on intended recognition, distinguish each agent's beliefs and intentions from the other's, and avoid assumptions about the correctness or completeness of the agents' beliefs. In this paper, we present an algorithm for plan recognition that is based on the Shared-Plan model of collaboration (Grosz and Sidner, 1990; Lochbaum et al., 1990) and that satisfies these constraints.",An Algorithm for Plan Recognition in Collaborative Discourse,"A model of plan recognition in discourse must be based on intended recognition, distinguish each agent's beliefs and intentions from the other's, and avoid assumptions about the correctness or completeness of the agents' beliefs. In this paper, we present an algorithm for plan recognition that is based on the Shared-Plan model of collaboration (Grosz and Sidner, 1990; Lochbaum et al., 1990) and that satisfies these constraints.","I would like to thank Cecile Balkanski, Barbara Grosz, Stuart Shieber, and Candy Sidner for many helpful discussions and comments on the research presented in this paper.","An Algorithm for Plan Recognition in Collaborative Discourse. A model of plan recognition in discourse must be based on intended recognition, distinguish each agent's beliefs and intentions from the other's, and avoid assumptions about the correctness or completeness of the agents' beliefs. In this paper, we present an algorithm for plan recognition that is based on the Shared-Plan model of collaboration (Grosz and Sidner, 1990; Lochbaum et al., 1990) and that satisfies these constraints.",1991
yuste-2004-corporate,https://aclanthology.org/W04-1401.pdf,0,,,,,,,"Corporate Language Resources in Multilingual Content Creation, Maintenance and Leverage. This paper focuses on how language resources (LR) for translation (hence LR4Trans) feature, and should ideally feature, within a corporate workflow of multilingual content development. The envisaged scenario will be that of a content management system that acknowledges the value of LR4Trans in the organisation as a key component and corporate knowledge resource.","Corporate Language Resources in Multilingual Content Creation, Maintenance and Leverage","This paper focuses on how language resources (LR) for translation (hence LR4Trans) feature, and should ideally feature, within a corporate workflow of multilingual content development. The envisaged scenario will be that of a content management system that acknowledges the value of LR4Trans in the organisation as a key component and corporate knowledge resource.","Corporate Language Resources in Multilingual Content Creation, Maintenance and Leverage","This paper focuses on how language resources (LR) for translation (hence LR4Trans) feature, and should ideally feature, within a corporate workflow of multilingual content development. The envisaged scenario will be that of a content management system that acknowledges the value of LR4Trans in the organisation as a key component and corporate knowledge resource.",My special thanks go to the two blind reviewers of this paper's first draft. I would also like to thank my colleagues at the Institute for Computational Linguistics of the University of Zurich for their interesting questions during a recent presentation.,"Corporate Language Resources in Multilingual Content Creation, Maintenance and Leverage. This paper focuses on how language resources (LR) for translation (hence LR4Trans) feature, and should ideally feature, within a corporate workflow of multilingual content development. The envisaged scenario will be that of a content management system that acknowledges the value of LR4Trans in the organisation as a key component and corporate knowledge resource.",2004
ma-li-2006-comparative,https://aclanthology.org/O06-3004.pdf,0,,,,,,,"A Comparative Study of Four Language Identification Systems. In this paper, we compare four typical spoken language identification (LID) systems. We introduce a novel acoustic segment modeling approach for the LID system frontend. It is assumed that the overall sound characteristics of all spoken languages can be covered by a universal collection of acoustic segment models (ASMs) without imposing strict phonetic definitions. The ASM models are used to decode spoken utterances into strings of segment units in parallel phone recognition (PPR) and universal phone recognition (UPR) frontends. We also propose a novel approach to LID system backend design, where the statistics of ASMs and their co-occurrences are used to form ASM-derived feature vectors, in a vector space modeling (VSM) approach, as opposed to the traditional language modeling (LM) approach, in order to discriminate between individual spoken languages. Four LID systems are built to evaluate the effects of two different frontends and two different backends. We evaluate the four systems based on the 1996, 2003 and 2005 NIST Language Recognition Evaluation (LRE) tasks. The results show that the proposed ASM-based VSM framework reduces the LID error rate quite significantly when compared with the widely-used parallel PRLM method. Among the four configurations, the PPR-VSM system demonstrates the best performance across all of the tasks.",A Comparative Study of Four Language Identification Systems,"In this paper, we compare four typical spoken language identification (LID) systems. We introduce a novel acoustic segment modeling approach for the LID system frontend. It is assumed that the overall sound characteristics of all spoken languages can be covered by a universal collection of acoustic segment models (ASMs) without imposing strict phonetic definitions. The ASM models are used to decode spoken utterances into strings of segment units in parallel phone recognition (PPR) and universal phone recognition (UPR) frontends. We also propose a novel approach to LID system backend design, where the statistics of ASMs and their co-occurrences are used to form ASM-derived feature vectors, in a vector space modeling (VSM) approach, as opposed to the traditional language modeling (LM) approach, in order to discriminate between individual spoken languages. Four LID systems are built to evaluate the effects of two different frontends and two different backends. We evaluate the four systems based on the 1996, 2003 and 2005 NIST Language Recognition Evaluation (LRE) tasks. The results show that the proposed ASM-based VSM framework reduces the LID error rate quite significantly when compared with the widely-used parallel PRLM method. Among the four configurations, the PPR-VSM system demonstrates the best performance across all of the tasks.",A Comparative Study of Four Language Identification Systems,"In this paper, we compare four typical spoken language identification (LID) systems. We introduce a novel acoustic segment modeling approach for the LID system frontend. It is assumed that the overall sound characteristics of all spoken languages can be covered by a universal collection of acoustic segment models (ASMs) without imposing strict phonetic definitions. The ASM models are used to decode spoken utterances into strings of segment units in parallel phone recognition (PPR) and universal phone recognition (UPR) frontends. 
We also propose a novel approach to LID system backend design, where the statistics of ASMs and their co-occurrences are used to form ASM-derived feature vectors, in a vector space modeling (VSM) approach, as opposed to the traditional language modeling (LM) approach, in order to discriminate between individual spoken languages. Four LID systems are built to evaluate the effects of two different frontends and two different backends. We evaluate the four systems based on the 1996, 2003 and 2005 NIST Language Recognition Evaluation (LRE) tasks. The results show that the proposed ASM-based VSM framework reduces the LID error rate quite significantly when compared with the widely-used parallel PRLM method. Among the four configurations, the PPR-VSM system demonstrates the best performance across all of the tasks.","We have successfully treated LID as a text categorization application with the topic category being the language identity itself. The VSM method can be extended to other spoken document classification tasks as well, for example, multilingual spoken document categorization by topic. We are also interested in exploring other language-specific features, such as syllabic and tonal properties. It is quite straightforward to incorporate specific salient features and examine their benefits. Furthermore, some high-frequency, language-specific words can also be converted into acoustic words and included in an acoustic word vocabulary, in order to increase the indexing power of these words for their corresponding languages.","A Comparative Study of Four Language Identification Systems. In this paper, we compare four typical spoken language identification (LID) systems. We introduce a novel acoustic segment modeling approach for the LID system frontend. It is assumed that the overall sound characteristics of all spoken languages can be covered by a universal collection of acoustic segment models (ASMs) without imposing strict phonetic definitions. The ASM models are used to decode spoken utterances into strings of segment units in parallel phone recognition (PPR) and universal phone recognition (UPR) frontends. We also propose a novel approach to LID system backend design, where the statistics of ASMs and their co-occurrences are used to form ASM-derived feature vectors, in a vector space modeling (VSM) approach, as opposed to the traditional language modeling (LM) approach, in order to discriminate between individual spoken languages. Four LID systems are built to evaluate the effects of two different frontends and two different backends. We evaluate the four systems based on the 1996, 2003 and 2005 NIST Language Recognition Evaluation (LRE) tasks. The results show that the proposed ASM-based VSM framework reduces the LID error rate quite significantly when compared with the widely-used parallel PRLM method. Among the four configurations, the PPR-VSM system demonstrates the best performance across all of the tasks.",2006
simov-etal-2014-system,http://www.lrec-conf.org/proceedings/lrec2014/pdf/1005_Paper.pdf,0,,,,,,,"A System for Experiments with Dependency Parsers. In this paper we present a system for experimenting with combinations of dependency parsers. The system supports initial training of different parsing models, creation of parsebank(s) with these models, and different strategies for the construction of ensemble models aimed at improving the output of the individual models by voting. The system employs two algorithms for construction of dependency trees from several parses of the same sentence and several ways for ranking of the arcs in the resulting trees. We have performed experiments with state-of-the-art dependency parsers including MaltParser (",A System for Experiments with Dependency Parsers,"In this paper we present a system for experimenting with combinations of dependency parsers. The system supports initial training of different parsing models, creation of parsebank(s) with these models, and different strategies for the construction of ensemble models aimed at improving the output of the individual models by voting. The system employs two algorithms for construction of dependency trees from several parses of the same sentence and several ways for ranking of the arcs in the resulting trees. We have performed experiments with state-of-the-art dependency parsers including MaltParser (",A System for Experiments with Dependency Parsers,"In this paper we present a system for experimenting with combinations of dependency parsers. The system supports initial training of different parsing models, creation of parsebank(s) with these models, and different strategies for the construction of ensemble models aimed at improving the output of the individual models by voting. The system employs two algorithms for construction of dependency trees from several parses of the same sentence and several ways for ranking of the arcs in the resulting trees. We have performed experiments with state-of-the-art dependency parsers including MaltParser (","This research has received partial funding from the EC's FP7 (FP7/2007-2013) under grant agreement number 610516: ""QTLeap: Quality Translation by Deep Language Engineering Approaches"".","A System for Experiments with Dependency Parsers. In this paper we present a system for experimenting with combinations of dependency parsers. The system supports initial training of different parsing models, creation of parsebank(s) with these models, and different strategies for the construction of ensemble models aimed at improving the output of the individual models by voting. The system employs two algorithms for construction of dependency trees from several parses of the same sentence and several ways for ranking of the arcs in the resulting trees. We have performed experiments with state-of-the-art dependency parsers including MaltParser (",2014
sun-etal-2018-open,https://aclanthology.org/D18-1455.pdf,0,,,,,,,"Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text. Open Domain Question Answering (QA) is evolving from complex pipelined systems to end-to-end deep neural networks. Specialized neural models have been developed for extracting answers from either text alone or Knowledge Bases (KBs) alone. In this paper we look at a more practical setting, namely QA over the combination of a KB and entity-linked text, which is appropriate when an incomplete KB is available with a large text corpus. Building on recent advances in graph representation learning we propose a novel model, GRAFT-Net, for extracting answers from a question-specific subgraph containing text and KB entities and relations. We construct a suite of benchmark tasks for this problem, varying the difficulty of questions, the amount of training data, and KB completeness. We show that GRAFT-Net is competitive with the state-of-the-art when tested using either KBs or text alone, and vastly outperforms existing methods in the combined setting.",Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text,"Open Domain Question Answering (QA) is evolving from complex pipelined systems to end-to-end deep neural networks. Specialized neural models have been developed for extracting answers from either text alone or Knowledge Bases (KBs) alone. In this paper we look at a more practical setting, namely QA over the combination of a KB and entity-linked text, which is appropriate when an incomplete KB is available with a large text corpus. Building on recent advances in graph representation learning we propose a novel model, GRAFT-Net, for extracting answers from a question-specific subgraph containing text and KB entities and relations. We construct a suite of benchmark tasks for this problem, varying the difficulty of questions, the amount of training data, and KB completeness. We show that GRAFT-Net is competitive with the state-of-the-art when tested using either KBs or text alone, and vastly outperforms existing methods in the combined setting.",Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text,"Open Domain Question Answering (QA) is evolving from complex pipelined systems to end-to-end deep neural networks. Specialized neural models have been developed for extracting answers from either text alone or Knowledge Bases (KBs) alone. In this paper we look at a more practical setting, namely QA over the combination of a KB and entity-linked text, which is appropriate when an incomplete KB is available with a large text corpus. Building on recent advances in graph representation learning we propose a novel model, GRAFT-Net, for extracting answers from a question-specific subgraph containing text and KB entities and relations. We construct a suite of benchmark tasks for this problem, varying the difficulty of questions, the amount of training data, and KB completeness. We show that GRAFT-Net is competitive with the state-of-the-art when tested using either KBs or text alone, and vastly outperforms existing methods in the combined setting.","Bhuwan Dhingra is supported by NSF under grants CCF-1414030 and IIS-1250956 and by grants from Google. Ruslan Salakhutdinov is supported in part by ONR grant N000141812861, Apple, and Nvidia NVAIL Award.","Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text. 
Open Domain Question Answering (QA) is evolving from complex pipelined systems to end-to-end deep neural networks. Specialized neural models have been developed for extracting answers from either text alone or Knowledge Bases (KBs) alone. In this paper we look at a more practical setting, namely QA over the combination of a KB and entity-linked text, which is appropriate when an incomplete KB is available with a large text corpus. Building on recent advances in graph representation learning we propose a novel model, GRAFT-Net, for extracting answers from a question-specific subgraph containing text and KB entities and relations. We construct a suite of benchmark tasks for this problem, varying the difficulty of questions, the amount of training data, and KB completeness. We show that GRAFT-Net is competitive with the state-of-the-art when tested using either KBs or text alone, and vastly outperforms existing methods in the combined setting.",2018
fonseca-etal-2016-lexfom,https://aclanthology.org/W16-5320.pdf,0,,,,,,,"Lexfom: a lexical functions ontology model. A lexical function represents a type of relation that exists between lexical units (words or expressions) in any language. For example, the antonymy is a type of relation that is represented by the lexical function Anti: Anti(big) = small. Those relations include both paradigmatic relations, i.e. vertical relations, such as synonymy, antonymy and meronymy and syntagmatic relations, i.e. horizontal relations, such as objective qualification (legitimate demand), subjective qualification (fruitful analysis), positive evaluation (good review) and support verbs (pay a visit, subject to an interrogation). In this paper, we present the Lexical Functions Ontology Model (lexfom) to represent lexical functions and the relation among lexical units. Lexfom is divided in four modules: lexical function representation (lfrep), lexical function family (lffam), lexical function semantic perspective (lfsem) and lexical function relations (lfrel). Moreover, we show how it combines to Lexical Model for Ontologies (lemon), for the transformation of lexical networks into the semantic web formats. So far, we have implemented 100 simple and 500 complex lexical functions, and encoded about 8,000 syntagmatic and 46,000 paradigmatic relations, for the French language.",{L}exfom: a lexical functions ontology model,"A lexical function represents a type of relation that exists between lexical units (words or expressions) in any language. For example, the antonymy is a type of relation that is represented by the lexical function Anti: Anti(big) = small. Those relations include both paradigmatic relations, i.e. vertical relations, such as synonymy, antonymy and meronymy and syntagmatic relations, i.e. horizontal relations, such as objective qualification (legitimate demand), subjective qualification (fruitful analysis), positive evaluation (good review) and support verbs (pay a visit, subject to an interrogation). In this paper, we present the Lexical Functions Ontology Model (lexfom) to represent lexical functions and the relation among lexical units. Lexfom is divided in four modules: lexical function representation (lfrep), lexical function family (lffam), lexical function semantic perspective (lfsem) and lexical function relations (lfrel). Moreover, we show how it combines to Lexical Model for Ontologies (lemon), for the transformation of lexical networks into the semantic web formats. So far, we have implemented 100 simple and 500 complex lexical functions, and encoded about 8,000 syntagmatic and 46,000 paradigmatic relations, for the French language.",Lexfom: a lexical functions ontology model,"A lexical function represents a type of relation that exists between lexical units (words or expressions) in any language. For example, the antonymy is a type of relation that is represented by the lexical function Anti: Anti(big) = small. Those relations include both paradigmatic relations, i.e. vertical relations, such as synonymy, antonymy and meronymy and syntagmatic relations, i.e. horizontal relations, such as objective qualification (legitimate demand), subjective qualification (fruitful analysis), positive evaluation (good review) and support verbs (pay a visit, subject to an interrogation). In this paper, we present the Lexical Functions Ontology Model (lexfom) to represent lexical functions and the relation among lexical units. 
Lexfom is divided in four modules: lexical function representation (lfrep), lexical function family (lffam), lexical function semantic perspective (lfsem) and lexical function relations (lfrel). Moreover, we show how it combines to Lexical Model for Ontologies (lemon), for the transformation of lexical networks into the semantic web formats. So far, we have implemented 100 simple and 500 complex lexical functions, and encoded about 8,000 syntagmatic and 46,000 paradigmatic relations, for the French language.",,"Lexfom: a lexical functions ontology model. A lexical function represents a type of relation that exists between lexical units (words or expressions) in any language. For example, the antonymy is a type of relation that is represented by the lexical function Anti: Anti(big) = small. Those relations include both paradigmatic relations, i.e. vertical relations, such as synonymy, antonymy and meronymy and syntagmatic relations, i.e. horizontal relations, such as objective qualification (legitimate demand), subjective qualification (fruitful analysis), positive evaluation (good review) and support verbs (pay a visit, subject to an interrogation). In this paper, we present the Lexical Functions Ontology Model (lexfom) to represent lexical functions and the relation among lexical units. Lexfom is divided in four modules: lexical function representation (lfrep), lexical function family (lffam), lexical function semantic perspective (lfsem) and lexical function relations (lfrel). Moreover, we show how it combines to Lexical Model for Ontologies (lemon), for the transformation of lexical networks into the semantic web formats. So far, we have implemented 100 simple and 500 complex lexical functions, and encoded about 8,000 syntagmatic and 46,000 paradigmatic relations, for the French language.",2016
bjorne-salakoski-2011-generalizing,https://aclanthology.org/W11-1828.pdf,1,,,,health,,,"Generalizing Biomedical Event Extraction. We present a system for extracting biomedical events (detailed descriptions of biomolecular interactions) from research articles. This system was developed for the BioNLP'11 Shared Task and extends our BioNLP'09 Shared Task winning Turku Event Extraction System. It uses support vector machines to first detect event-defining words, followed by detection of their relationships. The theme of the BioNLP'11 Shared Task is generalization, extending event extraction to varied biomedical domains. Our current system successfully predicts events for every domain case introduced in the BioNLP'11 Shared Task, being the only system to participate in all eight tasks and all of their subtasks, with best performance in four tasks.",Generalizing Biomedical Event Extraction,"We present a system for extracting biomedical events (detailed descriptions of biomolecular interactions) from research articles. This system was developed for the BioNLP'11 Shared Task and extends our BioNLP'09 Shared Task winning Turku Event Extraction System. It uses support vector machines to first detect event-defining words, followed by detection of their relationships. The theme of the BioNLP'11 Shared Task is generalization, extending event extraction to varied biomedical domains. Our current system successfully predicts events for every domain case introduced in the BioNLP'11 Shared Task, being the only system to participate in all eight tasks and all of their subtasks, with best performance in four tasks.",Generalizing Biomedical Event Extraction,"We present a system for extracting biomedical events (detailed descriptions of biomolecular interactions) from research articles. This system was developed for the BioNLP'11 Shared Task and extends our BioNLP'09 Shared Task winning Turku Event Extraction System. It uses support vector machines to first detect event-defining words, followed by detection of their relationships. The theme of the BioNLP'11 Shared Task is generalization, extending event extraction to varied biomedical domains. Our current system successfully predicts events for every domain case introduced in the BioNLP'11 Shared Task, being the only system to participate in all eight tasks and all of their subtasks, with best performance in four tasks.","We thank the Academy of Finland for funding, CSC -IT Center for Science Ltd for computational resources and Filip Ginter and Sofie Van Landeghem for help with the manuscript.","Generalizing Biomedical Event Extraction. We present a system for extracting biomedical events (detailed descriptions of biomolecular interactions) from research articles. This system was developed for the BioNLP'11 Shared Task and extends our BioNLP'09 Shared Task winning Turku Event Extraction System. It uses support vector machines to first detect event-defining words, followed by detection of their relationships. The theme of the BioNLP'11 Shared Task is generalization, extending event extraction to varied biomedical domains. Our current system successfully predicts events for every domain case introduced in the BioNLP'11 Shared Task, being the only system to participate in all eight tasks and all of their subtasks, with best performance in four tasks.",2011
lee-etal-2020-massively,https://aclanthology.org/2020.lrec-1.521.pdf,0,,,,,,,"Massively Multilingual Pronunciation Modeling with WikiPron. We introduce WikiPron, an open-source command-line tool for extracting pronunciation data from Wiktionary, a collaborative multilingual online dictionary. We first describe the design and use of WikiPron. We then discuss the challenges faced scaling this tool to create an automatically-generated database of 1.7 million pronunciations from 165 languages. Finally, we validate the pronunciation database by using it to train and evaluating a collection of generic grapheme-to-phoneme models. The software, pronunciation data, and models are all made available under permissive open-source licenses.",Massively Multilingual Pronunciation Modeling with {W}iki{P}ron,"We introduce WikiPron, an open-source command-line tool for extracting pronunciation data from Wiktionary, a collaborative multilingual online dictionary. We first describe the design and use of WikiPron. We then discuss the challenges faced scaling this tool to create an automatically-generated database of 1.7 million pronunciations from 165 languages. Finally, we validate the pronunciation database by using it to train and evaluating a collection of generic grapheme-to-phoneme models. The software, pronunciation data, and models are all made available under permissive open-source licenses.",Massively Multilingual Pronunciation Modeling with WikiPron,"We introduce WikiPron, an open-source command-line tool for extracting pronunciation data from Wiktionary, a collaborative multilingual online dictionary. We first describe the design and use of WikiPron. We then discuss the challenges faced scaling this tool to create an automatically-generated database of 1.7 million pronunciations from 165 languages. Finally, we validate the pronunciation database by using it to train and evaluating a collection of generic grapheme-to-phoneme models. The software, pronunciation data, and models are all made available under permissive open-source licenses.",We thank the countless Wiktionary contributors and editors without whom this work would have been impossible.,"Massively Multilingual Pronunciation Modeling with WikiPron. We introduce WikiPron, an open-source command-line tool for extracting pronunciation data from Wiktionary, a collaborative multilingual online dictionary. We first describe the design and use of WikiPron. We then discuss the challenges faced scaling this tool to create an automatically-generated database of 1.7 million pronunciations from 165 languages. Finally, we validate the pronunciation database by using it to train and evaluating a collection of generic grapheme-to-phoneme models. The software, pronunciation data, and models are all made available under permissive open-source licenses.",2020
joshi-etal-2020-dr,https://aclanthology.org/2020.findings-emnlp.335.pdf,1,,,,health,,,"Dr. Summarize: Global Summarization of Medical Dialogue by Exploiting Local Structures.. Understanding a medical conversation between a patient and a physician poses unique natural language understanding challenge since it combines elements of standard open-ended conversation with very domain-specific elements that require expertise and medical knowledge. Summarization of medical conversations is a particularly important aspect of medical conversation understanding since it addresses a very real need in medical practice: capturing the most important aspects of a medical encounter so that they can be used for medical decision making and subsequent follow ups. In this paper we present a novel approach to medical conversation summarization that leverages the unique and independent local structures created when gathering a patient's medical history. Our approach is a variation of the pointer generator network where we introduce a penalty on the generator distribution, and we explicitly model negations. The model also captures important properties of medical conversations such as medical knowledge coming from standardized medical ontologies better than when those concepts are introduced explicitly. Through evaluation by doctors, we show that our approach is preferred on twice the number of summaries to the baseline pointer generator model and captures most or all of the information in 80% of the conversations making it a realistic alternative to costly manual summarization by medical experts.",Dr. Summarize: Global Summarization of Medical Dialogue by Exploiting Local Structures.,"Understanding a medical conversation between a patient and a physician poses unique natural language understanding challenge since it combines elements of standard open-ended conversation with very domain-specific elements that require expertise and medical knowledge. Summarization of medical conversations is a particularly important aspect of medical conversation understanding since it addresses a very real need in medical practice: capturing the most important aspects of a medical encounter so that they can be used for medical decision making and subsequent follow ups. In this paper we present a novel approach to medical conversation summarization that leverages the unique and independent local structures created when gathering a patient's medical history. Our approach is a variation of the pointer generator network where we introduce a penalty on the generator distribution, and we explicitly model negations. The model also captures important properties of medical conversations such as medical knowledge coming from standardized medical ontologies better than when those concepts are introduced explicitly. Through evaluation by doctors, we show that our approach is preferred on twice the number of summaries to the baseline pointer generator model and captures most or all of the information in 80% of the conversations making it a realistic alternative to costly manual summarization by medical experts.",Dr. Summarize: Global Summarization of Medical Dialogue by Exploiting Local Structures.,"Understanding a medical conversation between a patient and a physician poses unique natural language understanding challenge since it combines elements of standard open-ended conversation with very domain-specific elements that require expertise and medical knowledge. 
Summarization of medical conversations is a particularly important aspect of medical conversation understanding since it addresses a very real need in medical practice: capturing the most important aspects of a medical encounter so that they can be used for medical decision making and subsequent follow ups. In this paper we present a novel approach to medical conversation summarization that leverages the unique and independent local structures created when gathering a patient's medical history. Our approach is a variation of the pointer generator network where we introduce a penalty on the generator distribution, and we explicitly model negations. The model also captures important properties of medical conversations such as medical knowledge coming from standardized medical ontologies better than when those concepts are introduced explicitly. Through evaluation by doctors, we show that our approach is preferred on twice the number of summaries to the baseline pointer generator model and captures most or all of the information in 80% of the conversations making it a realistic alternative to costly manual summarization by medical experts.",,"Dr. Summarize: Global Summarization of Medical Dialogue by Exploiting Local Structures.. Understanding a medical conversation between a patient and a physician poses unique natural language understanding challenge since it combines elements of standard open-ended conversation with very domain-specific elements that require expertise and medical knowledge. Summarization of medical conversations is a particularly important aspect of medical conversation understanding since it addresses a very real need in medical practice: capturing the most important aspects of a medical encounter so that they can be used for medical decision making and subsequent follow ups. In this paper we present a novel approach to medical conversation summarization that leverages the unique and independent local structures created when gathering a patient's medical history. Our approach is a variation of the pointer generator network where we introduce a penalty on the generator distribution, and we explicitly model negations. The model also captures important properties of medical conversations such as medical knowledge coming from standardized medical ontologies better than when those concepts are introduced explicitly. Through evaluation by doctors, we show that our approach is preferred on twice the number of summaries to the baseline pointer generator model and captures most or all of the information in 80% of the conversations making it a realistic alternative to costly manual summarization by medical experts.",2020
sanchez-martinez-etal-2020-english,https://aclanthology.org/2020.eamt-1.32.pdf,0,,,,,,,"An English-Swahili parallel corpus and its use for neural machine translation in the news domain. This paper describes our approach to create a neural machine translation system to translate between English and Swahili (both directions) in the news domain, as well as the process we followed to crawl the necessary parallel corpora from the Internet. We report the results of a pilot human evaluation performed by the news media organisations participating in the H2020 EU-funded project GoURMET.",An {E}nglish-{S}wahili parallel corpus and its use for neural machine translation in the news domain,"This paper describes our approach to create a neural machine translation system to translate between English and Swahili (both directions) in the news domain, as well as the process we followed to crawl the necessary parallel corpora from the Internet. We report the results of a pilot human evaluation performed by the news media organisations participating in the H2020 EU-funded project GoURMET.",An English-Swahili parallel corpus and its use for neural machine translation in the news domain,"This paper describes our approach to create a neural machine translation system to translate between English and Swahili (both directions) in the news domain, as well as the process we followed to crawl the necessary parallel corpora from the Internet. We report the results of a pilot human evaluation performed by the news media organisations participating in the H2020 EU-funded project GoURMET.","Acknowledgements: Work funded by the European Union's Horizon 2020 research and innovation programme under grant agreement number 825299, project Global Under-Resourced Media Translation (GoURMET). We thank the editors of the SAWA corpus for letting us use it for training. We also thank Wycliffe Muia (BBC) for help with Swahili examples and DW for helping in the manual evaluation.","An English-Swahili parallel corpus and its use for neural machine translation in the news domain. This paper describes our approach to create a neural machine translation system to translate between English and Swahili (both directions) in the news domain, as well as the process we followed to crawl the necessary parallel corpora from the Internet. We report the results of a pilot human evaluation performed by the news media organisations participating in the H2020 EU-funded project GoURMET.",2020
farreres-rodriguez-2004-selecting,http://www.lrec-conf.org/proceedings/lrec2004/pdf/324.pdf,0,,,,,,,"Selecting the Correct English Synset for a Spanish Sense. This work tries to enrich the Spanish Wordnet using a Spanish taxonomy as a knowledge source. The Spanish taxonomy is composed by Spanish senses, while Spanish Wordnet is composed by synsets, mostly linked to English WordNet. A set of weighted associations between Spanish words and Wordnet synsets is used for inferring associations between both taxonomies.",Selecting the Correct {E}nglish Synset for a {S}panish Sense,"This work tries to enrich the Spanish Wordnet using a Spanish taxonomy as a knowledge source. The Spanish taxonomy is composed by Spanish senses, while Spanish Wordnet is composed by synsets, mostly linked to English WordNet. A set of weighted associations between Spanish words and Wordnet synsets is used for inferring associations between both taxonomies.",Selecting the Correct English Synset for a Spanish Sense,"This work tries to enrich the Spanish Wordnet using a Spanish taxonomy as a knowledge source. The Spanish taxonomy is composed by Spanish senses, while Spanish Wordnet is composed by synsets, mostly linked to English WordNet. A set of weighted associations between Spanish words and Wordnet synsets is used for inferring associations between both taxonomies.",,"Selecting the Correct English Synset for a Spanish Sense. This work tries to enrich the Spanish Wordnet using a Spanish taxonomy as a knowledge source. The Spanish taxonomy is composed by Spanish senses, while Spanish Wordnet is composed by synsets, mostly linked to English WordNet. A set of weighted associations between Spanish words and Wordnet synsets is used for inferring associations between both taxonomies.",2004
peitz-etal-2013-rwth,https://aclanthology.org/W13-2224.pdf,0,,,,,,,"The RWTH Aachen Machine Translation System for WMT 2013. This paper describes the statistical machine translation (SMT) systems developed at RWTH Aachen University for the translation task of the ACL 2013 Eighth Workshop on Statistical Machine Translation (WMT 2013). We participated in the evaluation campaign for the French-English and German-English language pairs in both translation directions. Both hierarchical and phrase-based SMT systems are applied. A number of different techniques are evaluated, including hierarchical phrase reordering, translation model interpolation, domain adaptation techniques, weighted phrase extraction, word class language model, continuous space language model and system combination. By application of these methods we achieve considerable improvements over the respective baseline systems.",The {RWTH} {A}achen Machine Translation System for {WMT} 2013,"This paper describes the statistical machine translation (SMT) systems developed at RWTH Aachen University for the translation task of the ACL 2013 Eighth Workshop on Statistical Machine Translation (WMT 2013). We participated in the evaluation campaign for the French-English and German-English language pairs in both translation directions. Both hierarchical and phrase-based SMT systems are applied. A number of different techniques are evaluated, including hierarchical phrase reordering, translation model interpolation, domain adaptation techniques, weighted phrase extraction, word class language model, continuous space language model and system combination. By application of these methods we achieve considerable improvements over the respective baseline systems.",The RWTH Aachen Machine Translation System for WMT 2013,"This paper describes the statistical machine translation (SMT) systems developed at RWTH Aachen University for the translation task of the ACL 2013 Eighth Workshop on Statistical Machine Translation (WMT 2013). We participated in the evaluation campaign for the French-English and German-English language pairs in both translation directions. Both hierarchical and phrase-based SMT systems are applied. A number of different techniques are evaluated, including hierarchical phrase reordering, translation model interpolation, domain adaptation techniques, weighted phrase extraction, word class language model, continuous space language model and system combination. By application of these methods we achieve considerable improvements over the respective baseline systems.","This work was achieved as part of the Quaero Programme, funded by OSEO, French State agency for innovation.","The RWTH Aachen Machine Translation System for WMT 2013. This paper describes the statistical machine translation (SMT) systems developed at RWTH Aachen University for the translation task of the ACL 2013 Eighth Workshop on Statistical Machine Translation (WMT 2013). We participated in the evaluation campaign for the French-English and German-English language pairs in both translation directions. Both hierarchical and phrase-based SMT systems are applied. A number of different techniques are evaluated, including hierarchical phrase reordering, translation model interpolation, domain adaptation techniques, weighted phrase extraction, word class language model, continuous space language model and system combination. By application of these methods we achieve considerable improvements over the respective baseline systems.",2013
gokce-etal-2020-embedding,https://aclanthology.org/2020.acl-demos.36.pdf,1,,,,industry_innovation_infrastructure,,,"Embedding-based Scientific Literature Discovery in a Text Editor Application. Each claim in a research paper requires all relevant prior knowledge to be discovered, assimilated, and appropriately cited. However, despite the availability of powerful search engines and sophisticated text editing software, discovering relevant papers and integrating the knowledge into a manuscript remain complex tasks associated with high cognitive load. To define comprehensive search queries requires strong motivation from authors, irrespective of their familiarity with the research field. Moreover, switching between independent applications for literature discovery, bibliography management, reading papers, and writing text burdens authors further and interrupts their creative process. Here, we present a web application that combines text editing and literature discovery in an interactive user interface. The application is equipped with a search engine that couples Boolean keyword filtering with nearest neighbor search over text embeddings, providing a discovery experience tuned to an author's manuscript and his interests. Our application aims to take a step towards more enjoyable and effortless academic writing. The demo of the application 1 and a short video tutorial 2 are available online.",Embedding-based Scientific Literature Discovery in a Text Editor Application,"Each claim in a research paper requires all relevant prior knowledge to be discovered, assimilated, and appropriately cited. However, despite the availability of powerful search engines and sophisticated text editing software, discovering relevant papers and integrating the knowledge into a manuscript remain complex tasks associated with high cognitive load. To define comprehensive search queries requires strong motivation from authors, irrespective of their familiarity with the research field. Moreover, switching between independent applications for literature discovery, bibliography management, reading papers, and writing text burdens authors further and interrupts their creative process. Here, we present a web application that combines text editing and literature discovery in an interactive user interface. The application is equipped with a search engine that couples Boolean keyword filtering with nearest neighbor search over text embeddings, providing a discovery experience tuned to an author's manuscript and his interests. Our application aims to take a step towards more enjoyable and effortless academic writing. The demo of the application 1 and a short video tutorial 2 are available online.",Embedding-based Scientific Literature Discovery in a Text Editor Application,"Each claim in a research paper requires all relevant prior knowledge to be discovered, assimilated, and appropriately cited. However, despite the availability of powerful search engines and sophisticated text editing software, discovering relevant papers and integrating the knowledge into a manuscript remain complex tasks associated with high cognitive load. To define comprehensive search queries requires strong motivation from authors, irrespective of their familiarity with the research field. Moreover, switching between independent applications for literature discovery, bibliography management, reading papers, and writing text burdens authors further and interrupts their creative process. 
Here, we present a web application that combines text editing and literature discovery in an interactive user interface. The application is equipped with a search engine that couples Boolean keyword filtering with nearest neighbor search over text embeddings, providing a discovery experience tuned to an author's manuscript and his interests. Our application aims to take a step towards more enjoyable and effortless academic writing. The demo of the application 1 and a short video tutorial 2 are available online.",We acknowledge support from the Swiss National Science Foundation (grant 31003A 156976). We also thank the anonymous reviewers for their useful comments.,"Embedding-based Scientific Literature Discovery in a Text Editor Application. Each claim in a research paper requires all relevant prior knowledge to be discovered, assimilated, and appropriately cited. However, despite the availability of powerful search engines and sophisticated text editing software, discovering relevant papers and integrating the knowledge into a manuscript remain complex tasks associated with high cognitive load. To define comprehensive search queries requires strong motivation from authors, irrespective of their familiarity with the research field. Moreover, switching between independent applications for literature discovery, bibliography management, reading papers, and writing text burdens authors further and interrupts their creative process. Here, we present a web application that combines text editing and literature discovery in an interactive user interface. The application is equipped with a search engine that couples Boolean keyword filtering with nearest neighbor search over text embeddings, providing a discovery experience tuned to an author's manuscript and his interests. Our application aims to take a step towards more enjoyable and effortless academic writing. The demo of the application 1 and a short video tutorial 2 are available online.",2020
zhang-choi-2021-situatedqa,https://aclanthology.org/2021.emnlp-main.586.pdf,0,,,,,,,"SituatedQA: Incorporating Extra-Linguistic Contexts into QA. Answers to the same question may change depending on the extra-linguistic contexts (when and where the question was asked). To study this challenge, we introduce SITUATEDQA, an open-retrieval QA dataset where systems must produce the correct answer to a question given the temporal or geographical context. To construct SITUATEDQA, we first identify such questions in existing QA datasets. We find that a significant proportion of information seeking questions have context-dependent answers (e.g. roughly 16.5% of NQ-Open). For such context-dependent questions, we then crowdsource alternative contexts and their corresponding answers. Our study shows that existing models struggle with producing answers that are frequently updated or from uncommon locations. We further quantify how existing models, which are trained on data collected in the past, fail to generalize to answering questions asked in the present, even when provided with an updated evidence corpus (a roughly 15 point drop in accuracy). Our analysis suggests that open-retrieval QA benchmarks should incorporate extra-linguistic context to stay relevant globally and in the future. Our data, code, and datasheet are available at https://situatedqa.github.io/.",{S}ituated{QA}: Incorporating Extra-Linguistic Contexts into {QA},"Answers to the same question may change depending on the extra-linguistic contexts (when and where the question was asked). To study this challenge, we introduce SITUATEDQA, an open-retrieval QA dataset where systems must produce the correct answer to a question given the temporal or geographical context. To construct SITUATEDQA, we first identify such questions in existing QA datasets. We find that a significant proportion of information seeking questions have context-dependent answers (e.g. roughly 16.5% of NQ-Open). For such context-dependent questions, we then crowdsource alternative contexts and their corresponding answers. Our study shows that existing models struggle with producing answers that are frequently updated or from uncommon locations. We further quantify how existing models, which are trained on data collected in the past, fail to generalize to answering questions asked in the present, even when provided with an updated evidence corpus (a roughly 15 point drop in accuracy). Our analysis suggests that open-retrieval QA benchmarks should incorporate extra-linguistic context to stay relevant globally and in the future. Our data, code, and datasheet are available at https://situatedqa.github.io/.",SituatedQA: Incorporating Extra-Linguistic Contexts into QA,"Answers to the same question may change depending on the extra-linguistic contexts (when and where the question was asked). To study this challenge, we introduce SITUATEDQA, an open-retrieval QA dataset where systems must produce the correct answer to a question given the temporal or geographical context. To construct SITUATEDQA, we first identify such questions in existing QA datasets. We find that a significant proportion of information seeking questions have context-dependent answers (e.g. roughly 16.5% of NQ-Open). For such context-dependent questions, we then crowdsource alternative contexts and their corresponding answers. Our study shows that existing models struggle with producing answers that are frequently updated or from uncommon locations. 
We further quantify how existing models, which are trained on data collected in the past, fail to generalize to answering questions asked in the present, even when provided with an updated evidence corpus (a roughly 15 point drop in accuracy). Our analysis suggests that open-retrieval QA benchmarks should incorporate extra-linguistic context to stay relevant globally and in the future. Our data, code, and datasheet are available at https://situatedqa.github.io/.","We would like to thank Sewon Min, Raymond Mooney, and members of UT NLP group for comments and discussions. The work is partially funded by Google Faculty Awards.","SituatedQA: Incorporating Extra-Linguistic Contexts into QA. Answers to the same question may change depending on the extra-linguistic contexts (when and where the question was asked). To study this challenge, we introduce SITUATEDQA, an open-retrieval QA dataset where systems must produce the correct answer to a question given the temporal or geographical context. To construct SITUATEDQA, we first identify such questions in existing QA datasets. We find that a significant proportion of information seeking questions have context-dependent answers (e.g. roughly 16.5% of NQ-Open). For such context-dependent questions, we then crowdsource alternative contexts and their corresponding answers. Our study shows that existing models struggle with producing answers that are frequently updated or from uncommon locations. We further quantify how existing models, which are trained on data collected in the past, fail to generalize to answering questions asked in the present, even when provided with an updated evidence corpus (a roughly 15 point drop in accuracy). Our analysis suggests that open-retrieval QA benchmarks should incorporate extra-linguistic context to stay relevant globally and in the future. Our data, code, and datasheet are available at https://situatedqa.github.io/.",2021
boella-etal-2012-nlp,http://www.lrec-conf.org/proceedings/lrec2012/pdf/1035_Paper.pdf,1,,,,peace_justice_and_strong_institutions,,,"NLP Challenges for Eunomos a Tool to Build and Manage Legal Knowledge. In this paper, we describe how NLP can semi-automate the construction and analysis of knowledge in Eunomos, a legal knowledge management service which enables users to view legislation from various sources and find the right definitions and explanations of legal concepts in a given context. NLP can semi-automate some routine tasks currently performed by knowledge engineers, such as classifying norms, or linking key terms within legislation to ontological concepts. This helps overcome the resource bottleneck problem of creating specialist knowledge management systems. While accuracy is of the utmost importance in the legal domain, and the information should be verified by domain experts as a matter of course, a semi-automated approach can result in considerable efficiency gains.",{NLP} Challenges for Eunomos a Tool to Build and Manage Legal Knowledge,"In this paper, we describe how NLP can semi-automate the construction and analysis of knowledge in Eunomos, a legal knowledge management service which enables users to view legislation from various sources and find the right definitions and explanations of legal concepts in a given context. NLP can semi-automate some routine tasks currently performed by knowledge engineers, such as classifying norms, or linking key terms within legislation to ontological concepts. This helps overcome the resource bottleneck problem of creating specialist knowledge management systems. While accuracy is of the utmost importance in the legal domain, and the information should be verified by domain experts as a matter of course, a semi-automated approach can result in considerable efficiency gains.",NLP Challenges for Eunomos a Tool to Build and Manage Legal Knowledge,"In this paper, we describe how NLP can semi-automate the construction and analysis of knowledge in Eunomos, a legal knowledge management service which enables users to view legislation from various sources and find the right definitions and explanations of legal concepts in a given context. NLP can semi-automate some routine tasks currently performed by knowledge engineers, such as classifying norms, or linking key terms within legislation to ontological concepts. This helps overcome the resource bottleneck problem of creating specialist knowledge management systems. While accuracy is of the utmost importance in the legal domain, and the information should be verified by domain experts as a matter of course, a semi-automated approach can result in considerable efficiency gains.",,"NLP Challenges for Eunomos a Tool to Build and Manage Legal Knowledge. In this paper, we describe how NLP can semi-automate the construction and analysis of knowledge in Eunomos, a legal knowledge management service which enables users to view legislation from various sources and find the right definitions and explanations of legal concepts in a given context. NLP can semi-automate some routine tasks currently performed by knowledge engineers, such as classifying norms, or linking key terms within legislation to ontological concepts. This helps overcome the resource bottleneck problem of creating specialist knowledge management systems. 
While accuracy is of the utmost importance in the legal domain, and the information should be verified by domain experts as a matter of course, a semi-automated approach can result in considerable efficiency gains.",2012
wang-etal-2020-automated,https://aclanthology.org/2020.bea-1.18.pdf,1,,,,health,,,"Automated Scoring of Clinical Expressive Language Evaluation Tasks. Many clinical assessment instruments used to diagnose language impairments in children include a task in which the subject must formulate a sentence to describe an image using a specific target word. Because producing sentences in this way requires the speaker to integrate syntactic and semantic knowledge in a complex manner, responses are typically evaluated on several different dimensions of appropriateness yielding a single composite score for each response. In this paper, we present a dataset consisting of non-clinically elicited responses for three related sentence formulation tasks, and we propose an approach for automatically evaluating their appropriateness. Using neural machine translation, we generate correct-incorrect sentence pairs to serve as synthetic data in order to increase the amount and diversity of training data for our scoring model. Our scoring model uses transfer learning to facilitate automatic sentence appropriateness evaluation. We further compare custom word embeddings with pre-trained contextualized embeddings serving as features for our scoring model. We find that transfer learning improves scoring accuracy, particularly when using pre-trained contextualized embeddings.",Automated Scoring of Clinical Expressive Language Evaluation Tasks,"Many clinical assessment instruments used to diagnose language impairments in children include a task in which the subject must formulate a sentence to describe an image using a specific target word. Because producing sentences in this way requires the speaker to integrate syntactic and semantic knowledge in a complex manner, responses are typically evaluated on several different dimensions of appropriateness yielding a single composite score for each response. In this paper, we present a dataset consisting of non-clinically elicited responses for three related sentence formulation tasks, and we propose an approach for automatically evaluating their appropriateness. Using neural machine translation, we generate correct-incorrect sentence pairs to serve as synthetic data in order to increase the amount and diversity of training data for our scoring model. Our scoring model uses transfer learning to facilitate automatic sentence appropriateness evaluation. We further compare custom word embeddings with pre-trained contextualized embeddings serving as features for our scoring model. We find that transfer learning improves scoring accuracy, particularly when using pre-trained contextualized embeddings.",Automated Scoring of Clinical Expressive Language Evaluation Tasks,"Many clinical assessment instruments used to diagnose language impairments in children include a task in which the subject must formulate a sentence to describe an image using a specific target word. Because producing sentences in this way requires the speaker to integrate syntactic and semantic knowledge in a complex manner, responses are typically evaluated on several different dimensions of appropriateness yielding a single composite score for each response. In this paper, we present a dataset consisting of non-clinically elicited responses for three related sentence formulation tasks, and we propose an approach for automatically evaluating their appropriateness. 
Using neural machine translation, we generate correct-incorrect sentence pairs to serve as synthetic data in order to increase the amount and diversity of training data for our scoring model. Our scoring model uses transfer learning to facilitate automatic sentence appropriateness evaluation. We further compare custom word embeddings with pre-trained contextualized embeddings serving as features for our scoring model. We find that transfer learning improves scoring accuracy, particularly when using pre-trained contextualized embeddings.","We thank Beth Calamé, Julie Bird, Kristin Hinton, Christine Yang, and Emily Fabius for their contributions to data collection and annotation. This work was supported in part by NIH NIDCD awards R01DC012033 and R21DC017000. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the NIH or NIDCD.","Automated Scoring of Clinical Expressive Language Evaluation Tasks. Many clinical assessment instruments used to diagnose language impairments in children include a task in which the subject must formulate a sentence to describe an image using a specific target word. Because producing sentences in this way requires the speaker to integrate syntactic and semantic knowledge in a complex manner, responses are typically evaluated on several different dimensions of appropriateness yielding a single composite score for each response. In this paper, we present a dataset consisting of non-clinically elicited responses for three related sentence formulation tasks, and we propose an approach for automatically evaluating their appropriateness. Using neural machine translation, we generate correct-incorrect sentence pairs to serve as synthetic data in order to increase the amount and diversity of training data for our scoring model. Our scoring model uses transfer learning to facilitate automatic sentence appropriateness evaluation. We further compare custom word embeddings with pre-trained contextualized embeddings serving as features for our scoring model. We find that transfer learning improves scoring accuracy, particularly when using pre-trained contextualized embeddings.",2020
luque-infante-lopez-2009-upper,https://aclanthology.org/W09-1009.pdf,0,,,,,,,"Upper Bounds for Unsupervised Parsing with Unambiguous Non-Terminally Separated Grammars. Unambiguous Non-Terminally Separated (UNTS) grammars have properties that make them attractive for grammatical inference. However, these properties do not state the maximal performance they can achieve when they are evaluated against a gold treebank that is not produced by an UNTS grammar. In this paper we investigate such an upper bound. We develop a method to find an upper bound for the unlabeled F1 performance that any UNTS grammar can achieve over a given treebank. Our strategy is to characterize all possible versions of the gold treebank that UNTS grammars can produce and to find the one that optimizes a metric we define. We show a way to translate this score into an upper bound for the F1. In particular, we show that the F1 parsing score of any UNTS grammar can not be beyond 82.2% when the gold treebank is the WSJ10 corpus.",Upper Bounds for Unsupervised Parsing with Unambiguous Non-Terminally Separated Grammars,"Unambiguous Non-Terminally Separated (UNTS) grammars have properties that make them attractive for grammatical inference. However, these properties do not state the maximal performance they can achieve when they are evaluated against a gold treebank that is not produced by an UNTS grammar. In this paper we investigate such an upper bound. We develop a method to find an upper bound for the unlabeled F1 performance that any UNTS grammar can achieve over a given treebank. Our strategy is to characterize all possible versions of the gold treebank that UNTS grammars can produce and to find the one that optimizes a metric we define. We show a way to translate this score into an upper bound for the F1. In particular, we show that the F1 parsing score of any UNTS grammar can not be beyond 82.2% when the gold treebank is the WSJ10 corpus.",Upper Bounds for Unsupervised Parsing with Unambiguous Non-Terminally Separated Grammars,"Unambiguous Non-Terminally Separated (UNTS) grammars have properties that make them attractive for grammatical inference. However, these properties do not state the maximal performance they can achieve when they are evaluated against a gold treebank that is not produced by an UNTS grammar. In this paper we investigate such an upper bound. We develop a method to find an upper bound for the unlabeled F1 performance that any UNTS grammar can achieve over a given treebank. Our strategy is to characterize all possible versions of the gold treebank that UNTS grammars can produce and to find the one that optimizes a metric we define. We show a way to translate this score into an upper bound for the F1. In particular, we show that the F1 parsing score of any UNTS grammar can not be beyond 82.2% when the gold treebank is the WSJ10 corpus.","This work was supported in part by grant PICT 2006-00969, ANPCyT, Argentina. We would like to thank Pablo Rey (UDP, Chile) for his help with ILP, and Demetrio Martín Vilela (UNC, Argentina) for his detailed review.","Upper Bounds for Unsupervised Parsing with Unambiguous Non-Terminally Separated Grammars. Unambiguous Non-Terminally Separated (UNTS) grammars have properties that make them attractive for grammatical inference. However, these properties do not state the maximal performance they can achieve when they are evaluated against a gold treebank that is not produced by an UNTS grammar. In this paper we investigate such an upper bound. 
We develop a method to find an upper bound for the unlabeled F1 performance that any UNTS grammar can achieve over a given treebank. Our strategy is to characterize all possible versions of the gold treebank that UNTS grammars can produce and to find the one that optimizes a metric we define. We show a way to translate this score into an upper bound for the F1. In particular, we show that the F1 parsing score of any UNTS grammar can not be beyond 82.2% when the gold treebank is the WSJ10 corpus.",2009
scarton-specia-2014-exploring,https://aclanthology.org/W14-3343.pdf,0,,,,,,,Exploring Consensus in Machine Translation for Quality Estimation. This paper presents the use of consensus among Machine Translation (MT) systems for the WMT14 Quality Estimation shared task. Consensus is explored here by comparing the MT system output against several alternative machine translations using standard evaluation metrics. Figures extracted from such metrics are used as features to complement baseline prediction models. The hypothesis is that knowing whether the translation of interest is similar or dissimilar to translations from multiple different MT systems can provide useful information regarding the quality of such a translation.,Exploring Consensus in Machine Translation for Quality Estimation,This paper presents the use of consensus among Machine Translation (MT) systems for the WMT14 Quality Estimation shared task. Consensus is explored here by comparing the MT system output against several alternative machine translations using standard evaluation metrics. Figures extracted from such metrics are used as features to complement baseline prediction models. The hypothesis is that knowing whether the translation of interest is similar or dissimilar to translations from multiple different MT systems can provide useful information regarding the quality of such a translation.,Exploring Consensus in Machine Translation for Quality Estimation,This paper presents the use of consensus among Machine Translation (MT) systems for the WMT14 Quality Estimation shared task. Consensus is explored here by comparing the MT system output against several alternative machine translations using standard evaluation metrics. Figures extracted from such metrics are used as features to complement baseline prediction models. The hypothesis is that knowing whether the translation of interest is similar or dissimilar to translations from multiple different MT systems can provide useful information regarding the quality of such a translation.,Acknowledgements: This work was supported by the EXPERT (EU Marie Curie ITN No. 317471) project.,Exploring Consensus in Machine Translation for Quality Estimation. This paper presents the use of consensus among Machine Translation (MT) systems for the WMT14 Quality Estimation shared task. Consensus is explored here by comparing the MT system output against several alternative machine translations using standard evaluation metrics. Figures extracted from such metrics are used as features to complement baseline prediction models. The hypothesis is that knowing whether the translation of interest is similar or dissimilar to translations from multiple different MT systems can provide useful information regarding the quality of such a translation.,2014
maslennikov-etal-2006-instance,https://aclanthology.org/P06-2074.pdf,0,,,,,,,"ARE: Instance Splitting Strategies for Dependency Relation-Based Information Extraction. Information Extraction (IE) is a fundamental technology for NLP. Previous methods for IE were relying on co-occurrence relations, soft patterns and properties of the target (for example, syntactic role), which result in problems of handling paraphrasing and alignment of instances. Our system ARE (Anchor and Relation) is based on the dependency relation model and tackles these problems by unifying entities according to their dependency relations, which we found to provide more invariant relations between entities in many cases. In order to exploit the complexity and characteristics of relation paths, we further classify the relation paths into the categories of 'easy', 'average' and 'hard', and utilize different extraction strategies based on the characteristics of those categories. Our extraction method leads to improvement in performance by 3% and 6% for MUC4 and MUC6 respectively as compared to the state-of-art IE systems.",{ARE}: Instance Splitting Strategies for Dependency Relation-Based Information Extraction,"Information Extraction (IE) is a fundamental technology for NLP. Previous methods for IE were relying on co-occurrence relations, soft patterns and properties of the target (for example, syntactic role), which result in problems of handling paraphrasing and alignment of instances. Our system ARE (Anchor and Relation) is based on the dependency relation model and tackles these problems by unifying entities according to their dependency relations, which we found to provide more invariant relations between entities in many cases. In order to exploit the complexity and characteristics of relation paths, we further classify the relation paths into the categories of 'easy', 'average' and 'hard', and utilize different extraction strategies based on the characteristics of those categories. Our extraction method leads to improvement in performance by 3% and 6% for MUC4 and MUC6 respectively as compared to the state-of-art IE systems.",ARE: Instance Splitting Strategies for Dependency Relation-Based Information Extraction,"Information Extraction (IE) is a fundamental technology for NLP. Previous methods for IE were relying on co-occurrence relations, soft patterns and properties of the target (for example, syntactic role), which result in problems of handling paraphrasing and alignment of instances. Our system ARE (Anchor and Relation) is based on the dependency relation model and tackles these problems by unifying entities according to their dependency relations, which we found to provide more invariant relations between entities in many cases. In order to exploit the complexity and characteristics of relation paths, we further classify the relation paths into the categories of 'easy', 'average' and 'hard', and utilize different extraction strategies based on the characteristics of those categories. Our extraction method leads to improvement in performance by 3% and 6% for MUC4 and MUC6 respectively as compared to the state-of-art IE systems.",,"ARE: Instance Splitting Strategies for Dependency Relation-Based Information Extraction. Information Extraction (IE) is a fundamental technology for NLP. Previous methods for IE were relying on co-occurrence relations, soft patterns and properties of the target (for example, syntactic role), which result in problems of handling paraphrasing and alignment of instances. 
Our system ARE (Anchor and Relation) is based on the dependency relation model and tackles these problems by unifying entities according to their dependency relations, which we found to provide more invariant relations between entities in many cases. In order to exploit the complexity and characteristics of relation paths, we further classify the relation paths into the categories of 'easy', 'average' and 'hard', and utilize different extraction strategies based on the characteristics of those categories. Our extraction method leads to improvement in performance by 3% and 6% for MUC4 and MUC6 respectively as compared to the state-of-art IE systems.",2006
hovy-etal-2013-learning,https://aclanthology.org/N13-1132.pdf,0,,,,,,,"Learning Whom to Trust with MACE. Non-expert annotation services like Amazon's Mechanical Turk (AMT) are cheap and fast ways to evaluate systems and provide categorical annotations for training data. Unfortunately, some annotators choose bad labels in order to maximize their pay. Manual identification is tedious, so we experiment with an item-response model. It learns in an unsupervised fashion to a) identify which annotators are trustworthy and b) predict the correct underlying labels. We match performance of more complex state-of-the-art systems and perform well even under adversarial conditions. We show considerable improvements over standard baselines, both for predicted label accuracy and trustworthiness estimates. The latter can be further improved by introducing a prior on model parameters and using Variational Bayes inference. Additionally, we can achieve even higher accuracy by focusing on the instances our model is most confident in (trading in some recall), and by incorporating annotated control instances. Our system, MACE (Multi-Annotator Competence Estimation), is available for download 1 .",Learning Whom to Trust with {MACE},"Non-expert annotation services like Amazon's Mechanical Turk (AMT) are cheap and fast ways to evaluate systems and provide categorical annotations for training data. Unfortunately, some annotators choose bad labels in order to maximize their pay. Manual identification is tedious, so we experiment with an item-response model. It learns in an unsupervised fashion to a) identify which annotators are trustworthy and b) predict the correct underlying labels. We match performance of more complex state-of-the-art systems and perform well even under adversarial conditions. We show considerable improvements over standard baselines, both for predicted label accuracy and trustworthiness estimates. The latter can be further improved by introducing a prior on model parameters and using Variational Bayes inference. Additionally, we can achieve even higher accuracy by focusing on the instances our model is most confident in (trading in some recall), and by incorporating annotated control instances. Our system, MACE (Multi-Annotator Competence Estimation), is available for download 1 .",Learning Whom to Trust with MACE,"Non-expert annotation services like Amazon's Mechanical Turk (AMT) are cheap and fast ways to evaluate systems and provide categorical annotations for training data. Unfortunately, some annotators choose bad labels in order to maximize their pay. Manual identification is tedious, so we experiment with an item-response model. It learns in an unsupervised fashion to a) identify which annotators are trustworthy and b) predict the correct underlying labels. We match performance of more complex state-of-the-art systems and perform well even under adversarial conditions. We show considerable improvements over standard baselines, both for predicted label accuracy and trustworthiness estimates. The latter can be further improved by introducing a prior on model parameters and using Variational Bayes inference. Additionally, we can achieve even higher accuracy by focusing on the instances our model is most confident in (trading in some recall), and by incorporating annotated control instances. 
Our system, MACE (Multi-Annotator Competence Estimation), is available for download.","The authors would like to thank Chris Callison-Burch, Victoria Fossum, Stephan Gouws, Marc Schulder, Nathan Schneider, and Noah Smith for invaluable discussions, as well as the reviewers for their constructive feedback.","Learning Whom to Trust with MACE. Non-expert annotation services like Amazon's Mechanical Turk (AMT) are cheap and fast ways to evaluate systems and provide categorical annotations for training data. Unfortunately, some annotators choose bad labels in order to maximize their pay. Manual identification is tedious, so we experiment with an item-response model. It learns in an unsupervised fashion to a) identify which annotators are trustworthy and b) predict the correct underlying labels. We match performance of more complex state-of-the-art systems and perform well even under adversarial conditions. We show considerable improvements over standard baselines, both for predicted label accuracy and trustworthiness estimates. The latter can be further improved by introducing a prior on model parameters and using Variational Bayes inference. Additionally, we can achieve even higher accuracy by focusing on the instances our model is most confident in (trading in some recall), and by incorporating annotated control instances. Our system, MACE (Multi-Annotator Competence Estimation), is available for download.",2013
chapin-1982-acl,https://aclanthology.org/P82-1024.pdf,0,,,,,,,"ACL in 1977. As I leaf through my own ""ACL (Historical)"" file (which, I am frightened to observe, goes back to the Fourth Annual Meeting, in 1966), and focus in particular on 1977, when I was President, it strikes me that pretty much everything significant that happened in the Association that year was the work of other people.
Don Walker was completing the mammoth task of transferring all of the ACL's records from the East Coast to the West, paying off our indebtedness to the Center for Applied Linguistics, and in general getting the Association onto the firm financial and organizational footing which it has enjoyed to this day. Dave Hays was seeing to it that the microfiche journal kept on coming, and George Heldorn joined him as Associate Editor that year to begin the move toward hard copy publication.",{ACL} in 1977,"As I leaf through my own ""ACL (Historical)"" file (which, I am frightened to observe, goes back to the Fourth Annual Meeting, in 1966), and focus in particular on 1977, when I was President, it strikes me that pretty much everything significant that happened in the Association that year was the work of other people.
Don Walker was completing the mammoth task of transferring all of the ACL's records from the East Coast to the West, paying off our indebtedness to the Center for Applied Linguistics, and in general getting the Association onto the firm financial and organizational footing which it has enjoyed to this day. Dave Hays was seeing to it that the microfiche journal kept on coming, and George Heldorn joined him as Associate Editor that year to begin the move toward hard copy publication.",ACL in 1977,"As I leaf through my own ""ACL (Historical)"" file (which, I am frightened to observe, goes back to the Fourth Annual Meeting, in 1966), and focus in particular on 1977, when I was President, it strikes me that pretty much everything significant that happened in the Association that year was the work of other people.
Don Walker was completing the mammoth task of transferring all of the ACL's records from the East Coast to the West, paying off our indebtedness to the Center for Applied Linguistics, and in general getting the Association onto the firm financial and organizational footing which it has enjoyed to this day. Dave Hays was seeing to it that the microfiche journal kept on coming, and George Heldorn joined him as Associate Editor that year to begin the move toward hard copy publication.",,"ACL in 1977. As I leaf through my own ""ACL (Historical)"" file (which, I am frightened to observe, goes back to the Fourth Annual Meeting, in 1966), and focus in particular on 1977, when I was President, it strikes me that pretty much everything significant that happened in the Association that year was the work of other people.
Don Walker was completing the mammoth task of transferring all of the ACL's records from the East Coast to the West, paying off our indebtedness to the Center for Applied Linguistics, and in general getting the Association onto the firm financial and organizational footing which it has enjoyed to this day. Dave Hays was seeing to it that the microfiche journal kept on coming, and George Heldorn joined him as Associate Editor that year to begin the move toward hard copy publication.",1982
yaghoobzadeh-schutze-2017-multi,https://aclanthology.org/E17-1055.pdf,0,,,,,,,"Multi-level Representations for Fine-Grained Typing of Knowledge Base Entities. Entities are essential elements of natural language. In this paper, we present methods for learning multi-level representations of entities on three complementary levels: character (character patterns in entity names extracted, e.g., by neural networks), word (embeddings of words in entity names) and entity (entity embeddings). We investigate state-of-the-art learning methods on each level and find large differences, e.g., for deep learning models, traditional ngram features and the subword model of fasttext (Bojanowski et al., 2016) on the character level; for word2vec (Mikolov et al., 2013) on the word level; and for the order-aware model wang2vec (Ling et al., 2015a) on the entity level. We confirm experimentally that each level of representation contributes complementary information and a joint representation of all three levels improves the existing embedding based baseline for fine-grained entity typing by a large margin. Additionally, we show that adding information from entity descriptions further improves multi-level representations of entities.",Multi-level Representations for Fine-Grained Typing of Knowledge Base Entities,"Entities are essential elements of natural language. In this paper, we present methods for learning multi-level representations of entities on three complementary levels: character (character patterns in entity names extracted, e.g., by neural networks), word (embeddings of words in entity names) and entity (entity embeddings). We investigate state-of-the-art learning methods on each level and find large differences, e.g., for deep learning models, traditional ngram features and the subword model of fasttext (Bojanowski et al., 2016) on the character level; for word2vec (Mikolov et al., 2013) on the word level; and for the order-aware model wang2vec (Ling et al., 2015a) on the entity level. We confirm experimentally that each level of representation contributes complementary information and a joint representation of all three levels improves the existing embedding based baseline for fine-grained entity typing by a large margin. Additionally, we show that adding information from entity descriptions further improves multi-level representations of entities.",Multi-level Representations for Fine-Grained Typing of Knowledge Base Entities,"Entities are essential elements of natural language. In this paper, we present methods for learning multi-level representations of entities on three complementary levels: character (character patterns in entity names extracted, e.g., by neural networks), word (embeddings of words in entity names) and entity (entity embeddings). We investigate state-of-the-art learning methods on each level and find large differences, e.g., for deep learning models, traditional ngram features and the subword model of fasttext (Bojanowski et al., 2016) on the character level; for word2vec (Mikolov et al., 2013) on the word level; and for the order-aware model wang2vec (Ling et al., 2015a) on the entity level. We confirm experimentally that each level of representation contributes complementary information and a joint representation of all three levels improves the existing embedding based baseline for fine-grained entity typing by a large margin. Additionally, we show that adding information from entity descriptions further improves multi-level representations of entities.",Acknowledgments. 
This work was supported by DFG (SCHU 2246/8-2).,"Multi-level Representations for Fine-Grained Typing of Knowledge Base Entities. Entities are essential elements of natural language. In this paper, we present methods for learning multi-level representations of entities on three complementary levels: character (character patterns in entity names extracted, e.g., by neural networks), word (embeddings of words in entity names) and entity (entity embeddings). We investigate state-of-the-art learning methods on each level and find large differences, e.g., for deep learning models, traditional ngram features and the subword model of fasttext (Bojanowski et al., 2016) on the character level; for word2vec (Mikolov et al., 2013) on the word level; and for the order-aware model wang2vec (Ling et al., 2015a) on the entity level. We confirm experimentally that each level of representation contributes complementary information and a joint representation of all three levels improves the existing embedding based baseline for fine-grained entity typing by a large margin. Additionally, we show that adding information from entity descriptions further improves multi-level representations of entities.",2017
qian-etal-2010-python,http://www.lrec-conf.org/proceedings/lrec2010/pdf/30_Paper.pdf,0,,,,,,,"A Python Toolkit for Universal Transliteration. We describe ScriptTranscriber, an open source toolkit for extracting transliterations in comparable corpora from languages written in different scripts. The system includes various methods for extracting potential terms of interest from raw text, for providing guesses on the pronunciations of terms, and for comparing two strings as possible transliterations using both phonetic and temporal measures. The system works with any script in the Unicode Basic Multilingual Plane and is easily extended to include new modules. Given comparable corpora, such as newswire text, in a pair of languages that use different scripts, ScriptTranscriber provides an easy way to mine transliterations from the comparable texts. This is particularly useful for underresourced languages, where training data for transliteration may be lacking, and where it is thus hard to train good transliterators. ScriptTranscriber provides an open source package that allows for ready incorporation of more sophisticated modules-e.g. a trained transliteration model for a particular language pair.",A Python Toolkit for Universal Transliteration,"We describe ScriptTranscriber, an open source toolkit for extracting transliterations in comparable corpora from languages written in different scripts. The system includes various methods for extracting potential terms of interest from raw text, for providing guesses on the pronunciations of terms, and for comparing two strings as possible transliterations using both phonetic and temporal measures. The system works with any script in the Unicode Basic Multilingual Plane and is easily extended to include new modules. Given comparable corpora, such as newswire text, in a pair of languages that use different scripts, ScriptTranscriber provides an easy way to mine transliterations from the comparable texts. This is particularly useful for underresourced languages, where training data for transliteration may be lacking, and where it is thus hard to train good transliterators. ScriptTranscriber provides an open source package that allows for ready incorporation of more sophisticated modules-e.g. a trained transliteration model for a particular language pair.",A Python Toolkit for Universal Transliteration,"We describe ScriptTranscriber, an open source toolkit for extracting transliterations in comparable corpora from languages written in different scripts. The system includes various methods for extracting potential terms of interest from raw text, for providing guesses on the pronunciations of terms, and for comparing two strings as possible transliterations using both phonetic and temporal measures. The system works with any script in the Unicode Basic Multilingual Plane and is easily extended to include new modules. Given comparable corpora, such as newswire text, in a pair of languages that use different scripts, ScriptTranscriber provides an easy way to mine transliterations from the comparable texts. This is particularly useful for underresourced languages, where training data for transliteration may be lacking, and where it is thus hard to train good transliterators. ScriptTranscriber provides an open source package that allows for ready incorporation of more sophisticated modules-e.g. a trained transliteration model for a particular language pair.",,"A Python Toolkit for Universal Transliteration. 
We describe ScriptTranscriber, an open source toolkit for extracting transliterations in comparable corpora from languages written in different scripts. The system includes various methods for extracting potential terms of interest from raw text, for providing guesses on the pronunciations of terms, and for comparing two strings as possible transliterations using both phonetic and temporal measures. The system works with any script in the Unicode Basic Multilingual Plane and is easily extended to include new modules. Given comparable corpora, such as newswire text, in a pair of languages that use different scripts, ScriptTranscriber provides an easy way to mine transliterations from the comparable texts. This is particularly useful for underresourced languages, where training data for transliteration may be lacking, and where it is thus hard to train good transliterators. ScriptTranscriber provides an open source package that allows for ready incorporation of more sophisticated modules-e.g. a trained transliteration model for a particular language pair.",2010
cardoso-2012-rembrandt,http://www.lrec-conf.org/proceedings/lrec2012/pdf/409_Paper.pdf,0,,,,,,,"Rembrandt - a named-entity recognition framework. Rembrandt is a named entity recognition system specially crafted to annotate documents by classifying named entities and ground them into unique identifiers. Rembrandt played an important role within our research over geographic IR, thus evolving into a more capable framework where documents can be annotated, manually curated and indexed. The goal of this paper is to present Rembrandt's simple but powerful annotation framework to the NLP community.",Rembrandt - a named-entity recognition framework,"Rembrandt is a named entity recognition system specially crafted to annotate documents by classifying named entities and ground them into unique identifiers. Rembrandt played an important role within our research over geographic IR, thus evolving into a more capable framework where documents can be annotated, manually curated and indexed. The goal of this paper is to present Rembrandt's simple but powerful annotation framework to the NLP community.",Rembrandt - a named-entity recognition framework,"Rembrandt is a named entity recognition system specially crafted to annotate documents by classifying named entities and ground them into unique identifiers. Rembrandt played an important role within our research over geographic IR, thus evolving into a more capable framework where documents can be annotated, manually curated and indexed. The goal of this paper is to present Rembrandt's simple but powerful annotation framework to the NLP community.","This work is supported by FCT for its LASIGE Multi-annual support, GREASE-II project (grant PTDC/EIA/73614/2006) and a PhD scholarship grant SFRH/BD/45480/2008, and by the Portuguese Government, the European Union (FEDER and FSE) through the Linguateca project, under contract ref.POSC/339/1.3/C/NAC, UMIC and FCCN.","Rembrandt - a named-entity recognition framework. Rembrandt is a named entity recognition system specially crafted to annotate documents by classifying named entities and ground them into unique identifiers. Rembrandt played an important role within our research over geographic IR, thus evolving into a more capable framework where documents can be annotated, manually curated and indexed. The goal of this paper is to present Rembrandt's simple but powerful annotation framework to the NLP community.",2012
uszkoreit-2012-quality,https://aclanthology.org/F12-4001.pdf,1,,,,industry_innovation_infrastructure,,,"Quality Translation for a Multilingual Continent - Priorities and Chances for European MT Research. Recent progress in translation technology has caused a real boost for research and technology deployment. At the same time, other areas of language technology also experience scientific advances and economic success stories. However, research in machine translation is still less affected by new developments in core areas of language processing than could be expected. One reason for the low level of interaction is certainly that the predominant research paradigm in MT has not started yet to systematically concentrate on high quality translation. Most of the research and nearly all of the application efforts have focused on solutions for informational inbound translation (assimilation MT). This focus has on the one hand enabled translation of information that normally is not translated at all. In this way MT has changed work and life of many people without ever infringing on the existing translation markets. In my talk I will present a new research approach dedicated to the analytical investigation of existing quality barriers. Such a systematic thrust can serve as the basis of scientifically guided combinations of technologies including hybrid approaches to transfer and the integration of advanced methods for syntactic and semantic processing into the translation process. Together with improved techniques for quality estimation, the expected results will drive translation technology into the direction badly needed by the multilingual European society.",Quality Translation for a Multilingual Continent - Priorities and Chances for {E}uropean {MT} Research,"Recent progress in translation technology has caused a real boost for research and technology deployment. At the same time, other areas of language technology also experience scientific advances and economic success stories. However, research in machine translation is still less affected by new developments in core areas of language processing than could be expected. One reason for the low level of interaction is certainly that the predominant research paradigm in MT has not started yet to systematically concentrate on high quality translation. Most of the research and nearly all of the application efforts have focused on solutions for informational inbound translation (assimilation MT). This focus has on the one hand enabled translation of information that normally is not translated at all. In this way MT has changed work and life of many people without ever infringing on the existing translation markets. In my talk I will present a new research approach dedicated to the analytical investigation of existing quality barriers. Such a systematic thrust can serve as the basis of scientifically guided combinations of technologies including hybrid approaches to transfer and the integration of advanced methods for syntactic and semantic processing into the translation process. Together with improved techniques for quality estimation, the expected results will drive translation technology into the direction badly needed by the multilingual European society.",Quality Translation for a Multilingual Continent - Priorities and Chances for European MT Research,"Recent progress in translation technology has caused a real boost for research and technology deployment. 
At the same time, other areas of language technology also experience scientific advances and economic success stories. However, research in machine translation is still less affected by new developments in core areas of language processing than could be expected. One reason for the low level of interaction is certainly that the predominant research paradigm in MT has not started yet to systematically concentrate on high quality translation. Most of the research and nearly all of the application efforts have focused on solutions for informational inbound translation (assimilation MT). This focus has on the one hand enabled translation of information that normally is not translated at all. In this way MT has changed work and life of many people without ever infringing on the existing translation markets. In my talk I will present a new research approach dedicated to the analytical investigation of existing quality barriers. Such a systematic thrust can serve as the basis of scientifically guided combinations of technologies including hybrid approaches to transfer and the integration of advanced methods for syntactic and semantic processing into the translation process. Together with improved techniques for quality estimation, the expected results will drive translation technology into the direction badly needed by the multilingual European society.",,"Quality Translation for a Multilingual Continent - Priorities and Chances for European MT Research. Recent progress in translation technology has caused a real boost for research and technology deployment. At the same time, other areas of language technology also experience scientific advances and economic success stories. However, research in machine translation is still less affected by new developments in core areas of language processing than could be expected. One reason for the low level of interaction is certainly that the predominant research paradigm in MT has not started yet to systematically concentrate on high quality translation. Most of the research and nearly all of the application efforts have focused on solutions for informational inbound translation (assimilation MT). This focus has on the one hand enabled translation of information that normally is not translated at all. In this way MT has changed work and life of many people without ever infringing on the existing translation markets. In my talk I will present a new research approach dedicated to the analytical investigation of existing quality barriers. Such a systematic thrust can serve as the basis of scientifically guided combinations of technologies including hybrid approaches to transfer and the integration of advanced methods for syntactic and semantic processing into the translation process. Together with improved techniques for quality estimation, the expected results will drive translation technology into the direction badly needed by the multilingual European society.",2012
schuster-etal-2020-stochastic,https://aclanthology.org/2020.pam-1.11.pdf,0,,,,,,,"Stochastic Frames. In the frame hypothesis (Barsalou, 1992; Löbner, 2014), human concepts are equated with frames, which extend feature lists by a functional structure consisting of attributes and values. For example, a bachelor is represented by the attributes GENDER and MARITAL STATUS and their values 'male' and 'unwed'. This paper makes the point that for many applications of concepts in cognition, including for concepts to be associated with lexemes in natural languages, the right structures to assume are not merely frames but stochastic frames in which attributes are associated with (conditional) probability distributions over values. The paper introduces the idea of stochastic frames and three applications of this idea: vagueness, ambiguity, and typicality.",Stochastic Frames,"In the frame hypothesis (Barsalou, 1992; Löbner, 2014), human concepts are equated with frames, which extend feature lists by a functional structure consisting of attributes and values. For example, a bachelor is represented by the attributes GENDER and MARITAL STATUS and their values 'male' and 'unwed'. This paper makes the point that for many applications of concepts in cognition, including for concepts to be associated with lexemes in natural languages, the right structures to assume are not merely frames but stochastic frames in which attributes are associated with (conditional) probability distributions over values. The paper introduces the idea of stochastic frames and three applications of this idea: vagueness, ambiguity, and typicality.",Stochastic Frames,"In the frame hypothesis (Barsalou, 1992; Löbner, 2014), human concepts are equated with frames, which extend feature lists by a functional structure consisting of attributes and values. For example, a bachelor is represented by the attributes GENDER and MARITAL STATUS and their values 'male' and 'unwed'. This paper makes the point that for many applications of concepts in cognition, including for concepts to be associated with lexemes in natural languages, the right structures to assume are not merely frames but stochastic frames in which attributes are associated with (conditional) probability distributions over values. The paper introduces the idea of stochastic frames and three applications of this idea: vagueness, ambiguity, and typicality.","This research was funded by the German Research Foundation (DFG) funded project: CRC 991 The Structure of Representations in Language, Cognition, and Science, specifically projects C09, D01 and a Mercator Fellowship awarded to Henk Zeevat. We would like to thank audiences at CoST 2019 at HHU Düsseldorf, the workshop on Records, Frames, and Attribute Spaces held at ZAS in Berlin, March 2018, and the Workshop on Uncertainty in Meaning and Representation in Linguistics and Philosophy held in Jelenia Góra, Poland, February, 2018.","Stochastic Frames. In the frame hypothesis (Barsalou, 1992; Löbner, 2014), human concepts are equated with frames, which extend feature lists by a functional structure consisting of attributes and values. For example, a bachelor is represented by the attributes GENDER and MARITAL STATUS and their values 'male' and 'unwed'. 
This paper makes the point that for many applications of concepts in cognition, including for concepts to be associated with lexemes in natural languages, the right structures to assume are not merely frames but stochastic frames in which attributes are associated with (conditional) probability distributions over values. The paper introduces the idea of stochastic frames and three applications of this idea: vagueness, ambiguity, and typicality.",2020
akram-hussain-2010-word,https://aclanthology.org/W10-3212.pdf,0,,,,,,,"Word Segmentation for Urdu OCR System. This paper presents a technique for word segmentation for the Urdu OCR system. Word segmentation or word tokenization is a preliminary task for Urdu language processing. Several techniques are available for word segmentation in other languages. A methodology is proposed for word segmentation in this paper which determines the boundaries of words given a sequence of ligatures, based on collocation of ligatures and words in the corpus. Using this technique, word identification rate of 96.10% is achieved, using trigram probabilities normalized over the number of ligatures and words in the sequence.",Word Segmentation for {U}rdu {OCR} System,"This paper presents a technique for word segmentation for the Urdu OCR system. Word segmentation or word tokenization is a preliminary task for Urdu language processing. Several techniques are available for word segmentation in other languages. A methodology is proposed for word segmentation in this paper which determines the boundaries of words given a sequence of ligatures, based on collocation of ligatures and words in the corpus. Using this technique, word identification rate of 96.10% is achieved, using trigram probabilities normalized over the number of ligatures and words in the sequence.",Word Segmentation for Urdu OCR System,"This paper presents a technique for word segmentation for the Urdu OCR system. Word segmentation or word tokenization is a preliminary task for Urdu language processing. Several techniques are available for word segmentation in other languages. A methodology is proposed for word segmentation in this paper which determines the boundaries of words given a sequence of ligatures, based on collocation of ligatures and words in the corpus. Using this technique, word identification rate of 96.10% is achieved, using trigram probabilities normalized over the number of ligatures and words in the sequence.",,"Word Segmentation for Urdu OCR System. This paper presents a technique for word segmentation for the Urdu OCR system. Word segmentation or word tokenization is a preliminary task for Urdu language processing. Several techniques are available for word segmentation in other languages. A methodology is proposed for word segmentation in this paper which determines the boundaries of words given a sequence of ligatures, based on collocation of ligatures and words in the corpus. Using this technique, word identification rate of 96.10% is achieved, using trigram probabilities normalized over the number of ligatures and words in the sequence.",2010
richardson-kuhn-2014-unixman,http://www.lrec-conf.org/proceedings/lrec2014/pdf/823_Paper.pdf,0,,,,,,,"UnixMan Corpus: A Resource for Language Learning in the Unix Domain. We present a new resource, the UnixMan Corpus, for studying language learning it the domain of Unix utility manuals. The corpus is built by mining Unix (and other Unix related) man pages for parallel example entries, consisting of English textual descriptions with corresponding command examples. The commands provide a grounded and ambiguous semantics for the textual descriptions, making the corpus of interest to work on Semantic Parsing and Grounded Language Learning. In contrast to standard resources for Semantic Parsing, which tend to be restricted to a small number of concepts and relations, the UnixMan Corpus spans a wide variety of utility genres and topics, and consists of hundreds of command and domain entity types. The semi-structured nature of the manuals also makes it easy to exploit other types of relevant information for Grounded Language Learning. We describe the details of the corpus and provide preliminary classification results.",{U}nix{M}an Corpus: A Resource for Language Learning in the {U}nix Domain,"We present a new resource, the UnixMan Corpus, for studying language learning it the domain of Unix utility manuals. The corpus is built by mining Unix (and other Unix related) man pages for parallel example entries, consisting of English textual descriptions with corresponding command examples. The commands provide a grounded and ambiguous semantics for the textual descriptions, making the corpus of interest to work on Semantic Parsing and Grounded Language Learning. In contrast to standard resources for Semantic Parsing, which tend to be restricted to a small number of concepts and relations, the UnixMan Corpus spans a wide variety of utility genres and topics, and consists of hundreds of command and domain entity types. The semi-structured nature of the manuals also makes it easy to exploit other types of relevant information for Grounded Language Learning. We describe the details of the corpus and provide preliminary classification results.",UnixMan Corpus: A Resource for Language Learning in the Unix Domain,"We present a new resource, the UnixMan Corpus, for studying language learning it the domain of Unix utility manuals. The corpus is built by mining Unix (and other Unix related) man pages for parallel example entries, consisting of English textual descriptions with corresponding command examples. The commands provide a grounded and ambiguous semantics for the textual descriptions, making the corpus of interest to work on Semantic Parsing and Grounded Language Learning. In contrast to standard resources for Semantic Parsing, which tend to be restricted to a small number of concepts and relations, the UnixMan Corpus spans a wide variety of utility genres and topics, and consists of hundreds of command and domain entity types. The semi-structured nature of the manuals also makes it easy to exploit other types of relevant information for Grounded Language Learning. We describe the details of the corpus and provide preliminary classification results.",,"UnixMan Corpus: A Resource for Language Learning in the Unix Domain. We present a new resource, the UnixMan Corpus, for studying language learning it the domain of Unix utility manuals. 
The corpus is built by mining Unix (and other Unix related) man pages for parallel example entries, consisting of English textual descriptions with corresponding command examples. The commands provide a grounded and ambiguous semantics for the textual descriptions, making the corpus of interest to work on Semantic Parsing and Grounded Language Learning. In contrast to standard resources for Semantic Parsing, which tend to be restricted to a small number of concepts and relations, the UnixMan Corpus spans a wide variety of utility genres and topics, and consists of hundreds of command and domain entity types. The semi-structured nature of the manuals also makes it easy to exploit other types of relevant information for Grounded Language Learning. We describe the details of the corpus and provide preliminary classification results.",2014
huber-hinrichs-2019-including,https://aclanthology.org/2019.gwc-1.4.pdf,0,,,,,,,"Including Swiss Standard German in GermaNet. GermaNet (Henrich and Hinrichs, 2010; Hamp and Feldweg, 1997) is a comprehensive wordnet of Standard German spoken in the Federal Republic of Germany. The GermaNet team aims at modelling the basic vocabulary of the language. German is an official language or a minority language in many countries. It is an official language in Austria, Germany and Switzerland, each with its own codified standard variety (Auer, 2014, p. 21), and also in Belgium, Liechtenstein, and Luxembourg. German is recognized as a minority language in thirteen additional countries, including Brazil, Italy, Poland, and Russia. However, the different standard varieties of German are currently not represented in GermaNet. With this project, we make a start on changing this by including one variety, namely Swiss Standard German, into GermaNet. This shall give a more inclusive perspective on the German language. We will argue that Swiss Standard German words, Helvetisms, are best included into the already existing wordnet GermaNet, rather than creating them as a separate wordnet.",Including {S}wiss Standard {G}erman in {G}erma{N}et,"GermaNet (Henrich and Hinrichs, 2010; Hamp and Feldweg, 1997) is a comprehensive wordnet of Standard German spoken in the Federal Republic of Germany. The GermaNet team aims at modelling the basic vocabulary of the language. German is an official language or a minority language in many countries. It is an official language in Austria, Germany and Switzerland, each with its own codified standard variety (Auer, 2014, p. 21), and also in Belgium, Liechtenstein, and Luxembourg. German is recognized as a minority language in thirteen additional countries, including Brazil, Italy, Poland, and Russia. However, the different standard varieties of German are currently not represented in GermaNet. With this project, we make a start on changing this by including one variety, namely Swiss Standard German, into GermaNet. This shall give a more inclusive perspective on the German language. We will argue that Swiss Standard German words, Helvetisms, are best included into the already existing wordnet GermaNet, rather than creating them as a separate wordnet.",Including Swiss Standard German in GermaNet,"GermaNet (Henrich and Hinrichs, 2010; Hamp and Feldweg, 1997) is a comprehensive wordnet of Standard German spoken in the Federal Republic of Germany. The GermaNet team aims at modelling the basic vocabulary of the language. German is an official language or a minority language in many countries. It is an official language in Austria, Germany and Switzerland, each with its own codified standard variety (Auer, 2014, p. 21), and also in Belgium, Liechtenstein, and Luxembourg. German is recognized as a minority language in thirteen additional countries, including Brazil, Italy, Poland, and Russia. However, the different standard varieties of German are currently not represented in GermaNet. With this project, we make a start on changing this by including one variety, namely Swiss Standard German, into GermaNet. This shall give a more inclusive perspective on the German language. We will argue that Swiss Standard German words, Helvetisms, are best included into the already existing wordnet GermaNet, rather than creating them as a separate wordnet.","We thank Reinhild Barkey, Çağrı Çöltekin and Christiane Fellbaum for providing insight and expertise from which this project has greatly benefitted. 
Furthermore, we gratefully acknowledge the financial support of our research by the German Ministry for Education and Research (BMBF) as part of the CLARIN-D research infrastructure grant given to the University of Tübingen.","Including Swiss Standard German in GermaNet. GermaNet (Henrich and Hinrichs, 2010; Hamp and Feldweg, 1997) is a comprehensive wordnet of Standard German spoken in the Federal Republic of Germany. The GermaNet team aims at modelling the basic vocabulary of the language. German is an official language or a minority language in many countries. It is an official language in Austria, Germany and Switzerland, each with its own codified standard variety (Auer, 2014, p. 21), and also in Belgium, Liechtenstein, and Luxembourg. German is recognized as a minority language in thirteen additional countries, including Brazil, Italy, Poland, and Russia. However, the different standard varieties of German are currently not represented in GermaNet. With this project, we make a start on changing this by including one variety, namely Swiss Standard German, into GermaNet. This shall give a more inclusive perspective on the German language. We will argue that Swiss Standard German words, Helvetisms, are best included into the already existing wordnet GermaNet, rather than creating them as a separate wordnet.",2019
huang-2013-social,https://aclanthology.org/W13-4203.pdf,0,,,,,,,"Social Metaphor Detection via Topical Analysis. With massive social media data, e.g., comments, blog articles, or tweets, becoming available, there is rising interest in automatic metaphor detection from open social text. One of the most well-known approaches is detecting the violation of selectional preference. The idea of selectional preference is that verbs tend to have semantic preferences for their arguments. If we find that, in some text, any arguments of these predicates are not of their preferred semantic classes, it is very likely to be a metaphor. However, only a few papers have previously focused on leveraging topical analysis techniques in metaphor detection. Intuitively, both predicates and arguments exhibit strong tendencies towards a few specific topics, and this topical information provides additional evidence to facilitate the identification of selectional preferences in text. In this paper, we study how metaphor detection can be influenced by topical analysis techniques, based on our proposed three-step approach. We formally define the problem, propose our approach for metaphor detection, and then conduct experiments on a real-world data set. Though our experimental results show that topics do not have a strong impact on the metaphor detection techniques, we analyze the results and present some insights based on our study.",Social Metaphor Detection via Topical Analysis,"With massive social media data, e.g., comments, blog articles, or tweets, becoming available, there is rising interest in automatic metaphor detection from open social text. One of the most well-known approaches is detecting the violation of selectional preference. The idea of selectional preference is that verbs tend to have semantic preferences for their arguments. If we find that, in some text, any arguments of these predicates are not of their preferred semantic classes, it is very likely to be a metaphor. However, only a few papers have previously focused on leveraging topical analysis techniques in metaphor detection. Intuitively, both predicates and arguments exhibit strong tendencies towards a few specific topics, and this topical information provides additional evidence to facilitate the identification of selectional preferences in text. In this paper, we study how metaphor detection can be influenced by topical analysis techniques, based on our proposed three-step approach. We formally define the problem, propose our approach for metaphor detection, and then conduct experiments on a real-world data set. Though our experimental results show that topics do not have a strong impact on the metaphor detection techniques, we analyze the results and present some insights based on our study.",Social Metaphor Detection via Topical Analysis,"With massive social media data, e.g., comments, blog articles, or tweets, becoming available, there is rising interest in automatic metaphor detection from open social text. One of the most well-known approaches is detecting the violation of selectional preference. The idea of selectional preference is that verbs tend to have semantic preferences for their arguments. If we find that, in some text, any arguments of these predicates are not of their preferred semantic classes, it is very likely to be a metaphor. However, only a few papers have previously focused on leveraging topical analysis techniques in metaphor detection. 
Intuitively, both predicates and arguments exhibit strong tendencies towards a few specific topics, and this topical information provides additional evidence to facilitate the identification of selectional preferences in text. In this paper, we study how metaphor detection can be influenced by topical analysis techniques, based on our proposed three-step approach. We formally define the problem, propose our approach for metaphor detection, and then conduct experiments on a real-world data set. Though our experimental results show that topics do not have a strong impact on the metaphor detection techniques, we analyze the results and present some insights based on our study.","Supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Defense US Army Research Laboratory contract number W911NF-12-C-0020. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoD/ARL, or the U.S. Government. We would also like to thank Zi Yang for his help with the topical analysis experiments, Teruko Mitamura and Eric Nyberg for their instructions, and Yi-Chia Wang and Dong Nguyen for the work of data collection.","Social Metaphor Detection via Topical Analysis. With massive social media data, e.g., comments, blog articles, or tweets, becoming available, there is rising interest in automatic metaphor detection from open social text. One of the most well-known approaches is detecting the violation of selectional preference. The idea of selectional preference is that verbs tend to have semantic preferences for their arguments. If we find that, in some text, any arguments of these predicates are not of their preferred semantic classes, it is very likely to be a metaphor. However, only a few papers have previously focused on leveraging topical analysis techniques in metaphor detection. Intuitively, both predicates and arguments exhibit strong tendencies towards a few specific topics, and this topical information provides additional evidence to facilitate the identification of selectional preferences in text. In this paper, we study how metaphor detection can be influenced by topical analysis techniques, based on our proposed three-step approach. We formally define the problem, propose our approach for metaphor detection, and then conduct experiments on a real-world data set. Though our experimental results show that topics do not have a strong impact on the metaphor detection techniques, we analyze the results and present some insights based on our study.",2013
bhattacharja-2010-benglish,https://aclanthology.org/Y10-1011.pdf,0,,,,,,,"Benglish Verbs: A Case of Code-mixing in Bengali. In this article, we show how grammar can account for Benglish verbs, a particular type of complex predicate, which are constituted of an English word and a Bengali verb (e.g. /EksiDenT kOra/ 'to have an accident', /in kOra/ 'to get/come/put in' or /kOnfuz kOra/ 'to confuse'). We analyze these verbs in the light of a couple of models (e.g. Kageyama, 1991; Lieber, 1992; Matsumoto, 1996) which claim that complex predicates are necessarily formed in syntax. However, Benglish verbs like /in kOra/ or /kOnfuz kOra/ are problematic for these approaches because it is unclear how preposition in or flexional verb confuse can appear as the arguments of the verb /kOra/ 'to do' in an underlying syntactic structure. We claim that all Benglish verbs can be satisfactorily handled in Morphology in the light of Whole Word Morphology (Ford et al., 1997 and Singh, 2006).",Benglish Verbs: A Case of Code-mixing in {B}engali,"In this article, we show how grammar can account for Benglish verbs, a particular type of complex predicate, which are constituted of an English word and a Bengali verb (e.g. /EksiDenT kOra/ 'to have an accident', /in kOra/ 'to get/come/put in' or /kOnfuz kOra/ 'to confuse'). We analyze these verbs in the light of a couple of models (e.g. Kageyama, 1991; Lieber, 1992; Matsumoto, 1996) which claim that complex predicates are necessarily formed in syntax. However, Benglish verbs like /in kOra/ or /kOnfuz kOra/ are problematic for these approaches because it is unclear how preposition in or flexional verb confuse can appear as the arguments of the verb /kOra/ 'to do' in an underlying syntactic structure. We claim that all Benglish verbs can be satisfactorily handled in Morphology in the light of Whole Word Morphology (Ford et al., 1997 and Singh, 2006).",Benglish Verbs: A Case of Code-mixing in Bengali,"In this article, we show how grammar can account for Benglish verbs, a particular type of complex predicate, which are constituted of an English word and a Bengali verb (e.g. /EksiDenT kOra/ 'to have an accident', /in kOra/ 'to get/come/put in' or /kOnfuz kOra/ 'to confuse'). We analyze these verbs in the light of a couple of models (e.g. Kageyama, 1991; Lieber, 1992; Matsumoto, 1996) which claim that complex predicates are necessarily formed in syntax. However, Benglish verbs like /in kOra/ or /kOnfuz kOra/ are problematic for these approaches because it is unclear how preposition in or flexional verb confuse can appear as the arguments of the verb /kOra/ 'to do' in an underlying syntactic structure. We claim that all Benglish verbs can be satisfactorily handled in Morphology in the light of Whole Word Morphology (Ford et al., 1997 and Singh, 2006).",,"Benglish Verbs: A Case of Code-mixing in Bengali. In this article, we show how grammar can account for Benglish verbs, a particular type of complex predicate, which are constituted of an English word and a Bengali verb (e.g. /EksiDenT kOra/ 'to have an accident', /in kOra/ 'to get/come/put in' or /kOnfuz kOra/ 'to confuse'). We analyze these verbs in the light of a couple of models (e.g. Kageyama, 1991; Lieber, 1992; Matsumoto, 1996) which claim that complex predicates are necessarily formed in syntax. 
However, Benglish verbs like /in kOra/ or /kOnfuz kOra/ are problematic for these approaches because it is unclear how preposition in or flexional verb confuse can appear as the arguments of the verb /kOra/ 'to do' in an underlying syntactic structure. We claim that all Benglish verbs can be satisfactorily handled in Morphology in the light of Whole Word Morphology (Ford et al., 1997 and Singh, 2006).",2010
dethlefs-2011-bremen,https://aclanthology.org/W11-2847.pdf,0,,,,,,,The Bremen System for the GIVE-2.5 Challenge. This paper presents the Bremen system for the GIVE-2.5 challenge. It is based on decision trees learnt from new annotations of the GIVE corpus augmented with manually specified rules. Surface realisation is based on context-free grammars. The paper will address advantages and shortcomings of the approach and discuss how the present system can serve as a baseline for a future evaluation with an improved version using hierarchical reinforcement learning with graphical models.,The {B}remen System for the {GIVE}-2.5 Challenge,This paper presents the Bremen system for the GIVE-2.5 challenge. It is based on decision trees learnt from new annotations of the GIVE corpus augmented with manually specified rules. Surface realisation is based on context-free grammars. The paper will address advantages and shortcomings of the approach and discuss how the present system can serve as a baseline for a future evaluation with an improved version using hierarchical reinforcement learning with graphical models.,The Bremen System for the GIVE-2.5 Challenge,This paper presents the Bremen system for the GIVE-2.5 challenge. It is based on decision trees learnt from new annotations of the GIVE corpus augmented with manually specified rules. Surface realisation is based on context-free grammars. The paper will address advantages and shortcomings of the approach and discuss how the present system can serve as a baseline for a future evaluation with an improved version using hierarchical reinforcement learning with graphical models.,Thanks to the German Research Foundation DFG and the Transregional Collaborative Research Centre SFB/TR8 'Spatial Cognition' for partial support.,The Bremen System for the GIVE-2.5 Challenge. This paper presents the Bremen system for the GIVE-2.5 challenge. It is based on decision trees learnt from new annotations of the GIVE corpus augmented with manually specified rules. Surface realisation is based on context-free grammars. The paper will address advantages and shortcomings of the approach and discuss how the present system can serve as a baseline for a future evaluation with an improved version using hierarchical reinforcement learning with graphical models.,2011
iomdin-etal-2013-linguistic,https://aclanthology.org/W13-3402.pdf,0,,,,,,,"Linguistic Problems Based on Text Corpora. The paper is focused on self-contained linguistic problems based on text corpora. We argue that corpus-based problems differ from traditional linguistic problems because they make it possible to represent language variation. Furthermore, they often require basic statistical thinking from the students. The practical value of using data obtained from text corpora for teaching linguistics through linguistic problems is shown.",Linguistic Problems Based on Text Corpora,"The paper is focused on self-contained linguistic problems based on text corpora. We argue that corpus-based problems differ from traditional linguistic problems because they make it possible to represent language variation. Furthermore, they often require basic statistical thinking from the students. The practical value of using data obtained from text corpora for teaching linguistics through linguistic problems is shown.",Linguistic Problems Based on Text Corpora,"The paper is focused on self-contained linguistic problems based on text corpora. We argue that corpus-based problems differ from traditional linguistic problems because they make it possible to represent language variation. Furthermore, they often require basic statistical thinking from the students. The practical value of using data obtained from text corpora for teaching linguistics through linguistic problems is shown.",,"Linguistic Problems Based on Text Corpora. The paper is focused on self-contained linguistic problems based on text corpora. We argue that corpus-based problems differ from traditional linguistic problems because they make it possible to represent language variation. Furthermore, they often require basic statistical thinking from the students. The practical value of using data obtained from text corpora for teaching linguistics through linguistic problems is shown.",2013
ninomiya-etal-2009-deterministic,https://aclanthology.org/E09-1069.pdf,0,,,,,,,"Deterministic Shift-Reduce Parsing for Unification-Based Grammars by Using Default Unification. Many parsing techniques including parameter estimation assume the use of a packed parse forest for efficient and accurate parsing. However, they have several inherent problems deriving from the restriction of locality in the packed parse forest. Deterministic parsing is one of solutions that can achieve simple and fast parsing without the mechanisms of the packed parse forest by accurately choosing search paths. We propose (i) deterministic shift-reduce parsing for unification-based grammars, and (ii) best-first shift-reduce parsing with beam thresholding for unification-based grammars. Deterministic parsing cannot simply be applied to unification-based grammar parsing, which often fails because of its hard constraints. Therefore, it is developed by using default unification, which almost always succeeds in unification by overwriting inconsistent constraints in grammars.",Deterministic Shift-Reduce Parsing for Unification-Based Grammars by Using Default Unification,"Many parsing techniques including parameter estimation assume the use of a packed parse forest for efficient and accurate parsing. However, they have several inherent problems deriving from the restriction of locality in the packed parse forest. Deterministic parsing is one of solutions that can achieve simple and fast parsing without the mechanisms of the packed parse forest by accurately choosing search paths. We propose (i) deterministic shift-reduce parsing for unification-based grammars, and (ii) best-first shift-reduce parsing with beam thresholding for unification-based grammars. Deterministic parsing cannot simply be applied to unification-based grammar parsing, which often fails because of its hard constraints. Therefore, it is developed by using default unification, which almost always succeeds in unification by overwriting inconsistent constraints in grammars.",Deterministic Shift-Reduce Parsing for Unification-Based Grammars by Using Default Unification,"Many parsing techniques including parameter estimation assume the use of a packed parse forest for efficient and accurate parsing. However, they have several inherent problems deriving from the restriction of locality in the packed parse forest. Deterministic parsing is one of solutions that can achieve simple and fast parsing without the mechanisms of the packed parse forest by accurately choosing search paths. We propose (i) deterministic shift-reduce parsing for unification-based grammars, and (ii) best-first shift-reduce parsing with beam thresholding for unification-based grammars. Deterministic parsing cannot simply be applied to unification-based grammar parsing, which often fails because of its hard constraints. Therefore, it is developed by using default unification, which almost always succeeds in unification by overwriting inconsistent constraints in grammars.",,"Deterministic Shift-Reduce Parsing for Unification-Based Grammars by Using Default Unification. Many parsing techniques including parameter estimation assume the use of a packed parse forest for efficient and accurate parsing. However, they have several inherent problems deriving from the restriction of locality in the packed parse forest. Deterministic parsing is one of solutions that can achieve simple and fast parsing without the mechanisms of the packed parse forest by accurately choosing search paths. 
We propose (i) deterministic shift-reduce parsing for unification-based grammars, and (ii) best-first shift-reduce parsing with beam thresholding for unification-based grammars. Deterministic parsing cannot simply be applied to unification-based grammar parsing, which often fails because of its hard constraints. Therefore, it is developed by using default unification, which almost always succeeds in unification by overwriting inconsistent constraints in grammars.",2009
kiesel-etal-2021-image,https://aclanthology.org/2021.argmining-1.4.pdf,0,,,,,,,"Image Retrieval for Arguments Using Stance-Aware Query Expansion. Many forms of argumentation employ images as persuasive means, but research in argument mining has been focused on verbal argumentation so far. This paper shows how to integrate images into argument mining research, specifically into argument retrieval. By exploiting the sophisticated image representations of keyword-based image search, we propose to use semantic query expansion for both the pro and the con stance to retrieve ""argumentative images"" for the respective stance. Our results indicate that even simple expansions provide a strong baseline, reaching a precision@10 of 0.49 for images being (1) on-topic, (2) argumentative, and (3) on-stance. An in-depth analysis reveals a high topic dependence of the retrieval performance and shows the need to further investigate on images providing contextual information.",Image Retrieval for Arguments Using Stance-Aware Query Expansion,"Many forms of argumentation employ images as persuasive means, but research in argument mining has been focused on verbal argumentation so far. This paper shows how to integrate images into argument mining research, specifically into argument retrieval. By exploiting the sophisticated image representations of keyword-based image search, we propose to use semantic query expansion for both the pro and the con stance to retrieve ""argumentative images"" for the respective stance. Our results indicate that even simple expansions provide a strong baseline, reaching a precision@10 of 0.49 for images being (1) on-topic, (2) argumentative, and (3) on-stance. An in-depth analysis reveals a high topic dependence of the retrieval performance and shows the need to further investigate on images providing contextual information.",Image Retrieval for Arguments Using Stance-Aware Query Expansion,"Many forms of argumentation employ images as persuasive means, but research in argument mining has been focused on verbal argumentation so far. This paper shows how to integrate images into argument mining research, specifically into argument retrieval. By exploiting the sophisticated image representations of keyword-based image search, we propose to use semantic query expansion for both the pro and the con stance to retrieve ""argumentative images"" for the respective stance. Our results indicate that even simple expansions provide a strong baseline, reaching a precision@10 of 0.49 for images being (1) on-topic, (2) argumentative, and (3) on-stance. An in-depth analysis reveals a high topic dependence of the retrieval performance and shows the need to further investigate on images providing contextual information.",,"Image Retrieval for Arguments Using Stance-Aware Query Expansion. Many forms of argumentation employ images as persuasive means, but research in argument mining has been focused on verbal argumentation so far. This paper shows how to integrate images into argument mining research, specifically into argument retrieval. By exploiting the sophisticated image representations of keyword-based image search, we propose to use semantic query expansion for both the pro and the con stance to retrieve ""argumentative images"" for the respective stance. Our results indicate that even simple expansions provide a strong baseline, reaching a precision@10 of 0.49 for images being (1) on-topic, (2) argumentative, and (3) on-stance. 
An in-depth analysis reveals a high topic dependence of the retrieval performance and shows the need to further investigate on images providing contextual information.",2021
santini-etal-2006-implementing,https://aclanthology.org/P06-2090.pdf,0,,,,,,,"Implementing a Characterization of Genre for Automatic Genre Identification of Web Pages. In this paper, we propose an implementable characterization of genre suitable for automatic genre identification of web pages. This characterization is implemented as an inferential model based on a modified version of Bayes' theorem. Such a model can deal with genre hybridism and individualization, two important forces behind genre evolution. Results show that this approach is effective and is worth further research.",Implementing a Characterization of Genre for Automatic Genre Identification of Web Pages,"In this paper, we propose an implementable characterization of genre suitable for automatic genre identification of web pages. This characterization is implemented as an inferential model based on a modified version of Bayes' theorem. Such a model can deal with genre hybridism and individualization, two important forces behind genre evolution. Results show that this approach is effective and is worth further research.",Implementing a Characterization of Genre for Automatic Genre Identification of Web Pages,"In this paper, we propose an implementable characterization of genre suitable for automatic genre identification of web pages. This characterization is implemented as an inferential model based on a modified version of Bayes' theorem. Such a model can deal with genre hybridism and individualization, two important forces behind genre evolution. Results show that this approach is effective and is worth further research.",,"Implementing a Characterization of Genre for Automatic Genre Identification of Web Pages. In this paper, we propose an implementable characterization of genre suitable for automatic genre identification of web pages. This characterization is implemented as an inferential model based on a modified version of Bayes' theorem. Such a model can deal with genre hybridism and individualization, two important forces behind genre evolution. Results show that this approach is effective and is worth further research.",2006
dorr-etal-2002-duster,https://link.springer.com/chapter/10.1007/3-540-45820-4_4.pdf,0,,,,,,,DUSTer: a method for unraveling cross-language divergences for statistical word-level alignment. ,{DUST}er: a method for unraveling cross-language divergences for statistical word-level alignment,,DUSTer: a method for unraveling cross-language divergences for statistical word-level alignment,,,DUSTer: a method for unraveling cross-language divergences for statistical word-level alignment. ,2002
babych-etal-2007-translating,https://aclanthology.org/2007.mtsummit-papers.5.pdf,0,,,,,,,"Translating from under-resourced languages: comparing direct transfer against pivot translation. In this paper we compare two methods for translating into English from languages for which few MT resources have been developed (e.g. Ukrainian). The first method involves direct transfer using an MT system that is available for this language pair. The second method involves translation via a cognate language, which has more translation resources and one or more advanced translation systems (e.g. Russian for Slavonic languages). The comparison shows that it is possible to achieve better translation quality via the pivot language, leveraging on advanced dictionaries and grammars available for it and on lexical and syntactic similarities between the source and pivot languages. The results suggest that MT development efforts can be efficiently reused for families of closely related languages, and investing in MT for closely related languages can be more productive than developing systems from scratch for new translation directions. We also suggest a method for comparing the performance of a direct and pivot translation routes via automated evaluation of segments with varying translation difficulty.",Translating from under-resourced languages: comparing direct transfer against pivot translation,"In this paper we compare two methods for translating into English from languages for which few MT resources have been developed (e.g. Ukrainian). The first method involves direct transfer using an MT system that is available for this language pair. The second method involves translation via a cognate language, which has more translation resources and one or more advanced translation systems (e.g. Russian for Slavonic languages). The comparison shows that it is possible to achieve better translation quality via the pivot language, leveraging on advanced dictionaries and grammars available for it and on lexical and syntactic similarities between the source and pivot languages. The results suggest that MT development efforts can be efficiently reused for families of closely related languages, and investing in MT for closely related languages can be more productive than developing systems from scratch for new translation directions. We also suggest a method for comparing the performance of a direct and pivot translation routes via automated evaluation of segments with varying translation difficulty.",Translating from under-resourced languages: comparing direct transfer against pivot translation,"In this paper we compare two methods for translating into English from languages for which few MT resources have been developed (e.g. Ukrainian). The first method involves direct transfer using an MT system that is available for this language pair. The second method involves translation via a cognate language, which has more translation resources and one or more advanced translation systems (e.g. Russian for Slavonic languages). The comparison shows that it is possible to achieve better translation quality via the pivot language, leveraging on advanced dictionaries and grammars available for it and on lexical and syntactic similarities between the source and pivot languages. The results suggest that MT development efforts can be efficiently reused for families of closely related languages, and investing in MT for closely related languages can be more productive than developing systems from scratch for new translation directions. 
We also suggest a method for comparing the performance of a direct and pivot translation routes via automated evaluation of segments with varying translation difficulty.",,"Translating from under-resourced languages: comparing direct transfer against pivot translation. In this paper we compare two methods for translating into English from languages for which few MT resources have been developed (e.g. Ukrainian). The first method involves direct transfer using an MT system that is available for this language pair. The second method involves translation via a cognate language, which has more translation resources and one or more advanced translation systems (e.g. Russian for Slavonic languages). The comparison shows that it is possible to achieve better translation quality via the pivot language, leveraging on advanced dictionaries and grammars available for it and on lexical and syntactic similarities between the source and pivot languages. The results suggest that MT development efforts can be efficiently reused for families of closely related languages, and investing in MT for closely related languages can be more productive than developing systems from scratch for new translation directions. We also suggest a method for comparing the performance of a direct and pivot translation routes via automated evaluation of segments with varying translation difficulty.",2007
liu-2010-detecting,https://aclanthology.org/W10-0503.pdf,0,,,,,,,"Detecting Word Misuse in Chinese. Social Network Services (SNS) and personal blogs have become the most popular platforms for online communication and sharing information. However, because most modern computer keyboards are Latin-based, speakers of Asian languages (such as Chinese) have to rely on an input system which accepts a Romanisation of the characters and converts it into characters or words in that language. In Chinese this form of Romanisation (usually called Pinyin) is highly ambiguous, and word misuses often occur because the user chooses a wrong candidate or deliberately substitutes the word with another character string that has the identical Romanisation, to convey certain semantics or to achieve a sarcastic effect. In this paper we aim to develop a system that can automatically identify such word misuse and suggest the correct word to be used.",Detecting Word Misuse in {C}hinese,"Social Network Services (SNS) and personal blogs have become the most popular platforms for online communication and sharing information. However, because most modern computer keyboards are Latin-based, speakers of Asian languages (such as Chinese) have to rely on an input system which accepts a Romanisation of the characters and converts it into characters or words in that language. In Chinese this form of Romanisation (usually called Pinyin) is highly ambiguous, and word misuses often occur because the user chooses a wrong candidate or deliberately substitutes the word with another character string that has the identical Romanisation, to convey certain semantics or to achieve a sarcastic effect. In this paper we aim to develop a system that can automatically identify such word misuse and suggest the correct word to be used.",Detecting Word Misuse in Chinese,"Social Network Services (SNS) and personal blogs have become the most popular platforms for online communication and sharing information. However, because most modern computer keyboards are Latin-based, speakers of Asian languages (such as Chinese) have to rely on an input system which accepts a Romanisation of the characters and converts it into characters or words in that language. In Chinese this form of Romanisation (usually called Pinyin) is highly ambiguous, and word misuses often occur because the user chooses a wrong candidate or deliberately substitutes the word with another character string that has the identical Romanisation, to convey certain semantics or to achieve a sarcastic effect. In this paper we aim to develop a system that can automatically identify such word misuse and suggest the correct word to be used.",,"Detecting Word Misuse in Chinese. Social Network Services (SNS) and personal blogs have become the most popular platforms for online communication and sharing information. However, because most modern computer keyboards are Latin-based, speakers of Asian languages (such as Chinese) have to rely on an input system which accepts a Romanisation of the characters and converts it into characters or words in that language. In Chinese this form of Romanisation (usually called Pinyin) is highly ambiguous, and word misuses often occur because the user chooses a wrong candidate or deliberately substitutes the word with another character string that has the identical Romanisation, to convey certain semantics or to achieve a sarcastic effect. In this paper we aim to develop a system that can automatically identify such word misuse and suggest the correct word to be used.",2010
jiang-etal-2020-know,https://aclanthology.org/2020.tacl-1.28.pdf,0,,,,,,,"How Can We Know What Language Models Know?. Recent work has presented intriguing results examining the knowledge contained in language models (LMs) by having the LM fill in the blanks of prompts such as ''Obama is a by profession''. These prompts are usually manually created, and quite possibly sub-optimal; another prompt such as ''Obama worked as a '' may result in more accurately predicting the correct profession. Because of this, given an inappropriate prompt, we might fail to retrieve facts that the LM does know, and thus any given prompt only provides a lower bound estimate of the knowledge contained in an LM. In this paper, we attempt to more accurately estimate the knowledge contained in LMs by automatically discovering better prompts to use in this querying process. Specifically, we propose mining-based and paraphrasing-based methods to automatically generate high-quality and diverse prompts, as well as ensemble methods to combine answers from different prompts. Extensive experiments on the LAMA benchmark for extracting relational knowledge from LMs demonstrate that our methods can improve accuracy from 31.1% to 39.6%, providing a tighter lower bound on what LMs know.",How Can We Know What Language Models Know?,"Recent work has presented intriguing results examining the knowledge contained in language models (LMs) by having the LM fill in the blanks of prompts such as ''Obama is a by profession''. These prompts are usually manually created, and quite possibly sub-optimal; another prompt such as ''Obama worked as a '' may result in more accurately predicting the correct profession. Because of this, given an inappropriate prompt, we might fail to retrieve facts that the LM does know, and thus any given prompt only provides a lower bound estimate of the knowledge contained in an LM. In this paper, we attempt to more accurately estimate the knowledge contained in LMs by automatically discovering better prompts to use in this querying process. Specifically, we propose mining-based and paraphrasing-based methods to automatically generate high-quality and diverse prompts, as well as ensemble methods to combine answers from different prompts. Extensive experiments on the LAMA benchmark for extracting relational knowledge from LMs demonstrate that our methods can improve accuracy from 31.1% to 39.6%, providing a tighter lower bound on what LMs know.",How Can We Know What Language Models Know?,"Recent work has presented intriguing results examining the knowledge contained in language models (LMs) by having the LM fill in the blanks of prompts such as ''Obama is a by profession''. These prompts are usually manually created, and quite possibly sub-optimal; another prompt such as ''Obama worked as a '' may result in more accurately predicting the correct profession. Because of this, given an inappropriate prompt, we might fail to retrieve facts that the LM does know, and thus any given prompt only provides a lower bound estimate of the knowledge contained in an LM. In this paper, we attempt to more accurately estimate the knowledge contained in LMs by automatically discovering better prompts to use in this querying process. Specifically, we propose mining-based and paraphrasing-based methods to automatically generate high-quality and diverse prompts, as well as ensemble methods to combine answers from different prompts. 
Extensive experiments on the LAMA benchmark for extracting relational knowledge from LMs demonstrate that our methods can improve accuracy from 31.1% to 39.6%, providing a tighter lower bound on what LMs know.","This work was supported by a gift from Bosch Research and NSF award no. 1815287. We would like to thank Paul Michel, Hiroaki Hayashi, Pengcheng Yin, and Shuyan Zhou for their insightful comments and suggestions.","How Can We Know What Language Models Know?. Recent work has presented intriguing results examining the knowledge contained in language models (LMs) by having the LM fill in the blanks of prompts such as ''Obama is a by profession''. These prompts are usually manually created, and quite possibly sub-optimal; another prompt such as ''Obama worked as a '' may result in more accurately predicting the correct profession. Because of this, given an inappropriate prompt, we might fail to retrieve facts that the LM does know, and thus any given prompt only provides a lower bound estimate of the knowledge contained in an LM. In this paper, we attempt to more accurately estimate the knowledge contained in LMs by automatically discovering better prompts to use in this querying process. Specifically, we propose mining-based and paraphrasing-based methods to automatically generate high-quality and diverse prompts, as well as ensemble methods to combine answers from different prompts. Extensive experiments on the LAMA benchmark for extracting relational knowledge from LMs demonstrate that our methods can improve accuracy from 31.1% to 39.6%, providing a tighter lower bound on what LMs know.",2020
joshi-etal-2013-making,https://aclanthology.org/I13-2006.pdf,0,,,,,,,"Making Headlines in Hindi: Automatic English to Hindi News Headline Translation. News headlines exhibit stylistic peculiarities. The goal of our translation engine 'Making Headlines in Hindi' is to achieve automatic translation of English news headlines to Hindi while retaining the Hindi news headline styles. There are two central modules of our engine: the modified translation unit based on Moses and a co-occurrence-based post-processing unit. The modified translation unit provides two machine translation (MT) models: phrase-based and factor-based (both using in-domain data). In addition, a co-occurrence-based post-processing option may be turned on by a user. Our evaluation shows that this engine handles some linguistic phenomena observed in Hindi news headlines.",Making Headlines in {H}indi: Automatic {E}nglish to {H}indi News Headline Translation,"News headlines exhibit stylistic peculiarities. The goal of our translation engine 'Making Headlines in Hindi' is to achieve automatic translation of English news headlines to Hindi while retaining the Hindi news headline styles. There are two central modules of our engine: the modified translation unit based on Moses and a co-occurrence-based post-processing unit. The modified translation unit provides two machine translation (MT) models: phrase-based and factor-based (both using in-domain data). In addition, a co-occurrence-based post-processing option may be turned on by a user. Our evaluation shows that this engine handles some linguistic phenomena observed in Hindi news headlines.",Making Headlines in Hindi: Automatic English to Hindi News Headline Translation,"News headlines exhibit stylistic peculiarities. The goal of our translation engine 'Making Headlines in Hindi' is to achieve automatic translation of English news headlines to Hindi while retaining the Hindi news headline styles. There are two central modules of our engine: the modified translation unit based on Moses and a co-occurrence-based post-processing unit. The modified translation unit provides two machine translation (MT) models: phrase-based and factor-based (both using in-domain data). In addition, a co-occurrence-based post-processing option may be turned on by a user. Our evaluation shows that this engine handles some linguistic phenomena observed in Hindi news headlines.",,"Making Headlines in Hindi: Automatic English to Hindi News Headline Translation. News headlines exhibit stylistic peculiarities. The goal of our translation engine 'Making Headlines in Hindi' is to achieve automatic translation of English news headlines to Hindi while retaining the Hindi news headline styles. There are two central modules of our engine: the modified translation unit based on Moses and a co-occurrence-based post-processing unit. The modified translation unit provides two machine translation (MT) models: phrase-based and factor-based (both using in-domain data). In addition, a co-occurrence-based post-processing option may be turned on by a user. Our evaluation shows that this engine handles some linguistic phenomena observed in Hindi news headlines.",2013
tolmachev-etal-2019-shrinking,https://aclanthology.org/N19-1281.pdf,0,,,,,,,"Shrinking Japanese Morphological Analyzers With Neural Networks and Semi-supervised Learning. For languages without natural word boundaries, like Japanese and Chinese, word segmentation is a prerequisite for downstream analysis. For Japanese, segmentation is often done jointly with part of speech tagging, and this process is usually referred to as morphological analysis. Morphological analyzers are trained on data hand-annotated with segmentation boundaries and part of speech tags. A segmentation dictionary or character n-gram information is also provided as additional inputs to the model. Incorporating this extra information makes models large. Modern neural morphological analyzers can consume gigabytes of memory. We propose a compact alternative to these cumbersome approaches which do not rely on any externally provided n-gram or word representations. The model uses only unigram character embeddings, encodes them using either stacked bi-LSTM or a self-attention network, and independently infers both segmentation and part of speech information. The model is trained in an end-to-end and semi-supervised fashion, on labels produced by a state-of-the-art analyzer. We demonstrate that the proposed technique rivals performance of a previous dictionary-based state-of-the-art approach and can even surpass it when training with the combination of human-annotated and automatically-annotated data. Our model itself is significantly smaller than the dictionary-based one: it uses less than 15 megabytes of space.",Shrinking {J}apanese Morphological Analyzers With Neural Networks and Semi-supervised Learning,"For languages without natural word boundaries, like Japanese and Chinese, word segmentation is a prerequisite for downstream analysis. For Japanese, segmentation is often done jointly with part of speech tagging, and this process is usually referred to as morphological analysis. Morphological analyzers are trained on data hand-annotated with segmentation boundaries and part of speech tags. A segmentation dictionary or character n-gram information is also provided as additional inputs to the model. Incorporating this extra information makes models large. Modern neural morphological analyzers can consume gigabytes of memory. We propose a compact alternative to these cumbersome approaches which do not rely on any externally provided n-gram or word representations. The model uses only unigram character embeddings, encodes them using either stacked bi-LSTM or a self-attention network, and independently infers both segmentation and part of speech information. The model is trained in an end-to-end and semi-supervised fashion, on labels produced by a state-of-the-art analyzer. We demonstrate that the proposed technique rivals performance of a previous dictionary-based state-of-the-art approach and can even surpass it when training with the combination of human-annotated and automatically-annotated data. Our model itself is significantly smaller than the dictionary-based one: it uses less than 15 megabytes of space.",Shrinking Japanese Morphological Analyzers With Neural Networks and Semi-supervised Learning,"For languages without natural word boundaries, like Japanese and Chinese, word segmentation is a prerequisite for downstream analysis. For Japanese, segmentation is often done jointly with part of speech tagging, and this process is usually referred to as morphological analysis. 
Morphological analyzers are trained on data hand-annotated with segmentation boundaries and part of speech tags. A segmentation dictionary or character n-gram information is also provided as additional inputs to the model. Incorporating this extra information makes models large. Modern neural morphological analyzers can consume gigabytes of memory. We propose a compact alternative to these cumbersome approaches which do not rely on any externally provided n-gram or word representations. The model uses only unigram character embeddings, encodes them using either stacked bi-LSTM or a self-attention network, and independently infers both segmentation and part of speech information. The model is trained in an end-to-end and semi-supervised fashion, on labels produced by a state-of-the-art analyzer. We demonstrate that the proposed technique rivals performance of a previous dictionary-based state-of-the-art approach and can even surpass it when training with the combination of human-annotated and automatically-annotated data. Our model itself is significantly smaller than the dictionary-based one: it uses less than 15 megabytes of space.",,"Shrinking Japanese Morphological Analyzers With Neural Networks and Semi-supervised Learning. For languages without natural word boundaries, like Japanese and Chinese, word segmentation is a prerequisite for downstream analysis. For Japanese, segmentation is often done jointly with part of speech tagging, and this process is usually referred to as morphological analysis. Morphological analyzers are trained on data hand-annotated with segmentation boundaries and part of speech tags. A segmentation dictionary or character n-gram information is also provided as additional inputs to the model. Incorporating this extra information makes models large. Modern neural morphological analyzers can consume gigabytes of memory. We propose a compact alternative to these cumbersome approaches which do not rely on any externally provided n-gram or word representations. The model uses only unigram character embeddings, encodes them using either stacked bi-LSTM or a self-attention network, and independently infers both segmentation and part of speech information. The model is trained in an end-to-end and semi-supervised fashion, on labels produced by a state-of-the-art analyzer. We demonstrate that the proposed technique rivals performance of a previous dictionary-based state-of-the-art approach and can even surpass it when training with the combination of human-annotated and automatically-annotated data. Our model itself is significantly smaller than the dictionary-based one: it uses less than 15 megabytes of space.",2019
khayrallah-etal-2018-jhu,https://aclanthology.org/W18-6479.pdf,0,,,,,,,"The JHU Parallel Corpus Filtering Systems for WMT 2018. This work describes our submission to the WMT18 Parallel Corpus Filtering shared task. We use a slightly modified version of the Zipporah Corpus Filtering toolkit (Xu and Koehn, 2017), which computes an adequacy score and a fluency score on a sentence pair, and use a weighted sum of the scores as the selection criteria. This work differs from Zipporah in that we experiment with using the noisy corpus to be filtered to compute the combination weights, and thus avoids generating synthetic data as in standard Zipporah.",The {JHU} Parallel Corpus Filtering Systems for {WMT} 2018,"This work describes our submission to the WMT18 Parallel Corpus Filtering shared task. We use a slightly modified version of the Zipporah Corpus Filtering toolkit (Xu and Koehn, 2017), which computes an adequacy score and a fluency score on a sentence pair, and use a weighted sum of the scores as the selection criteria. This work differs from Zipporah in that we experiment with using the noisy corpus to be filtered to compute the combination weights, and thus avoids generating synthetic data as in standard Zipporah.",The JHU Parallel Corpus Filtering Systems for WMT 2018,"This work describes our submission to the WMT18 Parallel Corpus Filtering shared task. We use a slightly modified version of the Zipporah Corpus Filtering toolkit (Xu and Koehn, 2017), which computes an adequacy score and a fluency score on a sentence pair, and use a weighted sum of the scores as the selection criteria. This work differs from Zipporah in that we experiment with using the noisy corpus to be filtered to compute the combination weights, and thus avoids generating synthetic data as in standard Zipporah.",This work was in part supported by the IARPA MATERIAL project and a Google Faculty Research Award.,"The JHU Parallel Corpus Filtering Systems for WMT 2018. This work describes our submission to the WMT18 Parallel Corpus Filtering shared task. We use a slightly modified version of the Zipporah Corpus Filtering toolkit (Xu and Koehn, 2017), which computes an adequacy score and a fluency score on a sentence pair, and use a weighted sum of the scores as the selection criteria. This work differs from Zipporah in that we experiment with using the noisy corpus to be filtered to compute the combination weights, and thus avoids generating synthetic data as in standard Zipporah.",2018
ashihara-etal-2019-contextualized,https://aclanthology.org/D19-5552.pdf,0,,,,,,,"Contextualized context2vec. Lexical substitution ranks substitution candidates from the viewpoint of paraphrasability for a target word in a given sentence. There are two major approaches for lexical substitution: (1) generating contextualized word embeddings by assigning multiple embeddings to one word and (2) generating context embeddings using the sentence. Herein we propose a method that combines these two approaches to contextualize word embeddings for lexical substitution. Experiments demonstrate that our method outperforms the current state-of-the-art method. We also create CEFR-LP, a new evaluation dataset for the lexical substitution task. It has a wider coverage of substitution candidates than previous datasets and assigns English proficiency levels to all target words and substitution candidates.",Contextualized context2vec,"Lexical substitution ranks substitution candidates from the viewpoint of paraphrasability for a target word in a given sentence. There are two major approaches for lexical substitution: (1) generating contextualized word embeddings by assigning multiple embeddings to one word and (2) generating context embeddings using the sentence. Herein we propose a method that combines these two approaches to contextualize word embeddings for lexical substitution. Experiments demonstrate that our method outperforms the current state-of-the-art method. We also create CEFR-LP, a new evaluation dataset for the lexical substitution task. It has a wider coverage of substitution candidates than previous datasets and assigns English proficiency levels to all target words and substitution candidates.",Contextualized context2vec,"Lexical substitution ranks substitution candidates from the viewpoint of paraphrasability for a target word in a given sentence. There are two major approaches for lexical substitution: (1) generating contextualized word embeddings by assigning multiple embeddings to one word and (2) generating context embeddings using the sentence. Herein we propose a method that combines these two approaches to contextualize word embeddings for lexical substitution. Experiments demonstrate that our method outperforms the current state-of-the-art method. We also create CEFR-LP, a new evaluation dataset for the lexical substitution task. It has a wider coverage of substitution candidates than previous datasets and assigns English proficiency levels to all target words and substitution candidates.",We thank Professor Christopher G. Haswell for his valuable comments and discussions. We also thank the anonymous reviewers for their valuable comments. This research was supported by the KDDI Foundation.,"Contextualized context2vec. Lexical substitution ranks substitution candidates from the viewpoint of paraphrasability for a target word in a given sentence. There are two major approaches for lexical substitution: (1) generating contextualized word embeddings by assigning multiple embeddings to one word and (2) generating context embeddings using the sentence. Herein we propose a method that combines these two approaches to contextualize word embeddings for lexical substitution. Experiments demonstrate that our method outperforms the current state-of-the-art method. We also create CEFR-LP, a new evaluation dataset for the lexical substitution task. 
It has a wider coverage of substitution candidates than previous datasets and assigns English proficiency levels to all target words and substitution candidates.",2019
gonzalez-rubio-etal-2010-saturnalia,http://www.lrec-conf.org/proceedings/lrec2010/pdf/541_Paper.pdf,0,,,,,,,"Saturnalia: A Latin-Catalan Parallel Corpus for Statistical MT. Currently, a great effort is being carried out in the digitalisation of large historical document collections for preservation purposes. The documents in these collections are usually written in ancient languages, such as Latin or Greek, which limits the access of the general public to their content due to the language barrier. Therefore, digital libraries aim not only at storing raw images of digitalised documents, but also to annotate them with their corresponding text transcriptions and translations into modern languages. Unfortunately, ancient languages have at their disposal scarce electronic resources to be exploited by natural language processing techniques. This paper describes the compilation process of a novel Latin-Catalan parallel corpus as a new task for statistical machine translation (SMT). Preliminary experimental results are also reported using a state-of-the-art phrase-based SMT system. The results presented in this work reveal the complexity of the task and its challenging, but interesting nature for future development.",{S}aturnalia: A {L}atin-{C}atalan Parallel Corpus for Statistical {MT},"Currently, a great effort is being carried out in the digitalisation of large historical document collections for preservation purposes. The documents in these collections are usually written in ancient languages, such as Latin or Greek, which limits the access of the general public to their content due to the language barrier. Therefore, digital libraries aim not only at storing raw images of digitalised documents, but also to annotate them with their corresponding text transcriptions and translations into modern languages. Unfortunately, ancient languages have at their disposal scarce electronic resources to be exploited by natural language processing techniques. This paper describes the compilation process of a novel Latin-Catalan parallel corpus as a new task for statistical machine translation (SMT). Preliminary experimental results are also reported using a state-of-the-art phrase-based SMT system. The results presented in this work reveal the complexity of the task and its challenging, but interesting nature for future development.",Saturnalia: A Latin-Catalan Parallel Corpus for Statistical MT,"Currently, a great effort is being carried out in the digitalisation of large historical document collections for preservation purposes. The documents in these collections are usually written in ancient languages, such as Latin or Greek, which limits the access of the general public to their content due to the language barrier. Therefore, digital libraries aim not only at storing raw images of digitalised documents, but also to annotate them with their corresponding text transcriptions and translations into modern languages. Unfortunately, ancient languages have at their disposal scarce electronic resources to be exploited by natural language processing techniques. This paper describes the compilation process of a novel Latin-Catalan parallel corpus as a new task for statistical machine translation (SMT). Preliminary experimental results are also reported using a state-of-the-art phrase-based SMT system. The results presented in this work reveal the complexity of the task and its challenging, but interesting nature for future development.",,"Saturnalia: A Latin-Catalan Parallel Corpus for Statistical MT. 
Currently, a great effort is being carried out in the digitalisation of large historical document collections for preservation purposes. The documents in these collections are usually written in ancient languages, such as Latin or Greek, which limits the access of the general public to their content due to the language barrier. Therefore, digital libraries aim not only at storing raw images of digitalised documents, but also to annotate them with their corresponding text transcriptions and translations into modern languages. Unfortunately, ancient languages have at their disposal scarce electronic resources to be exploited by natural language processing techniques. This paper describes the compilation process of a novel Latin-Catalan parallel corpus as a new task for statistical machine translation (SMT). Preliminary experimental results are also reported using a state-of-the-art phrase-based SMT system. The results presented in this work reveal the complexity of the task and its challenging, but interesting nature for future development.",2010
li-etal-2013-multi,https://aclanthology.org/W13-3101.pdf,0,,,,,,,"Multi-document multilingual summarization corpus preparation, Part 1: Arabic, English, Greek, Chinese, Romanian. This document overviews the strategy, effort and aftermath of the MultiLing 2013 multilingual summarization data collection. We describe how the Data Contributors of MultiLing collected and generated a multilingual multi-document summarization corpus on 10 different languages: Arabic, Chinese, Czech, English, French, Greek, Hebrew, Hindi, Romanian and Spanish. We discuss the rationale behind the main decisions of the collection, the methodology used to generate the multilingual corpus, as well as challenges and problems faced per language. This paper overviews the work on Arabic, Chinese, English, Greek, and Romanian languages. A second part, covering the remaining languages, is available as a distinct paper in the MultiLing 2013 proceedings.","Multi-document multilingual summarization corpus preparation, Part 1: {A}rabic, {E}nglish, {G}reek, {C}hinese, {R}omanian","This document overviews the strategy, effort and aftermath of the MultiLing 2013 multilingual summarization data collection. We describe how the Data Contributors of MultiLing collected and generated a multilingual multi-document summarization corpus on 10 different languages: Arabic, Chinese, Czech, English, French, Greek, Hebrew, Hindi, Romanian and Spanish. We discuss the rationale behind the main decisions of the collection, the methodology used to generate the multilingual corpus, as well as challenges and problems faced per language. This paper overviews the work on Arabic, Chinese, English, Greek, and Romanian languages. A second part, covering the remaining languages, is available as a distinct paper in the MultiLing 2013 proceedings.","Multi-document multilingual summarization corpus preparation, Part 1: Arabic, English, Greek, Chinese, Romanian","This document overviews the strategy, effort and aftermath of the MultiLing 2013 multilingual summarization data collection. We describe how the Data Contributors of MultiLing collected and generated a multilingual multi-document summarization corpus on 10 different languages: Arabic, Chinese, Czech, English, French, Greek, Hebrew, Hindi, Romanian and Spanish. We discuss the rationale behind the main decisions of the collection, the methodology used to generate the multilingual corpus, as well as challenges and problems faced per language. This paper overviews the work on Arabic, Chinese, English, Greek, and Romanian languages. A second part, covering the remaining languages, is available as a distinct paper in the MultiLing 2013 proceedings.",,"Multi-document multilingual summarization corpus preparation, Part 1: Arabic, English, Greek, Chinese, Romanian. This document overviews the strategy, effort and aftermath of the MultiLing 2013 multilingual summarization data collection. We describe how the Data Contributors of MultiLing collected and generated a multilingual multi-document summarization corpus on 10 different languages: Arabic, Chinese, Czech, English, French, Greek, Hebrew, Hindi, Romanian and Spanish. We discuss the rationale behind the main decisions of the collection, the methodology used to generate the multilingual corpus, as well as challenges and problems faced per language. This paper overviews the work on Arabic, Chinese, English, Greek, and Romanian languages. A second part, covering the remaining languages, is available as a distinct paper in the MultiLing 2013 proceedings.",2013
liu-etal-2020-hiring,https://aclanthology.org/2020.acl-main.281.pdf,1,,,,decent_work_and_economy,,,"Hiring Now: A Skill-Aware Multi-Attention Model for Job Posting Generation. Writing a good job posting is a critical step in the recruiting process, but the task is often more difficult than many people think. It is challenging to specify the level of education, experience, relevant skills per the company information and job description. To this end, we propose a novel task of Job Posting Generation (JPG) that is cast as a conditional text generation problem to generate job requirements according to the job descriptions. To deal with this task, we devise a data-driven global Skill-Aware Multi-Attention generation model, named SAMA. Specifically, to model the complex mapping relationships between input and output, we design a hierarchical decoder that we first label the job description with multiple skills, then we generate a complete text guided by the skill labels. At the same time, to exploit the prior knowledge about the skills, we further construct a skill knowledge graph to capture the global prior knowledge of skills and refine the generated results. The proposed approach is evaluated on real-world job posting data. Experimental results clearly demonstrate the effectiveness of the proposed method 1 .",Hiring Now: A Skill-Aware Multi-Attention Model for Job Posting Generation,"Writing a good job posting is a critical step in the recruiting process, but the task is often more difficult than many people think. It is challenging to specify the level of education, experience, relevant skills per the company information and job description. To this end, we propose a novel task of Job Posting Generation (JPG) that is cast as a conditional text generation problem to generate job requirements according to the job descriptions. To deal with this task, we devise a data-driven global Skill-Aware Multi-Attention generation model, named SAMA. Specifically, to model the complex mapping relationships between input and output, we design a hierarchical decoder that we first label the job description with multiple skills, then we generate a complete text guided by the skill labels. At the same time, to exploit the prior knowledge about the skills, we further construct a skill knowledge graph to capture the global prior knowledge of skills and refine the generated results. The proposed approach is evaluated on real-world job posting data. Experimental results clearly demonstrate the effectiveness of the proposed method 1 .",Hiring Now: A Skill-Aware Multi-Attention Model for Job Posting Generation,"Writing a good job posting is a critical step in the recruiting process, but the task is often more difficult than many people think. It is challenging to specify the level of education, experience, relevant skills per the company information and job description. To this end, we propose a novel task of Job Posting Generation (JPG) that is cast as a conditional text generation problem to generate job requirements according to the job descriptions. To deal with this task, we devise a data-driven global Skill-Aware Multi-Attention generation model, named SAMA. Specifically, to model the complex mapping relationships between input and output, we design a hierarchical decoder that we first label the job description with multiple skills, then we generate a complete text guided by the skill labels. 
At the same time, to exploit the prior knowledge about the skills, we further construct a skill knowledge graph to capture the global prior knowledge of skills and refine the generated results. The proposed approach is evaluated on real-world job posting data. Experimental results clearly demonstrate the effectiveness of the proposed method 1 .",,"Hiring Now: A Skill-Aware Multi-Attention Model for Job Posting Generation. Writing a good job posting is a critical step in the recruiting process, but the task is often more difficult than many people think. It is challenging to specify the level of education, experience, relevant skills per the company information and job description. To this end, we propose a novel task of Job Posting Generation (JPG) that is cast as a conditional text generation problem to generate job requirements according to the job descriptions. To deal with this task, we devise a data-driven global Skill-Aware Multi-Attention generation model, named SAMA. Specifically, to model the complex mapping relationships between input and output, we design a hierarchical decoder that we first label the job description with multiple skills, then we generate a complete text guided by the skill labels. At the same time, to exploit the prior knowledge about the skills, we further construct a skill knowledge graph to capture the global prior knowledge of skills and refine the generated results. The proposed approach is evaluated on real-world job posting data. Experimental results clearly demonstrate the effectiveness of the proposed method 1 .",2020
zeng-etal-2019-neural,https://aclanthology.org/D19-1470.pdf,0,,,,,,,"Neural Conversation Recommendation with Online Interaction Modeling. The prevalent use of social media leads to a vast amount of online conversations being produced on a daily basis. It presents a concrete challenge for individuals to better discover and engage in social media discussions. In this paper, we present a novel framework to automatically recommend conversations to users based on their prior conversation behaviors. Built on neural collaborative filtering, our model explores deep semantic features that measure how a user's preferences match an ongoing conversation's context. Furthermore, to identify salient characteristics from interleaving user interactions, our model incorporates graph-structured networks, where both replying relations and temporal features are encoded as conversation context. Experimental results on two large-scale datasets collected from Twitter and Reddit show that our model yields better performance than previous state-of-the-art models, which only utilize lexical features and ignore past user interactions in the conversations.",Neural Conversation Recommendation with Online Interaction Modeling,"The prevalent use of social media leads to a vast amount of online conversations being produced on a daily basis. It presents a concrete challenge for individuals to better discover and engage in social media discussions. In this paper, we present a novel framework to automatically recommend conversations to users based on their prior conversation behaviors. Built on neural collaborative filtering, our model explores deep semantic features that measure how a user's preferences match an ongoing conversation's context. Furthermore, to identify salient characteristics from interleaving user interactions, our model incorporates graph-structured networks, where both replying relations and temporal features are encoded as conversation context. Experimental results on two large-scale datasets collected from Twitter and Reddit show that our model yields better performance than previous state-of-the-art models, which only utilize lexical features and ignore past user interactions in the conversations.",Neural Conversation Recommendation with Online Interaction Modeling,"The prevalent use of social media leads to a vast amount of online conversations being produced on a daily basis. It presents a concrete challenge for individuals to better discover and engage in social media discussions. In this paper, we present a novel framework to automatically recommend conversations to users based on their prior conversation behaviors. Built on neural collaborative filtering, our model explores deep semantic features that measure how a user's preferences match an ongoing conversation's context. Furthermore, to identify salient characteristics from interleaving user interactions, our model incorporates graph-structured networks, where both replying relations and temporal features are encoded as conversation context. Experimental results on two large-scale datasets collected from Twitter and Reddit show that our model yields better performance than previous state-of-the-art models, which only utilize lexical features and ignore past user interactions in the conversations.","This work is partially supported by the following HK grants: RGC-GRF (14232816, 14209416, 14204118, 3133237), NSFC (61877020) & ITF (ITS/335/18). Lu Wang is supported in part by National Science Foundation through Grants IIS-1566382 and IIS-1813341. 
We thank the three anonymous reviewers for the insightful suggestions on various aspects of this work.","Neural Conversation Recommendation with Online Interaction Modeling. The prevalent use of social media leads to a vast amount of online conversations being produced on a daily basis. It presents a concrete challenge for individuals to better discover and engage in social media discussions. In this paper, we present a novel framework to automatically recommend conversations to users based on their prior conversation behaviors. Built on neural collaborative filtering, our model explores deep semantic features that measure how a user's preferences match an ongoing conversation's context. Furthermore, to identify salient characteristics from interleaving user interactions, our model incorporates graph-structured networks, where both replying relations and temporal features are encoded as conversation context. Experimental results on two large-scale datasets collected from Twitter and Reddit show that our model yields better performance than previous state-of-the-art models, which only utilize lexical features and ignore past user interactions in the conversations.",2019
hope-etal-2021-extracting,https://aclanthology.org/2021.naacl-main.355.pdf,1,,,,health,industry_innovation_infrastructure,,"Extracting a Knowledge Base of Mechanisms from COVID-19 Papers. The COVID-19 pandemic has spawned a diverse body of scientific literature that is challenging to navigate, stimulating interest in automated tools to help find useful knowledge. We pursue the construction of a knowledge base (KB) of mechanisms-a fundamental concept across the sciences, which encompasses activities, functions and causal relations, ranging from cellular processes to economic impacts. We extract this information from the natural language of scientific papers by developing a broad, unified schema that strikes a balance between relevance and breadth. We annotate a dataset of mechanisms with our schema and train a model to extract mechanism relations from papers. Our experiments demonstrate the utility of our KB in supporting interdisciplinary scientific search over COVID-19 literature, outperforming the prominent PubMed search in a study with clinical experts. Our search engine, dataset and code are publicly available (https://covidmechanisms.apps.allenai.org/).",Extracting a Knowledge Base of Mechanisms from {COVID}-19 Papers,"The COVID-19 pandemic has spawned a diverse body of scientific literature that is challenging to navigate, stimulating interest in automated tools to help find useful knowledge. We pursue the construction of a knowledge base (KB) of mechanisms-a fundamental concept across the sciences, which encompasses activities, functions and causal relations, ranging from cellular processes to economic impacts. We extract this information from the natural language of scientific papers by developing a broad, unified schema that strikes a balance between relevance and breadth. We annotate a dataset of mechanisms with our schema and train a model to extract mechanism relations from papers. Our experiments demonstrate the utility of our KB in supporting interdisciplinary scientific search over COVID-19 literature, outperforming the prominent PubMed search in a study with clinical experts. Our search engine, dataset and code are publicly available (https://covidmechanisms.apps.allenai.org/).",Extracting a Knowledge Base of Mechanisms from COVID-19 Papers,"The COVID-19 pandemic has spawned a diverse body of scientific literature that is challenging to navigate, stimulating interest in automated tools to help find useful knowledge. We pursue the construction of a knowledge base (KB) of mechanisms-a fundamental concept across the sciences, which encompasses activities, functions and causal relations, ranging from cellular processes to economic impacts. 
We extract this information from the natural language of scientific papers by developing a broad, unified schema that strikes a balance between relevance and breadth. We annotate a dataset of mechanisms with our schema and train a model to extract mechanism relations from papers. Our experiments demonstrate the utility of our KB in supporting interdisciplinary scientific search over COVID-19 literature, outperforming the prominent PubMed search in a study with clinical experts. Our search engine, dataset and code are publicly available (https://covidmechanisms.apps.allenai.org/).","We like to acknowledge a grant from ONR N00014-18-1-2826. Authors would also like to thank anonymous reviewers, members of AI2, UW-NLP and the H2Lab at The University of Washington for their valuable feedback and comments.","Extracting a Knowledge Base of Mechanisms from COVID-19 Papers. The COVID-19 pandemic has spawned a diverse body of scientific literature that is challenging to navigate, stimulating interest in automated tools to help find useful knowledge. We pursue the construction of a knowledge base (KB) of mechanisms-a fundamental concept across the sciences, which encompasses activities, functions and causal relations, ranging from cellular processes to economic impacts. We extract this information from the natural language of scientific papers by developing a broad, unified schema that strikes a balance between relevance and breadth. We annotate a dataset of mechanisms with our schema and train a model to extract mechanism relations from papers. Our experiments demonstrate the utility of our KB in supporting interdisciplinary scientific search over COVID-19 literature, outperforming the prominent PubMed search in a study with clinical experts. Our search engine, dataset and code are publicly available (https://covidmechanisms.apps.allenai.org/).",2021
paris-vander-linden-1996-building,https://aclanthology.org/C96-2124.pdf,1,,,,industry_innovation_infrastructure,,,"Building Knowledge Bases for the Generation of Software Documentation. Automated text generation requires an underlying knowledge base from which to generate, which is often difficult to produce. Software documentation is one domain in which parts of this knowledge base may be derived automatically. In this paper, we describe DRAFTER, an authoring support tool for generating user-centred software documentation, and in particular, we describe how parts of its required knowledge base can be obtained automatically.",Building Knowledge Bases for the Generation of Software Documentation,"Automated text generation requires an underlying knowledge base from which to generate, which is often difficult to produce. Software documentation is one domain in which parts of this knowledge base may be derived automatically. In this paper, we describe DRAFTER, an authoring support tool for generating user-centred software documentation, and in particular, we describe how parts of its required knowledge base can be obtained automatically.",Building Knowledge Bases for the Generation of Software Documentation,"Automated text generation requires an underlying knowledge base from which to generate, which is often difficult to produce. Software documentation is one domain in which parts of this knowledge base may be derived automatically. In this paper, we describe DRAFTER, an authoring support tool for generating user-centred software documentation, and in particular, we describe how parts of its required knowledge base can be obtained automatically.",,"Building Knowledge Bases for the Generation of Software Documentation. Automated text generation requires an underlying knowledge base from which to generate, which is often difficult to produce. Software documentation is one domain in which parts of this knowledge base may be derived automatically. In this paper, we describe DRAFTER, an authoring support tool for generating user-centred software documentation, and in particular, we describe how parts of its required knowledge base can be obtained automatically.",1996
han-etal-2019-opennre,https://aclanthology.org/D19-3029.pdf,0,,,,,,,"OpenNRE: An Open and Extensible Toolkit for Neural Relation Extraction. OpenNRE is an open-source and extensible toolkit that provides a unified framework to implement neural models for relation extraction (RE). Specifically, by implementing typical RE methods, OpenNRE not only allows developers to train custom models to extract structured relational facts from the plain text but also supports quick model validation for researchers. Besides, OpenNRE provides various functional RE modules based on both TensorFlow and PyTorch to maintain sufficient modularity and extensibility, making it becomes easy to incorporate new models into the framework. Besides the toolkit, we also release an online system to meet real-time extraction without any training and deploying. Meanwhile, the online system can extract facts in various scenarios as well as aligning the extracted facts to Wikidata, which may benefit various downstream knowledge-driven applications (e.g., information retrieval and question answering). More details of the toolkit and online system can be obtained from http://github.com/ thunlp/OpenNRE.",{O}pen{NRE}: An Open and Extensible Toolkit for Neural Relation Extraction,"OpenNRE is an open-source and extensible toolkit that provides a unified framework to implement neural models for relation extraction (RE). Specifically, by implementing typical RE methods, OpenNRE not only allows developers to train custom models to extract structured relational facts from the plain text but also supports quick model validation for researchers. Besides, OpenNRE provides various functional RE modules based on both TensorFlow and PyTorch to maintain sufficient modularity and extensibility, making it becomes easy to incorporate new models into the framework. Besides the toolkit, we also release an online system to meet real-time extraction without any training and deploying. Meanwhile, the online system can extract facts in various scenarios as well as aligning the extracted facts to Wikidata, which may benefit various downstream knowledge-driven applications (e.g., information retrieval and question answering). More details of the toolkit and online system can be obtained from http://github.com/ thunlp/OpenNRE.",OpenNRE: An Open and Extensible Toolkit for Neural Relation Extraction,"OpenNRE is an open-source and extensible toolkit that provides a unified framework to implement neural models for relation extraction (RE). Specifically, by implementing typical RE methods, OpenNRE not only allows developers to train custom models to extract structured relational facts from the plain text but also supports quick model validation for researchers. Besides, OpenNRE provides various functional RE modules based on both TensorFlow and PyTorch to maintain sufficient modularity and extensibility, making it becomes easy to incorporate new models into the framework. Besides the toolkit, we also release an online system to meet real-time extraction without any training and deploying. Meanwhile, the online system can extract facts in various scenarios as well as aligning the extracted facts to Wikidata, which may benefit various downstream knowledge-driven applications (e.g., information retrieval and question answering). More details of the toolkit and online system can be obtained from http://github.com/ thunlp/OpenNRE.",This work is supported by the National Key Research and Development Program of China (No. 
,"OpenNRE: An Open and Extensible Toolkit for Neural Relation Extraction. OpenNRE is an open-source and extensible toolkit that provides a unified framework to implement neural models for relation extraction (RE). Specifically, by implementing typical RE methods, OpenNRE not only allows developers to train custom models to extract structured relational facts from the plain text but also supports quick model validation for researchers. Besides, OpenNRE provides various functional RE modules based on both TensorFlow and PyTorch to maintain sufficient modularity and extensibility, making it becomes easy to incorporate new models into the framework. Besides the toolkit, we also release an online system to meet real-time extraction without any training and deploying. Meanwhile, the online system can extract facts in various scenarios as well as aligning the extracted facts to Wikidata, which may benefit various downstream knowledge-driven applications (e.g., information retrieval and question answering). More details of the toolkit and online system can be obtained from http://github.com/ thunlp/OpenNRE.",2019
albogamy-ramsay-2016-fast,https://aclanthology.org/L16-1238.pdf,0,,,,,,,"Fast and Robust POS tagger for Arabic Tweets Using Agreement-based Bootstrapping. Part-of-Speech (POS) tagging is a key step in many NLP algorithms. However, tweets are difficult to POS tag because they are short, are not always written maintaining formal grammar and proper spelling, and abbreviations are often used to overcome their restricted lengths. Arabic tweets also show a further range of linguistic phenomena such as usage of different dialects, romanised Arabic and borrowing foreign words. In this paper, we present an evaluation and a detailed error analysis of state-of-the-art POS taggers for Arabic when applied to Arabic tweets. On the basis of this analysis, we combine normalisation and external knowledge to handle the domain noisiness and exploit bootstrapping to construct extra training data in order to improve POS tagging for Arabic tweets. Our results show significant improvements over the performance of a number of well-known taggers for Arabic.",Fast and Robust {POS} tagger for {A}rabic Tweets Using Agreement-based Bootstrapping,"Part-of-Speech (POS) tagging is a key step in many NLP algorithms. However, tweets are difficult to POS tag because they are short, are not always written maintaining formal grammar and proper spelling, and abbreviations are often used to overcome their restricted lengths. Arabic tweets also show a further range of linguistic phenomena such as usage of different dialects, romanised Arabic and borrowing foreign words. In this paper, we present an evaluation and a detailed error analysis of state-of-the-art POS taggers for Arabic when applied to Arabic tweets. On the basis of this analysis, we combine normalisation and external knowledge to handle the domain noisiness and exploit bootstrapping to construct extra training data in order to improve POS tagging for Arabic tweets. Our results show significant improvements over the performance of a number of well-known taggers for Arabic.",Fast and Robust POS tagger for Arabic Tweets Using Agreement-based Bootstrapping,"Part-of-Speech (POS) tagging is a key step in many NLP algorithms. However, tweets are difficult to POS tag because they are short, are not always written maintaining formal grammar and proper spelling, and abbreviations are often used to overcome their restricted lengths. Arabic tweets also show a further range of linguistic phenomena such as usage of different dialects, romanised Arabic and borrowing foreign words. In this paper, we present an evaluation and a detailed error analysis of state-of-the-art POS taggers for Arabic when applied to Arabic tweets. On the basis of this analysis, we combine normalisation and external knowledge to handle the domain noisiness and exploit bootstrapping to construct extra training data in order to improve POS tagging for Arabic tweets. Our results show significant improvements over the performance of a number of well-known taggers for Arabic.",The authors would like to thank the anonymous reviewers for their encouraging feedback and insights. Fahad would also like to thank King Saud University for their financial support. Allan Ramsay's contribution to this work was partially supported by Qatar National Research Foundation (grant NPRP-7-1334-6 -039).,"Fast and Robust POS tagger for Arabic Tweets Using Agreement-based Bootstrapping. Part-of-Speech (POS) tagging is a key step in many NLP algorithms. 
However, tweets are difficult to POS tag because they are short, are not always written maintaining formal grammar and proper spelling, and abbreviations are often used to overcome their restricted lengths. Arabic tweets also show a further range of linguistic phenomena such as usage of different dialects, romanised Arabic and borrowing foreign words. In this paper, we present an evaluation and a detailed error analysis of state-of-the-art POS taggers for Arabic when applied to Arabic tweets. On the basis of this analysis, we combine normalisation and external knowledge to handle the domain noisiness and exploit bootstrapping to construct extra training data in order to improve POS tagging for Arabic tweets. Our results show significant improvements over the performance of a number of well-known taggers for Arabic.",2016
danescu-niculescu-mizil-etal-2009-without,https://aclanthology.org/N09-1016.pdf,0,,,,,,,"Without a 'doubt'? Unsupervised Discovery of Downward-Entailing Operators. An important part of textual inference is making deductions involving monotonicity, that is, determining whether a given assertion entails restrictions or relaxations of that assertion. For instance, the statement 'We know the epidemic spread quickly' does not entail 'We know the epidemic spread quickly via fleas', but 'We doubt the epidemic spread quickly' entails 'We doubt the epidemic spread quickly via fleas'. Here, we present the first algorithm for the challenging lexical-semantics problem of learning linguistic constructions that, like 'doubt', are downward entailing (DE). Our algorithm is unsupervised, resource-lean, and effective, accurately recovering many DE operators that are missing from the handconstructed lists that textual-inference systems currently use.",Without a {'}doubt{'}? Unsupervised Discovery of Downward-Entailing Operators,"An important part of textual inference is making deductions involving monotonicity, that is, determining whether a given assertion entails restrictions or relaxations of that assertion. For instance, the statement 'We know the epidemic spread quickly' does not entail 'We know the epidemic spread quickly via fleas', but 'We doubt the epidemic spread quickly' entails 'We doubt the epidemic spread quickly via fleas'. Here, we present the first algorithm for the challenging lexical-semantics problem of learning linguistic constructions that, like 'doubt', are downward entailing (DE). Our algorithm is unsupervised, resource-lean, and effective, accurately recovering many DE operators that are missing from the handconstructed lists that textual-inference systems currently use.",Without a 'doubt'? Unsupervised Discovery of Downward-Entailing Operators,"An important part of textual inference is making deductions involving monotonicity, that is, determining whether a given assertion entails restrictions or relaxations of that assertion. For instance, the statement 'We know the epidemic spread quickly' does not entail 'We know the epidemic spread quickly via fleas', but 'We doubt the epidemic spread quickly' entails 'We doubt the epidemic spread quickly via fleas'. Here, we present the first algorithm for the challenging lexical-semantics problem of learning linguistic constructions that, like 'doubt', are downward entailing (DE). Our algorithm is unsupervised, resource-lean, and effective, accurately recovering many DE operators that are missing from the handconstructed lists that textual-inference systems currently use.","Acknowledgments We thank Roy Bar-Haim, Cleo Condoravdi, and Bill MacCartney for sharing their systems' lists and information about their work with us; Mats Rooth for helpful conversations; Alex Niculescu-Mizil for technical assistance; and Eugene Charniak for reassuring remarks. We also thank Marisa Ferrara Boston, Claire Cardie, Zhong Chen, Yejin Choi, Effi Georgala, Myle Ott, Stephen Purpura, and Ainur Yessenalina at Cornell University, the UT-Austin NLP group, Roy Bar-Haim, Bill MacCartney, and the anonymous reviewers for for their comments on this paper. This paper is based upon work supported in part by DHS grant N0014-07-1-0152, National Science Foundation grant No. BCS-0537606, a Yahoo! Research Alliance gift, a CU Provost's Award for Distinguished Scholarship, and a CU Institute for the Social Sciences Faculty Fellowship. 
Any opinions, findings, and conclusions or recommendations expressed are those of the authors and do not necessarily reflect the views or official policies, either expressed or implied, of any sponsoring institutions, the U.S. government, or any other entity.","Without a 'doubt'? Unsupervised Discovery of Downward-Entailing Operators. An important part of textual inference is making deductions involving monotonicity, that is, determining whether a given assertion entails restrictions or relaxations of that assertion. For instance, the statement 'We know the epidemic spread quickly' does not entail 'We know the epidemic spread quickly via fleas', but 'We doubt the epidemic spread quickly' entails 'We doubt the epidemic spread quickly via fleas'. Here, we present the first algorithm for the challenging lexical-semantics problem of learning linguistic constructions that, like 'doubt', are downward entailing (DE). Our algorithm is unsupervised, resource-lean, and effective, accurately recovering many DE operators that are missing from the handconstructed lists that textual-inference systems currently use.",2009
acl-1993-association,https://aclanthology.org/P93-1000.pdf,0,,,,,,,"31st Annual Meeting of the Association for Computational Linguistics. This volume contains the papers prepared for the 31 st Annual Meeting of the Association for Computational Linguistics, held 22-26 June 1993 at The Ohio State University in Columbus, Ohio. The cluster of papers in the final section stems from the student session, featured at the meeting for the 3rd successive year and testifying to the vigor of this emerging tradition. The number and quality of submitted papers was again gratifying, and all authors deserve our collective plaudits for the efforts they invested despite the well-known risks of submitting to a highly selective conference. It was their efforts that once again ensured a Meeting (and Proceedings) reflecting the highest standards in computational linguistics, offering a tour of some of the most significant recent advances and most lively research frontiers. Special thanks go to our invited speakers, Wolfgang Wahlster, Geoff Nunberg and Barbara Partee, for contributing their insights and panache to the conference; to Philip Cohen for concocting and coordinating a varied and relevant tutorial program, and to",31st Annual Meeting of the Association for Computational Linguistics,"This volume contains the papers prepared for the 31 st Annual Meeting of the Association for Computational Linguistics, held 22-26 June 1993 at The Ohio State University in Columbus, Ohio. The cluster of papers in the final section stems from the student session, featured at the meeting for the 3rd successive year and testifying to the vigor of this emerging tradition. The number and quality of submitted papers was again gratifying, and all authors deserve our collective plaudits for the efforts they invested despite the well-known risks of submitting to a highly selective conference. It was their efforts that once again ensured a Meeting (and Proceedings) reflecting the highest standards in computational linguistics, offering a tour of some of the most significant recent advances and most lively research frontiers. Special thanks go to our invited speakers, Wolfgang Wahlster, Geoff Nunberg and Barbara Partee, for contributing their insights and panache to the conference; to Philip Cohen for concocting and coordinating a varied and relevant tutorial program, and to",31st Annual Meeting of the Association for Computational Linguistics,"This volume contains the papers prepared for the 31 st Annual Meeting of the Association for Computational Linguistics, held 22-26 June 1993 at The Ohio State University in Columbus, Ohio. The cluster of papers in the final section stems from the student session, featured at the meeting for the 3rd successive year and testifying to the vigor of this emerging tradition. The number and quality of submitted papers was again gratifying, and all authors deserve our collective plaudits for the efforts they invested despite the well-known risks of submitting to a highly selective conference. It was their efforts that once again ensured a Meeting (and Proceedings) reflecting the highest standards in computational linguistics, offering a tour of some of the most significant recent advances and most lively research frontiers. 
Special thanks go to our invited speakers, Wolfgang Wahlster, Geoff Nunberg and Barbara Partee, for contributing their insights and panache to the conference; to Philip Cohen for concocting and coordinating a varied and relevant tutorial program, and to","We thank the reviewers for providing providing helpful, detailed reviews of the submissions, and for completing the reviews promptly. The careful thought that went into their review comments was obvious and impressive, and we are sure the student authors found the reviews beneficial. The Program Committee included the members of the Planning Committee and the following non-student members: Mary Dal- ","31st Annual Meeting of the Association for Computational Linguistics. This volume contains the papers prepared for the 31 st Annual Meeting of the Association for Computational Linguistics, held 22-26 June 1993 at The Ohio State University in Columbus, Ohio. The cluster of papers in the final section stems from the student session, featured at the meeting for the 3rd successive year and testifying to the vigor of this emerging tradition. The number and quality of submitted papers was again gratifying, and all authors deserve our collective plaudits for the efforts they invested despite the well-known risks of submitting to a highly selective conference. It was their efforts that once again ensured a Meeting (and Proceedings) reflecting the highest standards in computational linguistics, offering a tour of some of the most significant recent advances and most lively research frontiers. Special thanks go to our invited speakers, Wolfgang Wahlster, Geoff Nunberg and Barbara Partee, for contributing their insights and panache to the conference; to Philip Cohen for concocting and coordinating a varied and relevant tutorial program, and to",1993
hakala-etal-2013-evex,https://aclanthology.org/W13-2004.pdf,0,,,,,,,"EVEX in ST'13: Application of a large-scale text mining resource to event extraction and network construction. During the past few years, several novel text mining algorithms have been developed in the context of the BioNLP Shared Tasks on Event Extraction. These algorithms typically aim at extracting biomolecular interactions from text by inspecting only the context of one sentence. However, when humans interpret biomolecular research articles, they usually build upon extensive background knowledge of their favorite genes and pathways. To make such world knowledge available to a text mining algorithm, it could first be applied to all available literature to subsequently make a more informed decision on which predictions are consistent with the current known data. In this paper, we introduce our participation in the latest Shared Task using the largescale text mining resource EVEX which we previously implemented using state-ofthe-art algorithms, and which was applied to the whole of PubMed and PubMed Central. We participated in the Genia Event Extraction (GE) and Gene Regulation Network (GRN) tasks, ranking first in the former and fifth in the latter.",{EVEX} in {ST}{'}13: Application of a large-scale text mining resource to event extraction and network construction,"During the past few years, several novel text mining algorithms have been developed in the context of the BioNLP Shared Tasks on Event Extraction. These algorithms typically aim at extracting biomolecular interactions from text by inspecting only the context of one sentence. However, when humans interpret biomolecular research articles, they usually build upon extensive background knowledge of their favorite genes and pathways. To make such world knowledge available to a text mining algorithm, it could first be applied to all available literature to subsequently make a more informed decision on which predictions are consistent with the current known data. In this paper, we introduce our participation in the latest Shared Task using the largescale text mining resource EVEX which we previously implemented using state-ofthe-art algorithms, and which was applied to the whole of PubMed and PubMed Central. We participated in the Genia Event Extraction (GE) and Gene Regulation Network (GRN) tasks, ranking first in the former and fifth in the latter.",EVEX in ST'13: Application of a large-scale text mining resource to event extraction and network construction,"During the past few years, several novel text mining algorithms have been developed in the context of the BioNLP Shared Tasks on Event Extraction. These algorithms typically aim at extracting biomolecular interactions from text by inspecting only the context of one sentence. However, when humans interpret biomolecular research articles, they usually build upon extensive background knowledge of their favorite genes and pathways. To make such world knowledge available to a text mining algorithm, it could first be applied to all available literature to subsequently make a more informed decision on which predictions are consistent with the current known data. In this paper, we introduce our participation in the latest Shared Task using the largescale text mining resource EVEX which we previously implemented using state-ofthe-art algorithms, and which was applied to the whole of PubMed and PubMed Central. 
We participated in the Genia Event Extraction (GE) and Gene Regulation Network (GRN) tasks, ranking first in the former and fifth in the latter.","Computational resources were provided by CSC IT Center for Science Ltd., Espoo, Finland. The work of KH and FG was supported by the Academy of Finland, and of SVL by the Research Foundation Flanders (FWO). YVdP and SVL acknowledge the support from Ghent University (Multidisciplinary Research Partnership Bioinformatics: from nucleotides to networks).","EVEX in ST'13: Application of a large-scale text mining resource to event extraction and network construction. During the past few years, several novel text mining algorithms have been developed in the context of the BioNLP Shared Tasks on Event Extraction. These algorithms typically aim at extracting biomolecular interactions from text by inspecting only the context of one sentence. However, when humans interpret biomolecular research articles, they usually build upon extensive background knowledge of their favorite genes and pathways. To make such world knowledge available to a text mining algorithm, it could first be applied to all available literature to subsequently make a more informed decision on which predictions are consistent with the current known data. In this paper, we introduce our participation in the latest Shared Task using the largescale text mining resource EVEX which we previously implemented using state-ofthe-art algorithms, and which was applied to the whole of PubMed and PubMed Central. We participated in the Genia Event Extraction (GE) and Gene Regulation Network (GRN) tasks, ranking first in the former and fifth in the latter.",2013
banerjee-etal-2021-scrambled,https://aclanthology.org/2021.mtsummit-research.11.pdf,0,,,,,,,"Scrambled Translation Problem: A Problem of Denoising UNMT. In this paper, we identify an interesting kind of error in the output of Unsupervised Neural Machine Translation (UNMT) systems like Undreamt 1. We refer to this error type as Scrambled Translation problem. We observe that UNMT models which use word shuffle noise (as in case of Undreamt) can generate correct words, but fail to stitch them together to form phrases. As a result, words of the translated sentence look scrambled, resulting in decreased BLEU. We hypothesise that the reason behind scrambled translation problem is 'shuffling noise' which is introduced in every input sentence as a denoising strategy. To test our hypothesis, we experiment by retraining UNMT models with a simple retraining strategy. We stop the training of the Denoising UNMT model after a pre-decided number of iterations and resume the training for the remaining iterations-which number is also pre-decided-using original sentence as input without adding any noise. Our proposed solution achieves significant performance improvement UNMT models that train conventionally. We demonstrate these performance gains on four language pairs, viz., English-French, English-German, English-Spanish, Hindi-Punjabi. Our qualitative and quantitative analysis shows that the retraining strategy helps achieve better alignment as observed by attention heatmap and better phrasal translation, leading to statistically significant improvement in BLEU scores.",Scrambled Translation Problem: A Problem of Denoising {UNMT},"In this paper, we identify an interesting kind of error in the output of Unsupervised Neural Machine Translation (UNMT) systems like Undreamt 1. We refer to this error type as Scrambled Translation problem. We observe that UNMT models which use word shuffle noise (as in case of Undreamt) can generate correct words, but fail to stitch them together to form phrases. As a result, words of the translated sentence look scrambled, resulting in decreased BLEU. We hypothesise that the reason behind scrambled translation problem is 'shuffling noise' which is introduced in every input sentence as a denoising strategy. To test our hypothesis, we experiment by retraining UNMT models with a simple retraining strategy. We stop the training of the Denoising UNMT model after a pre-decided number of iterations and resume the training for the remaining iterations-which number is also pre-decided-using original sentence as input without adding any noise. Our proposed solution achieves significant performance improvement UNMT models that train conventionally. We demonstrate these performance gains on four language pairs, viz., English-French, English-German, English-Spanish, Hindi-Punjabi. Our qualitative and quantitative analysis shows that the retraining strategy helps achieve better alignment as observed by attention heatmap and better phrasal translation, leading to statistically significant improvement in BLEU scores.",Scrambled Translation Problem: A Problem of Denoising UNMT,"In this paper, we identify an interesting kind of error in the output of Unsupervised Neural Machine Translation (UNMT) systems like Undreamt 1. We refer to this error type as Scrambled Translation problem. We observe that UNMT models which use word shuffle noise (as in case of Undreamt) can generate correct words, but fail to stitch them together to form phrases. 
As a result, words of the translated sentence look scrambled, resulting in decreased BLEU. We hypothesise that the reason behind scrambled translation problem is 'shuffling noise' which is introduced in every input sentence as a denoising strategy. To test our hypothesis, we experiment by retraining UNMT models with a simple retraining strategy. We stop the training of the Denoising UNMT model after a pre-decided number of iterations and resume the training for the remaining iterations-which number is also pre-decided-using original sentence as input without adding any noise. Our proposed solution achieves significant performance improvement UNMT models that train conventionally. We demonstrate these performance gains on four language pairs, viz., English-French, English-German, English-Spanish, Hindi-Punjabi. Our qualitative and quantitative analysis shows that the retraining strategy helps achieve better alignment as observed by attention heatmap and better phrasal translation, leading to statistically significant improvement in BLEU scores.",,"Scrambled Translation Problem: A Problem of Denoising UNMT. In this paper, we identify an interesting kind of error in the output of Unsupervised Neural Machine Translation (UNMT) systems like Undreamt 1. We refer to this error type as Scrambled Translation problem. We observe that UNMT models which use word shuffle noise (as in case of Undreamt) can generate correct words, but fail to stitch them together to form phrases. As a result, words of the translated sentence look scrambled, resulting in decreased BLEU. We hypothesise that the reason behind scrambled translation problem is 'shuffling noise' which is introduced in every input sentence as a denoising strategy. To test our hypothesis, we experiment by retraining UNMT models with a simple retraining strategy. We stop the training of the Denoising UNMT model after a pre-decided number of iterations and resume the training for the remaining iterations-which number is also pre-decided-using original sentence as input without adding any noise. Our proposed solution achieves significant performance improvement UNMT models that train conventionally. We demonstrate these performance gains on four language pairs, viz., English-French, English-German, English-Spanish, Hindi-Punjabi. Our qualitative and quantitative analysis shows that the retraining strategy helps achieve better alignment as observed by attention heatmap and better phrasal translation, leading to statistically significant improvement in BLEU scores.",2021
jurgens-etal-2014-twitter,https://aclanthology.org/W14-3906.pdf,0,,,,,,,"Twitter Users \#CodeSwitch Hashtags! \#MoltoImportante \#wow. When code switching, individuals incorporate elements of multiple languages into the same utterance. While code switching has been studied extensively in formal and spoken contexts, its behavior and prevalence remains unexamined in many newer forms of electronic communication. The present study examines code switching in Twitter, focusing on instances where an author writes a post in one language and then includes a hashtag in a second language. In the first experiment, we perform a large scale analysis on the languages used in millions of posts to show that authors readily incorporate hashtags from other languages, and in a manual analysis of a subset the hashtags, reveal prolific code switching, with code switching occurring for some hashtags in over twenty languages. In the second experiment, French and English posts from three bilingual cities are analyzed for their code switching frequency and its content.",{T}witter Users {\#}{C}ode{S}witch Hashtags! {\#}{M}olto{I}mportante {\#}wow,"When code switching, individuals incorporate elements of multiple languages into the same utterance. While code switching has been studied extensively in formal and spoken contexts, its behavior and prevalence remains unexamined in many newer forms of electronic communication. The present study examines code switching in Twitter, focusing on instances where an author writes a post in one language and then includes a hashtag in a second language. In the first experiment, we perform a large scale analysis on the languages used in millions of posts to show that authors readily incorporate hashtags from other languages, and in a manual analysis of a subset the hashtags, reveal prolific code switching, with code switching occurring for some hashtags in over twenty languages. In the second experiment, French and English posts from three bilingual cities are analyzed for their code switching frequency and its content.",Twitter Users \#CodeSwitch Hashtags! \#MoltoImportante \#wow,"When code switching, individuals incorporate elements of multiple languages into the same utterance. While code switching has been studied extensively in formal and spoken contexts, its behavior and prevalence remains unexamined in many newer forms of electronic communication. The present study examines code switching in Twitter, focusing on instances where an author writes a post in one language and then includes a hashtag in a second language. In the first experiment, we perform a large scale analysis on the languages used in millions of posts to show that authors readily incorporate hashtags from other languages, and in a manual analysis of a subset the hashtags, reveal prolific code switching, with code switching occurring for some hashtags in over twenty languages. In the second experiment, French and English posts from three bilingual cities are analyzed for their code switching frequency and its content.",,"Twitter Users \#CodeSwitch Hashtags! \#MoltoImportante \#wow. When code switching, individuals incorporate elements of multiple languages into the same utterance. While code switching has been studied extensively in formal and spoken contexts, its behavior and prevalence remains unexamined in many newer forms of electronic communication. 
The present study examines code switching in Twitter, focusing on instances where an author writes a post in one language and then includes a hashtag in a second language. In the first experiment, we perform a large scale analysis on the languages used in millions of posts to show that authors readily incorporate hashtags from other languages, and in a manual analysis of a subset the hashtags, reveal prolific code switching, with code switching occurring for some hashtags in over twenty languages. In the second experiment, French and English posts from three bilingual cities are analyzed for their code switching frequency and its content.",2014
ilinykh-dobnik-2022-attention,https://aclanthology.org/2022.findings-acl.320.pdf,0,,,,,,,"Attention as Grounding: Exploring Textual and Cross-Modal Attention on Entities and Relations in Language-and-Vision Transformer. We explore how a multi-modal transformer trained for generation of longer image descriptions learns syntactic and semantic representations about entities and relations grounded in objects at the level of masked self-attention (text generation) and crossmodal attention (information fusion). We observe that cross-attention learns the visual grounding of noun phrases into objects and high-level semantic information about spatial relations, while text-to-text attention captures low-level syntactic knowledge between words. This concludes that language models in a multi-modal task learn different semantic information about objects and relations cross-modally and uni-modally (text-only).",Attention as Grounding: Exploring Textual and Cross-Modal Attention on Entities and Relations in Language-and-Vision Transformer,"We explore how a multi-modal transformer trained for generation of longer image descriptions learns syntactic and semantic representations about entities and relations grounded in objects at the level of masked self-attention (text generation) and crossmodal attention (information fusion). We observe that cross-attention learns the visual grounding of noun phrases into objects and high-level semantic information about spatial relations, while text-to-text attention captures low-level syntactic knowledge between words. This concludes that language models in a multi-modal task learn different semantic information about objects and relations cross-modally and uni-modally (text-only).",Attention as Grounding: Exploring Textual and Cross-Modal Attention on Entities and Relations in Language-and-Vision Transformer,"We explore how a multi-modal transformer trained for generation of longer image descriptions learns syntactic and semantic representations about entities and relations grounded in objects at the level of masked self-attention (text generation) and crossmodal attention (information fusion). We observe that cross-attention learns the visual grounding of noun phrases into objects and high-level semantic information about spatial relations, while text-to-text attention captures low-level syntactic knowledge between words. This concludes that language models in a multi-modal task learn different semantic information about objects and relations cross-modally and uni-modally (text-only).",The research reported in this paper was supported by a grant from the Swedish Research Council (VR project 2014-39) for the establishment of the Centre for Linguistic Theory and Studies in Probability (CLASP) at the University of Gothenburg.,"Attention as Grounding: Exploring Textual and Cross-Modal Attention on Entities and Relations in Language-and-Vision Transformer. We explore how a multi-modal transformer trained for generation of longer image descriptions learns syntactic and semantic representations about entities and relations grounded in objects at the level of masked self-attention (text generation) and crossmodal attention (information fusion). We observe that cross-attention learns the visual grounding of noun phrases into objects and high-level semantic information about spatial relations, while text-to-text attention captures low-level syntactic knowledge between words. 
This concludes that language models in a multi-modal task learn different semantic information about objects and relations cross-modally and uni-modally (text-only).",2022
hovy-2010-distributional,https://aclanthology.org/W10-3401.pdf,0,,,,,,,"Distributional Semantics and the Lexicon. The lexicons used in computational linguistics systems contain morphological, syntactic, and occasionally also some semantic information (such as definitions, pointers to an ontology, verb frame filler preferences, etc.). But the human cognitive lexicon contains a great deal more, crucially, expectations about how a word tends to combine with others: not just general information-extraction-like patterns, but specific instantial expectations. Such information is very useful when it comes to listening in bad aural conditions and reading texts in which background information is taken for granted; without such specific expectation, one would be hard-pressed (and computers are completely unable) to form coherent and richly connected multi-sentence interpretations.
Over the past few years, NLP work has increasingly treated topic signature word distributions (also called 'context vectors', 'topic models', etc.) as a de facto replacement for semantics. Whether the task is wordsense disambiguation, certain forms of textual entailment, information extraction, paraphrase learning, and so on, it turns out to be very useful to consider a word(sense) as being defined by the distribution of word(senses) that regularly accompany it (in the classic words of Firth, ""you shall know a word by the company it keeps""). And this is true not only for individual wordsenses, but also for larger units such as topics: the product of LDA and similar topic characterization engines is similar.",Distributional Semantics and the Lexicon,"The lexicons used in computational linguistics systems contain morphological, syntactic, and occasionally also some semantic information (such as definitions, pointers to an ontology, verb frame filler preferences, etc.). But the human cognitive lexicon contains a great deal more, crucially, expectations about how a word tends to combine with others: not just general information-extraction-like patterns, but specific instantial expectations. Such information is very useful when it comes to listening in bad aural conditions and reading texts in which background information is taken for granted; without such specific expectation, one would be hard-pressed (and computers are completely unable) to form coherent and richly connected multi-sentence interpretations.
Over the past few years, NLP work has increasingly treated topic signature word distributions (also called 'context vectors', 'topic models', etc.) as a de facto replacement for semantics. Whether the task is wordsense disambiguation, certain forms of textual entailment, information extraction, paraphrase learning, and so on, it turns out to be very useful to consider a word(sense) as being defined by the distribution of word(senses) that regularly accompany it (in the classic words of Firth, ""you shall know a word by the company it keeps""). And this is true not only for individual wordsenses, but also for larger units such as topics: the product of LDA and similar topic characterization engines is similar.",Distributional Semantics and the Lexicon,"The lexicons used in computational linguistics systems contain morphological, syntactic, and occasionally also some semantic information (such as definitions, pointers to an ontology, verb frame filler preferences, etc.). But the human cognitive lexicon contains a great deal more, crucially, expectations about how a word tends to combine with others: not just general information-extraction-like patterns, but specific instantial expectations. Such information is very useful when it comes to listening in bad aural conditions and reading texts in which background information is taken for granted; without such specific expectation, one would be hard-pressed (and computers are completely unable) to form coherent and richly connected multi-sentence interpretations.
Over the past few years, NLP work has increasingly treated topic signature word distributions (also called 'context vectors', 'topic models', etc.) as a de facto replacement for semantics. Whether the task is wordsense disambiguation, certain forms of textual entailment, information extraction, paraphrase learning, and so on, it turns out to be very useful to consider a word(sense) as being defined by the distribution of word(senses) that regularly accompany it (in the classic words of Firth, ""you shall know a word by the company it keeps""). And this is true not only for individual wordsenses, but also for larger units such as topics: the product of LDA and similar topic characterization engines is similar.",,"Distributional Semantics and the Lexicon. The lexicons used in computational linguistics systems contain morphological, syntactic, and occasionally also some semantic information (such as definitions, pointers to an ontology, verb frame filler preferences, etc.). But the human cognitive lexicon contains a great deal more, crucially, expectations about how a word tends to combine with others: not just general information-extraction-like patterns, but specific instantial expectations. Such information is very useful when it comes to listening in bad aural conditions and reading texts in which background information is taken for granted; without such specific expectation, one would be hard-pressed (and computers are completely unable) to form coherent and richly connected multi-sentence interpretations.
Over the past few years, NLP work has increasingly treated topic signature word distributions (also called 'context vectors', 'topic models', etc.) as a de facto replacement for semantics. Whether the task is wordsense disambiguation, certain forms of textual entailment, information extraction, paraphrase learning, and so on, it turns out to be very useful to consider a word(sense) as being defined by the distribution of word(senses) that regularly accompany it (in the classic words of Firth, ""you shall know a word by the company it keeps""). And this is true not only for individual wordsenses, but also for larger units such as topics: the product of LDA and similar topic characterization engines is similar.",2010
osenova-etal-2010-exploring,http://www.lrec-conf.org/proceedings/lrec2010/pdf/721_Paper.pdf,0,,,,,,,"Exploring Co-Reference Chains for Concept Annotation of Domain Texts. The paper explores the co-reference chains as a way for improving the density of concept annotation over domain texts. The idea extends authors' previous work on relating the ontology to the text terms in two domains-IT and textile. Here IT domain is used. The challenge is to enhance relations among concepts instead of text entities, the latter pursued in most works. Our ultimate goal is to exploit these additional chains for concept disambiguation as well as sparseness resolution at concept level. First, a gold standard was prepared with manually connected links among concepts, anaphoric pronouns and contextual equivalents. This step was necessary not only for test purposes, but also for better orientation in the co-referent types and distribution. Then, two automatic systems were tested on the gold standard. Note that these systems were not designed specially for concept chaining. The conclusion is that the state-of-the-art co-reference resolution systems might address the concept sparseness problem, but not so much the concept disambiguation task. For the latter, word-sense disambiguation systems have to be integrated.",Exploring Co-Reference Chains for Concept Annotation of Domain Texts,"The paper explores the co-reference chains as a way for improving the density of concept annotation over domain texts. The idea extends authors' previous work on relating the ontology to the text terms in two domains-IT and textile. Here IT domain is used. The challenge is to enhance relations among concepts instead of text entities, the latter pursued in most works. Our ultimate goal is to exploit these additional chains for concept disambiguation as well as sparseness resolution at concept level. First, a gold standard was prepared with manually connected links among concepts, anaphoric pronouns and contextual equivalents. This step was necessary not only for test purposes, but also for better orientation in the co-referent types and distribution. Then, two automatic systems were tested on the gold standard. Note that these systems were not designed specially for concept chaining. The conclusion is that the state-of-the-art co-reference resolution systems might address the concept sparseness problem, but not so much the concept disambiguation task. For the latter, word-sense disambiguation systems have to be integrated.",Exploring Co-Reference Chains for Concept Annotation of Domain Texts,"The paper explores the co-reference chains as a way for improving the density of concept annotation over domain texts. The idea extends authors' previous work on relating the ontology to the text terms in two domains-IT and textile. Here IT domain is used. The challenge is to enhance relations among concepts instead of text entities, the latter pursued in most works. Our ultimate goal is to exploit these additional chains for concept disambiguation as well as sparseness resolution at concept level. First, a gold standard was prepared with manually connected links among concepts, anaphoric pronouns and contextual equivalents. This step was necessary not only for test purposes, but also for better orientation in the co-referent types and distribution. Then, two automatic systems were tested on the gold standard. Note that these systems were not designed specially for concept chaining. 
The conclusion is that the state-of-the-art co-reference resolution systems might address the concept sparseness problem, but not so much the concept disambiguation task. For the latter, word-sense disambiguation systems have to be integrated.",The work reported here is done within the context of the EU project -Language Technology for Lifelong Learning (LTfLL). We would also like to thank the three anonymous reviewers for their valuable remarks as specialists and readers.,"Exploring Co-Reference Chains for Concept Annotation of Domain Texts. The paper explores the co-reference chains as a way for improving the density of concept annotation over domain texts. The idea extends authors' previous work on relating the ontology to the text terms in two domains-IT and textile. Here IT domain is used. The challenge is to enhance relations among concepts instead of text entities, the latter pursued in most works. Our ultimate goal is to exploit these additional chains for concept disambiguation as well as sparseness resolution at concept level. First, a gold standard was prepared with manually connected links among concepts, anaphoric pronouns and contextual equivalents. This step was necessary not only for test purposes, but also for better orientation in the co-referent types and distribution. Then, two automatic systems were tested on the gold standard. Note that these systems were not designed specially for concept chaining. The conclusion is that the state-of-the-art co-reference resolution systems might address the concept sparseness problem, but not so much the concept disambiguation task. For the latter, word-sense disambiguation systems have to be integrated.",2010
nallapati-etal-2016-abstractive,https://aclanthology.org/K16-1028.pdf,0,,,,,,,"Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond. In this work, we model abstractive text summarization using Attentional Encoder-Decoder Recurrent Neural Networks, and show that they achieve state-of-the-art performance on two different corpora. We propose several novel models that address critical problems in summarization that are not adequately modeled by the basic architecture, such as modeling keywords, capturing the hierarchy of sentence-to-word structure, and emitting words that are rare or unseen at training time. Our work shows that many of our proposed models contribute to further improvement in performance. We also propose a new dataset consisting of multi-sentence summaries, and establish performance benchmarks for further research.",Abstractive Text Summarization using Sequence-to-sequence {RNN}s and Beyond,"In this work, we model abstractive text summarization using Attentional Encoder-Decoder Recurrent Neural Networks, and show that they achieve state-of-the-art performance on two different corpora. We propose several novel models that address critical problems in summarization that are not adequately modeled by the basic architecture, such as modeling keywords, capturing the hierarchy of sentence-to-word structure, and emitting words that are rare or unseen at training time. Our work shows that many of our proposed models contribute to further improvement in performance. We also propose a new dataset consisting of multi-sentence summaries, and establish performance benchmarks for further research.",Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond,"In this work, we model abstractive text summarization using Attentional Encoder-Decoder Recurrent Neural Networks, and show that they achieve state-of-the-art performance on two different corpora. We propose several novel models that address critical problems in summarization that are not adequately modeled by the basic architecture, such as modeling keywords, capturing the hierarchy of sentence-to-word structure, and emitting words that are rare or unseen at training time. Our work shows that many of our proposed models contribute to further improvement in performance. We also propose a new dataset consisting of multi-sentence summaries, and establish performance benchmarks for further research.",,"Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond. In this work, we model abstractive text summarization using Attentional Encoder-Decoder Recurrent Neural Networks, and show that they achieve state-of-the-art performance on two different corpora. We propose several novel models that address critical problems in summarization that are not adequately modeled by the basic architecture, such as modeling keywords, capturing the hierarchy of sentence-to-word structure, and emitting words that are rare or unseen at training time. Our work shows that many of our proposed models contribute to further improvement in performance. We also propose a new dataset consisting of multi-sentence summaries, and establish performance benchmarks for further research.",2016
reinhard-gibbon-1991-prosodic,https://aclanthology.org/E91-1023.pdf,0,,,,,,,"Prosodic Inheritance and Morphological Generalisations. Prosodic Inheritance (PI) morphology provides uniform treatment of both concatenative and non-concatenative morphological and phonological generalisations using default inheritance. Models of an extensive range of German Umlaut and Arabic intercalation facts, implemented in DATR, show that the PI approach also covers 'hard cases' more homogeneously and more extensively than previous computational treatments.",Prosodic Inheritance and Morphological Generalisations,"Prosodic Inheritance (PI) morphology provides uniform treatment of both concatenative and non-concatenative morphological and phonological generalisations using default inheritance. Models of an extensive range of German Umlaut and Arabic intercalation facts, implemented in DATR, show that the PI approach also covers 'hard cases' more homogeneously and more extensively than previous computational treatments.",Prosodic Inheritance and Morphological Generalisations,"Prosodic Inheritance (PI) morphology provides uniform treatment of both concatenative and non-concatenative morphological and phonological generalisations using default inheritance. Models of an extensive range of German Umlaut and Arabic intercalation facts, implemented in DATR, show that the PI approach also covers 'hard cases' more homogeneously and more extensively than previous computational treatments.",,"Prosodic Inheritance and Morphological Generalisations. Prosodic Inheritance (PI) morphology provides uniform treatment of both concatenative and non-concatenative morphological and phonological generalisations using default inheritance. Models of an extensive range of German Umlaut and Arabic intercalation facts, implemented in DATR, show that the PI approach also covers 'hard cases' more homogeneously and more extensively than previous computational treatments.",1991
walker-etal-2012-annotated,http://www.lrec-conf.org/proceedings/lrec2012/pdf/1114_Paper.pdf,0,,,,,,,"An Annotated Corpus of Film Dialogue for Learning and Characterizing Character Style. Interactive story systems often involve dialogue with virtual dramatic characters. However, to date most character dialogue is written by hand. One way to ease the authoring process is to (semi-)automatically generate dialogue based on film characters. We extract features from dialogue of film characters in leading roles. Then we use these character-based features to drive our language generator to produce interesting utterances. This paper describes a corpus of film dialogue that we have collected from the IMSDb archive and annotated for linguistic structures and character archetypes. We extract different sets of features using external sources such as LIWC and SentiWordNet as well as using our own written scripts. The automation of feature extraction also eases the process of acquiring additional film scripts. We briefly show how film characters can be represented by models learned from the corpus, how the models can be distinguished based on different categories such as gender and film genre, and how they can be applied to a language generator to generate utterances that can be perceived as being similar to the intended character model.",An Annotated Corpus of Film Dialogue for Learning and Characterizing Character Style,"Interactive story systems often involve dialogue with virtual dramatic characters. However, to date most character dialogue is written by hand. One way to ease the authoring process is to (semi-)automatically generate dialogue based on film characters. We extract features from dialogue of film characters in leading roles. Then we use these character-based features to drive our language generator to produce interesting utterances. This paper describes a corpus of film dialogue that we have collected from the IMSDb archive and annotated for linguistic structures and character archetypes. We extract different sets of features using external sources such as LIWC and SentiWordNet as well as using our own written scripts. The automation of feature extraction also eases the process of acquiring additional film scripts. We briefly show how film characters can be represented by models learned from the corpus, how the models can be distinguished based on different categories such as gender and film genre, and how they can be applied to a language generator to generate utterances that can be perceived as being similar to the intended character model.",An Annotated Corpus of Film Dialogue for Learning and Characterizing Character Style,"Interactive story systems often involve dialogue with virtual dramatic characters. However, to date most character dialogue is written by hand. One way to ease the authoring process is to (semi-)automatically generate dialogue based on film characters. We extract features from dialogue of film characters in leading roles. Then we use these character-based features to drive our language generator to produce interesting utterances. This paper describes a corpus of film dialogue that we have collected from the IMSDb archive and annotated for linguistic structures and character archetypes. We extract different sets of features using external sources such as LIWC and SentiWordNet as well as using our own written scripts. The automation of feature extraction also eases the process of acquiring additional film scripts. 
We briefly show how film characters can be represented by models learned from the corpus, how the models can be distinguished based on different categories such as gender and film genre, and how they can be applied to a language generator to generate utterances that can be perceived as being similar to the intended character model.",,"An Annotated Corpus of Film Dialogue for Learning and Characterizing Character Style. Interactive story systems often involve dialogue with virtual dramatic characters. However, to date most character dialogue is written by hand. One way to ease the authoring process is to (semi-)automatically generate dialogue based on film characters. We extract features from dialogue of film characters in leading roles. Then we use these character-based features to drive our language generator to produce interesting utterances. This paper describes a corpus of film dialogue that we have collected from the IMSDb archive and annotated for linguistic structures and character archetypes. We extract different sets of features using external sources such as LIWC and SentiWordNet as well as using our own written scripts. The automation of feature extraction also eases the process of acquiring additional film scripts. We briefly show how film characters can be represented by models learned from the corpus, how the models can be distinguished based on different categories such as gender and film genre, and how they can be applied to a language generator to generate utterances that can be perceived as being similar to the intended character model.",2012
zhai-huang-2015-pilot,https://aclanthology.org/2015.mtsummit-papers.5.pdf,0,,,,,,,"A pilot study towards end-to-end MT training. Typical MT training involves several stages, including word alignment, rule extraction, translation model estimation, and parameter tuning. In this paper, different from the traditional pipeline, we investigate the possibility of end-to-end MT training, and propose a framework which combines rule induction and parameter tuning in one single module. Preliminary experiments show that our learned model achieves comparable translation quality to the traditional MT training pipeline. * Work done while Prof. Liang Huang was in City University of New York.",A pilot study towards end-to-end {MT} training,"Typical MT training involves several stages, including word alignment, rule extraction, translation model estimation, and parameter tuning. In this paper, different from the traditional pipeline, we investigate the possibility of end-to-end MT training, and propose a framework which combines rule induction and parameter tuning in one single module. Preliminary experiments show that our learned model achieves comparable translation quality to the traditional MT training pipeline. * Work done while Prof. Liang Huang was in City University of New York.",A pilot study towards end-to-end MT training,"Typical MT training involves several stages, including word alignment, rule extraction, translation model estimation, and parameter tuning. In this paper, different from the traditional pipeline, we investigate the possibility of end-to-end MT training, and propose a framework which combines rule induction and parameter tuning in one single module. Preliminary experiments show that our learned model achieves comparable translation quality to the traditional MT training pipeline. * Work done while Prof. Liang Huang was in City University of New York.","We thank the three anonymous reviewers for the valuable comments, and Kai Zhao for discussions. This project was supported in part by DARPA FA8750-13-2-0041 (DEFT), NSF IIS-1449278, and a Google Faculty Research Award.","A pilot study towards end-to-end MT training. Typical MT training involves several stages, including word alignment, rule extraction, translation model estimation, and parameter tuning. In this paper, different from the traditional pipeline, we investigate the possibility of end-to-end MT training, and propose a framework which combines rule induction and parameter tuning in one single module. Preliminary experiments show that our learned model achieves comparable translation quality to the traditional MT training pipeline. * Work done while Prof. Liang Huang was in City University of New York.",2015
kucuk-etal-2014-named,http://www.lrec-conf.org/proceedings/lrec2014/pdf/380_Paper.pdf,0,,,,,,,"Named Entity Recognition on Turkish Tweets. Various recent studies show that the performance of named entity recognition (NER) systems developed for well-formed text types drops significantly when applied to tweets. The only existing study for the highly inflected agglutinative language Turkish reports a drop in F-Measure from 91% to 19% when ported from news articles to tweets. In this study, we present a new named entity-annotated tweet corpus and a detailed analysis of the various tweet-specific linguistic phenomena. We perform comparative NER experiments with a rule-based multilingual NER system adapted to Turkish on three corpora: a news corpus, our new tweet corpus, and another tweet corpus. Based on the analysis and the experimentation results, we suggest system features required to improve NER results for social media like Twitter.",Named Entity Recognition on {T}urkish Tweets,"Various recent studies show that the performance of named entity recognition (NER) systems developed for well-formed text types drops significantly when applied to tweets. The only existing study for the highly inflected agglutinative language Turkish reports a drop in F-Measure from 91% to 19% when ported from news articles to tweets. In this study, we present a new named entity-annotated tweet corpus and a detailed analysis of the various tweet-specific linguistic phenomena. We perform comparative NER experiments with a rule-based multilingual NER system adapted to Turkish on three corpora: a news corpus, our new tweet corpus, and another tweet corpus. Based on the analysis and the experimentation results, we suggest system features required to improve NER results for social media like Twitter.",Named Entity Recognition on Turkish Tweets,"Various recent studies show that the performance of named entity recognition (NER) systems developed for well-formed text types drops significantly when applied to tweets. The only existing study for the highly inflected agglutinative language Turkish reports a drop in F-Measure from 91% to 19% when ported from news articles to tweets. In this study, we present a new named entity-annotated tweet corpus and a detailed analysis of the various tweet-specific linguistic phenomena. We perform comparative NER experiments with a rule-based multilingual NER system adapted to Turkish on three corpora: a news corpus, our new tweet corpus, and another tweet corpus. Based on the analysis and the experimentation results, we suggest system features required to improve NER results for social media like Twitter.",This study is supported in part by a postdoctoral research grant from TÜBİTAK.,"Named Entity Recognition on Turkish Tweets. Various recent studies show that the performance of named entity recognition (NER) systems developed for well-formed text types drops significantly when applied to tweets. The only existing study for the highly inflected agglutinative language Turkish reports a drop in F-Measure from 91% to 19% when ported from news articles to tweets. In this study, we present a new named entity-annotated tweet corpus and a detailed analysis of the various tweet-specific linguistic phenomena. We perform comparative NER experiments with a rule-based multilingual NER system adapted to Turkish on three corpora: a news corpus, our new tweet corpus, and another tweet corpus. 
Based on the analysis and the experimentation results, we suggest system features required to improve NER results for social media like Twitter.",2014
gilbert-riloff-2013-domain,https://aclanthology.org/P13-2015.pdf,0,,,,,,,"Domain-Specific Coreference Resolution with Lexicalized Features. Most coreference resolvers rely heavily on string matching, syntactic properties, and semantic attributes of words, but they lack the ability to make decisions based on individual words. In this paper, we explore the benefits of lexicalized features in the setting of domain-specific coreference resolution. We show that adding lexicalized features to off-the-shelf coreference resolvers yields significant performance gains on four domain-specific data sets and with two types of coreference resolution architectures.",Domain-Specific Coreference Resolution with Lexicalized Features,"Most coreference resolvers rely heavily on string matching, syntactic properties, and semantic attributes of words, but they lack the ability to make decisions based on individual words. In this paper, we explore the benefits of lexicalized features in the setting of domain-specific coreference resolution. We show that adding lexicalized features to off-the-shelf coreference resolvers yields significant performance gains on four domain-specific data sets and with two types of coreference resolution architectures.",Domain-Specific Coreference Resolution with Lexicalized Features,"Most coreference resolvers rely heavily on string matching, syntactic properties, and semantic attributes of words, but they lack the ability to make decisions based on individual words. In this paper, we explore the benefits of lexicalized features in the setting of domain-specific coreference resolution. We show that adding lexicalized features to off-the-shelf coreference resolvers yields significant performance gains on four domain-specific data sets and with two types of coreference resolution architectures.","This material is based upon work supported by the National Science Foundation under Grant No. IIS-1018314 and the Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0172. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the DARPA, AFRL, or the U.S. government.","Domain-Specific Coreference Resolution with Lexicalized Features. Most coreference resolvers rely heavily on string matching, syntactic properties, and semantic attributes of words, but they lack the ability to make decisions based on individual words. In this paper, we explore the benefits of lexicalized features in the setting of domain-specific coreference resolution. We show that adding lexicalized features to off-the-shelf coreference resolvers yields significant performance gains on four domain-specific data sets and with two types of coreference resolution architectures.",2013
alvez-etal-2018-cross,https://aclanthology.org/L18-1723.pdf,0,,,,,,,"Cross-checking WordNet and SUMO Using Meronymy. We report on the practical application of a black-box testing methodology for the validation of the knowledge encoded in WordNet, SUMO and their mapping by using automated theorem provers. Our proposal is based on the part-whole information provided by WordNet, out of which we automatically create a large set of tests. Our experimental results confirm that the proposed system enables the validation of some pieces of information and also the detection of missing information or inconsistencies among these resources.",Cross-checking {W}ord{N}et and {SUMO} Using Meronymy,"We report on the practical application of a black-box testing methodology for the validation of the knowledge encoded in WordNet, SUMO and their mapping by using automated theorem provers. Our proposal is based on the part-whole information provided by WordNet, out of which we automatically create a large set of tests. Our experimental results confirm that the proposed system enables the validation of some pieces of information and also the detection of missing information or inconsistencies among these resources.",Cross-checking WordNet and SUMO Using Meronymy,"We report on the practical application of a black-box testing methodology for the validation of the knowledge encoded in WordNet, SUMO and their mapping by using automated theorem provers. Our proposal is based on the part-whole information provided by WordNet, out of which we automatically create a large set of tests. Our experimental results confirm that the proposed system enables the validation of some pieces of information and also the detection of missing information or inconsistencies among these resources.","This work has been partially funded by the Spanish Projects TUNER (TIN2015-65308-C5-1-R) and GRAMM (TIN2017-86727-C2-2-R), the Basque Project LoRea (GIU15/30) and the UPV/EHU project OEBU (EHUA16/33).","Cross-checking WordNet and SUMO Using Meronymy. We report on the practical application of a black-box testing methodology for the validation of the knowledge encoded in WordNet, SUMO and their mapping by using automated theorem provers. Our proposal is based on the part-whole information provided by WordNet, out of which we automatically create a large set of tests. Our experimental results confirm that the proposed system enables the validation of some pieces of information and also the detection of missing information or inconsistencies among these resources.",2018
paul-etal-2009-mining,https://aclanthology.org/W09-1111.pdf,1,,,,partnership,,,"Mining the Web for Reciprocal Relationships. In this paper we address the problem of identifying reciprocal relationships in English. In particular we introduce an algorithm that semi-automatically discovers patterns encoding reciprocity based on a set of simple but effective pronoun templates. Using a set of most frequently occurring patterns, we extract pairs of reciprocal pattern instances by searching the web. Then we apply two unsupervised clustering procedures to form meaningful clusters of such reciprocal instances. The pattern discovery procedure yields an accuracy of 97%, while the clustering procedures indicate accuracies of 91% and 82%. Moreover, the resulting set of 10,882 reciprocal instances represent a broad-coverage resource.",Mining the Web for Reciprocal Relationships,"In this paper we address the problem of identifying reciprocal relationships in English. In particular we introduce an algorithm that semi-automatically discovers patterns encoding reciprocity based on a set of simple but effective pronoun templates. Using a set of most frequently occurring patterns, we extract pairs of reciprocal pattern instances by searching the web. Then we apply two unsupervised clustering procedures to form meaningful clusters of such reciprocal instances. The pattern discovery procedure yields an accuracy of 97%, while the clustering procedures indicate accuracies of 91% and 82%. Moreover, the resulting set of 10,882 reciprocal instances represent a broad-coverage resource.",Mining the Web for Reciprocal Relationships,"In this paper we address the problem of identifying reciprocal relationships in English. In particular we introduce an algorithm that semi-automatically discovers patterns encoding reciprocity based on a set of simple but effective pronoun templates. Using a set of most frequently occurring patterns, we extract pairs of reciprocal pattern instances by searching the web. Then we apply two unsupervised clustering procedures to form meaningful clusters of such reciprocal instances. The pattern discovery procedure yields an accuracy of 97%, while the clustering procedures indicate accuracies of 91% and 82%. Moreover, the resulting set of 10,882 reciprocal instances represent a broad-coverage resource.",,"Mining the Web for Reciprocal Relationships. In this paper we address the problem of identifying reciprocal relationships in English. In particular we introduce an algorithm that semi-automatically discovers patterns encoding reciprocity based on a set of simple but effective pronoun templates. Using a set of most frequently occurring patterns, we extract pairs of reciprocal pattern instances by searching the web. Then we apply two unsupervised clustering procedures to form meaningful clusters of such reciprocal instances. The pattern discovery procedure yields an accuracy of 97%, while the clustering procedures indicate accuracies of 91% and 82%. Moreover, the resulting set of 10,882 reciprocal instances represent a broad-coverage resource.",2009
hatzivassiloglou-mckeown-1995-quantitative,https://aclanthology.org/P95-1027.pdf,0,,,,,,,"A Quantitative Evaluation of Linguistic Tests for the Automatic Prediction of Semantic Markedness. We present a corpus-based study of methods that have been proposed in the linguistics literature for selecting the semantically unmarked term out of a pair of antonymous adjectives. Solutions to this problem are applicable to the more general task of selecting the positive term from the pair. Using automatically collected data, the accuracy and applicability of each method is quantified, and a statistical analysis of the significance of the results is performed. We show that some simple methods are indeed good indicators for the answer to the problem while other proposed methods fail to perform better than would be attributable to chance. In addition, one of the simplest methods, text frequency, dominates all others. We also apply two generic statistical learning methods for combining the indications of the individual methods, and compare their performance to the simple methods. The most sophisticated complex learning method offers a small, but statistically significant, improvement over the original tests.",A Quantitative Evaluation of Linguistic Tests for the Automatic Prediction of Semantic Markedness,"We present a corpus-based study of methods that have been proposed in the linguistics literature for selecting the semantically unmarked term out of a pair of antonymous adjectives. Solutions to this problem are applicable to the more general task of selecting the positive term from the pair. Using automatically collected data, the accuracy and applicability of each method is quantified, and a statistical analysis of the significance of the results is performed. We show that some simple methods are indeed good indicators for the answer to the problem while other proposed methods fail to perform better than would be attributable to chance. In addition, one of the simplest methods, text frequency, dominates all others. We also apply two generic statistical learning methods for combining the indications of the individual methods, and compare their performance to the simple methods. The most sophisticated complex learning method offers a small, but statistically significant, improvement over the original tests.",A Quantitative Evaluation of Linguistic Tests for the Automatic Prediction of Semantic Markedness,"We present a corpus-based study of methods that have been proposed in the linguistics literature for selecting the semantically unmarked term out of a pair of antonymous adjectives. Solutions to this problem are applicable to the more general task of selecting the positive term from the pair. Using automatically collected data, the accuracy and applicability of each method is quantified, and a statistical analysis of the significance of the results is performed. We show that some simple methods are indeed good indicators for the answer to the problem while other proposed methods fail to perform better than would be attributable to chance. In addition, one of the simplest methods, text frequency, dominates all others. We also apply two generic statistical learning methods for combining the indications of the individual methods, and compare their performance to the simple methods. 
The most sophisticated complex learning method offers a small, but statistically significant, improvement over the original tests.","This work was supported jointly by the Advanced Research Projects Agency and the Office of Naval Research under contract N00014-89-J-1782, and by the National Science Foundation under contract GER-90-24069. It was conducted under the auspices of the Columbia University CAT in High Performance Computing and Communications in Healthcare, a New York State Center for Advanced Technology supported by the New York State Science and Technology Foundation. We wish to thank Judith Klavans, Rebecca Passonneau, and the anonymous reviewers for providing us with useful comments on earlier versions of the paper.","A Quantitative Evaluation of Linguistic Tests for the Automatic Prediction of Semantic Markedness. We present a corpus-based study of methods that have been proposed in the linguistics literature for selecting the semantically unmarked term out of a pair of antonymous adjectives. Solutions to this problem are applicable to the more general task of selecting the positive term from the pair. Using automatically collected data, the accuracy and applicability of each method is quantified, and a statistical analysis of the significance of the results is performed. We show that some simple methods are indeed good indicators for the answer to the problem while other proposed methods fail to perform better than would be attributable to chance. In addition, one of the simplest methods, text frequency, dominates all others. We also apply two generic statistical learning methods for combining the indications of the individual methods, and compare their performance to the simple methods. The most sophisticated complex learning method offers a small, but statistically significant, improvement over the original tests.",1995
bangalore-etal-2006-finite,https://aclanthology.org/2006.iwslt-evaluation.2.pdf,0,,,,,,,"Finite-state transducer-based statistical machine translation using joint probabilities. In this paper, we present our system for statistical machine translation that is based on weighted finite-state transducers. We describe the construction of the transducer, the estimation of the weights, acquisition of phrases (locally ordered tokens) and the mechanism we use for global reordering. We also present a novel approach to machine translation that uses a maximum entropy model for parameter estimation and contrast its performance to the finite-state translation model on the IWSLT Chinese-English data sets.",Finite-state transducer-based statistical machine translation using joint probabilities,"In this paper, we present our system for statistical machine translation that is based on weighted finite-state transducers. We describe the construction of the transducer, the estimation of the weights, acquisition of phrases (locally ordered tokens) and the mechanism we use for global reordering. We also present a novel approach to machine translation that uses a maximum entropy model for parameter estimation and contrast its performance to the finite-state translation model on the IWSLT Chinese-English data sets.",Finite-state transducer-based statistical machine translation using joint probabilities,"In this paper, we present our system for statistical machine translation that is based on weighted finite-state transducers. We describe the construction of the transducer, the estimation of the weights, acquisition of phrases (locally ordered tokens) and the mechanism we use for global reordering. We also present a novel approach to machine translation that uses a maximum entropy model for parameter estimation and contrast its performance to the finite-state translation model on the IWSLT Chinese-English data sets.",,"Finite-state transducer-based statistical machine translation using joint probabilities. In this paper, we present our system for statistical machine translation that is based on weighted finite-state transducers. We describe the construction of the transducer, the estimation of the weights, acquisition of phrases (locally ordered tokens) and the mechanism we use for global reordering. We also present a novel approach to machine translation that uses a maximum entropy model for parameter estimation and contrast its performance to the finite-state translation model on the IWSLT Chinese-English data sets.",2006
rohith-ramakrishnan-etal-2021-analysis,https://aclanthology.org/2021.paclic-1.75.pdf,0,,,,,,,"Analysis of Text-Semantics via Efficient Word Embedding using Variational Mode Decomposition. In this paper, we propose a novel method which establishes a newborn relation between Signal Processing and Natural Language Processing (NLP) method via Variational Mode Decomposition (VMD). Unlike the modern Neural Network approaches for NLP which are complex and often masked from the end user, our approach involving Term Frequency-Inverse Document Frequency (TF-IDF) aided with VMD dials down the complexity retaining the performance with transparency. The performance in terms of Machine Learning based approaches and semantic relationships of words along with the methodology of the above mentioned approach is analyzed and discussed in this paper.",Analysis of Text-Semantics via Efficient Word Embedding using Variational Mode Decomposition,"In this paper, we propose a novel method which establishes a newborn relation between Signal Processing and Natural Language Processing (NLP) method via Variational Mode Decomposition (VMD). Unlike the modern Neural Network approaches for NLP which are complex and often masked from the end user, our approach involving Term Frequency-Inverse Document Frequency (TF-IDF) aided with VMD dials down the complexity retaining the performance with transparency. The performance in terms of Machine Learning based approaches and semantic relationships of words along with the methodology of the above mentioned approach is analyzed and discussed in this paper.",Analysis of Text-Semantics via Efficient Word Embedding using Variational Mode Decomposition,"In this paper, we propose a novel method which establishes a newborn relation between Signal Processing and Natural Language Processing (NLP) method via Variational Mode Decomposition (VMD). Unlike the modern Neural Network approaches for NLP which are complex and often masked from the end user, our approach involving Term Frequency-Inverse Document Frequency (TF-IDF) aided with VMD dials down the complexity retaining the performance with transparency. The performance in terms of Machine Learning based approaches and semantic relationships of words along with the methodology of the above mentioned approach is analyzed and discussed in this paper.",,"Analysis of Text-Semantics via Efficient Word Embedding using Variational Mode Decomposition. In this paper, we propose a novel method which establishes a newborn relation between Signal Processing and Natural Language Processing (NLP) method via Variational Mode Decomposition (VMD). Unlike the modern Neural Network approaches for NLP which are complex and often masked from the end user, our approach involving Term Frequency-Inverse Document Frequency (TF-IDF) aided with VMD dials down the complexity retaining the performance with transparency. The performance in terms of Machine Learning based approaches and semantic relationships of words along with the methodology of the above mentioned approach is analyzed and discussed in this paper.",2021
di-eugenio-glass-2004-squibs,https://aclanthology.org/J04-1005.pdf,0,,,,,,,"Squibs and Discussions: The Kappa Statistic: A Second Look. In recent years, the kappa coefficient of agreement has become the de facto standard for evaluating intercoder agreement for tagging tasks. In this squib, we highlight issues that affect κ and that the community has largely neglected. First, we discuss the assumptions underlying different computations of the expected agreement component of κ. Second, we discuss how prevalence and bias affect the κ measure.",Squibs and Discussions: The Kappa Statistic: A Second Look,"In recent years, the kappa coefficient of agreement has become the de facto standard for evaluating intercoder agreement for tagging tasks. In this squib, we highlight issues that affect κ and that the community has largely neglected. First, we discuss the assumptions underlying different computations of the expected agreement component of κ. Second, we discuss how prevalence and bias affect the κ measure.",Squibs and Discussions: The Kappa Statistic: A Second Look,"In recent years, the kappa coefficient of agreement has become the de facto standard for evaluating intercoder agreement for tagging tasks. In this squib, we highlight issues that affect κ and that the community has largely neglected. First, we discuss the assumptions underlying different computations of the expected agreement component of κ. Second, we discuss how prevalence and bias affect the κ measure.",This work is supported by grant N00014-00-1-0640 from the Office of Naval Research. Thanks to Janet Cahn and to the anonymous reviewers for comments on earlier drafts.,"Squibs and Discussions: The Kappa Statistic: A Second Look. In recent years, the kappa coefficient of agreement has become the de facto standard for evaluating intercoder agreement for tagging tasks. In this squib, we highlight issues that affect κ and that the community has largely neglected. First, we discuss the assumptions underlying different computations of the expected agreement component of κ. Second, we discuss how prevalence and bias affect the κ measure.",2004
le-roux-etal-2013-combining,https://aclanthology.org/D13-1116.pdf,0,,,,,,,"Combining PCFG-LA Models with Dual Decomposition: A Case Study with Function Labels and Binarization. It has recently been shown that different NLP models can be effectively combined using dual decomposition. In this paper we demonstrate that PCFG-LA parsing models are suitable for combination in this way. We experiment with the different models which result from alternative methods of extracting a grammar from a treebank (retaining or discarding function labels, left binarization versus right binarization) and achieve a labeled Parseval F-score of 92.4 on Wall Street Journal Section 23-this represents an absolute improvement of 0.7 and an error reduction rate of 7% over a strong PCFG-LA product-model baseline. Although we experiment only with binarization and function labels in this study, there is much scope for applying this approach to other grammar extraction strategies.",Combining {PCFG}-{LA} Models with Dual Decomposition: A Case Study with Function Labels and Binarization,"It has recently been shown that different NLP models can be effectively combined using dual decomposition. In this paper we demonstrate that PCFG-LA parsing models are suitable for combination in this way. We experiment with the different models which result from alternative methods of extracting a grammar from a treebank (retaining or discarding function labels, left binarization versus right binarization) and achieve a labeled Parseval F-score of 92.4 on Wall Street Journal Section 23-this represents an absolute improvement of 0.7 and an error reduction rate of 7% over a strong PCFG-LA product-model baseline. Although we experiment only with binarization and function labels in this study, there is much scope for applying this approach to other grammar extraction strategies.",Combining PCFG-LA Models with Dual Decomposition: A Case Study with Function Labels and Binarization,"It has recently been shown that different NLP models can be effectively combined using dual decomposition. In this paper we demonstrate that PCFG-LA parsing models are suitable for combination in this way. We experiment with the different models which result from alternative methods of extracting a grammar from a treebank (retaining or discarding function labels, left binarization versus right binarization) and achieve a labeled Parseval F-score of 92.4 on Wall Street Journal Section 23-this represents an absolute improvement of 0.7 and an error reduction rate of 7% over a strong PCFG-LA product-model baseline. Although we experiment only with binarization and function labels in this study, there is much scope for applying this approach to other grammar extraction strategies.","We are grateful to the reviewers for their helpful comments. We also thank Joachim Wagner for providing feedback on an early version of the paper. This work has been partially funded by the Labex EFL (ANR/CGI). 9 Their other system relying on the self-trained version of the BLLIP parser achieves 92.6 F1. ACL-08: HLT, pages 586-594. ","Combining PCFG-LA Models with Dual Decomposition: A Case Study with Function Labels and Binarization. It has recently been shown that different NLP models can be effectively combined using dual decomposition. In this paper we demonstrate that PCFG-LA parsing models are suitable for combination in this way. 
We experiment with the different models which result from alternative methods of extracting a grammar from a treebank (retaining or discarding function labels, left binarization versus right binarization) and achieve a labeled Parseval F-score of 92.4 on Wall Street Journal Section 23-this represents an absolute improvement of 0.7 and an error reduction rate of 7% over a strong PCFG-LA product-model baseline. Although we experiment only with binarization and function labels in this study, there is much scope for applying this approach to other grammar extraction strategies.",2013
marie-fujita-2019-unsupervised-joint,https://aclanthology.org/P19-1312.pdf,0,,,,,,,"Unsupervised Joint Training of Bilingual Word Embeddings. State-of-the-art methods for unsupervised bilingual word embeddings (BWE) train a mapping function that maps pre-trained monolingual word embeddings into a bilingual space. Despite its remarkable results, unsupervised mapping is also well-known to be limited by the dissimilarity between the original word embedding spaces to be mapped. In this work, we propose a new approach that trains unsupervised BWE jointly on synthetic parallel data generated through unsupervised machine translation. We demonstrate that existing algorithms that jointly train BWE are very robust to noisy training data and show that unsupervised BWE jointly trained significantly outperform unsupervised mapped BWE in several cross-lingual NLP tasks.",Unsupervised Joint Training of Bilingual Word Embeddings,"State-of-the-art methods for unsupervised bilingual word embeddings (BWE) train a mapping function that maps pre-trained monolingual word embeddings into a bilingual space. Despite its remarkable results, unsupervised mapping is also well-known to be limited by the dissimilarity between the original word embedding spaces to be mapped. In this work, we propose a new approach that trains unsupervised BWE jointly on synthetic parallel data generated through unsupervised machine translation. We demonstrate that existing algorithms that jointly train BWE are very robust to noisy training data and show that unsupervised BWE jointly trained significantly outperform unsupervised mapped BWE in several cross-lingual NLP tasks.",Unsupervised Joint Training of Bilingual Word Embeddings,"State-of-the-art methods for unsupervised bilingual word embeddings (BWE) train a mapping function that maps pre-trained monolingual word embeddings into a bilingual space. Despite its remarkable results, unsupervised mapping is also well-known to be limited by the dissimilarity between the original word embedding spaces to be mapped. In this work, we propose a new approach that trains unsupervised BWE jointly on synthetic parallel data generated through unsupervised machine translation. We demonstrate that existing algorithms that jointly train BWE are very robust to noisy training data and show that unsupervised BWE jointly trained significantly outperform unsupervised mapped BWE in several cross-lingual NLP tasks.","We would like to thank the reviewers for their useful comments and suggestions. A part of this work was conducted under the program ""Promotion of Global Communications Plan: Research, Development, and Social Demonstration of Multilingual Speech Translation Technology"" of the Ministry of Internal Affairs and Communications (MIC), Japan. 13 We used the News Commentary corpora provided by WMT for en→de and en→fr to train SMT systems performing at 15.4 and 20.1 BLEU points on Newstest2016 en-de and Newstest2014 en-fr, respectively.","Unsupervised Joint Training of Bilingual Word Embeddings. State-of-the-art methods for unsupervised bilingual word embeddings (BWE) train a mapping function that maps pre-trained monolingual word embeddings into a bilingual space. Despite its remarkable results, unsupervised mapping is also well-known to be limited by the dissimilarity between the original word embedding spaces to be mapped. In this work, we propose a new approach that trains unsupervised BWE jointly on synthetic parallel data generated through unsupervised machine translation. 
We demonstrate that existing algorithms that jointly train BWE are very robust to noisy training data and show that unsupervised BWE jointly trained significantly outperform unsupervised mapped BWE in several cross-lingual NLP tasks.",2019
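The abstract above trains bilingual word embeddings jointly on synthetic parallel data. A rough sketch of one assumed ingredient: turning synthetic sentence pairs into mixed sentences so that a single skip-gram model sees source and target words in shared contexts. The position-wise swapping below is a crude stand-in for alignment-based mixing, and the unsupervised MT system producing the target side is treated as a black box:

```python
import random

def mixed_sentences(parallel_pairs, swap_prob=0.5, seed=0):
    """Yield sentences mixing src:* and tgt:* tokens from synthetic pairs."""
    rng = random.Random(seed)
    for src, tgt in parallel_pairs:
        mixed = []
        for i, tok in enumerate(src):
            # with some probability, substitute the (roughly) aligned target token
            if i < len(tgt) and rng.random() < swap_prob:
                mixed.append("tgt:" + tgt[i])
            else:
                mixed.append("src:" + tok)
        yield mixed

pairs = [(["the", "black", "cat"], ["le", "chat", "noir"])]
for sent in mixed_sentences(pairs):
    print(sent)   # a mixture of src:* and tgt:* tokens
# Any skip-gram implementation can then be trained on these mixed sentences
# to place both vocabularies in one shared embedding space.
```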
kuo-etal-2012-exploiting,https://aclanthology.org/P12-2067.pdf,1,,,,peace_justice_and_strong_institutions,,,"Exploiting Latent Information to Predict Diffusions of Novel Topics on Social Networks. This paper brings a marriage of two seemly unrelated topics, natural language processing (NLP) and social network analysis (SNA). We propose a new task in SNA which is to predict the diffusion of a new topic, and design a learning-based framework to solve this problem. We exploit the latent semantic information among users, topics, and social connections as features for prediction. Our framework is evaluated on real data collected from public domain. The experiments show 16% AUC",Exploiting Latent Information to Predict Diffusions of Novel Topics on Social Networks,"This paper brings a marriage of two seemly unrelated topics, natural language processing (NLP) and social network analysis (SNA). We propose a new task in SNA which is to predict the diffusion of a new topic, and design a learning-based framework to solve this problem. We exploit the latent semantic information among users, topics, and social connections as features for prediction. Our framework is evaluated on real data collected from public domain. The experiments show 16% AUC",Exploiting Latent Information to Predict Diffusions of Novel Topics on Social Networks,"This paper brings a marriage of two seemly unrelated topics, natural language processing (NLP) and social network analysis (SNA). We propose a new task in SNA which is to predict the diffusion of a new topic, and design a learning-based framework to solve this problem. We exploit the latent semantic information among users, topics, and social connections as features for prediction. Our framework is evaluated on real data collected from public domain. The experiments show 16% AUC","This work was also supported by National Science Council, National Taiwan University and Intel Corporation under Grants NSC 100-2911-I-002-001, and 101R7501.","Exploiting Latent Information to Predict Diffusions of Novel Topics on Social Networks. This paper brings a marriage of two seemly unrelated topics, natural language processing (NLP) and social network analysis (SNA). We propose a new task in SNA which is to predict the diffusion of a new topic, and design a learning-based framework to solve this problem. We exploit the latent semantic information among users, topics, and social connections as features for prediction. Our framework is evaluated on real data collected from public domain. The experiments show 16% AUC",2012
druck-etal-2009-active,https://aclanthology.org/D09-1009.pdf,0,,,,,,,"Active Learning by Labeling Features. Methods that learn from prior information about input features such as generalized expectation (GE) have been used to train accurate models with very little effort. In this paper, we propose an active learning approach in which the machine solicits ""labels"" on features rather than instances. In both simulated and real user experiments on two sequence labeling tasks we show that our active learning method outperforms passive learning with features as well as traditional active learning with instances. Preliminary experiments suggest that novel interfaces which intelligently solicit labels on multiple features facilitate more efficient annotation.",Active Learning by Labeling Features,"Methods that learn from prior information about input features such as generalized expectation (GE) have been used to train accurate models with very little effort. In this paper, we propose an active learning approach in which the machine solicits ""labels"" on features rather than instances. In both simulated and real user experiments on two sequence labeling tasks we show that our active learning method outperforms passive learning with features as well as traditional active learning with instances. Preliminary experiments suggest that novel interfaces which intelligently solicit labels on multiple features facilitate more efficient annotation.",Active Learning by Labeling Features,"Methods that learn from prior information about input features such as generalized expectation (GE) have been used to train accurate models with very little effort. In this paper, we propose an active learning approach in which the machine solicits ""labels"" on features rather than instances. In both simulated and real user experiments on two sequence labeling tasks we show that our active learning method outperforms passive learning with features as well as traditional active learning with instances. Preliminary experiments suggest that novel interfaces which intelligently solicit labels on multiple features facilitate more efficient annotation.",We thank Kedar Bellare for helpful discussions and Gau- ,"Active Learning by Labeling Features. Methods that learn from prior information about input features such as generalized expectation (GE) have been used to train accurate models with very little effort. In this paper, we propose an active learning approach in which the machine solicits ""labels"" on features rather than instances. In both simulated and real user experiments on two sequence labeling tasks we show that our active learning method outperforms passive learning with features as well as traditional active learning with instances. Preliminary experiments suggest that novel interfaces which intelligently solicit labels on multiple features facilitate more efficient annotation.",2009
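The abstract above solicits labels on features rather than instances. A toy sketch of such a query loop, with raw frequency standing in for the paper's uncertainty- and coverage-based feature selection, and without the generalized-expectation training step that would consume the labeled features:

```python
from collections import Counter

def feature_query_loop(corpus, oracle, rounds=3, per_round=2):
    """corpus: list of token lists (unlabeled); oracle(feature) -> class or None."""
    labeled_features = {}
    for _ in range(rounds):
        # score candidate features by frequency (a stand-in for a smarter criterion)
        counts = Counter(tok for tokens in corpus for tok in tokens
                         if tok not in labeled_features)
        for feat, _ in counts.most_common(per_round):
            label = oracle(feat)          # the human says e.g. "ORG", or None to skip
            if label is not None:
                labeled_features[feat] = label
    return labeled_features

docs = [["acme", "inc", "hired", "alice"], ["bob", "visited", "acme", "inc"]]
answers = {"inc": "ORG", "acme": "ORG", "alice": "PER"}   # simulated annotator
print(feature_query_loop(docs, oracle=lambda f: answers.get(f)))
```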
wang-etal-2021-enpar,https://aclanthology.org/2021.eacl-main.251.pdf,0,,,,,,,"ENPAR:Enhancing Entity and Entity Pair Representations for Joint Entity Relation Extraction. Current state-of-the-art systems for joint entity relation extraction (Luan et al., 2019; Wadden et al., 2019) usually adopt the multi-task learning framework. However, annotations for these additional tasks such as coreference resolution and event extraction are always equally hard (or even harder) to obtain. In this work, we propose a pre-training method ENPAR to improve the joint extraction performance. EN-PAR requires only the additional entity annotations that are much easier to collect. Unlike most existing works that only consider incorporating entity information into the sentence encoder, we further utilize the entity pair information. Specifically, we devise four novel objectives, i.e., masked entity typing, masked entity prediction, adversarial context discrimination, and permutation prediction, to pretrain an entity encoder and an entity pair encoder. Comprehensive experiments show that the proposed pre-training method achieves significant improvement over BERT on ACE05, SciERC, and NYT, and outperforms current state-of-the-art on ACE05.",{ENPAR}:Enhancing Entity and Entity Pair Representations for Joint Entity Relation Extraction,"Current state-of-the-art systems for joint entity relation extraction (Luan et al., 2019; Wadden et al., 2019) usually adopt the multi-task learning framework. However, annotations for these additional tasks such as coreference resolution and event extraction are always equally hard (or even harder) to obtain. In this work, we propose a pre-training method ENPAR to improve the joint extraction performance. EN-PAR requires only the additional entity annotations that are much easier to collect. Unlike most existing works that only consider incorporating entity information into the sentence encoder, we further utilize the entity pair information. Specifically, we devise four novel objectives, i.e., masked entity typing, masked entity prediction, adversarial context discrimination, and permutation prediction, to pretrain an entity encoder and an entity pair encoder. Comprehensive experiments show that the proposed pre-training method achieves significant improvement over BERT on ACE05, SciERC, and NYT, and outperforms current state-of-the-art on ACE05.",ENPAR:Enhancing Entity and Entity Pair Representations for Joint Entity Relation Extraction,"Current state-of-the-art systems for joint entity relation extraction (Luan et al., 2019; Wadden et al., 2019) usually adopt the multi-task learning framework. However, annotations for these additional tasks such as coreference resolution and event extraction are always equally hard (or even harder) to obtain. In this work, we propose a pre-training method ENPAR to improve the joint extraction performance. EN-PAR requires only the additional entity annotations that are much easier to collect. Unlike most existing works that only consider incorporating entity information into the sentence encoder, we further utilize the entity pair information. Specifically, we devise four novel objectives, i.e., masked entity typing, masked entity prediction, adversarial context discrimination, and permutation prediction, to pretrain an entity encoder and an entity pair encoder. 
Comprehensive experiments show that the proposed pre-training method achieves significant improvement over BERT on ACE05, SciERC, and NYT, and outperforms current state-of-the-art on ACE05.",The authors wish to thank the reviewers for their helpful comments and suggestions. This research is (partially) supported by NSFC (62076097 ,"ENPAR:Enhancing Entity and Entity Pair Representations for Joint Entity Relation Extraction. Current state-of-the-art systems for joint entity relation extraction (Luan et al., 2019; Wadden et al., 2019) usually adopt the multi-task learning framework. However, annotations for these additional tasks such as coreference resolution and event extraction are always equally hard (or even harder) to obtain. In this work, we propose a pre-training method ENPAR to improve the joint extraction performance. EN-PAR requires only the additional entity annotations that are much easier to collect. Unlike most existing works that only consider incorporating entity information into the sentence encoder, we further utilize the entity pair information. Specifically, we devise four novel objectives, i.e., masked entity typing, masked entity prediction, adversarial context discrimination, and permutation prediction, to pretrain an entity encoder and an entity pair encoder. Comprehensive experiments show that the proposed pre-training method achieves significant improvement over BERT on ACE05, SciERC, and NYT, and outperforms current state-of-the-art on ACE05.",2021
ling-etal-2015-contexts,https://aclanthology.org/D15-1161.pdf,0,,,,,,,"Not All Contexts Are Created Equal: Better Word Representations with Variable Attention. We introduce an extension to the bag-ofwords model for learning words representations that take into account both syntactic and semantic properties within language. This is done by employing an attention model that finds within the contextual words, the words that are relevant for each prediction. The general intuition of our model is that some words are only relevant for predicting local context (e.g. function words), while other words are more suited for determining global context, such as the topic of the document. Experiments performed on both semantically and syntactically oriented tasks show gains using our model over the existing bag of words model. Furthermore, compared to other more sophisticated models, our model scales better as we increase the size of the context of the model.",Not All Contexts Are Created Equal: Better Word Representations with Variable Attention,"We introduce an extension to the bag-ofwords model for learning words representations that take into account both syntactic and semantic properties within language. This is done by employing an attention model that finds within the contextual words, the words that are relevant for each prediction. The general intuition of our model is that some words are only relevant for predicting local context (e.g. function words), while other words are more suited for determining global context, such as the topic of the document. Experiments performed on both semantically and syntactically oriented tasks show gains using our model over the existing bag of words model. Furthermore, compared to other more sophisticated models, our model scales better as we increase the size of the context of the model.",Not All Contexts Are Created Equal: Better Word Representations with Variable Attention,"We introduce an extension to the bag-ofwords model for learning words representations that take into account both syntactic and semantic properties within language. This is done by employing an attention model that finds within the contextual words, the words that are relevant for each prediction. The general intuition of our model is that some words are only relevant for predicting local context (e.g. function words), while other words are more suited for determining global context, such as the topic of the document. Experiments performed on both semantically and syntactically oriented tasks show gains using our model over the existing bag of words model. Furthermore, compared to other more sophisticated models, our model scales better as we increase the size of the context of the model.","The PhD thesis of Wang Ling is supported by FCT grant SFRH/BD/51157/2010. This research was supported in part by the U.S. Army Research Laboratory, the U.S. Army Research Office under contract/grant number W911NF-10-1-0533 and NSF IIS-1054319 and FCT through the plurianual contract UID/CEC/50021/2013 and grant number SFRH/BPD/68428/2010.","Not All Contexts Are Created Equal: Better Word Representations with Variable Attention. We introduce an extension to the bag-ofwords model for learning words representations that take into account both syntactic and semantic properties within language. This is done by employing an attention model that finds within the contextual words, the words that are relevant for each prediction. 
The general intuition of our model is that some words are only relevant for predicting local context (e.g. function words), while other words are more suited for determining global context, such as the topic of the document. Experiments performed on both semantically and syntactically oriented tasks show gains using our model over the existing bag of words model. Furthermore, compared to other more sophisticated models, our model scales better as we increase the size of the context of the model.",2015
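The abstract above weights context words with an attention model instead of averaging them uniformly. A minimal numpy sketch of an attention-weighted context vector, with a single assumed scoring vector rather than the paper's position-dependent parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 100, 16                           # toy vocabulary size and embedding dim
E = rng.normal(scale=0.1, size=(V, d))   # context-word embeddings
a = rng.normal(scale=0.1, size=d)        # attention scoring vector (assumed form)

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def context_vector(context_ids):
    """Attention-weighted combination of the context word embeddings."""
    C = E[context_ids]                   # (n_context, d)
    weights = softmax(C @ a)             # one scalar weight per context word
    return weights @ C                   # weighted sum replaces the plain mean

h = context_vector([3, 17, 42, 7])       # ids of the words around the target
print(h.shape)                           # (16,)
```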
xu-etal-2002-study,https://aclanthology.org/P02-1025.pdf,0,,,,,,,"A Study on Richer Syntactic Dependencies for Structured Language Modeling. We study the impact of richer syntactic dependencies on the performance of the structured language model (SLM) along three dimensions: parsing accuracy (LP/LR), perplexity (PPL) and worderror-rate (WER, N-best re-scoring). We show that our models achieve an improvement in LP/LR, PPL and/or WER over the reported baseline results using the SLM on the UPenn Treebank and Wall Street Journal (WSJ) corpora, respectively. Analysis of parsing performance shows correlation between the quality of the parser (as measured by precision/recall) and the language model performance (PPL and WER). A remarkable fact is that the enriched SLM outperforms the baseline 3-gram model in terms of WER by 10% when used in isolation as a second pass (N-best re-scoring) language model.",A Study on Richer Syntactic Dependencies for Structured Language Modeling,"We study the impact of richer syntactic dependencies on the performance of the structured language model (SLM) along three dimensions: parsing accuracy (LP/LR), perplexity (PPL) and worderror-rate (WER, N-best re-scoring). We show that our models achieve an improvement in LP/LR, PPL and/or WER over the reported baseline results using the SLM on the UPenn Treebank and Wall Street Journal (WSJ) corpora, respectively. Analysis of parsing performance shows correlation between the quality of the parser (as measured by precision/recall) and the language model performance (PPL and WER). A remarkable fact is that the enriched SLM outperforms the baseline 3-gram model in terms of WER by 10% when used in isolation as a second pass (N-best re-scoring) language model.",A Study on Richer Syntactic Dependencies for Structured Language Modeling,"We study the impact of richer syntactic dependencies on the performance of the structured language model (SLM) along three dimensions: parsing accuracy (LP/LR), perplexity (PPL) and worderror-rate (WER, N-best re-scoring). We show that our models achieve an improvement in LP/LR, PPL and/or WER over the reported baseline results using the SLM on the UPenn Treebank and Wall Street Journal (WSJ) corpora, respectively. Analysis of parsing performance shows correlation between the quality of the parser (as measured by precision/recall) and the language model performance (PPL and WER). A remarkable fact is that the enriched SLM outperforms the baseline 3-gram model in terms of WER by 10% when used in isolation as a second pass (N-best re-scoring) language model.",,"A Study on Richer Syntactic Dependencies for Structured Language Modeling. We study the impact of richer syntactic dependencies on the performance of the structured language model (SLM) along three dimensions: parsing accuracy (LP/LR), perplexity (PPL) and worderror-rate (WER, N-best re-scoring). We show that our models achieve an improvement in LP/LR, PPL and/or WER over the reported baseline results using the SLM on the UPenn Treebank and Wall Street Journal (WSJ) corpora, respectively. Analysis of parsing performance shows correlation between the quality of the parser (as measured by precision/recall) and the language model performance (PPL and WER). A remarkable fact is that the enriched SLM outperforms the baseline 3-gram model in terms of WER by 10% when used in isolation as a second pass (N-best re-scoring) language model.",2002
barron-cedeno-etal-2016-convkn,https://aclanthology.org/S16-1138.pdf,0,,,,,,,"ConvKN at SemEval-2016 Task 3: Answer and Question Selection for Question Answering on Arabic and English Fora. We describe our system, ConvKN, participating to the SemEval-2016 Task 3 ""Community Question Answering"". The task targeted the reranking of questions and comments in real-life web fora both in English and Arabic. ConvKN combines convolutional tree kernels with convolutional neural networks and additional manually designed features including text similarity and thread specific features. For the first time, we applied tree kernels to syntactic trees of Arabic sentences for a reranking task. Our approaches obtained the second best results in three out of four tasks. The only task we performed averagely is the one where we did not use tree kernels in our classifier.",{C}onv{KN} at {S}em{E}val-2016 Task 3: Answer and Question Selection for Question Answering on {A}rabic and {E}nglish Fora,"We describe our system, ConvKN, participating to the SemEval-2016 Task 3 ""Community Question Answering"". The task targeted the reranking of questions and comments in real-life web fora both in English and Arabic. ConvKN combines convolutional tree kernels with convolutional neural networks and additional manually designed features including text similarity and thread specific features. For the first time, we applied tree kernels to syntactic trees of Arabic sentences for a reranking task. Our approaches obtained the second best results in three out of four tasks. The only task we performed averagely is the one where we did not use tree kernels in our classifier.",ConvKN at SemEval-2016 Task 3: Answer and Question Selection for Question Answering on Arabic and English Fora,"We describe our system, ConvKN, participating to the SemEval-2016 Task 3 ""Community Question Answering"". The task targeted the reranking of questions and comments in real-life web fora both in English and Arabic. ConvKN combines convolutional tree kernels with convolutional neural networks and additional manually designed features including text similarity and thread specific features. For the first time, we applied tree kernels to syntactic trees of Arabic sentences for a reranking task. Our approaches obtained the second best results in three out of four tasks. The only task we performed averagely is the one where we did not use tree kernels in our classifier.","This research is developed by the Arabic Language Technologies (ALT) group at the Qatar Computing Research Institute (QCRI), HBKU, Qatar Foundation in collaboration with MIT. It is part of the Interactive sYstems for Answer Search (IYAS) project. This work has been partially supported by the EC project CogNet, 671625 (H2020-ICT-2014-2, Research and Innovation action) and by an IBM Faculty Award.","ConvKN at SemEval-2016 Task 3: Answer and Question Selection for Question Answering on Arabic and English Fora. We describe our system, ConvKN, participating to the SemEval-2016 Task 3 ""Community Question Answering"". The task targeted the reranking of questions and comments in real-life web fora both in English and Arabic. ConvKN combines convolutional tree kernels with convolutional neural networks and additional manually designed features including text similarity and thread specific features. For the first time, we applied tree kernels to syntactic trees of Arabic sentences for a reranking task. Our approaches obtained the second best results in three out of four tasks. 
The only task we performed averagely is the one where we did not use tree kernels in our classifier.",2016
schwenk-2012-continuous,https://aclanthology.org/C12-2104.pdf,0,,,,,,,"Continuous Space Translation Models for Phrase-Based Statistical Machine Translation. This paper presents a new approach to perform the estimation of the translation model probabilities of a phrase-based statistical machine translation system. We use neural networks to directly learn the translation probability of phrase pairs using continuous representations. The system can be easily trained on the same data used to build standard phrase-based systems. We provide experimental evidence that the approach seems to be able to infer meaningful translation probabilities for phrase pairs not seen in the training data, or even predict a list of the most likely translations given a source phrase. The approach can be used to rescore n-best lists, but we also discuss an integration into the Moses decoder. A preliminary evaluation on the English/French IWSLT task achieved improvements in the BLEU score and a human analysis showed that the new model often chooses semantically better translations. Several extensions of this work are discussed.",Continuous Space Translation Models for Phrase-Based Statistical Machine Translation,"This paper presents a new approach to perform the estimation of the translation model probabilities of a phrase-based statistical machine translation system. We use neural networks to directly learn the translation probability of phrase pairs using continuous representations. The system can be easily trained on the same data used to build standard phrase-based systems. We provide experimental evidence that the approach seems to be able to infer meaningful translation probabilities for phrase pairs not seen in the training data, or even predict a list of the most likely translations given a source phrase. The approach can be used to rescore n-best lists, but we also discuss an integration into the Moses decoder. A preliminary evaluation on the English/French IWSLT task achieved improvements in the BLEU score and a human analysis showed that the new model often chooses semantically better translations. Several extensions of this work are discussed.",Continuous Space Translation Models for Phrase-Based Statistical Machine Translation,"This paper presents a new approach to perform the estimation of the translation model probabilities of a phrase-based statistical machine translation system. We use neural networks to directly learn the translation probability of phrase pairs using continuous representations. The system can be easily trained on the same data used to build standard phrase-based systems. We provide experimental evidence that the approach seems to be able to infer meaningful translation probabilities for phrase pairs not seen in the training data, or even predict a list of the most likely translations given a source phrase. The approach can be used to rescore n-best lists, but we also discuss an integration into the Moses decoder. A preliminary evaluation on the English/French IWSLT task achieved improvements in the BLEU score and a human analysis showed that the new model often chooses semantically better translations. Several extensions of this work are discussed.","This work was partially financed by the French government (COSMAT, ANR-09-CORD-004), the European Commission (MATECAT, ICT-2011.4.2 -287688) and the DARPA BOLT project.","Continuous Space Translation Models for Phrase-Based Statistical Machine Translation. 
This paper presents a new approach to perform the estimation of the translation model probabilities of a phrase-based statistical machine translation system. We use neural networks to directly learn the translation probability of phrase pairs using continuous representations. The system can be easily trained on the same data used to build standard phrase-based systems. We provide experimental evidence that the approach seems to be able to infer meaningful translation probabilities for phrase pairs not seen in the training data, or even predict a list of the most likely translations given a source phrase. The approach can be used to rescore n-best lists, but we also discuss an integration into the Moses decoder. A preliminary evaluation on the English/French IWSLT task achieved improvements in the BLEU score and a human analysis showed that the new model often chooses semantically better translations. Several extensions of this work are discussed.",2012
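The abstract above learns phrase translation probabilities with a neural network over continuous representations. A toy numpy sketch that scores a fixed list of candidate target phrases for one source phrase; the pooling, dimensions, and candidate set are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
src_vocab, d, hidden, n_candidates = 50, 8, 12, 4
E_src = rng.normal(scale=0.1, size=(src_vocab, d))        # source word embeddings
W1 = rng.normal(scale=0.1, size=(d, hidden))
W2 = rng.normal(scale=0.1, size=(hidden, n_candidates))   # one score per candidate target phrase

def phrase_translation_probs(src_word_ids):
    x = E_src[src_word_ids].mean(axis=0)      # pool the source phrase into one vector
    h = np.tanh(x @ W1)                       # small feed-forward layer
    scores = h @ W2
    scores -= scores.max()
    p = np.exp(scores)
    return p / p.sum()                        # P(candidate target phrase | source phrase)

print(phrase_translation_probs([4, 9, 13]))   # four probabilities summing to 1
```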
plank-etal-2016-multilingual,https://aclanthology.org/P16-2067.pdf,0,,,,,,,"Multilingual Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Models and Auxiliary Loss. Bidirectional long short-term memory (bi-LSTM) networks have recently proven successful for various NLP sequence modeling tasks, but little is known about their reliance to input representations, target languages, data set size, and label noise. We address these issues and evaluate bi-LSTMs with word, character, and unicode byte embeddings for POS tagging. We compare bi-LSTMs to traditional POS taggers across languages and data sizes. We also present a novel bi-LSTM model, which combines the POS tagging loss function with an auxiliary loss function that accounts for rare words. The model obtains state-of-the-art performance across 22 languages, and works especially well for morphologically complex languages. Our analysis suggests that bi-LSTMs are less sensitive to training data size and label corruptions (at small noise levels) than previously assumed.",Multilingual Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Models and Auxiliary Loss,"Bidirectional long short-term memory (bi-LSTM) networks have recently proven successful for various NLP sequence modeling tasks, but little is known about their reliance to input representations, target languages, data set size, and label noise. We address these issues and evaluate bi-LSTMs with word, character, and unicode byte embeddings for POS tagging. We compare bi-LSTMs to traditional POS taggers across languages and data sizes. We also present a novel bi-LSTM model, which combines the POS tagging loss function with an auxiliary loss function that accounts for rare words. The model obtains state-of-the-art performance across 22 languages, and works especially well for morphologically complex languages. Our analysis suggests that bi-LSTMs are less sensitive to training data size and label corruptions (at small noise levels) than previously assumed.",Multilingual Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Models and Auxiliary Loss,"Bidirectional long short-term memory (bi-LSTM) networks have recently proven successful for various NLP sequence modeling tasks, but little is known about their reliance to input representations, target languages, data set size, and label noise. We address these issues and evaluate bi-LSTMs with word, character, and unicode byte embeddings for POS tagging. We compare bi-LSTMs to traditional POS taggers across languages and data sizes. We also present a novel bi-LSTM model, which combines the POS tagging loss function with an auxiliary loss function that accounts for rare words. The model obtains state-of-the-art performance across 22 languages, and works especially well for morphologically complex languages. Our analysis suggests that bi-LSTMs are less sensitive to training data size and label corruptions (at small noise levels) than previously assumed.",We thank the anonymous reviewers for their feedback. AS is funded by the ERC Starting Grant LOWLANDS No. 313695. YG is supported by The Israeli Science Foundation (grant number 1555/15) and a Google Research Award.,"Multilingual Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Models and Auxiliary Loss. 
Bidirectional long short-term memory (bi-LSTM) networks have recently proven successful for various NLP sequence modeling tasks, but little is known about their reliance to input representations, target languages, data set size, and label noise. We address these issues and evaluate bi-LSTMs with word, character, and unicode byte embeddings for POS tagging. We compare bi-LSTMs to traditional POS taggers across languages and data sizes. We also present a novel bi-LSTM model, which combines the POS tagging loss function with an auxiliary loss function that accounts for rare words. The model obtains state-of-the-art performance across 22 languages, and works especially well for morphologically complex languages. Our analysis suggests that bi-LSTMs are less sensitive to training data size and label corruptions (at small noise levels) than previously assumed.",2016
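The abstract above adds an auxiliary loss that accounts for rare words to a bi-LSTM tagger. A hedged PyTorch sketch of that setup, with the auxiliary target treated as a bucketed word-frequency class (an assumption about how the auxiliary labels are encoded, not the released tagger):

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, n_tags, n_freq_buckets, emb_dim=64, hidden=100):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.tag_head = nn.Linear(2 * hidden, n_tags)
        self.freq_head = nn.Linear(2 * hidden, n_freq_buckets)  # auxiliary task

    def forward(self, word_ids):
        h, _ = self.lstm(self.emb(word_ids))
        return self.tag_head(h), self.freq_head(h)

model = BiLSTMTagger(vocab_size=1000, n_tags=17, n_freq_buckets=10)
loss_fn = nn.CrossEntropyLoss()

words = torch.randint(0, 1000, (2, 6))        # toy batch: 2 sentences, 6 tokens
gold_tags = torch.randint(0, 17, (2, 6))
gold_freq = torch.randint(0, 10, (2, 6))      # frequency bucket per token (assumed labeling)

tag_logits, freq_logits = model(words)
loss = (loss_fn(tag_logits.reshape(-1, 17), gold_tags.reshape(-1))
        + loss_fn(freq_logits.reshape(-1, 10), gold_freq.reshape(-1)))  # joint objective
loss.backward()
```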
jiang-etal-2021-lnn,https://aclanthology.org/2021.acl-long.64.pdf,0,,,,,,,"LNN-EL: A Neuro-Symbolic Approach to Short-text Entity Linking. Entity linking (EL), the task of disambiguating mentions in text by linking them to entities in a knowledge graph, is crucial for text understanding, question answering or conversational systems. Entity linking on short text (e.g., single sentence or question) poses particular challenges due to limited context. While prior approaches use either heuristics or blackbox neural methods, here we propose LNN-EL, a neuro-symbolic approach that combines the advantages of using interpretable rules based on first-order logic with the performance of neural learning. Even though constrained to using rules, LNN-EL performs competitively against SotA black-box neural approaches, with the added benefits of extensibility and transferability. In particular, we show that we can easily blend existing rule templates given by a human expert, with multiple types of features (priors, BERT encodings, box embeddings, etc), and even scores resulting from previous EL methods, thus improving on such methods. For instance, on the LC-QuAD-1.0 dataset, we show more than 4% increase in F1 score over previous SotA. Finally, we show that the inductive bias offered by using logic results in learned rules that transfer well across datasets, even without fine tuning, while maintaining high accuracy. * Equal contribution; Author Hang Jiang did this work while interning at IBM.",{LNN}-{EL}: A Neuro-Symbolic Approach to Short-text Entity Linking,"Entity linking (EL), the task of disambiguating mentions in text by linking them to entities in a knowledge graph, is crucial for text understanding, question answering or conversational systems. Entity linking on short text (e.g., single sentence or question) poses particular challenges due to limited context. While prior approaches use either heuristics or blackbox neural methods, here we propose LNN-EL, a neuro-symbolic approach that combines the advantages of using interpretable rules based on first-order logic with the performance of neural learning. Even though constrained to using rules, LNN-EL performs competitively against SotA black-box neural approaches, with the added benefits of extensibility and transferability. In particular, we show that we can easily blend existing rule templates given by a human expert, with multiple types of features (priors, BERT encodings, box embeddings, etc), and even scores resulting from previous EL methods, thus improving on such methods. For instance, on the LC-QuAD-1.0 dataset, we show more than 4% increase in F1 score over previous SotA. Finally, we show that the inductive bias offered by using logic results in learned rules that transfer well across datasets, even without fine tuning, while maintaining high accuracy. * Equal contribution; Author Hang Jiang did this work while interning at IBM.",LNN-EL: A Neuro-Symbolic Approach to Short-text Entity Linking,"Entity linking (EL), the task of disambiguating mentions in text by linking them to entities in a knowledge graph, is crucial for text understanding, question answering or conversational systems. Entity linking on short text (e.g., single sentence or question) poses particular challenges due to limited context. 
While prior approaches use either heuristics or blackbox neural methods, here we propose LNN-EL, a neuro-symbolic approach that combines the advantages of using interpretable rules based on first-order logic with the performance of neural learning. Even though constrained to using rules, LNN-EL performs competitively against SotA black-box neural approaches, with the added benefits of extensibility and transferability. In particular, we show that we can easily blend existing rule templates given by a human expert, with multiple types of features (priors, BERT encodings, box embeddings, etc), and even scores resulting from previous EL methods, thus improving on such methods. For instance, on the LC-QuAD-1.0 dataset, we show more than 4% increase in F1 score over previous SotA. Finally, we show that the inductive bias offered by using logic results in learned rules that transfer well across datasets, even without fine tuning, while maintaining high accuracy. * Equal contribution; Author Hang Jiang did this work while interning at IBM.","We thank Ibrahim Abdelaziz, Pavan Kapanipathi, Srinivas Ravishankar, Berthold Reinwald, Salim Roukos and anonymous reviewers for their valuable inputs and feedback.","LNN-EL: A Neuro-Symbolic Approach to Short-text Entity Linking. Entity linking (EL), the task of disambiguating mentions in text by linking them to entities in a knowledge graph, is crucial for text understanding, question answering or conversational systems. Entity linking on short text (e.g., single sentence or question) poses particular challenges due to limited context. While prior approaches use either heuristics or blackbox neural methods, here we propose LNN-EL, a neuro-symbolic approach that combines the advantages of using interpretable rules based on first-order logic with the performance of neural learning. Even though constrained to using rules, LNN-EL performs competitively against SotA black-box neural approaches, with the added benefits of extensibility and transferability. In particular, we show that we can easily blend existing rule templates given by a human expert, with multiple types of features (priors, BERT encodings, box embeddings, etc), and even scores resulting from previous EL methods, thus improving on such methods. For instance, on the LC-QuAD-1.0 dataset, we show more than 4% increase in F1 score over previous SotA. Finally, we show that the inductive bias offered by using logic results in learned rules that transfer well across datasets, even without fine tuning, while maintaining high accuracy. * Equal contribution; Author Hang Jiang did this work while interning at IBM.",2021
jimenez-lopez-becerra-bonache-2016-machine,https://aclanthology.org/W16-4101.pdf,0,,,,,,,"Could Machine Learning Shed Light on Natural Language Complexity?. In this paper, we propose to use a subfield of machine learning-grammatical inference-to measure linguistic complexity from a developmental point of view. We focus on relative complexity by considering a child learner in the process of first language acquisition. The relevance of grammatical inference models for measuring linguistic complexity from a developmental point of view is based on the fact that algorithms proposed in this area can be considered computational models for studying first language acquisition. Even though it will be possible to use different techniques from the field of machine learning as computational models for dealing with linguistic complexity-since in any model we have algorithms that can learn from data-, we claim that grammatical inference models offer some advantages over other tools.",Could Machine Learning Shed Light on Natural Language Complexity?,"In this paper, we propose to use a subfield of machine learning-grammatical inference-to measure linguistic complexity from a developmental point of view. We focus on relative complexity by considering a child learner in the process of first language acquisition. The relevance of grammatical inference models for measuring linguistic complexity from a developmental point of view is based on the fact that algorithms proposed in this area can be considered computational models for studying first language acquisition. Even though it will be possible to use different techniques from the field of machine learning as computational models for dealing with linguistic complexity-since in any model we have algorithms that can learn from data-, we claim that grammatical inference models offer some advantages over other tools.",Could Machine Learning Shed Light on Natural Language Complexity?,"In this paper, we propose to use a subfield of machine learning-grammatical inference-to measure linguistic complexity from a developmental point of view. We focus on relative complexity by considering a child learner in the process of first language acquisition. The relevance of grammatical inference models for measuring linguistic complexity from a developmental point of view is based on the fact that algorithms proposed in this area can be considered computational models for studying first language acquisition. Even though it will be possible to use different techniques from the field of machine learning as computational models for dealing with linguistic complexity-since in any model we have algorithms that can learn from data-, we claim that grammatical inference models offer some advantages over other tools.","This research has been supported by the Ministerio de Economía y Competitividad under the project number FFI2015-69978-P (MINECO/FEDER) of the Programa Estatal de Fomento de la Investigación Científica y Técnica de Excelencia, Subprograma Estatal de Generación de Conocimiento.","Could Machine Learning Shed Light on Natural Language Complexity?. In this paper, we propose to use a subfield of machine learning-grammatical inference-to measure linguistic complexity from a developmental point of view. We focus on relative complexity by considering a child learner in the process of first language acquisition. 
The relevance of grammatical inference models for measuring linguistic complexity from a developmental point of view is based on the fact that algorithms proposed in this area can be considered computational models for studying first language acquisition. Even though it will be possible to use different techniques from the field of machine learning as computational models for dealing with linguistic complexity-since in any model we have algorithms that can learn from data-, we claim that grammatical inference models offer some advantages over other tools.",2016
song-etal-2010-active,https://aclanthology.org/W10-4121.pdf,0,,,,,,,"Active Learning Based Corpus Annotation. Opinion Mining aims to automatically acquire useful opinioned information and knowledge in subjective texts. Research of Chinese Opinioned Mining requires the support of annotated corpus for Chinese opinioned-subjective texts. To facilitate the work of corpus annotators, this paper implements an active learning based annotation tool for Chinese opinioned elements which can identify topic, sentiment, and opinion holder in a sentence automatically.",Active Learning Based Corpus Annotation,"Opinion Mining aims to automatically acquire useful opinioned information and knowledge in subjective texts. Research of Chinese Opinioned Mining requires the support of annotated corpus for Chinese opinioned-subjective texts. To facilitate the work of corpus annotators, this paper implements an active learning based annotation tool for Chinese opinioned elements which can identify topic, sentiment, and opinion holder in a sentence automatically.",Active Learning Based Corpus Annotation,"Opinion Mining aims to automatically acquire useful opinioned information and knowledge in subjective texts. Research of Chinese Opinioned Mining requires the support of annotated corpus for Chinese opinioned-subjective texts. To facilitate the work of corpus annotators, this paper implements an active learning based annotation tool for Chinese opinioned elements which can identify topic, sentiment, and opinion holder in a sentence automatically.","The author of this paper would like to thank Information Retrieval Lab, Harbin Institute of Technology for providing the tool (LTP) used in experiments. This research was supported by National Natural Science Foundation of China Grant No.60773087.","Active Learning Based Corpus Annotation. Opinion Mining aims to automatically acquire useful opinioned information and knowledge in subjective texts. Research of Chinese Opinioned Mining requires the support of annotated corpus for Chinese opinioned-subjective texts. To facilitate the work of corpus annotators, this paper implements an active learning based annotation tool for Chinese opinioned elements which can identify topic, sentiment, and opinion holder in a sentence automatically.",2010
rello-basterrechea-2010-automatic,https://aclanthology.org/W10-0301.pdf,0,,,,,,,"Automatic conjugation and identification of regular and irregular verb neologisms in Spanish. In this paper, a novel system for the automatic identification and conjugation of Spanish verb neologisms is presented. The paper describes a rule-based algorithm consisting of six steps which are taken to determine whether a new verb is regular or not, and to establish the rules that the verb should follow in its conjugation. The method was evaluated on 4,307 new verbs and its performance found to be satisfactory both for irregular and regular neologisms. The algorithm also contains extra rules to cater for verb neologisms in Spanish that do not exist as yet, but are inferred to be possible in light of existing cases of new verb creation in Spanish.",Automatic conjugation and identification of regular and irregular verb neologisms in {S}panish,"In this paper, a novel system for the automatic identification and conjugation of Spanish verb neologisms is presented. The paper describes a rule-based algorithm consisting of six steps which are taken to determine whether a new verb is regular or not, and to establish the rules that the verb should follow in its conjugation. The method was evaluated on 4,307 new verbs and its performance found to be satisfactory both for irregular and regular neologisms. The algorithm also contains extra rules to cater for verb neologisms in Spanish that do not exist as yet, but are inferred to be possible in light of existing cases of new verb creation in Spanish.",Automatic conjugation and identification of regular and irregular verb neologisms in Spanish,"In this paper, a novel system for the automatic identification and conjugation of Spanish verb neologisms is presented. The paper describes a rule-based algorithm consisting of six steps which are taken to determine whether a new verb is regular or not, and to establish the rules that the verb should follow in its conjugation. The method was evaluated on 4,307 new verbs and its performance found to be satisfactory both for irregular and regular neologisms. The algorithm also contains extra rules to cater for verb neologisms in Spanish that do not exist as yet, but are inferred to be possible in light of existing cases of new verb creation in Spanish.","We would like to express or gratitude to the Molino de Ideas s.a. engineering team who have successfully implemented the method, specially to Daniel Ayuso de Santos and Alejandro de Pablos López.","Automatic conjugation and identification of regular and irregular verb neologisms in Spanish. In this paper, a novel system for the automatic identification and conjugation of Spanish verb neologisms is presented. The paper describes a rule-based algorithm consisting of six steps which are taken to determine whether a new verb is regular or not, and to establish the rules that the verb should follow in its conjugation. The method was evaluated on 4,307 new verbs and its performance found to be satisfactory both for irregular and regular neologisms. The algorithm also contains extra rules to cater for verb neologisms in Spanish that do not exist as yet, but are inferred to be possible in light of existing cases of new verb creation in Spanish.",2010
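The abstract above applies ordered rules to decide whether a new verb is regular and how to conjugate it. A deliberately simplified sketch covering only the regular present-tense paradigm, with a tiny lookup standing in for the paper's irregularity rules (this is not the six-step algorithm itself):

```python
REGULAR_ENDINGS = {
    "ar": ["o", "as", "a", "amos", "áis", "an"],
    "er": ["o", "es", "e", "emos", "éis", "en"],
    "ir": ["o", "es", "e", "imos", "ís", "en"],
}
KNOWN_IRREGULAR_PATTERNS = {"tener", "decir"}   # stand-in for real pattern rules

def conjugate_present(infinitive):
    ending = infinitive[-2:]
    if ending not in REGULAR_ENDINGS:
        raise ValueError("not a Spanish infinitive: " + infinitive)
    if any(infinitive.endswith(p) for p in KNOWN_IRREGULAR_PATTERNS):
        return None          # defer to irregular-pattern rules (not sketched here)
    stem = infinitive[:-2]
    return [stem + suffix for suffix in REGULAR_ENDINGS[ending]]

print(conjugate_present("tuitear"))   # a verb neologism, conjugated as regular -ar
```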
saint-dizier-2016-argument,https://aclanthology.org/L16-1156.pdf,0,,,,,,,"Argument Mining: the Bottleneck of Knowledge and Language Resources. Given a controversial issue, argument mining from natural language texts (news papers, and any form of text on the Internet) is extremely challenging: domain knowledge is often required together with appropriate forms of inferences to identify arguments. This contribution explores the types of knowledge that are required and how they can be paired with reasoning schemes, language processing and language resources to accurately mine arguments. We show via corpus analysis that the Generative Lexicon, enhanced in different manners and viewed as both a lexicon and a domain knowledge representation, is a relevant approach. In this paper, corpus annotation for argument mining is first developed, then we show how the generative lexicon approach must be adapted and how it can be paired with language processing patterns to extract and specify the nature of arguments. Our approach to argument mining is thus knowledge driven",Argument Mining: the Bottleneck of Knowledge and Language Resources,"Given a controversial issue, argument mining from natural language texts (news papers, and any form of text on the Internet) is extremely challenging: domain knowledge is often required together with appropriate forms of inferences to identify arguments. This contribution explores the types of knowledge that are required and how they can be paired with reasoning schemes, language processing and language resources to accurately mine arguments. We show via corpus analysis that the Generative Lexicon, enhanced in different manners and viewed as both a lexicon and a domain knowledge representation, is a relevant approach. In this paper, corpus annotation for argument mining is first developed, then we show how the generative lexicon approach must be adapted and how it can be paired with language processing patterns to extract and specify the nature of arguments. Our approach to argument mining is thus knowledge driven",Argument Mining: the Bottleneck of Knowledge and Language Resources,"Given a controversial issue, argument mining from natural language texts (news papers, and any form of text on the Internet) is extremely challenging: domain knowledge is often required together with appropriate forms of inferences to identify arguments. This contribution explores the types of knowledge that are required and how they can be paired with reasoning schemes, language processing and language resources to accurately mine arguments. We show via corpus analysis that the Generative Lexicon, enhanced in different manners and viewed as both a lexicon and a domain knowledge representation, is a relevant approach. In this paper, corpus annotation for argument mining is first developed, then we show how the generative lexicon approach must be adapted and how it can be paired with language processing patterns to extract and specify the nature of arguments. Our approach to argument mining is thus knowledge driven",,"Argument Mining: the Bottleneck of Knowledge and Language Resources. Given a controversial issue, argument mining from natural language texts (news papers, and any form of text on the Internet) is extremely challenging: domain knowledge is often required together with appropriate forms of inferences to identify arguments. 
This contribution explores the types of knowledge that are required and how they can be paired with reasoning schemes, language processing and language resources to accurately mine arguments. We show via corpus analysis that the Generative Lexicon, enhanced in different manners and viewed as both a lexicon and a domain knowledge representation, is a relevant approach. In this paper, corpus annotation for argument mining is first developed, then we show how the generative lexicon approach must be adapted and how it can be paired with language processing patterns to extract and specify the nature of arguments. Our approach to argument mining is thus knowledge driven",2016
pasca-2015-interpreting,https://aclanthology.org/N15-1037.pdf,0,,,,,,,"Interpreting Compound Noun Phrases Using Web Search Queries. A weakly-supervised method is applied to anonymized queries to extract lexical interpretations of compound noun phrases (e.g., ""fortune 500 companies""). The interpretations explain the subsuming role (""listed in"") that modifiers (fortune 500) play relative to heads (companies) within the noun phrases. Experimental results over evaluation sets of noun phrases from multiple sources demonstrate that interpretations extracted from queries have encouraging coverage and precision. The top interpretation extracted is deemed relevant for more than 70% of the noun phrases.",Interpreting Compound Noun Phrases Using Web Search Queries,"A weakly-supervised method is applied to anonymized queries to extract lexical interpretations of compound noun phrases (e.g., ""fortune 500 companies""). The interpretations explain the subsuming role (""listed in"") that modifiers (fortune 500) play relative to heads (companies) within the noun phrases. Experimental results over evaluation sets of noun phrases from multiple sources demonstrate that interpretations extracted from queries have encouraging coverage and precision. The top interpretation extracted is deemed relevant for more than 70% of the noun phrases.",Interpreting Compound Noun Phrases Using Web Search Queries,"A weakly-supervised method is applied to anonymized queries to extract lexical interpretations of compound noun phrases (e.g., ""fortune 500 companies""). The interpretations explain the subsuming role (""listed in"") that modifiers (fortune 500) play relative to heads (companies) within the noun phrases. Experimental results over evaluation sets of noun phrases from multiple sources demonstrate that interpretations extracted from queries have encouraging coverage and precision. The top interpretation extracted is deemed relevant for more than 70% of the noun phrases.","The paper benefits from comments from Jutta Degener, Mihai Surdeanu and Susanne Riehemann. Data extracted by Haixun Wang and Jian Li is the source of the IsA vocabulary of noun phrases used in the evaluation.","Interpreting Compound Noun Phrases Using Web Search Queries. A weakly-supervised method is applied to anonymized queries to extract lexical interpretations of compound noun phrases (e.g., ""fortune 500 companies""). The interpretations explain the subsuming role (""listed in"") that modifiers (fortune 500) play relative to heads (companies) within the noun phrases. Experimental results over evaluation sets of noun phrases from multiple sources demonstrate that interpretations extracted from queries have encouraging coverage and precision. The top interpretation extracted is deemed relevant for more than 70% of the noun phrases.",2015
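The abstract above extracts interpretations such as "listed in" for compounds like "fortune 500 companies" from queries. A small sketch of the kind of pattern matching this suggests, with one assumed query template rather than the paper's weakly-supervised extraction method:

```python
import re

def interpret(modifier, head, queries):
    """Return candidate interpretations of '<modifier> <head>', ranked by query frequency."""
    pattern = re.compile(
        r"^" + re.escape(head) + r"\s+(.+?)\s+" + re.escape(modifier) + r"$")
    counts = {}
    for q in queries:
        m = pattern.match(q.lower())
        if m:
            phrase = m.group(1)
            counts[phrase] = counts.get(phrase, 0) + 1
    return sorted(counts.items(), key=lambda kv: -kv[1])

queries = [
    "companies listed in fortune 500",
    "companies ranked in fortune 500",
    "companies listed in fortune 500",
]
print(interpret("fortune 500", "companies", queries))
# [('listed in', 2), ('ranked in', 1)]
```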
haffari-etal-2011-ensemble,https://aclanthology.org/P11-2125.pdf,0,,,,,,,"An Ensemble Model that Combines Syntactic and Semantic Clustering for Discriminative Dependency Parsing. We combine multiple word representations based on semantic clusters extracted from the (Brown et al., 1992) algorithm and syntactic clusters obtained from the Berkeley parser (Petrov et al., 2006) in order to improve discriminative dependency parsing in the MST-Parser framework (McDonald et al., 2005). We also provide an ensemble method for combining diverse cluster-based models. The two contributions together significantly improves unlabeled dependency accuracy from 90.82% to 92.13%.",An Ensemble Model that Combines Syntactic and Semantic Clustering for Discriminative Dependency Parsing,"We combine multiple word representations based on semantic clusters extracted from the (Brown et al., 1992) algorithm and syntactic clusters obtained from the Berkeley parser (Petrov et al., 2006) in order to improve discriminative dependency parsing in the MST-Parser framework (McDonald et al., 2005). We also provide an ensemble method for combining diverse cluster-based models. The two contributions together significantly improves unlabeled dependency accuracy from 90.82% to 92.13%.",An Ensemble Model that Combines Syntactic and Semantic Clustering for Discriminative Dependency Parsing,"We combine multiple word representations based on semantic clusters extracted from the (Brown et al., 1992) algorithm and syntactic clusters obtained from the Berkeley parser (Petrov et al., 2006) in order to improve discriminative dependency parsing in the MST-Parser framework (McDonald et al., 2005). We also provide an ensemble method for combining diverse cluster-based models. The two contributions together significantly improves unlabeled dependency accuracy from 90.82% to 92.13%.","This research was partially supported by NSERC, Canada (RGPIN: 264905). We would like to thank Terry Koo for his help with the cluster-based features for dependency parsing and Ryan McDonald for the MSTParser source code which we modified and used for the experiments in this paper.","An Ensemble Model that Combines Syntactic and Semantic Clustering for Discriminative Dependency Parsing. We combine multiple word representations based on semantic clusters extracted from the (Brown et al., 1992) algorithm and syntactic clusters obtained from the Berkeley parser (Petrov et al., 2006) in order to improve discriminative dependency parsing in the MST-Parser framework (McDonald et al., 2005). We also provide an ensemble method for combining diverse cluster-based models. The two contributions together significantly improves unlabeled dependency accuracy from 90.82% to 92.13%.",2011
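The abstract above combines several cluster-based parsing models into an ensemble. A toy sketch that averages head-selection scores across models and decodes greedily; real decoding would use an MST algorithm, and the score matrices here are random stand-ins for trained models:

```python
import numpy as np

def ensemble_heads(score_matrices):
    """score_matrices: list of (n+1, n+1) arrays; entry [h, m] scores head h
    for modifier m, with index 0 reserved for the artificial root."""
    avg = np.mean(score_matrices, axis=0)     # average the models' edge scores
    n = avg.shape[0] - 1
    heads = []
    for m in range(1, n + 1):
        cand = avg[:, m].copy()
        cand[m] = -np.inf                     # a token cannot head itself
        heads.append(int(cand.argmax()))
    return heads                              # heads[i] is the head of token i+1

rng = np.random.default_rng(2)
models = [rng.normal(size=(5, 5)) for _ in range(3)]   # 3 models, 4-token sentence
print(ensemble_heads(models))
```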
chung-2005-market,https://aclanthology.org/Y05-1007.pdf,0,,,,,,,"MARKET Metaphors: Chinese, English and Malay. In this paper, MARKET metaphors used by different communities (Chinese, Malay and English) are laid out based on the frequency counts of these metaphors and their occurrences in different syntactic positions. The results show that certain types of metaphors have preferences for different syntactic positions for 'market.' For instance, MARKET IS A PERSON in all three languages prefers to place 'market' in the subject position. In addition to this finding, the choice of metaphor types by different speech communities may also reflect their perspectives regarding their country's economy. This is evidenced by the fewer instances of MARKET IS COMPETITION in the English data. The instances that describe how the market falls (plunges and crashes) may reflect the speakers' concerns with the maintenance of their power in the market rather than the competitiveness of their market. Therefore, through using quantitative data, this paper is able to infer the economic status of these speech communities. This can be done not only through analyzing the semantic meanings of the metaphors but also their interface with syntax.","{MARKET} Metaphors: {C}hinese, {E}nglish and {M}alay","In this paper, MARKET metaphors used by different communities (Chinese, Malay and English) are laid out based on the frequency counts of these metaphors and their occurrences in different syntactic positions. The results show that certain types of metaphors have preferences for different syntactic positions for 'market.' For instance, MARKET IS A PERSON in all three languages prefers to place 'market' in the subject position. In addition to this finding, the choice of metaphor types by different speech communities may also reflect their perspectives regarding their country's economy. This is evidenced by the fewer instances of MARKET IS COMPETITION in the English data. The instances that describe how the market falls (plunges and crashes) may reflect the speakers' concerns with the maintenance of their power in the market rather than the competitiveness of their market. Therefore, through using quantitative data, this paper is able to infer the economic status of these speech communities. This can be done not only through analyzing the semantic meanings of the metaphors but also their interface with syntax.","MARKET Metaphors: Chinese, English and Malay","In this paper, MARKET metaphors used by different communities (Chinese, Malay and English) are laid out based on the frequency counts of these metaphors and their occurrences in different syntactic positions. The results show that certain types of metaphors have preferences for different syntactic positions for 'market.' For instance, MARKET IS A PERSON in all three languages prefers to place 'market' in the subject position. In addition to this finding, the choice of metaphor types by different speech communities may also reflect their perspectives regarding their country's economy. This is evidenced by the fewer instances of MARKET IS COMPETITION in the English data. The instances that describe how the market falls (plunges and crashes) may reflect the speakers' concerns with the maintenance of their power in the market rather than the competitiveness of their market. Therefore, through using quantitative data, this paper is able to infer the economic status of these speech communities. 
This can be done not only through analyzing the semantic meanings of the metaphors but also their interface with syntax.",,"MARKET Metaphors: Chinese, English and Malay. In this paper, MARKET metaphors used by different communities (Chinese, Malay and English) are laid out based on the frequency counts of these metaphors and their occurrences in different syntactic positions. The results show that certain types of metaphors have preferences for different syntactic positions for 'market.' For instance, MARKET IS A PERSON in all three languages prefers to place 'market' in the subject position. In addition to this finding, the choice of metaphor types by different speech communities may also reflect their perspectives regarding their country's economy. This is evidenced by the fewer instances of MARKET IS COMPETITION in the English data. The instances that describe how the market falls (plunges and crashes) may reflect the speakers' concerns with the maintenance of their power in the market rather than the competitiveness of their market. Therefore, through using quantitative data, this paper is able to infer the economic status of these speech communities. This can be done not only through analyzing the semantic meanings of the metaphors but also their interface with syntax.",2005
gliwa-etal-2019-samsum,https://aclanthology.org/D19-5409.pdf,0,,,,,,,"SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization. This paper introduces the SAMSum Corpus, a new dataset with abstractive dialogue summaries. We investigate the challenges it poses for automated summarization by testing several models and comparing their results with those obtained on a corpus of news articles. We show that model-generated summaries of dialogues achieve higher ROUGE scores than the model-generated summaries of news-in contrast with human evaluators' judgement. This suggests that a challenging task of abstractive dialogue summarization requires dedicated models and non-standard quality measures. To our knowledge, our study is the first attempt to introduce a high-quality chatdialogues corpus, manually annotated with abstractive summarizations, which can be used by the research community for further studies.",{SAMS}um Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization,"This paper introduces the SAMSum Corpus, a new dataset with abstractive dialogue summaries. We investigate the challenges it poses for automated summarization by testing several models and comparing their results with those obtained on a corpus of news articles. We show that model-generated summaries of dialogues achieve higher ROUGE scores than the model-generated summaries of news-in contrast with human evaluators' judgement. This suggests that a challenging task of abstractive dialogue summarization requires dedicated models and non-standard quality measures. To our knowledge, our study is the first attempt to introduce a high-quality chatdialogues corpus, manually annotated with abstractive summarizations, which can be used by the research community for further studies.",SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization,"This paper introduces the SAMSum Corpus, a new dataset with abstractive dialogue summaries. We investigate the challenges it poses for automated summarization by testing several models and comparing their results with those obtained on a corpus of news articles. We show that model-generated summaries of dialogues achieve higher ROUGE scores than the model-generated summaries of news-in contrast with human evaluators' judgement. This suggests that a challenging task of abstractive dialogue summarization requires dedicated models and non-standard quality measures. To our knowledge, our study is the first attempt to introduce a high-quality chatdialogues corpus, manually annotated with abstractive summarizations, which can be used by the research community for further studies.","We would like to express our sincere thanks to Tunia Błachno, Oliwia Ebebenge, Monika Jędras and Małgorzata Krawentek for their huge contribution to the corpus collection -without their ideas, management of the linguistic task and verification of examples we would not be able to create this paper. We are also grateful for the reviewers' helpful comments and suggestions.","SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization. This paper introduces the SAMSum Corpus, a new dataset with abstractive dialogue summaries. We investigate the challenges it poses for automated summarization by testing several models and comparing their results with those obtained on a corpus of news articles. 
We show that model-generated summaries of dialogues achieve higher ROUGE scores than the model-generated summaries of news, in contrast with human evaluators' judgement. This suggests that a challenging task of abstractive dialogue summarization requires dedicated models and non-standard quality measures. To our knowledge, our study is the first attempt to introduce a high-quality chat-dialogues corpus, manually annotated with abstractive summarizations, which can be used by the research community for further studies.",2019
klafka-ettinger-2020-spying,https://aclanthology.org/2020.acl-main.434.pdf,0,,,,,,,"Spying on Your Neighbors: Fine-grained Probing of Contextual Embeddings for Information about Surrounding Words. Although models using contextual word embeddings have achieved state-of-the-art results on a host of NLP tasks, little is known about exactly what information these embeddings encode about the context words that they are understood to reflect. To address this question, we introduce a suite of probing tasks that enable fine-grained testing of contextual embeddings for encoding of information about surrounding words. We apply these tasks to examine the popular BERT, ELMo and GPT contextual encoders, and find that each of our tested information types is indeed encoded as contextual information across tokens, often with nearperfect recoverability-but the encoders vary in which features they distribute to which tokens, how nuanced their distributions are, and how robust the encoding of each feature is to distance. We discuss implications of these results for how different types of models break down and prioritize word-level context information when constructing token embeddings.",Spying on Your Neighbors: Fine-grained Probing of Contextual Embeddings for Information about Surrounding Words,"Although models using contextual word embeddings have achieved state-of-the-art results on a host of NLP tasks, little is known about exactly what information these embeddings encode about the context words that they are understood to reflect. To address this question, we introduce a suite of probing tasks that enable fine-grained testing of contextual embeddings for encoding of information about surrounding words. We apply these tasks to examine the popular BERT, ELMo and GPT contextual encoders, and find that each of our tested information types is indeed encoded as contextual information across tokens, often with nearperfect recoverability-but the encoders vary in which features they distribute to which tokens, how nuanced their distributions are, and how robust the encoding of each feature is to distance. We discuss implications of these results for how different types of models break down and prioritize word-level context information when constructing token embeddings.",Spying on Your Neighbors: Fine-grained Probing of Contextual Embeddings for Information about Surrounding Words,"Although models using contextual word embeddings have achieved state-of-the-art results on a host of NLP tasks, little is known about exactly what information these embeddings encode about the context words that they are understood to reflect. To address this question, we introduce a suite of probing tasks that enable fine-grained testing of contextual embeddings for encoding of information about surrounding words. We apply these tasks to examine the popular BERT, ELMo and GPT contextual encoders, and find that each of our tested information types is indeed encoded as contextual information across tokens, often with nearperfect recoverability-but the encoders vary in which features they distribute to which tokens, how nuanced their distributions are, and how robust the encoding of each feature is to distance. We discuss implications of these results for how different types of models break down and prioritize word-level context information when constructing token embeddings.","We would like to thank Itamar Francez and Sam Wiseman for helpful discussion, and anonymous reviewers for their valuable feedback. 
This material is based upon work supported by the National Science Foundation under Award No. 1941160. ","Spying on Your Neighbors: Fine-grained Probing of Contextual Embeddings for Information about Surrounding Words. Although models using contextual word embeddings have achieved state-of-the-art results on a host of NLP tasks, little is known about exactly what information these embeddings encode about the context words that they are understood to reflect. To address this question, we introduce a suite of probing tasks that enable fine-grained testing of contextual embeddings for encoding of information about surrounding words. We apply these tasks to examine the popular BERT, ELMo and GPT contextual encoders, and find that each of our tested information types is indeed encoded as contextual information across tokens, often with near-perfect recoverability, but the encoders vary in which features they distribute to which tokens, how nuanced their distributions are, and how robust the encoding of each feature is to distance. We discuss implications of these results for how different types of models break down and prioritize word-level context information when constructing token embeddings.",2020
huang-etal-2022-distilling,https://aclanthology.org/2022.fever-1.3.pdf,0,,,,,,,"Distilling Salient Reviews with Zero Labels. Many people read online reviews to learn about real-world entities of their interest. However, majority of reviews only describes general experiences and opinions of the customers, and may not reveal facts that are specific to the entity being reviewed. In this work, we focus on a novel task of mining from a review corpus sentences that are unique for each entity. We refer to this task as Salient Fact Extraction. Salient facts are extremely scarce due to their very nature. Consequently, collecting labeled examples for training supervised models is tedious and cost-prohibitive. To alleviate this scarcity problem, we develop an unsupervised method ZL-Distiller, which leverages contextual language representations of the reviews and their distributional patterns to identify salient sentences about entities. Our experiments on multiple domains (hotels, products, and restaurants) show that ZL-Distiller achieves state-of-theart performance and further boosts the performance of other supervised/unsupervised algorithms for the task. Furthermore, we show that salient sentences mined by ZL-Distiller provide unique and detailed information about entities, which benefit downstream NLP applications including question answering and summarization. * Work done during internship at Megagon Labs. † Work done while at Megagon Labs. The Fifth Workshop on Fact Extraction and VERification (FEVER). Co-located with Association for Computational Linguistics 2022.",Distilling Salient Reviews with Zero Labels,"Many people read online reviews to learn about real-world entities of their interest. However, majority of reviews only describes general experiences and opinions of the customers, and may not reveal facts that are specific to the entity being reviewed. In this work, we focus on a novel task of mining from a review corpus sentences that are unique for each entity. We refer to this task as Salient Fact Extraction. Salient facts are extremely scarce due to their very nature. Consequently, collecting labeled examples for training supervised models is tedious and cost-prohibitive. To alleviate this scarcity problem, we develop an unsupervised method ZL-Distiller, which leverages contextual language representations of the reviews and their distributional patterns to identify salient sentences about entities. Our experiments on multiple domains (hotels, products, and restaurants) show that ZL-Distiller achieves state-of-theart performance and further boosts the performance of other supervised/unsupervised algorithms for the task. Furthermore, we show that salient sentences mined by ZL-Distiller provide unique and detailed information about entities, which benefit downstream NLP applications including question answering and summarization. * Work done during internship at Megagon Labs. † Work done while at Megagon Labs. The Fifth Workshop on Fact Extraction and VERification (FEVER). Co-located with Association for Computational Linguistics 2022.",Distilling Salient Reviews with Zero Labels,"Many people read online reviews to learn about real-world entities of their interest. However, majority of reviews only describes general experiences and opinions of the customers, and may not reveal facts that are specific to the entity being reviewed. In this work, we focus on a novel task of mining from a review corpus sentences that are unique for each entity. 
We refer to this task as Salient Fact Extraction. Salient facts are extremely scarce due to their very nature. Consequently, collecting labeled examples for training supervised models is tedious and cost-prohibitive. To alleviate this scarcity problem, we develop an unsupervised method ZL-Distiller, which leverages contextual language representations of the reviews and their distributional patterns to identify salient sentences about entities. Our experiments on multiple domains (hotels, products, and restaurants) show that ZL-Distiller achieves state-of-theart performance and further boosts the performance of other supervised/unsupervised algorithms for the task. Furthermore, we show that salient sentences mined by ZL-Distiller provide unique and detailed information about entities, which benefit downstream NLP applications including question answering and summarization. * Work done during internship at Megagon Labs. † Work done while at Megagon Labs. The Fifth Workshop on Fact Extraction and VERification (FEVER). Co-located with Association for Computational Linguistics 2022.",,"Distilling Salient Reviews with Zero Labels. Many people read online reviews to learn about real-world entities of their interest. However, majority of reviews only describes general experiences and opinions of the customers, and may not reveal facts that are specific to the entity being reviewed. In this work, we focus on a novel task of mining from a review corpus sentences that are unique for each entity. We refer to this task as Salient Fact Extraction. Salient facts are extremely scarce due to their very nature. Consequently, collecting labeled examples for training supervised models is tedious and cost-prohibitive. To alleviate this scarcity problem, we develop an unsupervised method ZL-Distiller, which leverages contextual language representations of the reviews and their distributional patterns to identify salient sentences about entities. Our experiments on multiple domains (hotels, products, and restaurants) show that ZL-Distiller achieves state-of-theart performance and further boosts the performance of other supervised/unsupervised algorithms for the task. Furthermore, we show that salient sentences mined by ZL-Distiller provide unique and detailed information about entities, which benefit downstream NLP applications including question answering and summarization. * Work done during internship at Megagon Labs. † Work done while at Megagon Labs. The Fifth Workshop on Fact Extraction and VERification (FEVER). Co-located with Association for Computational Linguistics 2022.",2022
lowe-etal-1994-language,https://aclanthology.org/H94-1087.pdf,0,,,,,,,"Language Identification via Large Vocabulary Speaker Independent Continuous Speech Recognition. The goal of this study is to evaluate the potential for using large vocabulary continuous speech recognition as an engine for automatically classifying utterances according to the language being spoken. The problem of language identification is often thought of as being separate from the problem of speech recognition. But in this paper, as in Dragon's earlier work on topic and speaker identification, we explore a unifying approach to all three message classification problems based on the underlying stochastic process which gives rise to speech. We discuss the theoretical framework upon which our message classification systems are built and report on a series of experiments in which this theory is tested, using large vocabulary continuous speech recognition to distinguish English from Spanish.",Language Identification via Large Vocabulary Speaker Independent Continuous Speech Recognition,"The goal of this study is to evaluate the potential for using large vocabulary continuous speech recognition as an engine for automatically classifying utterances according to the language being spoken. The problem of language identification is often thought of as being separate from the problem of speech recognition. But in this paper, as in Dragon's earlier work on topic and speaker identification, we explore a unifying approach to all three message classification problems based on the underlying stochastic process which gives rise to speech. We discuss the theoretical framework upon which our message classification systems are built and report on a series of experiments in which this theory is tested, using large vocabulary continuous speech recognition to distinguish English from Spanish.",Language Identification via Large Vocabulary Speaker Independent Continuous Speech Recognition,"The goal of this study is to evaluate the potential for using large vocabulary continuous speech recognition as an engine for automatically classifying utterances according to the language being spoken. The problem of language identification is often thought of as being separate from the problem of speech recognition. But in this paper, as in Dragon's earlier work on topic and speaker identification, we explore a unifying approach to all three message classification problems based on the underlying stochastic process which gives rise to speech. We discuss the theoretical framework upon which our message classification systems are built and report on a series of experiments in which this theory is tested, using large vocabulary continuous speech recognition to distinguish English from Spanish.",,"Language Identification via Large Vocabulary Speaker Independent Continuous Speech Recognition. The goal of this study is to evaluate the potential for using large vocabulary continuous speech recognition as an engine for automatically classifying utterances according to the language being spoken. The problem of language identification is often thought of as being separate from the problem of speech recognition. But in this paper, as in Dragon's earlier work on topic and speaker identification, we explore a unifying approach to all three message classification problems based on the underlying stochastic process which gives rise to speech. 
We discuss the theoretical framework upon which our message classification systems are built and report on a series of experiments in which this theory is tested, using large vocabulary continuous speech recognition to distinguish English from Spanish.",1994
carroll-etal-2000-engineering,https://aclanthology.org/W00-2007.pdf,0,,,,,,,"Engineering a Wide-Coverage Lexicalized Grammar. We discuss a number of practical issues that have arisen in the development of a wide-coverage lexicalized grammar for English. In particular, we consider the way in which the design of the grammar and of its encoding was influenced by issues relating to the size of the grammar.",Engineering a Wide-Coverage Lexicalized Grammar,"We discuss a number of practical issues that have arisen in the development of a wide-coverage lexicalized grammar for English. In particular, we consider the way in which the design of the grammar and of its encoding was influenced by issues relating to the size of the grammar.",Engineering a Wide-Coverage Lexicalized Grammar,"We discuss a number of practical issues that have arisen in the development of a wide-coverage lexicalized grammar for English. In particular, we consider the way in which the design of the grammar and of its encoding was influenced by issues relating to the size of the grammar.",,"Engineering a Wide-Coverage Lexicalized Grammar. We discuss a number of practical issues that have arisen in the development of a wide-coverage lexicalized grammar for English. In particular, we consider the way in which the design of the grammar and of its encoding was influenced by issues relating to the size of the grammar.",2000
penkale-2013-tailor,https://aclanthology.org/2013.tc-1.13.pdf,0,,,,,,,"Tailor-made quality-controlled translation. Traditional 'one-size-fits-all' models are failing to meet businesses' requirements. To support the growing demand for cost-effective translation, fine-grained control of quality is required, enabling fit-for-purpose content to be delivered at predictable quality and cost levels. This paper argues for customisable levels of quality, detailing the variables which can be altered to achieve a certain level of quality, and showing how this model can be implemented within Lingo24's Coach translation platform.",Tailor-made quality-controlled translation,"Traditional 'one-size-fits-all' models are failing to meet businesses' requirements. To support the growing demand for cost-effective translation, fine-grained control of quality is required, enabling fit-for-purpose content to be delivered at predictable quality and cost levels. This paper argues for customisable levels of quality, detailing the variables which can be altered to achieve a certain level of quality, and showing how this model can be implemented within Lingo24's Coach translation platform.",Tailor-made quality-controlled translation,"Traditional 'one-size-fits-all' models are failing to meet businesses' requirements. To support the growing demand for cost-effective translation, fine-grained control of quality is required, enabling fit-for-purpose content to be delivered at predictable quality and cost levels. This paper argues for customisable levels of quality, detailing the variables which can be altered to achieve a certain level of quality, and showing how this model can be implemented within Lingo24's Coach translation platform.",,"Tailor-made quality-controlled translation. Traditional 'one-size-fits-all' models are failing to meet businesses' requirements. To support the growing demand for cost-effective translation, fine-grained control of quality is required, enabling fit-for-purpose content to be delivered at predictable quality and cost levels. This paper argues for customisable levels of quality, detailing the variables which can be altered to achieve a certain level of quality, and showing how this model can be implemented within Lingo24's Coach translation platform.",2013
patrick-li-2009-cascade,https://aclanthology.org/U09-1014.pdf,1,,,,health,,,"A Cascade Approach to Extracting Medication Events. Information Extraction, from the electronic clinical record is a comparatively new topic for computational linguists. In order to utilize the records to improve the efficiency and quality of health care, the knowledge content should be automatically encoded; however this poses a number of challenges for Natural Language Processing (NLP). In this paper, we present a cascade approach to discover the medicationrelated information (MEDICATION, DOSAGE, MODE, FREQUENCY, DURATION, REASON, and CONTEXT) from narrative patient records. The prototype of this system was used to participate the i2b2 2009 medication extraction challenge. The results show better than 90% accuracy on 5 out of 7 entities used in the study.",A Cascade Approach to Extracting Medication Events,"Information Extraction, from the electronic clinical record is a comparatively new topic for computational linguists. In order to utilize the records to improve the efficiency and quality of health care, the knowledge content should be automatically encoded; however this poses a number of challenges for Natural Language Processing (NLP). In this paper, we present a cascade approach to discover the medicationrelated information (MEDICATION, DOSAGE, MODE, FREQUENCY, DURATION, REASON, and CONTEXT) from narrative patient records. The prototype of this system was used to participate the i2b2 2009 medication extraction challenge. The results show better than 90% accuracy on 5 out of 7 entities used in the study.",A Cascade Approach to Extracting Medication Events,"Information Extraction, from the electronic clinical record is a comparatively new topic for computational linguists. In order to utilize the records to improve the efficiency and quality of health care, the knowledge content should be automatically encoded; however this poses a number of challenges for Natural Language Processing (NLP). In this paper, we present a cascade approach to discover the medicationrelated information (MEDICATION, DOSAGE, MODE, FREQUENCY, DURATION, REASON, and CONTEXT) from narrative patient records. The prototype of this system was used to participate the i2b2 2009 medication extraction challenge. The results show better than 90% accuracy on 5 out of 7 entities used in the study.","We would like to acknowledge the contribution of Stephen Crawshaw, Yefeng Wang and other members in the Health Information Technologies Research Laboratory.Deidentified clinical records used in this research were provided by the i2b2 National Center for Biomedical Computing funded by U54LM008748 and were originally prepared for the Shared Tasks for Challenges in NLP for Clinical Data organized by Dr. Ozlem Uzuner, i2b2 and SUNY.","A Cascade Approach to Extracting Medication Events. Information Extraction, from the electronic clinical record is a comparatively new topic for computational linguists. In order to utilize the records to improve the efficiency and quality of health care, the knowledge content should be automatically encoded; however this poses a number of challenges for Natural Language Processing (NLP). In this paper, we present a cascade approach to discover the medicationrelated information (MEDICATION, DOSAGE, MODE, FREQUENCY, DURATION, REASON, and CONTEXT) from narrative patient records. The prototype of this system was used to participate the i2b2 2009 medication extraction challenge. 
The results show better than 90% accuracy on 5 out of 7 entities used in the study.",2009
brook-weiss-etal-2021-qa,https://aclanthology.org/2021.emnlp-main.778.pdf,0,,,,,,,"QA-Align: Representing Cross-Text Content Overlap by Aligning Question-Answer Propositions. Multi-text applications, such as multidocument summarization, are typically required to model redundancies across related texts. Current methods confronting consolidation struggle to fuse overlapping information. In order to explicitly represent content overlap, we propose to align predicate-argument relations across texts, providing a potential scaffold for information consolidation. We go beyond clustering coreferring mentions, and instead model overlap with respect to redundancy at a propositional level, rather than merely detecting shared referents. Our setting exploits QA-SRL, utilizing question-answer pairs to capture predicate-argument relations, facilitating laymen annotation of cross-text alignments. We employ crowd-workers for constructing a dataset of QA-based alignments, and present a baseline QA alignment model trained over our dataset. Analyses show that our new task is semantically challenging, capturing content overlap beyond lexical similarity and complements cross-document coreference with proposition-level links, offering potential use for downstream tasks.",{QA}-Align: Representing Cross-Text Content Overlap by Aligning Question-Answer Propositions,"Multi-text applications, such as multidocument summarization, are typically required to model redundancies across related texts. Current methods confronting consolidation struggle to fuse overlapping information. In order to explicitly represent content overlap, we propose to align predicate-argument relations across texts, providing a potential scaffold for information consolidation. We go beyond clustering coreferring mentions, and instead model overlap with respect to redundancy at a propositional level, rather than merely detecting shared referents. Our setting exploits QA-SRL, utilizing question-answer pairs to capture predicate-argument relations, facilitating laymen annotation of cross-text alignments. We employ crowd-workers for constructing a dataset of QA-based alignments, and present a baseline QA alignment model trained over our dataset. Analyses show that our new task is semantically challenging, capturing content overlap beyond lexical similarity and complements cross-document coreference with proposition-level links, offering potential use for downstream tasks.",QA-Align: Representing Cross-Text Content Overlap by Aligning Question-Answer Propositions,"Multi-text applications, such as multidocument summarization, are typically required to model redundancies across related texts. Current methods confronting consolidation struggle to fuse overlapping information. In order to explicitly represent content overlap, we propose to align predicate-argument relations across texts, providing a potential scaffold for information consolidation. We go beyond clustering coreferring mentions, and instead model overlap with respect to redundancy at a propositional level, rather than merely detecting shared referents. Our setting exploits QA-SRL, utilizing question-answer pairs to capture predicate-argument relations, facilitating laymen annotation of cross-text alignments. We employ crowd-workers for constructing a dataset of QA-based alignments, and present a baseline QA alignment model trained over our dataset. 
Analyses show that our new task is semantically challenging, capturing content overlap beyond lexical similarity and complements cross-document coreference with proposition-level links, offering potential use for downstream tasks.","We would like to thank the anonymous reviewers for their thorough and insightful comments. The work described herein was supported in part by grants from Intel Labs, Facebook, and the Israel Science Foundation grant 1951/17.","QA-Align: Representing Cross-Text Content Overlap by Aligning Question-Answer Propositions. Multi-text applications, such as multidocument summarization, are typically required to model redundancies across related texts. Current methods confronting consolidation struggle to fuse overlapping information. In order to explicitly represent content overlap, we propose to align predicate-argument relations across texts, providing a potential scaffold for information consolidation. We go beyond clustering coreferring mentions, and instead model overlap with respect to redundancy at a propositional level, rather than merely detecting shared referents. Our setting exploits QA-SRL, utilizing question-answer pairs to capture predicate-argument relations, facilitating laymen annotation of cross-text alignments. We employ crowd-workers for constructing a dataset of QA-based alignments, and present a baseline QA alignment model trained over our dataset. Analyses show that our new task is semantically challenging, capturing content overlap beyond lexical similarity and complements cross-document coreference with proposition-level links, offering potential use for downstream tasks.",2021
jappinen-etal-1988-locally,https://aclanthology.org/C88-1056.pdf,0,,,,,,,"Locally Governed Trees and Dependency Parsing. This paper describes the notion of locally governed trees as a n